A hybrid sampler for Poisson-Kingman mixture models
María Lomelí
Gatsby Unit
University College London
[email protected]
Stefano Favaro
Department of Economics and Statistics
University of Torino and Collegio Carlo Alberto
[email protected]
Yee Whye Teh
Department of Statistics
University of Oxford
[email protected]
Abstract
This paper introduces a new Markov chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite dimensional component of the model such that, while explicitly representing this infinite component, it has smaller memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the efficacy of the proposed MCMC algorithm against existing marginal and conditional MCMC samplers.
1 Introduction
According to Ghahramani [9], models that have a nonparametric component give us more flexibility, which can lead to better predictive performance. This is because their capacity to learn does not saturate, so their predictions should continue to improve as we get more and more data. Furthermore, we are able to fully account for our uncertainty about predictions thanks to the Bayesian paradigm.
However, a major impediment to the widespread use of Bayesian nonparametric models is the problem of inference. Over the years, many MCMC methods have been proposed to perform inference, usually relying on a tailored representation of the underlying process [5, 4, 18, 20, 28, 6]. This is an active research area because the infinite dimensional component forbids the direct use of standard simulation-based methods for posterior inference, which usually require a finite-dimensional representation. There are two main sampling approaches to facilitate simulation in the case of Bayesian nonparametric models: random truncation and marginalization. These two schemes are known in the literature as conditional and marginal samplers.
In conditional samplers, the infinite-dimensional prior is replaced by a finite-dimensional representation chosen according to a truncation level. In marginal samplers, the need to represent the infinite-dimensional component is bypassed by marginalising it out. Marginal samplers have smaller storage requirements than conditional samplers but could potentially have worse mixing properties. However, not integrating out the infinite dimensional component leads to a more comprehensive representation of the random probability measure, useful to compute expectations of interest with respect to the posterior.
In this paper, we propose a novel MCMC sampler for Poisson-Kingman mixture models, a very large class of Bayesian nonparametric mixture models that encompasses all those previously explored in the literature. Our approach is based on a hybrid scheme that combines the main strengths of
both conditional and marginal samplers. In the flavour of probabilistic programming, we view our
contribution as a step towards wider usage of flexible Bayesian nonparametric models, as it allows
automated inference in probabilistic programs built out of a wide variety of Bayesian nonparametric
building blocks.
2 Poisson-Kingman processes
Poisson-Kingman random probability measures (RPMs) were introduced by Pitman [23] as a generalization of homogeneous Normalized Random Measures (NRMs) [25, 13]. Let $\mathbb{X}$ be a complete and separable metric space endowed with the Borel $\sigma$-field $\mathcal{B}(\mathbb{X})$, and let $\mu \sim \mathrm{CRM}(\rho, H_0)$ be a homogeneous Completely Random Measure (CRM) with Lévy measure $\rho$ and base distribution $H_0$; see Kingman [15] for a good overview of CRMs and references therein. The total mass of $\mu$ is $T = \mu(\mathbb{X})$; we assume it is finite, positive almost surely, and absolutely continuous with respect to Lebesgue measure. For any $t \in \mathbb{R}^{+}$, consider the conditional distribution of $\mu/t$ given that the total mass $T \in dt$. This distribution is denoted by $\mathrm{PK}(\rho, \delta_t, H_0)$; it is the distribution of an RPM, where $\delta_t$ denotes the usual Dirac delta function. Poisson-Kingman RPMs form a class of RPMs whose distributions are obtained by mixing $\mathrm{PK}(\rho, \delta_t, H_0)$ over $t$ with respect to some distribution $\gamma$ on the positive real line. Specifically, a Poisson-Kingman RPM has the following hierarchical representation:
$$T \sim \gamma, \qquad P \mid T = t \sim \mathrm{PK}(\rho, \delta_t, H_0). \tag{1}$$
The RPM $P$ is referred to as the Poisson-Kingman RPM with Lévy measure $\rho$, base distribution $H_0$ and mixing distribution $\gamma$. Throughout the paper we denote by $\mathrm{PK}(\rho, \gamma, H_0)$ the distribution of $P$ and, without loss of generality, we assume that $\gamma(dt) \propto h(t) f_\rho(t)\,dt$, where $f_\rho$ is the density of the total mass $T$ under the CRM and $h$ is a non-negative function. Note that when $\gamma(dt) = f_\rho(t)\,dt$, the distribution $\mathrm{PK}(\rho, f_\rho, H_0)$ coincides with $\mathrm{NRM}(\rho, H_0)$. The resulting $P = \sum_{k=1}^{\infty} p_k \delta_{\phi_k}$ is almost surely discrete and, since $\rho$ is homogeneous, the atoms $(\phi_k)_{k \ge 1}$ of $P$ are independent of their masses $(p_k)_{k \ge 1}$ and form a sequence of independent random variables identically distributed according to $H_0$. Finally, the masses of $P$ have distribution governed by the Lévy measure $\rho$ and the distribution $\gamma$.
One nice property is that $P$ is almost surely discrete: if we obtain a sample $\{Y_i\}_{i=1}^{n}$ from it, there is a positive probability that $Y_i = Y_j$ for each pair of indexes $i \neq j$. Hence, $P$ induces a random partition $\Pi$ on $\mathbb{N}$, where $i$ and $j$ are in the same block of $\Pi$ if and only if $Y_i = Y_j$. Kingman [16] showed that $\Pi$ is exchangeable; this property will be one of the main tools in the derivation of our hybrid sampler.
2.1 Size-biased sampling Poisson-Kingman processes
A second object induced by a Poisson-Kingman RPM is a size-biased permutation of its atoms. Specifically, order the blocks in $\Pi$ by increasing order of the least element in each block, and for each $k \in \mathbb{N}$ let $Z_k$ be the least element of the $k$th block; $Z_k$ is the index among $(Y_i)_{i \ge 1}$ of the first appearance of the $k$th unique value in the sequence. Let $\tilde J_k = \mu(\{Y_{Z_k}\})$ be the mass of the corresponding atom in $\mu$. Then $(\tilde J_k)_{k \ge 1}$ is a size-biased permutation of the masses of the atoms in $\mu$, with larger masses tending to appear earlier in the sequence. It is easy to see that $\sum_{k=1}^{\infty} \tilde J_k = T$, and the sequence can be understood as a stick-breaking construction: starting with a stick of length $T_0 = T$, break off a first piece of length $\tilde J_1$; the surplus length of stick is $T_1 = T_0 - \tilde J_1$; then a second piece of length $\tilde J_2$ is broken off, etc.

Theorem 2.1 of Perman et al. [21] states that the sequence of surplus masses $(T_k)_{k \ge 0}$ forms a Markov chain and gives the corresponding initial distribution and transition kernels. The corresponding generative process for the sequence $(Y_i)_{i \ge 1}$ is as follows:
i) Start by drawing the total mass from its distribution $\mathbb{P}_{\rho,h,H_0}(T \in dt) \propto h(t) f_\rho(t)\, dt$.

ii) The first draw $Y_1$ from $P$ is a size-biased pick from the masses of $\mu$. The actual value of $Y_1$ is simply $Y_1^* \sim H_0$, while the mass of the corresponding atom in $\mu$ is $\tilde J_1$, with conditional distribution
$$\mathbb{P}_{\rho,h,H_0}(\tilde J_1 \in ds_1 \mid T \in dt) = \frac{s_1}{t}\, \rho(ds_1)\, \frac{f_\rho(t - s_1)}{f_\rho(t)},$$
with surplus mass $T_1 = T - \tilde J_1$.

iii) For subsequent draws $i \ge 2$:

- Let $K$ be the current number of distinct values among $Y_1, \dots, Y_{i-1}$, and $Y_1^*, \dots, Y_K^*$ the unique values, i.e., atoms in $\mu$. The masses of these first $K$ atoms are denoted by $\tilde J_1, \dots, \tilde J_K$, and the surplus mass is $T_K = T - \sum_{k=1}^{K} \tilde J_k$.
- For each $k \le K$, with probability $\tilde J_k / T$, set $Y_i = Y_k^*$.
- With probability $T_K / T$, $Y_i$ takes on the value of an atom in $\mu$ besides the first $K$ atoms. The actual value $Y_{K+1}^*$ is drawn from $H_0$, while its mass is drawn from
$$\mathbb{P}_{\rho,h,H_0}(\tilde J_{K+1} \in ds_{K+1} \mid T_K \in dt_K) = \frac{s_{K+1}}{t_K}\, \rho(ds_{K+1})\, \frac{f_\rho(t_K - s_{K+1})}{f_\rho(t_K)}, \qquad T_{K+1} = T_K - \tilde J_{K+1}.$$
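As an illustration only, the following minimal sketch instantiates this generative process for the special case of the Dirichlet process, where the conditional law of a new size-biased mass given the surplus is tractable, namely $\tilde J_{K+1} = T_K \cdot B$ with $B \sim \mathrm{Beta}(1, \theta)$; the choice of a Gamma total mass and a standard Normal $H_0$ are assumptions of the sketch, not part of the paper's general sampler.

```python
import numpy as np

# Sketch of the size-biased generative process above, specialised to the
# Dirichlet process: a new mass is a Beta(1, theta) fraction of the surplus.
def size_biased_generative_process(n, theta, rng=np.random.default_rng(0)):
    T = rng.gamma(theta, 1.0)        # total mass of the underlying Gamma CRM
    weights, atoms = [], []          # size-biased masses and their atom values
    surplus = T
    Y = []
    for _ in range(n):
        probs = np.array(weights + [surplus]) / T
        k = rng.choice(len(probs), p=probs)
        if k == len(weights):        # new atom: size-bias a new mass
            J = surplus * rng.beta(1.0, theta)   # tractable only for the DP
            surplus -= J
            weights.append(J)
            atoms.append(rng.normal())           # draw the atom's value from H0
        Y.append(atoms[k])
    return Y, weights, surplus
```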
By multiplying the above infinitesimal probabilities, one obtains the joint distribution of the random elements $T$, $\Pi$, $(\tilde J_i)_{i \ge 1}$ and $(Y_i^*)_{i \ge 1}$:
$$\mathbb{P}_{\rho,h,H_0}\big(\Pi_n = (c_k)_{k \in [K]},\, Y_k^* \in dy_k^*,\, \tilde J_k \in ds_k \text{ for } k \in [K],\, T \in dt\big) = t^{-n}\, f_\rho\Big(t - \sum_{k=1}^{K} s_k\Big)\, h(t)\, dt \prod_{k=1}^{K} s_k^{|c_k|}\, \rho(ds_k)\, H_0(dy_k^*), \tag{2}$$
where $(c_k)_{k \in [K]}$ denotes a particular partition of $[n]$ with $K$ blocks, $c_1, \dots, c_K$, ordered by increasing least element, and $|c_k|$ is the cardinality of block $c_k$. The distribution (2) is invariant to the size-biased order. Such a joint distribution was first obtained in Pitman [23]; see also Pitman [24] for further details.
2.2 Relationship to the usual stick-breaking construction
The generative process above is reminiscent of the well-known stick-breaking construction of Ishwaran & James [12], where one breaks a stick of length one, but it is not the same. However, starting from Equation (2), we can effectively reparameterize the model thanks to two useful identities in distribution:
$$P_j \overset{d}{=} \frac{\tilde J_j}{T - \sum_{\ell < j} \tilde J_\ell} \qquad \text{and} \qquad V_j \overset{d}{=} \frac{P_j}{1 - \sum_{\ell < j} P_\ell}, \qquad j = 1, \dots, K.$$
Indeed, using this reparameterization, we obtain the corresponding joint distribution in terms of $K$ $(0,1)$-valued stick-breaking weights $\{V_j\}_{j=1}^{K}$, which correspond to a stick-breaking representation. Note that this joint distribution holds for a general Lévy measure $\rho$ and density $f_\rho$, and that it is conditioned on the value of the random variable $T$. We can recover the well-known stick-breaking representations of the Dirichlet and Pitman-Yor processes for specific choices of $\rho$ and by integrating out $T$; see the supplementary material for further details about the latter. In general, however, these stick-breaking random variables form a sequence of dependent random variables with a complicated distribution, except for the two processes just mentioned; see Pitman [22] for details.
2.3 Poisson-Kingman mixture model
We are mainly interested in using Poisson-Kingman RPMs as a building block for an infinite mixture
model. Indeed, we can use Equation (1) as the top level of the following hierarchical specification
$$T \sim \gamma, \qquad P \mid T \sim \mathrm{PK}(\rho, \delta_T, H_0), \qquad Y_i \mid P \overset{\mathrm{iid}}{\sim} P, \qquad X_i \mid Y_i \overset{\mathrm{ind}}{\sim} F(\cdot \mid Y_i), \tag{3}$$
[Figure: observations seated at tables with size-biased weights and parameters $(\tilde J_k, Y_k^*)$, $k = 1, \dots, 4$, plus a surplus mass $T - \sum_{\ell=1}^{4} \tilde J_\ell$ for the unoccupied tables.]
Figure 1: Varying table size Chinese restaurant representation for observations $\{X_i\}_{i=1}^{9}$.
where $F(\cdot \mid Y)$ is the likelihood term for each mixture component, and our dataset consists of $n$ observations $(x_i)_{i \in [n]}$ of the corresponding variables $(X_i)_{i \in [n]}$. We assume that $F(\cdot \mid Y)$ is smooth. Having specified the model, we would like to carry out inference for clustering and/or density estimation tasks; with our novel approach we can do so exactly and more efficiently than with existing MCMC samplers. In the next section we present our main contribution, and in the following one we show how it outperforms other samplers.
3 Hybrid Sampler
The joint distribution in Equation (2) is written in terms of the first $K$ size-biased weights. In order to obtain a complete representation of the RPM, we would need to size-biased sample from it a countably infinite number of times, and then devise some way of representing this object exactly in a computer with finite memory and storage.
We introduce the following novel strategy: starting from equation (2), we exploit the generative
process of Section 2.1 when reassigning observations to clusters. In addition to this, we reparameterize the model in terms of a surplus mass random variable $V = T - \sum_{k=1}^{K} \tilde J_k$ and end up with the following joint distribution:
$$\begin{aligned} &\mathbb{P}_{\rho,h,H_0}\Big(\Pi_n = (c_k)_{k \in [K]},\, Y_k^* \in dy_k^*,\, \tilde J_k \in ds_k \text{ for } k \in [K],\, T - \sum_{k=1}^{K} \tilde J_k \in dv,\, X_i \in dx_i \text{ for } i \in [n]\Big) \\ &\qquad \propto \Big(v + \sum_{k=1}^{K} s_k\Big)^{-n} h\Big(v + \sum_{k=1}^{K} s_k\Big)\, f_\rho(v) \prod_{k=1}^{K} s_k^{|c_k|}\, \rho(ds_k)\, H_0(dy_k^*) \prod_{i \in c_k} F(dx_i \mid y_k^*). \end{aligned} \tag{4}$$
Thanks to this, while having a complete representation of the infinite dimensional part of the model, we only need to explicitly represent the size-biased weights associated with occupied clusters plus a surplus mass term associated with the rest of the (empty) clusters, as Figure 1 shows. The cluster reassignment step can be seen as a lazy sampling scheme: we explicitly represent and update the weights associated with occupied clusters and create a size-biased weight only when a new cluster appears. To make this possible we use the induced partition, and we call Equation (4) the varying table size Chinese restaurant representation because the size-biased weights can be thought of as the sizes of the tables in our restaurant. In the next subsection, we compute the complete conditionals of each random variable of interest to implement an overall Gibbs sampling MCMC scheme.
3.1 Complete conditionals
Starting from Equation (4), we obtain the following complete conditionals for the Gibbs sampler:
$$\mathbb{P}(V \in dv \mid \mathrm{Rest}) \propto \Big(v + \sum_{k=1}^{K} s_k\Big)^{-n} f_\rho(v)\, h\Big(v + \sum_{k=1}^{K} s_k\Big)\, dv \tag{5}$$
$$\mathbb{P}(\tilde J_i \in ds_i \mid \mathrm{Rest}) \propto \Big(v + s_i + \sum_{k \neq i} s_k\Big)^{-n} h\Big(v + s_i + \sum_{k \neq i} s_k\Big)\, s_i^{|c_i|}\, \rho(ds_i)\, \mathbb{I}_{(0,\,\mathrm{Surpmass}_i)}(s_i)\, ds_i,$$
where $\mathrm{Surpmass}_i = V + \sum_{j=1}^{K} \tilde J_j - \sum_{j \neq i} \tilde J_j$.
$$\mathbb{P}(c_i = c \mid \mathbf{c}_{-i}, \mathrm{Rest}) \propto \begin{cases} s_c\, F(dx_i \mid \{X_j\}_{j \in c}, Y_c^*) & \text{if } i \text{ is assigned to existing cluster } c \\ \frac{v}{M}\, F(dx_i \mid Y_c^*) & \text{if } i \text{ is assigned to a new cluster } c. \end{cases}$$
According to the rule above, the $i$th observation is either reassigned to an existing cluster or to one of the $M$ new clusters of the ReUse algorithm, as in Favaro & Teh [6]. If it is assigned to a new cluster, then we need to sample a new size-biased weight from
$$\mathbb{P}(\tilde J_{k+1} \in ds_{k+1} \mid \mathrm{Rest}) \propto f_\rho(v - s_{k+1})\, \rho(s_{k+1})\, s_{k+1}\, \mathbb{I}_{(0,v)}(s_{k+1})\, ds_{k+1}. \tag{6}$$
Every time a new cluster is created we need to obtain its corresponding size-biased weight; this can happen $1 \le R \le n$ times per iteration, so it contributes significantly to the overall computational cost. For this reason, an independent and identically distributed (i.i.d.) draw from the complete conditional (6) is highly desirable, and in the next subsection we present a way to achieve this. Finally, to update the cluster parameters $\{Y_k^*\}_{k \in [K]}$ in the case where $H_0$ is non-conjugate to the likelihood, we use an extension of Favaro & Teh [6]'s ReUse algorithm; see Algorithm 3 in the supplementary material for details.
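For concreteness, here is a minimal sketch of the reassignment rule above. It is not the paper's implementation: `pred_lik` (predictive likelihood of $x_i$ under an existing cluster) and `new_liks` (likelihoods under the $M$ fresh ReUse parameters) are assumed callables supplied by the model.

```python
import numpy as np

# Sketch of the cluster reassignment rule: existing cluster c gets weight
# s_c * F(dx_i | ...); each of the M new clusters gets weight (v / M) times
# its likelihood. pred_lik and new_liks are assumed model-specific callables.
def reassign(x_i, s, v, M, pred_lik, new_liks, rng=np.random.default_rng(0)):
    w = [s_c * pred_lik(x_i, c) for c, s_c in enumerate(s)]
    w += [(v / M) * lik for lik in new_liks(x_i)]
    w = np.asarray(w)
    k = rng.choice(len(w), p=w / w.sum())
    return k   # k < len(s): existing cluster; otherwise new cluster k - len(s)
```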
The complete conditionals in Equation (5) do not have a standard form, but a generic MCMC method can be applied to sample from each within the Gibbs sampler. We use slice sampling from Neal [19] to update the size-biased weights and the surplus mass. However, there is a class of priors for which the total mass's density is intractable, so an additional step needs to be introduced to sample the surplus mass. In the next subsection we present two alternative ways to overcome this issue.
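As an illustration of the generic update, the following is a minimal sketch of Neal's stepping-out slice sampler; the step size `w` and `max_steps` are assumed tuning constants, and `log_f` stands for the unnormalised log density of either conditional in Equation (5).

```python
import math, random

# Stepping-out slice sampler (Neal, 2003): sample an auxiliary height under
# the density at x0, grow an interval until both ends fall below the slice,
# then shrink towards x0 until a proposal lands inside the slice.
def slice_sample(x0, log_f, w=1.0, max_steps=50, rng=random.Random(0)):
    log_y = log_f(x0) + math.log(1.0 - rng.random())   # slice height
    left = x0 - w * rng.random()                       # random initial interval
    right = left + w
    for _ in range(max_steps):                         # step out to the left
        if log_f(left) < log_y:
            break
        left -= w
    for _ in range(max_steps):                         # step out to the right
        if log_f(right) < log_y:
            break
        right += w
    while True:                                        # shrink until accepted
        x1 = rng.uniform(left, right)
        if log_f(x1) >= log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
```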
3.2 Examples of classes of Poisson-Kingman priors
a) $\sigma$-Stable Poisson-Kingman processes [23]. For any $\sigma \in (0,1)$, let
$$f_\sigma(t) = \frac{1}{\pi} \sum_{j=0}^{\infty} \frac{(-1)^{j+1}}{j!}\, \Gamma(\sigma j + 1) \sin(\pi \sigma j)\, \frac{1}{t^{\sigma j + 1}}$$
be the density function of a positive $\sigma$-Stable random variable, and let $\rho(dx) = \rho_\sigma(dx) := \frac{\sigma}{\Gamma(1-\sigma)}\, x^{-\sigma-1}\, dx$. This class of RPMs is denoted by $\mathrm{PK}(\rho_\sigma, h, H_0)$, where $h$ is a function that indexes each member of the class. For example, in the experimental section we picked three choices of the $h$ function that index the following processes: Pitman-Yor, Normalized Stable and Normalized Generalized Gamma processes. This class includes all Gibbs-type priors with parameter $\sigma \in (0,1)$, so other choices of $h$ are possible; see Gnedin & Pitman [10] and De Blasi et al. [1] for a noteworthy account of this class of Bayesian nonparametric priors. In this case the total mass's density is intractable, and we propose two ways of dealing with this. Firstly, we use Kanter [14]'s integral representation for the $\sigma$-Stable density, as in Lomeli et al. [17]: we introduce an auxiliary variable $Z$ and slice sample each variable (see Algorithm 1 in the supplementary material for details),
$$\mathbb{P}(V \in dv \mid \mathrm{Rest}) \propto \Big(v + \sum_{i=1}^{k} s_i\Big)^{-n} v^{-\frac{1}{1-\sigma}} \exp\big(-v^{-\frac{\sigma}{1-\sigma}} A(z)\big)\, h\Big(v + \sum_{i=1}^{k} s_i\Big)\, dv$$
$$\mathbb{P}(Z \in dz \mid \mathrm{Rest}) \propto A(z) \exp\big(-v^{-\frac{\sigma}{1-\sigma}} A(z)\big)\, dz,$$
where $A$ is the function arising in Kanter's integral representation.
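To see why evaluating $f_\sigma$ directly is costly, here is a hedged sketch that evaluates the series above by truncation; the truncation level `n_terms` is an assumption, and many terms are needed for small $t$, which is what motivates both the auxiliary variable scheme above and the Metropolis-Hastings alternative below.

```python
import math

# Truncated evaluation of the sigma-Stable density series quoted above.
# The j = 0 term vanishes because sin(0) = 0; terms are computed in log
# magnitude to avoid overflow of Gamma(sigma*j + 1) / j!.
def stable_density(t, sigma, n_terms=150):
    total = 0.0
    for j in range(1, n_terms + 1):
        log_mag = (math.lgamma(sigma * j + 1.0)
                   - math.lgamma(j + 1.0)
                   - (sigma * j + 1.0) * math.log(t))
        total += (-1.0) ** (j + 1) * math.sin(math.pi * sigma * j) * math.exp(log_mag)
    return total / math.pi
```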
Alternatively, we can completely bypass the evaluation of the total mass's density by updating the surplus mass with a Metropolis-Hastings step with an independent proposal from a Stable or from an Exponentially Tilted Stable($\lambda$) distribution. It is straightforward to obtain i.i.d. draws from these proposals; see Devroye [3], and Hofert [11] for an improved rejection sampling method in the exponentially tilted case. This leads to the acceptance ratio
$$\frac{\mathbb{P}(V \in dv' \mid \mathrm{Rest})}{\mathbb{P}(V \in dv \mid \mathrm{Rest})}\, \frac{f_\sigma(v) \exp(-\lambda v)}{f_\sigma(v') \exp(-\lambda v')} = \frac{\big(v' + \sum_{i=1}^{k} s_i\big)^{-n}\, h\big(v' + \sum_{i=1}^{k} s_i\big)\, dv'\, \exp(-\lambda v)}{\big(v + \sum_{i=1}^{k} s_i\big)^{-n}\, h\big(v + \sum_{i=1}^{k} s_i\big)\, dv\, \exp(-\lambda v')},$$
in which $f_\sigma$ cancels; see Algorithm 2 in the supplementary material for details.
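A minimal sketch of this Metropolis-Hastings update follows; `propose` is an assumed callable returning i.i.d. draws from the Stable or exponentially tilted Stable proposal (e.g., via the methods of Devroye [3] or Hofert [11]), and, as noted above, $f_\sigma$ cancels in the ratio.

```python
import math, random

# MH surplus-mass update: only h, the tilting parameter lam and the sum of
# occupied size-biased weights s_sum enter the log acceptance ratio.
def update_surplus_mass(v, s_sum, n, h, lam, propose, rng=random.Random(1)):
    v_new = propose()                         # independent proposal draw
    log_ratio = (n * (math.log(v + s_sum) - math.log(v_new + s_sum))
                 + math.log(h(v_new + s_sum)) - math.log(h(v + s_sum))
                 + lam * (v_new - v))
    return v_new if math.log(1.0 - rng.random()) < log_ratio else v
```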
Finally, to sample a new size-biased weight, we use
$$\mathbb{P}(\tilde J_{k+1} \in ds_{k+1} \mid \mathrm{Rest}) \propto f_\sigma(v - s_{k+1})\, s_{k+1}^{-\sigma}\, \mathbb{I}_{(0,v)}(s_{k+1})\, ds_{k+1}.$$
Fortunately, we can get an i.i.d. draw from the above thanks to an identity in distribution given by Favaro et al. [8] for the usual stick-breaking weights of any prior in this class such that $\sigma = \frac{u}{v}$, where $u < v$ are coprime integers. We then reparameterize back to obtain the new size-biased weight; see Algorithm 4 in the supplementary material for details.
b) $-\log$Beta-Poisson-Kingman processes [25, 27]. Let
$$f_\rho(t) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, \exp(-at)\, \big(1 - \exp(-t)\big)^{b-1}$$
be the density of a positive random variable $X \overset{d}{=} -\log Y$, where $Y \sim \mathrm{Beta}(a, b)$, and let
$$\rho(x) = \frac{\exp(-ax)\big(1 - \exp(-bx)\big)}{x\big(1 - \exp(-x)\big)}.$$
This class of RPMs generalises the Gamma process but has similar properties. Indeed, if we take $b = 1$ and the density function for $T$ to be $\gamma(t) = f_\rho(t)$, we recover the Lévy measure and total mass density of a Gamma process. Finally, to sample a new size-biased weight:
$$\mathbb{P}(\tilde J_{k+1} \in ds_{k+1} \mid \mathrm{Rest}) \propto \frac{\big(1 - \exp(s_{k+1} - v)\big)^{b-1}\, \big(1 - \exp(-b\, s_{k+1})\big)}{1 - \exp(-s_{k+1})}\, \mathbb{I}_{(0,v)}(s_{k+1})\, ds_{k+1}.$$
If $b > 1$, this complete conditional is a monotone decreasing unnormalised density bounded above by $b$. We can easily get an i.i.d. draw with a simple rejection sampler [2] where the rejection constant is $bv$ and the proposal is $\mathrm{U}(0, v)$. There is no other known sampler for this process.
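A minimal sketch of this rejection sampler, under the assumption $b \ge 1$: since the unnormalised target is bounded above by $b$, a $\mathrm{U}(0, v)$ proposal is accepted with probability $\mathrm{target}(s)/b$, giving the rejection constant $bv$ mentioned above.

```python
import math, random

# Rejection sampler for the -logBeta new size-biased weight, assuming b >= 1:
# the unnormalised conditional is bounded by b on (0, v).
def sample_new_weight(v, b, rng=random.Random(2)):
    def target(s):
        return ((1.0 - math.exp(s - v)) ** (b - 1.0)
                * (1.0 - math.exp(-b * s)) / (1.0 - math.exp(-s)))
    while True:
        s = rng.uniform(0.0, v)
        if s == 0.0:                  # guard against the boundary of U(0, v)
            continue
        if b * rng.random() <= target(s):
            return s
```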
3.3 Relationship to marginal and conditional MCMC samplers
Starting from equation (2), another strategy would be to reparameterize the model in terms of the
usual stick breaking weights. Next, we could choose a random truncation level and represent finitely
many sticks as in Favaro & Walker [7]. Alternatively, we could integrate out the random probability
measure and sample only the partition induced by it as in Lomeli et al. [17]. Conditional samplers
have large memory requirements since the number of sticks needed can often be very large; furthermore, the conditional distributions of the stick lengths are quite involved, so they tend to have slow running times. Marginal samplers have smaller storage requirements than conditional samplers but could potentially have worse mixing properties. For example, Lomeli et al. [17] had to introduce a number of auxiliary variables which worsen the mixing.
Our novel hybrid sampler exploits the advantages of both marginal and conditional samplers. It has smaller memory requirements because it represents only the size-biased weights of occupied clusters, as opposed to conditional samplers, which represent both empty and occupied clusters. Also, it does not integrate out the size-biased weights; thus, we obtain a more comprehensive representation of the RPM.
4 Performance assessment
We illustrate the performance of our hybrid sampler on a range of Bayesian nonparametric mixture
models, obtained by different specifications of $\rho$ and $\gamma$, as in Equation (3). At the top level of this hierarchical specification, different Bayesian nonparametric priors were chosen from both classes presented in the examples section. We chose the base distribution $H_0$ and the likelihood term $F$ for the $k$th cluster to be
$$H_0(d\mu_k) = \mathrm{N}(d\mu_k \mid \mu_0, \sigma_0^2) \qquad \text{and} \qquad F(dx_1, \dots, dx_{n_k} \mid \mu_k, \sigma_1) = \prod_{i=1}^{n_k} \mathrm{N}(x_i \mid \mu_k, \sigma_1^2),$$
where $\{X_j\}_{j=1}^{n_k}$ are the $n_k$ observations assigned to the $k$th cluster at some iteration. $\mathrm{N}$ denotes a Normal distribution with mean $\mu_k$ and variance $\sigma_1^2$, a common parameter among all clusters. The mean's prior distribution is Normal, centered at $\mu_0$ and with variance $\sigma_0^2$. Although the base distribution is conjugate to the likelihood, we treated it as a non-conjugate case and sampled the parameters at each iteration rather than integrating them out.
We used the dataset from Roeder [26] to test the algorithmic performance in terms of running time
and effective sample size (ESS), as Table 1 shows. The dataset consists of measurements of velocities in km/sec of $n = 82$ galaxies from a survey of the Corona Borealis region. For the $\sigma$-Stable Poisson-Kingman class, we compared our sampler against our implementation of Favaro & Walker [7]'s conditional sampler and against the marginal sampler of Lomeli et al. [17]. We chose to compare our hybrid sampler against these existing approaches because they follow the same general purpose paradigm.
| Algorithm | σ | Running time | ESS (±std) |
|---|---|---|---|
| Pitman-Yor process (θ = 10) | | | |
| Hybrid | 0.3 | 7135.1 (28.316) | 2635.488 (187.335) |
| Hybrid-MH (λ = 0) | 0.3 | 5469.4 (186.066) | 2015.625 (152.030) |
| Conditional | 0.3 | NA | NA |
| Marginal | 0.3 | 4685.7 (84.104) | 2382.799 (169.359) |
| Hybrid | 0.5 | 3246.9 (24.894) | 3595.508 (174.075) |
| Hybrid-MH (λ = 50) | 0.5 | 4902.3 (6.936) | 3579.686 (135.726) |
| Conditional | 0.5 | 10141.6 (237.735) | 905.444 (41.475) |
| Marginal | 0.5 | 4757.2 (37.077) | 2944.065 (195.011) |
| Normalized Stable process | | | |
| Hybrid | 0.3 | 5054.7 (70.675) | 5324.146 (167.843) |
| Hybrid-MH (λ = 0) | 0.3 | 7866.4 (803.228) | 5074.909 (100.300) |
| Conditional | 0.3 | NA | NA |
| Marginal | 0.3 | 7658.3 (193.773) | 2630.264 (429.877) |
| Hybrid | 0.5 | 5382.9 (57.561) | 4877.378 (469.794) |
| Hybrid-MH (λ = 50) | 0.5 | 4537.2 (37.292) | 4454.999 (348.356) |
| Conditional | 0.5 | 10033.1 (22.647) | 912.382 (167.089) |
| Marginal | 0.5 | 8203.1 (106.798) | 3139.412 (351.788) |
| Normalized Generalized Gamma process (τ = 1) | | | |
| Hybrid | 0.3 | 4157.8 (92.863) | 5104.713 (200.949) |
| Hybrid-MH (λ = 0) | 0.3 | 4745.5 (187.506) | 4848.560 (312.820) |
| Conditional | 0.3 | NA | NA |
| Marginal | 0.3 | 7685.8 (208.98) | 3587.733 (569.984) |
| Hybrid | 0.5 | 6299.2 (102.853) | 4646.987 (370.955) |
| Hybrid-MH (λ = 50) | 0.5 | 4686.4 (35.661) | 4343.555 (173.113) |
| Conditional | 0.5 | 10046.9 (206.538) | 1000.214 (70.148) |
| Marginal | 0.5 | 8055.6 (93.164) | 4443.905 (367.297) |
| −logBeta (a = 1, b = 2) | | | |
| Hybrid | – | 2520.6 (121.044) | 3068.174 (540.111) |
| Conditional | – | NA | NA |
| Marginal | – | NA | NA |

Table 1: Running times in seconds and ESS, averaged over 10 chains of 30,000 iterations with 10,000 burn-in.
Table 1 shows that different choices of $\sigma$ result in differences in the algorithm's running times and ESS. The reason is that in the $\sigma = 0.5$ case there are readily available random number generators which do not increase the computational cost. In contrast, in the $\sigma = 0.3$ case, a rejection sampler is needed every time a new size-biased weight is sampled, which increases the computational cost; see Favaro et al. [8] for details. Even so, in most cases we outperform both marginal and conditional MCMC schemes in terms of running times and, in all cases, in terms of ESS. In the Hybrid-MH case, even though the ESS and running times are competitive, we found that the acceptance rate is not optimal; we are currently exploring other choices of proposals. Finally, in Example b), our approach is the only one available, and it has good running times and ESS. This qualitative comparison confirms our previous statements about our novel approach.
5 Discussion
Our main contribution is our Hybrid MCMC sampler as a general purpose tool for inference with a
very large class of infinite mixture models. We argue in favour of an approach in which a generic
algorithm can be applied to a very large class of models, so that the modeller has a lot of flexibility in
choosing specific models suitable for his/her problem of interest. Our method is a hybrid approach
since it combines the perks of the conditional and marginal schemes. Indeed, our experiments
confirm that our hybrid sampler is more efficient since it outperforms both marginal and conditional
samplers in running times in most cases and in ESS in all cases.
We introduced a new compact way of representing the infinite dimensional component that makes inference feasible, and we showed how to deal with the corresponding intractabilities. However, various challenges remain when dealing with these types of models. For instance, there are some values of $\sigma$ for which we are unable to perform inference with our novel sampler. Secondly, when a Metropolis-Hastings step is used, there could be other ways to improve the mixing in terms of better proposals. Finally, all BNP MCMC methods can be affected by the dimensionality and size of the dataset when dealing with an infinite mixture model. Indeed, all methods rely on the same way of dealing with the likelihood term. When adding a new cluster, all methods sample its
corresponding parameter from the prior distribution. In a high dimensional scenario, it could be very
difficult to sample parameter values close to the existing data points. We consider these points to be
an interesting avenue of future research.
Acknowledgments
We thank Konstantina Palla for her insightful comments. María Lomelí is funded by the Gatsby Charitable Foundation, Stefano Favaro is supported by the European Research Council through StG N-BNP 306406, and Yee Whye Teh is supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 617071.
References
[1] De Blasi, P., Favaro, S., Lijoi, A., Mena, R. H., Prünster, I., & Ruggiero, M. 2015. Are Gibbs-type priors the most natural generalization of the Dirichlet process? IEEE Transactions on Pattern Analysis & Machine Intelligence, 37, 212–229.
[2] Devroye, L. 1986. Non-Uniform Random Variate Generation. Springer-Verlag.
[3] Devroye, L. 2009. Random variate generation for exponentially and polynomially tilted Stable distributions. ACM Transactions on Modelling and Computer Simulation, 19, 1–20.
[4] Escobar, M. D. 1994. Estimating normal means with a Dirichlet process prior. Journal of the American Statistical Association, 89, 268–277.
[5] Escobar, M. D., & West, M. 1995. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90, 577–588.
[6] Favaro, S., & Teh, Y. W. 2013. MCMC for Normalized Random Measure mixture models. Statistical Science, 28(3), 335–359.
[7] Favaro, S., & Walker, S. G. 2012. Slice sampling σ-Stable Poisson-Kingman mixture models. Journal of Computational and Graphical Statistics, 22, 830–847.
[8] Favaro, S., Lomeli, M., Nipoti, B., & Teh, Y. W. 2014. On the stick-breaking representation of σ-Stable Poisson-Kingman models. Electronic Journal of Statistics, 8, 1063–1085.
[9] Ghahramani, Z. 2015. Probabilistic machine learning and artificial intelligence. Nature, 521, 452–459.
[10] Gnedin, A., & Pitman, J. 2006. Exchangeable Gibbs partitions and Stirling triangles. Journal of Mathematical Sciences, 138, 5674–5684.
[11] Hofert, M. 2011. Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis, 55, 57–70.
[12] Ishwaran, H., & James, L. F. 2001. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453), 161–173.
[13] James, L. F. 2002. Poisson process partition calculus with applications to exchangeable models and Bayesian nonparametrics. ArXiv:math/0205093.
[14] Kanter, M. 1975. Stable densities under change of scale and total variation inequalities. Annals of Probability, 3, 697–707.
[15] Kingman, J. F. C. 1967. Completely random measures. Pacific Journal of Mathematics, 21, 59–78.
[16] Kingman, J. F. C. 1978. The representation of partition structures. Journal of the London Mathematical Society, 18, 374–380.
[17] Lomeli, M., Favaro, S., & Teh, Y. W. 2015. A marginal sampler for σ-stable Poisson-Kingman mixture models. Journal of Computational and Graphical Statistics (to appear).
[18] Neal, R. M. 1998. Markov chain sampling methods for Dirichlet process mixture models. Tech. rept. 9815, Department of Statistics, University of Toronto.
[19] Neal, R. M. 2003. Slice sampling. Annals of Statistics, 31, 705–767.
[20] Papaspiliopoulos, O., & Roberts, G. O. 2008. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95, 169–186.
[21] Perman, M., Pitman, J., & Yor, M. 1992. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, 92, 21–39.
[22] Pitman, J. 1996. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, 28, 525–539.
[23] Pitman, J. 2003. Poisson-Kingman partitions. Pages 1–34 of: Goldstein, D. R. (ed), Statistics and Science: a Festschrift for Terry Speed. Institute of Mathematical Statistics.
[24] Pitman, J. 2006. Combinatorial Stochastic Processes. Lecture Notes in Mathematics. Springer-Verlag, Berlin.
[25] Regazzini, E., Lijoi, A., & Prünster, I. 2003. Distributional results for means of normalized random measures with independent increments. Annals of Statistics, 31, 560–585.
[26] Roeder, K. 1990. Density estimation with confidence sets exemplified by superclusters and voids in the galaxies. Journal of the American Statistical Association, 85, 617–624.
[27] von Renesse, M., Yor, M., & Zambotti, L. 2008. Quasi-invariance properties of a class of subordinators. Stochastic Processes and their Applications, 118, 2038–2057.
[28] Walker, S. G. 2007. Sampling the Dirichlet mixture model with slices. Communications in Statistics – Simulation and Computation, 36, 45–54.
Analysis of distributed representation of
constituent structure in connectionist systems
Paul Smolensky
Department of Computer Science, University of Colorado, Boulder, CO 80309-0430
Abstract
A general method, the tensor product representation, is described for the distributed representation of
value/variable bindings. The method allows the fully distributed representation of symbolic structures:
the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and
partially localized special cases reduce to existing cases of connectionist representations of structured
data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it
respects the independence of the capacities to generate and maintain multiple bindings in parallel; it
extends naturally to continuous structures and continuous representational patterns; it pennits values to
also serve as variables; it enables analysis of the interference of symbolic structures stored in
associative memories; and it leads to characterization of optimal distributed representations of roles
and a recirculation algorithm for learning them.
Introduction
Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic
natural language processing, for example, must employ computationally adequate representations of
complex sentences. Many connectionists feel that to develop connectionist systems with the
computational power required by complex tasks, distributed representations must be used: an
individual processing unit must participate in the representation of multiple items, and each item must
be represented as a pattern of activity of multiple processors. Connectionist models have used more or
less distributed representations of more or less complex structures, but little if any general analysis of
the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
The language-based fonnalisms traditional in AI pennit the construction of arbitrarily complex
structures by piecing together constituents. The tensor product representation is a connectionist
method of combining representations of constituents into representations of complex structures. If the
constituents that are combined have distributed representations, completely distributed representations of complex structures can result: each part of the network is responsible for representing multiple constituents in the structure, and each constituent is represented over multiple units. The tensor
product representation is a general technique, of which the few existing examples of fully distributed
representations of structures are particular cases.
The tensor product representation rests on identifying natural counterparts within connectionist
computation of certain fundamental elements of symbolic computation. In the present analysis, the
problem of distributed representation of symbolic structures is characterized as the problem of taking
complex structures with certain relations to their constituent symbols and mapping them into activity vectors (patterns of activation) with corresponding relations to the activity vectors representing their
constituents. Central to the analysis is identifying a connectionist counterpart of variable binding: a
method for binding together a distributed representation of a variable and a distributed representation
of a value into a distributed representation of a variable/value binding, a representation which can
co-exist on exactly the same network units with representations of other variable/value bindings, with
limited confusion of which variables are bound to which values.
In summary, the analysis of the tensor product representation

(1) provides a general technique for constructing fully distributed representations of arbitrarily complex structures;

(2) clarifies existing representations found in particular models by showing what particular design decisions they embody;

(3) allows the proof of a number of general computational properties of the representation;

(4) identifies natural counterparts within connectionist computation of elements of symbolic computation, in particular, variable binding.
The recent emergence to prominence of the connectionist approach to AI raises the question of the
relation between the non symbolic computation occurring in connectionist systems and the symbolic
computation traditional in AI. The research reported here is part of an attempt to marry the two types
of computation, to develop for AI a form of computation that adds crucial aspects of the power of
symbolic computation to the power of connectionist computation: massively parallel soft constraint
satisfaction. One way to marry these approaches is to implement serial symbol manipulation in a
connectionist system 1,2. The research described here takes a different tack. In a massively parallel
system the processing of symbolic structures-for example, representations of parsed sentences-need
not be limited to a series of elementary manipulations: indeed one would expect the processing to
involve massively parallel soft constraint satisfaction. But in order for such processing to occur, a
satisfactory answer must be found for the question: How can symbolic structures, or structured data in
general. be naturally represented in connectionist systems? The difficulty here turns on one of the
most fundamental problems for relating symbolic and connectionist computation: How can variable
binding be naturally performed in connectionist systems?
This paper provides an overview of a lengthy analysis reported elsewhere3 of a general
connectionist method for variable binding and an associated method for representing structured data.
The purpose of this paper is to introduce the basic idea of the method and survey some of the results;
the reader is referred to the full report for precise definitions and theorems, more extended examples, and proofs.
The problem
Suppose we want to represent a simple structured object, say a sequence of elements, in a
connectionist system. The simplest method, which has been used in many models, is to dedicate a
network processing unit to each possible element in each possible position4-9. This is a purely local
representation. One way of looking at the purely local representation is that the binding of
constituents to the variables representing their positions is achieved by dedicating a separate unit to
every possible binding, and then by activating the appropriate individual units.
Purely local representations of this sort have some advantages 10, but they have a number of serious
problems. Three immediately relevant problems are these:
(1) The number of units needed is #elements * #positions; most of these processors are inactive and doing no work at any given time.

(2) The number of positions in the structures that can be represented has a fixed, rigid upper limit.

(3) If there is a notion of similar elements, the representation does not take advantage of this: similar sequences do not have similar representations.
The technique of distributed representation is a well-known way of coping with the first and third problems 11-14. If elements are represented as patterns of activity over a population of processing units,
and if each unit can participate in the representation of many elements, then the number of elements
that can be represented is much greater than the number of units, and similar elements can be
represented by similar patterns, greatly enhancing the power of the network to learn and take
advantage of generalizations.
Distributed representations of elements in structures (like sequences) have been successfully used in many models 1,4,5,15-18. For each position in the structure, a pool of units is dedicated. The element
occurring in that position is represented by a pattern of activity over the units in the pool.
Note that this technique goes only part of the way towards a truly distributed representation of the
entire structure. While the values of the variables defining the roles in the structure are represented by
distributed patterns instead of dedicated units, the variables themselves are represented by localized,
dedicated pools. For this reason I will call this type of representation semi-local.
Because the representation of variables is still local, semi-local representations retain the second of
the problems of purely local representations listed above. While the generic behavior of connectionist
systems is to gradually overload as they attempt to hold more and more information, with dedicated
pools representing role variables in structures, there is no loading at all until the pools are
exhausted-and then there is complete saturation. The pools are essentially registers, and the
representation of the structure as a whole has more of the characteristics of von Neumann storage than
connectionist representation. A fully distributed connectionist representation of structured data would
saturate gracefully.
Because the representation of variables in semi-local representations is local, semi-local
representations also retain part of the third problem of purely local representations. Similar elements
have similar representations only if they occupy exactly the same role in the structure. A notion of
similarity of roles cannot be incorporated in the semi-local representation.
Tensor product binding
There is a way of viewing both the local and semi-local representations of structures that makes a
generalization to fully distributed representations immediately apparent. Consider the following
structure: strings of length no more than four letters. Fig. 1 shows a purely local representation and
Fig. 2 shows a semi-local representation (both of which appeared in the letter-perception model of
McClelland and Rumelhart 4,5). In each case, the variable binding has been viewed in the same way.
On the left edge is a set of imagined units which can represent an element in the structure (a filler of a role); these are the filler units. On the bottom edge is a set of imagined units which can represent a
role: these are the role units. The remaining units are the ones really used to represent the structure:
the binding units. They are arranged so that there is one for each pair offiller and role units.
In the purely local case, both the filler and the role are represented by a "pattern of activity" localized to a single unit. In the semi-local case, the filler is represented by a distributed pattern of activity but the role is still represented by a localized pattern. In either case, the binding of the filler to the role is achieved by a simple product operation: the activity of each binding unit is the product of the activities of the associated filler and role unit. In the vocabulary of matrix algebra, the activity
representing the value/variable binding forms a matrix which is the outer product of the activity vector
representing the value and the activity vector representing the variable. In the terminology of vector
spaces, the value/variable binding vector is the tensor product of the value vector and the variable
vector. This is what I refer to as the tensor product representation for variable bindings.
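In code, the binding operation is just an outer product; the following minimal sketch uses made-up four-unit patterns, so the specific numbers are illustrative assumptions.

```python
import numpy as np

# The binding units of Figs. 1-3 as an outer product: one binding-unit
# activity per (filler unit, role unit) pair.
filler = np.array([0.0, 1.0, 0.5, 0.0])   # distributed filler pattern
role = np.array([0.0, 0.0, 0.0, 1.0])     # local role pattern ("position 4")
binding = np.outer(filler, role)          # shape (4, 4): the binding matrix
```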
Since the activity vectors for roles in Figs. 1 and 2 consist of all zeroes except for a single activity
of 1, the tensor product operation is utterly trivial. The local and semi-local cases are trivial special
cases of a general binding procedure capable of producing completely distributed representations. Fig.
3 shows a distributed case designed for visual transparency. Imagine we are representing speech data,
and have a sequence of values for the energy in a particular formant at successive times. In Fig. 3,
distributed patterns are used to represent both the energy value and the variable to which it is bound:
the position in time. The particular binding shown is of an energy value 2 (on a scale of 1-4) to the
time 4. The peaks in the patterns indicate the value and variable being represented.
[Figure: a grid of binding units with Filler (Letter) units along the left edge and Role (Position) units along the bottom edge; each active binding unit is the product of one filler unit and one role unit.]
Fig. 1. Purely local representation of strings.

[Figure: the same layout, with the filler represented by a distributed pattern over the filler units and the role still localized to a single role unit.]
Fig. 2. Semi-local representation of strings.
If the patterns representing the value and variable being bound together are not as simple as those used in Fig. 3, the tensor product pattern representing the binding will not of course be particularly visually informative. Such would be the case if the patterns for the fillers and roles in a structure were defined with respect to a set of filler and role features: such distributed bindings have been used effectively by McClelland and Kawamoto 18 and by Derthick 19,20. The extreme mathematical simplicity of the tensor product operation makes feasible an analysis of the general, fully distributed case.
Each binding unit in the tensor product representation corresponds to a pair of imaginary role and filler units. A binding unit can be readily interpreted semantically if its corresponding filler and role units can. The activity of the binding unit indicates that in the structure being represented an element is present which possesses the feature indicated by the corresponding filler unit and which occupies a role in the structure which possesses the feature indicated by the corresponding role unit. The binding unit thus detects a conjunction of a pair of filler and role features. (Higher-order conjunctions will arise later.)
A structure consists of multiple filler/role bindings. So far we have only discussed the representation of a single binding. In the purely local and semi-local cases, there are separate pools for different roles, and it is obvious how to combine bindings: simultaneously represent them in the separate pools. In the case of a fully distributed tensor product binding (e.g., Fig. 3), each single
binding is a pattern of activity that extends across the entire set of binding units. The simplest
possibility for combining these patterns is simply to add them up; that is, to superimpose all the
bindings on top of each other. In the special cases of purely local and semi-local representations, this
procedure reduces trivially to simultaneously representing the individual fillers in the separate pools.
[Figure: a distributed Filler (Energy) pattern along the left edge, a distributed Role (Time) pattern along the bottom edge, and their outer product over the grid of binding units.]
Fig. 3. A visually transparent fully distributed tensor product representation.
The process of superimposing the separate bindings produces a representation of structures with
the usual connectionist properties. If the patterns representing the roles are not too similar, the
separate bindings can all be kept straight. It is not necessary for the role patterns to be nonoverlapping, as they are in the purely local and semi-local cases; it is sufficient that the patterns be
linearly independent. Then there is a simple operation that will correctly extract the filler for any role
from the representation of the structure as a whole. If the patterns are not just linearly independent,
but are also orthogonal, this operation becomes quite direct; we will get to it shortly. For now, the
point is that simply superimposing the separate bindings is sufficient. If the role patterns are not too
similar, the separate bindings do not interfere. The representation gracefully saturates if more and
more roles are filled, since the role patterns being used lose their distinctness once their number
approaches that of the role units.
Thus problem (2) listed above, shared by purely local and semi-local representations, is at last
removed in fully distributed tensor product representations: they do not accommodate structures only up
to a certain rigid limit, beyond which they are completely saturated; rather, they saturate gracefully.
The third problem is also fully addressed, as similar roles can be represented by similar patterns in the
tensor product representation and then generalizations both across similar fillers and across similar
roles can be learned and exploited.
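A small sketch of superposition and the extraction operation just described, with linearly independent role patterns: the dual (unbinding) vectors are the columns of the pseudoinverse of the role matrix, and they coincide with the roles themselves when the roles are orthonormal. The dimensions and random patterns are illustrative assumptions.

```python
import numpy as np

# Superpose three filler/role bindings, then recover the filler bound to
# role 1 by contracting with the corresponding dual role vector.
rng = np.random.default_rng(0)
fillers = rng.normal(size=(3, 5))            # three 5-unit filler patterns
roles = rng.normal(size=(3, 4))              # three linearly independent roles
structure = sum(np.outer(f, r) for f, r in zip(fillers, roles))
duals = np.linalg.pinv(roles)                # roles @ duals == identity
recovered = structure @ duals[:, 1]          # extract the filler for role 1
assert np.allclose(recovered, fillers[1])
```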
The defmition of the tensor product representation of structured data can be summed up as follows:
(a)
(b)
(c)
Let a set S of structured objects be given a role decomposition: a set of fillers, F, a set of
roles R , and for each object s a corresponding set of bindings
P(s) = {f Ir : f fills role r in s }.
Let a connectionist representation of the fillers F be given; f is represented by the activity
vector r.
Let a connectionist representation of the roles R be given; r is represented by the activity
735
(d)
vector r.
Then the corresponding tensor product representation of s is
Y
feB> r (where eB>
!Irtl(s)
denotes the tensor product operation).
In the next section I will discuss a model using a fully distributed tensor product representation,
which will require a brief consideration of role decompositions. I will then go on to summarize general
properties of the tensor product representation.
Role decompositions
The most obvious role decompositions are positional decompositions that involve fixed position slots within a structure of pre-determined form. In the case of a string, such a role would be the i-th position in the string; this was the decomposition used in the examples of Figs. 1 through 3. Another
example comes from McClelland and Kawamoto's model 18 for learning to assign case roles. They
considered sentences of the form The N 1 V the N 2 with the N 3; the four roles were the slots for the
three nouns and the verb.
A less obvious but sometimes quite powerful role decomposition involves not fixed positions of
elements but rather their local context. As an example, in the case of strings of letters, such roles
might be r"y = is preceded by x andfollowed by y, for various letters x and y.
Such a local context decomposition was used to considerable advantage by Rumelhart and
McClelland in their model of learning the morphophonology of the English past tense 21. Their
structures were strings of phonetic segments, and the context decomposition was well-suited for the
task because the generalizations the model needed to learn involved the transformations undergone by
phonemes occurring in different local contexts.
Rumelhart and McClelland's representation of phonetic strings is an example of a fully distributed
tensor product representation. The fillers were phonetic segments, which were represented by a
pattern of phonetic features, and the roles were nearest-neighbor phonetic contexts, which were also
represented as distributed patterns. The distributed representation of the roles was in fact itself a
tensor product representation: the roles themselves have a constituent structure which can be further
broken down through another role decomposition. The roles are indexed by a left and right neighbor;
in essence, a string of two phonetic segments. This string too can be decomposed by a context
decomposition; the filler can be taken to be the left neighbor, and the role can be indexed by the right
neighbor. Thus the vowel [i] in the word week is bound to the role r_{w_k}, and this role is in turn a binding of the filler [w] in the sub-role r'_k. The pattern for [i] is a vector i of phonetic features; the pattern for [w] is another such vector of features w, and the pattern for the sub-role r'_k is a third vector k consisting of the phonetic features of [k]. The binding for the [i] in week is thus i ⊗ (w ⊗ k). Each unit in the representation represents a third-order conjunction of a phonetic feature for a central segment together with two phonetic features for its left and right neighbors. [To get precisely the representation used by Rumelhart and McClelland, we have to take this tensor product representation of the roles (e.g., r_{w_k}) and throw out a number of the binding units generated in this further decomposition; only certain combinations of features of the left and right neighbors were used. The distributed representation of letter triples used by Touretzky and Hinton 1 can be viewed as a similar third-order tensor product derived from nested context decompositions, with some binding units thrown away; in fact, all binding units off the main diagonal were discarded.]
This example illustrates how role decompositions can be iterated, leading to iterated tensor product
representations. Whenever the fillers or roles of one decomposition are structured objects, they can
themselves be further reduced by another role decomposition.
It is often useful to consider recursive role decompositions in which the fiDers are the same type of
object as the original structure. It is clear from the above definition that such a decomposition cannot
be used to generate a tensor product representation. Nonetheless, recursive role decompositions can
be used to relate the tensor product representation of complex structures to the tensor product
representations of simpler structures. For example, consider Lisp binary tree structures built from a set
A of atoms. A non-recursive decomposition uses A as the set of fIllers, with each role being the
736
occupation of a certain position in the tree by an atom. From this decomposition a tensor product
representation can be constructed. Then it can be seen that the operations car, cdr, and cons
correspond to certain linear operators car, cdr, and cons in the vector space of activation vectors. Just
as complex S-expressions can be constructed from atoms using cons, so their connectionist
representations can be constructed from the simple representation of atoms by the application of cons.
(This serial "construction" of the complex representation from the simpler ones is done by the analyst,
not necessarily by the network; cons is a static, descriptive, mathematical operator-not necessarily a
transformation to be carried out by a network.)
Binding and unbinding in connectionist networks
So far, the operation of binding a value to a variable has been described mathematically and
pictured in Figs. 1-3 in terms of "imagined" filler units and role units. Of course, the binding
operation can actually be performed in a network if the filler and role units are really there. Fig. 4
shows one way this can be done. The triangular junctions are Hinton's multiplicative connections22:
the incoming activities from the role and filler units are multiplied at the junction and passed on to the
binding unit.
Binding Units
Fi lIer
Units
Role Units
Fig. 4. A network for tensor product binding and unbinding.
"Unbinding" can also be performed by the network of Fig. 4. Suppose the tensor product
representation of a structure is present in the binding units, and we want to extract the filler for a
particular role. As mentioned above, this can be done accurately if the role patterns are linearly
independent (and if each role is bound to only one filler). It can be shown that in this case, for each
role there is a pattern of activity which, if set up on the role units, will lead to a pattern on the filler
units that represents the corresponding filler. (If the role vectors are orthogonal, this pattern is the
same as the role pattern.) As in Hinton's model 20, it is assumed here that the triangular junctions work
in all directions, so that now they take the product of activity coming in from the binding and role units
and pass it on to the filler units, which sum all incoming activity.
737
The network of Fig. 4 can bind one value/variable pair at a time. In order to build up the
representation of an entire structure, the binding units would have to accumulate activity over an
extended period of time during which all the individual bindings would be performed serially.
Multiple bindings could occur in parallel if part of the apparatus of Fig. 4 were duplicated: this
requires several copies of the sets of filler and role units, paired up with triangular junctions, all
feeding into a single set of binding units.
Notice that in this scheme there are two independent capacities for parallel binding: the capacity to
generate bindings in parallel, and the capacity to maintain bindings simultaneously. The former is
determined by the degree of duplication of the ftller/role unit sets (in Fig. 4, for example, the parallel
generation capacity is 1). The parallel maintenance capacity is determined by the number of possible
linearly independent role patterns, i.e. the number of role units in each set It is logical that these two
capacities should be independent, and in the case of the human visual and linguistic systems it seems
that our maintenance capacity far exceeds our generation capacity21. Note that in purely local and
semi-local representations, there is a separate pool of units dedicated to the representation of each role,
so there is a tendency to suppose that the two capacities are equal. As long as a connectionist model
deals with structures (like four-letter words) that are so small that the number of bindings involved is
within the human parallel generation capacity, there is no harm done. But when connectionist models
address the human representation of large structures (like entire scenes or discourses), it will be
important to be able to maintain a large number of bindings even though the number that can be
generated in parallel is much smaller.
Further properties and extensions
Continuous structures. It can be argued that underlying the connectionist approach is a
fundamentally continuous formalization of computation 13. This would suggest that a natural
connectionist representation of structure would apply at least as well to continuous structures as to
discrete ones. It is therefore of some interest that the tensor product representation applies equally
well to structures characterized by a continuum of roles: a "string" extending through continuous time,
for example, as in continuous speech. In place of a sum over a discrete set of bindings, l:Ji 0ri we
have an integral over a continuum of bindings: J,r(t)0r(t) dt This goes over exactly to the discrete
case if the fillers are discrete step-functions of time.
Continuous patterns. There is a second sense in which the tensor product representation
extends naturally to the continuum. If the patterns representing fillers and/or roles are continuous
curves rather than discrete sets of activities, the tensor product operation is still well-defmed. (Imagine
Fig. 3 with the filler and role patterns being continuous peaked curves instead of discrete
approximations; the binding pattern is then a continuous peaked two-dimensional surface.) In this case,
the vectors rand/or r are members of infmite-dimensional function spaces; regarding them as patterns
of activity over a set of processors would require an infmite number of processors. While this might
pose some problems for computer simulation, the case where rand/of r are functions rather than
finite-dimensional vectors is not particularly problematic analytically. And if a problem with a
continuum of roles is being considered, it may be desirable to assume a continuum of linearly
independent role vectors: this requires considering infinite-dimensional representations.
Values as variables. Treating both values and variables symmetrically as done in the tensor
product representation makes it possible for the same entity to simultaneously serve both as a value
and as a variable. In symbolic computation it often happens that the value bound to one variable is
itself a variable which in tum has a value bound to it In a semi-local representation, where variables
are localized pools of units and values are patterns of activity in these pools, it is difficult to see how
the same entity can simultaneously serve as both value and variable. In the tensor product
representation, both values and variables are patterns of activity, and whether a pattern is serving as a
"variable" or "value"-Qr both-might be merely a matter of descriptive preference.
738
Symbolic structures in associative memories. The mathematical simplicity of the tensor
product representation makes it possible to characterize conditions under which a set of symbolic
structures can be stored in an associative memory without interference. These conditions involve an
interesting mixture of the numerical character of the associative memory and the discrete character of
the stored data.
Learning optimal role patterns by recirculation. While the use of distributed patterns to
represent constituents in structures is well-known, the use of such patterns to represent roles in
structures poses some new challenges. In some domains, features for roles are familiar or easy to
imagine; eg.. features of semantic roles in a case-frame semantics. But it is worth considering the
problem of distributed role representations in domain-independent terms as well. The patterns used to
represent roles determine how information about a structure's fillers will be coded, and these role
patterns have an effect on how much information can subsequently be extracted from the
representation by connectionist processing. The challenge of making the most information available
for such future extraction can be posed as follows. Assume enough apparatus has been provided to do
all the variable binding in parallel in a network like that of Fig. 4. Then we can dedicate a set of role
units to each role; the pattern for each role can be set up once and for all in one set of role units. Since
the activity of the role units provide multipliers for filler values at the triangular junctions, we can treat
these fixed role patterns as weights on the lines from the filler units to the binding units. The problem
of finding good role patterns now becomes the problem of finding good weights for encoding the
fillers into the binding units.
Now suppose that a second set of connections is used to try to extract all the fillers from the
representation of the structure in the binding units. Let the weights on this second set of connections
be chosen to minimize the mean-squared differences between the extracted ftller patterns and the
actual original filler patterns. Let a set of role vectors be called optimal if this mean-squared error is as
small as possible.
It turns out that optimal role vectors can be characterized fairly simply both algebraically and
geometrically (with the help of results from Williams24). Furthermore, having imbedded the role
vectors as weights in a connectionist net, it is possible for the network to learn optimal role vectors by
a fairly simple learning algorithm. The algorithm is derived as a gradient descent in the mean-squared
error, and is what G. E. Hinton and J. L. McClelland (unpublished communication) have called a
recirculation algorithm: it works by circulating activity around a closed network loop and training on
the difference between the activities at a given node on successive passes.
Acknowledgements
This research has been supported by NSF grants 00-8609599 and ECE-8617947, by the Sloan
Foundation's computational neuroscience program, and by the Department of Computer Science and
Institute of Cognitive Science at the University of Colorado at Boulder.
739
References
1. D. S. Touretzky & G. E. Hinton. Proceedings of the International Joint Conference on Artificial
Intelligence, 238-243 (1985).
2. D. S. Touretzky. Proceedings of the 8th Conference of the Cognitive Science Society, 522-530
(1986).
3. P. Smolensky. Technical Report CU-CS-355-87, Department of Computer Science, University
of Colorado at Boulder (1987).
4. J. L. McClelland & D. E. Rumelhart. Psychological Review 88, 375-407 (1981).
5. D. E. Rumelhart & J. L. McClelland. Psychological Review 89, 60-94 (1982).
6. M. Fanty. Technical Report 174, Department of Computer Science, University of Rochester
(1985).
7. J. A. Feldman. The Behavioral and Brain Sciences 8,265-289 (1985).
8. J. L. McClelland & J. L. Elman. In J. L. McClelland, D. E. Rumelhart, & the PDP Research
Group, Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2:
Psychological and biological models. Cambridge, MA: MIT Press/Bradford Books, 58-121
(1986).
9. T. J. Sejnowski & C. R. Rosenberg. Complex Systems 1,145-168 (1987).
10. J. A. Feldman. Technical Report 189, Department of Computer Science, University of Rochester
(1986).
11. J. A. Anderson & G. E. Hinton. In G. E. Hinton and J. A. Anderson, Eds., Parallel models of
associative memory. Hillsdale, NJ: Erlbaum, 9-48 (1981).
12. G. E. Hinton, J. L. McClelland, & D. E. Rumelhart. In D. E. Rumelhart, J. L. McClelland, & the
PDP Research Group, Parallel distributed processing: Explorations in the microstructure of
cognition. Vol. 1: Foundations. Cambridge, MA: MIT Press/Bradford Books, 77-109 (1986).
13. P. Smolensky. The Behavioral and Brain Sciences 11(1) (in press).
14. P. Smolensky. In J. L. McClelland, D. E. Rumelhart, & the PDP Research Group, Parallel
distributed processing: Explorations in the microstructure of cognition. Vol. 2: Psychological and
biological models. Cambridge, MA: MIT Press/Bradford Books, 390-431 (1986).
15. G. E. Hinton. In Hinton, G.E. and Anderson, J.A., Eds., Parallel models of associative memory.
Hillsdale,NJ: Erlbaum, 161-188 (1981).
16. M. S. Riley & P. Smolensky. Proceedings of the Sixth Annual Conference of the Cognitive Science
Society, 286-292 (1984).
17. P. Smolensky. In D. E. Rumelhart, J. L. McOelland, & the PDP Research Group, Parallel
distributed processing: Explorations in the microstructure of cognition. Vol. 1: Foundations.
Cambridge, MA: MIT Press/Bradford Books, 194-281 (1986).
18. J. L. McClelland & A. H. Kawamoto. In J. L. McClelland, D. E. Rumelhart, & the PDP Research
Group, Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2:
Psychological and biological models. Cambridge, MA: MIT Press/Bradford Books, 272-326
(1986).
19. M. Derthick. Proceedings of the National Conference on Artificial Intelligence, 346-351 (1987).
20. M. Dertbick. Proceedings of the Annual Conference of the Cognitive Science Society, 131-142
(1987).
21. D. E. Rumelhart & J. L. McClelland. In J. L. McClelland, D. E. Rumelhart, & the PDP Research
Group, Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2:
Psychological and biological models. Cambridge, MA: MIT Press/Bradford Books, 216-271
(1986)
22. G. E. Hinton. Proceedings of the Seventh International Joint Conference on Artificial Intelligence,
683-685 (1981).
23. A. M. Treisman & H. Schmidt. Cognitive Psychology 14,107-141 (1982).
24. R. J. Williams. Technical Report 8501, Institute of Cognitive Science, University of California,
San Diego (1985).
| 58 |@word cu:1 loading:1 seems:1 simulation:1 prominence:1 decomposition:21 rol:1 fonn:1 series:1 past:1 existing:4 imaginary:1 activation:2 must:6 readily:1 numerical:1 realistic:1 enables:1 designed:1 treating:1 intelligence:3 item:2 utterly:1 characterization:1 provides:2 node:1 successive:2 preference:1 simpler:3 mathematical:3 constructed:3 direct:1 consists:1 combine:1 behavioral:2 introduce:1 indeed:1 behavior:1 themselves:3 elman:1 embody:1 brain:2 detects:1 decomposed:1 little:1 actual:1 considering:2 becomes:2 provided:1 underlying:1 what:3 pennits:1 unbinding:3 interpreted:1 string:11 finding:2 transformation:2 nj:2 every:1 exactly:3 unit:64 penn:1 grant:1 producing:1 local:37 bind:1 apparatus:2 limit:2 treat:1 encoding:1 might:3 eb:1 co:2 limited:2 responsible:1 recursive:4 implement:1 procedure:2 coping:1 pre:1 word:2 suggest:1 symbolic:14 get:2 cannot:2 operator:2 storage:1 context:7 go:3 williams:1 survey:1 simplicity:2 identifying:2 immediately:2 fill:1 population:1 notion:2 feel:1 construction:3 suppose:4 colorado:3 imagine:3 diego:1 us:1 element:17 rumelhart:13 particularly:2 bottom:1 role:104 removed:1 mentioned:1 broken:1 raise:1 segment:4 algebra:1 serve:3 purely:13 completely:3 po:2 joint:2 represented:21 various:1 sejnowski:1 artificial:3 apparent:1 quite:2 larger:1 solve:1 posed:1 say:1 triangular:4 formant:1 emergence:1 itself:2 associative:6 sequence:4 advantage:4 derthick:2 descriptive:2 net:1 product:44 coming:1 fanty:1 relevant:1 combining:2 loop:1 detennined:1 representational:1 constituent:11 qr:1 neumann:1 extending:1 produce:1 object:5 help:1 develop:2 pose:2 nearest:1 throw:1 c:1 involves:1 indicate:1 come:1 direction:1 subsequently:1 exploration:6 occupies:1 human:3 viewing:1 hillsdale:2 require:2 piecing:1 activating:1 assign:1 feeding:1 transparent:1 generalization:4 really:2 argued:1 microstructure:6 biological:4 elementary:1 mathematically:1 extension:1 hold:1 around:1 considered:2 roi:1 visually:2 mapping:1 week:2 cognition:6 continuum:5 purpose:1 lose:1 infonnation:2 successfully:1 mit:6 rather:4 rosenberg:1 conjunction:3 linguistic:1 derived:2 indicates:1 greatly:1 sense:1 rigid:2 el:2 i0:1 entire:4 relation:3 semantics:1 noun:1 special:3 summed:1 fairly:2 equal:1 once:2 extraction:1 having:1 atom:4 represents:2 peaked:2 future:1 connectionist:32 report:6 fundamentally:1 serious:1 few:2 employ:1 simultaneously:5 national:1 individual:4 familiar:1 consisting:1 maintain:3 vowel:1 attempt:2 thrown:1 interest:1 possibility:1 saturated:1 truly:1 extreme:1 mixture:1 kt:1 edge:2 capable:1 integral:1 necessary:1 orthogonal:2 filled:1 indexed:2 tree:2 recirculation:3 psychological:6 soft:2 infonnative:1 riley:1 seventh:1 erlbaum:2 too:3 characterize:1 stored:3 reported:2 answer:1 combined:1 fundamental:2 peak:1 international:2 retain:2 physic:1 off:1 pool:12 together:4 treisman:1 von:1 central:2 squared:3 cognitive:6 book:6 american:1 leading:1 nonoverlapping:1 matter:1 register:1 sloan:1 performed:4 later:1 multiplicative:1 try:1 closed:1 doing:1 sort:1 parallel:20 rochester:2 minimize:1 ir:1 irtl:1 phoneme:1 characteristic:1 clarifies:1 correspond:1 iterated:2 accurately:1 worth:1 straight:1 processor:5 touretzky:3 whenever:1 ed:2 lengthy:1 definition:1 sixth:1 energy:4 nonetheless:1 involved:2 obvious:3 naturally:4 proof:2 associated:2 con:5 static:1 duplicated:1 logical:1 car:2 actually:1 tum:1 higher:1 dt:1 rand:2 arranged:1 done:5 though:1 anderson:3 furthermore:1 just:2 until:1 web:1 interfere:1 indicated:2 effect:1 multiplier:1 counterpart:3 
former:1 analytically:1 satisfactory:1 infmite:2 semantic:1 eg:3 deal:1 ll:1 during:1 defmed:2 essence:1 complete:1 confusion:1 dedicated:5 marry:2 consideration:1 fi:1 preceded:1 ji:1 overview:1 imagined:3 discussed:1 relating:1 defmition:1 accumulate:1 refer:1 cambridge:6 feldman:2 ai:4 trivially:1 language:2 similarity:1 surface:1 add:2 feb:1 recent:1 massively:3 manipulation:2 certain:6 phonetic:10 binary:1 arbitrarily:3 exploited:1 seen:1 greater:1 determine:1 algebraically:1 period:1 semi:16 multiple:7 full:1 desirable:1 reduces:1 transparency:1 exceeds:1 technical:4 characterized:3 long:1 serial:2 equally:1 coded:1 paired:1 basic:1 maintenance:2 enhancing:1 essentially:1 represent:9 sometimes:1 achieved:2 dedicating:1 want:2 addressed:1 crucial:1 rest:1 defmitions:1 posse:2 pass:1 duplication:1 member:1 lisp:1 call:1 symmetrically:1 easy:1 enough:1 independence:1 psychology:1 reduce:1 idea:1 regarding:1 inactive:1 whether:1 expression:1 passed:1 speech:2 adequate:1 useful:1 clear:1 involve:3 listed:2 mcclelland:18 simplest:2 rw:1 generate:3 occupy:1 reduced:1 exist:1 problematic:1 nsf:1 notice:1 neuroscience:1 correctly:1 serving:1 discrete:7 vol:6 group:6 four:3 terminology:1 kept:1 merely:1 geometrically:1 sum:2 letter:6 powerful:1 extends:3 place:1 reader:1 pennit:1 decision:1 bound:7 annual:2 activity:30 occur:2 constraint:2 precisely:1 scene:1 ri:1 aspect:1 qb:1 department:5 structured:8 combination:1 across:3 smaller:1 character:2 making:1 happens:1 gradually:1 boulder:3 interference:2 taken:1 computationally:1 turn:3 discus:1 needed:2 kawamoto:3 generalizes:1 operation:10 junction:5 available:1 multiplied:1 apply:1 away:1 appropriate:1 generic:1 schmidt:1 shortly:1 original:2 top:1 remaining:1 denotes:1 fonnalisms:1 parsed:1 build:1 mcoelland:1 society:3 tensor:40 question:2 imbedded:1 usual:1 traditional:2 diagonal:1 gradient:1 separate:9 capacity:10 entity:2 outer:1 gracefully:4 participate:2 trivial:2 reason:1 analyst:1 length:1 difficult:1 relate:1 design:1 upper:1 discarded:1 finite:1 descent:1 saturates:2 extended:2 precise:1 looking:1 incorporated:1 hinton:11 frame:1 communication:1 pdp:6 verb:1 superimpose:1 pair:4 required:1 unpublished:1 sentence:3 connection:2 connectionists:1 california:1 learned:1 address:1 beyond:1 able:1 pattern:53 perception:1 smolensky:6 appeared:1 summarize:1 challenge:2 saturation:1 program:1 built:1 memory:6 power:4 satisfaction:2 natural:4 difficulty:1 serially:1 pictured:1 representing:15 scheme:1 brief:1 identifies:1 circulating:1 carried:2 extract:3 review:2 acknowledgement:1 occupation:1 fully:13 expect:1 generation:3 interesting:1 localized:5 triple:1 foundation:3 degree:1 sufficient:2 undergone:1 course:2 summary:1 supported:1 last:1 copy:1 english:1 lier:1 institute:3 neighbor:6 taking:1 jot:1 distributed:44 curve:2 vocabulary:1 san:1 far:3 dedicate:2 incoming:2 harm:1 assumed:1 continuous:11 learn:3 complex:16 necessarily:2 constructing:1 domain:2 main:1 linearly:5 whole:2 paul:1 arise:1 fig:20 referred:1 formalization:1 sub:2 position:10 third:6 theorem:1 saturate:2 tlon:1 down:1 showing:1 symbol:2 consist:1 effectively:1 accomodate:1 illustrates:1 occurring:3 exhausted:1 suited:1 simply:3 cdr:2 visual:2 positional:1 partially:1 binding:68 applies:1 corresponds:1 nested:1 discourse:1 extracted:2 ma:6 slot:2 viewed:2 towards:1 shared:1 feasible:1 considerable:1 determined:2 except:1 infinite:1 semantically:1 tack:1 called:3 pas:1 ece:1 superimposing:2 tendency:1 bradford:6 filler:41 overload:1 |
5,302 | 580 | Multi-State Time Delay Neural Networks
for Continuous Speech Recognition
Alex Waibel
Patrick Haffner
CNET Lannion A TSSIRCP
22301 LANNION, FRANCE
[email protected]
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We present the "Multi-State Time Delay Neural Network" (MS-TDNN) as an
extension of the TDNN to robust word recognition. Unlike most other hybrid
methods. the MS-TDNN embeds an alignment search procedure into the connectionist architecture. and allows for word level supervision. The resulting
system has the ability to manage the sequential order of subword units. while
optimizing for the recognizer performance. In this paper we present extensive
new evaluations of this approach over speaker-dependent and speaker-independent connected alphabet.
1
INTRODUCTION
Classification based Neural Networks (NN) have been successfully applied to phoneme
recognition tasks. Extending those classification capabilities to word recognition is an
important research direction in speech recognition. However. connectionist architectures
do not model time alignment properly. and they have to be combined with a Dynamic Programming (DP) alignment procedure to be applied to word recognition. Most of these
"hybrid" systems (Bourlard. 1989) take advantage of the powerful and well tried probabilistic formalism provided by Hidden Markov Models (HMM) to make use of a reliable
alignment procedure. However. the use of this HMM formalism strongly limits one's
choice of word models and classification procedures.
MS-TDNNs. which do not use this HMM formalism. suggest new ways to design speech
recognition systems in a connectionist framework. Unlike most hybrid systems where
connectionist procedures replace some parts of a pre-existing system. MS-TDNNs are
designed from scratch as a global Neural Network that performs word recognition. No
bootstrapping is required from an HMM. and we can apply learning procedures that correct the recognizer's errors explicitly. These networks have been successfully tested on
135
136
Haffner and Waibel
difficult word recognition tasks. such as speaker-dependent connected alphabet recognition (Haffner et al. 1991a) and speaker-independent telephone digit recognition (Haffner
and Waibel. 1991b). Section 2 presents an overview of hybrid Connectionist/HMM architectures and training procedures. Section 3 describes the MS-TDNN architecture. Section
4 presents our novel training procedure. In section 5. MS-TDNNs are tested on speakerdependent and speaker-independent continuous alphabet recognition.
2
HYBRID SYSTEMS
HMMs are currently the most efficient and commonly used approach for large speech recognition tasks: their modeling capacity. however limited, fits many speech recognition
problems fairly well (Lee. 1988). The main limit to the modelling capacity of HMMs is
the fact that trainable parameters must be interpretable in a probabilistic framework to be
reestimated using the Baum-Welch algorithm with the Maximal Likelihood Estimation
training criterion (MLE).
Connectionist learning techniques used in NNs (generally error back-propagation) allow
for a much wider variety of architectures and parameterization possibilities. Unlike
HMMs. NNs model discrimination surfaces between classes rather than the complete
input/output distributions (as in HMMs) : their parameters are only trained to minimize
some error criterion. This gain in data modeling capacity, associated with a more discriminant training procedure, has permitted improved performance on a number of speech
tasks. especially those in which modeling sequential information is not necessary. For
instance. Time Delay Neural Networks have been applied, with high performance, to phoneme classification (Waibel et al. 1989). To extend this performance to word recognition,
one has to combine a front-end NN with a procedure performing time alignment, usually
based on DP. A variety of alignment procedures and training methods have been proposed
for those "hybrid" systems.
2.1 TIME ALIGNMENT
To take into account the time distortions that may appear within its boundaries, a word is
generally modeled by a sequence of states (l ?...? s .... ,N) that can have variable durations.
The score of a word in the vocabulary accumulates frame-level scores which are a function of the output Y(t) = (Y1(t), ...?Y.,(t? of the front end NN
(1)
The DP algorithm finds the optimal alignment {T I' ... , TN + I} which maximizes this word
score. A variety of Score functions have been proposed for Eq.(l). They are most often
treated as likelihoods, to apply the probabilistic Viterbi alignment algorithm.
2.1.1 NN outputs probabilities
Outputs of classification based NNs have been shown to approximate Bayes probabilities,
provided that they are trained with a proper objective function (Bourlard, 1989). For
instance, we can train our front-end NN to output, at each time frame, state probabilities
that can be used by a Viterbi alignment procedure (to each state s there corresponds a NN
Multi-State Time Delay Neural Networks for Continuous Speech Recognition
output irs)~. Eq.(I) gives the resulting word log (likelihood) as a sum of frame-levellog(likelihoods) which are written 1:
Scores (Y(t? = log (Yi(s)(t?
(2)
~
2.1.2 Comparing NN output to a reference vector
The front end NN can be interpreted as a system remapping the input to a single density
continuous HMM (Bengio. 1991). In the case of identity covariance matrices, Eq.(l) gives
the log(likelihood) for the k-th word (after Viterbi alignment) as a sum of distances
between the NN frame-level output and a reference vector associated with the current
state2.
Scores (Y{t?
= II yet) -
r, ... ,
y l1 2
S
(3)
Here. the reference vectors (ft. ... ,
yN) correspond to the means of gaussian PDFs,
and can be estimated with the Baum-Welch algorithm.
2.2 TRAINING
The first hybrid models that were proposed (Bourlard. 1989; Franzini, 1991) optimized the
state-level NN (with gradient descent) and the word-level HMM (with Baum-Welch) separately. Even though each level of the system may have reached a local optimum of its
cost function, training is potentially suboptimal for the given complete system. Global
optimization of hybrid connectionist/HMM systems requires a unified training algorithm,
which makes use of global gradient descent (Bridle. 1990).
3
THE MS-TDNN ARCHITECTURE
MS-TDNNs have been designed to extend TDNNs classification performance to the word
level. within the simplest possible connectionist framework. Unlike the hybrid methods
presented in the previous section, the HMM formalism is not taken as the underlying
framework here. but many of the models developed within this formalism are applicable
to MS-TDNNs.
3.1 FRAME?LEVEL TDNN ARCHITECTURE
All the MS-TDNNs architectures described in this paper use the front-end TDNN architecture (Waibel et al. 1989), shown in Fig.l. at the state level. Each unit of the first hidden
layer receives input from a 3-frame window of input coefficients. Similarly, each unit in
the second hidden layer receives input from a 5-frame window of outputs of the first hidden layer. At this level of the system (2nd hidden layer). the network produces, at each
time frame. the scores for the desired phonetic features. Phoneme recognition TDNNs are
trained in a time-shift invariant way by integrating over time the output of a single state.
3.2 BASELINE MS? TDNN
With MS-TDNNs, we have extended the formalism of TDNNs to incorporate time alignment. The front-end TDNN architecture has I output units, whose activations (ranging
1. State prior probabilities would add a constant tenn to Eq.(2)
2. State transition probabilities add an offset to Eq.(3)
137
138
Haffner and Waibel
2nd hidden
layer
Word2
[!]
wordllll
?????????
Input
lpeec
?? ??????????????
??????????????
??????????????
??? ??????????????
??????????? ???
t
State scores are
copied from the
2nd hidden layer
oftheTDNN
??
?
??
TIME
Figure 1: Frame-Level TDNN
Figure 2: MS-TDNN
from 0 to 1) represent the trame-level scores. To each state s corresponds a TDNN output
i(s). Different states may share the same output (for instance with phone models). The DP
procedure, as described in Eq.(1), determines the sequence of states producing the maximum sum of activations3:
Scores(Y(t? = Yj(s)
(4)
The frame-level score used in the MS-TDNN combines the advantages of being simple
with that of having a formal description as an extension of the TDNN accumulation process to multiple states. It becomes possible to model the accumulation process as a connectionist word unit that sums the activations from the best sequence of incoming state
units, as shown in Fig.2. This is mostly useful during the back-propagation phase: at each
time frame, we imagine a virtual connection between the active state unit and the word
unit, which is used to backpropagate the error at the word level down to the state level. 4
3.3 EXTENDING MS? TDNNs
In the previous section, we presented the baseline MS-TDNN architecture. We now
present extensions to the word-level architecture, which provide additional trainable
parameters. Eq.(4) is extended as:
Scores(Y(t? = Weight;? Y;(s) +Bias j
(5)
3. This equation is not very different from Eq.(2) presented in the previous section, however, all
attempts to use log(Y,{t)) instead of Y,{t) have resulted in unstable learning runs, that have never con?
verged properly. During the test phase, the two approaches may be functionally not very different.
Outputs that affect the error rate in a critical way are mostly those of the correct word and the best
incorrect word, especially when they are close. We have observed that frame level scores which play
a key role in discrimination are close to 1.0: the two scores become asymptotically equivalent (less
1): log(Y,(t? - Y,{t) - 1.
4. The alignment path found by the DP routine during the forward phase is "frozen", so that it can
be represented as a connectionist accumulation unit during the backward phase. The problem is that,
after modification of the weights, this alignment path may no longer be the optimal one. Practical
consequences of this seem minimal.
Multi-State Time Delay Neural Networks for Continuous Speech Recognition
Weight; allows to weight differently the importance of each state belonging to the same
word. We do not have to assume that each part of a speech pattern contains an equal
amount of infonnation.
Bias; is analog to a transition log(probability) in a HMM.
However. we have observed that a small variation in the value of those parameters may
alter recognition perfonnance a lot. The choice of a proper training procedure is critical.
Our gradient back-propagation algorithm has been selected for its efficient training of the
parameters of the front-end TDNN ; as our training procedure is global. we have also
applied it to train Weight; and Bias;. but with some difficulty.
In section 4.1, we show that they are useful to shift the word scores so that a sigmoid function separates the correct words (output 1) properly from the incorrect ones (output 0).
3.4 SEQUENCE MODELS
We design very simple state sequence models by hand that may use phonetic knowledge
(phone models) or may not (word models).
Phone Models: The phonetic representation of a word is transcribed as a sequence of
states. As an example shown in Fig.3, the letter 'p' combines 3 phone units. P captures
the closure and the burst of this stop consonant. P-IY is a co-articulation unit. The phone
IY is recognized in a context independent way. This phone is shared with all the other eset letters. States are duplicated to enforce minimal phone durations.
[email protected]+~
Figure 3 Phone Model for 'p'
Word Models: No specific phonemic meaning is associated with the states of a word.
Those states cannot be shared with other words.
Transition States: One can add specialized transition units that are trained to detect this
transition more explicitly: the resulting stabilization in segmentation yields an increase
in performance. This method is however sensitive to a good bootstrapping of our system
on proper phone boundaries. and has so far only been applied to speaker dependent
alphabet recognition.
4
TRAINING
In many speech recognition systems. a large discrepancy is found between the training
procedure and the testing procedure. The training criterion. generally Maximum Likelihood Estimation. is very far from the word accuracy the system is expected to maximize.
Good perfonnance depends on a large quantity of data, and on proper modeling. With MSTDNNs. we suggest optimization procedures which explicitly attempt to minimize the
number of word substitutions; this approach represents a move towards systems in which
the training objective is maximum word accuracy. The same global gradient back-propagation is applied to the whole system, from the output word units down to the input units.
Each desired word is associated with a segment of speech with known boundaries. and this
association represents a learning sample. The DP alignment procedure is applied between
the known word boundaries. We describe now three training procedures we have applied
to MS-TDNNs.
139
140
Haffner and Waibel
4.1 STANDARD BACK-PROPAGATION WITH SIGMOIDAL OUTPUTS
Word outputs Q1 = ftW 1 . 0 1 + B 1) are compared to word targets (J for the desired word,
ofor the other words), and the resulting error is back-propagated. Ok is the OP sum given
by Eq.(1) for the k-th word in the vocabulary.jis the sigmoid function, Wk gives the slope
of the sigmoid and B k is a bias term, as shown in Fig.4. They are trained so that the sigmoid function separates the correct word (Output 1) form the incorrect words (Output 0)
properly. When the network is trained with the additional parameters of Eq.(5),Weighti and
Biasi can account for these sigmoid slope and bias.
MS-TDNNs are applied to word recognition problems where classes are highly confusable. The score of the best incorrect word may be very close to the score of the correct
word: in this case, the slope and the bias are parameters which are difficult to tune, and the
learning procedure has problems to attain the 0 and 1 target values. To overcome those difficulties, we have developed new training techniques which do not require the use of a sigmoid function and of fixed word targets.
S.l9pe
f
A
p==:
I II
best
incorrect
Other word scores
I
correct
bias
Fig.4. The sigmoid Function
4.2 ON-LINE CORRECTION OF CLASSIFICATION ERRORS
The testing procedure recognizes the word (or the string of words) with the largest output,
and there is an error when this is not the correct word. As the goal of the training procedure is to minimize the number of errors, the "ideal" procedure would be, each time a classification error has occurred, to observe where it comes from, and to modify the
parameters of the system so that it no longer happens.
The MS-TDNN has to recognize the correct word CoWo. There is a training error if, for
an incorrect word InWo, one has O/nWo > OCowo- m. No sigmoid function is needed to
compare these outputs, m is an additional margin to ensure the robustness of the training
procedure. Only in the event of a training error do we modify the parameters of the MSTONN. The word targets are moving (for instance, the target score for an incorrect word is
OCowo- m) instead of fixed (0 or 1).
This technique overcomes the difficulties due to the use of an output sigmoid function.
Moreover, the number of incorrect words whose output is actually modified is greatly
reduced: this is very helpful in training under-represented classes, as the numbers of positive and negative examples become much more balanced.
Compared to the more traditional training technique (with a sigmoid) presented in the previous section, large increases in training speed and word accuracy were observed.
4.3 FUZZY WORD BOUNDARIES
Training procedures we have presented so far do not take into account the fact that the
sample words may come from continuous speech. The main difficulty is that their straightforward extension to continuous speech would not be computationally feasible, as the set
Multi-State Time Delay Neural Networks for Continuous Speech Recognition
of possible training classes will consist of all the possible strings of words.We have
adopted a staged approach: we modify the training procedure, so that it matches the continuous recognition conditions more and more closely, while remaining computationally
feasible.
The first step deals with the problem of word boundaries. During training, known word
boundaries give additional information that the system uses to help recognition. But this
information is not available when testing. To overcome this problem when learning a correct word (noted CoWo). we take as the correct training token the triplet PreWo-CoWoNexWo (PreWo is the preceding correct word, NexWo is the next correct word in the sentence). All the other triplets PreWo-InWo-NexWo are considered as incorrect. These triplets are aligned between the beginning known boundary of PrevWo and the ending known
boundary of NexWo. What is important is that no precise boundary information is given
forCoWo.
The word classification training criterion presented here only minimizes word substitutions. In connected speech. one has to deal with deletions and insertions errors: procedures
to describe them as classification errors are currently being developed.
5
EXPERIMENTS ON CONNECTED ALPHABET
Recognizing spoken letters is considered one of the most challenging small-vocabulary
tasks in speech recognition. The vocabulary, consisting of the 26 letters of the American
English alphabet, is highly confusable, especially among subsets like the E-set
('B' :C' :D' :E' :G' :P' :T' :V' :Z') or ('M', 'N'). In all experiments. as input parameters,
16 filterbank melscale spectrum coefficients are computed at a lOmsec frame rate. Phone
models are used.
5.1 SPEAKER DEPENDENT ALPHABET
Our database consists of 1000 connected strings of letters, some corresponding to grammatical words and proper names, others simply random. There is an average of five letters
per string. The learning procedure is described in ?4.1 and applied to the extended MSTDNN (?3.3), with a bootstrapping phase where phone labels are used to give the alignment of the desired word. During testing, time alignment is performed over the whole sentence. A one-stage DP algorithm (Ney, 1984) for connected words (with no grammar) is
used in place of the isolated word DP algorithm used in the training phase. The additional
use of minimal word durations, word entrance penalties and word boundary detectors has
reduced the number of word insertions and deletions (in the DP algorithm) to an acceptable level. On two speakers, the word error rates are respectively 2.4% and 10.3%. By
comparison, SPHINX, achieved error rates of 6% and 21.7%, respectively, when contextindependent (as in our MS-TDNN) phone models were used. Using context-dependent
models (as described in Lee, 1988), SPHINX performance achieves 4% and 16.1% error
rates, respectively. No comparable results yet exist for the MS-TDNN for this case.
5.2 SPEAKER INDEPENDENT ALPHABET (RMspell)
Our database, a part of the DARPA Resource Management database (RM), consists of
120 speakers, spelling about 15 words each. 109 speakers (about 10,000 spelled letters)
are used for training. 11 speakers (about 1000 spelled letters) are used for testing. 57
phone units. in the second hidden layer. account for the phonemes and the co-articulation
units. We apply the training algorithms described in ?4.2 and ?4.3 to our baseline MS-
141
142
Haffner and Waibel
TDNN architecture (?3.2), without any additional procedure (for instance, no phonetic
bootstrapping). An important difference from the experimental conditions described in the
previous section is that we have kept training and testing conditions exactly similar (for
instance, the same knowledge of the boundaries is used during training and testing).
Table 1: Alphabet classification errors (we do not allow for insertions or deletions errors).
6
Algorithm
%Error
Known Word Boundaries (?4.2)
Fuzzy Word Boundaries (?4.3)
5.7%
6.5%
SUMMARY
We presented in this paper MS-TDNNs, which extend TDNNs classification performance
to the sequence level. They integrate the DP alignment procedure within a straightforward
connectionist framework. We developed training procedures which are computationally
reasonable and train the MS-TDNN in a global way. Their only supervision is the minimization of the recognizer's error rate. Experiments were conducted on speaker independent
continuous alphabet recognition. The word error rates are 5.7% with known word boundaries and 6.5% with fuzzy word boundaries.
Acknowledgments
The authors would like to express their gratitude to Denis J ouvet and Michael Witbrock,
for their help writing this paper, and to Cindy Wood, for gathering the databases. This
work was partially performed at CNET laboratories, and at Carnegie Mellon University,
under DARPA support.
References
Bengio, Y "Artificial Neural Networks and their Application to Sequence Recognition"
Ph.D. Thesis, McGill University, Montreal, June 1991.
Bourlard, H and Morgan, N. "Merging Multilayer Perceptrons and Hidden Markov Models: Some Experiments in Continuous Speech Recognition", TR-89-033, ICSI, Berkeley, CA, July 1989
Bridle, J.S. "Alphanets: a recurrent 'neural' network architecture with a hidden Markov
model interpretation." Speech Communication, "Neurospeech" issue, Feb 1990.
Franzini, M.A., Lee, K.F., and Waibel, A.H.,"Connectionist Viterbi Training: A New
Hybrid Method for Continuous Speech Recognition," ICASSP, Albuquerque 1990
Haffner, P., FranzinLM. and Waibel A., " Integrating Time Alignment and Neural Networks for High Performance Continuous Speech Recognition " ICASSP, Toronto
1991a.
Haffner. P and Waibel A. "Time-Delay Neural Networks Embedding Time Alignment: a
Performance Analysis" Europseech'91 , Genova, September 1991b.
Lee, K.F. "Large-Vocabulary Speaker-Independent Continuous Speech Recognition: the
SPHINX system", PhD Thesis, Carnegie Mellon University, 1988.
Ney, H. ''The Use of a One-Stage Dynamic Programming Algorithm for Connected Word
Recognition" in IEEE Trans. on Acoustics, Speech and Signal Processing. April 1984.
Waibel, A.H., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K., "Phoneme Recognition using Time-Delay Neural Networks" in IEEE Transactions on Acoustics, Speech
and Signal Processing 37(3):328-339, 1989.
| 580 |@word nd:3 closure:1 tried:1 covariance:1 q1:1 tr:1 substitution:2 contains:1 score:19 subword:1 existing:1 current:1 comparing:1 activation:2 yet:2 lang:1 must:1 written:1 entrance:1 cindy:1 designed:2 interpretable:1 discrimination:2 tenn:1 selected:1 parameterization:1 beginning:1 denis:1 toronto:1 sigmoidal:1 five:1 burst:1 become:2 incorrect:9 consists:2 combine:3 expected:1 multi:5 window:2 becomes:1 provided:2 underlying:1 moreover:1 maximizes:1 remapping:1 what:1 interpreted:1 string:4 minimizes:1 fuzzy:3 developed:4 unified:1 spoken:1 bootstrapping:4 berkeley:1 exactly:1 filterbank:1 rm:1 unit:16 appear:1 yn:1 producing:1 positive:1 local:1 modify:3 limit:2 consequence:1 accumulates:1 path:2 challenging:1 co:2 hmms:4 limited:1 practical:1 acknowledgment:1 yj:1 testing:7 digit:1 procedure:32 attain:1 word:82 pre:1 speakerdependent:1 integrating:2 suggest:2 cannot:1 close:3 context:2 writing:1 accumulation:3 equivalent:1 baum:3 straightforward:2 duration:3 welch:3 embedding:1 variation:1 mcgill:1 imagine:1 play:1 target:5 programming:2 us:1 pa:1 recognition:34 database:4 observed:3 ft:1 role:1 capture:1 connected:7 icsi:1 balanced:1 insertion:3 dynamic:2 trained:6 segment:1 icassp:2 darpa:2 differently:1 represented:2 alphabet:10 train:3 describe:2 artificial:1 whose:2 distortion:1 grammar:1 ability:1 hanazawa:1 advantage:2 sequence:8 frozen:1 maximal:1 fr:1 aligned:1 verged:1 description:1 optimum:1 extending:2 produce:1 spelled:2 wider:1 help:2 recurrent:1 montreal:1 op:1 phonemic:1 eq:10 c:1 come:2 direction:1 closely:1 correct:12 stabilization:1 virtual:1 require:1 extension:4 correction:1 considered:2 viterbi:4 achieves:1 recognizer:3 estimation:2 applicable:1 label:1 currently:2 infonnation:1 nwo:1 sensitive:1 largest:1 successfully:2 minimization:1 gaussian:1 modified:1 rather:1 june:1 properly:4 pdfs:1 modelling:1 likelihood:6 greatly:1 baseline:3 detect:1 helpful:1 dependent:5 nn:10 hidden:10 france:1 issue:1 classification:12 among:1 fairly:1 equal:1 never:1 having:1 represents:2 alter:1 discrepancy:1 connectionist:12 word2:1 others:1 resulted:1 recognize:1 phase:6 consisting:1 attempt:2 irs:1 possibility:1 highly:2 evaluation:1 alignment:20 melscale:1 necessary:1 perfonnance:2 desired:4 confusable:2 isolated:1 minimal:3 instance:6 formalism:6 modeling:4 cost:1 witbrock:1 subset:1 delay:8 recognizing:1 conducted:1 front:7 combined:1 nns:3 density:1 probabilistic:3 lee:4 michael:1 reestimated:1 iy:2 thesis:2 mstdnn:1 manage:1 management:1 transcribed:1 sphinx:3 american:1 account:4 wk:1 coefficient:2 explicitly:3 depends:1 performed:2 lot:1 reached:1 bayes:1 capability:1 slope:3 minimize:3 accuracy:3 phoneme:5 correspond:1 yield:1 albuquerque:1 detector:1 associated:4 con:1 bridle:2 gain:1 stop:1 propagated:1 duplicated:1 knowledge:2 segmentation:1 routine:1 actually:1 back:6 ok:1 permitted:1 improved:1 april:1 though:1 strongly:1 stage:2 hand:1 receives:2 propagation:5 name:1 laboratory:1 deal:2 during:7 speaker:14 noted:1 m:24 criterion:4 complete:2 tn:1 performs:1 l1:1 ranging:1 meaning:1 novel:1 sigmoid:10 specialized:1 ji:1 overview:1 extend:3 interpretation:1 analog:1 association:1 functionally:1 occurred:1 mellon:3 similarly:1 moving:1 supervision:2 surface:1 longer:2 patrick:1 add:3 feb:1 optimizing:1 phone:13 phonetic:4 yi:1 morgan:1 additional:6 preceding:1 recognized:1 biasi:1 maximize:1 signal:2 ii:2 july:1 multiple:1 match:1 mle:1 multilayer:1 cmu:1 represent:1 achieved:1 tdnns:15 separately:1 unlike:4 seem:1 ideal:1 bengio:2 variety:3 affect:1 fit:1 
architecture:14 suboptimal:1 haffner:10 shift:2 penalty:1 speech:23 generally:3 useful:2 tune:1 amount:1 ph:1 simplest:1 reduced:2 exist:1 estimated:1 ofor:1 per:1 carnegie:3 express:1 key:1 kept:1 backward:1 asymptotically:1 sum:5 wood:1 run:1 letter:8 powerful:1 place:1 reasonable:1 acceptable:1 genova:1 comparable:1 layer:7 copied:1 alex:1 speed:1 performing:1 waibel:12 belonging:1 describes:1 modification:1 happens:1 invariant:1 gathering:1 taken:1 computationally:3 equation:1 resource:1 needed:1 end:7 staged:1 adopted:1 available:1 apply:3 observe:1 enforce:1 ney:2 robustness:1 remaining:1 ensure:1 recognizes:1 especially:3 franzini:2 objective:2 move:1 quantity:1 spelling:1 traditional:1 september:1 gradient:4 dp:10 distance:1 separate:2 capacity:3 hmm:10 discriminant:1 unstable:1 modeled:1 difficult:2 mostly:2 potentially:1 negative:1 design:2 proper:5 markov:3 descent:2 extended:3 communication:1 cnet:3 precise:1 frame:13 y1:1 hinton:1 lannion:3 gratitude:1 required:1 trainable:2 extensive:1 optimized:1 connection:1 sentence:2 acoustic:2 deletion:3 trans:1 usually:1 pattern:1 articulation:2 reliable:1 critical:2 event:1 treated:1 hybrid:10 difficulty:4 bourlard:4 tdnn:21 prior:1 integrate:1 share:1 summary:1 token:1 english:1 formal:1 allow:2 bias:7 grammatical:1 boundary:16 overcome:2 vocabulary:5 transition:5 ending:1 forward:1 commonly:1 author:1 far:3 transaction:1 approximate:1 overcomes:1 global:6 active:1 incoming:1 pittsburgh:1 state2:1 consonant:1 shikano:1 spectrum:1 continuous:14 search:1 triplet:3 table:1 scratch:1 robust:1 ca:1 main:2 whole:2 alphanets:1 fig:5 contextindependent:1 embeds:1 down:2 specific:1 offset:1 consist:1 sequential:2 merging:1 importance:1 phd:1 margin:1 backpropagate:1 simply:1 partially:1 corresponds:2 determines:1 identity:1 goal:1 towards:1 replace:1 shared:2 feasible:2 telephone:1 experimental:1 perceptrons:1 support:1 incorporate:1 tested:2 neurospeech:1 |
5,303 | 5,800 | Tree-Guided MCMC Inference for Normalized
Random Measure Mixture Models
Juho Lee and Seungjin Choi
Department of Computer Science and Engineering
Pohang University of Science and Technology
77 Cheongam-ro, Nam-gu, Pohang 37673, Korea
{stonecold,seungjin}@postech.ac.kr
Abstract
Normalized random measures (NRMs) provide a broad class of discrete random
measures that are often used as priors for Bayesian nonparametric models. Dirichlet process is a well-known example of NRMs. Most of posterior inference methods for NRM mixture models rely on MCMC methods since they are easy to
implement and their convergence is well studied. However, MCMC often suffers
from slow convergence when the acceptance rate is low. Tree-based inference is
an alternative deterministic posterior inference method, where Bayesian hierarchical clustering (BHC) or incremental Bayesian hierarchical clustering (IBHC) have
been developed for DP or NRM mixture (NRMM) models, respectively. Although
IBHC is a promising method for posterior inference for NRMM models due to its
efficiency and applicability to online inference, its convergence is not guaranteed
since it uses heuristics that simply selects the best solution after multiple trials are
made. In this paper, we present a hybrid inference algorithm for NRMM models,
which combines the merits of both MCMC and IBHC. Trees built by IBHC outlines partitions of data, which guides Metropolis-Hastings procedure to employ
appropriate proposals. Inheriting the nature of MCMC, our tree-guided MCMC
(tgMCMC) is guaranteed to converge, and enjoys the fast convergence thanks to
the effective proposals guided by trees. Experiments on both synthetic and realworld datasets demonstrate the benefit of our method.
1
Introduction
Normalized random measures (NRMs) form a broad class of discrete random measures, including Dirichlet proccess (DP) [1] normalized inverse Gaussian process [2], and normalized generalized Gamma process [3, 4]. NRM mixture (NRMM) model [5] is a representative example where
NRM is used as a prior for mixture models. Recently NRMs were extended to dependent NRMs
(DNRMs) [6, 7] to model data where exchangeability fails. The posterior analysis for NRM mixture (NRMM) models has been developed [8, 9], yielding simple MCMC methods [10]. As in DP
mixture (DPM) models [11], there are two paradigms in the MCMC algorithms for NRMM models: (1) marginal samplers and (2) slice samplers. The marginal samplers simulate the posterior
distributions of partitions and cluster parameters given data (or just partitions given data provided
that conjugate priors are assumed) by marginalizing out the random measures. The marginal samplers include the Gibbs sampler [10], and the split-merge sampler [12], although it was not formally
extended to NRMM models. The slice sampler [13] maintains random measures and explicitly samples the weights and atoms of the random measures. The term ?slice? comes from the auxiliary slice
variables used to control the number of atoms to be used. The slice sampler is known to mix faster
than the marginal Gibbs sampler when applied to complicated DNRM mixture models where the
evaluation of marginal distribution is costly [7].
1
The main drawback of MCMC methods for NRMM models is their poor scalability, due to the
nature of MCMC methods. Moreover, since the marginal Gibbs sampler and slice sampler iteratively
sample the cluster assignment variable for a single data point at a time, they easily get stuck in local
optima. Split-merge sampler may resolve the local optima problem to some extent, but is still
problematic for large-scale datasets since the samples proposed by split or merge procedures are
rarely accepted. Recently, a deterministic alternative to MCMC algorithms for NRM (or DNRM)
mixture models were proposed [14], extending Bayesian hierarchical clustering (BHC) [15] which
was developed as a tree-based inference for DP mixture models. The algorithm, referred to as
incremental BHC (IBHC) [14] builds binary trees that reflects the hierarchical cluster structures of
datasets by evaluating the approximate marginal likelihood of NRMM models, and is well suited
for the incremental inferences for large-scale or streaming datasets. The key idea of IBHC is to
consider only exponentially many posterior samples (which are represented as binary trees), instead
of drawing indefinite number of samples as in MCMC methods. However, IBHC depends on the
heuristics that chooses the best trees after the multiple trials, and thus is not guaranteed to converge
to the true posterior distributions.
In this paper, we propose a novel MCMC algorithm that elegantly combines IBHC and MCMC
methods for NRMM models. Our algorithm, called the tree-guided MCMC, utilizes the trees built
from IBHC to proposes a good quality posterior samples efficiently. The trees contain useful information such as dissimilarities between clusters, so the errors in cluster assignments may be detected
and corrected with less efforts. Moreover, designed as a MCMC methods, our algorithm is guaranteed to converge to the true posterior, which was not possible for IBHC. We demonstrate the
efficiency and accuracy of our algorithm by comparing it to existing MCMC algorithms.
2
Background
Throughout this paper we use the following notations. Denote by [n] = {1, . . . , n} a set of indices
and by X = {xi | i ? [n]} a dataset. A partition ?[n] of [n] is a set of disjoint nonempty subsets
of [n] whose union is [n]. Cluster c is an entry of ?[n] , i.e., c ? ?[n] . Data points in cluster c is
denoted by Xc = {xi | i ? c} for c ? ?n . For the sake of simplicity, we often use i to represent
a singleton {i} for i ? [n]. In this section, we briefly review NRMM models, existing posterior
inference methods such as MCMC and IBHC.
2.1
Normalized random measure mixture models
Let ? be a homogeneous completely random measure (CRM) on measure space (?, F) with L?evy
intensity ? and base measure H, written as ? ? CRM(?H). We also assume that,
Z ?
Z ?
?(dw) = ?,
(1 ? e?w )?(dw) < ?,
(1)
0
0
P?
so that ? has infinitely many atoms and the total mass ?(?) is finite: ? = j=1 wj ??j? , ?(?) =
P?
j=1 wj < ?. A NRM is then formed by normalizing ? by its total mass ?(?). For each index
i ? [n], we draw the corresponding atoms from NRM, ?i |? ? ?/?(?). Since ? is discrete, the
set {?i |i ? [n]} naturally form a partition of [n] with respect to the assigned atoms. We write the
partition as a set of sets ?[n] whose elements are non-empty and non-overlapping subsets of [n],
and the union of the elements is [n]. We index the elements (clusters) of ?[n] with the symbol c,
and denote the unique atom assigned to c as ?c . Summarizing the set {?i |i ? [n]} as (?[n] , {?c |c ?
?[n] }), the posterior random measure is written as follows:
Theorem 1. ([9]) Let (?[n] , {?c |c ? ?[n] }) be samples drawn from ?/?(?) where ? ? CRM(?H).
With an auxiliary variable u ? Gamma(n, ?(?)), the posterior random measure is written as
X
?|u +
wc ??c ,
(2)
c??[n]
where
?u (dw) := e?uw ?(dw),
?|u ? CRM(?u H),
2
P (dwc ) ? wc|c| ?u (dwc ).
(3)
Moreover, the marginal distribution is written as
P (?[n] , {d?c |c ? ?[n] }, du) =
un?1 e??? (u) du Y
?? (|c|, u)H(d?c ),
?(n)
(4)
c??[n]
where
Z
?? (u) :=
0
?
(1 ? e?uw )?(dw),
Z
?? (|c|, u) :=
?
w|c| ?u (dw).
(5)
0
Using (4), the predictive distribution for the novel atom ? is written as
X ?? (|c| + 1, u)
P (d?|{?i }, u) ? ?? (1, u)H(d?) +
??c (d?).
?? (|c|, u)
(6)
c??[n]
The most general CRM may be used is the generalized Gamma [3], with L?evy intensity ?(dw) =
??
???1 ?w
e dw. In NRMM models, the observed dataset X is assusmed to be generated from
?(1??) w
a likelihood P (dx|?) with parameters {?i } drawn from NRM.
R We focus
Qon the conjugate case where
H is conjugate to P (dx|?), so that the integral P (dXc ) := ? H(d?) i?c P (dxi |?) is tractable.
2.2
MCMC Inference for NRMM models
The goal of posterior inference for NRMM models is to compute the posterior P (?[n] , {d?c }, du|X)
with the marginal likelihood P (dX).
Marginal Gibbs Sampler: marginal Gibbs sampler is basesd on the predictive distribution (6).
At each iteration, cluster assignments for each data point is sampled, where xi may join an exist? (|c|+1,u)
ing cluster c with probability proportional to ??? (|c|,u) P (dxi |Xc ), or create a novel cluster with
probability proportional to ?? (1, u)P (dxi ).
Slice sampler: instead of marginalizing out ?, slice sampler explicitly sample the atoms and weights
{wj , ?j? } of ?. Since maintaining infinitely many atoms is infeasible, slice variables {si } are introduced for each data point, and atoms with masses larger than a threshold (usually set as mini?[n] si )
are kept and remaining atoms are added on the fly as the threshold changes. At each iteration, xi is
assigned to the jth atom with probability 1[si < wj ]P (dxi |?j? ).
Split-merge sampler: both marginal Gibbs and slice sampler alter a single cluster assignment at
a time, so are prone to the local optima. Split-merge sampler, originally developed for DPM, is
a marginal sampler that is based on (6). At each iteration, instead of changing individual cluster
assignments, split-merge sampler splits or merges clusters to propose a new partition. The split or
merged partition is proposed by a procedure called the restricted Gibbs sampling, which is Gibbs
sampling restricted to the clusters to split or merge. The proposed partitions are accepted or rejected
according to Metropolis-Hastings schemes. Split-merge samplers are reported to mix better than
marginal Gibbs sampler.
2.3
IBHC Inference for NRMM models
Bayesian hierarchical clustering (BHC, [15]) is a probabilistic model-based agglomerative clustering, where the marginal likelihood of DPM is evaluated to measure the dissimilarity between nodes.
Like the traditional agglomerative clustering algorithms, BHC repeatedly merges the pair of nodes
with the smallest dissimilarities, and builds binary trees embedding the hierarchical cluster structure
of datasets. BHC defines the generative probability of binary trees which is maximized during the
construction of the tree, and the generative probability provides a lower bound on the marginal likelihood of DPM. For this reason, BHC is considered to be a posterior inference algorithm for DPM.
Incremental BHC (IBHC, [14]) is an extension of BHC to (dependent) NRMM models. Like BHC
is a deterministic posterior inference algorithm for DPM, IBHC serves as a deterministic posterior
inference algorithms for NRMM models. Unlike the original BHC that greedily builds trees, IBHC
sequentially insert data points into trees, yielding scalable algorithm that is well suited for online
inference. We first explain the generative model of trees, and then explain the sequential algorithm
of IBHC.
3
xi
d(?, ?) > 1
i
case 1
l(c)r(c)
{x1 , . . . xi?1 }
i r(c) l(c)r(c) i
case 2 case 3
l(c)
?
i
i
Figure 1: (Left) in IBHC, a new data point is inserted into one of the trees, or create a novel tree. (Middle)
three possible cases in SeqInsert. (Right) after the insertion, the potential funcitons for the nodes in the
blue bold path should be updated. If a updated d(?, ?) > 1, the tree is split at that level.
IBHC aims to maximize the joint probability of the data X and the auxiliary variable u:
P (dX, du) =
un?1 e??? (u) du X Y
?? (|c|, u)P (dXc )
?(n)
(7)
?[n] c??[n]
Let tc be a binary tree whose leaf nodes consist of the indices in c. Let l(c) and r(c) denote the
left and right child of the set c in tree, and thus the corresponding trees are denoted by tl(c) and
tr(c) . The generative probability of trees is described with the potential function [14], which is the
unnormalized reformulation of the original definition [15]. The potential function of the data Xc
given the tree tc is recursively defined as follows:
?(Xc |hc ) := ?? (|c|, u)P (dXc ),
?(Xc |tc ) = ?(Xc |hc ) + ?(Xl(c) |tl(c) )?(Xr(c) |tr(c) ).
(8)
Here, hc is the hypothesis that Xc was generated from a single cluster. The first therm ?(Xc |hc ) is
proportional to the probability that hc is true, and came from the term inside the product of (7). The
second term is proportional to the probability that Xc was generated from more than two clusters
embedded in the subtrees tl(c) and tr(c) . The posterior probability of hc is then computed as
P (hc |Xc , tc ) =
?(Xl(c) |tl(c) )?(Xr(c) |tr(c) )
1
, where d(l(c), r(c)) :=
.
1 + d(l(c), r(c))
?(Xc |hc )
(9)
d(?, ?) is defined to be the dissimilarity between l(c) and r(c). In the greedy construction, the pair of
nodes with smallest d(?, ?) are merged at each iteration. When the minimum dissimilarity exceeds
one (P (hc |Xc , tc ) < 0.5), hc is concluded to be false and the construction stops. This is an important mechanism of BHC (and IBHC) that naturally selects the proper number of clusters. In the
perspective of the posterior inference, this stopping corresponds to selecting the MAP partition that
maximizes P (?[n] |X, u). If the tree is built and the potential function is computed for the entire
dataset X, a lower bound on the joint likelihood (7) is obtained [15, 14]:
un?1 e??? (u) du
?(X|t[n] ) ? P (dX, du).
?(n)
(10)
Now we explain the sequential tree construction of IBHC. IBHC constructs a tree in an incremental
manner by inserting a new data point into an appropriate position of the existing tree, without computing dissimilarities between every pair of nodes. The procedure, which comprises three steps, is
elucidated in Fig. 1.
Step 1 (left): Given {x1 , . . . , xi?1 }, suppose that trees are built by IBHC, yielding to a partition
?[i?1] . When a new data point xi arrives, this step assigns xi to a tree tbc , which has the smallest
distance, i.e., b
c = arg minc??[i?1] d(i, c), or create a new tree ti if d(i, b
c) > 1.
Step 2 (middle): Suppose that the tree chosen in Step 1 is tc . Then Step 2 determines an appropriate
position of xi when it is inserted into the tree tc , and this is done by the procedure SeqInsert(c, i).
SeqInsert(c, i) chooses the position of i among three cases (Fig. 1). Case 1 elucidates an option
where xi is placed on the top of the tree tc . Case 2 and 3 show options where xi is added as a
sibling of the subtree tl(c) or tr(c) , respectively. Among these three cases, the one with the highest
potential function ?(Xc?i |tc?i ) is selected, which can easily be done by comparing d(l(c), r(c)),
d(l(c), i) and d(r(c), i) [14]. If d(l(c), r(c)) is the smallest, then Case 1 is selected and the insertion
terminates. Otherwise, if d(l(c), i) is the smallest, xi is inserted into tl(c) and SeqInsert(l(c), i) is
recursively executed. The same procedure is applied to the case where d(r(c), i) is smallest.
4
c
SampleSub
l(c? ) r(c? )
?
S
c c c c
2
3
1
?[n]
S
Q
?
l(c? )r(c? )
SampleSub
tc?
?
c c1 c2 c3
(merge) ??[n]
l(c? )
r(c? )
??[n]
(split)
?[n]
M
StocInsert
?
Q
S StocInsert
c c
c2 c3
1
(split)
?
l(c? )
r(c? )
?[n]
(merge)
S
? c
c1 c2 c3
?[n]
Figure 2: Global moves of tgMCMC. Top row explains the way of proposing split partition ??[n] from partition
?[n] , and explains the way to retain ?[n] from ??[n] . Bottom row shows the same things for merge case.
Step 3 (right): After Step 1 and 2 are applied, the potential functions of tc?i should be computed
again, starting from the subtree of tc to which xi is inserted, to the root tc?i . During this procedure,
updated d(?, ?) values may exceed 1. In such a case, we split the tree at the level where d(?, ?) > 1,
and re-insert all the split nodes.
After having inserted all the data points in X, the auxiliary variable u and hyperparameters for
?(dw) are resampled, and the tree is reconstructed. This procedure is repeated several times and the
trees with the highest potential functions are chosen as an output.
3
Main results: A tree-guided MCMC procedure
IBHC should reconstruct trees from the ground whenever u and hyperparameters are resampled,
and this is obviously time consuming, and more importantly, converge is not guaranteed. Instead
of completely reconstructing trees, we propose to refine the parts of existing trees with MCMC.
Our algorithm, called tree-guided MCMC (tgMCMC), is a combination of deterministic tree-based
inference and MCMC, where the trees constructed via IBHC guides MCMC to propose good-quality
samples. tgMCMC initialize a chain with a single run of IBHC. Given a current partition ?[n]
and trees {tc | c ? ?[n] }, tgMCMC proposes a novel partition ??[n] by global and local moves.
Global moves split or merges clusters to propose ??[n] , and local moves alters cluster assignments of
individual data points via Gibbs sampling. We first explain the two key operations used to modify
tree structures, and then explain global and local moves. More details on the algorithm can be found
in the supplementary material.
3.1
Key operations
SampleSub(c, p): given a tree tc , draw a subtree tc0 with probability ? d(l(c0 ), r(c0 )) + . is
added for leaf nodes whose d(?, ?) = 0, and set to the maximum d(?, ?) among all subtrees of tc . The
drawn subtree is likely to contain errors to be corrected by splitting. The probability of drawing tc0
is multiplied to p, where p is usually set to transition probabilities.
a stochastic version of IBHC. c may be inserted to c0 ? S via
d?1 (c0 ,c)
SeqInsert(c , c) with probability 1+P 0 d?1 (c0 ,c) , or may just be put into S (create a new cluster
StocInsert(S, c, p):
0
c ?S
in S) with probability 1+P 0 1d?1 (c0 ,c) . If c is inserted via SeqInsert, the potential functions are
c ?S
updated accordingly, but the trees are not split even if the update dissimilarities exceed 1. As in
SampleSub, the probability is multiplied to p.
3.2
Global moves
The global moves of tgMCMC are tree-guided analogy to split-merge sampling. In split-merge
sampling, a pair of data points are randomly selected, and split partition is proposed if they belong
to the same cluster, or merged partition is proposed otherwise. Instead, tgMCMC finds the clusters
that are highly likely to be split or merged using the dissimilarities between trees, which goes as
follows in detail. First, we randomly pick a tree tc in uniform. Then, we compute d(c, c0 ) for
5
c0 ? ?[n] \c, and put c0 in a set M with probability (1 + d(c, c0 ))?1 (the probability of merging
0 1[c0 ?M
/ ]
Q
)
c and c0 ). The transition probability q(??[n] |?[n] ) up to this step is |?1[n] | c0 d(c,c
1+d(c,c0 ) . The
set M contains candidate clusters to merge with c. If M is empty, which means that there are no
candidates to merge with c, we propose ??[n] by splitting c. Otherwise, we propose ??[n] by merging
c and clusters in M .
Split case: we start splitting by drawing a subtree tc? by SampleSub(c, q(??[n] |?[n] )) 1 . Then we
split c? to S = {l(c? ), r(c? )}, destroy all the parents of tc? and collect the split trees into a set Q
(Fig. 2, top). Then we reconstruct the tree by StocInsert(S, c0 , q(??[n] |?[n] )) for all c0 ? Q. After
the reconstruction, S has at least two clusters since we split S = {l(c? ), r(c? )} before insertion. The
split partition to propose is ??[n] = (?[n] \c) ? S. The reverse transition probability q(?[n] |??[n] ) is
computed as follows. To obtain ?[n] from ??[n] , we must merge the clusters in S to c. For this, we
should pick a cluster c0 ? S, and put other clusters in S\c into M . Since we can pick any c0 at first,
the reverse transition probability is computed as a sum of all those possibilities:
q(?[n] |??[n] )
=
X
c0 ?S
1
|??[n] |
Y
c00 ???
\c0
[n]
00
/
d(c0 , c00 )1[c ?S]
,
1 + d(c0 , c00 )
(11)
Merge case: suppose that we have M = {c1 , . . S
. , cm } 2 . The merged partition to propose is given
m
as ??[n] = (?[n] \M ) ? cm+1 , where cm+1 = i=1 cm . We construct the corresponding binary
tree as a cascading tree, where we put c1 , . . . cm on top of c in order (Fig. 2, bottom). To compute the reverse transition probability q(?[n] |??[n] ), we should compute the probability of splitting
cm+1 back into c1 , . . . , cm . For this, we should first choose cm+1 and put nothing into the set
Q d(cm+1 ,c0 )
M to provoke splitting. q(?[n] |??[n] ) up to this step is |?1? | c0 1+d(c
0 . Then, we should
m+1 ,c )
[n]
sample the parent of c (the subtree connecting c and c1 ) via SampleSub(cm+1 , q(?[n] |??[n] )), and
this would result in S = {c, c1 } and Q = {c2 , . . . , cm }. Finally, we insert ci ? Q into S via
StocInsert(S, ci , q(?[n] |??[n] )) for i = 2, . . . , m, where we select each c(i) to create a new cluster in S. Corresponding update to q(?[n] |??[n] ) by StocInsert is,
q(?[n] |??[n] ) ? q(?[n] |??[n] ) ?
m
Y
i=2
1 + d?1 (c, ci ) +
1
Pi?1
j=1
d?1 (cj , ci )
.
(12)
Once we?ve proposed ??[n] and computed both q(??[n] |?[n] ) and q(?[n] |??[n] ), ??[n] is accepted with
probability min{1, r} where r =
p(dX,du,?[n] ?)q(?[n] |??
[n] )
p(dX,du,?[n] )q(??
|?[n] ) .
[n]
Ergodicity of the global moves: to show that the global moves are ergodic, it is enough to show
that we can move an arbitrary point i from its current cluster c to any other cluster c0 in finite step.
This can easily be done by a single split and merge moves, so the global moves are ergodic.
Time complexity of the global moves: the time complexity of StocInsert(S, c, p) is O(|S| + h),
where h is a height of the tree to insert c. The total time complexity of split proposal is mainly determined by the time to execute StocInsert(S, c, p). This procedure is usually efficient, especially
when the trees are well balanced. The time complexity to propose merged partition is O(|?[n] |+M ).
3.3
Local moves
In local moves, we resample cluster assignments of individual data points via Gibbs sampling. If a
leaf node i is moved from c to c0 , we detach i from tc and run SeqInsert(c0 , i) 3 . Here, instead
of running Gibbs sampling for all data points, we run Gibbs sampling for a subset of data points S,
which is formed as follows. For each c ? ?[n] , we draw a subtree tc0 by SampleSub. Then, we
1
Here, we restrict SampleSub to sample non-leaf nodes, since leaf nodes cannot be split.
We assume that clusters are given their own indices (such as hash values) so that they can be ordered.
3
We do not split even if the update dissimilarity exceed one, as in StocInsert.
2
6
?1900
?2000
Gibbs
SM
tgMCMC
?2100
?2200
?2300
0
2
4
6
8
10
?2000
?2500
?3000
Gibbs
SM
tgMCMC
?3500
?4000
0
2
time [sec]
max log-lik
-1730.4069
(64.6268)
-1602.3353
(43.6904)
-1577.7484
(0.2889)
Gibbs
SM
tgMCMC
4
6
8
?1800
?1900
?2000
?2200
?2300
?2400
0
10
20
15
10
5
1
?2100
20
time [sec]
log r
ESS
10.4644
(9.2663)
7.6180
(4.9166)
35.6452
(22.6121)
?1700
?1700
-362.4405
(271.9865)
-3.4844
(9.5959)
time /
iter
0.0458
(0.0062)
0.0702
(0.0141)
0.0161
(0.0088)
40
60
80
100
average log?likelihood
?1800
average log?likelihood
?1700
average log?likelihood
average log?likelihood
?1600
?1600
?1800
?1900
?2000
0
1
?2100
2
3
4
?2200
?2300
0
20
iteration
Gibbs
SM
tgMCMC
max log-lik
-1594.7382
(18.2228)
-1588.5188
(1.1158)
-1587.8633
(1.0423)
40
60
80
100
iteration
ESS
15.8512
(7.1542)
36.2608
(54.6962)
149.7197
(82.3461)
log r
-325.9407
(233.2997)
-2.6589
(8.8980)
time /
iter
0.0523
(0.0067)
0.0698
(0.0143)
0.0152
(0.0088)
Figure 3: Experimental results on toy dataset. (Top row) scatter plot of toy dataset, log-likelihoods of three
samplers with DP, log-likelihoods with NGGP, log-likelihoods of tgMCMC with varying G and varying D.
(Bottom row) The statistics of three samplers with DP and the statistics of three samplers with NGGP.
4
average log?likelihood
x 10
?7.5
Gibbs
?8
SM
?8.5
Gibbs
SM
Gibbs_sub
SM_sub
tgMCMC
?9
?9.5
?10
0
200
400
600
800
1000
Gibbs sub
SM sub
tgMCMC
time [sec]
max log-lik
-74476.4666
(375.0498)
-74400.0337
(382.0060)
-74618.3702
(550.1106)
-73880.5029
(382.6170)
-73364.2985
(8.5847)
ESS
27.0464
(7.6309)
24.9432
(6.7474)
24.1818
(5.5523)
20.0981
(4.9512)
93.5509
(23.5110)
log r
-913.0825
(621.2756)
-918.2389
(623.9808)
-8.7375
(21.9222)
time /
iter
10.7325
(0.3971)
11.1163
(0.4901)
0.2659
(0.0124)
0.3467
(0.0739)
0.4295
(0.0833)
Figure 4: Average log-likelihood plot and the statistics of the samplers for 10K dataset.
draw a subtree of tc0 again by SampleSub. We repeat this subsampling for D times, and put the leaf
nodes of the final subtree into S. Smaller D would result in more data points to resample, so we can
control the tradeoff between iteration time and mixing rates.
Cycling: at each iteration of tgMCMC, we cycle the global moves and local moves, as in split-merge
sampling. We first run the global moves for G times, and run a single sweep of local moves. Setting
G = 20 and D = 2 were the moderate choice for all data we?ve tested.
4
Experiments
In this section, we compare marginal Gibbs sampler (Gibbs), split-merge sampler (SM) and tgMCMC on synthetic and real datasets.
4.1
Toy dataset
We first compared the samplers on simple toy dataset that has 1,300 two-dimensional points with
13 clusters, sampled from the mixture of Gaussians with predefined means and covariances. Since
the partition found by IBHC is almost perfect for this simple data, instead of initializing with IBHC,
we initialized the binary tree (and partition) as follows. As in IBHC, we sequentially inserted data
points into existing trees with a random order. However, instead of inserting them via SeqInsert,
we just put data points on top of existing trees, so that no splitting would occur. tgMCMC was
initialized with the tree constructed from this procedure, and Gibbs and SM were initialized with
corresponding partition. We assumed the Gaussian-likelihood and Gaussian-Wishart base measure,
H(d?, d?) = N (d?|m, (r?)?1 )W(d?|??1 , ?),
(13)
where r = 0.1, ? = d+6, d is the dimensionality, m is the sample mean and ? = ?/(10?det(?))1/d
(? is the sample covariance). We compared the samplers using both DP and NGGP priors. For
tgMCMC, we fixed the number of global moves G = 20 and the parameter for local moves D = 2,
except for the cases where we controlled them explicitly. All the samplers were run for 10 seconds,
and repeated 10 times. We compared the joint log-likelihood log p(dX, ?[n] , du) of samples and the
effective sample size (ESS) of the number of clusters found. For SM and tgMCMC, we compared
the average log value of the acceptance ratio r. The results are summarized in Fig. 3. As shown in
7
6
average log?likelihood
x 10
?4.25
Gibbs
?4.26
SM
Gibbs
SM
Gibbs_sub
SM_sub
tgMCMC
?4.27
?4.28
?4.29
0
2000
4000
6000
8000
10000
Gibbs sub
SM sub
tgMCMC
time [sec]
max log-lik
-4247895.5166
(1527.0131)
-4246689.8072
(1656.2299)
-4246878.3344
(1391.1707)
-4248034.0748
(1703.6653)
-4243009.3500
(1101.0383)
ESS
18.1758
(7.2028)
28.6608
(18.1896)
13.8057
(4.5723)
18.5764
(18.6368)
3.1274
(2.6610)
log r
-3290.2988
(2617.6750)
-3488.9523
(3145.9786)
-256.4831
(218.8061)
time /
iter
186.2020
(2.9030)
186.9424
(2.2014)
49.7875
(0.9400)
49.9563
(0.8667)
42.4176
(2.0534)
Figure 5: Average log-likelihood plot and the statistics of the samplers for NIPS corpus.
the log-likelihood trace plot, tgMCMC quickly converged to the ground truth solution for both DP
and NGGP cases. Also, tgMCMC mixed better than other two samplers in terms of ESS. Comparing
the average log r values of SM and tgMCMC, we can see that the partitions proposed by tgMCMC
is more often accepted. We also controlled the parameter G and D; as expected, higher G resulted
in faster convergence. However, smaller D (more data points involved in local moves) did not
necessarily mean faster convergence.
4.2
Large-scale synthetic dataset
We also compared the three samplers on larger dataset containing 10,000 points, which we will
call as 10K dataset, generated from six-dimensional mixture of Gaussians with labels drawn from
PY(3, 0.8). We used the same base measure and initialization with those of the toy datasets, and
used the NGGP prior, We ran the samplers for 1,000 seconds and repeated 10 times. Gibbs and SM
were too slow, so the number of samples produced in 1,000 seconds were too small. Hence, we also
compared Gibbs sub and SM sub, where we uniformly sampled the subset of data points and ran
Gibbs sweep only for those sampled points. We controlled the subset size to make their running time
similar to that of tgMCMC. The results are summarized in Fig. 4. Again, tgMCMC outperformed
other samplers both in terms of the log-likelihoods and ESS. Interestingly, SM was even worse than
Gibbs, since most of the samples proposed by split or merge proposal were rejected. Gibbs sub and
SM sub were better than Gibbs and SM, but still failed to reach the best state found by tgMCMC.
4.3
NIPS corpus
We also compared the samplers on NIPS corpus4 , containing 1,500 documents with 12,419 words.
We used the multinomial likelihood and symmetric Dirichlet base measure Dir(0.1), used NGGP
prior, and initialized the samplers with normal IBHC. As for the 10K dataset, we compared
Gibbs sub and SM sub along. We ran the samplers for 10,000 seconds and repeated 10 times.
The results are summarized in Fig. 5. tgMCMC outperformed other samplers in terms of the loglikelihood; all the other samplers were trapped in local optima and failed to reach the states found
by tgMCMC. However, ESS for tgMCMC were the lowest, meaning the poor mixing rates. We
still argue that tgMCMC is a better option for this dataset, since we think that finding the better
log-likelihood states is more important than mixing rates.
5
Conclusion
In this paper we have presented a novel inference algorithm for NRMM models. Our sampler,
called tgMCMC, utilized the binary trees constructed by IBHC to propose good quality samples.
tgMCMC explored the space of partitions via global and local moves which were guided by the
potential functions of trees. tgMCMC was demonstrated to be outperform existing samplers in both
synthetic and real world datasets.
Acknowledgments: This work was supported by the IT R&D Program of MSIP/IITP (B010115-0307, Machine Learning Center), National Research Foundation (NRF) of Korea (NRF2013R1A2A2A01067464), and IITP-MSRA Creative ICT/SW Research Project.
4
https://archive.ics.uci.edu/ml/datasets/Bag+of+Words
8
References
[1] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics,
1(2):209?230, 1973.
[2] A. Lijoi, R. H. Mena, and I. Pr?unster. Hierarchical mixture modeling with normalized inverseGaussian priors. Journal of the American Statistical Association, 100(472):1278?1291, 2005.
[3] A. Brix. Generalized Gamma measures and shot-noise Cox processes. Advances in Applied
Probability, 31:929?953, 1999.
[4] A. Lijoi, R. H. Mena, and I. Pr?unster. Controlling the reinforcement in Bayesian nonparametric mixture models. Journal of the Royal Statistical Society B, 69:715?740, 2007.
[5] E. Regazzini, A. Lijoi, and I. Pr?unster. Distriubtional results for means of normalized random
measures with independent increments. The Annals of Statistics, 31(2):560?585, 2003.
[6] C. Chen, N. Ding, and W. Buntine. Dependent hierarchical normalized random measures for
dynamic topic modeling. In Proceedings of the International Conference on Machine Learning
(ICML), Edinburgh, UK, 2012.
[7] C. Chen, V. Rao, W. Buntine, and Y. W. Teh. Dependent normalized random measures. In
Proceedings of the International Conference on Machine Learning (ICML), Atlanta, Georgia,
USA, 2013.
[8] L. F. James. Bayesian Poisson process partition calculus with an application to Bayesian L?evy
moving averages. The Annals of Statistics, 33(4):1771?1799, 2005.
[9] L. F. James, A. Lijoi, and I. Pr?unster. Posterior analysis for normalized random measures with
independent increments. Scandinavian Journal of Statistics, 36(1):76?97, 2009.
[10] S. Favaro and Y. W. Teh. MCMC for normalized random measure mixture models. Statistical
Science, 28(3):335?359, 2013.
[11] C. E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric
problems. The Annals of Statistics, 2(6):1152?1174, 1974.
[12] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet
process mixture model. Journal of Computational and Graphical Statistics, 13:158?182, 2000.
[13] J. E. Griffin and S. G. Walkera. Posterior simulation of normalized random measure mixtures.
Journal of Computational and Graphical Statistics, 20(1):241?259, 2011.
[14] J. Lee and S. Choi. Incremental tree-based inference with dependent normalized random measures. In Proceedings of the International Conference on Artificial Intelligence and Statistics
(AISTATS), Reykjavik, Iceland, 2014.
[15] K. A. Heller and Z. Ghahrahmani. Bayesian hierarchical clustering. In Proceedings of the
International Conference on Machine Learning (ICML), Bonn, Germany, 2005.
9
| 5800 |@word trial:2 cox:1 middle:2 briefly:1 version:1 c0:27 calculus:1 simulation:1 covariance:2 pick:3 tr:5 shot:1 recursively:2 contains:1 selecting:1 document:1 interestingly:1 existing:7 current:2 comparing:3 si:3 scatter:1 dx:8 written:5 must:1 partition:26 designed:1 plot:4 update:3 hash:1 generative:4 leaf:6 greedy:1 selected:3 intelligence:1 accordingly:1 es:8 provides:1 node:12 nrm:9 evy:3 favaro:1 height:1 along:1 c2:4 constructed:3 combine:2 inside:1 manner:1 expected:1 resolve:1 provided:1 project:1 notation:1 moreover:3 maximizes:1 mass:3 lowest:1 cm:11 developed:4 proposing:1 finding:1 every:1 ti:1 ro:1 uk:1 control:2 before:1 engineering:1 local:14 modify:1 path:1 merge:22 initialization:1 studied:1 collect:1 unique:1 acknowledgment:1 union:2 implement:1 xr:2 procedure:12 word:2 get:1 cannot:1 put:7 py:1 deterministic:5 map:1 demonstrated:1 center:1 go:1 starting:1 ergodic:2 simplicity:1 splitting:6 assigns:1 cascading:1 importantly:1 nam:1 dw:9 embedding:1 increment:2 updated:4 annals:4 construction:4 suppose:3 controlling:1 elucidates:1 homogeneous:1 us:1 hypothesis:1 element:3 utilized:1 iceland:1 observed:1 inserted:8 bottom:3 fly:1 ding:1 initializing:1 wj:4 cycle:1 iitp:2 highest:2 ran:3 balanced:1 insertion:3 complexity:4 dynamic:1 predictive:2 efficiency:2 completely:2 gu:1 easily:3 joint:3 represented:1 jain:1 fast:1 effective:2 monte:1 detected:1 artificial:1 qon:1 whose:4 heuristic:2 larger:2 supplementary:1 loglikelihood:1 drawing:3 otherwise:3 reconstruct:2 statistic:12 seungjin:2 think:1 final:1 online:2 obviously:1 propose:11 reconstruction:1 product:1 inserting:2 uci:1 detach:1 mixing:3 moved:1 scalability:1 convergence:6 cluster:37 optimum:4 extending:1 empty:2 parent:2 incremental:6 perfect:1 unster:4 ac:1 dxc:3 auxiliary:4 come:1 guided:8 lijoi:4 drawback:1 merged:6 stochastic:1 material:1 explains:2 therm:1 c00:3 extension:1 insert:4 considered:1 ground:2 normal:1 ic:1 smallest:6 resample:2 outperformed:2 bag:1 label:1 create:5 reflects:1 gaussian:3 aim:1 exchangeability:1 minc:1 varying:2 focus:1 likelihood:23 mainly:1 greedily:1 summarizing:1 inference:21 dependent:5 stopping:1 streaming:1 ferguson:1 entire:1 selects:2 germany:1 arg:1 among:3 denoted:2 proposes:2 initialize:1 marginal:17 construct:2 once:1 having:1 atom:12 sampling:9 nrf:1 broad:2 icml:3 alter:1 employ:1 randomly:2 gamma:4 ve:2 resulted:1 individual:3 national:1 atlanta:1 acceptance:2 highly:1 possibility:1 evaluation:1 mixture:19 arrives:1 yielding:3 chain:2 subtrees:2 predefined:1 integral:1 korea:2 tree:66 initialized:4 re:1 regazzini:1 modeling:2 rao:1 assignment:7 applicability:1 subset:5 entry:1 uniform:1 too:2 buntine:2 reported:1 dir:1 synthetic:4 chooses:2 thanks:1 international:4 retain:1 lee:2 probabilistic:1 connecting:1 quickly:1 again:3 containing:2 choose:1 wishart:1 worse:1 american:1 toy:5 potential:9 singleton:1 bold:1 sec:4 summarized:3 provoke:1 explicitly:3 depends:1 msip:1 root:1 start:1 maintains:1 complicated:1 option:3 formed:2 accuracy:1 efficiently:1 maximized:1 bayesian:11 produced:1 carlo:1 converged:1 explain:5 reach:2 suffers:1 whenever:1 definition:1 involved:1 james:2 naturally:2 dxi:4 sampled:4 stop:1 dataset:13 dimensionality:1 cj:1 back:1 originally:1 higher:1 evaluated:1 done:3 execute:1 just:3 rejected:2 ergodicity:1 hastings:2 overlapping:1 defines:1 quality:3 usa:1 mena:2 normalized:14 true:3 contain:2 hence:1 assigned:3 symmetric:1 iteratively:1 neal:1 postech:1 during:2 unnormalized:1 generalized:3 outline:1 demonstrate:2 meaning:1 novel:6 
recently:2 multinomial:1 exponentially:1 belong:1 association:1 gibbs:33 moving:1 scandinavian:1 base:4 posterior:21 own:1 perspective:1 moderate:1 reverse:3 binary:8 came:1 minimum:1 converge:4 paradigm:1 maximize:1 lik:4 multiple:2 mix:2 nrms:5 ing:1 exceeds:1 faster:3 reykjavik:1 controlled:3 scalable:1 poisson:1 iteration:8 represent:1 c1:7 proposal:4 background:1 concluded:1 unlike:1 archive:1 dpm:6 thing:1 call:1 exceed:3 split:35 easy:1 crm:5 tc0:4 enough:1 restrict:1 idea:1 sibling:1 tradeoff:1 det:1 msra:1 six:1 effort:1 repeatedly:1 useful:1 nonparametric:4 http:1 outperform:1 exist:1 problematic:1 alters:1 trapped:1 disjoint:1 blue:1 discrete:3 write:1 key:3 indefinite:1 reformulation:1 threshold:2 pohang:2 iter:4 drawn:4 changing:1 kept:1 uw:2 destroy:1 sum:1 realworld:1 inverse:1 run:6 throughout:1 almost:1 utilizes:1 draw:4 griffin:1 bound:2 resampled:2 guaranteed:5 refine:1 elucidated:1 occur:1 sake:1 wc:2 bonn:1 simulate:1 min:1 department:1 according:1 creative:1 combination:1 poor:2 conjugate:3 terminates:1 smaller:2 reconstructing:1 metropolis:2 restricted:2 pr:4 nonempty:1 mechanism:1 merit:1 tractable:1 serf:1 operation:2 gaussians:2 multiplied:2 juho:1 hierarchical:9 appropriate:3 alternative:2 original:2 top:6 dirichlet:5 clustering:7 include:1 remaining:1 running:2 subsampling:1 maintaining:1 sw:1 graphical:2 xc:13 build:3 especially:1 society:1 sweep:2 move:23 added:3 costly:1 traditional:1 cycling:1 dp:8 distance:1 topic:1 agglomerative:2 extent:1 argue:1 reason:1 index:5 mini:1 ratio:1 executed:1 trace:1 proper:1 teh:2 datasets:9 sm:20 markov:1 finite:2 extended:2 arbitrary:1 intensity:2 introduced:1 pair:4 c3:3 merges:3 nip:3 usually:3 program:1 built:4 including:1 max:4 royal:1 rely:1 hybrid:1 scheme:1 technology:1 prior:7 review:1 ict:1 heller:1 marginalizing:2 embedded:1 mixed:1 proportional:4 analogy:1 bhc:12 foundation:1 pi:1 row:4 nrmm:18 prone:1 placed:1 repeat:1 supported:1 infeasible:1 enjoys:1 jth:1 guide:2 benefit:1 slice:10 edinburgh:1 evaluating:1 transition:5 world:1 stuck:1 made:1 reinforcement:1 reconstructed:1 approximate:1 ml:1 global:14 sequentially:2 corpus:2 assumed:2 consuming:1 xi:14 un:3 promising:1 nature:2 du:10 hc:10 necessarily:1 elegantly:1 inheriting:1 did:1 aistats:1 main:2 noise:1 hyperparameters:2 nothing:1 child:1 repeated:4 x1:2 fig:7 representative:1 referred:1 join:1 tl:6 georgia:1 slow:2 fails:1 position:3 comprises:1 sub:10 xl:2 candidate:2 theorem:1 choi:2 symbol:1 explored:1 normalizing:1 consist:1 false:1 sequential:2 merging:2 kr:1 ci:4 dissimilarity:9 subtree:9 chen:2 suited:2 tc:20 simply:1 likely:2 infinitely:2 antoniak:1 failed:2 ordered:1 corresponds:1 truth:1 determines:1 goal:1 change:1 determined:1 except:1 corrected:2 uniformly:1 sampler:44 total:3 called:4 accepted:4 experimental:1 rarely:1 formally:1 select:1 mcmc:25 tested:1 |
5,304 | 5,801 | Reflection, Refraction, and Hamiltonian Monte Carlo
Justin Domke
National ICT Australia (NICTA) &
Australian National University
Canberra, ACT 0200
[email protected]
Hadi Mohasel Afshar
Research School of Computer Science
Australian National University
Canberra, ACT 0200
[email protected]
Abstract
Hamiltonian Monte Carlo (HMC) is a successful approach for sampling from continuous densities. However, it has difficulty simulating Hamiltonian dynamics
with non-smooth functions, leading to poor performance. This paper is motivated
by the behavior of Hamiltonian dynamics in physical systems like optics. We introduce a modification of the Leapfrog discretization of Hamiltonian dynamics on
piecewise continuous energies, where intersections of the trajectory with discontinuities are detected, and the momentum is reflected or refracted to compensate for
the change in energy. We prove that this method preserves the correct stationary
distribution when boundaries are affine. Experiments show that by reducing the
number of rejected samples, this method improves on traditional HMC.
1
Introduction
Markov chain Monte Carlo sampling is among the most general methods for probabilistic inference.
When the probability distribution is smooth, Hamiltonian Monte Carlo (HMC) (originally called
hybrid Monte Carlo [4]) uses the gradient to simulate Hamiltonian dynamics and reduce random
walk behavior. This often leads to a rapid exploration of the distribution [7, 2]. HMC has recently
become popular in Bayesian statistical inference [13], and is often the algorithm of choice.
Some problems display piecewise smoothness, where the density is differentiable except at certain
boundaries. Probabilistic models may intrinsically have finite support, being constrained to some
region. In Bayesian inference, it might be convenient to state a piecewise prior. More complex and
highly piecewise distributions emerge in applications where the distributions are derived from other
distributions (e.g. the distribution of the product of two continuous random variables [5]) as well as
applications such as preference learning [1], or probabilistic programming [8].
While HMC is motivated by smooth distributions, the inclusion of an acceptance probability means
HMC does asymptotically sample correctly from piecewise distributions1 . However, since leapfrog
numerical integration of Hamiltonian dynamics (see [9]) relies on the assumption that the corresponding potential energy is smooth, such cases lead to high rejection probabilities, and poor performance. Hence, traditional HMC is rarely used for piecewise distributions.
In physical systems that follow Hamiltonian dynamics [6], a discontinuity in the energy can result
in two possible behaviors. If the energy decreases across a discontinuity, or the momentum is large
enough to overcome an increase, the system will cross the boundary with an instantaneous change
in momentum, known as refraction in the context of optics [3]. If the change in energy is too large to
be overcome by the momentum, the system will reflect off the boundary, again with an instantaneous
change in momentum.
1
Technically, here we assume the total measure of the non-differentiable points is zero so that, with probability one, none is ever encountered
1
6
6
6
4
4
2
2
0.
2
0.
2
0.25
4
25
0.
reflection
refraction
0.75
0.8
2
0.75
0.85
2
0
q
q2
0.25
0.8
0.95
0
0.75
q2
0.9
0
0.9
?2
0.8
0.85
0.
?2
?2
?4
?4
25
0.75
?4
0.25
0.
0.
2
?6
?6
?4
?2
0
q1
2
4
2
6
?6
?6
?4
(a)
?2
0
q1
(b)
2
4
6
?6
?6
?4
?2
0
q1
2
4
6
(c)
Figure 1: Example trajectories of baseline and reflective HMC. (a) Contours of the target distribution in
two dimensions, as defined in Eq. 18. (b) Trajectories of the rejected (red crosses) and accepted (blue dots)
proposals using baseline HMC. (c) The same with RHMC. Both use leapfrog parameters L = 25 and = 0.1
In RHMC, the trajectory reflects or refracts on the boundaries of the internal and external polytope boundaries
and thus has far fewer rejected samples than HMC, leading to faster mixing in practice. (More examples in
supplementary material.)
Recently, Pakman and Paninski [11, 10] proposed methods for HMC-based sampling from piecewise
Gaussian distributions by exactly solving the Hamiltonian equations, and accounting for what we
refer to as refraction and reflection above. However, since Hamiltonian equations of motion can
rarely be solved exactly, the applications of this method are restricted to distributions whose logdensity is piecewise quadratic.
In this paper, we generalize this work to arbitrary piecewise continuous distributions, where each
region is a polytope, i.e. is determined by a set of affine boundaries. We introduce a modification to
the leapfrog numerical simulation of Hamiltonian dynamics, called Reflective Hamiltonian Monte
Carlo (RHMC), by incorporating reflection and refraction via detecting the first intersection of a
linear trajectory with a boundary. We prove that our method has the correct stationary distribution,
where the main technical difficulty is proving volume preservation of our dynamics to establish detailed balance. Numerical experiments confirm that our method is more efficient than baseline HMC,
due to having fewer rejected proposal trajectories, particularly in high dimensions. As mentioned,
the main advantage of this method over [11] and [10] is that it can be applied to arbitrary piecewise densities, without the need for a closed-form solution to the Hamiltonian dynamics, greatly
increasing the scope of applicability.
2
Exact Hamiltonian Dynamics
Consider a distribution P (q) ? exp(?U (q)) over Rn , where U is the potential energy.
HMC [9] is based on considering a joint distribution on momentum and position space
P (q, p) ? exp(?H(q, p)), where H(q, p) = U (q) + K(p), and K is a quadratic, meaning that
P (p) ? exp(?K(p)) is a normal distribution. If one could exactly simulate the dynamics, HMC
would proceed by (1) iteratively sampling p ? P (p), (2) simulating the Hamiltonian dynamics
dqi
?H
=
= pi
dt
?pi
dpi
?H
?U
=?
=?
dt
?qi
?qi
(1)
(2)
for some period of time , and (3) reversing the final value p. (Only needed for the proof of correctness, since this will be immediately discarded at the start of the next iteration in practice.)
Since steps (1) and (2-3) both leave the distribution P (p, q) invariant, so does a Markov chain that
alternates between the two steps. Hence, the dynamics have P (p, q) as a stationary distribution. Of
course, the above differential equations are not well-defined when U has discontinuities, and are
typically difficult to solve in closed-form.
2
3
Reflection and Refraction with Exact Hamiltonian Dynamics
Take a potential function U (q) which is differentiable in all points except at some boundaries of
partitions. Suppose that, when simulating the Hamiltonian dynamics, (q, p) evolves over time as in
the above equations whenever these equations are differentiable. However, when the state reaches a
boundary, decompose the momentum vector p into a component p? perpendicular to the boundary
and a component pk parallel to the boundary. Let ?U be the (signed) difference in potential energy
on the p
two sides of the discontinuity. If kp? k2 > 2?U then p? is instantaneously replaced by
p
0
p? := kp? k2 ? 2?U ? kp? k . That is, the discontinuity is passed, but the momentum is changed
?
in the direction perpendicular to the boundary (refraction). (If ?U is positive, the momentum will
decrease, and if it is negative, the momentum will increase.) On the other hand, if kp? k2 ? 2?U ,
then p? is instantaneously replaced by ?p? . That is, if the particle?s momentum is insufficient
to climb the potential boundary, it bounces back by reversing the momentum component which is
perpendicular to the boundary.
Pakman and Paninski [11, 10] present an algorithm to exactly solve these dynamics for quadratic
U . However, for non-quadratic U , the Hamiltonian dynamics rarely have a closed-form solution,
and one must resort to numerical integration, the most successful method for which is known as the
leapfrog dynamics.
4
Reflection and Refraction with Leapfrog Dynamics
Informally, HMC with leapfrog dynamics iterates three steps. (1) Sample p ? P (p). (2) Perform
leapfrog simulation, by discretizing the Hamiltonian equations into L steps using some small stepsize . Here, one interleaves a position step q ? q + p between two half momentum steps p ?
p ? ?U (q)/2. (3) Reverse the sign of p. If (q, p) is the starting point of the leapfrog dynamics,
and (q0 , p0 ) is the final point, accept the move with probability min(1, exp(H(p, q) ? H(p0 , q0 )))
See Algorithm 1.
It can be shown that this baseline HMC method has detailed balance with respect to P (p), even if
U (q) is discontinuous. However, discontinuities mean that large changes in the Hamiltonian may
occur, meaning many steps can be rejected. We propose a modification of the dynamics, namely,
reflective Hamiltonian Monte Carlo (RHMC), which is also shown in Algorithm 1.
The only modification is applied to the position steps: In RHMC, the first intersection of the trajectory with the boundaries of the polytope that contains q must be detected [11, 10]. The position step
is only taken up to this boundary, and reflection/refraction occurs, depending on the momentum and
change of energy at the boundary. This process continues until the entire amount of time has been
simulated. Note that if there is no boundary in the trajectory to time , this is equivalent to baseline
HMC. Also note that several boundaries might be visited in one position step.
As with baseline HMC, there are two alternating steps, namely drawing a new momentum variable
p from P (p) ? exp(?K(p)) and proposing a move (p, q) ? (p0 , q0 ) and accepting or rejecting it
with a probability determined by a Metropolis-Hastings ratio. We can show that both of these steps
leave the joint distribution P invariant, and hence a Markov chain that also alternates between these
steps will also leave P invariant.
As it is easy to see, drawing p from P (p) will leave P (q, p) invariant, we concentrate on the second
step i.e. where a move is proposed according to the piecewise leapfrog dynamics shown in Alg. 1.
Firstly, it is clear that these dynamics are time-reversible, meaning that if the simulation takes state
(q, p) to (q0 , p0 ) it will also take state (q0 , p0 ) to (q, p). Secondly, we will show that these dynamics
are volume preserving. Formally, if D denotes the leapfrog dynamics, we will show that the absolute
value of the determinant of the Jacobian of D is one. These two properties together show that the
probability density of proposing a move from (q, p) to (q0 , p0 ) is the same of that proposing a move
from (q0 , p0 ) to (q, p). Thus, if the move (q, p) ? (q0 , p0 ) is accepted according to the standard
Metropolis-Hastings ratio, R ((q, p) ? (q0 , p0 )) = min(1, exp(H(q, p) ? H(q0 , p0 )), then detailed
balance will be satisfied. To see this, let Q denote the proposal distribution, Then, the usual proof of
correctness for Metropolis-Hastings applies, namely that
3
P (q, p)Q ((q, p) ? (q0 , p0 )) R ((q, p) ? (q0 , p0 ))
P (q0 , p0 )Q((q0 , p0 ) ? (q, p))R((q0 , p0 ) ? (q, p))
P (q, p) min(1, exp(H(q, p) ? H(q0 , p0 ))
= 1. (3)
=
P (q0 , p0 ) min(1, exp(H(q0 , p0 ) ? H(q, p))
(The final equality is easy to establish, considering the cases where H(q, p) ? H(q0 , p0 ) and
H(q0 , p0 ) ? H(q, p) separately.) This means that
detailed balance holds, and so P is a stationary distribution.
q0
0
p
x
The major difference in the analysis of RHMC, relap
tive to traditional HMC is that showing conservation
of volume is more difficult. With standard HMC
q
and leapfrog steps, volume conservation is easy to
show by observing that each part of a leapfrog step
q1 = c
is a shear transformation. This is not the case with
RHMC, and so we must resort to a full analysis of
the determinant of the Jacobian, as explored in the Figure 2: Transformation T : hq,pi ? hq0 ,p0 i
following section.
described by Lemma 1 (Refraction).
Algorithm 1: BASELINE & R EFLECTIVE HMC A LGORITHMS
input : q0 , current sample; U , potential function, L, # leapfrog steps; , leapfrog step size
output: next sample
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
begin
q ? q0 ; p ? N (0, 1)
H0 ? kpk2 /2 + U (q)
for l = 1 to L do
p ? p ? ?U (q)/2
# Half-step evolution of momentum
if BASELINE HMC then
# Full-step evolution of position:
q ? q + p
# i.e. if R EFLECTIVE HMC:
else
t0 ? 0
while hx, tx , ?U, ?i ? F IRST D ISCONTINUITY(q, p, ? t0 , U ) 6= ? do
q?x
t0 ? t0 + tx
hp? , pk i = DECOMPOSE(p, ?)
# Perpendicular/ parallel to boundary plane ?
if kp? k2 >p2?U then
p
# Refraction
p? ? kp? k2 ? 2?U ? kp? k
?
else
p? ? ?p?
# Reflection
18
19
20
21
22
23
24
25
p ? p? + pk
q ? q + ( ? t0 )p
p ? p ? ?U (q)/2
# Half-step evolution of momentum
p ? ?p
# Not required in practice; for reversibility proof
H ? kpk2 /2 + U (q); ?H ? H ? H0
if s ? U(0, 1) < e??H return q else return q0
end
note : F IRST D ISCONTINUITY(?) returns x, the position of the first intersection of a boundary plain
with line segment [q, q + ( ? t0 )p]; tx , the time it is visited; ?U , the change in energy at the
discontinuity, and ?, the visited partition boundary. If no such point exists, ? is returned.
4
5
Volume Conservation
5.1
Refraction
In our first result, we assume without loss of generality, that there is a boundary located at the
hyperplane q1 = c. This Lemma shows that, in the refractive case, volume is conserved. The setting
is visualized in Figure 2.
Lemma 1. Let T : hq, pi ? hq0 , p0 i be a transformation in Rn that takes a unit mass located at
q := (q1 , . . . , qn ) and moves it with constant momentum p := (p1 , . . . , pn ) till it reaches a plane
q1 = c (at some point x := (c, x2 , . . . , xn ) where
c is a constant). Subsequently the momentum is
p
0
2
changed to p =
p1 ? 2?U (x), p2 , . . . , pn (where ?U (?) is a function of x s.t. p21 > 2?U (x)).
The move is carried on for the total time period ? till it ends in q0 . For all n ? N, T satisfies the
volume preservation property.
Proof. Since for i > 1, the momentum is not affected by the collision, qi0 = qi + ? ? pi and p0i = pi .
Thus,
?qi0
?qi0
?p0i
?p0i
?j ? {2, . . . , n}s.t. j 6= i,
=
=
=
= 0.
?qj
?pj
?qj
?pj
Therefore, if we explicitly write out the Jacobian determinant |J| of the transformation T , it is
?q0
?q10
?q10
?q10
?q10
?q10 ?q10
1
?q10
?q10
?q10
? ? ? ?pk?1
?q1
?q
? ? ? ?pk?1
?p1
?qk
?pk
?p1
?qk
?pk
1
0
0
0
0
?p01
?p
?p
?p
?p
0
0
0
0
0
1
1
1
1
?p1
?p1
?p1
?p1 ?p1 ? ? ?
? ? ? ?pk?1
?p1
?qk
?pk
?q10
?q1
?p1
?pk?1
?qk
?pk
0
0
0
0
?q2
?q2
?q2
?q2
?q2
?q20
?q20
?q20
?q1
?
?
?
0
0
?
?
?
?p
?p
?q
?p
1
k?1
k
k
?pk?1
?qk
?pk
..
.
.
.
.
.
.
..
..
..
..
= .
..
..
.
.
.
0
0
0
0
0
?pk?1
?pk?1
?pk?1
?pk?1 ?pk?1
?p0k?1
?p0k?1
?
?
?
?q1
0
0
?
?
?
1
?p
?p
?q
?p
1
k?1
k
k
?q0
?qk
?pk
0
0
0
0
0
?qk
?qk
?qk
?qk
k
?qk
?
?
?
?q1
0
0
???
0
1
?p1
?pk?1
?qk
?pk
?pk
0
0
0
0
?p0k
?p
?p
?p
?p
k
k
k
0
0
???
0
0
1
? ? ? ?p k
?q
?p
?q
?p
1
1
k?1
k
k
(4)
Now, using standard properties of the determinant, we have that |J| =
?q10
?q1
?p01
?q1
?q10
?p1
?p01
?p1
.
We will now explicitly calculate these four derivatives. Due to the significance of the result, we carry
out the computations in detail. Nonetheless, as this is a largely mechanical process, for brevity, we
do not comment on the derivation.
Let t1 be the time to reach x and t2 be the period between reaching x and the last point q0 . Then:
def
t1 =
c ? q1
p1
x = q + t1 p
(5)
q10 = c + p01 ? t2 (8)
def
p01 =
q
def
(6)
t2 = ? ? t1 = ? +
p21 ? 2?U (x) (9)
?t2 by (7) 1
=
?q1
p1
?q10
?q 0 ?p0
?q 0 ?t2
= 10 ? 1 + 1 ?
?q1
?p1 ?q1
?t2 ?q1
?q10
?q 0 ?p0
?q 0 ?t2
= 10 ? 1 + 1 ?
?p1
?p1 ?p1
?t2 ?p1
(8 & 10)
=
(7 & 8)
(10)
(11)
?p01
c ? q1
+ p01 ?
?p1
p21
(12)
? p21 ? 2?U (x)
?p01 (9)
1
p1 ? ??U (x)/?p1
= p 2
?
=
?p1
?p1
p01
2 p1 ? 2?U (x)
5
(7)
?p01
1
+ p01 ?
?q1
p1
t2 ?
= t2 ?
q1 ? c
p1
(13)
? p21 ? 2?U (x)
?p01
1
1 ???U (x)
?
= p 2
= 0 ?
?q1
?q1
p1
?q1
2 p1 ? 2?U (x)
(14)
c?q1
?x (5, 6) ? q + p1 p
?q
?(p/p1 )
?1
=
=
+ (c ? q1 ) ?
= (c ? q1 ) 2 ? (0, p2 , p3 , . . . , pn ) (15)
?p1
?p1
?p1
?p1
p1
?x (5, 6) ?q
p ?(c ? q1 )
q
p
p2
pn
?1
=
+
?
=
?
= (1, 0, . . . , 0) ? (1, , . . . , ) =
(0, p2 , . . . , pn )
?q1
?q1
p1
?q1
q1
p1
p1
p1
p1
(16)
?q 0 ?p0
Substituting the appropriate terms into |J| = | ?q11 ?p11 ?
?p01 ?q10
?q1 ?p1 |,
we get that
?q10 ?p01
?q10 ?p01 (11 & 12)
?p01
p01
?p01
?p01
?p0
0 c ? q1
|J| =
?
?
?
=
t2
+
?
? t2
+ p1 2
? 1
?q1 ?p1
?p1 ?q1
?q1
p1
?p1
?p1
p1
?q1
0
0
0
p
?p1
q1 ? c ?p1 (13 & 14) 1
??U (x) q1 ? c ??U (x)
= 1
+
?
=
p1 ?
?
?
p1 ?p1
p1
?q1
p1
?p1
p1
?q1
q1 ? c ??U (x) ?x
1 ??U (x) ?x
?
+
?
?
=1?
p1
?x
?p1
p1
?x
?q1
1 ??U (x) q1 ? c
q1 ? c ?1
(15 & 16)
= 1?
?
(0,
p
,
p
,
.
.
.
,
p
)
+
?
?
(0,
p
,
.
.
.
,
p
)
= 1.
2 3
n
2
n
p1
?x
p21
p1
p1
(4)
5.2
Reflection
Now, we turn to the reflective case, and again show that volume is conserved. Again, we assume
without loss of generality that there is a boundary located at the hyperplane q1 = c.
Lemma 2. Let T : hq, pi ? hq0 , p0 i be a transformation in Rn that takes a unit mass located at
q := (q1 , . . . , qn ) and moves it with the constant momentum p := (p1 , . . . , pn ) till it reaches a plane
q1 = c (at some point x := (c, x2 , . . . , xn ) where c is a constant). Subsequently the mass is bounced
back (reflected) with momentum p0 = (?p1 , p2 , . . . , pn ) The move is carried on for a total time
period ? till it ends in q0 . For all n ? N, T satisfies the volume preservation property.
Proof. Similar to Lemma 1, for i > 1, qi0 = qi +? ?pi and p0i = pi . Therefore, for any j ? {2, . . . , n}
?q 0
?q 0
?p0
?p0
s.t. j 6= i, ?qji = ?pji = ?qji = ?pji = 0. Consequently, by equation (4), and since p01 = ?p1 ,
?q10 ?q10 ?q10 ?q10 ??q 0
?q1 ?p1 ?q1 ?p1
1
|J| = ?p0 ?p0 =
(17)
=
1
1
0
?q
?1
1
?q
?p
1
1
As before, let t1 be the time to reach x and t2 be the period between reaching x and the last point
def
def
0 def
0
1
q0 . That is, t1 = c?q
p1 and t2 = ? ? t1 . It follows that q1 = c + p1 ? t2 is equal to 2c ? ? p1 ? q1 .
Hence, |J| = 1.
5.3
Reflective Leapfrog Dynamics
Theorem 1. In RHMC (Algorithm 1) for sampling from a continuous and piecewise distribution P
which has affine partitioning boundaries, leapfrog simulation preserves volume in (q, p) space.
Proof. We split the algorithm into several atomic transformations Ti . Each transformation is either
(a) a momentum step, (b) a full position step with no reflection/refraction or (c) a full or partial
position step where exactly one reflection or refraction occurs.
6
To prove that the total algorithm preserves volume, it is sufficient to show that the volume is preserved under each Ti (i.e. |JTi (q, p)| = 1) since:
|JT1 oT2 o???oTm | = |JT1 | ? |JT2 | ? ? ? |JTm |
Transformations of kind (a) and (b) are shear mappings and therefore they preserve the volume [9].
Now consider a (full or partial) position step where a single refraction occurs. If the reflective plane
is in form q1 = c, by lemma 1, the volume preservation property holds. Otherwise, as long as
the reflective plane is affine, via a rotation of basis vectors, the problem is reduced to the former
case. Since volume is conserved under rotation, in this case the volume is also conserved. With
similar reasoning, by lemma 2, reflection on a affine reflective boundary preserves volume. Thus,
since all component transformations of RHMC leapfrog simulation preserve volume, the proof is
complete.
Along with the fact that the leapfrog dynamics are time-reversible, this shows that the algorithm
satisfies detailed balance, and so has the correct stationary distribution.
6
Experiment
Compared to baseline HMC, we expect that RHMC will simulate Hamiltonian dynamics more accurately and therefore leads to fewer rejected samples. On the other hand, this comes at the expense
of slower leapfrog position steps since intersections, reflections and refractions must be computed.
To test the trade off, we compare the RHMC to baseline HMC [9] and tuned Metroplis-Hastings
(MH) with a simple isotropic Normal proposal distribution. MH is automatically tuned after [12] by
testing 100 equidistant proposal variances in interval (0, 1] and accepting a variance for which the
acceptance rate is closest to 0.24. The baseline HMC and RHMC number of steps L and step size
are chosen to be 100 and 0.1 respectively. (Many other choices are in the Appendix.) While HMC
performance is highly standard to these parameters [7] RHMC is consistently faster.
The comparison takes place on a heavy tail piecewise model with (non-normalized) negative log
probability
?p
?
if kqk? ? 3
? q>pA q
(18)
U (q) = 1 + q> A q if 3 < kqk? ? 6
?
?+?,
otherwise
where A is a positive definite matrix. We carry out the experiment on three choices of (position
space) dimensionalites, n = 2, 10 and 50.
Due to the symmetry of the model, the ground truth expected value of q is known to be 0. Therefore,
the absolute error of the expected value (estimated by a chain q(1) , . . . , q(k) of MCMC samples) in
each dimension d = 1, . . . , n is the absolute value of the mean of d-th element of the sample vectors.
The worst mean absolute error (WMAE) over all dimensions is taken as the error measurement of
the chain.
k
X
1
(1)
(k)
s
WMAE q , . . . , q
=
max
qd
(19)
k d=1,...n
s=1
For each algorithm, 20 Markov chains are run and the mean WMAE and 99% confidence intervals
(as error bars) versus the number of iterations (i.e. Markov chain sizes) are time (milliseconds) are
depicted in figure 2. All algorithms are implemented in java and run on a single thread of a 3.40GHz
CPU.
For each of the 20 repetitions, some random starting point is chosen uniformly and used for all three
of the algorithms. We use a diagonal matrix for A where, for each repetition, each entry on the main
diagonal is either exp(?5) or exp(5) with equal probabilities.
As the results show, even in low dimensions, the extra cost of the position step is more or less
compensated by its higher effective sample size but as the dimensionality increases, the RHMC
significantly outperforms both baseline HMC and tuned MH.
7
6
6
Baseline.HMC
Tuned.MH
Reflective.HMC
5
4
Error
Error
4
3
3
2
2
1
1
0 0
10
Baseline.HMC
Tuned.MH
Reflective.HMC
5
1
10
2
10
3
10
Iteration (dim=2, L=100, ? =0.1)
0
4
10
1
10
2
(a)
6
Baseline.HMC
Tuned.MH
Reflective.HMC
5
4
Error
Error
Baseline.HMC
Tuned.MH
Reflective.HMC
5
4
3
3
2
2
1
1
1
10
2
10
3
10
Iteration (dim=10, L=100, ? =0.1)
0
4
10
1
10
2
10
(e)
6
5
5
4
4
Baseline.HMC
Tuned.MH
Reflective.HMC
3
Error
Error
3
10
Time (dim=10, L=100, ? =0.1)
(b)
6
3
2
2
1
1
0 0
10
10
(d)
6
0 0
10
3
10
Time (dim=2, L=100, ? =0.1)
1
10
2
10
3
10
Iteration (dim=50, L=100, ? =0.1)
0
4
10
Baseline.HMC
Tuned.MH
Reflective.HMC
1
10
(c)
2
3
10
10
Time (dim=50, L=100, ? =0.1)
4
10
(f)
Figure 3: Error (worst mean absolute error per dimension) versus (a-c) iterations and (e-f) time (ms).
Tuned.HM is Metropolis Hastings with a tuned isotropic Gaussian proposal distribution. (Many more examples
in supplementary material.)
7
Conclusion
We have presented a modification of the leapfrog dynamics for Hamiltonian Monte Carlo for piecewise smooth energy functions with affine boundaries (i.e. each region is a polytope), inspired by
physical systems. Though traditional Hamiltonian Monte Carlo can in principle be used on such
functions, the fact that the Hamiltonian will often be dramatically changed by the dynamics can result in a very low acceptance ratio, particularly in high dimensions. By better preserving the Hamiltonian, reflective Hamiltonian Monte Carlo (RHMC) accepts more moves and thus has a higher
effective sample size, leading to much more efficient probabilistic inference. To use this method,
one must be able to detect the first intersection of a position trajectory with polytope boundaries.
Acknowledgements
NICTA is funded by the Australian Government through the Department of Communications and
the Australian Research Council through the ICT Centre of Excellence Program.
Planar Ultrametrics for Image Segmentation
Charless C. Fowlkes
Department of Computer Science
University of California Irvine
[email protected]
Julian Yarkony
Experian Data Lab
San Diego, CA 92130
[email protected]
Abstract
We study the problem of hierarchical clustering on planar graphs. We formulate
this in terms of finding the closest ultrametric to a specified set of distances and
solve it using an LP relaxation that leverages minimum cost perfect matching as
a subroutine to efficiently explore the space of planar partitions. We apply our
algorithm to the problem of hierarchical image segmentation.
1 Introduction
We formulate hierarchical image segmentation from the perspective of estimating an ultrametric
distance over the set of image pixels that agrees closely with an input set of noisy pairwise distances.
An ultrametric space replaces the usual triangle inequality with the ultrametric inequality d(u, v) ≤ max{d(u, w), d(v, w)}, which captures the transitive property of clustering (if u and w are in the same cluster and v and w are in the same cluster, then u and v must also be in the same cluster).
Thresholding an ultrametric immediately yields a partition into sets whose diameter is less than
the given threshold. Varying this distance threshold naturally produces a hierarchical clustering in
which clusters at high thresholds are composed of clusters at lower thresholds.
Inspired by the approach of [1], our method represents an ultrametric explicitly as a hierarchical
collection of segmentations. Determining the appropriate segmentation at a single distance threshold
is equivalent to finding a minimum-weight multicut in a graph with both positive and negative edge
weights [3, 14, 2, 11, 20, 21, 4, 19, 7]. Finding an ultrametric imposes the additional constraint that
these multicuts are hierarchically consistent across different thresholds. We focus on the case where
the input distances are specified by a planar graph. This arises naturally in the domain of image
segmentation where elements are pixels or superpixels and distances are defined between neighbors
and allows us to exploit fast combinatorial algorithms for partitioning planar graphs that yield tighter
LP relaxations than the local polytope relaxation often used in graphical inference [20].
The paper is organized as follows. We first introduce the closest ultrametric problem and the relation between multicuts and ultrametrics. We then describe an LP relaxation that uses a delayed
column generation approach and exploits planarity to efficiently find cuts via the classic reduction
to minimum-weight perfect matching [13, 8, 9, 10]. We apply our algorithm to the task of natural
image segmentation and demonstrate that our algorithm converges rapidly and produces optimal or
near-optimal solutions in practice.
2 Closest Ultrametric and Multicuts
Let G = (V, E) be a weighted graph with non-negative edge weights θ indexed by edges e = (u, v) ∈ E. Our goal is to find an ultrametric distance d_(u,v) over vertices of the graph that is close to θ in the sense that the distortion \sum_{(u,v) \in E} \|\theta_{(u,v)} - d_{(u,v)}\|_2^2 is minimized. We begin by reformulating this closest ultrametric problem in terms of finding a set of nested multicuts in a family of weighted graphs.
We specify a partitioning or multicut of the vertices of the graph G into components using a binary vector X̄ ∈ {0, 1}^{|E|} where X̄_e = 1 indicates that the edge e = (u, v) is "cut" and that the vertices u and v associated with the edge are in separate components of the partition. We use MCUT(G) to denote the set of binary indicator vectors X̄ that represent valid multicuts of the graph G. For notational simplicity, in the remainder of the paper we frequently omit the dependence on G which is given as a fixed input.
A necessary and sufficient condition for an indicator vector X̄ to define a valid multicut in G is that for every cycle of edges, if one edge on the cycle is cut then at least one other edge in the cycle must also be cut. Let C denote the set of all cycles in G, where each cycle c ∈ C is a set of edges and c − ê is the set of edges in cycle c excluding edge ê. We can express MCUT in terms of these cycle inequalities as:
\mathrm{MCUT} = \Big\{ \bar{X} \in \{0,1\}^{|E|} : \sum_{e \in c - \hat{e}} \bar{X}_e \ge \bar{X}_{\hat{e}}, \;\; \forall c \in C,\; \hat{e} \in c \Big\} \qquad (1)
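To make this characterization concrete, the sketch below checks a 0/1 edge labeling against the equivalent component-based definition of a multicut (an edge may be labeled cut only if its endpoints fall in different components of the uncut subgraph). The use of networkx and the dict-of-edges representation are our assumptions for illustration:

    import networkx as nx

    def is_valid_multicut(G, cut):
        # cut[(u, v)] == 1 means edge (u, v) is labeled 'cut'.
        kept = [e for e in G.edges() if not cut[e]]
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(kept)
        comp = {v: i for i, c in enumerate(nx.connected_components(H)) for v in c}
        # valid iff every cut edge actually separates two components
        return all(comp[u] != comp[v] for (u, v) in G.edges() if cut[(u, v)])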
A hierarchical clustering of a graph can be described by a nested collection of multicuts. We denote the space of valid hierarchical partitions with L layers by Ω̄^L, which we represent by a set of L edge-indicator vectors X = (X̄^1, X̄^2, X̄^3, ..., X̄^L) in which any cut edge remains cut at all finer layers of the hierarchy.
\bar{\Omega}^L = \big\{ (\bar{X}^1, \bar{X}^2, \dots, \bar{X}^L) : \bar{X}^l \in \mathrm{MCUT},\; \bar{X}^l \ge \bar{X}^{l+1} \;\; \forall l \big\} \qquad (2)
Given a valid hierarchical clustering X, an ultrametric d can be specified over the vertices of the graph by choosing a sequence of real values 0 = δ^0 < δ^1 < δ^2 < ... < δ^L that indicate a distance threshold associated with each level l of the hierarchical clustering. The ultrametric distance d specified by the pair (X, δ) assigns a distance to each pair of vertices d_(u,v) based on the coarsest level of the clustering at which they remain in separate clusters. For pairs corresponding to an edge in the graph (u, v) = e ∈ E we can write this explicitly in terms of the multicut indicator vectors as:
d_e \;=\; \max_{l \in \{0,1,\dots,L\}} \delta^l \bar{X}^l_e \;=\; \sum_{l=0}^{L} \delta^l \, [\bar{X}^l_e > \bar{X}^{l+1}_e] \qquad (3)
We assume by convention that X̄^0_e = 1 and X̄^{L+1}_e = 0. Pairs (u, v) that do not correspond to an edge in the original graph can still be assigned a unique distance based on the coarsest level l at which they lie in different connected components of the cut specified by X^l.
To compute the quality of an ultrametric d with respect to an input set of edge weights θ, we measure the squared L2 difference between the edge weights and the ultrametric distance ‖θ − d‖₂². To write this compactly in terms of multicut indicator vectors, we construct a set of weights for each edge and layer, denoted θ_e^l, so that \sum_{l=0}^{m} \theta^l_e = \|\theta_e - \delta^m\|^2. These weights are given explicitly by the telescoping series:

\theta^0_e = \|\theta_e\|^2, \qquad \theta^l_e = \|\theta_e - \delta^l\|^2 - \|\theta_e - \delta^{l-1}\|^2 \;\; \forall l \ge 1 \qquad (4)

We use θ^l ∈ R^{|E|} to denote the vector containing θ_e^l for all e ∈ E.
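As a quick illustration of Eq 4, the following numpy sketch (variable names are ours, not the paper's) builds the per-layer weights and checks the telescoping identity:

    import numpy as np

    def layer_weights(theta, deltas):
        # theta: (|E|,) edge weights; deltas: [0, delta^1, ..., delta^L].
        diffs = (theta[None, :] - np.asarray(deltas)[:, None]) ** 2  # (L+1, |E|)
        W = np.empty_like(diffs)
        W[0] = diffs[0]            # theta_e^0 = theta_e^2 since delta^0 = 0
        W[1:] = diffs[1:] - diffs[:-1]
        return W

    theta = np.array([0.5, 3.0, 7.2])
    deltas = [0.0, 1.0, 2.5, 6.0]
    W = layer_weights(theta, deltas)
    # cumulative sums recover the squared distance to each threshold
    assert np.allclose(W.cumsum(axis=0),
                       (theta[None, :] - np.asarray(deltas)[:, None]) ** 2)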
For a fixed number of levels L and a fixed set of thresholds δ, the problem of finding the closest ultrametric d can then be written as an integer linear program (ILP) over the edge cut indicators:
\min_{X \in \bar{\Omega}^L} \sum_{e \in E} \sum_{l=0}^{L} \|\theta_e - \delta^l\|^2 \, [\bar{X}^l_e > \bar{X}^{l+1}_e] \;=\; \min_{X \in \bar{\Omega}^L} \sum_{e \in E} \sum_{l=0}^{L} \|\theta_e - \delta^l\|^2 \, (\bar{X}^l_e - \bar{X}^{l+1}_e) \qquad (5)

\;=\; \min_{X \in \bar{\Omega}^L} \sum_{e \in E} \Big( \|\theta_e\|^2 \, \bar{X}^0_e + \sum_{l=1}^{L} \big( \|\theta_e - \delta^l\|^2 - \|\theta_e - \delta^{l-1}\|^2 \big) \bar{X}^l_e - \|\theta_e - \delta^L\|^2 \, \bar{X}^{L+1}_e \Big)

\;=\; \min_{X \in \bar{\Omega}^L} \sum_{l=0}^{L} \sum_{e \in E} \theta^l_e \, \bar{X}^l_e \;=\; \min_{X \in \bar{\Omega}^L} \sum_{l=0}^{L} \theta^l \cdot \bar{X}^l \qquad (6)
This optimization corresponds to solving a collection of minimum-weight multicut problems where
the multicuts are constrained to be hierarchically consistent.
(a) Linear combination of cut vectors. (b) Hierarchical cuts.
Figure 1: (a) Any partitioning X can be represented as a linear superposition of cuts Z where each cut isolates a connected component of the partition and is assigned a weight λ = 1/2 [20]. By introducing auxiliary slack variables β, we are able to represent a larger set of valid indicator vectors X using fewer columns of Z. (b) By introducing additional slack variables at each layer of the hierarchical segmentation, we can efficiently represent many hierarchical segmentations (here {X^1, X^2, X^3}) that are consistent from layer to layer while using only a small number of cut indicators as columns of Z.
Computing minimum-weight multicuts (also known as correlation clustering) is NP-hard even in the case of planar graphs [6]. A direct approach to finding an approximate solution to Eq 6 is to relax the integrality constraints on X̄^l and instead optimize over the whole polytope defined by the set of cycle inequalities. We use Ω^L to denote the corresponding relaxation of Ω̄^L. While the resulting polytope is not the convex hull of MCUT, the integral vertices do correspond exactly to the set of valid multicuts [12].
In practice, we found that applying a straightforward cutting-plane approach that successively adds
violated cycle inequalities to this relaxation of Eq 6 requires far too many constraints and is too
slow to be useful. Instead, we develop a column generation approach tailored for planar graphs that
allows for efficient and accurate approximate inference.
3 The Cut Cone and Planar Multicuts
Consider a partition of a planar graph into two disjoint sets of nodes. We denote the space of indicator vectors corresponding to such two-way cuts by CUT. A cut may yield more than two connected components but it can not produce every possible multicut (e.g., it can not split a triangle of three nodes into three separate components). Let Z ∈ {0, 1}^{|E| × |CUT|} be an indicator matrix where each column specifies a valid two-way cut, with Z_ek = 1 if and only if edge e is cut in two-way cut k. The indicator vector of any multicut in a planar graph can be generated by a suitable linear combination of cuts (columns of Z) that isolate the individual components from the rest of the graph, where the weight of each such cut is 1/2.
Let λ ∈ R^{|CUT|} be a vector specifying a positive weighted combination of cuts. The set CUT_Δ = {Zλ : λ ≥ 0} is the conic hull of CUT, or "cut cone". Since any multicut can be expressed as a superposition of cuts, the cut cone is identical to the conic hull of MCUT. This equivalence suggests an LP relaxation of the minimum-cost multicut given by
\min_{\lambda \ge 0} \; \theta \cdot Z\lambda \quad \text{s.t.} \quad Z\lambda \le 1 \qquad (7)
where the vector θ ∈ R^{|E|} specifies the edge weights. For the case of planar graphs, any solution to this LP relaxation satisfies the cycle inequalities (see supplement and [12, 18, 10]).
Expanded Multicut Objective: Since the matrix Z contains an exponential number of cuts, Eq. 7 is still intractable. Instead we consider an approximation using a constraint set Ẑ which is a subset of columns of Z. In previous work [20], we showed that since the optimal multicut may no longer lie in the span of the reduced cut matrix Ẑ, it is useful to allow some values of Ẑλ to exceed 1 (see Figure 1(a) for an example).
We introduce a slack vector β ≥ 0 that tracks the presence of any "overcut" edges and prevents them from contributing to the objective when the corresponding edge weight is negative. Let θ_e⁻ = min(θ_e, 0) denote the non-positive component of θ_e. The expanded multicut objective is given by:
\min_{\lambda \ge 0,\, \beta \ge 0} \; \theta \cdot Z\lambda - \theta^- \cdot \beta \quad \text{s.t.} \quad Z\lambda - \beta \le 1 \qquad (8)
For any edge e such that θ_e < 0, any decrease in the objective from overcutting by an amount β_e is exactly compensated for in the objective by the term −θ_e⁻ β_e.
When Ẑ contains all cuts (i.e., Ẑ = Z) then Eq 7 and Eq 8 are equivalent [20]. Further, if λ* is the minimizer of Eq 8 when Ẑ only contains a subset of columns, then the edge indicator vector given by X = min(1, Ẑλ*) still satisfies the cycle inequalities (see supplement for details).
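For a reduced cut matrix, Eq 8 is a small LP that can be handed to an off-the-shelf solver. Below is a sketch using scipy, which is our choice for illustration; the paper's own implementation uses CPLEX from MATLAB:

    import numpy as np
    from scipy.optimize import linprog
    from scipy.sparse import csr_matrix, hstack, identity

    def expanded_multicut_lp(Z_hat, theta):
        # Z_hat: (|E|, K) 0/1 matrix of two-way cuts; theta: (|E|,) edge weights.
        # Variables are [lambda; beta]: minimize theta.(Z lambda) - theta^- . beta
        # subject to Z lambda - beta <= 1, lambda >= 0, beta >= 0.
        Z = csr_matrix(Z_hat)
        E, K = Z.shape
        theta_neg = np.minimum(theta, 0.0)
        c = np.concatenate([Z.T @ theta, -theta_neg])
        A_ub = hstack([Z, -identity(E)])
        res = linprog(c, A_ub=A_ub, b_ub=np.ones(E), bounds=(0, None))
        lam, beta = res.x[:K], res.x[K:]
        X = np.minimum(1.0, Z @ lam)   # fractional edge indicator
        return X, lam, beta, res.fun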
4 Expanded LP for Finding the Closest Ultrametric
To develop an LP relaxation of the closest ultrametric problem, we replace the multicut problem at each layer l with the expanded multicut objective described by Eq 8. We let λ = {λ^1, λ^2, λ^3, ..., λ^L} and β = {β^1, β^2, β^3, ..., β^L} denote the collection of weights and slacks for the levels of the hierarchy, and let θ_e^{+l} = max(0, θ_e^l) and θ_e^{−l} = min(0, θ_e^l) denote the positive and negative components of θ^l.
To enforce hierarchical consistency between layers, we would like to add the constraint that Zλ^{l+1} ≤ Zλ^l. However, this constraint is too rigid when Z does not include all possible cuts. It is thus computationally useful to introduce an additional slack vector associated with each level l and edge e, which we denote as ω = {ω^1, ω^2, ω^3, ..., ω^{L−1}}. The introduction of ω_e^l allows cuts represented by Zλ^l to violate the hierarchical constraint. We modify the objective so that violations of the original hierarchy constraint are paid for in proportion to θ_e^{+l}. The introduction of ω allows us to find valid ultrametrics while using a smaller number of columns of Z than would otherwise be required (illustrated in Figure 1(b)).
We call this relaxed closest ultrametric problem including the slack variable ω the expanded closest ultrametric objective, written as:
\min_{\lambda \ge 0,\, \beta \ge 0,\, \omega \ge 0} \;\; \sum_{l=1}^{L} \theta^l \cdot Z\lambda^l \;+\; \sum_{l=1}^{L} -\theta^{-l} \cdot \beta^l \;+\; \sum_{l=1}^{L-1} \theta^{+l} \cdot \omega^l \qquad (9)

\text{s.t.} \quad Z\lambda^{l+1} + \omega^{l+1} \le Z\lambda^l + \omega^l \;\; \forall l < L, \qquad Z\lambda^l - \beta^l \le 1 \;\; \forall l

where by convention we define ω^L = 0 and we have dropped the constant l = 0 term from Eq 6.
Given a solution (λ, β, ω) we can recover a relaxed solution to the closest ultrametric problem (Eq. 6) over Ω^L by setting X_e^l = min(1, max_{m≥l} (Zλ^m)_e). In the supplement, we demonstrate that for any (λ, β, ω) that obeys the constraints in Eq 9, this thresholding operation yields a solution X that lies in Ω^L and achieves the same or lower objective value.
5 The Dual Objective
We optimize the dual of the objective in Eq 9 using an efficient column generation approach based on perfect matching. We introduce two sets of Lagrange multipliers γ = {γ^1, γ^2, γ^3, ..., γ^{L−1}} and η = {η^1, η^2, η^3, ..., η^L} corresponding to the between- and within-layer constraints respectively. For notational convenience, let γ^0 = 0.
Algorithm 1 Dual Closest Ultrametric via Cutting Planes
  Ẑ^l ← {} ∀l, residual ← −∞
  while residual < 0 do
    {γ}, {η} ← Solve Eq 10 given Ẑ
    residual ← 0
    for l = 1 : L do
      z^l ← arg min_{z∈CUT} (θ^l + η^l + γ^{l−1} − γ^l) · z
      residual ← residual + (3/2)(θ^l + η^l + γ^{l−1} − γ^l) · z^l
      {z(1), z(2), ..., z(M)} ← isocuts(z^l)
      Ẑ^l ← Ẑ^l ∪ {z(1), z(2), ..., z(M)}
    end for
  end while
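The structure of Algorithm 1 translates directly into code. The sketch below is schematic: solve_dual_lp, min_weight_cut, and isocuts are hypothetical placeholders for the LP solve of Eq 10, the matching-based minimum-weight cut, and the component-isolating expansion described below; none of these names come from the paper:

    import numpy as np

    def dual_cutting_planes(theta, solve_dual_lp, min_weight_cut, isocuts,
                            tol=-2e-4):
        # theta: list of L per-edge weight vectors (numpy arrays).
        L = len(theta)
        Z = [[] for _ in range(L)]        # per-layer working sets of cut columns
        residual = -np.inf
        while residual < tol:
            gamma, eta = solve_dual_lp(Z)     # hypothetical: solves Eq. 10
            residual = 0.0
            for l in range(L):
                g_prev = gamma[l - 1] if l > 0 else 0.0   # gamma^0 = 0
                g_next = gamma[l] if l < L - 1 else 0.0   # gamma^L = 0
                w = theta[l] + eta[l] + g_prev - g_next   # reduced edge weights
                z = min_weight_cut(w)   # hypothetical: via perfect matching
                residual += 1.5 * float(w @ z)
                Z[l].extend(isocuts(z)) # one column per isolated component
        return Z, gamma, eta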
The dual objective can then be written as

\max_{\gamma \ge 0,\, \eta \ge 0} \;\; \sum_{l=1}^{L} -\eta^l \cdot 1 \qquad (10)

\text{s.t.} \quad \eta^l \le -\theta^{-l} \;\; \forall l, \qquad -(\gamma^{l-1} - \gamma^l) \le \theta^{+l} \;\; \forall l, \qquad (\theta^l + \eta^l + \gamma^{l-1} - \gamma^l) \cdot Z \ge 0 \;\; \forall l
The dual LP can be interpreted as finding a small modification of the original edge weights θ^l so that every possible two-way cut of each resulting graph at level l has non-negative weight. Observe that the introduction of the two slack terms β and ω in the primal problem (Eq 9) results in bounds on the Lagrange multipliers η and γ in the dual problem in Eq 10. In practice these dual constraints turn out to be essential for efficient optimization and constitute the core contribution of this paper.
6 Solving the Dual via Cutting Planes
The chief complexity of the dual LP is contained in the constraints including Z which encodes
non-negativity of an exponential number of cuts of the graph represented by the columns of Z. To
circumvent the difficulty of explicitly enumerating the columns of Z, we employ a cutting plane
method that efficiently searches for additional violated constraints (columns of Z) which are then
successively added.
Let Ẑ denote the current working set of columns. Our dual optimization algorithm iterates over the following three steps: (1) Solve the dual LP with Ẑ, (2) find the most violated constraint of the form (θ^l + η^l + γ^{l−1} − γ^l) · Z ≥ 0 for layer l, (3) Append a column to the matrix Ẑ for each such cut found. We terminate when no violated constraints exist or a computational budget has been exceeded.
Finding Violated Constraints: Identifying columns to add to Ẑ is carried out for each layer l separately. Finding the most violated constraint of the full problem corresponds to computing the minimum-weight cut of a graph with edge weights θ^l + η^l + γ^{l−1} − γ^l. If this cut has non-negative weight then all the constraints are satisfied; otherwise we add the corresponding cut indicator vector as an additional column of Z.
To generate a new constraint for layer l based on the current Lagrange multipliers, we solve
z^l = \arg\min_{z \in \mathrm{CUT}} \sum_{e \in E} \big( \theta^l_e + \eta^l_e + \gamma^{l-1}_e - \gamma^l_e \big)\, z_e \qquad (11)

and subsequently add the new constraints from all layers to our LP: Ẑ ← [Ẑ, z^1, z^2, ..., z^L].
Unlike the multicut problem, finding a (two-way) cut in a planar graph can be solved exactly by a reduction to minimum-weight perfect matching. This is a classic result that, e.g., provides an exact solution for the ground state of a 2D lattice Ising model without a ferromagnetic field [13, 8, 9, 10] in O(N^{3/2} log N) time [15].
[Figure 2 plots: (a) bound (UB, LB) versus time (sec); (b) counts versus objective ratio (UCM / UM).]
Figure 2: (a) The average convergence of the upper (blue) and lower-bounds (red) as a function of running time. Values plotted are the gap between the bound and the best lower-bound computed (at termination) for a given problem instance. This relative gap is averaged over problem instances which have not yet converged at a given time point. We indicate the percentage of problem instances that have yet to terminate using black bars marking [95, 85, 75, 65, ..., 5] percent. (b) Histogram of the ratio of closest ultrametric objective values for our algorithm (UM) and the baseline clustering produced by UCM. All ratios were less than 1, showing that in no instance did UM produce a worse solution than UCM.
Computing a lower bound: At a given iteration, prior to adding a newly generated set of constraints, we can compute the total residual constraint violation over all layers of the hierarchy by ζ = Σ_l (θ^l + η^l + γ^{l−1} − γ^l) · z^l. In the supplement we demonstrate that the value of the dual objective plus (3/2)ζ is a lower-bound on the relaxed closest ultrametric problem in Eq 9. Thus, as the costs of the minimum-weight matchings approach zero from below, the objective of the reduced problem over Ẑ approaches an accurate lower-bound on optimization over Ω̄^L.
Expanding generated cut constraints: When a given cut z^l produces more than two connected components, we found it useful to add a constraint corresponding to each component, following the approach of [20]. Let the number of connected components of z^l be denoted M. For each of the M components we then add one column to Z corresponding to the cut that isolates that connected component from the rest. This allows more flexibility in representing the final optimum multicut as superpositions of these components. In addition, we also found it useful in practice to maintain a separate set of constraints Ẑ^l for each layer l. Maintaining independent constraints Ẑ^1, Ẑ^2, ..., Ẑ^L can result in a smaller overall LP.
Speeding convergence of ω: We found that adding an explicit penalty term to the objective that encourages small values of ω speeds up convergence dramatically with no loss in solution quality. In our experiments, this penalty is scaled by a parameter ε = 10^{−4}, which is chosen to be extremely small in magnitude relative to the values of θ so that it only has an influence when no other "forces" are acting on a given term in ω.
Primal Decoding: Algorithm 1 gives a summary of the dual solver, which produces a lower-bound as well as a set of cuts described by the constraint matrices Ẑ^l. The subroutine isocuts(z^l) computes the set of cuts that isolate each connected component of z^l. To generate a hierarchical clustering, we solve the primal, Eq 9, using the reduced set Ẑ in order to recover a fractional solution X_e^l = min(1, max_{m≥l} (Ẑ^m λ^m)_e). We use an LP solver (IBM CPLEX) which provides this primal solution "for free" when solving the dual in Alg. 1.
We round the fractional primal solution X to a discrete hierarchical clustering by thresholding: X̄_e^l ← [X_e^l > t]. We then repair (uncut) any cut edges that lie inside a connected component. In our implementation we test a few discrete thresholds t ∈ {0, 0.2, 0.4, 0.6, 0.8} and take the threshold that yields X̄ with the lowest cost. After each pass through the loop of Alg. 1 we compute these upper-bounds and retain the optimum solution observed thus far.
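A sketch of this decode-and-round step in numpy follows; the data layout is our assumption and the component-repair step is omitted:

    import numpy as np

    def decode_hierarchy(Z_hat, lam, thresholds=(0.0, 0.2, 0.4, 0.6, 0.8)):
        # Z_hat: list of L (|E|, K_l) cut matrices; lam: list of (K_l,) weights.
        L = len(Z_hat)
        frac = np.stack([Z_hat[l] @ lam[l] for l in range(L)])
        # X^l = min(1, max_{m >= l} (Z lam)^m): running max from finest layer down
        X = np.minimum(1.0, np.maximum.accumulate(frac[::-1], axis=0)[::-1])
        # one discrete hierarchy per candidate threshold (repair step omitted)
        return X, {t: (X > t).astype(int) for t in thresholds}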
[Figure 3 plots: (a) precision versus recall on BSDS for UCM, UCM-L, and UM; (b) maximum F-measure versus time (sec) for UM, UCM-L, and UCM.]
Figure 3: (a) Boundary detection performance of our closest ultrametric algorithm (UM) and the baseline ultrametric contour maps algorithm with (UCM) and without (UCM-L) length weighting [5] on BSDS. Black circles indicate thresholds used in the closest UM optimization. (b) Anytime performance: F-measure on the BSDS benchmark as a function of run-time. UM, UCM with and without length weighting achieve a maximum F-measure of 0.728, 0.726, and 0.718 respectively.
7 Experiments
We applied our algorithm to segmenting images from the Berkeley Segmentation Data set (BSDS) [16]. We use superpixels generated by performing an oriented watershed transform on the output of the global probability of boundary (gPb) edge detector [17] and construct a planar graph whose vertices are superpixels, with edges connecting neighbors in the image plane whose base distance θ is derived from gPb.
Let gPb_e be the local estimate of boundary contrast given by averaging the gPb classifier output over the boundary between a pair of neighboring superpixels. We truncate extreme values to enforce that gPb_e ∈ [ε, 1 − ε] with ε = 0.001 and set \theta_e = \log \frac{gPb_e}{1 - gPb_e} + \log \frac{1 - \varepsilon}{\varepsilon}. The additive offset assures that θ_e ≥ 0. In our experiments we use a fixed set of eleven distance threshold levels {δ^l} chosen to uniformly span the useful range of threshold values [9.6, 12.6]. Finally, we weighted edges proportionally to the length of the corresponding boundary in the image.
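This weight construction is simple to implement; a minimal numpy sketch, assuming the averaged gPb values are supplied as an array:

    import numpy as np

    def edge_weights_from_gpb(gpb, eps=0.001):
        # Truncate to [eps, 1 - eps], take the log-odds, and add the
        # constant log((1 - eps) / eps) so that every weight is >= 0.
        g = np.clip(gpb, eps, 1.0 - eps)
        return np.log(g / (1.0 - g)) + np.log((1.0 - eps) / eps)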
We performed dual cutting plane iterations until convergence or until 2000 seconds had passed. Lower-bounds for the BSDS segmentations were on the order of −10³ or −10⁴. We terminate when the total residual is greater than −2 × 10⁻⁴. All codes were written in MATLAB using the Blossom V implementation of minimum-weight perfect matching [15] and the IBM ILOG CPLEX LP solver with default options.
Baseline: We compare our results with the hierarchical clusterings produced by the Ultrametric
Contour Map (UCM) [5]. UCM performs agglomerative clustering of superpixels and assigns the
length-weighted averaged gPb value as the distance between each pair of merged regions. While
UCM was not explicitly designed to find the closest ultrametric, it provides a strong baseline for
hierarchical clustering. To compute the closest l-level ultrametric corresponding to the UCM clustering result, we solve the minimization in Eq. 6 while restricting each multicut to be the partition
at some level of the UCM hierarchy.
Convergence and Timing: Figure 2 shows the average behavior of convergence as a function of
runtime. We found that the upper-bound given by the cost of the decoded integer solution and the lower-bound estimated by the dual LP are very close. The integrality gap is typically within 0.1% of the lower-bound and never more than 1%. Convergence of the dual is achieved quite rapidly; most
instances require less than 100 iterations to converge with roughly linear growth in the size of the
LP at each iteration as cutting planes are added. In Fig 2 we display a histogram, computed over test
image problem instances, of the cost of UCM solutions relative to those produced by closest ultrametric (UM) estimated by our method. A ratio of less than 1 indicates that our approach generated
a solution with a lower distortion ultrametric. In no problem instance did UCM outperform our UM
algorithm.
[Figure 4 images: example segmentations produced by UM and MC on two images.]
Figure 4: The proposed closest ultrametric (UM) enforces consistency across levels while performing independent multi-cut clustering (MC) at each threshold does not guarantee a hierarchical segmentation (c.f. first image, columns 3 and 4). In the second image, hierarchical segmentation (UM) better preserves semantic parts of the two birds while correctly merging the background regions.
Segmentation Quality: Figure 3 shows the segmentation benchmark accuracy of our closest ultrametric algorithm (denoted UM) along with the baseline ultrametric contour maps algorithm (UCM)
with and without length weighting [5]. In terms of segmentation accuracy, UM performs nearly identically to the state of the art UCM algorithm with some small gains in the high-precision regime. It
is worth noting that the BSDS benchmark does not provide strong penalties for small leaks between
two segments when the total number of boundary pixels involved is small. Our algorithm may find
strong application in domains where the local boundary signal is noisier (e.g., biological imaging)
or when under-segmentation is more heavily penalized.
While our cutting-plane approach is slower than agglomerative clustering, it is not necessary to wait
for convergence in order to produce high quality results. We found that while the upper and lower
bounds decrease as a function of time, the clustering performance as measured by precision-recall
is often nearly optimal after only ten seconds and remains stable. Figure 3 shows a plot of the
F-measure achieved by UM as a function of time.
Importance of enforcing hierarchical constraints: Although independently finding multicuts at
different thresholds often produces hierarchical clusterings, this is by no means guaranteed. We ran
Algorithm 1 while setting γ_e^l = 0, allowing each layer to be solved independently. Fig 4 shows examples where hierarchical constraints between layers improve segmentation quality relative to
independent clustering at each threshold.
8 Conclusion
We have introduced a new method for approximating the closest ultrametric on planar graphs that
is applicable to hierarchical image segmentation. Our contribution is a dual cutting plane approach
that exploits the introduction of novel slack terms that allow for representing a much larger space of
solutions with relatively few cutting planes. This yields an efficient algorithm that provides rigorous
bounds on the quality of the resulting solution. We empirically observe that our algorithm rapidly
produces compelling image segmentations along with lower- and upper-bounds that are nearly tight
on the benchmark BSDS test data set.
Acknowledgements: JY acknowledges the support of Experian; CF acknowledges the support of NSF grants IIS-1253538 and DBI-1262547.
References
[1] Nir Ailon and Moses Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. In Foundations of Computer Science, pages 73–82, 2005.
[2] Bjoern Andres, Joerg H. Kappes, Thorsten Beier, Ullrich Kothe, and Fred A. Hamprecht. Probabilistic image segmentation with closedness constraints. In Proc. of ICCV, pages 2611–2618, 2011.
[3] Bjoern Andres, Thorben Kroger, Kevin L. Briggman, Winfried Denk, Natalya Korogod, Graham Knott, Ullrich Kothe, and Fred A. Hamprecht. Globally optimal closed-surface segmentation for connectomics. In Proc. of ECCV, 2012.
[4] Bjoern Andres, Julian Yarkony, B. S. Manjunath, Stephen Kirchhoff, Engin Turetken, Charless Fowlkes, and Hanspeter Pfister. Segmenting planar superpixel adjacency graphs w.r.t. nonplanar superpixel affinity graphs. In Proc. of EMMCVPR, 2013.
[5] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, May 2011.
[6] Yoram Bachrach, Pushmeet Kohli, Vladimir Kolmogorov, and Morteza Zadimoghaddam. Optimal coalition structure generation in cooperative graph games. In Proc. of AAAI, 2013.
[7] Shai Bagon and Meirav Galun. Large scale correlation clustering. CoRR, abs/1112.2903, 2011.
[8] F. Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical, Nuclear and General, 15(10):3241–3253, April 1982.
[9] F. Barahona. On cuts and matchings in planar graphs. Mathematical Programming, 36(2):53–68, November 1991.
[10] F. Barahona and A. Mahjoub. On the cut polytope. Mathematical Programming, 60(1-3):157–173, September 1986.
[11] Thorsten Beier, Thorben Kroeger, Jorg H. Kappes, Ullrich Kothe, and Fred A. Hamprecht. Cut, glue, and cut: A fast, approximate solver for multicut partitioning. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 73–80, 2014.
[12] Michel Deza and Monique Laurent. Geometry of cuts and metrics, volume 15. Springer Science & Business Media, 1997.
[13] Michael Fisher. On the dimer solution of planar Ising models. Journal of Mathematical Physics, 7(10):1776–1781, 1966.
[14] Sungwoong Kim, Sebastian Nowozin, Pushmeet Kohli, and Chang Dong Yoo. Higher-order correlation clustering for image segmentation. In Advances in Neural Information Processing Systems 25, pages 1530–1538, 2011.
[15] Vladimir Kolmogorov. Blossom V: a new implementation of a minimum cost perfect matching algorithm. Mathematical Programming Computation, 1(1):43–67, 2009.
[16] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. of ICCV, pages 416–423, 2001.
[17] David Martin, Charless C. Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell., 26(5):530–549, May 2004.
[18] Julian Yarkony. Analyzing PlanarCC. NIPS 2014 workshop, 2014.
[19] Julian Yarkony, Thorsten Beier, Pierre Baldi, and Fred A. Hamprecht. Parallel multicut segmentation via dual decomposition. In New Frontiers in Mining Complex Patterns, 2014.
[20] Julian Yarkony, Alexander Ihler, and Charless Fowlkes. Fast planar correlation clustering for image segmentation. In Proc. of ECCV, 2012.
[21] Chong Zhang, Julian Yarkony, and Fred A. Hamprecht. Cell detection and segmentation using correlation clustering. In MICCAI, volume 8673, pages 9–16, 2014.
Learning Bayesian Networks with Thousands of Variables
Mauro Scanagatta
IDSIA, SUPSI, USI
Lugano, Switzerland
[email protected]

Cassio P. de Campos
Queen's University Belfast
Northern Ireland, UK
[email protected]

Giorgio Corani
IDSIA, SUPSI, USI
Lugano, Switzerland
[email protected]

Marco Zaffalon
IDSIA
Lugano, Switzerland
[email protected]
Abstract
We present a method for learning Bayesian networks from data sets containing
thousands of variables without the need for structure constraints. Our approach
is made of two parts. The first is a novel algorithm that effectively explores the
space of possible parent sets of a node. It guides the exploration towards the
most promising parent sets on the basis of an approximated score function that
is computed in constant time. The second part is an improvement of an existing
ordering-based algorithm for structure optimization. The new algorithm provably
achieves a higher score compared to its original formulation. Our novel approach
consistently outperforms the state of the art on very large data sets.
1 Introduction
Learning the structure of a Bayesian network from data is NP-hard [2]. We focus on score-based
learning, namely finding the structure which maximizes a score that depends on the data [9]. Several
exact algorithms have been developed based on dynamic programming [12, 17], branch and bound
[7], linear and integer programming [4, 10], and the shortest-path heuristic [19, 20].
Usually structural learning is accomplished in two steps: parent set identification and structure
optimization. Parent set identification produces a list of suitable candidate parent sets for each
variable. Structure optimization assigns a parent set to each node, maximizing the score of the
resulting structure without introducing cycles.
The problem of parent set identification is unlikely to admit a polynomial-time algorithm with a
good quality guarantee [11]. This motivates the development of effective search heuristics. Usually
however one decides the maximum in-degree (number of parents per node) k and then simply computes the score of all parent sets. At that point one performs structural optimization. An exception is
the greedy search of the K2 algorithm [3], which has however been superseded by the more modern
approaches mentioned above.
A higher in-degree implies a larger search space and allows achieving a higher score; however it
also requires higher computational time. When choosing the in-degree the user makes a trade-off
between these two objectives. However when the number of variables is large, the in-degree is
[Affiliation footnotes: IDSIA: Istituto Dalle Molle di studi sull'Intelligenza Artificiale; SUPSI: Scuola universitaria professionale della Svizzera italiana; USI: Università della Svizzera italiana.]
generally set to a small value, to allow the optimization to be feasible. The largest data set analyzed
in [1] with the Gobnilp¹ software contains 413 variables; it is analyzed setting k = 2. In [5] Gobnilp
is used for structural learning with 1614 variables, setting k = 2. These are among the largest
examples of score-based structural learning in the literature.
In this paper we propose an algorithm that performs approximated structure learning with thousands
of variables without constraints on the in-degree. It is constituted by a novel approach for parent set
identification and a novel approach for structure optimization.
As for parent set identification we propose an anytime algorithm that effectively explores the space
of possible parent sets. It guides the exploration towards the most promising parent sets, exploiting
an approximated score function that is computed in constant time. As for structure optimization,
we extend the ordering-based algorithm of [18], which provides an effective approach for model
selection with reduced computational cost. Our algorithm is guaranteed to find a solution better than
or equal to that of [18].
We test our approach on data sets containing up to ten thousand variables. As a performance indicator we consider the score of the network found. Our parent set identification approach consistently outperforms the usual approach of setting the maximum in-degree and then computing the score of all parent sets. Our structure optimization approach outperforms Gobnilp when learning with more than 500 nodes. All the software and data sets used in the experiments are available online.²
2 Structure Learning of Bayesian Networks
Consider the problem of learning the structure of a Bayesian Network from a complete data set of N
instances D = {D1 , ..., DN }. The set of n categorical random variables is X = {X1 , ..., Xn }. The
goal is to find the best DAG G = (V, E), where V is the collection of nodes and E is the collection
of arcs. E can be defined as the set of parents Π₁, ..., Πₙ of each variable. Different scores can be
used to assess the fit of a DAG. We adopt the BIC, which asymptotically approximates the posterior
probability of the DAG. The BIC score is decomposable, namely it is constituted by the sum of the
scores of the individual variables:
\mathrm{BIC}(G) \;=\; \sum_{i=1}^{n} \mathrm{BIC}(X_i, \Pi_i) \;=\; \sum_{i=1}^{n} \Bigg( \sum_{\pi \in |\Pi_i|} \sum_{x \in |X_i|} N_{x,\pi} \log \hat{\theta}_{x|\pi} \;-\; \frac{\log N}{2} (|X_i| - 1)\,|\Pi_i| \Bigg),
where θ̂_{x|π} is the maximum likelihood estimate of the conditional probability P(X_i = x | Π_i = π), N_{x,π} represents the number of times (X = x ∧ Π_i = π) appears in the data set, and | · | indicates the size of the Cartesian product space of the variables given as arguments (instead of the number of variables), such that |X_i| is the number of states of X_i and |∅| = 1.
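For concreteness, a direct (unoptimized) computation of BIC(X, Π) from a complete discrete data set follows. This is a sketch, not the authors' implementation; cardinalities are estimated from the data, which is an assumption of ours:

    import numpy as np
    from collections import Counter

    def bic_score(data, child, parents):
        # data: (N, n) integer array; child: column index; parents: column list.
        N = data.shape[0]
        child_card = int(data[:, child].max()) + 1
        joint = Counter(map(tuple, data[:, [child] + list(parents)]))
        marg = Counter(map(tuple, data[:, list(parents)]))
        # log-likelihood: sum_{x,pi} N_{x,pi} * log(N_{x,pi} / N_pi)
        ll = sum(n * np.log(n / marg[key[1:]]) for key, n in joint.items())
        n_configs = int(np.prod([data[:, p].max() + 1 for p in parents])) \
            if parents else 1          # |Pi| = 1 for the empty parent set
        penalty = 0.5 * np.log(N) * (child_card - 1) * n_configs
        return ll - penalty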
Exploiting decomposability, we first identify independently for each variable a list of candidate
parent sets (parent set identification). Then by structure optimization we select for each node the
parent set that yields the highest score without introducing cycles.
3 Parent set identification
For parent set identification usually one explores all the possible parent sets, whose number however increases as O(n^k), where k denotes the maximum in-degree. Pruning rules [7] do not considerably reduce the size of this space.
Usually the parent sets are explored in sequential order: first all the parent sets of size one, then all the parent sets of size two, and so on, up to size k. We refer to this approach as sequential ordering. If the solver adopted for structural optimization is exact, this strategy allows finding the globally optimal graph given the chosen value of k. In order to deal with a large number of variables it is however necessary to set a low in-degree k. For instance [1] adopts k = 2 when dealing with the largest data set (diabetes), which contains 413 variables. In [5] Gobnilp is used for structural learning with 1614 variables, again setting k = 2. A higher value of k would make the structural
[Footnotes: ¹ http://www.cs.york.ac.uk/aig/sw/gobnilp/ ² http://blip.idsia.ch]
learning not feasible. Yet a low k implies dropping all the parent sets with size larger than k. Some
of them possibly have a high score.
In [18] it is proposed to adopt the subset Π_corr of the variables most correlated with the child variable. Then [18] considers only parent sets which are subsets of Π_corr. However this approach is not commonly adopted, possibly because it requires specifying the size of Π_corr. Indeed [18] acknowledges the need for further innovative approaches in order to effectively explore the space of the parent sets.
We propose two anytime algorithms to address this problem. The first is the simplest; we call it greedy selection. It starts by exploring all the parent sets of size one and adding them to a list. Then it repeats the following until the time is expired: it pops the best scoring parent set Π from the list, explores all the supersets obtained by adding one variable to Π, and adds them to the list. Note that in general the parent sets chosen at two adjoining steps are not related to each other. The second approach (independence selection) adopts a more sophisticated strategy, as explained in the following.
3.1 Parent set identification by independence selection
Independence selection uses an approximation of the actual BIC score of a parent set Π, which we denote as BIC*, to guide the exploration of the space of the parent sets. The BIC* of a parent set constituted by the union of two non-empty parent sets Π₁ and Π₂ is defined as follows:

\mathrm{BIC}^*(X, \Pi_1, \Pi_2) = \mathrm{BIC}(X, \Pi_1) + \mathrm{BIC}(X, \Pi_2) + \mathrm{inter}(X, \Pi_1, \Pi_2), \qquad (1)

with Π₁ ∩ Π₂ = ∅ and \mathrm{inter}(X, \Pi_1, \Pi_2) = \frac{\log N}{2} (|X| - 1)(|\Pi_1| + |\Pi_2| - |\Pi_1||\Pi_2| - 1) - \mathrm{BIC}(X, \emptyset).

If we already know BIC(X, Π₁) and BIC(X, Π₂) from previous calculations (and we know BIC(X, ∅)), then BIC* can be computed in constant time (with respect to data accesses). We thus exploit BIC* to quickly estimate the score of a large number of candidate parent sets and to decide the order in which to explore them.
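Given cached scores, Eq 1 is a constant-time formula. A minimal sketch follows; the function and argument names are ours:

    import numpy as np

    def bic_star(bic1, bic2, bic_empty, N, card_x, card_pi1, card_pi2):
        # bic1, bic2, bic_empty: cached BIC(X, Pi1), BIC(X, Pi2), BIC(X, {}).
        # card_*: sizes of the joint state spaces |X|, |Pi1|, |Pi2|.
        inter = 0.5 * np.log(N) * (card_x - 1) * (
            card_pi1 + card_pi2 - card_pi1 * card_pi2 - 1) - bic_empty
        return bic1 + bic2 + inter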
We provide a bound for the difference between BIC*(X, Π₁, Π₂) and BIC(X, Π₁ ∪ Π₂). To this end, we denote by ii the Interaction Information [14]: ii(X; Y; Z) = I(X; Y | Z) − I(X; Y), namely the difference between the mutual information of X and Y conditional on Z and the unconditional mutual information of X and Y.
Theorem 1. Let X be a node of G and Π = Π₁ ∪ Π₂ be a parent set for X with Π₁ ∩ Π₂ = ∅ and Π₁, Π₂ non-empty. Then BIC(X, Π) = BIC*(X, Π₁, Π₂) + N · ii(Π₁; Π₂; X), where ii is the Interaction Information estimated from data.
Proof.
\mathrm{BIC}(X, \Pi_1 \cup \Pi_2) - \mathrm{BIC}^*(X, \Pi_1, \Pi_2)
= \mathrm{BIC}(X, \Pi_1 \cup \Pi_2) - \mathrm{BIC}(X, \Pi_1) - \mathrm{BIC}(X, \Pi_2) - \mathrm{inter}(X, \Pi_1, \Pi_2)
= \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \big[ \log \hat{\theta}_{x|\pi_1,\pi_2} - \log( \hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2} ) \big] + \sum_x N_x \log \hat{\theta}_x
= \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \Big[ \log \hat{\theta}_{x|\pi_1,\pi_2} - \log \frac{\hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2}}{\hat{\theta}_x} \Big]
= \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \log \frac{\hat{\theta}_{x|\pi_1,\pi_2} \, \hat{\theta}_x}{\hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2}}
= N \sum_{x,\pi_1,\pi_2} \hat{\theta}_{x,\pi_1,\pi_2} \log \Big( \frac{\hat{\theta}_{\pi_1,\pi_2|x}}{\hat{\theta}_{\pi_1|x} \hat{\theta}_{\pi_2|x}} \cdot \frac{\hat{\theta}_{\pi_1} \hat{\theta}_{\pi_2}}{\hat{\theta}_{\pi_1,\pi_2}} \Big)
= N \Big( \sum_{x,\pi_1,\pi_2} \hat{\theta}_{x,\pi_1,\pi_2} \log \frac{\hat{\theta}_{\pi_1,\pi_2|x}}{\hat{\theta}_{\pi_1|x} \hat{\theta}_{\pi_2|x}} - \sum_{\pi_1,\pi_2} \hat{\theta}_{\pi_1,\pi_2} \log \frac{\hat{\theta}_{\pi_1,\pi_2}}{\hat{\theta}_{\pi_1} \hat{\theta}_{\pi_2}} \Big)
= N \big( I(\Pi_1; \Pi_2 | X) - I(\Pi_1; \Pi_2) \big) = N \cdot \mathrm{ii}(\Pi_1; \Pi_2; X),
where I(·) denotes the (conditional) mutual information estimated from data.
Corollary 1. Let X be a node of G, and Π = Π₁ ∪ Π₂ be a parent set of X such that Π₁ ∩ Π₂ = ∅ and Π₁, Π₂ non-empty. Then
|\mathrm{BIC}(X, \Pi) - \mathrm{BIC}^*(X, \Pi_1, \Pi_2)| \le N \min\{H(X), H(\Pi_1), H(\Pi_2)\}.
Proof. Theorem 1 states that BIC(X, Π) = BIC*(X, Π₁, Π₂) + N · ii(Π₁; Π₂; X). We now devise bounds for the interaction information, recalling that mutual information and conditional mutual information are always non-negative and achieve their maximum value at the smallest entropy H of their arguments: −H(Π₂) ≤ −I(Π₁; Π₂) ≤ ii(Π₁; Π₂; X) ≤ I(Π₁; Π₂ | X) ≤ H(Π₂). The corollary is proven by simply permuting the values Π₁; Π₂; X in the ii of this equation. Since ii(Π₁; Π₂; X) = I(Π₁; Π₂ | X) − I(Π₁; Π₂) = I(X; Π₁ | Π₂) − I(X; Π₁) = I(Π₂; X | Π₁) − I(Π₂; X), the bounds for ii are valid.
We know that 0 ≤ H(Π) ≤ log(|Π|) for any set of nodes Π, hence the result of Corollary 1 could be further manipulated to achieve a bound for the difference between BIC and BIC* of at most N log(min{|X|, |Π₁|, |Π₂|}). However, Corollary 1 is stronger and can still be computed efficiently as follows. When computing BIC*(X, Π₁, Π₂), we assumed that BIC(X, Π₁) and BIC(X, Π₂) had been precomputed. As such, we can also have precomputed the values H(Π₁) and H(Π₂) at the same time as the BIC scores were computed, without any significant increase of complexity (when computing BIC(X, Π) for a given Π, just use the same loop over the data to compute H(Π)).
Corollary 2. Let X be a node of G, and Π = Π₁ ∪ Π₂ be a parent set for that node with Π₁ ∩ Π₂ = ∅ and Π₁, Π₂ non-empty. If Π₁ ⊥ Π₂, then BIC(X, Π₁ ∪ Π₂) ≥ BIC*(X, Π₁, Π₂). If Π₁ ⊥ Π₂ | X, then BIC(X, Π₁ ∪ Π₂) ≤ BIC*(X, Π₁, Π₂). If the interaction information ii(Π₁; Π₂; X) = 0, then BIC(X, Π₁ ∪ Π₂) = BIC*(X, Π₁, Π₂).
Proof. It follows from Theorem 1, considering that the mutual information I(Π₁, Π₂) = 0 if Π₁ and Π₂ are independent, while I(Π₁, Π₂ | X) = 0 if Π₁ and Π₂ are conditionally independent.
We now devise a novel pruning strategy for BIC based on the bounds of Corollaries 1 and 2.
Theorem 2. Let X be a node of G, and Π = Π₁ ∪ Π₂ be a parent set for that node with Π₁ ∩ Π₂ = ∅ and Π₁, Π₂ non-empty. Let Π′ ⊇ Π. If \mathrm{BIC}^*(X, \Pi_1, \Pi_2) + \frac{\log N}{2} (|X| - 1)|\Pi'| > N \min\{H(X), H(\Pi_1), H(\Pi_2)\}, then Π′ and its supersets are not optimal and can be ignored.
Proof. By Corollary 1, BIC*(X, Π₁, Π₂) − N min{H(X), H(Π₁), H(Π₂)} + (log N / 2)(|X| − 1)|Π′| > 0 implies BIC(X, Π) + (log N / 2)(|X| − 1)|Π′| > 0, and Theorem 4 of [6] prunes Π′ and all its supersets.
Thus we can efficiently check whether large parts of the search space can be discarded based on these results. We note that Corollary 1 and hence Theorem 2 are very generic in the choice of Π₁ and Π₂, even though usually one of them is taken as a singleton.
3.2 Independence selection algorithm
We now describe the algorithm that exploits the BIC* score in order to effectively explore the space of the parent sets. It uses two lists: (1) open: a list of the parent sets to be explored, ordered by their BIC* score; (2) closed: a list of already explored parent sets, along with their actual BIC score. The algorithm starts with the BIC of the empty set computed. First it explores all the parent sets of size one and saves their BIC scores in the closed list. Then it adds to the open list every parent set of size two, computing their BIC* scores in constant time on the basis of the scores available from the closed list. It then proceeds as follows until all elements in open have been processed, or the time is expired. It extracts from open the parent set Π with the best BIC* score; it computes its BIC score and adds it to the closed list. It then looks for all the possible expansions of Π obtained by adding a single variable Y, such that Π ∪ {Y} is not present in open or closed. It adds them to open with their BIC*(X, Π, Y) scores. Eventually it also considers all the explored subsets of Π. It safely [7] prunes Π if any of its subsets yields a higher BIC score than Π. The algorithm returns the content of the closed list, pruned and ordered by the BIC score. This list becomes the content of the so-called cache of scores for X. The procedure is repeated for every variable and can be easily parallelized.
Figure 1 compares sequential ordering and independence selection. It shows that independence
selection is more effective than sequential ordering because it biases the search towards the highestscoring parent sets.
4
Structure optimization
The goal of structure optimization is to choose the overall highest scoring parent sets (measured
by the sum of the local scores) without introducing directed cycles in the graph. We start from the
approach proposed in [18] (which we call ordering-based search or OBS), which exploits the fact
4
?1,400
?1,600
?1,600
BIC
BIC
?1,400
?1,800
?2,000
?1,800
?2,000
500
1,000
500
Iteration
1,000
Iteration
(a) Sequential ordering.
(b) Indep. selection ordering.
Figure 1: Exploration of the parent sets space for a given variable performed by sequential ordering
and independence selection. Each point refers to a distinct parent set.
Pn
that the optimal network can be found in time O(Ck), where C = i=1 ci and ci is the number of
elements in the cache of scores of Xi , if an ordering over the variables is given.3 ?(k) is needed to
check whether all the variables in a parent set for X come before X in the ordering (a simple array
can be used as data structure for this checking). This implies working on the search space of the
possible orderings, which is convenient as it is smaller than the space of network structures. Multiple
orderings are sampled and evaluated (different techniques can be used for guiding the sampling). For
each sampled total ordering ? over variables X1 , . . . , Xn , the network is consistent with the order
if ?Xi : ?X ? ?i : X ? Xi . A network consistent with a given ordering automatically satisfies
the acyclicity constraint. This allows us to choose independently the best parent set of each node.
Moreover, for a given total ordering V1 , . . . , Vn of the variables, the algorithm tries to improve the
network by a greedy search swapping procedure: if there is a pair Vj , Vj+1 such that the swapped
ordering with Vj in place of Vj+1 (and vice versa) yields better score for the network, then these
nodes are swapped and the search continues. One advantage of this swapping over extra random
orderings is that searching for it and updating the network (if a good swap is found) only takes time
O((cj + cj+1 ) ? kn) (which can be sped up as cj only is inspected for parents sets containing Vj+1 ,
and cj+1 is only processed if Vj+1 has Vj as parent in the current network), while a new sampled
ordering would take O(n + Ck) (the swapping approach is usually favourable if ci is ?(n), which is
a plausible assumption). We emphasize that the use of k here is sole with the purpose of analyzing
the complexity of the methods, since our parent set identification approach does not rely on a fixed
value for k.
However, the consistency rule of OBS is quite restricting. While it surely refuses all cyclic structures,
it also rules out some acyclic ones which could be captured by interpreting the ordering in a slightly
different manner. We propose a novel consistency rule for a given ordering which processes the
nodes in V1 , . . . , Vn from Vn to V1 (OBS can do it in any order, as the local parent sets can be
chosen independently) and we define the parent set of Vj such that it does not introduce a cycle in
the current partial network. This allows back-arcs in the ordering from a node Vj to its successors, as
long as this does not introduce a cycle. We call this idea acyclic selection OBS (or simply ASOBS).
Because we need to check for cycles at each step of constructing the network for a given ordering, at
a first glance the algorithm seems to be slower (time complexity of O(Cn) against O(Ck) for OBS;
note this difference is only relevant as we intend to work with large values n). Surprisingly, we can
implement it in the same overall time complexity of O(Ck) as follows.
1. Build and keep a Boolean square matrix m to mark which are the descendants of nodes
(m(X, Y ) tells whether Y is descendant of X). Start it all false.
2. For each node Vj in the order, with j = n, . . . , 1:
(a) Go through the parent sets and pick the best scoring one for which all contained parents are not descendants of Vj (this takes time O(ci k) if parent sets are kept as lists).
(b) Build a todo list with the descendants of Vj from the matrix representation and associate an empty todo list to all ancestors of Vj .
(c) Start the todo lists of the parents of Vj with the descendants of Vj .
(d) For each ancestor X of Vj (ancestors will be iteratively visited by following a depthfirst graph search procedure using the network built so far; we process a node after
3
O(?), ?(?) and ?(?) shall be understood as usual asymptotic notation functions.
5
its children with non-empty todo lists have been already processed; the search stops
when all ancestors are visited):
i. For each element Y in the todo list of X, if m(X, Y ) is true, then ignore Y and
move on; otherwise set m(X, Y ) to true and add Y to the todo of parents of X.
Let us analyze the complexity of the method. Step 2a takes overall time O(Ck) (already considering
the outer loop). Step 2b takes overall time O(n2 ) (already considering the outer loop). Steps 2c
and 2(d)i will be analyzed based on the number of elements on the todo lists and the time to process
them in an amortized way. Note that the time complexity is directly related to the number of elements
that are processed from the todo lists (we can simply look to the moment that they leave a list, as
their inclusion in the lists will be in equal number). We will now count the number of times we
process an element from a todo list. This number is overall bounded (over all external loop cycles)
by the number of times we can make a cell of matrix m turn from false to true (which is O(n2 )) plus
the number of times we ignore an element because the matrix cell was already set to true (which is
at most O(n) per each Vj , as this is the maximum number of descendants of Vj and each of them
can fall into this category only once, so again there are O(n2 ) times in total). In other words, each
element being removed from a todo list is either ignored (matrix already set to true) or an entry
in the matrix of descendants is changed from false to true, and this can only happen O(n2 ) times.
Hence the total time complexity is O(Ck + n2 ), which is O(Ck) for any C greater than n2 /k (a
very plausible scenario, as each local cache of a variable usually has more than n/k elements).
Moreover, we have the following interesting properties of this new method.
Theorem 3. For a given ordering ?, the network obtained by ASOBS has score equal than or
greater to that obtained by OBS.
Proof. It follows immediately from the fact that the consistency rule of ASOBS generalizes that of
OBS, that is, for each node Vj with j = n, . . . , 1, ASOBS allows all parent sets allowed by OBS
and also others (containing back-arcs).
Theorem 4. For a given ordering ? defined by V1 , . . . , Vn and a current graph G consistent with
?, if OBS consistency rule allows the swapping of Vj , Vj+1 and leads to improving the score of G,
then the consistency rule of ASOBS allows the same swapping and achieves the same improvement
in score.
Proof. It follows immediately from the fact that the consistency rule of ASOBS generalizes that of
OBS, so from a given graph G, if a swapping is possible under OBS rules, then it is also possible
under ASOBS rules.
5
Experiments
We compare three different approaches for parent set identification (sequential, greedy selection and
independence selection) and three different approaches (Gobnilp, OBS and ASOBS) for structure
optimization. This yields nine different approaches for structural learning, obtained by combining
all the methods for parent set identification and structure optimization. Note that OBS has been
shown in [18] to outperform other greedy-tabu search over structures, such as greedy hill-climbing
and optimal-reinsertion-search methods [15].
We allow one minute per variable to each approach for parent set identification. We set the maximum
in-degree to k = 6, a high value that allows learning even complex structures. Notice that our novel
approach does not need a maximum in-degree. We set a maximum in-degree to put our approach
and its competitors on the same ground. Once computed the scores of the parent sets we run each
solver (Gobnilp, OBS, ASOBS) for 24 hours. For a given data set the computation is performed on
the same machine.
The explicit goal of each approach for both parent set identification and structure optimization is
to maximize the BIC score. We then measure the BIC score of the Bayesian networks eventually
obtained as performance indicator. The difference in the BIC score between two alternative networks
is an asymptotic approximation of the logarithm of the Bayes factor. The Bayes factor is the ratio
of the posterior probabilities of two competing models. Let us denote by ?BIC1,2 =BIC1 -BIC2 the
difference between the BIC score of network 1 and network 2. Positive values of ?BIC1,2 imply
6
Data set
n
Data set
n
Data set
n
Data set
n
Audio
Jester
Netflix
Accidents
100
100
100
111
Retail
Pumsb-star
DNA
Kosarek
135
163
180
190
MSWeb
Book
EachMovie
WebKB
294
500
500
839
Reuters-52
C20NG
BBC
Ad
889
910
1058
1556
Table 1: Data sets sorted according to the number n of variables.
evidence in favor of network 1. The evidence in favor of network 1 is respectively [16] {weak,
positive, strong, very strong} if ?BIC1,2 is between {0 and 2; 2 and 6; 6 and 10 ; beyond 10}.
5.1
Learning from datasets
We consider 16 data sets already used in the literature of structure learning, firstly introduced in [13]
and [8]. We randomly split each data set into three subsets of instances. This yields 48 data sets.
The approaches for parent set identification are compared in Table 2. For each fixed structure optimization approach, we learn the network starting from the list of parent sets computed by independence selection (IS), greedy selection (GS) and sequential selection (SQ). In turn we analyze
?BICIS,GS and ?BICIS,SQ . A positive ?BIC means that independence selection yields a network
with higher BIC score than the network obtained using an alternative approach for parent set identification; vice versa for negative values of ?BIC. In most cases (see Table 2) ?BIC>10, implying
very strong support for the network learned using independence selection. We further analyze the
results through a sign-test. The null hypothesis of the test is that the BIC score of the network
learned under independence selection is smaller than or equivalent to the BIC score of the network
learned using the alternative approach (greedy selection or sequential selection depending on the
case). If a data set yields a ?BIC which is {very negative, strongly negative, negative, neutral}, it
supports the null hypothesis. If a data sets yields a BIC score which is {positive, strongly positive,
extremely positive}, it supports the alternative hypothesis. Under any fixed structure solver, the sign
test rejects the null hypothesis, providing significant evidence in favor of independence selection.
In the following when we further cite the sign test we refer to same type of analysis: the sign test
analyzes the counts of the ?BIC which are in favor and against a given method.
As for structure optimization, ASOBS achieves higher BIC score than OBS in all the 48 data sets,
under every chosen approach for parent set identification. These results confirm the improvement of
ASOBS over OBS, theoretically proven in Section 4. In most cases the ?BIC in favor of ASOBS
is larger than 10. The difference in favor of ASOBS is significant (sign test, p < 0.01) under every
chosen approach for parent set identification.
We now compare ASOBS and Gobnilp. On the smaller data sets (27 data sets with n < 500),
Gobnilp significantly outperforms (sign test, p < 0.01) ASOBS under every chosen approach for
parent set identification. On most of such data sets, the ?BIC in favor of the network learned by
Gobnilp is larger than 10. This outcome is expected, as Gobnilp is an exact solver and those data
Gobnilp
GS
SQ
ASOBS
GS
SQ
GS
SQ
?BIC (K)
Very positive (K >10)
Strongly positive (6<K <10)
Positive (2 <K <6)
Neutral (-2 <K <2)
Negative (-6 <K <-2)
Strongly negative (-10 <K <-6)
Very negative (K <-10)
44
0
0
2
0
1
1
38
0
4
3
1
1
1
44
0
2
0
2
0
0
30
4
3
4
1
5
1
44
1
0
2
0
0
1
32
0
2
4
2
4
4
p-value
<0.01
<0.01
<0.01
<0.01
<0.01
<0.01
structure solver
parent identification: IS vs
OBS
Table 2: Comparison of the approaches for parent set identification on 48 data sets. Given any fixed
solver for structural optimization, IS results in significantly higher BIC scores than both GS and SQ.
7
parent identification
structure solver: AS vs
Independence sel.
GP
OB
Forward sel
GP
OB
Sequential sel.
GP
OB
?BIC (K)
Very positive (K >10)
Strongly positive (6<K<10)
Positive (2<K<6)
Neutral (-2<K<2)
Negative (-6<K<-2)
Strongly negative (-10<K<-6)
Very negative (K<-10)
21
0
0
0
0
0
0
21
0
0
0
0
0
0
20
0
0
0
0
0
1
21
0
0
0
0
0
0
19
0
0
0
0
0
2
21
0
0
0
0
0
0
p-value
<0.01
<0.01
<0.01
<0.01
<0.01
<0.01
Table 3: Comparison between the structure optimization approaches on the 21 data sets with n ?
500. ASOBS (AS) outperforms both Gobnilp (GB) and OBS (OB), under any chosen approach for
parent set identification.
sets imply a relatively reduced search space. However the focus of this paper is on large data sets.
On the 21 data sets with n ? 500, ASOBS outperforms Gobnilp (sign test, p < 0.01) under every
chosen approach for parent set identification (Table 3).
5.2
Learning from data sets sampled from known networks
In the next experiments we create data sets by sampling from known networks. We take the largest
networks available in the literature: 4 andes (n=223), diabetes (n=413), pigs (n=441), link (n=724),
munin (n=1041). Additionally we randomly generate other 15 networks: five networks of size
2000, five networks of size 4000, five networks of size 10000. Each variable has a number of states
randomly drawn from 2 to 4 and a number of parents randomly drawn from 0 to 6. Overall we
consider 20 networks. From each network we sample a data set of 5000 instances.
We perform experiments and analysis as in the previous section. For the sake of brevity we do not
add further tables of results. As for parent set identification, independence selection outperforms
both greedy selection and sequential selection. The difference in favor of independence selection
is significant (sign test, p-value <0.01) under every chosen structure optimization approach. The
?BIC of the learned network is >10 in most cases. Take for instance Gobnilp for structure optimization. Then independence selection yields a ?BIC>10 in 18/20 cases when compared to GS
and ?BIC>10 in 19/20 cases when compared to SQ. Similar results are obtained using the other
solvers for structure optimization.
Strong results support also ASOBS against OBS and Gobnilp. Under every approach for parent set
identification, ?BIC>10 is obtained in 20/20 cases when comparing ASOBS and OBS. The number
of cases in which ASOBS obtains ?BIC>10 when compared against Gobnilp ranges between 17/20
and 19/20 depending on the approach adopted for parent set selection. The superiority of ASOBS
over both OBS and Gobnilp is significant (sign test, p < 0.01) under every approach for parent set
identification.
Moreover, we measured the Hamming distance between the moralized true structure and the learned
structure. On the 21 data sets with n ? 500 ASOBS outperforms Gobnilp and OBS and IS outperforms GS and SQ (sign test, p < 0.01). The novel framework is thus superior in terms of both score
and correctness of the retrieved structure.
6
Conclusion and future work
Our novel approximated approach for structural learning of Bayesian Networks scales up to thousands of nodes without constraints on the maximum in-degree. The current results refer to the BIC
score, but in future the methodology could be extended to other scoring functions.
Acknowledgments
Work partially supported by the Swiss NSF grant n. 200021 146606 / 1.
4
http://www.bnlearn.com/bnrepository/
8
References
[1] M. Bartlett and J. Cussens. Integer linear programming for the Bayesian network structure
learning problem. Artificial Intelligence, 2015. in press.
[2] D. M. Chickering, C. Meek, and D. Heckerman. Large-sample learning of Bayesian networks
is hard. In Proceedings of the 19st Conference on Uncertainty in Artificial Intelligence, UAI03, pages 124?133. Morgan Kaufmann, 2003.
[3] G. F. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic networks
from data. Machine Learning, 9(4):309?347, 1992.
[4] J. Cussens. Bayesian network learning with cutting planes. In Proceedings of the 27st Conference Annual Conference on Uncertainty in Artificial Intelligence, UAI-11, pages 153?160.
AUAI Press, 2011.
[5] J. Cussens, B. Malone, and C. Yuan. IJCAI 2013 tutorial on optimal algorithms for learning
Bayesian networks (https://sites.google.com/site/ijcai2013bns/slides), 2013.
[6] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints.
Journal of Machine Learning Research, 12:663?689, 2011.
[7] C. P. de Campos, Z. Zeng, and Q. Ji. Structure learning of Bayesian networks using constraints.
In Proceedings of the 26st Annual International Conference on Machine Learning, ICML-09,
pages 113?120, 2009.
[8] J. V. Haaren and J. Davis. Markov network structure learning: A randomized feature generation
approach. In Proceedings of the 26st AAAI Conference on Artificial Intelligence, 2012.
[9] D. Heckerman, D. Geiger, and D.M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197?243, 1995.
[10] T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian Network Structure
using LP Relaxations. In Proceedings of the 13st International Conference on Artificial Intelligence and Statistics, AISTATS-10, pages 358?365, 2010.
[11] M. Koivisto. Parent assignment is hard for the MDL, AIC, and NML costs. In Proceedings of
the 19st annual conference on Learning Theory, pages 289?303. Springer-Verlag, 2006.
[12] M. Koivisto and K. Sood. Exact Bayesian Structure Discovery in Bayesian Networks. Journal
of Machine Learning Research, 5:549?573, 2004.
[13] D. Lowd and J. Davis. Learning Markov network structure with decision trees. In Geoffrey I.
Webb, Bing Liu 0001, Chengqi Zhang, Dimitrios Gunopulos, and Xindong Wu, editors, Proceedings of the 10st Int. Conference on Data Mining (ICDM2010), pages 334?343, 2010.
[14] W. J. McGill. Multivariate information transmission. Psychometrika, 19(2):97?116, 1954.
[15] A. Moore and W. Wong. Optimal reinsertion: A new search operator for accelerated and
more accurate Bayesian network structure learning. In T. Fawcett and N. Mishra, editors,
Proceedings of the 20st International Conference on Machine Learning, ICML-03, pages 552?
559, Menlo Park, California, August 2003. AAAI Press.
[16] A. E. Raftery. Bayesian model selection in social research. Sociological methodology, 25:111?
164, 1995.
[17] T. Silander and P. Myllymaki. A simple approach for finding the globally optimal Bayesian
network structure. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI-06, pages 445?452, 2006.
[18] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proceedings of the 21st Conference on Uncertainty in Artificial
Intelligence, UAI-05, pages 584?590, 2005.
[19] C. Yuan and B. Malone. An improved admissible heuristic for learning optimal Bayesian
networks. In Proceedings of the 28st Conference on Uncertainty in Artificial Intelligence,
UAI-12, 2012.
[20] C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective.
Journal of Artificial Intelligence Research, 48:23?65, 2013.
9
| 5803 |@word polynomial:1 stronger:1 seems:1 nd:1 open:6 pick:1 moment:1 cyclic:1 contains:2 score:50 liu:1 outperforms:9 existing:1 mishra:1 current:4 comparing:1 com:2 yet:1 happen:1 v:2 implying:1 greedy:9 intelligence:9 malone:3 plane:1 provides:1 node:22 firstly:1 zhang:1 five:3 dn:1 along:1 descendant:7 yuan:3 introduce:2 manner:1 theoretically:1 inter:3 expected:1 indeed:1 globally:2 automatically:1 actual:2 cache:3 solver:8 considering:3 becomes:1 psychometrika:1 webkb:1 moreover:3 notation:1 maximizes:1 bounded:1 null:3 sull:1 cassio:1 developed:1 finding:2 guarantee:1 safely:1 every:9 auai:1 universit:1 k2:1 uk:3 grant:1 superiority:1 before:1 giorgio:2 understood:1 local:3 depthfirst:1 positive:12 gunopulos:1 analyzing:1 path:2 plus:1 specifying:1 range:1 directed:1 acknowledgment:1 globerson:1 union:1 implement:1 swiss:1 sq:8 procedure:3 reject:1 significantly:2 convenient:1 word:1 refers:1 selection:29 operator:1 put:1 wong:1 www:2 equivalent:1 maximizing:1 go:1 starting:1 independently:3 decomposable:1 assigns:1 immediately:2 usi:3 rule:10 array:1 tabu:1 searching:1 mcgill:1 inspected:1 user:1 exact:4 programming:3 us:2 hypothesis:4 diabetes:2 associate:1 element:9 idsia:8 approximated:4 amortized:1 updating:1 continues:1 thousand:5 cycle:7 sood:1 indep:1 ordering:26 trade:1 highest:2 removed:1 andes:1 mentioned:1 italiana:2 complexity:7 dynamic:1 highestscoring:1 bbc:1 basis:2 swap:1 easily:1 gobnilp:18 distinct:1 effective:4 describe:1 artificial:9 tell:1 choosing:1 outcome:1 whose:1 heuristic:3 larger:4 plausible:2 quite:1 otherwise:1 favor:8 statistic:1 gp:3 online:1 advantage:1 propose:4 interaction:4 product:1 silander:1 relevant:1 loop:4 combining:1 zaffalon:2 achieve:2 exploiting:2 parent:86 empty:8 optimum:1 ijcai:1 transmission:1 produce:1 artificiale:1 leave:1 depending:2 ac:2 chengqi:1 measured:2 sole:1 strong:4 c:1 implies:4 come:1 nml:1 switzerland:3 exploration:4 successor:1 molle:1 exploring:1 marco:1 ground:1 achieves:3 adopt:2 smallest:1 purpose:1 decampos:1 visited:2 largest:4 myllymaki:1 vice:2 create:1 correctness:1 always:1 ck:7 pn:1 sel:3 jaakkola:1 corollary:6 focus:2 improvement:3 consistently:2 likelihood:1 indicates:1 check:3 unlikely:1 koller:1 ancestor:4 supersets:3 provably:1 overall:6 among:1 jester:1 pumsb:1 development:1 art:1 mutual:6 equal:3 once:2 sampling:2 represents:1 park:1 look:2 icml:2 future:2 np:1 others:1 modern:1 randomly:4 manipulated:1 individual:1 dimitrios:1 recalling:1 mining:1 mdl:1 analyzed:3 unconditional:1 swapping:6 adjoining:1 permuting:1 accurate:1 partial:1 necessary:1 istituto:1 tree:1 logarithm:1 instance:5 boolean:1 queen:1 assignment:1 cost:2 introducing:3 decomposability:1 subset:5 entry:1 neutral:3 supsi:3 kn:1 considerably:1 scanagatta:1 st:10 explores:5 international:3 randomized:1 probabilistic:1 off:1 quickly:1 again:2 aaai:2 containing:4 choose:2 possibly:2 admit:1 external:1 book:1 return:1 de:3 singleton:1 star:1 blip:1 int:1 depends:1 ad:1 performed:2 try:1 closed:6 analyze:3 start:5 bayes:2 netflix:1 msweb:1 ass:1 square:1 kaufmann:1 efficiently:2 yield:9 identify:1 climbing:1 weak:1 bayesian:24 identification:28 svizzera:2 against:4 competitor:1 proof:6 di:1 hamming:1 sampled:4 stop:1 anytime:2 knowledge:1 cj:4 sophisticated:1 back:2 appears:1 higher:9 methodology:2 improved:1 formulation:1 evaluated:1 though:1 strongly:6 just:1 until:2 working:1 zeng:1 glance:1 google:1 xindong:1 quality:1 lowd:1 true:7 hence:3 iteratively:1 moore:1 deal:1 conditionally:1 davis:2 hill:1 complete:1 performs:2 
interpreting:1 novel:9 dalle:1 superior:1 sped:1 ji:2 extend:1 approximates:1 refer:3 significant:5 versa:2 dag:3 meila:1 consistency:6 inclusion:1 had:1 access:1 add:6 posterior:2 multivariate:1 retrieved:1 perspective:1 scenario:1 verlag:1 accomplished:1 devise:2 scoring:4 captured:1 analyzes:1 greater:2 morgan:1 accident:1 prune:2 parallelized:1 surely:1 shortest:2 maximize:1 ii:11 branch:1 multiple:1 eachmovie:1 calculation:1 long:1 iteration:2 fawcett:1 retail:1 cell:2 campos:3 swapped:2 extra:1 integer:2 call:3 structural:10 split:1 independence:17 fit:1 bic:84 competing:1 reduce:1 idea:1 cn:1 whether:3 bartlett:1 gb:1 sontag:1 york:1 nine:1 ignored:2 generally:1 slide:1 ten:1 processed:4 category:1 simplest:1 reduced:2 http:4 corani:1 outperform:1 northern:1 dna:1 generate:1 nsf:1 notice:1 herskovits:1 tutorial:1 sign:10 estimated:2 per:3 dropping:1 shall:1 achieving:1 drawn:2 kept:1 v1:4 asymptotically:1 graph:5 relaxation:1 sum:2 aig:1 run:1 uncertainty:5 place:1 decide:1 wu:1 vn:4 geiger:1 ob:26 cussens:3 decision:1 bound:6 guaranteed:1 meek:1 aic:1 g:8 annual:3 constraint:6 qub:1 software:2 sake:1 argument:2 innovative:1 min:4 pruned:1 todo:10 extremely:1 relatively:1 according:1 universitaria:1 combination:1 smaller:3 slightly:1 heckerman:2 lp:1 explained:1 taken:1 equation:1 bing:1 belfast:1 precomputed:2 eventually:2 count:2 needed:1 know:3 turn:2 end:1 adopted:3 available:3 generalizes:2 koivisto:2 generic:1 save:1 alternative:4 slower:1 original:1 denotes:2 log2:4 sw:1 exploit:3 build:2 objective:1 intend:1 already:8 move:1 strategy:3 teyssier:1 usual:2 ireland:1 distance:1 link:1 mauro:2 outer:2 nx:6 intelligenza:1 considers:1 studi:1 induction:1 ratio:1 providing:1 webb:1 negative:11 motivates:1 perform:1 datasets:1 discarded:1 arc:3 markov:2 extended:1 august:1 introduced:1 namely:3 pair:1 california:1 learned:6 pop:1 hour:1 address:1 beyond:1 proceeds:1 usually:7 refuse:1 pig:1 built:1 suitable:1 rely:1 indicator:2 improve:1 imply:2 superseded:1 raftery:1 acknowledges:1 categorical:1 extract:1 literature:3 discovery:1 checking:1 asymptotic:2 sociological:1 interesting:1 generation:1 proven:2 acyclic:2 geoffrey:1 reinsertion:2 degree:12 consistent:3 expired:2 editor:2 changed:1 repeat:1 surprisingly:1 supported:1 guide:3 allow:2 bias:1 fall:1 xn:4 valid:1 computes:2 adopts:2 made:1 collection:2 commonly:1 forward:1 far:1 social:1 pruning:2 emphasize:1 ignore:2 obtains:1 cutting:1 keep:1 dealing:1 confirm:1 decides:1 uai:4 assumed:1 xi:9 search:16 professionale:1 table:7 additionally:1 promising:2 learn:1 menlo:1 improving:1 expansion:1 complex:1 constructing:1 vj:21 aistats:1 constituted:3 reuters:1 n2:6 child:2 repeated:1 allowed:1 x1:2 site:2 cooper:1 guiding:1 explicit:1 lugano:3 candidate:3 chickering:2 admissible:1 theorem:9 minute:1 moralized:1 favourable:1 list:27 explored:4 evidence:3 scuola:1 restricting:1 sequential:11 effectively:4 corr:3 adding:3 ci:4 false:3 haaren:1 cartesian:1 nk:1 entropy:1 simply:4 explore:3 ordered:2 contained:1 partially:1 springer:1 ch:4 cite:1 satisfies:1 conditional:4 goal:3 sorted:1 towards:3 feasible:2 hard:3 content:2 called:1 total:4 acyclicity:1 exception:1 select:1 mark:1 support:4 brevity:1 accelerated:1 audio:1 d1:1 della:2 correlated:1 |
5,307 | 5,804 | Parallel Predictive Entropy Search for Batch Global
Optimization of Expensive Objective Functions
Amar Shah
Department of Engineering
Cambridge University
[email protected]
Zoubin Ghahramani
Department of Engineering
University of Cambridge
[email protected]
Abstract
We develop parallel predictive entropy search (PPES), a novel algorithm for
Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points which will maximize the information
gain about the global maximizer of the objective. Well known strategies exist
for suggesting a single evaluation point based on previous observations, while far
fewer are known for selecting batches of points to evaluate in parallel. The few
batch selection schemes that have been studied all resort to greedy methods to
compute an optimal batch. To the best of our knowledge, PPES is the first nongreedy batch Bayesian optimization strategy. We demonstrate the benefit of this
approach in optimization performance on both synthetic and real world applications, including problems in machine learning, rocket science and robotics.
1
Introduction
Finding the global maximizer of a non-concave objective function based on sequential, noisy observations is a fundamental problem in various real world domains e.g. engineering design [1], finance
[2] and algorithm optimization [3]. We are interesed in objective functions which are unknown but
may be evaluated pointwise at some expense, be it computational, economical or other. The challenge is to find the maximizer of the expensive objective function in as few sequential queries as
possible, in order to minimize the total expense.
A Bayesian approach to this problem would probabilistically model the unknown objective function,
f . Based on posterior belief about f given evaluations of the the objective function, you can decide
where to evaluate f next in order to maximize a chosen utility function. Bayesian optimization [4]
has been successfully applied in a range of difficult, expensive global optimization tasks including
optimizing a robot controller to maximize gait speed [5] and discovering a chemical derivative of a
particular molecule which best treats a particular disease [6].
Two key choices need to be made when implementing a Bayesian optimization algorithm: (i) a
model choice for f and (ii) a strategy for deciding where to evaluate f next. A common approach
for modeling f is to use a Gaussian process prior [7], as it is highly flexible and amenable to analytic
calculations. However, other models have shown to be useful in some Bayesian optimization tasks
e.g. Student-t process priors [8] and deep neural networks [9]. Most research in the Bayesian
optimization literature considers the problem of deciding how to choose a single location where
f should be evaluated next. However, it is often possible to probe several points in parallel. For
example, you may possess 2 identical robots on which you can test different gait parameters in
parallel. Or your computer may have multiple cores on which you can run algorithms in parallel
with different hyperparameter settings.
Whilst there are many established strategies to select a single point to probe next e.g. expected
improvement, probability of improvement and upper confidence bound [10], there are few well
known strategies for selecting batches of points. To the best of our knowledge, every batch selection
1
strategy proposed in the literature involves a greedy algorithm, which chooses individual points
until the batch is filled. Greedy choice making can be severely detrimental, for example, a greedy
approach to the travelling salesman problem could potentially lead to the uniquely worst global
solution [11]. In this work, our key contribution is to provide what we believe is the first non-greedy
algorithm to choose a batch of points to probe next in the task of parallel global optimization.
Our approach is to choose a set of points which in expectation, maximally reduces our uncertainty
about the location of the maximizer of the objective function. The algorithm we develop, parallel
predictive entropy search, extends the methods of [12, 13] to multiple point batch selection. In
Section 2, we formalize the problem and discuss previous approaches before developing parallel
predictive entropy search in Section 3. Finally, we demonstrate the benefit of our non-greedy strategy
on synthetic as well as real-world objective functions in Section 4.
2
Problem Statement and Background
Our aim is to maximize an objective function f : X ? R, which is unknown but can be (noisily)
evaluated pointwise at multiple locations in parallel. In this work, we assume X is a compact subset
of RD . At each decision, we must select a set of Q points St = {xt,1 , ..., xt,Q } ? X , where the
objective function would next be evaluated in parallel. Each evaluation leads to a scalar observation
yt,q = f (xt,q ) + t,q , where we assume t,q ? N (0, ? 2 ) i.i.d. We wish to minimize a future regret,
rT = [f (x? ) ? f (?
xT )], where x? ? argmaxx?X f (x) is an optimal decision (assumed to exist)
and x
?T is our guess of where the maximizer of f is after evaluating T batches of input points. It is
highly intractable to make decisions T steps ahead in the setting described, therefore it is common
to consider the regret of the very next decision. In this work, we shall assume f is a draw from a
Gaussian process with constant mean ? ? R and differentiable kernel function k : X 2 ? R.
Most Bayesian optimization research focuses on choosing a single point to query at each decision i.e. Q = 1. A popular strategy in this setting is to choose the point with highest
expected improvement over the current hbest evaluation, i.e. the imaximizer of aEI (x|D) =
E max(f (x) ? f (xbest ), 0)D = ?(x) ? ? (x) + ? (x)? ? (x) , where D is the set of obp
servations, xbest is the best evaluation point so far, ?(x) = Var[f (x)|D], ?(x) = E[f (x)|D],
? (x) = (?(x) ? f (xbest ))/?(x) and ?(.) and ?(.) are the standard Gaussian p.d.f. and c.d.f.
Aside from being an intuitive approach, a key advantage of using the expected improvement strategy
is in the fact that it is computable analytically and is infinitely differentiable, making the problem
of finding argmaxx?X aEI (x|D) amenable to a plethora of gradient based optimization methods.
Unfortunately, the corresponding strategy for selecting Q > 1 points to evaluate in parallel does not
lead to an analytic expression. [14] considered an approach which sequentially used the EI criterion
to greedily choose a batch of points to query next, which [3] formalized and utilized by defining
Z
aEI?MCMC x|D, {xq0 }qq0 =1 =
aEI x|D?{xq0 , yq0 }qq0 =1 p {yq0 }qq0 =1 |D, {xq0 }qq0 =1 dy1 ..dyq ,
Xq
the expected gain in evaluating x after evaluating {xq0 , yq0 }qq0 =1 , which can be approximated using
Monte Carlo samples, hence the name EI-MCMC. Choosing a batch of points St using the EIMCMC policy is doubly greedy: (i) the EI criterion is greedy as it inherently aims to minimize onestep regret, rt , and (ii) the EI-MCMC approach starts with an empty set and populates it sequentially
(and hence greedily), deciding the best single point to include until |St | = Q.
A similar but different approach called simulated matching (SM) was introduced by [15]. Let ? be a
baseline policy which chooses a single point to evaluate next (e.g. EI). SM aims to select a batch St
of size Q, which includes a point ?close to? the best point which ? would have chosen when applied
sequentially Q times, with high probability. Formally, SM aims to maximize
h h
ii
aSM (St |D) = ?ES?Q Ef min (x ? argmaxx0 ?S?Q f (x0 ))2 D, S?Q ,
x?St
where S?Q is the set of Q points which policy ? would query if employed sequentially. A greedy
k-medoids based algorithm is proposed to approximately maximize the objective, which the authors
justify by the submodularity of the objective function.
The upper confidence bound (UCB) strategy [16] is another method used by practitioners to decide
where to evaluate an objective function next. The UCB approach is to maximize aUCB (x|D) =
1/2
?(x) + ?t ?(x), where ?t is a domain-specific time-varying positive parameter which trades off
2
exploration and exploitation. In order to extend this approach to the parallel setting, [17] noted that
the predictive variance of a Gaussian process depends only on where observations are made, and
not the observations themselves. Therefore, they suggested the GP-BUCB method, which greedily
populates the set St by maximizing a UCB type equation Q times sequentially, updating ? at each
step, whilst maintaining the same ? for each batch. Finally, a variant of the GP-UCB was proposed
by [18]. The first point of the set St is chosen by optimizing the UCB objective. Thereafter, a
?relevant region? Rt ? X which contains the maximizer of f with high probability is defined.
Points are greedily chosen from this region to maximize the information gain about f , measured
by expected reduction in entropy, until |St | = Q. This method was named Gaussian process upper
confidence bound with pure exploration (GP-UCB-PE).
Each approach discussed resorts to a greedy batch selection process. To the best of our knowledge,
no batch Bayesian optimization method to date has avoided a greedy algorithm. We avoid a greedy
batch selection approach with PPES, which we develop in the next section.
3
Parallel Predictive Entropy Search
Our approach is to maximize information [19] about the location of the global maximizer x? , which
we measure in terms of the negative differential entropy of p(x? |D). Analogous to [13], PPES aims
to choose the set of Q points, St = {xq }Q
q=1 , which maximizes
h
i
H p x? |D ? {xq , yq }Q
aPPES (St |D) = H p(x? |D) ? E
,
(1)
Q
q=1
p {yq }q=1 D,St
R
where H[p(x)] = ? p(x) log p(x)dx is the differential entropy of its argument and the expectation
above is taken with respect to the posterior joint predictive distribution of {yq }Q
q=1 given the previous
evaluations, D, and the set St . Evaluating
(1)
exactly
is
typically
infeasible.
The
prohibitive aspects
are that p x? |D ? {xq , yq }Q
would
have
to
be
evaluated
for
many
different
combinations of
q=1
Q
{xq , yq }q=1 , and the entropy computations are not analytically tractable in themselves. Significant
approximations need to be made to (1) before it becomes practically useful [12]. A convenient
equivalent formulation of the quantity in (1) can be written as the mutual information between x?
and {yq }Q
q=1 given D [20]. By symmetry of the mutual information, we can rewrite aPPES as
h
i
Q
?
?
aPPES (St |D) = H p {yq }Q
H
p
{y
}
|D,
S
?
E
|D,
S
,
x
,
(2)
q q=1
t
t
p(x |D)
q=1
?
where p {yq }Q
is the joint posterior predictive distibution for {yq }Q
q=1 |D, St , x
q=1 given the observed data, D and the location of the global maximizer of f . The key advantage of the formulation
in (2), is that the objective is based on entropies of predictive distributions of the observations, which
are much more easily approximated than the entropies of distributions on x? .
In fact, the first term of (2) can be computed analytically. Suppose p {fq }Q
q=1 |D, St is multi
variate Gaussian with covariance K, then H p {yq }Q
= 0.5 log[det(2?e(K + ? 2 I))].
q=1 |D, St
We develop an approach to approximate the expectation of the predictive entropy in (2), using an
expectation propagation based method which we discuss in the following section.
3.1
Approximating the Predictive Entropy
?
in
Assuming a sample of x? , we discuss our approach to approximating H p {yq }Q
q=1 |D, St , x
(2) for a set of query points St . Note that we can write
Z
Q
Y
Q
Q
?
?
p {yq }q=1 |D, St , x = p {fq }q=1 |D, St , x
p(yq |fq ) df1 ...dfQ ,
(3)
q=1
?
is the posterior distribution of the objective function at the locations
where p {fq }Q
q=1 |D, St , x
xq ? St , given previous evaluations D, and that x? is the global maximizer of f . Recall that
p(yq |fq ) is Gaussian for each q. Our approach will be to derive a Gaussian approximation to
?
p {fq }Q
q=1 |D, St , x , which would lead to an analytic approximation to the integral in (3).
The posterior predictive distribution of the Gaussian process, p {fq }Q
q=1 |D, St , is multivariate
Gaussian distributed. However, by further conditioning on the location x? , the global maximizer
of f , we impose the condition that f (x) ? f (x? ) for any x ? X . Imposing this constraint for
3
?
highly inall x ? X is extremely difficult and makes the computation of p {fq }Q
q=1 |D, St , x
?
tractable. We instead impose the following two conditions (i) f (x) ? f (x ) for each x ? St ,
and (ii) f (x? ) ? ymax + , where ymax is the largest observed noisy objective function value and
? N (0, ? 2 ). Constraint (i) is equivalent to imposing that f (x? ) is larger than objective function
values at current query locations, whilst condition (ii) makes f (x? ) larger than previous objective function evaluations, accounting for noise. Denoting the two conditions C, and the variables
f = [f1 , ..., fQ ]> and f + = [f ; f ? ], where f ? = f (x? ), we incorporate the conditions as follows
Z
Q
f ? ? ymax Y
p f |D, St , x? ? p f + |D, St , x? ?
I(f ? ? fq ) df ? ,
(4)
?
q=1
where I(.) is an indicator function. The integral in (4) can be approximated using expectation propagation [21]. The Gaussian process predictive p(f + |D, St , x? ) is N (f + ; m+ , K+ ). We approximate
QQ+1
the integrand of (4) with w(f + ) = N (f + ; m+ , K+ ) q=1 Z?q N (c>
?q , ??q ), where each Z?q
q f +; ?
and ??q are positive, ?
?q ? R and for q ? Q, cq is a vector of length Q + 1 with q th entry ?1, Q + 1st
entry 1, and remaining entries 0, whilst cQ+1 = [0, ..., 0, 1]> . The approximation w(f + ) approximates the Gaussian CDF, ?(.), and each indicator function, I(.), with a univariate, scaled Gaussian
PDF. The site parameters, {Z?q , ?
?q , ??q }Q+1
q=1 , are learned using a fast EP algorithm, for which details
are given in the supplementary material, where we show that w(f + ) = ZN (f + ; ?+ , ?+ ), where
?1
?1
Q+1
Q+1
X?
X 1
?q
?1
>
>
?+ = ?+ K?1
m
+
c
c
c
c
,
?
=
K
+
,
(5)
+
q q
q q
+
+
+
??
??
q=1 q
q=1 q
and hence p f + |D, St , C ? N (f + ; ?+ , ?+ ). Since multivariate
Gaussians are consistent under marginalization, a convenient corollary is that p f |D, St , x? ? N (f ; ?, ?), where ? is the
vector containing the first Q elements of ?+ , and ? is the matrix containing the first Q rows and
columns of ?+ . Since
Gaussians are also Gaussian distributed, we see that
sums of independent
Q
?
>
p {yq }q=1 |D, St , x ? N ([y1 , ..., yQ ] ; ?, ? + ? 2 I). The final convenient attribute of our Gaussian approximation, is that
of a multivariate Gaussian can be computed
the differential entropy
?
analytically, such that H p {yq }Q
? 0.5 log[det(2?e(? + ? 2 I))].
q=1 |D, St , x
3.2
Sampling from the Posterior over the Global Maximizer
?
So far, we have considered how to approximate H p {yq }Q
, given the global maxiq=1 |D, St , x
mizer, x? . We in fact would like the expected value of this quantity over the posterior distribution of
the global maximizer, p(x? |D). Literally, p(x? |D) ? p(f (x? ) = maxx?X f (x)|D), the posterior
probability that x? is the global maximizer of f . Computing the distribution p(x? |D) is intractable,
but it is possible to approximately sample from it and compute a Monte Carlo based approximation
of the desired expectation. We consider two approaches to sampling from the posterior of the global
maximizer: (i) a maximum a posteriori (MAP) method, and (ii) a random feaure approach.
MAP sample from p(x? |D). The MAP of p(x? |D) is its posterior mode, given by x?MAP =
argmaxx? ?X p(x? |D). We may approximate the expected value of the predictive entropy by replacing the posterior distribution of x? with a single point estimate at x?MAP . There are two key
advantages to using the MAP estimate in this way. Firstly, it is simple to compute x?MAP , as it is
the global maximizer of the posterior mean of f given the observations D. Secondly, choosing to
use x?MAP assists the EP algorithm developed in Section 3.1 to converge as desired. This is because
the condition f (x? ) ? f (x) for x ? X is easy to enforce when x? = x?MAP , the global maximizer
of the poserior mean of f . When x? is sampled such that the posterior mean at x? is significantly
suboptimal, the EP approximation may be poor. Whilst using the MAP estimate approximation is
convenient, it is after all a point estimate and fails to characterize the full posterior distribution. We
therefore consider a method to draw samples from p(x? |D) using random features.
Random Feature Samples from p(x? |D). A naive approach to sampling from p(x? |D) would
be to sample g ? p(f |D), and choosing argmaxx?X g. Unfortunately, this would require sampling
g over an uncountably infinite space, which is infeasible. A slightly less naive method would be to
sequentially construct g, whilst optimizing it, instead of evaluating it everywhere in X . However,
this approach would have cost O(m3 ) where m is the number of function evaluations of g necessary
to find its optimum. We propose as in [13], to sample and optimize an analytic approximation to g.
4
By Bochner?s theorem [22], a stationary kernel function, k, has a Fourier dual s(w), which is equal
to the spectral density of k. Setting p(w) = s(w)/?, a normalized density, we can write
>
0
k(x, x0 ) = ?Ep(w) [e?iw (x?x ) ] = 2?Ep(w,b) [cos(w> x + b) cos(w> x0 + b)],
(6)
p
where b ? U [0, 2?]. Let ?(x) = 2?/m cos(Wx + b) denote an m-dimensional feature mapping
where W and b consist of m stacked samples from p(w, b), then the kernel k can be approximated
by the inner product of these features, k(x, x0 ) ? ?(x)> ?(x0 ) [23]. The linear model g(x) =
?(x)> ? + ? where ?|D ? N (A?1 ?> (y ? ?1), ? 2 A?1 ) is an approximate sample from p(f |D),
where y is a vector of objective function evaluations, A = ?> ? + ? 2 I and ?> = [?(x1 )...?(xn )].
In fact, limm?? g is a true sample from p(f |D) [24].
The generative process above suggests the following approach to approximately sampling from
p(x? |D): (i) sample random features ?(i) and corresponding posterior weights ? (i) using the
process above, (ii) construct g (i) (x) = ?(i) (x)> ? (i) + ?, and (iii) finally compute x?(i) =
argmaxx?X g (i) (x) using gradient based methods.
3.3
Computing and Optimizing the PPES Approximation
Let ? denote the set of kernel parameters and the observation noise variance, ? 2 . Our posterior
belief about ? is summarized by the posterior distribution p(?|D) ? p(?)p(D|?), where p(?) is
our prior belief about ? and p(D|?) is the GP marginal likelihood given the parameters ?. For a
fully Bayesian treatment of ?, we must marginalize aPPES with respect to p(?|D). The expectation
with respect to the posterior distribution of ? is approximated with Monte Carlo samples. A similar
approach is taken in [3, 13]. Combining the EP based method to approximate the predictive entropy
with either of the two methods discussed in the previous section to approximately sample from
p(x? |D), we can construct a
?PPES an approximation to (2), defined by
M
i
1 Xh
a
?PPES (St |D) =
log[det(K(i) + ? 2(i) I)] ? log[det(?(i) + ? 2(i) I)] ,
(7)
2M i=1
where K(i) is constructed using ? (i) the ith sample of M from p(?|D), ?(i) is constructed as in
Section 3.1, assuming the global maximizer is x?(i) ? p(x? |D, ? (i) ). The PPES approximation
is simple and amenable to gradient based optimization. Our goal is to choose St = {x1 , ..., xQ }
which maximizes a
?PPES in (7). Since our kernel function is differentiable, we may consider taking
the derivative of a
?PPES with respect to xq,d , the dth component of xq ,
M
h
h
?a
?PPES
?K(i) i
??(i) i
1 X
trace (K(i) + ? 2(i) I)?1
? trace (?(i) + ? 2(i) I)?1
=
. (8)
? xq,d
2M i=1
?xq,d
?xq,d
Computing
?K(i)
?xq,d
(i)
is simple directly from the definition of the chosen kernel function. ?(i) is a
(i)
?K(i)
?xq,d , and that each cq
(i)
parameters, {?
?q }Q+1
q=1 , vary with
function of K , {cq }Q+1
?q }Q+1
q=1 and {?
q=1 , and we know how to compute
is a constant vector. Hence our only concern is how the EP site
xq,d . Rather remarkably, we may invoke a result from Section 2.1 of [25], which says that converged
?
site parameters, {Z?q , ?
?q , ?
?q }Q+1
q=1 , have 0 derivative with respect to parameters of p(f + |D, St , x ).
There is a key distinction between explicit dependencies (where ? actually depends on K) and
implicit dependencies where a site parameter, ?
?q , might depend implicitly on K. A similar approach
is taken in [26], and discussed in [7]. We therefore compute
(i)
(i)
??+
(i) (i)?1 ?K+
(i)?1 (i)
= ?+ K+
K
?+ .
(9)
?xq,d
?xq,d +
On first inspection, it may seem computationally too expensive to compute derivatives with respect
(i)?1 (i)
to each q and d. However, note that we may compute and store the matrices K+ ?+ , (K(i) +
(i)
?K
+
? 2(i) I)?1 and (?(i) + ? 2(i) I)?1 once, and that ?xq,d
is symmetric with exactly one non-zero row
and non-zero column, which can be exploited for fast matrix multiplication and trace computations.
5
0.6
0.6
0.5
0.6
0.5
0.4
0.4
0.4
0.4
0.3
0.3
0.2
0.2
0.2
0.2
0.1
0.1
0
0
0.2
0.4
0.6
0.8
0
0
1
0
0.2
0.4
0.6
0.8
0
1
1
1
2
0.8
0.7
1.5
0.8
0.6
0.7
0.8
1
0.6
0.5
0.6
0.5
0.6
0.5
0.4
0.4
0
0.4
0.3
?0.5
0.3
0.2
0.2
0.2
0.2
?1
?1.5
0.4
0.1
0
0.2
0.4
0.6
0.8
(a) Synthetic function
1
0
0
0.2
1
0.4
0.6
0.8
1
0
0.1
0
0
0.2
2
(b) aPPES (x, x0 )
0.4
0.6
0.8
0
1
(c) a
?PPES (x, x0 )
0.8
1.5
0.7
Figure 1: Assessing the quality1 0.8of our approximations to the parallel
predictive entropy search strat1
0.6
egy. (a) Synthetic objective function (blue line) defined on [0,
1], with noisy observations (black
0.6
0.5
squares). (b) Ground truth aPPES
defined on [0, 1]2 , obtained0.5by rejection
sampling. (c) Our ap0.4
proximation a
?PPES using expectation propagation. Dark regions
correspond
to pairs (x, x0 ) with
0
0.4
0
high utility, whilst faint regions correspond to pairs (x, x ) with0.3 low utility.
4
Empirical Study
0.2
0.2
0.1
0
0
0.2
0.4
0.6
0.8
1
0
?0.5
?1
?1.5
0
0.2
0.4
0.6
0.8
1
In this section, we study the performance
of PPES in comparison to aforementioned methods. We
2
model f as a Gaussian process with constant mean ? and covariance kernel k. Observations of the
1.5
objective function are considered to be independently drawn from N (f (x), ? 2 ). In our experiments,
1
P
0
2
1
we choose
to
use
a
squared-exponential
kernel
of
the
form
k(x,
x
)
=
?
exp
?
0.5
(x
?
d
d
0 2 2
0.5
xd ) /ld . Therefore the set of model hyperparameters is {?, ?, l1 , ..., lD , ?}, a broad Gaussian
hyperprior is placed on ? and uninformative
Gamma priors are used for the other hyperparameters.
0
?0.5
It is worth investigating how well
a
?PPES (7) is able to approximate aPPES (2). In order to test
the approximation in a manner amenable
to visualization, we generate a sample f from a Gaussian
?1
process prior on X = [0, 1], with ? 2 = 1, ? 2 = 10?4 and l2 = 0.025, and consider batches of size
Q = 2. We set M = 200. A?1.5
rejection
sampling
approach
is used to compute the ground
0
0.2
0.4
0.6 based
0.8
1
truth aPPES , defined on X Q = [0, 1]2 . We first discretize [0, 1]2 , and sample p(x? |D) in (2) by
evaluating samples from p(f |D) on the discrete points and
choosing the input with highest function
1
value. Given x? , we compute H p y1 , y2 |D, x1 , x2 , x? using
rejection sampling. Samples from
2
p(f |D) are evaluted on discrete points in [0, 1] and rejected if the highest function value occurs not
2
at x? . We add independent Gaussian
noise with variance
? to the non rejected samples from the
previous step and approximate H p y1 , y2 |D, x1 , x2 , x? using kernel density estimation [27].
Figure 1 includes illustrations of (a) the objective function to be maximized, f , with 5 noisy observations, (b) the aPPES ground truth obtained using the rejection sampling method and finally (c)
a
?PPES using the EP method we develop in the previous section. The black squares on the axes of
Figures 1(b) and 1(c) represent the locations in X = [0, 1] where f has been noisily sampled, and
the darker the shade, the larger the function value. The lightly shaded horizontal and vertical lines
in these figures along the points The figures representing aPPES and a
?PPES appear to be symmetric, as is expected, since the set St = {x, x0 } is not an ordered set, since all points in the set are
probed in parallel i.e. St = {x, x0 } = {x0 , x}. The surface of a
?PPES is similar to that of aPPES .
In paticular, the a
?PPES approximation often appeared to be an annealed version of the ground truth
aPPES , in the sense that peaks were more pronounced, and non-peak areas were flatter. Since we are
interested in argmax{x,x0 }?X 2 aPPES ({x, x0 }), our key concern is that the peaks of a
?PPES occur at
the same input locations as aPPES . This appears to be the case in our experiment, suggesting that
the argmax a
?PPES is a good approximation for argmax aPPES .
We now test the performance of PPES in the task of finding the optimum of various objective functions. For each experiment, we compare PPES (M = 200) to EI-MCMC (with 100 MCMC samples), simulated matching with a UCB baseline policy, GP-BUCB and GP-UCB-PE. We use the random features method to sample from p(x? |D), rejecting samples which lead to failed EP runs. An
experiment of an objective function, f , consists of sampling 5 input points uniformly at random and
running each algorithm starting with these samples and their corresponding (noisy) function values.
We measure performance after t batch evaluations using immediate regret, rt = |f (?
xt ) ? f (x? )|,
? t is the recommendation of an algorithm after t batch
where x? is the known optimizer of f and x
evaluations. We perform 100 experiments for each objective function, and report the median of the
6
regret
regret
regret
regret
6 6
0.4 0.4
4 4
0.3 0.3
0.2 0.2
2 2
0.1 0.1
0 0
0 0
8
4
4
2
0
7
6
7 7
0.7
6 6
0.6
0.6
0.5
0.5
0.4
3 3
2.5 2.5
5 5
2 2
4 4
0.4
1.5 1.5
3 3
0.3
0.3
0.2
0.2
0.1
0.1
1 1
2 2
0
05
510
1015 1520
t
t
2025
0
25
(a) Branin
3
3
2.5
2.5
0
0
0.5 0.5
1 1
05
510
1015 1520
t
t
2025
0 0
025 0
10 10
t
(b) Cosines
20 20
t
(c) Shekel
30 30
0 0
0 0 10 10 20 20 30 30 40 40 50 50
t t
(d) Hartmann
regret
4
regret
Figure 2: Median of the immediate regret of the PPES and 4 other algorithms over 100 experiments
on5 benchmark synthetic objective
functions, using batches of size Q = 3.
2
2
regret
regret
0
6
4
0.8
0.7
2
7
5
regret
6
regret
regret
6
0.8
5 5 10 10 15 15 20 20 25 25
t t
regret
regret
8
PPESPPES
EI-MCMC
EI-MCMC
SMUCB
SMUCB
BUCB
BUCB
UCBPE
UCBPE
regret
regret
10
regret
10
0 0
0 0
5 5 10 10 15 15 20 20 25 25
t t
1.5
immediate
regret obtained1.5 for
each algorithm. The confidence bands represent one standard devia3
3
tion obtained from bootstrapping.
The empirical distribution of the immediate regret is heavy tailed,
1
1
2
2 making
the median more representative of where most data points lie than the mean.
0.5 0.5
1 first set of experiments
Our
is on a set of synthetic benchmark objective functions including BraninHoo
[28],
a
mixture
of
cosines
[29], a Shekel function with 10 modes [30] (each defined on [0, 1]2 )
0
0
0
0
0 10 10 20 20 30 30
010 1020 2030 3040 4050 50
0
0
and the Hartmann-6
function [28] (defined
on [0, 1]6 ). We choose batches of size Q = 3 at each
t
t
t
t
decision time. The plots in Figure 2 illustrate the median immediate regrets found for each algo1
rithm. The results suggest that the PPES algorithm performs close to best if not 1the
best for each
problem considered. EI-MCMC does significantly better on the Hartmann function, which is a relatively smooth function with very few modes, where greedy search appears beneficial. Entropy-based
strategies are more exploratory in higher dimensions. Nevertheless, PPES does significantly better
than GP-UCB-PE on 3 of the 4 problems, suggesting that our non-greedy batch selection procedure
enhances performance versus a greedy entropy based policy.
1
We now consider maximization of real world objective functions. The first, boston, returns the
negative of the prediction error of a neural network trained on a random train/text split of the Boston
Housing dataset [31]. The weight-decay parameter and number of training iterations for the neural
network are the parameters to be optimized over. The next function, hydrogen, returns the amount
of hydrogen produced by particular bacteria as a function of pH and nitrogen levels of a growth
medium [32]. Thirdly we consider
a function, rocket, which runs a simulation of a rocket [33]
1
1
being launched from the Earth?s surface and returns the time taken for the rocket to land on the
Earth?s surface. The variables to be optimized over are the launch height from the surface, the mass
of fuel to use and the angle of launch with respect to the Earth?s surface. If the rocket does not
return, the function returns 0. Finally we consider a function, robot, which returns the walking
speed of a bipedal robot [34]. The function?s input parameters, which live in [0, 1]8 , are the robot?s
controller. We add Gaussian noise with ? = 0.1 to the noiseless function. Note that all of the
functions we consider are not available analytically. boston trains a neural network and returns
test error, whilst rocket and robot run physical simulations involving differential equations
before returning a desired quantity. Since the hydrogen dataset is available only for discrete points,
we define hydrogen to return the predictive mean of a Gaussian process trained on the dataset.
Figure 3 show the median values of immediate regret by each method over 200 random initializations. We consider batches of size Q = 2 and Q = 4. We find that PPES consistently outperforms
competing methods on the functions considered. The greediness and nonrequirement of MCMC
sampling of the SM-UCB, GP-BUCB and GP-UCB-PE algorithms make them amenable to large
batch experiments, for example, [17] consider optimization in R45 with batches of size 10. However,
these three algorithms all perform poorly when selecting batches of smaller size. The performance
on the hydrogen function illustrates an interesting phenemona; whilst the immediate regret of
PPES is mediocre initially, it drops rapidly as more batches are evaluated.
This behaviour is likely due to the non-greediness of the approach we have taken. EI-MCMC makes
good initial progress, but then fails to explore the input space as well as PPES is able to. Recall that
after each batch evaluation, an algorithm is required to output x
?t , its best estimate for the maximizer
of the objective function. We observed that whilst competing algorithms tended to evaluate points
which had high objective function values compared to PPES, yet when it came to recommending x
?t ,
7
?10?2?10?2
1
0
1
0
5
5
0
0 ?10?2
0 ?105?2 5
1
1
1
1
0.5
0.5
0.5
0.5
6
4
4
2
4
2
0
0
0
2
0
0
4
4
3
3
12 10
12
10
3
regretregret
regretregret
2
2
2
2
1
1
8
6
0
0
5
5
10 10 15 15 20 20
t
t
10 10 15 15 20 20
t
t
(a) boston
2
2
1.5
1.5
1.5
1.5
4
2
0.5
0.5
0.5
0.5
2
0
2
0
5
2
2
1
1
0
5
2.5
2.5
1
1
1
0
2.5
2.5
6
4
0
0
8
6
0
0
0
0
0
0
0
0
10 10 20 20 30 30 400 040 0 0 10
0 10
0
t
t
10 10 20 20 30 30 40 40
t
t
(b) hydrogen
10 20
10 20
t
t
20 30
20 30
t
t
30 40
30 40
40
40
3
3
6
4
1
0
3
3
10
8 108
4
2
0
0
10 10 20 20 30 30 40 40
0
0
t
t
0 0 10
00
10 10 20 20 30 30 40 040 0 10
t
t
14 14
14
12 14
12
3
0
10 20
10 20
t
t
20 30
20 30
t
t
30 40
30 40
(c) rocket
40
40
4
4
3.5
3.5
3
3
3
3
2.5
2.5
2.5
2.5
2
2
2
2
1.5
1.5
1.5
1.5
regret
regret
1.5
1.5
8
6
6
4
2
0
10 10 15 15 20 20
t
t
10 10 15 15 20 20
t
t
?10?2?10?2
4
4
regretregret
0
0
1.5
1.5
8
6
4
4
3.5
3.5
1
1
1
1
0.5
0.5
0.5
0.5
0
00
0
0
0 0 10
0 10
4
4
4
4
3.5
3.5
3.5
3.5
3
3
3
3
2.5
2.5
2.5
2.5
regret
regret
1
2
2
regret
regret
1
2
2
regret
regret
2
10
8 108
regret
regret
2
2
2.5
2.5
regret
regret
2
2.5
2.5
regret
regret
3
3
3
12
10 12
10
regret
regret
3
regretregret
3
3
3
14
14
12 12
regretregret
3
14 14
regretregret
regretregret
4
regretregret
4
0
PPES
PPES
EI-MCMC
EI-MCMC
PPES
PPES
SMUCB
SMUCB
EI-MCMC
EI-MCMC
BUCB
BUCB
SMUCB
SMUCB
UCBPE
UCBPE
BUCB
BUCB
UCBPE
UCBPE
?10?2?10?2
4
4
2
2
2
2
1.5
1.5
1.5
1.5
1
1
1
1
0.5
0.5
0.5
0.5
0
00
0
0
0 0 10
0 10
10 20
10 20
t
t
20 30
20 30
t
t
30 40
30 40
40
40
10 20
10 20
t
t
20 30
20 30
t
t
30 40
30 40
40
40
(d) robot
Figure 3: Median of the immediate regret of the PPES and 4 other algorithms over 100 experiments
on real world objective functions. Figures in the top row use batches of size Q = 2, whilst figues on
the bottom row use batches of size Q = 4.
PPES tended to do a better job. Our belief is that this occured exactly because the PPES objective
aims to maximize information gain rather than objective function value improvement.
The rocket function has a strong discontinuity making if difficult to maximize. If the fuel mass,
launch height and/or angle are too high, the rocket would not return to the Earth?s surface, resulting
in a 0 function value. It can be argued that a stationary kernel Gaussian process is a poor model for
this function, yet it is worth investigating the performance of a GP based models since a practitioner
1 black-box function is smooth apriori. PPES seemed to handle this
may not know whether or not 1their
function best and had fewer samples
which resulted in 0 function value than each
1 1
1 of the competing
1
1
methods and made fewer recommendations which led to a 0 function value. 1The relative increase
in PPES performance from increasing batch size from Q = 2 to Q = 4 is small for the robot
function compared to the other functions considered. We believe this is a consequence of using a
slightly naive optimization procedure to save computation time. Our optimization procedure first
computes a
?PPES at 1000 points selected uniformly at random, and performs gradient ascent from
the best point. Since a
?PPES is defined on X Q = [0, 1]32 , this method may miss a global optimum.
Other methods all select their batches greedily, and hence only need to optimize in X = [0, 1]8 .
However, this should easily be avoided by using a more exhaustive gradient based optimizer.
5
Conclusions
We have developed parallel predictive entropy search, an information theoretic approach to batch
Bayesian optimization. Our method is greedy in the sense that it aims to maximize the one-step
information gain about the location of x? , but it is not greedy in how it selects a set of points to
evaluate next. Previous methods are doubly greedy, in that they look one step ahead, and also select
a batch of points greedily. Competing methods are prone to under exploring, which hurts their
perfomance on multi-modal, noisy objective functions, as we demonstrate in our experiments.
References
[1] G. Wang and S. Shan. Review of Metamodeling Techniques in Support of Engineering Design Optimization. Journal of Mechanical Design, 129(4):370?380, 2007.
[2] W. Ziemba & R. Vickson. Stochastic Optimization Models in Finance. World Scientific Singapore, 2006.
8
[3] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. NIPS, 2012.
[4] J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Kluwer, 1989.
[5] D. Lizotte, T. Wang, M. Bowling, and D. Schuurmans. Automatic Gait Optimization with Gaussian
Process Regression. IJCAI, pages 944?949, 2007.
[6] D. M. Negoescu, P. I. Frazier, and W. B. Powell. The Knowledge-Gradient Algorithm for Sequencing
Experiments in Drug Discovery. INFORMS Journal on Computing, 23(3):346?363, 2011.
[7] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[8] A. Shah, A. G. Wilson, and Z. Ghahramani. Student-t Processes as Alternatives to Gaussian Processes.
AISTATS, 2014.
[9] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, Mr Prabat, and R. P.
Adams. Scalable Bayesian Optimization Using Deep Neural Networks. ICML, 2015.
[10] E. Brochu, M. Cora, and N. de Freitas. A Tutorial on Bayesian Optimization of Expensive Cost Functions,
with Applications to Active User Modeling and Hierarchical Reinforcement Learning. Technical Report
TR-2009-23, University of British Columbia, 2009.
[11] G. Gutin, A. Yeo, and A. Zverovich. Traveling salesman should not be greedy:domination analysis of
greedy-type heuristics for the TSP. Discrete Applied Mathematics, 117:81?86, 2002.
[12] P. Hennig and C. J. Schuler. Entropy Search for Information-Efficient Global Optimization. JMLR, 2012.
[13] J. M. Hern?andez-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive Entropy Search for Efficient
Global Optimization of Black-box Functions. NIPS, 2014.
[14] D. Ginsbourger, J. Janusevskis, and R. Le Riche. Dealing with Asynchronicity in Parallel Gaussian
Process Based Optimization. 2011.
[15] J. Azimi, A. Fern, and X. Z. Fern. Batch Bayesian Optimization via Simulation Matching. NIPS, 2010.
[16] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian Process Optimization in the Bandit Setting:
No Regret and Experimental Design. ICML, 2010.
[17] T. Desautels, A. Krause, and J. Burdick. Parallelizing Exploration-Exploitation Tradeoffs with Gaussian
Process Bandit Optimization. ICML, 2012.
[18] E. Contal, D. Buffoni, D. Robicquet, and N. Vayatis. Parallel Gaussian Process Optimization with Upper
Confidence Bound and Pure Exploration. In Machine Learning and Knowledge Discovery in Databases,
pages 225?240. Springer Berlin Heidelberg, 2013.
[19] D. J. MacKay. Information-Based Objective Functions for Active Data Selection. Neural Computation,
4(4):590?604, 1992.
[20] N. Houlsby, J. M. Hern?andez-Lobato, F. Huszar, and Z. Ghahramani. Collaborative Gaussian Processes
for Preference Learning. NIPS, 2012.
[21] T. P. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Masachusetts
Institute of Technology, 2001.
[22] S. Bochner. Lectures on Fourier Integrals. Princeton University Press, 1959.
[23] A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. NIPS, 2007.
[24] R. M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
[25] M. Seeger. Expectation Propagation for Exponential Families. Technical Report, U.C. Berkeley, 2008.
[26] J. P. Cunningham, P. Hennig, and S. Lacoste-Julien. Gaussian Probabilities and Expectation Propagation.
arXiv, 2013. http://arxiv.org/abs/1111.6832.
[27] I. Ahmad and P. E. Lin. A Nonparametric Estimation of the Entropy for Absolutely Continuous Distributions. IEEE Trans. on Information Theory, 22(3):372?375, 1976.
[28] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, 2008.
[29] B. S. Anderson, A. W. Moore, and D. Cohn. A Nonparametric Approach to Noisy and Costly Optimization. ICML, 2000.
[30] J. Shekel. Test Functions for Multimodal Search Techniques. Information Science and Systems, 1971.
[31] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
[32] E. H. Burrows, W. K. Wong, X. Fern, F.W.R. Chaplen, and R.L. Ely. Optimization of ph and nitrogen
for enhanced hydrogen production by synechocystis sp. pcc 6803 via statistical and machine learning
methods. Biotechnology Progress, 25(4):1009?1017, 2009.
[33] J. E. Hasbun. In Classical Mechanics with MATLAB Applications. Jones & Bartlett Learning, 2008.
[34] E. Westervelt and J. Grizzle. Feedback Control of Dynamic Bipedal Robot Locomotion. Control and
Automation Series. CRC PressINC, 2007.
9
| 5804 |@word exploitation:2 version:1 repository:1 pcc:1 mockus:1 simulation:3 eng:1 covariance:2 accounting:1 tr:1 ld:2 reduction:1 initial:1 contains:1 lichman:1 selecting:4 rippel:1 series:1 denoting:1 outperforms:1 freitas:1 current:2 obp:1 dx:1 must:2 written:1 yet:2 wx:1 analytic:4 burdick:1 plot:1 drop:1 sundaram:1 aside:1 stationary:2 greedy:20 fewer:3 discovering:1 guess:1 prohibitive:1 generative:1 inspection:1 selected:1 ith:1 core:1 ziemba:1 location:11 preference:1 toronto:1 firstly:1 org:1 height:2 branin:1 along:1 constructed:2 differential:4 consists:1 doubly:2 manner:1 x0:13 snoek:2 expected:8 themselves:2 kiros:1 multi:2 mechanic:1 alberta:1 increasing:1 becomes:1 maximizes:2 medium:1 mass:2 fuel:2 what:1 developed:2 whilst:11 finding:3 bootstrapping:1 berkeley:1 every:1 concave:1 xd:1 finance:2 xq0:4 exactly:3 growth:1 scaled:1 returning:1 uk:2 control:2 appear:1 before:3 positive:2 engineering:4 treat:1 severely:1 consequence:1 servations:1 approximately:4 contal:1 black:5 might:1 initialization:1 studied:1 xbest:3 suggests:1 shaded:1 co:3 range:1 practical:2 regret:44 procedure:3 powell:1 area:1 empirical:2 drug:1 maxx:1 significantly:3 matching:3 convenient:4 confidence:5 zoubin:2 suggest:1 close:2 selection:7 marginalize:1 mediocre:1 live:1 greediness:2 wong:1 optimize:2 equivalent:2 map:10 yt:1 maximizing:1 annealed:1 williams:1 lobato:2 starting:1 independently:1 formalized:1 pure:2 handle:1 exploratory:1 hurt:1 analogous:1 qq:1 enhanced:1 suppose:1 user:1 carl:1 locomotion:1 element:1 expensive:6 approximated:5 utilized:1 updating:1 walking:1 bache:1 database:1 observed:3 ep:9 bottom:1 wang:2 worst:1 region:4 qq0:5 trade:1 highest:3 ahmad:1 disease:1 cam:2 dynamic:1 trained:2 depend:1 rewrite:1 predictive:19 easily:2 joint:2 multimodal:1 aei:4 various:2 stacked:1 train:2 fast:2 monte:3 query:6 metamodeling:1 choosing:5 exhaustive:1 heuristic:1 larger:3 supplementary:1 say:1 ucbpe:6 asm:1 amar:1 gp:10 noisy:7 tsp:1 final:1 housing:1 advantage:3 differentiable:3 gait:3 propose:1 product:1 relevant:1 combining:1 uci:1 date:1 rapidly:1 poorly:1 ymax:3 intuitive:1 pronounced:1 ijcai:1 empty:1 optimum:3 plethora:1 assessing:1 adam:2 derive:1 develop:5 ac:2 illustrate:1 informs:1 measured:1 progress:2 job:1 strong:1 launch:3 involves:1 larochelle:1 submodularity:1 attribute:1 stochastic:1 exploration:4 material:1 implementing:1 crc:1 require:1 argued:1 behaviour:1 f1:1 andez:2 secondly:1 exploring:1 practically:1 considered:6 ground:4 deciding:3 exp:1 mapping:1 vary:1 optimizer:2 earth:4 estimation:2 iw:1 argmaxx0:1 largest:1 successfully:1 hoffman:1 cora:1 mit:1 gaussian:32 aim:8 rather:2 avoid:1 varying:1 wilson:1 probabilistically:1 corollary:1 ax:1 focus:1 improvement:5 consistently:1 frazier:1 fq:10 likelihood:1 sequencing:1 seeger:2 lizotte:2 greedily:6 baseline:2 sense:2 posteriori:1 inference:1 typically:1 initially:1 cunningham:1 bandit:2 limm:1 interested:1 selects:1 dual:1 flexible:1 aforementioned:1 hartmann:3 mackay:1 mutual:2 marginal:1 equal:1 construct:3 once:1 apriori:1 sampling:11 identical:1 broad:1 look:1 jones:1 icml:4 future:1 report:3 few:4 distibution:1 gamma:1 resulted:1 individual:1 argmax:3 shekel:3 ab:1 highly:3 evaluation:13 mixture:1 bipedal:2 amenable:5 integral:3 necessary:1 bacteria:1 literally:1 filled:1 hyperprior:1 desired:3 vickson:1 column:2 modeling:2 zn:1 maximization:1 cost:2 subset:1 entry:3 satish:1 too:2 characterize:1 dependency:2 synthetic:6 chooses:2 st:41 density:3 fundamental:1 peak:3 recht:1 off:1 invoke:1 squared:1 
thesis:3 containing:2 choose:9 resort:2 derivative:4 return:9 yeo:1 suggesting:3 de:1 student:2 summarized:1 includes:2 flatter:1 automation:1 ely:1 rocket:9 depends:2 tion:1 azimi:1 bucb:9 start:1 houlsby:1 parallel:19 contribution:1 minimize:3 square:2 collaborative:1 variance:3 maximized:1 correspond:2 bayesian:19 rejecting:1 produced:1 fern:3 carlo:3 economical:1 worth:2 converged:1 tended:2 definition:1 nitrogen:2 minka:1 r45:1 gain:5 sampled:2 dataset:3 treatment:1 popular:1 recall:2 knowledge:5 occured:1 formalize:1 brochu:1 actually:1 appears:2 higher:1 modal:1 maximally:1 gutin:1 formulation:2 evaluated:6 box:3 anderson:1 rejected:2 implicit:1 until:3 yq0:3 traveling:1 horizontal:1 ei:14 replacing:1 cohn:1 maximizer:18 propagation:5 mode:3 scientific:1 believe:2 name:1 normalized:1 true:1 y2:2 analytically:5 hence:5 chemical:1 symmetric:2 moore:1 neal:1 bowling:1 uniquely:1 noted:1 robicquet:1 cosine:2 criterion:2 pdf:1 theoretic:1 demonstrate:3 performs:2 l1:1 novel:1 dy1:1 ef:1 common:2 physical:1 conditioning:1 thirdly:1 extend:1 discussed:3 approximates:1 perfomance:1 kluwer:1 significant:1 cambridge:2 imposing:2 rd:1 automatic:1 mathematics:1 had:2 robot:9 surface:6 add:2 posterior:18 multivariate:3 grizzle:1 noisily:2 optimizing:4 store:1 came:1 patwary:1 exploited:1 impose:2 mr:1 employed:1 converge:1 maximize:12 bochner:2 ii:7 multiple:3 full:1 reduces:1 rahimi:1 smooth:2 technical:2 calculation:1 lin:1 prediction:1 variant:1 involving:1 regression:1 controller:2 noiseless:1 expectation:10 df:1 scalable:1 arxiv:2 iteration:2 kernel:11 represent:2 robotics:1 buffoni:1 vayatis:1 background:1 remarkably:1 uninformative:1 krause:2 median:6 launched:1 posse:1 ascent:1 seem:1 practitioner:2 paticular:1 iii:1 easy:1 split:1 marginalization:1 variate:1 competing:4 suboptimal:1 inner:1 riche:1 computable:1 tradeoff:1 det:4 whether:1 expression:1 utility:3 assist:1 bartlett:1 dyq:1 biotechnology:1 matlab:1 deep:2 useful:2 amount:1 nonparametric:2 dark:1 band:1 ph:2 generate:1 http:1 exist:2 df1:1 ppes:42 singapore:1 tutorial:1 blue:1 mizer:1 write:2 hyperparameter:1 shall:1 discrete:4 probed:1 hennig:2 key:7 thereafter:1 nevertheless:1 drawn:1 lacoste:1 sum:1 run:4 angle:2 everywhere:1 you:4 uncertainty:1 named:1 extends:1 swersky:1 family:2 decide:2 asynchronicity:1 draw:2 decision:6 huszar:1 bound:4 shan:1 ahead:2 occur:1 constraint:2 your:1 westervelt:1 x2:2 lightly:1 aspect:1 speed:2 argument:1 min:1 extremely:1 integrand:1 fourier:2 relatively:1 department:2 developing:1 combination:1 poor:2 beneficial:1 slightly:2 smaller:1 kakade:1 making:4 medoids:1 taken:5 computationally:1 equation:2 visualization:1 hern:2 discus:3 synechocystis:1 know:2 tractable:2 travelling:1 salesman:2 available:2 gaussians:2 probe:3 hierarchical:1 enforce:1 spectral:1 save:1 batch:38 alternative:1 shah:2 top:1 remaining:1 include:1 running:1 maintaining:1 ghahramani:4 approximating:2 classical:1 objective:38 quantity:3 occurs:1 strategy:12 costly:1 rt:4 enhances:1 gradient:6 detrimental:1 simulated:2 berlin:1 chris:1 considers:1 assuming:2 length:1 pointwise:2 cq:4 illustration:1 difficult:3 unfortunately:2 potentially:1 statement:1 expense:2 trace:3 negative:2 design:4 policy:5 unknown:3 perform:2 upper:4 discretize:1 observation:11 vertical:1 sm:4 benchmark:2 immediate:8 defining:1 y1:3 parallelizing:1 introduced:1 populates:2 pair:2 required:1 mechanical:1 optimized:2 learned:1 distinction:1 established:1 discontinuity:1 nip:5 trans:1 dth:1 suggested:1 able:2 appeared:1 challenge:1 including:3 
max:1 belief:4 indicator:2 representing:1 scheme:1 technology:1 yq:18 julien:1 naive:3 columbia:1 xq:18 text:1 prior:5 literature:2 l2:1 review:1 discovery:2 multiplication:1 chaplen:1 relative:1 fully:1 lecture:1 interesting:1 var:1 versus:1 desautels:1 consistent:1 heavy:1 production:1 row:4 uncountably:1 land:1 prone:1 placed:1 rasmussen:1 infeasible:2 institute:1 taking:1 benefit:2 distributed:2 feedback:1 dimension:1 xn:1 world:6 evaluating:6 seemed:1 computes:1 author:1 made:4 reinforcement:1 ginsbourger:1 avoided:2 far:3 approximate:9 compact:1 implicitly:1 dealing:1 global:22 sequentially:6 proximation:1 investigating:2 active:2 assumed:1 evaluted:1 recommending:1 search:11 hydrogen:7 continuous:1 tailed:1 schuler:1 molecule:1 inherently:1 symmetry:1 schuurmans:1 argmaxx:5 heidelberg:1 domain:2 aistats:1 sp:1 noise:4 hyperparameters:2 x1:4 site:4 representative:1 rithm:1 darker:1 fails:2 wish:1 xh:1 explicit:1 exponential:2 lie:1 pe:4 burrow:1 jmlr:1 theorem:1 british:1 shade:1 xt:5 specific:1 faint:1 decay:1 concern:2 intractable:2 consist:1 sequential:2 phd:3 illustrates:1 egy:1 rejection:4 boston:4 entropy:23 led:1 univariate:1 infinitely:1 likely:1 explore:1 failed:1 ordered:1 scalar:1 recommendation:2 springer:1 truth:4 cdf:1 goal:1 onestep:1 infinite:1 uniformly:2 justify:1 miss:1 total:1 called:1 e:1 experimental:1 m3:1 ucb:11 domination:1 select:6 formally:1 support:1 absolutely:1 incorporate:1 evaluate:8 mcmc:14 princeton:1 srinivas:1 |
5,308 | 5,805 | Large-scale probabilistic predictors with and without
guarantees of validity
?
Vladimir Vovk? , Ivan Petej? , and Valentina Fedorova?
Department of Computer Science, Royal Holloway, University of London, UK
?
Yandex, Moscow, Russia
{volodya.vovk,ivan.petej,alushaf}@gmail.com
Abstract
This paper studies theoretically and empirically a method of turning machinelearning algorithms into probabilistic predictors that automatically enjoys a property of validity (perfect calibration) and is computationally efficient. The price to
pay for perfect calibration is that these probabilistic predictors produce imprecise
(in practice, almost precise for large data sets) probabilities. When these imprecise probabilities are merged into precise probabilities, the resulting predictors,
while losing the theoretical property of perfect calibration, are consistently more
accurate than the existing methods in empirical studies.
1
Introduction
Prediction algorithms studied in this paper belong to the class of Venn?Abers predictors, introduced
in [1]. They are based on the method of isotonic regression [2] and prompted by the observation that
when applied in machine learning the method of isotonic regression often produces miscalibrated
probability predictions (see, e.g., [3, 4]); it has also been reported ([5], Section 1) that isotonic
regression is more prone to overfitting than Platt?s scaling [6] when data is scarce. The advantage
of Venn?Abers predictors is that they are a special case of Venn predictors ([7], Chapter 6), and so
([7], Theorem 6.6) are always well-calibrated (cf. Proposition 1 below). They can be considered to
be a regularized version of the procedure used by [8], which helps them resist overfitting.
The main desiderata for Venn (and related conformal, [7], Chapter 2) predictors are validity, predictive efficiency, and computational efficiency. This paper introduces two computationally efficient
versions of Venn?Abers predictors, which we refer to as inductive Venn?Abers predictors (IVAPs)
and cross-Venn?Abers predictors (CVAPs). The ways in which they achieve the three desiderata
are:
? Validity (in the form of perfect calibration) is satisfied by IVAPs automatically, and the
experimental results reported in this paper suggest that it is inherited by CVAPs.
? Predictive efficiency is determined by the predictive efficiency of the underlying learning
algorithms (so that the full arsenal of methods of modern machine learning can be brought
to bear on the prediction problem at hand).
? Computational efficiency is, again, determined by the computational efficiency of the underlying algorithm; the computational overhead of extracting probabilistic predictions consists of sorting (which takes time O(n log n), where n is the number of observations) and
other computations taking time O(n).
An advantage of Venn prediction over conformal prediction, which also enjoys validity guarantees,
is that Venn predictors output probabilities rather than p-values, and probabilities, in the spirit of
Bayesian decision theory, can be easily combined with utilities to produce optimal decisions.
1
In Sections 2 and 3 we discuss IVAPs and CVAPs, respectively. Section 4 is devoted to minimax ways of merging imprecise probabilities into precise probabilities and thus making IVAPs and
CVAPs precise probabilistic predictors.
In this paper we concentrate on binary classification problems, in which the objects to be classified
are labelled as 0 or 1. Most of machine learning algorithms are scoring algorithms, in that they
output a real-valued score for each test object, which is then compared with a threshold to arrive at
a categorical prediction, 0 or 1. As precise probabilistic predictors, IVAPs and CVAPs are ways of
converting the scores for test objects into numbers in the range [0, 1] that can serve as probabilities,
or calibrating the scores. In Section 5 we briefly discuss two existing calibration methods, Platt?s
[6] and the method [8] based on isotonic regression. Section 6 is devoted to experimental comparisons and shows that CVAPs consistently outperform the two existing methods (more extensive
experimental studies can be found in [9]).
2
Inductive Venn?Abers predictors (IVAPs)
In this paper we consider data sequences (usually loosely referred to as sets) consisting of observations z = (x, y), each observation consisting of an object x and a label y ? {0, 1}; we only consider
binary labels. We are given a training set whose size will be denoted l.
This section introduces inductive Venn?Abers predictors. Our main concern is how to implement
them efficiently, but as functions, an IVAP is defined in terms of a scoring algorithm (see the last
paragraph of the previous section) as follows:
? Divide the training set of size l into two subsets, the proper training set of size m and the
calibration set of size k, so that l = m + k.
? Train the scoring algorithm on the proper training set.
? Find the scores s1 , . . . , sk of the calibration objects x1 , . . . , xk .
? When a new test object x arrives, compute its score s. Fit isotonic regression
to (s1 , y1 ), . . . , (sk , yk ), (s, 0) obtaining a function f0 . Fit isotonic regression to
(s1 , y1 ), . . . , (sk , yk ), (s, 1) obtaining a function f1 . The multiprobability prediction
for the label y of x is the pair (p0 , p1 ) := (f0 (s), f1 (s)) (intuitively, the prediction is that
the probability that y = 1 is either f0 (s) or f1 (s)).
Notice that the multiprobability prediction (p0 , p1 ) output by an IVAP always satisfies p0 < p1 , and
so p0 and p1 can be interpreted as the lower and upper probabilities, respectively; in practice, they
are close to each other for large training sets.
First we state formally the property of validity of IVAPs (adapting the approach of [1] to IVAPs).
A random variable P taking values in [0, 1] is perfectly calibrated (as a predictor) for a random
variable Y taking values in {0, 1} if E(Y | P ) = P a.s. A selector is a random variable taking
values in {0, 1}. As a general rule, in this paper random variables are denoted by capital letters (e.g.,
X are random objects and Y are random labels).
Proposition 1. Let (P0 , P1 ) be an IVAP?s prediction for X based on a training sequence
(X1 , Y1 ), . . . , (Xl , Yl ). There is a selector S such that PS is perfectly calibrated for Y provided the
random observations (X1 , Y1 ), . . . , (Xl , Yl ), (X, Y ) are i.i.d.
Our next proposition concerns the computational efficiency of IVAPs; Proposition 1 will be proved
later in this section while Proposition 2 is proved in [9].
Proposition 2. Given the scores s1 , . . . , sk of the calibration objects, the prediction rule for computing the IVAP?s predictions can be computed in time O(k log k) and space O(k). Its application
to each test object takes time O(log k). Given the sorted scores of the calibration objects, the prediction rule can be computed in time and space O(k).
Proofs of both statements rely on the geometric representation of isotonic regression as the slope of
the GCM (greatest convex minorant) of the CSD (cumulative sum diagram): see [10], pages 9?13
(especially Theorem 1.1). To make our exposition more self-contained, we define both GCM and
CSD below.
2
First we explain how to fit isotonic regression to (s1 , y1 ), . . . , (sk , yk ) (without necessarily assuming that si are the calibration scores and yi are the calibration labels, which will be needed to cover
the use of isotonic regression in IVAPs). We start from sorting all scores s1 , . . . , sk in the increasing
order and removing the duplicates. (This is the most computationally expensive step in our calibration procedure, O(k log k) in the worst case.) Let k 0 ? k be the number of distinct elements among
s1 , . . . , sk , i.e., the cardinality of the set {s1 , . . . , sk }. Define s0j , j = 1, .. . , k 0 , to be the jth small
est element of {s1 , . . . , sk }, so that s01 < s02 < ? ? ? < s0k0 . Define wj := i = 1, . . . , k : si = s0j
to be the number of times s0j occurs among s1 , . . . , sk . Finally, define
yj0 :=
1
wj
X
yi
i=1,...,k:si =s0j
to be the average label corresponding to si = s0j .
The CSD of (s1 , y1 ), . . . , (sk , yk ) is the set of points
?
?
i
i
X
X
Pi := ?
wj ,
yj0 wj ? ,
j=1
i = 0, 1, . . . , k 0 ;
j=1
in particular, P0 = (0, 0). The GCM is the greatest convex minorant of the CSD. The value at s0i ,
i = 1, . . . , k 0 , of the isotonic regression fitted to (s1 , y1 ), . . . , (sk , yk ) is defined to be the slope of
Pi?1
Pi
the GCM between j=1 wj and j=1 wj ; the values at other s are somewhat arbitrary (namely,
the value at s ? (s0i , s0i+1 ) can be set to anything between the left and right slopes of the GCM at
Pi
j=1 wj ) but are never needed in this paper (unlike in the standard use of isotonic regression in
machine learning, [8]): e.g., f1 (s) is the value of the isotonic regression fitted to a sequence that
already contains (s, 1).
Proof of Proposition 1. Set S := Y . The statement of the proposition even holds conditionally
on knowing the values of (X1 , Y1 ), . . . , (Xm , Ym ) and the multiset *(Xm+1 , Ym+1 ), . . . , (Xl , Yl ),
(X, Y )+; this knowledge allows us to compute the scores *s1 , . . . , sk , s+ of the calibration objects
Xm+1 , . . . , Xl and the test object X. The only remaining randomness is over the equiprobable
permutations of (Xm+1 , Ym+1 ), . . . , (Xl , Yl ), (X, Y ); in particular, (s, Y ) is drawn randomly from
the multiset *(s1 , Ym+1 ), . . . , (sk , Yl ), (s, Y )+. It remains to notice that, according to the GCM
construction, the average label of the calibration and test observations corresponding to a given
value of PS is equal to PS .
The idea behind computing the pair (f0 (s), f1 (s)) efficiently is to pre-compute two vectors F 0 and
F 1 storing f0 (s) and f1 (s), respectively, for all possible values of s. Let k 0 and s0i be as defined
above in the case where s1 , . . . , sk are the calibration scores and y1 , . . . , yk are the corresponding
labels. The vectors F 0 and F 1 are of length k 0 , and for all i = 1, . . . , k 0 and both ? {0, 1}, Fi is
the value of f (s) when s = s0i . Therefore, for all i = 1, . . . , k 0 :
? Fi1 is also the value of f1 (s) when s is just to the left of s0i ;
? Fi0 is also the value of f0 (s) when s is just to the right of s0i .
Since f0 and f1 can change their values only at the points s0i , the vectors F 0 and F 1 uniquely
determine the functions f0 and f1 , respectively. For details of computing F 0 and F 1 , see [9].
Remark. There are several algorithms for performing isotonic regression on a partially, rather than
linearly, ordered set: see, e.g., [10], Section 2.3 (although one of the algorithms described in that
section, the Minimax Order Algorithm, was later shown to be defective [11, 12]). Therefore, IVAPs
(and CVAPs below) can be defined in the situation where scores take values only in a partially ordered set; moreover, Proposition 1 will continue to hold. (For the reader familiar with the notion
of Venn predictors we could also add that Venn?Abers predictors will continue to be Venn predictors, which follows from the isotonic regression being the average of the original function over
certain equivalence classes.) The importance of partially ordered scores stems from the fact that
they enable us to benefit from a possible ?synergy? between two or more prediction algorithms
3
Algorithm 1 CVAP(T, x) // cross-Venn?Abers predictor for training set T
1: split the training set T into K folds T1 , . . . , TK
2: for k ? {1, . . . , K}
3:
(pk0 , pk1 ) := IVAP(T \ Tk , Tk , x)
4: return GM(p1 )/(GM(1 ? p0 ) + GM(p1 ))
[13]. Suppose, e.g., that one prediction algorithm outputs (scalar) scores s11 , . . . , s1k for the calibration objects x1 , . . . , xk and another outputs s21 , . . . , s2k for the same calibration objects; we would
like to use both sets of scores. We could merge the two sets of scores into composite vector scores,
si := (s1i , s2i ), i = 1, . . . , k, and then classify a new object x as described earlier using its composite
score s := (s1 , s2 ), where s1 and s2 are the scalar scores computed by the two algorithms. Preliminary results reported in [13] in a related context suggest that the resulting predictor can outperform
predictors based on the individual scalar scores. However, we will not pursue this idea further in
this paper.
3
Cross Venn?Abers predictors (CVAPs)
A CVAP is just a combination of K IVAPs, where K is the parameter of the algorithm. It is described
as Algorithm 1, where IVAP(A, B, x) stands for the output of IVAP applied to A as proper training
set, B as calibration set, and x as test object, and GM stands for geometric mean (so that GM(p1 ) is
1
K
the geometric mean of p11 , . . . , pK
1 and GM(1?p0 ) is the geometric mean of 1?p0 , . . . , 1?p0 ). The
folds should be of approximately equal size, and usually the training set is split into folds at random
(although we choose contiguous folds in Section 6 to facilitate reproducibility). One way to obtain
a random assignment of the training observations to folds (see line 1) is to start from a regular array
in which the first l1 observations are assigned to fold 1, the following l2 observations are assigned
to fold 2, up to the last lK observations which are assigned to fold K, where |lk ? l/K| < 1 for all
k, and then to apply a random permutation. Remember that the procedure R ANDOMIZE - IN -P LACE
([14], Section 5.3) can do the last step in time O(l). See the next section for a justification of the
expression GM(p1 )/(GM(1 ? p0 ) + GM(p1 )) used for merging the IVAPs? outputs.
4
Making probability predictions out of multiprobability ones
In CVAP (Algorithm 1) we merge the K multiprobability predictions output by K IVAPs. In this
section we design a minimax way for merging them, essentially following [1]. For the log-loss
function the result is especially simple, GM(p1 )/(GM(1 ? p0 ) + GM(p1 )).
Let us check that GM(p1 )/(GM(1 ? p0 ) + GM(p1 )) is indeed the minimax expression under log
K
loss. Suppose the pairs of lower and upper probabilities to be merged are (p10 , p11 ), . . . , (pK
0 , p1 )
and the merged probability is p. The extra cumulative loss suffered by p over the correct members
p11 , . . . , pK
1 of the pairs when the true label is 1 is
log
p11
pK
+ ? ? ? + log 1 ,
p
p
(1)
and the extra cumulative loss of p over the correct members of the pairs when the true label is 0 is
log
1 ? p10
1 ? pK
0
+ ? ? ? + log
.
1?p
1?p
(2)
Equalizing the two expressions we obtain
p11 ? ? ? pK
(1 ? p10 ) ? ? ? (1 ? pK
1
0 )
=
,
K
K
p
(1 ? p)
which gives the required minimax expression for the merged probability (since (1) is decreasing and
(2) is increasing in p).
For the computations in the case of the Brier loss function, see [9].
4
Notice that the argument above (?conditioned? on the proper training set) is also applicable to IVAP,
in which case we need to set K := 1; the probability predictor obtained from an IVAP by replacing
(p0 , p1 ) with p := p1 /(1 ? p0 + p1 ) will be referred to as the log-minimax IVAP. (And CVAP is
log-minimax by definition.)
5
Comparison with other calibration methods
The two alternative calibration methods that we consider in this paper are Platt?s [6] and isotonic
regression [8].
5.1
Platt?s method
Platt?s [6] method uses sigmoids to calibrate the scores. Platt uses a regularization procedure ensuring that the predictions of his method are always in the range
1
k+ + 1
,
,
k? + 2 k+ + 2
where k? is the number of calibration observations labelled 0 and k+ is the number of calibration
observations labelled 1. It is interesting that the predictions output by the log-minimax IVAP are in
the same range (except that the end-points are now allowed): see [9].
5.2
Isotonic regression
There are two standard uses of isotonic regression: we can train the scoring algorithm using what
we call a proper training set, and then use the scores of the observations in a disjoint calibration
(also called validation) set for calibrating the scores of test objects (as in [5]); alternatively, we can
train the scoring algorithm on the full training set and also use the full training set for calibration (it
appears that this was done in [8]). In both cases, however, we can expect to get an infinite log loss
when the test set becomes large enough (see [9]).
The presence of regularization is an advantage of Platt?s method: e.g., it never suffers an infinite
loss when using the log loss function. There is no standard method of regularization for isotonic
regression, and we do not apply one.
6
Empirical studies
The main loss function (cf., e.g., [15]) that we use in our empirical studies is the log loss
? log p
if y = 1
?log (p, y) :=
? log(1 ? p) if y = 0,
(3)
where log is binary logarithm, p ? [0, 1] is a probability prediction, and y ? {0, 1} is the true label.
Another popular loss function is the Brier loss
?Br (p, y) := 4(y ? p)2 .
(4)
We choose the coefficient 4 in front of (y ? p)2 in (4) and the base 2 of the logarithm in (3) in
order for the minimax no-information predictor that always predicts p := 1/2 to suffer loss 1.
An advantage of the Brier loss function is that it still makes it possible to compare the quality of
prediction in cases when prediction algorithms (such as isotonic regression) give a categorical but
wrong prediction (and so are simply regarded as infinitely bad when using log loss).
The loss of a probability predictor on a test set will be measured by the arithmetic average of the
losses it suffers on the test set, namely, by the mean log loss (MLL) and the mean Brier loss (MBL)
n
MLL :=
n
1X
?log (pi , yi ),
n i=1
MBL :=
1X
?Br (pi , yi ),
n i=1
(5)
where yi are the test labels and pi are the probability predictions for them. We will not be checking
directly whether various calibration methods produce well-calibrated predictions, since it is well
5
known that lack of calibration increases the loss as measured by loss functions such as log loss
and Brier loss (see, e.g., [16] for the most standard decomposition of the latter into the sum of the
calibration error and refinement error).
In this section we compare log-minimax IVAPs (i.e., IVAPs whose outputs are replaced by probability predictions, as explained in Section 4) and CVAPs with Platt?s method [6] and the standard
method [8] based on isotonic regression; the latter two will be referred to as ?Platt? and ?Isotonic?
in our tables and figures. For both IVAPs and CVAPs we use the log-minimax procedure (the
Brier-minimax procedure leads to virtually identical empirical results). We use the same underlying algorithms as in [1], namely J48 decision trees (abbreviated to ?J48?), J48 decision trees with
bagging (?J48 bagging?), logistic regression (sometimes abbreviated to ?logistic?), na??ve Bayes,
neural networks, and support vector machines (SVM), as implemented in Weka [17] (University of
Waikato, New Zealand). The underlying algorithms (except for SVM) produce scores in the interval
[0, 1], which can be used directly as probability predictions (referred to as ?Underlying? in our tables
and figures) or can be calibrated using the methods of [6, 8] or the methods proposed in this paper
(?IVAP? or ?CVAP? in the tables and figures).
For illustrating our results in this paper we use the adult data set available from the UCI repository
[18] (this is the main data set used in [6] and one of the data sets used in [8]); however, the picture
that we observe is typical for other data sets as well (cf. [9]). We use the original split of the data set
into a training set of Ntrain = 32, 561 observations and a test set of Ntest = 16, 281 observations.
The results of applying the four calibration methods (including the vacuous one, corresponding to
just using the underlying algorithm) to the six underlying algorithms for this data set are shown
in Figure 1. The six top plots report results for the log loss (namely, MLL, as defined in (5)) and
the six bottom plots for the Brier loss (namely, MBL). The underlying algorithms are given in the
titles of the plots and the calibration methods are represented by different line styles, as explained
in the legends. The horizontal axis is labelled by the ratio of the size of the proper training set to
that of the calibration set (except for the label all, which will be explained later); in particular,
in the case of CVAPs it is labelled by the number K ? 1 one less than the number of folds. In
the case of CVAPs, the training set is split into K equal (or as close to being equal as possible)
contiguous folds: the first dNtrain /Ke training observations are included in the first fold, the next
dNtrain /Ke (or bNtrain /Kc) in the second fold, etc. (first d?e and then b?c is used unless Ntrain is
divisible by K). In the case of the other calibration methods, we used the first d K?1
K Ntrain e training
observation as the proper training set (used for training the scoring algorithm) and the rest of the
training observations are used as the calibration set.
In the case of log loss, isotonic regression often suffers infinite losses, which is indicated by the
absence of the round marker for isotonic regression; e.g., all the log losses for SVM are infinite. We
are not trying to use ad hoc solutions, such as clipping predictions to the interval [, 1 ? ] for a small
> 0, since we are also using the bounded Brier loss function. The CVAP lines tend to be at the
bottom in all plots; experiments with other data sets also confirm this.
The column all in the plots of Figure 1 refers to using the full training set as both the proper
training set and calibration set. (In our official definition of IVAP we require that the last two sets
be disjoint, but in this section we continue to refer to IVAPs modified in this way simply as IVAPs;
in [1], such prediction algorithms were referred to as SVAPs, simplified Venn?Abers predictors.)
Using the full training set as both the proper training set and calibration set might appear na??ve (and
is never used in the extensive empirical study [5]), but it often leads to good empirical results on
larger data sets. However, it can also lead to very poor results, as in the case of ?J48 bagging? (for
IVAP, Platt, and Isotonic), the underlying algorithm that achieves the best performance in Figure 1.
A natural question is whether CVAPs perform better than the alternative calibration methods in Figure 1 (and our other experiments) because of applying cross-over (in moving from IVAP to CVAP)
or because of the extra regularization used in IVAPs. The first reason is undoubtedly important for
both loss functions and the second for the log loss function. The second reason plays a smaller role
for Brier loss for relatively large data sets (in the lower half of Figure 1 the curve for Isotonic
and IVAP are very close to each other), but IVAPs are consistently better for smaller data sets even
when using Brier loss. In Tables 1 and 2 we apply the four calibration methods and six underlying
algorithms to a much smaller training set, namely to the first 5, 000 observations of the adult data
set as the new training set, following [5]; the first 4, 000 training observations are used as the proper
training set, the following 1, 000 training observations as the calibration set, and all other observa-
6
Log Loss
J48
J48 bagging
0.540
logistic regression
0.480
Underlying
Platt
Isotonic
IVAP
CVAP
0.520
0.500
0.496
Underlying
Platt
Isotonic
IVAP
CVAP
0.460
0.480
Underlying
Isotonic
IVAP
CVAP
0.494
0.492
0.490
0.440
0.460
0.488
0.440
0.420
1
2
3
4
all
1
2
3
4
all
0.486
1
2
neural networks
naive Bayes
Isotonic
IVAP
CVAP
0.458
0.460
0.454
0.450
all
0.505
Underlying
Platt
Isotonic
IVAP
CVAP
0.470
0.456
4
SVM
0.480
0.460
3
Platt
Isotonic
IVAP
CVAP
0.500
0.495
0.452
0.440
1
2
3
4
all
1
2
3
4
all
0.490
1
Brier Loss
J48
2
0.420
0.410
all
0.4310
0.400
Underlying
Platt
Isotonic
IVAP
CVAP
4
logistic regression
J48 bagging
0.430
3
Student Version of MATLAB
Underlying
Platt
Isotonic
IVAP
CVAP
0.390
Underlying
Isotonic
IVAP
CVAP
0.4305
0.4300
0.380
0.400
0.4295
0.370
0.390
0.380
1
2
3
4
all
0.360
0.4290
0.4285
1
2
naive Bayes
3
4
all
1
2
neural networks
0.460
0.440
4
all
SVM
0.420
Underlying
Platt
Isotonic
IVAP
CVAP
3
0.438
Underlying
Platt
Isotonic
IVAP
CVAP
0.410
Platt
Isotonic
IVAP
CVAP
0.436
0.434
0.420
0.400
0.400
0.390
0.432
1
2
3
4
all
1
2
3
4
all
0.430
1
2
3
4
all
Student Version of MATLAB
Figure 1: The log and Brier losses of the four calibration methods applied to the six prediction
algorithms on the adult data set. The numbers on the horizontal axis are ratios m/k of the size
of the proper training set to the size of the calibration set; in the case of CVAPs, they can also be
expressed as K ?1, where K is the number of folds (therefore, column 4 corresponds to the standard
choice of 5 folds in the method of cross-validation). Missing curves or points on curves mean that
the corresponding values either are too big and would squeeze unacceptably the interesting parts of
the plot if shown or are infinite (such as many results for isotonic regression and J48 for log loss).
tions (the remaining training and all test observations) are used as the new test set. The results are
shown in Tables 1 for log loss and 2 for Brier loss. They are consistently better for IVAP than for
IR (isotonic regression). Results for nine very small data sets are given in Tables 1 and 2 of [1],
where the results for IVAP (with the full training set used as both proper training and calibration
sets, labelled ?SVA? in the tables in [1]) are consistently (in 52 cases out of the 54 using Brier loss)
better, usually significantly better, than for isotonic regression (referred to as DIR in the tables in
[1]).
The following information might help the reader in reproducing our results (in addition to our code
being publicly available [9]). For each of the standard prediction algorithms within Weka that we
use, we optimise the parameters by minimising the Brier loss on the calibration set, apart from the
column labelled all. (We cannot use the log loss since it is often infinite in the case of isotonic
regression.) We then use the trained algorithm to generate the scores for the calibration and test sets,
which allows us to compute probability predictions using Platt?s method, isotonic regression, IVAP,
7
Table 1: The log loss for the four calibration methods and six underlying algorithms for a small
subset of the adult data set
algorithm
Platt
IR IVAP
CVAP
J48
0.5226 ? 0.5117 0.5102
J48 bagging
0.4949 ? 0.4733 0.4602
logistic
0.5111 ? 0.4981 0.4948
na??ve Bayes
0.5534 ? 0.4839 0.4747
neural networks 0.5175 ? 0.5023 0.4805
SVM
0.5221 ? 0.5015 0.4997
Table 2: The analogue of Table 1 for the Brier loss
algorithm
J48
J48 bagging
logistic
na??ve Bayes
neural networks
SVM
Platt
0.4463
0.4225
0.4470
0.4670
0.4525
0.4550
IR
0.4378
0.4153
0.4417
0.4329
0.4574
0.4450
IVAP
0.4370
0.4123
0.4377
0.4311
0.4440
0.4408
CVAP
0.4368
0.3990
0.4342
0.4227
0.4234
0.4375
and CVAP. All the scores apart from SVM are already in the [0, 1] range and can be used as probability predictions. Most of the parameters are set to their default values, and the only parameters
that are optimised are C (pruning confidence) for J48 and J48 bagging, R (ridge) for logistic regression, L (learning rate) and M (momentum) for neural networks (MultilayerPerceptron), and
C (complexity constant) for SVM (SMO, with the linear kernel); na??ve Bayes does not involve any
parameters. Notice that none of these parameters are ?hyperparameters?, in that they do not control
the flexibility of the fitted prediction rule directly; this allows us to optimize the parameters on the
training set for the all column. In the case of CVAPs, we optimise the parameters by minimising the cumulative Brier loss over all folds (so that the same parameters are used for all folds). To
apply Platt?s method to calibrate the scores generated by the underlying algorithms we use logistic
regression, namely the function mnrfit within MATLAB?s Statistics toolbox. For isotonic regression calibration we use the implementation of the PAVA in the R package fdrtool (namely, the
function monoreg).
For further experimental results, see [9].
7
Conclusion
This paper introduces two new algorithms for probabilistic prediction, IVAP, which can be regarded
as a regularised form of the calibration method based on isotonic regression, and CVAP, which is
built on top of IVAP using the idea of cross-validation. Whereas IVAPs are automatically perfectly
calibrated, the advantage of CVAPs is in their good empirical performance.
This paper does not study empirically upper and lower probabilities produced by IVAPs and CVAPs,
whereas the distance between them provides information about the reliability of the merged probability prediction. Finding interesting ways of using this extra information is one of the directions of
further research.
Acknowledgments
We are grateful to the conference reviewers for numerous helpful comments and observations, to
Vladimir Vapnik for sharing his ideas about exploiting synergy between different learning algorithms, and to participants of the conference Machine Learning: Prospects and Applications (October 2015, Berlin) for their questions and comments. The first author has been partially supported by
EPSRC (grant EP/K033344/1) and AFOSR (grant ?Semantic Completions?). The second and third
authors are grateful to their home institutions for funding their trips to Montr?eal.
8
References
[1] Vladimir Vovk and Ivan Petej. Venn?Abers predictors. In Nevin L. Zhang and Jin Tian, editors,
Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, pages 829?
838, Corvallis, OR, 2014. AUAI Press.
[2] Miriam Ayer, H. Daniel Brunk, George M. Ewing, W. T. Reid, and Edward Silverman. An
empirical distribution function for sampling with incomplete information. Annals of Mathematical Statistics, 26:641?647, 1955.
[3] Xiaoqian Jiang, Melanie Osl, Jihoon Kim, and Lucila Ohno-Machado. Smooth isotonic regression: a new method to calibrate predictive models. AMIA Summits on Translational Science
Proceedings, 2011:16?20, 2011.
[4] Antonis Lambrou, Harris Papadopoulos, Ilia Nouretdinov, and Alex Gammerman. Reliable
probability estimates based on support vector machines for large multiclass datasets. In
Lazaros Iliadis, Ilias Maglogiannis, Harris Papadopoulos, Kostas Karatzas, and Spyros Sioutas,
editors, Proceedings of the AIAI 2012 Workshop on Conformal Prediction and its Applications,
volume 382 of IFIP Advances in Information and Communication Technology, pages 182?191,
Berlin, 2012. Springer.
[5] Rich Caruana and Alexandru Niculescu-Mizil. An empirical comparison of supervised learning algorithms. In Proceedings of the Twenty Third International Conference on Machine
Learning, pages 161?168, New York, 2006. ACM.
[6] John C. Platt. Probabilities for SV machines. In Alexander J. Smola, Peter L. Bartlett, Bernhard
Sch?olkopf, and Dale Schuurmans, editors, Advances in Large Margin Classifiers, pages 61?74.
MIT Press, 2000.
[7] Vladimir Vovk, Alex Gammerman, and Glenn Shafer. Algorithmic Learning in a Random
World. Springer, New York, 2005.
[8] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision
trees and naive Bayesian classifiers. In Carla E. Brodley and Andrea P. Danyluk, editors,
Proceedings of the Eighteenth International Conference on Machine Learning, pages 609?
616, San Francisco, CA, 2001. Morgan Kaufmann.
[9] Vladimir Vovk, Ivan Petej, and Valentina Fedorova. Large-scale probabilistic predictors with
and without guarantees of validity. Technical report, arXiv.org e-Print archive, November
2015. A full version of this paper.
[10] Richard E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. Daniel Brunk. Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley,
London, 1972.
[11] Chu-In Charles Lee. The Min-Max algorithm and isotonic regression. Annals of Statistics,
11:467?477, 1983.
[12] Gordon D. Murray. Nonconvergence of the minimax order algorithm. Biometrika, 70:490?491,
1983.
[13] Vladimir N. Vapnik. Intelligent learning: Similarity control and knowledge transfer. Talk
at the 2015 Yandex School of Data Analysis Conference Machine Learning: Prospects and
Applications, 6 October 2015, Berlin.
[14] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction
to Algorithms. MIT Press, Cambridge, MA, third edition, 2009.
[15] Vladimir Vovk. The fundamental nature of the log loss function. In Lev D. Beklemishev,
Andreas Blass, Nachum Dershowitz, Berndt Finkbeiner, and Wolfram Schulte, editors, Fields
of Logic and Computation II: Essays Dedicated to Yuri Gurevich on the Occasion of His 75th
Birthday, volume 9300 of Lecture Notes in Computer Science, pages 307?318, Cham, 2015.
Springer.
[16] Allan H. Murphy. A new vector partition of the probability score. Journal of Applied Meteorology, 12:595?600, 1973.
[17] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H.
Witten. The WEKA data mining software: an update. SIGKDD Explorations, 11:10?18, 2011.
[18] A. Frank and A. Asuncion. UCI machine learning repository, 2015.
9
| 5805 |@word illustrating:1 briefly:1 version:5 repository:2 essay:1 decomposition:1 p0:15 contains:1 score:28 daniel:2 existing:3 com:1 si:5 gmail:1 chu:1 gurevich:1 john:1 ronald:1 partition:1 s21:1 plot:6 update:1 half:1 intelligence:1 unacceptably:1 ntrain:3 xk:2 wolfram:1 papadopoulos:2 institution:1 provides:1 multiset:2 org:1 zhang:1 mathematical:1 berndt:1 consists:1 overhead:1 paragraph:1 theoretically:1 allan:1 indeed:1 andrea:1 p1:18 brier:17 s1k:1 karatzas:1 decreasing:1 automatically:3 cardinality:1 increasing:2 becomes:1 provided:1 underlying:21 moreover:1 bounded:1 rivest:1 what:1 interpreted:1 pursue:1 finding:1 guarantee:3 nouretdinov:1 remember:1 auai:1 biometrika:1 wrong:1 classifier:2 uk:1 platt:24 control:2 grant:2 appear:1 reid:1 t1:1 jiang:1 lev:1 optimised:1 amia:1 merge:2 approximately:1 might:2 birthday:1 studied:1 equivalence:1 range:4 tian:1 acknowledgment:1 practice:2 implement:1 silverman:1 procedure:6 empirical:9 arsenal:1 adapting:1 composite:2 significantly:1 imprecise:3 pre:1 confidence:1 regular:1 refers:1 suggest:2 get:1 cannot:1 close:3 context:1 applying:2 isotonic:47 optimize:1 restriction:1 reviewer:1 missing:1 eighteenth:1 pk0:1 convex:2 ke:2 zealand:1 rule:4 holmes:1 machinelearning:1 array:1 regarded:2 his:3 notion:1 justification:1 annals:2 construction:1 yj0:2 gm:15 suppose:2 play:1 losing:1 us:3 regularised:1 elkan:1 element:2 expensive:1 summit:1 predicts:1 bottom:2 role:1 epsrc:1 ep:1 worst:1 wj:7 s1i:1 nevin:1 nonconvergence:1 prospect:2 yk:6 complexity:1 trained:1 grateful:2 predictive:4 serve:1 efficiency:7 easily:1 chapter:2 various:1 s2i:1 represented:1 talk:1 train:3 distinct:1 london:2 artificial:1 whose:2 larger:1 valued:1 statistic:3 hoc:1 advantage:5 sequence:3 equalizing:1 uci:2 reproducibility:1 flexibility:1 achieve:1 fi0:1 olkopf:1 squeeze:1 exploiting:1 p:3 produce:5 perfect:4 object:17 help:2 tk:3 tions:1 completion:1 measured:2 school:1 miriam:1 edward:1 implemented:1 concentrate:1 direction:1 merged:5 correct:2 alexandru:1 exploration:1 enable:1 require:1 f1:9 preliminary:1 proposition:9 reutemann:1 hold:2 considered:1 hall:1 algorithmic:1 danyluk:1 achieves:1 applicable:1 label:13 title:1 brought:1 mit:2 always:4 modified:1 rather:2 thirtieth:1 consistently:5 check:1 sigkdd:1 kim:1 helpful:1 inference:1 niculescu:1 pfahringer:1 kc:1 translational:1 classification:1 among:2 denoted:2 ivap:35 special:1 ewing:1 equal:4 field:1 never:3 schulte:1 sampling:1 identical:1 report:2 gordon:1 duplicate:1 equiprobable:1 richard:1 modern:1 randomly:1 intelligent:1 ve:5 individual:1 mll:3 familiar:1 replaced:1 murphy:1 consisting:2 undoubtedly:1 montr:1 mining:1 leiserson:1 introduces:3 arrives:1 behind:1 devoted:2 accurate:1 j48:16 unless:1 tree:3 incomplete:1 loosely:1 divide:1 waikato:1 logarithm:2 observa:1 theoretical:1 fitted:3 column:4 classify:1 earlier:1 eal:1 cover:1 contiguous:2 caruana:1 assignment:1 calibrate:3 clipping:1 subset:2 predictor:30 front:1 too:1 reported:3 dir:1 sv:1 calibrated:7 combined:1 international:2 ifip:1 fundamental:1 probabilistic:8 yl:5 lee:1 ym:4 na:5 again:1 clifford:1 satisfied:1 choose:2 russia:1 style:1 return:1 student:2 coefficient:1 yandex:2 ad:1 later:3 start:2 bayes:6 participant:1 inherited:1 asuncion:1 slope:3 ir:3 publicly:1 kaufmann:1 efficiently:2 bayesian:2 produced:1 none:1 randomness:1 classified:1 explain:1 suffers:3 sharing:1 definition:2 proof:2 proved:2 popular:1 knowledge:2 appears:1 supervised:1 ayer:1 brunk:2 done:1 just:4 smola:1 pk1:1 multiprobability:4 hand:1 horizontal:2 
On the Accuracy of Self-Normalized
Log-Linear Models
Jacob Andreas∗, Maxim Rabinovich∗, Michael I. Jordan, Dan Klein
Computer Science Division, University of California, Berkeley
{jda,rabinovich,jordan,klein}@cs.berkeley.edu
Abstract
Calculation of the log-normalizer is a major computational obstacle in applications of log-linear models with large output spaces. The problem of fast normalizer computation has therefore attracted significant attention in the theoretical and
applied machine learning literature. In this paper, we analyze a recently proposed
technique known as "self-normalization", which introduces a regularization term
in training to penalize log normalizers for deviating from zero. This makes it possible to use unnormalized model scores as approximate probabilities. Empirical
evidence suggests that self-normalization is extremely effective, but a theoretical
understanding of why it should work, and how generally it can be applied, is
largely lacking.
We prove upper bounds on the loss in accuracy due to self-normalization, describe
classes of input distributions that self-normalize easily, and construct explicit examples of high-variance input distributions. Our theoretical results make predictions about the difficulty of fitting self-normalized models to several classes of
distributions, and we conclude with empirical validation of these predictions.
1 Introduction
Log-linear models, a general class that includes conditional random fields (CRFs) and generalized linear models (GLMs), offer a flexible yet tractable approach to modeling conditional probability distributions p(y|x) [1, 2]. When the set of possible y values is large, however, the computational cost of computing a normalizing constant for each x can be prohibitive, involving a summation with many terms, a high-dimensional integral or an expensive dynamic program.
The machine translation community has recently described several procedures for training "self-normalized" log-linear models [3, 4]. The goal of self-normalization is to choose model parameters that simultaneously yield accurate predictions and produce normalizers clustered around unity. Model scores can then be used as approximate surrogates for probabilities, obviating normalizer computation.
In particular, given a model of the form

    p_η(y | x) = e^{η^⊤ T(y,x) − A(η,x)},                                (1)

with

    A(η, x) = log Σ_{y∈Y} e^{η^⊤ T(y,x)},                                (2)

we seek a setting of η such that A(x, η) is close enough to zero (with high probability under p(x)) to be ignored.
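To make (1) and (2) concrete, the following NumPy sketch (our illustration, not code from the paper; the feature matrix, dimensions, and random values are made up) computes A(η, x) with a stabilized log-sum-exp and shows that the error incurred by using raw scores in place of log-probabilities is exactly |A(η, x)|:

```python
import numpy as np

def log_normalizer(eta, T_x):
    """A(eta, x) = log sum_y exp(eta . T(y, x)); T_x holds one row per output y."""
    scores = T_x @ eta
    m = scores.max()                               # stabilized log-sum-exp
    return m + np.log(np.exp(scores - m).sum())

rng = np.random.default_rng(0)
T_x = rng.normal(size=(4, 3))                      # rows are T(y, x) for 4 outputs
eta = rng.normal(size=3)

A = log_normalizer(eta, T_x)
exact_logp = T_x @ eta - A                         # true log-probabilities
approx_logp = T_x @ eta                            # self-normalized approximation
print(A, np.abs(exact_logp - approx_logp).max())   # the gap is exactly |A|
```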
∗ Authors contributed equally.
This paper aims to understand the theoretical properties of self-normalization. Empirical results have
already demonstrated the efficacy of this approach: for discrete models with many output classes, it
appears that normalizer values can be made nearly constant without sacrificing too much predictive
accuracy, providing dramatic efficiency increases at minimal performance cost.
The broad applicability of self-normalization makes it likely to spread to other large-scale applications of log-linear models, including structured prediction (with combinatorially many output
classes) and regression (with continuous output spaces). But it is not obvious that we should expect
such approaches to be successful: the number of inputs (if finite) can be on the order of millions,
the geometry of the resulting input vectors x highly complex, and the class of functions A(η, x) associated with different inputs quite rich. To find a nontrivial parameter setting with A(η, x) roughly constant seems challenging enough; to require that the corresponding η also lead to good classification results seems too much. And yet for many input distributions that arise in practice, it appears possible to choose η to make A(η, x) nearly constant without having to sacrifice classification accuracy.
Our goal is to bridge the gap between theoretical intuition and practical experience. Previous work
[5] bounds the sample complexity of self-normalizing training procedures for a restricted class of
models, but leaves open the question of how self-normalization interacts with the predictive power
of the learned model. This paper seeks to answer that question. We begin by generalizing the
previously-studied model to a much more general class of distributions, including distributions with
continuous support (Section 3). Next, we provide what we believe to be the first characterization of
the interaction between self-normalization and model accuracy (Section 4). This characterization is
given from two perspectives:
• a bound on the "likelihood gap" between self-normalized and unconstrained models
• a conditional distribution provably hard to represent with a self-normalized model
In Section 5, we present empirical evidence that these bounds correctly characterize the difficulty of
self-normalization, and in the conclusion we survey a set of open problems that we believe merit
further investigation.
2 Problem background
The immediate motivation for this work is a procedure proposed to speed up decoding in a machine
translation system with a neural-network language model [3]. The language model used is a standard
feed-forward neural network, with a ?softmax? output layer that turns the network?s predictions into
a distribution over the vocabulary, where each probability is log-proportional to its output activation.
It is observed that with a sufficiently large vocabulary, it becomes prohibitive to obtain probabilities
from this model (which must be queried millions of times during decoding). To fix this, the language
model is trained with the following objective:
    max_W Σ_i [ N(y_i | x_i; W) − log Σ_{y′} e^{N(y′ | x_i; W)} − α ( log Σ_{y′} e^{N(y′ | x_i; W)} )² ]
where N(y | x; W) is the response of output y in the neural net with weights W given an input x. From a Lagrangian perspective, the extra penalty term simply confines the W to the set of "empirically normalizing" parameters, for which all log-normalizers are close (in squared error) to the origin. For a suitable choice of α, it is observed that the trained network is simultaneously
accurate enough to produce good translations, and close enough to self-normalized that the raw
scores N(y_i | x_i) can be used in place of log-probabilities without substantial further degradation in
quality.
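A minimal sketch of this penalized objective (our own illustration, not the implementation of [3]; the scores array stands in for the network outputs N(y | x_i; W), and the value of α is arbitrary):

```python
import numpy as np

def selfnorm_loss(scores, y_idx, alpha):
    """Negative penalized objective for one batch.

    scores[i, y] plays the role of N(y | x_i; W); y_idx[i] is the observed
    output for example i. The alpha term penalizes squared log-normalizers.
    """
    m = scores.max(axis=1, keepdims=True)
    logZ = (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True))).ravel()
    loglik = scores[np.arange(len(y_idx)), y_idx] - logZ
    return -(loglik - alpha * logZ ** 2).sum()

scores = np.array([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
print(selfnorm_loss(scores, np.array([0, 1]), alpha=0.1))
```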
We seek to understand the observed success of these models in finding accurate, normalizing parameter settings. While it is possible to derive bounds of the kind we are interested in for general neural
networks [6], in this paper we work with a simpler linear parameterization that we believe captures
the interesting aspects of this problem.¹
¹ It is possible to view a log-linear model as a single-layer network with a softmax output. More usefully, all of the results presented here apply directly to trained neural nets in which the last layer only is retrained to self-normalize [7].
Related work
The approach described at the beginning of this section is closely related to an alternative self-normalization trick based on noise-contrastive estimation (NCE) [8]. NCE is an alternative to direct optimization of likelihood, instead training a classifier to distinguish between true samples from the model, and "noise" samples from some other distribution. The structure of the
training objective makes it possible to replace explicit computation of each log-normalizer with an
estimate. In traditional NCE, these values are treated as part of the parameter space, and estimated
simultaneously with the model parameters; there exist guarantees that the normalizer estimates will
eventually converge to their true values. It is instead possible to fix all of these estimates to one.
In this case, empirical evidence suggests that the resulting model will also exhibit self-normalizing
behavior [4].
A host of other techniques exist for solving the computational problem posed by the log-normalizer.
Many of these involve approximating the associated sum or integral using quadrature [9], herding
[10], or Monte Carlo methods [11]. For the special case of discrete, finite output spaces, an alternative approach, the hierarchical softmax, is to replace the large sum in the normalizer with a
series of binary decisions [12]. The output classes are arranged in a binary tree, and the probability
of generating a particular output is the product of probabilities along the edges leading to it. This
reduces the cost of computing the normalizer from O(k) to O(log k). While this limits the set of
distributions that can be learned, and still requires greater-than-constant time to compute normalizers, it appears to work well in practice. It cannot, however, be applied to problems with continuous
output spaces.
3 Self-normalizable distributions
We begin by providing a slightly more formal characterization of a general log-linear model:
Definition 1 (Log-linear models). Given a space of inputs X, a space of outputs Y, a measure μ on Y, a nonnegative function h : Y → ℝ, and a function T : X × Y → ℝ^d that is μ-measurable with respect to its second argument, we can define a log-linear model indexed by parameters η ∈ ℝ^d, with the form

    p_η(y|x) = h(y) e^{η^⊤ T(x,y) − A(x,η)},                             (3)

where

    A(x, η) = log ∫_Y h(y) e^{η^⊤ T(x,y)} dμ(y).                         (4)

If A(x, η) < ∞, then ∫_Y p_η(y|x) dμ(y) = 1, and p_η(y|x) is a probability density over Y.²
We next formalize our notion of a self-normalized model.
Definition 2 (Self-normalized models). The log-linear model p_η(y|x) is self-normalized with respect to a set S ⊆ X if for all x ∈ S, A(x, η) = 0. In this case we say that S is self-normalizable, and η is self-normalizing w.r.t. S.
An example of a normalizable set is shown in Figure 1a, and we provide additional examples below:
² Some readers may be more familiar with generalized linear models, which also describe exponential family distributions with a linear dependence on input. The presentation here is strictly more general, and has a few notational advantages: it makes explicit the dependence of A on x and η but not y, and lets us avoid tedious bookkeeping involving natural and mean parameterizations [13].
(a) A self-normalizable set S for fixed η: the solutions (x₁, x₂) to A(x, η) = 0 with η^⊤ T(x, y) = η_y^⊤ x and η = {(−1, 1), (−1, −2)}. The set forms a smooth one-dimensional manifold bounded on either side by hyperplanes normal to (−1, 1) and (−1, −2).
(b) Sets of approximately normalizing parameters η for fixed p(x): solutions (η₁, η₂) to E[A(x, η)²] = δ² with T(x, y) = (x + y, −xy), y ∈ {−1, 1} and p(x) uniform on {1, 2}. For a given upper bound on normalizer variance, the feasible set of parameters is nonconvex, and grows as δ increases.
Figure 1: Self-normalizable data distributions and parameter sets.
Example. Suppose

    S = {log 2, − log 2},   Y = {−1, 1},   T(x, y) = [xy, 1],   η = (1, log(2/5)).

Then for either x ∈ S,

    A(x, η) = log(e^{log 2 + log(2/5)} + e^{− log 2 + log(2/5)})
            = log((2/5)(2 + 1/2))
            = 0,

and η is self-normalizing with respect to S.
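This example is easy to verify numerically; a short check (ours):

```python
import numpy as np

def A(x, eta):
    # A(x, eta) = log sum_{y in {-1, 1}} exp(eta . T(x, y)) with T(x, y) = [x*y, 1]
    return np.log(sum(np.exp(eta[0] * x * y + eta[1]) for y in (-1.0, 1.0)))

eta = np.array([1.0, np.log(2.0 / 5.0)])
for x in (np.log(2.0), -np.log(2.0)):
    print(x, A(x, eta))   # both normalizers are 0 up to floating-point error
```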
It is also easy to choose parameters that do not result in a self-normalized distribution, and in fact to construct a target distribution which cannot be self-normalized:
Example. Suppose

    X = {(1, 0), (0, 1), (1, 1)},   Y = {−1, 1},   T(x, y) = (x₁y, x₂y, 1).

Then there is no η such that A(x, η) = 0 for all x, and A(x, η) is constant if and only if η = 0.
As previously motivated, downstream uses of these models may be robust to small errors resulting
from improper normalization, so it would be useful to generalize this definition of normalizable
distributions to distributions that are only approximately normalizable. Exact normalizability of
the conditional distribution is a deterministic statement: there either does or does not exist some
x that violates the constraint. In Figure 1a, for example, it suffices to have a single x off of the
indicated surface to make a set non-normalizable. Approximate normalizability, by contrast, is
inherently a probabilistic statement, involving a distribution p(x) over inputs. Note carefully that
we are attempting to represent p(y|x) but have no representation of (or control over) p(x), and that
approximate normalizability depends on p(x) but not p(y|x).
Informally, if some input violates the self-normalization constraint by a large margin, but occurs
only very infrequently, there is no problem; instead we are concerned with expected deviation. It
is also at this stage that the distinction between penalization of the normalizer vs. log-normalizer
becomes important. The normalizer is necessarily bounded below by zero (so overestimates might
appear much worse than underestimates), while the log-normalizer is unbounded in both directions.
For most applications we are concerned with log probabilities and log-odds ratios, for which an
expected normalizer close to zero is just as bad as one close to infinity. Thus the log-normalizer is
the natural choice of quantity to penalize.
Definition 3 (Approximately self-normalized models). The log-linear distribution p_η(y|x) is δ-approximately normalized with respect to a distribution p(x) over X if E[A(X, η)²] < δ². In this case we say that p(x) is δ-approximately self-normalizable, and η is δ-approximately self-normalizing.
The sets of δ-approximately self-normalizing parameters for a fixed input distribution and feature function are depicted in Figure 1b. Unlike self-normalizable sets of inputs, self-normalizing and approximately self-normalizing sets of parameters may have complex geometry.
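In practice, whether a particular η is δ-approximately self-normalizing for a given p(x) can be checked by Monte Carlo. A sketch (ours; the feature map follows the Figure 1b setup, and the parameter values are arbitrary) that estimates √E[A(X, η)²] from samples:

```python
import numpy as np

def delta_hat(xs, eta, T):
    """Monte Carlo estimate of sqrt(E[A(X, eta)^2]) from samples xs ~ p(x).

    T(x) must return a matrix with one row T(x, y) per output y.
    """
    sq = []
    for x in xs:
        scores = T(x) @ eta
        m = scores.max()
        sq.append((m + np.log(np.exp(scores - m).sum())) ** 2)
    return np.sqrt(np.mean(sq))

T = lambda x: np.array([[x - 1.0, x],      # T(x, -1) = (x + y, -xy) at y = -1
                        [x + 1.0, -x]])    # T(x, +1)
xs = np.array([1.0, 2.0] * 50)             # p(x) uniform on {1, 2}
print(delta_hat(xs, np.array([0.3, -0.2]), T))
```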
Throughout this paper, we will assume that vectors of sufficient statistics T(x, y) have ℓ₂ norm at most R, natural parameter vectors η have ℓ₂ norm at most B (that is, they are Ivanov-regularized), and that vectors of both kinds lie in ℝ^d. Finally, we assume that all input vectors have a constant feature; in particular, that x₀ = 1 for every x (with corresponding weight η₀).³
The first question we must answer is whether the problem of training self-normalized models is feasible at all: that is, whether there exist any exactly self-normalizable data distributions p(x), or at least δ-approximately self-normalizable distributions for small δ. Section 3 already gave an example of an exactly normalizable distribution. In fact, there are large classes of both exactly and approximately normalizable distributions.
Observation. Given some fixed η, consider the set S_η = {x ∈ X : A(x, η) = 0}. Any distribution p(x) supported on S_η is normalizable. Additionally, every self-normalizable distribution is characterized by at least one such η.
This definition provides a simple geometric characterization of self-normalizable distributions. An
example solution set is shown in Figure 1a. More generally, if y is discrete and T (x, y) consists of
|Y| repetitions of a fixed feature function t(x) (as in Figure 1a), then we can write

    A(x, η) = log Σ_{y∈Y} e^{η_y^⊤ t(x)}.                                (5)

Provided η_y^⊤ t(x) is convex in x for each η_y, the level sets of A as a function of x form the boundaries of convex sets. In particular, exactly normalizable sets are always the boundaries of convex regions, as in the simple example in Figure 1a.
We do not, in general, expect real-world datasets to be supported on the precise class of selfnormalizable surfaces. Nevertheless, it is very often observed that data of practical interest lie on
other low-dimensional manifolds within their embedding feature spaces. Thus we can ask whether
it is sufficient for a target distribution to be well-approximated by a self-normalizing one. We begin
by constructing an appropriate measurement of the quality of this approximation.
Definition 4 (Closeness). An input distribution p(x) is D-close to a set S if
    E[ inf_{x′∈S} sup_{y∈Y} ‖T(X, y) − T(x′, y)‖₂ ] ≤ D.                 (6)
In other words, p(x) is D-close to S if a random sample from p is no more than a distance D from S
in expectation. Now we can relate the quality of this approximation to the level of self-normalization
achieved. Generalizing a result from [5], we have:
Proposition 1. Suppose p(x) is D-close to {x : A(x, η) = 0}. Then p(x) is BD-approximately self-normalizable (recalling that ‖η‖₂ ≤ B).
³ It will occasionally be instructive to consider the special case where X is the Boolean hypercube, and we will explicitly note where this assumption is made. Otherwise all results apply to general distributions, both continuous and discrete.
(Proofs for this section may be found in Appendix A.)
The intuition here is that data distributions that place most of their mass in feature space close to
normalizable sets are approximately normalizable on the same scale.
4 Normalization and model accuracy
So far our discussion has concerned the problem of finding conditional distributions that self-normalize, without any concern for how well they actually perform at modeling the data. Here
the relationship between the approximately self-normalized distribution and the true distribution
p(y|x) (which we have so far ignored) is essential. Indeed, if we are not concerned with making
a good model it is always trivial to make a normalized one: simply take η = 0 and then scale η₀ appropriately! We ultimately desire both good self-normalization and good data likelihood, and
in this section we characterize the tradeoff between maximizing data likelihood and satisfying a
self-normalization constraint.
We achieve this characterization by measuring the likelihood gap between the classical maximum likelihood estimator, and the MLE subject to a self-normalization constraint. Specifically, given pairs ((x₁, y₁), (x₂, y₂), . . . , (xₙ, yₙ)), let ℓ(η|x, y) = Σᵢ log p_η(yᵢ|xᵢ). Then define

    η̂ = argmax_η ℓ(η|x, y)                                              (7)
    η̂_δ = argmax_{η : V(η) ≤ δ} ℓ(η|x, y)                                (8)

(where V(η) = (1/n) Σᵢ A(xᵢ, η)²).
We would like to obtain a bound on the likelihood gap, which we define as the quantity

    Δℓ(η̂, η̂_δ) = (1/n)(ℓ(η̂|x, y) − ℓ(η̂_δ|x, y)).                       (9)

We claim:
Theorem 2. Suppose Y has finite measure. Then asymptotically as n → ∞,

    Δℓ(η̂, η̂_δ) ≤ (1 − δ/(R‖η̂‖₂)) E[KL(p_η̂(·|X) ‖ Unif)].              (10)
(Proofs for this section may be found in Appendix B.)
This result lower-bounds the likelihood at η̂_δ by explicitly constructing a scaled version of η̂ that satisfies the self-normalization constraint. Specifically, if η is chosen so that normalizers are penalized for distance from log μ(Y) (e.g. the logarithm of the number of classes in the finite case), then any increase in η along the span of the data is guaranteed to increase the penalty. From here it is possible to choose an α ∈ (0, 1) such that αη̂ satisfies the constraint. The likelihood at αη̂ is necessarily less than ℓ(η̂_δ|x, y), and can be used to obtain the desired lower bound.
Thus at one extreme, distributions close to uniform can be self-normalized with little loss of likelihood. What about the other extreme: distributions "as far from uniform as possible"? With suitable assumptions about the form of p_η̂(y|x), we can use the same construction of a self-normalizing parameter to achieve an alternative characterization for distributions that are close to deterministic:
Proposition 3. Suppose that X is a subset of the Boolean hypercube, Y is finite, and T(x, y) is the conjunction of each element of x with an indicator on the output class. Suppose additionally that in every input x, p_η̂(y|x) makes a unique best prediction; that is, for each x ∈ X, there exists a unique y* ∈ Y such that whenever y ≠ y*, η̂^⊤ T(x, y*) > η̂^⊤ T(x, y). Then

    Δℓ(η̂, η̂_δ) ≤ b (‖η̂‖₂ − δ/R)² e^{−cδ/R}                            (11)

for distribution-dependent constants b and c.
This result is obtained by representing the constrained likelihood with a second-order Taylor expansion about the true MLE. All terms in the likelihood gap vanish except for the remainder; this can be upper-bounded by ‖η̂_δ‖₂² times the largest eigenvalue of the feature covariance matrix at η̂_δ, which in turn is bounded by e^{−cδ/R}.
The favorable rate we obtain for this case indicates that "all-nonuniform" distributions are also an easy class for self-normalization. Together with Theorem 2, this suggests that hard distributions must have some mixture of uniform and nonuniform predictions for different inputs. This is supported by the results in Section 5.
The next question is whether there is a corresponding lower bound; that is, whether there exist
any conditional distributions for which all nearby distributions are provably hard to self-normalize.
The existence of a direct analog of Theorem 2 remains an open problem, but we make progress by
developing a general framework for analyzing normalizer variance.
One key issue is that while likelihoods are invariant to certain changes in the natural parameters,
the log normalizers (and therefore their variance) are far from invariant. We therefore focus on
equivalence classes of natural parameters, as defined below. Throughout, we will assume a fixed
distribution p(x) on the inputs x.
Definition 5 (Equivalence of parameterizations). Two natural parameter values η and η′ are said to be equivalent (with respect to an input distribution p(x)), denoted η ∼ η′, if

    p_η(y|X) = p_{η′}(y|X)   a.s. p(x).
We can then define the optimal log normalizer variance for the distribution associated with a natural parameter value.
Definition 6 (Optimal variance). We define the optimal log normalizer variance of the log-linear model associated with a natural parameter value η by

    V*(η) = inf_{η′ ∼ η} Var_{p(x)}[A(X, η′)].
We now specialize to the case where Y is finite with |Y| = K and where T : Y × X → ℝ^{Kd} satisfies

    T(k, x)_{k′j} = δ_{kk′} x_j.

This is an important special case that arises, for example, in multi-way logistic regression. In this setting, we can show that despite the fundamental non-identifiability of the model, the variance can still be shown to be high under any parameterization of the distribution.
Theorem 4. Let X = {0, 1}^d and let the input distribution p(x) be uniform on X. There exists an η₀ ∈ ℝ^{Kd} such that for η = αη₀, α > 0,

    V*(η) ≥ (1 − 1/d) ‖η‖₂² / (32d(d − 1)) − 4K e^{−‖η‖₂/(2(d−1))} ‖η‖₂.
5 Experiments
The high-level intuition behind the results in the preceding section can be summarized as follows: 1) for predictive distributions that are in expectation high-entropy or low-entropy, self-normalization results in a relatively small likelihood gap; 2) for mixtures of high- and low-entropy distributions, self-normalization may result in a large likelihood gap. More generally, we expect that an increased tolerance for normalizer variance will be associated with a decreased likelihood gap.
In this section we provide experimental confirmation of these predictions. We begin by generating a set of random sparse feature vectors, and an initial weight vector η₀. In order to produce a sequence of label distributions that smoothly interpolate between low-entropy and high-entropy, we introduce a temperature parameter τ, and for various settings of τ draw labels from p_{τη₀}. We then fit a self-normalized model to these training pairs. In addition to the synthetic data, we compare our results to empirical data [3] from a self-normalized language model.
Figure 2a plots the tradeoff between the likelihood gap and the error in the normalizer, under various distributions (characterized by their KL from uniform). Here the tradeoff between self-normalization and model accuracy can be seen: as the normalization constraint is relaxed, the likelihood gap decreases.
[Figure 2: Experimental results. (a) Normalization / likelihood tradeoff: as the normalization constraint δ is relaxed, the likelihood gap Δℓ decreases; lines marked "KL=" are from synthetic data, and the line marked "LM" is from [3]. (b) Likelihood gap as a function of expected divergence from the uniform distribution, E KL(p_τ ‖ Unif): as predicted by theory, the likelihood gap increases, then decreases, as predictive distributions become more peaked.]
Figure 2b shows how the likelihood gap varies as a function of the quantity E KL(p_τ(·|X) ‖ Unif).
As predicted, it can be seen that both extremes of this quantity result in small likelihood gaps, while
intermediate values result in large likelihood gaps.
6 Conclusions
Motivated by the empirical success of self-normalizing parameter estimation procedures, we have
attempted to establish a theoretical basis for the understanding of such procedures. We have characterized both self-normalizable distributions, by constructing provably easy examples, and training
procedures, by bounding the loss of likelihood associated with self-normalization.
While we have addressed many of the important first-line theoretical questions around self-normalization, this study of the problem is by no means complete. We hope this family of problems
will attract further study in the larger machine learning community; toward that end, we provide the
following list of open questions:
1. How else can the approximately self-normalizable distributions be characterized? The
class of approximately normalizable distributions we have described is unlikely to correspond perfectly to real-world data. We expect that Proposition 1 can be generalized to
other parametric classes, and relaxed to accommodate spectral or sparsity conditions.
2. Are the upper bounds in Theorem 2 or Proposition 3 tight? Our constructions involve
relating the normalization constraint to the ℓ₂ norm of η, but in general some parameters
can have very large norm and still give rise to almost-normalized distributions.
3. Do corresponding lower bounds exist? While it is easy to construct exactly self-normalizable distributions (which suffer no loss of likelihood), we have empirical evidence
in terms of some simple property of the target distribution.
4. Is the hard distribution in Theorem 4 stable? This is related to the previous question.
The existence of high-variance distributions is less worrisome if such distributions are fairly
rare. If the lower bound falls off quickly as the given construction is perturbed, then the
associated distribution may still be approximately self-normalizable with a good rate.
We have already seen that new theoretical insights in this domain can translate directly into practical applications. Thus, in addition to their inherent theoretical interest, answers to each of these
questions might be applied directly to the training of approximately self-normalized models in practice. We expect that self-normalization will find increasingly many applications, and we hope the
results in this paper provide a first step toward a complete theoretical and empirical understanding
of self-normalization in log-linear models.
Acknowledgments The authors would like to thank Robert Nishihara for useful discussions. JA
and MR are supported by NSF Graduate Fellowships, and MR is additionally supported by the
Fannie and John Hertz Foundation Fellowship.
References
[1] Lafferty, J. D.; McCallum, A.; Pereira, F. C. N. Conditional random fields: Probabilistic models
for segmenting and labeling sequence data. 2001; pp 282–289.
[2] McCullagh, P.; Nelder, J. A. Generalized linear models; Chapman and Hall, 1989.
[3] Devlin, J.; Zbib, R.; Huang, Z.; Lamar, T.; Schwartz, R.; Makhoul, J. Fast and robust neural
network joint models for statistical machine translation. Proceedings of the Annual Meeting of
the Association for Computational Linguistics. 2014.
[4] Vaswani, A.; Zhao, Y.; Fossum, V.; Chiang, D. Decoding with large-scale neural language
models improves translation. Proceedings of the Conference on Empirical Methods in Natural
Language Processing. 2013.
[5] Andreas, J.; Klein, D. When and why are log-linear models self-normalizing? Proceedings
of the Annual Meeting of the North American Chapter of the Association for Computational
Linguistics. 2014.
[6] Bartlett, P. L. IEEE Transactions on Information Theory 1998, 44, 525–536.
[7] Anthony, M.; Bartlett, P. Neural network learning: theoretical foundations; Cambridge University Press, 2009.
[8] Gutmann, M.; Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. Proceedings of the International Conference on Artificial Intelligence and Statistics. 2010; pp 297–304.
[9] O'Hagan, A. Journal of statistical planning and inference 1991, 29, 245–260.
[10] Chen, Y.; Welling, M.; Smola, A. Proceedings of the Conference on Uncertainty in Artificial
Intelligence 2010, 109–116.
[11] Doucet, A.; De Freitas, N.; Gordon, N. An introduction to sequential Monte Carlo methods;
Springer, 2001.
[12] Morin, F.; Bengio, Y. Proceedings of the International Conference on Artificial Intelligence
and Statistics 2005, 246.
[13] Yang, E.; Allen, G.; Liu, Z.; Ravikumar, P. K. Graphical models via generalized linear models.
Advances in Neural Information Processing Systems. 2012; pp 1358–1366.
Policy Evaluation Using the Ω-Return
Scott Niekum
University of Texas at Austin
Philip S. Thomas
University of Massachusetts Amherst
Carnegie Mellon University
Georgios Theocharous
Adobe Research
George Konidaris
Duke University
Abstract
We propose the Ω-return as an alternative to the λ-return currently used by the TD(λ) family of algorithms. The benefit of the Ω-return is that it accounts for the correlation of different length returns. Because it is difficult to compute exactly, we suggest one way of approximating the Ω-return. We provide empirical studies that suggest that it is superior to the λ-return and γ-return for a variety of problems.
1 Introduction
Most reinforcement learning (RL) algorithms learn a value function: a function that estimates the expected return obtained by following a given policy from a given state. Efficient algorithms for estimating the value function have therefore been a primary focus of RL research. The most widely used family of RL algorithms, the TD(λ) family [1], forms an estimate of return (called the λ-return) that blends low-variance but biased temporal difference return estimates with high-variance but unbiased Monte Carlo return estimates, using a parameter λ ∈ [0, 1]. While several different algorithms exist within the TD(λ) family (the original linear-time algorithm [1], least-squares formulations [2], and methods for adapting λ [3], among others), the λ-return formulation has remained unchanged since its introduction in 1988 [1].
Recently Konidaris et al. [4] proposed the γ-return as an alternative to the λ-return, which uses a more accurate model of how the variance of a return increases with its length. However, both the λ and γ-returns fail to account for the correlation of returns of different lengths, instead treating them as statistically independent. We propose the Ω-return, which uses well-studied statistical techniques to directly account for the correlation of returns of different lengths. However, unlike the λ and γ-returns, the Ω-return is not simple to compute, and often can only be approximated. We propose a method for approximating the Ω-return, and show that it outperforms the λ and γ-returns on a range of off-policy evaluation problems.
2 Complex Backups
Estimates of return lie at the heart of value-function-based RL algorithms: an estimate, V̂^π, of the value function, V^π, estimates return from each state, and the learning process aims to reduce the error between estimated and observed returns. For brevity we suppress the dependencies of V^π and V̂^π on π and write V and V̂. Temporal difference (TD) algorithms use an estimate of the return obtained by taking a single transition in the Markov decision process (MDP) [5] and then estimating the remaining return using the estimate of the value function:

    R^TD_{s_t} = r_t + γ V̂(s_{t+1}),
where R^TD_{s_t} is the return estimate from state s_t, r_t is the reward for going from s_t to s_{t+1} via action a_t, and γ ∈ [0, 1] is a discount parameter. Monte Carlo algorithms (for episodic tasks) do not use intermediate estimates but instead use the full return,

    R^MC_{s_t} = Σ_{i=0}^{L−1} γ^i r_{t+i},
for an episode L transitions in length after time t (we assume that L is finite). These two types of return estimates can be considered instances of the more general notion of an n-step return,

    R^{(n)}_{s_t} = ( Σ_{i=0}^{n−1} γ^i r_{t+i} ) + γ^n V̂(s_{t+n}),

for n ≥ 1. Here, n transitions are observed from the MDP and the remaining portion of return is estimated using the estimate of the value function. Since s_{t+L} is a state that occurs after the end of an episode, we assume that V̂(s_{t+L}) = 0, always.
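A small vectorized sketch (ours, with made-up reward and value arrays) that computes all L n-step returns for one trajectory; the V̂ values are stand-ins for the current value-function estimate, with the post-terminal value set to zero:

```python
import numpy as np

def n_step_returns(rewards, values_next, gamma):
    """R^{(n)}_{s_t} for n = 1..L from one trajectory starting at s_t.

    rewards[i] is r_{t+i}; values_next[i] is V_hat(s_{t+i+1}).
    """
    L = len(rewards)
    partial = np.cumsum(gamma ** np.arange(L) * rewards)    # sum_{i<n} gamma^i r_{t+i}
    bootstrap = gamma ** np.arange(1, L + 1) * values_next  # gamma^n V_hat(s_{t+n})
    return partial + bootstrap                              # index n-1 holds R^{(n)}

R = n_step_returns(np.array([0.0, 1.0, 0.5]), np.array([0.2, 0.1, 0.0]), 0.9)
print(R)   # R[-1] is the Monte Carlo return
```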
A complex return is a weighted average of the 1, . . . , L step returns:

    R^♦_{s_t} = Σ_{n=1}^{L} w_♦(n, L) R^{(n)}_{s_t},                     (1)

where the w_♦(n, L) are weights and ♦ ∈ {λ, γ, Ω} will be used to specify the weighting schemes of different approaches. The question that this paper proposes an answer to is: what weighting scheme will produce the best estimates of the true expected return?
The λ-return, R^λ_{s_t}, is the weighting scheme that is used by the entire family of TD(λ) algorithms [5]. It uses a parameter λ ∈ [0, 1] that determines how the weight given to a return decreases as the length of the return increases:

    w_λ(n, L) = (1 − λ)λ^{n−1}                   if n < L,
    w_λ(n, L) = 1 − Σ_{i=1}^{n−1} w_λ(i, L)      if n = L.
When λ = 0, R^λ_{s_t} = R^TD_{s_t}, which has low variance but high bias. When λ = 1, R^λ_{s_t} = R^MC_{s_t}, which has high variance but is unbiased. Intermediate values of λ blend the high-bias but low-variance estimates from short returns with the low-bias but high-variance estimates from the longer returns.
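The λ-return weights are simple to compute; a sketch (ours):

```python
import numpy as np

def lambda_weights(lam, L):
    """w_lambda(n, L) for n = 1..L; the final weight absorbs the remainder."""
    w = (1.0 - lam) * lam ** np.arange(L - 1)   # n = 1 .. L-1
    return np.append(w, 1.0 - w.sum())          # n = L

w = lambda_weights(0.9, 5)
print(w, w.sum())   # weights sum to 1
```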
The success of the λ-return is largely due to its simplicity: TD(λ) using linear function approximation has per-time-step time complexity linear in the number of features. However, this efficiency comes at a cost: the λ-return is not founded on a principled statistical derivation.¹ Konidaris et al. [4] remedied this recently by showing that the λ-return is the maximum likelihood estimator of V(s_t) given three assumptions. Specifically, R^λ_{s_t} ∈ argmax_{x∈ℝ} Pr(R^{(1)}_{s_t}, R^{(2)}_{s_t}, . . . , R^{(L)}_{s_t} | V(s_t) = x) if

Assumption 1 (Independence). R^{(1)}_{s_t}, . . . , R^{(L)}_{s_t} are independent random variables,
Assumption 2 (Unbiased Normal Estimators). R^{(n)}_{s_t} is normally distributed with mean E[R^{(n)}_{s_t}] = V(s_t) for all n,
Assumption 3 (Geometric Variance). Var(R^{(n)}_{s_t}) ∝ 1/λ^n.
Although this result provides a theoretical foundation for the λ-return, it is based on three typically false assumptions: the returns are highly correlated, only the Monte Carlo return is unbiased, and the variance of the n-step returns from each state does not usually increase geometrically. This suggests three areas where the λ-return might be improved: it could be modified to better account for the correlation of returns, the bias of the different returns, and the true form of Var(R^{(n)}_{s_t}).

The γ-return uses an approximate formula for the variance of an n-step return in place of Assumption 3. This allows the γ-return to better account for how the variance of returns increases with their length, while simultaneously removing the need for the λ parameter. The γ-return is given by the weighting scheme:

    w_γ(n, L) = ( Σ_{i=1}^{n} γ^{2(i−1)} )^{−1} / Σ_{n′=1}^{L} ( Σ_{i=1}^{n′} γ^{2(i−1)} )^{−1}.

¹ To be clear: there is a wealth of theoretical and empirical analyses of algorithms that use the λ-return. Until recently there was not a derivation of the λ-return as the estimator of V(s_t) that optimizes some objective (e.g., maximizes log likelihood or minimizes expected squared error).
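A corresponding sketch (ours) of the γ-return weights:

```python
import numpy as np

def gamma_weights(gamma, L):
    """w_gamma(n, L): weights proportional to (sum_{i=1}^n gamma^{2(i-1)})^{-1}."""
    inv_var = 1.0 / np.cumsum(gamma ** (2 * np.arange(L)))
    return inv_var / inv_var.sum()

print(gamma_weights(0.95, 5))
```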
3 The Ω-Return
We propose a new complex return, the Ω-return, that improves upon the λ and γ returns by accounting for the correlations of the returns. To emphasize this problem, notice that R^{(20)}_{s_t} and R^{(21)}_{s_t} will be almost identical (perfectly correlated) for many MDPs (particularly when γ is small). This means that Assumption 1 is particularly egregious, and suggests that a new complex return might improve upon the λ and γ-returns by properly accounting for the correlation of returns.
We formulate the problem of how best to combine different length returns to estimate the true expected return as a linear regression problem. This reformulation allows us to leverage the well-understood properties of linear regression algorithms. Consider a regression problem with L points, {(x_i, y_i)}_{i=1}^{L}, where the value of y_i depends on the value of x_i. The goal is to predict y_i given x_i. We set x_i = 1 and y_i = R^{(i)}_{s_t}. We can then construct the design matrix (a vector in this case), x = 1 = [1, . . . , 1]^⊤ ∈ ℝ^L, and the response vector, y = [R^{(1)}_{s_t}, R^{(2)}_{s_t}, . . . , R^{(L)}_{s_t}]^⊤. We seek a regression coefficient, β̂ ∈ ℝ, such that y ≈ xβ̂. This β̂ will be our estimate of the true expected return.
Generalized least squares (GLS) is a method for selecting β̂ when the y_i are not necessarily independent and may have different variances. Specifically, if we use a linear model with (possibly correlated) mean-zero noise to model the data, i.e., y = xβ + ε, where β ∈ ℝ is unknown, ε is a random vector, E[ε] = 0, and Var(ε|x) = Ω, then the GLS estimator

    β̂ = (x^⊤ Ω^{−1} x)^{−1} x^⊤ Ω^{−1} y,                               (2)

is the best linear unbiased estimator (BLUE) for β [6]: the linear unbiased estimator with the lowest possible variance.
In our setting the assumptions about the true model that produced the data become that [R^{(1)}_{s_t}, R^{(2)}_{s_t}, . . . , R^{(L)}_{s_t}]^⊤ = [V(s_t), V(s_t), . . . , V(s_t)]^⊤ + ε, where E[ε] = 0 (i.e., the returns are all unbiased estimates of the true expected return) and Var(ε|x) = Ω. Since x = 1 in our case, Var(ε|x)(i, j) = Cov(R^{(i)}_{s_t} − V(s_t), R^{(j)}_{s_t} − V(s_t)) = Cov(R^{(i)}_{s_t}, R^{(j)}_{s_t}), where Var(ε|x)(i, j) denotes the element of Var(ε|x) in the ith row and jth column.
So, using only Assumption 2, GLS ((2), solved for β̂) gives us the complex return:

    β̂ = ( [1, 1, . . . , 1] Ω^{−1} [1, 1, . . . , 1]^⊤ )^{−1} [1, 1, . . . , 1] Ω^{−1} [R^{(1)}_{s_t}, R^{(2)}_{s_t}, . . . , R^{(L)}_{s_t}]^⊤
       = ( Σ_{n,m=1}^{L} Ω^{−1}(n, m) )^{−1} Σ_{n,m=1}^{L} Ω^{−1}(n, m) R^{(n)}_{s_t},

which can be written in the form of (1) with weights:

    w_Ω(n, L) = Σ_{m=1}^{L} Ω^{−1}(n, m) / Σ_{n′,m=1}^{L} Ω^{−1}(n′, m),  (3)

where Ω is an L × L matrix with Ω(i, j) = Cov(R^{(i)}_{s_t}, R^{(j)}_{s_t}).
Notice that the Ω-return is a generalization of the λ and γ returns. The λ-return can be obtained by reintroducing the false assumption that the returns are independent and that their variance grows geometrically, i.e., by making Ω a diagonal matrix with Ω_{n,n} = λ^{−n}. Similarly, the γ-return can be obtained by making Ω a diagonal matrix with Ω_{n,n} = Σ_{i=1}^{n} γ^{2(i−1)}.
Notice that R^Ω_{s_t} is a BLUE of V(s_t) if Assumption 2 holds. Since Assumption 2 does not hold, the Ω-return is not an unbiased estimator of V(s). Still, we expect it to outperform the λ and γ-returns because it accounts for the correlation of n-step returns and they do not. However, in some cases it may perform worse because it is still based on the false assumption that all of the returns are unbiased estimators of V(s_t). Furthermore, given Assumption 2, there may be biased estimators of V(s_t) that have lower expected mean squared error than a BLUE (which must be unbiased).
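Given Ω, the weight formula (3) is a one-liner; the sketch below (ours) also instantiates the diagonal Ω matrices just described, recovering the λ- and γ-style weightings:

```python
import numpy as np

def omega_weights(Omega):
    """w_Omega(n, L) from (3): row sums of Omega^{-1}, normalized to sum to 1."""
    row_sums = np.linalg.inv(Omega).sum(axis=1)
    return row_sums / row_sums.sum()

L, lam, gamma = 5, 0.9, 0.95
w_lam = omega_weights(np.diag(lam ** -np.arange(1.0, L + 1)))           # Omega_nn = lam^{-n}
w_gam = omega_weights(np.diag(np.cumsum(gamma ** (2 * np.arange(L)))))  # Omega_nn = sum_i gamma^{2(i-1)}
print(w_lam, w_gam)
```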
4 Approximating the Ω-Return
In practice the covariance matrix, Ω, is unknown and must be approximated from data. This approach, known as feasible generalized least squares (FGLS), can perform worse than ordinary least squares given insufficient data to accurately estimate Ω. We must therefore accurately approximate Ω from small amounts of data.
To study the accuracy of covariance matrix estimates, we estimated Ω using a large number of trajectories for four different domains: a 5 × 5 gridworld, a variant of the canonical mountain car domain, a real-world digital marketing problem, and a continuous control problem (DAS1), all of which are described in more detail in subsequent experiments. The covariance matrix estimates are depicted in Figures 1(a), 2(a), 3(a), and 4(a). We do not specify rows and columns in the figures because all covariance matrices and estimates thereof are symmetric. Because they were computed from a very large number of trajectories, we will treat them as ground truth.
We must estimate the Ω-return when only a few trajectories are available. Figures 1(b), 2(b), 3(b), and 4(b) show direct empirical estimates of the covariance matrices using only a few trajectories. These empirical approximations are poor due to the very limited amount of data, except for the digital marketing domain, where a "few" trajectories means 10,000. The solid black entries in Figures 1(f), 2(f), 3(f), and 4(f) show the weights, w_Ω(n, L), on different length returns when using different estimates of Ω. The noise in the direct empirical estimate of the covariance matrix using only a few trajectories leads to poor estimates of the return weights.
When approximating Ω from a small number of trajectories, we must be careful to avoid this overfitting of the available data. One way to do this is to assume a compact parametric model for Ω. Below we describe a parametric model of Ω that has only four parameters, regardless of L (which determines the size of Ω). We use this parametric model in our experiments as a proof of concept: we show that the Ω-return using even this simple estimate of Ω can produce improved results over the other existing complex returns. We do not claim that this scheme for estimating Ω is particularly principled or noteworthy.
Estimating Off-Diagonal Entries of ?
Notice in Figures 1(a), 2(a), 3(a), and 4(a) that for j > i, Cov(Rsi t , Rsj t ) ? Cov(Rsi t , Rsi t ) =
Var(Rsi t ). This structure would mean that we can fill in ? given its diagonal values, leaving only L
parameters. We now explain why this relationship is reasonable in general, and not just an artifact
of our domains. We can write each entry in ? as a recurrence relation:
Cov[Rs(i)
, Rs(j)
] =Cov[Rs(i)
, Rs(j?1)
+ ? j?1 (rt+j + ? V? (st+j ) ? V? (st+j?1 )]
t
t
t
t
=Cov[Rs(i)
, Rs(j?1)
] + ? j?1 Cov[Rs(i)
, rt+j + ? V? (st+j ) ? V? (st+j?1 )],
t
t
t
when i < j. The term rt+j + ? V? (st+j ) ? V? (st+j?1 ) is the temporal difference error j steps
in the future. The proposed assumption that Cov(Rsi t , Rsj t ) = Var(Rsi t ) is equivalent to assuming that the covariance of this temporal difference error and the i-step return is negligible:
(i)
? j?1 Cov[Rst , rt+j + ? V? (st+j ) ? V? (st+j?1 )] ? 0. The approximate independence of these two
terms is reasonable in general due to the Markov property, which ensures that at least the conditional
(i)
covariance, Cov[Rst , rt+j + ? V? (st+j ) ? V? (st+j?1 )|st ], is zero.
Because this relationship is not exact, the off-diagonal entries tend to grow as they get farther from
the diagonal. However, especially when some trajectories are padded with absorbing states, this
relationship is quite accurate when j = L, since the temporal difference errors at the absorbing state
(i)
(i)
(L?1)
are all zero, and Cov[Rst , 0] = 0. This results in a significant difference between Cov[Rst , Rst
]
[Figure 1: Gridworld Results. (a) Empirical Ω from 1 million trajectories. (b) Empirical Ω from 5 trajectories. (c) Approximate Ω from 1 million trajectories. (d) Approximate Ω from 5 trajectories. (e) Approximate and empirical diagonals of Ω. (f) Approximate and empirical weights for each return. (g) Mean squared error from five trajectories.]
[Figure 2: Mountain Car Results. (a) Empirical Ω from 1 million trajectories. (b) Empirical Ω from 2 trajectories. (c) Approximate Ω from 1 million trajectories. (d) Approximate Ω from 2 trajectories. (e) Approximate and empirical diagonals of Ω. (f) Approximate and empirical weights for each return. (g) Mean squared error from two trajectories.]
[Figure 3: Digital Marketing Results. (a) Empirical Ω from 1 million trajectories. (b) Empirical Ω from 10000 trajectories. (c) Approximate Ω from 1 million trajectories. (d) Approximate Ω from 10000 trajectories. (e) Approximate and empirical diagonals of Ω. (f) Approximate and empirical weights for each return. (g) Mean squared error from 10000 trajectories.]
[Figure 4: Functional Electrical Stimulation Results. (a) Empirical Ω from 10000 trajectories. (b) Empirical Ω from 10 trajectories. (c) Approximate Ω from 10000 trajectories. (d) Approximate Ω from 10 trajectories. (e) Approximate and empirical diagonals of Ω. (f) Approximate and empirical weights for each return. (g) Mean squared error from 10 trajectories.]
and Cov[R^{(i)}_{s_t}, R^{(L)}_{s_t}]. Rather than try to model this drop, which can influence the weights significantly, we reintroduce the assumption that the Monte Carlo return is independent of the other returns,
making the off-diagonal elements of the last row and column zero.
4.2 Estimating Diagonal Entries of Ω
The remaining question is how best to approximate the diagonal of Ω from a very small number of trajectories. Consider the solid and dotted black curves in Figures 1(e), 2(e), 3(e), and 4(e), which depict the diagonals of Ω when estimated from either a large number or small number of trajectories. When using only a few trajectories, the diagonal includes fluctuations that can have significant impacts on the resulting weights. However, when using many trajectories (which we treat as giving ground truth), the diagonal tends to be relatively smooth and monotonically increasing until it plateaus (ignoring the final entry).

This suggests using a smooth parametric form to approximate the diagonal, which we do as follows. Let v_i denote the sample variance of R^{(i)}_{s_t} for i = 1 . . . L. Let v₊ be the largest sample variance: v₊ = max_{i∈{1,...,L}} v_i. We parameterize the diagonal using four parameters, k₁, k₂, v₊, and v_L:

    Ω̂_{k₁,k₂,v₊,v_L}(i, i) = k₁                        if i = 1,
    Ω̂_{k₁,k₂,v₊,v_L}(i, i) = v_L                       if i = L,
    Ω̂_{k₁,k₂,v₊,v_L}(i, i) = min{v₊, k₁ k₂^{(1−i)}}    otherwise.
Ω̂(1, 1) = k₁ sets the initial variance, and v_L is the variance of the Monte Carlo return. The parameter v₊ enforces a ceiling on the variance of the i-step return, and k₂ captures the growth rate of the variance, much like γ. We select the k₁ and k₂ that minimize the mean squared error between Ω̂(i, i) and v_i, and set v₊ and v_L directly from the data.²
This reduces the problem of estimating Ω, an L × L matrix, to estimating four numbers from return data. Consider Figures 1(c), 2(c), 3(c), and 4(c), which depict Ω̂ as computed from many trajectories. The differences between these estimates and the ground truth show that this parameterization is not perfect, as we cannot represent the true Ω exactly. However, the estimate is reasonable and the resulting weights (solid red) are visually similar to the ground truth weights (solid black) in Figures 1(f), 2(f), 3(f), and 4(f). We can now get accurate estimates of Ω from very few trajectories. Figures 1(d), 2(d), 3(d), and 4(d) show Ω̂ when computed from only a few trajectories. Note their similarity to Ω̂ when using a large number of trajectories, and that the resulting weights (unfilled red in Figures 1(f), 2(f), 3(f), and 4(f)) are similar to those obtained using many more trajectories (the filled red bars).
Pseudocode for approximating the Ω-return is provided in Algorithm 1. Unlike the λ-return, which can be computed from a single trajectory, the Ω-return requires a set of trajectories in order to estimate Ω. The pseudocode assumes that every trajectory is of length L, which can be achieved by padding shorter trajectories with absorbing states.
2
We include the constraints that k2 ? [0, 1] and 0 ? k1 ? v+ .
7
Algorithm 1: Computing the ?-return.
Require: n trajectories beginning at s and of length L.
(i)
1. Compute Rs for i = 1, . . . , L and for each trajectory.
(i)
2. Compute the sample variances, vi = Var(Rs ), for i = 1, . . . , L.
3. Set v+ = maxi?{1,...,L} vi .
4. Search for the k1 and k2 that minimize the mean squared error between vi and
? k ,k ,v ,v (i, i) for i = 1, . . . , L.
?
1
2
+
L
? k ,k ,v ,v (i, i), using the
5. Fill the diagonal of the L ? L matrix, ?, with ?(i, i) = ?
1 2 + L
optimized k1 and k2 .
6. Fill all of the other entries with ?(i, j) = ?(i, i) where j > i. If (i = L or j = L) and
i 6= j then set ?(i, j) = 0 instead.
7. Compute the weights for the returns according to (3).
8. Compute the ?-return for each trajectory according to (1).
5
Experiments
Approximations of the ?-return could, in principle, replace the ?-return in the whole family of
TD(?) algorithms. However, using the ?-return for TD(?) raises several interesting questions that
are beyond the scope of this initial work (e.g., is there a linear-time way to estimate the ?-return?
Since a different ? is needed for every state, how can the ?-return be used with function approximation where most states will never be revisited?). We therefore focus on the specific problem of
off-policy policy evaluation?estimating the performance of a policy using trajectories generated by
a possibly different policy. This problem is of interest for applications that require the evaluation of
a proposed policy using historical data.
Due to space constraints, we relegate the details of our experiments to the appendix in the supplemental documents. However, the results of the experiments are clear?Figures 1(g), 2(g), 3(g), and
4(g) show the mean squared error (MSE) of value estimates when using various methods.3 Notice
that, for all domains, using the ?-return (the EMP ? and APP ? labels) results in lower MSE than
the ?-return and the ?-return with any setting of ?.
6
Conclusions
Recent work has begun to explore the statistical basis of complex estimates of return, and how we
might reformulate them to be more statistically efficient [4]. We have proposed a return estimator
that improves upon the ? and ?-returns by accounting for the covariance of return estimates. Our
results show that understanding and exploiting the fact that in control settings?unlike in standard
supervised learning?observed samples are typically neither independent nor identically distributed,
can substantially improve data efficiency in an algorithm of significant practical importance.
Many (largely positive) theoretical properties of the ?-return and TD(?) have been discovered over
the past few decades. This line of research into other complex returns is still in its infancy, and
so there are many open questions. For example, can the ?-return be improved upon by removing
Assumption 2 or by keeping Assumption 2 but using a biased estimator (not a BLUE)? Is there
a method for approximating the ?-return that allows for value function approximation with the
same time complexity as TD(?), or which better leverages our knowledge that the environment is
Markovian? Would TD(?) using the ?-return be convergent in the same settings as TD(?)? While
we hope to answer these questions in future work, it is also our hope that this work will inspire other
researchers to revisit the problem of constructing a statistically principled complex return.
3
To compute the MSE we used a large number of Monte Carlo rollouts to estimate the true value of each
policy.
8
References
[1] R.S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9?44,
1988.
[2] S.J. Bradtke and A.G. Barto. Linear least-squares algorithms for temporal difference learning. Machine
Learning, 22(1-3):33?57, March 1996.
[3] C. Downey and S. Sanner. Temporal difference Bayesian model averaging: A Bayesian perspective on
adapting lambda. In Proceedings of the 27th International Conference on Machine Learning, pages 311?
318, 2010.
[4] G.D. Konidaris, S. Niekum, and P.S. Thomas. TD? : Re-evaluating complex backups in temporal difference learning. In Advances in Neural Information Processing Systems 24, pages 2402?2410, 2011.
[5] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA,
1998.
[6] T. Kariya and H. Kurata. Generalized Least Squares. Wiley, 2004.
[7] D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In Proceedings
of the 17th International Conference on Machine Learning, pages 759?766, 2000.
[8] A. R. Mahmood, H. Hasselt, and R. S. Sutton. Weighted importance sampling for off-policy learning with
linear function approximation. In Advances in Neural Information Processing Systems 27, 2014.
[9] J. R. Tetreault and D. J. Litman. Comparing the utility of state features in spoken dialogue using reinforcement learning. In Proceedings of the Human Language Technology/North American Association for
Computational Linguistics, 2006.
[10] G. D. Konidaris, S. Osentoski, and P. S. Thomas. Value function approximation in reinforcement learning
using the Fourier basis. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, pages
380?395, 2011.
[11] G. Theocharous and A. Hallak. Lifetime value marketing using reinforcement learning. In The 1st
Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2013.
[12] P. S. Thomas, G. Theocharous, and M. Ghavamzadeh. High confidence off-policy evaluation. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence, 2015.
[13] D. Blana, R. F. Kirsch, and E. K. Chadwick. Combined feedforward and feedback control of a redundant,
nonlinear, dynamic musculoskeletal system. Medical and Biological Engineering and Computing, 47:
533?542, 2009.
[14] P. S. Thomas, M. S. Branicky, A. J. van den Bogert, and K. M. Jagodnik. Application of the actor-critic
architecture to functional electrical stimulation control of a human arm. In Proceedings of the TwentyFirst Innovative Applications of Artificial Intelligence, pages 165?172, 2009.
[15] P. M. Pilarski, M. R. Dawson, T. Degris, F. Fahimi, J. P. Carey, and R. S. Sutton. Online human training
of a myoelectric prosthesis controller via actor-critic reinforcement learning. In Proceedings of the 2011
IEEE International Conference on Rehabilitation Robotics, pages 134?140, 2011.
[16] K. Jagodnik and A. van den Bogert. A proportional derivative FES controller for planar arm movement.
In 12th Annual Conference International FES Society, Philadelphia, PA, 2007.
[17] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159?195, 2001.
9
| 5807 |@word open:1 seek:1 r:17 covariance:9 accounting:2 solid:4 initial:2 selecting:1 document:1 outperforms:1 existing:1 past:1 hasselt:1 comparing:1 written:1 must:5 subsequent:1 plm:1 treating:1 drop:1 depict:2 intelligence:3 parameterization:1 beginning:1 ith:1 short:1 farther:1 provides:1 revisited:1 five:1 direct:2 become:1 combine:1 expected:7 nor:1 td:13 increasing:1 provided:1 estimating:8 maximizes:1 lowest:1 what:1 mountain:2 minimizes:1 substantially:1 supplemental:1 spoken:1 hallak:1 temporal:9 every:2 growth:1 litman:1 exactly:2 k2:7 control:4 normally:1 medical:1 positive:1 negligible:1 engineering:1 treat:2 tends:1 theocharous:3 sutton:5 fluctuation:1 noteworthy:1 might:3 black:3 studied:1 tory:3 suggests:3 limited:1 range:1 statistically:3 practical:1 enforces:1 practice:1 branicky:1 episodic:1 area:1 empirical:33 maxx:1 adapting:2 empiri:4 significantly:1 confidence:1 suggest:2 get:2 cannot:1 influence:1 gls:3 equivalent:1 regardless:1 formulate:1 simplicity:1 estimator:11 fill:3 notion:1 exact:1 duke:1 us:4 pa:1 element:2 osentoski:1 approximated:2 particularly:3 observed:3 solved:1 electrical:2 parameterize:1 capture:1 ensures:1 reintroduce:1 episode:2 decrease:1 movement:1 principled:3 environment:1 complexity:2 reward:1 dynamic:1 ghavamzadeh:1 raise:1 singh:1 upon:4 efficiency:2 basis:2 completely:1 various:1 derivation:2 describe:1 monte:6 artificial:3 niekum:2 quite:1 widely:1 otherwise:1 pilarski:1 cov:15 final:1 online:1 propose:4 adaptation:1 ostermeier:1 rst:35 exploiting:1 produce:2 perfect:1 chadwick:1 come:1 musculoskeletal:1 human:3 require:2 generalization:1 biological:1 pl:3 hold:2 considered:1 ground:4 normal:1 visually:1 scope:1 predict:2 claim:1 label:1 currently:1 hansen:1 largest:1 weighted:2 hope:2 mit:1 always:1 aim:1 modified:1 rather:1 pn:4 avoid:1 barto:2 focus:2 bogert:2 properly:1 likelihood:2 vl:5 entire:1 typically:2 blana:1 relation:1 going:1 arg:1 among:1 proposes:1 construct:1 never:1 sampling:1 identical:1 future:2 others:1 jectories:1 few:8 simultaneously:1 rollouts:1 interest:1 highly:1 evaluation:6 derandomized:1 egregious:1 accurate:3 shorter:1 mahmood:1 filled:1 re:2 prosthesis:1 theoretical:3 instance:1 column:3 markovian:1 ordinary:1 cost:1 entry:7 dependency:1 answer:2 combined:1 st:32 international:4 amherst:1 off:8 precup:1 squared:13 possibly:2 worse:2 lambda:1 american:1 dialogue:1 derivative:1 return:135 account:7 degris:1 includes:1 coefficient:1 north:1 depends:1 vi:6 try:1 portion:1 red:3 carey:1 minimize:2 square:6 gression:1 accuracy:1 variance:25 largely:2 bayesian:2 accurately:2 produced:1 carlo:6 trajectory:41 researcher:1 app:8 explain:1 plateau:1 rstd:3 konidaris:5 thereof:1 proof:1 massachusetts:1 begun:1 knowledge:1 car:2 improves:2 supervised:1 planar:1 specify:2 improved:3 response:1 inspire:1 formulation:2 furthermore:1 marketing:4 just:1 lifetime:1 correlation:7 until:2 twentyfirst:1 nonlinear:1 multidisciplinary:1 artifact:1 grows:1 mdp:2 concept:1 unbiased:10 true:8 evolution:1 symmetric:1 unfilled:1 self:1 recurrence:1 eligibility:1 generalized:3 bradtke:1 recently:3 superior:1 absorbing:3 pseudocode:2 functional:2 stimulation:2 rl:4 million:6 association:1 mellon:1 significant:3 cambridge:1 approx:16 similarly:1 language:1 actor:2 longer:1 similarity:1 recent:1 perspective:1 optimizes:1 dawson:1 success:1 yi:5 george:1 redundant:1 monotonically:1 wellunderstood:1 full:1 reduces:1 ing:1 smooth:2 adobe:1 rsmc:2 variant:1 regression:3 impact:1 controller:2 represent:1 achieved:1 robotics:1 wealth:1 grow:1 
leaving:1 biased:3 unlike:3 tend:1 leverage:2 intermediate:2 feedforward:1 identically:1 variety:1 independence:2 architecture:1 perfectly:1 reduce:1 texas:1 utility:1 padding:1 downey:1 rsi:6 action:1 clear:2 amount:2 discount:1 outperform:1 exist:1 canonical:1 notice:5 dotted:1 revisit:1 estimated:4 per:1 blue:4 carnegie:1 write:2 four:4 reformulation:1 neither:1 geometrically:2 padded:1 place:1 family:6 almost:1 reasonable:3 decision:2 appendix:1 kirsch:1 convergent:1 annual:1 constraint:2 fourier:1 min:1 innovative:1 relatively:1 according:2 march:1 poor:2 wi:4 making:4 rehabilitation:1 den:2 pr:1 heart:1 ceiling:1 fail:1 needed:1 end:1 available:2 alternative:2 kurata:1 thomas:5 original:1 denotes:1 remaining:3 assumes:1 include:1 linguistics:1 giving:1 k1:7 especially:1 approximating:6 rsj:2 society:1 unchanged:1 objective:1 question:5 occurs:1 blend:2 parametric:4 primary:1 rt:9 strategy:1 diagonal:19 evolutionary:1 remedied:1 philip:1 reintroducing:1 das1:1 assuming:1 length:19 relationship:3 insufficient:1 reformulate:1 difficult:1 fe:2 trace:1 suppress:1 design:1 policy:13 unknown:2 perform:2 twenty:2 markov:2 finite:1 gridworld:2 discovered:1 ninth:1 optimized:1 beyond:1 bar:1 usually:1 below:1 scott:1 sanner:1 arm:2 scheme:5 improve:2 technology:1 mdps:1 philadelphia:1 geometric:1 understanding:1 georgios:1 expect:1 interesting:1 proportional:1 var:10 digital:3 foundation:1 principle:1 critic:2 austin:1 row:3 last:1 keeping:1 jth:1 bias:4 emp:5 taking:1 fifth:1 benefit:1 distributed:2 curve:1 feedback:1 van:2 transition:3 world:1 evaluating:1 reinforcement:7 founded:1 historical:1 approximate:21 emphasize:1 compact:1 overfitting:1 xi:4 continuous:1 search:1 decade:1 why:1 learn:1 ignoring:1 mse:3 complex:10 necessarily:1 constructing:1 domain:5 backup:2 noise:2 whole:1 wiley:1 lie:1 infancy:1 weighting:4 formula:1 remained:1 removing:2 specific:1 showing:1 maxi:2 false:3 importance:2 depicted:1 relegate:1 explore:1 truth:4 determines:2 ma:1 conditional:1 goal:1 careful:1 replace:1 feasible:1 jagodnik:2 specifically:2 except:1 averaging:1 called:1 select:1 brevity:1 correlated:3 |
5,311 | 5,808 | Community Detection via Measure Space Embedding
Shie Mannor
The Technion, Haifa, Israel
[email protected]
Mark Kozdoba
The Technion, Haifa, Israel
[email protected]
Abstract
We present a new algorithm for community detection. The algorithm uses random walks to embed the graph in a space of measures, after which a modification
of k-means in that space is applied. The algorithm is therefore fast and easily
parallelizable. We evaluate the algorithm on standard random graph benchmarks,
including some overlapping community benchmarks, and find its performance to
be better or at least as good as previously known algorithms. We also prove a
linear time (in number of edges) guarantee for the algorithm
on a p, q-stochastic
q
1
1
block model with where p ? c ? N ? 2 + and p ? q ? c0 pN ? 2 + log N .
1
Introduction
Community detection in graphs, also known as graph clustering, is a problem where one wishes to
identify subsets of the vertices of a graph such that the connectivity inside the subset is in some
way denser than the connectivity of the subset with the rest of the graph. Such subsets are referred
to as communities, and it often happens in applications that if two vertices belong to the same
community, they have similar application-related qualities. This in turn may allow for a higher level
analysis of the graph, in terms of communities instead of individual nodes. Community detection
finds applications in a diversity of fields, such as social networks analysis, communication and traffic
design, in biological networks, and, generally, in most fields where meaningful graphs can arise (see,
for instance, [1] for a survey). In addition to direct applications to graphs, community detection can,
for instance, be also applied to general Euclidean space clustering problems, by transforming the
metric to a weighted graph structure (see [2] for a survey).
Community detection problems come in different flavours, depending on whether the graph in question is simple, or weighted, or/and directed. Another important distinction is whether the communities are allowed to overlap or not. In the overlapping communities case, each vertex can belong to
several subsets.
A difficulty with community detection is that the notion of community is not well defined. Different algorithms may employ different formal notions of a community, and can sometimes produce
different results. Nevertheless, there exist several widely adopted benchmarks ? synthetic models
and real-life graphs ? where the ground truth communities are known, and algorithms are evaluated based on the similarity of the produced output to the ground truth, and based on the amount
of required computations. On the theoretical side, most of the effort is concentrated on developing
algorithms with guaranteed recovery of clusters for graphs generated from variants of the Stochastic
Block Model (referred to as SBM in what follows, [1]).
In this paper we present a new algorithm, DER (Diffusion Entropy Reducer, for reasons to be clarified later), for non-overlapping community detection. The algorithm is an adaptation of the k-means
algorithm to a space of measures which are generated by short random walks from the nodes of the
graph. The adaptation is done by introducing a certain natural cost on the space of the measures.
As detailed below, we evaluate the DER on several benchmarks and find its performance to be as
good or better than the best alternative method. In addition, we establish some theoretical guarantees
1
on its performance. While the main purpose of the theoretical analysis in this paper is to provide
some insight into why DER works, our result is also one of a few results in the literature that show
reconstruction in linear time.
On the empirical side, we first evaluate our algorithm on a set of random graph benchmarks known
as the LFR models, [3]. In [4], 12 other algorithms were evaluated on these benchmarks, and
three algorithms, described in [5], [6] and [7], were identified, that exhibited significantly better
performance than the others, and similar performance among themselves. We evaluate our algorithm
on random graphs with the same parameters as those used in [4] and find its performance to be
as good as these three best methods. Several well known methods, including spectral clustering
[8], exhaustive modularity optimization (see [4] for details), and clique percolation [9], have worse
performance on the above benchmarks.
Next, while our algorithm is designed for non-overlapping communities, we introduce a simple
modification that enables it to detect overlapping communities in some cases. Using this modification, we compare the performance of our algorithm to the performance of 4 overlapping community
algorithms on a set of benchmarks that were considered in [10]. We find that in all cases DER performs better than all 4 algorithms. None of the algorithms evaluated in [4] and [3] has theoretical
guarantees.
On the theoretical side, we show that DER reconstructs with high probability the partition of the
? 12
p, q-stochastic
q block model such that, roughly, p ? N , where N is the number of vertices, and
1
p ? q ? c pN ? 2 + log N (this holds in particular when pq ? c0 > 1) for some constant c > 0.
We show that for this reconstruction only one iteration of the k-means is sufficient. In fact, three
passages over the set of edges suffice. While the cost function we introduce for DER will appear at
first to have purely probabilistic motivation, for the purposes of the proof we provide an alternative
interpretation of this cost in terms of the graph, and the arguments show which properties of the
graph are useful for the convergence of the algorithm.
Finally, although this is not the emphasis of the present paper, it is worth noting here that, as will
be evident later, our algorithm can be trivially parallelalized. This seems to be a particularly nice
feature since most other algorithms, including spectral clustering, are not easy to parallelalize and
do not seem to have parallel implementations at present.
The rest of the paper is organized as follows: Section 2 overviews related work and discusses relations to our results. In Section 3 we provide the motivation for the definition of the algorithm, derive
the cost function and establish some basic properties. Section 4 we present the results on the empirical evaluation of the algorithm and Section 5 describes the theoretical guarantees and the general
proof scheme. Some proofs and additional material are provided in the supplementary material.
2
Literature review
Community detection in graphs has been an active research topic for the last two decades and generated a huge literature. We refer to [1] for an extensive survey. Throughout the paper, let G = (V, E)
be a graph, and let P = P1 , . . . , Pk be a partition of V . Loosely speaking, a partition P is a good
community structure on G if for each Pi ? P , more edges stay within Pi than leave Pi . This is
usually quantified via some cost function that assigns larger scalars to partitions P that are in some
sense better separated. Perhaps the most well known cost function is the modularity, which was
introduced in [11] and served as a basis of a large number of community detection algorithms ([1]).
The popular spectral clustering methods, [8]; [2], can also be viewed as a (relaxed) optimization of
a certain cost (see [2]).
Yet another group of algorithms is based on fitting a generative model of a graph with communities
to a given graph. References [12]; [10] are two among the many examples. Perhaps the simplest
generative model for non-overlapping communities is the stochastic block model, see [13],[1] which
we now define: Let P = P1 , . . . , Pk be a partition of V into k subsets. p, q-SBM is a distribution
over the graphs on vertex set V , such that all edges are independent and for i, j ? V , the edge (i, j)
exists with probability p if i, j belong to the same Ps , and it exists with probability q otherwise. If
q << p, the components Pi will be well separated in this model. We denote the number of nodes
by N = |V | throughout the paper.
2
Graphs generated from SBMs can serve as a benchmark for community detection algorithms. However, such graphs lack certain desirable properties, such as power-law degree and community size
distributions. Some of these issues were fixed in the benchmark models in [3]; [14], and these models are referred to as LFR models in the literature. More details on these models are given in Section
4.
We now turn to the discussion of the theoretical guarantees. Typically results in this direction provide
algorithms that can reconstruct,with high probability, the ground partition of a graph drawn from a
variant of a p, q-SBM model, with some, possibly large, number of components k. Recent results
include the works [15] and [16]. In this paper, however, we only analytically analyse the k = 2 case,
and such that, in addition, |P1 | = |P2 |.
For this case, the best known reconstruction result was obtained already in [17] and was only improved in terms of runtime since then. Namely, Bopanna?s result states that if p ? c1 logNN and
p ? q ? c2 logNN , then with high probability the partition is reconstructible. Similar bound can be
obtained, for instance, from the approaches in [15]; [16], to name a few. The methods in this group
are generally based on the spectral properties of adjacency (or related) matrices. The run time of
these algorithms is non-linear in the size of the graph and it is not known how these algorithms
behave on graphs not generated by the probabilistic models that they assume.
It is generally known that when the graphs are dense (p of order of constant), simple linear time
reconstruction algorithms exist (see [18]). The first, and to the best of our knowledge, the only
previous linear time algorithm for non dense graphs was proposed in [18]. This algorithm works
1
for p ? c3 ()N ? 2 + , for any fixed > 0. The approach of [18] was further extended in [19],
to handle more general cluster sizes. These approaches approaches differ significantly from the
spectrum based methods, and provide equally important theoretical insight. However, their empirical behaviour was never studied, and it is likely that even for graphs generated from the SBM,
extremely high values of N would be required for the algorithms to work, due to large constants in
the concentration inequalities (see the concluding remarks in [19]).
3
Algorithm
Let G be a finite undirected graph with a vertex set V = {1, . . . , n}. Denote by A = {aij } the
symmetric
P adjacency matrix of G, where aij ? 0 are edge weights, and for a vertex i ? V , set
di = j aij to be the degree of i. Let D be an n ? n diagonal matrix such that Dii = di , and set
T = D?1 A to be the transition matrix of the random walk on G. Set also pij = Tij . Finally, denote
by ?, ?(i) = Pdidj the stationary measure of the random walk.
j
A number of community detection algorithms are based on the intuition that distinct communities
should be relatively closed under the random walk (see [1]), and employ different notions of closedness. Our approach also takes this point of view.
For a fixed L ? N , consider the following sampling process on the graph: Choose vertex v0 randomly from ?, and perform L steps of a random walk on G, starting from v0 . This results in a
length L + 1 sequence of vertices, x1 . Repeat the process N times independently, to obtain also
x1 , . . . , x N .
Suppose now that we would like to model the sequences xs as a multinomial mixture model with a
single component. Since each coordinate xst is distributed according to ?, the single component of
the mixture should be ? itself, when N grows. Now suppose that we would like to model the same
sequences with a mixture of two components. Because the sequences are sampled from a random
walk rather then independently from each other, the components need no longer be ? itself, as in any
mixture where some elements appear more often together then others. The mixture as above can be
found using the EM algorithm, and this in principle summarizes our approach. The only additional
step, as discussed above, is to replace the sampled random walks with their true distributions, which
simplifies the analysis and also leads to somewhat improved empirical performance.
We now present the DER algorithm for detecting the non-overlapping communities. Its input is
the number of components to detect, k, the length of the walks L, an initialization partition P =
3
Algorithm 1 DER
1: Input: Graph G, walk length L,
number of components k.
2: Compute the measures wi .
3: Initialize P1 , . . . , Pk to be a random partition such that
|Pi | = |V |/k for all i.
4: repeat
5:
(1) For all s ? k, construct ?s = ?Ps .
6:
(2) For all s ? k, set
Ps =
i ? V | s = argmax D(wi , ?l ) .
l
7: until the sets Ps do not change
{P1 , . . . , Pk } of V into disjoint subsets. P would be usually taken to be a random partition of V
into equally sized subsets.
For t = 0, 1, . . . and a vertex i ? V , denote by wit the i-th row of the matrix T t . Then wit is the
distribution of the random walk on G, started at i, after t steps. Set wi = L1 (wi1 + . . . + wiL ), which
is the distribution corresponding to the average of the empirical measures of sequences x that start
at i.
For two probability measures ?, ? on V , set
D(?, ?) =
X
?(i) log ?(i).
i?V
Although D is not a metric, will act as a distance function in our algorithm. Note that if ? was an
empirical measure, then, up to a constant, D would be just the log-likelihood of observing ? from
independent samples of ?.
P
For a subset S ? V , set ?S to be the restriction of the measure ? to S, and also set dS = i?S di
to be the full degree of S. Let
1 X
di wi
(1)
?S =
dS
i?S
denote the distribution of the random walk started from ?S .
The complete DER algorithm is described in Algorithm 1.
The algorithm is essentially a k-means algorithm in a non-Euclidean space, where the points are
the measures wi , each occurring with multiplicity di . Step (1) is the ?means? step, and (2) is the
maximization step.
Let
C=
L X
X
di ? D(wi , ?l )
(2)
l=1 i?Pl
be the associated cost. As with the usual k-means, we have the following
Lemma 3.1. Either P is unchanged by steps (1) and (2) or both steps (1) and (2) strictly increase
the value of C.
The proof is by direct computation and is deferred to the supplementary material. Since the number
of configurations P is finite, it follows that DER always terminates and provides a ?local maximum?
of the cost C.
The cost C can be rewritten in a somewhat more informative form. To do so, we introduce some
notation first. Let X be a random variable on V , distributed according to measure ?. Let Y a step of
a random walk started at X, so that the distribution of Y given X = i is wi . Finally, for a partition
P , let Z be the indicator variable of a partition, Z = s iff X ? Ps . With this notation, one can write
C = ?dV ? H(Y |Z) = dV (?H(Y ) + H(Z) ? H(Z|Y )) ,
4
(3)
26
14
20
29
22
18
15
23
33
32
30
9
27
25
28
8
13
2
31
19
1
24
3
17
0
7
21
12
10
11
5
4
6
16
(b) Political Blogs
(a) Karate Club
where H are the full and conditional Shannon entropies. Therefore, DER algorithm can be interpreted as seeking a partition that maximizes the information between current known state (Z), and
the next step from it (Y ). This interpretation gives rise to the name of the algorithm, DER, since
every iteration reduces the entropy H(Y |Z) of the random walk, or diffusion, with respect to the
partition. The second equality in (3) has another interesting interpretation. Suppose, for simplicity,
that k = 2, with partition P1 , P2 . In general, a clustering algorithm aims to minimize the cut, the
number of edges between P1 and P2 . However, minimizing the number of edges directly will lead
to situations where P1 is a single node, connected with one edge to the rest of the graph in P2 . To
avoid such situation, a relative, normalized version of a cut needs to be introduced, which takes into
account the sizes of P1 , P2 . Every clustering algorithms has a way to resolve this issue, implicitly
or explicitly. For DER, this is shown in second equality of (3). H(Z) is maximized when the components are of equal sizes (with respect to ?), while H(Z|Y ) is minimized when the measures ?Ps
are as disjointly supported as possible.
As any k-means algorithm, DER?s results depend somewhat on its random initialization. All kmeans-like schemes are usually restarted several times and the solution with the best cost is chosen.
In all cases which we evaluated we observed empirically that the dependence of DER on the initial
parameters is rather weak. After two or three restarts it usually found a partition nearly as good
as after 100 restarts. For clustering problems, however, there is another simple way to aggregate
the results of multiple runs into a single partition, which slightly improves the quality of the final
results. We use this technique in all our experiments and we provide the details in the Supplementary
Material, Section A.
We conclude by mentioning two algorithms that use some of the concepts that we use. The Walktrap,
[20], similarly to DER constructs the random walks (the measures wi , possibly for L > 1) as part of
its computation. However, Walktrap uses wi ?s in a completely different way. Both the optimization
procedure and the cost function are different from ours. The Infomap , [5], [21], has a cost that
is related to the notion of information. It aims to minimize to the information required to transmit
a random walk on G through a channel, the source coding is constructed using the clusters, and
best clusters are those that yield the best compression. This does not seem to be directly connected
to the maximum likelyhood motivated approach that we use. As with Walktrap, the optimization
procedure of Infomap also completely differs from ours.
4
Evaluation
In this section results of the evaluation of DER algorithm are presented. In Section 4.1 we illustrate
DER on two classical graphs. Sections 4.2 and 4.3 contain the evaluation on the LFR benchmarks.
4.1
Basic examples
When a new clustering algorithm is introduced, it is useful to get a general feel of it with some
simple examples. Figure 1a shows the classical Zachary?s Karate Club, [22]. This graph has a
5
ground partition into two subsets. The partition shown in Figure 1a is a partition obtained from a
typical run of DER algorithm, with k = 2, and wide range of L?s. (L ? [1, 10] were tested). As is
the case with many other clustering algorithms, the shown partition differs from the ground partition
in one element, node 8 (see [1]).
Figure 1b shows the political blogs graph, [23]. The nodes are political blogs, and the graph has an
(undirected) edge if one of the blogs had a link to the other. There are 1222 nodes in the graph. The
ground truth partition of this graph has two components - the right wing and left wing blogs. The
labeling of the ground truth was partially automatic and partially manual, and both processes could
introduce some errors. The run of DER reconstructs the ground truth partition with only 57 nodes
missclassifed. The NMI (see the next section, Eq. (4)) to the ground truth partition is .74.
The political blogs graphs is particularly interesting since it is an example of a graph for which
fitting an SBM model to reconstruct the clusters produces results very different from the ground
truth. To overcome the problem with SBM fitting on this graph, a degree sensitive version of SBM,
DCBM, was introduced in [24]. That algorithm produces partition with NMI .75. Another approach
to DCBM can be found in [25].
4.2
LFR benchmarks
The LFR benchmark model, [14], is a widely used extension of the stochastic block model, where
node degrees and community sizes have power law distribution, as often observed in real graphs.
An important parameter of this model is the mixing parameter ? ? [0, 1] that controls the fraction
of the edges of a node that go outside the node?s community (or outside all of node?s communities,
in the overlapping case). For small ?, there will be a small number of edges going outside the
communities, leading to disjoint, easily separable graphs, and the boundaries between communities
will become less pronounced as ? grows.
Given a set of communities P on a graph, and the ground truth set of communities Q, there are
several ways to measure how close P is to Q. One standard measure is the normalized mutual
information (NMI), given by:
N M I(P, Q) = 2
I(P, Q)
,
H(P ) + H(Q)
(4)
where H is the Shannon entropy of a partition and I is the mutual information (see [1] for details).
NMI is equal 1 if and only if the partitions P and Q coincide, and it takes values between 0 and 1
otherwise.
When computed with NMI, the sets inside P, Q can not overlap. To deal with overlapping communities, an extension of NMI was proposed in [26]. We refer to the original paper for the definition,
as the definition is somewhat lengthy. This extension, which we denote here as ENMI, was subsequently used in the literature as a measure of closeness of two sets of communities, event in the
cases of disjoint communities. Note that most papers use the notation NMI while the metric that
they really use is ENMI.
Figure 2a shows the results of evaluation of DER for four cases: the size of a graph was either
N = 1000 or N = 5000 nodes, and the size of the communities was restricted to be either between
10 to 50 (denoted S in the figures) or between 20 to 100 (denoted B). For each combination of these
parameters, ? varied between 0.1 and 0.8. For each combination of graph size, community size
restrictions as above and ? value, we generated 20 graphs from that model and run DER. To provide
some basic intuition about these graphs, we note that the number of communities in the 1000S
graphs is strongly concentrated around 40, and in 1000B, 5000S, and 5000B graphs it is around 25,
200 and 100 respectively. Each point in Figure 2a is a the average ENMI on the 20 corresponding
graphs, with standard deviation as the error bar. These experiments correspond precisely to the ones
performed in [4] (see Supplementary Material, Section Cfor more details). In all runs on DER we
have set L = 5 and set k to be the true number of communities for each graph, as was done in [4] for
the methods that required it. Therefore our Figure 2a can be compared directly with Figure 2 in [4].
From this comparison we see that DER and the two of the best algorithms identified in [4], Infomap
[5] and RN [6], reconstruct the partition perfectly for ? ? 0.5, for ? = 0.6 DER?s reconstruction
scores are between Infomap?s and RN?s, with values for all of the algorithms above 0.95, and for
6
1.0
0.8
0.8
0.6
0.6
ENMI
ENMI
1.0
0.4
0.4
n1000S
n1000B
n5000S
n5000B
0.2
0.0
0.1
0.2
0.3
n1000S
n1000B
n5000S
n5000B
0.2
0.4
0.5
0.6
0.7
0.0
0.1
0.8
0.2
0.3
0.4
0.5
0.6
0.7
0.8
?
?
(a) DER, LFR benchmarks
(b) Spectral Alg., LFR benchmarks
? = 0.7 DER has the best performance in two of the four cases. For ? = 0.8 all algorithms have
score 0.
We have also performed the same experiments with the standard version of spectral clustering, [8],
because this version was not evaluated in [4]. The results are shown in Fig. 2b. Although the
performance is generally good, the scores are mostly lower than those of DER, Infomap and RN.
4.3
Overlapping LFR benchmarks
We now describe how DER can be applied to overlapping community detection. Observe that DER
internally operates on measures ?Ps rather then subsets of the vertex set. Recall that ?Ps (i) is the
probability that a random walk started from Ps will hit node i. We can therefore consider each i to
be a member of those communities from which the probability to hit it is ?high enough?. To define
this formally, we first note that for any partition P , the following decomposition holds:
?=
k
X
?(Ps )?Ps .
(5)
s=1
This follows from the invariance of ? under the random walk. Now, given the out put of DER - the
sets Ps and measures ?Ps set
?Ps (i)?(Ps )
?P (i)?(Ps )
,
=
mi (s) = Pk s
?(i)
t=1 ?Pt (i)?(Pt )
(6)
where we used (5) in the second equality. Then mi (s) is the probability that the walks started at Ps ,
given that it finished in i. For each i ? V , set si = argmaxl mi (l) to be the most likely community
given i. Then define the overlapping communities C1 , . . . , Ck via
1
Ct = i ? V | mi (t) ? ? mi (si ) .
(7)
2
The paper [10] introduces a new algorithm for overlapping communities detection and contains also
an evaluation of that algorithm as well as of several other algorithms on a set of overlapping LFR
benchmarks. The overlapping communities LFR model was defined in [3]. In Table 1 we present
the ENMI results of DER runs on the N = 10000 graphs with same parameters as in [10], and also
show the values obtained on these benchmarks in [10] (Figure S4 in [10]), for four other algorithms.
The DER algorithm was run with L = 2, and k was set to the true number of communities. Each
number is an average over ENMIs on 10 instances of graphs with a given set of parameters (as in
[10]). The standard deviation around this average for DER was less then 0.02 in all cases. Variances
for other algorithms are provided in [10].
For ? ? 0.6 all algorithms yield ENMI of less then 0.3. As we see in Table 1, DER performs better
than all other algorithms in all the cases. We believe this indicates that DER together with equation
(7) is a good choice for overlapping community detection in situations where community overlap
between each two communities is sparse, as is the case in the LFR models considered above. Further
discussion is provided in the Supplementary Material, Section D.
7
Table 1: Evaluation for Overlapping LFR. All values except DER are from [10]
Alg.
DER
SVI ([10])
POI ([27])
INF ([21])
COP ([28])
?=0
0.94
0.89
0.86
0.42
0.65
? = 0.2
0.9
0.73
0.68
0.38
0.43
? = 0.4
0.83
0.6
0.55
0.4
0.0
We conclude this section by noting that while in the non-overlapping case the models generated
with ? = 0 result in trivial community detection problems, because in these cases communities are
simply the connected components of the graph, this is no longer true in the overlapping case. As
a point of reference, the well known Clique Percolation method was also evaluated in [10], in the
? = 0 case. The average ENMI for this algorithm was 0.2 (Table S3 in [10]).
5
Analytic bounds
In this section we restrict our attention to the case L = 1 of the DER algorithm. Recall that the
p, q-SBM model was defined in Section 2. We shall consider the model with k = 2 and such that
|P1 | = |P2 |. We assume that the initial partition for the DER, denoted C1 , C2 in what follows, is
chosen as in step 3 of DER (Algorithm 1) - a random partition of V into two equal sized subsets.
In this setting we have the following:
Theorem 5.1. For every > 0 there exists C > 0 and c > 0 such that if
1
and
p ? C ? N ? 2 +
(8)
q
1
p ? q ? c pN ? 2 + log N
(9)
then DER recovers the partition P1 , P2 after one iteration, with probability ?(N ) such that ?(N ) ?
1 when N ? ?.
Note that the probability in the conclusion of the theorem refers to a joint probability of a draw from
the SBM and of an independent draw from the random initialization.
The proof of the theorem has essentially three steps. First, we observe that the random initialization
C1 , C2 is necessarily somewhat biased, in the sense that C1 and C2 never divide P1 exactly into two
1
halves. Specifically, ||C1 ? P1 | ? |C2 ? P1 || ? N ? 2 ? with high probability. Assume that C1 has
the bigger half, |C1 ? P1 | > |C2 ? P1 |. In the second step, by an appropriate linearization argument
we show that for a node i ? P1 , deciding whether D(wi , ?C1 ) > D(wi , ?C2 ) or vice versa amounts
to counting paths of length two between i and |C1 ? P1 |. In the third step we estimate the number of
1
these length two paths in the model. The fact that |C1 ? P1 | > |C2 ? P1 | + N ? 2 ? will imply more
paths to C1 ? P1 from i ? P1 and we will conclude that D(wi , ?C1 ) > D(wi , ?C2 ) for all i ? P1
and D(wi , ?C2 ) > D(wi , ?C1 ) for all i ? P2 . The full proof is provided in the supplementary
material.
References
[1] Santo Fortunato. Community detection in graphs. Physics Reports, 486(35):75 ? 174, 2010.
[2] Ulrike Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395?416,
2007.
[3] Andrea Lancichinetti and Santo Fortunato. Benchmarks for testing community detection
algorithms on directed and weighted graphs with overlapping communities. Phys. Rev. E,
80(1):016118, 2009.
[4] Santo Fortunato and Andrea Lancichinetti. Community detection algorithms: A comparative
analysis. In Fourth International ICST Conference, 2009.
8
[5] M. Rosvall and C. T. Bergstrom. Maps of random walks on complex networks reveal community structure. Proc. Natl. Acad. Sci. USA, page 1118, 2008.
[6] Peter Ronhovde and Zohar Nussinov. Multiresolution community detection for megascale
networks by information-based replica correlations. Phys. Rev. E, 80, 2009.
[7] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast
unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and
Experiment, 2008(10), 2008.
[8] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an
algorithm. In Advances in Neural Information Processing Systems 14, 2001.
[9] Gergely Palla, Imre Der?enyi, Ill?es Farkas, and Tam?as Vicsek. Uncovering the overlapping
community structure of complex networks in nature and society. Nature, 435, 2005.
[10] Prem K Gopalan and David M Blei. Efficient discovery of overlapping communities in massive
networks. Proceedings of the National Academy of Sciences, 110(36):14534?14539, 2013.
[11] M. Girvan and M. E. J. Newman. Community structure in social and biological networks.
Proceedings of the National Academy of Sciences, 99(12):7821?7826, 2002.
[12] MEJ Newman and EA Leicht. Mixture models and exploratory analysis in networks. Proceedings of the National Academy of Sciences, 104(23):9564, 2007.
[13] Paul W. Holland, Kathryn B. Laskey, and Samuel Leinhardt. Stochastic blockmodels: First
steps. Social Networks, 5(2):109?137, 1983.
[14] Andrea Lancichinetti, Santo Fortunato, and Filippo Radicchi. Benchmark graphs for testing
community detection algorithms. Phys. Rev. E, 78(4), 2008.
[15] Animashree Anandkumar, Rong Ge, Daniel Hsu, and Sham Kakade. A tensor spectral approach to learning mixed membership community models. In COLT, volume 30 of JMLR
Proceedings, 2013.
[16] Yudong Chen, S. Sanghavi, and Huan Xu. Improved graph clustering. Information Theory,
IEEE Transactions on, 60(10):6440?6455, Oct 2014.
[17] Ravi B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Foundations
of Computer Science, 1987., 28th Annual Symposium on, pages 280?285, Oct 1987.
[18] Anne Condon and Richard M. Karp. Algorithms for graph partitioning on the planted partition
model. Random Struct. Algorithms, 18(2):116?140, 2001.
[19] Ron Shamir and Dekel Tsur. Improved algorithms for the random cluster graph model. Random
Struct. Algorithms, 31(4):418?449, 2007.
[20] Pascal Pons and Matthieu Latapy. Computing communities in large networks using random
walks. J. of Graph Alg. and App., 10:284?293, 2004.
[21] Alcides Viamontes Esquivel and Martin Rosvall. Compression of flow can reveal overlappingmodule organization in networks. Phys. Rev. X, 1:021025, Dec 2011.
[22] W. W. Zachary. An information flow model for conflict and fission in small groups. Journal of
Anthropological Research, 33:452?473, 1977.
[23] Lada A. Adamic and Natalie Glance. The political blogosphere and the 2004 U.S. election:
Divided they blog. LinkKDD ?05, 2005.
[24] Brian Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83, 2011.
[25] Arash A. Amini, Aiyou Chen, Peter J. Bickel, and Elizaveta Levina. Pseudo-likelihood methods for community detection in large sparse networks. 41, 2013.
[26] Andrea Lancichinetti, Santo Fortunato, and Jnos Kertsz. Detecting the overlapping and hierarchical community structure in complex networks. New Journal of Physics, 11(3):033015,
2009.
[27] Brian Ball, Brian Karrer, and M. E. J. Newman. Efficient and principled method for detecting
communities in networks. Phys. Rev. E, 84:036103, Sep 2011.
[28] Steve Gregory. Finding overlapping communities in networks by label propagation. New
Journal of Physics, 12(10):103018, 2010.
9
| 5808 |@word version:4 compression:2 seems:1 c0:2 dekel:1 condon:1 decomposition:1 anthropological:1 initial:2 configuration:1 contains:1 score:3 daniel:1 ours:2 current:1 anne:1 si:2 yet:1 partition:34 informative:1 enables:1 analytic:1 designed:1 farkas:1 stationary:1 generative:2 half:2 short:1 santo:5 blei:1 detecting:3 mannor:1 node:15 clarified:1 provides:1 club:2 ron:1 c2:10 direct:2 constructed:1 become:1 symposium:1 natalie:1 prove:1 lancichinetti:4 fitting:3 inside:2 introduce:4 blondel:1 andrea:4 themselves:1 p1:23 mechanic:1 roughly:1 palla:1 resolve:1 election:1 linkkdd:1 provided:4 notation:3 suffice:1 maximizes:1 israel:2 what:2 interpreted:1 finding:1 guarantee:5 pseudo:1 every:3 act:1 runtime:1 exactly:1 hit:2 control:1 partitioning:1 internally:1 appear:2 local:1 acad:1 path:3 bergstrom:1 emphasis:1 initialization:4 studied:1 quantified:1 mentioning:1 range:1 directed:2 testing:2 block:5 differs:2 svi:1 procedure:2 empirical:6 significantly:2 refers:1 get:1 close:1 put:1 restriction:2 disjointly:1 map:1 go:1 attention:1 starting:1 independently:2 survey:3 wit:2 simplicity:1 recovery:1 assigns:1 matthieu:1 insight:2 sbm:9 embedding:1 handle:1 notion:4 coordinate:1 exploratory:1 transmit:1 feel:1 pt:2 suppose:3 shamir:1 massive:1 us:2 kathryn:1 element:2 particularly:2 cut:2 observed:2 connected:3 renaud:1 reducer:1 principled:1 intuition:2 transforming:1 wil:1 rosvall:2 depend:1 argmaxl:1 purely:1 serve:1 basis:1 completely:2 easily:2 joint:1 sep:1 tx:1 separated:2 distinct:1 fast:2 describe:1 enyi:1 labeling:1 aggregate:1 newman:4 outside:3 exhaustive:1 jean:1 widely:2 supplementary:6 denser:1 larger:1 otherwise:2 reconstruct:3 statistic:1 analyse:1 itself:2 cop:1 final:1 sequence:5 eigenvalue:1 reconstruction:5 leinhardt:1 adaptation:2 iff:1 mixing:1 multiresolution:1 academy:3 pronounced:1 convergence:1 cluster:6 p:17 produce:3 comparative:1 leave:1 depending:1 derive:1 ac:2 illustrate:1 andrew:1 eq:1 p2:8 come:1 lognn:2 direction:1 differ:1 stochastic:7 subsequently:1 arash:1 material:7 dii:1 adjacency:2 behaviour:1 really:1 biological:2 brian:3 strictly:1 pl:1 extension:3 hold:2 rong:1 around:3 considered:2 ground:11 deciding:1 radicchi:1 bickel:1 purpose:2 wi1:1 proc:1 label:1 percolation:2 sensitive:1 vice:1 weighted:3 unfolding:1 always:1 aim:2 rather:3 ck:1 pn:3 avoid:1 poi:1 imre:1 aiyou:1 karp:1 likelihood:2 indicates:1 political:5 detect:2 sense:2 membership:1 typically:1 relation:1 going:1 issue:2 among:2 ill:1 uncovering:1 denoted:3 colt:1 pascal:1 initialize:1 mutual:2 field:2 construct:2 never:2 equal:3 ng:1 sampling:1 nearly:1 minimized:1 others:2 report:1 sanghavi:1 richard:1 employ:2 few:2 randomly:1 national:3 individual:1 argmax:1 detection:22 organization:1 huge:1 evaluation:7 deferred:1 introduces:1 mixture:6 natl:1 edge:12 huan:1 euclidean:2 loosely:1 walk:21 haifa:2 divide:1 theoretical:8 instance:4 pons:1 maximization:1 karrer:2 cost:13 introducing:1 vertex:11 subset:12 deviation:2 technion:4 closedness:1 gregory:1 synthetic:1 international:1 stay:1 probabilistic:2 physic:3 michael:1 together:2 connectivity:2 gergely:1 reconstructs:2 choose:1 possibly:2 worse:1 tam:1 wing:2 leading:1 account:1 diversity:1 coding:1 explicitly:1 later:2 view:1 performed:2 closed:1 observing:1 traffic:1 ulrike:1 start:1 parallel:1 minimize:2 il:2 variance:1 maximized:1 yield:2 identify:1 correspond:1 weak:1 vincent:1 produced:1 bisection:1 none:1 lada:1 worth:1 served:1 app:1 lambiotte:1 parallelizable:1 phys:6 manual:1 lengthy:1 definition:3 proof:6 di:6 
associated:1 mi:5 recovers:1 sampled:2 hsu:1 popular:1 animashree:1 recall:2 knowledge:1 improves:1 organized:1 ea:1 steve:1 higher:1 restarts:2 improved:4 wei:1 evaluated:6 done:2 strongly:1 just:1 likelyhood:1 until:1 d:2 correlation:1 adamic:1 overlapping:25 lack:1 glance:1 propagation:1 quality:2 perhaps:2 reveal:2 laskey:1 grows:2 believe:1 name:2 reconstructible:1 usa:1 normalized:2 true:4 concept:1 contain:1 analytically:1 equality:3 symmetric:1 deal:1 samuel:1 evident:1 complete:1 performs:2 l1:1 passage:1 multinomial:1 empirically:1 overview:1 volume:1 belong:3 interpretation:3 discussed:1 refer:2 versa:1 automatic:1 trivially:1 similarly:1 had:1 pq:1 similarity:1 longer:2 v0:2 recent:1 inf:1 certain:3 inequality:1 blog:7 life:1 der:43 additional:2 relaxed:1 somewhat:5 full:3 desirable:1 multiple:1 reduces:1 sham:1 levina:1 vicsek:1 divided:1 equally:2 bigger:1 variant:2 basic:3 mej:1 essentially:2 metric:3 iteration:3 sometimes:1 dec:1 c1:14 addition:3 xst:1 source:1 biased:1 rest:3 exhibited:1 undirected:2 shie:2 member:1 flow:2 seem:2 jordan:1 anandkumar:1 ee:1 noting:2 counting:1 easy:1 enough:1 identified:2 perfectly:1 restrict:1 simplifies:1 whether:3 motivated:1 effort:1 peter:2 speaking:1 remark:1 useful:2 generally:4 detailed:1 tij:1 gopalan:1 amount:2 s4:1 concentrated:2 simplest:1 exist:2 tutorial:1 s3:1 disjoint:3 write:1 shall:1 group:3 four:3 nevertheless:1 drawn:1 ravi:1 diffusion:2 replica:1 graph:66 icst:1 fraction:1 run:8 luxburg:1 fourth:1 throughout:2 draw:2 summarizes:1 flavour:1 bound:2 ct:1 guaranteed:1 annual:1 precisely:1 filippo:1 argument:2 extremely:1 concluding:1 separable:1 relatively:1 tsur:1 martin:1 developing:1 according:2 combination:2 ball:1 describes:1 terminates:1 em:1 slightly:1 nmi:7 wi:15 lefebvre:1 kakade:1 rev:6 modification:3 happens:1 dv:2 restricted:1 multiplicity:1 taken:1 equation:1 previously:1 turn:2 discus:1 ge:1 adopted:1 rewritten:1 observe:2 hierarchical:1 spectral:9 appropriate:1 amini:1 alternative:2 yair:1 struct:2 original:1 clustering:14 include:1 etienne:1 sbms:1 establish:2 classical:2 society:1 unchanged:1 seeking:1 tensor:1 question:1 already:1 cfor:1 concentration:1 dependence:1 usual:1 diagonal:1 planted:1 elizaveta:1 distance:1 link:1 sci:1 topic:1 trivial:1 reason:1 karate:2 length:5 minimizing:1 mostly:1 fortunato:5 rise:1 design:1 implementation:1 perform:1 benchmark:20 finite:2 behave:1 situation:3 extended:1 communication:1 rn:3 varied:1 community:74 introduced:4 david:1 namely:1 required:4 lfr:12 extensive:1 c3:1 conflict:1 distinction:1 zohar:1 bar:1 below:1 usually:4 leicht:1 including:3 power:2 overlap:3 event:1 difficulty:1 natural:1 indicator:1 scheme:2 imply:1 finished:1 started:5 nice:1 literature:5 review:1 discovery:1 relative:1 law:2 girvan:1 mixed:1 interesting:2 foundation:1 degree:5 sufficient:1 pij:1 principle:1 pi:5 row:1 repeat:2 last:1 supported:1 aij:3 formal:1 allow:1 side:3 wide:1 sparse:2 distributed:2 overcome:1 boundary:1 zachary:2 transition:1 yudong:1 coincide:1 social:3 transaction:1 implicitly:1 clique:2 active:1 conclude:3 spectrum:1 decade:1 modularity:2 why:1 table:4 fission:1 channel:1 nature:2 alg:3 necessarily:1 complex:3 pk:5 main:1 dense:2 blockmodels:2 motivation:2 arise:1 paul:1 allowed:1 x1:2 xu:1 fig:1 referred:3 wish:1 jmlr:1 third:1 theorem:3 embed:1 x:1 closeness:1 exists:3 linearization:1 occurring:1 chen:2 entropy:4 simply:1 likely:2 blogosphere:1 partially:2 scalar:1 holland:1 restarted:1 truth:8 oct:2 conditional:1 viewed:1 sized:2 kmeans:1 boppana:1 replace:1 
change:1 typical:1 except:1 operates:1 specifically:1 lemma:1 invariance:1 e:1 shannon:2 meaningful:1 formally:1 guillaume:1 mark:1 prem:1 evaluate:4 tested:1 |
5,312 | 5,809 | The Consistency of Common Neighbors for
Link Prediction in Stochastic Blockmodels
Deepayan Chakrabarti
IROM, McCombs School of Business
University of Texas at Austin
[email protected]
Purnamrita Sarkar
Department of Statistics
University of Texas at Austin
[email protected]
Peter Bickel
Department of Statistics
University of California, Berkeley
[email protected]
Abstract
Link prediction and clustering are key problems for network-structured
data. While spectral clustering has strong theoretical guarantees under
the popular stochastic blockmodel formulation of networks, it can be expensive for large graphs. On the other hand, the heuristic of predicting
links to nodes that share the most common neighbors with the query node
is much fast, and works very well in practice. We show theoretically that
the common neighbors heuristic can extract clusters with high probability when the graph is dense enough, and can do so even in sparser graphs
with the addition of a ?cleaning? step. Empirical results on simulated and
real-world data support our conclusions.
1
Introduction
Networks are the simplest representation of relationships between entities, and as such have
attracted significant attention recently. Their applicability ranges from social networks such
as Facebook, to collaboration networks of researchers, citation networks of papers, trust
networks such as Epinions, and so on. Common applications on such data include ranking,
recommendation, and user segmentation, which have seen wide use in industry. Most of
these applications can be framed in terms of two problems: (a) link prediction, where the
goal is to find a few similar nodes to a given query node, and (b) clustering, where we want
to find groups of similar individuals, either around a given seed node or a full partitioning
of all nodes in the network.
An appealing model of networks is the stochastic blockmodel, which posits the existence of
a latent cluster for each node, with link probabilities between nodes being simply functions
of their clusters. Inference of the latent clusters allows one to solve both the link prediction
problem and the clustering problem (predict all nodes in the query node?s cluster). Strong
theoretical and empirical results have been achieved by spectral clustering, which uses the
singular value decomposition of the network followed by a clustering step on the eigenvectors
to determine the latent clusters.
However, singular value decomposition can be expensive, particularly for (a) large graphs,
when (b) many eigenvectors are desired. Unfortunately, both of these are common requirements. Instead, many fast heuristic methods are often used, and are empirically observed
to yield good results [8]. One particularly common and effective method is to predict links
to nodes that share many ?common neighbors? with the query node q, i.e., rank nodes
by |CN (q, i)|, where CN (q, i) = {u | q ? u ? i} (i ? j represents an edge between i
1
and j). The intuition is that q probably has many links with others in its cluster, and
hence probably also shares many common friends with others in its cluster. Counting common neighbors is particularly fast (it is a Join operation supported by all databases and
Map-Reduce systems). In this paper, we study the theoretical properties of the common
neighbors heuristic.
Our contributions are the following:
(a) We present, to our knowledge the first, theoretical analysis of the common neighbors for
the stochastic blockmodel.
(b) We demarcate two regimes, which we call semi-dense and semi-sparse, under which
common neighbors can be successfully used for both link prediction and clustering.
(c) In particular, in the semi-dense regime, the number of common neighbors between the
query node q and another node within its cluster is significantly higher than that with a
node outside its cluster. Hence, a simple threshold on the number of common neighbors
suffices for both link prediction and clustering.
(d) However, in the semi-sparse regime, there are too few common neighbors with any node,
and hence the heuristic does not work. However, we show that with a simple additional
?cleaning? step, we regain the theoretical properties shown for the semi-dense case.
(e) We empirically demonstrate the effectiveness of counting common neighbors followed by
the ?cleaning? post-process on a variety of simulated and real-world datasets.
2
Related Work
Link prediction has recently attracted a lot of attention, because of its relevance to important
practical problems like recommendation systems, predicting future connections in friendship
networks, better understanding of evolution of complex networks, study of missing or partial
information in networks, etc [9, 8]. Algorithms for link prediction fall into two main groups:
similarity-based, and model-based.
Similarity-based methods: These methods use similarity measures based on network
topology for link prediction. Some methods look at nodes two hops away from the query
node: counting common neighbors, the Jaccard index, the Adamic-Adar score [1] etc. More
complex methods include nodes farther away, such as the Katz score [7], and methods based
on random walks [16, 2]. These are often intuitive, easily implemented, and fast, but they
typically lack theoretical guarantees.
Model-based methods: The second approach estimates parametric models for predicting links. Many popular network models fall in the latent variable model category [12, 3].
These models assign n latent random variables Z := (Z1 , Z2 , . . . , Zn ) to n nodes in a network. These variables take values in a general space Z. The probability of linkage between
two nodes is specified via a symmetric map h : Z × Z → [0, 1]. These Z_i's can be i.i.d.
Uniform(0,1) [3], or positions in some d-dimensional latent space [12]. In [5] a mixture of
multivariate Gaussian distributions is used, each for a separate cluster. A Stochastic Blockmodel [6] is a special class of these models, where Z_i is a binary length-k vector encoding
membership of a node in a cluster. In a well-known special case (the planted partition
model), all nodes in the same cluster connect to each other with probability α, whereas
all pairs in different clusters connect with probability β. In fact, under broad parameter
regimes, the blockmodel approximation of networks has recently been shown to be analogous
to the use of histograms as non-parametric summaries of an unknown probability distribution [11]. Varying the number of bins or the bandwidth corresponds to varying the number
or size of communities. Thus blockmodels can be used to approximate more complex models
(under broad smoothness conditions) if the number of blocks are allowed to increase with
n.
Empirical results: As the models become more complex, they also become computationally demanding. It has been commonly observed that simple and easily computable measures
like common neighbors often have competitive performance with more complex methods.
This behavior has been empirically established across a variety of networks, starting from
co-authorship networks [8] to router-level internet connections, protein-protein interaction
networks and electrical power grid network [9].
Theoretical results: Spectral clustering has been shown to asymptotically recover cluster
memberships for variations of Stochastic Blockmodels [10, 4, 13]. However, apart from [15],
there is little understanding of why simple methods such as common neighbors perform so
well empirically.
Given their empirical success and computational tractability, the common neighbors heuristic is widely applied for large networks. Understanding the reasons for the accuracy of
common neighbors under the popular stochastic blockmodel setting is the goal of our work.
3 Proposed Work
Many link prediction methods ultimately make two assumptions: (a) each node belongs to
a latent "cluster", where nodes in the same cluster have similar behavior; and (b) each node
is very likely to connect to others in its cluster, so link prediction is equivalent to finding
other nodes in the cluster. These assumptions can be relaxed: instead of belonging to the
same cluster, nodes could have "topic distributions", with links being more likely between
pairs of nodes with similar topical interests. However, we will focus on the assumptions
stated above, since they are clean and the relaxations appear to be fundamentally similar.
Model. Specifically, consider a stochastic blockmodel where each node i belongs to an
unknown cluster c_i ∈ {C_1, . . . , C_K}. We assume that the number of clusters K is fixed as
the number of nodes n increases. We also assume that each cluster has nπ = n/K members
(so π := 1/K), though this can be relaxed easily. The probability P(i ∼ j) of a link between nodes i and j
(i ≠ j) depends only on the clusters of i and j: P(i ∼ j) = B_{c_i,c_j} := α·1{c_i = c_j} + β·1{c_i ≠ c_j}
for some α > β > 0; in other words, the probability of a link is α between nodes in the same
cluster, and β otherwise. By definition, P(i ∼ i) = 0. If the nodes were arranged so that all
nodes in a cluster are contiguous, then the corresponding adjacency matrix, when plotted, attains a
block-like structure, with the diagonal blocks (corresponding to links within a cluster) being
denser than off-diagonal blocks (since α > β).
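A minimal sampler for this model is enough to generate the block-structured adjacency matrices described above. The sketch below is ours; the function and variable names are illustrative:

```python
import numpy as np

def sample_planted_partition(n, K, alpha, beta, seed=0):
    rng = np.random.default_rng(seed)
    c = np.repeat(np.arange(K), n // K)                 # cluster labels c_i, pi = 1/K
    P = np.where(c[:, None] == c[None, :], alpha, beta) # P(i ~ j) = B_{c_i, c_j}
    U = rng.random((n, n))
    A = np.triu(U < P, k=1).astype(int)                 # independent Bernoulli upper triangle
    A = A + A.T                                         # undirected, zero diagonal
    return A, c

A, c = sample_planted_partition(n=200, K=4, alpha=0.2, beta=0.05)
```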
Under these assumptions, we ask the following two questions:
Problem 1 (Link Prediction and Recommendation). Given node i, how can we identify at
least a constant number of nodes from c_i?
Problem 2 (Local Cluster Detection). Given node i, how can we identify all nodes in c_i?
Problem 1 can be considered as the problem of finding good recommendations for a given
node i. Here, the goal is to find a few good nodes that i could connect to (e.g., recommending
a few possible friends on Facebook, or a few movies to watch next on Netflix). Since within-cluster links have higher probability than across-cluster links (α > β), predicting nodes from
c_i gives the optimal answer. Crucially, it is unnecessary to find all good nodes. As against
that, Problem 2 requires us to find everyone in the given node's cluster. This is the problem
of detecting the entire cluster corresponding to a given node. Note that Problem 2 is clearly
harder than Problem 1.
We next present a summary of our results and the underlying intuition before delving into
the details.
3.1 Intuition and Result Summary
Current approaches. Standard approaches to inference for the stochastic blockmodel
attempt to solve an even harder problem:
Problem 3 (Full Cluster Detection). How can we identify the latent clusters c_i for all i?
A popular solution is via spectral clustering, involving two steps: (a) computing the top-K
eigenvectors of the graph Laplacian, and (b) clustering the projections of each node on the
corresponding eigenspace via an algorithm like k-means [13]. A slight variation of this has
been shown to work as long as (α − β)/√α = Ω(log n/√n) and the average degree grows
faster than poly-logarithmic powers of n [10].
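For reference, a compact version of this two-step pipeline, assuming SciPy and scikit-learn are available; this is a common variant written by us, not the exact implementation analyzed in [10, 13]:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_cluster(A, K):
    # A is a dense symmetric 0/1 adjacency matrix
    d = A.sum(axis=1)
    L = np.diag(d) - A                          # unnormalized graph Laplacian
    # eigenvectors for the K smallest Laplacian eigenvalues
    _, V = eigh(L, subset_by_index=[0, K - 1])
    # cluster the rows (node embeddings) with k-means
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(V)
```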
However, (a) spectral clustering solves a harder problem than Problems 1 and 2, and (b)
eigen-decompositions can be expensive, particularly for very large graphs. Our claim is that
a simpler operation, counting common neighbors between nodes, can yield results that
are almost as good in a broad parameter regime.
Common neighbors. Given a node i, link prediction via common neighbors follows a
simple prescription: predict a link to node j such that i and j have the maximum number
|CN(i, j)| of shared friends CN(i, j) = {u | i ∼ u ∼ j}. The usefulness of common
neighbors has been observed in practice [8] and justified theoretically for the latent distance
model [15]. However, its properties under the stochastic blockmodel remained unknown.
Intuitively, we would expect a pair of nodes i and j from the same cluster to have many
common neighbors u from the same cluster, since both the links i ∼ u and u ∼ j occur with
probability α, whereas for c_i ≠ c_j, at least one of the edges i ∼ u and u ∼ j must have the
lower probability β:
P(u ∈ CN(i, j) | c_i = c_j) = α² P(c_u = c_i | c_i = c_j) + β² P(c_u ≠ c_i | c_i = c_j)
                            = πα² + (1 − π)β²
P(u ∈ CN(i, j) | c_i ≠ c_j) = αβ P(c_u = c_i or c_u = c_j | c_i ≠ c_j) + β² P(c_u ≠ c_i, c_u ≠ c_j | c_i ≠ c_j)
                            = 2παβ + (1 − 2π)β² = P(u ∈ CN(i, j) | c_i = c_j) − π(α − β)²
                            ≤ P(u ∈ CN(i, j) | c_i = c_j)
Thus the expected number of common neighbors E[|CN(i, j)|] is higher when c_i = c_j. If
we can show that the random variable |CN(i, j)| concentrates around its expectation, node
pairs with the most common neighbors would belong to the same cluster. Thus, common
neighbors would offer a good solution to Problem 1.
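A quick numeric check of this gap, with illustrative values π = 0.25, α = 0.2, β = 0.05 (values are ours, not from the paper):

```python
pi, alpha, beta = 0.25, 0.2, 0.05   # illustrative parameter values
p_same = pi * alpha**2 + (1 - pi) * beta**2
p_diff = 2 * pi * alpha * beta + (1 - 2 * pi) * beta**2
# the derived identity: the gap equals pi * (alpha - beta)^2
assert abs((p_same - p_diff) - pi * (alpha - beta)**2) < 1e-12
print(p_same, p_diff)  # 0.011875 vs 0.00625: same-cluster pairs expect more CNs
```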
We show conditions under which this is indeed the case. There are three key points regarding
our method: (a) handling dependencies between common neighbor counts, (b) defining the
graph density regime under which common neighbors is consistent, and (c) proposing a
variant of common neighbors which significantly broadens this region of consistency.
Dependence. CN(i, j) and CN(i, j′) are dependent; hence, distinguishing between
within-group and outside-group nodes can be complicated even if each |CN(i, j)| concentrates around its expectation. We handle this via a careful conditioning step.
Dense versus sparse graphs. In general, the parameters α and β can be functions of
n, and we can try to characterize parameter settings when common neighbors consistently
returns nodes from the same cluster as the input node. We show that when the graph is
sufficiently "dense" (average degree growing faster than √(n log n)), common neighbors is
powerful enough to answer Problem 2. Also, (α − β)/α can go to zero at a suitable rate.
On the other hand, the expected number of common neighbors between nodes tends to
zero for sparser graphs, irrespective of whether the nodes are in the same cluster or not.
Further, the standard deviation is of a higher order than the expectation, so there is no
concentration. In this case, counting common neighbors fails, even for Problem 1.
A variant with better consistency properties. However, we show that the addition
of an extra post-processing step (henceforth, the "cleaning" step) still enables common
neighbors to identify nodes from its own cluster, while reducing the number of off-cluster
nodes to zero with probability tending to one as n → ∞. This requires a stronger separation
condition between α and β. However, such "strong consistency" is only possible when
the average degree grows faster than (n log n)^{1/3}. Thus, the cleaning step extends the
consistency of common neighbors beyond the O(1/√n) range.
4 Main Results
We first split the edge set of the complete graph on n nodes into two sets: K1 and its
complement K2 (independent of the given graph G). We compute common neighbors on
G1 = G ∩ K1 and perform a "cleaning" process on G2 = G ∩ K2. The adjacency matrices
of G1 and G2 are denoted by A1 and A2. We will fix a reference node q, which belongs to
class C_1 without loss of generality (recall that there are K clusters C_1, . . . , C_K, each of size
nπ).
Let X_i (i ≠ q) denote the number of common neighbors between q and i. Algorithm 1
computes the set S = {i : X_i ≥ t_n} of nodes who have at least t_n common neighbors with
q on A1, whereas Algorithm 2 does a further degree thresholding on A2 to refine S into S1.
Algorithm 1 Common neighbors screening algorithm
1: procedure Scan(A1, q, t_n)
2:   For 1 ≤ i ≤ n, X_i ← A1²(q, i)
3:   X_q ← 0
4:   S ← {i : X_i ≥ t_n}
5:   return S
Algorithm 2 Post Selection Cleaning algorithm
1: procedure Clean(S, A2, q, s_n)
2:   S1 ← {i : Σ_{j∈S} A2(i, j) ≥ s_n}
3:   return S1
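A direct NumPy transcription of the two procedures (our sketch; dense matrices for clarity, with A1 and A2 the two edge-disjoint halves of the graph defined above):

```python
import numpy as np

def scan(A1, q, t_n):
    X = (A1 @ A1)[q].astype(float)      # X_i = A1^2(q, i), the common neighbor counts
    X[q] = 0
    return np.flatnonzero(X >= t_n)

def clean(S, A2, q, s_n):
    deg_into_S = A2[:, S].sum(axis=1)   # number of A2-edges from each node into S
    return np.flatnonzero(deg_into_S >= s_n)

# usage, with thresholds t_n, s_n chosen as in the theorems below:
# S1 = clean(scan(A1, q, t_n=5), A2, q, s_n=10)
```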
To analyze the algorithms, we must specify conditions on graph densities. Recall that α
and β represent within-cluster and across-cluster link probabilities. We assume that α/β
is constant while α → 0, β → 0; equivalently, assume that both α and β are some
constant times ρ, where ρ → 0.
The analysis of graphs has typically been divided into two regimes. The dense regime
consists of graphs with nρ = Θ(n), where the expected degree nρ is a fraction of n as n grows.
In the sparse regime, nρ = O(1), so degree is roughly constant. Our work explores a finer
gradation, which we call semi-dense and semi-sparse, defined next.
Definition 4.1 (Semi-dense graph). A sequence of graphs is called semi-dense if
nρ²/log n → ∞ as n → ∞.
Definition 4.2 (Semi-sparse graph). A sequence of graphs is called semi-sparse if nρ² → 0
but n^{2/3}ρ/log n → ∞ as n → ∞.
Our first result is that common neighbors is enough to solve not only the link-prediction
problem (Problem 1) but also the local clustering problem (Problem 2) in the semi-dense
case. This is because even though both nodes within and outside the query node's cluster
have a growing number of common neighbors with q, there is a clear distinction in the
expected number of common neighbors between the two classes. Also, since the standard
deviation is of a smaller order than the expectation, the random variables concentrate.
Thus, we can pick a threshold tn such that SCAN(A1 , q, tn ) yields just the nodes in the
same cluster as q with high probability. Note that the cleaning step (Algorithm 2) is not
necessary in this case.
Theorem 4.1 (Algorithm 1 solves Problem 2 in semi-dense graphs). Let t_n =
n(π(α + β)²/2 + (1 − 2π)β²). Let S be the set of nodes returned by Scan(A1, q, t_n). Let
n_w and n_o denote the number of nodes in S ∩ C_1 and S \ C_1 respectively. If the graph is
semi-dense, and if (α − β)/α ≥ 2 (log n/(nα²))^{1/4}, then P(n_w = nπ) → 1 and P(n_o = 0) → 1.
Proof Sketch. We only sketch the proof here, deferring details to the supplementary
material. Let d_{qa} = Σ_{i∈C_a} A1(q, i) be the number of links from the query node q to nodes in
cluster C_a. Let d_q = {d_{q1}, . . . , d_{qK}} and d = Σ_a d_{qa}. We first show that
P(d_q ∈ Good) := P(d_{q1} ≥ nπα(1 − δ_n) and d_{qa} ≤ nπβ(1 + δ_n) ∀a ≠ 1) ≥ 1 − K/n²,   (1)
δ_n := √((6 log n)/(nπβ)) → 0,   (2)
where the convergence in (2) holds in the semi-dense regime, since nβ²/log n → ∞.
Conditioned on d_q, X_i is the sum of K independent Binomial random variables
representing the number of common neighbors between q and i via nodes in each of the K
clusters: E[X_i | d_q, i ∈ C_a] = d_{qa}α + (d − d_{qa})β. We have:
μ_1 := E[X_i | d_q ∈ Good, i ∈ C_1] ≥ n(πα² + (1 − π)β²)(1 − δ_n) =: ℓ_n(1 − δ_n)
μ_a := E[X_i | d_q ∈ Good, i ∈ C_a, a ≠ 1] ≤ n(2παβ + (1 − 2π)β²)(1 + δ_n) =: u_n(1 + δ_n)
Note that t_n = (ℓ_n + u_n)/2, u_n ≤ t_n ≤ ℓ_n, and ℓ_n − u_n = nπ(α − β)² ≥ 4π √(nα² log n) → ∞,
where we applied the condition on (α − β)/α noted in the theorem statement. We show:
P(X_i ≤ t_n | d_q ∈ Good, i ∈ C_1) ≤ n^{−4/3+o(1)}
P(X_i ≥ t_n | d_q ∈ Good, i ∈ C_a, a ≠ 1) ≤ n^{−4/3+o(1)}
Conditioned on d_q, both n_w and n_o are sums of conditionally independent and identically
distributed Bernoullis. Hence
P(n_w = nπ) ≥ P(d_q ∈ Good) P(n_w = nπ | d_q ∈ Good) ≥ (1 − K/n²)(1 − nπ · n^{−4/3}) → 1
P(n_o = 0) ≥ P(d_q ∈ Good) P(n_o = 0 | d_q ∈ Good) ≥ 1 − Θ(n^{−1/3}) → 1
There are two major differences between the semi-sparse and semi-dense cases. First, in the
semi-sparse case, both expectations μ_1 and μ_a are of the order O(nρ²), which tends to zero.
Second, standard deviations of the number of common neighbors are of a larger order than
expectations. Together, this means that the number of common neighbors to within-cluster
and outside-cluster nodes can no longer be separated; hence, Algorithm 1 by itself cannot
work. However, after cleaning, the entire cluster of the query node q can still be recovered.
Theorem 4.2 (Algorithm 1 followed by Algorithm 2 solves Problem 2 in semi-sparse
graphs). Let t_n = 1 and s_n = n²(πα + (1 − π)β)²(α + β)/2. Let S = Scan(A1, q, t_n)
and S1 = Clean(S, A2, q, s_n). Let n_w^(c) and n_o^(c) denote the number of nodes in
S1 ∩ C_1 and S1 \ C_1 respectively. If the graph is semi-sparse, and πα ≥ 3(1 − π)β, then
P(n_w^(c) = nπ) → 1 and P(n_o^(c) = 0) → 1.
Proof Sketch. We only sketch the proof here, with details being deferred to the supplementary material. The degree bounds of Eq. 1 and the equations for E[X_i | d_q ∈ Good] hold
even in the semi-sparse case. We can also bound the variances of Xi (which are sums of
conditionally independent Bernoullis):
var[X_i | d_q ∈ Good, i ∈ C_1] ≤ E[X_i | d_q ∈ Good, i ∈ C_1] = μ_1
Since the expected number of common neighbors vanishes and the standard deviation is an
order larger than the expectation, there is no hope for concentration; however, there are
slight differences in the probability of having at least one common neighbor.
First, by an application of the Paley-Zygmund inequality, we find:
p_1 := P(X_i ≥ 1 | d_q ∈ Good, i ∈ C_1)
    ≥ E[X_i | d_q ∈ Good, i ∈ C_1]² / (var(X_i | d_q ∈ Good, i ∈ C_1) + E[X_i | d_q ∈ Good, i ∈ C_1]²)
    ≥ μ_1² / (μ_1 + μ_1²) ≥ ℓ_n(1 − δ_n)(1 − μ_1),   since μ_1 → 0
For a > 1, Markov's inequality gives:
p_a := P(X_i ≥ 1 | d_q ∈ Good, i ∈ C_a, a ≠ 1) ≤ E[X_i | d_q ∈ Good, i ∈ C_a, a ≠ 1] = μ_a
Even though p_a → 0, nπp_a = Θ(n²β²) → ∞, so we can use concentration inequalities like
the Chernoff bound again to bound n_w and n_o:
P(n_w ≥ nπp_1(1 − √(6 log n/(nπp_1)))) ≥ 1 − n^{−4/3}
P(n_o ≤ n(1 − π)p_a(1 + √(6 log n/(n(1 − π)p_a)))) ≥ 1 − n^{−4/3}
Unlike the denser regime, nw and no can be of the same order here. Hence, the candidate
set S returned by thresholding the common neighbors has a non-vanishing fraction of nodes
from outside q's community. However, this fraction is relatively small, which is what we
would exploit in the cleaning step.
Let λ_w and λ_o denote the expected number of edges in A2 from a within-cluster and an
outside-cluster node to S, respectively. The separation condition in the theorem statement
gives λ_w − λ_o ≥ 4√(λ_w log n). Setting the degree threshold s_n = (λ_w + λ_o)/2, we bound
the probability of mistakes in the cleaning step:
P(∃i ∈ C_1 s.t. Σ_{j∈S} A2(i, j) ≤ s_n | d_q ∈ Good) ≤ n^{−1/3+o(1)}
P(∃i ∉ C_1 s.t. Σ_{j∈S} A2(i, j) ≥ s_n | d_q ∈ Good) ≤ n^{−1/3+o(1)}
Removing the conditioning on d_q ∈ Good (as in Theorem 4.1) yields the desired result.
5 Experiments
We present our experimental results in two parts. First, we use simulations to support our
theoretical claims. Next we present link prediction accuracies on real world collaborative
networks to show that common neighbors indeed perform close to gold standard algorithms
like spectral clustering and the Katz score.
Implementation details: Recall that our algorithms are based on thresholding. When
there is a large gap between common neighbors between node q and nodes in its cluster (e.g.,
in the semi-dense regime), this is equivalent to using the k-means algorithm with k = 2 to
find S in Algorithm 1. The same holds for finding S1 in Algorithm 2. When the number
of nodes with more than two common neighbors is less than ten, we define the set S by
finding all neighbors with at least one common neighbor (as in the semi-sparse regime).
On the other hand, since the cleaning step works only when S is sufficiently large (so that
degrees concentrate), we do not perform any cleaning when |S| < 30. While we used the
split sample graph A2 in the cleaning step for ease of analysis, we did the cleaning using
the same network in the experiments.
Experimental setup for simulations: We use a stochastic blockmodel of 2000 nodes split
into 4 equal-sized clusters. For each value of (α, β) we pick 50 query nodes at random, and
calculate the precision and recall of the result against nodes from the query node's cluster
(for any subset S and true cluster C, precision = |S ∩ C|/|S| and recall = |S ∩ C|/|C|). We
report mean precision and recall over 50 randomly generated graph instances.
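The metric computation itself is a two-liner; the sketch below (our code) takes the returned set S and the true cluster C as index collections:

```python
def precision_recall(S, C):
    # precision = |S ∩ C| / |S|, recall = |S ∩ C| / |C|
    S, C = set(S), set(C)
    hits = len(S & C)
    return hits / max(len(S), 1), hits / len(C)
```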
Accuracy on simulated data: Figure 1 shows the precision and recall as degree grows,
with the parameters (α, β) satisfying the condition πα ≥ 3(1 − π)β of Thm. 4.2. We see
that cleaning helps both precision and recall, particularly in the medium-degree range (the
semi-sparse regime). As a reference, we also plot the precision of spectral clustering, when
it was given the correct number of clusters (K = 4). Above average degree of 10, spectral
clustering gives perfect precision, whereas common neighbors can identify a large fraction of
the true cluster once average degree is above 25. On the other hand, for average degree less
than seven, spectral clustering performs poorly, whereas the precision of common neighbors
is remarkably higher. Precision is relatively higher than recall for a broad degree regime,
and this explains why common neighbors are a popular choice for link prediction. On a side
Figure 1 (panels A and B): Recall and precision versus average degree. When degree is very small, none of the
methods work well. In the medium-degree range (semi-sparse regime), we see that common
neighbors gets increasingly better precision and recall, and cleaning helps. With high enough
degrees (semi-dense regime), just common neighbors is sufficient and gets excellent accuracy.
Table 1: AUC scores for co-authorship networks
Dataset    n      Mean degree   Time-steps   AUC: CN   CN-clean   SPEC   Katz   Random
HepTH      5969   4             6            .70       .74        .82    .82    .49
Citeseer   4520   5             11           .88       .89        .89    .95    .52
NIPS       1222   3.95          9            .63       .69        .68    .78    .47
note, it is not surprising that in a very sparse graph common neighbors cannot identify the
whole cluster, since not everyone can be reached in two hops.
Accuracy on real-world data: We used publicly available co-authorship datasets over
time where nodes represent authors and an edge represents a collaboration between two
authors. In particular, we used subgraphs of the High Energy Physics (HepTH) co-authorship dataset (6 timesteps), the NIPS dataset (9 timesteps) and the Citeseer dataset
(11 timesteps). We obtain the training graph by merging the first T − 2 networks, use the
(T − 1)-th step for cross-validation and use the last timestep as the test graph. The number of
nodes and average degrees are reported in Table 1. We merged 1-2 years of papers to create
one timestep (so that the median degree of the test graph is at least 1).
We compare our algorithm (CN and CN-clean) with the Katz score which is used widely
in link prediction [8] and spectral clustering of the network. Spectral clustering is carried
out on the giant component of the network. Furthermore, we cross-validate the number of
clusters using the held out graph. Our setup is very similar to link prediction experiments
in related literature [14].
Since these datasets are unlabeled, we cannot calculate precision or recall as before. Instead,
for any score or affinity measure, we propose to perform link prediction experiments as
follows. For a randomly picked node we calculate the score from the node to everyone else.
We compute the AUC score of this vector against the edges in the test graph. We report
the average AUC for 100 randomly picked nodes. Table 1 shows that even in sparse regimes
common neighbors performs similar to benchmark algorithms.
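The evaluation protocol above takes only a few lines; the sketch below is ours and assumes scikit-learn's roc_auc_score is available and that the test row contains both linked and unlinked nodes:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def link_prediction_auc(scores, A_test, q):
    # scores[i] is the affinity from node q to node i (e.g. common neighbors);
    # labels come from the edges of the held-out test graph A_test
    mask = np.arange(len(scores)) != q
    return roc_auc_score(A_test[q, mask], scores[mask])
```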
6 Conclusions
Counting common neighbors is a particularly useful heuristic: it is fast and also works
well empirically. We prove the effectiveness of common neighbors for link prediction as
well as local clustering around a query node, under the stochastic blockmodel setting. In
particular, we show the existence of a semi-dense regime where common neighbors yields
the right cluster w.h.p., and a semi-sparse regime where an additional "cleaning" step is
required. Experiments with simulated as well as real-world datasets show the efficacy of
our approach, including the importance of the cleaning step.
References
[1] L. Adamic and E. Adar. Friends and neighbors on the web. Social Networks, 25:211-230, 2003.
[2] L. Backstrom and J. Leskovec. Supervised random walks: Predicting and recommending links in social networks. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pages 635-644, New York, NY, USA, 2011. ACM.
[3] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences of the United States of America, 106(50):21068-21073, 2009.
[4] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. Journal of Machine Learning Research - Proceedings Track, 23:35.1-35.23, 2012.
[5] M. S. Handcock, A. E. Raftery, and J. M. Tantrum. Model-based clustering for social networks. Journal of the Royal Statistical Society: Series A (Statistics in Society), 170(2):301-354, 2007.
[6] P. W. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[7] L. Katz. A new status index derived from sociometric analysis. In Psychometrika, volume 18, pages 39-43, 1953.
[8] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In Conference on Information and Knowledge Management. ACM, 2003.
[9] L. Lü and T. Zhou. Link prediction in complex networks: A survey. Physica A, 390(6):1150-1170, 2011.
[10] F. McSherry. Spectral partitioning of random graphs. In FOCS, pages 529-537, 2001.
[11] S. C. Olhede and P. J. Wolfe. Network histograms and universality of blockmodel approximation. Proceedings of the National Academy of Sciences of the United States of America, 111(41):14722-14727, 2014.
[12] A. E. Raftery, M. S. Handcock, and P. D. Hoff. Latent space approaches to social network analysis. Journal of the American Statistical Association, 15:460, 2002.
[13] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Annals of Statistics, 39:1878-1915, 2011.
[14] P. Sarkar and P. J. Bickel. Role of normalization in spectral clustering for stochastic blockmodels. To appear in the Annals of Statistics, 2014.
[15] P. Sarkar, D. Chakrabarti, and A. Moore. Theoretical justification of popular link prediction heuristics. In Conference on Learning Theory. ACM, 2010.
[16] P. Sarkar and A. Moore. A tractable approach to finding closest truncated-commute-time neighbors in large graphs. In Proc. UAI, 2007.
5,313 | 581 | Burst Synchronization Without
Frequency-Locking in a Completely Solvable
Network Model
Heinz Schuster
Institut für theoretische Physik
Universität Kiel
Olshausenstraße 40
2300 Kiel 1, Germany
Christof Koch
Computation and Neural System Program
California Institute of Technology
Pasadena, California 91125, USA
Abstract
The dynamic behavior of a network model consisting of all-to-all excitatory
coupled binary neurons with global inhibition is studied analytically and
numerically. We prove that for random input signals, the output of the
network consists of synchronized bursts with apparently random intermissions of noisy activity. Our results suggest that synchronous bursts can be
generated by a simple neuronal architecture which amplifies incoming coincident signals. This synchronization process is accompanied by dampened
oscillations which, by themselves, however, do not play any constructive
role in this and can therefore be considered to be an epiphenomenon.
1 INTRODUCTION
Recently synchronization phenomena in neural networks have attracted considerable
attention. Gray et al. (1989, 1990) as well as Eckhorn et al. (1988) provided
electrophysiological evidence that neurons in the visual cortex of cats discharge in a
semi-synchronous, oscillatory manner in the 40 Hz range and that the firing activity
of neurons up to 10 mm away is phase-locked with a mean phase-shift of less than
3 msec. It has been proposed that this phase synchronization can solve the binding
problem for figure-ground segregation (von der Malsburg and Schneider, 1986) and
underly visual attention and awareness (Crick and Koch, 1990).
A number of theoretical explanations based on coupled (relaxation) oscillator models have been proposed for burst synchronization (Sompolinsky et al., 1990). The
crucial issue of phase synchronization has also recently been addressed by Bush and
Douglas (1991), who simulated the dynamics of a network consisting of bursty, layer
V pyramidal cells coupled to a common pool of basket cells inhibiting all pyramidal
cells. 1 Bush and Douglas found that excitatory interactions between the pyramidal
cells increases the total neural activity as expected and that global inhibition leads
to synchronized bursts with random intermissions. These population bursts appear
to occur in a random manner in their model. The basic mechanism for the observed
burst synchronization is hidden in the numerous anatomical and biophysical details
of their model. These, and the related observation that to date no strong oscillations have been recorded in the neuronal activity in visual cortex of awake monkeys,
prompted us to investigate how phase synchronization can occur in the absence of
frequency locking.
2 A COINCIDENCE NETWORK
We consider n excitatory coupled binary McCulloch-Pitts (1943) neurons whose
output x_i^{t+1} ∈ {0, 1} at time t + 1 is given by:
x_i^{t+1} = σ[(w/n) Σ_j x_j^t + ξ_i^t − θ]   (1)
Here w/n is the normalized excitatory all-to-all synaptic coupling, ξ_i^t represents
the external binary input and σ[z] is the Heaviside step function, such that σ[z] = 1
for z > 0 and 0 elsewhere. Each neuron has the same dynamic threshold θ > 0.
Next we introduce the fraction m^t of neurons which fire simultaneously at time t:
m^t = (1/n) Σ_i x_i^t   (2)
In general, 0 ≤ m^t ≤ 1; only if every neuron is active at time t do we have m^t = 1.
By summing eq. (1) we obtain the following equation of motion for our simple
network:
m^{t+1} = (1/n) Σ_i σ[w m^t + ξ_i^t − θ]   (3)
The behavior of this (n+1)-state automaton is fully described by the phase-state
diagram of Figure 1. If θ > 1 and θ/w > 1, the output of the network m^t will
vary with the input until at some time t', m^{t'} = 0. Since the threshold θ is always
larger than the input, the network will remain in this state for all subsequent times.
If θ < 1 and θ/w < 1, the network will drift until it comes to the state m^{t'} = 1.
Since subsequent wm^t is at all times larger than the threshold, the network remains
latched at m^t = 1. If θ > 1, but θ/w < 1, the network can latch in either the
m^t = 0 or the m^t = 1 state and will remain there indefinitely. Lastly, if θ < 1, but
θ/w > 1, the threshold is by itself not large enough to keep the network latched
into the m^t = 1 state. Defining the normalized input activity as
s^t = (1/n) Σ_i ξ_i^t   (4)
with 0 ≤ s^t ≤ 1, we see that in this part of phase space m^{t+1} = s^t, and the output
activity faithfully reflects the input activity at the previous time step.
Figure 1: Phase diagram for the network described by eq. (3). Different regions
correspond to different stationary output states m^t in the long time limit.
1 This model bears similarities to Wilson and Bower's (1992) model describing the origin
of phase-locking in olfactory cortex.
Let us introduce an adaptive time-dependent threshold, θ^t. We assume that θ^t
remains at its value θ < 1 as long as the total activity remains less than 1. If,
however, m^t = 1, we increase θ^t to a value larger than w + 1. This has the effect of
resetting the activity of the entire network to 0 in the next time step, i.e., m^{t+1} =
(1/n) Σ_i σ(w + ξ_i^t − (w + 1 + ε)) = 0. The threshold will then automatically reset
itself to its old value:
m^{t+1} = (1/n) Σ_i σ[w m^t + ξ_i^t − θ(m^t)]   (5)
with
θ(m^t) = θ           for m^t < 1
θ(m^t) = w + 1 + ε   for m^t = 1
Therefore, we are operating in the topmost left part of Fig. 1 but preventing the
network from latching to m^t = 1 by resetting it. Such a dynamic threshold bears
some similarities to the models of Horn and Usher (1990) and others, but is much
simpler. Note that θ(m^t) exactly mimics the effect of a common inhibitory neuron
which is only excited if all neurons fire simultaneously.
Our network now acts as a coincidence detector, such that all neurons will "fire"
at time t + 2, i.e., m^{t+2} = 1, if at least k neurons receive at time t a "1" as input.
k is the smallest integer with k > θn/w. The threshold θ(m^t) is then transiently
increased and the network is reset and the game begins anew. In other words, the
network detects coincidences and signals this by a synchronized burst of neuronal
activity followed by a brief respite of activity (Figure 2).
Figure 2: Time dependence of the fraction m^t of output neurons which fire simultaneously, compared to the corresponding fraction of input signals s^t, for n = 20
and θ/w = 0.225. The input variables ξ_i^t are independently distributed according to
P(ξ_i^t) = p δ(ξ_i^t − 1) + (1 − p) δ(ξ_i^t), with p = 0.1. If more than five input signals with
ξ_i^t = 1 coincide, the entire population will fire in synchrony two time steps later,
i.e., m^{t+2} = 1. Note the "random" appearance of the interburst intervals.
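A minimal simulation of eq. (5) (our sketch; the Heaviside convention σ[z] = 1 for z > 0 is taken from the text, w = 1 is an arbitrary choice) reproduces the qualitative behavior of Figure 2:

```python
import numpy as np

def simulate(n=20, w=1.0, theta_over_w=0.225, p=0.1, T=100, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta_over_w * w
    s = rng.random((T, n)) < p          # binary inputs xi_i^t with P(xi = 1) = p
    m = np.empty(T)
    m[0] = 0.0
    for t in range(T - 1):
        if m[t] == 1.0:                 # threshold jumps above w + 1: network resets
            m[t + 1] = 0.0
        else:                           # x_i^{t+1} = sigma[w m^t + xi_i^t - theta];
            m[t + 1] = np.mean(w * m[t] + s[t] > theta)
    return m                            # a burst (m = 1) follows whenever s^t >= theta/w

m = simulate()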
The time dependence of m^t given by eq. (5) can be written as:
m^{t+1} = s^t   for 0 ≤ m^t < θ/w
m^{t+1} = 1     for θ/w ≤ m^t < 1        (6)
m^{t+1} = 0     for m^t = 1
By introducing functions A(m), B(m), C(m) which take on the value 1 in the
intervals specified for m = m^t in eq. (6), respectively, and zero elsewhere, m^{t+1} can
be written as:
m^{t+1} = s^t A(m^t) + 1 · B(m^t) + 0 · C(m^t)   (7)
This equation can be iterated, yielding an explicit expression for m^t as a function
of the external inputs s^{t−1}, . . . , s^0 and the initial value m^0:
m^t = (s^{t−1}, 1, 0) M(s^{t−2}) · · · M(s^0) (A(m^0), B(m^0), C(m^0))^T   (8)
with the matrix
M(s) = ( A(s)  0  1 )
       ( B(s)  0  0 )
       ( C(s)  1  0 )
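A small check (our code) that the closed form (8) agrees with direct iteration of eq. (6); θ/w = 0.3 and the random input sequence are arbitrary test choices:

```python
import numpy as np

tw = 0.3                                       # theta / w
A = lambda m: float(0.0 <= m < tw)
B = lambda m: float(tw <= m < 1.0)
C = lambda m: float(m == 1.0)
M = lambda s: np.array([[A(s), 0, 1],
                        [B(s), 0, 0],
                        [C(s), 1, 0]])

rng = np.random.default_rng(1)
s = rng.random(10)
m = 0.5                                        # initial value m^0
v = np.array([A(m), B(m), C(m)])               # occupation vector of m^0
for t in range(1, len(s) + 1):
    # direct iteration of eq. (6)
    m = s[t - 1] if m < tw else (1.0 if m < 1.0 else 0.0)
    # closed form (8): m^t = (s^{t-1}, 1, 0) M(s^{t-2}) ... M(s^0) v
    P = np.eye(3)
    for k in range(t - 2, -1, -1):
        P = P @ M(s[k])
    closed = np.array([s[t - 1], 1.0, 0.0]) @ P @ v
    assert np.isclose(m, closed)
```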
Eq. (8) shows that the dynamics of the network can be solved explicitly, by iteratively applying M, t - 1 number of times to the initial network configuration.
3
DISTRIBUTION OF BURSTS AND TIME
CORRELATIONS
The synchronous activity at time t depends on the specific realization of the input
signals at different times (eq. 8). In order to get rid of this ambiguity we resort to
averaged quantities where averages are understood over the distribution P{ st} of
inputs Sf
~ L~=l
A very useful averaged quantity is the probability pt(m),
describing the fraction m of simultaneously firing neurons at time t. pt(m) is related
to the probability distribution P{ st} via:
=
e:.
pt(m) = (6[m - mt{ st-1, ... sO}])
(9)
where ( ... ) denotes the average with respect to P{st} and mt{st-l, .. . sO} is given
by eq. (8). If the input signals are un correlated in time, mt+l depends according
to eq. (7) only on mt, and the time evolution of pt(m) can be described by the
Chapman-Kolmogorov equation. We then find:
e:
pt(m)
= pOO(m) + [pO(m) -
where
pOO(m)
1
pOO(m)]. /(t)
~
= 1 + 217 [P(m) + 176(m -
1)
+ 17 6(m)]
(10)
(11)
is the limiting distribution which evolves from the initial distribution pO( m) for large
times, because the factor /(t) = 17~ cos(Ot) , where 0 = 7r - arctan[J417 -17 2 /17]'
decays exponentially with time and 17 = f01 P(s)B(s)ds = f(},w P(s)ds. Notice that
o < 17 < 1 holds (for more details, see Koch and Schuster, 1992).
The limiting equilibrium distribution p^∞(m) evolves from the initial distribution
p^0(m) in an oscillatory fashion, with the building up of two delta-functions at m = 1
and m = 0 at the expense of p^0(m). This signals the emergence of synchronous
bursts, i.e., m^t = 1, which are always followed at the next time-step by zero activity,
i.e., m^{t+1} = 0 (see also Fig. 2). The equilibrium value for the mean fraction of
synchronized neurons is
m̄_∞ = ∫₀¹ dm p^∞(m) m = (⟨s⟩ + η)/(1 + 2η)   (12)
which is larger than the initial value ⟨s⟩ = ∫₀¹ ds P(s) s for ⟨s⟩ < 1/2, indicating an
increase in synchronized bursting activity.
It is interesting to ask what type of time correlations will develop in the output
of our network if it is stimulated with uncorrelated noise ξ_i^t. The autocovariance
function is
C(τ) = lim_{t→∞} [⟨m^{t+τ} m^t⟩ − ⟨m^{t+τ}⟩⟨m^t⟩]   (13)
and can be computed directly since m^t and p^∞(m) are known explicitly. We find
C(τ) = δ_{τ,0} C_0 + (1 − δ_{τ,0}) C_1 η^{|τ|/2} cos(Ωτ + φ)   (14)
Figure 3: Time dependence of the auto-covariance function C(τ) for two different
values of η = ∫_{θ/w}^1 ds P(s). The top panel corresponds to η = 0.8 and a period
T = 3.09, while the bottom correlation function is for η = 0.2 with an associated
T = 3.50. Note the different time-scales.
with δ_{τ,0} the Kronecker symbol (δ_{τ,0} = 1 for τ = 0 and 0 else). Figure 3 shows
that C(τ) consists of two parts: a delta peak at τ = 0, which reflects random
uncorrelated bursting, and an oscillatory decaying part, which indicates correlations
in the output. The period of the oscillations, T = 2π/Ω, varies monotonically
between 3 < T < 4 as θ/w moves from zero to one. Since η is given by ∫_{θ/w}^1 P(s) ds,
we see that the strength of these oscillations increases as the excitatory coupling
w increases. The emergence of periodic correlations can be understood in the limit
θ/w → 0, where the period T becomes three (and η = ∫₀¹ P(s) ds = 1), because
according to eq. (6), m^t = 0 is followed by m^{t+1} = s^t, which leads for θ/w → 0
always to m^{t+2} = 1 followed by m^{t+3} = 0. In other words, the temporal dynamics
of m^t has the form 0 s^1 1 0 s^4 1 0 s^7 1 0 s^10 . . . . In the opposite case of θ/w → 1, η
converges to 0 and the autocovariance function C(τ) essentially only contains the
peak at τ = 0. Thus, the output of the network ranges from completely uncorrelated
noise for θ/w ≈ 1 to correlated periodic bursts for θ/w → 0. The power spectrum of
the system is a broad Lorentzian centered at the oscillation frequency, superimposed
onto a constant background corresponding to uncorrelated neural activity.
It is important to discuss in this context the effect of the size n of the network. If the
input variables ξ_i^t are distributed independently in time and space with probabilities
P_i(ξ_i^t), then the distribution P(s) has a width which decreases as 1/√n as n → ∞.
Therefore, in a large system η = ∫_{θ/w}^1 P(s) ds is either 0 if θ/w > ⟨s⟩ or 1 if θ/w < ⟨s⟩,
where ⟨s⟩ is the mean value of s, which coincides for n → ∞ with the maximum of
P(s). If η = 0 the correlation function is a constant according to eq. (14), while the
system will exhibit undamped oscillations with period 3 for η = 1. Therefore, the
irregularity of the burst intervals, as shown, for instance, in Fig. 2, is for independent
ξ_i^t a finite size effect. Such synchronized dephasing due to finite size has been
reported by Sompolinsky et al. (1989).
=
e:
e: ,
However, for biologically realistic correlated inputs
the width of />(8) can remain
finite for n ~ 1. For example, if the inputs
e~ can be grouped into q
correlated sets
e~
e~,
with finite q, then the width of />(8)
scales like 1/vq. Our model, which now effectively corresponds to a situation with
a finite number q of inputs, leads in this case to irregular bursts which mirror and
amplify the correlations present in the input signals, with an oscillatory component
superimposed due to the dynamical threshold.
eI ... eL ... ... ,e: ... e:,
4
er, ... ,
CONCLUSIONS AND DISCUSSION
We here suggest a mechanism for burst synchronization which is based on the fact
that excitatory coupled neurons fire in synchrony whenever a sufficient number of input signals coincide. In our model, common inhibition shuts down the activity after
each burst, making the whole process repeatable, without entraining any signals. It
is rather satisfactory to us that our simple model shows qualitatively similar dynamic
behavior to the much more detailed biophysical simulations of Bush and Douglas
(1991). In both models, all-to-all excitatory coupling leads, together with common
inhibition, to burst synchronization without frequency locking. In our analysis we
updated all neurons in parallel. The same model has been investigated numerically
for serial (asynchronous) updating, leading to qualitatively similar results.
The output of our network develops oscillatory correlations whose range and amplitude increases as the excitatory coupling is strengthened. However, these oscillations do not depend on the presence of any neuronal oscillators, as in our earlier
models (e.g., Schuster and Wagner, 1990; Niebur et al.,1991). The period of the
oscillations reflects essentially the delay between the inhibitory response and the
excitatory stimulus and only varies little with the amplitude of the excitatory coupling and the threshold. The crucial role of inhibitory interneurons in controlling
the 40 Hz neuronal oscillations has been emphasized by Wilson and Bower (1992)
in their simulations of olfactory and visual cortex. Our model shows complete synchronization, in the sense that all neurons fire at the same time. This suggests
that the occurrence of tightly synchronized firing activity across neurons is more
important for feature linking and binding than the locking of oscillatory frequencies.
Since the specific statistics of the input noise is, via coincidence detection, mirrored
in the burst statistics, we speculate that our network, acting as an amplifier for
the input noise, can play an important role in any mechanism for feature linking
that exploits common noise correlations of different input signals.
Acknowledgements
We thank R. Douglas for stimulating discussions and for inspiring us to think about
this problem. Our collaboration was supported by the Stiftung Volkswagenwerk.
The research of C.K. is supported by the National Science Foundation, the James
McDonnell Foundation, and the Air Force Office of Scientific Research.
References
Bush, P.C. and Douglas, R.J. "Synchronization of bursting action potential discharge in a model network of neocortical neurons." Neural Computation 3: 19-30, 1991.
Crick, F. and Koch, C. "Towards a neurobiological theory of consciousness." Seminars Neurosci. 2: 263-275, 1990.
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M. and Reitboeck, H.J. "Coherent oscillations: a mechanism of feature linking in the visual cortex?" Biol. Cybern. 60: 121-130, 1988.
Gray, C.M., Engel, A.K., König, P. and Singer, W. "Stimulus-dependent neuronal oscillations in cat visual cortex: Receptive field properties and feature dependence." Eur. J. Neurosci. 2: 607-619, 1990.
Gray, C.M., König, P., Engel, A.K. and Singer, W. "Oscillatory response in cat visual cortex exhibits inter-columnar synchronization which reflects global stimulus attributes." Nature 338: 334-337, 1989.
Horn, D. and Usher, M. "Excitatory-inhibitory networks with dynamical thresholds." Int. J. of Neural Systems 1: 249-257, 1990.
Koch, C. and Schuster, H. "A simple network showing burst synchronization without frequency locking." Neural Computation, in press.
McCulloch, W.S. and Pitts, W.A. "A logical calculus of the ideas immanent in neural nets." Bull. Math. Biophys. 5: 115-137, 1943.
Niebur, E., Schuster, H.G., Kammen, D.M. and Koch, C. "Oscillator-phase coupling for different two-dimensional network connectivities." Phys. Rev. A, in press.
Schuster, H.G. and Wagner, P. "A model for neuronal oscillations in the visual cortex: I. Mean-field theory and the derivation of the phase equations." Biol. Cybern. 64: 77-82, 1990.
Sompolinsky, H., Golomb, D. and Kleinfeld, D. "Global processing of visual stimuli in a neural network of coupled oscillators." Proc. Natl. Acad. Sci. USA 87: 7200-7204, 1989.
von der Malsburg, C. and Schneider, W. "A neural cocktail-party processor." Biol. Cybern. 54: 29-40, 1986.
Wilson, M.A. and Bower, J.M. "Cortical oscillations and temporal interactions in a computer simulation of piriform cortex." J. Neurophysiol., in press.
5,314 | 5,810 | Inference for determinantal point processes
without spectral knowledge
Rémi Bardenet*
CNRS & CRIStAL
UMR 9189, Univ. Lille, France
[email protected]
Michalis K. Titsias*
Department of Informatics
Athens Univ. of Economics and Business, Greece
[email protected]
* Both authors contributed equally to this work.
Abstract
Determinantal point processes (DPPs) are point process models that naturally encode diversity between the points of a given realization, through a
positive definite kernel K. DPPs possess desirable properties, such as exact sampling or analyticity of the moments, but learning the parameters of
kernel K through likelihood-based inference is not straightforward. First,
the kernel that appears in the likelihood is not K, but another kernel L
related to K through an often intractable spectral decomposition. This
issue is typically bypassed in machine learning by directly parametrizing
the kernel L, at the price of some interpretability of the model parameters.
We follow this approach here. Second, the likelihood has an intractable
normalizing constant, which takes the form of a large determinant in the
case of a DPP over a finite set of objects, and the form of a Fredholm
determinant in the case of a DPP over a continuous domain. Our main
contribution is to derive bounds on the likelihood of a DPP, both for finite
and continuous domains. Unlike previous work, our bounds are cheap to
evaluate since they do not rely on approximating the spectrum of a large
matrix or an operator. Through usual arguments, these bounds thus yield
cheap variational inference and moderately expensive exact Markov chain
Monte Carlo inference methods for DPPs.
1 Introduction
Determinantal point processes (DPPs) are point processes [1] that encode repulsiveness using
algebraic arguments. They first appeared in [2], and have since then received much attention,
as they arise in many fields, e.g. random matrix theory, combinatorics, quantum physics.
We refer the reader to [3, 4, 5] for detailed tutorial reviews, respectively aimed at audiences of
machine learners, statisticians, and probabilists. More recently, DPPs have been considered
as a modelling tool, see e.g. [4, 3, 6]: DPPs appear to be a natural alternative to Poisson
processes when realizations should exhibit repulsiveness. In [3], for example, DPPs are used
to model diversity among summary timelines in a large news corpus. In [7], DPPs model
diversity among the results of a search engine for a given query. In [4], DPPs model the
spatial repartition of trees in a forest, as similar trees compete for nutrients in the ground,
and thus tend to grow away from each other. With these modelling applications comes
the question of learning a DPP from data, either through a parametrized form [4, 7], or
non-parametrically [8, 9]. We focus in this paper on parametric inference.
1
Similarly to the correlation between the function values in a Gaussian process (GPs; [10]),
the repulsiveness in a DPP is defined through a kernel K, which measures how much two
points in a realization repel each other. The likelihood of a DPP involves the evaluation
and the spectral decomposition of an operator L defined through a kernel L that is related
to K. There are two main issues that arise when performing likelihood-based inference
for a DPP. First, the likelihood involves evaluating the kernel L, while it is more natural
to parametrize K instead, and there is no easy link between the parameters of these two
kernels. The second issue is that the spectral decomposition of the operator L required
in the likelihood evaluation is rarely available in practice, for computational or analytical
reasons. For example, in the case of a large finite set of objects, as in the news corpus
application [3], evaluating the likelihood once requires the eigendecomposition of a large
matrix. Similarly, in the case of a continuous domain, as for the forest application [4], the
spectral decomposition of the operator L may not be analytically tractable for nontrivial
choices of kernel L. In this paper, we focus on the second issue, i.e., we provide likelihoodbased inference methods that assume the kernel L is parametrized, but that do not require
any eigendecomposition, unlike [7]. More specifically, our main contribution is to provide
bounds on the likelihood of a DPP that do not depend on the spectral decomposition of
the operator L. For the finite case, we draw inspiration from bounds used for variational
inference of GPs [11], and we extend these bounds to DPPs over continuous domains.
For ease of presentation, we first consider DPPs over finite sets of objects in Section 2, and
we derive bounds on the likelihood. In Section 3, we plug these bounds into known inference
paradigms: variational inference and Markov chain Monte Carlo inference. In Section 4, we
extend our results to the case of a DPP over a continuous domain. Readers who are only
interested in the finite case, or who are unfamiliar with operator theory, can safely skip
Section 4 without missing our main points. In Section 5, we experimentally validate our
results, before discussing their breadth in Section 6.
2
2.1
DPPs over finite sets
Definition and likelihood
Consider a discrete set of items Y = {x1 , . . . , xn }, where xi ? Rd is a vector of attributes
that describes item i. Let K be a symmetric positive definite kernel [12] on Rd , and let
K = ((K(xi , xj ))) be the Gram matrix of K. The DPP of kernel K is defined as the
probability distribution over all possible 2n subsets Y ? Y such that
P(A ? Y ) = det(KA ),
(1)
where KA denotes the sub-matrix of K indexed by the elements of A. This distribution
exists and is unique if and only if the eigenvalues of K are in [0, 1] [5]. Intuitively, we
can think of K(x, y) as encoding the amount of negative correlation, or ?repulsiveness?
between x and y. Indeed, as remarked in [3], (1) first yields that diagonal elements of K
are marginal probabilities: P(xi ? Y ) = Kii . Equation (1) then entails that xi and xj are
likely to co-occur in a realization of Y if and only if
2
det K{xi ,xj } = K(xi , xi )K(yi , yi ) ? K(xi , xj )2 = P(xi ? Y )P(xj ? Y ) ? Kij
is large: off-diagonal terms in K indicate whether points tend to co-occur.
Providing the eigenvalues of K are further restricted to be in [0, 1), the DPP of kernel K
has a likelihood [1]. More specifically, writing Y1 for a realization of Y ,
P(Y = Y1 ) =
det LY1
,
det(L + I)
(2)
where L = (I ? K)?1 K, I is the n ? n identity matrix, and LY1 denotes the sub-matrix
of L indexed by the elements of Y1 . Now, given a realization Y1 , we would like to infer
the parameters of kernel K, say the parameters ?K = (aK , ?K ) ? (0, ?)2 of a squared
exponential kernel [10]
kx ? yk2
K(x, y) = aK exp ?
.
(3)
2
2?K
2
Since the trace of K is the expected number of points in Y [5], one can estimate aK by
the number of points in the data divided by n [4]. But ?K , the repulsive ?lengthscale?,
has to be fitted. If the number of items n is large, likelihood-based methods such as maximum likelihood are too costly: each evaluation of (2) requires O(n2 ) storage and O(n3 )
time. Furthermore, valid choices of ?K are constrained, since one needs to make sure the
eigenvalues of K remain in [0, 1).
A partial work-around is to note that given any symmetric positive definite kernel L, the
likelihood (2) with matrix L = ((L(xi , xj ))) corresponds to a valid choice of K, since the
corresponding matrix K = L(I + L)?1 necessarily has eigenvalues in [0, 1], which makes sure
the DPP exists [5]. The work-around consists in directly parametrizing and inferring the
kernel L instead of K, so that the numerator of (2) is cheap to evaluate, and parameters
are less constrained. Note that this step favours tractability over interpretability of the
inferred parameters: if we assume L to take the squared exponential form (3) instead of
K, with parameters aL and ?L , the number of points and the repulsiveness of the points in
Y do not decouple as nicely. For example, the expected number of items in Y depends on
aL and ?L now, and both parameters also significantly affect repulsiveness. There is some
work investigating approximations to K to retain the more interpretable parametrization
[4], but the machine learning literature [3, 7] almost exclusively adopts the more tractable
parametrization of L. In this paper, we also make this choice of parametrizing L directly.
Now, the computational bottleneck in the evaluation of (2) is computing det(L + I). While
this still prevents the application of maximum likelihood, bounds on this determinant can
be used in a variational approach or an MCMC algorithm. In [7], bounds on det(L + I)
are proposed, requiring only the first m eigenvalues of L, where m is chosen adaptively at
each MCMC iteration to make the acceptance decision possible. This still requires applying
power iteration methods, which are limited to finite domains, require storing the whole n?n
matrix L, and are prohibitively slow when the number of required eigenvalues m is large.
2.2
Nonspectral bounds on the likelihood
Let us denote by LAB the submatrix of L where row indices correspond to the elements of A,
and column indices to those of B. When A = B, we simply write LA for LAA , and we drop
the subscript when A = Y. Drawing inspiration from sparse approximations to Gaussian
processes using inducing variables [11], we let Z = {z1 , . . . , zm } be an arbitrary set of points
in Rd , and we approximate L by Q = LYZ [LZ ]?1 LZY . Note that we do not constrain Z
to belong to Y, so that our bounds do not rely on a Nystr?om-type approximation [13]. We
term Z ?pseudo-inputs?, or ?inducing inputs?.
Proposition 1.
1
1
1
e?tr(L?Q) ?
?
.
det(Q + I)
det(L + I)
det(Q + I)
(4)
The proof relies on a nontrivial inequality on determinants [14, Theorem 1], and is provided
in the supplementary material.
3
Learning a DPP using bounds
In this section, we explain how to run variational inference and Markov chain Monte Carlo
methods using the bounds in Proposition 1. In this section, we also make connections with
variational sparse Gaussian processes more explicit.
3.1
Variational inference
The lower bound in Proposition 1 can be used for variational inference. Assume we have
T point process realizations Y1 , . . . , YT , and we fit a DPP with kernel L = L? . The log
3
likelihood can be expressed using (2)
`(?) =
T
X
log det(LYt ) ? T log det(L + I).
(5)
i=1
Let Z be an arbitrary set of m points in Rd . Proposition 1 then yields a lower bound
F(?, Z) ,
T
X
log det(LYt ) ? T log det(Q + I) + T tr (L ? Q) ? `(?).
(6)
t=1
The lower bound F(?, Z) can be computed efficiently in O(nm2 ) time, which is considerably
lower than a power iteration in O(n2 ) if m n. Instead of maximizing `(?), we thus
maximize F(?, Z) jointly w.r.t. the kernel parameters ? and the variational parameters Z.
To maximize (8), one can e.g. implement an EM-like scheme, alternately optimizing in
Z and ?. Kernels are often differentiable with respect to ?, and sometimes F will also be
differentiable with respect to the pseudo-inputs Z, so that gradient-based methods can help.
In the general case, black-box optimizers such as CMA-ES [15], can also be employed.
3.2
Markov chain Monte Carlo inference
If approximate inference is not suitable, we can use the bounds in Proposition 1 to build
a more expensive Markov chain Monte Carlo [16] sampler. Given a prior distribution p(?)
on the parameters ? of L, Bayesian inference relies on the posterior distribution ?(?) ?
exp(`(?))p(?), where the log likelihood `(?) is defined in (7). A standard approach to
sample approximately from ?(?) is the Metropolis-Hastings algorithm (MH; [16, Chapter
7.3]). MH consists in building an ergodic Markov chain of invariant distribution ?(?). Given
a proposal q(?0 |?), the MH algorithm starts its chain at a user-defined ?0 , then at iteration
k + 1 it proposes a candidate state ?0 ? q(?|?k ) and sets ?k+1 to ?0 with probability
"
#
0
e`(? ) p(?0 ) q(?k |?0 )
0
?(?k , ? ) = min 1, `(? )
(7)
e k p(?k ) q(?0 |?k )
while ?k+1 is otherwise set to ?k . The core of the algorithm is thus to draw a Bernoulli
variable with parameter ? = ?(?, ?0 ) for ?, ?0 ? Rd . This is typically implemented by
drawing a uniform u ? U[0,1] and checking whether u < ?. In our DPP application, we
cannot evaluate ?. But we can use Proposition 1 to build a lower and an upper bound
`(?) ? [b? (?, Z), b+ (?, Z)], which can be arbitrarily refined by increasing the cardinality of
Z and optimizing over Z. We can thus build a lower and upper bound for ?
p(?0 )
p(?0 )
b? (?0 , Z 0 ) ? b+ (?, Z) + log
? log ? ? b+ (?0 , Z 0 ) ? b? (?, Z) + log
.
(8)
p(?)
p(?)
Now, another way to draw a Bernoulli variable with parameter ? is to first draw u ? U[0,1] ,
and then refine the bounds in (13), by augmenting the numbers |Z|, |Z 0 | of inducing variables
and optimizing over Z, Z 0 , until1 log u is out of the interval formed by the bounds in (13).
Then one can decide whether u < ?. This Bernoulli trick is sometimes named retrospective
sampling and has been suggested as early as [17]. It has been used within MH for inference
on DPPs with spectral bounds in [7], we simply adapt it to our non-spectral bounds.
4
The case of continuous DPPs
DPPs can be defined over very general spaces [5]. We limit ourselves here to point processes
on X ? Rd such that one can extend the notion of likelihood. In particular, we define here
a DPP on X as in [1, Example 5.4(c)], by defining its Janossy density. For definitions of
traces and determinants of operators, we follow [18, Section VII].
1
Note that this necessarily happens under fairly weak assumptions: saying that the upper and
lower bounds in (4) match when m goes to infinity is saying that the integral of the posterior variance
of a Gaussian process with no evaluation error goes to zero as we add more distinct training points.
4
4.1
Definition
Let ? be a measure on (Rd , B(Rd )) that is continuous with respect to the Lebesgue measure,
with density ?0 . Let L be a symmetric
positive definite kernel. L defines a self-adjoint
R
operator on L2 (?) through L(f ) , L(x, y)f (y)d?(y). Assume L is trace-class, and
Z
tr(L) =
L(x, x)d?(x).
(9)
X
We assume (14) to avoid technicalities. Proving (14) can be done by requiring various
assumptions on L and ?. Under the assumptions of Mercer?s theorem, for instance, (14)
will be satisfied [18, Section VII, Theorem 2.3]. More generally, the assumptions of [19,
Theorem 2.12] apply to kernels over noncompact domains, in particular the Gaussian kernel
with Gaussian base measure that is often used in practice. We denote by ?i the eigenvalues
of the compact operator L. There exists [1, Example 5.4(c)] a simple2 point process on Rd
such that
det((L(xi , xj )) 0
There are n particles, one in each of
P
=
? (x1 ) . . . ?0 (xn ),
(10)
the infinitesimal balls B(xi , dxi )
det(I + L)
Q?
where B(x, r) is the open ball of center x and radius r, and where det(I +L) , i=1 (?i +1)
is the Fredholm determinant of operator L [18, Section VII]. Such a process is called the
determinantal point process associated to kernel L and base measure ?.3 Equation (15) is
the continuous equivalent of (2). Our bounds require ? to be computable. This is the case
for the popular Gaussian kernel with Gaussian base measure.
4.2
Nonspectral bounds on the likelihood
In this section, we derive bounds on the likelihood (15) that do not require to compute the
Fredholm determinant det(I + L).
Proposition 2. Let Z = {z1 , . . . , zm } ? Rd , then
R
?1
1
det LZ
det LZ
e? L(x,x)d?(x)+tr(LZ ?) ?
?
,
det(LZ + ?)
det(I + L)
det(LZ + ?)
R
where LZ = ((L(zi , zj )) and ?ij = L(zi , x)L(x, zj )d?(x).
(11)
As for Proposition 1, the proof relies on a nontrivial inequality on determinants [14, Theorem
1] and is provided in the supplementary material. We also detail in the supplementary
material why (16) is the continuous equivalent to (4).
5
Experiments
5.1
A toy Gaussian continuous experiment
In this section, we consider a DPP on R, so that the bounds derived in Section 4 apply.
As in [7, Section 5.1], we take the base measure to be proportional to a Gaussian, i.e. its
?2
density is ?0 (x) =
?N (x|0, (2?) ). We consider a squared exponential kernel L(x, y) =
2
2
exp ? kx ? yk . In this particular case, the spectral decomposition of operator L is
known [20]4 : the eigenfunctions of L are scaled Hermite polynomials, while the eigenvalues
are a geometrically decreasing sequence. This 1D Gaussian-Gaussian example is interesting
for two reasons: first, the spectral decomposition of L is known, so that we can sample
exactly from the corresponding DPP [5] and thus generate synthetic datasets. Second,
the Fredholm determinant det(I + L) in this special case is a q-Pochhammer symbol, and
2
i.e., for which all points in a realization are distinct.
There is a notion of kernel K for general DPPs [5], but we define L directly here, for the sake
of simplicity. The interpretability issues of using L instead of K are the same as for the finite case,
see Sections 2 and 5.
4
We follow the parametrization of [20] for ease of reference.
3
5
can thus be efficiently computed, see e.g. the SymPy library in Python. This allows for
comparison with ?ideal? likelihood-based methods, to check the validity of our MCMC
sampler, for instance. We emphasize that these special properties are not needed for the
inference methods in Section 3, they are simply useful to demonstrate their correctness.
We sample a synthetic dataset using (?, ?, ) = (1000, 0.5, 1), resulting in 13 points shown
in red in Figure 1(a). Applying the variational inference method of Section 3.1, jointly
optimizing in Z and ? = (?, ?, ) using the CMA-ES optimizer [15], yields poorly consistent
results: ? varies over several orders of magnitude from one run to the other, and relative
errors for ? and go up to 100% (not shown). We thus investigate the identifiability of the
parameters with the retrospective MH of Section 3.2. To limit the range of ?, we choose for
(log ?, log ?, log ) a wide uniform prior over
[200, 2000] ? [?10, 10] ? [?10, 10].
We use a Gaussian proposal, the covariance matrix of which is adapted on-the-fly [21] so as to
reach 25% of acceptance. We start each iteration with m = 20 pseudo-inputs, and increase
it by 10 and re-optimize when the acceptance decision cannot be made. Most iterations
could be made with m = 20, and the maximum number of inducing inputs required in
our run was 80. We show the results of a run of length 10 000 in Figure 5.1. Removing
a burn-in sample of size 1000, we show the resulting marginal histograms in Figures 1(b),
1(c), and 1(d). Retrospective MH and the ideal MH agree. The prior pdf is in green. The
posterior marginals of ? and are centered around the values used for simulation, and are
very different from the prior, showing that the likelihood contains information about ? and
. However, as expected, almost nothing is learnt about ?, as posterior and prior roughly
coincide. This is an example of the issues that come with parametrizing L directly, as
mentioned in Section 1. It is also an example when MCMC is preferable to the variational
approach in Section 3. Note that this can be detected through the variability of the results
of the variational approach across independent runs.
To conclude, we show a set of optimized pseudo-inputs Z in black in Figure 1(a). We
also superimpose the marginal of any single point in the realization, which is available
through the spectral decomposition of L here [5]. In this particular case, this marginal
is a Gaussian. Interestingly, the pseudo-inputs look like evenly spread samples from this
marginal. Intuitively, they are likely to make the denominator in the likelihood (15) small,
as they represent an ideal sample of the Gaussian-Gaussian DPP.
5.2
Diabetic neuropathy dataset
Here, we consider a real dataset of spatial patterns of nerve fibers in diabetic patients.
These fibers become more clustered as diabetes progresses [22]. The dataset consists of 7
samples collected from diabetic patients at different stages of diabetic neuropathy and one
healthy subject. We follow the experimental setup used in [7] and we split the total samples
into two classes: Normal/Mildly Diabetic and Moderately/Severely Diabetic. The first
class contains three samples and the second one the remaining four. Figure 2 displays the
point process data, which contain on average 90 points per sample in the Normal/Mildly
class and 67 for the Moderately/Severely class. We investigate the differences between
these classes by fitting a separate DPP to each class and then quantify the differences of the
repulsion or overdispersion of the point process data through the inferred kernel parameters.
Paraphrasing [7], we consider a continuous DPP on R2 , with kernel function
!
2
2
X
(xi,d ? xj,d )
L(xi , xj ) = exp ?
,
(12)
2?d2
d=1
Q2
and base measure proportional to a Gaussian ?0 (x) = ? d=1 N (xd |?d , ?2d ). As in [7], we
quantify the overdispersion of realizations of such a Gaussian-Gaussian DPP through the
quantities ?d = ?d /?d , which are invariant to the scaling of x. Note however that, strictly
speaking, ? also mildly influences repulsion.
We investigate the ability of the variational method in Section 3.1 to perform approximate
maximum likelihood training over the kernel parameters ? = (?, ?1 , ?2 , ?1 , ?2 ). Specifically, we wish to fit a separate continuous DPP to each class by jointly maximizing the
6
2.0
?3
2.5 ?10
K(?, ?)?0(?)
1.5
?
prior pdf
ideal MH
MH w/ bounds
2.0
1.5
1.0
1.0
0.5
0.5
0.0
?10
0.0
0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0
?103
?5
0
5
10
(a)
?
7
prior pdf
6
ideal MH
5
MH w/ bounds
4
3
2
1
0
0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
3.0
prior pdf
ideal MH
MH w/ bounds
2.5
2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Figure 1: Results of adaptive Metropolis-Hastings in the 1D continuous experiment of Section 5.1. Figure 1(a) shows data in red, a set of optimized pseudo-inputs in black for ? set
to the value used in the generation of the synthetic dataset, and the marginal of one point
in the realization in blue. Figures 1(b), 1(c), and 1(d) show marginal histograms of ?, ?, .
variational lower bound over ? and the inducing inputs Z using gradient-based optimization. Given that the number of inducing variables determines the amount of the approximation, or compression of the DPP model, we examine different settings for this number and see whether the corresponding trained models provide similar estimates for the
overdispersion measures. Thus, we train the DPPs under different approximations having
m ? {50, 100, 200, 400, 800, 1200} inducing variables and display the estimated overdispersion measures in Figures 3(a) and 3(b). These estimated measures converge to coherent
values as m increases. They show a clear separation between the two classes, as also found
in [7, 22]. This also suggests tuning m in practice by increasing it until inference results
stop varying. Furthermore, Figures 3(c) and 3(d) show the values of the upper and lower
bounds on the log likelihood, which as expected, converge to the same limit as m increases.
We point out that the overall optimization of the variational lower bound is relatively fast
in our MATLAB implementation. For instance, it takes 24 minutes for the most expensive
run where m = 1200 to perform 1 000 iterations until convergence. Smaller values of m
yield significantly smaller times.
Finally, as in Section 5.1, we comment on the optimized pseudo-inputs. Figure 4 displays
the inducing points at the end of a converged run of variational inference for various values
of m. Similarly to Figure 1(a), these pseudo-inputs are placed in remarkably neat grids and
depart significantly from their initial locations.
6
Discussion
We have proposed novel, cheap-to-evaluate, nonspectral bounds on the determinants arising
in the likelihoods of DPPs, both finite and continuous. We have shown how to use these
bounds to infer the parameters of a DPP, and demonstrated their use for expensive-butexact MCMC and cheap-but-approximate variational inference. In particular, these bounds
have some degree of freedom ? the pseudo-inputs ?, which we optimize so as to tighten the
bounds. This optimization step is crucial for likelihood-based inference of parametric DPP
models, where bounds have to adapt to the point where the likelihood is evaluated to yield
7
normal
mild1
mild2
mod1
mod2
sev1
Figure 2: Six out of the seven nerve fiber samples. The first three samples (from left to
right) correspond to a Normal Subject and two Mildly Diabetic Subjects, respectively. The
remaining three samples correspond to a Moderately Diabetic Subject and two Severely
Diabetic Subjects.
?3
x 10
0.03
350
400
300
5
0.01
350
Bounds
10
0.02
Bounds
Ratio 2
Ratio 1
15
300
250
0
400
800
1200
Number of inducing variables
(a)
50
400
800
1200
100
50
Number of inducing variables
400
800
1200
Number of inducing variables
(b)
200
150
200
0
50
250
(c)
50
400
800
1200
Number of inducing variables
(d)
Figure 3: Figures 3(a) and 3(b) show the evolution of the estimated overdispersion measures
?1 and ?2 as functions of the number of inducing variables used. The dotted black lines correspond to the Normal/Mildly Diabetic class while the solid lines to the Moderately/Severely
Diabetic class. Figure 3(c) shows the upper bound (red) and the lower bound (blue) on
the log likelihood as functions of the number of inducing variables for the Normal/Mildly
Diabetic class while the Moderately/Severely Diabetic case is shown in Figure 3(d).
Figure 4: We illustrate the optimization over the inducing inputs Z for several values of
m ? {50, 100, 200, 400, 800, 1200} in the DPP of Section 5.2. We consider the Normal/Mildly
diabetic class. The panels in the top row show the initial inducing input locations for various
values of m, while the corresponding panels in the bottom row show the optimized locations.
decisions which are consistent with the ideal underlying algorithms. In future work, we plan
to investigate connections of our bounds with the quadrature-based bounds for Fredholm
determinants of [23]. We also plan to consider variants of DPPs that condition on the
number of points in the realization, to put joint priors over the within-class distributions
of the features in classification problems, in a manner related to [6]. In the long term, we
will investigate connections between kernels L and K that could be made without spectral
knowledge, to address the issue of replacing L by K.
Acknowledgments
We would like to thank Adrien Hardy for useful discussions and Emily Fox for kindly providing access to the diabetic neuropathy dataset. RB was funded by a research fellowship
through the 2020 Science programme, funded by EPSRC grant number EP/I017909/1, and
by ANR project ANR-13-BS-03-0006-01.
8
References
[1] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes.
Springer, 2nd edition, 2003.
[2] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied
Probability, 7:83?122, 1975.
[3] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 2012.
[4] F. Lavancier, J. M?ller, and E. Rubak. Determinantal point process models and statistical inference. Journal of the Royal Statistical Society, 2014.
[5] J. B. Hough, M. Krishnapur, Y. Peres, and B. Vir?ag. Determinantal processes and
independence. Probability surveys, 2006.
[6] J. Y. Zou and R. P. Adams. Priors for diversity in generative latent variable models.
In Advances in Neural Information Processing Systems (NIPS), 2012.
[7] R. H. Affandi, E. B. Fox, R. P. Adams, and B. Taskar. Learning the parameters
of determinantal point processes. In Proceedings of the International Conference on
Machine Learning (ICML), 2014.
[8] J. Gillenwater, A. Kulesza, E. B. Fox, and B. Taskar. Expectation-maximization for
learning determinantal point processes. In Advances in Neural Information Proccessing
Systems (NIPS), 2014.
[9] Z. Mariet and S. Sra. Fixed-point algorithms for learning determinantal point processes.
In Advances in Neural Information systems (NIPS), 2015.
[10] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning.
MIT Press, 2006.
[11] Michalis K. Titsias. Variational Learning of Inducing Variables in Sparse Gaussian
Processes. In AISTATS, volume 5, 2009.
[12] N. Cristianini and J. Shawe-Taylor. Kernel methods for pattern recognition. Cambridge
University Press, 2004.
[13] R. H. Affandi, A. Kulesza, E. B. Fox, and B. Taskar. Nystr?om approximation for largescale determinantal processes. In Proceedings of the conference on Artificial Intelligence
and Statistics (AISTATS), 2013.
[14] E. Seiler and B. Simon. An inequality among determinants. Proceedings of the National
Academy of Sciences, 1975.
[15] N. Hansen. The CMA evolution strategy: a comparing review. In Towards a new
evolutionary computation. Advances on estimation of distribution algorithms. Springer,
2006.
[16] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2004.
[17] L. Devroye. Non-uniform random variate generation. Springer-Verlag, 1986.
[18] I. Gohberg, S. Goldberg, and M. A. Kaashoek. Classes of linear operators, Volume I.
Springer, 1990.
[19] B. Simon. Trace ideals and their applications. American Mathematical Society, 2nd
edition, 2005.
[20] G. E. Fasshauer and M. J. McCourt. Stable evaluation of gaussian radial basis function
interpolants. SIAM Journal on Scientific Computing, 34(2), 2012.
[21] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm.
Bernoulli, 7:223?242, 2001.
[22] L. A. Waller, A. S?
arkk?
a, V. Olsbo, M. Myllym?aki, I. G. Panoutsopoulou, W. R.
Kennedy, and G. Wendelschafer-Crabb. Second-order spatial analysis of epidermal
nerve fibers. Statistics in Medicine, 30(23):2827?2841, 2011.
[23] F. Bornemann. On the numerical evaluation of Fredholm determinants. Mathematics
of Computation, 79(270):871?915, 2010.
9
| 5810 |@word determinant:13 compression:1 polynomial:1 nd:2 open:1 d2:1 simulation:1 decomposition:8 covariance:1 nystr:2 solid:1 tr:4 moment:1 initial:2 contains:2 exclusively:1 hardy:1 interestingly:1 ka:2 com:1 comparing:1 arkk:1 gmail:1 vere:1 determinantal:11 numerical:1 cheap:5 drop:1 interpretable:1 fasshauer:1 generative:1 intelligence:1 item:4 haario:1 parametrization:3 core:1 location:3 hermite:1 mathematical:1 become:1 consists:3 fitting:1 manner:1 expected:4 indeed:1 roughly:1 examine:1 decreasing:1 cardinality:1 increasing:2 provided:2 project:1 underlying:1 panel:2 q2:1 ag:1 safely:1 pseudo:9 xd:1 exactly:1 prohibitively:1 scaled:1 preferable:1 vir:1 grant:1 appear:1 positive:4 before:1 waller:1 limit:3 severely:5 encoding:1 ak:3 subscript:1 approximately:1 black:4 burn:1 umr:1 suggests:1 co:2 ease:2 limited:1 tamminen:1 range:1 unique:1 acknowledgment:1 practice:3 definite:4 implement:1 likelihoodbased:1 optimizers:1 significantly:3 radial:1 cannot:2 operator:12 storage:1 put:1 applying:2 writing:1 influence:1 optimize:2 equivalent:2 demonstrated:1 missing:1 yt:1 maximizing:2 straightforward:1 economics:1 attention:1 go:3 center:1 ergodic:1 emily:1 survey:1 simplicity:1 williams:1 seiler:1 proving:1 notion:2 user:1 exact:2 gps:2 goldberg:1 diabetes:1 trick:1 element:4 trend:1 expensive:4 recognition:1 bottom:1 epsrc:1 ep:1 fly:1 coincidence:1 taskar:4 news:2 yk:1 mentioned:1 moderately:6 cristianini:1 trained:1 depend:1 titsias:2 learner:1 basis:1 mh:13 joint:1 chapter:1 various:3 fiber:4 mccourt:1 train:1 univ:2 distinct:2 fast:1 lengthscale:1 monte:6 query:1 detected:1 artificial:1 refined:1 supplementary:3 say:1 drawing:2 otherwise:1 anr:2 ability:1 statistic:2 cma:3 think:1 jointly:3 sequence:1 eigenvalue:8 differentiable:2 analytical:1 zm:2 realization:12 poorly:1 nonspectral:3 adjoint:1 academy:1 inducing:17 validate:1 saksman:1 convergence:1 adam:2 object:3 help:1 derive:3 illustrate:1 augmenting:1 ij:1 received:1 progress:1 implemented:1 involves:2 come:2 skip:1 indicate:1 quantify:2 radius:1 attribute:1 stochastic:1 nutrient:1 centered:1 material:3 require:4 kii:1 clustered:1 proposition:8 strictly:1 around:3 considered:1 ground:1 normal:7 exp:4 optimizer:1 early:1 estimation:1 athens:1 hansen:1 healthy:1 correctness:1 mtitsias:1 tool:1 mit:1 gaussian:22 avoid:1 varying:1 encode:2 derived:1 focus:2 modelling:2 likelihood:32 bernoulli:4 check:1 inference:25 repulsion:2 cnrs:1 typically:2 france:1 interested:1 simple2:1 issue:7 among:3 overall:1 classification:1 proposes:1 adrien:1 spatial:3 constrained:2 fairly:1 special:2 marginal:7 field:1 once:1 plan:2 nicely:1 having:1 sampling:2 lille:1 look:1 jones:1 icml:1 future:1 national:1 ourselves:1 lebesgue:1 statistician:1 freedom:1 acceptance:3 investigate:5 evaluation:7 chain:7 integral:1 partial:1 fox:4 tree:2 indexed:2 hough:1 taylor:1 re:1 fitted:1 kij:1 column:1 instance:3 maximization:1 tractability:1 subset:1 parametrically:1 uniform:3 gr:1 too:1 varies:1 learnt:1 considerably:1 synthetic:3 adaptively:1 density:3 international:1 siam:1 retain:1 physic:1 informatics:1 off:1 squared:3 satisfied:1 janossy:1 choose:1 american:1 toy:1 diversity:4 combinatorics:1 depends:1 lab:1 red:3 start:2 identifiability:1 simon:2 contribution:2 om:2 formed:1 variance:1 who:2 efficiently:2 yield:6 correspond:4 weak:1 bayesian:1 fredholm:6 carlo:6 kennedy:1 converged:1 explain:1 pochhammer:1 reach:1 casella:1 definition:3 infinitesimal:1 remarked:1 naturally:1 proof:2 dxi:1 associated:1 stop:1 dataset:6 popular:1 knowledge:2 
greece:1 nerve:3 appears:1 follow:4 done:1 box:1 evaluated:1 furthermore:2 stage:1 correlation:2 until:2 hastings:2 replacing:1 paraphrasing:1 defines:1 scientific:1 building:1 validity:1 requiring:2 contain:1 evolution:2 analytically:1 inspiration:2 symmetric:3 numerator:1 self:1 aki:1 pdf:4 demonstrate:1 variational:18 novel:1 recently:1 volume:2 extend:3 belong:1 marginals:1 refer:1 unfamiliar:1 cambridge:1 dpps:19 rd:10 tuning:1 grid:1 mathematics:1 similarly:3 particle:1 gillenwater:1 shawe:1 funded:2 access:1 entail:1 stable:1 yk2:1 add:1 base:5 posterior:4 optimizing:4 verlag:1 inequality:3 arbitrarily:1 discussing:1 yi:2 employed:1 converge:2 paradigm:1 maximize:2 ller:1 desirable:1 infer:2 match:1 adapt:2 plug:1 long:1 divided:1 equally:1 variant:1 aueb:1 denominator:1 patient:2 expectation:1 poisson:1 iteration:7 kernel:33 sometimes:2 histogram:2 represent:1 audience:1 proposal:2 remarkably:1 fellowship:1 interval:1 grow:1 crucial:1 neuropathy:3 unlike:2 posse:1 sure:2 eigenfunctions:1 subject:5 tend:2 comment:1 ideal:8 split:1 easy:1 xj:9 affect:1 fit:2 zi:2 independence:1 variate:1 computable:1 det:23 favour:1 whether:4 bottleneck:1 six:1 retrospective:3 algebraic:1 speaking:1 matlab:1 generally:1 useful:2 detailed:1 aimed:1 clear:1 amount:2 generate:1 zj:2 tutorial:1 dotted:1 estimated:3 arising:1 per:1 rb:1 blue:2 discrete:1 write:1 four:1 breadth:1 bardenet:2 geometrically:1 compete:1 run:7 named:1 almost:2 reader:2 decide:1 saying:2 separation:1 interpolants:1 draw:4 decision:3 scaling:1 submatrix:1 bound:46 display:3 refine:1 nontrivial:3 adapted:1 occur:2 infinity:1 constrain:1 n3:1 sake:1 emi:1 argument:2 min:1 performing:1 relatively:1 department:1 ball:2 describes:1 remain:1 em:1 across:1 smaller:2 metropolis:3 b:1 happens:1 intuitively:2 restricted:1 invariant:2 mariet:1 macchi:1 equation:2 agree:1 needed:1 overdispersion:5 tractable:2 lyt:2 end:1 repulsive:1 parametrize:1 available:2 apply:2 away:1 spectral:12 alternative:1 denotes:2 michalis:2 remaining:2 top:1 medicine:1 build:3 approximating:1 society:2 question:1 quantity:1 depart:1 parametric:2 costly:1 strategy:1 usual:1 diagonal:2 exhibit:1 gradient:2 evolutionary:1 link:1 separate:2 thank:1 parametrized:2 evenly:1 seven:1 collected:1 reason:2 length:1 devroye:1 index:2 providing:2 ratio:2 setup:1 robert:1 trace:4 negative:1 implementation:1 contributed:1 perform:2 upper:5 markov:6 datasets:1 finite:10 parametrizing:4 defining:1 peres:1 variability:1 y1:5 arbitrary:2 inferred:2 superimpose:1 required:3 repel:1 z1:2 connection:3 optimized:4 engine:1 coherent:1 nm2:1 epidermal:1 timeline:1 alternately:1 nip:3 address:1 cristal:1 suggested:1 pattern:2 appeared:1 kulesza:3 interpretability:3 green:1 royal:1 power:2 suitable:1 business:1 rely:2 natural:2 largescale:1 analyticity:1 scheme:1 library:1 lavancier:1 review:2 literature:1 prior:10 checking:1 l2:1 python:1 relative:1 interesting:1 generation:2 proportional:2 eigendecomposition:2 foundation:1 degree:1 consistent:2 mercer:1 storing:1 row:3 summary:1 placed:1 rasmussen:1 neat:1 wide:1 affandi:2 sparse:3 crabb:1 dpp:27 xn:2 evaluating:2 gram:1 valid:2 quantum:1 author:1 adopts:1 made:3 coincide:1 adaptive:2 lz:7 programme:1 tighten:1 approximate:4 compact:1 emphasize:1 technicality:1 investigating:1 krishnapur:1 corpus:2 conclude:1 xi:14 spectrum:1 continuous:14 search:1 latent:1 diabetic:15 why:1 lyz:1 bypassed:1 sra:1 forest:2 necessarily:2 zou:1 domain:7 kindly:1 aistats:2 main:4 spread:1 whole:1 arise:2 edition:2 n2:2 nothing:1 myllym:1 quadrature:1 x1:2 
slow:1 sub:2 inferring:1 explicit:1 wish:1 exponential:3 daley:1 candidate:1 theorem:5 removing:1 minute:1 showing:1 symbol:1 r2:1 normalizing:1 intractable:2 exists:3 magnitude:1 kx:2 mildly:7 vii:3 remi:1 simply:3 likely:2 prevents:1 expressed:1 rubak:1 springer:5 corresponds:1 determines:1 relies:3 identity:1 presentation:1 towards:1 kaashoek:1 price:1 experimentally:1 specifically:3 sampler:2 decouple:1 called:1 total:1 e:2 la:1 experimental:1 rarely:1 repartition:1 evaluate:4 mcmc:5 |
5,315 | 5,811 | Sample Complexity of Learning Mahalanobis
Distance Metrics
Kristin Branson
Janelia Research Campus, HHMI
[email protected]
Nakul Verma
Janelia Research Campus, HHMI
[email protected]
Abstract
Metric learning seeks a transformation of the feature space that enhances prediction quality for a given task. In this work we provide PAC-style sample complexity
rates for supervised metric learning. We give matching lower- and upper-bounds
showing that sample complexity scales with the representation dimension when
no assumptions are made about the underlying data distribution. In addition, by
leveraging the structure of the data distribution, we provide rates fine-tuned to a
specific notion of the intrinsic complexity of a given dataset, allowing us to relax
the dependence on representation dimension. We show both theoretically and empirically that augmenting the metric learning optimization criterion with a simple
norm-based regularization is important and can help adapt to a dataset?s intrinsic complexity yielding better generalization, thus partly explaining the empirical
success of similar regularizations reported in previous works.
1
Introduction
In many machine learning tasks, data is represented in a high-dimensional Euclidean space. The
L2 distance in this space is then used to compare observations in methods such as clustering and
nearest-neighbor classification. Often, this distance is not ideal for the task at hand. For example,
the presence of uninformative or mutually correlated measurements arbitrarily inflates the distances
between pairs of observations. Metric learning has emerged as a powerful technique to learn a
metric in the representation space that emphasizes feature combinations that improve prediction
while suppressing spurious measurements. This has been done by exploiting class labels [1, 2] or
other forms of supervision [3] to find a Mahalanobis distance metric that respects these annotations.
Despite the popularity of metric learning methods, few works have studied how problem complexity
scales with key attributes of the dataset. In particular, how do we expect generalization error to
scale?both theoretically and practically?as one varies the number of informative and uninformative measurements, or changes the noise levels? In this work, we develop two general frameworks
for PAC-style analysis of supervised metric learning. The distance-based metric learning framework uses class label information to derive distance constraints. The objective is to learn a metric
that yields smaller distances between examples from the same class than those from different classes.
Algorithms that optimize such distance-based objectives include Mahalanobis Metric for Clustering
(MMC) [4], Large Margin Nearest Neighbor (LMNN) [1] and Information Theoretic Metric Learning (ITML) [2]. Instead of using distance comparisons as a proxy, however, one can also optimize
for a specific prediction task directly. The second framework, the classifier-based metric learning
framework, explicitly incorporates the hypotheses associated with the prediction task to learn effective distance metrics. Examples in this regime include [5] and [6].
1
Our analysis shows that in both frameworks, the sample complexity scales with a dataset?s representation dimension (Theorems 1 and 3), and this dependence is necessary in the absence of assumptions about the underlying data distribution (Theorems 2 and 4). By considering any Lipschitz loss,
our results improve upon previous sample complexity results (see Section 6) and, for the first time,
provide matching lower bounds.
In light of our observation that data measurements often include uninformative or weakly informative features, we expect a metric that yields good generalization performance to de-emphasize such
features and accentuate the relevant ones. We thus formalize the metric learning complexity of a
given dataset in terms of the intrinsic complexity d of the optimal metric. For Mahalanobis metrics,
we characterize intrinsic complexity by the norm of the matrix representation of the metric. We
refine our sample complexity results and show a dataset-dependent bound for both frameworks that
relaxes the dependence on representation dimension and instead scales with the dataset?s intrinsic
metric learning complexity d (Theorem 7).
Based on our dataset-dependent result, we propose a simple variation on the empirical risk minimizing (ERM) algorithm that returns a metric (of complexity d) that jointly minimizes the observed sample bias and the expected intra-class variance for metrics of fixed complexity d. This
bias-variance balancing criterion can be viewed as a structural risk minimizing algorithm that provides better generalization performance than an ERM algorithm and justifies norm-regularization
of weighting metrics in the optimization criteria for metric learning, partly explaining empirical
success of similar objectives [7, 8]. We experimentally validate how the basic principle of normregularization can help enhance the prediction quality even for existing metric learning algorithms
on benchmark datasets (Section 5). Our experiments highlight that norm-regularization indeed helps
learn weighting metrics that better adapt to the signal in data in high-noise regimes.
2
Preliminaries
In this section, we define our notation, and explicitly define the distance-based and classifier-based
learning frameworks. Given a D-dimensional representation space X = RD , we want to learn a
weighting, or a metric1 M ? on X that minimizes some notion of error on data drawn from a fixed
unknown distribution D on X ? {0, 1}:
M ? := argminM ?M err(M, D),
where M is the class of weighting metrics M := {M | M ? RD?D , ?max (M ) = 1} (we constrain
the maximum singular value ?max to remove arbitrary scalings). For supervised metric learning,
this error is typically label-based and can be defined in two intuitive ways.
The distance-based framework prefers metrics M that bring data from the same class closer together than those from opposite classes. The corresponding distance-based error then measures how
the distances amongst data violate class labels:
h
i
err?dist (M, D) := E(x1 ,y1 ),(x2 ,y2 )?D ?? ?M (x1 , x2 ), Y ,
where ?? (?M , Y ) is a generic distance-based loss function that computes the degree of violation
between weighted distance ?M (x1 , x2 ) := kM (x1 ?x2 )k2 and the label agreement Y := 1[y1 = y2 ]
and penalizes it by factor ?. For example, ? could penalize intra-class distances that are more than
some upper limit U and inter-class distances that are less than some lower limit L > U :
(
min{1, ?[?M ?U ]+ } if Y = 1
?
?L,U (?M , Y ) :=
,
(1)
min{1, ?[L ? ?M ]+ } otherwise
1
Note that we are looking at the linear form of the metric M ; usually the corresponding quadratic form
M M is discussed in the literature, which is necessarily positive semi-definite.
T
2
where [A]+ := max{0, A}. MMC optimizes an efficiently computable variant of Eq. (1) by constraining the aggregate intra-class distances while maximizing the aggregate inter-class distances.
ITML explicitly includes the upper and lower limits with an added regularization on the learned M
to be close to a pre-specified metric of interest M0 .
While we will discuss loss-functions ? that handle distances between pairs of observations, it is easy
to extend to relative distances among triplets:
n
min{1, ?[?M (x1 , x2 ) ? ?M (x1 , x3 )]+ } if y1 = y2 6= y3
??triple (?M (x1 , x2 ), ?M (x1 , x3 ), (y1 , y2 , y3 )) :=
,
0
otherwise
LMNN is a popular variant, in which instead of looking at all triplets, it focuses on triplets in local
neighborhoods, improving the quality of local distance comparisons.
The classifier-based framework prefers metrics M that directly improve the prediction quality for
a downstream task. Let H represent a real-valued hypothesis class associated with the prediction
task of interest (each h ? H : X ? [0, 1]), then the corresponding classifier-based error becomes:
h
i
errhypoth (M, D) := inf E(x,y)?D 1 |h(M x) ? y| ? 1/2 .
h?H
Example classifier-based methods include [5], which minimizes ranking errors for information retrieval and [6], which incorporates network topology constraints for predicting network connectivity
structure.
3
Metric Learning Sample Complexity: General Case
In any practical setting, we estimate the ideal weighting metric M ? by minimizing the empirical
version of the error criterion from a finite size sample from D. Let Sm denote a sample of size
m, and err(M, Sm ) denote the corresponding empirical error. We can then define the empirical
?
risk minimizing metric based on m samples as Mm
:= argminM err(M, Sm ), and compare its
generalization performance to that of the theoretically optimal M ? , that is,
?
err(Mm
, D) ? err(M ? , D).
(2)
Distance-Based Error Analysis. Given an i.i.d. sequence of observations z1 , z2 , . . . from
pair
D, zi = (xi , yi ), we can pair the observations together to form a paired sample2 Sm
=
m
{(z1 , z2 ), . . . , (z2m?1 , z2m )} = {(z1,i , z2,i )}i=1 of size m, and define the sample-based distance
error induced by a metric M as
m
1 X ?
pair
err?dist (M, Sm
) :=
? ?M (x1,i , x2,i ), 1[y1,i = y2,i ] .
m i=1
Then for any B-bounded-support distribution D (that is, each (x, y) ? D, kxk ? B), we have the
following.3,4
Theorem 1 Let ?? be a distance-based loss function that is ?-Lipschitz in the first argument. Then
with probability at least 1 ? ? over an i.i.d. draw of 2m samples from an unknown B-boundedpair
support distribution D paired as Sm
, we have
p
?
pair
sup errdist (M, D)? err?dist (M, Sm
) ? O ?B 2 D ln(1/?)/m .
M ?M
While we pair 2m samples into m independent pairs, it is common to consider all O(m2 ) possibly dependent pairs. By exploiting independence we provide a simpler analysis yielding O(m?1/2 ) sample complexity
rates, which is similar to the dependent case.
3
We only present the results for paired comparisons; the results are easily extended to triplet comparisons.
4
All the supporting proofs are provided in Appendix A.
2
3
This implies a bound on our key quantity of interest, Eq. (2). To achieve estimation error rate ,
m = ?((?B 2 /)2 D ln(1/?)) samples are sufficient, showing that one never needs more than a
number proportional to D examples to achieve the desired level of accuracy with high probability.
Since many applications involve high-dimensional data, we next study if such a strong dependency
on D is necessary. It turns out that even for simple distance-based loss functions like ??L,U (c.f. Eq.
1), there are data distributions for which one cannot ensure good estimation error with fewer than
linear in D samples.
Theorem 2 Let A be any algorithm that, given an i.i.d. sample Sm (of size m) from a fixed unknown
bounded support distribution D, returns a weighting metric from M that minimizes the empirical
error with respect to distance-based loss function ??L,U . There exist ? ? 0, 0 ? U < L (indep. of
1
D+1
D), s.t. for all 0 < , ? < 64
, there exists a bounded support distribution D, such that if m ? 512
2,
h
i
?
?
?
PSm errdist (A(Sm ), D) ? errdist (M , D) > > ?.
While this strong dependence on D may seem discouraging, note that here we made no assumptions about the underlying structure of the data distribution. One may be able to achieve a more
relaxed dependence on D in settings in which individual features contain varying amounts of useful
information. This is explored in Section 4.
Classifier-Based Error Analysis. In this setting, we consider an i.i.d. set of observations z1 , z2 , . . .
from D to obtain the unpaired sample Sm = {zi }m
i=1 of size m. To analyze the generalization-ability
of weighting metrics optimized w.r.t. underlying real-valued hypothesis class H, we must measure
the classification complexity of H. The scale-sensitive version of VC-dimension, the fat-shattering
dimension, of a hypothesis class (denoted Fat? (H)) encodes the right notion of classification complexity and provides a way to relate generalization error to the empirical error at a margin ? [9].
In the context of metric learning with respect to aP
fixed hypothesis class, define the empirical error
1
at a margin ? as err?hypoth (M, Sm ) := inf h?H m
(xi ,yi )?Sm 1[Margin(h(M xi ), yi ) ? ?], where
Margin(?
y , y) := (2y ? 1)(?
y ? 1/2).
Theorem 3 Let H be a ?-Lipschitz base hypothesis class. Pick any 0 < ? ? 1/2, and let m ?
Fat?/16 (H) ? 1. Then with probability at least 1 ? ? over an i.i.d. draw of m samples Sm from an
unknown B-bounded-support distribution D (0 := min{?/2, 1/2?B})
s
!
h
i
1
1 D2 D Fat?/16 (H) m
?
sup errhypoth (M, D) ? errhypoth (M, Sm ) ? O
ln +
ln +
ln
.
m ?
m
0
m
?
M ?M
As before, this implies a bound on Eq. (2). To achieve estimation error rate , m =
?((D2 ln(?DB/?) + Fat?/16 (H) ln(1/??))/2 ) samples suffices. Note that the task of finding an
optimal metric only additively increases sample complexity over that of finding the optimal hypothesis from the underlying hypothesis class. In contrast to the distance-based framework (Theorem 1),
here we get a quadratic dependence on D. The following shows that a strong dependence on D is
necessary in the absence of assumptions on the data distribution and base hypothesis class.
Theorem 4 Pick any 0 < ? < 1/8. Let H be a base hypothesis class of ?-Lipschitz functions that is
closed under addition of constants (i.e., h ? H =? h0 ? H, where h0 : x 7? h(x) + c, for all c)
s.t. each h ? H maps into the interval [1/2 ? 4?, 1/2 + 4?] after applying an appropriate theshold.
Then for any metric learning algorithm A, and for any B ? 1, there exists ? ? 0, for all 0 < , ? <
2
+d
1/64, there exists a B-bounded-support distribution D s.t. if m ln2 m < O 2 D
ln(1/? 2 )
PSm ?D [errhypoth (M ? , D) > err?hypoth (A(Sm ), D) + ] > ?,
where d := Fat768? (H) is the fat-shattering dimension of H at margin 768?.
4
4
Sample Complexity for Data with Un- and Weakly Informative Features
We introduce the concept of the metric learning complexity of a given dataset. Our key observation is that a metric that yields good generalization performance should emphasize relevant features
while suppressing the contribution of spurious features. Thus, a good metric reflects the quality of
individual feature measurements of data and their relative value for the learning task. We can leverage this and define the metric learning complexity of a given dataset as the intrinsic complexity d
of the weighting metric that yields the best generalization performance for that dataset (if multiple
metrics yield best performance, we select the one with minimum d). A natural way to characterize
the intrinsic complexity of a weighting metric M is via the norm of the matrix M . Using metric
learning complexity as our gauge for feature-set richness, we now refine our analysis in both canonical frameworks. We will first analyze sample complexity for norm-bounded metrics, then show how
to automatically adapt to the intrinsic complexity of the unknown underlying data distribution.
4.1
Distance-Based Refinement
We start with the following refinement of the distance-based metric learning sample complexity for
a class of Frobenius norm-bounded weighting metrics.
Lemma 5 Let M be any class of weighting metrics on the feature space X = RD , and define
d := supM ?M kM T M k2F . Let ?? be any distance-based loss function that is ?-Lipschitz in the first
argument. Then with probability at least 1 ? ? over an i.i.d. draw of 2m samples from an unknown
pair
B-bounded-support distribution D paired as Sm
, we have
p
?
?
pair
sup errdist (M, D)? errdist (M, Sm ) ? O ?B 2 d ln(1/?)/m .
M ?M
Observe that if our dataset has a low metric learning complexity d D, then considering an appropriate class of norm-bounded weighting metrics M can help sharpen the sample complexity result,
yielding a dataset-dependent bound. Of course, a priori we do not know which class of metrics is
appropriate; We discuss how to automatically adapt to the right complexity class in Section 4.3.
4.2
Classifier-Based Refinement
Effective data-dependent analysis of classifier-based metric learning requires accounting for potentially complex interactions between an arbitrary base hypothesis class and the distortion induced
by a weighting metric to the unknown underlying data distribution. To make the analysis tractable
while still keeping our base hypothesis class H general, we assume that H is a class of two-layer
feed-forward networks.5 Recall that for any smooth target function f ? , a two-layer feed-forward
neural network (with appropriate number of hidden units and connection weights) can approximate
f ? arbitrarily well [10], so this class is flexible enough to include most reasonable target hypotheses.
More formally, define the base hypothesis class of two-layer feed-forward neural network with K
PK
hidden units as H?2-net
:= {x 7? i=1 wi ? ? (vi ? x) | kwk1 ? 1, kvi k1 ? 1}, where ? ? : R ?
?
[?1, 1] is a smooth, strictly monotonic, ?-Lipschitz activation function with ? ? (0) = 0. Then, for
generalization error w.r.t. any classifier-based ?-Lipschitz loss function ?? ,
err?hypoth (M, D) := inf E(x,y)?D ?? h(M x), y ,
h?H2-net
??
we have the following.6
5
We only present the results for two-layer networks in Lemma 6; the results are easily extended to multilayer feed-forward networks.
6
Since we know the functional form of the base hypothesis class H (i.e., a two layer feed-forward neural
net), we can provide a more precise bound than leaving it as Fat(H).
5
Lemma 6 Let M be any class of weighting metrics on the feature space X = RD , and define
be a two layer feed-forward neural network
d := supM ?M kM T M k2F . For any ? > 0, let H?2-net
?
base hypothesis class (as defined above) and ?? be a classifier-based loss function that ?-Lipschitz
in its first argument. Then with probability at least 1 ? ? over an i.i.d. draw of m samples Sm from
an unknown B-bounded support distribution D, we have
p
sup err?hypoth (M, D)? err?hypoth (M, Sm ) ? O B?? d ln(D/?)/m .
M ?M
4.3
Automatically Adapting to Intrinsic Complexity
While Lemmas 5 and 6 provide a sample complexity bound tuned to the metric learning complexity
of a given dataset, these results are not directly useful since one cannot select the correct normbounded class M a priori, as the underlying distribution D is unknown. Fortunately, by considering
an appropriate sequence of norm-bounded classes of weighting metrics, we can provide a uniform
bound that automatically adapts to the intrinsic complexity of the unknown underlying data distribution D.
Theorem 7 Define Md := {M | kM T M k2F ? d}, and consider the nested sequence of weighting
1
2
d
metric classes
P M ? M ? ? ? ? . Let ?d be any non-negative measure across the sequence M
such that d ?d = 1 (for d = 1, 2, ? ? ? ). Then for any ? ? 0, with probability at least 1 ? ? over an
i.i.d. draw of sample Sm from an unknown B-bounded-support distribution D, for all d = 1, 2, ? ? ? ,
and all M d ? Md ,
p
?
err (M d , D) ? err? (M d , Sm ) ? O C ? B? d ln(1/??d )/m ,
(3)
?
where C := B for distance-based error, or C := ? ln D for classifier-based error (for H?2-net
? ).
In particular, for a data distribution D that has metric learning complexity at most d ? N, if there
are m ? ? d(CB?)2 ln(1/??d )/2 samples, then with probability at least 1 ? ?
?
reg
err (Mm
, D) ? err? (M ? , D) ? O(),
q
reg
for Mm
:= argmin err? (M, Sm ) + ?M dM , ?M :=CB? ln(??dM )?1 /m , dM := kM T M k2F .
M ?M
The measure ?d above encodes our prior belief on the complexity class Md from which a target
metric is selected by a metric learning algorithm given the training sample Sm . In absence of any
prior beliefs, ?d can be set to 1/D (for d = 1, . . . , D) for scale constrained weighting metrics
(?max = 1). Thus, for an unknown underlying data distribution D with metric learning complexity
d, with number of samples just proportional to d, we can find a good weighting metric.
This result also highlights that the generalization error of any weighting metric returned by an algorithm is proportional to the (smallest) norm-bounded class to which it belongs (cf. Eq. 3). If two
metrics M1 and M2 have similar empirical errors on a given sample, but have different intrinsic
complexities, then the expected risk of the two metrics can be considerably different. We expect the
metric with lower intrinsic complexity to yield better generalization error. This partly explains the
observed empirical success of norm-regularized optimization for metric learning [7, 8].
Using this as a guiding principle, we can design an improved optimization criteria for metric learning
that jointly minimizes the sample error and a Frobenius norm regularization penalty. In particular,
min
err(M, Sm ) + ? kM T M k2F
(4)
M ?M
for any error criteria ?err? used in a downstream prediction task and a regularization parameter ?.
Similar optimizations have been studied before [7, 8], here we explore the practical efficacy of
this augmented optimization on existing metric learning algorithms in high noise regimes where a
dataset?s intrinsic dimension is much smaller than its representation dimension.
[Figure 1 spans three panels — UCI Wine, UCI Iris, and UCI Ionosphere — each plotting average test error against ambient noise dimension for Random, Identity Metric, LMNN, reg-LMNN, ITML, and reg-ITML.]
Figure 1: Nearest-neighbor classification performance of LMNN and ITML metric learning algorithms without regularization (dashed red lines) and with regularization (solid blue lines) on benchmark UCI datasets. The horizontal dotted line is the classification error of random label assignment drawn according to the class proportions, and the solid gray line shows the classification error of k-NN with respect to the identity metric (no metric learning) for baseline reference.
5 Empirical Evaluation
Our analysis shows that the generalization error of metric learning can scale with the representation
dimension, and regularization can help mitigate this by adapting to the intrinsic metric learning
complexity of the given dataset. We want to explore to what degree these effects manifest in practice.
We select two popular metric learning algorithms, LMNN [1] and ITML [2], that are used to find metrics that improve nearest-neighbor classification quality. These algorithms have varying degrees of regularization built into their optimization criteria: LMNN implicitly regularizes the metric via its "large margin" criterion, while ITML allows for explicit regularization by letting the practitioners specify a "prior" weighting metric. We modified the LMNN optimization criterion as per Eq. (4) to also allow for an explicit norm-regularization controlled by the trade-off parameter $\lambda$.
We can evaluate how the unregularized criteria (i.e., unmodified LMNN, or ITML with the prior set to the identity matrix) compare to the regularized criteria (i.e., modified LMNN with the best $\lambda$, or ITML with the prior set to a low-rank matrix).
Datasets. We use the UCI benchmark datasets for our experiments: IRIS (4 dim., 150 samples), WINE (13 dim., 178 samples) and IONOSPHERE (34 dim., 351 samples) [11]. Each dataset has a fixed (unknown, but low) intrinsic dimension; we can vary the representation dimension by augmenting each dataset with synthetic correlated noise of varying dimensions, simulating regimes where datasets contain large numbers of uninformative features. Each UCI dataset is augmented with synthetic D-dimensional correlated noise as detailed in Appendix B.
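Appendix B is not reproduced here; the sketch below is one plausible construction of such correlated noise (a single random PSD covariance shared across samples), offered only as an illustration.

```python
import numpy as np

def augment_with_correlated_noise(X, noise_dim, seed=None):
    """Append `noise_dim` correlated Gaussian noise features to each sample.
    The covariance A A^T / noise_dim is a random PSD matrix, so the noise
    coordinates are mutually correlated but carry no label information."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = rng.standard_normal((noise_dim, noise_dim))
    cov = A @ A.T / noise_dim
    noise = rng.multivariate_normal(np.zeros(noise_dim), cov, size=n)
    return np.hstack([X, noise])
```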
Experimental setup. Each noise-augmented dataset was randomly split between 70% training, 10% validation, and 20% test samples. We used the default settings for each algorithm. For regularized LMNN, we picked the best-performing trade-off parameter $\lambda$ from {0, 0.1, 0.2, ..., 1} on the validation set. For regularized ITML, we seeded with the rank-one discriminating metric, i.e., we set the prior as the matrix with all zeros, except the diagonal entry corresponding to the most discriminating coordinate set to one. All reported results were averaged over 20 runs.
Results. Figure 1 shows the nearest-neighbor performance (with k = 3) of LMNN and ITML on noise-augmented UCI datasets. Notice that the unregularized versions of both algorithms (dashed red lines) scale poorly when noisy features are introduced. As the number of uninformative features grows, the performance of both algorithms quickly degrades to that of classification in the original unweighted space with no metric learning (solid gray line), showing poor adaptability to the signal in the data.
The regularized versions of both algorithms (solid blue lines) significantly improve the classification performance. Remarkably, regularized ITML shows almost no degradation in classification performance, even in very high noise regimes, demonstrating strong robustness to noise. These results underscore the value of regularization in metric learning, showing that regularization encourages adaptability to the intrinsic complexity and improved robustness to noise.
6 Discussion and Related Work
Previous theoretical work on metric learning has focused almost exclusively on analyzing upper bounds on the sample complexity in the distance-based framework, without exploring any intrinsic properties of the input data. Our work improves these results and additionally analyzes the classifier-based framework. It is, to the best of our knowledge, the first to provide lower bounds showing that the dependence on D is necessary. Importantly, it is also the first to provide an analysis of sample rates based on a notion of intrinsic complexity of a dataset, which is particularly important in metric learning, where we expect the representation dimension to be much higher than the intrinsic complexity.
[12] studied norm-regularized convex losses for stable algorithms and showed an upper bound sublinear in D (scaling as $\sqrt{D}$), which can be relaxed by applying techniques from [13]. We analyze the ERM criterion directly (thus no assumptions are made about the optimization algorithm), and provide a precise characterization of when the problem complexity is independent of D (Lm. 5). Our lower bound (Thm. 2) shows that the dependence on D is necessary for ERM in the assumption-free case.
[14] and [15] analyzed the ERM criterion, and are most similar to our results providing an upper bound for the distance-based framework. [14] shows an $O(m^{-1/2})$ rate for thresholds on bounded convex losses for distance-based metric learning without explicitly studying the dependence on D. Our upper bound (Thm. 1) improves this result by considering arbitrary (possibly non-convex) distance-based Lipschitz losses and explicitly revealing the dependence on D. [15] provides an alternate ERM analysis of norm-regularized metrics and parallels our norm-bounded analysis in Lemma 5. While they focus on analyzing a specific optimization criterion (thresholds on the hinge loss with norm-regularization), our result holds for general Lipschitz losses. Our Theorem 7 extends it further by explicitly showing when we can expect good generalization performance from a given dataset.
[16] provides an interesting analysis for robust algorithms by relying upon the existence of a partition
of the input space where each cell has similar training and test losses. Their sample complexity
bound scales with the partition size, which in general can be exponential in D.
It is worth emphasizing that none of these closely related works discuss the importance of, or leverage, the intrinsic structure in data for the metric learning problem. Our results in Section 4 formalize an intuitive notion of a dataset's intrinsic complexity for metric learning, and show sample complexity rates that are finely tuned to this metric learning complexity. Our lower bounds indicate that exploiting the structure is necessary to get rates that don't scale with the representation dimension D.
The classifier-based framework we discuss has parallels with the kernel learning and similarity learning literature. The typical focus in kernel learning is to analyze the generalization ability of linear separators in Hilbert spaces [17, 18]. Similarity learning, on the other hand, is concerned with finding a similarity function (that does not necessarily have a positive semidefinite structure) that can best assist in linear classification [19, 20]. Our work provides a complementary analysis for learning explicit linear transformations of the given representation space for arbitrary hypothesis classes.
Our theoretical analysis partly justifies the empirical success of norm-based regularization as well.
Our empirical results show that such regularization not only helps in designing new metric learning
algorithms [7, 8], but can even benefit existing metric learning algorithms in high-noise regimes.
Acknowledgments
We would like to thank Aditya Menon for insightful discussions, and the anonymous reviewers for
their detailed comments that helped improve the final version of this manuscript.
References
[1] K.Q. Weinberger and L.K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research (JMLR), 10:207–244, 2009.
[2] J.V. Davis, B. Kulis, P. Jain, S. Sra, and I.S. Dhillon. Information-theoretic metric learning. International Conference on Machine Learning (ICML), pages 209–216, 2007.
[3] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. Neural Information Processing Systems (NIPS), 2004.
[4] E.P. Xing, A.Y. Ng, M.I. Jordan, and S.J. Russell. Distance metric learning with application to clustering with side-information. Neural Information Processing Systems (NIPS), pages 505–512, 2002.
[5] B. McFee and G.R.G. Lanckriet. Metric learning to rank. International Conference on Machine Learning (ICML), 2010.
[6] B. Shaw, B. Huang, and T. Jebara. Learning a distance metric from a network. Neural Information Processing Systems (NIPS), 2011.
[7] D.K.H. Lim, B. McFee, and G.R.G. Lanckriet. Robust structural metric learning. International Conference on Machine Learning (ICML), 2013.
[8] M.T. Law, N. Thome, and M. Cord. Fantope regularization in metric learning. Computer Vision and Pattern Recognition (CVPR), 2014.
[9] M. Anthony and P. Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 1999.
[10] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 4:359–366, 1989.
[11] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[12] R. Jin, S. Wang, and Y. Zhou. Regularized distance metric learning: Theory and algorithm. Neural Information Processing Systems (NIPS), pages 862–870, 2009.
[13] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research (JMLR), 2:499–526, 2002.
[14] W. Bian and D. Tao. Learning a distance metric by empirical loss minimization. International Joint Conference on Artificial Intelligence (IJCAI), pages 1186–1191, 2011.
[15] Q. Cao, Z. Guo, and Y. Ying. Generalization bounds for metric and similarity learning. CoRR, abs/1207.5437, 2013.
[16] A. Bellet and A. Habrard. Robustness and generalization for metric learning. CoRR, abs/1209.1086, 2012.
[17] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. Conference on Computational Learning Theory (COLT), 2009.
[18] C. Cortes, M. Mohri, and A. Rostamizadeh. New generalization bounds for learning kernels. International Conference on Machine Learning (ICML), 2010.
[19] M-F. Balcan, A. Blum, and N. Srebro. Improved guarantees for learning via similarity functions. Conference on Computational Learning Theory (COLT), 2008.
[20] A. Bellet, A. Habrard, and M. Sebban. Similarity learning for provably accurate sparse linear classification. International Conference on Machine Learning (ICML), 2012.
[21] Z. Guo and Y. Ying. Generalization classification via regularized similarity learning. Neural Computation, 26(3):497–552, 2014.
[22] A. Bellet, A. Habrard, and M. Sebban. A survey on metric learning for feature vectors and structured data. CoRR, abs/1306.6709, 2014.
[23] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research (JMLR), 3:463–482, 2002.
[24] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing, Theory and Applications. 2010.
5,316 | 5,812 | Matrix Manifold Optimization for Gaussian Mixtures
Reshad Hosseini
School of ECE
College of Engineering
University of Tehran, Tehran, Iran
[email protected]
Suvrit Sra
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA.
[email protected]
Abstract
We take a new look at parameter estimation for Gaussian Mixture Models (GMMs).
Specifically, we advance Riemannian manifold optimization (on the manifold of
positive definite matrices) as a potential replacement for Expectation Maximization (EM), which has been the de facto standard for decades. An out-of-the-box
invocation of Riemannian optimization, however, fails spectacularly: it obtains
the same solution as EM, but vastly slower. Building on intuition from geometric
convexity, we propose a simple reformulation that has remarkable consequences:
it makes Riemannian optimization not only match EM (a nontrivial result on its
own, given the poor record nonlinear programming has had against EM), but also
outperform it in many settings. To bring our ideas to fruition, we develop a well-tuned Riemannian LBFGS method that proves superior to known competing methods (e.g., Riemannian conjugate gradient). We hope that our results encourage a
wider consideration of manifold optimization in machine learning and statistics.
1 Introduction
Gaussian Mixture Models (GMMs) are a mainstay in a variety of areas, including machine learning
and signal processing [4, 10, 16, 19, 21]. A quick literature search reveals that for estimating parameters of a GMM the Expectation Maximization (EM) algorithm [9] is still the de facto choice.
Over the decades, other numerical approaches have also been considered [24], but methods such as
conjugate gradients, quasi-Newton, and Newton have been noted to be usually inferior to EM [34].
The key difficulty of applying standard nonlinear programming methods to GMMs is the positive
definiteness (PD) constraint on covariances. Although an open subset of Euclidean space, this constraint can be difficult to impose, especially in higher dimensions. When approaching the boundary
of the constraint set, convergence speed of iterative methods can also get adversely affected. A partial remedy is to remove the PD constraint by using Cholesky decompositions, e.g., as exploited in
semidefinite programming [7]. It is believed [30] that in general, the nonconvexity of this decomposition adds more stationary points and possibly spurious local minima.¹ Another possibility is
to formulate the PD constraint via a set of smooth convex inequalities [30] and apply interior-point
methods. But such sophisticated methods can be far slower (on several statistical problems)
than simpler EM-like iterations, especially for higher dimensions [27].
Since the key difficulty arises from the PD constraint, an appealing idea is to note that PD matrices
form a Riemannian manifold [3, Ch.6] and to invoke Riemannian manifold optimization [1, 6].
Indeed, if we operate on the manifold², we implicitly satisfy the PD constraint, and may have a
better chance at focusing on likelihood maximization. While attractive, this line of thinking also
fails: an out-of-the-box invocation of manifold optimization is also vastly inferior to EM. Thus, we
need to think harder before challenging the hegemony of EM. We outline a new approach below.
¹ Remarkably, using Cholesky with the reformulation in §2.2 does not add spurious local minima to GMMs.
² Equivalently, on the interior of the constraint set, as is done by interior point methods (their nonconvex versions); though these turn out to be slow too, as they are second-order methods.
Key idea. Intuitively, the mismatch is in the geometry. For GMMs, the M-step of EM is a Euclidean
convex optimization problem, whereas the GMM log-likelihood is not manifold convex³ even for a
single Gaussian. If we could reformulate the likelihood so that the single component maximization
task (which is the analog of the M-step of EM for GMMs) becomes manifold convex, it might have a
substantial empirical impact. This intuition supplies the missing link, and finally makes Riemannian
manifold optimization not only match EM but often also greatly outperform it.
To summarize, the key contributions of our paper are the following:
• Introduction of Riemannian manifold optimization for GMM parameter estimation, for which we show how a reformulation based on geodesic convexity is crucial to empirical success.
• Development of a Riemannian LBFGS solver; here, our main contribution is the implementation of a powerful line-search procedure, which ensures convergence and makes LBFGS outperform both EM and manifold conjugate gradients. This solver may be of independent interest.
We provide substantive experimental evidence on both synthetic and real data. We compare manifold optimization, EM, and unconstrained Euclidean optimization that reformulates the problem
using Cholesky factorization of inverse covariance matrices. Our results show that manifold optimization performs well across a wide range of parameter values and problem sizes. It is much less
sensitive to overlapping data than EM, and displays much less variability in running times.
Our results are quite encouraging, and we believe that manifold optimization could open new algorithmic avenues for mixture models, and perhaps other statistical estimation problems.
Note. To aid reproducibility of our results, MATLAB implementations of our methods are available as part of the MIXEST toolbox developed by our group [12]. The manifold CG method that we use is directly based on the excellent toolkit MANOPT [6].
Related work. Summarizing published work on EM is clearly impossible. So, let us briefly mention a few lines of related work. Xu and Jordan [34] examine several aspects of EM for GMMs and
counter the claims of Redner and Walker [24], who claimed EM to be inferior to generic second-order nonlinear programming techniques. However, it is now well known (e.g., [34]) that EM can attain good likelihood values rapidly, and scale to much larger problems than are amenable to second-order methods. Local convergence analysis of EM is available in [34], with more refined results
in [18], who show that when data have low overlap, EM can converge locally superlinearly. Our
paper develops Riemannian LBFGS, which can also achieve local superlinear convergence.
For GMMs some innovative gradient-based methods have also been suggested [22, 26], where the
PD constraint is handled via a Cholesky decomposition of covariance matrices. However, these
works report results only for low-dimensional problems and (near) spherical covariances.
The idea of using manifold optimization for GMMs is new, though manifold optimization by itself is
a well-developed subject. A classic reference is [29]; a more recent work is [1]; and even a MATLAB
toolbox exists [6]. In machine learning, manifold optimization has witnessed increasing interest⁴,
e.g., for low-rank optimization [15, 31], or optimization based on geodesic convexity [27, 33].
2 Background and problem setup
The key object in this paper is the Gaussian Mixture Model (GMM), whose probability density is
$$p(x) := \sum_{j=1}^{K} \alpha_j \, p_N(x; \mu_j, \Sigma_j), \qquad x \in \mathbb{R}^d,$$
and where $p_N$ is a (multivariate) Gaussian with mean $\mu \in \mathbb{R}^d$ and covariance $\Sigma \succ 0$. That is,
$$p_N(x; \mu, \Sigma) := \det(\Sigma)^{-1/2} (2\pi)^{-d/2} \exp\big(-\tfrac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\big).$$
Given i.i.d. samples $\{x_1, \ldots, x_n\}$, we wish to estimate $\{\hat{\mu}_j \in \mathbb{R}^d, \hat{\Sigma}_j \succ 0\}_{j=1}^K$ and weights $\hat{\alpha} \in \Delta_K$, the K-dimensional probability simplex. This leads to the GMM optimization problem
$$\max_{\alpha \in \Delta_K, \{\mu_j, \Sigma_j \succ 0\}_{j=1}^K} \; \sum_{i=1}^{n} \log\Big(\sum_{j=1}^{K} \alpha_j \, p_N(x_i; \mu_j, \Sigma_j)\Big). \qquad (2.1)$$
³ That is, convex along geodesic curves on the PD manifold.
⁴ Manifold optimization should not be confused with "manifold learning", a separate problem altogether.
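For concreteness, here is a minimal, numerically stable evaluation of objective (2.1); it is a direct transcription of the formula, not code from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gmm_log_likelihood(X, alphas, mus, Sigmas):
    """Objective (2.1): sum_i log sum_j alpha_j p_N(x_i; mu_j, Sigma_j),
    computed in log-space to avoid underflow."""
    n, K = X.shape[0], len(alphas)
    log_terms = np.empty((n, K))
    for j in range(K):
        log_terms[:, j] = np.log(alphas[j]) + multivariate_normal.logpdf(
            X, mean=mus[j], cov=Sigmas[j])
    return logsumexp(log_terms, axis=1).sum()
```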
Solving Problem (2.1) can in general require exponential time [20].⁵ However, our focus is more
pragmatic: similar to EM, we also seek to efficiently compute local solutions. Our methods are set
in the framework of manifold optimization [1, 29]; so let us now recall some material on manifolds.
2.1 Manifolds and geodesic convexity
A smooth manifold is a non-Euclidean space that locally resembles Euclidean space [17]. For optimization, it is more convenient to consider Riemannian manifolds (smooth manifolds equipped with an inner product on the tangent space at each point). These manifolds possess structure that allows one to extend the usual nonlinear optimization algorithms [1, 29] to them.
Algorithms on manifolds often rely on geodesics, i.e., curves that join points along shortest paths. Geodesics help generalize Euclidean convexity to geodesic convexity. In particular, say $\mathcal{M}$ is a Riemannian manifold, and $x, y \in \mathcal{M}$; also let $\gamma$ be a geodesic joining $x$ to $y$, such that
$$\gamma_{xy} : [0, 1] \to \mathcal{M}, \qquad \gamma_{xy}(0) = x, \quad \gamma_{xy}(1) = y.$$
Then, a set $\mathcal{A} \subseteq \mathcal{M}$ is geodesically convex if for all $x, y \in \mathcal{A}$ there is a geodesic $\gamma_{xy}$ contained within $\mathcal{A}$. Further, a function $f : \mathcal{A} \to \mathbb{R}$ is geodesically convex if for all $x, y \in \mathcal{A}$, the composition $f \circ \gamma_{xy} : [0, 1] \to \mathbb{R}$ is convex in the usual sense.
The manifold of interest to us is $\mathbb{P}^d$, the manifold of $d \times d$ symmetric positive definite matrices. At any point $\Sigma \in \mathbb{P}^d$, the tangent space is isomorphic to the set of symmetric matrices, and the Riemannian metric at $\Sigma$ is given by $\mathrm{tr}(\Sigma^{-1} d\Sigma \, \Sigma^{-1} d\Sigma)$. This metric induces the geodesic [3, Ch. 6]
$$\gamma_{\Sigma_1, \Sigma_2}(t) := \Sigma_1^{1/2} \big(\Sigma_1^{-1/2} \Sigma_2 \Sigma_1^{-1/2}\big)^t \Sigma_1^{1/2}, \qquad 0 \leq t \leq 1.$$
Thus, a function $f : \mathbb{P}^d \to \mathbb{R}$ is geodesically convex on a set $\mathcal{A}$ if it satisfies
$$f(\gamma_{\Sigma_1, \Sigma_2}(t)) \leq (1 - t) f(\Sigma_1) + t f(\Sigma_2), \qquad t \in [0, 1], \; \Sigma_1, \Sigma_2 \in \mathcal{A}.$$
Such functions can be nonconvex in the Euclidean sense, but are globally optimizable due to
geodesic convexity. This property has been important in some matrix theoretic applications [3, 28],
and has gained more extensive coverage in several recent works [25, 27, 33].
We emphasize that even though the mixture cost (2.1) is not geodesically convex, for GMM optimization geodesic convexity seems to play a crucial role, and it has a huge impact on convergence
speed. This behavior is partially expected and analogous to EM, where a convex M-Step makes the
overall method much more practical. Let us now use this intuition to elicit geodesic convexity.
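As a small illustration, the sketch below evaluates the geodesic $\gamma_{\Sigma_1,\Sigma_2}(t)$ defined above; the fractional matrix power is computed through an eigendecomposition of the PD middle factor. It is a transcription of the formula, not code from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def pd_geodesic(S1, S2, t):
    """gamma_{S1,S2}(t) = S1^{1/2} (S1^{-1/2} S2 S1^{-1/2})^t S1^{1/2}."""
    R = np.real(sqrtm(S1))          # S1^{1/2}
    Rinv = np.linalg.inv(R)         # S1^{-1/2}
    mid = Rinv @ S2 @ Rinv          # positive definite middle factor
    w, V = np.linalg.eigh(mid)
    mid_t = (V * w**t) @ V.T        # fractional power via eigenvalues
    return R @ mid_t @ R
```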
2.2 Problem reformulation
We begin with parameter estimation for a single Gaussian: although this has a closed-form solution (which ultimately benefits EM), it requires more subtle handling when using manifold optimization. Consider the following maximum likelihood parameter estimation for a single Gaussian:
$$\max_{\mu, \Sigma \succ 0} \; \mathcal{L}(\mu, \Sigma) := \sum_{i=1}^{n} \log p_N(x_i; \mu, \Sigma). \qquad (2.2)$$
Although (2.2) is a Euclidean convex problem, it is not geodesically convex on its domain $\mathbb{R}^d \times \mathbb{P}^d$, which makes it geometrically handicapped when applying manifold optimization. To overcome this problem, we invoke a simple reparametrization⁶ that has far-reaching impact. More precisely, we augment the sample vectors $x_i$ to instead consider $y_i^T = [x_i^T \; 1]$. Therewith, (2.2) turns into
$$\max_{S \succ 0} \; \hat{\mathcal{L}}(S) := \sum_{i=1}^{n} \log q_N(y_i; S), \qquad \text{where } q_N(y_i; S) := \sqrt{2\pi} \exp(\tfrac{1}{2}) \, p_N(y_i; 0, S). \qquad (2.3)$$
Proposition 1 states the key property of (2.3).
Proposition 1. The map $\phi(S) \equiv -\hat{\mathcal{L}}(S)$, where $\hat{\mathcal{L}}(S)$ is as in (2.3), is geodesically convex.
We omit the proof due to space limits; see [13] for details. Alternatively, see [28] for more general results on geodesic convexity.
Theorem 2.1 shows that the solution to (2.3) yields the solution to the original problem (2.2) too.
⁵ Though under very strong assumptions, it has polynomial smoothed complexity [11].
⁶ This reparametrization in itself is probably folklore; its role in GMM optimization is what is crucial here.
[Figure 1 has two panels — (a) Single Gaussian and (b) Mixture of seven Gaussians — plotting average log-likelihood against time (seconds, log scale) for LBFGS and CG on the reformulated and original MVN objectives.]
Figure 1: The effect of the reformulation on the convergence speed of manifold CG and manifold LBFGS methods (d = 35); note that the x-axis (time) is on a logarithmic scale.
Theorem 2.1. If $\mu^*, \Sigma^*$ maximize (2.2), and if $S^*$ maximizes (2.3), then $\hat{\mathcal{L}}(S^*) = \mathcal{L}(\mu^*, \Sigma^*)$ for
$$S^* = \begin{pmatrix} \Sigma^* + \mu^* \mu^{*T} & \mu^* \\ \mu^{*T} & 1 \end{pmatrix}.$$
Proof. We express $S$ in terms of new variables $U$, $t$ and $s$ by writing
$$S = \begin{pmatrix} U + s t t^T & s t \\ s t^T & s \end{pmatrix}.$$
The objective function $\hat{\mathcal{L}}(S)$ in terms of the new parameters becomes
$$\hat{\mathcal{L}}(U, t, s) = \frac{n}{2} - \frac{nd}{2}\log(2\pi) - \frac{n}{2}\log s - \frac{n}{2}\log\det(U) - \sum_{i=1}^{n} \frac{1}{2}(x_i - t)^T U^{-1} (x_i - t) - \frac{n}{2s}.$$
Optimizing $\hat{\mathcal{L}}$ over $s > 0$ we see that $s^* = 1$ must hold. Hence, the objective reduces to a d-dimensional Gaussian log-likelihood, for which clearly $U^* = \Sigma^*$ and $t^* = \mu^*$.
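Theorem 2.1 can also be checked numerically: the maximizer of (2.3) is the sample second-moment matrix $\frac{1}{n}\sum_i y_i y_i^T$ (the standard MLE for a zero-mean Gaussian with free covariance), whose blocks recover the sample mean and the 1/n-normalized sample covariance. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.multivariate_normal([1.0, -2.0, 0.5], np.diag([1.0, 2.0, 0.5]), size=n)

Y = np.hstack([X, np.ones((n, 1))])              # augmented samples y_i^T = [x_i^T 1]
S_star = Y.T @ Y / n                             # maximizer of (2.3)

mu_hat = X.mean(axis=0)                          # MLE of mu
Sigma_hat = (X - mu_hat).T @ (X - mu_hat) / n    # MLE of Sigma (1/n normalization)

# Block structure of Theorem 2.1: S* = [[Sigma + mu mu^T, mu], [mu^T, 1]]
assert np.allclose(S_star[:d, :d], Sigma_hat + np.outer(mu_hat, mu_hat))
assert np.allclose(S_star[:d, d], mu_hat)
assert np.isclose(S_star[d, d], 1.0)
```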
Theorem 2.1 shows that reformulation (2.3) is "faithful," as it leaves the optimum unchanged. Theorem 2.2 proves a local version of this result for GMMs.
Theorem 2.2. A local maximum of the reparameterized GMM log-likelihood
$$\hat{\mathcal{L}}(\{S_j\}_{j=1}^K) := \sum_{i=1}^{n} \log\Big(\sum_{j=1}^{K} \alpha_j \, q_N(y_i; S_j)\Big)$$
is a local maximum of the original log-likelihood
$$\mathcal{L}(\{\mu_j, \Sigma_j\}_{j=1}^K) := \sum_{i=1}^{n} \log\Big(\sum_{j=1}^{K} \alpha_j \, p_N(x_i; \mu_j, \Sigma_j)\Big).$$
The proof can be found in [13].
Theorem 2.2 shows that we can replace problem (2.1) by one whose local maxima agree with those of (2.1), and whose individual components are geodesically convex. Figure 1 shows the true import of our reformulation: the dramatic impact on the empirical performance of Riemannian Conjugate-Gradient (CG) and Riemannian LBFGS for GMMs is unmistakable.
The final technical piece is to replace the simplex constraint $\alpha \in \Delta_K$ to make the problem unconstrained. We do this via a commonly used change of variables [14]: $\eta_k = \log\frac{\alpha_k}{\alpha_K}$ for $k = 1, \ldots, K-1$. Assuming $\eta_K = 0$ is a constant, the final GMM optimization problem is:
$$\max_{\{S_j \succ 0\}_{j=1}^K, \{\eta_j\}_{j=1}^{K-1}} \; \hat{\mathcal{L}}(\{S_j\}_{j=1}^K, \{\eta_j\}_{j=1}^{K-1}) := \sum_{i=1}^{n} \log \sum_{j=1}^{K} \frac{\exp(\eta_j)}{\sum_{k=1}^{K} \exp(\eta_k)} \, q_N(y_i; S_j). \qquad (2.4)$$
We view (2.4) as a manifold optimization problem; specifically, it is an optimization problem on the product manifold $\prod_{j=1}^{K} \mathbb{P}^d \times \mathbb{R}^{K-1}$. Let us see how to solve it.
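A minimal sketch of evaluating objective (2.4) on augmented data follows; the softmax reparameterization of the weights is exactly the change of variables above, while the data layout is an assumption for exposition.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def reformulated_gmm_cost(Y, S_list, eta):
    """Objective (2.4). Y has rows y_i^T = [x_i^T 1]; eta has length K-1
    with eta_K = 0 held fixed. Returns the value to be maximized."""
    n, K = Y.shape[0], len(S_list)
    dp1 = Y.shape[1]                                  # d + 1
    eta_full = np.append(eta, 0.0)
    log_alpha = eta_full - logsumexp(eta_full)        # log of softmax weights
    log_terms = np.empty((n, K))
    for j in range(K):
        # log q_N(y; S) = 0.5*log(2*pi) + 0.5 + log p_N(y; 0, S)
        log_terms[:, j] = (log_alpha[j] + 0.5 * np.log(2 * np.pi) + 0.5
                           + multivariate_normal.logpdf(
                                 Y, mean=np.zeros(dp1), cov=S_list[j]))
    return logsumexp(log_terms, axis=1).sum()
```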
3 Manifold Optimization
In unconstrained Euclidean optimization, typically one iteratively (i) finds a descent direction, and (ii) performs a line-search to obtain sufficient decrease and ensure convergence. On a Riemannian manifold, the descent direction is computed on the tangent space (this space varies smoothly as one moves along the manifold). At a point $X$, the tangent space $T_X$ is the approximating vector space (see Fig. 2). Given a descent direction $\xi_X \in T_X$, line-search is performed along a smooth curve on the manifold (red curve in Fig. 2). The derivative of this curve at $X$ equals the descent direction $\xi_X$. We refer the reader to [1, 29] for an in-depth introduction to manifold optimization.
Successful large-scale Euclidean methods such as conjugate-gradient and LBFGS combine gradients at the current point with gradients and descent directions from previous points to obtain a new descent direction. To adapt such algorithms to manifolds, in addition to defining gradients on manifolds, we also need to define how to transport vectors in a tangent space at one point to vectors in a different tangent space at another point.

[Figure 2: Visualization of line-search on a manifold: X is a point on the manifold, T_X is the tangent space at the point X, ξ_X is a descent direction at X; the red curve is the curve along which line-search is performed.]

On Riemannian manifolds, the gradient is simply a direction in the tangent space, where the inner product of the gradient with another direction in the tangent space gives the directional derivative of the function. Formally, if $g_X$ defines the inner product in the tangent space $T_X$, then
$$Df(X)\xi = g_X(\mathrm{grad} f(X), \xi), \qquad \text{for } \xi \in T_X.$$

Given a descent direction in the tangent space, the curve along which we perform line-search can be a geodesic. A map that takes the direction and a step length to a corresponding point on the geodesic is called an exponential map. Riemannian manifolds are also equipped with a natural way of transporting vectors along geodesics, which is called parallel transport. Intuitively, a parallel transport is a differential map with zero derivative along the geodesics. Using the above ideas, Algorithm 1 sketches a generic manifold optimization algorithm.
Algorithm 1: Sketch of an optimization algorithm (CG, LBFGS) to minimize $f(X)$ on a manifold
Given: Riemannian manifold $\mathcal{M}$ with Riemannian metric $g$; parallel transport $\mathcal{T}$ on $\mathcal{M}$; exponential map $R$; initial value $X_0$; a smooth function $f$
for $k = 0, 1, \ldots$ do
    Obtain a descent direction $\xi_k$ based on stored information and $\mathrm{grad} f(X_k)$, using metric $g$ and transport $\mathcal{T}$
    Use line-search to find a step-length $\alpha$ that satisfies appropriate (descent) conditions
    Calculate the retraction / update $X_{k+1} = R_{X_k}(\alpha \xi_k)$
    Based on the memory and needs of the algorithm, store $X_k$, $\mathrm{grad} f(X_k)$ and $\alpha \xi_k$
end for
return estimated minimum $X_k$
Note that Cartesian products of Riemannian manifolds are again Riemannian, with the exponential map, gradient and parallel transport defined as the Cartesian product of the individual expressions; the inner product is defined as the sum of the inner products of the components in their respective manifolds.

Definition | Expression for PSD matrices
Tangent space | Space of symmetric matrices
Metric between two tangent vectors $\xi, \eta$ at $\Sigma$ | $g_\Sigma(\xi, \eta) = \mathrm{tr}(\Sigma^{-1} \xi \Sigma^{-1} \eta)$
Gradient at $\Sigma$ if Euclidean gradient is $\nabla f(\Sigma)$ | $\mathrm{grad} f(\Sigma) = \frac{1}{2}\Sigma(\nabla f(\Sigma) + \nabla f(\Sigma)^T)\Sigma$
Exponential map at point $\Sigma$ in direction $\xi$ | $R_\Sigma(\xi) = \Sigma \exp(\Sigma^{-1} \xi)$
Parallel transport of tangent vector $\xi$ from $\Sigma_1$ to $\Sigma_2$ | $\mathcal{T}_{\Sigma_1, \Sigma_2}(\xi) = E \xi E^T$, $E = (\Sigma_2 \Sigma_1^{-1})^{1/2}$
Table 1: Summary of key Riemannian objects for the PD matrix manifold.

Different variants of Riemannian LBFGS can be obtained depending on where to perform the vector transport. We found that the version developed in [28] gives the best performance, once we combine it with a line-search algorithm satisfying Wolfe conditions. We present the crucial details below.
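The sketch below transcribes the four operations of Table 1 into numpy; it is a restatement of the formulas above, not the MixEst or Manopt implementation.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def metric(Sigma, xi, eta):
    """g_Sigma(xi, eta) = tr(Sigma^{-1} xi Sigma^{-1} eta)."""
    Si = np.linalg.inv(Sigma)
    return np.trace(Si @ xi @ Si @ eta)

def riemannian_grad(Sigma, egrad):
    """Riemannian gradient from Euclidean gradient: 0.5 Sigma (G + G^T) Sigma."""
    return 0.5 * Sigma @ (egrad + egrad.T) @ Sigma

def exp_map(Sigma, xi):
    """R_Sigma(xi) = Sigma expm(Sigma^{-1} xi); stays PD for symmetric xi."""
    return Sigma @ expm(np.linalg.solve(Sigma, xi))

def transport(S1, S2, xi):
    """Parallel transport: E xi E^T with E = (S2 S1^{-1})^{1/2}."""
    E = np.real(sqrtm(S2 @ np.linalg.inv(S1)))
    return E @ xi @ E.T
```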
3.1 Line-search algorithm satisfying Wolfe conditions
To ensure Riemannian LBFGS always produces a descent direction, it is necessary to ensure that the line-search algorithm satisfies the Wolfe conditions [25]. These conditions are given by:
$$f(R_{X_k}(\alpha \xi_k)) \leq f(X_k) + c_1 \alpha \, Df(X_k)\xi_k, \qquad (3.1)$$
$$Df(X_{k+1})\xi_{k+1} \geq c_2 \, Df(X_k)\xi_k, \qquad (3.2)$$
where $0 < c_1 < c_2 < 1$. Note that $\alpha \, Df(X_k)\xi_k = g_{X_k}(\mathrm{grad} f(X_k), \alpha \xi_k)$, i.e., the derivative of $f$ at $X_k$ in the direction $\alpha \xi_k$ is the inner product of the descent direction and the gradient of the function. Practical line-search algorithms implement a stronger (Wolfe) version of (3.2) that enforces
$$|Df(X_{k+1})\xi_{k+1}| \leq c_2 \, |Df(X_k)\xi_k|.$$
Similar to the Euclidean case, our line-search algorithm is also divided into two phases: bracketing and zooming [23]. During bracketing, we compute an interval such that a point satisfying the Wolfe conditions can be found in this interval. In the zooming phase, we obtain such a point in the determined interval. The one-dimensional function and its derivative used by the line-search are $\phi(\alpha) = f(R_{X_k}(\alpha \xi_k))$ and $\phi'(\alpha)$, with $\phi'(0) = Df(X_k)\xi_k$.
The algorithm is essentially the same as the line-search in Euclidean space; the reader can also see its manifold incarnation in [13]. The theory behind why this algorithm is guaranteed to find a step-length satisfying the (strong) Wolfe conditions can be found in [23].
A good choice of initial step-length $\alpha_1$ can greatly speed up the line-search. We propose the following choice, which turns out to be quite effective in our experiments:
$$\alpha_1 = 2\,\frac{f(X_k) - f(X_{k-1})}{Df(X_k)\xi_k}. \qquad (3.3)$$
Equation (3.3) is obtained by finding the $\alpha^*$ that minimizes a quadratic approximation of the function along the geodesic through the previous point (based on $f(X_{k-1})$, $f(X_k)$ and $Df(X_{k-1})\xi_{k-1}$):
$$\alpha^* = 2\,\frac{f(X_k) - f(X_{k-1})}{Df(X_{k-1})\xi_{k-1}}. \qquad (3.4)$$
Then, assuming that the first-order change will be the same as in the previous step, we write
$$\alpha^* \, Df(X_{k-1})\xi_{k-1} \approx \alpha_1 \, Df(X_k)\xi_k. \qquad (3.5)$$
Combining (3.4) and (3.5), we obtain our estimate $\alpha_1$ expressed in (3.3). Nocedal and Wright [23] suggest using either the $\alpha^*$ of (3.4) for the initial step-length $\alpha_1$, or using (3.5) where $\alpha^*$ is set to the step-length obtained in the line-search at the previous point. We observed that if one instead uses (3.3), one obtains substantially better performance than with the other two approaches.
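The initial step-length rule (3.3) is a one-liner; in the sketch below, `directional_deriv` stands for $Df(X_k)\xi_k$ and is assumed negative (a descent direction), so the returned step is positive after a successful previous step.

```python
def initial_step(f_prev, f_curr, directional_deriv):
    """Eq. (3.3): alpha_1 = 2 (f(X_k) - f(X_{k-1})) / (Df(X_k) xi_k).
    Both numerator and denominator are negative after a descent step."""
    return 2.0 * (f_curr - f_prev) / directional_deriv
```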
4 Experimental Results
We have performed numerous experiments to examine the effectiveness of our method. Below we report performance comparisons on both real and simulated data. In all experiments, we initialize the mixture parameters for all methods using k-means++ [2]. All methods also use the same termination criteria: they stop either when the difference of average log-likelihood (i.e., $\frac{1}{n}\times$ log-likelihood) between consecutive iterations falls below $10^{-6}$, or when the number of iterations exceeds 1500. More extensive empirical results can be found in the longer version of this paper [13].
Simulated Data
EM's performance is well-known to depend on the degree of separation of the mixture components [18, 34]. To assess the impact of this separation on our methods, we generate data as proposed in [8, 32]. The distributions are chosen so their means satisfy the following inequality:
$$\forall\, i \neq j : \quad \|m_i - m_j\| \geq c \sqrt{\max_{i,j}\{\mathrm{tr}(\Sigma_i), \mathrm{tr}(\Sigma_j)\}},$$
where $c$ models the degree of separation.
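One plausible way to realize this separation condition, shown below, is rejection sampling of the means; the sampling box `scale` is an arbitrary assumption, and the exact generator used in [8, 32] may differ.

```python
import numpy as np

def sample_separated_means(K, d, c, Sigmas, scale=10.0, seed=None):
    """Rejection-sample K means with ||m_i - m_j|| >= c * sqrt(max tr(Sigma)).
    Purely illustrative: for large c this loop may take many attempts."""
    rng = np.random.default_rng(seed)
    thresh = c * np.sqrt(max(np.trace(S) for S in Sigmas))
    while True:
        means = rng.uniform(-scale, scale, size=(K, d))
        dists = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
        if all(dists[i, j] >= thresh
               for i in range(K) for j in range(i + 1, K)):
            return means
```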
            |  EM Original           |  LBFGS Reformulated    |  CG Reformulated       |  CG Original
            |  Time (s)        ALL   |  Time (s)        ALL   |  Time (s)        ALL   |  Time (s)         ALL
c=0.2, K=2  |  1.1 ± 0.4     -10.7   |  5.6 ± 2.7     -10.7   |  3.7 ± 1.5     -10.8   |  23.8 ± 23.7    -10.7
c=0.2, K=5  |  30.0 ± 45.5   -12.7   |  49.2 ± 35.0   -12.7   |  47.8 ± 40.4   -12.7   |  206.0 ± 94.2   -12.8
c=1,   K=2  |  0.5 ± 0.2     -10.4   |  3.1 ± 0.8     -10.4   |  2.6 ± 0.6     -10.4   |  25.6 ± 13.6    -10.4
c=1,   K=5  |  104.1 ± 113.8 -13.4   |  79.9 ± 62.8   -13.3   |  45.8 ± 30.4   -13.3   |  144.3 ± 48.1   -13.3
c=5,   K=2  |  0.2 ± 0.2     -11.0   |  3.4 ± 1.4     -11.0   |  2.8 ± 1.2     -11.0   |  43.2 ± 38.8    -11.0
c=5,   K=5  |  38.8 ± 65.8   -12.8   |  41.0 ± 45.7   -12.8   |  29.2 ± 36.3   -12.8   |  197.6 ± 118.2  -12.8
Table 2: Speed and average log-likelihood (ALL) comparisons for d = 20, e = 10 (each row reports values averaged over 20 runs over different datasets, so the ALL values are not comparable to each other).
            |  EM Original           |  LBFGS Reformulated    |  CG Reformulated       |  CG Original
            |  Time (s)        ALL   |  Time (s)        ALL   |  Time (s)        ALL   |  Time (s)         ALL
c=0.2, K=2  |  65.7 ± 33.1    17.6   |  39.4 ± 19.3    17.6   |  46.4 ± 29.9    17.6   |  64.0 ± 50.4     17.6
c=0.2, K=5  |  365.6 ± 138.8  17.5   |  160.9 ± 65.9   17.5   |  207.6 ± 46.9   17.5   |  279.8 ± 169.3   17.5
c=1,   K=2  |  6.0 ± 7.1      17.0   |  12.9 ± 13.0    17.0   |  15.7 ± 17.5    17.0   |  42.5 ± 21.9     17.0
c=1,   K=5  |  40.5 ± 61.1    16.2   |  51.6 ± 39.5    16.2   |  63.7 ± 45.8    16.2   |  203.1 ± 96.3    16.2
c=5,   K=2  |  0.2 ± 0.1      17.1   |  3.0 ± 0.5      17.1   |  2.8 ± 0.7      17.1   |  19.6 ± 8.2      17.1
c=5,   K=5  |  17.5 ± 45.6    16.1   |  20.6 ± 22.5    16.1   |  20.3 ± 24.1    16.1   |  93.9 ± 42.4     16.1
Table 3: Speed and ALL comparisons for d = 20, e = 1.
            |  CG Cholesky Original                            |  CG Cholesky Reformulated
            |  e=1: Time (s)    ALL  |  e=10: Time (s)    ALL  |  e=1: Time (s)    ALL  |  e=10: Time (s)    ALL
c=0.2, K=2  |  101.5 ± 34.1    17.6  |  113.9 ± 48.1    -10.7  |  36.7 ± 9.8      17.6  |  23.5 ± 11.9     -10.7
c=0.2, K=5  |  627.1 ± 247.3   17.5  |  521.9 ± 186.9   -12.7  |  156.7 ± 81.1    17.5  |  106.7 ± 39.7    -12.6
c=1,   K=2  |  135.2 ± 65.4    16.9  |  110.9 ± 51.8    -10.4  |  38.0 ± 14.5     16.9  |  49.0 ± 17.8     -10.4
c=1,   K=5  |  1016.9 ± 299.8  16.2  |  358.0 ± 155.5   -13.3  |  266.7 ± 140.5   16.2  |  279.8 ± 111.0   -13.4
c=5,   K=2  |  55.2 ± 27.9     17.1  |  86.7 ± 47.2     -11.0  |  60.2 ± 20.8     17.1  |  177.6 ± 147.6   -11.0
c=5,   K=5  |  371.7 ± 281.4   16.1  |  337.7 ± 178.4   -12.8  |  270.2 ± 106.5   16.1  |  562.1 ± 242.7   -12.9
Table 4: Speed and ALL for applying CG on Cholesky-factorized problems with d = 20.
Since mixtures with high eccentricity (i.e., a high ratio of the largest eigenvalue of the covariance matrix to its smallest eigenvalue) have smaller overlap, in addition to the high-eccentricity case e = 10 we also test the (spherical) case where e = 1. We test three levels of separation: c = 0.2 (low), c = 1 (medium), and c = 5 (high). We test two different numbers of mixture components, K = 2 and K = 5; we consider experiments with larger values of K in our experiments on real data. For e = 10, the results for data with dimensionality d = 20 are given in Table 2. The results are obtained after running with 20 different random choices of parameters for each configuration. It is apparent that the performance of EM and Riemannian optimization with our reformulation is very similar. The variance of computation time shown by Riemannian optimization is, however, notably smaller. Manifold optimization on the non-reformulated problem (last column) performs the worst.
In another set of simulated-data experiments, we apply the different algorithms to spherical data (e = 1); the results are shown in Table 3. The interesting instance here is the case of low separation c = 0.2, where the condition number of the Hessian becomes large. As predicted by theory, EM converges very slowly in such a case; Table 3 confirms this claim. It is known that in this case, the performance of powerful optimization approaches like CG and LBFGS also degrades [23]. But both CG and LBFGS suffer less than EM, while LBFGS performs noticeably better than CG.
The Cholesky decomposition is a commonly suggested idea for dealing with the PD constraint. So, we also compare against unconstrained optimization (using Euclidean CG), where the inverse covariance matrices are Cholesky factorized; a sketch of this parameterization is given below. The results for the same data as in Tables 2 and 3 are reported in Table 4. Although the Cholesky-factorized problem proves to be much inferior to both EM and the manifold methods, our reformulation seems to also help it in several problem instances.
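For reference, a minimal sketch of this parameterization follows: the inverse covariance is written as $\Sigma^{-1} = LL^T$ with $L$ lower-triangular, turning the PD constraint into an unconstrained problem. The zero-mean, single-component setting is an illustrative simplification, not the paper's exact setup.

```python
import numpy as np

def nll_gaussian_cholesky(L_flat, X):
    """Negative log-likelihood of a zero-mean Gaussian with precision
    Sigma^{-1} = L L^T (L lower-triangular, packed into L_flat)."""
    n, d = X.shape
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = L_flat                           # unpack free parameters
    logdet_prec = 2.0 * np.sum(np.log(np.abs(np.diag(L))))   # log det Sigma^{-1}
    quad = np.sum((X @ L) ** 2)                              # sum_i x_i^T L L^T x_i
    return 0.5 * (n * d * np.log(2 * np.pi) - n * logdet_prec + quad)
```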
Real Data
We now present a performance evaluation on a natural image dataset, where mixtures of Gaussians were reported to be a good fit to the data [35]. We extracted 200,000 image patches of size 6×6 from images and subtracted the DC component, leaving us with 35-dimensional vectors. The performance of the different algorithms is reported in Table 5.
        |  EM Algorithm       |  LBFGS Reformulated |  CG Reformulated    |  CG Original         |  CG Cholesky Reformulated
        |  Time (s)     ALL   |  Time (s)     ALL   |  Time (s)     ALL   |  Time (s)      ALL   |  Time (s)     ALL
K = 2   |  16.61       29.28  |  14.23       29.28  |  17.52       29.28  |  947.35       29.28  |  476.77      29.28
K = 3   |  90.54       30.95  |  38.29       30.95  |  54.37       30.95  |  3051.89      30.95  |  1046.61     30.95
K = 4   |  165.77      31.65  |  106.53      31.65  |  153.94      31.65  |  6380.01      31.64  |  2673.21     31.65
K = 5   |  202.36      32.07  |  117.14      32.07  |  140.21      32.07  |  5262.27      32.07  |  3865.30     32.07
K = 6   |  228.80      32.36  |  245.74      32.35  |  281.32      32.35  |  10566.76     32.33  |  4771.36     32.35
K = 7   |  365.28      32.63  |  192.44      32.63  |  318.95      32.63  |  10844.52     32.63  |  6819.42     32.63
K = 8   |  596.01      32.81  |  332.85      32.81  |  536.94      32.81  |  14282.80     32.58  |  9306.33     32.81
K = 9   |  900.88      32.94  |  657.24      32.94  |  1449.52     32.95  |  15774.88     32.77  |  9383.98     32.94
K = 10  |  2159.47     33.05  |  658.34      33.06  |  1048.00     33.06  |  17711.87     33.03  |  7463.72     33.05
Table 5: Speed and ALL comparisons for natural image data, d = 35.
[Figure 3 has three panels plotting ALL* − ALL (log scale) against the number of function and gradient evaluations for EM (original MVN), LBFGS (reformulated MVN), and CG (reformulated MVN).]
Figure 3: Best ALL minus current ALL values with the number of function and gradient evaluations. Left: "magic telescope" (K = 5, d = 10). Middle: "year predict" (K = 6, d = 90). Right: natural images (K = 8, d = 35).
Similar to the simulated results, the performance of EM and manifold CG on the reformulated parameter space is similar. Manifold LBFGS converges notably faster (except for K = 6) than both EM and CG. Without our reformulation, the performance of the manifold methods degrades substantially. Note that for K = 8 and K = 9, CG without the reformulation stops prematurely because it hits the bound of a maximum of 1500 iterations, and therefore its ALL is smaller than that of the other two methods. The table also shows results for the Cholesky-factorized (and reformulated) problem. It is more than 10 times slower than manifold optimization. Optimizing the Cholesky-factorized (non-reformulated) problem is the slowest (not shown), and it always reaches the maximum number of iterations before finding the local minimum.
Fig. 3 depicts the typical behavior of our manifold optimization methods versus EM. The x-axis is the number of log-likelihood and gradient evaluations (or the number of E- and M-steps in EM). Fig. 3(a) and Fig. 3(b) are the results of fitting GMMs to the "magic telescope" and "year prediction" datasets.⁷ Fig. 3(c) is the result for the natural image data of Table 5. Apparently in the initial few iterations EM is faster, but the manifold optimization methods match EM within a few iterations. This is remarkable, given that manifold optimization methods need to perform line-search.
5 Conclusions and future work
We introduced Riemannian manifold optimization as an alternative to EM for fitting Gaussian mixture models. We demonstrated that for making manifold optimization succeed, i.e., to either match or outperform EM, it is necessary to represent the parameters in a different space and reformulate the cost function accordingly. Extensive experimentation with both simulated and real datasets yielded quite encouraging results, suggesting that manifold optimization could have the potential to open new algorithmic avenues for mixture modeling.
Several strands of practical importance are immediate (and are a part of our ongoing work): (i) extension to large-scale GMMs through stochastic optimization [5]; (ii) use of richer classes of priors with GMMs than the usual inverse Wishart priors (which are typically used because they make the M-step convenient), which is actually just one instance of a geodesically convex prior that our methods can handle; (iii) incorporation of penalties for avoiding tiny clusters, an idea that fits easily in our framework but not so easily in the EM framework. Finally, beyond GMMs, extension to other mixture models will be fruitful.
⁷ Available at the UCI machine learning dataset repository via https://archive.ics.uci.edu/ml/datasets
References
[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
[2] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms (SODA), pages 1027–1035, 2007.
[3] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[4] C. M. Bishop. Pattern recognition and machine learning. Springer, 2007.
[5] S. Bonnabel. Stochastic gradient descent on Riemannian manifolds. Automatic Control, IEEE Transactions on, 58(9):2217–2229, 2013.
[6] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. The Journal of Machine Learning Research, 15(1):1455–1459, 2014.
[7] S. Burer, R. D. Monteiro, and Y. Zhang. Solving semidefinite programs via nonlinear programming. Part I: Transformations and derivatives. Technical Report TR99-17, Rice University, Houston TX, 1999.
[8] S. Dasgupta. Learning mixtures of Gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on, pages 634–644. IEEE, 1999.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38, 1977.
[10] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2000.
[11] R. Ge, Q. Huang, and S. M. Kakade. Learning Mixtures of Gaussians in High Dimensions. arXiv:1503.00424, 2015.
[12] R. Hosseini and M. Mash'al. MixEst: An estimation toolbox for mixture models. arXiv preprint arXiv:1507.06065, 2015.
[13] R. Hosseini and S. Sra. Differential geometric optimization for Gaussian mixture models. arXiv:1506.07677, 2015.
[14] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[15] M. Journée, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327–2351, 2010.
[16] R. W. Keener. Theoretical Statistics. Springer Texts in Statistics. Springer, 2010.
[17] J. M. Lee. Introduction to Smooth Manifolds. Number 218 in GTM. Springer, 2012.
[18] J. Ma, L. Xu, and M. I. Jordan. Asymptotic convergence rate of the EM algorithm for Gaussian mixtures. Neural Computation, 12(12):2881–2907, 2000.
[19] G. J. McLachlan and D. Peel. Finite mixture models. John Wiley and Sons, New Jersey, 2000.
[20] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 93–102. IEEE, 2010.
[21] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[22] I. Naim and D. Gildea. Convergence of the EM algorithm for Gaussian mixtures with unbalanced mixing coefficients. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1655–1662, 2012.
[23] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.
[24] R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review, 26:195–239, 1984.
[25] W. Ring and B. Wirth. Optimization methods on Riemannian manifolds and their application to shape space. SIAM Journal on Optimization, 22(2):596–627, 2012.
[26] R. Salakhutdinov, S. T. Roweis, and Z. Ghahramani. Optimization with EM and Expectation-Conjugate-Gradient. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 672–679, 2003.
[27] S. Sra and R. Hosseini. Geometric optimisation on positive definite matrices for elliptically contoured distributions. In Advances in Neural Information Processing Systems, pages 2562–2570, 2013.
[28] S. Sra and R. Hosseini. Conic Geometric Optimization on the Manifold of Positive Definite Matrices. SIAM Journal on Optimization, 25(1):713–739, 2015.
[29] C. Udrişte. Convex functions and optimization methods on Riemannian manifolds. Kluwer, 1994.
[30] R. J. Vanderbei and H. Y. Benson. On formulating semidefinite programming problems as smooth convex nonlinear optimization problems. Technical report, 2000.
[31] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214–1236, 2013.
[32] J. J. Verbeek, N. Vlassis, and B. Kröse. Efficient greedy learning of Gaussian mixture models. Neural Computation, 15(2):469–485, 2003.
[33] A. Wiesel. Geodesic convexity and covariance estimation. IEEE Transactions on Signal Processing, 60(12):6182–6189, 2012.
[34] L. Xu and M. I. Jordan. On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8:129–151, 1996.
[35] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. In Advances in Neural Information Processing Systems, pages 1736–1744, 2012.
5,317 | 5,813 | Scale Up Nonlinear Component Analysis with
Doubly Stochastic Gradients
Bo Xie¹, Yingyu Liang², Le Song¹
¹ Georgia Institute of Technology
[email protected], [email protected]
² Princeton University
[email protected]
Abstract
Nonlinear component analysis such as kernel Principal Component Analysis
(KPCA) and kernel Canonical Correlation Analysis (KCCA) are widely used in
machine learning, statistics and data analysis, but they cannot scale up to big
datasets. Recent attempts have employed random feature approximations to convert the problem to the primal form for linear computational complexity. However, to obtain high quality solutions, the number of random features should be
the same order of magnitude as the number of data points, making such an approach
not directly applicable to the regime with millions of data points.
We propose a simple, computationally efficient, and memory friendly algorithm
based on the "doubly stochastic gradients" to scale up a range of kernel nonlinear
component analysis, such as kernel PCA, CCA and SVD. Despite the non-convex
nature of these problems, our method enjoys theoretical guarantees that it converges
at the rate Õ(1/t) to the global optimum, even for the top k eigen subspace.
Unlike many alternatives, our algorithm does not require explicit orthogonalization, which is infeasible on big datasets. We demonstrate the effectiveness and
scalability of our algorithm on large scale synthetic and real world datasets.
1
Introduction
Scaling up nonlinear component analysis has been challenging due to prohibitive computation and
memory requirements. Recently, methods such as Randomized Component Analysis (RCA) [12]
are able to scale to larger datasets by leveraging random feature approximation. Such methods approximate the kernel function by using explicit random feature mappings, then perform subsequent
steps in the primal form, resulting in linear computational complexity. Nonetheless, theoretical analysis [18, 12] shows that in order to get high quality results, the number of random features should
grow linearly with the number of data points. Experimentally, one often sees that the statistical
performance of the algorithm improves as one increases the number of random features.
Another approach to scale up the kernel component analysis is to use stochastic gradient descent and
online updates [15, 16]. These stochastic methods have also been extended to the kernel case [9, 5,
8]. They require much less computation than their batch counterparts, converge at an O(1/t) rate, and
are naturally applicable to the streaming data setting. Despite that, they share a severe drawback: all
data points used in the updates need to be saved, rendering them impractical for large datasets.
In this paper, we propose to use the "doubly stochastic gradients" for nonlinear component analysis.
This technique is a general framework for scaling up kernel methods [6] for convex problems and
has been successfully applied to many popular kernel machines such as kernel SVM, kernel ridge
regression, and Gaussian processes. It uses two types of stochastic approximation simultaneously:
random data points instead of the whole dataset (as in stochastic update rules), and random features
instead of the true kernel functions (as in RCA). These two approximations lead to the following
benefits: (1) Computation efficiency The key computation is the generation of a mini-batch of
random features and the evaluation of them on a mini-batch of data points, which is very efficient.
(2) Memory efficiency Instead of storing training data points, we just keep a small program for
regenerating the random features, and sample previously used random features according to prespecified random seeds. This leads to huge savings: the memory requirement up to step t is O(t),
independent of the dimension of the data. (3) Adaptability Unlike other approaches that can only
work with a fixed number of random features beforehand, doubly stochastic approach is able to
increase the model complexity by using more features when new data points arrive, and thus enjoys
the advantage of nonparametric methods.
Although at first glance our method appears similar to the approach in [6], the two methods are
fundamentally different. In [6], they address convex problems, whereas our problem is highly nonconvex. The convergence result in [6] crucially relies on the properties of convex functions, which
do not translate to our problem. Instead, our analysis centers around the stochastic update of power
iterations, which uses a different set of proof techniques.
In this paper, we make the following contributions.
General framework We show that the general framework of doubly stochastic updates can be
applied in various kernel component analysis tasks, including KPCA, KSVD, KCCA, etc..
Strong theoretical guarantee We prove that the finite time convergence rate of the doubly stochastic
approach is Õ(1/t).
This is a significant result since 1) the global convergence result is w.r.t. a nonconvex problem; 2) the guarantee is for update rules without explicit orthogonalization. Previous
works require explicit orthogonalization, which is impractical for kernel methods on large datasets.
Strong empirical performance Our algorithm can scale to datasets with millions of data points.
Moreover, the algorithm can often find much better solutions thanks to the ability to use many more
random features. We demonstrate such benefits on both synthetic and real world datasets.
Since kernel PCA is a typical task, we focus on it in the paper and provide a description of other
tasks in Section 4.3. Although we only state the guarantee for kernel PCA, the analysis naturally
carries over to the other tasks.
2
Related work
Many efforts have been devoted to scale up kernel methods. The random feature approach [17, 18]
approximates the kernel function with explicit random feature mappings and solves the problem
in primal form, thus circumventing the quadratic computational complexity. It has been applied
to various kernel methods [11, 6, 12], among which most related to our work is RCA [12]. One
drawback of RCA is that their theoretical guarantees are only for kernel matrix approximation: it
does not say anything about how close the solution obtained from RCA is to the true solution. In
contrast, we provide a finite time convergence rate of how our solution approaches the true solution.
In addition, even though a moderate size of random features can work well for tens of thousands of
data points, datasets with tens of millions of data points require many more random features. Our
online approach allows the number of random features, hence the flexibility of the function class,
to grow with the number of data points. This makes our method suitable for data streaming setting,
which is not possible for previous approaches.
Online algorithms for PCA have a long history. Oja proposed two stochastic update rules for approximating the first eigenvector and provided convergence proof in [15, 16], respectively. These
rules have been extended to the generalized Hebbian update rules [19, 20, 3] that compute the top
k eigenvectors (the subspace case). Similar ones have also been derived from the perspective of
optimization and stochastic gradient descent [20, 2]. They are further generalized to the kernel
case [9, 5, 8]. However, online kernel PCA needs to store all the training data, which is impractical for large datasets. Our doubly stochastic method avoids this problem by using random features
and keeping only a small program for regenerating previously used random features according to
pre-specified seeds. As a result, it can scale up to tens of millions of data points.
For finite time convergence rate, [3] proved the O(1/t) rate for the top eigenvector in linear PCA
using Oja's rule. For the same task, [21] proposed a noise-reduced PCA with a linear convergence rate,
where the rate is in terms of epochs, i.e., number of passes over the whole dataset. The noisy power
method presented in [7] provided linear convergence for a subspace, although it only converges
linearly to a constant error level. In addition, the updates require explicit orthogonalization, which
Algorithm 1: {α_i}_{i=1}^t = DSGD-KPCA(P(x), k)
Require: P(ω), φ_ω(x).
1: for i = 1, . . . , t do
2:   Sample x_i ∼ P(x).
3:   Sample ω_i ∼ P(ω) with seed i.
4:   h_i = Evaluate(x_i, {α_j}_{j=1}^{i−1}) ∈ R^k.
5:   α_i = η_i φ_{ω_i}(x_i) h_i.
6:   α_j = α_j − η_i (α_j⊤ h_i) h_i, for j = 1, . . . , i − 1.
7: end for

Algorithm 2: h = Evaluate(x, {α_i}_{i=1}^t)
Require: P(ω), φ_ω(x).
1: Set h = 0 ∈ R^k.
2: for i = 1, . . . , t do
3:   Sample ω_i ∼ P(ω) with seed i.
4:   h = h + φ_{ω_i}(x) α_i.
5: end for
is impractical for kernel methods. In comparison, our method converges in O(1/t) for a subspace,
without the need for orthogonalization.
3
Preliminaries
Kernels and Covariance Operators A kernel k(x, y) : X × X → R is a function that is positive-definite (PD), i.e., for all n > 1, c_1, . . . , c_n ∈ R, and x_1, . . . , x_n ∈ X, we have Σⁿ_{i,j=1} c_i c_j k(x_i, x_j) ≥ 0. A reproducing kernel Hilbert space (RKHS) F on X is a Hilbert space of functions from X to R. F is an RKHS if and only if there exists a k(x, x′) such that ∀x ∈ X, k(x, ·) ∈ F, and ∀f ∈ F, ⟨f(·), k(x, ·)⟩_F = f(x). Given P(x) and k(x, x′) with RKHS F, the covariance operator A : F → F is a linear self-adjoint operator defined as Af(·) := E_x[f(x) k(x, ·)], ∀f ∈ F, and furthermore ⟨g, Af⟩_F = E_x[f(x) g(x)], ∀g ∈ F.

Let F = (f_1(·), f_2(·), . . . , f_k(·)) be a list of k functions in the RKHS, and define the matrix-like notation AF(·) := (Af_1(·), . . . , Af_k(·)); F⊤AF is then a k × k matrix whose (i, j)-th element is ⟨f_i, Af_j⟩_F. The outer product of a function v ∈ F defines a linear operator vv⊤ such that (vv⊤)f(·) := ⟨v, f⟩_F v(·), ∀f ∈ F. Let V = (v_1(·), . . . , v_k(·)) be a list of k functions; the weighted sum of a set of linear operators can then be denoted in matrix-like notation as V Λ_k V⊤ := Σᵏ_{i=1} λ_i v_i v_i⊤, where Λ_k is a diagonal matrix with λ_i as its i-th diagonal entry.

Eigenfunctions and Kernel PCA A function v is an eigenfunction of A with the corresponding eigenvalue λ if Av(·) = λv(·). Given a set of eigenfunctions {v_i} and associated eigenvalues {λ_i}, where ⟨v_i, v_j⟩_F = δ_ij, we can write the eigendecomposition as A = V Λ_k V⊤ + V⊥ Λ⊥ V⊥⊤, where V is the list of the top k eigenfunctions, Λ_k is a diagonal matrix of the corresponding eigenvalues, V⊥ is the list of the remaining eigenfunctions, and Λ⊥ is a diagonal matrix of the remaining eigenvalues.

Kernel PCA aims to identify the top-k subspace V. In the finite data case, the empirical covariance operator is A = (1/n) Σ_i k(x_i, ·) ⊗ k(x_i, ·). According to the representer theorem, we have v_i = Σⁿ_{j=1} α_ij k(x_j, ·), where {α_i}ᵏ_{i=1} ⊆ Rⁿ are weights over the data points. Using Av(·) = λv(·) and the kernel trick, we have Kα_i = λ_i α_i, where K is the n × n Gram matrix.

Random feature approximation The random feature approximation for shift-invariant kernels k(x, y) = k(x − y), e.g., the Gaussian RBF kernel, relies on the identity k(x − y) = ∫_{R^d} e^{iω⊤(x−y)} dP(ω) = E_ω[φ_ω(x) φ_ω(y)], which holds since the Fourier transform of a PD function is non-negative and can thus be considered a (scaled) probability measure [17]. We can therefore approximate the kernel function as an empirical average of samples from the distribution. In other words, k(x, y) ≈ (1/B) Σᴮ_{i=1} φ_{ω_i}(x) φ_{ω_i}(y), where {ω_i}ᴮ_{i=1} are i.i.d. samples drawn from P(ω). For the Gaussian RBF kernel, k(x − x′) = exp(−‖x − x′‖²/2σ²), this yields the Gaussian distribution P(ω) ∝ exp(−σ²‖ω‖²/2). See [17] for more details.
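To make the approximation concrete, here is a minimal numpy sketch of random cosine features for the Gaussian RBF kernel; the feature count B, the bandwidth, and the shared-seed convention are illustrative choices, not values from the paper.

import numpy as np

def rff(X, B, sigma=1.0, seed=0):
    """Map rows of X (n x d) to B random cosine features; reusing the seed
    regenerates the same omegas across calls."""
    rng = np.random.RandomState(seed)
    W = rng.randn(B, X.shape[1]) / sigma      # omega_i ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, B)      # random phases
    return np.sqrt(2.0 / B) * np.cos(X @ W.T + b)

X, Y = np.random.randn(5, 3), np.random.randn(4, 3)
K_approx = rff(X, 4096) @ rff(Y, 4096).T      # approximate kernel matrix
K_exact = np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(K_approx - K_exact).max())       # small for large B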
4
Algorithm
In this section, we describe an efficient algorithm based on the "doubly stochastic gradients" to
scale up kernel PCA. KPCA is essentially an eigenvalue problem in a functional space. Traditional
approaches convert it to the dual form, leading to another eigenvalue problem whose size equals
the number of training points, which is not scalable. Other approaches solve it in the primal form
with stochastic functional gradient descent. However, these algorithms need to store all the training
points seen so far. They quickly run into memory issues when working with millions of data points.
We propose to tackle the problem with "doubly stochastic gradients", in which we make two unbiased stochastic approximations. One stochasticity comes from sampling data points as in stochastic
gradient descent. Another source of stochasticity is from random features to approximate the kernel.
One technical difficulty in designing doubly stochastic KPCA is an explicit orthogonalization step
required in the update rules, which ensures the top k eigenfunctions are orthogonal. This is infeasible
for kernel methods on a large dataset since it requires solving an increasingly larger KPCA problem
in every iteration. To solve this problem, we formulate the orthogonality constraints into Lagrange
multipliers, which leads to an Oja-style update rule. The new update enjoys small per-iteration
complexity and converges to the ground-truth subspace.
We present the algorithm by first deriving the stochastic functional gradient update without random
feature approximations, then introducing the doubly stochastic updates. For simplicity of presentation, the following description uses one data point and one random feature at a time, but typically a
mini-batch of data points and random features are used in each iteration.
4.1
Stochastic functional gradient update
Kernel PCA can be formulated as the following non-convex optimization problem:

max_G tr(G⊤AG)  s.t.  G⊤G = I,   (1)

where G := (g¹, . . . , gᵏ) and gⁱ is the i-th function.

Gradient descent on the Lagrangian leads to G_{t+1} = G_t + η_t (I − G_t G_t⊤) A G_t. Using a stochastic approximation for A, A_t f(·) = f(x_t) k(x_t, ·), we have A_t G_t = k(x_t, ·) g_t⊤ and G_t⊤ A_t G_t = g_t g_t⊤, where g_t = (g_t¹(x_t), . . . , g_tᵏ(x_t))⊤. Therefore, the update rule is

G_{t+1} = G_t (I − η_t g_t g_t⊤) + η_t k(x_t, ·) g_t⊤.   (2)
This rule can also be derived using stochastic gradient and Oja's rule [15, 16].
4.2
Doubly stochastic update
The update rule (2) must store all the data points it has seen so far, which is impractical for large scale
datasets. To address this issue, we use the random feature approximation k(x, ·) ≈ φ_ω(x) φ_ω(·). Denoting by H_t the function we have at iteration t, the update rule becomes

H_{t+1} = H_t (I − η_t h_t h_t⊤) + η_t φ_{ω_t}(x_t) φ_{ω_t}(·) h_t⊤,   (3)

where h_t is the evaluation of H_t at the current data point: h_t = (h_t¹(x_t), . . . , h_tᵏ(x_t))⊤. The specific
updates in terms of the coefficients are summarized in Algorithms 1 and 2. Note that in theory new
random features are drawn in each iteration, but in practice one can revisit these random features.
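As an illustration of update (3) and Algorithms 1 and 2, the following is a minimal Python sketch of the doubly stochastic coefficient updates with seed-regenerable random cosine features for an RBF kernel. The step-size schedule, the nonzero initialization, and the use of one data point and one random feature per iteration are simplifying assumptions for exposition, not the paper's exact choices.

import numpy as np

def phi(x, seed, sigma=1.0):
    """Scalar random cosine feature phi_omega(x), regenerated from its seed."""
    rng = np.random.RandomState(seed)
    w = rng.randn(x.shape[0]) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi)
    return np.sqrt(2.0) * np.cos(w @ x + b)

def evaluate(x, alphas):
    """Algorithm 2: h(x) = sum_i phi_{omega_i}(x) alpha_i, a vector in R^k."""
    return sum(phi(x, i) * a for i, a in enumerate(alphas))

k = 3
alphas = [0.1 * np.random.randn(k)]           # nonzero start; the analysis assumes a good init
for t, x in enumerate(np.random.randn(2000, 5)):   # simulated data stream
    eta = 0.5 / (1.0 + 0.01 * t)
    h = evaluate(x, alphas)                   # evaluate H_t at the current point
    for j in range(len(alphas)):              # H <- H (I - eta h h^T), in coefficients
        alphas[j] = alphas[j] - eta * (alphas[j] @ h) * h
    alphas.append(eta * phi(x, len(alphas)) * h)   # coefficient of the new feature

Note that only the coefficient list is stored; the features themselves are regenerated from their seeds, which is what keeps the memory footprint O(t) and independent of the data dimension.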
4.3
Extensions
Locating individual eigenfunctions The algorithm only finds the eigen subspace, but not necessarily individual eigenfunctions. A modified version, called the Generalized Hebbian Algorithm (GHA) [19], can be used for this purpose: G_{t+1} = G_t + η_t A_t G_t − η_t G_t UT[G_t⊤ A_t G_t], where UT[·] is an
operator that sets the lower triangular parts to zero.
Latent variable models and kernel SVD Recently, spectral methods have been proposed to learn
latent variable models with provable guarantees [1, 22], in which the key computation is SVD. Our
algorithm can be straightforwardly extended to solve kernel SVD, with two simultaneous update
rules. The algorithm is summarized in Algorithm 3. See the supplementary for derivation details.
Kernel CCA and generalized eigenvalue problem Given two variables X and Y , CCA finds two
projections such that the correlations between the two projected variables are maximized. It is
equivalent to a generalized eigenvalue problem, which can also be solved in our framework. We
present the updates for coefficients in Algorithm 4, and derivation details in the supplementary.
Kernel sliced inverse regression Kernel sliced inverse regression [10] aims to do sufficient dimension reduction in which the found low dimension representation preserves the statistical correlation
with the targets. It also reduces to a generalized eigenvalue problem, and has been shown to find the
same subspace as KCCA [10].
5
Analysis
Algorithm 3: DSGD-KSVD
Require: P(ω), φ_ω(x), k. Output: {α_i, β_i}_{i=1}^t
1: for i = 1, . . . , t do
2:   Sample x_i ∼ P(x). Sample y_i ∼ P(y).
3:   Sample ω_i ∼ P(ω) with seed i.
4:   u_i = Evaluate(x_i, {α_j}_{j=1}^{i−1}) ∈ R^k.
5:   v_i = Evaluate(y_i, {β_j}_{j=1}^{i−1}) ∈ R^k.
6:   W = u_i v_i⊤ + v_i u_i⊤
7:   α_i = η_i φ_{ω_i}(x_i) v_i.
8:   β_i = η_i φ_{ω_i}(y_i) u_i.
9:   α_j = α_j − η_i W α_j, for j = 1, . . . , i − 1.
10:  β_j = β_j − η_i W β_j, for j = 1, . . . , i − 1.
11: end for

Algorithm 4: DSGD-KCCA
Require: P(ω), φ_ω(x), k. Output: {α_i, β_i}_{i=1}^t
1: for i = 1, . . . , t do
2:   Sample x_i ∼ P(x). Sample y_i ∼ P(y).
3:   Sample ω_i ∼ P(ω) with seed i.
4:   u_i = Evaluate(x_i, {α_j}_{j=1}^{i−1}) ∈ R^k.
5:   v_i = Evaluate(y_i, {β_j}_{j=1}^{i−1}) ∈ R^k.
6:   W = u_i v_i⊤ + v_i u_i⊤
7:   α_i = η_i φ_{ω_i}(x_i) [v_i − W u_i].
8:   β_i = η_i φ_{ω_i}(y_i) [u_i − W v_i].
9: end for

In this section, we provide finite time convergence guarantees for our algorithm. As discussed
in the previous section, explicit orthogonalization is not scalable for the kernel case; we therefore
need to provide guarantees for the updates without orthogonalization. This challenge is even more
prominent when using random features, since it introduces additional variance.
Furthermore, our guarantees are w.r.t. the top k-dimension subspace. Although the convergence
without normalization for a top eigenvector has been established before [15, 16], the subspace case
is complicated by the fact that there are k angles between k-dimension subspaces, and we need to
bound the largest angle. To the best of our knowledge, our result is the first finite time convergence
result for a subspace without explicit orthogonalization.
Note that even though it appears our algorithm is similar to [6] on the surface, the underlying analysis
is fundamentally different. In [6], the result only applies to convex problems where every local
optimum is a global optimum while the problems we consider are highly non-convex. As a result,
many techniques that [6] builds upon are not applicable.
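To make the per-iteration work in Algorithm 3 above concrete, the following is a minimal sketch of one DSGD-KSVD coefficient update, given the evaluations u, v already computed by Evaluate; the function signature and the numeric values are illustrative assumptions, not from the paper.

import numpy as np

def ksvd_step(alphas, betas, phi_x, phi_y, u, v, eta):
    """One DSGD-KSVD coefficient update (steps 6-10 of Algorithm 3).
    phi_x, phi_y are the scalar values of the fresh random feature at x_t, y_t."""
    W = np.outer(u, v) + np.outer(v, u)       # step 6
    new_alpha = eta * phi_x * v               # step 7
    new_beta = eta * phi_y * u                # step 8
    for j in range(len(alphas)):              # steps 9-10: shrink old coefficients
        alphas[j] = alphas[j] - eta * (W @ alphas[j])
        betas[j] = betas[j] - eta * (W @ betas[j])
    alphas.append(new_alpha)
    betas.append(new_beta)

k = 3
alphas, betas = [np.random.randn(k)], [np.random.randn(k)]
ksvd_step(alphas, betas, 0.7, -0.2, np.random.randn(k), np.random.randn(k), 0.01)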
Conditions and Assumptions We will focus on the case when a good initialization V0 is given:
V_0⊤ V_0 = I,   cos²θ(V, V_0) ≥ 1/2.   (4)

In other words, we analyze the later stage of the convergence, which is typical in the literature (e.g.,
[21]). The early stage can be analyzed using established techniques (e.g., [3]). In practice, one can
achieve a good initialization by solving a small RCA problem [12] with, e.g., thousands of data
points and random features.
Throughout the paper we suppose |k(x, x′)| ≤ κ and |φ_ω(x)| ≤ φ, and regard κ and φ as constants.
Note that this is true for all the kernels and corresponding random features considered. We further
regard the eigengap λ_k − λ_{k+1} as a constant, which is also true for typical applications and datasets.
5.1
Update without random features
Our guarantee is on the cosine of the principal angle between the computed subspace and the ground-truth eigen subspace (also called the potential function): cos²θ(V, G_t) = min_w ‖V⊤ G_t w‖² / ‖G_t w‖².

Consider the two different update rules, one with explicit orthogonalization and another without:

F_{t+1} ← orth(F_t + η_t A_t F_t),
G_{t+1} ← G_t + η_t (I − G_t G_t⊤) A_t G_t,

where A_t is the empirical covariance of a mini-batch. Our final guarantee for G_t is the following.

Theorem 1 Assume (4) and suppose the mini-batch sizes satisfy that for any 1 ≤ i ≤ t, ‖A − A_i‖ < (λ_k − λ_{k+1})/8. There exist step sizes η_i = O(1/i) such that

1 − cos²θ(V, G_t) = O(1/t).
The convergence rate O(1/t) is of the same order as that of computing only the top eigenvector in
linear PCA [3]. The bound requires that the mini-batch size be large enough so that the spectral norm of
A is approximated up to the order of the eigengap. This is because the increase of the potential is in
the order of the eigengap. Similar terms appear in the analysis of the noisy power method [7] which,
however, requires orthogonalization and is not suitable for the kernel case. We do not specify the
mini-batch size, but by assuming suitable data distributions, it is possible to obtain explicit bounds;
see for example [23, 4].
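The two update rules and the potential function are easy to probe numerically in the finite-dimensional (linear PCA) analogue of this setting; the following sketch is illustrative only, with arbitrary dimensions, batch size, and step sizes.

import numpy as np

def potential(V, G):
    """1 - cos^2(theta(V, G)) = 1 - min_w ||V^T G w||^2 / ||G w||^2."""
    Q, _ = np.linalg.qr(G)                    # orthonormal basis of span(G)
    s = np.linalg.svd(V.T @ Q, compute_uv=False)
    return 1.0 - s.min() ** 2

d, k = 20, 3
A_sqrt = np.random.randn(d, d)
A = A_sqrt @ A_sqrt.T / d                     # population covariance
V = np.linalg.eigh(A)[1][:, -k:]              # ground-truth top-k subspace
init = V + 0.1 * np.random.randn(d, k)        # warm start near V, cf. condition (4)
F, G = init.copy(), init.copy()
for t in range(1, 2001):
    X = np.random.randn(256, d) @ A_sqrt.T / np.sqrt(d)  # mini-batch with covariance A
    At = X.T @ X / X.shape[0]                 # empirical covariance A_t
    eta = 1.0 / (10.0 + t)
    F = np.linalg.qr(F + eta * At @ F)[0]     # F_{t+1} = orth(F_t + eta A_t F_t)
    G = G + eta * (np.eye(d) - G @ G.T) @ At @ G
print(potential(V, F), potential(V, G))       # both decay roughly like O(1/t)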
Proof sketch We first prove the guarantee for the orthogonalized subspace F_t, which is more convenient to analyze, and then show that the updates for F_t and G_t are first-order equivalent, so G_t enjoys the same guarantee. To do so, we will require Lemmas 2 and 3 below.

Lemma 2 1 − cos²θ(V, F_t) = O(1/t).

Let c_t² denote cos²θ(V, F_t); a key step in proving the lemma is to show the following recurrence:

c_{t+1}² ≥ c_t² (1 + 2η_t (λ_k − λ_{k+1} − 2‖A − A_t‖)(1 − c_t²)) − O(η_t²).   (5)

We will need the mini-batch size large enough so that 2‖A − A_t‖ is smaller than the eigengap.
Another key element in the proof of the theorem is the first-order equivalence of the two update rules. To show this, we introduce F(G_t) ← orth(G_t + η_t A_t G_t) to denote the subspace obtained by applying the update rule of F_t to G_t. We show that the potentials of G_{t+1} and F(G_t) are close:

Lemma 3 cos²θ(V, G_{t+1}) = cos²θ(V, F(G_t)) − O(η_t²).

The lemma means that applying the two update rules to the same input will result in two subspaces with similar potentials. Then by (5), we have 1 − cos²θ(V, G_t) = O(1/t), which leads to our theorem. The proof of Lemma 3 is based on the observation that cos²θ(V, X) = λ_min(V⊤ X (X⊤X)⁻¹ X⊤ V). Comparing the Taylor expansions w.r.t. η_t for X = G_{t+1} and X = F(G_t) leads to the lemma.
5.2 Doubly stochastic update
The H_t computed in the doubly stochastic update is no longer in the RKHS, so the principal angle is not well defined. Instead, we will compare the evaluations of functions from H_t and from the true principal subspace V on a point x. Formally, we show that for any function v ∈ V with unit norm ‖v‖_F = 1, there exists a function h in H_t such that for any x, err := |v(x) − h(x)|² is small with high probability.

To do so, we need to introduce a companion update rule: G̃_{t+1} ← G̃_t + η_t k(x_t, ·) h_t⊤ − η_t G̃_t h_t h_t⊤, which results in a function in the RKHS, but the update makes use of function values from h_t ∈ H_t, which is outside the RKHS. Let w = G̃⊤ v be the coefficients of v projected onto G̃, h = H_t w, and z = G̃_t w. Then the error can be decomposed as

|v(x) − h(x)|² = |v(x) − z(x) + z(x) − h(x)|² ≤ 2 |v(x) − z(x)|² + 2 |z(x) − h(x)|² ≤ 2κ² ‖v − z‖²_F + 2 |z(x) − h(x)|²,   (6)

where the first term (I) is handled by Lemma 5 and the second term (II) by Lemma 6. By definition, ‖v − z‖²_F = ‖v‖²_F − ‖z‖²_F ≤ 1 − cos²θ(V, G̃_t), so the first error term can be bounded by the guarantee on G̃_t, which can be obtained by arguments similar to those in Theorem 1. For the second term, note that G̃_t is defined in such a way that the difference between z(x) and h(x) is a martingale, which can be bounded by careful analysis.

Theorem 4 Assume (4) and suppose the mini-batch sizes satisfy that for any 1 ≤ i ≤ t, ‖A − A_i‖ < (λ_k − λ_{k+1})/8 and are of order Ω(ln(t/δ)). There exist step sizes η_i = O(1/i) such that the following holds. If Θ(1) = λ_k(G̃_i⊤ G̃_i) ≤ λ_1(G̃_i⊤ G̃_i) = O(1) for all 1 ≤ i ≤ t, then for any x and any function v in the span of V with unit norm ‖v‖_F = 1, we have that with probability at least 1 − δ, there exists h in the span of H_t satisfying |v(x) − h(x)|² = O((1/t) ln(t/δ)).

The point-wise error scales as Õ(1/t) with the step t. Besides the condition that ‖A − A_i‖ is up to the order of the eigengap, we additionally need that the random features approximate the kernel function up to constant accuracy on all the data points up to time t, which eventually leads to Ω(ln(t/δ)) mini-batch sizes. Finally, we need G̃_i⊤ G̃_i to be roughly isotropic, i.e., G̃_i to be roughly orthonormal. Intuitively, this should be true for the following reasons: G̃_0 is orthonormal; the update for G̃_t is close to that for G_t, which in turn is close to F_t, which is orthonormal.
Figure 1: (a) Convergence of our algorithm on the synthetic dataset (potential function versus number of data points, log-log scale); it is on par with the O(1/t) rate denoted by the dashed red line. (b) Recovery of the top three eigenfunctions; our algorithm (in red) matches the ground truth (dashed blue).

Figure 2: Visualization of the molecular space dataset by the first two principal components. Bluer dots represent lower PCE values while redder dots are for higher PCE values. (a) Kernel PCA; (b) linear PCA. (Best viewed in color.)
Proof sketch In order to bound term I in (6), we show that

Lemma 5 1 − cos²θ(V, G̃_t) = O((1/t) ln(t/δ)).

This is proved by following arguments similar to those used to obtain the recurrence (5), except with an additional error term, caused by the fact that the update rule for G̃_{t+1} uses the evaluation h_t(x_t) rather than g̃_t(x_t). Bounding this additional term thus relies on bounding the difference between h_t(x) and g̃_t(x), which is also what we need for bounding term II in (6). For this, we show:

Lemma 6 For any x and unit vector w, with probability ≥ 1 − δ over (Dᵗ, ωᵗ), |g̃_t(x)w − h_t(x)w|² = O((1/t) ln(t/δ)).

The key to proving this lemma is that our construction of G̃_t makes sure that the difference between g̃_t(x)w and h_t(x)w consists of their differences at each time step. Furthermore, the difference forms a martingale and thus can be bounded by Azuma's inequality. See the supplementary material for the details.
6
Experiments
Synthetic dataset with analytical solution We first verify the convergence rate of DSGD-KPCA
on a synthetic dataset with an analytical solution for the eigenfunctions [24]. If the data follow a Gaussian
distribution, and we use a Gaussian kernel, then the eigenfunctions are given by the Hermite polynomials. We generated 1 million such 1-D data points, and ran DSGD-KPCA with a total of 262144
random features. In each iteration, we use a data mini-batch of size 512, and a random feature minibatch of size 128. After all random features are generated, we revisit and adjust the coefficients of
existing random features. The kernel bandwidth is set to the true bandwidth. The step size is scheduled as η_t = η_0/(1 + η_1 t), where η_0 and η_1 are two parameters. We use a small η_1 ≈ 0.01 such
that in the early stages the step size is large enough to arrive at a good initial solution. Figure 1a shows
the convergence rate of the proposed algorithm seeking the top k = 3 subspace. The potential function
is the squared sine of the principal angle. We can see the algorithm indeed converges at the rate
O(1/t). Figure 1b shows the recovered top k = 3 eigenfunctions compared with the ground truth.
The solution coincides with one eigenfunction, and deviates only slightly from two others.
Kernel PCA visualization on molecular space dataset MolecularSpace dataset contains 2.3 million molecular motifs [6]. We are interested in visualizing the dataset with KPCA. The data are
represented by sorted Coulomb matrices of size 75 × 75 [14]. Each molecule also has an attribute
called power conversion efficiency (PCE). We use a Gaussian kernel with bandwidth chosen by
the "median trick". We ran kernel PCA with a total of 16384 random features, with a feature
mini-batch size of 512, and a data mini-batch size of 1024. We ran 4000 iterations with step size
η_t = 1/(1 + 0.001·t). Figure 2 presents a visualization obtained by projecting the data onto the top two principal components. Compared with linear PCA, KPCA shrinks the distances between the clusters
and brings out the important structures in the dataset. We can also see higher PCE values tend to lie
towards the center of the ring structure.
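The "median trick" used above sets the kernel bandwidth to the median pairwise distance on a subsample of the data; a minimal sketch follows, where the subsample size and the use of the median of squared distances are common but unverified conventions.

import numpy as np

def median_bandwidth(X, m=1000, seed=0):
    rng = np.random.RandomState(seed)
    S = X[rng.choice(len(X), size=min(m, len(X)), replace=False)]
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)   # squared distances
    return float(np.sqrt(np.median(d2[d2 > 0])))

sigma = median_bandwidth(np.random.randn(5000, 10))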
Nonparametric Latent Variable Model We learn latent variable models with DSGD-KSVD using
one million data points [22], achieving higher quality solutions compared with two other approaches.
The dataset consists of two latent components, one is a Gaussian distribution and the other a Gamma
distribution with shape parameter α = 1.2. DSGD-KSVD uses a total of 8192 random features, and
uses a feature mini-batch of size 256 and a data mini-batch of size 512.

Figure 3: Recovered latent components (estimated versus true conditional densities for Component 1 and Component 2): (a) DSGD-KSVD, (b) 2048 random Fourier features, (c) 2048 Nystrom features.

Figure 4: Comparison on the KUKA dataset: mean squared error versus random feature dimension for random Fourier features, Nystrom features, and DSGD.

We compare with 1) random Fourier features, and 2) random Nystrom features, both with a fixed budget of 2048 functions [12]. Figure 3 shows the learned conditional distributions for each component. We can see DSGD-KSVD achieves
almost perfect recovery, while Fourier and Nystrom random feature methods either confuse high
density areas or incorrectly estimate the spread of conditional distributions.
KCCA MNIST8M We compare our algorithm on MNIST8M in the KCCA task, which consists of
8.1 million digits and their transformations. We divide each image into the left and right halves, and
learn their correlations. The evaluation criterion is the total correlation over the top k = 50 canonical
correlation directions measured on a separate test set of size 10000. We compare with 1) random
Fourier and 2) random Nystrom features on both total correlation and running time. Our algorithm
uses a total of 20480 features, with feature mini-batches of size 2048 and data mini-batches of size
1024, and with 3000 iterations. The kernel bandwidth is set using the "median trick" and is the same
for all methods. All algorithms are run 5 times, and the mean is reported. The results are presented
in Table 1. Our algorithm achieves the best test-set correlations in comparable run time with random
Fourier features. This is especially significant for random Fourier features, since the run time would
increase by almost four times if double the number of features were used. In addition, Nystrom
features generally achieve better results than Fourier features since they are data dependent. We can
also see that for large datasets, it is important to use more random features for better performance.
Table 1: KCCA results on MNIST 8M (top 50 largest correlations)

             Random features      Nystrom features
# of feat    corrs.   minutes     corrs.   minutes
256          25.2     3.2         30.4     3.0
512          30.7     7.0         35.3     5.1
1024         35.3     13.9        38.0     10.1
2048         38.8     54.3        41.1     27.0
4096         41.5     186.7       42.7     71.0

DSGD-KCCA (20480 features): corrs. 43.5, 183.2 minutes. Linear CCA: corrs. 27.4, 1.1 minutes.
Kernel sliced inverse regression on KUKA dataset We evaluate our algorithm under the setting
of kernel sliced inverse regression [10], a way to perform sufficient dimension reduction (SDR)
for high dimension regression. After performing SDR, we fit a linear regression model using the
projected input data, and evaluate mean squared error (MSE). The dataset records rhythmic motions
of a KUKA arm at various speeds, representing realistic settings for robots [13]. We use a variant
that contains 2 million data points generated by the SL simulator. The KUKA robot has 7 joints,
and the high dimension regression problem is to predict the torques from positions, velocities and
accelerations of the joints. The input has 21 dimensions while the output is 7 dimensions. Since there
are seven independent joints, we set the reduced dimension to be seven. We randomly select 20% as
test set and out of the remaining training set, we randomly choose 5000 as validation set to select step
sizes. The total number of random features is 10240, with mini-feature batch and mini-data batch
both equal to 1024. We run a total of 2000 iterations using step size η_t = 15/(1 + 0.001·t). Figure 4
shows the regression errors for different methods. The error decreases with more random features,
and our algorithm achieves the lowest MSE by using 10240 random features. Nystrom features do not
perform as well in this setting, probably because the spectrum decays slowly (there are seven
independent joints), whereas Nystrom features are known to work well for fast-decaying spectra.
Acknowledgments
The research was supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR N00014-15-1-2340, NSF IIS-1218749, NSF CAREER IIS-1350983, NSF CCF-0832797, CCF-1117309, CCF-1302518, DMS-1317308, a Simons Investigator Award, and a Simons Collaboration Grant.
References
[1] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y.-K. Liu. Two SVDs suffice: Spectral decompositions for probabilistic topic modeling and latent Dirichlet allocation. CoRR, abs/1204.6703, 2012.
[2] R. Arora, A. Cotter, and N. Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, pages 1815–1823, 2013.
[3] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In Advances in Neural Information Processing Systems, pages 3174–3182, 2013.
[4] T. T. Cai and H. H. Zhou. Optimal rates of convergence for sparse covariance matrix estimation. The Annals of Statistics, 40(5):2389–2420, 2012.
[5] T.-J. Chin and D. Suter. Incremental kernel principal component analysis. IEEE Transactions on Image Processing, 16(6):1662–1674, 2007.
[6] B. Dai, B. Xie, N. He, Y. Liang, A. Raj, M.-F. F. Balcan, and L. Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[7] M. Hardt and E. Price. The noisy power method: A meta algorithm with applications. In Advances in Neural Information Processing Systems, pages 2861–2869, 2014.
[8] P. Honeine. Online kernel principal component analysis: A reduced-order model. IEEE Trans. Pattern Anal. Mach. Intell., 34(9):1814–1826, 2012.
[9] K. Kim, M. O. Franz, and B. Schölkopf. Iterative kernel principal component analysis for image modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(9):1351–1366, 2005.
[10] M. Kim and V. Pavlovic. Covariance operator based dimensionality reduction with extension to semi-supervised settings. In International Conference on Artificial Intelligence and Statistics, pages 280–287, 2009.
[11] Q. Le, T. Sarlos, and A. J. Smola. Fastfood – computing Hilbert space expansions in loglinear time. In International Conference on Machine Learning, 2013.
[12] D. Lopez-Paz, S. Sra, A. Smola, Z. Ghahramani, and B. Schölkopf. Randomized nonlinear component analysis. In International Conference on Machine Learning (ICML), 2014.
[13] F. Meier, P. Hennig, and S. Schaal. Incremental local Gaussian regression. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 972–980. Curran Associates, Inc., 2014.
[14] G. Montavon, K. Hansen, S. Fazli, M. Rupp, F. Biegler, A. Ziehe, A. Tkatchenko, A. von Lilienfeld, and K.-R. Müller. Learning invariant representations of molecules for atomization energy prediction. In Neural Information Processing Systems, pages 449–457, 2012.
[15] E. Oja. A simplified neuron model as a principal component analyzer. J. Math. Biology, 15:267–273, 1982.
[16] E. Oja. Subspace methods of pattern recognition. John Wiley and Sons, New York, 1983.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[18] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Neural Information Processing Systems, 2009.
[19] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward network. Neural Networks, 2:459–473, 1989.
[20] N. N. Schraudolph, S. Günter, and S. V. N. Vishwanathan. Fast iterative kernel PCA. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge, MA, June 2007. MIT Press.
[21] O. Shamir. A stochastic PCA algorithm with an exponential convergence rate. arXiv preprint arXiv:1409.2848, 2014.
[22] L. Song, A. Anandkumar, B. Dai, and B. Xie. Nonparametric estimation of multi-view latent variable models. In International Conference on Machine Learning (ICML), 2014.
[23] R. Vershynin. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, 25(3):655–686, 2012.
[24] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In P. Langley, editor, Proc. Intl. Conf. Machine Learning, pages 1159–1166, San Francisco, California, 2000. Morgan Kaufmann Publishers.
| 5813 |@word version:1 polynomial:1 norm:3 zkf:2 cos2:11 crucially:1 covariance:8 decomposition:1 tr:1 carry:1 reduction:3 initial:1 liu:1 contains:2 rkhs:7 existing:1 err:1 current:1 ka:5 comparing:1 recovered:2 must:1 regenerating:2 john:1 subsequent:1 realistic:1 shape:1 hofmann:1 update:35 half:1 prohibitive:1 intelligence:2 isotropic:1 prespecified:1 record:1 math:1 hermite:1 ksvd:6 prove:3 doubly:15 consists:3 lopez:1 yingyu:1 introduce:2 x0:3 indeed:1 roughly:2 simulator:1 multi:1 torque:1 decomposed:1 decreasing:1 actual:1 becomes:1 provided:2 moreover:1 notation:2 underlying:1 bounded:3 suffice:1 lowest:1 what:1 kg:1 eigenvector:4 ag:1 transformation:1 impractical:5 guarantee:14 every:2 ti:1 friendly:1 tackle:1 scaled:1 k2:1 platt:2 classifier:1 unit:3 grant:1 appear:1 t1:3 positive:1 before:1 local:2 despite:2 mach:1 initialization:2 wk2:1 equivalence:1 challenging:1 kvkf:3 range:1 practice:2 definite:1 digit:1 yingyul:1 area:1 langley:1 empirical:4 projection:1 convenient:1 pre:1 word:2 get:3 cannot:1 close:5 onto:2 operator:8 applying:2 equivalent:2 lagrangian:1 center:2 sarlos:1 williams:1 convex:7 formulate:1 simplicity:1 recovery:2 rule:20 deriving:1 orthonormal:3 proving:1 kuka:4 annals:1 target:1 suppose:3 construction:1 shamir:1 us:6 designing:1 curran:1 trick:3 element:2 velocity:1 approximated:1 satisfying:1 associate:1 recognition:1 ft:9 preprint:1 solved:1 hv:1 thousand:2 svds:1 ensures:1 decrease:2 song1:1 ran:3 pd:2 complexity:5 ui:7 solving:2 upon:1 efficiency:3 f2:1 sink:1 joint:4 various:3 represented:1 derivation:2 fast:3 describe:1 dsgd:11 artificial:1 outside:1 whose:2 widely:1 larger:2 solve:3 say:1 supplementary:3 triangular:1 ability:1 statistic:3 transform:1 noisy:3 final:1 online:5 advantage:1 eigenvalue:9 analytical:2 cai:1 propose:3 product:1 translate:1 flexibility:1 achieve:2 roweis:1 adjoint:1 description:2 kv:3 scalability:1 olkopf:3 convergence:20 cluster:1 optimum:3 requirement:2 double:1 corrs:4 intl:1 hkt:1 converges:4 ring:1 perfect:1 incremental:3 measured:1 ij:2 lsong:1 strong:2 solves:1 c:1 come:1 direction:1 drawback:2 saved:1 attribute:1 stochastic:31 require:10 f1:1 preliminary:1 randomization:1 extension:2 hold:1 around:1 considered:2 ground:4 exp:2 seed:6 mapping:2 predict:1 lawrence:1 achieves:3 hvi:1 early:2 purpose:1 estimation:2 proc:1 applicable:3 hansen:1 largest:2 successfully:1 weighted:2 cotter:1 uller:1 minimization:1 mit:2 gaussian:9 aim:2 modified:1 rather:1 pn:2 zhou:1 gatech:2 derived:2 focus:2 june:1 schaal:1 vk:1 contrast:1 afk:1 seeger:1 kim:2 motif:1 dependent:1 streaming:2 typically:1 koller:1 interested:1 issue:2 among:1 dual:1 denoted:2 equal:2 saving:1 sampling:1 biology:1 look:1 icml:2 unsupervised:1 representer:1 t2:2 others:1 fundamentally:2 pavlovic:1 suter:1 randomly:2 oja:6 gamma:1 simultaneously:1 preserve:1 individual:2 intell:1 kitchen:1 sdr:2 n1:1 attempt:1 ab:1 huge:1 highly:2 evaluation:5 severe:1 adjust:1 introduces:1 analyzed:1 primal:4 devoted:1 hg:1 r01gm108341:1 beforehand:1 unter:1 minw:1 orthogonal:1 taylor:1 divide:1 orthogonalized:1 theoretical:5 modeling:2 kpca:10 introducing:1 entry:1 paz:1 reported:1 straightforwardly:1 synthetic:5 vershynin:1 thanks:1 density:2 international:4 randomized:2 recht:2 probabilistic:1 quickly:1 squared:3 von:1 choose:1 slowly:1 fazli:1 conf:1 verge:1 leading:1 style:1 potential:6 summarized:2 wk:1 coefficient:4 inc:1 satisfy:2 caused:1 vi:13 later:1 sine:1 view:1 analyze:2 red:2 hf:1 complicated:1 simon:2 contribution:1 accuracy:1 variance:1 kaufmann:1 
maximized:1 yield:1 cc:1 history:1 simultaneous:1 definition:1 nonetheless:1 energy:1 nystrom:9 naturally:2 proof:6 associated:1 redder:1 con:1 dm:1 hsu:1 dataset:15 proved:2 popular:1 hardt:1 knowledge:1 ut:2 improves:1 color:1 cj:1 hilbert:3 mnist8m:2 dimensionality:1 lilienfeld:1 appears:2 higher:3 xie:3 follow:1 specify:1 though:2 shrink:1 furthermore:3 just:1 stage:3 bluer:1 smola:2 correlation:9 working:1 sketch:2 ei:1 replacing:1 nonlinear:6 minibatch:1 defines:1 brings:1 quality:3 scheduled:1 semisupervised:1 effect:1 verify:1 true:14 unbiased:1 counterpart:1 multiplier:1 hence:1 ccf:2 visualizing:1 self:1 recurrence:2 anything:1 cosine:1 coincides:1 criterion:1 generalized:6 prominent:1 chin:1 ridge:1 demonstrate:2 motion:1 orthogonalization:11 balcan:1 image:3 wise:1 recently:2 nih:1 functional:4 million:10 discussed:1 he:1 approximates:1 significant:2 cambridge:2 ai:3 fk:1 stochasticity:2 analyzer:1 dot:2 robot:2 longer:1 surface:1 v0:4 etc:1 gt:43 hfi:1 recent:1 perspective:1 raj:1 moderate:1 store:3 n00014:1 nonconvex:2 inequality:1 onr:1 meta:1 yi:6 seen:2 morgan:1 additional:3 dai:2 employed:1 converge:1 dashed:2 ii:4 reduces:1 rahimi:2 hebbian:2 technical:1 match:1 af:4 schraudolph:1 long:1 molecular:3 award:1 prediction:1 scalable:3 kcca:8 regression:10 variant:1 essentially:1 arxiv:2 iteration:10 kernel:62 normalization:1 c2t:4 represent:1 c1:1 tkatchenko:1 whereas:1 addition:3 grow:2 source:1 median:2 publisher:1 sch:3 rest:2 unlike:2 pass:1 eigenfunctions:11 sure:1 tend:1 probably:1 leveraging:1 effectiveness:1 anandkumar:1 feedforward:1 component1:6 enough:3 rendering:1 xj:2 fit:1 bandwidth:4 cn:1 shift:1 pca:21 effort:1 eigengap:4 song:2 locating:1 york:1 generally:1 eigenvectors:1 nonparametric:3 ten:3 reduced:3 sl:1 exist:2 canonical:2 nsf:4 revisit:2 estimated:7 per:1 blue:1 write:1 dasgupta:1 hennig:1 key:5 four:1 achieving:1 drawn:2 ht:22 v1:1 circumventing:1 convert:2 sum:2 run:5 inverse:4 angle:5 arrive:2 throughout:1 almost:2 groundtruth:1 scaling:2 comparable:1 cca:4 hi:4 bound:4 layer:1 quadratic:1 msg:1 orthogonality:1 constraint:1 vishwanathan:1 fourier:8 speed:1 argument:2 min:1 span:2 performing:1 according:3 smaller:1 slightly:1 increasingly:1 son:1 kakade:1 making:1 intuitively:1 invariant:2 projecting:1 rca:6 computationally:1 ln:5 visualization:3 previously:2 turn:1 eventually:1 singer:1 end:4 balsubramani:1 spectral:3 coulomb:1 alternative:1 batch:20 weinberger:1 eigen:5 top:15 running:1 remaining:1 dirichlet:1 sanger:1 ghahramani:2 build:1 especially:1 approximating:1 seeking:1 diagonal:4 traditional:1 loglinear:1 gradient:14 dp:1 subspace:20 distance:1 separate:1 outer:1 seven:3 topic:1 reason:1 provable:1 rupp:1 assuming:1 besides:1 mini:19 liang:1 negative:1 anal:1 perform:3 conversion:1 av:2 observation:1 neuron:1 datasets:13 finite:6 acknowledge:1 descent:5 incorrectly:1 extended:3 rn:1 reproducing:1 meier:1 required:1 specified:1 california:1 learned:1 established:2 eigenfunction:2 address:2 able:2 capped:1 trans:1 below:1 pattern:3 azuma:1 regime:1 challenge:1 program:2 including:1 memory:5 max:1 power:5 suitable:3 difficulty:1 arm:1 representing:1 molecularspace:1 technology:1 arora:1 deviate:1 epoch:1 literature:1 freund:1 par:1 generation:1 allocation:1 srebro:1 validation:1 sufficient:2 principle:3 foster:1 editor:4 storing:1 share:1 collaboration:1 supported:1 keeping:1 infeasible:2 enjoys:4 vv:2 institute:1 rhythmic:1 sparse:1 benefit:2 regard:2 dimension:12 xn:1 world:2 avoids:1 gram:1 projected:3 franz:1 simplified:1 san:1 far:2 
welling:1 transaction:2 approximate:4 feat:1 keep:1 global:3 b1:1 francisco:1 xi:12 biegler:1 spectrum:2 latent:8 iterative:2 table:2 additionally:1 nature:1 learn:3 molecule:2 career:1 sra:1 expansion:2 mse:2 necessarily:1 vj:1 pk:1 spread:1 fastfood:1 linearly:2 big:2 whole:2 noise:1 bounding:3 sliced:4 x1:1 gha:1 georgia:1 martingale:2 wiley:1 atomization:1 orth:2 position:1 explicit:11 exponential:1 lie:1 afj:1 montavon:1 rk:3 theorem:6 companion:1 minute:4 xt:12 specific:1 list:4 liang2:1 svm:1 cortes:1 exists:3 mnist:1 corr:1 ci:1 agt:1 magnitude:1 confuse:1 kx:1 gap:1 lagrange:1 bo:2 applies:1 truth:4 relies:3 ma:2 kzkf:1 conditional:2 identity:1 presentation:1 formulated:1 viewed:1 rbf:2 careful:1 sorted:1 towards:1 acceleration:1 price:1 experimentally:1 typical:3 except:1 principal:8 lemma:12 called:3 pce:4 total:8 svd:4 formally:1 select:2 ziehe:1 investigator:1 bigdata:1 evaluate:8 princeton:2 ex:2 |
5,318 | 5,814 | Parallel Correlation Clustering on Big Graphs
Xinghao Pan, Dimitris Papailiopoulos, Samet Oymak,
Benjamin Recht, Kannan Ramchandran, and Michael I. Jordan
AMPLab, EECS at UC Berkeley, Statistics at UC Berkeley
Abstract
Given a similarity graph between items, correlation clustering (CC) groups similar
items together and dissimilar ones apart. One of the most popular CC algorithms
is KwikCluster: an algorithm that serially clusters neighborhoods of vertices, and
obtains a 3-approximation ratio. Unfortunately, in practice KwikCluster requires
a large number of clustering rounds, a potential bottleneck for large graphs.
We present C4 and ClusterWild!, two algorithms for parallel correlation clustering that run in a polylogarithmic number of rounds, and provably achieve nearly
linear speedups. C4 uses concurrency control to enforce serializability of a parallel clustering process, and guarantees a 3-approximation ratio. ClusterWild! is
a coordination-free algorithm that abandons consistency for the benefit of better
scaling; this leads to a provably small loss in the 3-approximation ratio.
We demonstrate experimentally that both algorithms outperform the state of the
art, both in terms of clustering accuracy and running time. We show that our
algorithms can cluster billion-edge graphs in under 5 seconds on 32 cores, while
achieving a 15× speedup.
1
Introduction
Clustering items according to some notion of similarity is a major primitive in machine learning.
Correlation clustering serves as a basic means to achieve this goal: given a similarity measure
between items, the goal is to group similar items together and dissimilar items apart. In contrast to
other clustering approaches, the number of clusters is not determined a priori, and good solutions
aim to balance the tension between grouping all items together versus isolating them.
The simplest CC variant can be described on a
complete signed graph. Our input is a graph
G on n vertices, with +1 weights on edges between similar items, and −1 weights on edges between dissimilar ones. Our goal is to generate a partition
of vertices into disjoint sets that minimizes the
number of disagreeing edges: this equals the
number of '+' edges cut by the clusters plus
the number of '−' edges inside the clusters.
This metric is commonly called the number of
disagreements. In Figure 1, we give a toy example of a CC instance.
cost = (# of '−' edges inside clusters) + (# of '+' edges across clusters) = 2
Figure 1: In the above graph, solid edges denote similarity and dashed edges dissimilarity. The number of disagreeing edges in the above clustering is 2; we color the bad edges red.
Entity deduplication is the archetypal motivating example for correlation clustering, with applications in chat disentanglement, co-reference resolution, and spam detection [1, 2, 3, 4, 5, 6]. The
input is a set of entities (say, results of a keyword search), and a pairwise classifier that indicates, with some error, similarities between entities. Two results of a keyword search might refer to the
same item, but might look different if they come from different sources. By building a similarity
graph between entities and then applying CC, the hope is to cluster duplicate entities in the same
group; in the context of keyword search, this implies a more meaningful and compact list of results.
CC has been further applied to finding communities in signed networks, classifying missing edges
in opinion or trust networks [7, 8], gene clustering [9], and consensus clustering [3].
KwikCluster is the simplest CC algorithm that achieves a provable 3-approximation ratio [10], and
works in the following way: pick a vertex v at random (a cluster center), create a cluster for v
and its positive neighborhood N (v) (i.e., vertices connected to v with positive edges), peel these
vertices and their associated edges from the graph, and repeat until all vertices are clustered. Beyond
its theoretical guarantees, experimentally KwikCluster performs well when combined with local
heuristics [3].
KwikCluster seems like an inherently sequential algorithm, and in most cases of interest it requires
many peeling rounds. This happens because a small number of vertices are clustered per round. This
can be a bottleneck for large graphs. Recently, there have been efforts to develop scalable variants
of KwikCluster [5, 6]. In [6] a distributed peeling algorithm was presented in the context of MapReduce. Using an elegant analysis, the authors establish a (3 + ε)-approximation in a polylogarithmic
number of rounds. The algorithm employs a simple step that rejects vertices that are executed in
parallel but are "conflicting"; however, as we see in our experiments, this seemingly minor coordination step hinders scale-ups in a parallel core setting. In [5], a sketch of a distributed algorithm was
presented. This algorithm achieves the same approximation as KwikCluster, in a logarithmic number of rounds, in expectation. However, it performs significant redundant work, per iteration, in its
effort to detect in parallel which vertices should become cluster centers.
Our contributions We present C4 and ClusterWild!, two parallel CC algorithms with provable
performance guarantees, that in practice outperform the state of the art, both in terms of running
time and clustering accuracy. C4 is a parallel version of KwikCluster that uses concurrency control
to establish a 3-approximation ratio. ClusterWild! is a simple to implement, coordination-free
algorithm that abandons consistency for the benefit of better scaling, while having a provably small
loss in the 3 approximation ratio.
C4 achieves a 3-approximation ratio, in a poly-logarithmic number of rounds, by enforcing consistency between concurrently running peeling threads. Consistency is enforced using concurrency
control, a notion extensively studied for database transactions, which was recently used to parallelize
inherently sequential machine learning algorithms [11].
ClusterWild! is a coordination-free parallel CC algorithm that waives consistency in favor of speed.
The cost we pay is an arbitrarily small loss in ClusterWild!'s accuracy. We show that ClusterWild!
achieves a (3 + ε)·OPT + O(ε·n·log²n) approximation, in a poly-logarithmic number of rounds,
with provable nearly linear speedups. Our main theoretical innovation for ClusterWild! is analyzing
the coordination-free algorithm as a serial variant of KwikCluster that runs on a "noisy" graph.
In our experimental evaluation, we demonstrate that both algorithms gracefully scale up to graphs
with billions of edges. In these large graphs, our algorithms output a valid clustering in less than
5 seconds, on 32 threads, up to an order of magnitude faster than KwikCluster. We observe how,
not unexpectedly, ClusterWild! is faster than C4, and quite surprisingly, abandoning coordination in
this parallel setting only amounts to roughly a 1% relative loss in clustering accuracy. Furthermore,
we compare against state of the art parallel CC algorithms, showing that we consistently outperform
these algorithms in terms of both running time and clustering accuracy.
Notation G denotes a graph with n vertices and m edges. G is complete and only has ±1 edges.
We denote by d_v the positive degree of a vertex, i.e., the number of vertices connected to v with
positive edges. Δ denotes the maximum positive degree of G, and N(v) denotes the positive neighborhood of v; moreover, let C_v = {v, N(v)}. Two vertices u, v are termed "friends" if u ∈ N(v)
and vice versa. We denote by π a permutation of {1, . . . , n}.
2
2
Two Parallel Algorithms for Correlation Clustering
The formal definition of correlation clustering is given below.
Correlation Clustering. Given a graph G on n vertices, partition the vertices into an arbitrary
number k of disjoint subsets C1 , . . . , Ck such that the sum of negative edges within the subsets plus
the sum of positive edges across the subsets is minimized:
OPT = min
1?k?n
min
Ci \Cj =0,8i6=j
[k
i=1 Ci ={1,...,n}
k
X
i=1
E (Ci , Ci ) +
k
k
X
X
i=1 j=i+1
E + (Ci , Cj )
where E + and E are the sets of positive and negative edges in G.
KwikCluster is a remarkably simple algorithm that approximately solves the above combinatorial
problem, and operates as follows. A random vertex v is picked, a cluster Cv is created with v and
its positive neighborhood, then the vertices in Cv are peeled from the graph, and this process is
repeated until all vertices are clustered KwikCluster can be equivalently executed, as noted by [5], if
we substitute the random choice of a vertex per peeling round, with a random order ? preassigned to
vertices, (see Alg. 1). That is, select a random permutation on vertices, then peel the vertex indexed
by ?(1), and its friends. Remove from ? the vertices in Cv and repeat this process. Having an order
among vertices makes the discussion of parallel algorithms more convenient.
C4: Parallel CC using Concurency Control. Algorithm 1 KwikCluster with ?
Suppose we now wish to run a parallel version
of KwikCluster, say on two threads: one thread 1: ? = a random permutation of {1, . . . , n}
picks vertex v indexed by ?(1) and the other 2: while V 6= ; do
thread picks u indexed by ?(2), concurrently. 3: select the vertex v indexed by ?(1)
4: Cv = {v, N (v)}
Can both vertices be cluster centers? They can, 5: Remove
clustered vertices from G and ?
iff they are not friends in G. If v and u are con- 6: end while
nected with a positive edge, then the vertex with
the smallest order wins. This is our concurency
rule no. 1. Now, assume that v and u are not friends in G, and both v and u become cluster centers.
Moreover, assume that v and u have a common, unclustered friend, say w: should w be clustered
with v, or u? We need to follow what would happen with KwikCluster in Alg. 1: w will go with
the vertex that has the smallest permutation number, in this case v. This is concurency rule no. 2.
Following the above simple rules, we develop C4, our serializable parallel CC algorithm. Since, C4
constructs the same clusters as KwikCluster (for a given ordering ?), it inherits its 3 approximation.
The above idea of identifying the cluster centers in rounds was first used in [12] to obtain a parallel
algorithm for maximal independent set (MIS).
C4, shown as Alg. 2, starts by assigning a random permutation ? to the vertices, it then samples an
active set A of n unclustered vertices; this sample is taken from the prefix of ?. After sampling
A, each of the P threads picks a vertex with the smallest order in A, then checks if that vertex can
become a cluster center. We first enforce concurrency rule no. 1: adjacent vertices cannot be cluster
centers at the same time. C4 enforces it by making each thread check the friends of the vertex, say
v, that is picked from A. A thread will check in attemptCluster whether its vertex v has any
preceding friends that are cluster centers. If there are none, it will go ahead and label v as cluster
center, and proceed with creating a cluster. If a preceding friend of v is a cluster center, then v is
labeled as not being a cluster center. If a preceding friend of v, call it u, has not yet received a
label (i.e., u is currently being processed and is not yet labeled as cluster center or not), then the
thread processing v, will wait on u to receive a label. The major technical detail is in showing that
this wait time is bounded; we show that no more than O(log n) threads can be in conflict at the
same time, using a new subgraph sampling lemma [13]. Since C4 is serializable, it has to respect
concurrency rule no. 2: if a vertex u is adjacency to two cluster centers, then it gets assigned to the
one with smaller permutation order. This is accomplished in createCluster. After processing
all vertices in A, all threads are synchronized in bulk, the clustered vertices are removed, a new
active set is sampled, and the same process is repeated until everything has been clustered. In the
following section, we present the theoretical guarantees for C4.
3
Algorithm 2 C4 & ClusterWild!
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15:
16:
17:
18:
createCluster(v):
clusterID(v) = ?(v)
for u 2 (v) \ A do
clusterID(u) = min(clusterID(u), ?(v))
end for
Input: G, ?
clusterID(1) = . . . = clusterID(n) = 1
? = a random permutation of {1, . . . , n}
while V 6= ; do
attemptCluster(v):
= maximum vertex degree in G(V )
if clusterID(u) = 1 and isCenter(v) then
A = the first ? ? n vertices in V [?].
createCluster(v)
while A 6= ; do in parallel
end if
v = first element in A
A = A {v}
isCenter(v):
if C4 then // concurrency control
for u 2 (v) do // check friends (in order of ?)
attemptCluster(v)
if ?(u) < ?(v) then // if they precede you, wait
else if ClusterWild! then // coordination free
wait until clusterID(u) 6= 1 // till clustered
createCluster(v)
if isCenter(u) then
end if
return 0 //a friend is center, so you can?t be
end while
end if
Remove clustered vertices from V and ?
end if
end while
end for
Output: {clusterID(1), . . . , clusterID(n)}.
return 1 // no earlier friends are centers, so you are
ClusterWild!: Coordination-free Correlation Clustering. ClusterWild! speeds up computation
by ignoring the first concurrency rule. It uniformly samples unclustered vertices, and builds clusters
around all of them, without respecting the rule that cluster centers cannot be friends in G. In ClusterWild!, threads bypass the attemptCluster routine; this eliminates the ?waiting? part of C4.
ClusterWild! samples a set A of vertices from the prefix of ?. Each thread picks the first ordered
vertex remaining in A, and using that vertex as a cluster center, it creates a cluster around it. It peels
away the clustered vertices and repeats the same process, on the next remaining vertex in A. At
the end of processing all vertices in A, all threads are synchronized in bulk, the clustered vertices
are removed, a new active set is sampled, and the parallel clustering is repeated. A careful analysis along the lines of [6] shows that the number of rounds (i.e., bulk synchronization steps) is only
poly-logarithmic.
Quite unsurprisingly, ClusterWild! is faster than C4. Interestingly, abandoning consistency does not
incur much loss in the approximation ratio. We show how the error introduced in the accuracy of the
solution can be bounded. We characterize this error theoretically, and show that in practice it only
translates to only a relative 1% loss in the objective. The main intuition of why ClusterWild! does
not introduce too much error is that the chance of two randomly selected vertices being friends is
small, hence the concurrency rules are infrequently broken.
3
Theoretical Guarantees
In this section, we bound the number of rounds required for each algorithms, and establish the
theoretical speedup one can obtain with P parallel threads. We proceed to present our approximation
guarantees. We would like to remind the reader that?as in relevant literature?we consider graphs
that are complete, signed, and unweighted. The omitted proofs can be found in the Appendix.
3.1
Number of rounds and running time
Our analysis follows those of [12] and [6]. The main idea is to track how fast the maximum degree
decreases in the remaining graph at the end of each round.
Lemma 1. C4 and ClusterWild! terminate after O 1? log n ? log
rounds w.h.p.
We now analyze the running time of both algorithms under a simplified BSP model. The main idea
is that the the running time of each ?super step? (i.e., round) is determined by the ?straggling? thread
(i.e., the one that gets assigned the most amount of work), plus the time needed for synchronization
at the end of each round.
Assumption 1. We assume that threads operate asynchronously within a round and synchronize at
the end of a round. A memory cell can be written/read concurrently by multiple threads. The time
4
spent per round of the algorithm is proportional to the time of the slowest thread. The cost of thread
synchronization at the end of each batch takes time O(P ), where P is the number of threads. The
total computation cost is proportional to the sum of the time spent for all rounds, plus the time spent
during the bulk synchronization step.
Under this simplified model, we show that both algorithms obtain nearly linear speedup, with ClusterWild! being faster than C4, precisely due to lack of coordination. Our main tool for analyzing C4
is a recent graph-theoretic result from [13] (Theorem 1), which guarantees that if one samples an
O(n/ ) subset of vertices in a graph, the sampled subgraph has a connected component of size at
most O(log n). Combining the above, in the appendix we show the following result.
Theorem
2. The ?theoretical running
time of C4 on P cores is upper bounded by
??
?
m+n log n
O
+ P log n ? log
as long as the number of cores P is smaller than mini nii ,
P
where nii is the size of the batch in the i-th round of each algorithm. The running time of ClusterWild! on P cores is upper bounded by O m+n
+ P log n ? log
.
P
3.2
Approximation ratio
We now proceed with establishing the approximation ratios of C4 and ClusterWild!.
C4 is serializable. It is straightforward that C4 obtains precisely the same approximation ratio as
KwikCluster. One has to simply show that for any permutation ?, KwikCluster and C4 will output
the same clustering. This is indeed true, as the two simple concurrency rules mentioned in the
previous section are sufficient for C4 to be equivalent to KwikCluster.
Theorem 3. C4 achieves a 3 approximation ratio, in expectation.
ClusterWild! as a serial procedure on a noisy graph. Analyzing ClusterWild! is a bit more
involved. Our guarantees are based on the fact that ClusterWild! can be treated as if one was
running a peeling algorithm on a ?noisy? graph. Since adjacent active vertices can still become
cluster centers in ClusterWild!, one can view the edges between them as ?deleted,? by a somewhat
unconventional adversary. We analyze this new, noisy graph and establish our theoretical result.
Theorem 4. ClusterWild! achieves a (3 + ?)?OPT+O(??n?log2 n) approximation, in expectation.
We provide a sketch of the proof, and delegate the details to the appendix. Since ClusterWild!
ignores the edges among active vertices, we treat these edges as deleted. In our main result, we
quantify the loss of clustering accuracy that is caused by ignoring these edges. Before we proceed,
we define bad triangles, a combinatorial structure that is used to measure the clustering quality of a
peeling algorithm.
Definition 1. A bad triangle in G is a set of three vertices, such that two pairs are joined with a
positive edge, and one pair is joined with a negative edge. Let Tb denote the set of bad triangles in
G.
To quantify the cost of ClusterWild!, we make the below observation.
Lemma 5. The cost of any greedy algorithm that picks a vertex v (irrespective of the sampling
order), creates Cv , peels it away and repeats, is equal to the number of bad triangles adjacent to
each cluster center v.
? denote the random graph induced by deleting all edges between active vertices per
Lemma 6. Let G
round, for a given run of ClusterWild!, and let ?new denote the number of additional bad triangles
? has compared to G. Then, the expected cost of ClusterWild! can be upper bounded as
thatP
G
E
t2Tb 1Pt + ?new , where Pt is the event that triangle t, with end points i, j, k, is bad, and at
least one of its end points becomes active, while t is still part of the original unclustered graph.
Proof. We begin by bounding the second term E{?new }, by considering the number of new bad
triangles ?new,i created at each round i:
E {?new,i } ?
X
(u,v)2E
P(u, v 2 Ai )?|N (u)[N (v)| ?
X ? ? ?2
(u,v)2E
5
i
?2?
i
?
2??2 ?
E
i
?
2??2 ?n.
Using the result that ClusterWild! terminates after at most O( 1? log n log ) rounds, we get that1
E {?new } ? O(? ? n ? log2 n).
P
P
We are left to bound E
t2Tb 1Pt =
t2Tb pt . To do that we use the following lemma.
P
P
pt
Lemma 7. If pt satisfies 8e,
t:e?t2Tb ? ? 1, then,
t2Tb pt ? ? ? OP T.
Proof. Let B? be one (of the
many) sets
thatPattribute a +1 in theP
cost of an optimal
P possibly P
P of edges
pt
pt
pt
algorithm. Then, OPT = e2B? 1
e2B?
t:e?t2Tb ? =
t2Tb |B? \ t| ?
t2Tb ? .
| {z }
1
Now, as with [6],
P we will simply have to boundSthe expectation of the bad triangles, adjacent to
an edge (u, v): t:{u,v}?t2Tb 1Pt . Let Su,v = {u,v}?t2Tb t be the union of the sets of nodes of
the bad triangles that contain both vertices u and v. Observe that if some w 2 S\{u, v} becomes
active before u and v, then a cost of 1 (i.e., the cost of the bad triangle {u, v, w}) is incurred. On
the other hand, if either u or v, or both, are selected as pivots in some round, then Cu,v can be
as high as |S| 2, i.e., at most equal to all bad triangles containing the edge {u, v}. Let Auv =
{u or v are activated before any other vertices in Su,v }. Then,
?
?
C
E [Cu,v ] = E [ Cu,v | Au,v ] ? P(Au,v ) + E Cu,v | AC
u,v ? P(Au,v )
? 1 + (|S|
2) ? P({u, v} \ A =
6 ;|S \ A 6= ;) ? 1 + 2|S| ? P(v \ A =
6 ;|S \ A 6= ;)
where the last inequality is obtained by a union bound over u and v. We now bound the following
probability:
P { v 2 A| S \ A 6= ;} =
P {v 2 A} ? P {S \ A 6= ; |v 2 A }
P {v 2 A}
=
=
P {S \ A =
6 ;}
P {S \ A 6= ;}
1
P {v 2 A}
.
P {S \ A = ;}
Observe that P {v 2 A} = ? , hence we need to upper bound P {S \ A = ;}. The probability, per
round, that no positive neighbors in S become activated is upper bounded by
"?
? ?
?|S|
?n/P #|S|n/P ? ?|S|n/P
|S| ?
n |S|
Y
P
P
P
1
P
=
1
? 1
=
1
?
.
n
n
|S|
+
t
n
n
e
P
t=1
Hence, the probability can be upper bounded as
|S|P { v \ A =
6 ;| S \ A =
6 ;} ?
? ? |S|/
.
1 e ??|S|/
We know that |S| ? 2 ? + 2 and also ? ? 1. Then, 0 ? ? ? |S| ? ? ? 2 + 2 n ? 4 Hence, we have
o
P
4?
E(Cu,v ) ? 1 + 2 ? 1 exp{
?
t2Tb 1Pt + ?new
4?} . The overall expectation is then bounded by E
?
1+2?
4??
1 e 4??
?
? OPT + ? ? n ? ln(n/ ) ? log
our approximation ratio for ClusterWild!.
3.3
? (3 + ?) ? OPT + O(? ? n ? log2 n) which establishes
BSP Algorithms as a Proxy for Asynchronous Algorithms
We would like to note that the analysis under the BSP model can be a useful proxy for the performance of completely asynchronous variants of our algorithms. Specifically, see Alg. 3, where we
remove the synchronization barriers.
The only difference between the asynchronous execution in Alg. 3, compared to Alg. 2, is the complete lack of bulk synchronization, at the end of the processing of each active set A. Although the
analysis of the BSP variants of the algorithms is tractable, unfortunately analyzing precisely the
speedup of the asynchronous C4 and the approximation guarantees for the asynchronous ClusterWild! is challenging. However, in our experimental section we test the completely asynchronous
algorithms against the BSP algorithms of the previous section, and observe that they perform quite
similarly both in terms of accuracy of clustering, and running times.
1
We skip the constants to simplify the presentation; however they are all smaller than 10.
6
4
Related Work
Correlation clustering was formally introduced
by Bansal et al. [14]. In the general case, minimizing disagreements is NP-hard and hard to
approximate within an arbitrarily small constant (APX-hard) [14, 15]. There are two variations of the problem: i) CC on complete graphs
where all edges are present and all weights are
?1, and ii) CC on general graphs with arbitrary
edge weights. Both problems are hard, however the general graph setup seems fundamentally harder. The best known approximation ratio for the latter is O(log n), and a reduction to
the minimum multicut problem indicates that
any improvement to that requires fundamental
breakthroughs in theoretical algorithms [16].
Algorithm 3 C4 & ClusterWild!
(asynchronous execution)
1: Input: G
2: clusterID(1) = . . . = clusterID(n) = 1
3: ? = a random permutation of {1, . . . , n}
4: while V 6= ; do
5: v = first element in V
6: V = V {v}
7: if C4 then // concurrency control
8:
attemptCluster(v)
9: else if ClusterWild! then // coordination free
10:
createCluster(v)
11: end if
12: Remove clustered vertices from V and ?
13: end while
14: Output: {clusterID(1), . . . , clusterID(n)}.
In the case of complete unweighted graphs, a long series of results establishes a 2.5 approximation
via a rounded linear program (LP) [10]. A recent result establishes a 2.06 approximation using an
elegant rounding to the same LP relaxation [17]. By avoiding the expensive LP, and by just using the
rounding procedure of [10] as a basis for a greedy algorithm yields KwikCluster: a 3 approximation
for CC on complete unweighted graphs.
Variations of the cost metric for CC change the algorithmic landscape: maximizing agreements (the
dual measure of disagreements) [14, 18, 19], or maximizing the difference between the number of
agreements and disagreements [20, 21], come with different hardness and approximation results.
There are also several variants: chromatic CC [22], overlapping CC [23], small number of clusters
CC with added constraints that are suitable for some biology applications [24].
The way C4 finds the cluster centers can be seen as a variation of the MIS algorithm of [12]; the
main difference is that in our case, we ?passively? detect the MIS, by locking on memory variables,
and by waiting on preceding ordered threads. This means, that a vertex only ?pushes? its cluster
ID and status (cluster center/clustered/unclustered) to its friends, versus ?pulling? (or asking) for its
friends? cluster status. This saves a substantial amount of computational effort.
5
Experiments
Our parallel algorithms were all implemented2 in Scala?we defer a full discussion of the implementation details to Appendix C. We ran all our experiments on Amazon EC2?s r3.8xlarge (32
vCPUs, 244Gb memory) instances, using 1-32 threads. The real graphs listed in Table 1 were each
Graph
DBLP-2011
ENWiki-2013
UK-2005
IT-2004
WebBase-2001
# vertices
# edges
986,324
4,206,785
39,459,925
41,291,594
118,142,155
6,707,236
101,355,853
921,345,078
1,135,718,909
1,019,903,190
Description
2011 DBLP co-authorship network [25, 26, 27].
2013 link graph of English part of Wikipedia [25, 26, 27].
2005 crawl of the .uk domain [25, 26, 27].
2004 crawl of the .it domain [25, 26, 27].
2001 crawl by WebBase crawler [25, 26, 27].
Table 1: Graphs used in the evaluation of our parallel algorithms.
tested with 100 different random ? orderings. We measured the runtimes, speedups (ratio of runtime on 1 thread to runtime on p threads), and objective values obtained by our parallel algorithms.
For comparison, we also implemented the algorithm presented in [6], which we denote as CDK for
short3 . Values of ? = 0.1, 0.5, 0.9 were used for C4 BSP, ClusterWild! BSP and CDK. In the interest
of space, we present only representative plots of our results; full results are given in our appendix.
2
Code available at https://github.com/pxinghao/ParallelCorrelationClustering.
CDK was only tested on the smaller graphs of DBLP-2011 and ENWiki-2013, because CDK was prohibitively slow, often 2-3 orders of magnitude slower than C4, ClusterWild!, and even KwikCluster.
3
7
Mean Runtime, UK?2005
Mean runtime / ms
Mean runtime / ms
Mean Runtime, IT?2004
5
10
Serial
C4 As
C4 BSP ?=0.1
CW As
CW BSP ?=0.1
4
10
Mean Speedup, Webbase?2001
Serial
C4 As
C4 BSP ?=0.5
CW As
CW BSP ?=0.5
Ideal
C4 As
C4 BSP ?=0.9
CW As
CW BSP ?=0.9
16
14
12
Speedup
5
10
4
10
10
8
6
4
2
3
1
2
4
8
Number of threads
16
10
32
(a) Mean runtimes, UK-2005, ? = 0.1
1
4000
2000
0.08
0.07
% of blocked vertices
Number of rounds
6000
16
0.06
0.2
0.4
0.04
0.03
0.02
0.6
0.8
1
?
(d)
Mean number of synchronization
rounds for BSP algorithms
5
10
15
20
25
Number of threads
30
10
15
20
25
Number of threads
30
35
(c) Mean speedup, WebBase, ? = 0.9
1.12
0.05
0
0
5
Objective Value Relative to Serial, DBLP?2011
C4 BSP ?=0.9 Min
C4 BSP ?=0.9 Mean
C4 BSP ?=0.9 Max
C4 BSP Min
C4 BSP Mean
C4 BSP Max
0.01
0
0
0
0
32
% of Blocked Vertices, ENWiki?2013
C4/CW BSP, UK?2005
C4/CW BSP, IT?2004
C4/CW BSP, Webbase?2001
C4/CW BSP, DBLP?2011
CDK, DBLP?2011
C4/CW BSP, ENWiki?2013
CDK, ENWiki?2013
8000
4
8
Number of threads
(b) Mean runtimes, IT-2004, ? = 0.5
Mean Number of Rounds
10000
2
Algo obj value : Serial obj value
10
3
1.1
1.08
1.06
1.04
1.02
1
0
35
(e)
Percent of blocked vertices for C4,
ENWiki-2013. BSP run with ? = 0.9.
CW BSP ?=0.9 mean
CW BSP ?=0.9 median
CW As mean
CW As median
CDK ?=0.9 mean
CDK ?=0.9 median
5
10
15
20
25
Number of threads
30
35
(f)
Median objective values, DBLP-2011.
CW BSP and CDK run with ? = 0.9
Figure 2: In the above figures, ?CW? is short for ClusterWild!, ?BSP? is short for the bulk-synchronous variants
of the parallel algorithms, and ?As? is short for the asynchronous variants.
Runtimes & Speedups: C4 and ClusterWild! are initially slower than serial, due to the overheads
required for atomic operations in the parallel setting. However, all our parallel algorithms outperform KwikCluster with 3-4 threads. As more threads are added, the asychronous variants become
faster than their BSP counterparts as there are no synchronization barrriers. The difference between
BSP and asychronous variants is greater for smaller ?. ClusterWild! is also always faster than
C4 since there are no coordination overheads. The asynchronous algorithms are able to achieve a
speedup of 13-15x on 32 threads. The BSP algorithms have a poorer speedup ratio, but nevertheless
achieve 10x speedup with ? = 0.9.
Synchronization rounds: The main overhead of the BSP algorithms lies in the need for synchronization rounds. As ? increases, the amount of synchronization decreases, and with ? = 0.9, our
algorithms have less than 1000 synchronization rounds, which is small considering the size of the
graphs and our multicore setting.
Blocked vertices: Additionally, C4 incurs an overhead in the number of vertices that are blocked
waiting for earlier vertices to complete. We note that this overhead is extremely small in practice?
on all graphs, less than 0.2% of vertices are blocked. On the larger and sparser graphs, this drops to
less than 0.02% (i.e., 1 in 5000) of vertices.
Objective value: By design, the C4 algorithms also return the same output (and thus objective
value) as KwikCluster. We find that ClusterWild! BSP is at most 1% worse than serial across all
graphs and values of ?. The behavior of asynchronous ClusterWild! worsens as threads are added,
reaching 15% worse than serial for one of the graphs. Finally, on the smaller graphs we were able
to test CDK on, CDK returns a worse median objective value than both ClusterWild! variants.
6
Conclusions and Future Directions
In this paper, we have presented two parallel algorithms for correlation clustering with nearly linear
speedups and provable approximation ratios. Overall, the two approaches support each other?when
C4 is relatively fast relative to ClusterWild!, we may prefer C4 for its guarantees of accuracy, and
when ClusterWild! is accurate relative to C4, we may prefer ClusterWild! for its speed.
In the future, we intend to implement our algorithms in the distributed environment, where synchronization and communication often account for the highest cost. Both C4 and ClusterWild! are
well-suited for the distributed environment, since they have polylogarithmic number of rounds.
8
References
[1] Ahmed K Elmagarmid, Panagiotis G Ipeirotis, and Vassilios S Verykios. Duplicate record detection: A survey. Knowledge and Data
Engineering, IEEE Transactions on, 19(1):1?16, 2007.
[2] Arvind Arasu, Christopher R?e, and Dan Suciu. Large-scale deduplication with constraints using dedupalog. In Data Engineering, 2009.
ICDE?09. IEEE 25th International Conference on, pages 952?963. IEEE, 2009.
[3] Micha Elsner and Warren Schudy. Bounding and comparing methods for correlation clustering beyond ilp. In Proceedings of the
Workshop on Integer Linear Programming for Natural Langauge Processing, pages 19?27. Association for Computational Linguistics,
2009.
[4] Bilal Hussain, Oktie Hassanzadeh, Fei Chiang, Hyun Chul Lee, and Ren?ee J Miller. An evaluation of clustering algorithms in duplicate
detection. Technical report, 2013.
[5] Francesco Bonchi, David Garcia-Soriano, and Edo Liberty. Correlation clustering: from theory to practice. In Proceedings of the 20th
ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1972?1972. ACM, 2014.
[6] Flavio Chierichetti, Nilesh Dalvi, and Ravi Kumar. Correlation clustering in mapreduce. In Proceedings of the 20th ACM SIGKDD
international conference on Knowledge discovery and data mining, pages 641?650. ACM, 2014.
[7] Bo Yang, William K Cheung, and Jiming Liu. Community mining from signed social networks. Knowledge and Data Engineering,
IEEE Transactions on, 19(10):1333?1348, 2007.
[8] N Cesa-Bianchi, C Gentile, F Vitale, G Zappella, et al. A correlation clustering approach to link classification in signed networks. In
Annual Conference on Learning Theory, pages 34?1. Microtome, 2012.
[9] Amir Ben-Dor, Ron Shamir, and Zohar Yakhini. Clustering gene expression patterns. Journal of computational biology, 6(3-4):281?297,
1999.
[10] Nir Ailon, Moses Charikar, and Alantha Newman. Aggregating inconsistent information: ranking and clustering. Journal of the ACM
(JACM), 55(5):23, 2008.
[11] Xinghao Pan, Joseph E Gonzalez, Stefanie Jegelka, Tamara Broderick, and Michael Jordan. Optimistic concurrency control for distributed unsupervised learning. In Advances in Neural Information Processing Systems, pages 1403?1411, 2013.
[12] Guy E Blelloch, Jeremy T Fineman, and Julian Shun. Greedy sequential maximal independent set and matching are parallel on average.
In Proceedings of the twenty-fourth annual ACM symposium on Parallelism in algorithms and architectures, pages 308?317. ACM, 2012.
[13] Michael Krivelevich. The phase transition in site percolation on pseudo-random graphs. arXiv preprint arXiv:1404.5731, 2014.
[14] Nikhil Bansal, Avrim Blum, and Shuchi Chawla. Correlation clustering. In 2013 IEEE 54th Annual Symposium on Foundations of
Computer Science, pages 238?238. IEEE Computer Society, 2002.
[15] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitative information. In Foundations of Computer
Science, 2003. Proceedings. 44th Annual IEEE Symposium on, pages 524?533. IEEE, 2003.
[16] Erik D Demaine, Dotan Emanuel, Amos Fiat, and Nicole Immorlica. Correlation clustering in general weighted graphs. Theoretical
Computer Science, 361(2):172?187, 2006.
[17] Shuchi Chawla, Konstantin Makarychev, Tselil Schramm, and Grigory Yaroslavtsev. Near optimal LP rounding algorithm for correlation
clustering on complete and complete k-partite graphs. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of
Computing, STOC ?15, pages 219?228, 2015.
[18] Chaitanya Swamy. Correlation clustering: maximizing agreements via semidefinite programming. In Proceedings of the fifteenth annual
ACM-SIAM symposium on Discrete algorithms, pages 526?527. Society for Industrial and Applied Mathematics, 2004.
[19] Ioannis Giotis and Venkatesan Guruswami. Correlation clustering with a fixed number of clusters. In Proceedings of the seventeenth
annual ACM-SIAM symposium on Discrete algorithm, pages 1167?1176. ACM, 2006.
[20] Moses Charikar and Anthony Wirth. Maximizing quadratic programs: extending grothendieck?s inequality. In Foundations of Computer
Science, 2004. Proceedings. 45th Annual IEEE Symposium on, pages 54?60. IEEE, 2004.
[21] Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. Quadratic forms on graphs. Inventiones mathematicae, 163(3):
499?522, 2006.
[22] Francesco Bonchi, Aristides Gionis, Francesco Gullo, and Antti Ukkonen. Chromatic correlation clustering. In Proceedings of the 18th
ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1321?1329. ACM, 2012.
[23] Francesco Bonchi, Aristides Gionis, and Antti Ukkonen. Overlapping correlation clustering. In Data Mining (ICDM), 2011 IEEE 11th
International Conference on, pages 51?60. IEEE, 2011.
[24] Gregory J Puleo and Olgica Milenkovic. Correlation clustering with constrained cluster sizes and extended weights bounds. arXiv
preprint arXiv:1411.0547, 2014.
[25] P. Boldi and S. Vigna. The WebGraph framework I: Compression techniques. In WWW, 2004.
[26] P. Boldi, M. Rosa, M. Santini, and S. Vigna. Layered label propagation: A multiresolution coordinate-free ordering for compressing
social networks. In WWW. ACM Press, 2011.
[27] P. Boldi, B. Codenotti, M. Santini, and S. Vigna. Ubicrawler: A scalable fully distributed web crawler. Software: Practice & Experience,
34(8):711?726, 2004.
9
| 5814 |@word worsens:1 cu:5 version:2 milenkovic:1 compression:1 seems:2 pick:6 incurs:1 solid:1 harder:1 reduction:1 liu:1 series:1 nii:2 interestingly:1 prefix:2 e2b:2 bilal:1 com:1 comparing:1 assigning:1 yet:2 written:1 boldi:3 partition:2 happen:1 remove:5 plot:1 drop:1 greedy:3 selected:2 item:9 amir:1 core:5 short:3 record:1 chiang:1 node:1 ron:1 along:1 become:6 symposium:7 qualitative:1 naor:1 overhead:5 dan:1 dalvi:1 inside:2 bonchi:3 webbase:5 introduce:1 theoretically:1 pairwise:1 assaf:1 expected:1 hardness:1 indeed:1 behavior:1 considering:2 becomes:2 begin:1 notation:1 moreover:2 bounded:8 what:1 minimizes:1 finding:1 guarantee:10 pseudo:1 berkeley:2 runtime:6 prohibitively:1 classifier:1 uk:5 control:7 positive:12 before:3 engineering:3 local:1 treat:1 aggregating:1 preassigned:1 analyzing:4 id:1 parallelize:1 establishing:1 approximately:1 signed:5 plus:4 might:2 au:3 studied:1 challenging:1 schudy:1 co:2 micha:1 abandoning:2 seventeenth:1 enforces:1 atomic:1 practice:6 union:2 implement:2 procedure:2 reject:1 convenient:1 ups:1 matching:1 wait:4 get:3 cannot:2 layered:1 context:2 applying:1 www:2 equivalent:1 missing:1 center:21 maximizing:4 primitive:1 go:2 straightforward:1 nicole:1 survey:1 resolution:1 amazon:1 identifying:1 rule:9 notion:2 variation:3 coordinate:1 papailiopoulos:1 pt:12 suppose:1 shamir:1 programming:2 us:2 agreement:3 element:2 infrequently:1 expensive:1 cut:1 database:1 labeled:2 disagreeing:2 preprint:2 unexpectedly:1 compressing:1 connected:3 hinders:1 keyword:3 ordering:3 decrease:2 removed:2 highest:1 thatp:1 mentioned:1 benjamin:1 intuition:1 broken:1 respecting:1 locking:1 substantial:1 environment:2 broderick:1 algo:1 concurrency:11 incur:1 creates:2 completely:2 triangle:11 basis:1 fast:2 newman:1 neighborhood:4 quite:3 heuristic:1 larger:1 say:4 nikhil:1 favor:1 statistic:1 apx:1 noisy:4 abandon:2 asynchronously:1 seemingly:1 ran:1 maximal:2 relevant:1 combining:1 iff:1 subgraph:2 achieve:4 till:1 multiresolution:1 straggling:1 description:1 billion:2 chul:1 cluster:40 extending:1 ben:1 spent:3 develop:2 friend:16 ac:1 alon:1 measured:1 multicore:1 op:1 minor:1 received:1 solves:1 implemented:1 skip:1 come:2 implies:1 synchronized:2 quantify:2 direction:1 liberty:1 opinion:1 everything:1 adjacency:1 shun:1 samet:1 clustered:13 archetypal:1 blelloch:1 opt:6 disentanglement:1 around:2 exp:1 algorithmic:1 makarychev:3 major:2 achieves:6 smallest:3 omitted:1 precede:1 panagiotis:1 combinatorial:2 label:4 currently:1 coordination:11 percolation:1 vice:1 create:1 establishes:3 tool:1 amos:1 weighted:1 hope:1 concurrently:3 always:1 aim:1 super:1 ck:1 reaching:1 chromatic:2 inherits:1 waif:1 improvement:1 consistently:1 indicates:2 check:4 slowest:1 contrast:1 sigkdd:3 industrial:1 detect:2 asychronous:2 initially:1 provably:3 overall:2 among:2 dual:1 classification:1 priori:1 art:3 breakthrough:1 constrained:1 uc:2 equal:3 construct:1 rosa:1 having:2 sampling:3 runtimes:4 biology:2 langauge:1 jiming:1 look:1 unsupervised:1 nearly:4 future:2 minimized:1 report:1 np:1 simplify:1 duplicate:3 employ:1 fundamentally:1 randomly:1 phase:1 dor:1 william:1 detection:3 peel:4 interest:2 mining:5 evaluation:3 semidefinite:1 activated:2 suciu:1 accurate:1 poorer:1 edge:35 experience:1 indexed:4 chaitanya:1 isolating:1 theoretical:9 instance:2 earlier:2 konstantin:2 asking:1 cost:12 vertex:72 subset:4 rounding:3 seventh:1 too:1 motivating:1 characterize:1 eec:1 gregory:1 combined:1 recht:1 fundamental:1 oymak:1 ec2:1 international:5 siam:2 lee:1 nilesh:1 
rounded:1 michael:3 together:3 cesa:1 containing:1 possibly:1 multicut:1 worse:3 guy:1 creating:1 return:4 toy:1 account:1 potential:1 jeremy:1 schramm:1 ioannis:1 vcpus:1 gionis:2 yury:1 caused:1 ranking:1 view:1 picked:2 optimistic:1 deduplication:2 analyze:2 red:1 start:1 parallel:28 defer:1 contribution:1 partite:1 accuracy:9 miller:1 yield:1 landscape:1 none:1 ren:1 cc:19 mathematicae:1 edo:1 definition:2 against:2 inventiones:1 involved:1 tamara:1 associated:1 mi:3 proof:4 con:1 sampled:3 emanuel:1 popular:1 color:1 knowledge:5 cj:2 fiat:1 routine:1 follow:1 tension:1 scala:1 furthermore:1 just:1 correlation:23 until:4 sketch:2 hand:1 web:1 trust:1 su:2 christopher:1 overlapping:2 bsp:35 lack:2 propagation:1 chat:1 quality:1 pulling:1 building:1 contain:1 true:1 counterpart:1 hence:4 assigned:2 read:1 alantha:1 round:34 adjacent:4 during:1 noted:1 authorship:1 m:2 bansal:2 complete:10 demonstrate:2 theoretic:1 performs:2 percent:1 recently:2 common:1 wikipedia:1 association:1 refer:1 significant:1 blocked:6 versa:1 cv:6 ai:1 consistency:6 mathematics:1 i6:1 similarly:1 similarity:6 nected:1 recent:2 apart:2 termed:1 inequality:2 arbitrarily:2 santini:2 accomplished:1 flavio:1 seen:1 minimum:1 additional:1 somewhat:1 preceding:4 greater:1 gentile:1 elsner:1 grigory:1 forty:1 redundant:1 venkatesan:2 dashed:1 ii:1 verykios:1 multiple:1 full:2 technical:2 faster:6 ahmed:1 arvind:1 long:2 icdm:1 serial:9 variant:11 basic:1 scalable:2 tselil:1 metric:2 expectation:5 fifteenth:1 arxiv:4 iteration:1 cell:1 c1:1 receive:1 remarkably:1 else:2 median:5 source:1 noga:1 eliminates:1 operate:1 induced:1 elegant:2 inconsistent:1 jordan:2 call:1 obj:2 delegate:1 integer:1 ee:1 ideal:1 yang:1 near:1 hussain:1 architecture:1 idea:3 translates:1 soriano:1 pivot:1 bottleneck:2 thread:34 whether:1 synchronous:1 expression:1 guruswami:2 gb:1 effort:3 proceed:4 krivelevich:1 useful:1 listed:1 amount:4 extensively:1 processed:1 simplest:2 generate:1 http:1 outperform:4 moses:3 disjoint:2 per:6 track:1 bulk:6 discrete:2 waiting:3 group:3 nevertheless:1 blum:1 achieving:1 deleted:2 ravi:1 graph:45 relaxation:1 icde:1 sum:3 enforced:1 run:6 you:3 fourth:1 shuchi:2 reader:1 gonzalez:1 appendix:5 scaling:2 prefer:2 bit:1 bound:6 pay:1 quadratic:2 auv:1 annual:8 ahead:1 serializable:3 precisely:3 constraint:2 fei:1 software:1 speed:3 min:5 extremely:1 kumar:1 passively:1 relatively:1 speedup:15 charikar:3 ailon:1 according:1 across:3 smaller:6 pan:2 terminates:1 lp:4 joseph:1 making:1 happens:1 dv:1 taken:1 ln:1 r3:1 ilp:1 needed:1 know:1 unconventional:1 tractable:1 serf:1 end:19 available:1 operation:1 xinghao:2 observe:4 away:2 enforce:2 disagreement:4 chawla:2 save:1 batch:2 slower:2 swamy:1 substitute:1 original:1 denotes:3 clustering:45 running:11 remaining:3 linguistics:1 log2:4 build:1 establish:4 society:2 webgraph:1 objective:7 intend:1 added:3 win:1 cw:17 link:2 entity:5 vigna:3 gracefully:1 consensus:1 kannan:1 provable:4 enforcing:1 erik:1 code:1 elmagarmid:1 remind:1 mini:1 julian:1 ratio:17 balance:1 minimizing:1 innovation:1 equivalently:1 setup:1 unfortunately:2 executed:2 cdk:11 stoc:1 negative:3 implementation:1 design:1 twenty:1 perform:1 bianchi:1 upper:6 observation:1 francesco:4 hyun:1 extended:1 communication:1 dotan:1 arbitrary:2 community:2 introduced:2 david:1 pair:2 required:2 giotis:1 conflict:1 c4:59 conflicting:1 polylogarithmic:3 zohar:1 beyond:2 serializability:1 adversary:1 below:2 able:2 dimitris:1 pattern:1 parallelism:1 unclustered:5 tb:1 program:2 max:2 memory:3 deleting:1 
event:1 suitable:1 serially:1 treated:1 synchronize:1 ipeirotis:1 natural:1 zappella:1 github:1 created:2 irrespective:1 stefanie:1 grothendieck:1 nir:1 literature:1 mapreduce:2 discovery:3 relative:5 unsurprisingly:1 synchronization:13 loss:7 fully:1 permutation:9 ukkonen:2 proportional:2 demaine:1 versus:2 foundation:3 incurred:1 degree:4 jegelka:1 sufficient:1 proxy:2 olgica:1 classifying:1 bypass:1 repeat:4 surprisingly:1 free:8 last:1 asynchronous:10 english:1 antti:2 formal:1 warren:1 neighbor:1 barrier:1 benefit:2 distributed:6 valid:1 xlarge:1 unweighted:3 crawl:3 ignores:1 author:1 commonly:1 transition:1 simplified:2 spam:1 social:2 transaction:3 approximate:1 obtains:2 compact:1 status:2 gene:2 active:9 amplab:1 thep:1 search:3 why:1 table:2 additionally:1 aristides:2 terminate:1 inherently:2 ignoring:2 alg:6 poly:3 anthony:2 domain:2 main:8 big:1 bounding:2 repeated:3 site:1 representative:1 slow:1 chierichetti:1 wish:1 lie:1 wirth:2 peeling:6 theorem:4 bad:12 showing:2 list:1 yakhini:1 yaroslavtsev:1 grouping:1 workshop:1 avrim:1 sequential:3 ci:5 dissimilarity:1 magnitude:2 ramchandran:1 execution:2 push:1 dblp:7 sparser:1 suited:1 logarithmic:4 garcia:1 simply:2 jacm:1 ordered:2 joined:2 bo:1 chance:1 satisfies:1 acm:14 goal:3 presentation:1 cheung:1 careful:1 experimentally:2 hard:4 change:1 determined:2 specifically:1 operates:1 uniformly:1 lemma:6 called:1 total:1 experimental:2 vitale:1 meaningful:1 select:2 formally:1 immorlica:1 support:1 latter:1 dissimilar:3 crawler:2 tested:2 avoiding:1 |
5,319 | 5,815 | Fast Bidirectional Probability Estimation in Markov
Models
Siddhartha Banerjee ?
[email protected]
Peter Lofgren?
[email protected]
Abstract
We develop a new bidirectional algorithm for estimating Markov chain multi-step
transition probabilities: given a Markov chain, we want to estimate the probability of hitting a given target state in ` steps after starting from a given source
distribution. Given the target state t, we use a (reverse) local power iteration to
construct an ?expanded target distribution?, which has the same mean as the quantity we want to estimate, but a smaller variance ? this can then be sampled efficiently by a Monte Carlo algorithm. Our method extends to any Markov chain on
a discrete (finite or countable) state-space, and can be extended to compute functions of multi-step transition probabilities such as PageRank, graph diffusions, hitting/return times, etc. Our main result is that in ?sparse? Markov Chains ? wherein
the number of transitions between states is comparable to the number of states ?
the running time of our algorithm for a uniform-random target node is order-wise
smaller than Monte Carlo and power iteration based algorithms;
in particular, our
?
method can estimate a probability p using only O(1/ p) running time.
1
Introduction
Markov chains are one of the workhorses of stochastic modeling, finding use across a variety of
applications ? MCMC algorithms for simulation and statistical inference; to compute network centrality metrics for data mining applications; statistical physics; operations management models for
reliability, inventory and supply chains, etc. In this paper, we consider a fundamental problem associated with Markov chains, which we refer to as the multi-step transition probability estimation
(or MSTP-estimation) problem: given a Markov Chain on state space S with transition matrix P ,
an initial source distribution ? over S, a target state t ? S and a fixed length `, we are interested in
computing the `-step transition probability from ? to t. Formally, we want to estimate:
p`? [t] := h?P ` , et i = ?P ` eTt ,
(1)
where et is the indicator vector of state t. A natural parametrization for the complexity of MSTPestimation is in terms of the minimum transition probabilities we want to detect: given a desired
minimum detection threshold ?, we want algorithms that give estimates which guarantee small relative error for any (?, t, `) such that p`? [t] > ?.
Parametrizing in terms of the minimum detection threshold ? can be thought of as benchmarking
against a standard Monte Carlo algorithm, which estimates p`? [t] by sampling independent `-step
paths starting from states sampled from ?. An alternate technique for MSTP-estimation is based on
linear algebraic iterations, in particular, the (local) power iteration. We discuss these in more detail
in Section 1.2. Crucially, however, both these techniques have a running time of ?(1/?) for testing
if p`? [t] > ? (cf. Section 1.2).
?
Siddhartha Banerjee is an assistant professor at the School of Operations Research and Information Engineering at Cornell (http://people.orie.cornell.edu/sbanerjee).
?
Peter Lofgren is a graduate student in the Computer Science Department at Stanford (http://cs.
stanford.edu/people/plofgren/).
1
1.1
Our Results
To the best of our knowledge, our work gives the first bidirectional algorithm for MSTP-estimation
which works for general discrete state-space Markov chains1 . The algorithm we develop is very
simple, both in terms of implementation and analysis. Moreover, we prove that in many settings, it
is order-wise faster than existing techniques.
Our algorithm consists of two distinct forward and reverse components, which are executed sequentially. In brief, the two components proceed as follows:
? Reverse-work: Starting from the target node t, we perform a sequence of reverse local power
iterations ? in particular, we use the REVERSE-PUSH operation defined in Algorithm 1.
? Forward-work: We next sample a number of random walks of length `, starting from ? and
transitioning according to P , and return the sum of residues on the walk as an estimate of p`? [t].
This full algorithm, which we refer to as the Bidirectional-MSTP estimator, is formalized in
Algorithm 2. It works for all countable-state Markov chains, giving the following accuracy result:
Theorem 1 (For details, refer Section 2.3). Given any Markov chain P , source distribution ?,
terminal state t, length `, threshold ? and relative error , Bidirectional-MSTP (Algorithm 2)
b `? [t] for p`? [t], which, with high probability, satisfies:
returns an unbiased estimate p
`
p
b ? [t] ? p`? [t] < max p`? [t], ? .
Since we dynamically adjust the number of REVERSE-PUSH operations to ensure that all residues
are small, the proof of the above theorem follows from straightforward concentration bounds.
Since Bidirectional-MSTP combines local power iteration and Monte Carlo techniques, a natural question is when the algorithm is faster than both. It is easy to to construct scenarios where the
runtime of Bidirectional-MSTP is comparable to its two constituent algorithms ? for example,
if t has more than 1/? in-neighbors. Surprisingly, however, we show that in sparse Markov chains
and for typical target states, Bidirectional-MSTP is order-wise faster:
Theorem 2 (For details, refer Section 2.3). Given any Markov chain P , source distribution ?,
length `, threshold ? and desired accuracy ; then for a uniform q
random choice of t ? S, the
e 3/2 d/?), where d is the average
Bidirectional-MSTP algorithm has a running time of O(`
number of neighbors of nodes in S.
?
Thus, for typical targets, we can estimate transition probabilities of order ? in time only O(1/ ?).
Note that we do not need for every state that the number of neighboring states is small, but rather,
that they are small on average ? for example, this is true in ?power-law? networks, where some
nodes have very high degree, but the average degree is small. The proof of this result is based on a
modification of an argument in [2] ? refer Section 2.3 for details.
Estimating transition probabilities to a target state is one of the fundamental primitives in Markov
chain models ? hence, we believe that our algorithm can prove useful in a variety of application
domains. In Section 3, we briefly describe how to adapt our method for some of these applications ? estimating hitting/return times and stationary probabilities, extensions to non-homogenous
Markov chains (in particular, for estimating graph diffusions and heat kernels), connections to local algorithms and expansion testing. In addition, our MSTP-estimator could be useful in several
other applications ? estimating ruin probabilities in reliability models, buffer overflows in queueing
systems, in statistical physics simulations, etc.
1.2
Existing Approaches for MSTP-Estimation
There are two main techniques used for MSTP-estimation. The first is a natural Monte Carlo algorithm: we estimate p`? [t] by sampling independent `-step paths, each starting from a random
state sampled from ?. A simple concentration argument shows that for a given value of ?, we need
e
?(1/?)
samples to get an accurate estimate of p`? [t], irrespective of the choice of t, and the structure
1
Bidirectional estimators have been developed before for reversible Markov chains [1]; our method however
is not only more general, but conceptually and operationally simpler than these techniques (cf. Section 1.2).
2
of P . Note that this algorithm is agnostic of the terminal state t; it gives an accurate estimate for any
t such that p`? [t] > ?.
On the other hand, the problem also admits a natural linear algebraic solution, using the standard
power iteration starting with ?, or the reverse power iteration starting with et (which is obtained
by re-writing Equation (1) as p`? [t] := ?(et (P T )` )T ). When the state space is large, performing a
direct power iteration is infeasible ? however, there are localized versions of the power iteration that
are still efficient. Such algorithms have been developed, among other applications, for PageRank
estimation [3, 4] and for heat kernel estimation [5]. Although slow in the worst case 2 , such local
update algorithms are often fast in practice, as unlike Monte Carlo methods they exploit the local
structure of the chain. However even in sparse Markov chains and for a large fraction of target
states, their running time can be ?(1/?). For example, consider a random walk on a random dregular graph and let ? = o(1/n) ? then for ` ? logd (1/?), verifying p`es [t] > ? is equivalent to
uncovering the entire logd (1/?) neighborhood of s. Since a large random d-regular graph is (whp)
an expander, this neighborhood has ?(1/?) distinct nodes. Finally, note that as with Monte Carlo,
power iterations can be adapted to either the source or terminal state, but not both.
For reversible Markov chains, one can get a bidirectional algorithms for estimating p`es [t] based on
colliding random walks. For example, consider the problem of estimating length-2` random walk
transition probabilities in a regular undirected graph G(V, E) on n vertices [1, 6]. The main idea
is that to test if a random walk goes from s to t in 2` steps with probability ? ?, we can generate
two independent random walks of length `, starting from s and t respectively, and detect if they
terminate at the same intermediate node. Suppose pw , qw are the probabilities that a length-` walk
from s andPt respectively terminate at node w ? then from the reversibility of the chain, we have that
p2`
? [t] = p w?V pw qw ; this is also the collision probability. The critical observation is that if we
generate 1/? walks from s and t, then we get 1/? potential collisions, which is sufficient to detect
if p2`
? [t] > ?. This argument forms the basis of the birthday-paradox, and similar techniques used
in a variety of estimation problems (eg., see [7]). Showing concentration for this estimator is tricky
as the samples are not independent; moreover, to control the variance of the samples, the algorithms
often need to separately deal with ?heavy? intermediate nodes, where pw or qw are much larger than
O(1/n). Our proposed approach is much simpler both in terms of algorithm and analysis, and more
significantly, it extends beyond reversible chains to any general discrete state-space Markov chain.
The most similar approach to ours is the recent FAST-PPR algorithm of Lofgren et al. [2] for PageRank estimation; our algorithm borrows several ideas and techniques from that work. However, the
FAST-PPR algorithm relies heavily on the structure of PageRank ? in particular, the fact that the
PageRank walk has Geometric(?) length (and hence can be stopped and restarted due to the memoryless property). Our work provides an elegant and powerful generalization of the FAST-PPR
algorithm, extending the approach to general Markov chains.
2
2.1
The Bidirectional MSTP-estimation Algorithm
Algorithm
As described in Section 1.1, given a target state t, our bidirectional MSTP algorithm keeps track of
a pair of vectors ? the estimate vector qkt ? Rn and the residual vector rkt ? Rn ? for each length
k ? {0, 1, 2, . . . , `}. The vectors are initially all set to 0 (i.e., the all-0 vector), except rt0 which is
initialized as et . Moreover, they are updated using a reverse push operation defined as:
Algorithm 1 REVERSE-PUSH(v, i)
Inputs: Transition matrix P , estimate vector qit , residual vectors rit , ri+1
t
1: return New estimate vectors {e
qit } and residual-vectors {reit } computed as:
eit ? qit + hrit , ev iev ;
q
e
rit ? rit ? hrit , ev iev ;
e
ri+1
? ri+1
+ hrit , ev i ev P T
t
t
2
In particular, local power iterations are slow if a state has a very large out-neighborhood (for the forward
iteration) or in-neighborhood (for the reverse update).
3
The main observation behind our algorithm is that we can re-write p`? [t] in terms of {qkt , rkt } as an
expectation over random sample-paths of the Markov chain as follows (cf. Equation (3)):
p`? [t] = h?, q`t i +
`
X
EVk ??P k r`?k
(Vk )
t
(2)
k=0
In other words, given vectors {qkt , rkt }, we can get an unbiased estimator for p`? [t] by sampling a
length-` random trajectory {V0 , V1 , . . . , V` } of the Markov chain P starting at a random state V0
sampled from the source distribution ?, and then adding the residuals along the trajectory as in
Equation (2). We formalize this bidirectional MSTP algorithm in Algorithm 2.
Algorithm 2 Bidirectional-MSTP(P, ?, t, `max , ?)
Inputs: Transition matrix P , source distribution ?, target state t, maximum steps `max , minimum
probability threshold ?, relative error bound , failure probability pf
1: Set accuracy parameter c based on and pf and set reverse threshold ?r (cf. Theorems 1 and 2)
p
(in our experiments we use c = 7 and ?r = ?/c)
2: Initialize: Estimate vectors qkt = 0 , ? k ? {0, 1, 2, . . . , `},
Residual vectors r0t = et and rkt = 0 , ? k ? {1, 2, 3, . . . , `}
3: for i ? {0, 1, . . . , `max } do
4:
while ? v ? S s.t. rit [v] > ?r do
5:
Execute REVERSE-PUSH(v, i)
6:
end while
7: end for
8: Set number of sample paths nf = c`max ?r /? (See Theorem 1 for details)
9: for index i ? {1, 2, . . . , nf } do
10:
Sample starting node Vi0 ? ?
11:
Generate sample path Ti = {Vi0 , Vi1 , . . . , Vi`max } of length `max starting from Vi0
`
12:
For ` ? {1, 2, . . . , `max }: sample k ? U nif orm[0, `] and compute St,i
= `rt`?k [Vik ]
(We reinterpret the sum over k in Equation 2 as an expectation and sample k rather sum over
k ? ` for computational speed.)
13: end for
Pnf `
b `? [t] = h?, q`t i + (1/nf ) i=1
14: return {b
p`? [t]}`?[`max ] , where p
St,i
2.2
Some Intuition Behind our Approach
Before formally analyzing the performance of our MSTP-estimation algorithm, we first build some
intuition as to why it works. In particular, it is useful to interpret the estimates and residues in
probabilistic/combinatorial terms. In Figure 1, we have considered a simple Markov chain on three
states ? Solid, Hollow and Checkered (henceforth (S, H, C)). On the right side, we have illustrated
an intermediate stage of reverse work using S as the target, after performing the REVERSE-PUSH
operations (S, 0), (H, 1), (C, 1) and (S, 2) in that order. Each push at level i uncovers a collection
Figure 1: Visualizing a sequence of REVERSE-PUSH operations: Given the Markov chain on the
left with S as the target, we perform REVERSE-PUSH operations (S, 0), (H, 1), (C, 1),(S, 2).
4
of length-(i + 1) paths terminating at S ? for example, in the figure, we have uncovered all length
2 and 3 paths, and several length 4 paths. The crucial observation is that each uncovered path of
length i starting from a node v is accounted for in either qiv or riv . In particular, in Figure 1, all paths
starting at solid nodes are stored in the estimates of the corresponding states, while those starting at
blurred nodes are stored in the residue. Now we can use this set of pre-discovered paths to boost the
estimate returned by Monte Carlo trajectories generated starting from the source distribution. The
dotted line in the figure represents the current reverse-work frontier ? it separates the fully uncovered
neighborhood of (S, 0) from the remaining states (v, i).
In a sense, what the REVERSE-PUSH operation does is construct a sequence of importancesampling weights, which can then be used for Monte Carlo. An important novelty here is that
the importance-sampling weights are: (i) adapted to the target state, and (ii) dynamically adjusted
to ensure the Monte Carlo estimates have low variance. Viewed in this light, it is easy to see how
the algorithm can be modified to applications beyond basic MSTP-estimation: for example, to nonhomogenous Markov chains, or for estimating the probability of hitting a target state t for the first
time in ` steps (cf. Section 3). Essentially, we only need an appropriate reverse-push/dynamic
programming update for the quantity of interest (with associated invariant, as in Equation (2)).
2.3
Performance Analysis
We first formalize the critical invariant introduced in Equation (2):
Lemma 1. Given a terminal state t, suppose we initialize q0t = 0, r0t = et and qkt , rkt = 0 ? k ?
0. Then for any source distribution ? and length `, after any arbitrary sequence of REVERSEPUSH(v, k) operations, the vectors {qkt , rkt } satisfy the invariant:
p`? [t] = h?, q`t i +
`
X
h?P k , rt`?k i
(3)
k=0
The proof follows the outline of a similar result in Andersen et al. [4] for PageRank estimation; due
to lack of space, we defer it to our full version [8]. Using this result, we can now characterize the
accuracy of the Bidirectional-MSTP algorithm:
Theorem 1. We are given any Markov chain P , source distribution ?, terminal state t, maximum
length `max and also parameters ?, pf and (i.e., the desired threshold, failure probability and
relative error). Suppose we choose any reverse threshold
?r > ?, and set the number of sample
paths nf = c?r /?, where c = max 6e/2 , 1/ ln 2 ln (2`max /pf ). Then for any length ` ? `max
with probability at least 1 ? pf , the estimate returned by Bidirectional-MSTP satisfies:
`
p
b ? [t] ? p`? [t] < max p`? [t], ? .
Proof. Given any Markov chain P and terminal state t, note first that for a given length ` ? `max ,
b `? [t] is an unbiased estimator. Now, for any random-trajectory
Equation (2) shows that the estimate p
`
`
`
Tk , we have that the score St,k
obeys: (i) E[St,k
] ? p`? [t] and (ii) St,k
? [0, `?r ]; the first inequality again follows from Equation (2), while the second follows from the fact that we executed
REVERSE-PUSH operations until all residual values were less than ?r .
P
`
Now consider the rescaled random variable Xk = St,k
/(`?r ) and X =
k?[nf ] Xk ; then we
`
have that Xk ? [0, 1], E[X] ? (nf /`?r )p? [t] and also (X ? E[X]) = (nf /`?r )(b
p`? [t] ? p`? [t]).
Moreover, using standard Chernoff bounds (cf. Theorem 1.1 in [9]), we have that:
2
E[X]
P [|X ? E[X]| > E[X]] < 2 exp ?
and P[X > b] ? 2?b for any b > 2eE[X]
3
Now we consider two cases:
`
1. E[St,k
] > ?/2e (i.e., E[X] > nf ?/2e`?r = c/2e): Here, we can use the first concentration
bound to get:
`
nf `
`
`
b
P p? [t] ? p? [t] ? p? [t] = P |X ? E[X]| ?
p [t] ? P [|X ? E[X]| ? E[X]]
`?r ?
2
2
E[X]
c
? 2 exp ?
,
? 2 exp ?
3
6e
5
where we use that nf = c`max ?r /? (cf. Algorithm 2). Moreover, by the union bound, we have:
?
?
2
[
c
`
`
`
?
?
b
? 2`max exp ?
P
p? [t] ? p? [t] ? p? [t]
,
32e
`?`max
Now as long as c ? 6e/2 ln (2`max /pf ), we get the desired failure probability.
2. E[S_{t,k}^ℓ] < δ/2e (i.e., E[X] < c/2e): In this case, note first that since X ≥ 0, we have that p_σ^ℓ[t] − p̂_σ^ℓ[t] ≤ p_σ^ℓ[t] = (ℓδ_r/n_f)E[X] ≤ δ/2e < δ. On the other hand, we also have:

P[p̂_σ^ℓ[t] − p_σ^ℓ[t] ≥ δ] = P[X − E[X] ≥ (n_f/(ℓδ_r))δ] ≤ P[X ≥ c] ≤ 2^(−c),

where the last inequality follows from our second concentration bound, which holds since we have c > 2eE[X]. Now as before, we can use the union bound to show that the failure probability is bounded by p_f as long as c ≥ log₂(ℓ_max/p_f).

Combining the two cases, we see that as long as c ≥ max{6e/ε², 1/ln 2} ln(2ℓ_max/p_f), then we have

P[∪_{ℓ≤ℓ_max} {|p̂_σ^ℓ[t] − p_σ^ℓ[t]| ≥ max{ε p_σ^ℓ[t], δ}}] ≤ p_f. ∎
One aspect that is not obvious from the intuition in Section 2.2 or the accuracy analysis is whether using a bidirectional method actually improves the running time of MSTP-estimation. This is addressed by the following result, which shows that for typical targets, our algorithm achieves a significant speedup:
Theorem 2. Let any Markov chain P, source distribution σ, maximum length ℓ_max and parameters δ, p_f and ε be given. Suppose we set δ_r = √(ε²δ/(ℓ_max log(ℓ_max/p_f))). Then for a uniform random choice of t ∈ S, the Bidirectional-MSTP algorithm has a running time of Õ(ℓ_max^{3/2} √(d̄/δ)).
Proof. The runtime of Algorithm 2 consists of two parts:

Forward-work (i.e., for generating trajectories): we generate n_f = cℓ_max δ_r/δ sample trajectories, each of length ℓ_max — hence the running time is O(cδ_r ℓ²_max/δ) for any Markov chain P, source distribution σ and target node t. Substituting for c from Theorem 1, we get that the forward-work running time is T_f = O(ℓ²_max δ_r log(ℓ_max/p_f)/(ε²δ)).
Reverse-work (i.e., for REVERSE-PUSH operations): Let T_r denote the reverse-work runtime for a uniform random choice of t ∈ S. Then we have:

E[T_r] = (1/|S|) Σ_{t∈S} Σ_{k=0}^{ℓ_max} Σ_{v∈S} (d_in(v) + 1) · 1{REVERSE-PUSH(v,k) is executed}

Now for a given t ∈ S and k ∈ {0, 1, ..., ℓ_max}, note that the only states v ∈ S on which we execute REVERSE-PUSH(v, k) are those with residual r_k^t(v) > δ_r — consequently, for these states, we have that q_k^t(v) > δ_r, and hence, by Equation (3), we have that p_{e_v}^k[t] ≥ δ_r (by setting σ = e_v, i.e., starting from state v). Moreover, a REVERSE-PUSH(v, k) operation involves updating the residuals for d_in(v) + 1 states. Note that Σ_{t∈S} p_{e_v}^k[t] = 1 and hence, via a straightforward counting argument, we have that for any v ∈ S, Σ_{t∈S} 1{p_{e_v}^k[t] ≥ δ_r} ≤ 1/δ_r. Thus, we have:

E[T_r] ≤ (1/|S|) Σ_{t∈S} Σ_{k=0}^{ℓ_max} Σ_{v∈S} (d_in(v) + 1) 1{p_{e_v}^k[t] ≥ δ_r} = (1/|S|) Σ_{v∈S} Σ_{k=0}^{ℓ_max} Σ_{t∈S} (d_in(v) + 1) 1{p_{e_v}^k[t] ≥ δ_r}
     ≤ (1/|S|) Σ_{v∈S} (ℓ_max + 1) · (1/δ_r) · (d_in(v) + 1) = O((ℓ_max/δ_r) · (Σ_{v∈S} d_in(v))/|S|) = O(ℓ_max d̄/δ_r)

Finally, we choose δ_r = √(ε²δ/(ℓ_max log(ℓ_max/p_f))) to balance T_f and T_r and get the result. ∎
3 Applications of MSTP estimation
• Estimating the Stationary Distribution and Hitting Probabilities: MSTP-estimation can be used in two ways to estimate stationary probabilities π[t]. First, if we know the mixing time τ_mix of the chain P, we can directly use Algorithm 2 to approximate π[t] by setting ℓ_max = τ_mix and using any source distribution σ. Theorem 2 then guarantees that we can estimate a stationary probability of order δ in time Õ(τ_mix^{3/2} √(d̄/δ)). In comparison, Monte Carlo has O(τ_mix/δ) runtime. We note that in practice we usually do not know the mixing time — in such a setting, our algorithm can be used to compute an estimate of p_σ^ℓ[t] for all values of ℓ ≤ ℓ_max.
An alternative is to modify Algorithm 2 to estimate the truncated hitting time p̂_σ^{ℓ,hit}[t] (i.e., the probability of hitting t starting from σ for the first time in ℓ steps). By setting σ = e_t, we get an estimate for the expected truncated return time E[T_t 1{T_t ≤ ℓ_max}] = Σ_{ℓ≤ℓ_max} ℓ p̂_{e_t}^{ℓ,hit}[t]. Now, using the fact that π[t] = 1/E[T_t], we can get a lower bound for π[t] which converges to π[t] as ℓ_max → ∞. We note also that the truncated hitting time has been shown to be useful in other applications such as identifying similar documents on a document-word-author graph [10].

To estimate the truncated hitting time, we modify Algorithm 2 as follows: at each stage i ∈ {1, 2, ..., ℓ_max} (note: not i = 0), instead of REVERSE-PUSH(t, i), we update q̃_i^t[t] = q_i^t[t] + r_i^t[t], set r̃_i^t[t] = 0, and do not push back r_i^t[t] to the in-neighbors of t in the (i+1)th stage. The remaining algorithm stays the same. It is easy to see from the discussion in Section 2.2 that the resulting quantity p̂_σ^{ℓ,hit}[t] is an unbiased estimate of P[hitting time of t = ℓ | X_0 ∼ σ] — we omit a formal proof due to lack of space.
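As a small sanity check of this reduction (independent of the reverse-push machinery), one can compute the first-return probabilities of a toy chain exactly via a taboo-probability recursion and compare 1/E[T_t 1{T_t ≤ ℓ_max}] against the true π[t]. The dense representation and parameter values below are illustrative assumptions, not the paper's code:

    import numpy as np

    rng = np.random.default_rng(1)
    n, t, ell_max = 5, 0, 400
    P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)

    # Exact stationary distribution, for reference.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()

    # f[ell] = P[first return to t at step ell | X_0 = t], via a taboo recursion:
    # u tracks mass that has not yet returned to t.
    u = P[t].copy(); u[t] = 0.0
    f = np.zeros(ell_max + 1); f[1] = P[t, t]
    for ell in range(2, ell_max + 1):
        f[ell] = u @ P[:, t]
        u = u @ P; u[t] = 0.0

    E_trunc = np.dot(np.arange(ell_max + 1), f)   # E[T_t 1{T_t <= ell_max}]
    print(pi[t], 1.0 / E_trunc)                    # the two agree as ell_max grows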
• Exact Stationary Probabilities in Strong Doeblin Chains: A strong Doeblin chain [11] is obtained by mixing a Markov chain P and a distribution σ as follows: at each transition, the process proceeds according to P with probability α, else samples a state from σ. Doeblin chains are widely used in ML applications — special cases include the celebrated PageRank metric [12], variants such as HITS and SALSA [13], and other algorithms for applications such as ranking [14] and structured prediction [15]. An important property of these chains is that if we sample a starting node V_0 from σ and sample a trajectory of length Geometric(α) starting from V_0, then the terminal node is an unbiased sample from the stationary distribution [16]. There are two ways in which our algorithm can be used for this purpose: one is to replace the REVERSE-PUSH algorithm with a corresponding local update algorithm for the strong Doeblin chain (similar to the one in Andersen et al. [4] for PageRank), and then sample random trajectories of length Geometric(α). A more direct technique is to choose some ℓ_max ≫ 1/α, estimate {p_σ^ℓ[t]} for all ℓ ∈ [ℓ_max], and then directly compute the stationary distribution as π[t] = Σ_{ℓ=1}^{ℓ_max} α^{ℓ−1}(1 − α) p_σ^ℓ[t].
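The second technique is easy to check on a small dense chain. The sketch below is ours (the values of α and ℓ_max are illustrative); it compares the truncated geometric mixture against the closed-form limit (1 − α)σP(I − αP)^{−1} of the same series:

    import numpy as np

    rng = np.random.default_rng(2)
    n, alpha = 6, 0.8
    P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
    sigma = rng.random(n); sigma /= sigma.sum()

    # Truncated mixture: pi[t] ~= sum_{l=1}^{l_max} alpha^{l-1} (1-alpha) p_sigma^l[t]
    ell_max = int(50 / alpha)                       # ell_max >> 1/alpha
    pi_hat = np.zeros(n)
    p_ell = sigma.copy()
    for ell in range(1, ell_max + 1):
        p_ell = p_ell @ P                           # p_sigma^ell = sigma P^ell
        pi_hat += alpha ** (ell - 1) * (1 - alpha) * p_ell

    # Closed-form limit of the same series, for comparison.
    pi_exact = (1 - alpha) * sigma @ P @ np.linalg.inv(np.eye(n) - alpha * P)
    print(np.abs(pi_hat - pi_exact).max())          # ~ alpha^ell_max, i.e. tiny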
• Graph Diffusions: If we assign a weight α_i to random walks of length i on a (weighted) graph, the resulting scoring functions f(P, σ)[t] := Σ_{i=0}^{∞} α_i (σ^T P^i)[t] are known as graph diffusions [17] and are used in a variety of applications. The case where α_i = α^{i−1}(1 − α) corresponds to PageRank. If instead the length is drawn according to a Poisson distribution (i.e., α_i = e^{−α}α^i/i!), then the resulting function is called the heat kernel h(G, α) — this too has several applications, including finding communities (clusters) in large networks [5]. Note that for any function f as defined above, the truncated sum f^{ℓ_max} = Σ_{i=0}^{ℓ_max} α_i p_σ^i obeys ||f − f^{ℓ_max}||_∞ ≤ Σ_{i=ℓ_max+1}^{∞} α_i. Thus a guarantee on an estimate for the truncated sum directly translates to a guarantee on the estimate for the diffusion. We can use MSTP-estimation to efficiently estimate these truncated sums. We perform numerical experiments on heat kernel estimation in the next section.
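For instance, a truncated heat-kernel diffusion and its tail bound can be computed as follows. This is a sketch under the assumption of a small dense chain; the parameter a and the seed node are our choices:

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    n, a = 6, 5.0                                   # heat-kernel parameter
    P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
    sigma = np.zeros(n); sigma[0] = 1.0             # diffusion from a single seed

    ell_max = 27
    f_trunc = np.zeros(n)
    p_i = sigma.copy()                               # sigma^T P^i, starting at i = 0
    for i in range(ell_max + 1):
        f_trunc += poisson.pmf(i, a) * p_i           # alpha_i = e^{-a} a^i / i!
        p_i = p_i @ P

    tail = poisson.sf(ell_max, a)                    # sum of alpha_i for i > ell_max
    # ||f - f_trunc||_inf <= tail, so f_trunc is accurate to within `tail`.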
• Conductance Testing in Graphs: MSTP-estimation is an essential primitive for conductance testing in large Markov chains [1]. In particular, in regular undirected graphs, Kale et al. [6] develop a sublinear bidirectional estimator based on counting collisions between walks in order to identify "weak" nodes — those which belong to sets with small conductance. Our algorithm can be used to extend this process to any graph, including weighted and directed graphs.
• Local Algorithms: There has been a lot of interest recently in local algorithms — those which perform computations given only a small neighborhood of a source node [18]. In this regard, we note that Bidirectional-MSTP gives a natural local algorithm for MSTP estimation, and thus for the applications mentioned above — given a k-hop neighborhood around the source and target, we can perform Bidirectional-MSTP with ℓ_max set to k. The proof of this follows from the fact that the invariant in Equation (2) holds after any sequence of REVERSE-PUSH operations.
Figure 2: Estimating heat kernels: Bidirectional MSTP-estimation vs. Monte Carlo, Forward Push.
To compare runtimes, we choose parameters such that the mean relative error of all algorithms is
around 10%. Notice that Bidirectional-MSTP is 100 times faster than the other algorithms.
4 Experiments
To demonstrate the efficiency of our algorithm on large Markov chains, we use heat kernel estimation (cf. Section 3) as an example application. The heat kernel is a non-homogeneous Markov chain, defined as the probability of stopping at the target on a random walk from the source, where the walk length is sampled from a Poisson(ℓ̄) distribution. In real-world graphs, the heat-kernel value between a pair of nodes has been shown to be a good indicator of an underlying community relationship [5] — this suggests that it can serve as a metric for personalized search on social networks. For example, if a social network user s wants to view a list of users attending some event, then sorting these users by heat kernel values will result in the most similar users to s appearing on top. Bidirectional-MSTP is ideal for such personalized search applications, as the set of users filtered by a search query is typically much smaller than the set of nodes on the network.
In Figure 2, we compare the runtime of different algorithms for heat kernel computation on four real-world graphs, ranging from millions to billions of edges³. For each graph, for random (source, target) pairs, we compute the heat kernel using Bidirectional-MSTP, as well as two benchmark algorithms — Monte Carlo, and the Forward Push algorithm (as presented in [5]). All three algorithms have parameters which allow them to trade off speed and accuracy — for a fair comparison, we choose parameters such that the empirical mean relative error of each algorithm is 10%. All three algorithms were implemented in Scala — for the forward push algorithm, our implementation follows the code linked from [5].

We set the average walk-length ℓ̄ = 5 (since longer walks will mix into the stationary distribution), and set the maximum length to ℓ̄ + 10√ℓ̄ ≈ 27; the probability of a walk being longer than this is 10⁻¹², which is negligible. For reproducibility, our source code is available on our website (cf. [8]).
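The stated tail bound is easy to verify in order of magnitude, e.g. with SciPy (the check below is ours, not the paper's code):

    from scipy.stats import poisson

    mean_len = 5
    ell_max = int(mean_len + 10 * mean_len ** 0.5)    # = 27
    print(ell_max, poisson.sf(ell_max, mean_len))      # tail mass beyond ell_max ~ 1e-12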
Figure 2 shows that across all graphs, Bidirectional-MSTP is 100x faster than the two benchmark algorithms. For example, on the Twitter graph, it can estimate a heat kernel score in 0.1 seconds, while the other algorithms take more than 4 minutes. We note though that Monte Carlo and Forward Push can return scores from the source to all targets, rather than just one target — thus Bidirectional-MSTP is most useful when we want the score for a small set of targets.
Acknowledgments
Research supported by the DARPA GRAPHS program via grant FA9550-12-1-0411, and by NSF
grant 1447697. Peter Lofgren was supported by an NPSC fellowship. Thanks to Ashish Goel and
other members of the Social Algorithms Lab at Stanford for many helpful discussions.
³Pokec [19], LiveJournal [20], and Orkut [20] datasets are from SNAP [21]; Twitter-2010 [22] was downloaded from the Laboratory for Web Algorithmics [23]. Refer to our full version [8] for details.
References
[1] Oded Goldreich and Dana Ron. On testing expansion in bounded-degree graphs. In Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation. Springer, 2011.
[2] Peter Lofgren, Siddhartha Banerjee, Ashish Goel, and C. Seshadhri. FAST-PPR: Scaling personalized PageRank estimation for large graphs. In ACM SIGKDD'14, 2014.
[3] Reid Andersen, Fan Chung, and Kevin Lang. Local graph partitioning using PageRank vectors. In IEEE FOCS'06, 2006.
[4] Reid Andersen, Christian Borgs, Jennifer Chayes, John Hopcraft, Vahab S. Mirrokni, and Shang-Hua Teng. Local computation of PageRank contributions. In Algorithms and Models for the Web-Graph. Springer, 2007.
[5] Kyle Kloster and David F. Gleich. Heat kernel based community detection. In ACM SIGKDD'14, 2014.
[6] Satyen Kale, Yuval Peres, and C. Seshadhri. Noise tolerance of expanders and sublinear expander reconstruction. In IEEE FOCS'08, 2008.
[7] Rajeev Motwani, Rina Panigrahy, and Ying Xu. Estimating sum by weighted sampling. In Automata, Languages and Programming, pages 53–64. Springer, 2007.
[8] Siddhartha Banerjee and Peter Lofgren. Fast bidirectional probability estimation in Markov models. Technical report, 2015. http://arxiv.org/abs/1507.05998.
[9] Devdatt P. Dubhashi and Alessandro Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[10] Purnamrita Sarkar, Andrew W. Moore, and Amit Prakash. Fast incremental proximity search in large graphs. In Proceedings of the 25th International Conference on Machine Learning, pages 896–903. ACM, 2008.
[11] Wolfgang Doeblin. Éléments d'une théorie générale des chaînes simples constantes de Markoff. In Annales Scientifiques de l'École Normale Supérieure, volume 57, pages 61–111. Société mathématique de France, 1940.
[12] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The PageRank citation ranking: Bringing order to the web. 1999.
[13] Ronny Lempel and Shlomo Moran. The stochastic approach for link-structure analysis (SALSA) and the TKC effect. Computer Networks, 33(1):387–401, 2000.
[14] Sahand Negahban, Sewoong Oh, and Devavrat Shah. Iterative ranking from pair-wise comparisons. In Advances in Neural Information Processing Systems, pages 2474–2482, 2012.
[15] Jacob Steinhardt and Percy Liang. Learning fast-mixing models for structured prediction. In ICML'15, 2015.
[16] Krishna B. Athreya and Örjan Stenflo. Perfect sampling for Doeblin chains. Sankhyā: The Indian Journal of Statistics, pages 763–777, 2003.
[17] Fan Chung. The heat kernel as the PageRank of a graph. Proceedings of the National Academy of Sciences, 104(50):19735–19740, 2007.
[18] Christina E. Lee, Asuman Ozdaglar, and Devavrat Shah. Computing the stationary distribution locally. In Advances in Neural Information Processing Systems, pages 1376–1384, 2013.
[19] Lubos Takac and Michal Zabovsky. Data analysis in public social networks. In International Scientific Conf. & Workshop Present Day Trends of Innovations, 2012.
[20] Alan Mislove, Massimiliano Marcon, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. Measurement and analysis of online social networks. In Proceedings of the 5th ACM/Usenix Internet Measurement Conference (IMC'07), San Diego, CA, October 2007.
[21] Stanford Network Analysis Platform (SNAP). http://snap.stanford.edu/. Accessed: 2014-02-11.
[22] Paolo Boldi, Marco Rosa, Massimo Santini, and Sebastiano Vigna. Layered label propagation: A multi-resolution coordinate-free ordering for compressing social networks. In ACM WWW'11, 2011.
[23] Laboratory for Web Algorithmics. http://law.di.unimi.it/datasets.php. Accessed: 2014-02-11.
Jason D. Lee, Yuekai Sun, and Jonathan Taylor
Institute of Computational and Mathematical Engineering
Stanford University
Stanford, CA 94305
{jdl17,yuekai,jonathan.taylor}@stanford.edu
Abstract
Biclustering (also known as submatrix localization) is a problem of high practical relevance in exploratory analysis of high-dimensional data. We develop a
framework for performing statistical inference on biclusters found by score-based
algorithms. Since the bicluster was selected in a data dependent manner by a
biclustering or localization algorithm, this is a form of selective inference. Our
framework gives exact (non-asymptotic) confidence intervals and p-values for the
significance of the selected biclusters.
1
Introduction
Given a matrix X ? Rm?n , biclustering or submatrix localization is the problem of identifying a
subset of the rows and columns of X such that the bicluster or submatrix consisting of the selected
rows and columns are ?significant? compared to the rest of X. An important application of biclustering is the identification of significant genotype-phenotype associations in the (unsupervised)
analysis of gene expression data. The data is usually represented by an expression matrix X whose
rows correspond to genes and columns correspond to samples. Thus genotype-phenotype associations correspond to salient submatrices of X. The location and significance of such biclusters, in
conjunction with relevant clinical information, give preliminary results on the genetic underpinnings
of the phenotypes being studied.
More generally, given a matrix X ? Rm?n whose rows correspond to variables and columns correspond to samples, biclustering seeks sample-variable associations in the form of salient submatrices.
Without loss of generality, we consider square matrices X ? Rn?n of the form
X = M + Z,
Zij ? N (0, ? 2 )
M = ?eI0 eTJ0 ,
? ? 0, I0 , J0 ? [n].
(1.1)
The components of eI , I ? [n] are given by
(eI )i =
i?I
.
otherwise
1
0
For our theoretical results, we assume the size of the embedded submatrix |I0 | = |J0 | = k and the
noise variance ? 2 is known.
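For concreteness, data from model (1.1) can be simulated in a few lines; this is an illustrative sketch (the values of n, k, β, σ below are our choices, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, beta, sigma = 40, 5, 1.5, 1.0

    I0 = rng.choice(n, size=k, replace=False)        # hidden rows
    J0 = rng.choice(n, size=k, replace=False)        # hidden columns

    M = np.zeros((n, n))
    M[np.ix_(I0, J0)] = beta                          # M = beta * e_{I0} e_{J0}^T
    X = M + sigma * rng.standard_normal((n, n))       # X = M + Z, Z_ij ~ N(0, sigma^2)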
The biclustering problem, due to its practical relevance, has attracted considerable attention. Most previous work focuses on finding significant submatrices. A large class of algorithms for biclustering are score-based, i.e. they search for submatrices that maximize some score function that measures the "significance" of a submatrix. In this paper, we focus on evaluating the significance of submatrices found by score-based algorithms for biclustering. More precisely, let I(X), J(X) ⊂ [n] be a (random) pair output by a biclustering algorithm. We seek to test whether the localized submatrix
X_{I(X),J(X)} contains any signal, i.e. test the hypothesis

H₀ : Σ_{i∈I(X), j∈J(X)} M_ij = 0.   (1.2)

Since the hypothesis depends on the (random) output of the biclustering algorithm, this is a form of selective inference. The distribution of the test statistic Σ_{i∈I(X), j∈J(X)} X_ij depends on the specific algorithm, and is extremely difficult to derive for many heuristic biclustering algorithms.
Our main contribution is to test whether a biclustering algorithm has found a statistically significant bicluster. The tests and confidence intervals we construct are exact, meaning that in finite samples the type 1 error is exactly α.
This paper is organized as follows. First, we review recent work on biclustering and related problems. Then, in section 2, we describe our framework for performing inference in the context of a simple biclustering algorithm based on a scan statistic. We show

1. the framework gives exact (non-asymptotic) Unif(0, 1) p-values under H₀, and the p-values can be "inverted" to form confidence intervals for the amount of signal in X_{I(X),J(X)}.
2. under the minimax signal-to-noise ratio (SNR) regime β ≳ √((log n)/k), the test has full asymptotic power.

In section 4, we show the framework handles more computationally tractable biclustering algorithms, including a greedy algorithm originally proposed by Shabalin et al. [12]. In the supplementary materials, we discuss the problem in the more general setting where there are multiple embedded submatrices. Finally, we present experimental validation of the various tests and biclustering algorithms.
1.1 Related work
A slightly easier problem is submatrix detection: test whether a matrix has an embedded submatrix with nonzero mean [1, 4]. This problem was recently studied by Ma and Wu [11], who characterized the minimum signal strength β for any test, and any computationally tractable test, to reliably detect an embedded submatrix.
We emphasize that the problem we consider is not the submatrix detection problem, but a complementary problem. Submatrix detection asks whether there are any hidden row-column associations
in a matrix. We ask whether a submatrix selected by a biclustering algorithm captures the hidden
association(s). In practice, given a matrix, a practitioner might perform (in order)
1. submatrix detection: check for a hidden submatrix with elevated mean.
2. submatrix localization: attempt to find the hidden submatrix.
3. selective inference: check whether the selected submatrix captures any signal.
We focus on the third step in the pipeline. Results on evaluating the significance of selected submatrices are scarce. The only result we know of is by Bhamidi, Dey and Nobel, who characterized
the asymptotic distribution of the largest k × k average submatrix in Gaussian random matrices [6].
Their result may be used to form an asymptotic test of (1.2).
The submatrix localization problem, due to its practical relevance, has attracted considerable attention [5, 2, 3]. Most prior work focuses on finding significant submatrices. Broadly speaking,
submatrix localization procedures fall into one of two types: score-based search procedures and
spectral algorithms. The main idea behind the score-based approach to submatrix localization is that significant submatrices should maximize some score that measures the "significance" of a submatrix, e.g. the average of its entries [12] or the goodness-of-fit of a two-way ANOVA model [8, 9]. Since there are exponentially many submatrices, many score-based search procedures use heuristics to reduce the search space. Such heuristics are not guaranteed to succeed, but often perform well in practice. One of the purposes of our work is to test whether a heuristic algorithm has identified a significant submatrix.
The submatrix localization problem exhibits a statistical and computational trade-off that was first
studied by Balakrishnan et al. [5]. They compare the SNR required by several computationally
efficient algorithms to the minimax SNR. Recently, Chen and Xu [7] study the trade-off when there
are several embedded submatrices. In this more general setting, they show the SNR required by
convex relaxation is smaller than the SNR required by entry-wise thresholding. Thus the power of
convex relaxation is in separating clusters/submatrices, not in identifying one cluster/submatrix.
2 A framework for evaluating the significance of a submatrix
Our main contribution is a framework for evaluating the significance of a submatrix selected by a biclustering algorithm. The framework allows us to perform exact (non-asymptotic) inference on the
selected submatrix. In this section, we develop the framework on a (very) simple score-based algorithm that simply outputs the largest average submatrix. At a high level, our framework consists
of characterizing the selection event {(I(X), J(X)) = (I, J)} and applying the key distributional
result in [10] to obtain a pivotal quantity.
2.1 The significance of the largest average submatrix
To begin, we consider performing inference on the output of the simple algorithm that simply returns the k × k submatrix with the largest sum. Let S be the set of indices of all k × k submatrices of X, i.e. S = {(I, J) | I, J ⊂ [n], |I| = |J| = k}. The Largest Average Submatrix (LAS) algorithm returns a pair (I_LAS(X), J_LAS(X)):

(I_LAS(X), J_LAS(X)) = argmax_{(I,J)∈S} e_I^T X e_J

The optimal value S_(1) = tr(e_{J_LAS(X)} e_{I_LAS(X)}^T X) is distributed like the maximum of (n choose k)² (correlated) normal random variables. Although results on the asymptotic distribution (k fixed, n growing) of S_(1) (under H₀ : β = 0) are known (e.g. Theorem 2.1 in [6]), we are not aware of any results that characterize the finite sample distribution of the optimal value. To avoid this pickle, we condition on the selection event

E_LAS(I, J) = {(I_LAS(X), J_LAS(X)) = (I, J)}   (2.1)

and work with the distribution of X | {(I_LAS(X), J_LAS(X)) = (I, J)}.
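The LAS selection itself is simple to state in code. Below is a brute-force sketch (ours; it enumerates all (n choose k)² submatrices, so it is feasible only for tiny n and k — an illustrative simplification, not an algorithm to run at scale):

    import numpy as np
    from itertools import combinations

    def las(X, k):
        # Brute-force Largest Average Submatrix: argmax over all k x k
        # submatrices of the sum of entries, e_I^T X e_J.
        n = X.shape[0]
        best, best_IJ = -np.inf, None
        for I in combinations(range(n), k):
            row_sums = X[list(I), :].sum(axis=0)         # e_I^T X
            for J in combinations(range(n), k):
                s = row_sums[list(J)].sum()              # e_I^T X e_J
                if s > best:
                    best, best_IJ = s, (I, J)
        return best_IJ, best

    rng = np.random.default_rng(0)
    X = rng.standard_normal((12, 12))
    X[np.ix_(range(3), range(3))] += 1.5                 # small planted bicluster
    (I_hat, J_hat), S1 = las(X, 3)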
We begin by making a key observation. The selection event given by (2.1) is equivalent to X satisfying a set of linear inequalities:

tr(e_J e_I^T X) ≥ tr(e_{J'} e_{I'}^T X) for any (I', J') ∈ S \ (I, J).   (2.2)

Thus the selection event is equivalent to X falling in the polyhedral set

C_LAS(I, J) = {X ∈ R^{n×n} | tr(e_J e_I^T X) ≥ tr(e_{J'} e_{I'}^T X) for any (I', J') ∈ S \ (I, J)}.   (2.3)

Thus, X | {(I_LAS(X), J_LAS(X)) = (I, J)} = X | {X ∈ C_LAS(I, J)} is a constrained Gaussian random variable.
Recall our goal was to perform inference on the amount of signal in the selected submatrix X_{I_LAS(X),J_LAS(X)}. This task is akin to performing inference on the mean parameter¹ of a constrained Gaussian random variable, namely X | {X ∈ C_LAS(I, J)}. We apply the selective inference framework by Lee et al. [10] to accomplish the task.
Before we delve into the details of how we perform inference on the mean parameter of a constrained Gaussian random variable, we review the key distributional result in [10] concerning constrained Gaussian random variables.

Theorem 2.1. Consider a Gaussian random variable y ∈ R^n with mean μ ∈ R^n and covariance Σ ∈ S^{n×n}_{++}, constrained to a polyhedral set

C = {y ∈ R^n | Ay ≤ b} for some A ∈ R^{m×n}, b ∈ R^m.

¹The mean parameter is the mean of the Gaussian prior to truncation.
Let η ∈ R^n represent a linear function of y. Define α = AΣη/(η^T Ση) and

V^+(y) = inf_{j: α_j > 0} (1/α_j)(b_j − (Ay)_j + α_j η^T y)   (2.4)
V^−(y) = sup_{j: α_j < 0} (1/α_j)(b_j − (Ay)_j + α_j η^T y)   (2.5)
V^0(y) = inf_{j: α_j = 0} (b_j − (Ay)_j)   (2.6)

F(x, μ, σ², a, b) = [Φ((x − μ)/σ) − Φ((a − μ)/σ)] / [Φ((b − μ)/σ) − Φ((a − μ)/σ)].   (2.7)

The expression F(η^T y, η^T μ, η^T Ση, V^−(y), V^+(y)) is a pivotal quantity with a Unif(0, 1) distribution, i.e.

F(η^T y, η^T μ, η^T Ση, V^−(y), V^+(y)) | {Ay ≤ b} ∼ Unif(0, 1).   (2.8)
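The pivot in (2.7)–(2.8) is just the CDF of a truncated Gaussian and is straightforward to compute; a minimal sketch (ours; numerical care for extreme tails is omitted):

    from scipy.stats import norm

    def trunc_gauss_pivot(x, mu, var, a, b):
        # F(x, mu, sigma^2, a, b): CDF of N(mu, sigma^2) truncated to [a, b],
        # evaluated at x. Under the selection event, F(eta^T y, ...) ~ Unif(0, 1).
        sd = var ** 0.5
        num = norm.cdf((x - mu) / sd) - norm.cdf((a - mu) / sd)
        den = norm.cdf((b - mu) / sd) - norm.cdf((a - mu) / sd)
        return num / den

    # Example: one-sided truncation (b = +inf), as in the LAS test below.
    p_value = 1 - trunc_gauss_pivot(x=12.3, mu=0.0, var=9.0, a=10.0, b=float("inf"))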
Remark 2.2. The truncation limits V^+(y) and V^−(y) (and V^0(y)) depend on η and the polyhedral set C. We omit the dependence to keep our notation manageable.
Recall X | {E_LAS(I, J)} is a constrained Gaussian random variable (constrained to the polyhedral set C_LAS(I, J) given by (2.3)). By Theorem 2.1 and the characterization of the selection event E_LAS(I, J), the random variable

F(S_(1), tr(e_J e_I^T M), σ²k², V^−(X), V^+(X)) | {E_LAS(I, J)},

where V^+(X) and V^−(X) (and V^0(X)) are evaluated on the polyhedral set C_LAS(I, J), is uniformly distributed on the unit interval. The mean parameter tr(e_J e_I^T M) is the amount of signal captured by X_{I,J}:

tr(e_J e_I^T M) = |I ∩ I₀||J ∩ J₀| β.

What are V^+(X) and V^−(X)? Let E_{I',J'} = e_{I'} e_{J'}^T for any I', J' ⊂ [n]. For convenience, we index the constraints (2.2) by the pairs (I', J'). The term α_{I',J'} is given by α_{I',J'} = (|I ∩ I'||J ∩ J'| − k²)/k². Since |I ∩ I'||J ∩ J'| < k², α_{I',J'} is negative for any (I', J') ∈ S \ (I, J), and the upper truncation limit V^+(X) is ∞. The lower truncation limit V^−(X) simplifies to

V^−(X) = max_{(I',J'): α_{I',J'}<0} [ tr(E_{I,J}^T X) − k² tr((E_{I,J} − E_{I',J'})^T X)/(k² − |I ∩ I'||J ∩ J'|) ].   (2.9)

We summarize the developments thus far in a corollary.
Corollary 2.3. We have

F(S_(1), tr(e_J e_I^T M), k²σ², V^−(X), ∞) | {E_LAS(I, J)} ∼ Unif(0, 1),   (2.10)

V^−(X) = max_{(I',J'): α_{I',J'}<0} [ tr(E_{I,J}^T X) − k² tr((E_{I,J} − E_{I',J'})^T X)/(k² − |I ∩ I'||J ∩ J'|) ].   (2.11)

Under the hypothesis

H₀ : tr(e_{J_LAS(X)} e_{I_LAS(X)}^T M) = 0,   (2.12)

we expect

F(S_(1), 0, k²σ², V^−(X), ∞) | {E_LAS(I, J)} ∼ Unif(0, 1).

Thus 1 − F(S_(1), 0, k²σ², V^−(X), ∞) is a p-value for the hypothesis (2.12). Under the alternative, we expect the selected submatrix to be (stochastically) larger than under the null. Thus rejecting H₀ when the p-value is smaller than α is an exact α level test for H₀; i.e. Pr₀(reject H₀ | {E_LAS(I, J)}) = α. Since the test controls Type I error at α for all possible selection events (i.e. all possible outcomes of the LAS algorithm), the test also controls Type I error unconditionally:

Pr₀(reject H₀) = Σ_{I,J⊂[n]} Pr₀(reject H₀ | {E_LAS(I, J)}) Pr₀({E_LAS(I, J)}) ≤ α Σ_{I,J⊂[n]} Pr₀({E_LAS(I, J)}) = α.

Thus the test is an exact α-level test of H₀. We summarize the result in a Theorem.
Theorem 2.4. The test that rejects when

F(S_(1), 0, k²σ², V^−(X), ∞) ≥ 1 − α,
V^−(X) = max_{(I',J'): α_{I',J'}<0} [ tr(E_{I_LAS(X),J_LAS(X)}^T X) − k² tr((E_{I_LAS(X),J_LAS(X)} − E_{I',J'})^T X)/(k² − |I_LAS(X) ∩ I'||J_LAS(X) ∩ J'|) ],

is a valid α-level test for H₀ : Σ_{i∈I(X), j∈J(X)} M_ij = 0.
To obtain confidence intervals for the amount of signal in the selected submatrix, we "invert" the pivotal quantity given by (2.10). By Corollary 2.3, the interval

{μ ∈ R : α/2 ≤ F(S_(1), μ, k²σ², V^−(X), ∞) ≤ 1 − α/2}   (2.13)

is an exact 1 − α confidence interval for Σ_{i∈I(X), j∈J(X)} M_ij. When (I_LAS(X), J_LAS(X)) = (I₀, J₀), (2.13) is a confidence interval for β. Like the test given by Theorem 2.4, the confidence intervals given by (2.13) are also valid unconditionally.
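Numerically, inverting (2.13) is one-dimensional root finding, since F is monotone decreasing in μ. A sketch using the trunc_gauss_pivot helper from the earlier snippet (the bracketing interval [lo, hi] is an illustrative assumption, not part of the paper's method):

    from scipy.optimize import brentq

    def selective_ci(s1, var, v_minus, alpha=0.1, lo=-100.0, hi=100.0):
        # Invert the pivot F(s1, mu, var, v_minus, inf) over mu to get an
        # exact (1 - alpha) confidence interval for the selected mean parameter.
        def f(mu, level):
            return trunc_gauss_pivot(s1, mu, var, v_minus, float("inf")) - level
        upper = brentq(lambda mu: f(mu, alpha / 2), lo, hi)      # F decreasing in mu
        lower = brentq(lambda mu: f(mu, 1 - alpha / 2), lo, hi)
        return lower, upper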
2.2 Power under minimax signal-to-noise ratio
In section 2, we derived an exact (non-asymptotically valid) test for the hypothesis (2.12). In this section, we study the power of the test. Before we delve into the details, we review some relevant results to place our result in the correct context.

Balakrishnan et al. [5] show β must be at least β ≍ √(log(n−k)/k) for any algorithm to succeed (find the embedded submatrix) with high probability. They also show the LAS algorithm is minimax rate optimal; i.e. the LAS algorithm finds the embedded submatrix with probability 1 − 4/(n−k) when β ≥ 4σ√(2 log(n−k)/k). We show that the test given by Theorem 2.4 has full asymptotic power under the same signal strength. The proof is given in the appendix.
Theorem 2.5. Let β = C√(2 log(n−k)/k). When

C > max{ √(2 log σ/log(n−k)), 4 + 4/(√(log(n−k))(√k − 5/4)) }

and k ≤ n/2, the α-level test given by Corollary 2.3 has power at least 1 − 5/(n−k); i.e.

Pr(reject H₀) ≥ 1 − 5/(n−k).

Further, for any sequence (n, k) such that n → ∞, when C > 4 and k ≤ n/2, Pr(reject H₀) → 1.

3 General scan statistics
Although we have elected to present our framework in the context of biclustering, the framework readily extends to scan statistics. Let z ∼ N(μ, Σ), where E[z] has the form

E[z_i] = β if i ∈ S, and 0 otherwise,  for some β > 0 and S ⊂ [n].

The set S belongs to a collection C = {S₁, ..., S_N}. We decide which index set in C generated the data by

Ŝ = argmax_{S∈C} Σ_{i∈S} z_i.   (3.1)

Given Ŝ, we are interested in testing the null hypothesis

H₀ : E[z_Ŝ] = 0.   (3.2)

To perform exact inference for the selected effect E[z_Ŝ], we must first characterize the selection event. We observe that the selection event {Ŝ = S} is equivalent to z satisfying a set of linear inequalities:

e_S^T z ≥ e_{S'}^T z for any S' ∈ C \ S.   (3.3)
Given the form of the constraints (3.3),

a_{S'} = (e_{S'} − e_S)^T e_S / (e_S^T e_S) = (1/|S|)(|S ∩ S'| − |S|) for any S' ∈ C \ S.

Since |S ∩ S'| ≤ |S|, we have a_{S'} ∈ [−1, 0], which implies V^+(z) = ∞. The term V^−(z) also simplifies:

V^−(z) = sup_{S'} (1/a_{S'})((e_S − e_{S'})^T z + a_{S'} e_S^T z) = e_S^T z + sup_{S'} (1/a_{S'})((e_S − e_{S'})^T z).

Let z_(1), z_(2) be the largest and second largest scan statistics. We have

V^−(z) ≤ z_(1) + sup_{S'}((e_{S'} − e_S)^T z) = z_(1) + z_(2) − z_(1) = z_(2).

Intuitively, the pivot will be large (the p-value will be small) when e_S^T z exceeds the lower truncation limit V^− by a large margin. Since the second largest scan statistic is an upper bound for the lower truncation limit, the test will reject when z_(1) exceeds z_(2) by a large margin.
Theorem 3.1. The test that rejects when

F(z_(1), 0, k²σ², V^−(z), ∞) ≥ 1 − α,

where V^−(z) = e_Ŝ^T z + sup_{S'} (1/a_{S'})((e_Ŝ − e_{S'})^T z), is a valid α-level test for H₀ : e_Ŝ^T μ = 0.
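Putting the pieces together for one concrete collection C — all contiguous windows of length k in a sequence with independent unit-variance coordinates, so that Var(e_S^T z) = k — gives the following sketch of the exact selective test. The window collection, the unit-variance convention, and the function name are our illustrative assumptions:

    import numpy as np
    from scipy.stats import norm

    def scan_selective_test(z, k, alpha=0.1):
        # Exact selective test (cf. Theorem 3.1) for the top window of length k,
        # over the collection of all contiguous windows; assumes z_i ~ N(mu_i, 1).
        n = len(z)
        scores = np.array([z[i:i + k].sum() for i in range(n - k + 1)])
        i_star = int(np.argmax(scores))
        z1 = scores[i_star]
        # V^-(z) = e_S^T z + sup_{S'} (1/a_{S'}) (e_S - e_{S'})^T z,
        # with a_{S'} = (|S ∩ S'| - k)/k in [-1, 0).
        v_minus = -np.inf
        for j in range(len(scores)):
            if j == i_star:
                continue
            a = (max(0, k - abs(j - i_star)) - k) / k
            v_minus = max(v_minus, z1 + (z1 - scores[j]) / a)
        sd = k ** 0.5                                  # Var(e_S^T z) = k
        den = 1.0 - norm.cdf(v_minus / sd)
        p_value = (1.0 - norm.cdf(z1 / sd)) / den      # = 1 - F(z1, 0, k, V^-, inf)
        return i_star, p_value, p_value <= alpha

    rng = np.random.default_rng(4)
    z = rng.standard_normal(200); z[50:55] += 2.0      # planted signal window
    print(scan_selective_test(z, k=5))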
To our knowledge, most procedures for obtaining valid inference on scan statistics require careful characterization of the asymptotic distribution of e_Ŝ^T z. Such results are usually only valid when the components of z are independent with identical variances (e.g. see [6]), and can only be used to test the global null: H₀ : E[z] = 0. Our framework not only relaxes the independence and homoscedasticity assumptions, but also allows us to form confidence intervals for the selected effect size.
4 Extensions to other score-based approaches

Returning to the submatrix localization problem, we note that the framework described in section 2 also readily handles other score-based approaches, as long as the scores are affine functions of the entries. The main idea is to partition R^{n×n} into non-overlapping regions corresponding to the possible outcomes of the algorithm; i.e. the event that the algorithm outputs a particular submatrix is equivalent to X falling in the corresponding region of R^{n×n}. In this section, we show how to perform exact inference on biclusters found by more computationally tractable algorithms.
4.1 Greedy search

Searching over all (n choose k)² submatrices to find the largest average submatrix is computationally intractable for all but the smallest matrices. Here we consider a family of heuristics based on a greedy search algorithm proposed by Shabalin et al. [12] that looks for "local" largest average submatrices. Their approach is widely used to discover genotype-phenotype associations in high-dimensional gene expression data. Here the score is simply the sum of the entries in a submatrix.
Algorithm 1 Greedy search algorithm
1: Initialize: select J⁰ ⊂ [n].
2: repeat
3:   I^{l+1} ← the indices of the k rows with the largest sums over the columns in J^l
4:   J^{l+1} ← the indices of the k columns with the largest sums over the rows in I^{l+1}
5: until convergence
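A direct translation of Algorithm 1 (a sketch; restricting each step to the top k rows/columns and capping the number of iterations are our reading of the procedure, and the function name is ours):

    import numpy as np

    def greedy_las(X, k, J0, max_iter=100):
        # Algorithm 1: alternately pick the k rows with the largest sums over
        # the current columns, then the k columns with the largest sums over
        # the current rows, until the selection stops changing.
        J = np.asarray(J0)
        I = np.array([], dtype=int)
        for _ in range(max_iter):
            I_new = np.argsort(X[:, J].sum(axis=1))[-k:]
            J_new = np.argsort(X[I_new, :].sum(axis=0))[-k:]
            if set(I_new) == set(I) and set(J_new) == set(J):
                break
            I, J = I_new, J_new
        return np.sort(I), np.sort(J)

    rng = np.random.default_rng(5)
    X = rng.standard_normal((40, 40))
    X[np.ix_(range(5), range(5))] += 1.0                # planted bicluster
    I_hat, J_hat = greedy_las(X, k=5, J0=rng.choice(40, 5, replace=False))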
To adapt the framework laid out in section 2 to the greedy search algorithm, we must characterize the selection event. Here the selection event is the "path" of the greedy search:

E_GrS = E_GrS((I¹, J¹), (I², J²), ...)

is the event the greedy search selected (I¹, J¹) at the first step, (I², J²) at the second step, etc.

In practice, to ensure stable performance of the greedy algorithm, Shabalin et al. propose to run the greedy search with random initialization 1000 times and select the largest local maximum. Suppose the m*-th greedy search outputs the largest local maximum. The selection event is

E_GrS,1 ∩ ... ∩ E_GrS,1000 ∩ {m* = argmax_{m=1,...,1000} e_{I_GrS,m(X)}^T X e_{J_GrS,m(X)}},

where

E_GrS,m = E_GrS((I_m¹, J_m¹), (I_m², J_m²), ...),  m = 1, ..., 1000

is the event the m-th greedy search selected (I_m¹, J_m¹) at the first step, (I_m², J_m²) at the second step, etc.
An alternative to running the greedy search with random initialization many times and picking the largest local maximum is to initialize the greedy search intelligently. Let J_greedy(X) be the output of the intelligent initialization. The selection event is given by

E_GrS ∩ {J_greedy(X) = J⁰},   (4.1)

where E_GrS is the event the greedy search selected (I¹, J¹) at the first step, (I², J²) at the second step, etc. The intelligent initialization selects J⁰ when

e_[n]^T X e_j ≥ e_[n]^T X e_{j'} for any j ∈ J⁰, j' ∈ [n] \ J⁰,   (4.2)

which corresponds to selecting the k columns with largest sum. Thus the selection event is equivalent to X falling in the polyhedral set

C_GrS ∩ {X ∈ R^{n×n} | tr(e_j e_[n]^T X) ≥ tr(e_{j'} e_[n]^T X) for any j ∈ J⁰, j' ∈ [n] \ J⁰},

where C_GrS is the constraint set corresponding to the selection event E_GrS (see Appendix for an explicit characterization).
4.2 Largest row/column sum test

An alternative to running the greedy search is to use a test statistic based off choosing the k rows and columns with largest sum. The largest row/column sum test selects a subset of columns J⁰ when

e_[n]^T X e_j ≥ e_[n]^T X e_{j'} for any j ∈ J⁰, j' ∈ [n] \ J⁰,   (4.3)

which corresponds to selecting the k columns with largest sum. Similarly, it selects the rows I⁰ with largest sum. Thus the selection event for initialization at (I⁰, J⁰) is equivalent to X falling in the polyhedral set

{X ∈ R^{n×n} | tr(e_j e_[n]^T X) ≥ tr(e_{j'} e_[n]^T X) for any j ∈ J⁰, j' ∈ [n] \ J⁰}
∩ {X ∈ R^{n×n} | tr(e_i e_[n]^T X) ≥ tr(e_{i'} e_[n]^T X) for any i ∈ I⁰, i' ∈ [n] \ I⁰}.   (4.4)
The procedure of selecting the k largest rows/columns was analyzed in [5]. They proved that when β ≥ (4/k)√(n log(n−k)) the procedure recovers the planted submatrix. We show a similar result for the test statistic based off the intelligent initialization

F(tr(e_{J⁰(X)} e_{I⁰(X)}^T X), 0, σ²k², V^−(X), V^+(X)).   (4.5)

Under the null of β = 0, the statistic (4.5) is uniformly distributed, so type 1 error is controlled at level α. The theorem below shows that this computationally tractable test has power tending to 1 for β > (4/k)√(n log(n−k)).

Theorem 4.1. Let β = (C/k)√(n log(n−k)). Assume that n ≥ 2 exp(1) and k ≤ n/2. When

C > max{ 4√(1 + 1/(4n²)) + 2/n, √(2 log(2/α)/log(n−k)) + 2/n },

the α-level test given by Corollary 2.3 has power at least 1 − 9/(n−k); i.e.

Pr(reject H₀) ≥ 1 − 9/(n−k).

Further, for any sequence (n, k) such that n → ∞, when C > 4 and k ≤ n/2, Pr(reject H₀) → 1.
[Figure 1: Random initialization with 10 restarts. Three panels (k = log n, k = √n, k = 0.2n) plot power against signal strength for n = 50, 100, 500, 1000.]
[Figure 2: Intelligent initialization. Three panels (k = log n, k = √n, k = 0.2n) plot power against signal strength for n = 50, 100, 500, 1000.]
In practice, we have found that initializing the greedy algorithm with the rows and columns identified by the largest row/column sum test stabilizes the performance of the greedy algorithm and preserves power. By intersecting the selection events from the largest row/column sum test and the greedy algorithm, the test also controls type 1 error. Let (I_loc(X), J_loc(X)) be the pair of indices returned by the greedy algorithm initialized with (I⁰, J⁰) from the largest row/column sum test. The test statistic is given by

F(tr(e_{J_loc(X)} e_{I_loc(X)}^T X), 0, σ²k², V^−(X), V^+(X)),   (4.6)

where V^+(X), V^−(X) are now computed using the intersection of the greedy and the largest row/column sum selection events. This statistic is also uniformly distributed under the null.

We test the performance of three biclustering algorithms: Algorithm 1 with the intelligent initialization in (4.4), Algorithm 1 with a single random initialization, and Algorithm 1 with 10 random restarts. We generate data from the model (1.1) for various values of n and k. We only test the power of each procedure, since all of the algorithms discussed provably control type 1 error.

The results are in Figures 1 and 2. The y-axis shows power (the probability of rejecting) and the x-axis is the rescaled signal strength β/√(2 log(n−k)/k). The tests were calibrated to control type 1 error at α = 0.1, so any power over 0.1 is nontrivial. From the k = log n plots, we see that the intelligently initialized greedy procedure outperforms the greedy algorithm with a single random initialization and the greedy algorithm with 10 random initializations.
5 Conclusion
In this paper, we considered the problem of evaluating the statistical significance of the output of
several biclustering algorithms. By considering the problem as a selective inference problem, we
are able to devise exact significance tests and confidence intervals for the selected bicluster. We also
show how the framework generalizes to the more practical problem of evaluating the significance of
multiple biclusters. In this setting, our approach gives sequential tests that control family-wise error
rate in the strong sense.
References
[1] Louigi Addario-Berry, Nicolas Broutin, Luc Devroye, Gábor Lugosi, et al. On combinatorial testing problems. The Annals of Statistics, 38(5):3063–3092, 2010.
[2] Brendan P. W. Ames. Guaranteed clustering and biclustering via semidefinite programming. Mathematical Programming, pages 1–37, 2012.
[3] Brendan P. W. Ames and Stephen A. Vavasis. Convex optimization for the planted k-disjoint-clique problem. Mathematical Programming, 143(1-2):299–337, 2014.
[4] Ery Arias-Castro, Emmanuel J. Candès, Arnaud Durand, et al. Detection of an anomalous cluster in a network. The Annals of Statistics, 39(1):278–304, 2011.
[5] Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman. Statistical and computational tradeoffs in biclustering. In NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning, 2011.
[6] Shankar Bhamidi, Partha S. Dey, and Andrew B. Nobel. Energy landscape for large average submatrix detection problems in Gaussian random matrices. arXiv preprint arXiv:1211.2284, 2012.
[7] Yudong Chen and Jiaming Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. arXiv preprint arXiv:1402.1267, 2014.
[8] Yizong Cheng and George M. Church. Biclustering of expression data. In ISMB, volume 8, pages 93–103, 2000.
[9] Laura Lazzeroni and Art Owen. Plaid models for gene expression data. Statistica Sinica, 12(1):61–86, 2002.
[10] Jason D. Lee, Dennis L. Sun, Yuekai Sun, and Jonathan E. Taylor. Exact post-selection inference with the lasso. arXiv preprint arXiv:1311.6238, 2013.
[11] Zongming Ma and Yihong Wu. Computational barriers in minimax submatrix detection. arXiv preprint arXiv:1309.5914, 2013.
[12] Andrey A. Shabalin, Victor J. Weigman, Charles M. Perou, and Andrew B. Nobel. Finding large average submatrices in high dimensional data. The Annals of Applied Statistics, pages 985–1012, 2009.
Cross-Validation Error Lower Bounds
Atsushi Shibagaki, Yoshiki Suzuki, Masayuki Karasuyama, and Ichiro Takeuchi
Nagoya Institute of Technology
Nagoya, 466-8555, Japan
{shibagaki.a.mllab.nit,suzuki.mllab.nit}@gmail.com
{karasuyama,takeuchi.ichiro}@nitech.ac.jp
Abstract
Careful tuning of a regularization parameter is indispensable in many machine
learning tasks because it has a significant impact on generalization performances.
Nevertheless, current practice of regularization parameter tuning is more of an art
than a science, e.g., it is hard to tell how many grid-points would be needed in
cross-validation (CV) for obtaining a solution with sufficiently small CV error. In
this paper we propose a novel framework for computing a lower bound of the CV
errors as a function of the regularization parameter, which we call regularization
path of CV error lower bounds. The proposed framework can be used for providing a theoretical approximation guarantee on a set of solutions in the sense that
how far the CV error of the current best solution could be away from best possible CV error in the entire range of the regularization parameters. Our numerical
experiments demonstrate that a theoretically guaranteed choice of a regularization
parameter in the above sense is possible with reasonable computational costs.
1 Introduction
Many machine learning tasks involve careful tuning of a regularization parameter that controls the
balance between an empirical loss term and a regularization term. A regularization parameter is
usually selected by comparing the cross-validation (CV) errors at several different regularization
parameters. Although its choice has a significant impact on the generalization performances, the
current practice is still more of an art than a science. For example, in commonly used grid-search, it
is hard to tell how many grid points we should search over for obtaining sufficiently small CV error.
In this paper we introduce a novel framework for a class of regularized binary classification problems that can compute a regularization path of CV error lower bounds. For an ε ∈ [0, 1], we define ε-approximate regularization parameters to be a set of regularization parameters such that the CV error of the solution at the regularization parameter is guaranteed to be no greater by ε than the best possible CV error in the entire range of regularization parameters. Given a set of solutions obtained, for example, by grid-search, the proposed framework allows us to provide a theoretical guarantee on the current best solution by explicitly quantifying its approximation level ε in the above sense. Furthermore, when a desired approximation level ε is specified, the proposed framework can be used for efficiently finding one of the ε-approximate regularization parameters.

The proposed framework is built on a novel CV error lower bound represented as a function of the regularization parameter, and this is why we call it a regularization path of CV error lower bounds. Our CV error lower bound can be computed by only using a finite number of solutions obtained by arbitrary algorithms. It is thus easy to apply our framework to common regularization parameter tuning strategies such as grid-search or Bayesian optimization. Furthermore, the proposed framework can be used not only with exact optimal solutions but also with sufficiently good approximate
Figure 1: An illustration of the proposed framework. One of our algorithms presented in §4 automatically selected 39 regularization parameter values in [10⁻³, 10³], and an upper bound of the validation error for each of them is obtained by solving an optimization problem approximately. Among those 39 values, the one with the smallest validation error upper bound (indicated by the marker at C = 1.368) is guaranteed to be an ε (= 0.1)-approximate regularization parameter in the sense that the validation error for that regularization parameter is no greater by ε than the smallest possible validation error in the whole interval [10⁻³, 10³]. See §5 for the setup (see also Figure 3 for the results with other options).
tions, which is computationally advantageous because completely solving an optimization problem
is often much more costly than obtaining a reasonably good approximate solution.
Our main contribution in this paper is to show that a theoretically guaranteed choice of a regularization parameter in the above sense is possible with reasonable computational costs. To the best
of our knowledge, there is no other existing methods for providing such a theoretical guarantee on
CV error that can be used as generally as ours. Figure 1 illustrates the behavior of the algorithm for
obtaining ? = 0.1 approximate regularization parameter (see ?5 for the setup).
Related works Optimal regularization parameter can be found if its exact regularization path can
be computed. Exact regularization path has been intensively studied [1, 2], but they are known to be
numerically unstable and do not scale well. Furthermore, exact regularization path can be computed
only for a limited class of problems whose solutions are written as piecewise-linear functions of the
regularization parameter [3]. Our framework is much more efficient and can be applied to wider
classes of problems whose exact regularization path cannot be computed. This work was motivated
by recent studies on approximate regularization path [4, 5, 6, 7]. These approximate regularization
paths have a property that the objective function value at each regularization parameter value is no
greater by ? than the optimal objective function value in the entire range of regularization parameters. Although these algorithms are much more stable and efficient than exact ones, for the task
of tuning a regularization parameter, our interest is not in objective function values but in CV errors. Our approach is more suitable for regularization parameter tuning tasks in the sense that the
approximation quality is guaranteed in terms of CV error.
As illustrated in Figure 1, we only compute a finite number of solutions, but still provide approximation guarantee in the whole interval of the regularization parameter. To ensure such a property, we
need a novel CV error lower bound that is sufficiently tight and represented as a monotonic function
of the regularization parameter. Although several CV error bounds (mostly for leave-one-out CV) of
SVM and other similar learning frameworks exist (e.g., [8, 9, 10, 11]), none of them satisfy the above
required properties. The idea of our CV error bound is inspired from recent studies on safe screening
[12, 13, 14, 15, 16] (see Appendix A for the detail). Furthermore, we emphasize that our contribution is not in presenting a new generalization error bound, but in introducing a practical framework
for providing a theoretical guarantee on the choice of a regularization parameter. Although generalization error bounds such as structural risk minimization [17] might be used for a rough tuning of
a regularization parameter, they are known to be too loose to use as an alternative to CV (see, e.g.,
?11 in [18]). We also note that our contribution is not in presenting new method for regularization
parameter tuning such as Bayesian optimization [19], random search [20] and gradient-based search
[21]. As we demonstrate in experiments, our approach can provide a theoretical approximation
guarantee of the regularization parameter selected by these existing methods.
2
Problem Setup
We consider linear binary classification problems. Let {(xi , yi ) ? Rd ?{?1, 1}}i?[n] be the training
set where n is the size of the training set, d is the input dimension, and [n] := {1, . . . , n}. An independent held-out validation set with size n? is denoted similarly as {(x?i , yi? ) ? Rd ? {?1, 1}}i?[n? ] .
A linear decision function is written as f (x) = w? x, where w ? Rd is a vector of coefficients,
and ? represents the transpose. We assume the availability of a held-out validation set only for simplifying the exposition. All the proposed methods presented in this paper can be straightforwardly
2
adapted to a cross-validation setup. Furthermore, the proposed methods can be kernelized if the loss
function satisfies a certain condition. In this paper we focus on the following class of regularized
convex loss minimization problems:
!
1
?
wC
:= arg min ?w?2 + C
?(yi , w? xi ),
(1)
w?Rd 2
i?[n]
where C > 0 is the regularization parameter, and ? ? ? is the Euclidean norm. The loss function is
denoted as ? : {?1, 1} ? R ? R. We assume that ?(?, ?) is convex and subdifferentiable in the 2nd
argument. Examples of such loss functions include logistic loss, hinge loss, Huber-hinge loss, etc.
For notational convenience, we denote the individual loss as ?i (w) := ?(yi , w? xi ) for all i ? [n].
?
The optimal solution for the regularization parameter C is explicitly denoted as wC
. We assume that
the regularization parameter is defined in a finite interval [C? , Cu ], e.g., C? = 10?3 and Cu = 103
as we did in the experiments.
For a solution w ? Rd , the validation error1 is defined as
1 !
Ev (w) := ?
I(yi? w? x?i < 0),
n
?
(2)
i?[n ]
where I(?) is the indicator function. In this paper, we consider two problem setups. The first prob?
?
lem setup is, given a set of (either optimal or approximate) solutions wC
, . . . , wC
at T different
1
T
regularization parameters C1 , . . . , CT ? [C? , Cu ], to compute the approximation level ? such that
?
min Ev (wC
)
t
Ct ?{C1 ,...,CT }
?
? Ev? ? ?, where Ev? := min Ev (wC
),
C?[Cl ,Cu ]
(3)
by which we can find how accurate our search (grid-search, typically) is in a sense of the deviation
of the achieved validation error from the true minimum in the range, i.e., Ev? . The second problem
setup is, given the approximation level ?, to find an ?-approximate regularization parameter within
an interval C ? [Cl , Cu ], which is defined as an element of the following set
#
"
$
#
?
C(?) := C ? [Cl , Cu ] # Ev (wC
) ? Ev? ? ? .
Our goal in this second setup is to derive an efficient exploration procedure which achieves the
specified validation approximation level ?. These two problem setups are both common scenarios in
practical data analysis, and can be solved by using our proposed framework for computing a path of
validation error lower bounds.
3
Validation error lower bounds as a function of regularization parameter
In this section, we derive a validation error lower bound which is represented as a function of the
regularization parameter C. Our basic idea is to compute a lower and an upper bound of the inner
?? ?
product score wC
xi for each validation input x?i , i ? [n? ], as a function of the regularization param?? ?
eter C. For computing the bounds of wC
xi , we use a solution (either optimal or approximate) for
a different regularization parameter C? ?= C.
3.1
Score bounds
?? ?
We first describe how to obtain a lower and an upper bound of inner product score wC
xi based on
?
an approximate solution w
?C? at a different regularization parameter C ?= C.
Lemma 1. Let w
?C? be an approximate solution of the problem (1) for a regularization parameter
?
value C and ?i (w
?C ) be a subgradient of ?i at w = w
?C such that a subgradient of the objective
function is
!
g(w
?C? ) := w
?C + C?
? i (w
?C ).
(4)
i?[n]
1
For simplicity, we regard a validation instance whose score is exactly zero, i.e., w? x?i = 0, is correctly
classified in (2). Hereafter, we assume that there are no validation instances whose input vector is completely
0, i.e., x?i = 0, because those instances are always correctly classified according to the definition in (2).
3
?? ?
Then, for any C > 0, the score wC
xi , i ? [n? ], satisfies
%
?
?(w
?C? , x?i ) ? C1? (?(w
?C? , x?i ) + ?(g(w
?C? ), x?i ))C, if C > C,
?? ?
?? ?
wC
xi ? LB(wC
x i |w
?C? ) :=
1
?
?
?
?
??(w
?C? , xi ) + C? (?(w
?C? , xi ) + ?(g(w
?C? ), xi ))C, if C < C,
%
?
??(w
?C? , x?i ) + C1? (?(w
?C? , x?i ) + ?(g(w
?C? ), x?i ))C, if C > C,
?? ?
?? ?
wC xi ? U B(wC xi |w
?C? ) :=
1
?
?
?
?
?(w
?C? , xi ) ? C? (?(w
?C? , xi ) + ?(g(w
?C? ), xi ))C, if C < C,
(5a)
(5b)
where
1
1
?
?
?? ?
(?wC
?C? ), x?i ) := (?g(w
?C? )??x?i ? + g(w
?C? )? x?i ) ? 0,
? ??xi ? + wC
? xi ) ? 0, ?(g(w
2
2
1
1
?
?
?
?
?? ?
?(wC
?C? ), x?i ) := (?g(w
?C? )??x?i ? ? g(wC? )? x?i ) ? 0.
? , xi ) := (?wC
? ??xi ? ? wC
? xi ) ? 0, ?(g(w
2
2
?
?
?(wC
? , xi ) :=
The proof is presented in Appendix A. Lemma 1 tells that we have a lower and an upper bound of the
?? ?
score wC
xi for each validation instance that linearly change with the regularization parameter C.
When w
?C? is optimal, it can be shown that (see Proposition B.24 in [22]) there exists a subgradient
such that g(w
?C? ) = 0, meaning that the bounds are tight because ?(g(w
?C? ), x?i ) = ?(g(w
?C? ), x?i ) = 0.
? the score w?? x? , i ? [n? ], for the regularization parameter value C?
Corollary 2. When C = C,
i
?
C
itself satisfies
?? ?
?? ?
? ?
?? ?
?? ?
? ?
wC
?C? ) = w
?C
?C? ), x?i ), wC
?C? ) = w
?C
?C? ), x?i ).
? xi ?LB(wC
? xi | w
? xi ??(g(w
? xi ? U B(wC
? xi |w
? xi +?(g(w
The results in Corollary 2 are obtained by simply substituting C = C? into (5a) and (5b).
3.2
Validation Error Bounds
Given a lower and an upper bound of the score of each validation instance, a lower bound of the
validation error can be computed by simply using the following facts:
?? ?
yi? = +1 and U B(wC
x i |w
?C? ) < 0 ? mis-classified,
yi?
(6a)
?? ?
LB(wC
x i |w
?C? )
= ?1 and
> 0 ? mis-classified.
(6b)
Furthermore, since the bounds in Lemma 1 linearly change with the regularization parameter C, we
can identify the interval of C within which the validation instance is guaranteed to be mis-classified.
Lemma 3. For a validation instance with yi? = +1, if
?(w
?C? , x?i )
?(w
?C? , x?i )
? or
?
C? < C <
C
C? < C < C,
?(w
?C? , x?i ) + ?(g(w
?C? ), x?i )
?(w
?C? , x?i ) + ?(g(w
?C? ), x?i )
then the validation instance (x?i , yi? ) is mis-classified. Similarly, for a validation instance with yi? =
?1, if
?(w
?C? , x?i )
?(w
?C? , x?i )
?
C? < C <
C? or
C? < C < C,
?
?
?
?(w
?C? , xi ) + ?(g(w
?C? ), xi )
?(w
?C? , xi ) + ?(g(w
?C? ), x?i )
then the validation instance (x?i , yi? ) is mis-classified.
This lemma can be easily shown by applying (5) to (6).
As a direct consequence of Lemma 3, the lower bound of the validation error is represented as a
function of the regularization parameter C in the following form.
? the validation
Theorem 4. Using an approximate solution w
?C? for a regularization parameter C,
?
error Ev (wC
) for any C > 0 satisfies
?
?
Ev (wC
) ? LB(Ev (wC
)|w
?C? ) :=
(7)
&
'
(
'
(
?
?
!
!
?(w
?C? , xi )
?(w
?C? , xi )
1
?
?
?
I C<C<
C? +
I
?
?
?
? ) C<C<C
?
n
?(
w
?
,
x
)+?(g(
w
?
),
x
)
?(
w
?
,
x
)+?(g(
w
?
),
x
?
?
?
?
i
i
i
i
C
C
C
C
yi? =+1
yi? =+1
'
(
'
()
?
?
!
!
?(
w
?
,
x
)
?(
w
?
,
x
)
?
?
i
i
C
C
?
?
+
I C<C<
C? +
I
C<C<
C? .
?(w
?C? , x?i )+?(g(w
?C? ), x?i )
?(w
?C? , x?i )+?(g(w
?C? ), x?i )
?
?
yi =?1
yi =?1
4
Algorithm 1: Computing the approximation level ? from the given set of solutions
Input: {(xi , yi )}i?[n] , {(x?i , yi? )}i?[n? ] , Cl , Cu , W := {wC?1 , . . . , wC?T }
?
1: Evbest ? minC?t ?{C?1 ,...,C?T } U B(Ev (wC
?t )
?t )|wC
*
+
2: LB(Ev? ) ? minc?[Cl ,Cu ] maxC?t ?{C?1 ,...,C?T } LB(Ev (wc? )|wC?t )
Output: ? = Evbest ? LB(Ev? )
The lower bound (7) is a staircase function of the regularization parameter C.
Remark 5. We note that our validation error lower bound is inspired from recent studies on safe
screening [12, 13, 14, 15, 16], which identifies sparsity of the optimal solutions before solving the
optimization problem. A key technique used in those studies is to bound Lagrange multipliers at the
optimal, and we utilize this technique to prove Lemma 1, which is a core of our framework.
? we can obtain a lower and an upper bound of the validation error for the reguBy setting C = C,
larization parameter C? itself, which are used in the algorithm as a stopping criteria for obtaining an
approximate solution w
?C? .
?
Corollary 6. Given an approximate solution w
?C? , the validation error Ev (wC
? ) satisfies
?
?
Ev (wC
?C? )
? ) ? LB(Ev (wC
? )|w
&
)
! ,
! ,
1
? ?
?
? ?
?
= ?
I w
?C? xi + ?(g(w
?C? ), xi ) < 0 +
I w
?C? xi ? ?(g(w
?C? ), xi ) > 0 ,
n
?
?
yi =+1
(8a)
yi =?1
?
?
Ev (wC
?C? )
? ) ? U B(Ev (wC
? )|w
&
)
! ,
! ,
1
? ?
?
? ?
?
=1? ?
I w
?C? xi ? ?(g(w
?C? ), xi ) ? 0 +
I w
?C? xi + ?(g(w
?C? ), xi ) ? 0 . (8b)
n
?
?
yi =+1
4
yi =?1
Algorithm
In this section we present two algorithms for each of the two problems discussed in ?2. Due to the
space limitation, we roughly describe the most fundamental forms of these algorithms. Details and
several extensions of the algorithms are presented in supplementary appendices B and C.
4.1
Problem setup 1: Computing the approximation level ? from a given set of solutions
Given a set of (either optimal or approximate) solutions w
?C?1 , . . . , w
?C?T , obtained e.g., by ordinary
grid-search, our first problem is to provide a theoretical approximation level ? in the sense of (3)2 .
This problem can be solved easily by using the validation error lower bounds developed in ?3.2. The
algorithm is presented in Algorithm 1, where we compute the current best validation error Evbest in
?
line 1, and a lower bound of the best possible validation error Ev? := minC?[C? ,Cu ] Ev (wC
) in line
2. Then, the approximation level ? can be simply obtained by subtracting the latter from the former.
We note that LB(Ev? ), the lower bound of Ev? , can be easily computed by using T evaluation error
?
lower bounds LB(Ev (wC
)|wC?t ), t = 1, . . . , T , because they are staircase functions of C.
4.2
Problem setup 2: Finding an ?-approximate regularization parameter
Given a desired approximation level ? such as ? = 0.01, our second problem is to find an ?approximate regularization parameter. To this end we develop an algorithm that produces a set of
optimal or approximate solutions w
?C?1 , . . . , w
?C?T such that, if we apply Algorithm 1 to this sequence,
then approximation level would be smaller than or equal to ?. Algorithm 2 is the pseudo-code of
this algorithm. It computes approximate solutions for an increasing sequence of regularization parameters in the main loop (lines 2-11).
2
When we only have approximate solutions w
?C?1 , . . . , w
?C?T , Eq. (3) is slightly incorrect. The first term of
the l.h.s. of (3) should be minC?t ?{C?1 ,...,C?T } U B(Ev (w
?C?t )|w
?C?t ).
5
Let us now consider tth iteration in the main
loop, where we have already computed t?1 apAlgorithm 2: Finding an ? approximate regularproximate solutions w
?C?1 , . . . , w
?C?t?1 for C?1 <
ization parameter with approximate solutions
?
Input: {(xi , yi )}i?[n] , {(x?i , yi? )}i?[n? ] , Cl , Cu , ? . . . < Ct?1 . At this point,
?t ? Cl , C best ? Cl , E best ? 1
?
1: t ? 1, C
C best := arg min
U B(Ev (wC
?C?? ),
v
?? )|w
?? ?{C
?1 ,...,C
?t?1 }
?t ? Cu do
C
2: while C
3: w
?C?t?solve (1) approximately for C=C?t
is the best (in worst-case) regularization param?
4:
Compute U B(Ev (wC
?C?t ) by (8b).
eter obtained so far and it is guaranteed to be an
?t )|w
?
best
?-approximate regularization parameter in the
5:
if U B(Ev (wC
)|
w
?
)
<
E
then
?
v
?t
Ct
interval [Cl ,C?t ] in the sense that the validation
best
?
6:
Ev ? U B(Ev (wC? )|w
?C?t )
t
error,
7:
C best ? C?t
?
Evbest :=
min
U B(Ev (wC
?C?? ),
8:
end if
?? )|w
?? ?{C
?1 ,...,C
?t?1 }
C
?
9:
Set Ct+1 by (10)
10:
t?t+1
is shown to be at most greater by ? than the
11: end while
smallest possible validation error in the interOutput: C best ? C(?).
val [Cl , C?t ]. However, we are not sure whether
C best can still keep ?-approximation property
for C > C?t . Thus, in line 3, we approximately solve the optimization problem (1) at C = C?t and obtain an approximate solution
w
?C?t . Note that the approximate solution w
?C?t must be sufficiently good enough in the sense that
?
?
U B(Ev (wC
?C?? ) ? LB(Ev (wC
?C?? ) is sufficiently smaller than ? (typically 0.1?). If the up?? )|w
?? )|w
?
per bound of the validation error U B(Ev (wC
?C?? ) is smaller than Evbest , we update Evbest and
?? )|w
C best (lines 5-8).
Our next task is to find C?t+1 in such a way that C best is an ?-approximate regularization parameter
in the interval [Cl , C?t+1 ]. Using the validation error lower bound in Theorem 4, the task is to find
the smallest C?t+1 > C?t that violates
?
Evbest ? LB(Ev (wC
)|w
?C?t ) ? ?, ?C ? [C?t , Cu ],
(9)
In order to formulate such a C?t+1 , let us define
?? ?
?? ?
P := {i ? [n? ]|yi? = +1, U B(wC
?C?t ) < 0}, N := {i ? [n? ]|yi? = ?1, LB(wC
?C?t ) > 0}.
?t xi |w
?t xi |w
Furthermore, let
"
$
"
$
?(w
?C?t , x?i )
?(w
?C?t , x?i )
?t
?t
? :=
C
?
C
,
?(w
?C?t , x?i ) + ?(g(w
?C?t ), x?i )
?(w
?C?t , x?i ) + ?(g(w
?C?t ), x?i )
i?P
i?N
and denote the k th -smallest element of ? as k th (?) for any natural number k. Then, the smallest
C?t+1 > C?t that violates (9) is given as
?
C?t+1 ? (?n? (LB(Ev (wC
?C?t )?Evbest +?)?+1)th (?).
?t )|w
5
(10)
Experiments
In this section we present experiments for illustrating the proposed methods. Table 2 summarizes
the datasets used in the experiments. They are taken from libsvm dataset repository [23]. All the
input features except D9 and D10 were standardized to [?1, 1]3 . For illustrative results, the instances were randomly divided into a training and a validation sets in roughly equal sizes. For
quantitative results, we used 10-fold CV. We used Huber hinge loss (e.g., [24]) which is convex
and subdifferentiable with respect to the second argument. The proposed methods are free from
the choice of optimization solvers. In the experiments, we used an optimization solver described in
[25], which is also implemented in well-known liblinear software [26]. Our slightly modified code
3
We use D9 and D10 as they are for exploiting sparsity.
6
liver-disorders (D2)
ionosphere (D3)
australian (D4)
Figure 2: Illustrations of Algorithm 1 on three benchmark datasets (D2, D3, D4). The plots indicate
how the approximation level ? improves as the number of solutions T increases in grid-search (red),
Bayesian optimization (blue) and our own method (green, see the main text).
(a) ? = 0.1 without tricks
(b) ? = 0.05 without tricks
(c) ? = 0.05 with tricks 1 and 2
Figure 3: Illustrations of Algorithm 2 on ionosphere (D3) dataset for (a) op2 with ? = 0.10, (b)
op2 with ? = 0.05 and (c) op3 with ? = 0.05, respectively. Figure 1 also shows the result for op3
with ? = 0.10.
(for adaptation to Huber hinge loss) is provided as a supplementary material, and is also available
on https://github.com/takeuchi-lab/RPCVELB. Whenever possible, we used warmstart approach, i.e., when we trained a new solution, we used the closest solutions trained so far
(either approximate or optimal ones) as the initial starting point of the optimizer. All the computations were conducted by using a single core of an HP workstation Z800 (Xeon(R) CPU X5675
(3.07GHz), 48GB MEM). In all the experiments, we set C? = 10?3 and Cu = 103 .
Results on problem 1 We applied Algorithm 1 in ?4 to a set of solutions obtained by 1) gridsearch, 2) Bayesian optimization (BO) with expected improvement acquisition function, and 3)
adaptive search with our framework which sequentially computes a solution whose validation lower
bound is smallest based on the information obtained so far. Figure 2 illustrates the results on three
datasets, where we see how the approximation level ? in the vertical axis changes as the number of
solutions (T in our notation) increases. In grid-search, as we increase the grid points, the approximation level ? tends to be improved. Since BO tends to focus on a small region of the regularization
parameter, it was difficult to tightly bound the approximation level. We see that the adaptive search,
using our framework straightforwardly, seems to offer slight improvement from grid-search.
Results on problem 2 We applied Algorithm 2 to benchmark datasets for demonstrating theoretically guaranteed choice of a regularization parameter is possible with reasonable computational
costs. Besides the algorithm presented in ?4, we also tested a variant described in supplementary
Appendix B. Specifically, we have three algorithm options. In the first option (op1), we used op?
timal solutions {wC
?t }t?[T ] for computing CV error lower bounds. In the second option (op2),
we instead used approximate solutions {w
?C?t }t?[T ] . In the last option (op3), we additionally used
speed-up tricks described in supplementary Appendix B. We considered four different choices of
? ? {0.1, 0.05, 0.01, 0}. Note that ? = 0 indicates the task of finding the exactly optimal regular7
Table 1: Computational costs. For each of the three options and ? ? {0.10, 0.05, 0.01, 0}, the
number of optimization problems solved (denoted as T ) and the total computational costs (denoted
as time) are listed. Note that, for op2, there are no results for ? = 0.
?
op1
(using w?? )
C
time
T
(sec)
op2
(using w
?C
?)
time
T
(sec)
op3
(using tricks)
time
T
(sec)
op1
(using w?? )
C
time
T
(sec)
op2
(using w
?C
?)
time
T
(sec)
op3
(using tricks)
time
T
(sec)
0.10
0.05
0.01
0
D1
30
68
234
442
0.068
0.124
0.428
0.697
32
70
324
0.031
0.061
0.194
N.A.
33
57
205
383
0.041
0.057
0.157
0.629
D6
92
207
1042
4276
1.916
4.099
16.31
57.57
93
209
1069
0.975
2.065
9.686
N.A.
62
123
728
2840
0.628
1.136
5.362
44.68
0.10
0.05
0.01
0
D2
221
534
1503
10939
0.177
0.385
0.916
6.387
223
540
2183
0.124
0.290
0.825
N.A.
131
367
1239
6275
0.084
0.218
0.623
3.805
D7
289
601
2532
67490
8.492
16.18
57.79
1135
293
605
2788
5.278
9.806
35.21
N.A.
167
379
1735
42135
3.319
6.604
24.04
760.8
0.10
0.05
0.01
0
D3
61
123
600
5412
0.617
1.073
4.776
26.39
62
129
778
0.266
0.468
0.716
N.A.
43
73
270
815
0.277
0.359
0.940
6.344
D8
72
192
1063
34920
0.761
1.687
8.257
218.4
74
195
1065
0.604
1.162
6.238
N.A.
66
110
614
15218
0.606
0.926
4.043
99.57
0.10
0.05
0.01
0
D4
27
64
167
342
0.169
0.342
0.786
1.317
27
65
181
0.088
0.173
0.418
N.A.
23
47
156
345
0.093
0.153
0.399
1.205
D9
134
317
1791
85427
360.2
569.9
2901
106937
136
323
1822
201.0
280.7
1345
N.A.
89
200
1164
63300
74.37
128.5
657.4
98631
0.10
0.05
0.01
0
D5
62
108
421
2330
0.236
0.417
1.201
4.540
63
109
440
0.108
0.171
0.631
N.A.
45
77
258
968
0.091
0.137
0.401
2.451
D10
Ev < 0.10
Ev < 0.05
157
81.75
258552
85610
Ev < 0.10
Ev < 0.05
162
31.02
N.A.
Ev < 0.10
Ev < 0.05
114
36.81
42040
23316
ization parameter. In some datasets, the smallest validation errors are less than 0.1 or 0.05, in which
cases we do not report the results (indicated as ?Ev < 0.05? etc.). In trick1, we initially computed
solutions at four different regularization parameter values evenly allocated in [10?3 , 103 ] in the logarithmic scale. In trick2, the next regularization parameter C?t+1 was set by replacing ? in (10) with
1.5? (see supplementary Appendix B). For the purpose of illustration, we plot examples of validation error curves in several setups. Figure 3 shows the validation error curves of ionosphere (D3)
dataset for several options and ?.
Table 1 shows the number of optimization problems solved in the algorithm (denoted as T ), and
the total computation time in CV setups. The computational costs mostly depend on T , which gets
smaller as ? increases. Two tricks in supplementary Appendix B was effective in most cases for
reducing T . In addition, we see the advantage of using approximate solutions by comparing the
computation times of op1 and op2 (though this strategy is only for ? ?= 0). Overall, the results
suggest that the proposed algorithm allows us to find theoretically guaranteed approximate regularization parameters with reasonable costs except for ? = 0 cases. For example, the algorithm found
an ? = 0.01 approximate regularization parameter within a minute in 10-fold CV for a dataset with
more than 50000 instances (see the results on D10 for ? = 0.01 with op2 and op3 in Table 1).
Table 2: Benchmark datasets used in the experiments.
D1
D2
D3
D4
D5
6
dataset name
heart
liver-disorders
ionosphere
australian
diabetes
sample size
270
345
351
690
768
input dimension
13
6
34
14
8
D6
D7
D8
D9
D10
dataset name
german.numer
svmguide3
svmguide1
a1a
w8a
sample size
1000
1284
7089
32561
64700
input dimension
24
21
4
123
300
Conclusions and future works
We presented a novel algorithmic framework for computing CV error lower bounds as a function
of the regularization parameter. The proposed framework can be used for a theoretically guaranteed
choice of a regularization parameter. Additional advantage of this framework is that we only need to
compute a set of sufficiently good approximate solutions for obtaining such a theoretical guarantee,
which is computationally advantageous. As demonstrated in the experiments, our algorithm is practical in the sense that the computational cost is reasonable as long as the approximation quality ? is
not too close to 0. An important future work is to extend the approach to multiple hyper-parameters
tuning setups.
8
References
[1] B. Efron, T. Hastie, I. Johnstone, and R. TIbshirani. Least angle regression. Annals of Statistics,
32(2):407?499, 2004.
[2] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support vector
machine. Journal of Machine Learning Research, 5:1391?1415, 2004.
[3] S. Rosset and J. Zhu. Piecewise linear regularized solution paths. Annals of Statistics, 35:1012?1030,
2007.
[4] J. Giesen, J. Mueller, S. Laue, and S. Swiercy. Approximating Concavely Parameterized Optimization
Problems. In Advances in Neural Information Processing Systems, 2012.
[5] J. Giesen, M. Jaggi, and S. Laue. Approximating Parameterized Convex Optimization Problems. ACM
Transactions on Algorithms, 9, 2012.
[6] J. Giesen, S. Laue, and Wieschollek P. Robust and Efficient Kernel Hyperparameter Paths with Guarantees. In International Conference on Machine Learning, 2014.
[7] J. Mairal and B. Yu. Complexity analysis of the Lasso reguralization path. In International Conference
on Machine Learning, 2012.
[8] V. Vapnik and O. Chapelle. Bounds on Error Expectation for Support Vector Machines. Neural Computation, 12:2013?2036, 2000.
[9] T. Joachims. Estimating the generalization performance of a SVM efficiently. In International Conference
on Machine Learning, 2000.
[10] K. Chung, W. Kao, C. Sun, L. Wang, and C. Lin. Radius margin bounds for support vector machines with
the RBF kernel. Neural computation, 2003.
[11] M. Lee, S. Keerthi, C. Ong, and D. DeCoste. An efficient method for computing leave-one-out error in
support vector machines with Gaussian kernels. IEEE Transactions on Neural Networks, 15:750?7, 2004.
[12] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific
Journal of Optimization, 2012.
[13] Z. Xiang, H. Xu, and P. Ramadge. Learning sparse representations of high dimensional data on large
scale dictionaries. In Advances in Neural Information Processing Sysrtems, 2011.
[14] K. Ogawa, Y. Suzuki, and I. Takeuchi. Safe screening of non-support vectors in pathwise SVM computation. In International Conference on Machine Learning, 2013.
[15] J. Liu, Z. Zhao, J. Wang, and J. Ye. Safe Screening with Variational Inequalities and Its Application to
Lasso. In International Conference on Machine Learning, volume 32, 2014.
[16] J. Wang, J. Zhou, J. Liu, P. Wonka, and J. Ye. A Safe Screening Rule for Sparse Logistic Regression. In
Advances in Neural Information Processing Sysrtems, 2014.
[17] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1996.
[18] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning. Cambridge University Press,
2014.
[19] J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. In Advances in Neural Information Processing Sysrtems, 2012.
[20] J. Bergstra and Y. Bengio. Random Search for Hyper-Parameter Optimization. Journal of Machine
Learning Research, 13:281?305, 2012.
[21] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector
machines. Machine Learning, 46:131?159, 2002.
[22] P D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[23] C. Chang and C. Lin. LIBSVM : A Library for Support Vector Machines. ACM Transactions on Intelligent
Systems and Technology, 2:1?39, 2011.
[24] O. Chapelle. Training a support vector machine in the primal. Neural computation, 19:1155?1178, 2007.
[25] C. Lin, R. Weng, and S. Keerthi. Trust Region Newton Method for Large-Scale Logistic Regression. The
Journal of Machine Learning Research, 9:627?650, 2008.
[26] R. Fan, K. Chang, and C. Hsieh. LIBLINEAR: A library for large linear classification. The Journal of
Machine Learning, 9:1871?1874, 2008.
9
| 5817 |@word cu:13 illustrating:1 repository:1 norm:1 advantageous:2 nd:1 seems:1 d2:4 simplifying:1 hsieh:1 liblinear:2 initial:1 liu:2 score:8 hereafter:1 ours:1 existing:2 current:5 com:2 comparing:2 gmail:1 written:2 must:1 numerical:1 plot:2 larization:1 update:1 selected:3 svmguide1:1 core:2 rabbani:1 direct:1 incorrect:1 prove:1 introduce:1 theoretically:5 snoek:1 huber:3 expected:1 roughly:2 behavior:1 inspired:2 automatically:1 cpu:1 decoste:1 param:2 solver:2 increasing:1 provided:1 estimating:1 notation:1 developed:1 finding:4 guarantee:8 pseudo:1 quantitative:1 exactly:2 control:1 bertsekas:1 before:1 tends:2 consequence:1 path:15 approximately:3 might:1 studied:1 ramadge:1 limited:1 range:4 practical:4 practice:2 subdifferentiable:2 procedure:1 empirical:1 suggest:1 get:1 cannot:1 close:1 convenience:1 risk:1 applying:1 demonstrated:1 nit:2 starting:1 convex:4 formulate:1 simplicity:1 disorder:2 rule:1 d5:2 annals:2 exact:6 programming:1 diabetes:1 trick:7 element:2 mukherjee:1 solved:4 wang:3 worst:1 region:2 sun:1 complexity:1 ong:1 trained:2 depend:1 solving:3 tight:2 completely:2 easily:3 represented:4 describe:2 effective:1 tell:3 hyper:2 shalev:1 choosing:1 whose:5 supplementary:6 solve:2 tested:1 statistic:2 itself:2 sequence:2 advantage:2 propose:1 subtracting:1 product:2 adaptation:1 loop:2 kao:1 ogawa:1 exploiting:1 produce:1 adam:1 leave:2 ben:1 tions:1 wider:1 derive:2 ac:1 develop:1 liver:2 op:1 eq:1 implemented:1 indicate:1 australian:2 larochelle:1 error1:1 safe:6 radius:1 exploration:1 violates:2 material:1 elimination:1 generalization:5 proposition:1 extension:1 a1a:1 sufficiently:7 considered:1 algorithmic:1 substituting:1 achieves:1 optimizer:1 smallest:8 giesen:3 dictionary:1 purpose:1 shibagaki:2 minimization:2 rough:1 always:1 gaussian:1 modified:1 zhou:1 minc:4 corollary:3 focus:2 joachim:1 notational:1 improvement:2 indicates:1 sense:11 mueller:1 stopping:1 el:1 entire:4 typically:2 initially:1 kernelized:1 arg:2 classification:3 among:1 overall:1 denoted:6 art:2 equal:2 represents:1 yu:1 future:2 report:1 piecewise:2 intelligent:1 randomly:1 tightly:1 individual:1 keerthi:2 interest:1 screening:5 evaluation:1 numer:1 weng:1 primal:1 held:2 accurate:1 op3:6 euclidean:1 masayuki:1 desired:2 theoretical:7 instance:12 xeon:1 ordinary:1 cost:8 introducing:1 deviation:1 conducted:1 too:2 straightforwardly:2 rosset:2 fundamental:1 international:5 lee:1 d9:4 d8:2 chung:1 zhao:1 japan:1 op1:4 bergstra:1 sec:6 availability:1 coefficient:1 satisfy:1 explicitly:2 lab:1 ichiro:2 red:1 option:7 contribution:3 takeuchi:4 efficiently:2 karasuyama:2 identify:1 bayesian:5 none:1 classified:7 maxc:1 whenever:1 definition:1 acquisition:1 proof:1 mi:5 workstation:1 dataset:6 intensively:1 knowledge:1 efron:1 improves:1 supervised:1 improved:1 though:1 furthermore:7 yoshiki:1 replacing:1 trust:1 nonlinear:1 d10:5 logistic:3 quality:2 indicated:2 scientific:1 name:2 ye:2 staircase:2 true:1 multiplier:1 ization:2 former:1 regularization:75 illustrated:1 illustrative:1 d4:4 criterion:1 presenting:2 demonstrate:2 atsushi:1 meaning:1 variational:1 novel:5 common:2 jp:1 volume:1 discussed:1 slight:1 extend:1 numerically:1 significant:2 cambridge:1 cv:27 tuning:9 rd:5 grid:11 similarly:2 hp:1 chapelle:3 stable:1 etc:2 jaggi:1 closest:1 own:1 recent:3 scenario:1 nagoya:2 certain:1 indispensable:1 inequality:1 binary:2 yi:25 minimum:1 additional:1 greater:4 op2:8 multiple:2 d7:2 cross:4 offer:1 long:1 lin:3 divided:1 impact:2 variant:1 basic:1 regression:3 expectation:1 iteration:1 
kernel:3 achieved:1 eter:2 c1:4 swiercy:1 addition:1 interval:7 allocated:1 sure:1 call:2 structural:1 bengio:1 easy:1 enough:1 hastie:2 lasso:2 inner:2 idea:2 nitech:1 whether:1 motivated:1 gb:1 remark:1 generally:1 involve:1 listed:1 tth:1 http:1 exist:1 correctly:2 per:1 tibshirani:2 blue:1 hyperparameter:1 key:1 four:2 nevertheless:1 demonstrating:1 d3:6 libsvm:2 utilize:1 viallon:1 subgradient:3 prob:1 angle:1 parameterized:2 apalgorithm:1 reasonable:5 decision:1 appendix:7 summarizes:1 bound:44 ct:6 guaranteed:10 fold:2 fan:1 adapted:1 software:1 bousquet:1 wc:59 speed:1 argument:2 min:5 pacific:1 according:1 mllab:2 smaller:4 slightly:2 lem:1 ghaoui:1 taken:1 heart:1 computationally:2 loose:1 german:1 needed:1 end:3 available:1 apply:2 away:1 alternative:1 standardized:1 ensure:1 include:1 hinge:4 newton:1 approximating:2 objective:4 already:1 laue:3 strategy:2 costly:1 gradient:1 d6:2 athena:1 evenly:1 unstable:1 svmguide3:1 code:2 besides:1 illustration:4 providing:3 balance:1 setup:14 mostly:2 difficult:1 wonka:1 upper:7 vertical:1 w8a:1 datasets:6 benchmark:3 finite:3 arbitrary:1 lb:14 david:1 timal:1 required:1 specified:2 usually:1 ev:44 sparsity:2 built:1 green:1 suitable:1 natural:1 regularized:3 indicator:1 zhu:2 github:1 technology:2 library:2 identifies:1 axis:1 text:1 understanding:1 val:1 xiang:1 loss:11 limitation:1 validation:45 last:1 transpose:1 free:1 institute:1 johnstone:1 sparse:3 ghz:1 regard:1 curve:2 dimension:3 computes:2 suzuki:3 commonly:1 adaptive:2 far:4 transaction:3 approximate:34 emphasize:1 keep:1 sequentially:1 mem:1 mairal:1 concavely:1 xi:46 shwartz:1 search:15 why:1 table:5 additionally:1 nature:1 reasonably:1 robust:1 warmstart:1 obtaining:6 cl:11 did:1 main:4 linearly:2 whole:2 xu:1 theorem:2 minute:1 svm:3 ionosphere:4 exists:1 vapnik:3 illustrates:2 margin:1 logarithmic:1 simply:3 lagrange:1 pathwise:1 bo:2 chang:2 monotonic:1 springer:1 satisfies:5 acm:2 goal:1 quantifying:1 careful:2 exposition:1 rbf:1 hard:2 change:3 specifically:1 except:2 reducing:1 lemma:7 total:2 support:8 latter:1 d1:2 |
5,322 | 5,818 | Collaboratively Learning Preferences from Ordinal
Data
Sewoong Oh , Kiran K. Thekumparampil
University of Illinois at Urbana-Champaign
{swoh,thekump2}@illinois.edu
Jiaming Xu
The Wharton School, UPenn
[email protected]
Abstract
In personalized recommendation systems, it is important to predict preferences of
a user on items that have not been seen by that user yet. Similarly, in revenue
management, it is important to predict outcomes of comparisons among those
items that have never been compared so far. The MultiNomial Logit model, a
popular discrete choice model, captures the structure of the hidden preferences
with a low-rank matrix. In order to predict the preferences, we want to learn the
underlying model from noisy observations of the low-rank matrix, collected as
revealed preferences in various forms of ordinal data. A natural approach to learn
such a model is to solve a convex relaxation of nuclear norm minimization. We
present the convex relaxation approach in two contexts of interest: collaborative
ranking and bundled choice modeling. In both cases, we show that the convex
relaxation is minimax optimal. We prove an upper bound on the resulting error
with finite samples, and provide a matching information-theoretic lower bound.
1
Introduction
In recommendation systems and revenue management, it is important to predict preferences on items
that have not been seen by a user or predict outcomes of comparisons among those that have never
been compared. Predicting such hidden preferences would be hopeless without further assumptions on the structure of the preference. Motivated by the success of matrix factorization models on
collaborative filtering applications, we model hidden preferences with low-rank matrices to collaboratively learn preference matrices from ordinal data. This paper considers the following scenarios:
? Collaborative ranking. Consider an online market that collects each user?s preference as
a ranking over a subset of items that are ?seen? by the user. Such data can be obtained by
directly asking to compare some items, or by indirectly tracking online activities on which
items are viewed, how much time is spent on the page, or how the user rated the items.
In order to make personalized recommendations, we want a model which (a) captures how
users who prefer similar items are also likely to have similar preferences on unseen items,
(b) predicts which items a user might prefer, by learning from such ordinal data.
? Bundled choice modeling. Discrete choice models describe how a user makes decisions on
what to purchase. Typical choice models assume the willingness to buy an item is independent of what else the user bought. In many cases, however, we make ?bundled? purchases:
we buy particular ingredients together for one recipe or we buy two connecting flights. One
choice (the first flight) has a significant impact on the other (the connecting flight). In order
to optimize the assortment (which flight schedules to offer) for maximum expected revenue, it is crucial to accurately predict the willingness of the consumers to purchase, based
on past history. We consider a case where there are two types of products (e.g. jeans and
shirts), and want (a) a model that captures such interacting preferences for pairs of items,
one from each category; and (b) to predict the consumer?s choice probabilities on pairs of
items, by learning such models from past purchase history.
1
We use a discrete choice model known as MultiNomial Logit (MNL) model [1] (described in Section
2.1) to represent the preferences. In collaborative ranking context, MNL uses a low-rank matrix to
represent the hidden preferences of the users. Each row corresponds to a user?s preference over all
the items, and when presented with a subset of items the user provides a ranking over those items,
which is a noisy version of the hidden true preference. The low-rank assumption naturally captures
the similarities among users and items, by representing each on a low-dimensional space. In bundled
choice modeling context, the low-rank matrix now represents how pairs of items are matched. Each
row corresponds to an item from the first category and each column corresponds to an item from the
second category. An entry in the matrix represents how much the pair is preferred by a randomly
chosen user from a pool of users. Notice that in this case we do not model individual preferences,
but the preference of the whole population. The purchase history of the population is the record of
which pair was chosen among a subsets of items that were presented, which is again a noisy version
of the hidden true preference. The low-rank assumption captures the similarities and dis-similarities
among the items in the same category and the interactions across categories.
Contribution. A natural approach to learn such a low-rank model, from noisy observations, is to
solve a convex relaxation of nuclear norm minimization (described in Section 2.2), since nuclear
norm is the tightest convex surrogate for the rank function. We present such an approach for learning the MNL model from ordinal data, in two contexts: collaborative ranking and bundled choice
modeling. In both cases, we analyze the sample complexity of the algorithm, and provide an upper
bound on the resulting error with finite samples. We prove minimax-optimality of our approach by
providing a matching information-theoretic lower bound (up to a poly-logarithmic factor). Technically, we utilize the Random Utility Model (RUM) [2, 3, 4] interpretation (outlined in Section 2.1)
of the MNL model to prove both the upper bound and the fundamental limit, which could be of
interest to analyzing more general class of RUMs.
Related work. In the context of collaborative ranking, MNL models have been proposed to model
partial rankings from a pool of users. Recently, there has been new algorithms and analyses of
those algorithms to learn MNL models from samples, in the case when each user provides pair-wise
comparisons [5, 6]. [6] proposes solving a convex relaxation of maximizing the likelihood over
matrices with bounded nuclear norm. It is shown that this approach achieves statistically optimal
generalization error rate, instead of Frobenius norm error that we analyze. Our analysis techniques
are inspired by [5], which proposed the convex relaxation for learning MNL, but when the users
provide only pair-wise comparisons. In this paper, we generalize the results of [5] by analyzing
more general sampling models beyond pairwise comparisons.
The remainder of the paper is organized as follows. In Section 2, we present the MNL model and
propose a convex relaxation for learning the model, in the context of collaborative ranking. We
provide theoretical guarantees for collaborative ranking in Section 3. In Section 4, we present the
problem statement for bundled choice modeling, and analyze a similar convex relaxation approach.
Notations.
We use |||A|||F and |||A|||? to denote the Frobenius norm and the `? norm, |||A|||nuc =
P
i (A) denote the i-th singular value, and |||A|||2 =
i ?i (A) to denote the nuclear norm where ?
P
?1 (A) for the spectral norm. We use hhu, vii = i ui vi and kuk to denote the inner product and the
Euclidean norm. All ones vector is denoted by 1 and I(A) is the indicator function of the event A.
The set of the fist N integers are denoted by [N ] = {1, . . . , N }.
2
Model and Algorithm
In this section, we present a discrete choice modeling for collaborative ranking, and propose an
inference algorithm for learning the model from ordinal data.
2.1
MultiNomial Logit (MNL) model for comparative judgment
In collaborative ranking, we want to model how people who have similar preferences on a subset
of items are likely to have similar tastes on other items as well. When users provide ratings, as in
collaborative filtering applications, matrix factorization models are widely used since the low-rank
structure captures the similarities between users. When users provide ordered preferences, we use
a discrete choice model known as MultiNomial Logit (MNL) [1] model that has a similar low-rank
structure that captures the similarities between users and items.
2
Let ?? be the d1 ? d2 dimensional matrix capturing the preference of d1 users on d2 items, where
the rows and columns correspond to users and items, respectively. Typically, ?? is assumed to be
low-rank, having a rank r that is much smaller than the dimensions. However, in the following we
allow a more general setting where ?? might be only approximately low rank. When a user i is
presented with a set of alternatives Si ? [d2 ], she reveals her preferences as a ranked list over those
items. To simplify the notations we assume all users compare the same number k of items, but
the analysis naturally generalizes to the case when the size might differ from a user to a user. Let
vi,` ? Si denote the (random) `-th best choice of user i. Each user gives a ranking, independent of
other users? rankings, from
P {vi,1 , . . . , vi,k } =
k
Y
`=1
??
i,v
P
e
i,`
j?Si,`
?
e?i,j
,
(1)
where with Si,` ? Si \ {vi,1 , . . . , vi,`?1 } and Si,1 ? Si . For a user i, the i-th row of ?? represents
the underlying preference vector of the user, and the more preferred items are more likely to be
ranked higher. The probabilistic nature of the model captures the noise in the revealed preferences.
The random utility model (RUM), pioneered by [2, 3, 4], describes the choices of users as manifestations of the underlying utilities. The MNL models is a special case of RUM where each decision
maker and each alternative are represented by a r-dimensional feature vectors ui and vj respectively,
such that ??ij = hhui , vj ii, resulting in a low-rank matrix. When presented with a set of alternatives
Si , the decision maker i ranks the alternatives according to their random utility drawn from
Uij
= hhui , vj ii + ?ij ,
(2)
for item j, where ?ij follow the standard Gumbel distribution. Intuitively, this provides a justification
for the MNL model as modeling the decision makers as rational being, seeking to maximize utility.
Technically, this RUM interpretation plays a crucial role in our analysis, in proving restricted strong
convexity in Appendix A.5 and also in proving fundamental limit in Appendix C.
There are a few cases where the Maximum Likelihood (ML) estimation for RUM is tractable. One
notable example is the Plackett-Luce (PL) model, which is a special case of the MNL model where
?? is rank-one and all users have the same features. PL model has been widely applied in econometrics [1], analyzing elections [7], and machine learning [8]. Efficient inference algorithms has
been proposed [9, 10, 11], and the sample complexity has been analyzed for the MLE [12] and for
the Rank Centrality [13]. Although PL is quite restrictive, in the sense that it assumes all users share
the same features, little is known about inference in RUMs beyond PL. Recently, to overcome such
a restriction, mixed PL models have been studied, where ?? is rank-r but there are only r classes
of users and all users in the same class have the same features. Efficient inference algorithms with
provable guarantees have been proposed by applying recent advances in tensor decomposition methods [14, 15], directly clustering the users [16, 17], or using sampling methods [18]. However, this
mixture PL is still restrictive, and both clustering and tensor based approaches rely heavily on the
fact that the distribution is a ?mixture? and require additional incoherence assumptions on ?? . For
more general models, efficient inference algorithms have been proposed [19] but no performance
guarantee is known for finite samples. Although the MLE for the general MNL model in (1) is
intractable, we provide a polynomial-time inference algorithm with provable guarantees.
2.2
Nuclear norm minimization
Assuming ?? is well approximated by a low-rank matrix, we estimate ?? by solving the following
convex relaxation given the observed preference in the form of ranked lists {(vi,1 , . . . , vi,k )}i?[d1 ] .
b ? arg min L(?) + ?|||?||| ,
?
nuc
???
where the (negative) log likelihood function according to (1) is
?
?
??
d1 X
k
X
1 X
?hh?, ei eTv ii ? log ?
L(?) = ?
exp hh?, ei eTj ii ?? ,
i,`
k d1 i=1
(3)
(4)
j?Si,`
`=1
with Si = {vi,1 , . . . , vi,k } and Si,` ? Si \{vi,1 , . . . , vi,`?1 }, and appropriately chosen set ? defined
in (7). Since nuclear norm is a tight convex surrogate for the rank, the above optimization searches
3
for a low-rank solution that maximizes the likelihood. Nuclear norm minimization has been widely
used in rank minimization problems [20], but provable guarantees typically exists only for quadratic
loss function L(?) [21, 22]. Our analysis extends such analysis techniques to identify the conditions
under which restricted strong convexity is satisfied for a convex loss function that is not quadratic.
3
Collaborative ranking from k-wise comparisons
We first provide background on the MNL model, and then present main results on the performance
guarantees. Notice that the distribution (1) is independent of shifting each row of ?? by a constant.
Hence, there is an equivalent class of ?? that gives the same distributions for the ranked lists:
[?? ] = {A ? Rd1 ?d2 | A = ?? + u 1T for some u ? Rd1 } .
(5)
Since we can
P only estimate ? up to this equivalent class, we search for the one whose rows sum to
zero, i.e. j?[d2 ] ??i,j = 0 for all i ? [d1 ]. Let ? ? maxi,j1 ,j2 |??ij1 ? ??ij2 | denote the dynamic
range of the underlying ?? , such that when k items are compared, we always have
1 ??
1
1
1
? P {vi,1 = j} ?
? e? ,
e
?
(6)
k
1 + (k ? 1)e?
1 + (k ? 1)e??
k
?
for all j ? Si , all Si ? [d2 ] satisfying |Si | = k and all i ? [d1 ]. We do not make any assumptions on
? other than that ? = O(1) with respect to d1 and d2 . The purpose of defining the dynamic range in
this way is that we seek to characterize how the error scales with ?. Given this definition, we solve
the optimization in (3) over
n
o
X
?? = A ? Rd1 ?d2 |||A|||? ? ?, and ?i ? [d1 ] we have
Aij = 0 .
(7)
j?[d2 ]
While in practice we do not require the `? norm constraint, we need it for the analysis. For a related
problem of matrix completion, where the loss L(?) is quadratic, either a similar condition on `?
norm is required or a different condition on incoherence is required.
3.1
Performance guarantee
We provide an upper bound on the resulting error of our convex relaxation, when a multi-set of
items Si presented to user i is drawn uniformly at random with replacement. Precisely, for a
given k, Si = {ji,1 , . . . , ji,k } where ji,` ?s are independently drawn uniformly at random over
the d2 items. Further, if an item is sampled more than once, i.e. if there exists ji,`1 = ji,`2 for
some i and `1 6= `2 , then we assume that the user treats these two items as if they are two distinct items with the same MNL weights ??i,ji,` = ??i,ji,` .The resulting preference is therefore
1
2
always over k items (with possibly multiple copies of the same item), and distributed according
to (1). For example, if k = 3, it is possible to have Si = {ji,1 = 1, ji,2 = 1, ji,3 = 2}, in
which case the resulting ranking can be (vi,1 = ji,1 , vi,2 = ji,3 , vi,3 = ji,2 ) with probability
?
?
?
?
?
?
(e?i,1 )/(2 e?i,1 + e?i,2 ) ? (e?i,2 )/(e?i,1 + e?i,2 ). Such sampling with replacement is necessary
for the analysis, where we require independence in the choice of the items in Si in order to apply the
symmetrization technique (e.g. [23]) to bound the expectation of the deviation (cf. Appendix A.5).
Similar sampling assumptions have been made in existing analyses on learning low-rank models
from noisy observations, e.g. [22]. Let d ? (d1 + d2 )/2, and let ?j (?? ) denote the j-th singular
value of the matrix ?? . Define
s
d1 log d + d2 (log d)2 (log 2d)4
2?
.
?0 ? e
k d21 d2
Theorem 1. Under the described sampling model, assume 24 ? k ? min{d21 log d, (d21 +
d22 )/(2d1 ) log d, (1/e) d2 (4 log d2 +2 log d1 )}, and ? ? [480?0 , c0 ?0 ] with any constant c0 = O(1)
larger than 480. Then, solving the optimization (3) achieves
min{d1 ,d2 }
2
X
?
?
1 b
b ? ?? + 288e4? c0 ?0
?j (?? ) ,
? ? ?? ? 288 2 e4? c0 ?0 r ?
d1 d2
F
F
j=r+1
(8)
for any r ? {1, . . . , min{d1 , d2 }} with probability at least 1 ? 2d?3 ? d?3
2 where d = (d1 + d2 )/2.
4
A proof is provided in Appendix A. The above bound shows a natural splitting of the error into
two terms, one corresponding to the estimation error for the rank-r component and the second
one corresponding to the approximation error for how well one can approximate ?? with a rank-r
matrix. This bound holds for all values of r and one could potentially optimize over r. We show
such results in the following corollaries.
Corollary 3.1 (Exact low-rank matrices). Suppose ?? has rank at most r. Under the hypotheses
of Theorem 1, solving the optimization (3) with the choice of the regularization parameter ? ?
[480?0 , c0 ?0 ] achieves with probability at least 1 ? 2d?3 ? d?3
2 ,
s
?
1 b
r(d1 log d + d2 (log d)2 (log 2d)4 )
?
.
(9)
? ? ?? ? 288 2e6? c0
k d1
F
d1 d2
?
The number of entries is d1 d2 and we rescale the Frobenius norm error appropriately by 1/ d1 d2 .
When ?? is a rank-r matrix, then the degrees of freedom in representing ?? is r(d1 + d2 ) ? r2 =
O(r(d1 + d2 )). The above theorem shows that the total number of samples, which is (k d1 ), needs to
scale as O(rd1 (log d) + rd2 (log d)2 (log 2d)4 in order to achieve an arbitrarily small error. This is
only poly-logarithmic factor larger than the degrees of freedom. In Section 3.2, we provide a lower
bound on the error directly, that matches the upper bound up to a logarithmic factor.
The dependence on the dynamic range ?, however, is sub-optimal. It is expected that the error
increases with ?, since the ?? scales as ?, but the exponential dependence in the bound seems to
be a weakness of the analysis, as seen from numerical experiments in the right panel of Figure 1.
Although the error increase with ?, numerical experiments suggests that it only increases at most
linearly. However, tightening the scaling with respect to ? is a challenging problem, and such suboptimal dependence is also present in existing literature for learning even simpler models, such as
the Bradley-Terry model [13] or the Plackett-Luce model [12], which are special cases of the MNL
model studied in this paper. A practical issue in achieving the above rate is the choice of ?, since
the dynamic range ? is not known in advance. Figure 1 illustrates that the error is not sensitive to
the choice of ? for a wide range.
Another issue is that the underlying matrix might not be exactly low rank. It is more realistic to
assume that it is approximately low rank. Following [22] we formalize this notion with ?`q -ball? of
matrices defined as
X
Bq (?q ) ? {? ? Rd1 ?d2 |
|?j (?? )|q ? ?q } .
(10)
j?[min{d1 ,d2 }]
When q = 0, this is a set of rank-?0 matrices. For q ? (0, 1], this is set of matrices whose singular
values decay relatively fast. Optimizing the choice of r in Theorem 1, we get the following result.
Corollary 3.2 (Approximately low-rank matrices). Suppose ?? ? Bq (?q ) for some q ? (0, 1]
and ?q > 0. Under the hypotheses of Theorem 1, solving the optimization (3) with the choice of the
regularization parameter ? ? [480?0 , c0 ?0 ] achieves with probability at least 1 ? 2d?3 ,
?
1
d1 d2
b
? ? ??
F
?
? 2?q
s
2
?
2
2
?
2 ?q
d1 d2 (d1 log d + d2 (log d) (log 2d) ) ?
6?
?
288 2c0 e
. (11)
? ?
k d1
d1 d2
This is a strict generalization of Corollary 3.1. For q = 0 and ?0 = r, this recovers the exact
low-rank estimation bound up to a factor of two. For approximate low-rank matrices in an `q -ball,
we lose in the error exponent, which reduces from one to (2 ? q)/2. A proof of this Corollary is
provided in Appendix B.
The left panel of Figure 1 confirms the scaling of the error rate as predicted by Corollary 3.1. The
lines merge top
a single line when the sample size is rescaled appropriately. We make a choice
of ? = (1/2) (log d)/(kd2 ), This choice is independent of ? and is smaller than proposed in
Theorem 1. We generate random rank-r matrices of dimension d ? d, where ?? = U V T with
U ? Rd?r and V ? Rd?r entries generated i.i.d from uniform distribution over [0, 1]. Then the
5
RMSE
RMSE
1
? = 15
? = 10
?=5
r=3,d=50
r=6,d=50
r=12,d=50
r=24,d=50
1
0.1
0.1
0.01
1000
sample size k
10000
1
10
100
?
1000
?
(log d)/(kd2 )
10000
100000
p
Figure 1: The (rescaled) RMSE scales as r(log d)/k as expected from Corollary 3.1 for fixed
d = 50 (left). In the inset, the same data is plotted versus rescaled sample size k/(r log d). The
(rescaled) RMSE is stable for a broad range of ? and ? for fixed d = 50 and r = 3 (right).
row-mean is subtracted form each row, and then the whole matrix is scaled such that the largest
entry is ? = 5. Note that this operation does not increase the rank of the matrix ?. This is because
this de-meaning can be written as ? ? ?11T /d2 and both terms in the operation are of the same
column space as ? which is of rank r. The root mean squared error (RMSE) is plotted where
b . We implement and solve the convex optimization (3) using proximal
RMSE = (1/d)|||?? ? ?|||
F
gradient descent method as analyzed in [24]. The right panelp
in Figure 1 illustrates
p that the actual
error is insensitive to the choice of ? for a broad range of ? ? [ (log d)/(kd2 ), 28 (log d)/(kd2 )],
after which it increases with ?.
3.2
Information-theoretic lower bound for low-rank matrices
For a polynomial-time algorithm of convex relaxation, we gave in the previous section a bound
on the achievable error. We next compare this to the fundamental limit of this problem, by giving a
lower bound on the achievable error by any algorithm (efficient or not). A simple parameter counting
argument indicates that it requires the number of samples to scale as the degrees of freedom i.e.,
kd1 ? r(d1 + d2 ), to estimate a d1 ? d2 dimensional matrix of rank r. We construct an appropriate
packing over the set of low-rank matrices with bounded entries in ?? defined as (7), and show that no
algorithm can accurately estimate the true matrix with high probability using the generalized Fano?s
inequality. This provides a constructive argument to lower bound the minimax error rate, which in
turn establishes that the bounds in Theorem 1 is sharp up to a logarithmic factor, and proves no other
algorithm can significantly improve over the nuclear norm minimization.
Theorem 2. Suppose Θ* has rank r. Under the described sampling model, for large enough d₁ and d₂ ≥ d₁, there is a universal numerical constant c > 0 such that
\[
\inf_{\widehat{\Theta}}\;\sup_{\Theta^*\in\Omega_\alpha} \mathbb{E}\Big[\frac{1}{\sqrt{d_1 d_2}}\big\|\widehat{\Theta}-\Theta^*\big\|_F\Big] \;\ge\; c\,\min\left\{\alpha e^{-\alpha}\sqrt{\frac{r\,d_2}{k\,d_1\,\log d}}\,,\;\frac{\alpha\,d_2}{\sqrt{d_1 d_2}}\right\}, \tag{12}
\]
where the infimum is taken over all measurable functions of the observed ranked lists {(v_{i,1}, ..., v_{i,k})}_{i∈[d₁]}.
A proof of this theorem is provided in Appendix C. The term of primary interest in this bound is the first one, which shows the scaling of the (rescaled) minimax rate as √(r(d₁ + d₂)/(k d₁)) (when d₂ ≥ d₁), and matches the upper bound in (8). It is the dominant term in the bound whenever the number of samples is larger than the degrees of freedom by a logarithmic factor, i.e., k d₁ > r(d₁ + d₂) log d, ignoring the dependence on α. This is a typical regime of interest, where the sample size is comparable to the latent dimension of the problem. In this regime, Theorem 2 establishes that the upper bound in Theorem 1 is minimax-optimal up to a logarithmic factor in the dimension d.
4 Choice modeling for bundled purchase history
In this section, we use the MNL model to study another scenario of practical interest: choice modeling from bundled purchase history. In this setting, we assume that we have bundled purchase history
data from n users. Precisely, there are two categories of interest with d1 and d2 alternatives in each
category respectively. For example, there are d1 tooth pastes to choose from and d2 tooth brushes to
choose from. For the i-th user, a subset Sᵢ ⊆ [d₁] of alternatives from the first category is presented along with a subset Tᵢ ⊆ [d₂] of alternatives from the second category. We use k₁ and k₂ to denote
the number of alternatives presented to a single user, i.e. k1 = |Si | and k2 = |Ti |, and we assume
that the number of alternatives presented to each user is fixed, to simplify notations. Given these sets
of alternatives, each user makes a "bundled" purchase and we use (uᵢ, vᵢ) to denote the bundled pair of alternatives (e.g. a tooth brush and a tooth paste) purchased by the i-th user. Each user makes a choice of the best alternative, independent of other users' choices, according to the MNL model as
\[
\mathbb{P}\{(u_i, v_i) = (j_1, j_2)\} \;=\; \frac{e^{\Theta^*_{j_1, j_2}}}{\sum_{j_1' \in S_i,\, j_2' \in T_i} e^{\Theta^*_{j_1', j_2'}}}, \tag{13}
\]
for all j₁ ∈ Sᵢ and j₂ ∈ Tᵢ. The distribution (13) is independent of shifting all the values of Θ* by a constant. Hence, there is an equivalence class of Θ* that gives the same distribution for the choices: [Θ*] ≡ {A ∈ R^{d₁×d₂} | A = Θ* + c𝟙𝟙ᵀ for some c ∈ R}. Since we can only estimate Θ* up to this equivalence class, we search for the one that sums to zero, i.e. Σ_{j₁∈[d₁], j₂∈[d₂]} Θ*_{j₁,j₂} = 0. Let α = max_{j₁,j₁'∈[d₁], j₂,j₂'∈[d₂]} |Θ*_{j₁,j₂} − Θ*_{j₁',j₂'}| denote the dynamic range of the underlying Θ*, such that when k₁ × k₂ alternatives are presented, we always have
\[
\frac{1}{k_1 k_2}\, e^{-\alpha} \;\le\; \mathbb{P}\{(u_i, v_i) = (j_1, j_2)\} \;\le\; \frac{1}{k_1 k_2}\, e^{\alpha}, \tag{14}
\]
for all (j₁, j₂) ∈ Sᵢ × Tᵢ and for all Sᵢ ⊆ [d₁] and Tᵢ ⊆ [d₂] such that |Sᵢ| = k₁ and |Tᵢ| = k₂. We do not make any assumptions on α other than that α = O(1) with respect to d₁ and d₂. Assuming Θ* is well approximated by a low-rank matrix, we solve the following convex relaxation, given the observed bundled purchase history {(uᵢ, vᵢ, Sᵢ, Tᵢ)}_{i∈[n]}:
\[
\widehat{\Theta} \in \arg\min_{\Theta \in \Omega'_\alpha}\; \mathcal{L}(\Theta) + \lambda \|\Theta\|_{\mathrm{nuc}}, \tag{15}
\]
where the (negative) log-likelihood function according to (13) is
\[
\mathcal{L}(\Theta) = -\frac{1}{n}\sum_{i=1}^{n}\Big( \langle\langle \Theta, e_{u_i} e_{v_i}^T \rangle\rangle - \log\Big(\sum_{j_1 \in S_i,\, j_2 \in T_i} \exp\big(\langle\langle \Theta, e_{j_1} e_{j_2}^T \rangle\rangle\big)\Big)\Big), \quad\text{and} \tag{16}
\]
\[
\Omega'_\alpha \equiv \Big\{ A \in \mathbb{R}^{d_1 \times d_2} \,\Big|\, \|A\|_\infty \le \alpha, \text{ and } \sum_{j_1 \in [d_1],\, j_2 \in [d_2]} A_{j_1, j_2} = 0 \Big\}. \tag{17}
\]
Compared to collaborative ranking, (a) rows and columns of Θ* correspond to an alternative from the first and second category, respectively; (b) each sample corresponds to the purchase choice of a user which follows the MNL model with Θ*; (c) each person is presented subsets Sᵢ and Tᵢ of items from each category; (d) each sampled data point represents the most preferred bundled pair of alternatives.
4.1 Performance guarantee
We provide an upper bound on the error achieved by our convex relaxation, when the multi-set of alternatives Sᵢ from the first category and Tᵢ from the second category are drawn uniformly at random with replacement from [d₁] and [d₂] respectively. Precisely, for given k₁ and k₂, we let Sᵢ = {j^{(i)}_{1,1}, ..., j^{(i)}_{1,k₁}} and Tᵢ = {j^{(i)}_{2,1}, ..., j^{(i)}_{2,k₂}}, where the j^{(i)}_{1,ℓ}'s and j^{(i)}_{2,ℓ}'s are independently drawn uniformly at random over the d₁ and d₂ alternatives, respectively. Similar to the previous section, this sampling with replacement is necessary for the analysis. Define
\[
\lambda_1 = \sqrt{\frac{e^{2\alpha}\,\max\{d_1, d_2\}\,\log d}{n\, d_1 d_2}}. \tag{18}
\]
Theorem 3. Under the described sampling model, assume 16e^{2α} min{d₁, d₂} log d ≤ n ≤ min{d⁵, k₁k₂ max{d₁², d₂²}} log d, and λ ∈ [8λ₁, c₁λ₁] with any constant c₁ = O(1) larger than max{8, √(128/min{k₁, k₂})}. Then, solving the optimization (15) achieves
\[
\frac{1}{\sqrt{d_1 d_2}}\big\|\widehat{\Theta} - \Theta^*\big\|_F \;\le\; 48\sqrt{2}\, e^{2\alpha} c_1 \lambda_1 \sqrt{r\, d_1 d_2} \;+\; 48\, e^{2\alpha} c_1 \lambda_1 \sum_{j=r+1}^{\min\{d_1, d_2\}} \sigma_j(\Theta^*), \tag{19}
\]
for any r ∈ {1, ..., min{d₁, d₂}}, with probability at least 1 − 2d⁻³, where d = (d₁ + d₂)/2.
A proof is provided in Appendix D. Optimizing over r gives the following corollaries.
Corollary 4.1 (Exact low-rank matrices). Suppose Θ* has rank at most r. Under the hypotheses of Theorem 3, solving the optimization (15) with the choice of the regularization parameter λ ∈ [8λ₁, c₁λ₁] achieves, with probability at least 1 − 2d⁻³,
\[
\frac{1}{\sqrt{d_1 d_2}}\big\|\widehat{\Theta} - \Theta^*\big\|_F \;\le\; 48\sqrt{2}\, e^{3\alpha} c_1 \sqrt{\frac{r(d_1 + d_2)\log d}{n}}. \tag{20}
\]
This corollary shows that the number of samples n needs to scale as O(r(d₁ + d₂) log d) in order to achieve an arbitrarily small error. This is only a logarithmic factor larger than the degrees of freedom. We provide a fundamental lower bound on the error, which matches the upper bound up to a logarithmic factor. For approximately low-rank matrices in an ℓ_q-ball as defined in (10), we show an upper bound on the error, whose error exponent reduces from one to (2 − q)/2.
Corollary 4.2 (Approximately low-rank matrices). Suppose Θ* ∈ B_q(ρ_q) for some q ∈ (0, 1] and ρ_q > 0. Under the hypotheses of Theorem 3, solving the optimization (15) with the choice of the regularization parameter λ ∈ [8λ₁, c₁λ₁] achieves, with probability at least 1 − 2d⁻³,
\[
\frac{1}{\sqrt{d_1 d_2}}\big\|\widehat{\Theta} - \Theta^*\big\|_F \;\le\; 48\sqrt{2}\, c_1\, e^{3\alpha}\sqrt{2\rho_q}\left(\frac{1}{\sqrt{d_1 d_2}}\sqrt{\frac{d_1 d_2\,(d_1 + d_2)\log d}{n}}\right)^{\!\frac{2-q}{2}}. \tag{21}
\]
Since the proof is almost identical to the proof of Corollary 3.2 in Appendix B, we omit it.
Theorem 4. Suppose Θ* has rank r. Under the described sampling model, there is a universal constant c > 0 such that the minimax rate, where the infimum is taken over all measurable functions of the observed purchase history {(uᵢ, vᵢ, Sᵢ, Tᵢ)}_{i∈[n]}, is lower bounded by
\[
\inf_{\widehat{\Theta}}\;\sup_{\Theta^*\in\Omega_\alpha} \mathbb{E}\Big[\frac{1}{\sqrt{d_1 d_2}}\big\|\widehat{\Theta}-\Theta^*\big\|_F\Big] \;\ge\; c\,\min\left\{\sqrt{\frac{e^{-5\alpha}\, r\,(d_1+d_2)}{n\,\log d}}\,,\;\frac{\alpha(d_1+d_2)}{\sqrt{d_1 d_2}}\right\}. \tag{22}
\]
See Appendix E.1 for the proof. The first term is dominant, and when the sample size is comparable
to the latent dimension of the problem, Theorem 3 is minimax optimal up to a logarithmic factor.
5 Discussion
We presented a convex program to learn MNL parameters from ordinal data, motivated by two scenarios: recommendation systems and bundled purchases. We take the first-principles approach of identifying the fundamental limits and also developing efficient algorithms matching those fundamental trade-offs. There are several remaining challenges. (a) Nuclear norm minimization, while polynomial-time, is still slow. We want first-order methods that are efficient with provable guarantees. The main challenge is providing a good initialization to start such non-convex approaches. (b) For simpler models, such as the PL model, more general sampling over a graph has been studied. We want analytical results for more general sampling. (c) The practical use of the model and the algorithm needs to be tested on real datasets on purchase history and recommendations.
Acknowledgments
This research is supported in part by NSF CMMI award MES-1450848 and NSF SaTC award CNS1527754.
References
[1] Daniel McFadden. Conditional logit analysis of qualitative choice behavior. 1973.
[2] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273, 1927.
[3] Jacob Marschak. Binary-choice constraints and random utility indicators. In Proceedings of a symposium on mathematical methods in the social sciences, volume 7, pages 19–38, 1960.
[4] D. R. Luce. Individual Choice Behavior. Wiley, New York, 1959.
[5] Yu Lu and Sahand N. Negahban. Individualized rank aggregation using nuclear norm regularization. arXiv preprint arXiv:1410.0860, 2014.
[6] Dohyung Park, Joe Neeman, Jin Zhang, Sujay Sanghavi, and Inderjit S. Dhillon. Preference completion: Large-scale collaborative ranking from pairwise comparisons. 2015.
[7] Isobel Claire Gormley and Thomas Brendan Murphy. A grade of membership model for rank data. Bayesian Analysis, 4(2):265–295, 2009.
[8] Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3(3):225–331, 2009.
[9] D. R. Hunter. MM algorithms for generalized Bradley–Terry models. Annals of Statistics, pages 384–406, 2004.
[10] John Guiver and Edward Snelson. Bayesian inference for Plackett–Luce ranking models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 377–384. ACM, 2009.
[11] Francois Caron and Arnaud Doucet. Efficient Bayesian inference for generalized Bradley–Terry models. Journal of Computational and Graphical Statistics, 21(1):174–196, 2012.
[12] B. Hajek, S. Oh, and J. Xu. Minimax-optimal inference from partial rankings. In Advances in Neural Information Processing Systems, pages 1475–1483, 2014.
[13] S. Negahban, S. Oh, and D. Shah. Iterative ranking from pair-wise comparisons. In NIPS, pages 2483–2491, 2012.
[14] S. Oh and D. Shah. Learning mixed multinomial logit model from ordinal data. In Advances in Neural Information Processing Systems, pages 595–603, 2014.
[15] W. Ding, P. Ishwar, and V. Saligrama. A topic modeling approach to rank aggregation. Boston University Center for Info. and Systems Engg. Technical Report http://www.bu.edu/systems/publications, 2014.
[16] A. Ammar, S. Oh, D. Shah, and L. Voloch. What's your choice? Learning the mixed multi-nomial logit model. In Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2014.
[17] Rui Wu, Jiaming Xu, R. Srikant, Laurent Massoulié, Marc Lelarge, and Bruce Hajek. Clustering and inference from pairwise comparisons. arXiv preprint arXiv:1502.04631, 2015.
[18] H. Azari Soufiani, H. Diao, Z. Lai, and D. C. Parkes. Generalized random utility models with multiple types. In Advances in Neural Information Processing Systems, pages 73–81, 2013.
[19] H. A. Soufiani, D. C. Parkes, and L. Xia. Random utility theory for social choice. In NIPS, pages 126–134, 2012.
[20] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[21] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[22] S. Negahban and M. J. Wainwright. Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 2012.
[23] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[24] A. Agarwal, S. Negahban, and M. Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In NIPS, pages 37–45, 2010.
[25] J. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Comput. Math., 2011.
[26] S. Van De Geer. Empirical Processes in M-estimation, volume 6. Cambridge University Press, 2000.
[27] M. Ledoux. The Concentration of Measure Phenomenon. Number 89. American Mathematical Soc., 2005.
SGD Algorithms based on Incomplete U-statistics:
Large-Scale Minimization of Empirical Risk
Guillaume Papa, Stéphan Clémençon
LTCI, CNRS, Télécom ParisTech
Université Paris-Saclay, 75013 Paris, France
[email protected]
Aurélien Bellet
Magnet Team, INRIA Lille - Nord Europe
59650 Villeneuve d'Ascq, France
[email protected]
Abstract
In many learning problems, ranging from clustering to ranking through metric
learning, empirical estimates of the risk functional consist of an average over tuples (e.g., pairs or triplets) of observations, rather than over individual observations. In this paper, we focus on how to best implement a stochastic approximation
approach to solve such risk minimization problems. We argue that in the large-scale setting, gradient estimates should be obtained by sampling tuples of data points with replacement (incomplete U-statistics) instead of sampling data points without replacement (complete U-statistics based on subsamples). We develop a
theoretical framework accounting for the substantial impact of this strategy on the
generalization ability of the prediction model returned by the Stochastic Gradient
Descent (SGD) algorithm. It reveals that the method we promote achieves a much
better trade-off between statistical accuracy and computational cost. Beyond the
rate bound analysis, experiments on AUC maximization and metric learning provide strong empirical evidence of the superiority of the proposed approach.
1 Introduction
In many machine learning problems, the statistical risk functional is an expectation over d-tuples
(d ≥ 2) of observations, rather than over individual points. This is the case in supervised metric
learning [3], where one seeks to optimize a distance function such that it assigns smaller values
to pairs of points with the same label than to those with different labels. Other popular examples
include bipartite ranking (see [27] for instance), where the goal is to maximize the number of concordant pairs (i.e. AUC maximization), and more generally multi-partite ranking (cf [12]), as well
as pairwise clustering (see [7]). Given a data sample, the most natural empirical risk estimate (which
is known to have minimal variance among all unbiased estimates) is obtained by averaging over all
tuples of observations and thus takes the form of a U-statistic (an average of dependent variables generalizing the means, see [19]). The Empirical Risk Minimization (ERM) principle, one of the main paradigms of statistical learning theory, has been extended to the case where the empirical risk of a prediction rule is a U-statistic [5], using concentration properties of U-processes (i.e. collections of U-statistics). The computation of the empirical risk is however numerically unfeasible in
large and even moderate scale situations due to the exploding number of possible tuples.
In practice, the minimization of such empirical risk functionals is generally performed by means
of stochastic optimization techniques such as Stochastic Gradient Descent (SGD), where at each
iteration only a small number of randomly selected terms are used to compute an estimate of the
gradient (see [27, 24, 16, 26] for instance). A drawback of the original SGD learning method,
introduced in the case where empirical risk functionals are computed by summing over independent
observations (sample mean statistics), is its slow convergence due to the variance of the gradient
estimates, see [15]. This has recently motivated the development of a wide variety of SGD variants
implementing a variance reduction method in order to improve convergence. Variance reduction is
achieved by occasionally computing the exact gradient (see SAG [18], SVRG [15], MISO [20] and
SAGA [9] among others) or by means of nonuniform sampling schemes (see [21, 28] for instance).
However, such ideas can hardly be applied to the case under study here: due to the overwhelming
number of possible tuples, computing even a single exact gradient or maintaining a probability
distribution over the set of all tuples is computationally unfeasible in general.
In this paper, we leverage the specific structure and statistical properties of the empirical risk functional when it is of the form of a U-statistic to design an efficient implementation of the SGD
learning method. We study the performance of the following sampling scheme for the gradient
estimation step involved in the SGD algorithm: drawing with replacement a set of tuples directly
(in order to build an incomplete U-statistic gradient estimate), rather than drawing a subset of observations without replacement and forming all possible tuples based on these (the corresponding gradient estimate is then a complete U-statistic based on a subsample). While [6] has investigated maximal deviations between U-processes and their incomplete approximations, the performance
analysis carried out in the present paper is inspired from [4] and involves both the optimization error
of the SGD algorithm and the estimation error induced by the statistical finite-sample setting. We
first provide non-asymptotic rate bounds and asymptotic convergence rates for the SGD procedure
applied to the empirical minimization of a U-statistic. These results shed light on the impact of the
conditional variance of the gradient estimators on the speed of convergence of SGD. We then derive
a novel generalization bound which depends on the variance of the sampling strategies. This bound
establishes the indisputable superiority of the incomplete U-statistic estimation approach over the
complete variant in terms of the trade-off between statistical accuracy and computational cost. Our
experimental results on AUC maximization and metric learning tasks on large-scale datasets are
consistent with our theoretical findings and show that the use of the proposed sampling strategy can
provide spectacular performance gains in practice. We conclude this paper with promising lines
for future research, in particular regarding the trade-offs involved in a possible implementation of
nonuniform sampling strategies to further improve convergence.
The rest of this paper is organized as follows. In Section 2, we briefly review the theory of U-statistics and their approximations, together with elementary notions of gradient-based stochastic
approximation. Section 3 provides a detailed description of the SGD implementation we propose,
along with a performance analysis conditional upon the data sample. In Section 4, based on these
results, we derive a generalization bound based on a decomposition into optimization and estimation
errors. Section 5 presents our numerical experiments, and we conclude in Section 6. Technical
proofs are sketched in the Appendix, and further details can be found in the Supplementary Material.
2 Background and Problem Setup
Here and throughout, the indicator function of any event E is denoted by I{E} and the variance of
any square integrable r.v. Z by σ²(Z).
2.1 U-statistics: Definition and Examples
Generalized U-statistics are extensions of standard sample mean statistics, as defined below.
Definition 1. Let K ≥ 1 and (d₁, ..., d_K) ∈ N*^K. Let X^{(k)}_{\{1,...,n_k\}} = (X₁^{(k)}, ..., X_{n_k}^{(k)}), 1 ≤ k ≤ K, be K independent samples of sizes n_k ≥ d_k and composed of i.i.d. random variables taking their values in some measurable space X_k with distribution F_k(dx) respectively. Let H : X₁^{d₁} × ··· × X_K^{d_K} → R be a measurable function, square integrable with respect to the probability distribution μ = F₁^{⊗d₁} ⊗ ··· ⊗ F_K^{⊗d_K}. Assume in addition (without loss of generality) that H(x^{(1)}, ..., x^{(K)}) is symmetric within each block of arguments x^{(k)} (valued in X_k^{d_k}), 1 ≤ k ≤ K. The generalized (or K-sample) U-statistic of degrees (d₁, ..., d_K) with kernel H is then defined as
\[
U_{\mathbf{n}}(H) = \frac{1}{\prod_{k=1}^{K} \binom{n_k}{d_k}} \sum_{I_1} \ldots \sum_{I_K} H\big(X_{I_1}^{(1)};\, X_{I_2}^{(2)};\, \ldots;\, X_{I_K}^{(K)}\big), \tag{1}
\]
where n = (n₁, ..., n_K), the symbol Σ_{I₁} ··· Σ_{I_K} refers to summation over all elements of Λ, the set of the Π_{k=1}^{K} C(n_k, d_k) index vectors (I₁, ..., I_K), I_k being a set of d_k indexes 1 ≤ i₁ < ... < i_{d_k} ≤ n_k and X_{I_k}^{(k)} = (X_{i₁}^{(k)}, ..., X_{i_{d_k}}^{(k)}) for 1 ≤ k ≤ K.
In the above definition, standard mean statistics correspond to the case where K = 1 = d₁. More generally when K = 1, U_n(H) is an average over all d₁-tuples of observations. Finally, K ≥ 2 corresponds to the multi-sample situation where a d_k-tuple is used for each sample k ∈ {1, ..., K}. The key property of the statistic (1) is that it has minimum variance among all unbiased estimates of
\[
\theta(H) = \mathbb{E}\big[ H\big(X_1^{(1)}, \ldots, X_{d_1}^{(1)}, \ldots, X_1^{(K)}, \ldots, X_{d_K}^{(K)}\big) \big] = \mathbb{E}[U_{\mathbf{n}}(H)].
\]
One may refer to [19] for further results on the theory of U-statistics. In machine learning, generalized U-statistics are used as performance criteria in various problems, such as those listed below.
Clustering. Given a distance D : X₁ × X₁ → R₊, the quality of a partition P of X₁ with respect to the clustering of an i.i.d. sample X₁, ..., X_n drawn from F₁(dx) can be assessed through the within-cluster point scatter:
\[
\widehat{W}_n(\mathcal{P}) = \frac{2}{n(n-1)} \sum_{i<j} D(X_i, X_j) \cdot \sum_{\mathcal{C} \in \mathcal{P}} \mathbb{I}\{(X_i, X_j) \in \mathcal{C}^2\}. \tag{2}
\]
It is a one-sample U-statistic of degree 2 with kernel H_P(x, x') = D(x, x') · Σ_{C∈P} I{(x, x') ∈ C²}.
Multi-partite ranking. Suppose that K independent i.i.d. samples X₁^{(k)}, ..., X_{n_k}^{(k)} with n_k ≥ 1 and 1 ≤ k ≤ K on X₁ ⊂ R^p have been observed. The accuracy of a scoring function s : X₁ → R with respect to the K-partite ranking is empirically estimated by the rate of concordant K-tuples (sometimes referred to as the Volume Under the ROC Surface):
\[
\widehat{\mathrm{VUS}}_{\mathbf{n}}(s) = \frac{1}{n_1 \times \cdots \times n_K} \sum_{i_1=1}^{n_1} \cdots \sum_{i_K=1}^{n_K} \mathbb{I}\big\{ s(X_{i_1}^{(1)}) < \cdots < s(X_{i_K}^{(K)}) \big\}.
\]
The quantity above is a K-sample U-statistic with degrees d₁ = ... = d_K = 1 and kernel H̃_s(x₁, ..., x_K) = I{s(x₁) < ··· < s(x_K)}.
Metric learning. Based on an i.i.d. sample of labeled data (X₁, Y₁), ..., (X_n, Y_n) on X₁ = R^p × {1, ..., J}, the empirical pairwise classification performance of a distance D : X₁ × X₁ → R₊ can be evaluated by:
\[
\widehat{R}_n(D) = \frac{6}{n(n-1)(n-2)} \sum_{i<j<k} \mathbb{I}\big\{ D(X_i, X_j) < D(X_i, X_k),\; Y_i = Y_j \neq Y_k \big\}, \tag{3}
\]
which is a one-sample U-statistic of degree three with kernel H̃_D((x, y), (x', y'), (x'', y'')) = I{D(x, x') < D(x, x''), y = y' ≠ y''}.
2.2 Gradient-based minimization of U-statistics
Let Θ ⊂ R^q with q ≥ 1 be some parameter space and consider the risk minimization problem min_{θ∈Θ} L(θ) with
\[
L(\theta) = \mathbb{E}\big[ H\big(X_1^{(1)}, \ldots, X_{d_1}^{(1)}, \ldots, X_1^{(K)}, \ldots, X_{d_K}^{(K)};\, \theta\big) \big] = \mu(H(\cdot\,; \theta)),
\]
where H : Π_{k=1}^{K} X_k^{d_k} × Θ → R is a convex loss function and the (X₁^{(k)}, ..., X_{d_k}^{(k)})'s, 1 ≤ k ≤ K, are K independent random variables with distribution F_k^{⊗d_k}(dx) on X_k^{d_k} respectively, so that H is square integrable for any θ ∈ Θ. Based on K independent i.i.d. samples X₁^{(k)}, ..., X_{n_k}^{(k)} with 1 ≤ k ≤ K, the empirical version of the risk function is θ ∈ Θ ↦ L̂_n(θ) = U_n(H(·; θ)). We denote by ∇_θ the gradient operator w.r.t. θ.
Many learning algorithms are based on gradient descent, following the iterations θ_{t+1} = θ_t − γ_t ∇_θ L̂_n(θ_t), with an arbitrary initial value θ₀ ∈ Θ and a learning rate (step size) γ_t ≥ 0 such that Σ_{t=1}^{+∞} γ_t = +∞ and Σ_{t=1}^{+∞} γ_t² < +∞. Here we place ourselves in a large-scale setting, where the sample sizes n₁, ..., n_K of the training datasets are such that computing the empirical gradient
\[
\widehat{g}_{\mathbf{n}}(\theta) \overset{\mathrm{def}}{=} \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta) = \frac{1}{\prod_{k=1}^{K} \binom{n_k}{d_k}} \sum_{I_1} \ldots \sum_{I_K} \nabla_\theta H\big(X_{I_1}^{(1)};\, X_{I_2}^{(2)};\, \ldots;\, X_{I_K}^{(K)};\, \theta\big) \tag{4}
\]
at each iteration is intractable due to the number #Λ = Π_{k=1}^{K} C(n_k, d_k) of terms to be averaged. Instead, stochastic approximation suggests the use of an unbiased estimate of (4) that is cheap to compute.
3 SGD Implementation based on Incomplete U-Statistics
A possible approach consists in replacing (4) by a (complete) U-statistic computed from subsamples of reduced sizes n'_k ≪ n_k, say {(X'₁^{(k)}, ..., X'_{n'_k}^{(k)}) : k = 1, ..., K}, drawn uniformly at random without replacement among the original samples, leading to the following gradient estimator:
\[
\bar{g}_{\mathbf{n}'}(\theta) = \frac{1}{\prod_{k=1}^{K} \binom{n'_k}{d_k}} \sum_{I_1} \ldots \sum_{I_K} \nabla_\theta H\big(X_{I_1}'^{(1)};\, X_{I_2}'^{(2)};\, \ldots;\, X_{I_K}'^{(K)};\, \theta\big), \tag{5}
\]
where Σ_{I_k} refers to summation over all C(n'_k, d_k) subsets X'_{I_k}^{(k)} = (X'_{i₁}^{(k)}, ..., X'_{i_{d_k}}^{(k)}) related to a set I_k of d_k indexes 1 ≤ i₁ < ... < i_{d_k} ≤ n'_k and n' = (n'₁, ..., n'_K). Although this approach is very natural, one can obtain a better estimate for the same computational cost, as shall be seen below.
3.1 Monte-Carlo Estimation of the Empirical Gradient
From a practical perspective, the alternative strategy we propose is of disarming simplicity. It is based on a Monte-Carlo sampling scheme that consists in drawing independently with replacement among the set of index vectors Λ, yielding a gradient estimator in the form of a so-called incomplete U-statistic (see [19]):
\[
\tilde{g}_B(\theta) = \frac{1}{B} \sum_{(I_1, \ldots, I_K) \in \mathcal{D}_B} \nabla_\theta H\big(X_{I_1}^{(1)}, \ldots, X_{I_K}^{(K)};\, \theta\big), \tag{6}
\]
where D_B is built by sampling B times with replacement in the set Λ. We point out that the conditional expectation of (6) given the K observed data samples is equal to ĝ_n(θ). The parameter B, corresponding to the number of terms to be averaged, controls the computational complexity of the SGD implementation. Observe incidentally that an incomplete U-statistic is not a U-statistic in general. Hence, as an unbiased estimator of the gradient of the statistical risk L(θ), (6) is of course less accurate than the full empirical gradient (4) (i.e., it has larger variance), but this slight increase in variance leads to a large reduction in computational cost. In our subsequent analysis, we will show that for the same computational cost (i.e., taking B = Π_{k=1}^{K} C(n'_k, d_k)), implementing SGD with (6) rather than (5) leads to much more accurate results. We will rely on the fact that (6) has smaller variance w.r.t. ∇L(θ) (except in the case where K = 1 = d₁), as shown in the proposition below.
Proposition 1. Set B = Π_{k=1}^{K} C(n'_k, d_k). There exists a universal constant c > 0 such that, for all n ∈ N*^K,
\[
\sigma^2(\bar{g}_{\mathbf{n}'}(\theta)) \le c \cdot \frac{\sigma_\theta^2}{\sum_{k=1}^{K} n'_k} \qquad \text{and} \qquad \sigma^2(\tilde{g}_B(\theta)) \le c \cdot \frac{\sigma_\theta^2}{\prod_{k=1}^{K} \binom{n'_k}{d_k}},
\]
with σ_θ² = σ²(∇_θ H(X₁^{(1)}, ..., X_{d_K}^{(K)}; θ)). Explicit but lengthy expressions of the variances are given in [19].
Remark 1. The results of this paper can be extended to other sampling schemes to approximate (4), such as Bernoulli sampling or sampling without replacement in Λ, following the proposal of [14]. For clarity, we focus on sampling with replacement, which is computationally more efficient.
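Before analyzing them, here is a minimal sketch contrasting the two estimators for a one-sample U-statistic of degree 2 (K = 1, d₁ = 2); `grad_H` is a placeholder for the tuple-wise gradient ∇_θ H. Both routines cost B gradient evaluations when B = C(n_sub, 2), which is the comparison studied below.

```python
import numpy as np
from itertools import combinations

def grad_complete_subsample(grad_H, X, theta, n_sub, rng):
    """Estimator (5): subsample n_sub points without replacement and
    average grad_H over all C(n_sub, 2) pairs of the subsample."""
    idx = rng.choice(len(X), size=n_sub, replace=False)
    pairs = list(combinations(idx, 2))
    return sum(grad_H(X[i], X[j], theta) for i, j in pairs) / len(pairs)

def grad_incomplete(grad_H, X, theta, B, rng):
    """Estimator (6): draw B pairs with replacement, uniformly over the
    full set of index pairs, and average grad_H over them."""
    n = len(X)
    i = rng.integers(0, n, size=B)
    j = rng.integers(0, n, size=B)
    clash = (i == j)
    while clash.any():                 # resample the (rare) diagonal draws
        j[clash] = rng.integers(0, n, size=clash.sum())
        clash = (i == j)
    return sum(grad_H(X[a], X[b], theta) for a, b in zip(i, j)) / B
```

For the same budget B, Proposition 1 indicates that the second routine yields a much lower-variance estimate of the gradient of the risk.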
3.2 A Conditional Performance Analysis
As a first go, we investigate and compare the performance of the SGD methods described above conditionally upon the observed data samples. For simplicity, we denote by P_n(·) the conditional probability measure given the data and by E_n[·] the P_n-expectation. Given a matrix M, we denote by Mᵀ the transpose of M and by ‖M‖_HS := √(Tr(M Mᵀ)) its Hilbert–Schmidt norm. We assume that the loss function H is l-smooth in θ, i.e. its gradient is l-Lipschitz, with l > 0. We also restrict ourselves to the case where L̂_n is α-strongly convex for some deterministic constant α:
\[
\widehat{L}_{\mathbf{n}}(\theta_1) - \widehat{L}_{\mathbf{n}}(\theta_2) \le \nabla_\theta \widehat{L}_{\mathbf{n}}(\theta_1)^T (\theta_1 - \theta_2) - \frac{\alpha}{2} \|\theta_1 - \theta_2\|^2, \tag{7}
\]
and we denote by θ_n* its unique minimizer. We point out that the present analysis can be extended to the smooth but non-strongly convex case, see [1]. A classical argument based on convex analysis and
stochastic optimization (see [1, 22] for instance) shows precisely how the conditional variance of the gradient estimator impacts the empirical performance of the solution produced by the corresponding SGD method, and thus strongly advocates the use of the SGD variant proposed in Section 3.1.
Proposition 2. Consider the recursion θ_{t+1} = θ_t − γ_t g(θ_t) where E_n[g(θ_t) | θ_t] = ∇_θ L̂_n(θ_t), and denote by σ_n²(g(θ)) the conditional variance of g(θ). For step size γ_t = γ₁/t^ρ, the following holds.
1. If 1/2 < ρ < 1, then:
\[
E_n\big[\widehat{L}_{\mathbf{n}}(\theta_{t+1}) - \widehat{L}_{\mathbf{n}}(\theta_n^*)\big] \le \sigma_n^2(g(\theta_n^*)) \underbrace{\gamma_1 l\, 2^{\rho-1}\Big(\frac{l\gamma_1^2}{2\rho} + \frac{1}{2\rho - 1}\Big)}_{C_1} \frac{1}{t^\rho} + o\Big(\frac{1}{t^\rho}\Big).
\]
2. If ρ = 1 and γ₁ > 1/(2α), then:
\[
E_n\big[\widehat{L}_{\mathbf{n}}(\theta_{t+1}) - \widehat{L}_{\mathbf{n}}(\theta_n^*)\big] \le \sigma_n^2(g(\theta_n^*)) \underbrace{\frac{2 l\, \gamma_1^2\, \exp(2\alpha l \gamma_1)}{2\alpha\gamma_1 - 1}}_{C_2} \frac{1}{t+1} + o\Big(\frac{1}{t}\Big).
\]
Proposition 2 illustrates the well-known fact that the convergence rate of SGD is dominated by the variance term, and thus one needs to focus on reducing this term to improve its performance.
We are also interested in the asymptotic behavior of the algorithm (when t → +∞), under the following assumptions:
A1. The function L̂_n(θ) is twice differentiable on a neighborhood of θ_n*.
A2. The function ∇L̂_n(θ) is bounded.
Let us set Σ = ∇²L̂_n(θ_n*). We establish the following result (refer to the Supplementary Material for a detailed proof).
Theorem 1. Let the covariance matrix Σ_θ^n be the unique solution of the Lyapunov equation:
\[
\Sigma \Sigma_\theta^n + \Sigma_\theta^n \Sigma - \beta\, \Sigma_\theta^n = \Gamma_n(\theta_n^*), \tag{8}
\]
where Γ_n(θ_n*) = E_n[g(θ_n*) g(θ_n*)ᵀ] and β = γ₁⁻¹ (with γ₁ > 1/(2α)) if ρ = 1, and β = 0 otherwise. Then, under Assumptions A1–A2, we have:
\[
\frac{1}{\gamma_t}\Big( \widehat{L}_{\mathbf{n}}(\theta_t) - \widehat{L}_{\mathbf{n}}(\theta_n^*) \Big) \;\Rightarrow\; \frac{1}{2}\, U^T (\Sigma_\theta^n)^{1/2}\, \Sigma\, (\Sigma_\theta^n)^{1/2} U,
\]
where U ∼ N(0, I_q). In addition, in the case β = 0, we have:
\[
\frac{1}{2}\big\| (\Sigma_\theta^n \Sigma)^{1/2} \big\|_{HS}^2 = \mathbb{E}\big[ U^T (\Sigma_\theta^n)^{1/2}\, \Sigma\, (\Sigma_\theta^n)^{1/2} U \big] = \sigma_n^2(g(\theta_n^*)). \tag{9}
\]
Theorem 1 reveals that the conditional variance term again plays a key role in the asymptotic performance of the algorithm. In particular, it is the dominating term in the precision of the solution. In
the next section, we build on these results to derive a generalization bound in the spirit of [4] which explicitly depends on the true variance of the gradient estimator.
4 Generalization Bounds
Let θ* = argmin_{θ∈Θ} L(θ) be the minimizer of the true risk. As proposed in [4], the mean excess risk can be decomposed as follows: for all n ∈ N*^K,
\[
\mathbb{E}[L(\theta_t) - L(\theta^*)] \le \underbrace{2\, \mathbb{E}\Big[ \sup_{\theta \in \Theta} \big| \widehat{L}_{\mathbf{n}}(\theta) - L(\theta) \big| \Big]}_{E_1} + \underbrace{\mathbb{E}\Big[ \widehat{L}_{\mathbf{n}}(\theta_t) - \widehat{L}_{\mathbf{n}}(\theta_n^*) \Big]}_{E_2}. \tag{10}
\]
Beyond the optimization error (the second term on the right-hand side of (10)), the analysis of the generalization ability of the learning method previously described requires to control the estimation error (the first term). This can be achieved by means of the result stated below, which extends Corollary 3 in [5] to the K-sample situation.
Proposition 3. Let H be a collection of bounded symmetric kernels on Π_{k=1}^{K} X_k^{d_k} such that M_H = sup_{(H,x)∈H×X} |H(x)| < +∞. Suppose also that H is a VC major class of functions with finite Vapnik–Chervonenkis dimension V < +∞. Let κ = min{⌊n₁/d₁⌋, ..., ⌊n_K/d_K⌋}. Then, for any n ∈ N*^K,
\[
\mathbb{E}\Big[ \sup_{H \in \mathcal{H}} |U_{\mathbf{n}}(H) - \mu(H)| \Big] \le 2 M_{\mathcal{H}} \sqrt{\frac{2 V \log(1+\kappa)}{\kappa}}. \tag{11}
\]
We are now ready to derive our main result.
Theorem 2. Let θ_t be the sequence generated by SGD using the incomplete statistic gradient estimator (6) with B = Π_{k=1}^{K} C(n'_k, d_k) terms for some n'₁, ..., n'_K. Assume that {H(·; θ) : θ ∈ Θ} is a VC major class of finite VC dimension V such that
\[
M_\Theta = \sup_{\theta \in \Theta,\; (x^{(1)}, \ldots, x^{(K)}) \in \prod_{k=1}^{K} \mathcal{X}_k^{d_k}} \big| H(x^{(1)}, \ldots, x^{(K)};\, \theta) \big| < +\infty, \tag{12}
\]
and N_Θ = sup_{θ∈Θ} σ_θ² < +∞. If the step size satisfies the conditions of Proposition 2, we have, for all n ∈ N*^K:
\[
\mathbb{E}\big[ |L(\theta_t) - L(\theta^*)| \big] \le \frac{C N_\Theta}{B t^\rho} + 4 M_\Theta \sqrt{\frac{2 V \log(1+\kappa)}{\kappa}}.
\]
For any δ ∈ (0, 1), we also have with probability at least 1 − δ: for all n ∈ N*^K,
\[
|L(\theta_t) - L(\theta^*)| \le \frac{C N_\Theta}{B t^\rho} + \sqrt{\frac{D_\Theta \log(2/\delta)}{t^\rho}} + 2 M_\Theta \left( 2\sqrt{\frac{2 V \log(1+\kappa)}{\kappa}} + \sqrt{\frac{\log(4/\delta)}{\kappa}} \right), \tag{13}
\]
for some constants C and D_Θ depending on the parameters l, α, γ₁, a₁.
The generalization bound provided by Theorem 2 shows the advantage of using an incomplete U-statistic (6) as the gradient estimator. In particular, we can obtain results of the same form as Theorem 2 for the complete U-statistic estimator (5), but B = Π_{k=1}^{K} C(n'_k, d_k) is then replaced by Σ_{k=1}^{K} n'_k (following Proposition 1), leading to greatly damaged bounds. Using an incomplete U-statistic, we thus achieve better performance on the test set while reducing the number of iterations (and therefore the number of gradient computations) required to converge to an accurate solution. To the best of our knowledge, this is the first result of this type for empirical minimization of U-statistics. In the next section, we provide experiments showing that these gains are very significant in practice.
5 Numerical Experiments
In this section, we provide numerical experiments to compare the incomplete and complete U-statistic gradient estimators (5) and (6) in SGD when they rely on the same number of terms B. The datasets we use are available online.¹ In all experiments, we randomly split the data into 80% training set and 20% test set and sample 100K pairs from the test set to estimate the test performance. We used a step size of the form γ_t = γ₁/t, and the results below are with respect to the number of SGD iterations. Computational time comparisons can be found in the supplementary material.
AUC Optimization. We address the problem of learning a binary classifier by optimizing the Area Under the Curve, which corresponds to the VUS criterion (Eq. 2) when K = 2. Given a sequence of i.i.d. observations Z_i = (X_i, Y_i) where X_i ∈ R^p and Y_i ∈ {−1, 1}, we denote by X⁺ = {X_i ; Y_i = 1}, X⁻ = {X_i ; Y_i = −1} and N = |X⁺||X⁻|. As done in [27, 13], we take a linear scoring rule s_θ(x) = θᵀx where θ ∈ R^p is the parameter to learn, and use the logistic loss as a smooth convex function upper bounding the Heaviside function, leading to the following ERM problem:
\[
\min_{\theta \in \mathbb{R}^p}\; \frac{1}{N} \sum_{X_i^+ \in X^+} \sum_{X_j^- \in X^-} \log\big(1 + \exp(s_\theta(X_j^-) - s_\theta(X_i^+))\big).
\]
¹http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
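A minimal version of the resulting SGD loop can be written as follows: each iteration draws B positive–negative pairs with replacement (the incomplete U-statistic estimator of Section 3.1) and updates the linear scoring rule; the hyperparameter values are illustrative.

```python
import numpy as np

def sgd_auc(X_pos, X_neg, B=100, gamma1=1.0, iters=5000, seed=0):
    """SGD on the pairwise logistic surrogate of the AUC with a linear
    scoring rule s_theta(x) = theta^T x and incomplete-U gradient estimates."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X_pos.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(0, len(X_pos), size=B)   # B pairs with replacement
        j = rng.integers(0, len(X_neg), size=B)
        diff = X_neg[j] - X_pos[i]
        # d/d theta of log(1 + exp(s(x-) - s(x+))) = sigmoid(margin) * (x- - x+)
        sig = 1.0 / (1.0 + np.exp(-diff @ theta))
        theta -= (gamma1 / t) * (sig[:, None] * diff).mean(axis=0)
    return theta
```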
[Figure 1 here, four panels: (a) Covtype, batch size = 9, γ₁ = 1; (b) Covtype, batch size = 400, γ₁ = 1; (c) IJCNN1, batch size = 25, γ₁ = 2; (d) IJCNN1, batch size = 100, γ₁ = 5.]
Figure 1: Average over 50 runs of the risk estimate with the number of iterations (solid lines) +/- their standard deviation (dashed lines).
We use two datasets: IJCNN1 (∼200K examples, 22 features) and covtype (∼600K examples, 54 features). We try different values for the initial step size γ₁ and the batch size B. Some results, averaged over 50 runs of SGD, are displayed in Figure 1. As predicted by our theoretical findings, we found that the incomplete U-statistic estimator always outperforms its complete variant. The performance gap between the two strategies can be small (for instance when B is very large or γ₁ is unnecessarily small), but for values of the parameters that are relevant in practical scenarios (i.e., B reasonably small and γ₁ ensuring a significant decrease in the objective function), the difference can
be substantial. We also observe a smaller variance between SGD runs with the incomplete version.
Metric Learning. We now turn to a metric learning formulation, where we are given a sample of N i.i.d. observations Z_i = (X_i, Y_i) where X_i ∈ R^p and Y_i ∈ {1, ..., c}. Following the existing literature [2], we focus on (pseudo) distances of the form D_M(x, x') = (x − x')ᵀ M (x − x'), where M is a p × p symmetric positive semi-definite matrix. We again use the logistic loss to obtain a convex and smooth surrogate for (3). The ERM problem is as follows:
\[
\min_{M}\; \frac{6}{N(N-1)(N-2)} \sum_{i<j<k} \mathbb{I}\{Y_i = Y_j \neq Y_k\}\, \log\big(1 + \exp(D_M(X_i, X_j) - D_M(X_i, X_k))\big).
\]
We use the binary classification dataset SUSY (5M examples, 18 features). Figure 2 shows that the
performance gap between the two strategies is much larger on this problem. This is consistent with
the theory: one can see from Proposition 1 that the variance gap between the incomplete and the
complete approximations is much wider for a one-sample U-statistic of degree 3 (metric learning) than for a two-sample U-statistic of degree 1 (AUC optimization).
6 Conclusion and Perspectives
In this paper, we have studied a specific implementation of the SGD algorithm when the natural empirical estimates of the objective function are of the form of generalized U-statistics. This situation
[Figure 2 here, two panels: (a) SUSY, batch size = 120, γ₁ = 0.5; (b) SUSY, batch size = 455, γ₁ = 1.]
Figure 2: Average over 50 runs of the test error with the number of iterations (solid lines) +/- their standard deviation (dashed lines).
covers a wide variety of statistical learning problems such as multi-partite ranking, pairwise clustering and metric learning. The gradient estimator we propose in this context is based on an incomplete
U-statistic obtained by sampling tuples with replacement. Our main result is a thorough analysis
of the generalization ability of the predictive rules produced by this algorithm involving both the
optimization and the estimation error in the spirit of [4]. This analysis shows that the SGD variant
we propose far surpasses a more naive implementation (of same computational cost) based on subsampling the data points without replacement. Furthermore, we have shown that these performance
gains are very significant in practice when dealing with large-scale datasets. In future work, we
plan to investigate how one may extend the nonuniform sampling strategies proposed in [8, 21, 28]
to our setting in order to further improve convergence. This is a challenging goal since we cannot
hope to maintain a distribution over the set of all possible tuples of data points. A tractable solution
could involve approximating the distribution in order to achieve a good trade-off between statistical
performance and computational/memory costs.
Appendix - Sketch of Technical Proofs
Note that the detailed proofs can be found in the Supplementary Material.
Sketch Proof of Proposition 2
Set a_t = E_n[‖θ_{t+1} − θ_n*‖²] and, following [1], observe that the sequence (a_t) satisfies the recursion a_{t+1} ≤ a_t(1 − 2αγ_t(1 − γ_t L)) + 2γ_t² σ_n²(θ_n*). A standard stochastic approximation argument yields an upper bound for a_t (cf. [17, 1]), which, combined with L̂_n(θ) − L̂_n(θ_n*) ≤ (L/2)‖θ − θ_n*‖² (see [23] for instance), gives the desired result.
Sketch of Proof of Theorem 1
The proof relies on stochastic approximation arguments (see [10, 25, 11]). We first show that (1/√γ_t)(θ_t − θ_n*) ⇒ N(0, Σ_θ^n). Then, we apply the second-order delta method to derive the asymptotic behavior of the objective function. Eq. (9) is obtained by standard algebra.
Sketch of Proof of Theorem 2
Combining (10), (12) and Proposition 2 leads to the first part of the result. To derive sharp probability bounds, we apply the union bound on E₁ + E₂. To deal with E₁, we use concentration results for U-processes, while we adapt the proof of Proposition 2 to control E₂: the r.v.'s are recentered to make martingale increments appear, and finally we apply Azuma and Hoeffding inequalities.
Acknowledgements. This work was supported by the chair Machine Learning for Big Data of Télécom ParisTech, and was conducted when A. Bellet was affiliated with Télécom ParisTech.
References
[1] F. R. Bach and E. Moulines. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning. In NIPS, 2011.
[2] A. Bellet, A. Habrard, and M. Sebban. A Survey on Metric Learning for Feature Vectors and Structured Data. Technical report, arXiv:1306.6709, June 2013.
[3] A. Bellet, A. Habrard, and M. Sebban. Metric Learning. Morgan & Claypool Publishers, 2015.
[4] L. Bottou and O. Bousquet. The Tradeoffs of Large Scale Learning. In NIPS, 2007.
[5] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. Ann. Statist., 36, 2008.
[6] S. Clémençon, S. Robbiano, and J. Tressou. Maximal deviations of incomplete U-processes with applications to Empirical Risk Sampling. In SDM, 2013.
[7] S. Clémençon. On U-processes and clustering performance. In NIPS, pages 37–45, 2011.
[8] S. Clémençon, P. Bertail, and E. Chautru. Scaling up M-estimation via sampling designs: The Horvitz–Thompson stochastic gradient descent. In IEEE Big Data, 2014.
[9] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. In NIPS, 2014.
[10] B. Delyon. Stochastic Approximation with Decreasing Gain: Convergence and Asymptotic Theory, 2000.
[11] G. Fort. Central limit theorems for stochastic approximation with controlled Markov Chain. ESAIM: PS, 2014.
[12] J. Fürnkranz, E. Hüllermeier, and S. Vanderlooy. Binary Decomposition Methods for Multipartite Ranking. In ECML/PKDD, pages 359–374, 2009.
[13] A. Herschtal and B. Raskutti. Optimising area under the ROC curve using gradient descent. In ICML, page 49, 2004.
[14] S. Janson. The asymptotic distributions of Incomplete U-statistics. Z. Wahrsch. verw. Gebiete, 66:495–505, 1984.
[15] R. Johnson and T. Zhang. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. In NIPS, pages 315–323, 2013.
[16] P. Kar, B. Sriperumbudur, P. Jain, and H. Karnick. On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions. In ICML, 2013.
[17] H. J. Kushner and G. Yin. Stochastic approximation and recursive algorithms and applications, volume 35. Springer Science & Business Media, 2003.
[18] N. Le Roux, M. W. Schmidt, and F. Bach. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. In NIPS, 2012.
[19] A. J. Lee. U-Statistics: Theory and Practice. 1990.
[20] J. Mairal. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Technical report, arXiv:1402.4419, 2014.
[21] D. Needell, R. Ward, and N. Srebro. Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm. In NIPS, pages 1017–1025, 2014.
[22] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust Stochastic Approximation Approach to Stochastic Programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[23] Y. Nesterov. Introductory lectures on convex optimization, volume 87. Springer, 2004.
[24] M. Norouzi, D. J. Fleet, and R. Salakhutdinov. Hamming Distance Metric Learning. In NIPS, pages 1070–1078, 2012.
[25] M. Pelletier. Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing. Ann. Appl. Prob., 1998.
[26] Q. Qian, R. Jin, J. Yi, L. Zhang, and S. Zhu. Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD). Machine Learning, 99(3):353–372, 2015.
[27] P. Zhao, S. Hoi, R. Jin, and T. Yang. AUC Maximization. In ICML, pages 233–240, 2011.
[28] P. Zhao and T. Zhang. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. In ICML, 2015.
Combined Neural Network and Rule-Based
Framework for Probabilistic Pattern Recognition
and Discovery
Hayit K. Greenspan and Rodney Goodman
Department of Electrical Engineering
California Institute of Technology, 116-81
Pasadena, CA 91125
Rama Chellappa
Department of Electrical Engineering
Institute for Advanced Computer Studies and Center for Automation Research
University of Maryland, College Park, MD 20742
Abstract
A combined neural network and rule-based approach is suggested as a
general framework for pattern recognition. This approach enables unsupervised and supervised learning, respectively, while providing probability
estimates for the output classes. The probability maps are utilized for
higher level analysis such as a feedback for smoothing over the output label maps and the identification of unknown patterns (pattern "discovery").
The suggested approach is presented and demonstrated in the texture analysis task. A correct classification rate in the 90 percentile is achieved
for both unstructured and structured natural texture mosaics. The advantages of the probabilistic approach to pattern analysis are demonstrated.
1
INTRODUCTION
In this work we extend a recently suggested framework (Greenspan et al,1991) for
a combined neural network and rule-based approach to pattern recognition. This
approach enables unsupervised and supervised learning, respectively, as presented
in Fig. 1. In the unsupervised learning phase a neural network clustering scheme is
used for the quantization of the input features. A supervised stage follows in which
labeling of the quantized attributes is achieved using a rule based system. This
information theoretic technique is utilized to find the most informative correlations
between the attributes and the pattern class specification, while providing probability estimates for the output classes. Ultimately, a minimal representation for a
library of patterns is learned in a training mode, following which the classification
of new patterns is achieved.
The suggested approach is presented and demonstrated in the texture-analysis
task. Recent results (Greenspan et al., 1991) have demonstrated a correct classification rate of 95-99% for synthetic (texton) textures and in the 90 percentile for 2-3
class natural texture mosaics. In this paper we utilize the output probability maps
for high-level analysis in the pattern recognition process. A feedback based on the
confidence measures associated with each class enables a smoothing operation over
the output maps to achieve a high degree of classification in more difficult (natural
texture) pattern mosaics. In addition, a generalization of the recognition process
to identify unknown classes (pattern "discovery"), in itself a most challenging task,
is demonstrated.
2
FEATURE EXTRACTION STAGE
The initial stage for a classification system is the feature extraction phase through
which the attributes of the input domain are extracted and presented towards further processing. The chosen attributes are to form a representation of the input
domain, which encompasses information for any desired future task.
In the texture-analysis task there is both biological and computational evidence
supporting the use of Gabor filters for the feature-extraction phase (Malik and
Perona, 1990; Bovik et al., 1990). Gabor functions are complex sinusoidal gratings
modulated by 2-D Gaussian functions in the space domain, and shifted Gaussians in
the frequency domain. The 2-D Gabor filters form a complete but non-orthogonal
basis which can be used for image encoding into multiple spatial frequency and orientation channels. The Gabor filters are appropriate for textural analysis as they
have tunable orientation and radial frequency bandwidths, tunable center frequencies, and optimally achieve joint resolution in space and spatial frequency.
In this work, we use the Log Gabor pyramid, or the Gabor wavelet decomposition
to define an initial finite set of filters. We implement a pyramidal approach in the
filtering stage reminiscent of the Laplacian Pyramid (Burt and Adelson, 1983). In
our simulations a computationally efficient scheme involves a pyramidal representation of the image which is convolved with fixed spatial support oriented Gabor
filters. Three scales are used with 4 orientations per scale (0,90,45,-45 degrees), together with a non-oriented component, to produce a 15-dimensional feature vector
for every local window in the original image, as the output of the feature extraction
stage.
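For concreteness, the following minimal Python sketch builds such a filter bank and produces a 15-dimensional vector per pixel (3 scales, each with 4 orientations plus a non-oriented channel). It convolves the full image with filters of varying wavelength rather than using the pyramidal implementation described above, and the kernel size and bandwidth choices are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta_deg, sigma):
    # Real part of a 2-D Gabor: a sinusoidal grating under a Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(theta_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_feature_maps(image, wavelengths=(4.0, 8.0, 16.0),
                       orientations=(0.0, 90.0, 45.0, -45.0), size=15):
    # One map per (scale, orientation) plus a non-oriented (Gaussian) map per
    # scale: 3 * (4 + 1) = 15 maps, i.e. one 15-dim vector per pixel.
    maps = []
    for lam in wavelengths:
        sigma = 0.5 * lam
        for th in orientations:
            k = gabor_kernel(size, lam, th, sigma)
            maps.append(convolve2d(image, k, mode="same", boundary="symm"))
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        maps.append(convolve2d(image, g / g.sum(), mode="same", boundary="symm"))
    return np.stack(maps, axis=-1)  # shape (H, W, 15)

# Example: features = gabor_feature_maps(np.random.rand(64, 64))
```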
The pyramidal approach allows for a hierarchical, multiscale framework for the
image analysis. This is a desirable property as it enables the identification of features
at various scales of the image and thus is attractive for scale-invariant pattern
recognition.
[Figure 1: System Block Diagram. Pipeline: a window of the input image passes through the feature-extraction phase (N-dimensional continuous feature vector), then unsupervised learning via a Kohonen NN (N-dimensional quantized feature vector), then supervised learning via the rule-system, yielding the texture classes.]
3
QUANTIZATION VIA UNSUPERVISED LEARNING
The unsupervised learning phase can be viewed as a preprocessing stage for achieving yet another, more compact representation of the filtered input. The goal is to
quantize the continuous valued features which are the result of the initial filtering
stage. The need for discretization becomes evident when trying to learn associations
between attributes in a statistically-based framework, such as a rule-based system.
Moreover, in an extended framework, the network can reduce the dimension of the
feature domain. This shift in representation is in accordance with biological based
models.
The output of the filtering stage consists of N (=15) continuous valued feature maps;
each representing a filtered version of the original input. Thus, each local area of the
input image is represented via an N-dimensional feature vector. An array of such
N -dimensional vectors, viewed across the input image, is the input to the learning
stage. We wish to detect characteristic behavior across the N-dimensional feature
space for the family of textures to be learned. By projecting an input set of samples
onto the N-dimensional space, we search for clusters to be related to corresponding
code-vectors, and later on, recognized as possible texture classes. A neural-network
quantization procedure, based on Kohonen's model (Kohonen, 1984) is utilized for
this stage.
In this work each dimension, out of the N-dimensional attribute vector, is individually clustered. All samples are thus projected onto each axis of the space and
one-dimensional clusters are found; this scalar quantization case closely resembles
the K-means clustering algorithm. The output of the preprocessing stage is an
N -dimensional quantized vector of attributes which is the result of concatenating
the discrete valued codewords of the individual dimensions. Each dimension can be
seen to contribute a probabilistic differentiation onto the different classes via the
clusters found. As some of the dimensions are more representative than others, it
is the goal of the supervised stage to find the most informative dimensions for the
desired task (with the higher differentiation capability) and to label the combined
clustered domain.
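The scalar-quantization step can be sketched as follows. The paper uses a Kohonen network for this stage, so the plain 1-D k-means below is a stand-in for the closely related procedure the text mentions, and the number of code levels per dimension is an assumed parameter.

```python
import numpy as np

def quantize_dimension(values, n_levels=8, n_iters=20, seed=0):
    # 1-D k-means: learn n_levels codewords for a single feature dimension.
    # (values must contain at least n_levels samples.)
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=n_levels, replace=False).astype(float)
    for _ in range(n_iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return np.sort(centers)

def quantize_features(X, n_levels=8):
    # X: (n_samples, N) continuous features -> (n_samples, N) integer codewords,
    # each dimension clustered independently, as described above.
    codes = np.empty(X.shape, dtype=int)
    for d in range(X.shape[1]):
        centers = quantize_dimension(X[:, d], n_levels)
        codes[:, d] = np.argmin(np.abs(X[:, d][:, None] - centers[None, :]), axis=1)
    return codes
```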
4
SUPERVISED LEARNING VIA A RULE-BASED
SYSTEM
In the supervised stage we utilize the existing information in the feature maps for
higher level analysis, such as input labeling and classification. In particular we need
to learn a classifier which maps the output attributes of the unsupervised stage to
the texture class labels. Any classification scheme could be used. However, we
utilize a rule - based information theoretic approach which is an extension of a
first order Bayesian classifier, because of its ability to output probability estimates
for the output classes (Goodman et al., 1992). The classifier defines correlations
between input features and output classes as probabilistic rules of the form: If
Y = y then X = x with prob. p, where Y represents the attribute vector and
X is the class variable. A data driven supervised learning approach utilizes an
information theoretic measure to learn the most informative links or rules between
the attributes and the class labels. Such a measure was introduced as the J measure
(Smyth and Goodman, 1991) which represents the information content of a rule as
the average bits of information that attribute values y give about the class X.
The most informative set of rules via the J measure is learned in a training stage,
following which the classifier uses them to provide an estimate of the probability
of a given class being true. When presented with a new input evidence vector, Y,
a set of rules can be considered to "fire". The classifier estimates the log posterior
probability of each class given the rules that fire as:

$$\log p(x \mid \text{rules that fire}) = \log p(x) + \sum_j W_j, \qquad W_j = \log\!\left(\frac{p(x \mid Y = y_j)}{p(x)}\right),$$
where p(x) is the prior probability of the class x, and Wj represents the evidential
support for the class as provided by rule j. Each class estimate can now be computed by accumulating the "weights of evidence" incident on it from the rules that fire.
The largest estimate is chosen as the initial class label decision. The probability
estimates for the output classes can now be used for feedback purposes and further
higher level processing.
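A minimal sketch of this scoring rule is given below. The rule representation (attribute, value, class, conditional probability) is an assumed data structure for illustration, not the paper's implementation.

```python
import math

def classify(evidence, rules, priors):
    # evidence: dict attribute -> quantized value.
    # rules: list of (attribute, value, cls, p_cls_given_y) tuples.
    # priors: dict cls -> p(cls). Returns the winning class and all estimates.
    scores = {c: math.log(p) for c, p in priors.items()}
    for attr, value, cls, p_cond in rules:
        if evidence.get(attr) == value:                    # the rule "fires"
            scores[cls] += math.log(p_cond / priors[cls])  # weight of evidence W_j
    return max(scores, key=scores.get), scores

# Hypothetical rule: if feature f3 quantizes to code 5, class "wood" with prob 0.7.
label, scores = classify({"f3": 5},
                         [("f3", 5, "wood", 0.7)],
                         {"wood": 0.25, "grass": 0.75})
```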
The rule-based classification system can be mapped into a 3-layer feed-forward
architecture as shown in Fig. 2. The input layer contains a node for each attribute.
The hidden layer contains a node for each rule and the output layer contains a
node for each class. Each rule (second layer node j) is connected to a class via the
multiplicative weight of evidence Wj.
[Figure 2: Rule-Based Network. A three-layer feed-forward architecture mapping inputs to rules to class probability estimates.]
5
RESULTS
In previous results (Greenspan et al., 1991) we have shown the capability of the
proposed system to recognize successfully both artificial ("texton") and natural
textures. A classification rate of 95-99% was obtained for 2 and 3 class artificial
images. 90-98% was achieved for 2 and 3 class natural texture mosaics. In this work
we wish to demonstrate the advantage of utilizing the output probability maps in
the pattern recognition process. The probability maps are utilized for higher level
analysis such as a feedback for smoothing and the identification of unknown patterns
(pattern "discovery"). An example of a 5-class natural texture classification is
presented in Fig. 3. The mosaic is comprised of grass, raffia, herring, wood and
wool (center square) textures. The input mosaic is presented (top left), followed
by the labeled output map (top right) and the corresponding probability maps for
a prelearned library of 6 textures (grass, raffia, wood, calf, herring and wool, left
to right, top to bottom, respectively). The input poses a very difficult task which
is challenging even to our own visual perception. Based on the probability maps
(with white indicating strong probability) the very satisfying result of the labeled
output map is achieved. The 5 different regions have been identified and labeled
correctly (in different shades of gray) with the boundaries between the regions very
strongly evident. A feedback based on the probability maps was used for smoothing
over the label map, to achieve the result presented. It is worthwhile noting that
the probabilistic framework enables the analysis of both structural textures (such
as the wood, raffia and herring) and unstructural textures (such as the grass and
wool).
Fig. 4 demonstrates the generalization capability of our system to the identification of an unknown class. In this task a presented pattern which is not part of the
prelearned library, is to be recognized as such and labeled as an unknown area of
interest. This task is termed "pattern discovery" and its applications are widespread,
from identifying unexpected events to the presentation of areas-of-interest in scene
exploratory studies. Learning the unknown is a difficult problem in which the probability estimates prove to be valuable. In the presented example a 3 texture library
was learned, consisting of wood, raffia and grass textures. The input consists of
wood, raffia and sand (top left). The output label map (top right) which is the
result of the analysis of the respective probability maps (bottom) exhibits the accurate detection of the known raffia and wood textures, with the sand area labeled in
black as an unknown class. This conclusion was based on the corresponding probability estimations which are zeroed out in this area for all the known classes. We
have thus successfully analyzed the scene based on the existing source of knowledge.
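One simple way to realize the probability-map feedback and the unknown-class decision is sketched below. The box filter and the confidence threshold are illustrative assumptions rather than the exact mechanism used in these experiments.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def label_with_unknown(prob_maps, threshold=0.2, smooth_size=5):
    # prob_maps: (H, W, C) per-class probability estimates.
    # Smooth each class map over a local neighbourhood, take the argmax, and
    # mark pixels whose best smoothed probability stays low as unknown (-1).
    smoothed = uniform_filter(prob_maps, size=(smooth_size, smooth_size, 1))
    labels = np.argmax(smoothed, axis=-1)
    confidence = np.max(smoothed, axis=-1)
    labels[confidence < threshold] = -1   # "pattern discovery": unknown region
    return labels
```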
Our most recent results pertain to the application of the system to natural scenery
analysis. This is a most challenging task as it relates to real-world applications, an
example of which are NASA space exploration goals. Initial simulation results are
presented in Fig. 5, which presents a sand-rock scenario. The training examples
are presented, followed by two input images and their corresponding output label
maps, left to right, respectively. Here, white represents rock, gray represents sand
and black regions are classified as unknown. The system copes successfully with
this challenge. We can see that a distinction between the regions has been made
and for a possible mission such as rock avoidance (landing purposes, navigation
etc.) reliable results were achieved. These initial results are very encouraging and
indicate the robustness of the system to cope with difficult real-world cases.
[Figure 3: Five class natural texture classification. Panels: input, output, and probability maps (wood, sand, wool).]
[Figure 4: Identification of an unknown pattern. Panels: input and output.]
[Figure 5: Natural scenery analysis. Panels: training set (sand, rock), two example inputs, and their output label maps.]
6
SUMMARY
The proposed learning scheme achieves a high percentage classification rate on both
artificial and natural textures. The combined neural network and rule-based framework enables a probabilistic approach to pattern recognition. In this work we have
demonstrated the advantage of utilizing the output probability maps in the pattern
recognition process. Complicated patterns were analyzed accurately, with an extension to real-imagery applications. The generalization capability of the system to the
discovery of unknown patterns was demonstrated. Future work includes research
into scale and rotation invariance capabilities of the presented framework.
Acknowledgements
This work is funded in part by DARPA under the grant AFOSR-90-0199 and in part
by the Army Research Office under the contract DAAL03-89-K-0126. Part of this
work was done at Jet Propulsion Laboratory. The advice and software support of
the image-analysis group there, especially that of Dr. Charlie Anderson, is greatly
appreciated.
References
H. Greenspan, R. Goodman and R. Chellappa. (1991) Texture Analysis via
Unsupervised and Supervised Learning. Proceedings of the 1991 International Joint
Conference on Neural Networks, Vol. 1:639-644.
R. M. Goodman, C. Higgins, J. Miller and P. Smyth. (1992) Rule-Based Networks
for Classification and Probability Estimation. To appear in Neural Computation.
P. Smyth and R. M. Goodman. (1991) Rule Induction using Information Theory.
In G. Piatetsky-Shapiro, W. Frawley (eds.), Knowledge Discovery in Databases,
159-176. AAAI Press.
J. Malik and P. Perona. (1990) Preattentive texture discrimination with early
vision mechanisms. Journal of Optical Society of America A, Vol. 7[5]:923-932.
A. C. Bovik, M. Clark and W. S. Geisler. (1990) Multichannel Texture Analysis Using Localized Spatial Filters. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 12(1):55-73.
P.J. Burt and E. A. Adelson. (1983) The Laplacian Pyramid as a compact image
code. IEEE Trans. Commun., COM-31:532-540.
T. Kohonen. (1984) Self Organisation and Associative Memory. Springer-Verlag.
5,326 | 5,821 | On Variance Reduction in Stochastic Gradient
Descent and its Asynchronous Variants
Sashank J. Reddi
Carnegie Mellon University
[email protected]
Ahmed Hefny
Carnegie Mellon University
[email protected]
Suvrit Sra
Massachusetts Institute of Technology
[email protected]
Barnabás Póczos
Carnegie Mellon University
[email protected]
Alex Smola
Carnegie Mellon University
[email protected]
Abstract
We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically.
However, asynchronous versions of these algorithms, a crucial requirement for
modern large-scale applications, have not been studied. We bridge this gap by
presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and
prove its fast convergence. An important consequence of our general approach
is that it yields asynchronous versions of variance reduction algorithms such as
SVRG and SAGA as a byproduct. Our method achieves near linear speedup in
sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.
1
Introduction
There has been a steep rise in recent work [6, 7, 9-12, 25, 27, 29] on "variance reduced" stochastic
gradient algorithms for convex problems of the finite-sum form:

$$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x). \qquad (1.1)$$
Under strong convexity assumptions, such variance reduced (VR) stochastic algorithms attain better
convergence rates (in expectation) than stochastic gradient descent (SGD) [18, 24], both in theory
and practice.1 The key property of these VR algorithms is that by exploiting problem structure
and by making suitable space-time tradeoffs, they reduce the variance incurred due to stochastic
gradients. This variance reduction has powerful consequences: it helps VR stochastic methods
attain linear convergence rates, and thereby circumvents slowdowns that usually hit SGD.
1
Though we should note that SGD also applies to the harder stochastic optimization problem
$\min F(x) = \mathbb{E}[f(x; \xi)]$, which need not be a finite-sum.
Although these advances have great value in general, for large-scale problems we still require parallel or distributed processing. And in this setting, asynchronous variants of SGD remain indispensable [2, 8, 13, 21, 28, 30]. Therefore, a key question is how to extend the synchronous finite-sum
VR algorithms to asynchronous parallel and distributed settings.
We answer one part of this question by developing new asynchronous parallel stochastic gradient
methods that provably converge at a linear rate for smooth strongly convex finite-sum problems.
Our methods are inspired by the influential SVRG [10], S2GD [12], SAG [25] and SAGA [6] family
of algorithms. We list our contributions more precisely below.
Contributions. Our paper makes two core contributions: (i) a formal general framework for variance reduced stochastic methods based on discussions in [6]; and (ii) asynchronous parallel VR algorithms within this framework. Our general framework presents a formal unifying view of several
VR methods (e.g., it includes SAGA and SVRG as special cases) while expressing key algorithmic
and practical tradeoffs concisely. Thus, it yields a broader understanding of VR methods, which
helps us obtain asynchronous parallel variants of VR methods. Under sparse-data settings common
to machine learning problems, our parallel algorithms attain speedups that scale near linearly with
the number of processors.
As a concrete illustration, we present a specialization to an asynchronous SVRG-like method. We
compare this specialization with non-variance reduced asynchronous SGD methods, and observe
strong empirical speedups that agree with the theory.
Related work. As already mentioned, our work is closest to (and generalizes) SAG [25], SAGA [6],
SVRG [10] and S2GD [12], which are primal methods. Also closely related are dual methods such
as SDCA [27] and Finito [7], and in its convex incarnation MISO [16]; a more precise relation
between these dual methods and VR stochastic methods is described in Defazio's thesis [5]. By
their algorithmic structure, these VR methods trace back to classical non-stochastic incremental
gradient algorithms [4], but by now it is well-recognized that randomization helps obtain much
sharper convergence results (in expectation). Proximal [29] and accelerated VR methods have also
been proposed [20, 26]; we leave a study of such variants of our framework as future work. Finally,
there is recent work on lower-bounds for finite-sum problems [1].
Within asynchronous SGD algorithms, both parallel [21] and distributed [2, 17] variants are known.
In this paper, we focus our attention on the parallel setting. A different line of methods is that
of (primal) coordinate descent methods, and their parallel and distributed variants [14, 15, 19, 22,
23]. Our asynchronous methods share some structural assumptions with these methods. Finally,
the recent work [11] generalizes S2GD to the mini-batch setting, thereby also permitting parallel
processing, albeit with more synchronization and allowing only small mini-batches.
2
A General Framework for VR Stochastic Methods
We focus on instances of (1.1) where the cost function f(x) has an L-Lipschitz gradient, so that
$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|$, and it is $\lambda$-strongly convex, i.e., for all $x, y \in \mathbb{R}^d$,

$$f(x) \ge f(y) + \langle \nabla f(y), x - y \rangle + \tfrac{\lambda}{2} \|x - y\|^2. \qquad (2.1)$$

While our analysis focuses on strongly convex functions, we can extend it to just smooth convex
functions along the lines of [6, 29].
Inspired by the discussion on a general view of variance reduced techniques in [6], we now describe
a formal general framework for variance reduction in stochastic gradient descent. We denote the
collection $\{f_i\}_{i=1}^n$ of functions that make up f in (1.1) by F. For our algorithm, we maintain an
additional parameter $\alpha_i^t \in \mathbb{R}^d$ for each $f_i \in F$. We use $A^t$ to denote $\{\alpha_i^t\}_{i=1}^n$. The general iterative
framework for updating the parameters is presented as Algorithm 1. Observe that the algorithm is
still abstract, since it does not specify the subroutine ScheduleUpdate. This subroutine determines the crucial update mechanism of $\{\alpha_i^t\}$ (and thereby of $A^t$). As we will see, different schedules
give rise to different fast first-order methods proposed in the literature. The part of the update based
on $A^t$ is the key for these approaches and is responsible for variance reduction.
Next, we provide different instantiations of the framework and construct a new algorithm derived
from it. In particular, we consider the incremental methods SAG [25], SVRG [10] and SAGA [6], and
classic gradient descent (GradientDescent) for demonstrating our framework.
ALGORITHM 1: Generic Stochastic Variance Reduction Algorithm
Data: $x^0 \in \mathbb{R}^d$, $\alpha_i^0 = x^0$ for all $i \in [n] \triangleq \{1, \dots, n\}$, step size $\eta > 0$
Randomly pick $I_T = \{i_0, \dots, i_T\}$ where $i_t \in \{1, \dots, n\}$ for all $t \in \{0, \dots, T\}$;
for t = 0 to T do
    Update iterate as $x^{t+1} \leftarrow x^t - \eta \big( \nabla f_{i_t}(x^t) - \nabla f_{i_t}(\alpha_{i_t}^t) + \frac{1}{n} \sum_i \nabla f_i(\alpha_i^t) \big)$;
    $A^{t+1} = \text{ScheduleUpdate}(\{x^i\}_{i=0}^{t+1}, A^t, t, I_T)$;
end
return $x^T$
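A minimal Python sketch of Algorithm 1 follows; the `schedule_update` argument plays the role of ScheduleUpdate, and the least-squares instance at the end is purely illustrative.

```python
import numpy as np

def generic_vr_sgd(grads, x0, eta, T, schedule_update, rng=None):
    # grads[i](x) returns the gradient of f_i at x; schedule_update is the
    # pluggable ScheduleUpdate subroutine of Algorithm 1.
    rng = rng or np.random.default_rng(0)
    n, x = len(grads), np.array(x0, dtype=float)
    alphas = np.tile(x, (n, 1))                        # alpha_i^0 = x^0
    g_avg = np.mean([g(a) for g, a in zip(grads, alphas)], axis=0)
    for t in range(T):
        i = int(rng.integers(n))
        # Variance-reduced gradient estimate (unbiased given the alphas):
        v = grads[i](x) - grads[i](alphas[i]) + g_avg
        x_new = x - eta * v
        alphas, g_avg = schedule_update(x, x_new, alphas, g_avg, t, i, grads)
        x = x_new
    return x

# Illustrative least-squares instance: f_i(x) = 0.5 * (a_i . x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
grads = [lambda x, a=A[j], y=b[j]: (a @ x - y) * a for j in range(50)]
```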
Figure 1 shows the schedules for the aforementioned algorithms. In the case of SVRG,
ScheduleUpdate is triggered every m iterations (here m denotes precisely the number of inner iterations used in [10]); so $A^t$ remains unchanged for the m iterations and all $\alpha_i^t$ are updated
to the current iterate at the mth iteration. For SAGA, unlike SVRG, $A^t$ changes at the tth iteration
for all $t \in [T]$. This change is only to a single element of $A^t$, and is determined by the index $i_t$
(the function chosen at iteration t). The update of SAG is similar to SAGA insofar that only one of
the $\alpha_i$ is updated at each iteration. However, the update for $A^{t+1}$ is based on $i_{t+1}$ rather than $i_t$.
This results in a biased estimate of the gradient, unlike SVRG and SAGA. Finally, the schedule for
gradient descent is similar to SAG, except that all the $\alpha_i$'s are updated at each iteration. Due to the
full update we end up with the exact gradient at each iteration. This discussion highlights how the
scheduler determines the resulting gradient method.
To motivate the design of another schedule, let us consider the computational and storage costs of
each of these algorithms. For SVRG, since we update $A^t$ after every m iterations, it is enough to
store a full gradient, and hence, the storage cost is O(d). However, the running time is O(d) at
each iteration and O(nd) at the end of each epoch (for calculating the full gradient at the end of
each epoch). In contrast, both SAG and SAGA have high storage costs of O(nd) and running time
of O(d) per iteration. Finally, GradientDescent has low storage cost since it needs to store the
gradient at O(d) cost, but very high computational costs of O(nd) at each iteration.
SVRG has an additional computation overhead at the end of each epoch due to calculation of the
whole gradient. This is avoided in SAG and SAGA at the cost of additional storage. When m
is very large, the additional computational overhead of SVRG amortized over all the iterations is
small. However, as we will later see, this comes at the expense of slower convergence to the optimal
solution. The tradeoffs between the epoch size m, additional storage, frequency of updates, and the
convergence to the optimal solution are still not completely resolved.
SVRG: ScheduleUpdate$(\{x^i\}_{i=0}^{t+1}, A^t, t, I_T)$
for i = 1 to n do
    $\alpha_i^{t+1} = \mathbb{1}(m \mid t)\, x^t + \mathbb{1}(m \nmid t)\, \alpha_i^t$ ;
end
return $A^{t+1}$

SAGA: ScheduleUpdate$(\{x^i\}_{i=0}^{t+1}, A^t, t, I_T)$
for i = 1 to n do
    $\alpha_i^{t+1} = \mathbb{1}(i_t = i)\, x^t + \mathbb{1}(i_t \neq i)\, \alpha_i^t$ ;
end
return $A^{t+1}$

SAG: ScheduleUpdate$(\{x^i\}_{i=0}^{t+1}, A^t, t, I_T)$
for i = 1 to n do
    $\alpha_i^{t+1} = \mathbb{1}(i_{t+1} = i)\, x^{t+1} + \mathbb{1}(i_{t+1} \neq i)\, \alpha_i^t$ ;
end
return $A^{t+1}$

GD: ScheduleUpdate$(\{x^i\}_{i=0}^{t+1}, A^t, t, I_T)$
for i = 1 to n do
    $\alpha_i^{t+1} = x^{t+1}$ ;
end
return $A^{t+1}$

Figure 1: ScheduleUpdate function for SVRG (top left), SAGA (top right), SAG (bottom left)
and GradientDescent (bottom right). While SVRG is epoch-based, the rest of the algorithms perform
updates at each iteration. Here $a \mid b$ denotes that a divides b.
A straightforward approach to design a new scheduler is to combine the schedules of the above algorithms. This allows us to tradeoff between the various aforementioned parameters of our interest.
We call this schedule hybrid stochastic average gradient (H SAG). Here, we use the schedules of
S VRG and S AGA to develop H SAG. However, in general, schedules of any of these algorithms can
3
H SAG:S CHEDULE U PDATE(xt , At , t, IT )
for i = 1 to n?do
(it = i)xt + (it 6= i)?it if i 2 S
?it+1 =
(si | t)xt + (si6 | t)?it
if i 2
/S
end
return At+1
Figure 2: S CHEDULE U PDATE for H SAG. This algorithm assumes access to some index set S and
the schedule frequency vector s. Recall that a|b denotes a divides b
be combined to obtain a hybrid algorithm. Consider some S ? [n], the indices that follow S AGA
schedule. We assume that the rest of the indices follow an S VRG-like schedule with schedule frequency si for all i 2 S , [n] \ S. Figure 2 shows the corresponding update schedule of H SAG. If
S = [n] then H SAG is equivalent to S AGA, while at the other extreme, for S = ; and si = m for all
i 2 [n], it corresponds to S VRG. H SAG exhibits interesting storage, computational and convergence
trade-offs that depend on S. In general, while large cardinality of S likely incurs high storage costs,
the computational cost per iteration is relatively low. On the other hand, when cardinality of S is
small and si ?s are large, storage costs are low but the convergence typically slows down.
Before concluding our discussion on the general framework, we would like to draw the reader?s
attention to the advantages of studying Algorithm 1. First, note that Algorithm 1 provides a unifying
framework for many incremental/stochastic gradient methods proposed in the literature. Second, and
more importantly, it provides a generic platform for analyzing this class of algorithms. As we will
see in Section 3, this helps us develop and analyze asynchronous versions for different finite-sum
algorithms under a common umbrella. Finally, it provides a mechanism to derive new algorithms by
designing more sophisticated schedules; as noted above, one such construction gives rise to H SAG.
2.1
Convergence Analysis
In this section, we provide convergence analysis for Algorithm 1 with H SAG schedules. As observed
earlier, S VRG and S AGA are special cases of this setup. Our analysis assumes unbiasedness of the
gradient estimates at each iteration, so it does not encompass S AG. For ease of exposition, we
assume that all si = m for all i 2 [n]. Since H SAG is epoch-based, our analysis focuses on the
iterates obtained after each epoch. Similar to [10] (see Option II of S VRG in [10]), our analysis will
be for the case where the iterate at the end of (k + 1)st epoch, xkm+m , is replaced with an element
chosen randomly from {xkm , . . . , xkm+m 1 } with probability {p1 , ? ? ? , pm }. For brevity, we use
x
?k to denote the iterate chosen at the k th epoch. We also need the following quantity for our analysis:
X
?k , 1
G
fi (?ikm ) fi (x? ) hrfi (x? ), ?ikm x? i .
n
i2S
Theorem 1. For any positive parameters c, , ? > 1, step size ? and epoch size m, we define the
following quantities:
?
?
?m ?
?
1
1
2c
=? 1
1
2c?(1 L?(1 + ))
?
n ?
??
?
?m
?
? ?
?
?m
?
?m
2
2c
1
2Lc?
1
1
1
? = max
1
+
1+
? 1
1
, 1
.
?
?
?
Suppose the probabilities pi / (1 ?1 )m i , and that c, , ?, step size ? and epoch size m are chosen
such that the following conditions are satisfied:
?
?
1
1
1
2
+ 2Lc? 1 +
? , > 0, ? < 1.
?
n
Then, for iterates of Algorithm 1 under the H SAG schedule, we have
h
i
h
1?
1? i
E f (?
xk+1 ) f (x? ) + G
xk ) f (x? ) + G
k+1 ? ? E f (?
k .
4
As a corollary, we immediately obtain an expected linear rate of convergence for H SAG.
? k 0 and therefore, under the conditions specified in Theorem 1 and with
Corollary 1. Note that G
?? = ? (1 + 1/ ) < 1 we have
?
?
?
?
E f (?
xk ) f (x? ) ? ??k f (x0 ) f (x? ) .
We emphasize that there exist values of the parameters for which the conditions in Theorem 1
and Corollary 1 are easily satisfied. For instance, setting ? = 1/16( n + L), ? = 4/ ?,
= (2 n + L)/L and c = 2/?n, the conditions in Theorem 1 are satisfied for sufficiently large
m. Additionally, in the high condition number regime of L/ = n, we can obtain constant ? < 1
(say 0.5) with m = O(n) epoch size (similar to [6, 10]). This leads to a computational complexity of O(n log(1/?)) for H SAG to achieve ? accuracy in the objective function as opposed to
O(n2 log(1/?)) for batch gradient descent method. Please refer to the appendix for more details on
the parameters in Theorem 1.
3
Asynchronous Stochastic Variance Reduction
We are now ready to present asynchronous versions of the algorithms captured by our general framework. We first describe our setup before delving into the details of these algorithms. Our model of
computation is similar to the ones used in Hogwild! [21] and AsySCD [14]. We assume a multicore
architecture where each core makes stochastic gradient updates to a centrally stored vector x in an
asynchronous manner. There are four key components in our asynchronous algorithm; these are
briefly described below.
1. Read: Read the iterate x and compute the gradient rfit (x) for a randomly chosen it .
2. Read schedule iterate: Read the schedule iterate A and compute the gradients required
for update in Algorithm 1.
3. Update: Update the iterate x with the computed incremental update in Algorithm 1.
4. Schedule Update: Run a scheduler update for updating A.
Each processor repeatedly runs these procedures concurrently, without any synchronization. Hence,
x may change in between Step 1 and Step 3. Similarly, A may change in between Steps 2 and 4. In
fact, the states of iterates x and A can correspond to different time-stamps. We maintain a global
counter t to track the number of updates successfully executed. We use D(t) 2 [t] and D0 (t) 2 [t]
to denote the particular x-iterate and A-iterate used for evaluating the update at the tth iteration. We
assume that the delay in between the time of evaluation and updating is bounded by a non-negative
integer ? , i.e., t D(t) ? ? and t D0 (t) ? ? . The bound on the staleness captures the degree of
parallelism in the method: such parameters are typical in asynchronous systems (see e.g., [3, 14]).
Furthermore, we also assume that the system is synchronized after every epoch i.e., D(t) km for
t km. We would like to emphasize that the assumption is not strong since such a synchronization
needs to be done only once per epoch.
For the purpose of our analysis, we assume a consistent read model. In particular, our analysis
assumes that the vector x used for evaluation of gradients is a valid iterate that existed at some point
in time. Such an assumption typically amounts to using locks in practice. This problem can be
avoided by using random coordinate updates as in [21] (see Section 4 of [21]) but such a procedure
is computationally wasteful in practice. We leave the analysis of inconsistent read model as future
work. Nonetheless, we report results for both locked and lock-free implementations (see Section 4).
3.1
Convergence Analysis
The key ingredients to the success of asynchronous algorithms for multicore stochastic gradient
descent are sparsity and ?disjointness? of the data matrix [21]. More formally, suppose fi only
depends on xP
ei where ei ? [d] i.e., fi acts only on the components of x indexed by the set ei . Let
kxk2i denote j2ei kxj k2 ; then, the convergence depends on , the smallest constant such that
Ei [kxk2i ] ? kxk2 . Intuitively, denotes the average frequency with which a feature appears in
the data matrix. We are interested in situations where
? 1. As a warm up, let us first discuss
convergence analysis for asynchronous S VRG. The general case is similar, but much more involved.
Hence, it is instructive to first go through the analysis of asynchronous S VRG.
5
Theorem 2. Suppose step size ?, epoch size m are chosen such that the following condition holds:
?
?
??
?+L ? 2 ? 2
1
?m + 4L 1 2L2 ? 2 ? 2
?
?? < 1.
0 < ?s := ?
? 2 ?2
1 4L 1?+L
2L2 ? 2 ? 2
Then, for the iterates of an asynchronous variant of Algorithm 1 with S VRG schedule and probabilities pi = 1/m for all i 2 [m], we have
E[f (?
xk+1 )
f (x? )] ? ?s E[f (?
xk )
f (x? )].
The bound obtained in Theorem 2 is useful when is small. To see this, as earlier, consider the
indicative case where L/ = n. The synchronous version of S VRG obtains a convergence rate of
? = 0.5 for step size ? = 0.1/L and epoch size m = O(n). For the asynchronous variant of S VRG,
by setting ? = 0.1/2(max{1, 1/2 ? }L), we obtain a similar rate with m = O(n + 1/2 ? n).
To obtain this, set ? = ?/L where ? = 0.1/2(max{1, 1/2 ? }) and ?s = 0.5. Then, a simple
calculation gives the following:
?
?
m
2
1 2 ? 2 ?2
=
? c0 max{1, 1/2 ? },
n
? 1 12? 14 ? 2 ?2
where c0 is some constant. This follows from the fact that ? = 0.1/2(max{1, 1/2 ? }). Suppose
? < 1/ 1/2 . Then we can achieve nearly the same guarantees as the synchronous version, but ?
times faster since we are running the algorithm asynchronously. For example, consider the sparse
setting where = o(1/n); then it is possible to get near linear speedup when ? = o(n1/2 ). On the
other hand, when 1/2 ? > 1, we can obtain a theoretical speedup of 1/ 1/2 .
We finally provide the convergence result for the asynchronous algorithm in the general case. The
proof is complicated by the fact that set A, unlike in S VRG, changes during the epoch. The key idea
is that only a single element of A changes at each iteration. Furthermore, it can only change to one
of the iterates in the epoch. This control provides a handle on the error obtained due to the staleness.
Due to space constraints, the proof is relegated to the appendix.
Theorem 3. For any positive parameters c, , ? > 1, step size ? and epoch size m, we define the
following quantities:
!
?
? ?
1
2
2 3
? = c? + 1
cL ? ? ,
?
#
?
?
?m "
?
? ?
1
2c
96?L?
1
1
1
2c? 8?L(1 + )
1
,
a =? 1
?
?
n
?
n
?
?
82
9
3
?m 8?L 1 + 1
?
?
?m
?
?m =
< 2c ?
1
1
5, 1 1
?a = max 4
1
+
? 1
1
.
: a
;
?
?
?
a
Suppose probabilities pi / (1 ?1 )m i , parameters , ?, step-size ?, and epoch size m are chosen
such that the following conditions are satisfied:
?
?
?
? ?
?
?m 1
1
1
96?L?
1
1 2
1
1
+ 8?L 1 +
+
1
? ,? ? 1
, a > 0, ?a < 1.
2
?
n
?
n
?
12L ? 2
Then, for the iterates of asynchronous variant of Algorithm 1 with H SAG schedule we have
?
?
1 ?
1 ?
E f (?
xk+1 ) f (x? ) + G
?
?
E
f (?
xk ) f (x? ) + G
k+1
a
k .
a
a
? k 0 and therefore, under the conditions specified in Theorem 3 and with
Corollary 2. Note that G
?
?a = ?a (1 + 1/ a ) < 1, we have
?
?
?
?
E f (?
xk ) f (x? ) ? ??ak f (x0 ) f (x? ) .
6
4
Speedup
2
3
2
1
5
Threads
10
0
5
Threads
3
2
4
3
2
1
0
10
Lock-Free SVRG
Locked SVRG
5
4
1
1
0
Lock-Free SVRG
Locked SVRG
5
Lock-Free SVRG
Locked SVRG
5
Speedup
Lock-Free SVRG
Locked SVRG
Speedup
Speedup
3
5
Threads
10
0
5
Threads
10
Figure 3: l2 -regularized logistic regression. Speedup curves for Lock-Free S VRG and Locked S VRG
on rcv1 (left), real-sim (left center), news20 (right center) and url (right) datasets. We report the
speedup achieved by increasing the number of threads.
By using step size normalized by 1/2 ? (similar to Theorem 2) and parameters similar to the ones
specified after Theorem 1 we can show speedups similar to the ones obtained in Theorem 2. Please
refer to the appendix for more details on the parameters in Theorem 3.
Before ending our discussion on the theoretical analysis, we would like to highlight an important
point. Our emphasis throughout the paper was on generality. While the results are presented here in
full generality, one can obtain stronger results in specific cases. For example, in the case of S AGA,
one can obtain per iteration convergence guarantees (see [6]) rather than those corresponding to per
epoch presented in the paper. Also, S AGA can be analyzed without any additional synchronization
per epoch. However, there is no qualitative difference in these guarantees accumulated over the
epoch. Furthermore, in this case, our analysis for both synchronous and asynchronous cases can be
easily modified to obtain convergence properties similar to those in [6].
4
Experiments
We present our empirical results in this section. For our experiments, we study the problem of
binary classification via l2 -regularized logistic regression. More formally, we are interested in the
following optimization problem:
n
min
x
1X
log(1 + exp( yi zi> x)) + kxk2 ,
n i=1
(4.1)
where zi 2 Rd and yi is the corresponding label for each i 2 [n]. In all our experiments, we set
= 1/n. Note that such a choice leads to high condition number.
A careful implementation of S VRG is required for sparse gradients since the implementation as
stated in Algorithm 1 will lead to dense updates at each iteration. For an efficient implementation, a
scheme like the ?just-in-time? update scheme, as suggested in [25], is required. Due to lack of space,
we provide the implementation details in the appendix.
We evaluate the following algorithms for our experiments:
? Lock-Free S VRG: This is the lock-free asynchronous variant of Algorithm 1 using S VRG schedule; all threads can read and update the parameters with any synchronization. Parameter updates
are performed through atomic compare-and-swap instruction [21]. A constant step size that
gives the best convergence is chosen for the dataset.
? Locked S VRG: This is the locked version of the asynchronous variant of Algorithm 1 using
S VRG schedule. In particular, we use a concurrent read exclusive write locking model, where all
threads can read the parameters but only one threads can update the parameters at a given time.
The step size is chosen similar to Lock-Free S VRG.
? Lock-Free S GD: This is the lock-free asynchronous variant of the S GD algorithm (see [21]).
We compare two different versions of this algorithm:
p (i) S GD with constant step size (referred
to as CS GD). (ii) S GD with decaying step size ?0
0 /(t + 0 ) (referred to as DS GD ), where
constants ?0 and 0 specify the scale and speed of decay. For each of these versions, step size is
tuned for each dataset to give the best convergence progress.
7
0
0.5
1
Time(seconds)
1.5
10
-5
Lock-Free SVRG
DSGD
CSGD
-10
0
2
4
6
Time(seconds)
8
10
10
0
-5
10 -10
Lock-Free SVRG
DSGD
CSGD
0
5
Time(seconds)
10
Objective Value - Optimal
10 -10
Lock-Free SVRG
DSGD
CSGD
10
Objective Value-Optimal
-5
Objective Value - Optimal
Objective Value - Optimal
10
10 0
10 -5
10 -10
Lock-Free SVRG
DSGD
CSGD
0
50
100
Time(seconds)
Figure 4: l2 -regularized logistic regression. Training loss residual f (x) f (x? ) versus time plot of
Lock-Free S VRG, DS GD and CS GD on rcv1 (left), real-sim (left center), news20 (right center) and
url (right) datasets. The experiments are parallelized over 10 cores.
All the algorithms were implemented in C++ 2 . We run our experiments on datasets from LIBSVM
website3 . Similar to [29], we normalize each example in the dataset so that kzi k2 = 1 for all
i 2 [n]. Such a normalization leads to an upper bound of 0.25 on the Lipschitz constant of the
gradient of fi . The epoch size m is chosen as 2n (as recommended in [10]) in all our experiments.
In the first experiment, we compare the speedup achieved by our asynchronous algorithm. To this
end, for each dataset we first measure the time required for the algorithm to each an accuracy of
10 10 (i.e., f (x) f (x? ) < 10 10 ). The speedup with P threads is defined as the ratio of the
runtime with a single thread to the runtime with P threads. Results in Figure 3 show the speedup
on various datasets. As seen in the figure, we achieve significant speedups for all the datasets.
Not surprisingly, the speedup achieved by Lock-free S VRG is much higher than ones obtained by
locking. Furthermore, the lowest speedup is achieved for rcv1 dataset. Similar speedup behavior
was reported for this dataset in [21]. It should be noted that this dataset is not sparse and hence, is a
bad case for the algorithm (similar to [21]).
For the second set of experiments we compare the performance of Lock-Free S VRG with stochastic
gradient descent. In particular, we compare with the variants of stochastic gradient descent, DS GD
and CS GD, described earlier in this section. It is well established that the performance of variance
reduced stochastic methods is better than that of S GD. We would like to empirically verify that such
benefits carry over to the asynchronous variants of these algorithms. Figure 4 shows the performance
of Lock-Free S VRG, DS GD and CS GD. Since the computation complexity of each epoch of these
algorithms is different, we directly plot the objective value versus the runtime for each of these
algorithms. We use 10 cores for comparing the algorithms in this experiment. As seen in the figure,
Lock-Free S VRG outperforms both DS GD and CS GD. The performance gains are qualitatively
similar to those reported in [10] for the synchronous versions of these algorithms. It can also be
seen that the DS GD, not surprisingly, outperforms CS GD in all the cases. In our experiments, we
observed that Lock-Free S VRG, in comparison to S GD, is relatively much less sensitive to the step
size and more robust to increasing threads.
5
Discussion & Future Work
In this paper, we presented a unifying framework based on [6], that captures many popular variance
reduction techniques for stochastic gradient descent. We use this framework to develop a simple
hybrid variance reduction method. The primary purpose of the framework, however, was to provide
a common platform to analyze various variance reduction techniques. To this end, we provided
convergence analysis for the framework under certain conditions. More importantly, we propose an
asynchronous algorithm for the framework with provable convergence guarantees. The key consequence of our approach is that we obtain asynchronous variants of several algorithms like S VRG,
S AGA and S2 GD. Our asynchronous algorithms exploits sparsity in the data to obtain near linear
speedup in settings that are typically encountered in machine learning.
For future work, it would be interesting to perform an empirical comparison of various schedules.
In particular, it would be worth exploring the space-time-accuracy tradeoffs of these schedules. We
would also like to analyze the effect of these tradeoffs on the asynchronous variants.
Acknowledgments. SS was partially supported by NSF IIS-1409802.
2
All experiments were conducted on a Google Compute Engine n1-highcpu-32 machine with 32 processors
and 28.8 GB RAM.
3
http://www.csie.ntu.edu.tw/?cjlin/libsvmtools/datasets/binary.html
8
Bibliography
[1] A. Agarwal and L. Bottou. A lower bound for the optimization of finite sums. arXiv:1410.0723, 2014.
[2] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In Advances in Neural Information Processing Systems, pages 873?881, 2011.
[3] D. Bertsekas and J. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall,
1989.
[4] D. P. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A
survey. Optimization for Machine Learning, 2010:1?38, 2011.
[5] A. Defazio. New Optimization Methods for Machine Learning. PhD thesis, Australian National University, 2014.
[6] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for
non-strongly convex composite objectives. In NIPS 27, pages 1646?1654. 2014.
[7] A. J. Defazio, T. S. Caetano, and J. Domke. Finito: A faster, permutable incremental gradient method for
big data problems. arXiv:1407.2710, 2014.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using minibatches. The Journal of Machine Learning Research, 13(1):165?202, 2012.
[9] M. G?urb?uzbalaban, A. Ozdaglar, and P. Parrilo. A globally convergent incremental Newton method.
Mathematical Programming, 151(1):283?313, 2015.
[10] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction.
In NIPS 26, pages 315?323. 2013.
[11] J. Kone?cn?y, J. Liu, P. Richt?arik, and M. Tak?ac? . Mini-Batch Semi-Stochastic Gradient Descent in the
Proximal Setting. arXiv:1504.04407, 2015.
[12] J. Kone?cn?y and P. Richt?arik. Semi-Stochastic Gradient Descent Methods. arXiv:1312.1666, 2013.
[13] M. Li, D. G. Andersen, A. J. Smola, and K. Yu. Communication Efficient Distributed Machine Learning
with the Parameter Server. In NIPS 27, pages 19?27, 2014.
[14] J. Liu, S. Wright, C. R?e, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent
algorithm. In ICML 2014, pages 469?477, 2014.
[15] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351?376, 2015.
[16] J. Mairal. Optimization with first-order surrogate functions. arXiv:1305.3120, 2013.
[17] A. Nedi?c, D. P. Bertsekas, and V. S. Borkar. Distributed asynchronous incremental subgradient methods.
Studies in Computational Mathematics, 8:381?407, 2001.
[18] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to
stochastic programming. SIAM Journal on Optimization, 19(4):1574?1609, 2009.
[19] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM
Journal on Optimization, 22(2):341?362, 2012.
[20] A. Nitanda. Stochastic Proximal Gradient Descent with Acceleration Techniques. In NIPS 27, pages
1574?1582, 2014.
[21] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic
Gradient Descent. In NIPS 24, pages 693?701, 2011.
[22] S. Reddi, A. Hefny, C. Downey, A. Dubey, and S. Sra. Large-scale randomized-coordinate descent methods with non-separable linear constraints. In UAI 31, 2015.
[23] P. Richt?arik and M. Tak?ac? . Iteration complexity of randomized block-coordinate descent methods for
minimizing a composite function. Mathematical Programming, 144(1-2):1?38, 2014.
[24] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics,
22:400?407, 1951.
[25] M. W. Schmidt, N. L. Roux, and F. R. Bach. Minimizing Finite Sums with the Stochastic Average
Gradient. arXiv:1309.2388, 2013.
[26] S. Shalev-Shwartz and T. Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In NIPS 26,
pages 378?385, 2013.
[27] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. The
Journal of Machine Learning Research, 14(1):567?599, 2013.
[28] O. Shamir and N. Srebro. On distributed stochastic optimization and learning. In Proceedings of the 52nd
Annual Allerton Conference on Communication, Control, and Computing, 2014.
[29] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM
Journal on Optimization, 24(4):2057?2075, 2014.
[30] M. Zinkevich, M. Weimer, L. Li, and A. J. Smola. Parallelized stochastic gradient descent. In NIPS,
pages 2595?2603, 2010.
9
| 5821 |@word briefly:1 version:10 stronger:1 nd:4 c0:2 dekel:1 urb:1 instruction:1 km:2 pick:1 incurs:1 sgd:8 thereby:3 incarnation:1 harder:1 carry:1 reduction:13 liu:3 tuned:1 outperforms:2 current:1 comparing:1 si:5 numerical:1 plot:2 update:25 juditsky:1 indicative:1 xk:8 iso:1 core:4 provides:4 iterates:6 bittorf:1 org:1 allerton:1 zhang:4 mathematical:3 along:1 qualitative:1 prove:1 overhead:2 combine:1 manner:1 x0:4 theoretically:1 news20:2 expected:1 behavior:1 p1:1 inspired:2 globally:1 cardinality:2 increasing:2 provided:1 bounded:1 lowest:1 permutable:1 ag:9 guarantee:4 every:3 act:1 sag:16 runtime:3 k2:2 hit:1 control:2 ozdaglar:1 bertsekas:3 before:3 positive:2 consequence:3 ak:1 analyzing:1 niu:1 emphasis:1 studied:1 ease:1 nemirovski:1 locked:8 practical:1 responsible:1 acknowledgment:1 atomic:1 practice:3 block:1 procedure:2 sdca:1 empirical:4 attain:3 composite:2 get:1 storage:9 prentice:1 www:1 equivalent:1 zinkevich:1 center:4 straightforward:1 attention:2 go:1 convex:8 survey:1 bachrach:1 nedi:1 roux:1 immediately:1 importantly:2 classic:1 handle:1 coordinate:9 updated:3 annals:1 construction:1 suppose:5 shamir:2 exact:1 programming:3 designing:1 element:3 amortized:1 updating:3 bottom:2 observed:2 csie:1 capture:2 caetano:1 richt:3 trade:1 counter:1 yk:1 mentioned:1 convexity:1 complexity:3 locking:2 nesterov:1 motivate:1 depend:1 predictive:1 efficiency:1 completely:1 swap:1 resolved:1 easily:2 kxj:1 various:4 pdate:10 fast:3 describe:2 dsgd:4 shalev:2 say:1 s:1 statistic:1 asynchronously:1 online:1 triggered:1 advantage:1 propose:2 realization:1 achieve:3 radient:3 normalize:1 exploiting:1 convergence:23 requirement:1 incremental:9 leave:2 i2s:1 sjakkamr:1 help:4 derive:1 develop:3 ac:2 kxk2i:2 multicore:2 progress:2 sim:2 strong:3 implemented:1 c:9 come:1 australian:1 synchronized:1 direction:1 closely:1 escent:3 stochastic:40 subsequently:1 libsvmtools:1 require:1 barnab:1 ntu:1 randomization:1 exploring:1 hold:1 sufficiently:1 hall:1 wright:3 aga:16 exp:1 great:1 algorithmic:2 achieves:1 smallest:1 purpose:2 label:1 bridge:1 sensitive:1 concurrent:1 robbins:1 successfully:1 mit:1 offs:1 concurrently:1 arik:3 modified:1 rather:2 ikm:2 broader:1 corollary:4 derived:1 focus:4 contrast:1 i0:2 accumulated:1 typically:3 mth:1 relation:1 tak:2 relegated:1 subroutine:2 interested:2 provably:1 dual:4 aforementioned:2 classification:1 html:1 development:1 platform:2 special:2 construct:1 once:1 progressive:1 yu:1 icml:1 nearly:1 future:4 report:2 modern:1 randomly:3 national:1 delayed:1 replaced:1 n1:4 maintain:2 interest:1 huge:1 rfit:3 evaluation:2 analyzed:1 extreme:1 kone:2 primal:2 byproduct:1 indexed:1 divide:2 re:1 theoretical:2 instance:2 earlier:3 cost:11 delay:1 conducted:1 johnson:1 stored:1 reported:2 answer:1 proximal:5 gd:22 combined:1 s2gd:1 unbiasedness:1 st:1 siam:4 recht:1 randomized:2 concrete:2 thesis:2 andersen:1 satisfied:4 opposed:1 return:6 li:2 parrilo:1 disjointness:1 includes:1 depends:2 performed:1 later:1 view:2 hogwild:2 analyze:3 decaying:1 option:1 parallel:12 complicated:1 monro:1 contribution:3 ni:2 accuracy:3 variance:21 yield:2 correspond:1 worth:1 processor:3 nonetheless:1 eneric:1 frequency:4 involved:1 proof:2 gain:1 dataset:7 massachusetts:1 popular:1 recall:1 schedule:25 hefny:2 sophisticated:1 back:1 appears:1 higher:1 follow:2 specify:2 done:1 though:1 strongly:4 generality:2 furthermore:4 just:2 smola:4 d:6 hand:2 ei:4 lack:1 google:1 logistic:3 effect:1 umbrella:1 normalized:1 lgorithm:1 verify:1 hence:4 read:9 staleness:2 
during:1 please:2 noted:2 presenting:1 demonstrate:1 duchi:1 fi:8 common:4 empirically:2 extend:2 mellon:4 expressing:1 refer:2 significant:1 rd:4 pm:1 similarly:1 mathematics:1 access:1 yk2:1 lkx:1 closest:1 recent:4 indispensable:1 store:2 certain:1 suvrit:2 oczos:1 binary:2 success:1 server:1 yi:3 captured:1 seen:3 additional:6 parallelized:2 recognized:1 converge:1 recommended:1 semi:2 ii:4 full:4 encompass:1 d0:2 smooth:2 faster:2 ahmed:1 vrg:38 calculation:2 bach:2 permitting:1 prediction:1 variant:16 regression:3 cmu:3 expectation:2 arxiv:6 iteration:22 grounded:1 normalization:1 agarwal:2 achieved:4 gilad:1 crucial:2 biased:1 rest:2 unlike:3 ascent:2 inconsistent:1 reddi:2 call:1 structural:1 near:4 integer:1 insofar:1 enough:1 m6:1 iterate:11 zi:2 architecture:1 reduce:1 inner:1 idea:1 cn:2 tradeoff:6 synchronous:5 thread:12 specialization:2 defazio:4 url:2 gb:1 accelerating:1 downey:1 sashank:1 repeatedly:1 rfi:1 useful:1 dubey:1 amount:1 tth:2 reduced:6 http:1 shapiro:1 outperform:1 exist:1 nsf:1 per:6 track:1 carnegie:4 write:1 key:8 four:1 demonstrating:1 lan:1 wasteful:1 krf:1 libsvm:1 lacoste:1 ram:1 subgradient:2 sum:8 run:3 powerful:1 family:1 reader:1 throughout:1 circumvents:1 draw:1 appendix:4 bound:5 centrally:1 convergent:1 existed:1 encountered:1 annual:1 precisely:2 constraint:2 alex:2 bibliography:1 speed:1 min:3 concluding:1 rcv1:3 separable:1 relatively:2 speedup:20 influential:1 developing:1 remain:1 tw:1 making:1 intuitively:1 bapoczos:1 computationally:1 agree:1 remains:1 discus:1 mechanism:2 cjlin:1 nitanda:1 end:13 studying:1 generalizes:2 tochastic:1 observe:2 generic:1 batch:5 schmidt:1 slower:1 denotes:4 running:3 top:2 assumes:3 lock:23 newton:1 unifying:4 calculating:1 exploit:1 classical:1 unchanged:1 objective:7 question:2 already:1 quantity:3 primary:1 exclusive:1 surrogate:1 exhibit:1 gradient:43 provable:1 index:4 illustration:1 mini:4 ratio:1 minimizing:2 setup:2 steep:1 executed:1 sharper:1 expense:1 trace:1 slows:1 rise:3 negative:1 stated:1 design:2 implementation:5 perform:2 allowing:1 eduction:1 upper:1 datasets:6 finite:8 descent:23 situation:1 communication:2 precise:1 parallelizing:1 required:4 specified:3 website3:1 engine:1 concisely:1 established:1 nip:7 suggested:1 usually:1 below:2 parallelism:2 regime:1 sparsity:2 rf:1 max:6 suitable:1 hybrid:3 warm:1 regularized:4 residual:1 scheme:2 technology:1 julien:1 ready:1 epoch:25 understanding:1 literature:2 l2:5 synchronization:5 loss:2 highlight:2 interesting:2 srebro:1 versus:2 remarkable:1 ingredient:1 incurred:1 degree:1 consistent:1 xp:1 xiao:2 share:1 pi:3 surprisingly:2 slowdown:1 asynchronous:40 svrg:17 free:22 supported:1 tsitsiklis:1 formal:3 institute:1 sparse:5 distributed:10 benefit:1 curve:1 xn:1 evaluating:1 valid:1 ending:1 made:1 collection:1 qualitatively:1 avoided:2 kzi:1 emphasize:2 obtains:1 global:1 instantiation:1 uai:1 mairal:1 xi:5 shwartz:2 iterative:1 additionally:1 itt:1 delving:1 robust:2 sra:2 bottou:1 cl:1 dense:1 linearly:1 weimer:1 s2:3 whole:1 big:1 n2:1 sridhar:1 finito:2 referred:2 vr:12 lc:2 scheduler:3 saga:5 kxk2:2 hrf:1 stamp:1 x2rd:1 down:1 theorem:13 bad:1 xt:11 specific:1 list:1 decay:1 albeit:1 ahefny:1 phd:1 kx:1 gap:1 borkar:1 likely:1 partially:1 applies:1 corresponds:1 determines:2 minibatches:1 acceleration:1 exposition:1 careful:1 lipschitz:2 change:7 xkm:3 determined:1 except:1 typical:1 domke:1 formally:2 support:1 brevity:1 accelerated:2 evaluate:1 instructive:1 |
5,327 | 5,822 | Subset Selection by Pareto Optimization
Chao Qian
Yang Yu
Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology, Nanjing University
Collaborative Innovation Center of Novel Software Technology and Industrialization
Nanjing 210023, China
{qianc,yuy,zhouzh}@lamda.nju.edu.cn
Abstract
Selecting the optimal subset from a large set of variables is a fundamental problem
in various learning tasks such as feature selection, sparse regression, dictionary
learning, etc. In this paper, we propose the POSS approach which employs evolutionary Pareto optimization to find a small-sized subset with good performance.
We prove that for sparse regression, POSS is able to achieve the best-so-far theoretically guaranteed approximation performance efficiently. Particularly, for the
Exponential Decay subclass, POSS is proven to achieve an optimal solution. Empirical study verifies the theoretical results, and exhibits the superior performance
of POSS to greedy and convex relaxation methods.
1
Introduction
Subset selection is to select a subset of size k from a total set of n variables for optimizing some
criterion. This problem arises in many applications, e.g., feature selection, sparse learning and
compressed sensing. The subset selection problem is, however, generally NP-hard [13, 4]. Previous
employed techniques can be mainly categorized into two branches, greedy algorithms and convex
relaxation methods. Greedy algorithms iteratively select or abandon one variable that makes the
criterion currently optimized [9, 19], which are however limited due to its greedy behavior. Convex
relaxation methods usually replace the set size constraint (i.e., the `0 -norm) with convex constraints,
e.g., the `1 -norm constraint [18] and the elastic net penalty [29]; then find the optimal solutions to
the relaxed problem, which however could be distant to the true optimum.
Pareto optimization solves a problem by reformulating it as a bi-objective optimization problem
and employing a bi-objective evolutionary algorithm, which has significantly developed recently in
theoretical foundation [22, 15] and applications [16]. This paper proposes the POSS (Pareto Optimization for Subset Selection) method, which treats subset selection as a bi-objective optimization
problem that optimizes some given criterion and the subset size simultaneously. To investigate the
performance of POSS, we study a representative example of subset selection, the sparse regression.
The subset selection problem in sparse regression is to best estimate a predictor variable by linear
regression [12], where the quality of estimation is usually measured by the mean squared error, or
equivalently, the squared multiple correlation R2 [6, 11]. Gilbert et al. [9] studied the two-phased
approach with orthogonal matching pursuit (OMP), and proved the multiplicative approximation
guarantee 1 + ?(?k 2 ) for the mean squared error, when the coherence ? (i.e., the maximum correlation between any pair of observation variables) is O(1/k). This approximation bound was later
improved by [20, 19]. Under the same small coherence condition, Das and Kempe [2] analyzed the
forward regression (FR) algorithm [12] and obtained an approximation guarantee 1 ? ?(?k) for R2 .
These results however will break down when ? ? w(1/k). By introducing the submodularity ratio
?, Das and Kempe [3] proved the approximation guarantee 1 ? e?? on R2 by the FR algorithm;
this guarantee is considered to be the strongest since it can be applied with any coherence. Note
that sparse regression is similar to the problem of sparse recovery [7, 25, 21, 17], but they are for
1
different purposes. Assuming that the predictor variable has a sparse representation, sparse recovery
is to recover the exact coefficients of the truly sparse solution.
We theoretically prove that, for sparse regression, POSS using polynomial time achieves a multiplicative approximation guarantee 1 ? e?? for squared multiple correlation R2 , the best-so-far
guarantee obtained by the FR algorithm [3]. For the Exponential Decay subclass, which has clear
applications in sensor networks [2], POSS can provably find an optimal solution, while FR cannot.
The experimental results verify the theoretical results and exhibit the superior performance of POSS.
We start the rest of the paper by introducing the subset selection problem. We then present in
three subsequent sections the POSS method, its theoretical analysis for sparse regression, and the
empirical studies. The final section concludes this paper.
2
Subset Selection
The subset selection problem originally aims at selecting a few columns from a matrix, so that the
matrix is most represented by the selected columns [1]. In this paper, we present the generalized
subset selection problem that can be applied to arbitrary criterion evaluating the selection.
2.1
The General Problem
Given a set of observation variables V = {X1 , . . . , Xn }, a criterion f and a positive integer k, the
subset selection problem is to select a subset S ? V such that f is optimized with the constraint
|S| ? k, where | ? | denotes the size of a set. For notational convenience, we will not distinguish
between S and its index set IS = {i | Xi ? S}. Subset selection is formally stated as follows.
Definition 1 (Subset Selection). Given all variables V = {X1 , . . . , Xn }, a criterion f and a positive integer k, the subset selection problem is to find the solution of the optimization problem:
arg minS?V f (S) s.t. |S| ? k.
(1)
The subset selection problem is NP-hard in general [13, 4], except for some extremely simple criteria. In this paper, we take sparse regression as the representative case.
2.2
Sparse Regression
Sparse regression [12] finds a sparse approximation solution to the regression problem, where the
solution vector can only have a few non-zero elements.
Definition 2 (Sparse Regression). Given all observation variables V = {X1 , . . . , Xn }, a predictor
variable Z and a positive integer k, define the mean squared error of a subset S ? V as
h
i
X
M SEZ,S = min??R|S| E (Z ?
?i Xi )2 .
i?S
Sparse regression is to find a set of at most k variables minimizing the mean squared error, i.e.,
arg minS?V M SEZ,S s.t. |S| ? k.
For the ease of theoretical treatment, the squared multiple correlation
2
RZ,S
= (V ar(Z) ? M SEZ,S )/V ar(Z)
is used to replace M SEZ,S [6, 11] so that the sparse regression is equivalently
2
arg maxS?V RZ,S
s.t.
|S| ? k.
(2)
Sparse regression is a representative example of subset selection [12]. Note that we will study Eq. (2)
in this paper. Without loss of generality, we assume that all random variables are normalized to have
2
expectation 0 and variance 1. Thus, RZ,S
is simplified to be 1 ? M SEZ,S .
For sparse regression, Das and Kempe [3] proved that the forward regression (FR) algorithm, pre??S F R ,k
2
sented in Algorithm 1, can produce a solution S F R with |S F R | = k and RZ,S
)?
F R ? (1?e
OP T (where OP T denotes the optimal function value of Eq. (2)), which is the best currently known
approximation guarantee. The FR algorithm is a greedy approach, which iteratively selects a variable with the largest R2 improvement.
2
Algorithm 1 Forward Regression
Input: all variables V = {X1 , . . . , Xn }, a predictor variable Z and an integer parameter k ? [1, n]
Output: a subset of V with k variables
Process:
1: Let t = 0 and St = ?.
2: repeat
2
2
3:
Let X ? be a variable maximizing RZ,S
, i.e., X ? = arg maxX?V \St RZ,S
.
t ?{X}
t ?{X}
4:
Let St+1 = St ? {X ? }, and t = t + 1.
5: until t = k
6: return Sk
3
The POSS Method
The subset selection in Eq. (1) can be separated into two objectives, one optimizes the criterion, i.e.,
minS?V f (S), meanwhile the other keeps the size small, i.e., minS?V max{|S| ? k, 0}. Usually
the two objectives are conflicting, that is, a subset with a better criterion value could have a larger
size. The POSS method solves the two objectives simultaneously, which is described as follows.
Let us use the binary vector representation for subsets membership indication, i.e., s ? {0, 1}n
represents a subset S of V by assigning si = 1 if the i-th element of V is in S and si = 0 otherwise.
We assign two properties for a solution s: o1 is the criterion value and o2 is the sparsity,
+?, s = {0}n , or |s| ? 2k
s.o1 =
, s.o2 = |s|.
f (s), otherwise
where the set of o1 to +? is to exclude trivial or overly bad solutions. We further introduce the
isolation function I : {0, 1}n ? R as in [22], which determines if two solutions are allowed to be
compared: they are comparable only if they have the same isolation function value. The implementation of I is left as a parameter of the method, while its effect will be clear in the analysis.
As will be introduced later, we need to compare solutions. For solutions s and s0 , we first judge if
they have the same isolation function value. If not, we say that they are incomparable. If they have
the same isolation function value, s is worse than s0 if s0 has a smaller or equal value on both the
properties; s is strictly worse if s0 has a strictly smaller value in one property, and meanwhile has a
smaller or equal value in the other property. But if both s is not worse than s0 and s0 is not worse
than s, we still say that they are incomparable.
POSS is described in Algorithm 2. Starting from the solution representing an empty set and the
archive P containing only the empty set (line 1), POSS generates new solutions by randomly flipping bits of an archived solution (in the binary vector representation), as lines 4 and 5. Newly
generated solutions are compared with the previously archived solutions (line 6). If the newly generated solution is not strictly worse than any previously archived solution, it will be archived. Before
archiving the newly generated solution in line 8, the archive set P is cleaned by removing solutions
in Q, which are previously archived solutions but are worse than the newly generated solution.
The iteration of POSS repeats for T times. Note that T is a parameter, which could depend on the
available resource of the user. We will analyze the relationship between the solution quality and T in
later sections, and will use the theoretically derived T value in the experiments. After the iterations,
we select the final solution from the archived solutions according to Eq. (1), i.e., select the solution
with the smallest f value while the constraint on the set size is kept (line 12).
4
POSS for Sparse Regression
In this section, we examine the theoretical performance of the POSS method for sparse regression.
2
For sparse regression, the criterion f is implemented as f (s) = ?RZ,s
. Note that minimizing
2
2
?RZ,s is equivalent to the original objective that maximizes RZ,s in Eq. (2).
We need some notations for the analysis. Let Cov(?, ?) be the covariance between two random
variables, C be the covariance matrix between all observation variables, i.e., Ci,j = Cov(Xi , Xj ),
3
Algorithm 2 POSS
Input: all variables V = {X1 , . . . , Xn }, a given criterion f and an integer parameter k ? [1, n]
Parameter: the number of iterations T and an isolation function I : {0, 1}n ? R
Output: a subset of V with at most k variables
Process:
1: Let s = {0}n and P = {s}.
2: Let t = 0.
3: while t < T do
4:
Select s from P uniformly at random.
5:
Generate s0 from s by flipping each bit of s with probability 1/n.
6:
if @z ? P such
that I(z) = I(s0 ) and (z.o1 < s0 .o1 ? z.o2 ? s0 .o2 ) or (z.o1 ? s0 .o1 ?
z.o2 < s0 .o2 ) then
7:
Q = {z ? P | I(z) = I(s0 ) ? s0 .o1 ? z.o1 ? s0 .o2 ? z.o2 }.
8:
P = (P \ Q) ? {s0 }.
9:
end if
10:
t = t + 1.
11: end while
12: return arg mins?P,|s|?k f (s)
and b be the covariance vector between Z and observation variables, i.e., bi = Cov(Z, Xi ). Let CS
denote the submatrix of C with row and columnPset S, and bS denote the subvector of b, containing
elements bi with i ? S. Let Res(Z, S) = Z ? i?S ?i Xi denote the residual of Z with respect to
S, where ? ? R|S| is the least square solution to M SEZ,S [6]. The submodularity ratio presented
in Definition 3 is a measure characterizing how close a set function f is to submodularity. It is easy
to see that f is submodular iff ?U,k (f ) ? 1 for any U and k. For f being the objective function R2 ,
we will use ?U,k shortly in the paper.
Definition 3 (Submodularity Ratio [3]). Let f be a non-negative set function. The submodularity
ratio of f with respect to a set U and a parameter P
k ? 1 is
x?S (f (L ? {x}) ? f (L))
.
?U,k (f ) =
min
f (L ? S) ? f (L)
L?U,S:|S|?k,S?L=?
4.1
On General Sparse Regression
Our first result is the theoretical approximation bound of POSS for sparse regression in Theorem 1.
Let OP T denote the optimal function value of Eq. (2). The expected running time of POSS is the
average number of objective function (i.e., R2 ) evaluations, the most time-consuming step, which
is also the average number of iterations T (denoted by E[T ]) since it only needs to perform one
objective evaluation for the newly generated solution s0 in each iteration.
Theorem 1. For sparse regression, POSS with E[T ] ? 2ek 2 n and I(?) = 0 (i.e., a constant func2
tion) finds a set S of variables with |S| ? k and RZ,S
? (1 ? e???,k ) ? OP T .
The proof relies on the property of R2 in Lemma 1, that for any subset of variables, there always
exists another variable, the inclusion of which can bring an improvement on R2 proportional to the
current distance to the optimum. Lemma 1 is extracted from the proof of Theorem 3.2 in [3].
? ? V ? S such that
Lemma 1. For any S ? V , there exists one variable X
?
?,k
2
2
2
RZ,S?{
(OP T ? RZ,S
).
? ? RZ,S ?
X}
k
2
?
?
Proof. Let Sk? be the optimal set of variables of Eq. (2), i.e., RZ,S
? = OP T . Let S = Sk ? S
k
? Using Lemmas 2.3 and 2.4 in [2], we can easily derive that
and S 0 = {Res(X, S) | X ? S}.
2
2
2
2
? we have R2
RZ,S?
=
R
+
R
.
Because
RZ,S
increases with S and Sk? ? S ? S,
0
?
? ?
Z,S
Z,S
S
Z,S?S
2
2
2
0
2
?
RZ,S ? = OP T . Thus, RZ,S 0 ? OP T ? RZ,S . By Definition 3, |S | = |S| ? k and RZ,? = 0,
k
P
2
2
2
? 0 = arg maxX 0 ?S 0 R2 0 .
we get X 0 ?S 0 RZ,X
0 ? ??,k RZ,S 0 ? ??,k (OP T ? RZ,S ). Let X
Z,X
??,k
??,k
2
2
2
? ? S? correspond to X
? 0 , i.e.,
Then, R
? 0 (OP T ? R ) ?
(OP T ? R ). Let X
?0
Z,X
|S |
Z,S
k
2
2
? S) = X
? 0 . Thus, R2
Res(X,
? ?RZ,S = RZ,X
?0 ?
Z,S?{X}
4
Z,S
??,k
k (OP T
2
?RZ,S
). The lemma holds.
Proof of Theorem 1. Since the isolation function is a constant function, all solutions are allowed
to be compared and we can ignore it. Let Jmax denote the maximum value of j ? [0, k] such that in
?
2
j
the archive set P , there exists a solution s with |s| ? j and RZ,s
? (1 ? (1 ? ?,k
k ) ) ? OP T . That
??,k j
2
is, Jmax = max{j ? [0, k] | ?s ? P, |s| ? j ? RZ,s ? (1 ? (1 ? k ) ) ? OP T }. We then analyze
the expected iterations until Jmax = k, which implies that there exists one solution s in P satisfying
?
2
k
???,k
that |s| ? k and RZ,s
? (1 ? (1 ? ?,k
) ? OP T .
k ) ) ? OP T ? (1 ? e
The initial value of Jmax is 0, since POSS starts from {0}n . Assume that currently Jmax = i < k.
?
2
i
Let s be a corresponding solution with the value i, i.e., |s| ? i and RZ,s
? (1 ? (1 ? ?,k
k ) ) ? OP T .
It is easy to see that Jmax cannot decrease because cleaning s from P (lines 7 and 8 of Algorithm 2)
implies that s is ?worse? than a newly generated solution s0 , which must have a smaller size and a
larger R2 value. By Lemma 1, we know that flipping one specific 0 bit of s (i.e., adding a specific
??,k
2
2
variable into S) can generate a new solution s0 , which satisfies that RZ,s
0 ? RZ,s ?
k (OP T ?
2
RZ,s ). Then, we have
??,k 2
??,k
??,k i+1
2
RZ,s
)RZ,s +
? OP T ? (1 ? (1 ?
) ) ? OP T.
0 ? (1 ?
k
k
k
Since |s0 | = |s| + 1 ? i + 1, s0 will be included into P ; otherwise, from line 6 of Algorithm 2, s0
must be ?strictly worse? than one solution in P , and this implies that Jmax has already been larger
than i, which contradicts with the assumption Jmax = i. After including s0 , Jmax ? i + 1. Let Pmax
denote the largest size of P . Thus, Jmax can increase by at least 1 in one iteration with probability at
1
1
? n1 (1? n1 )n?1 ? enP1max , where Pmax
is a lower bound on the probability of selecting s in
least Pmax
1
1 n?1
line 4 of Algorithm 2 and n (1 ? n )
is the probability of flipping a specific bit of s and keeping
other bits unchanged in line 5. Then, it needs at most enPmax expected iterations to increase Jmax .
Thus, after k ? enPmax expected iterations, Jmax must have reached k.
By the procedure of POSS, we know that the solutions maintained in P must be incomparable.
Thus, each value of one property can correspond to at most one solution in P . Because the solutions
with |s| ? 2k have +? value on the first property, they must be excluded from P . Thus, |s| ?
{0, 1, . . . , 2k ? 1}, which implies that Pmax ? 2k. Hence, the expected number of iterations E[T ]
for finding the desired solution is at most 2ek 2 n.
Comparing with the approximation guarantee of FR, (1 ? e??SF R ,k ) ? OP T [3], it is easy to see
that ??,k ? ?S F R ,k from Definition 3. Thus, POSS with the simplest configuration of the isolation
function can do at least as well as FR on any sparse regression problem, and achieves the best
previous approximation guarantee. We next investigate if POSS can be strictly better than FR.
4.2
On The Exponential Decay Subclass
Our second result is on a subclass of sparse regression, called Exponential Decay as in Definition 4.
In this subclass, the observation variables can be ordered in a line such that their covariances are
decreasing exponentially with the distance.
Definition 4 (Exponential Decay [2]). The variables Xi are associated with points y1 ? y2 ? . . . ?
yn , and Ci,j = a|yi ?yj | for some constant a ? (0, 1).
Since we have shown that POSS with a constant isolation function is generally good, we prove
below that POSS with a proper isolation function can be even better: it is strictly better than FR
on the Exponential Decay subclass, as POSS finds an optimal solution (i.e., Theorem 2) while FR
cannot (i.e., Proposition 1). The isolation function I(s ? {0, 1}n ) = min{i | si = 1} implies that
two solutions are comparable only if they have the same minimum index for bit 1.
Theorem 2. For the Exponential Decay subclass of sparse regression, POSS with E[T ] ? O(k 2 (n?
k)n log n) and I(s ? {0, 1}n ) = min{i | si = 1} finds an optimal solution.
The proof of Theorem 2 utilizes the dynamic programming property of the problem, as in Lemma 2.
2
Lemma 2. [2] Let R2 (v, j) denote the maximum RZ,S
value by choosing v variables, including
2
2
necessarily Xj , from Xj , . . . , Xn . That is, R (v, j) = max{RZ,S
| S ? {Xj , . . . , Xn }, Xj ?
S, |S| = v}. Then, the following recursive relation holds:
a2|yi ?yj |
a|yi ?yj |
R2 (v + 1, j) = maxj+1?i?n R2 (v, i) + b2j + (bj ? bi )2
?
2b
b
,
j
i
1 ? a2|yi ?yj |
1 + a|yi ?yj |
where the term in is the R2 value by adding Xj into the variable subset corresponding to R2 (v, i).
5
Proof of Theorem 2. We divide the optimization process into k + 1 phases, where the i-th (1 ?
i ? k) phase starts after the (i?1)-th phase has finished. We define that the i-th phase finishes when
for each solution corresponding to R2 (i, j) (1 ? j ? n ? i + 1), there exists one solution in the
archive P which is ?better? than it. Here, a solution s is ?better? than s0 is equivalent to that s0 is
?worse? than s. Let ?i denote the iterations since phase i ? 1 has finished, until phase i is completed.
Starting from the solution {0}n , the 0-th phase has finished. Then, we consider ?i (i ? 1). In this
phase, from Lemma 2, we know that a solution ?better? than a corresponding solution of R2 (i, j) can
be generated by selecting a specific one from the solutions ?better? than R2 (i?1, j +1), . . . , R2 (i?
1
1, n) and flipping its j-th bit, which happens with probability at least Pmax
? n1 (1 ? n1 )n?1 ? enP1max .
Thus, if we have found L desired solutions in the i-th phase, the probability of finding a new desired
solution in the next iteration is at least (n?i+1?L)? enP1max , where n?i+1 is the total number of dePn?i enPmax
sired solutions to find in the i-th phase. Then, E[?i ] ? L=0 n?i+1?L
? O(n log nPmax ). Therefore, the expected number of iterations E[T ] is O(kn log nPmax ) until the k-th phase finishes, which
implies that an optimal solution corresponding to max1?j?n R2 (k, j) has been found. Note that
Pmax ? 2k(n?k), because the incomparable property of the maintained solutions by POSS ensures
that there exists at most one solution in P for each possible combination of |s| ? {0, 1, . . . , 2k ? 1}
and I(s) ? {0, 1, . . . , n}. Thus, E[T ] for finding an optimal solution is O(k 2 (n ? k)n log n).
Then, we analyze FR (i.e., Algorithm 1) for this special class. We show below that FR can be
blocked from finding an optimal solution by giving a simple example.
Example 1. X1 = Y1 , Xi = ri Xi?1 + Yi , where ri ? (0, 1), and Yi are independent random
variables with expectation 0 such that each Xi has variance 1.
Qj
For i < j, Cov(Xi , Xj ) = k=i+1 rk . Then, it is easy to verify that Example 1 belongs to the
Pi
Exponential Decay class by letting y1 = 0 and yi = k=2 loga rk for i ? 2.
Proposition 1. For Example 1 with n = 3, r2 = 0.03, r3 = 0.5, Cov(Y1 , Z) = Cov(Y2 , Z) = ?
and Cov(Y3 , Z) = 0.505?, FR cannot find the optimal solution for k = 2.
Proof. The covariances between Xi and Z are b1 = ?, b2 = 0.03b1 + ? = 1.03? and b3 = 0.5b2 +
2
0.505? = 1.02?. Since Xi and Z have expectation 0 and variance 1, RZ,S
can be simply represented
2
2
2
T ?1
= 1.0609? 2 ,
as bS CS bS [11]. We then calculate the R value as follows: RZ,X1 = ? 2 , RZ,X
2
2
2
2
2
2
2
2
RZ,X3 = 1.0404? ; RZ,{X1 ,X2 } = 2.0009? , RZ,{X1 ,X3 } = 2.0103? , RZ,{X2 ,X3 } = 1.4009? 2 .
2
The optimal solution for k = 2 is {X1 , X3 }. FR first selects X2 since RZ,X
is the largest, then
2
2
2
selects X1 since RZ,{X2 ,X1 } > RZ,{X2 ,X3 } ; thus produces a local optimal solution {X1 , X2 }.
It is also easy to verify that other two previous methods OMP [19] and FoBa [26] cannot find the
optimal solution for this example, due to their greedy nature.
5
Empirical Study
We conducted experiments on 12 data sets1 in Table 1 to compare POSS with the following methods:
? FR [12] iteratively adds one variable with the largest improvement on R2 .
? OMP [19] iteratively adds one variable that mostly correlates with the predictor variable residual.
? FoBa [26] is based on OMP but deletes one variable adaptively when beneficial. Set parameter
? = 0.5, the solution path length is five times as long as the maximum sparsity level (i.e., 5 ? k),
and the last active set containing k variables is used as the final selection [26].
? RFE [10] iteratively deletes one variable with the smallest weight by linear regression.
? Lasso [18], SCAD [8] and MCP [24] replaces the `0 norm constraint with the `1 norm penalty,
the smoothly clipped absolute deviation penalty and the mimimax concave penalty, respectively. For
implementing these methods, we use the SparseReg toolbox developed in [28, 27].
For POSS, we use I(?) = 0 since it is generally good, and the number of iterations T is set to be
b2ek 2 nc as suggested by Theorem 1. To evaluate how far these methods are from the optimum, we
also compute the optimal subset by exhaustive enumeration, denoted as OPT.
1
The data sets are from http://archive.ics.uci.edu/ml/ and http://www.csie.ntu.
edu.tw/?cjlin/libsvmtools/datasets/. Some binary classification data are used for regression.
All variables are normalized to have mean 0 and variance 1.
6
Table 1: The data sets.
data set
housing
eunite2001
svmguide3
ionosphere
#inst
506
367
1284
351
#feat
13
16
21
34
data set
sonar
triazines
coil2000
mushrooms
#inst
208
186
9000
8124
#feat
60
60
86
112
data set
clean1
w5a
gisette
farm-ads
#inst
476
9888
7000
4143
#feat
166
300
5000
54877
Table 2: The training R2 value (mean?std.) of the compared methods on 12 data sets for k = 8. In
each data set, ??/?? denote respectively that POSS is significantly better/worse than the corresponding method by the t-test [5] with confidence level 0.05. ?-? means that no results were obtained after
running several days.
Data set
OPT
housing
.7437?.0297
.8484?.0132
eunite2001
svmguide3
.2705?.0255
ionosphere
.5995?.0326
sonar
?
triazines
?
?
coil2000
mushrooms
?
clean1
?
w5a
?
gisette
?
farm-ads
?
POSS: win/tie/loss
POSS
.7437?.0297
.8482?.0132
.2701?.0257
.5990?.0329
.5365?.0410
.4301?.0603
.0627?.0076
.9912?.0020
.4368?.0300
.3376?.0267
.7265?.0098
.4217?.0100
?
FR
.7429?.0300?
.8348?.0143?
.2615?.0260?
.5920?.0352?
.5171?.0440?
.4150?.0592?
.0624?.0076?
.9909?.0021?
.4169?.0299?
.3319?.0247?
.7001?.0116?
.4196?.0101?
12/0/0
FoBa
.7423?.0301?
.8442?.0144?
.2601?.0279?
.5929?.0346?
.5138?.0432?
.4107?.0600?
.0619?.0075?
.9909?.0022?
.4145?.0309?
.3341?.0258?
.6747?.0145?
.4170?.0113?
12/0/0
OMP
.7415?.0300?
.8349?.0150?
.2557?.0270?
.5921?.0353?
.5112?.0425?
.4073?.0591?
.0619?.0075?
.9909?.0022?
.4132?.0315?
.3313?.0246?
.6731?.0134?
.4170?.0113?
12/0/0
RFE
.7388?.0304?
.8424?.0153?
.2136?.0325?
.5832?.0415?
.4321?.0636?
.3615?.0712?
.0363?.0141?
.6813?.1294?
.1596?.0562?
.3342?.0276?
.5360?.0318?
?
11/0/0
MCP
.7354?.0297?
.8320?.0150?
.2397?.0237?
.5740?.0348?
.4496?.0482?
.3793?.0584?
.0570?.0075?
.8652?.0474?
.3563?.0364?
.2694?.0385?
.5709?.0123?
.3771?.0110?
12/0/0
To assess each method on each data set, we repeat the following process 100 times. The data set is
randomly and evenly split into a training set and a test set. Sparse regression is built on the training
set, and evaluated on the test set. We report the average training and test R2 values.
5.1
On Optimization Performance
Table 2 lists the training R2 for k = 8, which reveals the optimization quality of the methods. Note
that the results of Lasso, SCAD and MCP are very close, and we only report that of MCP due to the
page limit. By the t-test [5] with significance level 0.05, POSS is shown significantly better than all
the compared methods on all data sets.
We plot the performance curves on two data sets for k ? 8 in Figure 1. For sonar, OPT is calculated
only for k ? 5. We can observe that POSS tightly follows OPT, and has a clear advantage over
the rest methods. FR, FoBa and OMP have close performances, while are much better than MCP,
SCAD and Lasso. The bad performance of Lasso is consistent with the previous results in [3, 26].
We notice that, although the `1 norm constraint is a tight convex relaxation of the `0 norm constraint
and can have good results in sparse recovery tasks, the performance of Lasso is not as good as POSS
and greedy methods on most data sets. This is due to that, unlike assumed in sparse recovery tasks,
there may not exist a sparse structure in the data sets. In this case, `1 norm constraint can be a bad
approximation of `0 norm constraint. Meanwhile, `1 norm constraint also shifts the optimization
problem, making it hard to well optimize the original R2 criterion.
Considering the running time (in the number of objective function evaluations), OPT does exhaustive
k
search, thus needs nk ? nkk time, which could be unacceptable for a slightly large data set. FR,
FoBa and OMP are greedy-like approaches, thus are efficient and their running time are all in the
order of kn. POSS finds the solutions closest to those of OPT, taking 2ek 2 n time. Although POSS
is slower by a factor of k, the difference would be small when k is a small constant.
Since the 2ek 2 n time is a theoretical upper bound for POSS being as good as FR, we empirically
examine how tight this bound is. By selecting FR as the baseline, we plot the curve of the R2
value over the running time for POSS on the two largest data sets gisette and farm-ads, as shown
in Figure 2. We do not split the training and test set, and the curve for POSS is the average of 30
independent runs. The x-axis is in kn, the running time of FR. We can observe that POSS takes
about only 14% and 23% of the theoretical time to achieve a better performance, respectively on the
two data sets. This implies that POSS can be more efficient in practice than in theoretical analysis.
7
POSS
FR
FoBa
OMP
0.26
0.2
0.35
5
k
6
7
8
3
(a) on svmguide3
4
5
k
6
7
8
FR
30
5
5
k
6
7
(a) on svmguide3
8
3
0.73
0.25
0.72
0.245
5
k
6
7
5
8
Figure 3: Test R2 (the larger the better).
0.23
6
k
7
(a) on training set (RSS)
(b) on sonar
0.24
0.235
0.7
4
40
7
0.26
0.71
0.05
4
FR
30
0.255
R2
RSS
R2
0.15
0.1
3
20
(b) on farm-ads
6
k
0.2
0.16
10
Running time in kn
0.74
0.18
POSS
0.39
40
Figure 2: Performance v.s. running time of POSS.
4
0.2
R2
20
(a) on gisette
Figure 1: Training R (the larger the better).
0.22
POSS
10
2ek 2n
= 43kn
10kn
0.4
= 43kn
Running time in kn
(b) on sonar
2
3
2ek 2n
6kn
0.64
0.25
4
Lasso
0.41
0.68
0.66
0.3
0.18
3
SCAD
0.7
0.4
R2
R2
R2
0.22
MCP
0.72
0.5
0.45
0.24
5.2
RFE
0.42
0.55
R2
OPT
8
5
6
k
7
8
(b) on test set
Figure 4: Sparse regression with `2 regularization
on sonar. RSS: the smaller the better.
On Generalization Performance
When testing sparse regression on the test data, it has been known that the sparsity alone may be
not a good complexity measure [26], since it only restricts the number of variables, but the range of
the variables is unrestricted. Thus better optimization does not always lead to better generalization
performance. We also observe this in Figure 3. On svmguide3, test R2 is consistent with training R2
in Figure 1(a), however on sonar, better training R2 (as in Figure 1(b)) leads to worse test R2 (as in
Figure 3(b)), which may be due to the small number of instances making it prone to overfitting.
As suggested in [26], other regularization terms may be necessary. We add the `2 norm regularization into the objective function, i.e.,
P
RSSZ,S = min??R|S| E (Z ? i?S ?i Xi )2 + ?|?|22 .
The optimization is now arg minS?V RSSZ,S s.t. |S| ? k . We then test all the compared
methods to solve this optimization problem with ? = 0.9615 on sonar. As plotted in Figure 4, we
can observe that POSS still does the best optimization on the training RSS, and by introducing the
`2 norm, it leads to the best generalization performance in R2 .
6
Conclusion
In this paper, we study the problem of subset selection, which has many applications ranging from
machine learning to signal processing. The general goal is to select a subset of size k from a large
set of variables such that a given criterion is optimized. We propose the POSS approach that solves
the two objectives of the subset selection problem simultaneously, i.e., optimizing the criterion and
reducing the subset size.
On sparse regression, a representative of subset selection, we theoretically prove that a simple POSS
(i.e., using a constant isolation function) can generally achieve the best previous approximation
guarantee, using time 2ek 2 n. Moreover, we prove that, with a proper isolation function, it finds an
optimal solution for an important subclass Exponential Decay using time O(k 2 (n?k)n log n), while
other greedy-like methods may not find an optimal solution. We verify the superior performance of
POSS by experiments, which also show that POSS can be more efficient than its theoretical time.
We will further study Pareto optimization from the aspects of using potential heuristic operators [14]
and utilizing infeasible solutions [23]; and try to apply it to more machine learning tasks.
Acknowledgements We want to thank Lijun Zhang and Jianxin Wu for their helpful comments.
This research was supported by 973 Program (2014CB340501) and NSFC (61333014, 61375061).
8
References
[1] C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column
subset selection problem. In SODA, pages 968?977, New York, NY, 2009.
[2] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In STOC, pages 45?54,
Victoria, Canada, 2008.
[3] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In ICML, pages 1057?1064, Bellevue, WA, 2011.
[4] G. Davis, S. Mallat, and M. Avellaneda. Adaptive greedy approximations. Constructive Approximation,
13(1):57?98, 1997.
[5] J. Dem?sar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning
Research, 7:1?30, 2006.
[6] G. Diekhoff. Statistics for the Social and Behavioral Sciences: Univariate, Bivariate, Multivariate.
William C Brown Pub, 1992.
[7] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in
the presence of noise. IEEE Transactions on Information Theory, 52(1):6?18, 2006.
[8] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal
of the American Statistical Association, 96(456):1348?1360, 2001.
[9] A. C. Gilbert, S. Muthukrishnan, and M. J. Strauss. Approximation of functions over redundant dictionaries using coherence. In SODA, pages 243?252, Baltimore, MD, 2003.
[10] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support
vector machines. Machine Learning, 46(1-3):389?422, 2002.
[11] R. A. Johnson and D. W. Wichern. Applied Multivariate Statistical Analysis. Pearson, 6th edition, 2007.
[12] A. Miller. Subset Selection in Regression. Chapman and Hall/CRC, 2nd edition, 2002.
[13] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227?
234, 1995.
[14] C. Qian, Y. Yu, and Z.-H. Zhou. An analysis on recombination in multi-objective evolutionary optimization. Artificial Intelligence, 204:99?119, 2013.
[15] C. Qian, Y. Yu, and Z.-H. Zhou. On constrained Boolean Pareto optimization. In IJCAI, pages 389?395,
Buenos Aires, Argentina, 2015.
[16] C. Qian, Y. Yu, and Z.-H. Zhou. Pareto ensemble pruning. In AAAI, pages 2935?2941, Austin, TX, 2015.
[17] M. Tan, I. Tsang, and L. Wang. Matching pursuit LASSO Part I: Sparse recovery over big dictionary.
IEEE Transactions on Signal Processing, 63(3):727?741, 2015.
[18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society:
Series B (Methodological), 58(1):267?288, 1996.
[19] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231?2242, 2004.
[20] J. A. Tropp, A. C. Gilbert, S. Muthukrishnan, and M. J. Strauss. Improved sparse approximation over
quasiincoherent dictionaries. In ICIP, pages 37?40, Barcelona, Spain, 2003.
[21] L. Xiao and T. Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM
Journal on Optimization, 23(2):1062?1091, 2013.
[22] Y. Yu, X. Yao, and Z.-H. Zhou. On the approximation ability of evolutionary optimization with application
to minimum set cover. Artificial Intelligence, 180-181:20?33, 2012.
[23] Y. Yu and Z.-H. Zhou. On the usefulness of infeasible solutions in evolutionary search: A theoretical
study. In IEEE CEC, pages 835?840, Hong Kong, China, 2008.
[24] C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics,
38(2):894?942, 2010.
[25] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of
Machine Learning Research, 10:555?568, 2009.
[26] T. Zhang. Adaptive forward-backward greedy algorithm for learning sparse representations. IEEE Transactions on Information Theory, 57(7):4689?4708, 2011.
[27] H. Zhou. Matlab SparseReg Toolbox Version 0.0.1. Available Online, 2013.
[28] H. Zhou, A. Armagan, and D. Dunson. Path following and empirical Bayes model selection for sparse
regression. arXiv:1201.3528, 2012.
[29] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal
Statistical Society: Series B (Statistical Methodology), 67(2):301?320, 2005.
9
| 5822 |@word kong:1 version:1 polynomial:1 norm:11 nd:1 triazine:2 r:4 covariance:5 bellevue:1 initial:1 configuration:1 series:2 selecting:5 pub:1 o2:8 current:1 comparing:1 si:4 assigning:1 mushroom:2 must:5 distant:1 subsequent:1 plot:2 alone:1 greedy:13 selected:1 intelligence:2 zhang:5 five:1 unacceptable:1 prove:5 behavioral:1 introduce:1 theoretically:4 expected:6 behavior:1 examine:2 multi:1 zhouzh:1 decreasing:1 zhi:1 enumeration:1 considering:1 spain:1 notation:1 gisette:4 maximizes:1 moreover:1 developed:2 finding:4 argentina:1 guarantee:10 y3:1 subclass:8 concave:2 tie:1 classifier:1 yn:1 positive:3 nju:1 before:1 local:1 treat:1 limit:1 nsfc:1 meet:1 path:2 foba:6 china:2 studied:1 ease:1 limited:1 bi:6 range:1 phased:1 yj:5 testing:1 recursive:1 practice:1 x3:5 procedure:1 empirical:4 maxx:2 significantly:3 matching:2 pre:1 confidence:1 nanjing:2 cannot:5 convenience:1 selection:37 close:3 get:1 operator:1 lijun:1 gilbert:3 equivalent:2 www:1 optimize:1 center:1 maximizing:1 starting:2 convex:5 recovery:6 qian:4 wichern:1 utilizing:1 sar:1 jmax:12 annals:1 tan:1 mallat:1 user:1 exact:1 cleaning:1 programming:1 element:3 satisfying:1 particularly:1 natarajan:1 std:1 csie:1 wang:1 tsang:1 calculate:1 ensures:1 decrease:1 complexity:1 sired:1 dynamic:1 depend:1 tight:2 max1:1 drineas:1 po:54 easily:1 various:1 represented:2 tx:1 muthukrishnan:2 separated:1 artificial:2 choosing:1 pearson:1 exhaustive:2 heuristic:1 larger:5 solve:1 elad:1 say:2 otherwise:3 compressed:1 ability:1 cov:7 statistic:2 farm:4 abandon:1 final:3 online:1 housing:2 advantage:1 indication:1 net:2 propose:2 fr:25 uci:1 iff:1 achieve:4 rfe:3 ijcai:1 empty:2 optimum:3 produce:2 derive:1 measured:1 yuy:1 op:21 eq:7 solves:3 implemented:1 c:2 judge:1 implies:7 submodularity:5 b2j:1 func2:1 libsvmtools:1 implementing:1 crc:1 assign:1 generalization:3 ntu:1 homotopy:1 proposition:2 opt:7 diekhoff:1 strictly:6 hold:2 considered:1 ic:1 hall:1 algorithmic:1 bj:1 dictionary:5 achieves:2 smallest:2 a2:2 purpose:1 estimation:1 currently:3 largest:5 sensor:1 always:2 aim:1 lamda:1 zhou:8 shrinkage:1 derived:1 notational:1 improvement:3 methodological:1 likelihood:1 mainly:1 baseline:1 inst:3 helpful:1 clean1:2 membership:1 relation:1 selects:3 provably:1 arg:7 classification:2 denoted:2 proposes:1 constrained:1 special:1 kempe:5 equal:2 chapman:1 represents:1 yu:6 icml:1 nearly:1 np:2 report:2 aire:1 employ:1 few:2 randomly:2 simultaneously:3 national:1 tightly:1 maxj:1 phase:11 n1:4 william:1 investigate:2 evaluation:3 mahoney:1 analyzed:1 truly:1 necessary:1 orthogonal:1 divide:1 re:3 desired:3 plotted:1 overcomplete:1 theoretical:12 instance:1 column:3 boolean:1 ar:2 cover:1 introducing:3 deviation:1 subset:41 predictor:5 usefulness:1 conducted:1 johnson:1 kn:9 proximal:1 adaptively:1 st:4 fundamental:1 siam:2 yao:1 squared:7 aaai:1 containing:3 worse:11 ek:7 american:1 return:2 li:1 exclude:1 potential:1 archived:6 b2:2 dem:1 coefficient:1 ad:4 multiplicative:2 later:3 break:1 tion:1 try:1 analyze:3 reached:1 start:3 recover:1 bayes:1 jianxin:1 collaborative:1 ass:1 square:3 variance:4 efficiently:1 miller:1 correspond:2 ensemble:1 strongest:1 barnhill:1 definition:8 boutsidis:1 proof:7 associated:1 newly:6 proved:3 treatment:1 originally:1 day:1 methodology:1 improved:3 evaluated:1 generality:1 correlation:4 until:4 tropp:2 mimimax:1 quality:3 b3:1 effect:1 verify:4 unbiased:1 true:1 normalized:2 y2:2 hence:1 regularization:4 reformulating:1 excluded:1 brown:1 laboratory:1 iteratively:5 maintained:2 davis:1 
criterion:15 generalized:1 hong:1 bring:1 ranging:1 novel:2 recently:1 superior:3 empirically:1 exponentially:1 association:1 blocked:1 consistency:1 inclusion:1 submodular:2 stable:1 etc:1 add:3 closest:1 multivariate:2 optimizing:2 optimizes:2 belongs:1 loga:1 binary:3 yi:8 minimum:2 unrestricted:1 relaxed:1 omp:8 employed:1 redundant:1 signal:2 branch:1 multiple:4 long:1 regression:40 expectation:3 arxiv:1 iteration:14 want:1 baltimore:1 rest:2 unlike:1 archive:5 comment:1 nonconcave:1 integer:5 yang:1 presence:1 split:2 easy:5 xj:7 isolation:12 finish:2 hastie:1 lasso:8 incomparable:4 cn:1 shift:1 qj:1 greed:1 penalty:5 york:1 matlab:1 generally:4 clear:3 industrialization:1 simplest:1 generate:2 http:2 exist:1 restricts:1 notice:1 overly:1 tibshirani:1 key:1 deletes:2 kept:1 backward:1 relaxation:4 run:1 soda:2 clipped:1 guyon:1 wu:1 utilizes:1 sented:1 coherence:4 comparable:2 bit:7 submatrix:1 bound:5 guaranteed:1 distinguish:1 fan:1 replaces:1 oracle:1 constraint:11 ri:2 software:2 x2:6 generates:1 aspect:1 min:11 extremely:1 according:1 combination:1 scad:4 smaller:5 beneficial:1 slightly:1 contradicts:1 tw:1 b:3 happens:1 making:2 resource:1 mcp:6 previously:3 r3:1 cjlin:1 know:3 letting:1 end:2 pursuit:2 available:2 sets1:1 apply:1 observe:4 victoria:1 spectral:1 shortly:1 slower:1 rz:47 original:2 denotes:2 running:9 completed:1 giving:1 recombination:1 society:2 unchanged:1 objective:14 already:1 flipping:5 md:1 evolutionary:5 exhibit:2 win:1 gradient:1 distance:2 thank:1 armagan:1 evenly:1 trivial:1 svmguide3:5 assuming:1 length:1 o1:9 index:2 relationship:1 ratio:4 minimizing:2 innovation:1 equivalently:2 nc:1 mostly:1 dunson:1 stoc:1 stated:1 negative:1 pmax:6 implementation:1 proper:2 perform:1 upper:1 observation:6 datasets:1 y1:4 arbitrary:1 canada:1 introduced:1 pair:1 cleaned:1 subvector:1 toolbox:2 optimized:3 icip:1 conflicting:1 barcelona:1 avellaneda:1 able:1 suggested:2 usually:3 below:2 sparsity:3 program:1 built:1 max:4 including:2 royal:2 residual:2 representing:1 minimax:1 technology:2 finished:3 axis:1 concludes:1 chao:1 acknowledgement:1 loss:2 proportional:1 proven:1 foundation:1 consistent:2 s0:25 xiao:1 pareto:7 pi:1 austin:1 cancer:1 row:1 prone:1 penalized:1 repeat:3 last:1 keeping:1 supported:1 infeasible:2 characterizing:1 taking:1 absolute:1 sparse:46 curve:3 calculated:1 xn:7 evaluating:1 forward:4 adaptive:2 simplified:1 far:3 employing:1 social:1 correlate:1 transaction:4 temlyakov:1 approximate:1 pruning:1 ignore:1 feat:3 keep:1 gene:1 ml:1 active:1 reveals:1 overfitting:1 b1:2 assumed:1 consuming:1 xi:13 search:2 sk:4 sonar:8 table:4 nature:1 elastic:2 necessarily:1 meanwhile:3 zou:1 da:5 significance:1 big:1 noise:1 edition:2 verifies:1 allowed:2 categorized:1 x1:13 representative:4 ny:1 exponential:9 sf:1 down:1 removing:1 theorem:9 bad:3 specific:4 rk:2 cec:1 sensing:1 r2:43 decay:9 list:1 ionosphere:2 bivariate:1 exists:6 vapnik:1 adding:2 strauss:2 ci:2 nk:1 smoothly:1 simply:1 univariate:1 ordered:1 hua:1 determines:1 relies:1 extracted:1 satisfies:1 weston:1 sized:1 goal:1 donoho:1 replace:2 hard:3 included:1 except:1 uniformly:1 reducing:1 lemma:9 total:2 called:1 experimental:1 select:7 formally:1 support:1 arises:1 buenos:1 constructive:1 evaluate:1 |
5,328 | 5,823 | Interpolating Convex and Non-Convex Tensor
Decompositions via the Subspace Norm
Ryota Tomioka
Toyota Technological Institute at Chicago
[email protected]
Qinqing Zheng
University of Chicago
[email protected]
Abstract
We consider the problem of recovering a low-rank tensor from its noisy observation. Previous work has shown a recovery guarantee with signal to noise ratio
O(ndK/2e/2 ) for recovering a Kth order rank one tensor of size n ? ? ? ? ? n by
recursive unfolding. In this paper, we first improve this bound to O(nK/4 ) by a
much simpler approach, but with a more careful analysis. Then we propose a new
norm called the subspace norm, which is based on the Kronecker products of factors obtained by the proposed simple estimator.
The imposed Kronecker structure
?
?
allows us to show a nearly ideal O( n + H K?1 ) bound, in which the parameter
H controls the blend from the non-convex estimator to mode-wise nuclear norm
minimization. Furthermore, we empirically demonstrate that the subspace norm
achieves the nearly ideal denoising performance even with H = O(1).
1
Introduction
Tensor is a natural way to express higher order interactions for a variety of data and tensor decomposition has been successfully applied to wide areas ranging from chemometrics, signal processing,
to neuroimaging; see [15, 18] for a survey. Moreover, recently it has become an active area in the
context of learning latent variable models [3].
Many problems related to tensors, such as, finding the rank, or a best rank-one approaximation of
a tensor is known to be NP hard [11, 8]. Nevertheless we can address statistical problems, such as,
how well we can recover a low-rank tensor from its randomly corrupted version (tensor denoising)
or from partial observations (tensor completion). Since we can convert a tensor into a matrix by
an operation known as unfolding, recent work [25, 19, 20, 13] has shown that we do get nontrivial
guarantees by using some norms or singular value decompositions. More specifically, Richard &
Montanari [20] has shown that when a rank-one Kth order tensor of size n ? ? ? ? ? n is corrupted
by standard Gaussian noise, a nontrivial bound can be shown with high probability if the signal?to
noise ratio ?/? % ndK/2e/2 by a method called the recursive unfolding1 . Note that ?/? % n
is sufficient for matrices (K = 2) and also for tensors if we use the best rank-one approximation
(which is known to be NP hard) as an estimator. On the other hand, Jain & Oh [13] analyzed the
tensor completion problem and proposed an algorithm that requires O(n3/2 ?polylog(n)) samples for
K = 3; while information theoretically we need at least ?(n) samples and the intractable maximum
likelihood estimator would require O(n ? polylog(n)) samples. Therefore, in both settings, there is
a wide gap between the ideal estimator and current polynomial time algorithms. A subtle question
that we will address in this paper is whether we need to unfold the tensor so that the resulting matrix
become as square as possible, which was the reasoning underlying both [19, 20].
As a parallel development, non-convex estimators based on alternating minimization or nonlinear
optimization [1, 21] have been widely applied and have performed very well when appropriately
1
We say an % bn if there is a constant C > 0 such that an ? C ? bn .
1
Table 1: Comparison of required signal-to-noise ratio ?/? of different algorithms for recovering
a Kth order rank one tensor of size n ? ? ? ? ? n contaminated by Gaussian noise with Standard
deviation ?. See model (2). The bound for the ordinary unfolding is shown in Corollary 1. The
bound for the subspace norm is shown in Theorem 2. The ideal estimator is proven in Appendix A.
Overlapped/
Latent nuclear
norm[23]
O(n(K?1)/2 )
Recursive
unfolding[20]/
square
norm[19]
O(ndK/2e/2 )
Ordinary unfolding
Subspace norm (proposed)
O(nK/4 )
?
?
O( n + H K?1 )
Ideal
O(
p
nK log(K))
set up. Therefore it would be of fundamental importance to connect the wisdom of non-convex
estimators with the more theoretically motivated estimators that recently emerged.
In this paper, we explore such a connection by defining a new norm based on Kronecker products of
factors that can be obtained by simple mode-wise singular value decomposition (SVD) of unfoldings
(see notation section below), also known as the higher-order singular value decomposition (HOSVD)
[6, 7]. We first study the non-asymptotic behavior of the leading singular vector from the ordinary
(rectangular) unfolding X (k) and show a nontrivial bound for signal to noise ratio ?/? % nK/4 .
Thus the result also applies to odd order tensors confirming a conjecture in [20]. Furthermore, this
motivates us to use the solution of mode-wise truncated SVDs to construct a new norm. We propose
the subspace norm, which predicts an unknown low-rank tensor as a mixture of K low-rank tensors,
in which each term takes the form
b (1) ? ? ? ? ? P
b (k?1) ? P
b (k+1) ? ? ? ? ? P
b (K) )> ),
foldk (M (k) (P
(k)
b
where foldk is the inverse of unfolding (?)(k) , ? denotes the Kronecker product, and P
? Rn?H
is a orthonormal matrix estimated from the mode-k unfolding of the observed tensor, for k =
K?1
1, . . . , K; H is a user-defined parameter, and M (k) ? Rn?H
. Our theory tells us that with
(k)
b
sufficiently high signal-to-noise ratio the estimated P
spans the true factors.
We highlight our contributions below:
1. We prove that the required signal-to-noise ratio for recovering a Kth order rank one tensor from
the ordinary unfolding is O(nK/4 ). Our analysis shows a curious two phase behavior: with high
probability, when nK/4 - ?/? - nK/2 , the error shows a fast decay as 1/? 4 ; for ?/? % nK/2 , the
error decays slowly as 1/? 2 . We confirm this in a numerical simulation.
2. The proposed subspace norm is an interpolation between the intractable estimators that directly
control the rank (e.g., HOSVD) and the tractable norm-based estimators. It becomes equivalent to
the latent trace norm [23] when H = n at the cost of increased signal-to-noise ratio threshold (see
Table 1).
3. The proposed estimator is more efficient than previously proposed norm based estimators, because
the size of the SVD required in the algorithm is reduced from n ? nK?1 to n ? H K?1 .
4. We also empirically demonstrate that the proposed subspace norm performs nearly optimally for
constant order H.
Notation
Let X ? Rn1 ?n2 ?????nK be a Kth order tensor. We will often use n1 = ? ? ? = nK = n to
simplify the notation but all the results in this paper generalizes to general dimensions. The inner
product between a pair of tensors is defined as the inner products of them as vectors; i.e., hX , Wi =
hvec(X ), vec(W)i. For u ? Rn1 , v ? Rn2 , w ? Rn3 , u ? v ? w denotes the n1 ? n2 ? n3
rank-one tensor whose i, j, k entry is ui vj wk . The rank of X is the minimum number of rank-one
tensors required to write X as a linear combination of them. A mode-k fiber of tensor X is an nk
dimensional vector that Q
is obtained by fixing all but the kth index of X . The mode-k unfolding X (k)
of tensor X is an nk ? k0 6=k nk0 matrix constructed by concatenating all the mode-k fibers along
columns. We denote the spectral and Frobenius norms for matrices by k ? k and k ? kF , respectively.
2
2
2.1
The power of ordinary unfolding
A perturbation bound for the left singular vector
We first establish a bound on recovering the left singular vector of a rank-one n ? m matrix (with
m > n) perturbed by random Gaussian noise.
Consider the following model known as the information plus noise model [4]:
? = ?uv > + ?E,
X
(1)
where u and v are unit vectors, ? is the signal strength, ? is the noise standard deviation, and
the noise matrix E is assumed to be random with entries sampled i.i.d. from the standard normal
?
distribution. Our goal is to lower-bound the correlation between u and the top left singular vector u
? for signal-to-noise ratio ?/? % (mn)1/4 with high probability.
of X
? does
A direct application of the classic Wedin perturbation theorem [28] to the rectangular matrix X
not provide us the desired result. This is because
?
?it requires the signal to noise ratio ?/? ? 2kEk.
Since the spectral norm of E scales as Op ( n + m) [27], this would mean that we require ?/? %
m1/2 ; i.e., the threshold is dominated by the number of columns m, if m ? n.
?X
? > , a square matrix. Our key insight
? as the leading eigenvector of X
Alternatively, we can view u
?X
? > as follows:
is that we can decompose X
?X
? > = (? 2 uu> + m? 2 I) + (? 2 EE > ? m? 2 I) + ??(uv > E > + Evu> ).
X
Note that u is the leading eigenvector of the first term because adding an identity matrix does not
change the eigenvectors. Moreover, we notice that there are two noise terms: the first term is a
centered Wishart matrix and it is independent of the signal ?; the second term is Gaussian distributed
and depends on the signal ?.
This implies a two-phase behavior corresponding to either the Wishart or the Gaussian noise term
being dominant, depending on the value of ?. Interestingly, we get a different speed of convergence
for each of these phases as we show in the next theorem (the proof is given in Appendix D.1).
Theorem 1. There exists a constant C such that with probability at least 1 ? 4e?n , if m/n ? C,
?
?
1
Cnm
?
?
?
?1 ? (?/?)4 , if m > ? ? (Cnm) 4 ,
|h?
u, ui| ?
?
Cn
?
?
?
?1 ?
, if
? m,
2
(?/?)
?
?
Cn
otherwise, |h?
u, ui| ? 1 ? (?/?)
Cn.
2 if ?/? ?
? has sufficiently many more columns than rows, as the signal to noise ratio ?/?
In other words, if X
? first converges to u as 1/? 4 , and then as 1/? 2 . Figure 1(a) illustrates these results.
increases, u
We randomly generate a rank-one 100 ? 10000 matrix perturbed by Gaussian noise, and measure
? and u. The phase transition happens at ?/? = (nm)1/4 , and there are two
the distance between u
regimes of different convergence rates as Theorem 1 predicts.
2.2
Tensor Unfolding
Now let?s apply the above result to the tensor version of information plus noise model studied by
[20]. We consider a rank one n ? ? ? ? ? n tensor (signal) contaminated by Gaussian noise as follows:
Y = X ? + ?E = ?u(1) ? ? ? ? ? u(K) + ?E,
(2)
where factors u(k) ? Rn , k = 1, . . . , K, are unit vectors, which are not necessarily identical, and
the entries of E ? Rn?????n are i.i.d samples from the normal distribution N (0, 1). Note that this is
slightly more general (and easier to analyze) than the symmetric setting studied by [20].
3
n = 100, m = 10000
0
1
-2
correlation
0.8
-4
-6
-8
-10
|hu1 , u
b1 i|
|hu2 , u
b2 i|
|hu1 , u
b1 i|
|hu2 , u
b2 i|
?1
?2
log(1 ? |hu, u
?i|)
?3.81 log(?/?) + 12.76
?2.38
log(?/?)
1
2 + 6.18
3
[20
[20
[40
[40
40
40
80
80
60]
60]
120]
120]
0.4
0.2
log (nm)1/4
?
log ( m)
2
0.6
-
4
log(?/?)
5
0
0
6
(a) Synthetic experiment showing phase transition
at ?/? = (nm)1/4 and regimes with different rates
of convergence. See Theorem 1.
5
10
15
Q
1
?( i ni ) 4
20
25
30
(b) Synthetic experiment showing phase transition
Q
at ? = ?( k nk )1/4 for odd order tensors. See
Corollary 1.
Figure 1: Numerical demonstration of Theorem 1 and Corollary 1.
Several estimators for recovering X ? from its noisy version Y have been proposed (see Table 1).
Both the overlapped nuclear norm and latent nuclear norm discussed in [23] achives the relative
performance guarantee
?
|||X? ? X ? |||F /? ? Op ? nK?1 /? ,
(3)
where X? is the estimator. This bound implies that if we want
? to obtain relative error smaller than ?,
we need the signal to noise ratio ?/? to scale as ?/? % nK?1 /?.
Mu et al. [19] proposed the square norm, defined as the nuclear norm of the matrix obtained by
grouping the first bK/2c indices along the rows and the last dK/2e
? indices along the columns.
This norm improves
the
right
hand
side
of
inequality
(3)
to
O
(?
ndK/2e /?), which translates to
p
?
requiring ?/? % ndK/2e /? for obtaining relative error ?. The intuition here is the more square the
unfolding is the better the bound becomes. However, there is no improvement for K = 3.
Richard and Montanari [20] studied the (symmetric version of) model (2) and proved that?a recursive
unfolding algorithm achieves the factor recovery error dist(?
u(k) , u(k) ) = ? with ?/? % ndK/2e /?
with high probability, where dist(u, u0 ) := min(ku ? u0 k, ku + u0 k). They also showed that the
randomly initialized tensor
? [7, 16, 3] can achieve the same error ? with slightly worse
? power method
threshold ?/? % max( n/?2 , nK/2 ) K log K also with high probability.
The reasoning underlying both [19] and [20] is that square unfolding is better. However, if we take
the (ordinary) mode-k unfolding
>
Y (k) = ?u(k) u(k?1) ? ? ? ? ? u(1) ? u(K) ? ? ? ? ? u(k+1) + ?E (k) ,
(4)
we can see (4) as an instance of information plus noise model (1) where m/n = nK?2 . Thus the
ordinary unfolding satisfies the condition of Theorem 1 for n or K large enough.
Corollary 1. Consider a K(? 3)th order rank one tensor contaminated by Gaussian noise as in
(2). There exists a constant C such that if nK?2 ? C, with probability at least 1 ? 4Ke?n , we have
?
K?1
1
K
2CnK
?
?
, if n 2 > ?/? ? C 4 n 4 ,
?
4
(?/?)
(k)
2
(k)
dist (?
u ,u ) ?
for k = 1, . . . , K,
?
K?1
2Cn
?
2 ,
?
,
if
?/?
?
n
(?/?)2
? (k) is the leading left singular vector of the rectangular unfolding Y (k) .
where u
This proves that as conjectured by [20], the threshold ?/? % nK/4 applies not only to the even
order case but also to the odd order case. Note that Hopkins et al. [10] have shown a similar
result without the sharp rate of convergence. The above
corollary easily extends to more general
qQ
QK
1/4
and
n1 ? ? ? ? ? nK tensor by replacing the conditions by
`6=k n` > ?/? ? (C
k=1 nk )
qQ
?
?/? ?
`6=k n` . The result also holds when X has rank higher than 1; see Appendix E.
4
We demonstrate this result in Figure 1(b). The models behind the experiment are slightly more
general ones in which [n1 , n2 , n3 ] = [20, 40, 60] or [40, 80, 120] and the signal X ? is rank two with
(1)
(1)
(1)
(1)
? 1 i and hu2 , u
? 2 i as a measure
?1 = 20 and ?2 = 10. The plot shows the inner products hu1 , u
of the quality of estimating the two mode-1 factors. The horizontal axis is the normalized noise
QK
standard deviation ?( k=1 nk )1/4 . We can clearly see that the inner product decays symmetrically
around ?1 and ?2 as predicted by Corollary 1 for both tensors.
3
Subspace norm for tensors
Suppose the true tensor X ? ? Rn?????n admits a minimum Tucker decomposition [26] of rank
(R, . . . , R):
PR
PR
(1)
(K)
X ? = i1 =1 ? ? ? iK =1 ?i1 i2 ...iK ui1 ? ? ? ? ? uiK .
(5)
If the core tensor C = (?i1 ...iK ) ? RR?????R is superdiagonal, the above decomposition reduces to
the canonical polyadic (CP) decomposition [9, 15]. The mode-k unfolding of the true tensor X ? can
be written as follows:
>
(6)
X ?(k) = U (k) C (k) U (1) ? ? ? ? ? U (k?1) ? U (k+1) ? ? ? ? ? U (K) ,
where C (k) is the mode-k unfolding of the core tensor C; U (k) is a n ? R matrix U (k) =
(k)
(k)
[u1 , . . . , uR ] for k = 1, . . . , K. Note that U (k) is not necessarily orthogonal.
>
Let X ?(k) = P (k) ?(k) Q(k) be the SVD of X ?(k) . We will observe that
Q(k) ? Span P (1) ? ? ? ? ? P (k?1) ? P (k+1) ? ? ? ? ? P (K)
(7)
because of (6) and U (k) ? Span(P (k) ).
Corollary 1 shows that the left singular vectors P (k) can be recovered under mild conditions; thus
the span of the right singular vectors can also be recovered. Inspired by this, we define a norm that
models a tensor X as a mixture of tensors Z (1) , . . . , Z (K) . We require that the mode-k unfolding
(k)
(k)
>
of Z (k) , i.e. Z (k) , has a low rank factorization Z (k) = M (k) S (k) , where M (k) ? Rn?H
(k)
K?1
is a
nK?1 ?H K?1
variable, and S ? R
is a fixed arbitrary orthonormal basis of some subspace, which
we choose later to have the Kronecker structure in (7).
In the following, we define the subspace norm, suggest an approach to construct the right factor
S (k) , and prove the denoising bound in the end.
3.1
The subspace norm
Consider a Kth order tensor of size n ? ? ? ? n.
K?1
?H K?1
Definition 1. Let S (1) , . . . , S (K) be matrices such that S (k) ? Rn
with H ? n. The
subspace norm for a Kth order tensor X associated with {S (k) }K
is
defined
as
k=1
(
PK
(k)
inf {M (k) }K
kM
k
,
if
X
?
Span({S (k) }K
?
k=1 ),
k=1
k=1
|||X |||s :=
+?,
otherwise,
(k) K
where k ? k? is the nuclear norm, and Span({S }k=1 ) :=
X ? Rn?????n :
P
>
K
?M (1) , . . . , M (K) , X = k=1 foldk (M (k) S (k) ) .
In the next lemma (proven in Appendix D.2), we show the dual?
norm of the subspace norm has a simple appealing form. As we see in Theorem 2, it avoids the O( nK?1 ) scaling (see the first column
of Table 1) by restricting the influence of the noise term in the subspace defined by S (1) , . . . , S (K) .
Lemma 1. The dual norm of |||?|||s is a semi-norm
|||X |||s? =
max kX (k) S (k) k,
k=1,...,K
where k ? k is the spectral norm.
5
3.2
Choosing the subspace
A natural question that arises is how to choose the matrices S (1) , . . . , S (k) .
Lemma 2. Let the X ?(k) = P (k) ?(k) Q(k) be the SVD of X ?(k) , where P (k) is n ? R and Q(k) is
nK?1 ? R. Assume that R ? n and U (k) has full column rank. It holds that for all k,
i) U (k) ? Span(P (k) ),
ii) Q(k) ? Span P (1) ? ? ? ? ? P (k?1) ? P (k+1) ? ? ? ? ? P (K) .
Proof. We prove the lemma in Appendix D.4.
Corollary 1 shows that when the signal to noise ratio is high enough, we can recover P (k) with high
probability. Hence we suggest the following three-step approach for tensor denoising:
(i) For each k, unfold the observation tensor in mode k and compute the top H left singular
b (k) .
vectors. Concatenate these vectors to obtain a n ? H matrix P
b (1) ? ? ? ? ? P
b (k?1) ? P
b (k+1) ? ? ? ? ? P
b (K) .
(ii) Construct S (k) as S (k) = P
(iii) Solve the subspace norm regularized minimization problem
1
2
|||Y ? X |||F + ?|||X |||s ,
2
min
X
(8)
where the subspace norm is associated with the above defined {S (k) }K
k=1 .
See Appendix B for details.
3.3
Analysis
Let Y ? Rn?????n be a tensor corrupted by Gaussian noise with standard deviation ? as follows:
Y = X ? + ?E.
(9)
We define a slightly modified estimator X? as follows:
X? =
arg min
n1
X ,{M (k) }K
k=1
2
2
|||Y ? X |||F + ?|||X |||s : X =
K
X
o
>
foldk M (k) S (k) , {M (k) }K
?
M(?)
k=1
k=1
(10)
K?1
where M(?) is a restriction of the set of matrices M (k) ? Rn?H
, k = 1, . . . , K defined as
follows:
n
o
?
? ?
(k)
K?1 ), ?k 6= ` .
(
n
+
H
M(?) := {M (k) }K
:
kfold
(M
)
k
?
k
(`)
k=1
K
This restriction makes sure that M (k) , k = 1, . . . , K, are incoherent, i.e., each M (k) has a spectral
norm that is as low as a random matrix when unfolded at a different mode `. Similar assumptions
were used in low-rank plus sparse matrix decomposition [2, 12] and for the denoising bound for the
latent nuclear norm [23].
Then we have the following statement (we prove this in Appendix D.3).
Theorem 2. Let Xp be any tensor that can be expressed as
Xp =
K
X
(k) >
foldk M (k)
S
,
p
k=1
K
which satisfies the above incoherence condition {M (k)
p }k=1 ? M(?) and let rk be the rank
of M (k)
for k = 1, . . . , K. In addition, we assume that each S (k) is constructed as S (k) =
p
6
b (k?1) ? ? ? ? ? P
b (k+1) with (P
b (k) )> P
b (k) = I H . Then there are universal constants c0
P
and c1 such that any solution X? of
the minimization problem (10) with ? = |||Xp ? X ? |||s? +
?
p
?
c0 ?
n + H K?1 + 2 log(K/?) satisfies the following bound
|||X? ? X |||F ? |||Xp ? X |||F + c1 ?
?
?
r
XK
k=1
rk ,
with probability at least 1 ? ?.
Note that the right-hand side of the bound consists of two terms. The first term is the approximation
(k)
error. This term will be zero if X ? lies in Span({S (k) }K
=
k=1 ). This is the case, if we choose S
I nK?1 as in the latent nuclear norm, or if the condition of Corollary 1 is satisfied for the smallest ?R
when we use the Kronecker product construction we proposed. Note that the regularization constant
? should also scale with the dual subspace norm of the residual Xp ? X ? .
The second term is the estimation error with respect to Xp . If we take Xp to be the orthogonal projection of X ? to the Span({S (k) }K
k=1 ), we can ignore the contribution of the residual to ?, because
(Xp ? X ? )(k) S (k) = 0. Then the estimation error scales mildly with the dimensions n, H K?1 and
with the sum of the ranks. Note that if we take S (k) = I nK?1 , we have H K?1 = nK?1 , and we
recover the guarantee (3) .
4
Experiments
In this section, we conduct tensor denoising experiments on synthetic and real datasets, to numerically confirm our analysis in previous sections.
4.1
Synthetic data
We randomly generated the true rank two tensor X ? of size 20?30?40 with singular values ?1 = 20
and ?2 = 10. The true factors are generated as random matrices with orthonormal columns. The
observation tensor Y is then generated by adding Gaussian noise with standard deviation ? to X ? .
Our approach is compared to the CP decomposition, the overlapped approach, and the latent approach. The CP decomposition is computed by the tensorlab [22] with 20 random initializations.
We assume CP knows the true rank is 2. For the subspace norm, we use Algorithm 2 described
b (k) ?s. We computed
in Section 3. We also select the top 2 singular vectors when constructing U
the solutions for 20 values of regularization parameter ? logarithmically spaced between 1 and 100.
For the overlapped and the latent norm, we use ADMM described in [25]; we also computed 20
solutions with the same ??s used for the subspace norm.
We measure the performance in the relative error defined as |||Xb ?X ? |||F /|||X ? |||F . We report the minimum error obtained by choosing the optimal regularization parameter or the optimal initialization.
Although the regularization parameter could be selected by leaving out some entries and measuring
the error on these entries, we will not go into tensor completion here for the sake of simplicity.
Figure 2 (a) and (b) show the result of this experiment. The left panel shows the relative error for 3
representative values of ? for the subspace norm. The black dash-dotted line shows the minimum
error across all the ??s. The magenta dashed
the error corresponding to the theoretically
? line showsp
?
motivated choice ? = ?(maxk ( nk + H K?1 ) + 2 log(K)) for each ?. The two vertical
Q
lines are thresholds of ? from Corollary 1 corresponding to ?1 and ?2 , namely, ?1 /( k nk )1/4 and
Q
?2 /( k nk )1/4 . It confirms that there is a rather sharp increase in the error around the theoretically
predicted places (see also Figure 1(b)). We can also see that the optimal ? should grow linearly with
?. For large ? (small SNR), the best relative error is 1 since the optimal choice of the regularization
parameter ? leads to predicting with Xb = 0.
Figure 2 (b) compares the performance of the subspace norm to other approaches. For each method
the smallest error corresponding to the optimal choice of the regularization parameter ? is shown.
7
1
1
1.8
0.9
1.6
0.8
1.4
0.7
1.2
Relative Error
1.5
Relative Error
Relative Error
2
?1 = 1.62
?2 = 5.46
?3 = 14.38
min error
suggested
0.6
0.5
0.4
0.3
0.5
cp
subspace
latent
overlap
optimistic
0.2
0.1
0
0
0.5
1
?
(a)
1.5
2
0
0
0.5
1
?
1.5
cp
subspace
latent
overlap
suggested
optimistic
1
0.8
0.6
0.4
0.2
2
0
0
500
(b)
1000
?
1500
2000
(c)
Figure 2: Tensor denoising. (a) The subspace approach with three representative ??s on synthetic
data. (b) Comparison of different methods on synthetic data. (c) Comparison on amino acids data.
In addition, to place the numbers in context, we plot the line corresponding to
p P
R k nk log(K)
Relative error =
? ?,
|||X ? |||F
(11)
which we call ?optimistic?. This can be motivated from considering the (non-tractable) maximum
likelihood estimator for CP decomposition (see Appendix A).
Clearly, the error of CP, the subspace norm, and ?optimistic? grows at the same rate, much slower
than overlap and latent. The error of CP increases beyond 1, as no regularization is imposed (see
Appendix C for more experiments). We can see that both CP and the subspace norm are behaving
near optimally in this setting, although such behavior is guaranteed for the subspace norm whereas
it is hard to give any such guarantee for the CP decomposition based on nonlinear optimization.
4.2
Amino acids data
The amino acid dataset [5] is a semi-realistic dataset commonly used as a benchmark for low rank
tensor modeling. It consists of five laboratory-made samples, each one contains different amounts
of tyrosine, tryptophan and phenylalanine. The spectrum of their excitation wavelength (250-300
nm) and emission (250-450 nm) are measured by fluorescence, which gives a 5 ? 201 ? 61 tensor.
As the true factors are known to be these three acids, this data perfectly suits the CP model. The true
rank is fed into CP and the proposed approach as H = 3. We computed the solutions of CP for 20
different random initializations, and the solutions of other approaches with 20 different values of ?.
For the subspace and the overlapped approach, ??s are logarithmically spaced between 103 and 105 .
For the latent approach, ??s are logarithmically spaced between 104 and 106 . Again, we include the
optimistic scaling (11) to put the numbers in context.
Figure 2(c) shows the smallest relative error achieved by all methods we compare. Similar to the
synthetic data, both CP and the subspace norm behaves near ideally, though the relative error of
CP can be larger than 1 due to the lack of regularization. Interestingly the theoretically suggested
scaling of the regularization parameter ? is almost optimal.
5
Conclusion
We have settled a conjecture posed by [20] and showed that indeed O(nK/4 ) signal-to-noise ratio
is sufficient also for odd order tensors. Moreover, our analysis shows an interesting two-phase behavior of the error. This finding lead us to the development of the proposed subspace norm. The
b (1) , . . . , P
b (K) , which are
proposed norm is defined with respect to a set of orthonormal matrices P
estimated by mode-wise singular value decompositions. We have analyzed the denoising performance of the proposed norm, and shown that the error can be bounded by the sum of two terms,
which can be interpreted as an approximation error term coming from the first (non-convex) step,
and an estimation error term coming from the second (convex) step.
8
| 5823 |@word mild:1 version:4 polynomial:1 norm:54 c0:2 hu:1 km:1 simulation:1 confirms:1 bn:2 decomposition:14 contains:1 interestingly:2 current:1 recovered:2 written:1 numerical:2 concatenate:1 realistic:1 chicago:2 confirming:1 plot:2 selected:1 xk:1 core:2 simpler:1 five:1 along:3 constructed:2 direct:1 become:2 ik:3 prove:4 consists:2 theoretically:5 indeed:1 tryptophan:1 behavior:5 dist:3 inspired:1 unfolded:1 considering:1 becomes:2 estimating:1 moreover:3 underlying:2 notation:3 panel:1 bounded:1 interpreted:1 eigenvector:2 finding:2 guarantee:5 control:2 unit:2 incoherence:1 interpolation:1 black:1 plus:4 initialization:3 studied:3 factorization:1 recursive:4 unfold:2 area:2 universal:1 projection:1 word:1 suggest:2 get:2 put:1 context:3 influence:1 restriction:2 equivalent:1 imposed:2 go:1 convex:7 survey:1 rectangular:3 ke:1 simplicity:1 recovery:2 estimator:17 insight:1 nuclear:8 orthonormal:4 oh:1 classic:1 hvec:1 qq:2 construction:1 suppose:1 user:1 overlapped:5 logarithmically:3 predicts:2 observed:1 svds:1 technological:1 intuition:1 mu:1 ui:3 cnm:2 ideally:1 phenylalanine:1 tyrosine:1 basis:1 easily:1 k0:1 hosvd:2 fiber:2 jain:1 fast:1 tell:1 choosing:2 whose:1 emerged:1 widely:1 solve:1 larger:1 say:1 posed:1 otherwise:2 noisy:2 rr:1 propose:2 interaction:1 product:8 coming:2 achieve:1 cnk:1 frobenius:1 chemometrics:1 convergence:4 converges:1 polylog:2 depending:1 completion:3 fixing:1 measured:1 op:2 odd:4 recovering:6 c:1 predicted:2 implies:2 uu:1 centered:1 require:3 hx:1 decompose:1 hold:2 sufficiently:2 around:2 normal:2 achieves:2 smallest:3 estimation:3 fluorescence:1 successfully:1 unfolding:22 minimization:4 clearly:2 gaussian:10 modified:1 rather:1 corollary:10 emission:1 improvement:1 rank:31 likelihood:2 i1:3 arg:1 dual:3 development:2 construct:3 identical:1 qinqing:2 nearly:3 np:2 contaminated:3 simplify:1 richard:2 report:1 randomly:4 phase:7 n1:5 suit:1 zheng:1 analyzed:2 mixture:2 wedin:1 behind:1 xb:2 partial:1 orthogonal:2 conduct:1 initialized:1 desired:1 rn3:1 increased:1 column:7 instance:1 modeling:1 measuring:1 ordinary:7 cost:1 deviation:5 entry:5 snr:1 optimally:2 connect:1 perturbed:2 corrupted:3 synthetic:7 fundamental:1 hopkins:1 again:1 settled:1 rn1:2 nm:5 choose:3 slowly:1 satisfied:1 wishart:2 worse:1 leading:4 rn2:1 b2:2 wk:1 depends:1 performed:1 view:1 hu2:3 later:1 optimistic:5 analyze:1 recover:3 parallel:1 contribution:2 square:6 ni:1 kek:1 qk:2 acid:4 spaced:3 wisdom:1 definition:1 tucker:1 proof:2 associated:2 sampled:1 proved:1 dataset:2 improves:1 subtle:1 higher:3 though:1 furthermore:2 correlation:2 hand:3 horizontal:1 replacing:1 nonlinear:2 lack:1 mode:15 quality:1 grows:1 requiring:1 true:8 normalized:1 hence:1 regularization:9 alternating:1 symmetric:2 laboratory:1 i2:1 excitation:1 demonstrate:3 performs:1 cp:16 reasoning:2 ranging:1 wise:4 recently:2 behaves:1 empirically:2 discussed:1 m1:1 numerically:1 vec:1 uv:2 polyadic:1 behaving:1 dominant:1 recent:1 showed:2 conjectured:1 inf:1 inequality:1 minimum:4 ndk:6 dashed:1 semi:2 u0:3 ii:2 full:1 signal:19 reduces:1 ui1:1 achieved:1 c1:2 addition:2 want:1 whereas:1 singular:14 leaving:1 grow:1 appropriately:1 sure:1 call:1 curious:1 ee:1 symmetrically:1 ideal:5 near:2 iii:1 enough:2 variety:1 perfectly:1 inner:4 cn:4 translates:1 whether:1 motivated:3 eigenvectors:1 amount:1 reduced:1 generate:1 canonical:1 notice:1 dotted:1 estimated:3 write:1 express:1 key:1 nevertheless:1 threshold:5 convert:1 sum:2 inverse:1 extends:1 place:2 almost:1 appendix:9 scaling:3 bound:15 
dash:1 guaranteed:1 nontrivial:3 strength:1 kronecker:6 n3:3 sake:1 dominated:1 u1:1 speed:1 span:10 min:4 conjecture:2 combination:1 smaller:1 slightly:4 across:1 ur:1 wi:1 appealing:1 happens:1 pr:2 previously:1 know:1 tractable:2 fed:1 end:1 generalizes:1 operation:1 apply:1 observe:1 spectral:4 slower:1 denotes:2 top:3 include:1 prof:1 establish:1 tensor:56 question:2 blend:1 kth:8 subspace:32 distance:1 index:3 ratio:13 demonstration:1 neuroimaging:1 statement:1 ryota:1 trace:1 motivates:1 unknown:1 vertical:1 observation:4 datasets:1 benchmark:1 truncated:1 defining:1 maxk:1 rn:10 perturbation:2 sharp:2 arbitrary:1 ttic:1 bk:1 pair:1 required:4 namely:1 connection:1 address:2 beyond:1 suggested:3 below:2 regime:2 max:2 power:2 overlap:3 natural:2 regularized:1 predicting:1 hu1:3 residual:2 mn:1 improve:1 axis:1 incoherent:1 kf:1 asymptotic:1 relative:12 highlight:1 interesting:1 proven:2 sufficient:2 xp:8 row:2 last:1 side:2 uchicago:1 institute:1 wide:2 sparse:1 distributed:1 dimension:2 transition:3 avoids:1 commonly:1 made:1 ignore:1 confirm:2 active:1 b1:2 assumed:1 alternatively:1 spectrum:1 latent:12 table:4 ku:2 obtaining:1 interpolating:1 necessarily:2 constructing:1 vj:1 pk:1 montanari:2 linearly:1 noise:30 n2:3 amino:3 representative:2 uik:1 tomioka:2 concatenating:1 lie:1 toyota:1 theorem:10 rk:2 magenta:1 showing:2 decay:3 dk:1 admits:1 grouping:1 intractable:2 exists:2 restricting:1 adding:2 importance:1 illustrates:1 kx:1 nk:34 gap:1 easier:1 mildly:1 wavelength:1 explore:1 expressed:1 applies:2 satisfies:3 goal:1 identity:1 careful:1 admm:1 hard:3 change:1 specifically:1 denoising:8 lemma:4 called:2 svd:4 select:1 arises:1 nk0:1 |
5,329 | 5,824 | Fast, Provable Algorithms for Isotonic Regression in
all `p-norms ?
Rasmus Kyng
Dept. of Computer Science
Yale University
[email protected]
Anup Rao?
School of Computer Science
Georgia Tech
[email protected]
Sushant Sachdeva
Dept. of Computer Science
Yale University
[email protected]
Abstract
Given a directed acyclic graph G, and a set of values y on the vertices, the Isotonic
Regression of y is a vector x that respects the partial order described by G, and
minimizes kx ? yk , for a specified norm. This paper gives improved algorithms
for computing the Isotonic Regression for all weighted `p -norms with rigorous
performance guarantees. Our algorithms are quite practical, and variants of them
can be implemented to run fast in practice.
1
Introduction
A directed acyclic graph (DAG) G(V, E) defines a partial order on V where u precedes v if there
is a directed path from u to v. We say that a vector x ? RV is isotonic (with respect to G) if it is a
weakly order-preserving mapping of V into R. Let IG denote the set of all x that are isotonic with
respect to G. It is immediate that IG can be equivalently defined as follows:
IG = {x ? RV | xu ? xv for all (u, v) ? E}.
V
(1)
V
Given a DAG G, and a norm k?k on R , the Isotonic Regression of observations y ? R , is given
by x ? IG that minimizes kx ? yk .
Such monotonic relationships are fairly common in data. They allow one to impose only weak
assumptions on the data, e.g. the typical height of a young girl child is an increasing function of her
age, and the heights of her parents, rather than a more constrained parametric model.
Isotonic Regression is an important shape-constrained nonparametric regression method that has
been studied since the 1950?s [1, 2, 3]. It has applications in diverse fields such as Operations Research [4, 5] and Signal Processing [6]. In Statistics, it has several applications (e.g. [7, 8]), and the
statistical properties of Isotonic Regression under the `2 -norm have been well studied, particularly
over linear orderings (see [9] and references therein). More recently, Isotonic regression has found
several applications in Learning [10, 11, 12, 13, 14]. It was used by Kalai and Sastry [10] to provably
learn Generalized Linear Models and Single Index Models; and by Zadrozny and Elkan [13], and
Narasimhan and Agarwal [14] towards constructing binary Class Probability Estimation models.
The most common norms of interest are weighted `p -norms, defined as
( P
1/p
wvp ? |zv |p
, p ? [1, ?),
v?V
kzkw,p =
maxv?V wv ? |zv |,
p = ?,
where wv > 0 is the weight of a vertex v ? V. In this paper, we focus on algorithms for Isotonic
Regression under weighted `p -norms. Such algorithms have been applied to large data-sets from
Microarrays [15], and from the web [16, 17].
?
?
Code from this work is available at https://github.com/sachdevasushant/Isotonic
Part of this work was done when this author was a graduate student at Yale University.
1
Given a DAG G, and observations y ? RV , our regression problem can be expressed as the following
convex program:
min kx ? ykw,p
1.1
such that xu ? xv for all (u, v) ? E.
(2)
Our Results
Let |V | = n, and |E| = m. We?ll assume that G is connected, and hence m ? n ? 1.
`p -norms, p < ?. We give a unified, optimization-based framework for algorithms that provably
solve the Isotonic Regression problem for p ? [1, ?). The following is an informal statement of our
main theorem (Theorem 3.1) in this regard (assuming wv are bounded by poly(n)).
Theorem 1.1 (Informal). There is an algorithm that, given a DAG G, observations y, and ? > 0,
runs in time O(m1.5 log2 n log n/?), and computes an isotonic xA LG ? IG such that
p
p
kxA LG ? ykw,p ? min kx ? ykw,p + ?.
x?IG
The previous best time bounds were O(nm log
p = 1 [19].
n2
m)
for p ? (1, ?) [18] and O(nm + n2 log n) for
`? -norms. For `? -norms, unlike `p -norms for p ? (1, ?), the Isotonic Regression problem need
not have a unique solution. There are several specific solutions that have been studied in the literature
(see [20] for a detailed discussion). In this paper, we show that some of them (M AX , M IN , and AVG
to be precise) can be computed in time linear in the size of G.
Theorem 1.2. There is an algorithm that, given a DAG G(V, E), a set of observations y ? RV , and
weights w, runs in expected time O(m), and computes an isotonic xI NF ? IG such that
kxI NF ? ykw,? = min kx ? ykw,? .
x?IG
Our algorithm achieves the best possible running time. This was not known even for linear or tree
orders. The previous best running time was O(m log n) [20].
Strict Isotonic Regression. We also give improved algorithms for Strict Isotonic Regression. Given
observations y, and weights w, its Strict Isotonic Regression xS TRICT is defined to be the limit of x
?p
as p goes to ?, where x
?p is the Isotonic Regression for y under the norm k?kw,p . It is immediate
that xStrict is an `? Isotonic Regression for y. In addition, it is unique and satisfies several desirable
properties (see [21]).
Theorem 1.3. There is an algorithm that, given a DAG G(V, E), a set of observation y ? RV , and
weights w, runs in expected time O(mn), and computes xS TRICT , the strict Isotonic Regression of y.
The previous best running time was O(min(mn, n? ) + n2 log n) [21].
1.2
Detailed Comparison to Previous Results
`p -norms, p < ?. There has been a lot of work for fast algorithms for special graph families, mostly
for p = 1, 2 (see [22] for references). For some cases where G is very simple, e.g. a directed path
(corresponding to linear orders), or a rooted, directed tree (corresponding to tree orders), several
works give algorithms with running times of O(n) or O(n log n) (see [22] for references).
Theorem 1.1 not only improves on the previously best known algorithms for general DAGs, but also
on several algorithms for special graph families (see Table 1). One such setting is where V is a point
set in d-dimensions, and (u, v) ? E whenever ui ? vi for all i ? [d]. This setting has applications
to data analysis, as in the example given earlier, and has been studied extensively (see [23] for
references). For this case, it was proved by Stout (see Prop. 2, [23]) that these partial orders can be
embedded in a DAG with O(n logd?1 n) vertices and edges, and that this DAG can be computed in
time linear in its size. The bounds then follow by combining this result with our theorem above.
We obtain improved running times for all `p norms for DAGs with m = o(n2 / log6 n), and for
d-dim point sets for d ? 3. For d = 2, Stout [19] gives an O(n log2 n) time algorithm.
2
Table 1: Comparison to previous best results for `p -norms, p 6= ?
d-dim vertex set, d ? 3
arbitrary DAG
Previous best
`1
`p , 1 < p < ?
n2 logd n [19]
n2 logd+1 n [19]
2
nm + n2 log n [15] nm log nm [18]
This paper
`p , p < ?
n1.5 log1.5(d+1) n
m1.5 log3 n
For sake of brevity, we have ignored the O(?) notation implicit in the bounds, and o(log n) terms. The results
are reported assuming an error parameter ? = n??(1) , and that wv are bounded by poly(n).
`? -norms. For weighted `? -norms on arbitrary DAGs, the previous best result was O(m log n +
n log2 n) due to Kaufman and Tamir [24]. A manuscript by Stout [20] improves it to O(m log n).
These algorithms are based on parametric search, and are impractical. Our algorithm is simple,
achieves the best possible running time, and only requires random sampling and topological sort.
In a parallel independent work, Stout [25] gives O(n)-time algorithms for linear order, trees, and
d-grids, and an O(n logd?1 n) algorithm for point sets in d-dimensions. Theorem 1.2 implies the
linear-time algorithms immediately. The result for d-dimensional point sets follows after embedding
the point sets into DAGs of size O(n logd?1 n), as for `p -norms.
Strict Isotonic Regression. Strict Isotonic regression was introduced and studied in [21]. It also
gave the only previous algorithm for computing it, that runs in time O(min(mn, n? ) + n2 log n).
Theorem 1.3 is an improvement when m = o(n log n).
1.3
Overview of the Techniques and Contribution
`p -norms, p < ?. It is immediate that Isotonic Regression, as formulated in Equation (2), is
a convex programming problem. For weighted `p -norms with p < ?, applying generic convexprogramming algorithms such as Interior Point methods to this formulation leads to algorithms that
are quite slow.
We obtain faster algorithms for Isotonic Regression by replacing the computationally intensive component of Interior Point methods, solving systems of linear equations, with approximate solves. This
approach has been used to design fast algorithms for generalized flow problems [26, 27, 28].
We present a complete proof of an Interior Point method for a large class of convex programs that
only requires approximate solves. Daitch and Spielman [26] had proved such a result for linear
programs. We extend this to `p -objectives, and provide an improved analysis that only requires
linear solvers with a constant factor relative error bound, whereas the method from Daitch and
Spielman required polynomially small error bounds.
The linear systems in [27, 28] are Symmetric Diagonally Dominant (SDD) matrices. The seminal
work of Spielman and Teng [29] gives near-linear time approximate solvers for such systems, and
later research has improved these solvers further [30, 31]. Daitch and Spielman [26] extended these
solvers to M-matrices (generalizations of SDD). The systems we need to solve are neither SDD,
nor M-matrices. We develop fast solvers for this new class of matrices using fast SDD solvers. We
stress that standard techniques for approximate inverse computation, e.g. Conjugate Gradient, are
not sufficient for approximately solving our systems in near-linear time. These methods have at least
a square root dependence on the condition number, which inevitably becomes huge in IPMs.
`? -norms and Strict Isotonic Regression. Algorithms for `? -norms and Strict Isotonic Regression are based on techniques presented in a recent paper of Kyng et al. [32]. We reduce `? -norm
Isotonic Regression to the following problem, referred to as Lipschitz learning on directed graphs
in [32] (see Section 4 for details) : We have a directed graph H, with edgenlengths givenoby len.
x(u)?x(v)
Given x ? RV (H) , for every (u, v) ? E(H), define grad+
G [x](u, v) = max
len(u,v) , 0 . Now,
given y that assigns real values to a subset of V (H), the goal is to determine x ? RV (H) that agrees
with y and minimizes max(u,v)?E(H) grad+
G [x](u, v).
3
The above problem is solved in O(m + n log n) time for general directed graphs in [32]. We give
a simple linear-time reduction to the above problem with the additional property that H is a DAG.
For DAGs, their algorithm can be implemented to run in O(m + n) time.
It is proved in [21] that computing the Strict Isotonic Regression is equivalent to computing the
isotonic vector that minimizes the error under the lexicographic ordering (see Section 4). Under the
same reduction as in the `? -case, we show that this is equivalent to minimizing grad+ under the
lexicographic ordering. It is proved in [32] that the lex-minimizer can be computed with basically n
calls to `? -minimization, immediately implying our result.
1.4
Further Applications
The IPM framework that we introduce to design our algorithm for Isotonic Regression (IR), and
the associated results, are very general, and can be applied as-is to other problems. As a concrete
application, the algorithm of Kakade et al. [12] for provably learning Generalized Linear Models
and Single Index Models learns 1-Lipschitz monotone functions on linear orders in O(n2 ) time
(procedure LPAV). The structure of the associated convex program resembles IR. Our IPM results
and solvers immediately imply an n1.5 time algorithm (up to log factors).
Improved algorithms for IR (or for learning Lipschitz functions) on d-dimensional point sets could
be applied towards learning d-dim multi-index models where the link-function is nondecreasing
w.r.t. the natural ordering on d-variables, extending [10, 12]. They could also be applied towards
constructing Class Probability Estimation (CPE) models from multiple classifiers, by finding a mapping from multiple classifier scores to a probabilistic estimate, extending [13, 14].
Organization. We report experimental results in Section 2. An outline of the algorithms and analysis for `p -norms, p < ?, are presented in Section 3. In Section 4, we define the Lipschitz regression
problem on DAGs, and give the reduction from `? -norm Isotonic Regression. We defer a detailed
description of the algorithms, and most proofs to the accompanying supplementary material.
2
Experiments
An important advantage of our algorithms is that they can be implemented quite efficiently. Our
algorithms
? are based on what is known as a short-step method (see Chapter 11, [33]), that leads to
an O( m) bound on the number of iterations. Each iteration corresponds to one linear solve in the
Hessian matrix. A variant, known as the long-step method (see [33]) typically require much fewer
iterations, about log m, even though the only provable bound known is O(m).
Running Time in Practice
For the important special case of `2 -Isotonic
Regression, we have implemented our algorithm in Matlab, with long step barrier method,
combined with our approximate solver for the
linear systems involved. A number of heuristics
recommended in [33] that greatly improve the
running time in practice have also been incorporated. Despite the changes, our implementation is theoretically correct and also outputs
an upper bound on the error by giving a feasible point to the dual program. Our implementation is available at https://github.com/
sachdevasushant/Isotonic.
80
70
Time in Secs
60
Grid?1
Grid?10
RandReg?1
RandReg?10
50
40
30
20
10
0
0
2
4
6
Number of Vertices
8
10
4
x 10
In the figure, we plot average running times
(with error bars denoting standard deviation) for `2 -Isotonic Regression on DAGs, where the underlying graphs are 2-d grid graphs and random regular graphs (of constant degree). The edges for
2-d grid graphs are all oriented towards one of the corners. For random regular graphs, the edges
are oriented according to a random permutation. The vector of initial observations y is chosen to be
a random permutation of 1 to n obeying the partial order, perturbed by adding i.i.d. Gaussian noise
to each coordinate. For each graph size, and two different noise levels (standard deviation for the
noise on each coordinate being 1 or 10), the experiment is repeated multiple time. The relative error
in the objective was ascertained to be less than 1%.
4
3
Algorithms for `p -norms, p < ?
Without loss of generality, we assume y ? [0, 1]n . Given p ? [1, ?), let p-ISO denote the following
`p -norm Isotonic Regression problem, and OPTp-ISO denote its optimum:
p
kx ? ykw,p .
min
x?IG
(3)
Let w p denote the entry-wise pth power of w. We assume the minimum entry of w p is 1, and the
p
maximum entry is wmax
? exp(n). We also assume the additive error parameter ? is lower bounded
e notation to hide poly log log n factors.
by exp(?n), and that p ? exp(n). We use the O
Theorem 3.1. Given a DAG G(V, E), a set of observations y ? [0, 1]V , weights w, and an error
p
e m1.5 log2 n log (npwmax
parameter ? > 0, the algorithm I SOTONIC IPM runs in time O
/?) , and
with probability at least 1 ? 1/n, outputs a vector xA LG ? IG with
p
kxA LG ? ykw,p ? OPTp-ISO + ?.
The algorithm I SOTONIC IPM is obtained by an appropriate instantiation of a general Interior Point
Method (IPM) framework which we call A PPROX IPM.
To state the general IPM result, we need to introduce two important concepts. These concepts are
defined formally in Supplementary Material Section A.1. The first concept is self-concordant barrier
functions; we denote the class of these functions by SCB. A self-concordant barrier function f is
a special convex function defined on some convex domain set S. The function approaches infinity
at the boundary of S. We associate with each f a complexity parameter ?(f ) which measures how
well-behaved f is. The second important concept is the symmetry of a point z w.r.t. S: A nonnegative scalar quantity sym(z, S). A large symmetry value guarantees that a point is not too close
to the boundary of the set. For our algorithms to work, we need a starting point whose symmetry is
not too small. We later show that such a starting point can be constructed for the p-ISO problem.
A PPROX IPM is a primal path following IPM: Given a vector c, a domain D and a barrier function
f ? SCB for D, we seek to compute minx?D hc, xi . To find a minimizer, we consider a function
fc,? (x) = f (x) + ? hc, xi, and attempt to minimize fc,? for changing values of ? by alternately
updating x and ?. As x approaches the boundary of D the f (x) term grows to infinity and with
some care, we can use this to ensure we never move to a point x outside the feasible domain D. As
we increase ?, the objective term hc, xi contributes more to fc,? . Eventually, for large enough ?, the
objective value hc, xi of the current point x will be close to the optimum of the program.
To stay near the optimum x for each new value of ?, we use a second-order method (Newton steps)
to update x when ? is changed. This means that we minimize a local quadratic approximation to our
objective. This requires solving a linear system Hz = g, where g and H are the gradient and Hessian
of f at x respectively. Solving this system to find z is the most computationally intensive aspect of
the algorithm. Crucially we ensure that crude approximate solutions to the linear system suffices,
allowing the algorithm to use fast approximate solvers for this step. A PPROX IPM is described in
detail in Supplementary Material Section A.5, and in this section we prove the following theorem.
Theorem 3.2. Given a convex bounded domain D ? IRn and vector c ? IRn , consider the program
min
x?D
hc, xi .
(4)
Let OPT denote the optimum of the program. Let f ? SCB be a self-concordant barrier function
for D. Given a initial point x0 ? D, a value upper bound K ? sup{hc, xi : x ? D}, a symmetry
lower bound s ? sym(x0 , D), and an error parameter 0 < < 1, the algorithm A PPROX IPM runs
for
p
Tapx = O
?(f ) log (?(f )/?s)
iterations and returns a point xapx , which satisfies
hc,xapx i?OPT
K?OPT
? .
The algorithm requires O(Tapx ) multiplications of vectors by a matrix M (x) satisfying 9/10 ?
H(x)?1 M (x) 11/10 ? H(x)?1 , where H(x) is the Hessian of f at various points x ? D
specified by the algorithm.
5
We now reformulate the p-ISO program to state a version which can be solved using the A PPROX IPM framework. Consider points (x, t) ? IRn ? IRn , and define a set
p
DG = {(x, t) : for all v ? V . |x(v) ? y(v)| ? t(v) ? 0} .
To ensure boundedness, as required by A PPROX IPM, we add the constraint hw p , ti ? K.
Definition 3.3. We define the domain DK = (IG ? IRn ) ? DG ? {(x, t) : hw p , ti ? K} .
The domain DK is convex, and allows us to reformulate program (3) with a linear objective:
min hw p , ti
x,t
such that (x, t) ? DK .
(5)
Our next lemma determines a choice of K which suffices to ensure that programs (3) and (5) have
the same optimum. The lemma is proven in Supplementary Material Section A.4.
p
Lemma 3.4. For all K ? 3nwmax
, DK is non-empty and bounded, and the optimum of program (5)
is OPTp-ISO .
The following result shows that for program (5) we can compute a good starting point for the path
following IPM efficiently. The algorithm G OOD S TART computes a starting point in linear time by
running a topological sort on the vertices of the DAG G and assigning values to x according to the
vertex order of the sort. Combined with an appropriate choice of t, this suffices to give a starting
point with good symmetry. The algorithm G OOD S TART is specified in more detail in Supplementary
Material Section A.4, together with a proof of the following lemma.
Lemma 3.5. The algorithm G OOD S TART runs in time O(m) and returns an initial point (x0 , t0 )
p
that is feasible, and for K = 3nwmax
, satisfies sym((x0 , t0 ), DK ) ? 18n21pwmax
p .
Combining standard results on self-concordant barrier functions with a barrier for p-norms developed by Hertog et al. [34], we can show the following properties of a function FK whose exact
definition is given in Supplementary Material Section A.2.
Corollary 3.6. The function FK is a self-concordant barrier for DK and it has complexity parameter ?(FK ) = O(m). Its gradient gFK is computable in O(m) time, and an implicit representation
of the Hessian HFK can be computed in O(m) time as well.
The key reason we can use A PPROX IPM to give a fast algorithm for Isotonic Regression is that we
develop an efficient solver for linear equations in the Hessian of FK . The algorithm H ESSIAN S OLVE
solves linear systems in Hessian matrices of the barrier function FK . The Hessian is composed of
a structured main component plus a rank one matrix. We develop a solver for the main component
by doing a change of variables to simplify its structure, and then factoring the matrix by a blockwise LDL> -decomposition. We can solve straightforwardly in the L and L> , and we show that
the D factor consists of blocks that are either diagonal or SDD, so we can solve in this factor
approximately using a nearly-linear time SDD solver. The algorithm H ESSIAN S OLVE is given in
full in Supplementary Material Section A.3, along with a proof of the following result.
Theorem 3.7. For any instance of program (5) given by some (G, y), at any point z ? DK , for any
vector a, H ESSIAN S OLVE((G, y), z, ?, a) returns a vector b = M a for a symmetric linear operator
M satisfying 9/10 ? HFK (z)?1 M 11/10 ? HFK (z)?1 . The algorithm fails with probability < ?.
e log n log(1/?)).
H ESSIAN S OLVE runs in time O(m
These are the ingredients we need to prove our main result on solving p-ISO. The algorithm I SO TONIC IPM is simply A PPROX IPM instantiated to solve program (5), with an appropriate choice of
parameters. We state I SOTONIC IPM informally as Algorithm 1 below. I SOTONIC IPM is given in
full as Algorithm 6 in Supplementary Material Section A.5.
Proof of Theorem 3.1: I SOTONIC IPM uses the symmetry lower bound s = 18n21pwmax
p , the value
p
?
upper bound K = 3nwmax , and the error parameter = K when calling A PPROX IPM. By Corollary 3.6, the barrier function FK used by I SOTONIC IPM has complexity parameter ?(FK ) ? O(m).
By Lemma 3.5 the starting point (x0 , t0 ) computed by G OOD S TART and used by I SOTONIC IPM is
feasible and has symmetry sym(x0 , DK ) ? 18n21pwmax
p .
hw p ,t
i?OPT
apx
By Theorem 3.2 the point (xapx , tapx ) output by I SOTONIC IPM satisfies
? , where
K?OPT
p
OPT is the optimum of program (5), and K = 3nwmax is the value used by I SOTONIC IPM for the
6
constraint hw p , ti ? K, which is an upper bound on the supremum of objective values of feasible
p
points of program (5). By Lemma 3.4, OPT = OPTp-ISO . Hence, ky ? xapx kp ? hw p , tapx i ?
OPT + K = OPTp-ISO + ?.
Again, by Theorem 3.2, the number of calls to H ESSIAN S OLVE by I SOTONIC IPM is bounded by
p
?
p
O(T ) ? O
?(FK ) log (?(FK )/?s) ? O m log (npwmax/?) .
Each call to H ESSIAN S OLVE fails with probability < n?3 . Thus,
probability
? by a unionpbound, the
that some call to H ESSIAN S OLVE fails is upper bounded by O( m log(npwmax
/?))/n3 = O(1/n).
?
p
e log2 n),
The algorithm uses O ( m log (npwmax/?)) calls to H ESSIAN S OLVE that each take time O(m
p
2
e m1.5 log n log (npwmax/?) .
as ? = n3 . Thus the total running time is O
Algorithm 1: Sketch of Algorithm I SOTONIC IPM
1. Pick a starting point (x, t) using the G OOD S TART algorithm
2. for r = 1, 2
3.
if r = 1 then ? ? ?1; ? ? 1; c = ? gradient of f at (x, t)
4.
else ? ? 1; ? ? 1/poly(n); c = (0, w p )
5.
for i ? 1, . . . , C1 m0.5 log m :
6.
? ? ? ? (1 + ?C2 m?0.5 )
7.
Let H, g be the Hessian and gradient of fc,? at x
8.
Call H ESSIAN S OLVE to compute z ? H ?1 g
9.
Update x ? x ? z
10. Return x.
4
Algorithms for `? and Strict Isotonic Regression
We now reduce `? Isotonic Regression and Strict Isotonic Regression to the Lipschitz Learning
problem, as defined in [32]. Let G = (V, E, len) be any DAG with non-negative edge lengths
len : E ? R?0 , and y : V ? R ? {?} a partial labeling. We think of a partial labeling as a function
that assigns real values to a subset of the vertex set V . We call such a pair (G, y) a partially-labeled
DAG. For a complete labelingnx : V ? R,o define the gradient on an edge (u, v) ? E due to x
x(u)?x(v)
+
to be grad+
G [x](u, v) = max
len(u,v) , 0 . If len(u, v) = 0, then gradG [x](u, v) = 0 unless
x(u) > x(v), in which case it is defined as +?. Given a partially-labelled DAG (G, y), we say that
a complete assignment x is an inf-minimizer if it extends y, and for all other complete assignments
x0 that extends y we have
+ 0
max grad+
G [x](u, v) ? max gradG [x ](u, v).
(u,v)?E
Note that when len = 0, then
(u,v)?E
max(u,v)?E grad+
G [x](u, v)
< ? if and only if x is isotonic on G.
Suppose we are interested in Isotonic Regression on a DAG G(V, E) under k?kw,? . To reduce this
problem to that of finding an inf-minimizer, we add some auxiliary nodes and edges to G. Let VL , VR
be two copies of V . That is, for every vertex u ? V , add a vertex uL to VL and a vertex uR to VR . Let
EL = {(uL , u)}u?V and ER = {(u, uR )}u?V . We then let len0 (uL , u) = 1/w(u) and len0 (u, uR ) =
1/w(u). All other edge lengths are set to 0. Finally, let G0 = (V ? V ? V , E ? E ? E , len0 ).
L
R
L
R
The partial assignment y 0 takes real values only on the the vertices in VL ? VR . For all u ? V ,
y 0 (uL ) := y(u), y 0 (uR ) := y(u) and y 0 (u) := ?. (G0 , y 0 ) is our partially-labeled DAG. Observe
that G0 has n0 = 3n vertices and m0 = m + 2n edges.
Lemma 4.1. Given a DAG G(V, E), a set of observations y ? RV , and weights w, construct G0
and y 0 as above. Let x be an inf-minimizer for the partially-labeled DAG (G0 , y 0 ). Then, x |V is the
Isotonic Regression of y with respect to G under the norm k?kw,? .
Proof. We note that since the vertices corresponding to V in (G0 , y 0 ) are connected to each other by
zero length edges, max(u,v)?E grad+
G [x](u, v) < ? iff x is isotonic on those edges. Since G is a
DAG, we know that there are isotonic labelings on G. When x is isotonic on vertices corresponding
to V , gradient is zero on all the edges going in between vertices in V . Also, note that every vertex
7
x corresponding to V in G0 is attached to two auxiliary nodes xL ? VL , xR ? VR . We also have
y 0 (xL ) = y 0 (xR ) = y(x). Thus, for any x that extends y and is Isotonic on G0 , the only non-zero
entries in grad+ correspond to edges in ER and EL , and thus
max grad+
G0 [x](u, v) = max wu ? |y(u) ? x(u)| = kx ? ykw,? .
(u,v)?E 0
u?V
Algorithm C OMP I NF M IN from [32] is proved to compute the inf-minimizer, and is claimed to work
for directed graphs (Section 5, [32]). We exploit the fact that Dijkstra?s algorithm in C OMP I NF M IN
can be implemented in O(m) time on DAGs using a topological sorting of the vertices, giving a
linear time algorithm for computing the inf-minimizer. Combining it with the reduction given by
the lemma above, and observing that the size of G0 is O(m+n), we obtain Theorem 1.2. A complete
description of the modified C OMP I NF M IN is given in Section B.2. We remark that the solution to
the `? -Isotonic Regression that we obtain has been referred to as AVG `? Isotonic Regression
in the literature [20]. It is easy to modify the algorithm to compute the M AX , M IN `? Isotonic
Regressions. Details are given in Section B.
For Strict Isotonic Regression, we define the lexicographic ordering. Given r ? Rm , let ?r denote
a permutation that sorts r in non-increasing order by absolute value, i.e., ?i ? [m ? 1], |r(?r (i))| ?
|r(?r (i + 1))|. Given two vectors r, s ? Rm , we write r lex s to indicate that r is smaller than s in
the lexicographic ordering on sorted absolute values, i.e.
?j ? [m], |r(?r (j))| < |s(?s (j))| and ?i ? [j ? 1], |r(?r (i))| = |s(?s (i))|
or ?i ? [m], |r(?r (i))| = |s(?s (i))| .
Note that it is possible that r lex s and s lex r while r 6= s. It is a total relation: for every r and s
at least one of r lex s or s lex r is true.
Given a partially-labelled DAG (G, y), we say that a complete assignment x is a lex-minimizer if it
+ 0
extends y and for all other complete assignments x0 that extend y we have grad+
G [x] lex gradG [x ].
Stout [21] proves that computing the Strict Isotonic Regression is equivalent to finding an Isotonic
x that minimizes zu = wu ? (xu ? yu ) in the lexicographic ordering. With the same reduction as
above, it is immediate that this is equivalent to minimizing grad+
G0 in the lex-ordering.
Lemma 4.2. Given a DAG G(V, E), a set of observations y ? RV , and weights w, construct G0
and y 0 as above. Let x be the lex-minimizer for the partially-labeled DAG (G0 , y 0 ). Then, x |V is the
Strict Isotonic Regression of y with respect to G with weights w.
As for inf-minimization, we give a modification of the algorithm C OMP L EX M IN from [32] that
computes the lex-minimizer in O(mn) time. The algorithm is described in Section B.2. Combining
this algorithm with the reduction from Lemma 4.2, we can compute the Strict Isotonic Regression
in O(m0 n0 ) = O(mn) time, thus proving Theorem 1.3.
Acknowledgements. We thank Sabyasachi Chatterjee for introducing the problem to us, and Daniel
Spielman for his advice and comments. We would also like to thank Quentin Stout and anonymous
reviewers for their suggestions. This research was partially supported by AFOSR Award FA955012-1-0175, NSF grant CCF-1111257, and a Simons Investigator Award to Daniel Spielman.
References
[1] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution function for
sampling with incomplete information. The Annals of Mathematical Statistics, 26(4):pp. 641?647, 1955.
[2] D. J. Barlow, R. E .and Bartholomew, J.M. Bremner, and H. D. Brunk. Statistical inference under order
restrictions: the theory and application of Isotonic Regression. Wiley New York, 1972.
[3] F. Gebhardt. An algorithm for monotone Regression with one or more independent variables. Biometrika,
57(2):263?271, 1970.
[4] W. L. Maxwell and J. A. Muckstadt. Establishing consistent and realistic reorder intervals in productiondistribution systems. Operations Research, 33(6):1316?1341, 1985.
[5] R. Roundy. A 98%-effective lot-sizing rule for a multi-product, multi-stage production / inventory system.
Mathematics of Operations Research, 11(4):pp. 699?727, 1986.
[6] S.T. Acton and A.C. Bovik. Nonlinear image estimation using piecewise and local image models. Image
Processing, IEEE Transactions on, 7(7):979?991, Jul 1998.
8
[7] C.I.C. Lee. The Min-Max algorithm and Isotonic Regression. The Annals of Statistics, 11(2):pp. 467?477,
1983.
[8] R. L. Dykstra and T. Robertson. An algorithm for Isotonic Regression for two or more independent
variables. The Annals of Statistics, 10(3):pp. 708?716, 1982.
[9] S. Chatterjee, A. Guntuboyina, and B. Sen. On Risk Bounds in Isotonic and Other Shape Restricted
Regression Problems. The Annals of Statistics, to appear.
[10] A. T. Kalai and R. Sastry. The isotron algorithm: High-dimensional Isotonic Regression. In COLT, 2009.
[11] T. Moon, A. Smola, Y. Chang, and Z. Zheng. Intervalrank: Isotonic Regression with listwise and pairwise
constraints. In WSDM, pages 151?160. ACM, 2010.
[12] S. M Kakade, V. Kanade, O. Shamir, and A. Kalai. Efficient learning of generalized linear and single
index models with Isotonic Regression. In NIPS. 2011.
[13] B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates.
KDD, pages 694?699, 2002.
[14] H. Narasimhan and S. Agarwal. On the relationship between binary classification, bipartite ranking, and
binary class probability estimation. In NIPS. 2013.
[15] S. Angelov, B. Harb, S. Kannan, and L. Wang. Weighted Isotonic Regression under the l1 norm. In
SODA, 2006.
[16] K. Punera and J. Ghosh. Enhanced hierarchical classification via Isotonic smoothing. In WWW, 2008.
[17] Z. Zheng, H. Zha, and G. Sun. Query-level learning to rank using Isotonic Regression. In Communication,
Control, and Computing, Allerton Conference on, 2008.
[18] D.S. Hochbaum and M. Queyranne. Minimizing a convex cost closure set. SIAM Journal on Discrete
Mathematics, 16(2):192?207, 2003.
[19] Q. F. Stout. Isotonic Regression via partitioning. Algorithmica, 66(1):93?112, 2013.
[20] Q. F. Stout. Weighted l? Isotonic Regression. Manuscript, 2011.
[21] Q. F. Stout. Strict l? Isotonic Regression. Journal of Optimization Theory and Applications, 152(1):121?
135, 2012.
[22] Q. F Stout. Fastest Isotonic Regression algorithms. http://web.eecs.umich.edu/?qstout/
IsoRegAlg_140812.pdf.
[23] Q. F. Stout. Isotonic Regression for multiple independent variables. Algorithmica, 71(2):450?470, 2015.
[24] Y. Kaufman and A. Tamir. Locating service centers with precedence constraints. Discrete Applied Mathematics, 47(3):251 ? 261, 1993.
[25] Q. F. Stout. L infinity Isotonic Regression for linear, multidimensional, and tree orders. CoRR, 2015.
[26] S. I. Daitch and D. A. Spielman. Faster approximate lossy generalized flow via interior point algorithms.
STOC ?08, pages 451?460. ACM, 2008.
[27] A. Madry. Navigating central path with electrical flows. In FOCS, 2013.
[28] Y. ?
T. Lee and A. Sidford. Path finding methods for linear programming: Solving linear programs in
? rank) iterations and faster algorithms for maximum flow. In FOCS, 2014.
O(
[29] D. A. Spielman and S. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification,
and solving linear systems. STOC ?04, pages 81?90. ACM, 2004.
[30] I. Koutis, G. L. Miller, and R. Peng. A nearly-m log n time solver for SDD linear systems. FOCS ?11,
pages 590?598, Washington, DC, USA, 2011. IEEE Computer Society.
[31] M. B. Cohen, R. Kyng, G. L. Miller, J. W. Pachocki, R. Peng, A. B. Rao, and S. C. Xu. Solving SDD
linear systems in nearly m log1/2 n time. STOC ?14, 2014.
[32] R. Kyng, A. Rao, S. Sachdeva, and D. A. Spielman. Algorithms for Lipschitz learning on graphs. In
Proceedings of COLT 2015, pages 1190?1223, 2015.
[33] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[34] D. den Hertog, F. Jarre, C. Roos, and T. Terlaky. A sufficient condition for self-concordance. Math.
Program., 69(1):75?88, July 1995.
[35] J. Renegar. A mathematical view of interior-point methods in convex optimization. SIAM, 2001.
[36] A. Nemirovski. Lecure notes: Interior point polynomial time methods in convex programming, 2004.
[37] E. J. McShane. Extension of range of functions. Bull. Amer. Math. Soc., 40(12):837?842, 12 1934.
[38] H. Whitney. Analytic extensions of differentiable functions defined in closed sets. Transactions of the
American Mathematical Society, 36(1):pp. 63?89, 1934.
9
| 5824 |@word cpe:1 version:1 polynomial:1 norm:31 closure:1 seek:1 crucially:1 decomposition:1 pick:1 boundedness:1 ipm:27 reduction:6 initial:3 score:2 daniel:2 denoting:1 pprox:9 current:1 com:2 assigning:1 additive:1 realistic:1 kdd:1 shape:2 analytic:1 plot:1 update:2 maxv:1 n0:2 implying:1 fewer:1 iso:9 short:1 math:2 node:2 allerton:1 height:2 mathematical:3 along:1 constructed:1 c2:1 ood:5 focs:3 prove:2 consists:1 introduce:2 x0:8 pairwise:1 theoretically:1 peng:2 tart:5 expected:2 nor:1 multi:3 wsdm:1 solver:13 increasing:2 becomes:1 bounded:7 notation:2 underlying:1 what:1 kaufman:2 minimizes:5 narasimhan:2 lpav:1 developed:1 unified:1 finding:4 ghosh:1 sparsification:1 impractical:1 guarantee:2 every:4 nf:5 ti:4 multidimensional:1 biometrika:1 classifier:3 rm:2 control:1 partitioning:2 grant:1 appear:1 reid:1 service:1 local:2 modify:1 xv:2 limit:1 despite:1 establishing:1 path:6 approximately:2 plus:1 therein:1 studied:5 resembles:1 fastest:1 madry:1 nemirovski:1 graduate:1 range:1 directed:9 practical:1 unique:2 practice:3 hfk:3 block:1 silverman:1 xr:2 procedure:1 empirical:1 boyd:1 regular:2 interior:7 close:2 operator:1 risk:1 applying:1 seminal:1 isotonic:71 restriction:1 equivalent:4 www:1 reviewer:1 center:1 go:1 starting:7 convex:12 immediately:3 assigns:2 rule:1 vandenberghe:1 his:1 quentin:1 embedding:1 proving:1 coordinate:2 annals:4 shamir:1 suppose:1 enhanced:1 exact:1 programming:3 us:2 elkan:2 associate:1 robertson:1 satisfying:2 particularly:1 updating:1 labeled:4 solved:2 wang:1 electrical:1 connected:2 sun:1 ordering:8 yk:2 transforming:1 ui:1 complexity:3 weakly:1 solving:8 bipartite:1 girl:1 chapter:1 various:1 instantiated:1 fast:8 effective:1 kp:1 precedes:1 query:1 labeling:2 outside:1 quite:3 heuristic:1 supplementary:8 solve:6 whose:2 say:3 statistic:5 think:1 nondecreasing:1 apx:1 advantage:1 differentiable:1 sen:1 kyng:5 product:1 combining:4 iff:1 description:2 ky:1 parent:1 empty:1 optimum:7 extending:2 hertog:2 develop:3 school:1 solves:3 soc:1 implemented:5 c:1 auxiliary:2 implies:1 indicate:1 correct:1 material:8 require:1 suffices:3 generalization:1 sizing:1 anonymous:1 opt:8 precedence:1 extension:2 accompanying:1 exp:3 mapping:2 m0:3 achieves:2 estimation:4 agrees:1 weighted:7 minimization:2 lexicographic:5 gaussian:1 modified:1 rather:1 kalai:3 gatech:1 corollary:2 ax:2 focus:1 improvement:1 rank:3 tech:1 greatly:1 rigorous:1 dim:3 inference:1 factoring:1 el:2 vl:4 typically:1 her:2 irn:5 relation:1 going:1 labelings:1 interested:1 provably:3 dual:1 colt:2 classification:2 smoothing:1 constrained:2 special:4 fairly:1 ewing:1 field:1 construct:2 never:1 washington:1 sampling:2 kw:3 yu:1 nearly:4 report:1 simplify:1 piecewise:1 oriented:2 sabyasachi:1 dg:2 composed:1 algorithmica:2 isotron:1 n1:2 attempt:1 organization:1 interest:1 huge:1 zheng:2 primal:1 accurate:1 edge:12 partial:7 ascertained:1 unless:1 tree:5 incomplete:1 instance:1 earlier:1 rao:3 sidford:1 whitney:1 assignment:5 bull:1 cost:1 introducing:1 vertex:18 deviation:2 subset:2 entry:4 stout:12 terlaky:1 too:2 reported:1 straightforwardly:1 perturbed:1 eec:1 kxi:1 combined:2 siam:2 stay:1 probabilistic:1 lee:2 together:1 concrete:1 again:1 central:1 nm:5 ldl:1 corner:1 american:1 return:4 concordance:1 student:1 sec:1 ranking:1 vi:1 later:2 root:1 lot:2 closed:1 view:1 doing:1 sup:1 observing:1 zha:1 sort:4 len:7 parallel:1 jul:1 defer:1 simon:1 contribution:1 minimize:2 square:1 ir:3 moon:1 efficiently:2 miller:2 correspond:1 weak:1 basically:1 whenever:1 definition:2 pp:5 
involved:1 proof:6 associated:2 proved:5 improves:2 manuscript:2 maxwell:1 follow:1 ayer:1 brunk:2 improved:6 formulation:1 done:1 though:1 amer:1 generality:1 xa:2 implicit:2 stage:1 smola:1 sketch:1 web:2 replacing:1 wmax:1 nonlinear:1 defines:1 behaved:1 grows:1 lossy:1 usa:1 concept:4 true:1 barlow:1 ccf:1 hence:2 symmetric:2 ll:1 self:6 rooted:1 generalized:5 pdf:1 stress:1 outline:1 complete:7 l1:1 logd:5 image:3 wise:1 recently:1 common:2 overview:1 cohen:1 attached:1 extend:2 m1:4 cambridge:1 dag:31 sastry:2 grid:5 fk:9 mathematics:3 bartholomew:1 had:1 harb:1 add:3 dominant:1 recent:1 hide:1 inf:6 claimed:1 kxa:2 binary:3 wv:4 preserving:1 minimum:1 additional:1 care:1 impose:1 optp:5 omp:4 determine:1 olve:9 recommended:1 signal:1 july:1 rv:9 multiple:4 desirable:1 ykw:8 full:2 faster:3 long:2 gfk:1 award:2 variant:2 regression:61 iteration:5 hochbaum:1 agarwal:2 anup:1 c1:1 addition:1 whereas:1 interval:1 else:1 bovik:1 unlike:1 strict:16 comment:1 hz:1 flow:4 call:8 near:3 enough:1 easy:1 gave:1 scb:3 reduce:3 computable:1 microarrays:1 intensive:2 grad:11 multiclass:1 t0:3 ul:4 queyranne:1 locating:1 kzkw:1 hessian:8 york:1 remark:1 matlab:1 ignored:1 detailed:3 informally:1 nonparametric:1 extensively:1 http:3 nsf:1 diverse:1 write:1 discrete:2 zv:2 key:1 changing:1 neither:1 guntuboyina:1 graph:17 monotone:2 run:10 inverse:1 soda:1 extends:4 family:2 ipms:1 wu:2 bound:14 yale:5 topological:3 quadratic:1 nonnegative:1 renegar:1 infinity:3 constraint:4 n3:2 sake:1 calling:1 aspect:1 min:9 structured:1 according:2 conjugate:1 smaller:1 ur:4 kakade:2 modification:1 den:1 restricted:1 computationally:2 equation:3 trict:2 previously:1 eventually:1 know:1 umich:1 informal:2 available:2 operation:3 observe:1 hierarchical:1 sachdeva:3 generic:1 appropriate:3 running:11 ensure:4 log2:5 newton:1 exploit:1 giving:2 prof:1 dykstra:1 society:2 objective:7 move:1 g0:13 lex:11 quantity:1 parametric:2 dependence:1 diagonal:1 gradient:7 minx:1 navigating:1 link:1 thank:2 reason:1 provable:2 bremner:1 kannan:1 assuming:2 length:3 code:1 index:4 relationship:2 rasmus:2 reformulate:2 minimizing:3 equivalently:1 lg:4 mostly:1 statement:1 blockwise:1 stoc:3 negative:1 design:2 implementation:2 allowing:1 upper:5 observation:10 inevitably:1 dijkstra:1 zadrozny:2 immediate:4 extended:1 incorporated:1 precise:1 tonic:1 communication:1 dc:1 arbitrary:2 introduced:1 daitch:4 pair:1 required:2 specified:3 pachocki:1 alternately:1 nip:2 bar:1 below:1 program:19 max:10 power:1 natural:1 mn:5 improve:1 github:2 imply:1 log1:2 literature:2 acknowledgement:1 multiplication:1 relative:2 afosr:1 embedded:1 loss:1 permutation:3 log6:1 suggestion:1 sdd:8 acyclic:2 proven:1 ingredient:1 age:1 degree:1 sufficient:2 consistent:1 production:1 essian:9 changed:1 diagonally:1 supported:1 copy:1 sym:4 allow:1 barrier:10 absolute:2 listwise:1 regard:1 boundary:3 dimension:2 tamir:2 computes:5 author:1 avg:2 ig:11 pth:1 polynomially:1 log3:1 transaction:2 approximate:8 supremum:1 roos:1 instantiation:1 reorder:1 xi:7 search:1 table:2 kanade:1 learn:1 symmetry:7 contributes:1 angelov:1 inventory:1 hc:7 poly:4 constructing:2 domain:6 main:4 noise:3 n2:9 child:1 repeated:1 xu:4 advice:1 referred:2 georgia:1 slow:1 vr:4 wiley:1 fails:3 obeying:1 xl:2 crude:1 sushant:1 learns:1 young:1 hw:6 theorem:18 specific:1 zu:1 er:2 x:2 dk:8 adding:1 corr:1 chatterjee:2 kx:7 sorting:1 fc:4 simply:1 expressed:1 partially:7 scalar:1 chang:1 monotonic:1 corresponds:1 minimizer:10 satisfies:4 determines:1 acm:3 prop:1 goal:1 formulated:1 
sorted:1 towards:4 labelled:2 lipschitz:6 feasible:5 change:2 typical:1 lemma:11 total:2 teng:2 experimental:1 concordant:5 formally:1 wvp:1 brevity:1 spielman:9 investigator:1 dept:2 ex:1 |
5,330 | 5,825 | Semi-Proximal Mirror-Prox
for Nonsmooth Composite Minimization
Niao He
Georgia Institute of Technology
[email protected]
Zaid Harchaoui
NYU, Inria
[email protected]
Abstract
We propose a new first-order optimization algorithm to solve high-dimensional
non-smooth composite minimization problems. Typical examples of such problems have an objective that decomposes into a non-smooth empirical risk part
and a non-smooth regularization penalty. The proposed algorithm, called SemiProximal Mirror-Prox, leverages the saddle point representation of one part of the
objective while handling the other part of the objective via linear minimization
over the domain. The algorithm stands in contrast with more classical proximal
gradient algorithms with smoothing, which require the computation of proximal
operators at each iteration and can therefore be impractical for high-dimensional
problems. We establish the theoretical convergence rate of Semi-Proximal MirrorProx, which exhibits the optimal complexity bounds, i.e. O(1/?2 ), for the number
of calls to linear minimization oracle. We present promising experimental results
showing the interest of the approach in comparison to competing methods.
1
Introduction
A wide range of machine learning and signal processing problems can be formulated as the minimization of a composite objective:
min F (x) := f (x) + kBxk
x?X
(1)
where X is closed and convex, f is convex and can be either smooth, or nonsmooth yet enjoys
a particular structure. The term kBxk defines a regularization penalty through a norm k ? k, and
x 7? Bx a linear mapping on a closed convex set X.
In many situations, the objective function F of interest enjoys a favorable structure, namely a socalled saddle point representation [6, 11, 13]:
f (x) = max {hx, Azi ? ?(z)}
z?Z
(2)
where Z is convex compact subset of a Euclidean space, and ?(?) is a convex function. Sec. 4 will
give several examples of such situations. Saddle point representations can then be leveraged to use
first-order optimization algorithms.
The simple first option to minimize F is using the so-called Nesterov smoothing technique [19]
along with a proximal gradient algorithm [23], assuming that the proximal operator associated with
X is computationally tractable and cheap to compute. However, this is certainly not the case when
considering problems with norms acting in the spectral domain of high-dimensional matrices, such
as the matrix nuclear-norm [12] and structured extensions thereof [5, 2]. In the latter situation,
another option is to use a smoothing technique now with a conditional gradient or Frank-Wolfe
algorithm to minimize F , assuming that a a linear minimization oracle associated with X is cheaper
to compute than the proximal operator [6, 14, 24]. Neither option takes advantage of the composite
structure of the objective (1) or handles the case when the linear mapping B is nontrivial.
1
Contributions Our goal is to propose a new first-order optimization algorithm, called SemiProximal Mirror-Prox, designed to solve the difficult non-smooth composite optimization problem (1), which does not require the exact computation of proximal operators. Instead, the SemiProximal Mirror-Prox relies upon i) Saddle point representability of f (a less restricted role than
Fenchel-type representation); ii) Linear minimization oracle associated with k ? k in the domain
X. While the saddle point representability of f allows to cure the non-smoothness of f , the linear
minimization over the domain X allows to tackle the non-smooth regularization penalty k ? k. We
establish the theoretical convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal
complexity bounds, i.e. O(1/?2 ), for the number of calls to linear minimization oracle. Furthermore,
Semi-Proximal Mirror-Prox generalizes previously proposed approaches and improves upon them
in special cases:
1. Case B ? 0: Semi-Proximal Mirror-Prox does not require assumptions on favorable geometry of dual domain Z or simplicity of ?(?) in (2).
2. Case B = I: Semi-Proximal Mirror-Prox is competitive with previously proposed approaches [15, 24] based on smoothing techniques.
3. Case of non-trivial B: Semi-Proximal Mirror-Prox is the first proximal-free or conditionalgradient-type optimization algorithm for (1).
Related work The Semi-Proximal Mirror-Prox algorithm belongs to the family of conditional
gradient algorithms, whose most basic instance is the Frank-Wolfe algorithm for constrained smooth
optimization using a linear minimization oracle; see [12, 1, 4]. Recently, in [6, 13], the authors
consider constrained non-smooth optimization when the domain Z has a ?favorable geometry?,
i.e. the domain is amenable to proximal setups (favorable geometry), and establish a complexity
bound with O(1/?2 ) calls to the linear minimization oracle. Recently, in [15], a method called
conditional gradient sliding is proposed to solve similar problems, using a smoothing technique,
with a complexity bound in O(1/?2 ) for the calls to the linear minimization oracle (LMO) and
additionally a O(1/?) bound for the linear operator evaluations. Actually, this O(1/?2 ) bound for
the LMO complexity can be shown to be indeed optimal for conditional-gradient-type or LMObased algorithms, when solving general1 non-smooth convex problems [14].
However, these previous approaches are appropriate for objective with a non-composite structure.
When applied to our problem (1), the smoothing would be applied to the objective taken as a whole,
ignoring its composite structure. Conditional-gradient-type algorithms were recently proposed for
composite objectives [7, 9, 26, 24, 16], but cannot be applied for our problem. In [9], f is smooth
and B is identity matrix, whereas in [24], f is non-smooth and B is also the identity matrix. The
proposed Semi-Proximal Mirror-Prox can be seen as a blend of the successful components resp. of
the Composite Conditional Gradient algorithm [9] and the Composite Mirror-Prox [11], that enjoys
the optimal complexity bound O(1/?2 ) on the total number of LMO calls, yet solves a broader class
of convex problems than previously considered.
2
Framework and assumptions
We present here our theoretical framework, which hinges upon a smooth convex-concave saddle point reformulation of the norm-regularized non-smooth minimization (3). We shall use the
following notations throughout the paper. For a given norm kP
? k, we
define the dual norm as
m Pn
ksk? = maxkxk?1 hs, xi. For any x ? Rm?n , kxk2 = kxkF = ( i=1 j=1 |xij |2 )1/2 .
Problem We consider the composite minimization problem
Opt = min f (x) + kBxk
x?X
(3)
where X is a closed convex set in the Euclidean space Ex ; x 7? Bx is a linear mapping from X
to Y (? BX), where Y is a closed convex set in the Euclidean space Ey . We make two important
assumptions on the function f and the norm k?k defining the regularization penalty, explained below.
1
Related research extended such approaches to stochastic or online settings [10, 8, 15]; such settings are
beyond the scope of this work.
2
Saddle Point Representation The non-smoothness of f can be challenging to tackle. However,
in many cases of interest, the function f enjoys a favorable structure that allows to tackle it with
smoothing techniques. We assume that f (x) is a non-smooth convex function given by
f (x) = max ?(x, z)
(4)
z?Z
where ?(x, z) is a smooth convex-concave function and Z is a convex and compact set in the Euclidean space Ez . Such representation was introduced and developed in [6, 11, 13], for the purpose
of non-smooth optimization. Saddle point representability can be interpreted as a general form of
the smoothing-favorable structure of non-smooth functions used in the Nesterov smoothing technique [19]. Representations of this type are readily available for a wide family of ?well-structured?
nonsmooth functions f (see Sec. 4 for examples ), and actually for all empirical risk functions with
convex loss in machine learning, up to our knowledge.
Composite Linear Minimization Oracle Proximal-gradient-type
algorithms require
the compu
tation of a proximal operator at each iteration, i.e. miny?Y 12 kyk22 + h?, yi + ?kyk . For several
cases of interest, described below, the computation of the proximal operator can be expensive or
intractable. A classical example is the nuclear norm, whose proximal operator boils down to singular value thresholding, therefore requiring a full singular value decomposition. In contrast to the
proximal operator, the linear minimization oracle can be much cheaper. The linear minimization
oracle (LMO) is a routine which, given an input ? > 0 and ? ? Ey , returns a point
LMO(?, ?) := argmin {h?, yi + ?kyk}
(5)
y?Y
In the case of nuclear-norm, the LMO only requires the computation of the leading pair of singular
vectors, which is an order of magnitude faster in time-complexity.
Saddle Point Reformulation. The crux of our approach is a smooth convex-concave saddle point
reformulation of (3). After massaging the saddle-point reformulation, we consider the associated
variational inequality, which provides the sufficient and necessary condition for an optimal solution
to the saddle point problem [3, 4]. For any optimization problem with convex structure (including
convex minimization, convex-concave saddle point problem, convex Nash equilibrium), the corresponding variational inequality is directly related to the accuracy certificate used to guarantee the
accuracy of a solution to the optimization problem; see Sec. 2.1 in [11] and [18]. We shall present
then an algorithm to solve the variational inequality established below, that exploits its particular
structure.
Assuming that f admits a saddle point representation (4), we write (3) in epigraph form
Opt =
min
max {?(x, z) + ? : y = Bx} .
x?X,y?Y,? ?kyk z?Z
where Y (? BX) is a convex set. We can approximate Opt by
d=
Opt
min
max
x?X,y?Y,? ?kyk z?Z,kwk2 ?1
{?(x, z) + ? + ?hy ? Bx, wi} .
(6)
d = Opt (see details in [11]). By introducing the variables
For properly selected ? > 0, one has Opt
u := [x, y; z, w] and v := ? , the variational inequality associated with the above saddle point
problem is fully described by the domain
X+
=
{x+ = [u; v] : x ? X, y ? Y, z ? Z, kwk2 ? 1, ? ? kyk}
and the monotone vector field
F (x+ = [u; v]) = [Fu (u); Fv ] ,
where
? ?? ?
?
x
?x ?(x, z) ? ?B T w
? y ?? ?
?
?
?w
Fu ?u = ? ?? = ?
?,
z
??z ?(x, z)
w
?(Bx ? y)
?
Fv (v = ? ) = 1.
In the next section, we present an efficient algorithm to solve this type of variational inequality,
which enjoys a particular structure; we call such an inequality semi-structured.
3
3
Semi-Proximal Mirror-Prox for Semi-structured Variational Inequalities
Semi-structured variational inequalities (Semi-VI) enjoy a particular mixed structure, that allows to
get the best of two worlds, namely the proximal setup (where the proximal operator can be computed) and the LMO setup (where the linear minimization oracle can be computed). Basically, the
domain X is decomposed as a Cartesian product over two sets X = X1 ? X2 , such that X1 admits
a proximal-mapping while X2 admits a linear minimization oracle. We now describe the main theoretical and algorithmic components of the Semi-Proximal Mirror-Prox algorithm, resp. in Sec. 3.1
and in Sec. 3.2, and finally describe the overall algorithm in Sec. 3.3.
3.1
Composite Mirror-Prox with Inexact Prox-mappings
We first present a new algorithm, which can be seen as an extension of the Composite Mirror Prox algorithm, denoted CMP for brevity, that allows inexact computation of prox-mappings and can solve
a broad class of variational inequalites. The original Mirror Prox algorithm was introduced in [17]
and was extended to composite settings in [11] assuming exact computations of prox-mappings.
Structured Variational Inequalities. We consider the variational inequality VI(X, F ):
Find x? ? X : hF (x), x ? x? i ? 0, ?x ? X
with domain X and operator F that satisfy the assumptions (A.1)?(A.4) below.
(A.1) Set X ? Eu ? Ev is closed convex and its projection P X = {u : x = [u; v] ? X} ? U ,
where U is convex and closed, Eu , Ev are Euclidean spaces;
(A.2) The function ?(?) : U ? R is continuously differentiable and also 1-strongly convex w.r.t.
some norm2 k ? k. This defines the Bregman distance Vu (u? ) = ?(u? ) ? ?(u) ? h? ? (u), u? ?
ui ? 12 ku? ? uk2 .
(A.3) The operator F (x = [u, v]) : X ? Eu ? Ev is monotone and of form F (u, v) = [Fu (u); Fv ]
with Fv ? Ev being a constant and Fu (u) ? Eu satisfying the condition
?u, u? ? U : kFu (u) ? Fu (u? )k? ? Lku ? u? k + M
for some L < ?, M < ?;
(A.4) The linear form hFv , vi of [u; v] ? Eu ? Ev is bounded from below on X and is coercive on
X w.r.t. v: whenever [ut ; v t ] ? X, t = 1, 2, ... is a sequence such that {ut }?
t=1 is bounded
and kv t k2 ? ? as t ? ?, we have hFv , v t i ? ?, t ? ?.
The quality of an iterate, in the course of the algorithm, is measured through the so-called dual gap
function
?VI (xX, F ) = sup hF (y), x ? yi .
y?X
We give in Appendix A a refresher on dual gap functions, for the reader?s convenience. We shall
establish the complexity bounds in terms of this dual gap function for our algorithm, which directly
provides an accuracy certificate along the iterations. However, we first need to define what we mean
by an inexact prox-mapping.
?-Prox-mapping Inexact proximal mappings were recently considered in the context of accelerated proximal gradient algorithms [25]. The definition we give below is more general, allowing for
non-Euclidean proximal-mappings.
We introduce here the notion of ?-prox-mapping for ? ? 0. For ? = [?; ?] ? Eu ? Ev and x =
[u; v] ? X, let us define the subset Px? (?) of X as
Px? (?) = {b
x = [b
u; vb] ? X : h? + ? ? (b
u) ? ? ? (u), u
b ? si + h?, vb ? wi ? ? ?[s; w] ? X}.
When ? = 0, this reduces to the exact prox-mapping, in the usual setting, that is
Px (?) = Argmin {h?, si + h?, wi + Vu (s)} .
[s;w]?X
2
There is a slight abuse of notation here. The norm here is not the same as the one in problem (3)
4
When ? > 0, this yields our definition of an inexact prox-mapping, with inexactness parameter ?.
Note that for any ? ? 0, the set Px? (? = [?; ?Fv ]) is well defined whenever ? > 0. The Composite
Mirror Prox with inexact prox-mappings is outlined in Algorithm 1.
Algorithm 1 Composite Mirror Prox Algorithm (CMP) for VI(X, F )
Input: stepsizes ?t > 0, inexactness ?t ? 0, t = 1, 2, . . .
Initialize x1 = [u1 ; v 1 ] ? X
for t = 1, 2, . . . , T do
y t := [b
ut ; vbt ] ? Px?tt (?t F (xt )) = Px?tt (?t [Fu (ut ); Fv ])
t+1
t+1 t+1
x
:= [u ; v ] ? Px?tt (?t F (y t )) = Px?tt (?t [Fu (b
ut ); Fv ])
end for
PT
?1 PT
t
Output: xT := [?
uT ; v?T ] = ( t=1 ?t )
t=1 ?t y
(7)
The proposed algorithm is a non-trivial extension of the Composite Mirror Prox with exact proxmappings, both from a theoretical and algorithmic point of views. We establish below the theoretical
convergence rate; see Appendix B for the proof.
Theorem 3.1. Assume that the sequence of step-sizes (?t ) in the CMP algorithm satisfy
ut ) ? ?t2 M 2 ,
?t := ?t hFu (b
ut ) ? Fu (ut ), u
bt ? ut+1 i ? Vubt (ut+1 ) ? Vut (b
t = 1, 2, . . . , T . (8)
Then, denoting ?[X] = sup[u;v]?X Vu1 (u), for a sequence of inexact prox-mappings with inexactness ?t ? 0, we have
PT
PT
?[X] + M 2 t=1 ?t2 + 2 t=1 ?t
?T ? xi ?
?VI (?
xT X, F ) := sup hF (x), x
.
(9)
PT
x?X
t=1 ?t
Remarks.
Note that the assumption on the sequence of step-sizes (?t ) is clearly satisfied when
?
?t ? ( 2L)?1 . When M = 0 (which is essentially the case for the problem described in Section 2),
it suffices as long as ?t ? L?1 . When (?t ) is summable, we achieve the same O(1/T ) convergence
rate as when there is no error. If (?t ) decays with a rate of O(1/t), then the overall convergence
is only affected by a log(T ) factor. Convergence results on the sequence of projections of (?
xT )
onto X1 when F stems from saddle point problem minx1 ?X1 supx2 ?X2 ?(x1 , x2 ) is established in
Appendix B.
The theoretical convergence rate established in Theorem 3.1 and Corollary B.1 generalizes the previous result established in Corollary 3.1 in [11] for CMP with exact prox-mappings. Indeed, when
exact prox-mappings are used, we recover the result of [11]. When inexact prox-mappings are used,
the errors due to the inexactness of the prox-mappings accumulate and is reflected in (9) and (37).
3.2
Composite Conditional Gradient
We now turn to a variant of the composite conditional gradient algorithm, denoted CCG, tailored
for a particular class of problems, which we call smooth semi-linear problems. The composite
conditional gradient algorithm was first introduced in [9] and also developed in [21]. We present an
extension here which turns to be well-suited for sub-problems that will be solved in Sec. 3.3.
Minimizing Smooth Semi-linear Functions. We consider the smooth semi-linear problem
+
min
? (u, v) = ?(u) + h?, vi
(10)
x=[u;v]?X
+
represented by the pair (X; ? ) such that the following assumptions are satisfied. We assume that
i) X ? Eu ? Ev is closed convex and its projection P X on Eu belongs to U , where U is convex
and compact;
ii) ?(u) : U ? R is a convex continuously differentiable function, and there exist 1 < ? ? 2 and
L0 < ? such that
L0 ?
ku ? uk? ?u, u? ? U ;
(11)
?(u? ) ? ?(u) + h??(u), u? ? ui +
?
5
iii) ? ? Ev is such that every linear function on Eu ? Ev of the form
[u; v] 7? h?, ui + h?, vi
(12)
with ? ? Eu attains its minimum on X at some point x[?] = [u[?]; v[?]]; we have at our disposal
a Composite Linear Minimization Oracle (LMO) which, given on input ? ? Eu , returns x[?].
Algorithm 2 Composite Conditional Gradient Algorithm CCG(X, ?(?), ?; ?)
Input: accuracy ? > 0 and ?t = 2/(t + 1), t = 1, 2, . . .
Initialize x1 = [u1 ; v 1 ] ? X
for t = 1, 2, . . . do
Compute ?t = hgt , ut ? ut [gt ]i + h?, v t ? v t [gt ]i, where gt = ??(ut );
if ?t ? ? then
Return xt = [ut ; v t ]
else
Find xt+1 = [ut+1 ; v t+1 ] ? X such that ?+ (xt+1 ) ? ?+ (xt + ?t (xt [gt ] ? xt ))
end if
end for
The algorithm is outlined in Algorithm 2. Note that CCG works essentially as if there were no vcomponent at all. The CCG algorithm enjoys a convergence rate in O(t?(??1) ) in the evaluations
of the function ?+ , and the accuracy certificates (?t ) enjoy the same rate O(t?(??1) ) as well.
Proposition 3.1. Denote D the k?k-diameter of U . When solving problems of type (10), the sequence
of iterates (xt ) of CCG satisfies
??1
2
2L0 D?
(13)
?+ (xt ) ? min ?+ (x) ?
, t?2
x?X
?(3 ? ?) t + 1
In addition, the accuracy certificates (?t ) satisfy
min ?s ? O(1)L0 D?
1?s?t
3.3
2
t+1
??1
, t?2
(14)
Semi-Proximal Mirror-Prox for Semi-structured Variational Inequality
We now give the full description of a special class of variational inequalities, called semi-structured
variational inequalities. This family of problems encompasses both cases that we discussed so far
in Section 3.1 and 3.2. But most importantly, it also covers many other problems that do not fall into
these two regimes and in particular, our essential problem of interest (3).
Semi-structured Variational Inequalities. The class of semi-structured variational inequalities
allows to go beyond Assumptions (A.1) ? (A.4), by assuming more structure. This structure is consistent with what we call a semi-proximal setup, which encompasses both the regular proximal setup
and the regular linear minimization setup as special cases. Indeed, we consider variational inequality
VI(X, F ) that satisfies, in addition to Assumptions (A.1) ? (A.4), the following assumptions:
(S.1) Proximal setup for X: we assume that Eu = Eu1 ? Eu2 , Ev = Ev1 ? Ev2 , and U ?
U1 ? U2 , X = X1 ? X2 with Xi ? Eui ? Evi and Pi X = {ui : [ui ; vi ] ? Xi } ? Ui
for i = 1, 2, where U1 is convex and closed, U2 is convex and compact. We also assume that
?(u) = ?1 (u1 ) + ?2 (u2 ) and kuk = ku1 kEu1 + ku2 kEu2 , with ?2 (?) : U2 ? R continuously
differentiable such that
L0 ?
?2 (u?2 ) ? ?2 (u2 ) + h??2 (u2 ), u?2 ? u2 i +
ku ? u2 k?Eu2 , ?u2 , u?2 ? U2 ;
? 2
for a particular 1 < ? ? 2 and L0 < ?. Furthermore, we assume that the k ? kEu2 -diameter
of U2 is bounded by some D > 0.
(S.2) Partition of F : the operator F induced by the above partition of X1 and X2 can be written as
F (x) = [Fu (u); Fv ] with Fu (u) = [Fu1 (u1 , u2 ); Fu2 (u1 , u2 )], Fv = [Fv1 ; Fv2 ].
6
(S.3) Proximal mapping on X1 : we assume that for any ?1 ? Eu1 and ? > 0, we have at our
disposal easy-to-compute prox-mappings of the form,
argmin
Prox?1 (?1 , ?) :=
{?1 (u1 ) + h?1 , u1 i + ?hFv1 , v1 i} .
x1 =[u1 ;v1 ]?X1
(S.4) Linear minimization oracle for X2 : we assume that we we have at our disposal Composite
Linear Minimization Oracle (LMO), which given any input ?2 ? Eu2 and ? > 0, returns an
optimal solution to the minimization problem with linear form, that is,
LMO(?2 , ?) :=
argmin
{h?2 , u2 i + ?hFv2 , v2 i} .
x2 =[u2 ;v2 ]?X2
Semi-proximal setup We denote such problems as Semi-VI(X, F ). On the one hand, when U2 is
a singleton, we get the full-proximal setup. On the other hand, when U1 is a singleton, we get the full
linear-minimization-oracle setup (full LMO setup). The semi-proximal setup allows to cover both
setups and all the ones in between as well.
The Semi-Proximal Mirror-Prox algorithm. We finally present here our main contribution, the
Semi-Proximal Mirror-Prox algorithm, which solves the semi-structured variational inequality under
(A.1) ? (A.4) and (S.1) ? (S.4). The Semi-Proximal Mirror-Prox algorithm blends both CMP and
CCG. Basically, for sub-domain X2 given by LMO, instead of computing exactly the prox-mapping,
we mimick inexactly the prox-mapping via a conditional gradient algorithm in the Composite Mirror
Prox algorithm. For the sub-domain X1 , we compute the prox-mapping as it is.
Algorithm 3 Semi-Proximal Mirror-Prox Algorithm for Semi-VI(X, F )
Input: stepsizes ?t > 0, accuracies ?t ? 0, t = 1, 2, . . .
[1] Initialize x1 = [x11 ; x12 ] ? X, where x11 = [u11 ; v11 ]; x12 = [u12 , ; v21 ].
for t = 1, 2, . . . , T do
[2] Compute y t = [y1t ; y2t ] that
y1t := [b
ut1 ; vb1t ]
t
ut2 ; vb2t ]
y2 := [b
=
=
Prox?1 (?t Fu1 (ut1 , ut2 ) ? ?1? (ut1 ), ?t )
CCG(X2 , ?2 (?) + h?t Fu2 (ut1 , ut2 ) ? ?2? (ut2 ), ?i, ?t Fv2 ; ?t )
t+1
[3] Compute xt+1 = [xt+1
1 ; x2 ] that
t+1
xt+1
:= [ut+1
1
1 ; v1 ]
t+1
xt+1
:= [ut+1
2
2 ; v2 ]
=
=
ut1 , u
bt2 ) ? ?1? (ut1 ), ?t )
Prox?1 (?t Fu1 (b
ut1 , u
bt2 ) ? ?2? (ut2 ), ?i, ?t Fv2 ; ?t )
CCG(X2 , ?2 (?) + h?t Fu2 (b
end for
PT
?1 PT
t
uT ; v?T ] = ( t=1 ?t )
Output: xT := [?
t=1 ?t y
At step t, we first update y1t = [b
ut1 ; vb1t ] by computing the exact prox-mapping and build y2t = [b
ut2 ; vb2t ]
by running the composite conditional gradient algorithm to problem (10) specifically with
X = X2 , ?(?) = ?2 (?) + h?t Fu2 (ut1 , ut2 ) ? ?2? (ut2 ), ?i, and ? = ?t Fv2 ,
t+1
t+1
= [ut+1
=
until ?(y2t ) = maxy2 ?X2 h??+ (y2t ), y2t ? y2 i ? ?t . We then build xt+1
1
1 ; v1 ] and x2
t+1 t+1
t
[u2 ; v2 ] similarly except this time taking the value of the operator at point y . Combining the
results in Theorem 3.1 and Proposition 3.1, we arrive at the following complexity bound.
Proposition 3.2. Under the assumptions (A.1)-(A.4) and (S.1)-(S.4) with $M = 0$, and the choice of
stepsize $\gamma_t = L^{-1}$, $t = 1, \ldots, T$, for the outlined algorithm to return an $\epsilon$-solution to the variational
inequality VI(X, F), the total number of Mirror Prox steps required does not exceed
$$\text{Total number of steps} = O(1)\,\frac{L\,\Omega[X]}{\epsilon},$$
and the total number of calls to the Linear Minimization Oracle does not exceed
$$N = O(1)\,\frac{L_0\, L^{\kappa} D^{\kappa}\, \Omega^{\kappa-1}[X]}{\epsilon^{\kappa}}.$$
In particular, if we use the Euclidean proximal setup on $U_2$ with $\omega_2(\cdot) = \frac{1}{2}\|x_2\|_2^2$, which leads to $\kappa = 2$
and $L_0 = 1$, then the number of LMO calls does not exceed $N = O(1)\, L^2 D^2\, (\Omega[X_1] + D^2)/\epsilon^2$.
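To make the two-level structure concrete, the following Python sketch instantiates one Euclidean special case of Algorithm 3: a toy bilinear saddle-point VI in which block 1 is a box (exact projection plays the role of the prox-mapping) and block 2 is an $\ell_1$-ball handled only through its LMO, so the inner prox-mapping is approximated by a conditional gradient loop stopped at accuracy $\varepsilon_t$. The geometry, toy operator, step size and accuracy schedule here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """LMO over the l1-ball: argmin_{||s||_1 <= radius} <g, s>."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def ccg_prox_l1(z, radius=1.0, tol=1e-6, max_iter=500):
    """Inexact Euclidean prox (projection) onto the l1-ball via conditional
    gradient, stopped once the duality gap delta(y) drops below tol (= eps_t)."""
    y = lmo_l1(-z, radius)                   # feasible starting vertex
    for _ in range(max_iter):
        grad = y - z                         # gradient of 0.5 * ||y - z||^2
        s = lmo_l1(grad, radius)
        gap = grad @ (y - s)                 # Frank-Wolfe gap delta(y)
        if gap <= tol:
            break
        step = min(1.0, gap / (np.dot(y - s, y - s) + 1e-16))  # exact line-search
        y = y + step * (s - y)
    return y

def semi_proximal_mirror_prox(F, prox1, x1, x2, gamma=0.1, T=300):
    """Euclidean instance of Algorithm 3: exact prox on block 1, CCG-based
    inexact prox on block 2, returning the weighted average of the y-iterates."""
    a1, a2, ssum = np.zeros_like(x1), np.zeros_like(x2), 0.0
    for t in range(1, T + 1):
        eps_t = 1.0 / t                      # accuracy schedule for the CCG calls
        g1, g2 = F(x1, x2)
        y1 = prox1(x1 - gamma * g1)          # step [2], exact on block 1
        y2 = ccg_prox_l1(x2 - gamma * g2, tol=eps_t)
        h1, h2 = F(y1, y2)                   # step [3], extra-gradient update
        x1 = prox1(x1 - gamma * h1)
        x2 = ccg_prox_l1(x2 - gamma * h2, tol=eps_t)
        a1, a2, ssum = a1 + gamma * y1, a2 + gamma * y2, ssum + gamma
    return a1 / ssum, a2 / ssum

# Monotone toy operator from the bilinear saddle problem min_u max_v <u, B v>.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
F = lambda u, v: (B @ v, -B.T @ u)
box_proj = lambda w: np.clip(w, -1.0, 1.0)
u_bar, v_bar = semi_proximal_mirror_prox(F, box_proj, np.ones(5), np.zeros(5))
```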
[Figure residue removed: four panels plotting objective value against elapsed time (sec) and against the number of LMO calls, with curves for Semi-MP, Semi-LPADMM, Smooth-CG and Semi-SPG under several inner-accuracy schedules such as Semi-MP(eps=1e1/t) and Semi-LPADMM(eps=1e-5/t); the caption follows.]
Figure 1: Robust collaborative filtering and link prediction: objective function vs elapsed time.
From left to right: (a) MovieLens100K; (b) MovieLens1M; (c) Wikivote (1024); (d) Wikivote (full)
Discussion. The proposed Semi-Proximal Mirror-Prox algorithm enjoys the optimal complexity
bound, i.e. $O(1/\epsilon^2)$, on the number of calls to the LMO; see [14] for the optimal complexity bounds
for general non-smooth optimization with an LMO. Consequently, when applying the algorithm to the
variational reformulation of the problem of interest (3), we are able to get an $\epsilon$-optimal solution
within at most $O(1/\epsilon^2)$ LMO calls. Thus, Semi-Proximal Mirror-Prox generalizes previously
proposed approaches and improves upon them in special cases of problem (3); see Appendix D.2.
4 Experiments
We report the experimental results obtained with the proposed Semi-Proximal Mirror-Prox, denoted
Semi-MP here, and competing algorithms. We consider two different applications: i) robust collaborative filtering for movie recommendation; ii) link prediction for social network analysis. For
i), we compare to two competing approaches: a) the smoothing conditional gradient proposed in [24]
(denoted Smooth-CG); b) the smoothing proximal gradient [20, 5] equipped with the semi-proximal setup
(Semi-SPG). For ii), we compare to Semi-LPADMM, using [22] equipped with the semi-proximal
setup. Additional experiments and implementation details are given in Appendix E.
Robust collaborative filtering. We consider the collaborative filtering problem, with a nuclear-norm regularization penalty and an $\ell_1$-loss function. We run the above three algorithms on
the small and medium MovieLens datasets. The small dataset consists of 943 users and 1682
movies with about 100K ratings, while the medium dataset consists of 3952 users and 6040
movies with about 1M ratings. We follow [24] to set the regularization parameters. In Fig. 1, we
can see that Semi-MP clearly outperforms Smooth-CG, while it is competitive with Semi-SPG.
Link prediction. We now consider the link prediction problem, where the objective consists of a
hinge loss for the empirical risk part and multiple regularization penalties, namely the $\ell_1$-norm and
the nuclear norm. For this example, applying Smooth-CG or Semi-SPG would require two
smooth approximations, one for the hinge-loss term and one for the $\ell_1$-norm term. Therefore, we consider
an alternative approach, Semi-LPADMM, where we apply the linearized preconditioned ADMM algorithm [22] and solve the proximal mapping through conditional gradient routines. To the best of our knowledge, ADMM with early stopping has not been fully analyzed theoretically in the literature. However, intuitively, as long as the error is controlled sufficiently, such a variant of ADMM should converge.
We conduct experiments on a binary social graph dataset called Wikivote, which consists of 7118
nodes and 103747 edges. Since the computational cost of these two algorithms mainly comes from the
LMO calls, we present below the performance in terms of the number of LMO calls. For the first set
of experiments, we select the top 1024 highest-degree users from Wikivote and run the two algorithms
on this small dataset with different strategies for the inner LMO calls.
In Fig. 1, we observe that the Semi-MP is less sensitive to the inner accuracies of prox-mappings
compared to the ADMM variant, which sometimes stops progressing if the prox-mappings of early
iterations are not solved with sufficient accuracy. The results on the full dataset corroborate the fact
that Semi-MP outperforms the semi-proximal variant of the ADMM algorithm.
Acknowledgments
The authors would like to thank A. Juditsky and A. Nemirovski for fruitful discussions. This work
was supported by NSF Grant CMMI-1232623, LabEx Persyval-Lab (ANR-11-LABX-0025), the project
"Titan" (CNRS-Mastodons), the project "Macaron" (ANR-14-CE23-0003-01), the MSR-Inria joint centre, and the Moore-Sloan Data Science Environment at NYU.
References
[1] Francis Bach. Duality between subgradient and conditional gradient methods. SIAM Journal on Optimization, 2015.
[2] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Found. Trends Mach. Learn., 4(1):1–106, 2012.
[3] Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[4] D. P. Bertsekas. Convex Optimization Algorithms. Athena Scientific, 2015.
[5] Xi Chen, Qihang Lin, Seyoung Kim, Jaime G. Carbonell, and Eric P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719–752, 2012.
[6] Bruce Cox, Anatoli Juditsky, and Arkadi Nemirovski. Dual subgradient algorithms for large-scale nonsmooth learning problems. Mathematical Programming, pages 1–38, 2013.
[7] M. Dudik, Z. Harchaoui, and J. Malick. Lifted coordinate descent for learning with trace-norm regularization. Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[8] Dan Garber and Elad Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.
[9] Zaid Harchaoui, Anatoli Juditsky, and Arkadi Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Mathematical Programming, pages 1–38, 2013.
[10] E. Hazan and S. Kale. Projection-free online learning. In ICML, 2012.
[11] Niao He, Anatoli Juditsky, and Arkadi Nemirovski. Mirror prox algorithm for multi-term composite minimization and semi-separable problems. arXiv preprint arXiv:1311.1098, 2013.
[12] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, pages 427–435, 2013.
[13] Anatoli Juditsky and Arkadi Nemirovski. Solving variational inequalities with monotone operators on domains given by linear minimization oracles. arXiv preprint arXiv:1312.107, 2013.
[14] Guanghui Lan. The complexity of large-scale convex programming under a linear optimization oracle. arXiv, 2013.
[15] Guanghui Lan and Yi Zhou. Conditional gradient sliding for convex optimization. arXiv, 2014.
[16] Cun Mu, Yuqian Zhang, John Wright, and Donald Goldfarb. Scalable robust matrix recovery: Frank-Wolfe meets proximal methods. arXiv preprint arXiv:1403.7588, 2014.
[17] Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.
[18] Arkadi Nemirovski, Shmuel Onn, and Uriel G. Rothblum. Accuracy certificates for computational problems with convex structure. Mathematics of Operations Research, 35(1):52–78, 2010.
[19] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[20] Yurii Nesterov. Smoothing technique and its applications in semidefinite optimization. Math. Program., 110(2):245–259, 2007.
[21] Yurii Nesterov. Complexity bounds for primal-dual methods minimizing the model of objective function. Technical report, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2015.
[22] Yuyuan Ouyang, Yunmei Chen, Guanghui Lan, and Eduardo Pasiliao Jr. An accelerated linearized alternating direction method of multipliers, 2014. http://arxiv.org/abs/1401.6607.
[23] Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, pages 1–96, 2013.
[24] Federico Pierucci, Zaid Harchaoui, and Jérôme Malick. A smoothing approach for composite conditional gradient with nonsmooth loss. In Conférence d'Apprentissage Automatique, Actes CAp'14, 2014.
[25] Mark Schmidt, Nicolas L. Roux, and Francis R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Adv. NIPS, 2011.
[26] X. Zhang, Y. Yu, and D. Schuurmans. Accelerated training for matrix-norm regularization: A boosting approach. In NIPS, 2012.
Alp Yurtsever:
:
Quoc Tran-Dinh;
Volkan Cevher:
Laboratory for Information and Inference Systems, EPFL, Switzerland
{alp.yurtsever, volkan.cevher}@epfl.ch
;
Department of Statistics and Operations Research, UNC, USA
[email protected]
Abstract
We propose a new primal-dual algorithmic framework for a prototypical constrained convex optimization template. The algorithmic instances of our framework are universal since they can automatically adapt to the unknown Hölder
continuity degree and constant within the dual formulation. They are also guaranteed to have optimal convergence rates in the objective residual and the feasibility
gap for each Hölder smoothness degree. In contrast to existing primal-dual algorithms, our framework avoids the proximity operator of the objective function. We
instead leverage computationally cheaper, Fenchel-type operators, which are the
main workhorses of the generalized conditional gradient (GCG)-type methods. In
contrast to the GCG-type methods, our framework does not require the objective
function to be differentiable, and can also process additional general linear inclusion constraints, while guaranteeing the convergence rate on the primal problem.
1 Introduction
This paper constructs an algorithmic framework for the following convex optimization template:
$$f^\star := \min_{x \in \mathcal{X}} \{ f(x) : Ax - b \in \mathcal{K} \}, \tag{1}$$
where $f : \mathbb{R}^p \to \mathbb{R} \cup \{+\infty\}$ is a convex function, $A \in \mathbb{R}^{n \times p}$, $b \in \mathbb{R}^n$, and $\mathcal{X}$ and $\mathcal{K}$ are nonempty,
closed and convex sets in $\mathbb{R}^p$ and $\mathbb{R}^n$ respectively. The constrained optimization formulation (1) is
quite flexible, capturing many important learning problems in a unified fashion, including matrix
completion, sparse regularization, support vector machines, and submodular optimization [1-3].
Processing the inclusion $Ax - b \in \mathcal{K}$ in (1) requires a significant computational effort in the large-scale setting [4]. Hence, the majority of the scalable numerical solution methods for (1) are of
the primal-dual type, including decomposition, augmented Lagrangian, and alternating direction
methods: cf. [4-9]. The efficiency guarantees of these methods mainly depend on three properties
of $f$: Lipschitz gradient, strong convexity, and the tractability of its proximal operator. For instance,
the proximal operator of $f$, i.e., $\mathrm{prox}_f(x) := \arg\min_z \{ f(z) + (1/2)\|z - x\|^2 \}$, is key in handling
non-smooth $f$ while obtaining the convergence rates as if it had Lipschitz gradient.
When the set $Ax - b \in \mathcal{K}$ is absent in (1), other methods can be preferable to primal-dual algorithms.
For instance, if $f$ has Lipschitz gradient, then we can use the accelerated proximal gradient methods
by applying the proximal operator for the indicator function of the set $\mathcal{X}$ [10, 11]. However, as the
problem dimensions become increasingly larger, the proximal tractability assumption can be restrictive. This fact increased the popularity of the generalized conditional gradient (GCG) methods (or
Frank-Wolfe-type algorithms), which instead leverage the following Fenchel-type oracles [1, 12, 13]:
$$[x]^\sharp_{\mathcal{X},g} := \arg\max_{s \in \mathcal{X}} \{ \langle x, s\rangle - g(s) \}, \tag{2}$$
where $g$ is a convex function. When $g = 0$, we obtain the so-called linear minimization oracle [12].
When $\mathcal{X} = \mathbb{R}^p$, then the (sub)gradient of the Fenchel conjugate of $g$, $\nabla g^*$, is in the set $[x]^\sharp_g$.
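Two standard instances of the sharp-operator in (2), written out as a short Python sketch; these particular choices of $\mathcal{X}$ and $g$ are illustrative assumptions on our part, not cases taken from the paper's experiments.

```python
import numpy as np

def sharp_l1_ball(x, radius=1.0):
    """[x]^#_{X,0} for X = {s : ||s||_1 <= radius}: the signed scaled vertex
    that maximizes <x, s>, i.e. a linear minimization oracle up to sign."""
    i = np.argmax(np.abs(x))
    s = np.zeros_like(x)
    s[i] = radius * np.sign(x[i])
    return s

def sharp_simplex_entropy(x):
    """[x]^#_{X,g} for X the probability simplex and g(s) = sum_i s_i log s_i:
    the maximizer of <x, s> - g(s) is the softmax distribution."""
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()
```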
The sharp-operator in (2) is often much cheaper to process as compared to the prox operator [1,
12]. While the GCG-type algorithms require $O(1/\epsilon)$ iterations to guarantee an $\epsilon$-primal objective
residual/duality gap, they cannot converge when their objective is nonsmooth [14].
To this end, we propose a new primal-dual algorithmic framework that can exploit the sharp-operator
of $f$ in lieu of its proximal operator. Our aim is to combine the flexibility of proximal primal-dual
methods in addressing the general template (1) while leveraging the computational advantages of
the GCG-type methods. As a result, we trade off the computational difficulty per iteration with the
overall rate of convergence. While we obtain optimal rates based on the sharp-operator oracles,
we note that the rates reduce to $O(1/\epsilon^2)$ with the sharp operator vs. $O(1/\epsilon)$ with the proximal
operator when $f$ is completely non-smooth (cf. Definition 1.1). Intriguingly, the convergence rates
are the same when $f$ is strongly convex. Unlike GCG-type methods, our approach can now handle
nonsmooth objectives in addition to complex constraint structures as in (1).
Our primal-dual framework is universal in the sense that the convergence of our algorithms can optimally
adapt to the Hölder continuity of the dual objective $g$ (cf. (6) in Section 3) without having to know
its parameters. By Hölder continuity, we mean the (sub)gradient $\nabla g$ of a convex function $g$ satisfies
$$\|\nabla g(\lambda) - \nabla g(\tilde\lambda)\| \le M_\nu \|\lambda - \tilde\lambda\|^\nu$$
with parameters $M_\nu < \infty$ and $\nu \in [0, 1]$ for all $\lambda, \tilde\lambda \in \mathbb{R}^n$.
The case $\nu = 0$ models the bounded subgradient, whereas $\nu = 1$ captures the Lipschitz gradient.
The Hölder continuity has recently resurfaced in unconstrained optimization by [15] with universal
gradient methods that obtain optimal rates without having to know $M_\nu$ and $\nu$. Unfortunately, these
methods cannot directly handle the general constrained template (1). After our initial draft appeared,
[14] presented new GCG-type methods for composite minimization, i.e., $\min_{x \in \mathbb{R}^p} f(x) + \psi(x)$,
relying on Hölder smoothness of $f$ (i.e., $\nu \in (0, 1]$) and the sharp-operator of $\psi$. The methods
in [14] do not apply when $f$ is non-smooth. In addition, they cannot process the additional inclusion
$Ax - b \in \mathcal{K}$ in (1), which is a major drawback for machine learning applications.
Our algorithmic framework features a gradient method and its accelerated variant that operate on
the dual formulation of (1). For the accelerated variant, we study an alternative to the universal
accelerated method of [15] based on FISTA [10], since it requires fewer proximal operators in the
dual. While the FISTA scheme is classical, our analysis of it under the Hölder continuity assumption
is new. Given the dual iterates, we then use a new averaging scheme to construct the primal iterates
for the constrained template (1). In contrast to the non-adaptive weighting schemes of GCG-type
algorithms, our weights explicitly depend on the local estimates of the Hölder constants $M_\nu$ at each
iteration. Finally, we derive the worst-case complexity results. Our results are optimal since they
match the computational lower bounds in the sense of first-order black-box methods [16].
Paper organization: Section 2 briefly recalls primal-dual formulation of problem (1) with some
standard assumptions. Section 3 defines the universal gradient mapping and its properties. Section 4
presents the primal-dual universal gradient methods (both the standard and accelerated variants), and
analyzes their convergence. Section 5 provides numerical illustrations, followed by our conclusions.
The supplementary material includes the technical proofs and additional implementation details.
Notation and terminology: For notational simplicity, we work on the Rp {Rn spaces with the
Euclidean norms. We denote the Euclidean distance of the vector u to a closed convex set X by
dist pu, X q. Throughout the paper, } ? } represents the Euclidean norm for vectors and the spectral
norm for the matrices. For a convex function f , we use ?f both for its subgradient and gradient, and
f ? for its Fenchel?s conjugate. Our goal is to approximately solve (1) to obtain x in the following
sense:
Definition 1.1. Given an accuracy level ? 0, a point x P X is said to be an -solution of (1) if
|f px q ? f ? | ? , and dist pAx ? b, Kq ? .
Here, we call |f px q ? f ? | the primal objective residual and dist pAx ? b, Kq the feasibility gap.
2
Primal-dual preliminaries
In this section, we briefly summarize the primal-dual formulation with some standard assumptions.
For the ease of presentation, we reformulate (1) by introducing a slack variable r as follows:
f? ?
min tf pxq : Ax ? r ? bu , px? : f px? q ? f ? q.
xPX ,rPK
(3)
Let z :? rx, rs and Z :? X ?K. Then, we have D :? tz P Z : Ax?r ? bu as the feasible set of (3).
2
The dual problem: The Lagrange function associated with the linear constraint Ax ? r ? b is
defined as Lpx, r, ?q :? f pxq ` x?, Ax ? r ? by, and the dual function d of (3) can be defined and
decomposed as follows:
dp?q :? min tf pxq ` x?, Ax ? r ? byu ? min tf pxq ` x?, Ax ? byu ` min x?, ?ry,
xPX
xPX
rPK
loooooooooooooooomoooooooooooooooon
loooooomoooooon
rPK
dx p?q
dr p?q
n
where ? P R is the dual variable. Then, we define the dual problem of (3) as follows:
!
)
d? :? maxn dp?q ? maxn dx p?q ` dr p?q .
?PR
?PR
(4)
Fundamental assumptions: To characterize the primal-dual relation between (1) and (4), we require the following assumptions [17]:
Assumption A. 1. The function f is proper, closed, and convex, but not necessarily smooth. The
constraint sets X and K are nonempty, closed, and convex. The solution set X ? of (1) is nonempty.
Either Z is polyhedral or the Slater?s condition holds. By the Slater?s condition, we mean ripZq X
tpx, rq : Ax ? r ? bu ? H, where ripZq stands for the relative interior of Z.
Strong duality: Under Assumption A.1, the solution set ?? of the dual problem (4) is also
nonempty and bounded. Moreover, the strong duality holds, i.e., f ? ? d? .
3
Universal gradient mappings
This section defines the universal gradient mapping and its properties.
3.1
Dual reformulation
We first adopt the composite convex minimization formulation of (4) for better interpretability as
G? :? minn tGp?q :? gp?q ` hp?qu ,
?PR
(5)
where G? ? ?d? , and the correspondence between pg, hq and pdx , dr q is as follows:
#
gp?q :? max tx?, b ? Axy ? f pxqu ? ?dx p?q,
xPX
hp?q :? max x?, ry ? ?dr p?q.
(6)
rPK
Since g and h are generally non-smooth, FISTA and its proximal-based analysis [10] are not directly
applicable. Recall the sharp operator defined in (2), then g can be expressed as
(
gp?q ? max x?AT ?, xy ? f pxq ` x?, by,
xPX
and we define the optimal solution to the g subproblem above as follows:
(
x? p?q P arg max x?AT ?, xy ? f pxq ? r?AT ?s7X ,f .
xPX
(7)
The second term, h, depends on the structure of K. We consider three special cases:
paq Sparsity/low-rankness: If K :? tr P Rn : }r} ? ?u for a given ? ? 0 and a given norm } ? },
then hp?q ? ?}?}? , the scaled dual norm of } ? }. For instance, if K :? tr P Rn : }r}1 ? ?u,
then hp?q ? ?}?}8 . While the `1 -norm induces the sparsity of x, computing h requires the max
absolute elements of ?. If K :? tr P Rq1 ?q2 : }r}? ? ?u (the nuclear norm), then hp?q ? ?}?},
the spectral norm. The nuclear norm induces the low-rankness of x. Computing h in this case leads
to finding the top-eigenvalue of ?, which is efficient.
pbq Cone constraints: If K is a cone, then h becomes the indicator function ?K? of its dual cone
K? . Hence, we can handle the inequality constraints and positive semidefinite constraints in (1). For
instance, if K ? Rn` , then hp?q ? ?Rn? p?q, the indicator function of Rn? :? t? P Rn : ? ? 0u. If
p
K ? S`
, then hp?q :? ?S?p p?q, the indicator function of the negative semidefinite matrix cone.
?p
?p
pcq Separable structures: If X and f are separable, i.e., X :? i?1 Xi and f pxq :? i?1 fi pxi q,
then the evaluation of g and its derivatives can be decomposed into p subproblems.
3
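As a small illustration of how cheap $h$ can be in these three cases, here is a hedged Python sketch; the function names are ours and the cases are exactly those listed above.

```python
import numpy as np

def h_l1_ball(lam, kappa):
    """Case (a) with the l1-ball: h(lam) = kappa * ||lam||_inf."""
    return kappa * np.max(np.abs(lam))

def h_nuclear_ball(Lam, kappa):
    """Case (a) with the nuclear-norm ball: h(Lam) = kappa * ||Lam||,
    the spectral norm, i.e. kappa times the top singular value."""
    return kappa * np.linalg.norm(Lam, 2)

def h_nonneg_cone(lam):
    """Case (b) with K = R^n_+: the indicator of the requirement lam <= 0."""
    return 0.0 if np.all(lam <= 0) else np.inf
```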
3.2 Hölder continuity of the dual universal gradient
Let $\nabla g(\lambda)$ be a subgradient of $g$, which can be computed as $\nabla g(\lambda) = b - Ax^*(\lambda)$. Next, we define
$$M_\nu = M_\nu(g) := \sup_{\lambda, \tilde\lambda \in \mathbb{R}^n,\, \lambda \neq \tilde\lambda} \frac{\|\nabla g(\lambda) - \nabla g(\tilde\lambda)\|}{\|\lambda - \tilde\lambda\|^\nu}, \tag{8}$$
where $\nu \ge 0$ is the Hölder smoothness order. Note that the parameter $M_\nu$ explicitly depends on
$\nu$ [15]. We are interested in the case $\nu \in [0, 1]$, and especially the two extremal cases, where
we either have the Lipschitz gradient that corresponds to $\nu = 1$, or the bounded subgradient that
corresponds to $\nu = 0$.
We require the following condition in the sequel:
Assumption A.2. $\hat{M}(g) := \inf_{0 \le \nu \le 1} M_\nu(g) < +\infty$.
Assumption A.2 is reasonable. We explain this claim with the following two examples. First, if $g$ is
subdifferentiable and $\mathcal{X}$ is bounded, then $\nabla g(\lambda)$ is also bounded. Indeed, we have
$$\|\nabla g(\lambda)\| = \|b - Ax^*(\lambda)\| \le D^A_{\mathcal{X}} := \sup\{\|b - Ax\| : x \in \mathcal{X}\}.$$
Hence, we can choose $\nu = 0$ and $\hat{M}(g) \le 2D^A_{\mathcal{X}} < \infty$.
Second, if $f$ is uniformly convex with the convexity parameter $\mu_f > 0$ and the degree $q \ge 2$, i.e.,
$\langle \nabla f(x) - \nabla f(\tilde x), x - \tilde x\rangle \ge \mu_f\|x - \tilde x\|^q$ for all $x, \tilde x \in \mathbb{R}^p$, then $g$ defined by (6) satisfies (8) with
$\nu = \frac{1}{q-1}$ and $M_\nu(g) = \big(\mu_f^{-1}\|A\|^2\big)^{\frac{1}{q-1}} < +\infty$, as shown in [15]. In particular, if $q = 2$, i.e., $f$
is $\mu_f$-strongly convex, then $\nu = 1$ and $M_\nu(g) = \mu_f^{-1}\|A\|^2$, which is the Lipschitz constant of the
gradient $\nabla g$.
3.3 The proximal-gradient step for the dual problem
Given $\hat\lambda_k \in \mathbb{R}^n$ and $M_k > 0$, we define
$$Q_{M_k}(\lambda; \hat\lambda_k) := g(\hat\lambda_k) + \langle \nabla g(\hat\lambda_k), \lambda - \hat\lambda_k\rangle + \frac{M_k}{2}\|\lambda - \hat\lambda_k\|^2$$
as an approximate quadratic surrogate of $g$. Then, we consider the following update rule:
$$\lambda_{k+1} := \arg\min_{\lambda \in \mathbb{R}^n} \{ Q_{M_k}(\lambda; \hat\lambda_k) + h(\lambda) \} = \mathrm{prox}_{M_k^{-1}h}\big(\hat\lambda_k - M_k^{-1}\nabla g(\hat\lambda_k)\big). \tag{9}$$
For a given accuracy $\epsilon > 0$, we define
$$\hat{M}_\epsilon := \left( \frac{1-\nu}{1+\nu} \cdot \frac{1}{\epsilon} \right)^{\frac{1-\nu}{1+\nu}} M_\nu^{\frac{2}{1+\nu}}. \tag{10}$$
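For reference, (10) is easy to evaluate; a one-line Python helper (the naming is ours) makes the two extremes visible: for $\nu = 1$ it returns $M_1$ independently of $\epsilon$, while for $\nu = 0$ it scales like $M_0^2/\epsilon$.

```python
def M_hat(eps, nu, M_nu):
    """Smoothed constant from (10). nu = 1 gives M_1 (Lipschitz case);
    nu = 0 gives M_0**2 / eps (bounded-subgradient case)."""
    return ((1.0 - nu) / ((1.0 + nu) * eps)) ** ((1.0 - nu) / (1.0 + nu)) \
        * M_nu ** (2.0 / (1.0 + nu))
```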
We need to choose the parameter $M_k > 0$ such that $Q_{M_k}$ is an approximate upper surrogate of $g$,
i.e., $g(\lambda) \le Q_{M_k}(\lambda; \lambda_k) + \delta_k$ for some $\lambda \in \mathbb{R}^n$ and $\delta_k \ge 0$. If $\nu$ and $M_\nu$ are known, then we can
set $M_k = \hat{M}_\epsilon$ defined by (10). In this case, $Q_{\hat{M}_\epsilon}$ is an upper surrogate of $g$. In general, we do not
know $\nu$ and $M_\nu$. Hence, $M_k$ can be determined via a backtracking line-search procedure.
4 Universal primal-dual gradient methods
We apply the universal gradient mappings to the dual problem (5), and propose an averaging scheme
to construct $\{\bar x_k\}$ for approximating $x^\star$. Then, we develop an accelerated variant based on the FISTA
scheme [10], and construct another primal sequence $\{\tilde x_k\}$ for approximating $x^\star$.
4.1 Universal primal-dual gradient algorithm
Our algorithm is shown in Algorithm 1. The dual steps are simply the universal gradient method
in [15], while the new primal step allows us to approximate the solution of (1).
Complexity-per-iteration: First, computing $x^*(\lambda_k)$ at Step 1 requires the solution $x^*(\lambda_k) \in [-A^T\lambda_k]^\sharp_{\mathcal{X},f}$. For many $\mathcal{X}$ and $f$, we can compute $x^*(\lambda_k)$ efficiently and often in closed form.
Algorithm 1 (Universal Primal-Dual Gradient Method (UniPDGrad))
Initialization: Choose an initial point $\lambda^0 \in \mathbb{R}^n$ and a desired accuracy level $\epsilon > 0$.
Estimate a value $M_{-1}$ such that $0 < M_{-1} \le \hat{M}_\epsilon$. Set $S_{-1} = 0$ and $\bar x_{-1} = 0^p$.
for $k = 0$ to $k_{\max}$
  1. Compute a primal solution $x^*(\lambda_k) \in [-A^T\lambda_k]^\sharp_{\mathcal{X},f}$.
  2. Form $\nabla g(\lambda_k) = b - Ax^*(\lambda_k)$.
  3. Line-search: Set $M_{k,0} = 0.5 M_{k-1}$. For $i = 0$ to $i_{\max}$, perform the following steps:
    3.a. Compute the trial point $\lambda_{k,i} = \mathrm{prox}_{M_{k,i}^{-1}h}\big(\lambda_k - M_{k,i}^{-1}\nabla g(\lambda_k)\big)$.
    3.b. If the following line-search condition holds:
      $g(\lambda_{k,i}) \le Q_{M_{k,i}}(\lambda_{k,i}; \lambda_k) + \epsilon/2$,
    then set $i_k = i$ and terminate the line-search loop. Otherwise, set $M_{k,i+1} = 2M_{k,i}$.
    End of line-search
  4. Set $\lambda_{k+1} = \lambda_{k,i_k}$ and $M_k = M_{k,i_k}$. Compute $w_k = \frac{1}{M_k}$, $S_k = S_{k-1} + w_k$, and $\gamma_k = \frac{w_k}{S_k}$.
  5. Compute $\bar x_k = (1 - \gamma_k)\bar x_{k-1} + \gamma_k x^*(\lambda_k)$.
end for
Output: Return the primal approximation $\bar x_k$ for $x^\star$.
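The following Python sketch runs Algorithm 1 end-to-end on one toy instance where every oracle is available in closed form: $\min\{\frac{1}{2}\|x\|^2 : Ax = b\}$, for which $x^*(\lambda) = -A^T\lambda$, $g(\lambda) = \langle\lambda, b\rangle + \frac{1}{2}\|A^T\lambda\|^2$, $h \equiv 0$ (so the prox of $h$ is the identity), and the optimum is the minimum-norm solution $A^+ b$. The instance, constants and iteration budget are our illustrative assumptions.

```python
import numpy as np

def uni_pd_grad(A, b, eps=1e-6, k_max=6000):
    """Sketch of Algorithm 1 on min {0.5*||x||^2 : Ax = b}."""
    n, p = A.shape
    lam = np.zeros(n)
    M_prev = 1.0                                 # initial estimate M_{-1}
    x_bar, S = np.zeros(p), 0.0
    g = lambda l: l @ b + 0.5 * np.sum((A.T @ l) ** 2)

    for _ in range(k_max):
        x_k = -A.T @ lam                         # step 1: sharp operator
        grad = b - A @ x_k                       # step 2: dual gradient
        M = 0.5 * M_prev                         # step 3: backtracking line-search
        while True:
            lam_trial = lam - grad / M           # prox of h = 0 is the identity
            Q = g(lam) + grad @ (lam_trial - lam) \
                + 0.5 * M * np.sum((lam_trial - lam) ** 2)
            if g(lam_trial) <= Q + eps / 2.0:
                break
            M *= 2.0
        lam, M_prev = lam_trial, M               # step 4: accept, update weights
        w = 1.0 / M
        S += w
        x_bar = (1.0 - w / S) * x_bar + (w / S) * x_k   # step 5: averaging
    return x_bar

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8))
b = rng.standard_normal(3)
x = uni_pd_grad(A, b)
assert np.linalg.norm(x - np.linalg.pinv(A) @ b) <= 1e-2   # sanity check
```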
Second, in the line-search procedure, we require the solution $\lambda_{k,i}$ at Step 3.a, and the evaluation of
$g(\lambda_{k,i})$. The total computational cost depends on the proximal operator of $h$ and the evaluations of
$g$. We prove below that our algorithm requires two oracle queries of $g$ on average.
Theorem 4.1. The primal sequence $\{\bar x_k\}$ generated by Algorithm 1 satisfies
$$-\|\lambda^\star\|\,\mathrm{dist}(A\bar x_k - b, \mathcal{K}) \le f(\bar x_k) - f^\star \le \frac{\hat{M}_\epsilon\|\lambda^0\|^2}{k+1} + \frac{\epsilon}{2}, \tag{11}$$
$$\mathrm{dist}(A\bar x_k - b, \mathcal{K}) \le \frac{4\hat{M}_\epsilon}{k+1}\|\lambda^0 - \lambda^\star\| + \sqrt{\frac{2\hat{M}_\epsilon\,\epsilon}{k+1}}, \tag{12}$$
where $\hat{M}_\epsilon$ is defined by (10), $\lambda^\star \in \Lambda^\star$ is an arbitrary dual solution, and $\epsilon$ is the desired accuracy.
The worst-case analytical complexity: We establish the total number of iterations $k_{\max}$ needed to achieve
an $\epsilon$-solution $\bar x_k$ of (1). The supplementary material proves that
$$k_{\max} = \inf_{0 \le \nu \le 1}\left\lceil \left( \frac{4\sqrt{2}\, M_\nu\, \|\lambda^\star\|_{[1]}}{\big({-1} + \sqrt{1 + 8\|\lambda^\star\|/\|\lambda^\star\|_{[1]}}\big)\,\epsilon} \right)^{\frac{2}{1+\nu}} \right\rceil, \tag{13}$$
where $\|\lambda^\star\|_{[1]} = \max\{\|\lambda^\star\|, 1\}$. This complexity is optimal for $\nu = 0$, but not for $\nu > 0$ [16].
At each iteration $k$, the line-search procedure at Step 3 requires evaluations of $g$. The supplementary material bounds the total number $N_1(k)$ of oracle queries, including the function $G$ and its
gradient evaluations, up to the $k$th iteration as follows:
$$N_1(k) \le 2(k+1) + 1 - \log_2(M_{-1}) + \inf_{0 \le \nu \le 1}\left\{ \frac{1-\nu}{1+\nu}\log_2\!\left(\frac{1-\nu}{(1+\nu)\,\epsilon}\right) + \frac{2}{1+\nu}\log_2 M_\nu \right\}. \tag{14}$$
Hence, we have $N_1(k) \approx 2(k+1)$, i.e., we require approximately two oracle queries at each iteration
on average.
4.2 Accelerated universal primal-dual gradient method
We now develop an accelerated scheme for solving (5). Our scheme differs from [15] in two
key aspects. First, we adopt the FISTA [10] scheme to obtain the dual sequence since it requires
fewer prox operators compared to the fast scheme in [15]. Second, we perform the line-search after
computing $\nabla g(\hat\lambda_k)$, which can reduce the number of sharp-operator computations of $f$ and $\mathcal{X}$.
Note that the application of FISTA to the dual function is not novel per se. However, we claim that
our theoretical characterization of this classical scheme based on the Hölder continuity assumption
in the composite minimization setting is new.
Algorithm 2 (Accelerated Universal Primal-Dual Gradient Method (AccUniPDGrad))
Initialization: Choose an initial point $\lambda^0 = \hat\lambda^0 \in \mathbb{R}^n$ and an accuracy level $\epsilon > 0$.
Estimate a value $M_{-1}$ such that $0 < M_{-1} \le \hat{M}_\epsilon$. Set $\hat{S}_{-1} = 0$, $t_0 = 1$ and $\tilde x_{-1} = 0^p$.
for $k = 0$ to $k_{\max}$
  1. Compute a primal solution $x^*(\hat\lambda_k) \in [-A^T\hat\lambda_k]^\sharp_{\mathcal{X},f}$.
  2. Form $\nabla g(\hat\lambda_k) = b - Ax^*(\hat\lambda_k)$.
  3. Line-search: Set $M_{k,0} = M_{k-1}$. For $i = 0$ to $i_{\max}$, perform the following steps:
    3.a. Compute the trial point $\lambda_{k,i} = \mathrm{prox}_{M_{k,i}^{-1}h}\big(\hat\lambda_k - M_{k,i}^{-1}\nabla g(\hat\lambda_k)\big)$.
    3.b. If the following line-search condition holds:
      $g(\lambda_{k,i}) \le Q_{M_{k,i}}(\lambda_{k,i}; \hat\lambda_k) + \epsilon/(2t_k)$,
    then set $i_k = i$ and terminate the line-search loop. Otherwise, set $M_{k,i+1} = 2M_{k,i}$.
    End of line-search
  4. Set $\lambda_{k+1} = \lambda_{k,i_k}$ and $M_k = M_{k,i_k}$. Compute $w_k = \frac{t_k}{M_k}$, $\hat{S}_k = \hat{S}_{k-1} + w_k$, and $\gamma_k = w_k/\hat{S}_k$.
  5. Compute $t_{k+1} = 0.5\big(1 + \sqrt{1 + 4t_k^2}\big)$ and update $\hat\lambda_{k+1} = \lambda_{k+1} + \frac{t_k - 1}{t_{k+1}}\big(\lambda_{k+1} - \lambda_k\big)$.
  6. Compute $\tilde x_k = (1 - \gamma_k)\tilde x_{k-1} + \gamma_k x^*(\hat\lambda_k)$.
end for
Output: Return the primal approximation $\tilde x_k$ for $x^\star$.
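The only new ingredients relative to Algorithm 1 are the momentum term of Step 5 and the $t_k$-weighted averaging; a minimal, hedged Python sketch of Step 5 follows.

```python
import numpy as np

def fista_momentum(t_k, lam_next, lam_prev):
    """Step 5 of Algorithm 2: the classical FISTA extrapolation, applied here
    to the dual iterates lambda_{k+1} and lambda_k."""
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
    lam_hat = lam_next + ((t_k - 1.0) / t_next) * (lam_next - lam_prev)
    return t_next, lam_hat
```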
Complexity per-iteration: The per-iteration complexity of Algorithm 2 remains essentially the same
as that of Algorithm 1.
Theorem 4.2. The primal sequence $\{\tilde x_k\}$ generated by Algorithm 2 satisfies
$$-\|\lambda^\star\|\,\mathrm{dist}(A\tilde x_k - b, \mathcal{K}) \le f(\tilde x_k) - f^\star \le \frac{4\hat{M}_\epsilon\|\lambda^0\|^2}{(k+2)^{\frac{1+3\nu}{1+\nu}}} + \frac{\epsilon}{2}, \tag{15}$$
$$\mathrm{dist}(A\tilde x_k - b, \mathcal{K}) \le \frac{16\hat{M}_\epsilon}{(k+2)^{\frac{1+3\nu}{1+\nu}}}\|\lambda^0 - \lambda^\star\| + \sqrt{\frac{8\hat{M}_\epsilon\,\epsilon}{(k+2)^{\frac{1+3\nu}{1+\nu}}}}, \tag{16}$$
where $\hat{M}_\epsilon$ is defined by (10), $\lambda^\star \in \Lambda^\star$ is an arbitrary dual solution, and $\epsilon$ is the desired accuracy.
The worst-case analytical complexity: The supplementary material proves the following worst-case
complexity of Algorithm 2 to achieve an $\epsilon$-solution $\tilde x_k$:
$$k_{\max} = \inf_{0 \le \nu \le 1}\left\lceil \left( \frac{8\sqrt{2}\, M_\nu\, \|\lambda^\star\|_{[1]}}{\big({-1} + \sqrt{1 + 8\|\lambda^\star\|/\|\lambda^\star\|_{[1]}}\big)\,\epsilon} \right)^{\frac{2+2\nu}{1+3\nu}} \right\rceil. \tag{17}$$
This worst-case complexity is optimal in the sense of first-order black box models [16].
The line-search procedure at Step 3 of Algorithm 2 also terminates after a finite number of iterations.
Similar to Algorithm 1, Algorithm 2 requires one gradient query and $i_k$ function evaluations of $g$ at
each iteration. The supplementary material proves that the number of oracle queries in Algorithm 2
is upper bounded as follows:
$$N_2(k) \le 2(k+1) + 1 + \frac{1-\nu}{1+\nu}\big[\log_2(k+1) - \log_2\epsilon\big] + \frac{2}{1+\nu}\log_2 M_\nu - \log_2(M_{-1}). \tag{18}$$
Roughly speaking, Algorithm 2 requires approximately two oracle queries per iteration on average.
5 Experiments
This section illustrates the scalability and the flexibility of our primal-dual framework using some
applications in quantum tomography (QT) and matrix completion (MC).
5.1 Quantum tomography with Pauli operators
We consider the QT problem, which aims to extract information from a physical quantum system. A
$q$-qubit quantum system is mathematically characterized by its density matrix, which is a complex
$p \times p$ positive semidefinite Hermitian matrix $X^\natural \in \mathcal{S}^p_+$, where $p = 2^q$. Surprisingly, we can provably
deduce the state from performing compressive linear measurements $b = \mathcal{A}(X) \in \mathbb{C}^n$ based on Pauli
operators $\mathcal{A}$ [18]. While the size of the density matrix grows exponentially in $q$, significantly fewer
compressive measurements (i.e., $n = O(p\log p)$) suffice to recover a pure state $q$-qubit density
matrix as a result of the following convex optimization problem:
$$\varphi^\star = \min_{X \in \mathcal{S}^p_+}\left\{ \varphi(X) := \frac{1}{2}\|\mathcal{A}(X) - b\|_2^2 : \mathrm{tr}(X) = 1 \right\}, \qquad (X^\star : \varphi(X^\star) = \varphi^\star), \tag{19}$$
where the constraint ensures that $X^\star$ is a density matrix. The recovery is also robust to noise [18].
Since the objective function has Lipschitz gradient and the constraint (i.e., the spectrahedron) is
tuning-free, the QT problem provides an ideal scalability test for both our framework and GCG-type
algorithms. To verify the performance of the algorithms with respect to the optimal solution at large scale, we remain within the noiseless setting. However, the timing and the convergence behavior of
the algorithms remain qualitatively the same under polarization and additive Gaussian noise.
[Figure residue removed: four panels plotting the objective residual $|\varphi(X_k) - \varphi^\star|$ and the relative solution error $\|X_k - X^\star\|_F/\|X^\star\|_F$ against the iteration count and the computational time (s), for UniPDGrad, AccUniPDGrad and FrankWolfe; the caption follows.]
Figure 1: The convergence behavior of the algorithms for the $q = 14$ qubit QT problem. The solid lines
correspond to the theoretical weighting scheme, and the dashed lines correspond to the line-search
(in the weighting step) variants.
To this end, we generate a random pure quantum state (e.g., a rank-1 $X^\natural$), and we take $n = 2p\log p$
random Pauli measurements. For a $q = 14$ qubit system, this corresponds to a 268,435,456-dimensional problem with $n = 317{,}983$ measurements. We recast (19) into (1) by introducing the slack
variable $r = \mathcal{A}(X) - b$.
We compare our algorithms vs. the Frank-Wolfe method, which has optimal convergence rate guarantees for this problem, and its line-search variant. Computing the sharp-operator $[x]^\sharp$ requires a
top eigenvector $e_1$ of $\mathcal{A}^*(\lambda)$, while evaluating $g$ corresponds to just computing the top eigenvalue
$\sigma_1$ of $\mathcal{A}^*(\lambda)$ via a power method. All methods use the same subroutine to compute the sharp-operator, which is based on MATLAB's eigs function. We set $\epsilon = 2 \times 10^{-4}$ for our methods and
set a wall time of $2 \times 10^4$ s to stop the algorithms. However, our algorithms seem insensitive
to the choice of $\epsilon$ for the QT problem.
Figure 1 illustrates the iteration and the timing complexities of the algorithms. The UniPDGrad algorithm, with an average of 1.978 line-search steps per iteration, has iteration and timing
performance similar to the standard Frank-Wolfe scheme with step size $\gamma_k = 2/(k+2)$. The
line-search variant of Frank-Wolfe improves over the standard one; however, our accelerated variant,
with an average of 1.057 line-search steps, is the clear winner in terms of both iterations and time.
We can empirically improve the performance of our algorithms even further by adapting a similar
line-search strategy in the weighting step as Frank-Wolfe, i.e., by choosing the weights $w_k$ in a
greedy fashion to minimize the objective function. The practical improvements due to line-search
appear quite significant.
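As a rough stand-in for the eigs subroutine mentioned above, a shifted power method suffices to extract the top eigenpair of a Hermitian matrix; the shift (our assumption) makes the algebraically largest eigenvalue also the largest in magnitude, which plain power iteration needs.

```python
import numpy as np

def top_eigpair(M, c=None, iters=500, seed=0):
    """Power method for the top (algebraically largest) eigenpair of a
    Hermitian matrix M, run on the shifted matrix M + c*I with c >= ||M||_2.
    For Hermitian M, ||M||_1 >= ||M||_2, so it is a valid cheap shift."""
    n = M.shape[0]
    if c is None:
        c = np.linalg.norm(M, 1)              # max column sum, upper-bounds ||M||_2
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v + c * v
        v = w / np.linalg.norm(w)
    mu = np.real(np.vdot(v, M @ v))           # Rayleigh quotient of unshifted M
    return mu, v
```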
5.2 Matrix completion with MovieLens dataset
To demonstrate the flexibility of our framework, we consider the popular matrix completion (MC)
application. In MC, we seek to estimate a low-rank matrix $X \in \mathbb{R}^{p \times l}$ from its subsampled entries
$b \in \mathbb{R}^n$, where $\mathcal{A}(\cdot)$ is the sampling operator, i.e., $\mathcal{A}(X) = b$.
[Figure residue removed: four panels plotting the normalized objective residual $(\varphi(X) - \varphi^\star)/\varphi^\star$ and the test RMSE against the iteration count and the computational time (min), for UniPDGrad, AccUniPDGrad and FrankWolfe; the caption follows.]
Figure 2: The performance of the algorithms for the MC problems. The dashed lines correspond to
the line-search (in the weighting step) variants, and the empty and the filled markers correspond to
formulations (20) and (21), respectively.
Convex formulations involving the nuclear norm have been shown to be quite effective in estimating
low-rank matrices from a limited number of measurements [19]. For instance, we can solve
$$\min_{X \in \mathbb{R}^{p \times l}}\left\{ \varphi(X) = \frac{1}{n}\|\mathcal{A}(X) - b\|^2 : \|X\|_* \le \kappa \right\} \tag{20}$$
with Frank-Wolfe-type methods, where $\kappa$ is a tuning parameter, which may not be available a priori.
We can also solve the following parameter-free version:
$$\min_{X \in \mathbb{R}^{p \times l}}\left\{ \varphi(X) = \frac{1}{n}\|X\|_*^2 : \mathcal{A}(X) = b \right\}. \tag{21}$$
While the nonsmooth objective of (21) avoids the tuning parameter, it clearly burdens the computational efficiency of the convex optimization algorithms.
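For (20), the sharp operator reduces to a linear minimization oracle over the nuclear-norm ball, which only needs the top singular pair; a hedged sketch using scipy's svds as a stand-in for PROPACK's lansvd:

```python
import numpy as np
from scipy.sparse.linalg import svds

def lmo_nuclear_ball(G, kappa):
    """argmin_{||X||_* <= kappa} <G, X> = -kappa * u1 v1^T, where (u1, v1)
    is the top singular pair of G (a rank-one update, as in Frank-Wolfe)."""
    u, s, vt = svds(G.astype(float), k=1)   # requires G larger than 1x1
    return -kappa * np.outer(u[:, 0], vt[0, :])
```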
We apply our algorithms to (20) and (21) using the MovieLens 100K dataset. Frank-Wolfe algorithms cannot handle (21) and only solve (20). For this experiment, we did not pre-process the data
and took the default ub test and training data partition. We start our algorithms from $\lambda^0 = 0^n$, we
set the target accuracy $\epsilon = 10^{-3}$, and we choose the tuning parameter $\kappa = 9975/2$ as in [20]. We
use the lansvd function (MATLAB version) from PROPACK [21] to compute the top singular vectors,
and a simple implementation of the power method to find the top singular value in the line-search,
both with $10^{-5}$ relative error tolerance.
The first two plots in Figure 2 show the performance of the algorithms for (20). Our metrics are
the normalized objective residual and the root mean squared error (RMSE) calculated on the test
data. Since we do not have access to the optimal solutions, we approximated the optimal values,
$\varphi^\star$ and RMSE$^\star$, by 5000 iterations of AccUniPDGrad. The other two plots in Figure 2 compare the
performance of formulations (20) and (21), which are represented by the empty and the filled
markers, respectively. Note that the dashed line for AccUniPDGrad corresponds to the line-search
variant, where the weights $w_k$ are chosen to minimize the feasibility gap. Additional details about
the numerical experiments can be found in the supplementary material.
6 Conclusions
This paper proposes a new primal-dual algorithmic framework that combines the flexibility of proximal primal-dual methods in addressing the general template (1) with the computational advantages
of the GCG-type methods. The algorithmic instances of our framework are universal
since they can automatically adapt to the unknown Hölder continuity properties implied by the template. Our analysis technique unifies Nesterov's universal gradient methods and GCG-type methods
to address the more broadly applicable primal-dual setting. The hallmarks of our approach include
the optimal worst-case complexity and its flexibility to handle nonsmooth objectives and complex
constraints, compared to existing primal-dual algorithms as well as GCG-type algorithms, while essentially preserving their low per-iteration cost.
Acknowledgments
This work was supported in part by ERC Future Proof, SNF 200021-146750 and SNF CRSII2-147633. We would like to thank Dr. Stephen Becker of the University of Colorado at Boulder for his
support in preparing the numerical experiments.
References
[1] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. J. Mach. Learn. Res. Workshop & Conf. Proc., vol. 28, pp. 427–435, 2013.
[2] V. Cevher, S. Becker, and M. Schmidt. Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Process. Mag., vol. 31, pp. 32–43, Sept. 2014.
[3] M. J. Wainwright. Structured regularizers for high-dimensional problems: Statistical and computational issues. Annu. Review Stat. and Applicat., vol. 1, pp. 233–253, Jan. 2014.
[4] G. Lan and R. D. C. Monteiro. Iteration-complexity of first-order augmented Lagrangian methods for convex programming. Math. Program., pp. 1–37, Jan. 2015, doi:10.1007/s10107-015-0861-x.
[5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. and Trends in Machine Learning, vol. 3, pp. 1–122, Jan. 2011.
[6] P. L. Combettes and J.-C. Pesquet. A proximal decomposition method for solving convex variational inverse problems. Inverse Problems, vol. 24, Nov. 2008, doi:10.1088/0266-5611/24/6/065014.
[7] T. Goldstein, E. Esser, and R. Baraniuk. Adaptive primal-dual hybrid gradient methods for saddle point problems. 2013, http://arxiv.org/pdf/1305.0546.
[8] R. Shefi and M. Teboulle. Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization. SIAM J. Optim., vol. 24, pp. 269–297, Feb. 2014.
[9] Q. Tran-Dinh and V. Cevher. Constrained convex minimization via model-based excessive gap. In Advances Neural Inform. Process. Syst. 27 (NIPS 2014), Montreal, Canada, 2014.
[10] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., vol. 2, pp. 183–202, Mar. 2009.
[11] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., vol. 103, pp. 127–152, May 2005.
[12] A. Juditsky and A. Nemirovski. Solving variational inequalities with monotone operators on domains given by Linear Minimization Oracles. Math. Program., pp. 1–36, Mar. 2015, doi:10.1007/s10107-015-0876-3.
[13] Y. Yu. Fast gradient algorithms for structured sparsity. PhD dissertation, Univ. Alberta, Edmonton, Canada, 2014.
[14] Y. Nesterov. Complexity bounds for primal-dual methods minimizing the model of objective function. CORE, Univ. Catholique Louvain, Belgium, Tech. Rep., 2015.
[15] Y. Nesterov. Universal gradient methods for convex optimization problems. Math. Program., vol. 152, pp. 381–404, Aug. 2015.
[16] A. Nemirovskii and D. Yudin. Problem complexity and method efficiency in optimization. Hoboken, NJ: Wiley Interscience, 1983.
[17] R. T. Rockafellar. Convex analysis (Princeton Math. Series). Princeton, NJ: Princeton Univ. Press, 1970.
[18] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert. Quantum state tomography via compressed sensing. Phys. Rev. Lett., vol. 105, Oct. 2010, doi:10.1103/PhysRevLett.105.150401.
[19] E. Candès and B. Recht. Exact matrix completion via convex optimization. Commun. ACM, vol. 55, pp. 111–119, June 2012.
[20] M. Jaggi and M. Sulovský. A simple algorithm for nuclear norm regularized problems. In Proc. 27th Int. Conf. Machine Learning (ICML 2010), Haifa, Israel, 2010, pp. 471–478.
[21] R. M. Larsen. PROPACK - Software for large and sparse SVD calculations. Available: http://sun.stanford.edu/~rmunk/PROPACK/.
Sample Complexity of Episodic Fixed-Horizon
Reinforcement Learning
Emma Brunskill
Computer Science Department
Carnegie Mellon University
[email protected]
Christoph Dann
Machine Learning Department
Carnegie Mellon University
[email protected]
Abstract
Recently, there has been significant progress in understanding reinforcement
learning in discounted infinite-horizon Markov decision processes (MDPs) by deriving tight sample complexity bounds. However, in many real-world applications,
an interactive learning agent operates for a fixed or bounded period of time, for
example tutoring students for exams or handling customer service requests. Such
scenarios can often be better treated as episodic fixed-horizon MDPs, for which
only looser bounds on the sample complexity exist. A natural notion of sample
complexity in this setting is the number of episodes required to guarantee a certain
performance with high probability (PAC guarantee). In this paper, we derive an
upper PAC bound $\tilde O\big(\frac{|S|^2|A|H^2}{\epsilon^2}\ln\frac{1}{\delta}\big)$ and a lower PAC bound $\tilde\Omega\big(\frac{|S||A|H^2}{\epsilon^2}\ln\frac{1}{\delta+c}\big)$
that match up to log-terms and an additional linear dependency on the number of
states $|S|$. The lower bound is the first of its kind for this setting. Our upper bound
leverages Bernstein's inequality to improve on previous bounds for episodic finite-horizon MDPs, which have a time-horizon dependency of at least $H^3$.
1 Introduction and Motivation
Consider test preparation software that tutors students for a national advanced placement exam taken
at the end of a year, or maximizing business revenue by the end of each quarter. Each individual
task instance requires making a sequence of decisions for a fixed number of steps H (e.g., tutoring
one student to take an exam in spring 2015 or maximizing revenue for the end of the second quarter
of 2014). Therefore, they can be viewed as a finite-horizon sequential decision making under uncertainty problem, in contrast to an infinite horizon setting in which the number of time steps is infinite.
When the domain parameters (e.g. Markov decision process parameters) are not known in advance,
and there is the opportunity to repeat the task many times (teaching a new student for each year?s
exam, maximizing revenue for each new quarter), this can be treated as episodic fixed-horizon reinforcement learning (RL). One important question is to understand how much experience is required
to act well in this setting. We formalize this as the sample complexity of reinforcement learning [1],
which is the number of time steps on which the algorithm may select an action whose value is not
near-optimal. RL algorithms with a sample complexity that is a polynomial function of the domain
parameters are referred to as Probably Approximately Correct (PAC) [2, 3, 4, 1]. Though there has
been significant work on PAC RL algorithms for the infinite horizon setting, there has been relatively
little work on the finite horizon scenario.
In this paper we present the first, to our knowledge, lower bound, and a new upper bound on the
sample complexity of episodic finite horizon PAC reinforcement learning in discrete state-action
spaces. Our bounds are tight up to log-factors in the time horizon $H$, the accuracy $\epsilon$, the number
of actions $|A|$ and up to an additive constant in the failure probability $\delta$. These bounds improve
upon existing results by a factor of at least $H$. Our results also apply when the reward model
is a function of the within-episode time step in addition to the state and action space. While we
assume a stationary transition model, our results can be extended readily to time-dependent state
transitions. Our proposed UCFH (Upper-confidence fixed-horizon RL) algorithm, which achieves our
upper PAC guarantee, can be applied directly to a wide range of fixed-horizon episodic MDPs with
known rewards. It does not require additional structure such as assuming access to a generative
model [8] or that the state transitions are sparse or acyclic [6].
The limited prior research on upper bound PAC results for finite horizon MDPs has focused on
different settings, such as partitioning a longer trajectory into fixed length segments [4, 1], or considering a sliding time window [9]. The tightest dependence on the horizon in terms of the number
of episodes presented in these approaches is at least H 3 whereas our dependence is only H 2 . More
importantly, such alternative settings require the optimal policy to be stationary, whereas in general
in finite horizon settings the optimal policy is nonstationary (e.g. is a function of both the state and
the within episode time-step).2 Fiechter [10, 11] and Reveliotis and Bountourelis [12] do tackle a
closely related setting, but find a dependence that is at least H 4 .
Our work builds on recent work [6, 8] on PAC infinite horizon discounted RL that offers much tighter
upper and lower sample complexity bounds than was previously known. To use an infinite horizon
algorithm in a finite horizon setting, a simple change is to augment the state space by the time step
(ranging over 1, . . . , H), which enables the learned policy to be non-stationary in the original state
space (or equivalently, stationary in the newly augmented space). Unfortunately, since these recent
bounds are in general a quadratic function of the state space size, the proposed state space expansion
would introduce at least an additional H 2 factor in the sample complexity term, yielding at least a
H 4 dependence in the number of episodes for the sample complexity.
Somewhat surprisingly, we prove an upper bound on the sample complexity for the finite horizon
case that only scales quadratically with the horizon. A key part of our proof is that the variance of
the value function in the finite horizon setting satisfies a Bellman equation. We also leverage recent
insights that state?action pairs can be estimated to different precisions depending on the frequency to
which they are visited under a policy, extending these ideas to also handle when the policy followed
is nonstationary. Our lower bound analysis is quite different than some prior infinite-horizon results,
and involves a construction of parallel multi-armed bandits where it is required that the best arm in
a certain portion of the bandits is identified with high probability to achieve near-optimality.
2
Problem Setting and Notation
We consider episodic fixed-horizon MDPs, which can be formalized as a tuple M =
(S, A, r, p, p0 , H). Both, the statespace S and the actionspace A are finite sets. The learning agent
interacts with the MDP in episodes of H time steps. At time t = 1 . . . H, the agent observes a state
st and choses an action at based on a policy ? that potentially depends on the within-episode time
step, i.e., at = ?t (st ) for t = 1, . . . , H. The next state is sampled from the stationary transition
kernel st+1 ? p(?|st , at ) and the initial state from s1 ? p0 . In addition the agent receives a reward
drawn from a distribution3 with mean rt (st ) determined by the reward function. The reward function r is possibly time-dependent and takes values in
[0, 1]. The quality
of a policy ? is evaluated by
hP
i
H
?
1
the total expected reward of an episode RM = E
t=1 rt (st ) . For simplicity, we assume that
the reward function r is known to the agent but the transition kernel p is unknown. The question
we study is how many episodes does a learning agent follow a policy ? that is not -optimal, i.e.,
?
?
RM
? > RM
, with probability at least 1 ? ? for any chosen accuracy and failure probability ?.
? and
Notation. In the following sections, we reason about the true MDP M , an empirical MDP M
?
an optimistic MDP M which are identical except for their transition probabilities p, p? and p?t . We
will provide more details about these MDPs later. We introduce the notation explicitly only for M
? and M
? with additional tildes or hats by replacing p with p?t or p?.
but the quantities carry over to M
1
Previous works [5] have shown that the complexity of learning state transitions usually dominates learning
reward functions. We therefore follow existing sample complexity analyses [6, 7] and assume known rewards
for simplicity. The algorithm and PAC bound can be extended readily to the case of unknown reward functions.
2
The best action will generally depend on the state and the number of remaining time steps. In the tutoring
example, even if the student has the same state of knowledge, the optimal tutor decision may be to space
practice if there is many days till the test and provide intensive short-term practice if the test is tomorrow.
3
It is straightforward to have the reward depend on the state, or state/action or state/action/next state.
2
P
The (linear) operator Pi? f (s) := E[f (si+1 )|si = s] = s0 ?S p(s0 |s, ?i (s))f (s0 ) takes any function
f : S ? R and returns the expected value of f with respect to the next time step.4 For convenience,
?
?
we define the multi-step version as Pi:j
f := Pi? Pi+1
. . . P ? f . The value function from time i to
hP
i Pj
j
j
?
?
? ?
time j is defined as Vi:j
(s) := E
t=i rt (st )|si = s =
t=i Pi:t?1 rt = Pi Vi+1:j (s) + ri (s)
?
is the optimal value-function. When the policy is clear, we omit the superscript ?.
and Vi:j
We denote by S(s, a) ? S the set of possible successor states of state s and action a. The maximum
number of them is denoted by C = maxs,a?S?A |S(s, a)|. In general, without making further
assumptions, we have C = |S|, though in many practical domains (robotics, user modeling) each
state can only transition to a subset of the full set of states (e.g. a robot can?t teleport across the
? is similar to the usual O-notation but
building, but can only take local moves). The notation O
?
ignores log-terms. More precisely f = O(g) if there are constants c1 , c2 such that f ? c1 g(ln g)c2
? The natural logarithm is ln and log = log2 is the base-2 logarithm.
and analogously for ?.
3
Upper PAC-Bound
We now introduce a new model-based algorithm, UCFH, for RL in finite horizon episodic domains.
We will later prove UCFH is PAC with an upper bound on its sample complexity that is smaller
than prior approaches. Like many other PAC RL algorithms [3, 13, 14, 15], UCFH uses an optimism under uncertainty approach to balance exploration and exploitation. The algorithm generally
works in phases comprised of optimistic planning, policy execution and model updating that take
several episodes each. Phases are indexed by k. As the agent acts in the environment and observes
(s, a, r, s0 ) tuples, UCFH maintains a confidence set over the possible transition parameters for each
state-action pair that are consistent with the observed transitions. Defining such a confidence set that
holds with high probability can be be achieved using concentration inequalities like the Hoeffding
inequality. One innovation in our work is to use a particular new set of conditions to define the confidence set that enables us to obtain our tighter bounds. We will discuss the confidence sets further
below. The collection of these confidence sets together form a class of MDPs Mk that are consistent
? k as the maximum likelihood estimate of the MDP given the
with the observed data. We define M
previous observations.
Given Mk , UCFH computes a policy ? k by performing optimistic planning. Specifically, we use
a finite horizon variant of extended value iteration (EVI) [5, 14]. EVI performs modified Bellman
backups that are optimistic with respect to a given set of parameters. That is, given a confidence
set of possible transition model parameters, it selects in each time step the model within that set
that maximizes the expected sum of future rewards. Appendix A provides more details about fixed
horizon EVI.
UCFH then executes ? k until there is a state-action pair (s, a) that has been visited often enough
since its last update (defined precisely in the until-condition in UCFH). After updating the model
statistics for this (s, a)-pair, a new policy ? k+1 is obtained by optimistic planning again. We refer to
each such iteration of planning-execution-update as a phase with index k. If there is no ambiguity,
we omit the phase indices k to avoid cluttered notation.
UCFH is inspired by the infinite-horizon UCRL-? algorithm by Lattimore and Hutter [6] but has
several important differences. First, the policy can only be updated at the end of an episode, so
there is no need for explicit delay phases as in UCRL-?. Second, the policies ? k in UCFH are
time-dependent. Finally, UCFH can directly deal with non-sparse transition probabilities, whereas
UCRL-? only directly allows two possible successor states for each (s, a)-pair (C = 2).
Confidence sets. The class of MDPs Mk consists of fixed-horizon MDPs M 0 with the known true
reward function r and where the transition probability p0t (s0 |s, a) from any (s, a) ? S ? A to s0 ?
? . Solely
S(s, a) at any time t is in the confidence set induced by p?(s0 |s, a) of the empirical MDP M
for the purpose of computationally more efficient optimistic planning, we allow time-dependent
transitions (allows choosing different transition models in different time steps to maximize reward),
but this does not affect the theoretical guarantees as the true stationary MDP is still in Mk with high
4
The definition also works for time-dependent transition probabilities.
3
Algorithm 1: UCFH: Upper-Confidence Fixed-Horizon episodic reinforcement learning algorithm
Input: desired accuracy ? (0, 1], failure tolerance ? ? (0, 1], fixed-horizon MDP M
Result: with probability at least 1 ? ?: -optimal policy
|S|H
?
k := 1,
wmin := 4H|S|
,
?1 := 2Umax
,
Umax := |S ? A| log2 w
;
min
2 2C
2
2
2
2
6|S?A|C log2 (4|S| H /)
2 8H |S|
2 CH
m := 512(log2 log2 H) 2 log
ln
;
?
n(s, a) = v(s, a) = n(s, a, s0 ) := 0
while do
?, s ? S, a ? A, s0 ? S(s, a);
/* Optimistic planning
*/
/* Execute policy
*/
p?(s0 |s, a) := n(s, a, s0 )/n(s, a), for all (s, a) with n(s, a) > 0 and s0 ? S(s, a);
? ? Mnonst. : ?(s, a) ? S ? A, t = 1 . . . H, s0 ? S(s, a)
Mk := M
p?t (s0 |s, a) ? ConfidenceSet(?
p(s0 |s, a), n(s, a)) ;
? k , ? k := FixedHorizonEVI(Mk );
M
repeat
SampleEpisode(? k ) ; // from M using ? k
until there is a (s, a) ? S ? A with v(s, a) ? max{mwmin , n(s, a)} and n(s, a) < |S|mH;
/* Update model statistics for one (s, a)-pair with condition above
*/
n(s, a) := n(s, a) + v(s, a);
n(s, a, s0 ) := n(s, a, s0 ) + v(s, a, s0 ) ?s0 ? S(s, a);
v(s, a) := v(s, a, s0 ) := 0 ?s0 ? S(s, a); k := k + 1
Procedure SampleEpisode(?)
s0 ? p0 ;
for t = 0 to H ? 1 do
at := ?t+1 (st ) and st+1 ? p(?|st , at );
v(st , at ) := v(st , at ) + 1 and v(st , at , st+1 ) := v(st , at , st+1 ) + 1;
Function ConfidenceSet(p,
n)
P :=
2 ln(6/?1 )
,
(1)
p0 ? [0, 1] :if n > 1 : |p0 (1 ? p0 ) ? p(1 ? p)| ?
n?1
!
r
r
ln(6/?1 )
2p(1 ? p)
2
6
|p ? p0 | ? min
,
ln(6/?1 ) +
ln
(2)
2n
n
3n ?1
return P
probability. Unlike the confidence intervals used by Lattimore and Hutter [6], we not only include
conditions based on Hoeffding?s inequality5 and Bernstein?s inequality (Eq. 2), but also require that
the variance p(1 ? p) of the Bernoulli random variable associated with this transition is close to the
empirical one (Eq. 1). This additional condition (Eq. 1) is key for making the algorithm directly
applicable to generic MDPs (in which states can transition to any number of next states, e.g. C > 2)
while only having a linear dependency on C in the PAC bound.
3.1 PAC Analysis
For simplicity we assume that each episode starts in a fixed start state s0 . This assumption is not
crucial and can easily be removed by additional notational effort.
Theorem 1. For any 0 < , ? ? 1, the following holds. With probability at least 1 ? ?, UCFH
produces a sequence of policies ? k , that yield at most
2
? H C|S ? A| ln 1
O
2
?
k
k
?
?
episodes with R? ? R? = V1:H
(s0 ) ? V1:H
(s0 ) > . The maximum number of possible successor
states is denoted by 1 < C ? |S|.
5
The first condition in the min in Equation (2) is actually not necessary for the theoretical results to hold. It
can be removed and all 6/?1 can be replaced by 4/?1 .
4
Similarities to other analyses. The proof of Theorem 1 is quite long and involved, but builds on
similar techniques for sample-complexity bounds in reinforcement learning (see e.g. Brafman and
Tennenholtz [3], Strehl and Littman [16]). The general proof strategy is closest to the one of UCRL-?
[6] and the obtained bounds are similar if we replace the time horizon H with the equivalent in the
discounted case 1/(1 ? ?). However, there are important differences that we highlight now briefly.
? A central quantity in the analysis by Lattimore and Hutter [6] is the local variance of the value
function. The exact definition for the fixed-horizon case will be given below. The key insight for
the almost tight bounds of Lattimore and Hutter [6] and Azar et al. [8] is to leverage the fact that
these local variances satisfy a Bellman equation [17] and so the discounted sum of local variances
can be bounded by O((1 ? ?)?2 ) instead of O((1 ? ?)?3 ). We prove in Lemma 4 that local value
2
function variances ?i:j
also satisfy a Bellman equation for fixed-horizon MDPs even if transition
probabilities and rewards are time-dependent. This allows us to bound the total sum of local
variances by O(H 2 ) and obtain similarly strong results in this setting.
? Lattimore and Hutter [6] assumed there are only two possible successor states (i.e., C = 2) which
2
allows them to easily relate the local variances ?i:j
to the difference of the expected value of
successor states in the true and optimistic MDP (Pi ? P?i )V?i+1:j . For C > 2, the relation is less
clear, but we address this by proving a bound with tight dependencies on C (Lemma C.6).
? To avoid super-linear dependency on C in the final PAC bound, we add the additional condition
in Equation (1) to the confidence set. We show that this allows us to upper-bound the total reward
k
2
difference R? ? R? of policy ? k with terms that either depend on ?i:j
or decrease linearly in
the number of samples. This gives the desired linear dependency on C in the final bound. We
therefore avoid assuming C = 2 which makes UCFH directly applicable to generic MDPs with
C > 2 without the impractical transformation argument used by Lattimore and Hutter [6].
We will now introduce the notion of knownness and importance of state-action pairs that is essential
for the analysis of UCFH and subsequently present several lemmas necessary for the proof of Theorem 1. We only sketch proofs here but detailed proofs for all results are available in the appendix.
Fine-grained categorization of (s, a)-pairs. Many PAC RL sample complexity proofs [3, 4, 13,
14] only have a binary notion of ?knownness?, distinguishing between known (transition probability estimated sufficiently accurately) and unknown (s, a)-pairs. However, as recently shown by
Lattimore and Hutter [6] for the infinite horizon setting, it is possible to obtain much tighter sample
complexity results by using a more fine grained categorization. In particular, a key idea is that in order to obtain accurate estimates of the value function of a policy from a starting state, it is sufficient
to have only a loose estimate of the parameters of (s, a)-pairs that are unlikely to be visited under
this policy.
Let the weight of a (s, a)-pair given policy ? k be its expected frequency in an episode
wk (s, a) :=
H
X
P(st = s, ?tk (st ) = a) =
t=1
H
X
P1:t?1 I{s = ?, a = ?tk (s)}(s0 ).
t=1
The importance ?k of (s, a) is its relative weight compared to wmin := 4H|S|
on a log-scale
wk (s, a)
?k (s, a) := min zi : zi ?
where z1 = 0 and zi = 2i?2 ?i = 2, 3, . . . .
wmin
Note that ?k (s, a) ? {0, 1, 2, 4, 8, 16 . . . } is an integer indicating the influence of the state-action
pair on the value function of ? k . Similarly, we define the knownness
nk (s, a)
?k (s, a) := max zi : zi ?
? {0, 1, 2, 4, . . . }
mwk (s, a)
which indicates how often (s, a) has been observed relative to its importance. The constant m is
defined in Algorithm 1. We can now categorize (s, a)-pairs into subsets
? k = S ? A \ Xk
Xk,?,? := {(s, a) ? Xk : ?k (s, a) = ?, ?k (s, a) = ?} and X
? k the set of state-action pairs
where Xk = {(s, a) ? S ? A : ?k (s, a) > 0} is the active set and X
that are very unlikely under the current policy. Intuitively, the model of UCFH is accurate if only few
5
(s, a) are in categories with low knownness ? that is, important under the current policy but have
not been observed often so far. Recall that over time observations are generated under many policies
(as the policy is recomputed), so this condition does not always hold. We will therefore distinguish
between phases k where |Xk,?,? | ? ? for all ? and ? and phases where this condition is violated.
The condition essentially allows for only a few (s, a) in categories that are less known and more and
more (s, a) in categories that are more well known. In fact, we will show that the policy is -optimal
with high probability in phases that satisfy this condition.
We first show the validity of the confidence sets Mk .
Lemma 1 (Capturing the true MDP whp.). M ? Mk for all k with probability at least 1 ? ?/2.
Proof Sketch. By combining Hoeffding?s inequality, Bernstein?s inequality and the concentration result on empirical variances by Maurer and Pontil [18] with the union bound, we get that p(s0 |s, a) ?
P with probability at least 1 ? ?1 for a single phase k, fixed s, a ? S ? A and fixed s0 ? S(s, a). We
then show that the number of model updates is bounded by Umax and apply the union bound.
The following lemma bounds the number of episodes in which ??, ? : |Xk,?,? | ? ? is violated with
high probability.
Lemma 2. Let E be the number of episodes k for which there are ? and ? with |Xk,?,? | > ?,
P?
2Emax
6H 2
i.e. E =
. Then P(E ?
k=1 I{?(?, ?) : |Xk,?,? | > ?} and assume that m ?
ln
?
H
6N Emax ) ? 1 ? ?/2 where N = |S ? A| m and Emax = log2 wmin log2 |S|.
Proof Sketch. We first bound the total number of times a fixed pair (s, a) can be observed while
being in a particular category Xk,?,? in all phases k for 1 ? ? < |S|. We then show that for a
particular (?, ?), the number of episodes where |Xk,?,? | > ? is bounded with high probability, as the
value of ? implies a minimum probability of observing each (s, a) pair in Xk,?,? in an episode. Since
the observations are not independent we use martingale concentration results to show the statement
for a fixed (?, ?). The desired result follows with the union bound over all relevant ? and ?.
The next lemma states that in episodes where the condition ??, ? : |Xk,?,? | ? ? is satisfied and the
true MDP is in the confidence set, the expected optimistic policy value is close to the true value.
This lemma is the technically most involved part of the proof.
Lemma 3 (Bound mismatch in total reward). Assume
M
? Mk . If |Xk,?,? | ? ? for all (?, ?) and
2 8H 2 |S|2
CH 2
?k
?k
2
(s0 )| ? .
(s0 )?V1:H
0 < ? 1 and m ? 512 2 (log2 log2 H) log2
ln ?61 . Then |V?1:H
Proof Sketch. q
Using basic
algebraic
?
transformations, we show that |p ? p?|
p
1
1
1
1
p?(1 ? p?)O
+ O n ln ?1 for each p?, p ? P in the confidence set as defined
n ln ?1
in Eq. 2. Since we assume M ? Mk , we know that p(s0 |s, a) and p?(s0 |s, a) satisfy this bound
with n(s, a) for all s,a and s0 . We use that to bound the difference of the expected
value function
CH
?
?
?
of the successor state in M and M , proving that |(Pi ? Pi )Vi+1:j (s)| ? O n(s,?(s)) ln ?11 +
q
C
1
O
?i:j (s), where the local variance of the value function is defined as
n(s,?(s)) ln ?1 ?
?
2
?
2
2
?i:j (s, a) := E (Vi+1:j (si+1 ) ? Pi? Vi+1:j
(si ))2 |si = s, ai = a
and ?i:j
(s) := ?i:j
(s, ?i (s)).
P
H?1
This bound then is applied to |V?1:H (s0 ) ? V1:H (s0 )| ? t=0 P1:t |(Pt ? P?t )V?t+1:H (s)|. The basic
idea is to split the bound into a sum of two parts by partitioning of the (s, a) space by knownness,
? ?,? for all ? and ? and (st , at ) ? X.
? Using the fact that w(st , at ) and
e.g. that is (st , at ) ? X
n(st , at ) are tightly coupled for each (?, ?), we can bound the expression eventually by . The final
PH
key ingredient in the remainder of the proof is to bound t=1 P1:t?1 ?t:H (s)2 by O(H 2 ) instead of
the trivial bound O(H 3 ). To this end, we show the lemma below.
Lemma 4. The variance of the value function defined as V?i:j (s)
:=
2
Pj
?
2
E
|si = s satisfies a Bellman equation Vi:j = Pi Vi+1:j + ?i:j
t=i rt (st ) ? Vi:j (si )
Pj
2
2
which gives Vi:j =
Since 0 ? V1:H ? H 2 rmax
, it follows that
t=i Pi:t?1 ?t:j .
Pj
2
2 2
0 ? t=1 Pi:t?1 ?t:j (s) ? H rmax for all s ? S.
6
p(i|0, a) =
0
1
n
+
1
2
..
.
?
n
r(+) = 1
p(+|i, a) =
1
2
+ 0i (a)
p(?|i, a) =
1
2
? 0i (a)
r(?) = 0
Figure 1: Class of a hard-to-learn finite horizon MDPs. The function 0 is defined as 0 (a1 ) = /2,
0 (a?i ) = and otherwise 0 (a) = 0 where a?i is an unknown action per state i and is a parameter.
Proof Sketch. The proof works by induction and uses fact that the value function satisfies the Bellman equation and the tower-property of conditional expectations.
Proof Sketch for Theorem 1. The proof of Theorem 1 consists of the following major parts:
1. The true MDP is in the set of MDPs Mk for all phases k with probability at least 1? 2? (Lemma 1).
2. The FixedHorizonEVI algorithm computes a value function whose optimistic value is higher
than the optimal reward in the true MDP with probability at least 1 ? ?/2 (Lemma A.1).
3. The number of episodes with |Xk,?,? |> ? for some
? and ? are bounded with probability at least
|S|
H2
?
?
1 ? ?/2 by O(|S ? A| m) if m = ?
ln
(Lemma 2).
?
4. If |Xk,?,? | ? ? for all ?, ?, i.e., relevant state-action pairs are sufficiently known and m =
? CH2 2 ln 1 , then the optimistic value computed is -close to the true MDP value. Together
?
?1
with part 2, we get that with high probability, the policy ? k is -optimal in this case.
2
1
? C|S?A|H
5. From parts 3 and 4, with probability 1 ? ?, there are at most O
ln
2
? episodes that
are not -optimal.
4
Lower PAC Bound
Theorem 2. There exist positive constants c1 , c2 , ?0 , 0 such that for every ? ? (0, ?0 ) and ?
(0, 0 ) and for every algorithm A that satisfies a PAC guarantee for (, ?) and outputs a deterministic
policy, there is a fixed-horizon episodic MDP Mhard with
c2
c2
|S ? A|H 2
c1 (H ? 2)2 (|A| ? 1)(|S| ? 3)
ln
ln
=?
(3)
E[nA ] ?
2
? + c3
2
? + c3
where nA is the number of episodes until the algorithm?s policy is (, ?)-accurate. The constants
?4
1
H?2
?4
, 0 = 640e
/80.
can be set to ?0 = e80 ? 5000
4 ? H/35000, c2 = 4 and c3 = e
The ranges of possible ? and are of similar order than in other state-of-the-art lower bounds for
multi-armed bandits [19] and discounted MDPs [14, 6]. They are mostly determined by the bandit
result by Mannor and Tsitsiklis [19] we build on. Increasing the parameter limits ?0 and 0 for
bandits would immediately result in larger ranges in our lower bound, but this was not the focus of
our analysis.
Proof Sketch. The basic idea is to show that the class of MDPs shown in Figure 1 require at least a
number of observed episodes of the order of Equation (3). From the start state 0, the agent ends up
in states 1 to n with equal probability, independent of the action. From each such state i, the agent
transitions to either a good state + with reward 1 or a bad state ? with reward 0 and stays there for
the rest of the episode. Therefore, each state i = 1, . . . , n is essentially a multi-armed bandit with
binary rewards of either 0 or H ? 2. For each bandit, the probability of ending up in + or ? is
equal except for the first action a1 with p(st+1 = +|st = i, at = a1 ) = 1/2 + /2 and possibly an
unknown optimal action a?i (different for each state i) with p(st+1 = +|st = i, at = a?i ) = 1/2 + .
In the episodic fixed-horizon setting we are considering, taking a suboptimal action in one of the
bandits does not necessarily yield a suboptimal episode. We have to consider the average over all
bandits instead. In an -optimal episode, the agent therefore needs to follow a policy that would
solve at least a certain portion of all n multi-armed bandits with probability at least 1 ? ?. We show
that the best strategy for the agent to achieve this is to try to solve all bandits with equal probability.
The number of samples required to do so then results in the lower bound in Equation (3).
7
Similar MDPs that essentially solve multiple of such multi-armed bandits have been used to prove
lower sample-complexity bounds for discounted MDPs [14, 6]. However, the analysis in the infinite
horizon case as well as for the sliding-window fixed-horizon optimality criterion considered by
Kakade [4] is significantly simpler. For these criteria, every time step the agent follows a policy that
is not -optimal counts as a ?mistake?. Therefore, every time the agent does not pick the optimal
arm in any of the multi-armed bandits counts as a mistake. This contrasts with our fixed-horizon
setting where we must instead consider taking an average over all bandits.
5
Related Work on Fixed-Horizon Sample Complexity Bounds
We are not aware of any lower sample complexity bounds beyond multi-armed bandit results that
directly apply to our setting. Our upper bound in Theorem 1 improves upon existing results by at
least a factor of H. We briefly review those existing results in the following.
Timestep bounds. Kakade [4, Chapter 8] proves upper and lower PAC bounds for a similar setting where the agent interacts indefinitely with the environment but the interactions are divided in
segments of equal length and the agent is evaluated by the expected
of rewards
sum
until the end
2
6
1 6
? |S| |A|H
of each segment. The bound states that there are not more than O
time steps in
ln
3
?
which the agents acts -suboptimal. Strehl etal. [1] improves
the state-dependency of these bounds
|S||A|H 5
1
?
for their delayed Q-learning algorithm to O
ln ? . However, in episodic MDP it is more
4
natural to consider performance on the entire episode since suboptimality near the end of the episode
is no issue as long as the total reward on the entire episode is sufficiently high. Kolter and Ng [9]
use an interesting sliding-window criterion, but prove bounds for a Bayesian setting instead of PAC.
Timestep-based bounds can be applied to the episodic case by augmenting the original statespace
with a time-index per episode to allow resets after H steps. This adds H dependencies for each |S|
in the original bound which results in a horizon-dependency of at least H 6 of these existing bounds.
Translating the regret bounds of UCRL2 in Corollary 3by Jaksch et al. [20] yields a PAC-bound
2
3
? |S| |A|H
ln 1? even if one ignores the reset after H time
on the number of episodes of at least O
2
steps. Timestep-based lower PAC-bounds cannot be applied directly to the episodic reward criterion.
Episode bounds. Similar to us, Fiechter [10] uses the value of initial states as optimality-criterion,
2
7
1
? |S| |A|H
but defines the value w.r.t. the ?-discounted infinite horizon. His results of order O
ln
2
?
?
?
episodes of length O(1/(1
? ?)) ? O(H)
are therefore not directly applicable to our setting. Auer
and Ortner [5] investigate the same setting as we and propose
10a UCB-type
algorithm that has no7
1
? |S| |A|H
ln
regret, which translates into a basic PAC bound of order O
3
? episodes. We improve
on this bound substantially in terms of its dependency on H, |S| and . Reveliotis and Bountourelis
[12] also consider the episodic undiscounted fixed-horizon setting and present an efficient algorithm
in cases where the transition graph is acyclic and the agent knows for each state a policy that visits
this state with a known minimum probability
assumptions
are quite limiting and rarely
q. These
|S||A|H 4
1
?
ln ? explicitly depends on 1/q.
hold in practice and their bound of order O
2 q
6
Conclusion
We have shown upper and lower bounds on the sample complexity of episodic fixed-horizon RL that
are tight up to log-factors in the time horizon H, the accuracy , the number of actions |A| and up
to an additive constant in the failure probability ?. These bounds improve upon existing results by a
factor of at least H. One might hope to reduce the dependency of the upper bound on |S| to be linear
by an analysis similar to Mormax [7] for discounted MDPs which has sample complexity linear in
|S| at the penalty of additional dependencies on H. Our proposed UCFH algorithm that achieves our
PAC bound can be applied to directly to a wide range of fixed-horizon episodic MDPs with known
rewards and does not require additional structure such as sparse or acyclic state transitions assumed
in previous work. The empirical evaluation of UCFH is an interesting direction for future work.
Acknowledgments: We thank Tor Lattimore for the helpful suggestions and comments. This work
was supported by an NSF CAREER award and the ONR Young Investigator Program.
6
For comparison we adapt existing bounds to our setting. While the original bound stated by Kakade [4]
only has H 3 , an additional H 3 comes in through ?3 due to different normalization of rewards.
8
References
[1] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman.
PAC Model-Free Reinforcement Learning. In International Conference on Machine Learning, 2006.
[2] Michael J Kearns and Satinder P Singh. Finite-Sample Convergence Rates for Q-Learning and
Indirect Algorithms. In Advances in Neural Information Processing Systems, 1999.
[3] Ronen I Brafman and Moshe Tennenholtz. R-MAX ? A General Polynomail Time Algorithm
for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213?
231, 2002.
[4] Sham M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[5] Peter Auer and Ronald Ortner. Online Regret Bounds for a New Reinforcement Learning
Algorithm. In Proceedings 1st Austrian Cognitive Vision Workshop, 2005.
[6] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory, 2012.
[7] Istv`an Szita and Csaba Szepesv?ari. Model-based reinforcement learning with nearly tight exploration complexity bounds. In International Conference on Machine Learning, 2010.
[8] Mohammad Gheshlaghi Azar, R?emi Munos, and Hilbert J. Kappen. On the Sample Complexity
of Reinforcement Learning with a Generative Model. In International Conference on Machine
Learning, 2012.
[9] J Zico Kolter and Andrew Y Ng. Near-Bayesian exploration in polynomial time. In International Conference on Machine Learning, 2009.
[10] Claude-Nicolas Fiechter. Efficient reinforcement learning. In Conference on Learning Theory,
1994.
[11] Claude-Nicolas Fiechter. Expected Mistake Bound Model for On-Line Reinforcement Learning. In International Conference on Machine Learning, 1997.
[12] Spyros Reveliotis and Theologos Bountourelis. Efficient PAC learning for episodic tasks with
acyclic state spaces. Discrete Event Dynamic Systems: Theory and Applications, 17(3):307?
327, 2007.
[13] Alexander L Strehl, Lihong Li, and Michael L Littman. Incremental Model-based Learners
With Formal Learning-Time Guarantees. In Conference on Uncertainty in Artificial Intelligence, 2006.
[14] Alexander L Strehl, Lihong Li, and Michael L Littman. Reinforcement Learning in Finite
MDPs : PAC Analysis. Journal of Machine Learning Research, 10:2413?2444, 2009.
[15] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal Regret Bounds for Reinforcement Learning. In Advances in Neural Information Processing Systems, 2010.
[16] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation
for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309?1331,
dec 2008.
[17] Matthew J Sobel. The Variance of Markov Decision Processes. Journal of Applied Probability,
19(4):794?802, 1982.
[18] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein Bounds and Sample-Variance
Penalization. In Conference on Learning Theory, 2009.
[19] Shie Mannor and John N Tsitsiklis. The Sample Complexity of Exploration in the Multi-Armed
Bandit Problem. Journal of Machine Learning Research, 5:623?648, 2004.
[20] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal Regret Bounds for Reinforcement Learning. Journal of Machine Learning Research, 11:1563?1600, 2010.
[21] Fan Chung and Linyuan Lu. Concentration Inequalities and Martingale Inequalities: A Survey.
Internet Mathematics, 3(1):79?127, 2006.
9
| 5827 |@word exploitation:1 briefly:2 version:1 polynomial:2 p0:7 pick:1 carry:1 kappen:1 initial:2 existing:7 current:2 whp:1 si:8 must:1 readily:2 john:2 ronald:3 additive:2 wiewiora:1 enables:2 update:4 stationary:6 generative:2 intelligence:1 xk:15 short:1 indefinitely:1 provides:1 mannor:2 knownness:5 simpler:1 ucrl2:1 c2:6 tomorrow:1 prove:5 consists:2 emma:1 introduce:4 finitehorizon:1 expected:9 p1:3 planning:6 multi:9 bellman:6 inspired:1 discounted:9 little:1 armed:8 window:3 considering:2 increasing:1 bounded:5 notation:6 maximizes:1 kind:1 rmax:2 substantially:1 transformation:2 csaba:1 impractical:1 guarantee:6 every:4 act:3 tackle:1 interactive:1 rm:3 partitioning:2 zico:1 omit:2 positive:1 service:1 local:8 limit:1 mistake:3 solely:1 approximately:1 might:1 christoph:1 limited:1 range:4 practical:1 acknowledgment:1 linyuan:1 practice:3 union:3 regret:5 procedure:1 pontil:2 episodic:18 empirical:6 significantly:1 confidence:15 get:2 convenience:1 close:3 cannot:1 operator:1 influence:1 equivalent:1 deterministic:1 customer:1 maximizing:3 straightforward:1 starting:1 cluttered:1 focused:1 survey:1 formalized:1 simplicity:3 immediately:1 emax:3 insight:2 importantly:1 deriving:1 his:1 proving:2 handle:1 notion:3 updated:1 limiting:1 construction:1 pt:1 mormax:1 user:1 exact:1 us:3 distinguishing:1 updating:2 observed:6 episode:34 decrease:1 removed:2 observes:2 environment:2 complexity:26 reward:26 littman:5 dynamic:1 depend:3 tight:6 segment:3 singh:1 technically:1 upon:3 eric:1 learner:1 easily:2 mh:1 indirect:1 chapter:1 massimiliano:1 london:1 artificial:1 choosing:1 whose:2 quite:3 larger:1 solve:3 otherwise:1 statistic:2 superscript:1 final:3 online:1 sequence:2 claude:2 net:1 propose:1 interaction:1 reset:2 remainder:1 relevant:2 combining:1 till:1 achieve:2 convergence:1 undiscounted:1 extending:1 produce:1 distribution3:1 categorization:2 incremental:1 tk:2 derive:1 exam:4 depending:1 augmenting:1 andrew:1 progress:1 eq:4 strong:1 c:1 involves:1 implies:1 come:1 direction:1 closely:1 correct:1 subsequently:1 exploration:4 successor:6 translating:1 require:5 tighter:3 hold:5 sufficiently:3 considered:1 teleport:1 algorithmic:1 matthew:1 major:1 achieves:2 tor:2 actionspace:1 purpose:1 estimation:1 applicable:3 visited:3 hope:1 always:1 super:1 modified:1 avoid:3 corollary:1 ucrl:4 focus:1 notational:1 bernoulli:1 likelihood:1 indicates:1 contrast:2 helpful:1 dependent:6 unlikely:2 entire:2 bandit:16 relation:1 selects:1 issue:1 szita:1 augment:1 denoted:2 art:1 equal:4 aware:1 having:1 fiechter:4 ng:2 identical:1 choses:1 nearly:1 future:2 few:2 ortner:4 national:1 tightly:1 individual:1 delayed:1 replaced:1 phase:11 investigate:1 evaluation:1 yielding:1 sobel:1 accurate:3 tuple:1 necessary:2 experience:1 indexed:1 maurer:2 logarithm:2 desired:3 theoretical:2 mk:11 hutter:8 instance:1 modeling:1 subset:2 comprised:1 delay:1 dependency:12 st:28 international:6 stay:1 michael:5 analogously:1 together:2 na:2 thesis:1 again:1 ambiguity:1 central:1 satisfied:1 possibly:2 hoeffding:3 cognitive:1 chung:1 return:2 li:3 student:5 wk:2 satisfy:4 kolter:2 explicitly:2 dann:1 depends:2 vi:10 later:2 try:1 optimistic:11 observing:1 portion:2 start:3 maintains:1 parallel:1 accuracy:4 variance:13 yield:3 ronen:1 bayesian:2 accurately:1 lu:1 trajectory:1 executes:1 definition:2 failure:4 frequency:2 involved:2 proof:17 associated:1 sampled:1 newly:1 recall:1 knowledge:2 improves:2 hilbert:1 formalize:1 actually:1 auer:4 higher:1 day:1 follow:3 ebrun:1 evaluated:2 though:2 execute:1 
until:5 langford:1 sketch:7 receives:1 replacing:1 defines:1 bountourelis:3 quality:1 mdp:16 building:1 validity:1 true:10 jaksch:3 deal:1 suboptimality:1 criterion:5 mohammad:1 performs:1 ranging:1 lattimore:9 recently:2 ari:1 quarter:3 rl:9 mellon:2 significant:2 refer:1 ai:1 mathematics:1 hp:2 teaching:1 similarly:2 lihong:3 access:1 robot:1 longer:1 similarity:1 base:1 add:2 closest:1 recent:3 scenario:2 certain:3 inequality:8 binary:2 onr:1 minimum:2 additional:10 somewhat:1 maximize:1 period:1 sliding:3 full:1 multiple:1 sham:1 match:1 adapt:1 offer:1 long:2 divided:1 visit:1 award:1 a1:3 variant:1 basic:4 austrian:1 essentially:3 cmu:1 expectation:1 vision:1 iteration:2 kernel:2 normalization:1 robotics:1 achieved:1 c1:4 dec:1 addition:2 whereas:3 fine:2 szepesv:1 interval:2 crucial:1 rest:1 unlike:1 probably:1 comment:1 induced:1 shie:1 integer:1 nonstationary:2 near:7 leverage:3 bernstein:4 split:1 enough:1 affect:1 zi:5 identified:1 p0t:1 suboptimal:3 reduce:1 idea:4 andreas:1 translates:1 intensive:1 expression:1 optimism:1 effort:1 penalty:1 peter:3 algebraic:1 action:22 generally:2 clear:2 detailed:1 ph:1 category:4 exist:2 nsf:1 estimated:2 per:2 carnegie:2 discrete:2 key:5 recomputed:1 istv:1 drawn:1 pj:4 v1:5 timestep:3 graph:1 year:2 sum:5 ch2:1 uncertainty:3 almost:1 looser:1 decision:7 appendix:2 capturing:1 bound:78 internet:1 followed:1 distinguish:1 fan:1 quadratic:1 placement:1 precisely:2 ucfh:18 ri:1 software:1 emi:1 argument:1 optimality:3 spring:1 min:4 performing:1 relatively:1 department:2 request:1 across:1 smaller:1 kakade:4 making:4 s1:1 intuitively:1 taken:1 ln:27 equation:9 computationally:1 previously:1 discus:1 loose:1 eventually:1 count:2 know:2 end:8 available:1 tightest:1 apply:3 generic:2 alternative:1 hat:1 evi:3 original:4 thomas:2 remaining:1 include:1 opportunity:1 log2:10 build:3 prof:1 tutor:2 move:1 question:2 quantity:2 moshe:1 strategy:2 concentration:4 dependence:4 rt:5 interacts:2 usual:1 thank:1 tower:1 tutoring:3 reason:1 trivial:1 induction:1 marcus:1 assuming:2 length:3 index:3 balance:1 innovation:1 equivalently:1 unfortunately:1 mostly:1 potentially:1 relate:1 statement:1 stated:1 policy:33 unknown:5 upper:16 observation:3 markov:4 finite:14 tilde:1 defining:1 extended:3 pair:17 required:4 c3:3 z1:1 learned:1 quadratically:1 address:1 beyond:1 tennenholtz:2 gheshlaghi:1 usually:1 below:3 mismatch:1 program:1 max:4 event:1 treated:2 natural:3 business:1 advanced:1 arm:2 improve:4 mdps:23 umax:3 coupled:1 prior:3 understanding:1 review:1 relative:2 highlight:1 interesting:2 suggestion:1 acyclic:4 ingredient:1 revenue:3 h2:1 penalization:1 agent:17 sufficient:1 consistent:2 s0:35 pi:13 strehl:6 repeat:2 surprisingly:1 last:1 brafman:2 supported:1 free:1 tsitsiklis:2 formal:1 allow:2 understand:1 wide:2 taking:2 wmin:4 munos:1 sparse:3 tolerance:1 world:1 transition:23 ending:1 computes:2 ignores:2 collection:1 reinforcement:18 far:1 satinder:1 active:1 assumed:2 tuples:1 learn:1 nicolas:2 career:1 expansion:1 necessarily:1 domain:4 linearly:1 motivation:1 backup:1 azar:2 augmented:1 referred:1 martingale:2 precision:1 brunskill:1 explicit:1 grained:2 young:1 theorem:7 bad:1 pac:29 dominates:1 essential:1 workshop:1 sequential:1 importance:3 phd:1 execution:2 horizon:48 nk:1 reveliotis:3 ch:3 satisfies:4 conditional:1 cdann:2 viewed:1 replace:1 change:1 hard:1 infinite:11 determined:2 operates:1 except:2 specifically:1 lemma:14 kearns:1 total:6 ucb:1 indicating:1 select:1 rarely:1 college:1 categorize:1 alexander:4 violated:2 
preparation:1 investigator:1 statespace:2 handling:1 |
5,333 | 5,828 | Private Graphon Estimation for Sparse Graphs?
Christian Borgs
Jennifer T. Chayes
Microsoft Research New England
Cambridge, MA, USA.
{cborgs,jchayes}@microsoft.com
Adam Smith
Pennsylvania State University
University Park, PA, USA.
[email protected]
Abstract
We design algorithms for fitting a high-dimensional statistical model to a large,
sparse network without revealing sensitive information of individual members.
Given a sparse input graph G, our algorithms output a node-differentially private
nonparametric block model approximation. By node-differentially private, we
mean that our output hides the insertion or removal of a vertex and all its adjacent
edges. If G is an instance of the network obtained from a generative nonparametric
model defined in terms of a graphon W , our model guarantees consistency: as the
number of vertices tends to infinity, the output of our algorithm converges to W in
an appropriate version of the L2 norm. In particular, this means we can estimate
the sizes of all multi-way cuts in G.
Our results hold as long as W is bounded, the average degree of G grows at least
like the log of the number of vertices, and the number of blocks goes to infinity
at an appropriate rate. We give explicit error bounds in terms of the parameters of
the model; in several settings, our bounds improve on or match known nonprivate
results.
1
Introduction
Differential Privacy. Social and communication networks have been the subject of intense study
over the last few years. However, while these networks comprise a rich source of information for
science, they also contain highly sensitive private information. What kinds of information can we
release about these networks while preserving the privacy of their users? Simple measures, such as
removing obvious identifiers, do not work; for example, several studies reidentified individuals in
the graph of a social network even after all vertex and edge attributes were removed. Such attacks
highlight the need for statistical and learning algorithms that provide rigorous privacy guarantees.
Differential privacy [17] provides meaningful guarantees in the presence of arbitrary side information. In the context of traditional statistical data sets, differential privacy is now well-developed.
By contrast, differential privacy in the context of graph data is much less developed. There are two
main variants of graph differential privacy: edge and node differential privacy. Intuitively, edge
differential privacy ensures that an algorithm?s output does not reveal the inclusion or removal of a
particular edge in the graph, while node differential privacy hides the inclusion or removal of a node
together with all its adjacent edges. Edge privacy is a weaker notion (hence easier to achieve) and
has been studied more extensively. Several authors designed edge-differentially private algorithms
for fitting generative graph models (e.g. [24]; see the full version for further references), but these
do not appear to generalize to node privacy with meaningful accuracy guarantees.
The stronger notion, node privacy, corresponds more closely to what was achieved in the case of
traditional data sets, and to what one would want to protect an individual?s data: it ensures that no
matter what an analyst observing the released information knows ahead of time, she learns the same
?
A full version of this extended abstract is available at http://arxiv.org/abs/1506.06162
1
things about an individual Alice regardless of whether Alice?s data are used or not. In particular,
no assumptions are needed on the way the individuals? data are generated (they need not even be
independent). Node privacy was studied more recently [21, 14, 6, 26], with a focus on on the release
of descriptive statistics (such as the number of triangles in a graph). Unfortunately, differential
privacy?s stringency makes the design of accurate, private algorithms challenging.
In this work, we provide the first algorithms for node-private inference of a high-dimensional statistical model that does not admit simple sufficient statistics.
Modeling Large Graphs via Graphons. Traditionally, large graphs have been modeled using
various parametric models, one of the most popular being the stochastic block model [20]. Here one
postulates that an observed graph was generated by first assigning vertices at random to one of k
groups, and then connecting two vertices with a probability that depends on their assigned groups.
As the number of vertices of the graph in question grows, we do not expect the graph to be well
described by a stochastic block model with a fixed number of blocks. In this paper we consider
nonparametric models (where the number of parameters need not be fixed or even finite) given in
terms of a graphon. A graphon is a measurable, bounded function W :R [0, 1]2 ? [0, ?) such that
W (x, y) = W (y, x), which for convenience we take to be normalized: W = 1. Given a graphon,
we generate a graph on n vertices by first assigning i.i.d. uniform labels in [0, 1] to the vertices,
and then connecting vertices with labels x, y with probability ?n W (x, y), where ?n is a parameter
determining the density of the generated graph Gn with ?n kW k? ? 1. We call Gn a W -random
graph with target density ?n (or simply a ?n W -random graph).
To our knowledge, random graph models of the above form were first introduced under the name latent position graphs [19], and are special cases of a more general model of ?inhomogeneous random
graphs? defined in [7], which is the first place were n-dependent target densities ?n were considered.
For both dense graphs (whose target density does not depend on the number of vertices) and sparse
graphs (those for which ?n ? 0 as n ? ?), this model is related to the theory of convergent graph
sequences, [8, 23, 9, 10] and [11, 12], respectively.
Estimation and Identifiability. Assuming that Gn is generated in this way, we are then faced with
the task of estimating W from a single observation of a graph Gn . To our knowledge, this task
was first explicitly considered in [4], which considered graphons describing stochastic block models
with a fixed number of blocks. This was generalized to models with a growing number of blocks
[27, 15], while the first estimation of the nonparametric model was proposed in [5]. Most of the
literature on estimating the nonparametric model makes additional assumptions on the function W ,
the most common one being that after a measure-preserving transformation, the integral of W over
one variable is a strictly monotone function of the other, corresponding to an asymptotically strictly
monotone degree distribution of Gn . (This assumption is quite restrictive: in particular, such results
do not apply to graphons that represent block models.) For our purposes, the most relevant works
are Wolfe and Olhede [28], Gao et al. [18], Chatterjee [13] and Abbe and Sandon [2] (as well as
recent work done concurrently with this research [22]), which provide consistent estimators without
monotonicity assumptions (see ?Comparison to nonprivate bounds?, below).
One issue that makes estimation of graphons challenging is identifiability: multiple graphons can
? lead to the same distrilead to the same distribution on Gn . Specifically, two graphons W and W
bution on W -random graphs if and only if there are measure preserving maps ?, ?? : [0, 1] ? [0, 1]
f ?e, where W ? is defined by W (x, y) = W (?(x), ?(y)) [16]. Hence, there is
such that W ? = W
no ?canonical graphon? that an estimation procedure can output, but rather an equivalence class of
graphons. Some of the literature circumvents identifiability by making strong additional assumptions, such as strict monotonicity, that imply the existence of canonical equivalent class representatives. We make no such assumptions, but instead define consistency in terms of a metric on these
equivalence classes, rather than on graphons as functions. We use a variant of the L2 metric,
?2 (W, W 0 ) =
inf
?:[0,1]?[0,1]
kW ? ? W 0 k2 , where ? ranges over measure-preserving bijections. (1)
? from a
Our Contributions. In this paper we construct an algorithm that produces an estimate W
single instance Gn of a W -random graph with target density ?n (or simply ?, when n is clear from
the context). We aim for several properties:
2
1.
2.
3.
4.
5.
? is differentially private;
W
? is consistent, in the sense that ?2 (W, W
? ) ? 0 in probability as n ? ?;
W
?
W has a compact representation (in our case, as a matrix with o(n) entries);
The procedure works for sparse graphs, that is, when the density ? is small;
? can be calculated efficiently.
On input Gn , W
Here we give an estimation procedure that obeys the first four properties, leaving the question of
polynomial-time algorithms for future work. Given an input graph Gn , a privacy-parameter and a
? = A(Gn ) such that
target number k of blocks, our algorithm A produces a k-block graphon W
? A is -differentially node private. The privacy guarantee holds for all inputs, independent
of modeling assumptions.
R
? If (1) W is an arbitrary graphon, normalized so W = 1, (2) the expected average degree
(n ? 1)? grows at least as fast as log n, and (3) k goes to infinity sufficiently slowly with n,
? for W is consistent (that is, ?2 (W
? , W ) ? 0,
then, when Gn is ?W -random, the estimate W
both in probability and almost surely).
? We give a nonprivate variant of A that converges assuming only ?(1) average degree.
Combined with the general theory of convergent graphs sequences, these results in particular give
a node-private procedure for estimating the edge density of all cuts in a ?W -random graph,see
Section 2.2 below.
The main idea of our algorithm is to use the exponential mechanism of [25] to select a block model
which approximately minimizes the `2 distance to the observed adjacency matrix of G, under the
best possible assignment of nodes to blocks (this explicit search over assignments makes the algorithm take exponential time). In order to get an algorithm that is accurate on sparse graphs, we need
several nontrivial extensions of current techniques. To achieve privacy, we use a new variation of the
Lipschitz extension technique of [21, 14] to reduce the sensitivity of the ?2 distance. While those
works used Lipschitz extensions for noise addition, we use of Lipshitz extensions inside the ?exponential mechanism? [25] (to control the sensitivity of the score functions). To bound our algorithm?s
error, we provide a new analysis of the `2 -minimization algorithm; we show that approximate minimizers are not too far from the actual minimizer (a ?stability? property). Both aspects of our work
are enabled by restricting the `22 -minimization to a set of block models whose density (in fact, L?
norm) is not much larger than that of the underlying graph. The algorithm is presented in Section 3.
Our most general result proves consistency for arbitrary graphons W but does not provides a concrete rate of convergence. However, we provide explicit rates under various assumptions on W .
Specifically, we relate the error of our estimator to two natural error terms involving the graphon W :
(O)
the error k (W ) of the best k-block approximation to W in the L2 norm (see (4) below) and an
error term n (W ) measuring the L2 -distance between the graphon W and the matrix of probabilities
Hn (W ) generating the graph Gn (see (5) below.) In terms of these error terms, Theorem 1 shows
s
!
r
k 2 log n
1
(O)
4 log k
?
?2 W, W ? k (W ) + 2n (W ) + OP
+
+
.
(2)
?n
n
?n
provided the average degree ?n grows at least like log n. Along the way, we provide a novel analysis
of a straightforward, nonprivate least-squares estimator that does not require an assumption on the
average degree, and leads to an error bound with a better dependence on k:
s
!
2
log
k
k
(O)
4
? nonprivate ? (W ) + 2n (W ) + OP
?2 W, W
+ 2 .
(3)
k
?n
?n
(O)
It follows from the theory of graph convergence that for all graphons W , we have k (W ) ? 0 as
k ? ? and n (W ) ? 0 almost surely as n ? ?. By selecting k appropriately, the nonprivate
algorithm converges for any bounded graphon as long as ?n ? ? with n; the private algorithm
converges whenever ?n ? 6 log n (e.g., for constant ). As proven in the full version, we also have
p
(O)
n (W ) = OP (k (W ) + 4 k/n), though this upper bound is loose in many cases.
3
As a specific instantiation of these bounds, let us consider the case that W is exactly described
p
(O)
by a k-block model, in which case k (W ) = 0 and n (W ) = OP ( 4 k/n) (see full version
for a proof). For k ? (n/ log2 n)1/3 , ? ? log(k)/k and constant , our p
private estimator has an
asymptotic error that is dominated by the (unavoidable) error of n (W ) = 4 k/n, showing that we
do not lose anything due to privacy in this special case. Another special case is when W is ?-H?older
(O)
continuous, in which case k (W ) = O(k ?? ) and n (W ) = OP (n??/2 ); see Remark 2 below.
Comparison to Previous Nonprivate Bounds. We provide the first consistency bounds for estimation of a nonparametric graph model subject to node differential privacy. Along the way, for sparse
graphs, we provide more general consistency results than were previously known, regardless of privacy. In particular, to the best of our knowledge, no prior results give a consistent estimator for W
that works for sparse graphs without any additional assumptions besides boundedness.
When compared to results for nonprivate algorithms applied to graphons obeying additional assumptions, our bounds are often incomparable, and in other cases match the existing bounds.
We start by considering graphons which are themselves step functions with a known number of steps
k. In the dense case, the nonprivate algorithms of [18] and [13], as well
pas our nonprivate algorithm,
give an asymptotic error that is dominated by the term n (W ) = O( 4 k/n), which is of the same
order as our private estimator as long as k = o?(n1/3 ). [28] provided the first convergence results
for estimating graphons in the sparse regime. Assuming that W is bounded above and below (so it
takes values in a range [?1 , ?2 ] where ?1 > 0), they analyze an inefficient algorithm (the MLE). The
bounds of [28] are incomparable to ours, though for thepcase of k-block graphons, both their bounds
and our nonprivate bound are dominated by the term 4 k/n when ? > (log k)/k and k ? ?n. A
different sequence of works shows how to consistently estimate the underlying block model with a
fixed number of blocks k in polynomial time for very sparse graphs (as for our non-private algorithm,
the only thing which is needed is that n? ? ?) [3, 1, 2]; we are not aware of concrete bounds on
the convergence rate.
For the case of dense ?-H?older-continuous graphons, the results of [18] give an error which is
dominated by the term n (W ) = OP (n??/2 ). For ? < 1/2, our nonprivate bound matches this
bound, while for ? > 1/2 it is worse. [28] considers the sparse case. The rate of their estimator is
incomparable to that of ours; further, their analysis requires a lower bound on the edge probabilities,
while ours does not. Very recently, after our paper was submitted, both the bounds of [28] as well as
our non-private bound (3) were substantially improved [22], leading to an error bound where the 4th
root in (3) is replaced by a square root (at the cost of an extra constant multiplying the oracle error.)
See the full version for a more detailed discussion of the previous literature.
2
2.1
Preliminaries
Notation
For a graph G on [n] = {1, . . . , n}, we use E(G) and A(G) to denote the edge set and the adjacency
matrix of G, respectively. The edge density ?(G) is defined as the number of edges divided by n2 .
Finally the degree di of a vertex i in G is the number of edges containing i. We use theP
same notation
2
for a weighted graph with nonnegative edge weights ?ij , where now ?(G) = n(n?1)
i<j ?ij , and
P
di = j6=i ?ij . We use Gn to denote the set of weighted graphs on n vertices with weights in [0, 1],
and Gn,d to denote the set of all graphs in Gn that have maximal degree at most d.
From Matrices to Graphons. We define a graphon to be a bounded, measurable function $W : [0,1]^2 \to \mathbb{R}_+$ such that $W(x,y) = W(y,x)$ for all $x, y \in [0,1]$. It will be convenient to embed
the set of symmetric $n \times n$ matrices with nonnegative entries into graphons as follows: let $\mathcal{P}_n = (I_1, \dots, I_n)$ be the partition of [0,1] into adjacent intervals of lengths 1/n. Define W[A] to be the
step function which equals $A_{ij}$ on $I_i \times I_j$. If A is the adjacency matrix of an unweighted graph G,
we use W[G] for W[A].
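The embedding of a matrix into a step-function graphon is equally direct; the sketch below (our naming) evaluates W[A] at a point (x, y) of $[0,1]^2$.

    import numpy as np

    def step_graphon(A):
        # W[A](x, y) = A_ij on I_i x I_j, with I_1, ..., I_n consecutive
        # intervals of length 1/n partitioning [0, 1].
        n = A.shape[0]
        def W(x, y):
            i = min(int(np.floor(x * n)), n - 1)   # interval index containing x
            j = min(int(np.floor(y * n)), n - 1)
            return A[i, j]
        return W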
Distances. For $p \in [1, \infty)$ we define the $L_p$ norm of an $n \times n$ matrix A and a (Borel-)measurable
function $W : [0,1]^2 \to \mathbb{R}$ by $\|A\|_p = \big(\frac{1}{n^2}\sum_{i,j}|A_{ij}|^p\big)^{1/p}$ and $\|f\|_p = \big(\int |f(x,y)|^p\,dx\,dy\big)^{1/p}$,
respectively. Associated with the $L_2$-norm is a scalar product, defined as $\langle A, B\rangle = \frac{1}{n^2}\sum_{i,j}A_{ij}B_{ij}$
for two $n \times n$ matrices A and B, and $\langle U, W\rangle = \int U(x,y)W(x,y)\,dx\,dy$ for two square integrable
functions $U, W : [0,1]^2 \to \mathbb{R}$. Note that with this notation, the edge density and the $L_1$ norm are
related by $\|G\|_1 = \frac{n-1}{n}\rho(G)$.
Recalling (1), we define the $\delta_2$ distance between two matrices A, B, or between a matrix A and a
graphon W, by $\delta_2(A, B) = \delta_2(W[A], W[B])$ and $\delta_2(A, W) = \delta_2(W[A], W)$. In addition, we will
also use the in general larger distances $\hat\delta_2(A, B)$ and $\hat\delta_2(A, W)$, defined by taking a minimum over
matrices $A'$ which are obtained from A by a relabelling of the indices: $\hat\delta_2(A, B) = \min_{A'}\|A' - B\|_2$
and $\hat\delta_2(A, W) = \min_{A'}\|W[A'] - W\|_2$.
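Because $\hat\delta_2$ minimizes over all relabellings, evaluating it exactly is a combinatorial problem; the brute-force sketch below (illustration only, with n! cost, so usable only for very small matrices) makes the definition concrete.

    import numpy as np
    from itertools import permutations

    def l2_norm(A):
        # ||A||_2 = ((1/n^2) * sum_{i,j} A_ij^2)^(1/2), as defined above
        n = A.shape[0]
        return np.sqrt((A ** 2).sum() / n ** 2)

    def delta2_hat(A, B):
        # hat-delta_2(A, B) = min over relabellings A' of A of ||A' - B||_2
        n = A.shape[0]
        best = np.inf
        for p in permutations(range(n)):
            p = np.array(p)
            best = min(best, l2_norm(A[np.ix_(p, p)] - B))
        return best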
2.2 W-random graphs, graph convergence and multi-way cuts
W-random graphs and stochastic block models. Given a graphon W we define a random $n \times n$
matrix $H_n = H_n(W)$ by choosing n "positions" $x_1, \dots, x_n$ i.i.d. uniformly at random from [0,1]
and then setting $(H_n)_{ij} = W(x_i, x_j)$. If $\|W\|_\infty \le 1$, then $H_n(W)$ has entries in [0,1], and we can
form a random graph $G_n = G_n(W)$ on n vertices by choosing an edge between two vertices $i < j$
with probability $(H_n)_{ij}$, independently for all $i < j$. Following [23] we call $G_n(W)$ a W-random
graph and $H_n(W)$ a W-weighted random graph. We incorporate a target density $\rho_n$ (or simply $\rho$,
when n is clear from the context) by normalizing W so that $\int W = 1$ and taking G to be a sample
from $G_n(\rho W)$. In other words, we set $Q = H_n(\rho W) = \rho H_n(W)$ and then connect i to j with
probability $Q_{ij}$, independently for all $i < j$.
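The two-stage construction of $G_n(\rho W)$ translates directly into code. The following sketch (our helper; it assumes W is given as a callable and that $\rho\|W\|_\infty \le 1$, clipping defensively otherwise) first draws the positions, then the edges.

    import numpy as np

    def sample_w_random_graph(W, n, rho, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        x = rng.uniform(size=n)                               # latent positions
        Q = np.array([[rho * W(xi, xj) for xj in x] for xi in x])
        Q = np.clip(Q, 0.0, 1.0)                              # guard: probabilities
        A = np.zeros((n, n), dtype=int)
        i, j = np.triu_indices(n, k=1)
        A[i, j] = (rng.uniform(size=len(i)) < Q[i, j]).astype(int)
        return A + A.T                                        # symmetric, zero diagonal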
Stochastic block models are specific examples of W-random graphs in which W is constant on sets
of the form $I_i \times I_j$, where $(I_1, \dots, I_k)$ is a partition of [0,1] into intervals of possibly different
lengths.
On the other hand, an arbitrary graphon W can be well approximated by a block model. Indeed, let
$$\epsilon_k^{(O)}(W) = \min_{B} \|W - W[B]\|_2 \qquad (4)$$
where the minimum runs over all $k \times k$ matrices B. By a straightforward argument (see, e.g., [11]),
$\epsilon_k^{(O)}(W) = \|W - W_{\mathcal{P}_k}\|_2 \to 0$ as $k \to \infty$. We will take this approximation as a benchmark for our
approach, and consider it the error an "oracle" could obtain (hence the superscript O).
Another key term in our algorithm's error guarantee is the distance between $H_n(W)$ and W,
$$\epsilon_n(W) = \hat\delta_2(H_n(W), W). \qquad (5)$$
It goes to zero as $n \to \infty$ by the following lemma, which follows easily from the results of [11].
Lemma 1. Let W be a graphon with $\|W\|_\infty < \infty$. With probability one, $\|H_n(W)\|_1 \to \|W\|_1$ and
$\epsilon_n(W) \to 0$.
Convergence. Given a sequence of W-random graphs with target densities $\rho_n$, one might wonder
whether the graphs $G_n = G_n(\rho_n W)$ converge to W in a suitable metric. The answer is yes,
and involves the so-called cut-metric $\delta_\square$ first introduced in [9]. Its definition is identical to the
definition (1) of the norm $\delta_2$, except that instead of the $L_2$-norm $\|\cdot\|_2$, it involves the Frieze-Kannan
cut-norm $\|W\|_\square$, defined as the sup of $\int_{S \times T} W$ over all measurable sets $S, T \subseteq [0,1]$. In
the metric $\delta_\square$, the W-random graphs $G_n = G_n(\rho W)$ then converge to W in the sense that
$\delta_\square\big(\frac{1}{\rho(G_n)}W[G_n],\, W\big) \to 0$; see [11] for the proof.
Estimation of Multi-Way Cuts. Using the results of [12], the convergence of $G_n$ in the cut-metric
$\delta_\square$ implies many interesting results for estimating various quantities defined on the graph $G_n$. Indeed, a consistent approximation $\widehat{W}$ to W in the metric $\delta_2$ is clearly consistent in the weaker metric
$\delta_\square$. This distance, in turn, controls various quantities of interest to computer scientists, e.g., the size
of all multi-way cuts, implying that a consistent estimator for W also gives consistent estimators for
all multi-way cuts. See the full version for details.
2.3 Differential Privacy for Graphs
The goal of this paper is the development of a differentially private algorithm for graphon estimation.
The privacy guarantees are formulated for worst-case inputs: we do not assume that G is generated
from a graphon when analyzing privacy. This ensures that the guarantee remains meaningful no
matter what an analyst knows ahead of time about G.
In this paper, we consider node privacy. We call two graphs G and G′ node neighbors if one can be
obtained from the other by removing one node and its adjacent edges.
Definition 1 (ε-node-privacy). A randomized algorithm $\mathcal{A}$ is ε-node-private if for all events S in the
output space of $\mathcal{A}$, and all node neighbors G, G′,
$$\Pr[\mathcal{A}(G) \in S] \le e^{\varepsilon} \cdot \Pr[\mathcal{A}(G') \in S].$$
We also need the notion of the node-sensitivity of a function $f : \mathcal{G}_n \to \mathbb{R}$, defined as the maximum
$\max_{G,G'} |f(G) - f(G')|$, where the maximum goes over node neighbors. The node sensitivity is
the Lipschitz constant of f viewed as a map between appropriate metrics.
3 Differentially Private Graphon Estimation
3.1 Least-squares Estimation
Given a graph as input generated by an unknown graphon W, our goal is to recover a block-model
approximation to W. The basic nonprivate algorithm we emulate is least squares estimation, which
outputs the $k \times k$ matrix B which is closest to the input adjacency matrix A in the distance
$$\hat\delta_2(B, A) = \min_{\pi} \|B_\pi - A\|_2,$$
where the minimum runs over all equipartitions π of [n] into k classes, i.e., over all maps $\pi : [n] \to [k]$
such that all classes have size as close to n/k as possible, i.e., such that $\big||\pi^{-1}(i)| - n/k\big| < 1$
for all i, and $B_\pi$ is the $n \times n$ block-matrix with entries $(B_\pi)_{xy} = B_{\pi(x)\pi(y)}$. If A is the adjacency
matrix of a graph G, we write $\hat\delta_2(B, G)$ instead of $\hat\delta_2(B, A)$. In the above notation, the basic
algorithm we would want to emulate is then the algorithm which outputs the least squares fit
$\hat{B} = \operatorname{argmin}_B \hat\delta_2(B, G)$, where the argmin runs over all symmetric $k \times k$ matrices B.
3.2 Towards a Private Algorithm
Our algorithm uses a carefully chosen instantiation of the exponential mechanism of McSherry and
Talwar [25]. The most direct application of their framework would be to output a random $k \times k$
matrix $\hat{B}$ according to the probability distribution
$$\Pr(\hat{B} = B) \propto \exp\big(-C\,\hat\delta_2^2(B, A)\big),$$
for some $C > 0$. The resulting algorithm is ε-differentially private if we set C to be ε over twice the
node-sensitivity of the "score function", here $\hat\delta_2^2(B, \cdot)$. But this value of C turns out to be too small
to produce an output that is a good approximation to the least squares estimator. Indeed, for a given
matrix B and equipartition π, the node-sensitivity of $\|G - B_\pi\|_2^2$ can be as large as 1/n, leading to a
value of C which is too small to produce useful results for sparse graphs.
To address this, we first note that we can work with an equivalent score that is much less sensitive.
Given B and π, we subtract off the squared norm of G to obtain the following:
$$\mathrm{score}(B, \pi; G) = \|G\|_2^2 - \|G - B_\pi\|_2^2 = 2\langle G, B_\pi\rangle - \|B_\pi\|_2^2, \qquad (6)$$
$$\mathrm{score}(B; G) = \max_\pi \mathrm{score}(B, \pi; G), \qquad (7)$$
where the max ranges over equipartitions $\pi : [n] \to [k]$. For a fixed input graph G, maximizing
the score is the same as minimizing the distance, i.e., $\operatorname{argmin}_B \hat\delta_2(B, G) = \operatorname{argmax}_B \mathrm{score}(B; G)$.
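For a fixed assignment, the score in (6) is cheap to evaluate; the sketch below (our naming; π is encoded as a length-n vector of class labels in {0, ..., k−1}) computes it directly from the adjacency matrix.

    import numpy as np

    def score(B, pi, G):
        # score(B, pi; G) = 2 <G, B_pi> - ||B_pi||_2^2, with
        # <A, B> = (1/n^2) sum_{i,j} A_ij B_ij and (B_pi)_xy = B[pi[x], pi[y]]
        n = G.shape[0]
        B_pi = B[np.ix_(pi, pi)]
        return (2.0 * (G * B_pi).sum() - (B_pi ** 2).sum()) / n ** 2

Up to the additive term $\|G\|_2^2$, which does not depend on B, this equals $-\|G - B_\pi\|_2^2$, so maximizing it over B and π recovers the least squares fit.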
The sensitivity of the new score is then bounded by $\frac{2}{n^2}\|B\|_\infty$ times the maximum degree in G
(since G only affects the score via the inner product $\langle G, B_\pi\rangle$). But this is still problematic since, a
priori, we have no control over either the size of $\|B\|_\infty$ or the maximal degree of G.
To keep the sensitivity low, we make two modifications: first, we only optimize over matrices B
whose entries are bounded by (roughly) $\rho_n$ (since a good estimator will have entries which are not
much larger than $\|\rho_n W\|_\infty$, which is of order $\rho_n$); second, we restrict the score to be accurate only
on graphs whose maximum degree is at most a constant times the average degree, since this is what
one expects for graphs generated from a bounded graphon. While the first restriction can be directly
enforced by the algorithm, the second is more delicate, since we need to provide privacy for all
inputs, including graphs with very large maximum degree. We employ an idea from [6, 21]: we first
consider the restriction of $\mathrm{score}(B, \pi; \cdot)$ to $\mathcal{G}_{n,d_n}$, where $d_n$ will be chosen to be of the order of the
average degree of G, and then extend it back to all graphs while keeping the sensitivity low.
3.3 Private Estimation Algorithm
Our final algorithm takes as input the privacy parameter ε, the graph G, a number k of blocks, and a
constant $\lambda \ge 1$ that will have to be chosen large enough to guarantee consistency of the algorithm.
Algorithm 1: Private Estimation Algorithm
Input: $\varepsilon > 0$, $\lambda \ge 1$, an integer k and a graph G on n vertices.
Output: k × k block graphon (represented as a k × k matrix $\hat B$) estimating ρW.
    Compute an (ε/2)-node-private density approximation $\hat\rho = \rho(G) + \mathrm{Lap}(4/(\varepsilon n))$;
    $d = \lambda\hat\rho n$ (the target maximum degree);
    $\Lambda = \lambda\hat\rho$ (the target $L_\infty$ norm for $\hat B$);
    For each B and π, let $\widehat{\mathrm{score}}(B, \pi; \cdot)$ denote a nondecreasing Lipschitz extension (from [21]) of
    $\mathrm{score}(B, \pi; \cdot)$ from $\mathcal{G}_{n,d}$ to $\mathcal{G}_n$ such that for all matrices A, $\widehat{\mathrm{score}}(B, \pi; A) \le \mathrm{score}(B, \pi; A)$, and
    define $\widehat{\mathrm{score}}(B; A) = \max_\pi \widehat{\mathrm{score}}(B, \pi; A)$;
    return $\hat B$, sampled from the distribution
    $$\Pr(\hat B = B) \propto \exp\Big(\frac{\varepsilon}{4\Delta}\,\widehat{\mathrm{score}}(B; A)\Big),
    \quad \text{where } \Delta = \frac{4d\Lambda}{n^2} = \frac{4\lambda^2\hat\rho^2}{n}$$
    and B ranges over matrices in $\mathcal{B}_\Lambda = \{B \in [0, \Lambda]^{k \times k} : \text{all entries } B_{ij} \text{ are multiples of } 1/n\}$;
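The first line of Algorithm 1 is an ordinary Laplace-mechanism release; a minimal sketch follows (the function name is ours). The scale 4/(εn) spends an ε/2 budget on a statistic whose node sensitivity is roughly 2/n, since removing one node changes at most n − 1 of the $\binom{n}{2}$ edge slots.

    import numpy as np

    def private_density(A, eps, rng=None):
        # (eps/2)-node-private edge density: rho_hat = rho(G) + Lap(4/(eps*n)).
        # A is a 0/1 symmetric adjacency matrix with zero diagonal.
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        rho = A.sum() / (n * (n - 1))          # = #edges / C(n,2)
        return rho + rng.laplace(scale=4.0 / (eps * n))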
Our main results about the private algorithm are the following lemma and theorem.
Lemma 2. Algorithm 1 is ε-node-private.
Theorem 1 (Performance of the Private Algorithm). Let $W : [0,1]^2 \to [0,\Lambda]$ be a normalized
graphon, let $0 < \rho \le 1$, let $G = G_n(\rho W)$, $\lambda \ge 1$, and let k be an integer. Assume that $\rho n \ge 6\log n$,
$8\lambda\Lambda \le \varepsilon\sqrt{n}$, and $2 \le k \le \min\{n\sqrt{\rho}/2,\, e^{\rho n/2}\}$. Then Algorithm 1 outputs an approximation
$(\hat\rho, \hat B)$ such that
$$\delta_2\big(W, W[\hat B]\big) \le \epsilon_k^{(O)}(W) + 2\epsilon_n(W)
+ O_P\left(\sqrt[4]{\frac{k}{n}} + \sqrt{\frac{2\log k}{\rho n}}
+ \lambda\left(\frac{k}{n} + \sqrt{\frac{2\log n}{n\varepsilon}}\right)\right).$$
Remark 1. While Theorem 1 is stated in terms of bounds which hold in probability, our proofs yield
statements which hold almost surely as $n \to \infty$.
Remark 2. Under additional assumptions on the graphon W, we obtain tighter bounds. For example, if we assume that W is Hölder continuous, i.e., there exist constants $\alpha \in (0,1]$ and $C < \infty$
such that $|W(x,y) - W(x',y')| \le C\delta^\alpha$ whenever $|x - x'| + |y - y'| \le \delta$, then we have that
$\epsilon_k^{(O)}(W) = O(k^{-\alpha})$ and $\epsilon_n(W) = O_P(n^{-\alpha/2})$.
Remark 3. When considering the "best" block model approximation to W, one might want to
consider block models with unequal block sizes; in a similar way, one might want to construct a
private algorithm that outputs a block model with unequal size blocks, and produces a bound in
terms of this best block model approximation instead of $\epsilon_k^{(O)}(W)$. This can be proved with our
methods, with the minimal block size taking the role of 1/k in all our statements.
3.4 Non-Private Estimation Algorithm
We also analyze a simple, non-private algorithm, which outputs the argmin of $\hat\delta_2(\cdot, A)$ over all
$k \times k$ matrices whose entries are bounded by $\lambda\rho(G)$. (Independently of our work, this non-private
algorithm was also proposed and analysed in [22].) Our bound (3) refers to this restricted least squares
algorithm, and does not require any assumptions on the average degree. As in (2), we suppress the
dependence of the error on λ. To include it, one has to multiply the $O_P$ term in (3) by $\sqrt{\lambda}$.
4 Analysis of the Private and Non-Private Algorithm
At a high level, our proof of Theorem 1 (as well as our new bounds on non-private estimation) follows from the fact that for all B and π, the expected score $\mathbb{E}[\mathrm{score}(B, \pi; G)]$ is equal to the score
$\mathrm{score}(B, \pi; Q)$, combined with a concentration argument. As a consequence, the maximizer $\hat B$ of
$\mathrm{score}(B; G)$ will approximately minimize the $L_2$-distance $\hat\delta_2(B, Q)$, which in turn will approximately minimize $\|\frac{1}{\rho}W[B] - W\|_2$, thus relating the $L_2$-error of our estimator $\hat B$ to the "oracle error"
$\epsilon_k^{(O)}(W)$ defined in (4).
Our main concentration statement is captured in the following proposition. To state it, we define,
for every symmetric $n \times n$ matrix Q with vanishing diagonal, $\mathrm{Bern}_0(Q)$ to be the distribution
over symmetric matrices A with zero diagonal such that the entries $\{A_{ij} : i < j\}$ are independent
Bernoulli random variables with $\mathbb{E}A_{ij} = Q_{ij}$.
Proposition 1. Let $\nu > 0$, let $Q \in [0,1]^{n \times n}$ be a symmetric matrix with vanishing diagonal, and let
$A \sim \mathrm{Bern}_0(Q)$. If $2 \le k \le \min\{n\sqrt{\rho(Q)},\, e^{\rho(Q)n}\}$ and $\hat B \in \mathcal{B}_\Lambda$ is such that
$$\mathrm{score}(\hat B; A) \ge \max_{B \in \mathcal{B}_\Lambda} \mathrm{score}(B; A) - \nu^2$$
for some $\nu > 0$, then with probability at least $1 - 2e^{-n}$,
$$\hat\delta_2(\hat B, Q) \le \min_{B \in \mathcal{B}_\Lambda} \hat\delta_2(B, Q) + \nu
+ O\left(\sqrt[4]{\Lambda^2\rho(Q)\left(\frac{k^2}{n^2} + \frac{\log k}{n}\right)}\right). \qquad (8)$$
Morally, the proposition contains almost all that is needed to establish the bound (3) proving consistency of the non-private algorithm (which, in fact, only involves the case $\nu = 0$), even though there
are several additional steps needed to complete the proof.
The proposition also contains an extra ingredient which is a crucial input for the analysis of the
private algorithm: it states that if, instead of an optimal least squares estimator, we output an estimator
whose score is only approximately maximal, then the excess error introduced by the approximation
is small. To apply the proposition, we then establish a lemma which gives us a lower bound on the
score of the output $\hat B$ in terms of the maximal score and an excess error $\nu$.
There are several steps needed to execute this strategy, the most important ones involving a rigorous
control of the error introduced by the Lipschitz extension inside the exponential algorithm. We defer
the details to the full version.
Acknowledgments. A.S. was supported by NSF award IIS-1447700 and a Google Faculty Award.
Part of this work was done while visiting Boston University's Hariri Institute for Computation and
Harvard University's Center for Research on Computation and Society.
References
[1] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. arXiv:1503.00609, 2015.
[2] E. Abbe and C. Sandon. Recovering communities in the general stochastic block model without knowing the parameters. Manuscript, 2015.
[3] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. arXiv:1405.3267, 2014.
[4] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences of the United States of America, 106:21068–21073, 2009.
[5] P. J. Bickel, A. Chen, and E. Levina. The method of moments and degree distributions for network models. Annals of Statistics, 39(5):2280–2301, 2011.
[6] J. Blocki, A. Blum, A. Datta, and O. Sheffet. Differentially private data analysis of social networks via restricted sensitivity. In Innovations in Theoretical Computer Science (ITCS), pages 87–96, 2013.
[7] B. Bollobas, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Struct. Algorithms, 31:3–122, 2007.
[8] C. Borgs, J. T. Chayes, L. Lovász, V. Sós, and K. Vesztergombi. Counting graph homomorphisms. In Topics in Discrete Mathematics (eds. M. Klazar, J. Kratochvil, M. Loebl, J. Matousek, R. Thomas, P. Valtr), pages 315–371. Springer, 2006.
[9] C. Borgs, J. T. Chayes, L. Lovász, V. Sós, and K. Vesztergombi. Convergent graph sequences I: Subgraph frequencies, metric properties, and testing. Advances in Math., 219:1801–1851, 2008.
[10] C. Borgs, J. T. Chayes, L. Lovász, V. Sós, and K. Vesztergombi. Convergent graph sequences II: Multiway cuts and statistical physics. Ann. of Math., 176:151–219, 2012.
[11] C. Borgs, J. T. Chayes, H. Cohn, and Y. Zhao. An Lp theory of sparse graph convergence I: limits, sparse random graph models, and power law distributions. arXiv:1401.2906, 2014.
[12] C. Borgs, J. T. Chayes, H. Cohn, and Y. Zhao. An Lp theory of sparse graph convergence II: LD convergence, quotients, and right convergence. arXiv:1408.0744, 2014.
[13] S. Chatterjee. Matrix estimation by universal singular value thresholding. Annals of Statistics, 43(1):177–214, 2015.
[14] S. Chen and S. Zhou. Recursive mechanism: towards node differential privacy and unrestricted joins. In ACM SIGMOD International Conference on Management of Data, pages 653–664, 2013.
[15] D. S. Choi, P. J. Wolfe, and E. M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, 99:273–284, 2012.
[16] P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. Rendiconti di Matematica, 28:33–61, 2008.
[17] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In S. Halevi and T. Rabin, editors, TCC, volume 3876, pages 265–284, 2006.
[18] C. Gao, Y. Lu, and H. H. Zhou. Rate-optimal graphon estimation. arXiv:1410.5837, 2014.
[19] P. D. Hoff, A. E. Raftery, and M. S. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[20] P. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Soc Netw, 5:109–137, 1983.
[21] S. P. Kasiviswanathan, K. Nissim, S. Raskhodnikova, and A. Smith. Analyzing graphs with node-differential privacy. In Theory of Cryptography Conference (TCC), pages 457–476, 2013.
[22] O. Klopp, A. Tsybakov, and N. Verzelen. Oracle inequalities for network models and sparse graphon estimation. arXiv:1507.04118, 2015.
[23] L. Lovász and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96:933–957, 2006.
[24] W. Lu and G. Miklau. Exponential random graph estimation under differential privacy. In 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 921–930, 2014.
[25] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103. IEEE, 2007.
[26] S. Raskhodnikova and A. Smith. High-dimensional Lipschitz extensions and node-private analysis of network data. arXiv:1504.07912, 2015.
[27] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Ann. Statist., 39(4):1878–1915, 2011.
[28] P. Wolfe and S. C. Olhede. Nonparametric graphon estimation. arXiv:1309.5936, 2013.
5,334 | 5,829 | HONOR: Hybrid Optimization for NOn-convex
Regularized problems
Jieping Ye
University of Michigan, Ann Arbor, MI 48109
[email protected]
Pinghua Gong
University of Michigan, Ann Arbor, MI 48109
[email protected]
Abstract
Recent years have witnessed the superiority of non-convex sparse learning formulations over their convex counterparts in both theory and practice. However, due
to the non-convexity and non-smoothness of the regularizer, how to efficiently
solve the non-convex optimization problem for large-scale data is still quite challenging. In this paper, we propose an efficient Hybrid Optimization algorithm
for NOn-convex Regularized problems (HONOR). Specifically, we develop a hybrid scheme which effectively integrates a Quasi-Newton (QN) step and a Gradient Descent (GD) step. Our contributions are as follows: (1) HONOR incorporates the second-order information to greatly speed up the convergence, while
it avoids solving a regularized quadratic programming and only involves matrixvector multiplications without explicitly forming the inverse Hessian matrix. (2)
We establish a rigorous convergence analysis for HONOR, which shows that convergence is guaranteed even for non-convex problems, while it is typically challenging to analyze the convergence for non-convex problems. (3) We conduct
empirical studies on large-scale data sets and results demonstrate that HONOR
converges significantly faster than state-of-the-art algorithms.
1 Introduction
Sparse learning with convex regularization has been successfully applied to a wide range of applications including marker genes identification [19], face recognition [22], image restoration [2],
text corpora understanding [9] and radar imaging [20]. However, it has been shown recently that
many convex sparse learning formulations are inferior to their non-convex counterparts in both theory and practice [27, 12, 23, 25, 16, 26, 24, 11]. Popular non-convex sparsity-inducing penalties
include Smoothly Clipped Absolute Deviation (SCAD) [10], Log-Sum Penalty (LSP) [6] and Minimax Concave Penalty (MCP) [23]. Although non-convex sparse learning reveals its advantage over
the convex one, it remains a challenge to develop an efficient algorithm to solve the non-convex
optimization problem especially for large-scale data.
DC programming [21] is a popular approach to solve non-convex problems whose objective functions can be expressed as the difference of two convex functions. However, a potentially non-trivial
convex subproblem is required to solve at each iteration, which is not practical for large-scale problems. SparseNet [16] can solve a least squares problem with a non-convex penalty. At each step,
SparseNet solves a univariate subproblem with a non-convex penalty which admits a closed-form
solution. However, to establish the convergence analysis, the parameter of the non-convex penalty
is required to be restricted to some interval such that the univariate subproblem (with a non-convex
penalty) is convex. Moreover, it is quite challenging to extend SparseNet to non-convex problems
with a non-least-squares loss, as the univariate subproblem generally does not admit a closed-form
solution. The GIST algorithm [14] can solve a class of non-convex regularized problems by iteratively solving a possibly non-convex proximal operator problem, which in turn admits a closed-form
solution. However, GIST does not well exploit the second-order information. The DC-PN algorithm
[18] can incorporate the second-order information to solve non-convex regularized problems, but it
requires solving a non-trivial regularized quadratic subproblem at each iteration.
In this paper, we propose an efficient Hybrid Optimization algorithm for NOn-convex Regularized
problems (HONOR), which incorporates the second-order information to speed up the convergence.
HONOR adopts a hybrid optimization scheme which chooses either a Quasi-Newton (QN) step or
a Gradient Descent (GD) step per iteration mainly depending on whether an iterate has very small
components. If an iterate does not have any small component, the QN-step is adopted, which uses
L-BFGS to exploit the second-order information. The key advantage of the QN-step is that it does
not need to solve a regularized quadratic programming and only involves matrix-vector multiplications without explicitly forming the inverse Hessian matrix. If an iterate has small components,
we switch to a GD-step. Our detailed theoretical analysis sheds light on the effect of such a hybrid scheme on the convergence of the algorithm. Specifically, we provide a rigorous convergence
analysis for HONOR, which shows that every limit point of the sequence generated by HONOR is
a Clarke critical point. It is worth noting that the convergence analysis for a non-convex problem
is typically much more challenging than the convex one, because many important properties for a
convex problem may not hold for non-convex problems. Empirical studies are also conducted on
large-scale data sets which include up to millions of samples and features; results demonstrate that
HONOR converges significantly faster than state-of-the-art algorithms.
2 Non-convex Sparse Learning
We focus on the following non-convex regularized optimization problem:
$$\min_{x \in \mathbb{R}^n} \{f(x) = l(x) + r(x)\}, \qquad (1)$$
where we make the following assumptions throughout the paper:
(A1) l(x) is coercive, continuously differentiable and $\nabla l(x)$ is Lipschitz continuous with constant L. Moreover, $l(x) > -\infty$ for all $x \in \mathbb{R}^n$.
(A2) $r(x) = \sum_{i=1}^n \phi(|x_i|)$, where $\phi(t)$ is non-decreasing, continuously differentiable and concave with respect to t in $[0, \infty)$; $\phi(0) = 0$ and $\phi'(0) \ne 0$, with $\phi'(t) = \partial\phi(t)/\partial t$ denoting
the derivative of $\phi(t)$ at the point t.
Remark 1. Assumption (A1) allows l(x) to be non-convex. Assumption (A2) implies that $\phi(|x_i|)$ is
generally non-convex with respect to $x_i$; the only convex case is $\phi(|x_i|) = \lambda|x_i|$ with $\lambda > 0$.
Moreover, $\phi(|x_i|)$ is continuously differentiable with respect to $x_i$ in $(-\infty, 0) \cup (0, \infty)$ and non-differentiable at $x_i = 0$. In particular, $\partial\phi(|x_i|)/\partial x_i = \sigma(x_i)\phi'(|x_i|)$ for any $x_i \ne 0$, where
$\sigma(x_i) = 1$ if $x_i > 0$; $\sigma(x_i) = -1$ if $x_i < 0$; and $\sigma(x_i) = 0$ otherwise. In addition, $\phi'(0) > 0$ must
hold (otherwise $\phi'(0) < 0$ implies $\phi(t) \le \phi(0) + \phi'(0)t < 0$ for any $t > 0$, contradicting the fact
that $\phi(t)$ is non-decreasing). It is also easy to show that, under the assumptions above, both l(x)
and r(x) are locally Lipschitz continuous. Thus, the Clarke subdifferential [7] is well-defined.
The commonly used least squares loss and the logistic regression loss satisfy assumption (A1);
we can add a small term $\varrho\|x\|^2$ (for some small $\varrho > 0$) to make them coercive. The following popular non-convex regularizers satisfy assumption (A2), where $\lambda > 0$ and $\theta > 0$, except that $\theta > 2$ for SCAD.
• LSP: $\phi(|x_i|) = \lambda\log(1 + |x_i|/\theta)$.
• SCAD: $\phi(|x_i|) = \begin{cases} \lambda|x_i|, & \text{if } |x_i| \le \lambda,\\ \frac{-x_i^2 + 2\theta\lambda|x_i| - \lambda^2}{2(\theta - 1)}, & \text{if } \lambda < |x_i| \le \theta\lambda,\\ (\theta + 1)\lambda^2/2, & \text{if } |x_i| > \theta\lambda. \end{cases}$
• MCP: $\phi(|x_i|) = \begin{cases} \lambda|x_i| - x_i^2/(2\theta), & \text{if } |x_i| \le \theta\lambda,\\ \theta\lambda^2/2, & \text{if } |x_i| > \theta\lambda. \end{cases}$
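These penalties are straightforward to implement; the following Python sketch (our naming, vectorized over t = |x_i| ≥ 0) mirrors the three formulas above.

    import numpy as np

    def lsp(t, lam, theta):
        # LSP: phi(t) = lam * log(1 + t/theta)
        return lam * np.log(1.0 + np.asarray(t, dtype=float) / theta)

    def scad(t, lam, theta):
        # SCAD (theta > 2), piecewise on [0, lam], (lam, theta*lam], (theta*lam, inf)
        t = np.asarray(t, dtype=float)
        return np.where(t <= lam, lam * t,
               np.where(t <= theta * lam,
                        (-t**2 + 2*theta*lam*t - lam**2) / (2*(theta - 1)),
                        (theta + 1) * lam**2 / 2))

    def mcp(t, lam, theta):
        # MCP, piecewise on [0, theta*lam] and (theta*lam, inf)
        t = np.asarray(t, dtype=float)
        return np.where(t <= theta * lam, lam*t - t**2/(2*theta), theta*lam**2/2)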
Due to the non-convexity and non-differentiability of problem (1), the traditional subdifferential
concept from convex optimization is not applicable here. Thus, we use the Clarke subdifferential
[7] to characterize the optimality of problem (1). We say $\bar x$ is a Clarke critical point of problem (1)
if $0 \in \partial^o f(\bar x)$, where $\partial^o f(\bar x)$ is the Clarke subdifferential of f(x) at $x = \bar x$. To be self-contained,
we briefly review the Clarke subdifferential: for a locally Lipschitz continuous function f(x), the
Clarke generalized directional derivative of f(x) at $x = \bar x$ along the direction d is defined as
$$f^o(\bar x; d) = \limsup_{x \to \bar x,\, \alpha \downarrow 0} \frac{f(x + \alpha d) - f(x)}{\alpha}.$$
Then, the Clarke subdifferential of f(x) at $x = \bar x$ is defined as
$$\partial^o f(\bar x) = \{\xi \in \mathbb{R}^n : f^o(\bar x; d) \ge d^T\xi,\ \forall d \in \mathbb{R}^n\}.$$
Interested readers may refer to Proposition 4 in Supplement A for more properties of the Clarke
subdifferential. We want to emphasize that some basic properties of the subdifferential of a
convex function may not hold for the Clarke subdifferential of a non-convex function.
3 Proposed Optimization Algorithm: HONOR
Since each decomposable component function of the regularizer is non-differentiable only at the
origin, the objective function is differentiable as long as the segment between any two consecutive iterates
does not cross any axis. This motivates us to design an algorithm which can keep the current iterate
in the same orthant as the previous iterate. Before we present the detailed HONOR algorithm, we
introduce two functions as follows:
Define a function $\pi : \mathbb{R}^n \mapsto \mathbb{R}^n$ with the i-th entry being
$$\pi_i(x_i; y_i) = \begin{cases} x_i, & \text{if } \sigma(x_i) = \sigma(y_i),\\ 0, & \text{otherwise}, \end{cases}$$
where $y \in \mathbb{R}^n$ ($y_i$ is the i-th entry of y) is the parameter of the function π, and $\sigma(\cdot)$ is the sign function
defined as follows: $\sigma(x_i) = 1$ if $x_i > 0$; $\sigma(x_i) = -1$ if $x_i < 0$; and $\sigma(x_i) = 0$ otherwise.
Define the pseudo-gradient $\diamond f(x)$, whose i-th entry is given by
$$\diamond_i f(x) = \begin{cases}
\nabla_i l(x) + \phi'(|x_i|), & \text{if } x_i > 0,\\
\nabla_i l(x) - \phi'(|x_i|), & \text{if } x_i < 0,\\
\nabla_i l(x) + \phi'(0), & \text{if } x_i = 0,\ \nabla_i l(x) + \phi'(0) < 0,\\
\nabla_i l(x) - \phi'(0), & \text{if } x_i = 0,\ \nabla_i l(x) - \phi'(0) > 0,\\
0, & \text{otherwise},
\end{cases}$$
where $\phi'(t)$ is the derivative of $\phi(t)$ at the point t.
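The five cases map one-to-one onto code; a minimal sketch follows (our naming; grad_l is $\nabla l(x)$ and phi_prime is $\phi'$, e.g. $\phi'(t) = \lambda/(\theta + t)$ for LSP).

    import numpy as np

    def pseudo_gradient(x, grad_l, phi_prime):
        g = np.zeros_like(x)
        for i in range(len(x)):
            if x[i] > 0:
                g[i] = grad_l[i] + phi_prime(abs(x[i]))
            elif x[i] < 0:
                g[i] = grad_l[i] - phi_prime(abs(x[i]))
            elif grad_l[i] + phi_prime(0.0) < 0:    # x_i = 0, descent to the right
                g[i] = grad_l[i] + phi_prime(0.0)
            elif grad_l[i] - phi_prime(0.0) > 0:    # x_i = 0, descent to the left
                g[i] = grad_l[i] - phi_prime(0.0)
            # otherwise g[i] stays 0
        return g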
Remark 2. If r(x) is convex, $\diamond f(x)$ is the minimum-norm sub-gradient of f(x) at x. Thus, $-\diamond f(x)$
is a descent direction. However, $\diamond f(x)$ is not even a sub-gradient of f(x) if r(x) is non-convex.
This indicates that some obvious concepts and properties for a convex problem may not hold in the
non-convex case. Thus, it is significantly more challenging to develop and analyze algorithms for a
non-convex problem.
Interestingly, we can still show that $v^k = -\diamond f(x^k)$ is a descent direction at the point $x^k$ (refer
to Supplement D and replace $p^k = \pi(d^k; v^k)$ with $v^k$). To utilize the second-order information,
we may perform the optimization along the direction $d^k = H^k v^k$, where $H^k$ is a positive definite
matrix containing the second-order information. However, $d^k$ is not necessarily a descent direction.
To address this issue, we use the following slightly modified direction $p^k$:
$$p^k = \pi(d^k; v^k).$$
We can show that $p^k$ is a descent direction (the proof is provided in Supplement D). Thus, we can
perform the optimization along the direction $p^k$. Recall that we need to keep the current iterate in
the same orthant as the previous iterate, so the following iterative scheme is proposed:
$$x^k(\alpha) = \pi(x^k + \alpha p^k; \xi^k), \qquad (2)$$
where
$$\xi_i^k = \begin{cases} \sigma(x_i^k), & \text{if } x_i^k \ne 0,\\ \sigma(v_i^k), & \text{if } x_i^k = 0, \end{cases} \qquad (3)$$
and α is a step size chosen by the following line search procedure: for constants $\alpha_0 > 0$, $\beta, \gamma \in (0,1)$
and $m = 0, 1, \dots$, find the smallest integer m with $\alpha = \alpha_0\beta^m$ such that the following
inequality holds:
$$f(x^k(\alpha)) \le f(x^k) - \gamma\alpha (v^k)^T d^k. \qquad (4)$$
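The orthant machinery of (2)-(3) is a componentwise projection; the sketch below (our naming) computes $\pi(x; y)$, the orthant pattern $\xi^k$, and the trial point $x^k(\alpha)$.

    import numpy as np

    def sign(t):
        return (t > 0).astype(float) - (t < 0).astype(float)

    def orthant_project(x, y):
        # pi_i(x_i; y_i) = x_i if sign(x_i) == sign(y_i), else 0
        return np.where(sign(x) == sign(y), x, 0.0)

    def trial_point(x, p, v, alpha):
        # x^k(alpha) = pi(x^k + alpha p^k; xi^k), with xi_i = sign(x_i) if x_i != 0
        # and xi_i = sign(v_i) otherwise, so the step stays in the chosen orthant
        xi = np.where(x != 0, sign(x), sign(v))
        return orthant_project(x + alpha * p, xi)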
However, only using the above iterative scheme may not guarantee convergence. The main challenge is: if there exists a subsequence K such that $\{x_i^k\}_K$ converges to zero, it is possible that for
sufficiently large $k \in K$, $|x_i^k|$ is arbitrarily small but never equal to zero (refer to the proof of Theorem 1 for more details). To address this issue, we propose a hybrid optimization scheme. Specifically, for a small constant $\epsilon > 0$, if $I^k = \{i \in \{1, \dots, n\} : 0 < |x_i^k| \le \min(\|v^k\|, \epsilon),\ x_i^k v_i^k < 0\}$
is not empty, we switch the iteration to the following gradient descent step (GD-step):
$$x^k(\alpha) = \arg\min_x \left\{\nabla l(x^k)^T(x - x^k) + \frac{1}{2\alpha}\|x - x^k\|^2 + r(x)\right\},$$
where α is a step size chosen by the following line search procedure: for constants $\alpha_0 > 0$, $\beta, \gamma \in (0,1)$ and $m = 0, 1, \dots$, find the smallest integer m with $\alpha = \alpha_0\beta^m$ such that the following
inequality holds:
$$f(x^k(\alpha)) \le f(x^k) - \frac{\gamma}{2\alpha}\|x^k(\alpha) - x^k\|^2. \qquad (5)$$
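For the general non-convex φ above, the GD-step subproblem separates over coordinates and admits closed-form solutions (see GIST [14]). As a minimal illustration we give only the convex special case $\phi(t) = \lambda t$, i.e. $r(x) = \lambda\|x\|_1$, where the solution is soft-thresholding; this is a sketch under that assumption, not the non-convex proximal operator itself.

    import numpy as np

    def gd_step_l1(x, grad_l, alpha, lam):
        # argmin_z grad_l^T (z - x) + ||z - x||^2/(2*alpha) + lam*||z||_1
        u = x - alpha * grad_l                    # gradient step on l
        return np.sign(u) * np.maximum(np.abs(u) - alpha * lam, 0.0)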
The detailed steps of the algorithm are presented in Algorithm 1.
Remark 3. Algorithm 1 is similar to the OWL-QN-type algorithms in [1, 3, 4, 17, 13]. However,
HONOR is significantly different from them: (1) The OWL-QN-type algorithms can only handle $\ell_1$-regularized convex problems, while HONOR is applicable to a class of non-convex problems beyond
$\ell_1$-regularized ones. (2) The convergence analyses of the OWL-QN-type algorithms rely heavily on
the convexity of the $\ell_1$-regularized problem. In contrast, the convergence analysis for HONOR is
applicable to non-convex cases beyond the convex ones, which is a non-trivial extension.
Algorithm 1: HONOR: Hybrid Optimization for NOn-convex Regularized problems
    Initialize $x^0$, $H^0$ and choose $\beta, \gamma \in (0,1)$, $\epsilon > 0$, $\alpha_0 > 0$;
    for k = 0 to maxiter do
        Compute $v^k \leftarrow -\diamond f(x^k)$ and $I^k = \{i \in \{1,\dots,n\} : 0 < |x_i^k| \le \epsilon^k,\ x_i^k v_i^k < 0\}$, where
        $\epsilon^k = \min(\|v^k\|, \epsilon)$;
        Initialize $\alpha \leftarrow \alpha_0$;
        if $I^k = \emptyset$ then
            (QN-step)
            Compute $d^k \leftarrow H^k v^k$ with a positive definite matrix $H^k$ using L-BFGS;
            Alignment: $p^k \leftarrow \pi(d^k; v^k)$;
            while Eq. (4) is not satisfied do
                $\alpha \leftarrow \beta\alpha$; $x^k(\alpha) \leftarrow \pi(x^k + \alpha p^k; \xi^k)$;
            end
        else
            (GD-step)
            while Eq. (5) is not satisfied do
                $\alpha \leftarrow \beta\alpha$;
                $x^k(\alpha) \leftarrow \arg\min_x \{\nabla l(x^k)^T(x - x^k) + \frac{1}{2\alpha}\|x - x^k\|^2 + r(x)\}$;
            end
        end
        $x^{k+1} \leftarrow x^k(\alpha)$;
        if some stopping criterion is satisfied then
            stop and return $x^{k+1}$;
        end
    end
4 Convergence Analysis
We first present a few basic propositions and then provide the convergence theorem based on them;
all proofs require careful handling due to the lack of convexity. First of all, an optimality condition is
presented (proof provided in Supplement B), which will be used directly in the proof of Theorem 1.
Proposition 1. Let $\bar x = \lim_{k \in K, k\to\infty} x^k$, $v^k = -\diamond f(x^k)$ and $\bar v = -\diamond f(\bar x)$, where K is a
subsequence of $\{1, 2, \dots, k, k+1, \dots\}$. If $\liminf_{k \in K, k\to\infty} |v_i^k| = 0$ for all $i \in \{1, \dots, n\}$, then
$\bar v = 0$ and $\bar x$ is a Clarke critical point of problem (1).
We subsequently show that a Lipschitz-continuous-like inequality holds, stated in the following proposition (proof provided in Supplement C), which is crucial to prove the final convergence theorem.
Proposition 2. Let $v^k = -\diamond f(x^k)$, $x^k(\alpha) = \pi(x^k + \alpha p^k; \xi^k)$ and $q_\alpha^k = \frac{1}{\alpha}(\pi(x^k + \alpha p^k; \xi^k) - x^k)$
with $\alpha > 0$. Then under assumptions (A1) and (A2), we have
(i) $\nabla l(x^k)^T(x^k(\alpha) - x^k) + r(x^k(\alpha)) - r(x^k) \le -(v^k)^T(x^k(\alpha) - x^k)$,  (6)
(ii) $f(x^k(\alpha)) - f(x^k) \le -\alpha(v^k)^T q_\alpha^k + \frac{\alpha^2 L}{2}\|q_\alpha^k\|^2$.  (7)
We next show that both line search criteria, in the QN-step [Eq. (4)] and the GD-step [Eq. (5)], at any
iteration k are satisfied in a finite number of trials (proof provided in Supplement D).
Proposition 3. At any iteration k of the HONOR algorithm, if $x^k$ is not a Clarke critical point of
problem (1), then (a) for the QN-step, there exists an $\alpha \in [\bar\alpha^k, \alpha_0]$ with $0 < \bar\alpha^k \le \alpha_0$ such that the
line search criterion in Eq. (4) is satisfied; (b) for the GD-step, the line search criterion in Eq. (5) is
satisfied whenever $\alpha \le \beta\min(\alpha_0, (1-\gamma)/L)$. That is, both line search criteria at any iteration k
are satisfied in a finite number of trials.
We are now ready to provide the convergence proof for the HONOR algorithm:
Theorem 1. The sequence $\{x^k\}$ generated by the HONOR algorithm has at least one limit point and
every limit point of $\{x^k\}$ is a Clarke critical point of problem (1).
Proof. It follows from Proposition 3 that both line search criteria in the QN-step [Eq. (4)] and the
GD-step [Eq. (5)] at each iteration can be satisfied in a finite number of trials. Let $\alpha_k$ be the accepted
step size at iteration k. Then we have
$$f(x^k) - f(x^{k+1}) \ge \gamma\alpha_k (v^k)^T d^k = \gamma\alpha_k (v^k)^T H^k v^k \quad \text{(QN-step)}, \qquad (8)$$
$$\text{or } f(x^k) - f(x^{k+1}) \ge \frac{\gamma}{2\alpha_k}\|x^{k+1} - x^k\|^2 \ge \frac{\gamma}{2\alpha_0}\|x^{k+1} - x^k\|^2 \quad \text{(GD-step)}. \qquad (9)$$
Recall that $H^k$ is positive definite and $\gamma > 0$, $\alpha_k > 0$, which together with Eqs. (8), (9) imply
that $\{f(x^k)\}$ is monotonically decreasing. Thus, $\{f(x^k)\}$ converges to a finite value $\bar f$, since f is
bounded from below (note that $l(x) > -\infty$ and $r(x) \ge 0$ for all $x \in \mathbb{R}^n$). Due to the boundedness
of $\{x^k\}$ (see Proposition 7 in Supplement F), the sequence $\{x^k\}$ generated by the HONOR algorithm
has at least one limit point $\bar x$. Since f is continuous, there exists a subsequence K of $\{1, 2, \dots, k, k+1, \dots\}$ such that
$$\lim_{k \in K, k\to\infty} x^k = \bar x, \qquad (10)$$
$$\lim_{k\to\infty} f(x^k) = \lim_{k \in K, k\to\infty} f(x^k) = \bar f = f(\bar x). \qquad (11)$$
In the following, we prove the theorem by contradiction. Assume that $\bar x$ is not a Clarke critical point
of problem (1). Then by Proposition 1, there exists at least one $i \in \{1, \dots, n\}$ such that
$$\liminf_{k \in K, k\to\infty} |v_i^k| > 0. \qquad (12)$$
We next consider the following two cases:
(a) There exist a subsequence $\tilde K$ of K and an integer $\tilde k > 0$ such that for all $k \in \tilde K$, $k \ge \tilde k$, the
GD-step is adopted. Then for all $k \in \tilde K$, $k \ge \tilde k$, we have
$$x^{k+1} = \arg\min_x \left\{\nabla l(x^k)^T(x - x^k) + \frac{1}{2\alpha^k}\|x - x^k\|^2 + r(x)\right\}.$$
Thus, by the optimality condition of the above problem and properties of the Clarke subdifferential
(Proposition 4 in Supplement A), we have
$$0 \in \nabla l(x^k) + \frac{1}{\alpha^k}(x^{k+1} - x^k) + \partial^o r(x^{k+1}). \qquad (13)$$
Taking limits with $k \in \tilde K$ in Eq. (9) and considering Eqs. (10), (11), we have
$$\lim_{k \in \tilde K, k\to\infty} \|x^{k+1} - x^k\|^2 \to 0 \ \Rightarrow\ \lim_{k \in \tilde K, k\to\infty} x^k = \lim_{k \in \tilde K, k\to\infty} x^{k+1} = \bar x. \qquad (14)$$
Taking limits with $k \in \tilde K$ in Eq. (13) and considering Eq. (14), $\alpha^k \ge \beta\min(\alpha_0, (1-\gamma)/L)$
[Proposition 3] and the fact that $\partial^o r(\cdot)$ is upper-semicontinuous (upper-hemicontinuous) [8] (see Proposition 4
in Supplement A), we have
$$0 \in \nabla l(\bar x) + \partial^o r(\bar x) = \partial^o f(\bar x),$$
which contradicts the assumption that $\bar x$ is not a Clarke critical point of problem (1).
(b) There exists an integer $\bar k > 0$ such that for all $k \in K$, $k \ge \bar k$, the QN-step is adopted. According
to Remark 7 (in Supplement F), we know that the smallest eigenvalue of $H^k$ is uniformly bounded
from below by a positive constant, which together with Eq. (12) implies
$$\liminf_{k \in K, k\to\infty} (v^k)^T H^k v^k > 0. \qquad (15)$$
Taking limits with $k \in K$ in Eq. (8), we have
$$\lim_{k \in K, k\to\infty} \gamma\alpha_k (v^k)^T H^k v^k = 0,$$
which together with $\gamma \in (0,1)$, $\alpha_k \in (0, \alpha_0]$ and Eq. (15) implies that
$$\lim_{k \in K, k\to\infty} \alpha_k = 0. \qquad (16)$$
Eq. (12) implies that there exist an integer $\hat k > 0$ and a constant $\bar\epsilon > 0$ such that $\epsilon^k = \min(\|v^k\|, \epsilon) \ge \bar\epsilon$ for all $k \in K$, $k \ge \hat k$. Notice that for all $k \in K$, $k \ge \bar k$, the QN-step is
adopted. Thus, we obtain that $I^k = \{i \in \{1,\dots,n\} : 0 < |x_i^k| \le \epsilon^k,\ x_i^k v_i^k < 0\} = \emptyset$ for all
$k \in K$, $k \ge \bar k$. We also notice that, if $|x_i^k| \ge \bar\epsilon$, then there exists a constant $\bar\alpha_i > 0$ such that
$x_i^k(\alpha) = \pi_i(x_i^k + \alpha p_i^k; \xi_i^k) = x_i^k + \alpha p_i^k$ for all $\alpha \in (0, \bar\alpha_i]$, as $\{p_i^k\}$ is bounded (Proposition 8
in Supplement F). Therefore, we conclude that, for all $k \in K$, $k \ge \check k = \max(\bar k, \hat k)$ and for all
$i \in \{1, \dots, n\}$, at least one of the following three cases must happen:
$$x_i^k = 0 \ \Rightarrow\ x_i^k(\alpha) = \pi_i(x_i^k + \alpha p_i^k; \xi_i^k) = x_i^k + \alpha p_i^k,\ \forall \alpha > 0,$$
$$\text{or } |x_i^k| > \epsilon^k \ge \bar\epsilon \ \Rightarrow\ x_i^k(\alpha) = \pi_i(x_i^k + \alpha p_i^k; \xi_i^k) = x_i^k + \alpha p_i^k,\ \forall \alpha \in (0, \bar\alpha_i],$$
$$\text{or } x_i^k v_i^k \ge 0 \ \Rightarrow\ x_i^k p_i^k \ge 0 \ \Rightarrow\ x_i^k(\alpha) = \pi_i(x_i^k + \alpha p_i^k; \xi_i^k) = x_i^k + \alpha p_i^k,\ \forall \alpha > 0.$$
It follows that there exists a constant $\bar\alpha > 0$ such that
$$q_\alpha^k = \frac{1}{\alpha}(x^k(\alpha) - x^k) = p^k, \quad \forall k \in K,\ k \ge \check k,\ \alpha \in (0, \bar\alpha]. \qquad (17)$$
Thus, considering $|p_i^k| = |\pi_i(d_i^k; v_i^k)| \le |d_i^k|$ and $v_i^k p_i^k \ge v_i^k d_i^k$ for all $i \in \{1, \dots, n\}$, we have
$$\|q_\alpha^k\|^2 = \|p^k\|^2 \le \|d^k\|^2 = (v^k)^T (H^k)^2 v^k, \quad \forall k \in K,\ k \ge \check k,\ \alpha \in (0, \bar\alpha], \qquad (18)$$
$$(v^k)^T q_\alpha^k = (v^k)^T p^k \ge (v^k)^T d^k = (v^k)^T H^k v^k, \quad \forall k \in K,\ k \ge \check k,\ \alpha \in (0, \bar\alpha]. \qquad (19)$$
According to Proposition 8 (in Supplement F), we know that the largest eigenvalue of $H^k$ is uniformly bounded from above by some positive constant M. Thus, we have
$$(v^k)^T (H^k)^2 v^k \le M (v^k)^T H^k v^k, \quad \forall k,$$
which together with Eqs. (18), (19) and $d^k = H^k v^k$ implies
$$\|q_\alpha^k\|^2 \le M (v^k)^T d^k \quad \text{and} \quad (v^k)^T q_\alpha^k \ge (v^k)^T d^k, \quad \forall k \in K,\ k \ge \check k,\ \alpha \in (0, \bar\alpha]. \qquad (20)$$
Considering Eqs. (7), (20), we have
$$f(x^k(\alpha)) - f(x^k) \le -\alpha\left(1 - \frac{\alpha L M}{2}\right)(v^k)^T d^k, \quad \forall k \in K,\ k \ge \check k,\ \alpha \in (0, \bar\alpha],$$
which together with $(v^k)^T d^k = (v^k)^T H^k v^k \ge 0$ implies that the line search criterion in the
QN-step [Eq. (4)] is satisfied if
$$1 - \frac{\alpha L M}{2} \ge \gamma, \quad 0 < \alpha \le \alpha_0 \quad \text{and} \quad 0 < \alpha \le \bar\alpha, \quad \forall k \in K,\ k \ge \check k.$$
Considering the backtracking form of the line search in the QN-step [Eq. (4)], we conclude that the line
search criterion in the QN-step [Eq. (4)] is satisfied whenever
$$\alpha_k \ge \beta\min\big(\min(\bar\alpha, \alpha_0),\ 2(1-\gamma)/(LM)\big) > 0, \quad \forall k \in K,\ k \ge \check k.$$
This leads to a contradiction with Eq. (16).
By (a) and (b), we conclude that $\bar x = \lim_{k \in K, k\to\infty} x^k$ is a Clarke critical point of problem (1).
5 Experiments
In this section, we evaluate the efficiency of HONOR on solving the non-convex regularized logistic regression problem¹ by setting $l(x) = \frac{1}{N}\sum_{i=1}^N \log(1 + \exp(-y_i a_i^T x))$, where $a_i \in \mathbb{R}^n$ is the i-th sample associated with the label $y_i \in \{1, -1\}$. Three non-convex regularizers (LSP, MCP and SCAD) are included in the experiments, where the parameters are set as $\lambda = 1/N$ and $\theta = 10^{-2}\lambda$ (θ is set as $2 + 10^{-2}\lambda$ for SCAD as it requires $\theta > 2$). We compare HONOR with the non-convex solver² GIST [14] on three large-scale, high-dimensional
and sparse data sets, which are summarized in Table 1. All data sets can be downloaded from
http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
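The loss used here is standard; a numerically stable sketch of its value and gradient (our naming; the rows of A are the samples $a_i$, and $y \in \{+1, -1\}^N$) is given below.

    import numpy as np

    def logistic_loss_grad(x, A, y):
        # l(x) = (1/N) sum_i log(1 + exp(-y_i a_i^T x)) and its gradient
        N = A.shape[0]
        m = -y * (A @ x)                          # -y_i a_i^T x
        loss = np.mean(np.logaddexp(0.0, m))      # stable log(1 + e^m)
        s = np.exp(m - np.logaddexp(0.0, m))      # sigmoid(m), computed stably
        grad = -(A.T @ (y * s)) / N
        return loss, grad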
Table 1: Data set statistics.
    datasets            kdd2010a      kdd2010b      url
    ♯ samples N         510,302       748,401       2,396,130
    dimensionality n    20,216,830    29,890,095    3,231,961
All algorithms are implemented in Matlab 2015a under a Linux operating system and executed on an Intel Core
i7-4790 CPU (@3.6GHz) with 32GB memory. We choose the starting points $x^0$ for the compared algorithms
using the same random vector whose entries are i.i.d. sampled from
the standard Gaussian distribution. We terminate the compared algorithms if the relative change of
two consecutive objective function values is less than $10^{-5}$ or the number of iterations exceeds 1000
(HONOR) or 10000 (GIST). For HONOR, we set $\gamma = 10^{-5}$, $\beta = 0.5$, $\alpha_0 = 1$ and the number of
unrolling steps in L-BFGS as m = 10. For GIST, we use the non-monotone line search in experiments as it usually performs better than its monotone counterpart. To show how the convergence
behavior of HONOR varies with the parameter $\epsilon$, we use three values: $\epsilon = 10^{-10}, 10^{-6}, 10^{-2}$.
We can observe from Figure 1 that: (1) If ? is set to a small value, the QN-step is adopted at almost
all steps in HONOR and HONOR converges significantly faster than GIST for all three non-convex
1
We do not include the term ?kxk2 in the objective and find that the proposed algorithm still works well.
We do not involve SparseNet, DC programming and DC-PN in comparison, because (1) adapting
SparseNet to the logistic regression problem is challenging; (2) DC programming is shown to be much inferior to GIST; (3) The objective function value of DC-PN is larger than GIST in most cases [18].
2
7
400
600
800
1000
1000
800
1000
1500
2000
2500
10 -3
500
1000
1500
2000
2500
CPU time (seconds)
SCAD (kdd2010a)
SCAD (kdd2010b)
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 -1
-2
1000
1500
2000
CPU time (seconds)
2500
3000
3500
6000
8000
10000
12000
14000
10 -3
3000
CPU time (seconds)
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 0
10 -1
10 -2
10 -3
0
0.5
1
1.5
2
2.5
3
?10 4
SCAD (url)
10 -2
2000
4000
CPU time (seconds)
10 -1
1000
Objective function value (logged scale)
4000
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 0
0
2000
CPU time (seconds)
10 -2
0
10 -1
0
10 -1
3000
10 0
1200
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 0
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 1
MCP (url)
CPU time (seconds)
10 0
500
600
MCP (kdd2010b)
10 -2
0
400
MCP (kdd2010a)
10 -1
10
200
CPU time (seconds)
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
500
0
CPU time (seconds)
10 0
0
10 0
1200
Objective function value (logged scale)
Objective function value (logged scale)
200
10 1
Objective function value (logged scale)
10 0
LSP (url)
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
4000
Objective function value (logged scale)
10 1
0
Objective function value (logged scale)
LSP (kdd2010b)
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
Objective function value (logged scale)
LSP (kdd2010a)
Objective function value (logged scale)
Objective function value (logged scale)
regularizers on all three data sets. This shows that using the second-order information greatly speeds
up the convergence. (2) When ? increases, the ratio of the GD-step adopted in HONOR increases.
Meanwhile, the convergence performance of HONOR generally degrades. In some cases, setting
a slightly larger ? and adopting a small number of GD steps even sligtly boosts the convergence
performance of HONOR (the green curves in the first row). But setting ? to a very small value is
always safe to guarantee the fast convergence of HONOR. (3) When ? is large enough, the GD steps
dominate all iterations of HONOR and HONOR converge much slower. In this case, HONOR converges even slower than GIST. The reason is that, at each iteration of HONOR, extra computational
cost is required in addition to the basic computation in the GD-step. Moreover, the non-monotone
line search is used in GIST while the monotone line search is adopted in the GD-step. (4) In some
cases (the first row), GIST is trapped in a local solution which has a much larger objective function
value than HONOR with a small ?. This implies that HONOR may have a potential of escaping
from high error plateau which often exists in high dimensional non-convex problems. These results
show the great potential of HONOR for solving large-scale non-convex sparse learning problems.
HONOR(?=1e-10)
HONOR(?=1e-6)
HONOR(?=1e-2)
GIST
10 0
10 -1
10 -2
10 -3
0
0.5
1
1.5
2
CPU time (seconds)
2.5
?10 4
Figure 1: Objective function value (in log-scale) vs. CPU time (in seconds) plots for different non-convex regularizers and different large-scale and high-dimensional data sets. The ratios of the GD-step adopted in HONOR are: LSP (kdd2010a): 0%, 1%, 34%; LSP (kdd2010b):
0%, 2%, 27%; LSP (url): 0.1%, 2%, 35%; MCP (kdd2010a): 0%, 88%, 100%; MCP (kdd2010b):
0%, 89%, 100%; MCP (url): 0%, 97%, 100%; SCAD (kdd2010a): 0%, 43%, 100%; SCAD (kdd2010b):
0%, 32%, 99.5%; SCAD (url): 0%, 79%, 100%.
6 Conclusions
In this paper, we propose an efficient optimization algorithm called HONOR for solving non-convex
regularized sparse learning problems. HONOR incorporates the second-order information to speed
up the convergence in practice and uses a carefully designed hybrid optimization scheme to guarantee the convergence in theory. Experiments are conducted on large-scale data sets and results show
that HONOR converges significantly faster than state-of-the-art algorithms. In our future work, we
plan to develop parallel/distributed variants of HONOR to tackle much larger data sets.
Acknowledgements
This work is supported in part by research grants from NIH (R01 LM010730, U54 EB020403) and
NSF (IIS-0953662, III-1539991, III-1539722).
References
[1] G. Andrew and J. Gao. Scalable training of ℓ1-regularized log-linear models. In ICML, pages 33–40, 2007.
[2] J. Bioucas-Dias and M. Figueiredo. A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Transactions on Image Processing, 16(12):2992–3004, 2007.
[3] R. H. Byrd, G. M. Chin, J. Nocedal, and F. Oztoprak. A family of second-order methods for convex ℓ1-regularized optimization. Technical report, Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, 2012.
[4] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127–155, 2012.
[5] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
[6] E. Candes, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.
[7] F. Clarke. Optimization and Nonsmooth Analysis. John Wiley & Sons, New York, 1983.
[8] J. Dutta. Generalized derivatives and nonsmooth optimization, a finite dimensional tour. Top, 13(2):185–279, 2005.
[9] L. El Ghaoui, G. Li, V. Duong, V. Pham, A. Srivastava, and K. Bhaduri. Sparse machine learning methods for understanding large text corpora. In CIDU, pages 159–173, 2011.
[10] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[11] J. Fan, L. Xue, and H. Zou. Strong oracle optimality of folded concave penalized estimation. Annals of Statistics, 42(3):819, 2014.
[12] G. Gasso, A. Rakotomamonjy, and S. Canu. Recovering sparse signals with a certain family of nonconvex penalties and DC programming. IEEE Transactions on Signal Processing, 57(12):4686–4698, 2009.
[13] P. Gong and J. Ye. A modified orthant-wise limited memory quasi-Newton method with convergence analysis. In ICML, 2015.
[14] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In ICML, volume 28, pages 37–45, 2013.
[15] N. Jorge and J. Stephen. Numerical Optimization. Springer, 1999.
[16] R. Mazumder, J. Friedman, and T. Hastie. SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 106(495), 2011.
[17] P. Olsen, F. Oztoprak, J. Nocedal, and S. Rennie. Newton-like methods for sparse inverse covariance estimation. In Advances in Neural Information Processing Systems (NIPS), pages 764–772, 2012.
[18] A. Rakotomamonjy, R. Flamary, and G. Gasso. DC proximal Newton for non-convex optimization problems. 2014.
[19] S. Shevade and S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246, 2003.
[20] X. Tan, W. Roberts, J. Li, and P. Stoica. Sparse learning via iterative minimization with application to MIMO radar imaging. IEEE Transactions on Signal Processing, 59(3):1088–1101, 2011.
[21] P. Tao and L. An. The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research, 133(1-4):23–46, 2005.
[22] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2008.
[23] C. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[24] C. Zhang and T. Zhang. A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4):576–593, 2012.
[25] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. JMLR, 11:1081–1107, 2010.
[26] T. Zhang. Multi-stage convex relaxation for feature selection. Bernoulli, 2012.
[27] H. Zou and R. Li. One-step sparse estimates in nonconcave penalized likelihood models. Annals of Statistics, 36(4):1509, 2008.
5,335 | 583 | Polynomial Uniform Convergence of
Relative Frequencies to Probabilities
Alberto Bertoni, Paola Carnpadelli~ Anna Morpurgo, Sandra Panizza
Dipartimento di Scienze dell'Informazione
Universita degli Studi di Milano
via Comelico, 39 - 20135 Milano - Italy
Abstract
We define the concept of polynomial uniform convergence of relative
frequencies to probabilities in the distribution-dependent context. Let
Xn = {O, l}n, let Pn be a probability distribution on Xn and let Fn C 2X ,.
be a family of events. The family {(Xn, Pn , Fn)}n~l has the property
of polynomial uniform convergence if the probability that the maximum
difference (over Fn) between the relative frequency and the probability of an event exceed a given positive e be at most 6 (0 < 6 < 1),
when the sample on which the frequency is evaluated has size polynomial
in n,l/e,l/b. Given at-sample (Xl, ... ,Xt), let C~t)(XI, ... ,Xt) be the
Vapnik-Chervonenkis dimension of the family {{x}, ... ,xtl n f I f E Fn}
and M(n, t) the expectation E(C~t) It). We show that {(Xn, Pn , Fn)}n~l
has the property of polynomial uniform convergence iff there exists f3 > 0
such that M(n, t)
O(n/t!3). Applications to distribution-dependent
PAC learning are discussed.
=
1
INTRODUCTION
The probably approximately correct (PAC) learning model proposed by Valiant
[Valiant, 1984] provides a complexity theoretical basis for learning from examples
produced by an arbitrary distribution. As shown in [Blumer et al., 1989], a cen? Also at CNR, Istituto di Fisiologia dei Centri Nervosi, via Mario Bianco 9, 20131
Milano, Italy.
904
Polynomial Uniform Convergence
tral notion for distribution-free learnability is the Vapnik-Chervonenkis dimension,
which allows obtaining estimations of the sample size adequate to learn at a given
level of approximation and confidence. This combinatorial notion has been defined
in [Vapnik & Chervonenkis, 1971] to study the problem of uniform convergence of
relative frequencies of events to their corresponding probabilities in a distributionfree framework.
In this work we define the concept of polynomial uniform convergence of relative
frequencies of events to probabilities in the distribution-dependent setting. More
precisely, consider, for any n, a probability distribution on {O, l}n and a family of
events Fn ~ 2{O,1}"; our request is that the probability that the maximum difference
(over Fn) between the relative frequency and the probability of an event exceed a
given arbitrarily small positive constant ? be at most 6 (0 < 6 < 1) when the sample
on which we evaluate the relative frequencies has size polynomial in n, 1/?,1/6.
The main result we present here is a necessary and sufficient condition for polynomial uniform convergence in terms of "average information per example" .
In section 2 we give preliminary notations and results; in section 3 we introduce the
concept of polynomial uniform convergence in the distribution-dependent context
and we state our main result, which we prove in section 4. Some applications to
distribution-dependent PAC learning are discussed in section 5.
2
PRELIMINARY DEFINITIONS AND RESULTS
Let X be a set of elementary events on which a probability measure P is defined
and let F be a collection of boolean functions on X, i.e. functions f ; X - {O, 1}.
For I E F the set 1-1 (1) is said event, and Pj denotes its probability. At-sample
(or sample of size t) on X is a sequence ~ = (Xl, .. . , X,), where Xk E X (1 < k < t).
Let X(t) denote the space of t-samples and pCt) the probability distribution induced
by P on XCt), such that P(t)(Xl,"" Xt) = P(Xt)P(X2)'" P(Xt).
Given a t-sample ~ and a set
t-sample~, i.e.
f E F, let vjt)(~) be the relative frequency of f in the
(t)( ) _
Vj
X
-
-
L~=l I(x;) ?
t
Consider now the random variable II~) ; XCt) _ [01], defined over (XCt), pCt?),
where
II~)(Xt, ... ,xe) = sup I Vjt)(Xl, ... , Xt) - Pj I .
JEF
The relative frequencies of the events are said to converge to the probabilities uniformly over F if, for every ? > 0, limt_oo pCt){ X I II~)(~) > ?} = O.
In order to study the problem of uniform convergence of the relative frequencies
to the probabilities, the notion of index Ll F ( x) of a family F with respect to a
t-sample ~ has been introduced [Vapnik & Chervonenkis, 1971]. Fixed at-sample
~ = (Xl, ... , Xt),
905
906
Bertoni, Campadelli, Morpurgo, and Panizza
Obviously A.F(Xl, ... ,Xt) ~ 2t; a set {xl, ... ,x,} is said shattered by F iff
A.F(Xl, ... ,Xt)
2t; the maximum t such that there is a set {XI, ... ,Xt} shattered by F is said the Vapnik-Chervonenkis dimension dF of F. The following
result holds [Vapnik & Chervonenkis, 1971].
=
Theorem 2.1 For all distribution probabilities on X I the relative frequencies of the
events converge (in probability) to their corresponding probabilities uniformly over
F iff d F < 00.
We recall that the Vapnik-Chervonenkis dimension is a very useful notion in
the distribution-independent PAC learning model [Blumer et al., 1989]. In the
distribution-dependent framework, where the probability measure P is fixed and
known, let us consider the expectation E[log2 A.F(X)]' called entropy HF(t) of the
family F in samples of size t; obviously HF(t) depends on the probability distribution P. The relevance of this notion is showed by the following result [Vapnik &
Chervonenkis, 1971].
Theorem 2.2 A necessary and sufficient condition for the relative frequencies of
the events in F to converge uniformly over F (in probability) to their corresponding
probabilities is that
3
POLYNOMIAL UNIFORM CONVERGENCE
=
Consider the family {(Xn , Pn , F n}}n>l, where Xn
{O, l}n, Pn is a probability
distribution on Xn and Fn is a family of boolean functions on X n .
Since Xn is finite, the frequencies trivially converge uniformly to the probabilities;
therefore we are interested in studying the problem of convergence with constraints
on the sample size. To be more precise, we introduce the following definition.
Definition 3.1 Given the family {(Xn, Pn , Fn}}n> 1, the relative frequencies of the
events in Fn converge polynomially to their corresponding probabilities uniformly
over Fn iff there exists a polynomial p(n, 1/?, 1/8) such that
\1?,8> 0 \In (t? p(n, 1/?, 1/8) ~ p(t){~ I n~~(~)
In this context
tively.
?
> ?} < 8).
and 8 are the approximation and confidence parameters, respec-
The problem we consider now is to characterize the family {(Xn , Pn , Fn}}n> 1 such
that the relative frequencies of events in Fn converge polynomially to the probabilities. Let us introduce the random variable c~t) : X~t) ~ N, defined as
C~t)(Xl' ... ' Xt) = maxi #A I A ~ {XI, ... , xtl A A is shattered by Fn}.
In this notation it is understood that c~t) refers to Fn. The random variable c~t)
and the index function A.Fn are related to one another; in fact, the following result
can he easily proved.
Polynomial Uniform Convergence
L(~lllUla 3.1 C~t)(~.) < 10g~Fn(~) S; C~)(~) logt.
C(t)
C(t)
t
t
Let M(n, t) = E(_n_) be the expectation of the random variable~. From Lemma
3.1 readily follows that
M(n, t) <
HF (t)
;
S; M(n, t) logt;
therefore M(n, t) is very close to HF,..(t)/t, which can be interpreted as "average
information for example" for samples of size t.
Our main result shows that M(n, t) is a useful measure to verify whether
{(Xn, Pn , Fn) }n>l satisfies the property of polynomial convergence, as shown by
the following theorem.
Theorem 3.1 Given {(Xn, Pn , Fn) }n~ 1, the following conditions are equivalent:
Cl. The relative frequencies of events in Fn converge polynomially to their corre-
sponding probabilities.
C2. There exists f3
> 0 such that M(n, t) = O(n/t!3).
C3. There exists a polynomial1/;(n, l/e) such that
'r/c'r/n (t ~ 1/;(n, l/c)::} M(n,t) < c).
Proof?
? C2 ::} C3 is readily veirfied. In fact, condition C2 says there exist a, f3 > 0
such that M(n,t) S; an/tf3; now, observing that t ~ (an/c)! implies
an/t!3 < e, condition C3 immediately follows .
? C3 ::} C2. As stated by condition C3, there exist a, b, c > 0 such that if
t ~ an bIcc then M(n, t) < c. Solving the first inequality with respect to c
gives, in the worst case, c = (an b/t)~, and substituting for c in the second
inequality yields M(1l,t) ~ (anb/t)~ = a~n~/t~. If ~ < 1 we immediately
obtain M(n,t) ~ a~n~/t~ < a~n/d. Otherwise, if ~ > 1, since M(n,t) < 1,
we have M(n,t) S; min{l,atn~/d} S; min{l,(a~n~/d)~} S; aln/tl.
0
The proof of the equivalence between propositions C1 and C3 will be given in the
next section.
4
PROOF OF THE MAIN THEOREM
First of all, we prove that condition C3 implies condition Cl. The proof is based
on the following lemma, which is obtained by minor modifications of [Vapnik &
Chervonenkis, 1971 (Lemma 2, Theorem 4, and Lemma 4)].
907
908
Benoni, Campadelli, Morpurgo, and Panizza
{(Xn,Pn,Fn)}n~I'
Lemma 4.1 Given the family
\;fe\;fo\;fn (t >
if limt_ex> HF;(t)
= 0 then
1:;;0 => p~t){~ I rr~~(~) > e} < 0),
where to is such that H Fn (to)/to ::; e 2 /64.
As a consequence, we can prove the following.
Theorem 4.1 Given {(Xn,Pn,Fn)}n~}, if there exists apolynomial1/J(n,l/e) such
that
\;fe\;fn (t
~ 1/J(n, l/e) =>
HF;(t) < c),
then the relative frequencies of events in Fn converge polynomially to their probabilities.
Proof (outline). It is sufficient to observe that if we choose to = 1/J(n,64/e 2 ), by
hypothesis it holds that HFn(to)/to < e 2 /64; therefore, from Lemma 4.1, if
132to _ 132./,(
t > e2 0 - e2 0
'f'
64)
n, e2
'
then p~t) {~ I rr~~ (~) > e} < O.
o
An immediate consequence of Theorem 4.1 and of the relation M(n, t) < HF... (t)/t <
M(n, t) logt is that condition C3 implies condition Cl.
We now prove that condition C1 implies condition C3. For the sake of simplicity it
is convenient to introduce the following notations:
a~t)
dt )
=T
Pa(n,e,t)
= p~t){~la~>C~) < e}.
The following lemma, which relates the problem of polynomial uniform convergence
of a family of events to the parameter Pa(n, e, t), will only be stated since it can be
proved by minor modifications of Theorem 4 in [Vapnik & Chervonenkis, 1971].
Lemma 4.2 1ft ~ 16/e 2 then pAt){~lrr~~(x)
> e} > {(1- Pa(n,8e,2t)).
A relevant property of Pa(n, e, t) is given by the following lemma.
Lemma 4.3 \;fa > 1 Pa(n,e/a,at) <
Proof. Let
~l , ... '~a)
P~(n,e,t).
be an at-sample obtained by the concatenation of a elements
~1""'~ E X(t). It is easy to verify that c~at)(~I"" ,~a) ~ maXi=I, ... ,aC~t)(Xi)'
Therefore
PAat){c~at)(~l, ... ,Xa)::; k}::; PAat){c~t)(~I)::; k/l. ???/l.C~t)(~a)
By the independency of the events c~t)(~) < k we obtain
a
p~at){c~at)(Xl'" .,~)
< k} <
II p~t){C~t)(~d::; k}.
i=1
< k}.
Polynomial Uniform Convergence
Recalling that a~)
=C~t) It and substituting k =et, the thesis follows.
o
A relation between Pa(n, e, t) and the parameter M(n, t), which we have introduced
to characterize the polynomial uniform convergence of {(Xn, Pn, Fn)}n~I, is shown
in the following lemma.
Lemma 4.4 For every e (0
< e < 1/4), if M(n, t) > 2.,fi then Pa(n, e, t) < 1/2.
Proof. For the sake of simplicity, let m
6<m =
t
Jo
x dPa =
= M(n, I).
r
Jo
6 2
/
If m
>6>0
t
x dPa
x dPa +
J6/2
, we have
666
< 2Pa (n, 2' I) + 1- Pa(n, 2' I).
Since 0 < 6 < 1, we obtain
6
1- 6
6
Pa (n'2,/) < 1- 6/ 2 ~ 1- 2?
By applying Lemma 4.3 it is proved that, for every a
6
Pa(n, 20" 0'/) ~
For a
=~
~
1,
(6)Q
1- 2
we obtain
62 21
-1
Pa (n'4'"6)<e
1
<2?
=
For e = 62 14 and t
2116, the previous result implies that, if M(n, t.,fi) > 2-j"i,
then Pa(n, e, t) < 1/2.
It is easy to verify that C~Qt)(~I' ... '~Q) <
C~t)(~;) for every a ~ 1. This
implies M(n, at) < M(n, t) for a > 1, hence M(n, t..fi) > M(n, t), from which the
Ef=1
thesis follows.
0
Theorem 4.2 If for the family {(Xn, Pn ,Fn)}n~1 the relative frequencies of
events in Fn converge polynomially to their probabilities, then there exists a polynomial1/;(n, lie) such that
\;f e \;fn (t
~
'ljJ(n, lie) => M(n, t)
~
e).
Proof. By contradiction. Let us suppose that {(Xn' Pn , Fn)}n> 1 polynomially
converges and that for all polynomial functions 1/;(n, lie) there exist e, n, t such
that t ~ 1/;(n, lie) and M(n, t) > e.
Since M(n, t) is a monotone, non-increasing function with respect to t it follows that for every 1/; there exist e, n such that M(n, 1/;(n, lie)) > e. Considering the one-to-one corrispondence T between polynomial functions defined by
T1/;(n, lie) <pen, 4/e 2 ), we can conclude that for any <p there exist e, n such that
M(n, <pen, lie)) > 2.,fi. From Lemma 4.4 it follows that
=
\;f<p3n3e
(Pa(n,e,<p(n,~)) < ~).
(1)
909
910
Bertoni, Campadelli, Morpurgo, and Panizza
Since, by hypothesis, {(Xn, Pn , Fn)}n~l polynomially converges, fixed 6
there exists a polynomial </J such that
\Ie \In \I</J (t
1/20,
~ </J( n, ;) => p~t){.f.1 n~~ (.f.) > c} < 210)
From Lemma 4.2 we know that if t ~ 16/e 2 then
p~t){.f.1 n~~(.f.) > e} ~ ~(1- Pa (n,8e, 2t))
1ft ~ max{16/e 2, </J(n, l/e)} , then H1-Pa(n, 8e, 2t))
<
;0' hence Pa(n, 8e, 2t)
>
~.
Fixed a polynomial p(n, l/e) such that 2p(n, 8/e) ~ max{16/e 2 , </J(n, l/e)}, we can
conclude that
\Ie \In
(Pa(n,e,p(n,~)) > ~).
From assertions (1) and (2) the contradiction
~<~
can easily be derived.
(2)
0
An immediate consequence of Theorem 4.2 is that, in Theorem 3.1, condition C1
implies condition C3. Theorem 3.1 is thus proved.
5
DISTRIBUTION-DEPENDENT PAC LEARNING
In this section we briefly recall the notion of learnability in the distributiondependent PAC model and we discuss some applications of the previous results. Given {(Xn, Pn , Fn)}n~b a labelled t-sample 5, for f E Fn is a sequence
((Xl, f(xt), . . . , (Xt, f(xt}?), where (Xl, . . . , Xt) is a t-sample on X n . We say that
h,/2 E Fn are e-close with respect to Pn iff Pn{xlh(x) f. /2(x)} < e.
A learning algorithm A for {(Xn, P n , Fn )}n>l is an algorithm that, given in input
e,6 > 0, a labelled t-sample 5, with f E Fn , outputs the representation of a function
9 which, with probability 1 - 6, is e-close to f . The family {(Xn, Pn , Fn)}n~l is
said polynomially learnable iff there exists a learning algorithm A working in time
bounded by a polynomial p(n, l/e , 1/6).
Bounds on the sample size necessary to learn at approximation e and confidence
1 - 6 have been given in terms of e-covers [Benedek & Itai, 1988]; classes which
are not learnable in the distribution-free model, but are learnable for some specific
distribution, have been shown (e.g. I-terms DNF [Kucera et al., 1988]) .
The following notion is expressed in terms of relative frequencies.
Definition 5.1 A quasi-consistent algorithm for the family {(Xn, Pn , Fn)}n>l is
an algorithm that, given in input 6, e > 0 and a labelled t-sample 5, with f E F n ,
outputs in time bounded by a polynomial p( n, 1/e, 1/6) the representation of a function 9 E Fn such that
p(t){X
n
-
I vet)
(x) > e} < 6
'$g-
By Theorem 3.1 the following result can easily be derived.
Polynomial Uniform Convergence
=
TheoreIn 5.1 Given {(Xn, Pn , Fn)}n~l' if there exists f3 > 0 such that M(n, t)
O(n/t f3 ) and there exists a quasi-consistent algorithm for {(Xn, Pn , Fn)}n~l then
{(Xn, Pn , Fn) }n~l is polynomially learnable.
6
CONCLUSIONS AND OPEN PROBLEMS
We have characterized the property of polynomial uniform convergence of
{(X n ,Pn ,Fn )}n>l by means of the parameter M(n,t). In particular we proved
that {(Xn,Pn,Fn}}n~l has the property of polynomial convergence iff there exists
f3 > 0 such that M(n, t) = O(n/t f3 ), but no attempt has been made to obtain better
upper and lower bounds on the sample size in terms of M(n, t).
With respect to the relation between polynomial uniform convergence and PAC
learning in the distribution-dependent context, we have shown that if a family
{(Xn, Pn , Fn) }n> 1 satisfies the property of polynomial uniform convergence then it
can be PAC learned with a sample of size bounded by a polynomial function in
n, 1/?, 1/6.
It is an open problem whether the converse implication also holds.
Acknowledgements
This research was supported by CNR, project Sistemi Informatici e Calcolo Parallelo.
References
G. Benedek, A. Itai. (1988) "Learnability by Fixed Distributions". Proc. COLT'88,
80-90.
A. Blumer, A. Ehrenfeucht, D. Haussler, K. Warmuth. (1989) "Learnability and
the Vapnik-Chervonenkis Dimension". J. ACM 36, 929-965.
L. Kucera, A. Marchetti-Spaccamela, M. Protasi. (1988) "On the Learnability of
DNF Formulae". Proc. XV Coli. on Automata, Languages, and Programming,
L.N .C.S. 317, Springer Verlag.
L.G. Valiant. (1984) "A Theory of the Learnable". Communications of the ACM
27 (11), 1134-1142.
V.N. Vapnik, A.Ya. Chervonenkis. (1971) "On the uniform convergence of relative
frequencies of events to their probabilities". Theory of Prob. and its Appl. 16 (2),
265-280.
911
| 583 |@word concept:3 briefly:1 implies:7 polynomial:29 verify:3 hence:2 universita:1 open:2 correct:1 fa:1 ehrenfeucht:1 milano:3 ll:1 said:5 bianco:1 sandra:1 concatenation:1 preliminary:2 chervonenkis:12 proposition:1 outline:1 elementary:1 dipartimento:1 studi:1 hold:3 index:2 ef:1 fi:4 jef:1 readily:2 fn:46 fe:2 substituting:2 cen:1 tively:1 stated:2 marchetti:1 estimation:1 proc:2 discussed:2 he:1 combinatorial:1 warmuth:1 upper:1 xk:1 finite:1 trivially:1 pat:1 immediate:2 provides:1 communication:1 precise:1 language:1 anb:1 dell:1 pn:26 arbitrary:1 c2:4 introduced:2 xtl:2 prove:4 derived:2 showed:1 italy:2 c3:10 comelico:1 introduce:4 verlag:1 learned:1 inequality:2 arbitrarily:1 xe:1 dependent:8 shattered:3 relation:3 converge:9 considering:1 increasing:1 quasi:2 project:1 interested:1 notation:3 bounded:3 ii:4 relates:1 colt:1 max:2 event:19 limt_oo:1 interpreted:1 aln:1 characterized:1 alberto:1 f3:7 calcolo:1 dei:1 every:5 expectation:3 df:1 hfn:1 tral:1 sponding:1 ljj:1 converse:1 acknowledgement:1 c1:3 positive:2 t1:1 understood:1 relative:19 xv:1 dpa:3 consequence:3 benedek:2 recalling:1 attempt:1 probably:1 approximately:1 induced:1 cnr:2 distributionfree:1 equivalence:1 xct:3 sufficient:3 appl:1 consistent:2 exceed:2 lrr:1 easy:2 implication:1 supported:1 free:2 necessary:3 istituto:1 tf3:1 polynomial1:2 whether:2 theoretical:1 convenient:1 dimension:5 confidence:3 xn:27 refers:1 boolean:2 collection:1 made:1 assertion:1 cover:1 close:3 polynomially:9 adequate:1 context:4 applying:1 useful:2 equivalent:1 uniform:20 learnability:5 automaton:1 characterize:2 conclude:2 simplicity:2 immediately:2 exist:5 xi:4 degli:1 contradiction:2 vet:1 haussler:1 pen:2 sistemi:1 ie:2 per:1 learn:2 notion:7 itai:2 obtaining:1 suppose:1 jo:2 thesis:2 independency:1 programming:1 cl:3 choose:1 hypothesis:2 vj:1 xlh:1 pa:18 element:1 pj:2 anna:1 main:4 coli:1 monotone:1 kucera:2 ft:2 prob:1 distributiondependent:1 worst:1 tl:1 family:16 depends:1 h1:1 xl:12 mario:1 sup:1 observing:1 complexity:1 hf:7 bound:2 pct:3 corre:1 lie:7 theorem:14 formula:1 solving:1 xt:16 specific:1 panizza:4 precisely:1 constraint:1 pac:8 maxi:2 x2:1 yield:1 basis:1 learnable:5 sake:2 easily:3 exists:11 vapnik:12 min:2 valiant:3 produced:1 dnf:2 j6:1 request:1 fo:1 entropy:1 logt:3 definition:4 say:2 frequency:21 otherwise:1 modification:2 scienze:1 e2:3 minor:2 proof:8 di:3 expressed:1 springer:1 proved:5 satisfies:2 obviously:2 acm:2 sequence:2 rr:2 recall:2 vjt:2 discus:1 blumer:3 know:1 labelled:3 relevant:1 studying:1 dt:1 respec:1 iff:7 uniformly:5 spaccamela:1 observe:1 lemma:15 evaluated:1 called:1 xa:1 la:1 ya:1 convergence:23 working:1 denotes:1 converges:2 paat:2 bertoni:3 atn:1 n_:1 ac:1 log2:1 relevance:1 evaluate:1 qt:1 |
5,336 | 5,830 | A Convergent Gradient Descent Algorithm for
Rank Minimization and Semidefinite Programming
from Random Linear Measurements
John Lafferty
University of Chicago
[email protected]
Qinqing Zheng
University of Chicago
[email protected]
Abstract
We propose a simple, scalable, and fast gradient descent algorithm to optimize
a nonconvex objective for the rank minimization problem and a closely related
family of semidefinite programs. With O(r3 ?2 n log n) random measurements of
a positive semidefinite n?n matrix of rank r and condition number ?, our method
is guaranteed to converge linearly to the global optimum.
1
Introduction
Semidefinite programming has become a key optimization tool in many areas of applied mathematics, signal processing and machine learning. SDPs often arise naturally from the problem structure,
or are derived as surrogate optimizations that are relaxations of difficult combinatorial problems
[7, 1, 8]. In spite of the importance of SDPs in principle?promising efficient algorithms with polynomial runtime guarantees?it is widely recognized that current optimization algorithms based on
interior point methods can handle only relatively small problems. Thus, a considerable gap exists
between the theory and applicability of SDP formulations. Scalable algorithms for semidefinite programming, and closely related families of nonconvex programs more generally, are greatly needed.
A parallel development is the surprising effectiveness of simple classical procedures such as gradient
descent for large scale problems, as explored in the recent machine learning literature. In many areas
of machine learning and signal processing such as classification, deep learning, and phase retrieval,
gradient descent methods, in particular first order stochastic optimization, have led to remarkably
efficient algorithms that can attack very large scale problems [3, 2, 10, 6]. In this paper we build on
this work to develop first-order algorithms for solving the rank minimization problem under random
measurements and a closely related family of semidefinite programs. Our algorithms are efficient
and scalable, and we prove that they attain linear convergence to the global optimum under natural
assumptions.
The affine rank minimization problem is to find a matrix X ? ? Rn?p of minimum rank satisfying
constraints A(X ? ) = b, where A : Rn?p ?? Rm is an affine transformation. The underdetermined
case where m np is of particular interest, and can be formulated as the optimization
min
rank(X)
X?Rn?p
(1)
subject to A(X) = b.
This problem is a direct generalization of compressed sensing, and subsumes many machine learning problems such as image compression, low rank matrix completion and low-dimensional metric
embedding [18, 12]. While the problem is natural and has many applications, the optimization is
nonconvex and challenging to solve. Without conditions on the transformation A or the minimum
rank solution X ? , it is generally NP hard [15].
1
Existing methods, such as nuclear norm relaxation [18], singular value projection (SVP) [11], and
alternating least squares (AltMinSense) [12], assume that a certain restricted isometry property
(RIP) holds for A. In the random measurement setting, this essentially means that at least O(r(n +
p) log(n + p)) measurements are available, where r = rank(X ? ) [18]. In this work, we assume that
(i) X ? is positive semidefinite and (ii) A : Rn?n ?? Rm is defined as A(X)i = tr(Ai X), where
each Ai is a random n ? n symmetric matrix from the Gaussian Orthogonal Ensemble (GOE), with
(Ai )jj ? N (0, 2) and (Ai )jk ? N (0, 1) for j 6= k. Our goal is thus to solve the optimization
min
rank(X)
subject to
tr(Ai X) = bi , i = 1, . . . , m.
X0
(2)
In addition to the wide applicability of affine rank minimization, the problem is also closely connected to a class of semidefinite programs. In Section 2, we show that the minimizer of a particular
class of SDP can be obtained by a linear transformation of X ? . Thus, efficient algorithms for problem (2) can be applied in this setting as well.
Noting that a rank-r solution X ? to (2) can be decomposed as X ? = Z ? Z ? > where Z ? ? Rn?r ,
our approach is based on minimizing the squared residual
f (Z) =
m
X
2
1
A(ZZ > ) ? b
2 = 1
tr(Z > Ai Z) ? bi .
4m
4m i=1
While this is a nonconvex function, we take motivation from recent work for phase retrieval by
Cand`es et al. [6], and develop a gradient descent algorithm for optimizing f (Z), using a carefully
constructed initialization and step size. Our main contributions concerning this algorithm are as
follows.
? We prove that with O(r3 n log n) constraints our gradient descent scheme can exactly recover X ? with high probability. Empirical experiments show that this bound may potentially be improved to O(rn log n).
? We show that our method converges linearly, and has lower computational cost compared
with previous methods.
? We carry out a detailed comparison of rank minimization algorithms, and demonstrate that
when the measurement matrices Ai are sparse, our gradient method significantly outperforms alternative approaches.
In Section 3 we briefly review related work. In Section 4 we discuss the gradient scheme in detail.
Our main analytical results are presented in Section 5, with detailed proofs contained in the supplementary material. Our experimental results are presented in Section 6, and we conclude with a brief
discussion of future work in Section 7.
2
Semidefinite Programming and Rank Minimization
Before reviewing related work and presenting our algorithm, we pause to explain the connection
between semidefinite programming and rank minimization. This connection enables our scalable
gradient descent algorithm to be applied and analyzed for certain classes of SDPs.
Consider a standard form semidefinite program
min
e X)
e
tr(C
e
X0
subject to
(3)
ei X)
e = bi , i = 1, . . . , m
tr(A
e A
e1 , . . . , A
em ? Sn . If C
e is positive definite, then we can write C
e = LL> where L ? Rn?n
where C,
is invertible. It follows that the minimum of problem (3) is the same as
min
tr(X)
subject to
tr(Ai X) = bi , i = 1, . . . , m
X0
(4)
2
ei L?1 > . In particular, minimizers X
e ? of (3) are obtained from minimizers X ? of
where Ai = L?1 A
(4) via the transformation
e ? = L?1 > X ? L?1 .
X
Since X is positive semidefinite, tr(X) is equal to kXk? . Hence, problem (4) is the nuclear norm
relaxation of problem (2). Next, we characterize the specific cases where X ? = X ? , so that the SDP
and rank minimization solutions coincide. The following result is from Recht et al. [18].
Theorem 1. Let A : Rn?n ?? Rm be a linear map. For every integer k with 1 ? k ? n, define
the k-restricted isometry constant to be the smallest value ?k such that
(1 ? ?k ) kXkF ? kA(X)k ? (1 + ?k ) kXkF
holds for any matrix X of rank at most k. Suppose that there exists a rank r matrix X ? such that
A(X ? ) = b. If ?2r < 1, then X ? is the only matrix of rank at most r satisfying A(X) = b.
Furthermore, if ?5r < 1/10, then X ? can be attained by minimizing kXk? over the affine subset.
In other words, since ?2r ? ?5r , if ?5r < 1/10 holds for the transformation A and one finds a matrix
X of rank r satisfying the affine constraint, then X must be positive semidefinite. Hence, one can
ignore the semidefinite constraint X 0 when solving the rank minimization (2). The resulting
problem then can be exactly solved by nuclear norm relaxation. Since the minimum rank solution
is positive semidefinite, it then coincides with the solution of the SDP (4), which is a constrained
nuclear norm optimization.
The observation that one can ignore the semidefinite constraint justifies our experimental comparison
with methods such as nuclear norm relaxation, SVP, and AltMinSense, described in the following
section.
3
Related Work
Burer and Monteiro [4] proposed a general approach for solving semidefinite programs using factored, nonconvex optimization, giving mostly experimental support for the convergence of the algorithms. The first nontrivial guarantee for solving affine rank minimization problem is given by
Recht et al. [18], based on replacing the rank function by the convex surrogate nuclear norm, as
already mentioned in the previous section. While this is a convex problem, solving it in practice is
nontrivial, and a variety of methods have been developed for efficient nuclear norm minimization.
The most popular algorithms are proximal methods that perform singular value thresholding [5] at
every iteration. While effective for small problem instances, the computational expense of the SVD
prevents the method from being useful for large scale problems.
Recently, Jain et al. [11] proposed a projected gradient descent algorithm SVP (Singular Value
Projection) that solves
2
min
kA(X) ? bk
n?p
X?R
subject to
rank(X) ? r,
where k?k is the `2 vector norm and r is the input rank. In the (t+1)th iteration, SVP updates X t+1 as
the best rank r approximation to the gradient update X t ? ?A> (A(X t ) ? b), which is constructed
from the SVD. If rank(X ? ) = r, then SVP can recover X ? under a similar RIP condition as the
nuclear norm heuristic, and enjoys a linear numerical rate of convergence. Yet SVP suffers from the
expensive per-iteration SVD for large problem instances.
Subsequent work of Jain et al. [12] proposes an alternating least squares algorithm AltMinSense
that avoids the per-iteration SVD. AltMinSense factorizes X into two factors U ? Rn?r , V ?
2
Rp?r such that X = U V > and minimizes the squared residual
A(U V > ) ? b
by updating U and
V alternately. Each update is a least squares problem. The authors show that the iterates obtained
by AltMinSense converge to X ? linearly under a RIP condition. However, the least squares
problems are often ill-conditioned, it is difficult to observe AltMinSense converging to X ? in
practice.
As described above, considerable progress has been made on algorithms for rank minimization and
certain semidefinite programming problems. Yet truly efficient, scalable and provably convergent
3
algorithms have not yet been obtained. In the specific setting that X ? is positive semidefinite, our
algorithm exploits this structure to achieve these goals. We note that recent and independent work of
Tu et al. [21] proposes a hybrid algorithm called Procrustes Flow (PF), which uses a few iterations
of SVP as initialization, and then applies gradient descent.
4
A Gradient Descent Algorithm for Rank Minimization
Our method is described in Algorithm 1. It is parallel to the Wirtinger Flow (WF) algorithm for
phase retrieval [6], to recover a complex vector x ? Cn given the squared magnitudes of its linear
measurements bi = |hai , xi|2 , i ? [m], where a1 , . . . , am ? Cn . Cand`es et al. [6] propose a
first-order method to minimize the sum of squared residuals
n
X
2
fWF (z) =
|hai , zi|2 ? bi .
(5)
i=1
The authors establish the convergence of WF to the global optimum?given sufficient measurements,
the iterates of WF converge linearly to x up to a global phase, with high probability.
If z and the ai s are real-valued, the function fWF (z) can be expressed as
n
X
2
>
>
fWF (z) =
z > ai a>
,
i z ? x ai ai x
i=1
?
which is a special case of f (Z) where Ai = ai a>
i and each of Z and X are rank one. See Figure
1a for an illustration; Figure 1b shows the convergence rate of our method. Our methods and results
are thus generalizations of Wirtinger flow for phase retrieval.
Before turning to the presentation of our technical results in the following section, we present some
intuition and remarks about how and why this algorithm works. For simplicity, let us assume that
the rank is specified correctly.
Initialization is of course crucial in nonconvex optimization, as many local minima may be present.
To obtain a sufficiently accurate initialization, we use a spectral method, similar to those used in
[17, 6]. The starting point is the observation that a linear combination of the constraint values and
matrices yields an unbiased estimate of the solution.
Pm
1
1
?
Lemma 1. Let M = m
i=1 bi Ai . Then 2 E(M ) = X , where the expectation is with respect to
the randomness in the measurement matrices Ai .
Based on this fact, let X ? = U ? ?U ? > be the eigenvalue decomposition of X ? , where U ? =
[u?1 , . . . , u?r ] and ? = diag(?1 , . . . , ?r ) such that ?1 ? . . . ? ?r are the nonzero eigenvalues of
1
of E(M ) associated with
X ? . Let Z ? = U ? ? 2 . Clearly, u?s = zs? / kzs? k is the top sth eigenvector
q
eigenvalue 2 kzs? k . Therefore, we initialize according to zs0 = |?2s | vs where (vs , ?s ) is the top
sth eigenpair of M . For sufficiently large m, it is reasonable to expect that Z 0 is close to Z ? ; this is
confirmed by concentration of measure arguments.
2
Certain key properties of f (Z) will be seen to yield a linear rate of convergence. In the analysis
of convex functions, Nesterov [16] shows that for unconstrained optimization, the gradient descent
scheme with sufficiently small step size will converge linearly to the optimum if the objective function is strongly convex and has a Lipschitz continuous gradient. However, these two properties are
global and do not hold for our objective function f (Z). Nevertheless, we expect that similar conditions hold for the local area near Z ? . If so, then if we start close enough to Z ? , we can achieve the
global optimum.
In our subsequent analysis, we establish the convergence of Algorithm 1 with a constant step
size
of
2
the form ?/ kZ ? kF , where ? is a small constant. Since kZ ? kF is unknown, we replace it by
Z 0
F .
5
Convergence Analysis
In this section we present our main result analyzing the gradient descent algorithm, and give a
sketch of the proof. To begin, note that the symmetric decomposition of X ? is not unique, since
4
100
103
dist(Z,Z ? )
kZ ? kF
f (Z)
102
101
100
10-1
-2
2
2
-2
10-10
10-15
0
0
Z1
10-5
0
200
Z2
400
600
800
iteration
(a)
(b)
Figure 1: (a) An instance of f (Z) where X ? ? R2?2 is rank-1 and Z ? R2 . The underlying truth
is Z ? = [1, 1]> . Both Z ? and ?Z ? are minimizers. (b) Linear convergence of the gradient scheme,
for n = 200, m = 1000 and r = 2. The distance metric is given in Definition 1.
Algorithm 1: Gradient descent for rank minimization
input: {Ai , bi }m
i=1 , r, ?
initialization
Set (v1 , ?1 ), . . . , (vr , ?r ) to the top r eigenpairs of
q
Z 0 = [z10 , . . . , zr0 ] where zs0 = |?2s | ? vs , s ? [r]
k?0
repeat
m
P
>
1
?f (Z k ) = m
tr(Z k Ai Z k ) ? bi Ai Z k
1
m
Pm
i=1 bi Ai
s.t. |?1 | ? ? ? ? ? |?r |
i=1
Z
k+1
k
?
?f (Z k )
|?
|/2
s
s=1
= Z ? Pr
k ?k+1
until convergence;
b = ZkZk>
output: X
X ? = (Z ? U )(Z ? U )> for any r ? r orthonormal matrix U . Thus, the solution set is
o
n
e = Z ? U for some U with U U > = U > U = I .
S = Ze ? Rn?r | Z
e 2 = kX ? k for any Ze ? S. We define the distance to the optimal solution in terms of
Note that kZk
F
?
this set.
Definition 1. Define the distance between Z and Z ? as
d(Z, Z ? ) =
min
kZ ? Z ? U k = min
Z ? Ze
.
F
U U > =U > U =I
e
Z?S
F
Our main result for exact recovery is stated below, assuming that the rank is correctly specified.
Since the true rank is typically unknown in practice, one can start from a very low rank and gradually
increase it.
Theorem 2. Let the condition number ? = ?1 /?r denote the ratio of the largest to the smallest
nonzero eigenvalues of X ? . There exists a universal constant c0 such that if m ? c0 ?2 r3 n log n,
with high probability the initialization Z 0 satisfies
q
3
?r .
(6)
d(Z 0 , Z ? ) ? 16
2
Moreover, there exists a universal constant c1 such that when using constant step size ?/ kZ ? kF
c1
with ? ?
and initial value Z 0 obeying (6), the kth step of Algorithm 1 satisfies
?n
q
? k/2
3
d(Z k , Z ? ) ? 16
?r 1 ?
12?r
with high probability.
5
We now outline the proof, giving full details in the supplementary material. The proof has four main
steps. The first step is to give a regularity condition under which the algorithm converges linearly if
we start close enough to Z ? . This provides a local regularity property that is similar to the Nesterov
[16] criteria that the objective function is strongly convex and has a Lipschitz continuous gradient.
e
denote the matrix closest to Z in the solution set.
Definition 2. Let Z = arg min e
Z ? Z
Z?S
F
We say that f satisfies the regularity condition RC(?, ?, ?) if there exist constants ?, ? such that
for any Z satisfying d(Z, Z ? ) ? ?, we have
2
1
1
2
h?f (Z), Z ? Zi ? ?r
Z ? Z
F +
2 k?f (Z)kF .
?
? kZ ? kF
Using this regularity condition, we show that the iterative step of the algorithm moves closer to the
optimum, if the current iterate is sufficiently close.
?
k
Theorem 3. Consider the update Z k+1 = Z k ?
2 ?f (Z ). If f satisfies RC(?, ?, ?),
kZ ? kF
d(Z k , Z ? ) ? ?, and 0 < ? < min(?/2, 2/?), then
r
2?
k+1
?
d(Z
,Z ) ? 1 ?
d(Z k , Z ? ).
??r
In the next step of the proof, we condition on two events that will be shown to hold with high
probability using concentration results. Let ? denote a small value to be specified later.
m
1 P
?
n
>
>
A1 For any u ? R such that kuk ? ?1 ,
m
(u Ai u)Ai ? 2uu
? r? .
i=1
"
#
? 2 f (Z)
2
e
e
? f (Z)
?
e ? S,
?E
A2 For any Z
? , for all s, k ? [r].
?e
zs ?e
zk>
?e
zs ?e
zk>
r
Here the expectations are with respect to the random measurement matrices. Under these assumptions, we can show that the objective satisfies the regularity condition with high probability.
1
Theorem 4. Suppose that A1 and A2 hold. If ? ? 16
?r , then f satisfies the regularity condition
q
3
RC( 16 ?r , 24, 513?n) with probability at least 1 ? mCe??n , where C, ? are universal constants.
Next we show that under A1, a good initialization can be found.
m
P
r
1
Theorem 5. Suppose that A1 holds. Let {vs , ?s }s=1 be the top r eigenpairs of M = m
bi Ai
i=1
q
such that |?1 | ? ? ? ? ? |?r |. Let Z 0 = [z1 , . . . , zr ] where zs = |?2s | ? vs , s ? [r]. If ? ? 4??rr , then
p
d(Z 0 , Z ? ) ? 3?r /16.
Finally, we show that conditioning on A1 and A2 is valid since these events have high probability
as long as m is sufficiently large.
42
n
Theorem 6. If the number of samples m ?
2 , ?/r? ) n log n, then for any u ? R
2
2
min(?
/r
?
1
1
?
satisfying kuk ? ?1 ,
m
1 X
>
>
(u Ai u)Ai ? 2uu
? r?
m
i=1
holds with probability at least 1 ? mCe??n ?
2
n2 ,
where C and ? are universal constants.
128
e?S
Theorem 7. For any x ? Rn , if m ?
n log n, then for any Z
min(? 2 /4r2 ?12 , ?/2r?1 )
"
#
? 2 f (Z)
e
e
? 2 f (Z)
?
?E
? , for all s, k ? [r],
?e
zs ?e
zk>
?e
zs ?e
zk>
r
with probability at least 1 ? 6me?n ?
4
n2 .
6
Note that since we need ? ? min
1
1
?
16 , 4 r
?r , we have
?
r?1
? 1, and the number of measure-
3 2
ments required by our algorithm scales as O(r ? n log n), while only O(r2 ?2 n log n) samples are
required by the regularity condition. We conjecture this bound could be further improved to be
O(rn log n); this is supported by the experimental results presented below.
Recently, Tu et al. [21] establish a tighter O(r2 ?2 n) bound overall. Specifically, when only one SVP
step is used in preprocessing, the initialization of PF is also the spectral decomposition of 21 M . The
?
authors show that O(r2 ?2 n) measurements are sufficient for Z 0 to satisfy d(Z 0 , Z ? ) ? O( ?r )
with high probability, and demonstrate an O(rn) sample complexity for the regularity condition.
6
Experiments
In this section we report the results of experiments on synthetic datasets. We compare our gradient
descent algorithm with nuclear norm relaxation, SVP and AltMinSense for which we drop the
positive semidefiniteness constraint, as justified by the observation in Section 2. We use ADMM
for the nuclear norm minimization, based on the algorithm for the mixture approach in Tomioka
et al. [19]; see Appendix G. For simplicity, we assume that AltMinSense, SVP and the gradient
scheme know the true rank. Krylov subspace techniques such as the Lanczos method could be used
compute the partial eigendecomposition; we use the randomized algorithm of Halko et al. [9] to
compute the low rank SVD. All methods are implemented in MATLAB and the experiments were
run on a MacBook Pro with a 2.5GHz Intel Core i7 processor and 16 GB memory.
6.1
Computational Complexity
It is instructive to compare the per-iteration cost of the different approaches; see Table 1. Suppose
that the density (fraction of nonzero entries) of each Ai is ?. For AltMinSense, the cost of solving
the least squares problem is O(mn2 r2 + n3 r3 + mn2 r?). The other three methods have O(mn2 ?)
cost to compute the affine transformation. For the nuclear norm approach, the O(n3 ) cost is from
the SVD and the O(m2 ) cost is due to the update of the dual variables. The gradient scheme requires
>
2n2 r operations to compute Z k Z k and to multiply Z k by n ? n matrix to obtain the gradient. SVP
2
needs O(n r) operations to compute the top r singular vectors. However, in practice this partial
SVD is more expensive than the 2n2 r cost required for the matrix multiplies in the gradient scheme.
Method
Complexity
nuclear norm minimization via ADMM
gradient descent
SVP
AltMinSense
O(mn2 ? + m2 + n3 )
O(mn2 ?) + 2n2 r
O(mn2 ? + n2 r)
O(mn2 r2 + n3 r3 + mn2 r?)
Table 1: Per-iteration computational complexities of different methods.
Clearly, AltMinSense is the least efficient. For the other approaches, in the dense case (? large),
the affine transformation dominates the computation. Our method removes the overhead caused by
the SVD. In the sparse case (? small), the other parts dominate and our method enjoys a low cost.
6.2
Runtime Comparison
We conduct experiments for both dense and sparse measurement matrices. AltMinSense is indeed slow, so we do not include it here.
In the first scenario, we randomly generate a 400?400 rank-2 matrix X ? = xx> +yy > where x, y ?
N (0, I). We also generate m = 6n matrices A1 , . . . , Am from the GOE, and then take b = A(X ? ).
b ? X ? kF /kX ? kF . For
We report the relative error measured in the Frobenius norm defined as kX
the nuclear norm approach, we set the regularization parameter to ? = 10?5 . We test three values
? = 10, 100, 200 for the penalty parameter and select ? = 100 as it leads to the fastest convergence.
Similarly, for SVP we evaluate the three values 5 ? 10?5 , 10?4 , 2 ? 10?4 for the step size, and select
10?4 as the largest for which SVP converges. For our approach, we test the three values 0.6, 0.8, 1.0
for ? and select 0.8 in the same way.
7
100
100
10-2
1
probability of successful recovery
10
10-4
b?X ? kF
kX
kX ? kF
b?X ? kF
kX
kX ? kF
10-4
10-6
10-8
10-14
10-6
10
10-10
10-12
rank=1 n=60
0.9
-2
10-10
nuclear norm
SVP
gradient descent
10
1
-8
10
2
time (seconds)
10
3
10-12
10
0
10
1
time (seconds)
(a)
(b)
10
2
gradient
SVP
nuclear
0.8
rank=2 n=60
0.7
gradient
SVP
nuclear
0.6
0.5
rank=1 n=100
0.4
0.3
gradient
SVP
nuclear
0.2
rank=2 n=100
0.1
gradient
SVP
nuclear
0
1
2
3
4
5
m/n
(c)
Figure 2: (a) Runtime comparison where X ? ? R400?400 is rank-2 and Ai s are dense. (b) Runtime
comparison where X ? ? R600?600 is rank-2 and Ai s are sparse. (c) Sample complexity comparison.
In the second scenario, we use a more general and practical setting. We randomly generate a rank-2
matrix X ? ? R600?600 as before. We generate m = 7n sparse Ai s whose entries are i.i.d. Bernoulli:
1 with probability ?,
(Ai )jk =
0 with probability 1 ? ?,
where we use ? = 0.001. For all the methods we use the same strategies as before to select parameters. For the nuclear norm approach, we try three values ? = 10, 100, 200 and select ? = 100. For
SVP, we test the three values 5 ? 10?3 , 2 ? 10?3 , 10?3 for the step size and select 10?3 . For the
gradient algorithm, we check the three values 0.8, 1, 1.5 for ? and choose 1.
The results are shown in Figures 2a and 2b. In the dense case, our method is faster than the nuclear
norm approach and slightly outperforms SVP. In the sparse case, it is significantly faster than the
other approaches.
6.3
Sample Complexity
We also evaluate the number of measurements required by each method to exactly recover X ? ,
which we refer to as the sample complexity. We randomly generate the true matrix X ? ? Rn?n and
compute the solutions of each method given m measurements, where the Ai s are randomly drawn
from the GOE. A solution with relative error below 10?5 is considered to be successful. We run 40
trials and compute the empirical probability of successful recovery.
We consider cases where n = 60 or 100 and X ? is of rank one or two. The results are shown in
Figure 2c. For SVP and our approach, the phase transitions happen around m = 1.5n when X ? is
rank-1 and m = 2.5n when X ? is rank-2. This scaling is close to the number of degrees of freedom
in each case; this confirms that the sample complexity scales linearly with the rank r. The phase
transition for the nuclear norm approach occurs later. The results suggest that the sample complexity
of our method should also scale as O(rn log n) as for SVP and the nuclear norm approach [11, 18].
7
Conclusion
We connect a special case of affine rank minimization to a class of semidefinite programs with
random constraints. Building on a recently proposed first-order algorithm for phase retrieval [6],
we develop a gradient descent procedure for rank minimization and establish convergence to the
optimal solution with O(r3 n log n) measurements. We conjecture that O(rn log n) measurements
are sufficient for the method to converge, and that the conditions on the sampling matrices Ai can be
significantly weakened. More broadly, the technique used in this paper?factoring the semidefinite
matrix variable, recasting the convex optimization as a nonconvex optimization, and applying firstorder algorithms?first proposed by Burer and Monteiro [4], may be effective for a much wider class
of SDPs, and deserves further study.
Acknowledgements
Research supported in part by NSF grant IIS-1116730 and ONR grant N00014-12-1-0762.
8
References
[1] Arash A. Amini and Martin J. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5):2877?2921, 2009.
[2] Francis Bach. Adaptivity of averaged stochastic gradient descent to local strong convexity for
logistic regression. The Journal of Machine Learning Research, 15(1):595?627, 2014.
[3] Francis Bach and Eric Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Advances in Neural Information Processing Systems (NIPS),
2011.
[4] Samuel Burer and Renato DC Monteiro. A nonlinear programming algorithm for solving
semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329?
357, 2003.
[5] Jian-Feng Cai, Emmanuel J Cand`es, and Zuowei Shen. A singular value thresholding algorithm
for matrix completion. SIAM Journal on Optimization, 20(4):1956?1982, 2010.
[6] Emmanuel Cand`es, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via wirtinger flow:
Theory and algorithms. arXiv preprint arXiv:1407.1065, 2014.
[7] A. d?Aspremont, L. El Ghaoui, M. I. Jordan, and G. Lanckriet. A direct formulation for
sparse PCA using semidefinite programming. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS), 2004.
[8] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42
(6):1115?1145, November 1995. ISSN 0004-5411.
[9] Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness:
Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review,
53(2):217?288, 2011.
[10] Matt Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference.
The Journal of Machine Learning Research, 14, 2013.
[11] Prateek Jain, Raghu Meka, and Inderjit S Dhillon. Guaranteed rank minimization via singular
value projection. In Advances in Neural Information Processing Systems, pages 937?945,
2010.
[12] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using
alternating minimization. In Proceedings of the forty-fifth annual ACM symposium on Theory
of computing, pages 665?674. ACM, 2013.
[13] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model
selection. Annals of Statistics, pages 1302?1338, 2000.
[14] Michel Ledoux and Brian Rider. Small deviations for beta ensembles. Electron. J. Probab.,
15:no. 41, 1319?1343, 2010. ISSN 1083-6489. doi: 10.1214/EJP.v15-798. URL http:
//ejp.ejpecp.org/article/view/798.
[15] Raghu Meka, Prateek Jain, Constantine Caramanis, and Inderjit S Dhillon. Rank minimization
via online learning. In Proceedings of the 25th International Conference on Machine learning,
pages 656?663. ACM, 2008.
[16] Yurii Nesterov. Introductory lectures on convex optimization, volume 87. Springer Science &
Business Media, 2004.
[17] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796?2804, 2013.
[18] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of
linear matrix equations via nuclear norm minimization. SIAM review, 52(3):471?501, 2010.
[19] Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via
convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[20] Joel A Tropp. An introduction to matrix concentration inequalities. arXiv preprint
arXiv:1501.01571, 2015.
[21] Stephen Tu, Ross Boczar, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank solutions of
linear matrix equations via procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
9
| 5830 |@word trial:1 briefly:1 compression:1 polynomial:1 norm:21 c0:2 confirms:1 decomposition:4 tr:9 carry:1 initial:1 outperforms:2 existing:1 current:2 ka:2 z2:1 surprising:1 yet:3 must:1 john:2 chicago:2 numerical:1 subsequent:2 happen:1 recasting:1 enables:1 remove:1 drop:1 update:5 v:5 core:1 blei:1 iterates:2 provides:1 attack:1 org:1 rc:3 mathematical:1 constructed:2 direct:2 become:1 symposium:1 beta:1 prove:2 overhead:1 introductory:1 indeed:1 cand:4 dist:1 sdp:4 moulines:1 decomposed:1 pf:2 begin:1 xx:1 underlying:1 moreover:1 medium:1 prateek:4 minimizes:1 eigenvector:1 z:6 developed:1 finding:1 transformation:7 guarantee:2 every:2 firstorder:1 runtime:4 exactly:3 rm:3 grant:2 eigenpairs:2 positive:8 before:4 local:4 analyzing:1 laurent:1 initialization:8 weakened:1 challenging:1 fastest:1 factorization:1 bi:11 averaged:1 fazel:1 unique:1 practical:1 practice:4 definite:1 procedure:2 area:3 empirical:2 universal:4 kohei:1 attain:1 significantly:3 projection:3 word:1 spite:1 suggest:1 interior:1 close:5 selection:1 applying:1 optimize:1 map:1 starting:1 convex:8 shen:1 simplicity:2 recovery:3 factored:1 m2:2 nuclear:23 orthonormal:1 dominate:1 embedding:1 handle:1 annals:2 suppose:4 rip:3 zr0:1 programming:10 exact:1 us:1 boczar:1 lanckriet:1 satisfying:5 jk:2 expensive:2 updating:1 ze:3 cut:1 preprint:4 solved:1 wang:1 schoelkopf:1 connected:1 mentioned:1 intuition:1 benjamin:2 convexity:1 complexity:9 nesterov:3 reviewing:1 solving:7 eric:1 caramanis:1 jain:6 fast:1 effective:2 mn2:8 doi:1 whose:1 heuristic:1 widely:1 solve:2 supplementary:2 valued:1 say:1 compressed:1 statistic:2 online:1 eigenvalue:4 rr:1 analytical:1 cai:1 ledoux:1 propose:2 maryam:1 tu:3 achieve:2 frobenius:1 convergence:12 regularity:8 optimum:6 converges:3 wider:1 develop:3 completion:3 measured:1 progress:1 strong:1 netrapalli:2 implemented:1 c:1 solves:1 uu:2 closely:4 stochastic:4 arash:1 material:2 beatrice:1 generalization:2 tighter:1 brian:1 underdetermined:1 hold:9 sufficiently:5 considered:1 around:1 electron:1 smallest:2 a2:3 estimation:2 combinatorial:1 ross:1 largest:2 tool:1 hoffman:1 minimization:24 clearly:2 gaussian:1 factorizes:1 derived:1 rank:61 bernoulli:1 check:1 greatly:1 wf:3 am:2 inference:1 minimizers:3 factoring:1 el:1 typically:1 provably:1 monteiro:3 arg:1 classification:1 ill:1 overall:1 pascal:1 dual:1 multiplies:1 development:1 proposes:2 constrained:1 special:2 initialize:1 equal:1 sampling:1 zz:1 qinqing:2 future:1 np:2 report:2 sanghavi:2 few:1 randomly:4 phase:10 freedom:1 interest:1 zheng:1 multiply:1 joel:2 chong:1 analyzed:1 truly:1 mixture:1 semidefinite:25 accurate:1 closer:1 partial:2 orthogonal:1 conduct:1 instance:3 v15:1 kxkf:2 lanczos:1 deserves:1 applicability:2 cost:8 deviation:1 subset:1 entry:2 successful:3 characterize:1 connect:1 proximal:1 synthetic:1 recht:4 density:1 international:1 randomized:1 siam:3 probabilistic:1 invertible:1 squared:4 choose:1 michel:2 li:1 parrilo:1 semidefiniteness:1 hisashi:1 subsumes:1 satisfy:1 caused:1 later:2 try:1 view:1 francis:2 start:3 recover:4 parallel:2 contribution:1 minimize:1 square:5 ensemble:2 yield:2 sdps:4 confirmed:1 randomness:2 processor:1 explain:1 suffers:1 ed:1 definition:3 naturally:1 proof:5 associated:1 macbook:1 popular:1 satisfiability:1 carefully:1 attained:1 improved:3 formulation:2 strongly:2 furthermore:1 until:1 sketch:1 tropp:2 ei:2 replacing:1 nonlinear:1 logistic:1 xiaodong:1 building:1 matt:1 unbiased:1 true:3 hence:2 regularization:1 alternating:4 symmetric:2 nonzero:3 dhillon:2 ll:1 
coincides:1 samuel:1 criterion:1 presenting:1 outline:1 demonstrate:2 pro:1 image:1 variational:1 recently:3 functional:1 conditioning:1 volume:1 martinsson:1 measurement:16 refer:1 ai:33 paisley:1 meka:2 unconstrained:1 sujay:2 mathematics:1 pm:2 similarly:1 soltanolkotabi:2 zs0:2 closest:1 isometry:2 recent:3 optimizing:1 constantine:1 scenario:2 certain:4 nonconvex:7 n00014:1 inequality:1 onr:1 seen:1 minimum:6 zuowei:1 recognized:1 converge:5 forty:1 signal:2 ii:2 stephen:1 full:1 technical:1 faster:2 burer:3 bach:2 long:1 retrieval:7 concerning:1 e1:1 a1:7 converging:1 scalable:5 regression:1 essentially:1 metric:2 expectation:2 arxiv:8 iteration:8 c1:2 justified:1 addition:1 remarkably:1 singular:6 jian:1 crucial:1 massart:1 subject:5 lafferty:2 flow:5 effectiveness:1 jordan:1 integer:1 fwf:3 near:1 noting:1 wirtinger:3 enough:2 variety:1 iterate:1 zi:2 praneeth:2 cn:2 i7:1 pca:1 gb:1 url:1 penalty:1 jj:1 remark:1 matlab:1 deep:1 generally:2 useful:1 detailed:2 procrustes:2 generate:5 http:1 exist:1 nsf:1 per:5 correctly:2 yy:1 broadly:1 write:1 key:2 four:1 gunnar:1 nevertheless:1 drawn:1 kuk:2 v1:1 relaxation:7 fraction:1 sum:1 run:2 family:3 reasonable:1 appendix:1 scaling:1 bound:3 renato:1 guaranteed:3 convergent:2 quadratic:1 annual:1 nontrivial:2 constraint:8 n3:4 nathan:1 argument:1 min:12 relatively:1 conjecture:2 martin:1 according:1 combination:1 slightly:1 em:1 rider:1 sth:2 restricted:2 pr:1 gradually:1 ghaoui:1 equation:2 discus:1 r3:6 needed:1 goe:3 know:1 yurii:1 raghu:2 available:1 operation:2 z10:1 svp:23 observe:1 spectral:2 amini:1 kashima:1 alternative:1 rp:1 top:5 include:1 exploit:1 giving:2 emmanuel:2 build:1 establish:4 classical:1 feng:1 tensor:1 objective:5 move:1 already:1 occurs:1 strategy:1 concentration:3 surrogate:2 hai:2 gradient:33 kth:1 subspace:1 distance:3 thrun:1 me:1 assuming:1 issn:2 illustration:1 ratio:1 minimizing:2 difficult:2 mostly:1 potentially:1 expense:1 ryota:1 stated:1 unknown:2 galton:1 perform:1 observation:3 datasets:1 descent:18 november:1 dc:1 rn:16 bk:1 david:2 pablo:1 required:4 specified:3 connection:2 z1:2 alternately:1 nip:2 krylov:1 below:3 program:8 memory:1 wainwright:1 event:2 natural:2 hybrid:1 business:1 turning:1 zr:1 pause:1 residual:3 scheme:7 brief:1 aspremont:1 sn:1 review:3 literature:1 acknowledgement:1 probab:1 kf:13 relative:2 asymptotic:1 expect:2 lecture:1 adaptivity:1 eigendecomposition:1 degree:1 affine:9 sufficient:3 article:1 principle:1 thresholding:2 course:1 repeat:1 supported:2 enjoys:2 uchicago:2 wide:1 saul:1 fifth:1 sparse:8 ghz:1 kzk:1 valid:1 avoids:1 transition:2 kz:7 author:3 made:1 adaptive:1 coincide:1 projected:1 preprocessing:1 approximate:1 ignore:2 global:6 conclude:1 xi:1 continuous:2 iterative:1 why:1 table:2 promising:1 zk:4 williamson:1 complex:1 constructing:1 diag:1 main:5 dense:4 linearly:7 motivation:1 arise:1 n2:6 intel:1 slow:1 vr:1 tomioka:2 obeying:1 mahdi:2 theorem:7 specific:2 sensing:1 explored:1 r2:8 ments:1 dominates:1 exists:4 importance:1 magnitude:1 justifies:1 conditioned:1 kx:7 mce:2 gap:1 led:1 halko:2 prevents:1 kxk:2 contained:1 expressed:1 inderjit:2 hayashi:1 applies:1 springer:1 minimizer:1 truth:1 satisfies:6 acm:4 goal:2 formulated:1 presentation:1 lipschitz:2 replace:1 considerable:2 hard:1 admm:2 specifically:1 eigenpair:1 lemma:1 principal:1 called:1 goemans:1 e:4 experimental:4 svd:8 select:6 support:1 evaluate:2 instructive:1 |
5,337 | 5,831 | Combinatorial Bandits Revisited
Richard Combes?
M. Sadegh Talebi?
Alexandre Proutiere?
Marc Lelarge?
Centrale-Supelec, L2S, Gif-sur-Yvette, FRANCE
? Department of Automatic Control, KTH, Stockholm, SWEDEN
? INRIA & ENS, Paris, FRANCE
[email protected],{mstms,alepro}@kth.se,[email protected]
?
Abstract
This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting under semi-bandit feedback, we derive
a problem-specific regret lower bound, and discuss its scaling with the dimension of the decision space. We propose ESCB, an algorithm that efficiently exploits the structure of the problem and provide a finite-time analysis of its regret.
ESCB has better performance guarantees than existing algorithms, and significantly outperforms these algorithms in practice. In the adversarial setting under
bandit feedback, we propose C OMB EXP, an algorithm with the same regret scaling as state-of-the-art algorithms, but with lower computational complexity for
some combinatorial problems.
1
Introduction
Multi-Armed Bandit (MAB) problems [1] constitute the most fundamental sequential decision problems with an exploration vs. exploitation trade-off. In such problems, the decision maker selects
an arm in each round, and observes a realization of the corresponding unknown reward distribution.
Each decision is based on past decisions and observed rewards. The objective is to maximize the
expected cumulative reward over some time horizon by balancing exploitation (arms with higher
observed rewards should be selected often) and exploration (all arms should be explored to learn
their average rewards). Equivalently, the performance of a decision rule or algorithm can be measured through its expected regret, defined as the gap between the expected reward achieved by the
algorithm and that achieved by an oracle algorithm always selecting the best arm. MAB problems
have found applications in many fields, including sequential clinical trials, communication systems,
economics, see e.g. [2, 3].
In this paper, we investigate generic combinatorial MAB problems with linear rewards, as introduced
in [4]. In each round n ? 1, a decision maker selects an arm M from a finite set M ? {0, 1}d and
Pd
d
receives a reward M > X(n) =
i=1 Mi Xi (n). The reward vector X(n) ? R+ is unknown.
We focus here on the case where all arms consist of the same number m of basic actions in the
sense that kM k1 = m, ?M ? M. After selecting an arm M in round n, the decision maker
receives some feedback. We consider both (i) semi-bandit feedback under which after round n, for
all i ? {1, . . . , d}, the component Xi (n) of the reward vector is revealed if and only if Mi = 1; (ii)
bandit feedback under which only the reward M > X(n) is revealed. Based on the feedback received
up to round n ? 1, the decision maker selects an arm for the next round n, and her objective is to
maximize her cumulative reward over a given time horizon consisting of T rounds. The challenge in
these problems resides in the very large number of arms, i.e., in its combinatorial structure: the size
of M could well grow as dm . Fortunately, one may hope to exploit the problem structure to speed
up the exploration of sub-optimal arms.
We consider two instances of combinatorial bandit problems, depending on how the sequence
of reward vectors is generated. We first analyze the case of stochastic rewards, where for all
1
Algorithm
Regret
LLR
[9]
O
m3 d?max
?2
min
CUCB
[10]
log(T )
O
m2 d
?min
log(T )
CUCB
[11]
O
md
?min
log(T )
ESCB
5)
(Theorem
?
md
O ?min log(T )
Table 1: Regret upper bounds for stochastic combinatorial optimization under semi-bandit feedback.
i ? {1, . . . , d}, (Xi (n))n?1 are i.i.d. with Bernoulli distribution of unknown mean. The reward
sequences are also independent across i. We then address the problem in the adversarial setting
where the sequence of vectors X(n) is arbitrary and selected by an adversary at the beginning of
the experiment. In the stochastic setting, we provide sequential arm selection algorithms whose performance exceeds that of existing algorithms, whereas in the adversarial setting, we devise simple
algorithms whose regret have the same scaling as that of state-of-the-art algorithms, but with lower
computational complexity.
2
2.1
Contribution and Related Work
Stochastic combinatorial bandits under semi-bandit feedback
Contribution. (a) We derive an asymptotic (as the time horizon T grows large) regret lower bound
satisfied by any algorithm (Theorem 1). This lower bound is problem-specific and tight: there
exists an algorithm that attains the bound on all problem instances, although the algorithm might
be computationally expensive. To our knowledge, such lower bounds have not been proposed in
the case of stochastic combinatorial bandits. The dependency in m and d of the lower bound is
unfortunately not explicit. We further provide a simplified lower bound (Theorem 2) and derive its
scaling in (m, d) in specific examples.
(b) We propose ESCB
? (Efficient Sampling for Combinatorial Bandits), an algorithm whose regret
scales at most as O( md??1
min log(T )) (Theorem 5), where ?min denotes the expected reward difference between the best and the second-best arm. ESCB assigns an index to each arm. The index
of given arm can be interpreted as performing likelihood tests with vanishing risk on its average reward. Our indexes are the natural extension of KL-UCB and UCB1 indexes defined for unstructured
bandits [5, 21]. Numerical experiments for some specific combinatorial problems are presented in
the supplementary material, and show that ESCB significantly outperforms existing algorithms.
Related work. Previous contributions on stochastic combinatorial bandits focused on specific combinatorial structures, e.g. m-sets [6], matroids [7], or permutations [8]. Generic combinatorial problems were investigated in [9, 10, 11, 12]. The proposed algorithms, LLR and CUCB are variants
of the UCB algorithm, and their performance guarantees are?presented in Table 1. Our algorithms
improve over LLR and CUCB by a multiplicative factor of m.
2.2
Adversarial combinatorial problems under bandit feedback
Contribution.
We
present
algorithm
C OMB EXP,
whose
regret
is
q
P
?1
?1
1
3
1/2
O
m T (d + m ? ) log ?min , where ?min = mini?[d] m|M| M ?M Mi and ? is
the smallest nonzero eigenvalue of the matrix E[M M > ] when M is uniformly distributed over M
(Theorem 6). For most problems
of interest m(d?)?1 = O(1) [4] and ??1
min = O(poly(d/m)),
p
?
3
so that C OMB EXP has O( m dT log(d/m)) regret. A known regret lower bound is ?(m dT )
[13], so the regret gap between C OMB EXP and this lower bound scales at most as m1/2 up to a
logarithmic factor.
Related work. Adversarial combinatorial bandits have been extensively investigated recently,
see [13] and references therein. Some papers consider specific instances of these problems, e.g.,
shortest-path routing [14], m-sets [15],and permutations
For generic combinatorial problems,
[16].
?
?
known regret lower bounds scale as ?
mdT and ? m dT (if d ? 2m) in the case of semibandit and bandit feedback, respectively [13]. In the case of semi-bandit feedback, [13] proposes
2
Algorithm
Lower Bound [13]
C OM BAND [4]
EXP2 WITH J OHN ? S E XPLORATION [18]
C OMB EXP (Theorem 6)
? Regret
? m dT , if d ? 2m
r
d
O
m3 dT log m
1 + 2m
d?
q
d
O
m3 dT log m
r
1/2
O
m3 dT 1 + md? log ??1
min
Table 2: Regret of various algorithms for adversarial combinatorial bandits with bandit feedback.
Note that for most combinatorial classes of interests, m(d?)?1 = O(1) and ??1
min = O(poly(d/m)).
OSMD, anp
algorithm whose regret upper bound matches the lower bound. [17] presents an algorithm
with O(m dL?T log(d/m)) regret where L?T is the total reward of the best arm after T rounds.
For problems with bandit feedback, [4] proposes C OM BAND and derives a regret upper bound which
depends on the structurep
of arm set M. For most problems of interest, the regret under C OM BAND
is upper-bounded by O( m3 dT log(d/m)). [18] addresses generic linear optimization with bandit
feedback and the proposed p
algorithm, referred to as EXP2 WITH J OHN ? S E XPLORATION, has a
regret scaling at most as O( m3 dT log(d/m)) in the case of combinatorial structure. As we show
next, for many combinatorial structures of interest (e.g. m-sets, matchings, spanning trees), C OMB EXP yields the same regret as C OM BAND and EXP2 WITH J OHN ? S E XPLORATION, with lower
computational complexity for a large class of problems. Table 2 summarises known regret bounds.
Example 1: m-sets. M is the set of all d-dimensional binary vectors with m non-zero coordinates.
m(d?m)
We have ?min = m
d and ? = d(d?1) (refer to the supplementary material for details). Hence when
p
m = o(d), the regret upper bound of C OMB EXP becomes O( m3 dT log(d/m)), which is the
same as that of C OM BAND and EXP2 WITH J OHN ? S E XPLORATION.
Example 2: matchings. The set of arms M is the set of perfect matchings in Km,m . d = m2 and
1
1
|M| = m!. We have ?min = m
, and ? = m?1
. Hence the regret upper bound of C OMB EXP is
p
5
O( m T log(m)), the same as for C OM BAND and EXP2 WITH J OHN ? S E XPLORATION.
Example 3: spanning
trees. M is the set of spanning trees in the complete graph KN . In this
case, d = N2 , m = N ? 1, and by Cayley?s formula M has N N ?2 arms. log ??1
min ? 2N for
m
N ? 2 and d?
< 7 when N ? 6, The regret upper bound of C OM BAND and EXP2 WITH J OHN ? S
p
E XPLORATION
becomes O( N 5 T log(N )). As for C OMB EXP, we get the same regret upper
p
bound O( N 5 T log(N )).
3
Models and Objectives
We consider MAB problems where each arm M is a subset of m basic actions taken from [d] =
{1, . . . , d}. For i ? [d], Xi (n) denotes the reward of basic action i in round n. In the stochastic
setting, for each i, the sequence of rewards (Xi (n))n?1 is i.i.d. with Bernoulli distribution with
mean ?i . Rewards are assumed to be independent across actions. We denote by ? = (?1 , . . . , ?d )> ?
? = [0, 1]d the vector of unknown expected rewards of the various basic actions. In the adversarial
setting, the reward vector X(n) = (X1 (n), . . . , Xd (n))> ? [0, 1]d is arbitrary, and the sequence
(X(n), n ? 1) is decided (but unknown) at the beginning of the experiment.
The set of arms M is an arbitrary subset of {0, 1}d , such that each of its elements M has m basic
actions. Arm M is identified with a binary column vector (M1 , . . . , Md )> , and we have kM k1 =
m, ?M ? M. At the beginning of each round n, a policy ?, selects an arm M ? (n) ? M based on
the arms chosen
rounds and their observed rewards. The reward of arm M ? (n) selected
P in previous
?
in round n is i?[d] Mi (n)Xi (n) = M ? (n)> X(n).
3
We consider both semi-bandit and bandit feedbacks. Under semi-bandit feedback and policy ?,
at the end of round n, the outcome of basic actions Xi (n) for all i ? M ? (n) are revealed to the
decision maker, whereas under bandit feedback, M ? (n)> X(n) only can be observed.
Let ? be the set of all feasible policies. The objective is to identify a policy in ? maximizing the
cumulative expected reward over a finite time horizon T . The expectation is here taken with respect
to possible randomness in the rewards (in the stochastic setting) and the possible randomization in
the policy. Equivalently, we aim at designing a policy that minimizes regret, where the regret of
policy ? ? ? is defined by:
" T
#
" T
#
X
X
M > X(n) ? E
M ? (n)> X(n) .
R? (T ) = max E
M ?M
n=1
n=1
Finally, for the stochastic setting, we denote by ?M (?) = M > ? the expected reward of arm M ,
and let M ? (?) ? M, or M ? for short, be any arm with maximum expected reward: M ? (?) ?
arg maxM ?M ?M (?). In what follows, to simplify the presentation, we assume that the optimal
M ? is unique. We further define: ?? (?) = M ?> ?, ?min = minM 6=M ? ?M where ?M =
?? (?) ? ?M (?), and ?max = maxM (?? (?) ? ?M (?)).
4
4.1
Stochastic Combinatorial Bandits under Semi-bandit Feedback
Regret Lower Bound
Given ?, define the set of parameters that cannot be distinguished from ? when selecting action
M ? (?), and for which arm M ? (?) is suboptimal:
B(?) = {? ? ? : Mi? (?)(?i ? ?i ) = 0, ?i, ?? (?) > ?? (?)}.
We define X = (R+ )|M| and kl(u, v) the Kullback-Leibler divergence between Bernoulli distributions of respective means u and v, i.e., kl(u, v) = u log(u/v) + (1 ? u) log((1 ? u)/(1 ? v)).
Finally, for (?, ?) ? ?2 , we define the vector kl(?, ?) = (kl(?i , ?i ))i?[d] .
We derive a regret lower bound valid for any uniformly good algorithm. An algorithm ? is uniformly
good iff R? (T ) = o(T ? ) for all ? > 0 and all parameters ? ? ?. The proof of this result relies on
a general result on controlled Markov chains [19].
Theorem 1 For all ? ? ?, for any uniformly good policy ? ? ?, lim inf T ??
where c(?) is the optimal value of the optimization problem:
inf
x?X
X
M ?M
xM (M ? (?) ? M )> ?
s.t.
X
xM M
>
R? (T )
log(T )
? c(?),
kl(?, ?) ? 1 , ?? ? B(?).
(1)
M ?M
Observe first that optimization problem (1) is a semi-infinite linear program which can be solved for
any fixed ?, but its optimal value is difficult to compute explicitly. Determining how c(?) scales as
a function of the problem dimensions d and m is not obvious. Also note that (1) has the following
interpretation: assume that (1) has a unique solution x? . Then any uniformly good algorithm must
select action M at least x?M log(T ) times over the T first rounds. From [19], we know that there
exists an algorithm which is asymptotically optimal, so that its regret matches the lower bound of
Theorem 1. However this algorithm suffers from two problems: it is computationally infeasible
for large problems since it involves solving (1) T times, furthermore the algorithm has no finite
time performance guarantees, and numerical experiments suggest that its finite time performance
on typical problems is rather poor. Further remark that if M is the set of singletons (classical
bandit), Theorem 1 reduces to the Lai-Robbins bound [20] and if M is the set of m-sets (bandit
with multiple plays), Theorem 1 reduces to the lower bound derived in [6]. Finally, Theorem 1 can
be generalized in a straightforward manner for when rewards belong to a one-parameter exponential
family of distributions (e.g., Gaussian, Exponential, Gamma etc.) by replacing kl by the appropriate
divergence measure.
4
A Simplified Lower Bound We now study how the regret c(?) scales as a function of the problem
dimensions d and m. To this aim, we present a simplified regret lower bound. Given ?, we say that
a set H ? M \ M ? has property P (?) iff, for all (M, M 0 ) ? H2 , M 6= M 0 we have Mi Mi0 (1 ?
Mi? (?)) = 0 for all i. We may now state Theorem 2.
Theorem 2 Let H be a maximal (inclusion-wise) subset of M with property P (?). Define ?(?) =
M
minM 6=M ? |M?\M
? | . Then:
?(?)
X
c(?) ?
M ?H
1
maxi?M \M ? kl ?i , |M \M
?|
P
j?M ? \M
?j
.
Corollary 1 Let ? ? [a, 1]d for some constant a > 0 and M be such that each arm M ? M, M 6=
M ? has at most k suboptimal basic actions. Then c(?) = ?(|H|/k).
Theorem 2 provides an explicit regret lower bound. Corollary 1 states that c(?) scales at least
with the size of H. For most combinatorial sets, |H| is proportional to d ? m (see supplementary
material for some examples), which implies that in these cases, one cannot obtain a regret smaller
than O((d ? m)??1
min log(T )). This result is intuitive since d ? m is the number of parameters
not observed
when
selecting the optimal arm. The algorithms
proposed below have a regret of
?
?
O(d m??1
log(T
)),
which
is
acceptable
since
typically,
m
is
much smaller than d.
min
4.2
Algorithms
Next we present ESCB, an algorithm for stochastic combinatorial bandits that relies on arm indexes
as in UCB1 [21] and KL-UCB [5]. We derive finite-time regret upper bounds for ESCB that hold
even if we assume that kM k1 ? m, ?M ? M, instead of kM k1 = m, so that arms may have
different numbers of basic actions.
4.2.1
Indexes
ESCB relies on arm indexes. In general, an index of arm M in round n, say bM (n), should be
defined so that bM (n) ? M > ? with high probability. Then as for UCB1 and KL-UCB, applying the
principle of optimism in face of uncertainty, a natural way to devise algorithms based on indexes is
to select in
round the arm with the highest index. Under a given algorithm, at time n, we define
Peach
n
ti (n) = s=1 Mi (s) the number of times basic action i has been sampled. The empirical mean
Pn
reward of action i is then defined as ??i (n) = (1/ti (n)) s=1 Xi (s)Mi (s) if ti (n) > 0 and ??i (n) =
?
0 otherwise. We define the corresponding vectors t(n) = (ti (n))i?[d] and ?(n)
= (??i (n))i?[d] .
?
The indexes we propose are functions of the round n and of ?(n).
Our first index for arm M ,
?
referred to as bM (n, ?(n))
or bM (n) for short, is an extension of KL-UCB index. Let f (n) =
?
log(n) + 4m log(log(n)). bM (n, ?(n))
is the optimal value of the following optimization problem:
max M > q
q??
?
s.t. (M t(n))> kl(?(n),
q) ? f (n),
(2)
where we use the convention that for v, u ? Rd , vu = (vi ui )i?[d] . As we show later, bM (n) may be
computed efficiently using a line search procedure similar to that used to determine KL-UCB index.
?
Our second index cM (n, ?(n))
or cM (n) for short is a generalization of the UCB1 and UCB-tuned
indexes:
v
!
u
d
u f (n) X
M
i
>?
cM (n) = M ?(n) + t
2
t (n)
i=1 i
Note that, in the classical bandit problems with independent arms, i.e., when m = 1, bM (n) reduces to the KL-UCB index (which yields an asymptotically optimal algorithm) and cM (n) reduces
to the UCB-tuned index. The next theorem provides generic properties of our indexes. An impor?
tant consequence of these properties is that the expected number of times where bM ? (n, ?(n))
or
?
?
cM ? (n, ?(n)) underestimate ? (?) is finite, as stated in the corollary below.
5
Theorem 3 (i) For all n ? 1, M ? M and ? ? [0, 1]d , we have bM (n, ? ) ? cM (n, ? ).
(ii) There exists Cm > 0 depending on m only such that, for all M ? M and n ? 2:
?
P[bM (n, ?(n))
? M > ?] ? Cm n?1 (log(n))?2 .
Corollary 2
P
n?1
?
P[bM ? (n, ?(n))
? ?? ] ? 1 + Cm
P
n?2
n?1 (log(n))?2 < ?.
Statement (i) in the above theorem is obtained combining Pinsker and Cauchy-Schwarz inequalities.
The proof of statement (ii) is based on a concentration inequality on sums of empirical KL divergences proven in [22]. It enables to control the fluctuations of multivariate empirical distributions
for exponential families. It should also be observed that indexes bM (n) and cM (n) can be extended
in a straightforward manner to the case of continuous linear bandit problems, where the set of arms
is the unit sphere and one wants to maximize the dot product between the arm and an unknown
vector. bM (n) can also be extended to the case where reward distributions are not Bernoulli but
lie in an exponential family (e.g. Gaussian, Exponential, Gamma, etc.), replacing kl by a suitably
chosen divergence measure. A close look at cM (n) reveals that the indexes proposed in [10], [11],
Pd
i
and [9] are too conservative to be optimal in our setting: there the ?confidence bonus? i=1 tiM(n)
Pd
i
was replaced by (at least) m i=1 tiM(n)
. Note that [10], [11] assume that the various basic actions
are arbitrarily correlated, while we assume independence among basic actions. When independence
does not hold, [11] provides a problem instance where the regret is at least ?( ?md
log(T )). This
min
?
d m
log(T ))), since we have added the
does not contradict our regret upper bound (scaling as O( ?
min
independence assumption.
4.2.2
Index computation
While the index cM (n) is explicit, bM (n) is defined as the solution to an optimization problem. We
show that it may be computed by a simple line search. For ? ? 0, w ? [0, 1] and v ? N, define:
p
g(?, w, v) = 1 ? ?v + (1 ? ?v)2 + 4wv? /2.
?
Fix n, M , ?(n)
and t(n). Define I = {i : Mi = 1, ??i (n) 6= 1}, and for ? > 0, define:
X
F (?) =
ti (n)kl(??i (n), g(?, ??i (n), ti (n))).
i?I
Theorem 4 If I = ?, bM (n) = ||M ||1 . Otherwise: (i) ? 7? F (?) is strictly increasing, and
F (R+ ) = R+ . (ii) Define ?? as the unique solution to F (?) = f (n). Then bM (n) = ||M ||1 ? |I| +
P
g(?? , ??i (n), ti (n)).
i?I
Theorem 4 shows that bM (n) can be computed using a line search procedure such as bisection,
as this computation amounts to solving the nonlinear equation F (?) = f (n), where F is strictly
increasing. The proof of Theorem 4 follows from KKT conditions and the convexity of the KL
divergence.
4.2.3
The ESCB Algorithm
The pseudo-code of ESCB is presented in Algorithm 1. We consider two variants of the algorithm
based on the choice of the index ?M (n): ESCB-1 when ?M (n) = bM (n) and ESCB-2 if ?M (n) =
cM (n). In practice, ESCB-1 outperforms ESCB-2. Introducing ESCB-2 is however instrumental
in the regret analysis of ESCB-1 (in view of Theorem 3 (i)). The following theorem provides a
finite time analysis of our ESCB algorithms. The proof of this theorem borrows some ideas from
the proof of [11, Theorem 3].
Theorem 5 The regret under algorithms ? ? {ESCB-1, ESCB-2} satisfies for all T ? 1:
?
3 ?2
0
R? (T ) ? 16d m??1
min f (T ) + 4dm ?min + Cm ,
?
0
where Cm
? 0 does not depend on ?, d and T . As a consequence R? (T ) = O(d m??1
min log(T ))
when T ? ?.
6
Algorithm 1 ESCB
for n ? 1 do
Select arm M (n) ? arg maxM ?M ?M (n).
Observe the rewards, and update ti (n) and ??i (n), ?i ? M (n).
end for
Algorithm 2 C OMB EXP
q
Initialization: Set q0 = ?0 , ? =
q
m log ??1
min
?
m log ??1
min +
C(Cm2 d+m)T
and ? = ?C, with C =
?
.
m3/2
for n ? 1 do
0
Mixing: Let qn?1
= (1 ? ?)qn?1 + ??0 .
P
0
Decomposition: Select a distribution pn?1 over M such that M pn?1 (M )M = mqn?1
.
P
Sampling: Select a random arm M (n) with distribution pn?1 and incur a reward Yn = i Xi (n)Mi (n).
+
?
Estimation: Let ?n?1 = E M M > , where M has law pn?1 . Set X(n)
= Y n ?+
n?1 M (n), where ?n?1
is the pseudo-inverse of ?n?1 .
? i (n)), ?i ? [d].
Update: Set q?n (i) ? qn?1 (i) exp(? X
Projection: Set qn to be the projection of q?n onto the set P using the KL divergence.
end for
ESCB with time horizon T has a complexity of O(|M|T ) as neither bM nor cM can be written as M > y for some vector y ? Rd . Assuming that the offline (static) combinatorial problem is solvable in O(V (M)) time, the complexity of the CUCB algorithm in [10] and [11]
after T rounds is O(V (M)T ). Thus, if the offline problem is efficiently implementable, i.e.,
V (M) = O(poly(d/m)), CUCB is efficient, whereas ESCB is not since |M| may have exponentially many elements. In ?2.5 of the supplement, we provide an extension of ESCB called
E POCH -ESCB, that attains almost the same regret as ESCB while enjoying much better computational complexity.
5
Adversarial Combinatorial Bandits under Bandit Feedback
We now consider adversarial combinatorial bandits with bandit feedback. We start with the following observation:
max M > X =
M ?M
max
?> X,
??Co(M)
with Co(M) the convex hull of M. We embed M in the d-dimensional simplex by dividing its
elements by m. Let P be this scaled version of Co(M).
Inspired by OSMD [13, 18], we propose the C OMB EXP algorithm, where the KL divergence
is the Bregman divergence used to project onto P. Projection using the KL divergence is
addressed in [23]. We denote the KL divergence between distributions q and p in P by
P
KL(p, q) = i?[d] p(i) log p(i)
q(i) . The projection of distribution q onto a closed convex set ? of
?
distributions is p = arg minp?? KL(p, q).
Let ? be the smallest nonzero eigenvalue of E[M M > ], where M is uniformly
distributed over M.
P
1
We define the exploration-inducing distribution ?0 ? P: ?0i = m|M|
M
?i ? [d], and
i,
M ?M
0
0
let ?min = mini m?i . ? is the distribution over basic actions [d] induced by the uniform distribution over M. The pseudo-code for C OMB EXP is shown in Algorithm 2. The KL projection
in C OMB EXP
P ensures that mqn?1 ? Co(M). There exists ?, a distribution over M such that
mqn?1 = M ?(M )M . This guarantees that the system of linear equations in the decomposition
step is consistent. We propose to perform the projection step (the KL projection of q? onto P) using
interior-point methods [24]. We provide a simpler method in ?3.4 of the supplement. The decomposition step can be efficiently implemented using the algorithm of [25]. The following theorem
provides a regret upper bound for C OMB EXP.
r
1/2
?1
m5/2
C OMB EXP
Theorem 6 For all T ? 1: R
(T ) ? 2 m3 T d + m?
log ??1
min + ? log ?min .
7
?1
?1
For most classes of M, we have ?min
O(1) [4]. For these
p = O(poly(d/m)) and m(d?) =p
3
classes, C OMB EXP has a regret of O( m dT log(d/m)), which is a factor m log(d/m) off the
lower bound (see Table 2).
It might not be possible to compute the projection step exactly, and this step can be solved up
to accuracy n in round n. Namely we find qn such that KL(qn , q?n ) ? minp?? KL(p, q?n ) ? n .
Proposition 1 shows that for n = O(n?2 log?3 (n)), the approximate projection gives the same
regret as when the projection is computed exactly. Theorem 7 gives the computational complexity of
C OMB EXP with approximate projection. When Co(M) is described by polynomially (in d) many
linear equalities/inequalities, C OMB EXP is efficiently implementable and its running time scales
(almost) linearly in T . Proposition 1 and Theorem 7 easily extend to other OSMD-type algorithms
and thus might be of independent interest.
Proposition 1 If the projection
n = O(n?2 log?3 (n)), we have:
s
RC OMB EXP (T ) ? 2
step
2m3 T
C OMB EXP
of
d+
m1/2
?
is
log ??1
min +
solved
up
to
accuracy
2m5/2
log ??1
min .
?
Theorem 7 Assume that Co(M) is defined by c linear equalities and s linear inequalities. If the
projection step is solved up to accuracy n = O(n?2 log?3 (n)), then C OMB EXP has time complexity.
The time complexity of C OMB EXP can be reduced by exploiting the structure of M (See [24,
page 545]). In particular, if inequality
?constraints describing Co(M) are box constraints, the time
complexity of C OMB EXP is O(T [c2 s(c + d) log(T ) + d4 ]).
The computational complexity of C OMB EXP is determined by the structure of Co(M) and C OMB EXP has O(T log(T )) time complexity due to the efficiency of interior-point methods. In contrast, the computational complexity of C OM BAND depends on the complexity of sampling from M.
C OM BAND may have a time complexity that is super-linear in T (see [16, page 217]). For instance,
consider the matching problem described in Section 2. We have c = 2m equality constraints and
s = m2 box constraints, so that the time complexity of C OMB EXP is: O(m5 T log(T )). It is noted
that using [26, Algorithm 1], the cost of decomposition in this case is O(m4 ). On the other hand,
C OMB BAND has a time complexity of O(m10 F (T )), with F a super-linear function, as it requires
to approximate a permanent, requiring O(m10 ) operations per round. Thus, C OMB EXP has much
lower complexity than C OM BAND and achieves the same regret.
6
Conclusion
We have investigated stochastic and adversarial combinatorial bandits. For stochastic combinatorial
bandits with semi-bandit feedback, we have provided a tight, problem-dependent regret lower bound
that, in most cases, scales at least as O((d ? m)??1
min log(T )). We proposed ESCB, an algorithm
?
log(T
))
regret.
We
plan
to
reduce
the gap between this regret guarantee and
with O(d m??1
min
the regret lower bound, as well as investigate the performance of E POCH -ESCB. For adversarial
combinatorial bandits with bandit feedback, we proposed the C OMB EXP algorithm. There is a gap
between the regret of C OMB EXP and the known regret lower bound in this setting, and we plan to
reduce it as much as possible.
Acknowledgments
A. Proutiere?s research is supported by the ERC FSA grant, and the SSF ICT-Psi project.
8
References
[1] Herbert Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected
Papers, pages 169?177. Springer, 1985.
[2] S?ebastien Bubeck and Nicol`o Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed
bandit problems. Foundations and Trends in Machine Learning, 5(1):1?222, 2012.
[3] Nicol`o Cesa-Bianchi and G?abor Lugosi. Prediction, learning, and games, volume 1. Cambridge University Press Cambridge, 2006.
[4] Nicol`o Cesa-Bianchi and G?abor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404?1422, 2012.
[5] Aur?elien Garivier and Olivier Capp?e. The KL-UCB algorithm for bounded stochastic bandits and beyond.
In Proc. of COLT, 2011.
[6] Venkatachalam Anantharam, Pravin Varaiya, and Jean Walrand. Asymptotically efficient allocation rules
for the multiarmed bandit problem with multiple plays-part i: iid rewards. Automatic Control, IEEE
Transactions on, 32(11):968?976, 1987.
[7] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast
combinatorial optimization with learning. In Proc. of UAI, 2014.
[8] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Learning multiuser channel allocations in cognitive
radio networks: A combinatorial multi-armed bandit formulation. In Proc. of IEEE DySpan, 2010.
[9] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown
variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Trans. on
Networking, 20(5):1466?1478, 2012.
[10] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and
applications. In Proc. of ICML, 2013.
[11] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic
combinatorial semi-bandits. In Proc. of AISTATS, 2015.
[12] Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Branislav Kveton. Efficient learning in large-scale combinatorial semi-bandits. In Proc. of ICML, 2015.
[13] Jean-Yves Audibert, S?ebastien Bubeck, and G?abor Lugosi. Regret in online combinatorial optimization.
Mathematics of Operations Research, 39(1):31?45, 2013.
[14] Andr?as Gy?orgy, Tam?as Linder, G?abor Lugosi, and Gy?orgy Ottucs?ak. The on-line shortest path problem
under partial monitoring. Journal of Machine Learning Research, 8(10), 2007.
[15] Satyen Kale, Lev Reyzin, and Robert Schapire. Non-stochastic bandit slate problems. Advances in Neural
Information Processing Systems, pages 1054?1062, 2010.
[16] Nir Ailon, Kohei Hatano, and Eiji Takimoto. Bandit online optimization over the permutahedron. In
Algorithmic Learning Theory, pages 215?229. Springer, 2014.
[17] Gergely Neu. First-order regret bounds for combinatorial semi-bandits. In Proc. of COLT, 2015.
[18] S?ebastien Bubeck, Nicol`o Cesa-Bianchi, and Sham M. Kakade. Towards minimax policies for online
linear optimization with bandit feedback. Proc. of COLT, 2012.
[19] Todd L. Graves and Tze Leung Lai. Asymptotically efficient adaptive choice of control laws in controlled
markov chains. SIAM J. Control and Optimization, 35(3):715?743, 1997.
[20] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in
Applied Mathematics, 6(1):4?22, 1985.
[21] Peter Auer, Nicol`o Cesa-Bianchi, and Paul Fischer. Finite time analysis of the multiarmed bandit problem.
Machine Learning, 47(2-3):235?256, 2002.
[22] Stefan Magureanu, Richard Combes, and Alexandre Proutiere. Lipschitz bandits: Regret lower bounds
and optimal algorithms. Proc. of COLT, 2014.
[23] I. Csisz?ar and P.C. Shields. Information theory and statistics: A tutorial. Now Publishers Inc, 2004.
[24] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
[25] H. D. Sherali. A constructive proof of the representation theorem for polyhedral sets based on fundamental
definitions. American Journal of Mathematical and Management Sciences, 7(3-4):253?270, 1987.
[26] David P. Helmbold and Manfred K. Warmuth. Learning permutations with exponential weights. Journal
of Machine Learning Research, 10:1705?1736, 2009.
9
| 5831 |@word trial:1 exploitation:2 version:1 instrumental:1 suitably:1 cm2:1 km:5 decomposition:4 selecting:4 mi0:1 sherali:1 tuned:2 outperforms:3 existing:3 past:1 multiuser:1 yajun:1 must:1 written:1 numerical:2 enables:1 update:2 v:1 selected:4 warmuth:1 beginning:3 vanishing:1 short:3 manfred:1 provides:5 revisited:1 simpler:1 rc:1 mathematical:1 c2:1 yuan:1 polyhedral:1 manner:2 expected:9 nor:1 multi:6 inspired:1 armed:6 increasing:2 becomes:2 project:2 provided:1 bounded:2 pravin:1 bonus:1 what:1 cm:16 interpreted:1 gif:1 minimizes:1 csaba:1 guarantee:5 pseudo:3 ti:8 xd:1 exactly:2 scaled:1 control:5 unit:1 grant:1 yn:1 todd:1 consequence:2 ak:1 lev:1 path:2 fluctuation:1 lugosi:4 inria:1 might:3 therein:1 initialization:1 co:8 decided:1 unique:3 acknowledgment:1 omb:30 vu:1 practice:2 regret:58 kveton:3 procedure:2 empirical:3 kohei:1 significantly:2 projection:13 matching:1 confidence:1 boyd:1 suggest:1 get:1 cannot:2 close:1 selection:1 onto:4 interior:2 eriksson:1 risk:1 applying:1 branislav:3 maximizing:1 straightforward:2 economics:1 kale:1 convex:3 focused:1 unstructured:1 assigns:1 helmbold:1 m2:3 rule:3 semibandit:1 vandenberghe:1 coordinate:1 play:2 olivier:1 designing:1 element:3 trend:1 expensive:1 osmd:3 cayley:1 observed:6 solved:4 wang:1 ensures:1 trade:1 highest:1 observes:1 pd:3 convexity:1 complexity:18 ui:1 reward:36 pinsker:1 depend:1 tight:3 solving:2 incur:1 efficiency:1 matchings:3 capp:1 easily:1 slate:1 various:3 jain:2 fast:1 outcome:1 whose:5 jean:2 supplementary:3 say:2 otherwise:2 satyen:1 statistic:1 fischer:1 online:3 fsa:1 sequence:5 eigenvalue:2 propose:6 maximal:1 product:1 fr:2 combining:1 realization:1 exp2:6 reyzin:1 iff:2 mixing:1 intuitive:1 inducing:1 csisz:1 exploiting:1 escb:28 perfect:1 tim:2 derive:5 depending:2 measured:1 received:1 dividing:1 implemented:1 involves:1 implies:1 convention:1 stochastic:19 hull:1 exploration:4 routing:1 material:3 cucb:6 fix:1 generalization:1 mab:4 randomization:1 brian:1 proposition:3 stockholm:1 extension:3 strictly:2 hold:2 exp:30 algorithmic:1 achieves:1 smallest:2 estimation:1 proc:9 combinatorial:39 radio:1 maker:5 schwarz:1 robbins:4 maxm:3 hope:1 stefan:1 always:1 gaussian:2 aim:2 super:2 rather:1 pn:5 corollary:4 derived:1 focus:1 bernoulli:4 likelihood:1 contrast:1 adversarial:12 attains:2 sense:1 dependent:1 leung:2 typically:1 abor:4 her:2 bandit:59 proutiere:3 france:2 selects:4 arg:3 among:1 colt:4 proposes:2 plan:2 art:2 field:1 sampling:3 look:1 icml:2 simplex:1 simplify:1 richard:3 wen:3 gamma:2 divergence:10 individual:1 m4:1 replaced:1 consisting:1 interest:5 investigate:2 zheng:3 llr:3 chain:2 l2s:1 bregman:1 partial:1 respective:1 sweden:1 tree:3 peach:1 impor:1 enjoying:1 instance:5 column:1 ar:1 cost:1 introducing:1 subset:3 supelec:2 uniform:1 too:1 dependency:1 kn:1 m10:2 fundamental:2 siam:1 aur:1 off:2 talebi:1 gergely:1 satisfied:1 cesa:5 management:1 cognitive:1 tam:1 american:1 elien:1 singleton:1 gy:2 inc:1 permanent:1 explicitly:1 audibert:1 depends:2 vi:1 multiplicative:1 later:1 view:1 closed:1 analyze:1 start:1 contribution:4 om:10 yves:1 accuracy:3 efficiently:5 yield:2 identify:1 bisection:1 iid:1 monitoring:1 randomness:1 minm:2 networking:1 suffers:1 ashkan:3 neu:1 definition:1 lelarge:2 underestimate:1 obvious:1 dm:2 proof:6 mi:11 psi:1 static:1 sampled:1 knowledge:1 lim:1 auer:1 alexandre:2 higher:1 dt:11 ohn:6 rahul:2 wei:1 formulation:1 box:2 furthermore:1 hand:1 receives:2 replacing:2 combes:3 nonlinear:1 grows:1 tant:1 requiring:1 poch:2 hence:2 equality:3 
q0:1 nonzero:2 leibler:1 round:20 game:1 noted:1 d4:1 generalized:1 m5:3 complete:1 wise:1 recently:1 exponentially:1 volume:1 belong:1 interpretation:1 m1:3 extend:1 lieven:1 refer:1 multiarmed:2 cambridge:3 automatic:2 rd:2 mathematics:2 inclusion:1 erc:1 dot:1 permutahedron:1 hatano:1 etc:2 multivariate:1 inf:2 inequality:5 binary:2 arbitrarily:1 wv:1 yi:2 devise:2 herbert:3 fortunately:1 determine:1 shortest:2 maximize:3 semi:13 ii:4 multiple:2 stephen:1 sham:1 reduces:4 exceeds:1 match:2 clinical:1 sphere:1 lai:3 controlled:2 prediction:1 variant:2 basic:12 expectation:1 achieved:2 whereas:3 want:1 addressed:1 grow:1 publisher:1 anp:1 induced:1 bhaskar:2 ssf:1 yang:1 revealed:3 independence:3 matroid:1 nonstochastic:1 identified:1 suboptimal:2 reduce:2 idea:1 optimism:1 peter:1 azin:3 constitute:1 action:16 remark:1 se:1 amount:1 extensively:1 band:11 eiji:1 reduced:1 schapire:1 walrand:1 andr:1 tutorial:1 per:1 neither:1 takimoto:1 garivier:1 graph:1 asymptotically:5 sum:1 inverse:1 uncertainty:1 family:3 almost:2 decision:10 acceptable:1 scaling:6 investigates:1 sadegh:1 bound:38 oracle:1 constraint:4 aspect:1 speed:1 min:32 performing:1 department:1 ailon:1 centrale:1 poor:1 across:2 smaller:2 kakade:1 taken:2 computationally:2 equation:2 discus:1 describing:1 know:1 end:3 operation:2 observe:2 generic:5 appropriate:1 distinguished:1 eydgahi:2 denotes:2 running:1 exploit:2 k1:4 classical:2 summarises:1 objective:4 mdt:1 added:1 concentration:1 md:6 kth:2 cauchy:1 spanning:3 ottucs:1 assuming:1 code:2 sur:1 index:24 mini:2 equivalently:2 difficult:1 unfortunately:1 robert:1 statement:2 stated:1 design:1 ebastien:3 policy:9 unknown:7 perform:1 bianchi:5 upper:11 observation:2 markov:2 finite:9 implementable:2 magureanu:1 extended:2 communication:1 arbitrary:3 introduced:1 david:1 namely:1 paris:1 kl:29 varaiya:1 trans:1 address:2 beyond:1 adversary:1 below:2 xm:2 challenge:1 program:1 including:1 max:6 natural:2 solvable:1 arm:40 minimax:1 improve:1 nir:1 ict:1 nicol:5 determining:1 asymptotic:1 law:2 graf:1 permutation:3 proportional:1 allocation:3 proven:1 borrows:1 h2:1 foundation:1 consistent:1 minp:2 principle:1 balancing:1 supported:1 infeasible:1 offline:2 face:1 matroids:1 venkatachalam:1 distributed:2 feedback:23 dimension:3 valid:1 cumulative:3 resides:1 qn:6 adaptive:2 simplified:3 bm:19 polynomially:1 transaction:1 approximate:3 contradict:1 kullback:1 kkt:1 reveals:1 uai:1 assumed:1 xi:9 search:3 continuous:1 table:5 learn:1 channel:1 szepesvari:1 orgy:2 investigated:3 poly:4 hoda:2 marc:2 aistats:1 linearly:1 alepro:1 paul:1 n2:1 x1:1 referred:2 en:2 gai:2 shield:1 sub:1 explicit:3 exponential:6 lie:1 theorem:31 formula:1 embed:1 specific:6 maxi:1 explored:1 krishnamachari:2 dl:1 consist:1 exists:4 derives:1 sequential:4 supplement:2 yvette:1 horizon:5 gap:4 chen:1 ucb1:4 logarithmic:1 tze:2 bubeck:3 springer:2 satisfies:1 relies:3 acm:1 presentation:1 towards:1 lipschitz:1 feasible:1 infinite:1 typical:1 uniformly:6 determined:1 conservative:1 total:1 called:1 m3:10 ucb:10 select:5 linder:1 constructive:1 anantharam:1 correlated:1 |
5,338 | 5,832 | On Elicitation Complexity
Rafael Frongillo
University of Colorado, Boulder
Ian A. Kash
Microsoft Research
[email protected]
[email protected]
Abstract
Elicitation is the study of statistics or properties which are computable via empirical risk minimization. While several recent papers have approached the general question of which properties are elicitable, we suggest that this is the wrong
question?all properties are elicitable by first eliciting the entire distribution or
data set, and thus the important question is how elicitable. Specifically, what is
the minimum number of regression parameters needed to compute the property?
Building on previous work, we introduce a new notion of elicitation complexity
and lay the foundations for a calculus of elicitation. We establish several general
results and techniques for proving upper and lower bounds on elicitation complexity. These results provide tight bounds for eliciting the Bayes risk of any loss, a
large class of properties which includes spectral risk measures and several new
properties of interest.
1
Introduction
Empirical risk minimization (ERM) is a domininant framework for supervised machine learning,
and a key component of many learning algorithms. A statistic or property is simply a functional
assigning a vector of values to each distribution. We say that such a property is elicitable, if for
some loss function it can be represented as the unique minimizer of the expected loss under the
distribution. Thus, the study of which properties are elicitable can be viewed as the study of which
statistics are computable via ERM [1, 2, 3].
The study of property elicitation began in statistics [4, 5, 6, 7], and is gaining momentum in machine
learning [8, 1, 2, 3], economics [9, 10], and most recently, finance [11, 12, 13, 14, 15]. A sequence of
papers starting with Savage [4] has looked at the full characterization of losses which elicit the mean
of a distribution, or more generally the expectation of a vector-valued random variable [16, 3]. The
case of real-valued properties is also now well in hand [9, 1]. The general vector-valued case is still
generally open, with recent progress in [3, 2, 15]. Recently, a parallel thread of research has been
underway in finance, to understand which financial risk measures, among several in use or proposed
to help regulate the risks of financial institutions, are computable via regression, i.e., elicitable (cf.
references above). More often than not, these papers have concluded that most risk measures under
consideration are not elicitable, notable exceptions being generalized quantiles (e.g. value-at-risk,
expectiles) and expected utility [13, 12].
Throughout the growing momentum of the study of elicitation, one question has been central: which
properties are elicitable? It is clear, however, that all properties are ?indirectly? elicitable if one first
elicits the distribution using a standard proper scoring rule. Therefore, in the present work, we
suggest replacing this question with a more nuanced one: how elicitable are various properties?
Specifically, heeding the suggestion of Gneiting [7], we adapt to our setting the notion of elicitation
complexity introduced by Lambert et al. [17], which captures how many parameters one needs to
maintain in an ERM procedure for the property in question. Indeed, if a real-valued property is
found not to be elicitable, such as the variance, one should not abandon it, but rather ask how many
parameters are required to compute it via ERM.
1
Our work is heavily inspired by the recent progress along these lines of Fissler and Ziegel [15], who
show that spectral risk measures of support k have elicitation complexity at most k + 1. Spectral
risk measures are among those under consideration in the finance community, and this result shows
that while not elicitable in the classical sense, their elicitation complexity is still low, and hence one
can develop reasonable regression procedures for them. Our results extend to these and many other
risk measures (see ? 4.6), often providing matching lower bounds on the complexity as well.
Our contributions are the following. We first introduce an adapted definition of elicitation complexity which we believe to be the right notion to focus on going forward. We establish a few simple but
useful results which allow for a kind of calculus of elicitation; for example, conditions under which
the complexity of eliciting two properties in tandem is the sum of their individual complexities. In
? 3, we derive several techniques for proving both upper and lower bounds on elicitation complexity
which apply primarily to the Bayes risks from decision theory, or optimal expected loss functions.
The class includes spectral risk measures among several others; see ? 4. We conclude with brief
remarks and open questions.
2
Preliminaries and Foundation
Let ? be a set of outcomes and P ? ?(?) be a convex set of probability measures. The goal of
elicitation is to learn something about the distribution p ? P, specifically some function ?(p) such
as the mean or variance, by minimizing a loss function.
Definition 1. A property is a function ? : P ? Rk , for some k ? N, which associates a desired
.
report value to each distribution.1 We let ?r = {p ? P | r = ?(p)} denote the set of distributions p
corresponding to report value r.
Given a property ?, we want to ensure that the best result is to reveal the value of the property using
a loss function that evaluates the report using a sample from the distribution.
Definition 2. A loss function L : Rk ? ? ? R elicits a property ? : P ? Rk if for all p ? P,
.
?(p) = arginf r L(r, p), where L(r, p) = Ep [L(r, ?)]. A property is elicitable if some loss elicits it.
For example, when ? = R, the mean ?(p) = Ep [?] is elicitable via squared loss L(r, ?) = (r??)2 .
A well-known necessary condition for elicitability is convexity of the level sets of ?.
Proposition 1 (Osband [5]). If ? is elicitable, the level sets ?r are convex for all r ? ?(P).
One can easily check that the mean ?(p) = Ep [?] has convex level sets, yet the variance ?(p) =
Ep [(? ? Ep [?])2 ] does not, and hence is not elicitable [9].
It is often useful to work with a stronger condition, that not only is ?r convex, but it is the intersection
of a linear subspace with P. This condition is equivalent the existence of an identification function,
a functional describing the level sets of ? [17, 1].
Definition 3. A function V : R?? ? Rk is an identification function for ? : P ? Rk , or identifies
?, if for all r ? ?(P) it holds that p ? ?r ?? V (r, p) = 0 ? Rk , where as with L(r, p) above we
.
write V (r, p) = Ep [V (r, ?)]. ? is identifiable if there exists a V identifying it.
One can check for example that V (r, ?) = ? ? r identifies the mean.
We can now define the classes of identifiable and elicitable properties, along with the complexity
of identifying or eliciting a given property. Naturally, a property is k-identifiable if it is the link of
a k-dimensional identifiable property, and k-elicitable if it is the link of a k-dimensional elicitable
property. The elicitation complexity of a property is then simply the minimum dimension k needed
for it to be k-elicitable.
Definition 4. Let Ik (P) denote the class of all identifiable properties ? : P ?SRk , and Ek (P)
k
denote the
S class of all elicitable properties ? : P ? R . We write I(P) = k?N Ik (P) and
E(P) = k?N Ek (P).
? ? Ik (P) and f such that ? = f ? ?.
?
Definition 5. A property ? is k-identifiable if there exists ?
The identification complexity of ? is defined as iden(?) = min{k : ? is k-identifiable}.
1
We will also consider ? : P ? RN .
2
? ? Ek (P) and f such that ? = f ? ?.
? The
Definition 6. A property ? is k-elicitable if there exists ?
elicitation complexity of ? is defined as elic(?) = min{k : ? is k-elicitable}.
To make the above definitions concrete, recall that the variance ? 2 (p) = Ep [(Ep [?]??)2 ] is not elicitable, as its level sets are not convex, a necessary condition by Prop. 1. Note however that we may
?
write ? 2 (p) = Ep [? 2 ]?Ep [?]2 , which can be obtained from the property ?(p)
= (Ep [?], Ep [? 2 ]). It
?
is well-known [4, 7] that ? is both elicitable and identifiable as the expectation of a vector-valued random variable X(?) = (?, ? 2 ), using for example L(r, ?) = kr ?X(?)k2 and V (r, ?) = r ?X(?).
? : P ? R2 , and as no such
Thus, we can recover ? 2 as a link of the elicitable and identifiable ?
2
2
?
? : P ? R exists, we have iden(? ) = elic(? ) = 2.
In this example, the variance has a stronger property than merely being 2-identifiable and 2? that satisfies both of these simultaneously. In fact this
elicitable, namely that there is a single ?
is quite common, and identifiability provides geometric structure that we make use of in our lower
bounds. Thus, most of our results use this refined notion of elicitation complexity.
Definition 7. A property ? has (identifiable) elicitation complexity
? f such that ?
? ? Ek (P) ? Ik (P) and ? = f ? ?}.
?
elicI (?) = min{k : ??,
Note that restricting our attention to elicI effectively requires elicI (?) ? iden(?); specifically, if ?
? then ?
? must be identifiable as well. This restriction is only relevant
is derived from some elicitable ?,
for our lower bounds, as our upper bounds give losses explicitly.2 Note however that some restriction
on Ek (P) is necessary, as otherwise pathological constructions giving injective mappings from R to
Rk would render all properties 1-elicitable. To alleviate this issue, some authors require continuity
(e.g. [1]) while others like we do require identifiability (e.g. [15]), which can be motivated by the
fact that for any differentiable loss L for ?, V (r, ?) = ?r L(?, ?) will identify ? provided Ep [L]
has no inflection points or local minima. An important future direction is to relax this identifiability
assumption, as there are very natural (set-valued) properties with iden > elic.3
Our definition of elicitation complexity differs from the notion proposed by Lambert et al. [17], in
? above do not need to be individually elicitable. This turns out to have
that the components of ?
a large impact, as under their definition the property ?(p) = max??? p({?}) for finite ? has
elicitation complexity |?| ? 1, whereas under our definition elicI (?) = 2; see Example 4.3. Fissler
and Ziegel [15] propose a closer but still different definition, with the complexity being the smallest
k such that ? is a component of a k-dimensional elicitable property. Again, this definition can lead
to larger complexities than necessary; take for example the squared mean ?(p) = Ep [?]2 when
?
? = R, which has elicI (?) = 1 with ?(p)
= Ep [?] and f (x) = x2 , but is not elicitable and thus has
complexity 2 under [15]. We believe that, modulo regularity assumptions on Ek (P), our definition is
better suited to studying the difficulty of eliciting properties: viewing f as a (potentially dimensionreducing) link function, our definition captures the minimum number of parameters needed in an
ERM computation of the property in question, followed by a simple one-time application of f .
2.1
Foundations of Elicitation Complexity
In the remainder of this section, we make some simple, but useful, observations about iden(?) and
elicI (?). We have already discussed one such observation after Definition 7: elicI (?) ? iden(?).
It is natural to start with some trivial upper bounds. Clearly, whenever p ? P can be uniquely determined by some number of elicitable parameters then the elicitation complexity of every property is
at most that number. The following propositions give two notable applications of this observation.4
Proposition 2. When |?| = n, every property ? has elicI (?) ? n ? 1.
Proof. The probability distribution is determined by the probability of any n ? 1 outcomes, and the
probability associated with a given outcome is both elicitable and identifiable.
2
Our main lower bound (Thm 2) merely requires ? to have convex level sets, which is necessary by Prop. 1.
One may take for example ?(p) = argmaxi p(Ai ) for a finite measurable partition A1 , . . . , An of ?.
4
Note that these restrictions on ? may easily be placed on P instead; e.g. finite ? is equivalent to P having
support on a finite subset of ?, or even being piecewise constant on some disjoint events.
3
3
Proposition 3. When ? = R,5 every property ? has elicI (?) ? ? (countable).6
One well-studied class of properties are those where ? is linear, i.e., the expectation of some vectorvalued random variable. All such properties are elicitable and identifiable (cf. [4, 8, 3]), with
elicI (?) ? k, but of course the complexity can be lower if the range of ? is not full-dimensional.
Lemma 1. Let X : ? ? Rk be P-integrable and ?(p) = Ep [X]. Then elicI (?) =
dim(affhull(?(P))), the dimension of the affine hull of the range of ?.
It is easy to create redundant properties in various ways. For example, given elicitable properties
.
?1 and ?2 the property ? = {?1 , ?2 , ?1 + ?2 } clearly contains redundant information. A concrete
case is ? = {mean squared, variance, 2nd moment}, which, as we have seen, has elicI (?) = 2. The
following definitions and lemma capture various aspects of a lack of such redundancy.
Definition 8. Property ? : P ? Rk in I(P) is of full rank if iden(?) = k.
Note that there are two ways for a property to fail to be full rank. First, as the examples above
suggest, ? can be ?redundant? so that it is a link of a lower-dimensional identifiable property. Full
rank can also be violated if more dimensions are needed to identify the property than to specify it.
This is the case with, e.g., the variance which is a 1 dimensional property but has iden(? 2 ) = 2.
Definition 9. Properties ?, ?0 ? I(P) are independent if iden({?, ?0 }) = iden(?) + iden(?0 ).
Lemma 2. If ?, ?0 ? E(P) are full rank and independent, then elicI ({?, ?0 }) = elicI (?)+elicI (?0 ).
To illustrate the lemma, elicI (variance) = 2, yet ? = {mean,variance} has elicI (?) = 2, so clearly
the mean and variance are not both independent and full rank. (As we have seen, variance is not full
rank.) However, the mean and second moment satisfy both by Lemma 1.
Another important case is when ? consists of some number of distinct quantiles. Osband [5] essentially showed that quantiles are independent and of full rank, so their elicitation complexity is the
number of quantiles being elicited.
Lemma 3. Let ? = R and P be a class of probability measures with continuously differentiable and invertible CDFs F , which is sufficiently rich in the sense that for all x1 , . . . , xk ? R,
span({F ?1 (x1 ), . . . , F ?1 (xk )}, F ? P) = Rk . Let q? , denote the ?-quantile function. Then if
?1 , . . . , ?k are all distinct, ? = {q?1 , . . . , q?k } has elicI (?) = k.
The quantile example in particular allows us to see that all complexity classes, including ?, are
occupied. In fact, our results to follow will show something stronger: even for real-valued properties
? : P ? R, all classes are occupied; we give here the result that follows from our bounds on spectral
risk measures in Example 4.4, but this holds for many other P; see e.g. Example 4.2.
Proposition 4. Let P as in Lemma 3. Then for all k ? N there exists ? : P ? R with elicI (?) = k.
3
Eliciting the Bayes Risk
In this section we prove two theorems that provide our main tools for proving upper and lower
bounds respectively on elicitation complexity. Of course many properties are known to be elicitable, and the losses that elicit them provide such an upper bound for that case. We provide such
a construction for properties that can be expressed as the pointwise minimum of an indexed set of
functions. Interestingly, our construction does not elicit the minimum directly, but as a joint elicitation of the value and the function that realizes this value. The form (1) is that of a scoring rule for
the linear property p 7? Ep [Xa ], except that here the index a itself is also elicited.7
Theorem 1. Let {Xa : ? ? R}a?A be a set of P-integrable functions indexed by A ? Rk . Then if
inf a Ep [Xa ] is attained, the property ?(p) = mina Ep [Xa ] is (k + 1)-elicitable. In particular,
L((r, a), ?) = H(r) + h(r)(Xa ? r)
elicits p 7? {(?(p), a) : Ep [Xa ] = ?(p)} for any strictly decreasing h : R ? R+ with
(1)
d
dr H
= h.
Here and throughout, when ? = Rk we assume the Borel ?-algebra.
Omitted proofs can be found in the appendix of the full version of this paper.
7
As we focus on elicitation complexity, we have not tried to characterize all ways to elicit this joint property,
or other properties we give explicit losses for. See ? 4.1 for an example where additional losses are possible.
5
6
4
Proof. We will work with gains instead of losses, and show that S((r, a), ?) = g(r) + dgr (Xa ? r)
elicits p 7? {(?(p), a) : Ep [Xa ] = ?(p)} for ?(p) = maxa Ep [Xa ]. Here g is convex with strictly
increasing and positive subgradient dg.
For any fixed a, we have by the subgradient inequality,
S((r, a), p) = g(r) + dgr (Ep [Xa ] ? r) ? g(Ep [Xa ]) = S((Ep [Xa ], a), p) ,
and as dg is strictly increasing, g is strictly convex, so r = Ep [Xa ] is the unique maximizer. Now
? p) = S((Ep [Xa ], a), p), we have
letting S(a,
? p) = argmax g(Ep [Xa ]) = argmax Ep [Xa ] ,
argmax S(a,
a?A
a?A
a?A
because g is strictly increasing. We now have
argmax S((r, a), p) = (Ep [Xa ], a) : a ? argmax Ep [Xa ] .
a?A
a?A,r?R
One natural way to get such an indexed set of functions is to take an arbitrary loss function L(r, ?),
in which case this pointwise minimum corresponds to the Bayes risk, which is simply the minimum
possible expected loss under some distribution p.
Definition 10. Given loss function L : A ? ? ? R on some prediction set A, the Bayes risk of L
is defined as L(p) := inf a?A L(a, p).
One illustration of the power of Theorem 1 is that the Bayes risk of a loss eliciting a k-dimensional
property is itself (k + 1)-elicitable.
Corollary 1. If L : Rk ? ? ? R is a loss function eliciting ? : P ? Rk , then the loss
L((r, a), ?) = L0 (a, ?) + H(r) + h(r)(L(a, ?) ? r)
elicits {L, ?}, where h : R ? R+ is any positive strictly decreasing function, H(r) =
and L0 is any surrogate loss eliciting ?.8 If ? ? Ik (P), elicI (L) ? k + 1.
(2)
Rr
0
h(x)dx,
We now turn to our second theorem which provides lower bounds for the elicitation complexity
of the Bayes risk. A first observation, which follows from standard convex analysis, is that L is
concave, and thus it is unlikely to be elicitable directly, as the level sets of L are likely to be nonconvex. To show a lower bound greater than 1, however, we will need much stronger techniques.
In particular, while L must be concave, it may not be strictly so, thus enabling level sets which are
potentially amenable to elicitation. In fact, L must be flat between any two distributions which share
a minimizer. Crucial to our lower bound is the fact that whenever the minimizer of L differs between
two distributions, L is essentially strictly concave between them.
Lemma 4. Suppose loss L with Bayes risk L elicits ? : P ? Rk . Then for any p, p0 ? P with
?(p) 6= ?(p0 ), we have L(?p + (1 ? ?)p0 ) > ?L(p) + (1 ? ?)L(p0 ) for all ? ? (0, 1).
With this lemma in hand we can prove our lower bound. The crucial insight is that an identification
function for the Bayes risk of a loss eliciting a property can, through a link, be used to identify that
property. Corollary 1 tells us that k + 1 parameters suffice for the Bayes risk of a k-dimensional
property, and our lower bound shows this is often necessary. Only k parameters suffice, however,
when the property value itself provides all the information required to compute the Bayes risk; for
example, dropping the y 2 term from squared loss gives L(x, y) = x2 ? 2xy and L(p) = ?Ep [y]2 ,
giving elic(L) = 1. Thus the theorem splits the lower bound into two cases.
Theorem 2. If a loss L elicits some ? ? Ek (P) with elicitation complexity elicI (?) = k, then its
Bayes risk L has elicI (L) ? k. Moreover, if we can write L = f ? ? for some function f : Rk ? R,
then we have elicI (L) = k; otherwise, elicI (L) = k + 1.
? ? E` such that L = g ? ?
? for some g : R` ? R.
Proof. Let ?
8
Note that one could easily lift the requirement that ? be a function, and allow ?(p) to be the set of minimizers of the loss (cf. [18]). We will use this additional power in Example 4.4.
5
?
? 0 ) implies ?(p) = ?(p0 ). Otherwise, we
We show by contradiction that for all p, p0 ? P, ?(p)
= ?(p
0
0
0
?
?
have p, p with ?(p) = ?(p ), and thus L(p) = L(p ), but ?(p) 6= ?(p0 ). Lemma 4 would then give
? r? are convex by Prop. 1,
us some p? = ?p + (1 ? ?)p0 with L(p? ) > L(p). But as the level sets ?
? ? ) = ?(p),
?
we would have ?(p
which would imply L(p? ) = L(p).
? But as ?
? ? E` , this implies
We now can conclude that there exists h : R` ? Rk such that ? = h ? ?.
?
elicI (?) ? `, so clearly we need ` ? k. Finally, if ` = k we have L = g ? ? = g ? h?1 ? ?. The
upper bounds follow from Corollary 1.
4
Examples and Applications
We now give several applications of our results. Several upper bounds are novel, as well as all lower
bounds greater than 2. In the examples, unless we refer to ? explicitly we will assume ? = R and
write y ? ? so that y ? p. In each setting, we also make several standard regularity assumptions
which we suppress for ease of exposition ? for example, for the variance and variantile we assume
finite first and second moments (which must span R2 ), and whenever we discuss quantiles we will
assume that P is as in Lemma 3, though we will not require as much regularity for our upper bounds.
4.1
Variance
In Section 2 we showed that elicI (? 2 ) = 2. As a warm up, let us see how to recover this statement
using our results on the Bayes risk. We can view ? 2 as the Bayes risk of squared loss L(x, y) = (x?
y)2 , which of course elicits the mean: L(p) = minx?R Ep [(x ? y)2 ] = Ep [(Ep [y] ? y)2 ] = ? 2 (p).
This gives us elicI (? 2 ) ? 2 by Corollary 1, with a matching lower bound by Theorem 2, as the
variance is not simply a function of the mean. Corollary 1 gives losses such as L((x, v), y) =
e?v ((x ? y)2 ? v) ? e?v which elict {Ep [y], ? 2 (p)}, but in fact there are losses which cannot
be represented by the form (2), showing that we do not have a full characterization;
for example,
?
? was
L((x,
v), y) = v 2 + v(x ? y)(2(x + y) + 1) + (x ? y)2 (x + y)2 + x + y + 1 . This L
h y i
2
h
i
1 ?1/2
generated via squared loss
z ? y2
with respect to the norm kzk2 = z > ?1/2 1
z, which
elicits the first two moments, and link function (z1 , z2 ) 7? (z1 , z2 ? z12 ).
4.2
Convex Functions of Means
Another simple example is ?(p) = G(Ep [X]) for some strictly convex function G : Rk ? R and
P-integrable X : ? ? Rk . To avoid degeneracies, we assume dim affhull{Ep [X] : p ? P} = k,
i.e. ? is full rank. Letting {dGp }p?P be a selection of subgradients of G, the loss L(r, ?) =
?(G(r) + dGr (X(?) ? r)) elicits ? : p 7? Ep [X] (cf. [3]), and moreover we have ?(p) = ?L(p).
By Lemma 1, elicI (?) = k. One easily checks that L = G ? ?, so now by Theorem 2, elicI (?) = k
as well. Letting {Xk }k?N be a family of such ?full rank? random variables, this gives us a sequence
of real-valued properties ?k (p) = kEp [X]k2 with elicI (?k ) = k, proving Proposition 4.
4.3
Modal Mass
With ? = R consider the property ?? (p) = maxx?R p([x ? ?, x + ?]), namely, the maximum
probability mass contained in an interval of width 2?. Theorem 1 easily shows elicI (?? ) ? 2,
as ??? (p) = argmaxx?R p([x ? ?, x + ?]) is elicited by L(x, y) = 1|x?y|>? , and ?? (p) = 1 ?
L(p). Similarly, in the case of finite ?, ?(p) = max??? p({?}) is simply the expected score (gain
rather than loss) of the mode ?(p) = argmax??? p({?}), which is elicitable for finite ? (but not
otherwise; see Heinrich [19]).
In both cases, one can easily check that the level sets of ? are not convex, so elicI (?) = 2; alternatively Theorem 2 applies in the first case. As mentioned following Definition 6, the result for finite
? differs from the definitions of Lambert et al. [17], where the elicitation complexity of ? is |?| ? 1.
6
4.4
Expected Shortfall and Other Spectral Risk Measures
One important application of our results on the elicitation complexity of the Bayes risk is the elicitability of various financial risk measures. One of the most popular financial risk measures is
expected shortfall ES? : P ? R, also called conditional value at risk (CVaR) or average value at
risk (AVaR), which we define as follows (cf. [20, eq.(18)], [21, eq.(3.21)]):
ES? (p) = inf Ep ?1 (z ? y)1z?y ? z = inf Ep ?1 (z ? y)(1z?y ? ?) ? y .
(3)
z?R
z?R
Despite the importance of elicitability to financial regulation [11, 22], ES? is not elicitable [7]. It
was recently shown by Fissler and Ziegel [15], however, that elicI (ES? ) = 2. They
R also consider the
broader class of spectral risk measures, which can be represented as ?? (p) = [0,1] ES? (p)d?(?),
where ? is a probability measure on [0, 1] (cf. [20, eq. (36)]). In the case where ? has finite support
Pk
? = i=1 ?i ??i for point distributions ?, ?i > 0, we can rewrite ?? using the above as:
( " k
#)
k
X
X ?i
?? (p) =
?i ES?i (p) = inf Ep
(zi ? y)(1zi ?y ? ?i ) ? y
.
(4)
?
z?Rk
i=1
i=1 i
They conclude elicI (?? ) ? k + 1 unless ?({1}) = 1 in which case elicI (?? ) = 1. We show how
to recover these results together with matching lower bounds. It is well-known that the infimum in
eq. (4) is attained by any of the k quantiles in q?1 (p), . . . , q?k (p), so we conclude elicI (?? ) ? k + 1
by Theorem 1, and in particular the property {?? , q?1 , . . . , q?k } is elicitable. The family of losses
from Corollary 1 coincide with the characterization of Fissler and Ziegel [15] (see ? D.1). For a
lower bound, as elicI ({q?1 , . . . , q?k }) = k whenever the ?i are distinct by Lemma 3, Theorem 2
gives us elicI (?? ) = k + 1 whenever ?({1}) < 1, and of course elicI (?? ) = 1 if ?({1}) = 1.
4.5
Variantile
The ? -expectile, a type of generalized quantile introduced by Newey and Powell [23], is defined as
the solution x = ?? to the equation Ep [|1x?y ? ? |(x ? y)] = 0. (This also shows ?? ? I1 .) Here
we propose the ? -variantile, an asymmetric variance-like measure with respect to the ? -expectile:
just as the mean is the solution x = ? to the equation Ep [x? y] = 0, and the variance
is ? 2 (p) =
Ep [(? ? y)2 ], we define the ? -variantile ??2 by ??2 (p) = Ep |1?? ?y ? ? |(?? ? y)2 .
It is well-known that ?? can be expressed as the minimizer of a asymmetric least squares problem:
the loss L(x, y) = |1x?y ? ? |(x ? y)2 elicits ?? [23, 7]. Hence, just as the variance turned out to
be a Bayes risk for the mean, so is the ? -variantile for the ? -expectile:
?? = argmin Ep |1x?y ? ? |(x ? y)2 =? ??2 = min Ep |1x?y ? ? |(x ? y)2 .
x?R
x?R
We now see the pair
4.6
{?? , ??2 }
is elicitable by Corollary 1, and by Theorem 2 we have elicI (??2 ) = 2.
Deviation and Risk Measures
Rockafellar and Uryasev [21] introduce ?risk quadrangles? in which they relate a risk R, deviation
D, error E, and a statistic S, all functions from random variables to the reals, as follows:
R(X) = min{C + E(X ? C)},
C
D(X) = min{E(X ? C)},
C
S(X) = argmin{E(X ? C)} .
C
Our results provide tight bounds for many of the risk and deviation measures in their paper. The most
immediate case is the expectation quadrangle case, where E(X) = E[e(X)] for some e : R ? R.
In this case, if S(X) ? I1 (P) Theorem 2 implies elicI (R) = elicI (D) = 2 provided S is nonconstant and e non-linear. This includes several of their examples, e.g. truncated mean, log-exp, and
rate-based. Beyond the expectation case, the authors show a Mixing Theorem, where they consider
)
( k
)
( k
X
X
X
0
D(X) = min min
?i Ei (X ? C ? Bi )
?i Bi = 0 = min
?i Ei (X ? Bi ) .
C B1 ,..,Bk
i=1
i
B10 ,..,Bk0
i=1
Once again, if the Ei are all of expectation type and Si ? I1 , Theorem 1 gives elicI (D) =
elicI (R) ? k + 1, with a matching lower bound from Theorem 2 provided the Si are all independent. The Reverting Theorem for a pair E1 , E2 can be seen as a special case of the above where
7
one replaces E2 (X) by E2 (?X). Consequently, we have tight bounds for the elicitation complexity of several other examples, including superquantiles (the same as spectral risk measures), the
quantile-radius quadrangle, and optimized certainty equivalents of Ben-Tal and Teboulle [24].
Our results offer an explaination for the existence of regression procedures for some of these
risk/deviation measures. For example, a proceedure called superquantile regression was introduced
in Rockafellar et al. [25], which computes spectral risk measures. In light of Theorem 1, one could
interpret their procedure as simply performing regression on the k different quantiles as well as the
Bayes risk. In fact, our results show that any risk/deviation generated by mixing several expectation
quadrangles will have a similar procedure, in which the Bi0 variables are simply computed along side
the measure of interest. Even more broadly, such regression procedures exist for any Bayes risk.
5
Discussion
We have outlined a theory of elicitation complexity which we believe is the right notion of complexity for ERM, and provided techniques and results for upper and lower bounds. In particular, we now
have tight bounds for the large class of Bayes risks, including several applications of note such as
spectral risk measures. Our results also offer an explanation for why procedures like superquantile
regression are possible, and extend this logic to all Bayes risks.
There many natural open problems in elicitation complexity. Perhaps the most apparent are the
characterizations of the complexity classes {? : elic(?) = k}, and in particular, determining the
elicitation complexity of properties which are known to be non-elicitabile, such as the mode [19]
and smallest confidence interval [18].
In this paper we have focused on elicitation complexity with respect to the class of identifiable
properties I, which we denoted elicI . This choice of notation was deliberate; one may define
? ? Ek ? C, ?f, ? = f ? ?}
? to be the complexity with respect to some arbitrary
elicC := min{k : ??
class of properties C. Some examples of interest might be elicE for expected values, of interest to
the prediction market literature [8], and eliccvx for properties elicitable by a loss which is convex in
r, of interest for efficiently performing ERM.
Another interesting line of questioning follows from the notion of conditional elicitation, properties
which are elicitable as long as the value of some other elicitable property is known. This notion
was introduced by Emmer et al. [11], who showed that the variance and expected shortfall are both
conditionally elicitable, on Ep [y] and q? (p) respectively. Intuitively, knowing that ? is elicitable
conditional on an elicitable ?0 would suggest that perhaps the pair {?, ?0 } is elicitable; Fissler and
Ziegel [15] note that it is an open question whether this joint elicitability holds in general. The Bayes
risk L for ? is elicitable conditioned on ?, and as we saw above, the pair {?, L} is jointly elicitable
as well. We give a counter-example in Figure 1, however, which also illustrates the subtlety of
characterizing all elicitable properties.
p2
p2
p3
p3
p1
p1
Figure 1: Depictions of the level sets of two properties, one elicitable and the other not. The left is a Bayes risk
together with its property, and thus elicitable, while the right is shown in [3] not to be elicitable. Here the planes
are shown to illustrate the fact that these are both conditionally elicitable: the height of the plane (the intersept
(p3 , 0, 0) for example) is elicitable from the characterizations for scalar properties [9, 1], and conditioned on
the plane, the properties are both linear and thus links of expected values, which are also elicitable.
8
References
[1] Ingo Steinwart, Chlo Pasin, Robert Williamson, and Siyu Zhang. Elicitation and Identification of Properties. In Proceedings of The 27th Conference on Learning Theory, pages 482?526, 2014.
[2] A. Agarwal and S. Agrawal. On Consistent Surrogate Risk Minimization and Property Elicitation. In
COLT, 2015.
[3] Rafael Frongillo and Ian Kash. Vector-Valued Property Elicitation. In Proceedings of the 28th Conference
on Learning Theory, pages 1?18, 2015.
[4] L.J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical
Association, pages 783?801, 1971.
[5] Kent Harold Osband. Providing Incentives for Better Cost Forecasting. University of California, Berkeley,
1985.
[6] T. Gneiting and A.E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the
American Statistical Association, 102(477):359?378, 2007.
[7] T. Gneiting. Making and Evaluating Point Forecasts. Journal of the American Statistical Association,
106(494):746?762, 2011.
[8] J. Abernethy and R. Frongillo. A characterization of scoring rules for linear properties. In Proceedings of
the 25th Conference on Learning Theory, pages 1?27, 2012.
[9] N.S. Lambert. Elicitation and Evaluation of Statistical Forecasts. Preprint, 2011.
[10] N.S. Lambert and Y. Shoham. Eliciting truthful answers to multiple-choice questions. In Proceedings of
the 10th ACM conference on Electronic commerce, pages 109?118, 2009.
[11] Susanne Emmer, Marie Kratz, and Dirk Tasche. What is the best risk measure in practice? A comparison
of standard measures. arXiv:1312.1645 [q-fin], December 2013. arXiv: 1312.1645.
[12] Fabio Bellini and Valeria Bignozzi. Elicitable risk measures. This is a preprint of an article accepted for
publication in Quantitative Finance (doi 10.1080/14697688.2014. 946955), 2013.
[13] Johanna F. Ziegel. Coherence and elicitability. Mathematical Finance, 2014. arXiv: 1303.1690.
[14] Ruodu Wang and Johanna F. Ziegel. Elicitable distortion risk measures: A concise proof. Statistics &
Probability Letters, 100:172?175, May 2015.
[15] Tobias Fissler and Johanna F. Ziegel. Higher order elicitability and Osband?s principle. arXiv:1503.08123
[math, q-fin, stat], March 2015. arXiv: 1503.08123.
[16] A. Banerjee, X. Guo, and H. Wang. On the optimality of conditional expectation as a Bregman predictor.
IEEE Transactions on Information Theory, 51(7):2664?2669, July 2005.
[17] N.S. Lambert, D.M. Pennock, and Y. Shoham. Eliciting properties of probability distributions. In Proceedings of the 9th ACM Conference on Electronic Commerce, pages 129?138, 2008.
[18] Rafael Frongillo and Ian Kash. General truthfulness characterizations via convex analysis. In Web and
Internet Economics, pages 354?370. Springer, 2014.
[19] C. Heinrich. The mode functional is not elicitable. Biometrika, page ast048, 2013.
[20] Hans Fllmer and Stefan Weber. The Axiomatic Approach to Risk Measures for Capital Determination.
Annual Review of Financial Economics, 7(1), 2015.
[21] R. Tyrrell Rockafellar and Stan Uryasev. The fundamental risk quadrangle in risk management, optimization and statistical estimation. Surveys in Operations Research and Management Science, 18(1):33?53,
2013.
[22] Tobias Fissler, Johanna F. Ziegel, and Tilmann Gneiting. Expected Shortfall is jointly elicitable with
Value at Risk - Implications for backtesting. arXiv:1507.00244 [q-fin], July 2015. arXiv: 1507.00244.
[23] Whitney K. Newey and James L. Powell. Asymmetric least squares estimation and testing. Econometrica:
Journal of the Econometric Society, pages 819?847, 1987.
[24] Aharon Ben-Tal and Marc Teboulle. AN OLD-NEW CONCEPT OF CONVEX RISK MEASURES: THE
OPTIMIZED CERTAINTY EQUIVALENT. Mathematical Finance, 17(3):449?476, 2007.
[25] R. T. Rockafellar, J. O. Royset, and S. I. Miranda. Superquantile regression with applications to buffered
reliability, uncertainty quantification, and conditional value-at-risk. European Journal of Operational
Research, 234:140?154, 2014.
9
| 5832 |@word version:1 stronger:4 norm:1 nd:1 open:4 calculus:2 tried:1 p0:8 kent:1 concise:1 moment:4 contains:1 score:1 interestingly:1 savage:2 com:1 z2:2 si:2 assigning:1 yet:2 must:4 dx:1 partition:1 plane:3 xk:3 institution:1 characterization:7 provides:3 math:1 zhang:1 height:1 mathematical:2 along:3 ik:5 consists:1 prove:2 introduce:3 indeed:1 market:1 expected:11 p1:2 growing:1 inspired:1 decreasing:2 tandem:1 increasing:3 provided:4 moreover:2 suffice:2 notation:1 mass:2 what:2 kind:1 argmin:2 maxa:1 certainty:2 berkeley:1 every:3 quantitative:1 concave:3 finance:6 biometrika:1 wrong:1 k2:2 positive:2 gneiting:4 local:1 despite:1 might:1 studied:1 ease:1 cdfs:1 range:2 bi:3 unique:2 commerce:2 testing:1 practice:1 differs:3 procedure:7 powell:2 empirical:2 elicit:4 maxx:1 shoham:2 matching:4 confidence:1 suggest:4 get:1 cannot:1 selection:1 risk:59 restriction:3 equivalent:4 measurable:1 economics:3 starting:1 attention:1 convex:16 focused:1 survey:1 identifying:2 contradiction:1 rule:4 insight:1 financial:6 proving:4 notion:8 siyu:1 construction:3 suppose:1 colorado:2 heavily:1 modulo:1 bk0:1 associate:1 lay:1 asymmetric:3 ep:49 preprint:2 wang:2 capture:3 counter:1 mentioned:1 elicitability:6 complexity:41 convexity:1 econometrica:1 heinrich:2 tobias:2 personal:1 tight:4 rewrite:1 algebra:1 easily:6 joint:3 represented:3 various:4 distinct:3 argmaxi:1 doi:1 approached:1 tell:1 lift:1 outcome:3 refined:1 abernethy:1 quite:1 apparent:1 larger:1 valued:9 say:1 relax:1 otherwise:4 distortion:1 statistic:6 jointly:2 itself:3 abandon:1 sequence:2 differentiable:2 rr:1 questioning:1 agrawal:1 propose:2 remainder:1 relevant:1 turned:1 mixing:2 regularity:3 requirement:1 quadrangle:5 ben:2 help:1 derive:1 develop:1 illustrate:2 stat:1 progress:2 eq:4 p2:2 implies:3 direction:1 radius:1 hull:1 viewing:1 require:3 preliminary:1 alleviate:1 proposition:6 strictly:10 hold:3 sufficiently:1 exp:1 mapping:1 smallest:2 omitted:1 bi0:1 estimation:3 realizes:1 axiomatic:1 saw:1 individually:1 create:1 tool:1 minimization:3 proceedure:1 stefan:1 clearly:4 rather:2 occupied:2 avoid:1 frongillo:4 dgp:1 broader:1 publication:1 corollary:7 derived:1 focus:2 l0:2 rank:9 check:4 inflection:1 sense:2 dim:2 minimizers:1 entire:1 unlikely:1 kep:1 going:1 i1:3 issue:1 among:3 colt:1 denoted:1 special:1 once:1 having:1 newey:2 future:1 others:2 report:3 piecewise:1 few:1 primarily:1 pathological:1 dg:2 simultaneously:1 individual:1 iden:11 argmax:6 microsoft:2 maintain:1 interest:5 evaluation:1 light:1 amenable:1 implication:1 bregman:1 closer:1 necessary:6 injective:1 xy:1 unless:2 indexed:3 old:1 desired:1 teboulle:2 whitney:1 cost:1 deviation:5 subset:1 predictor:1 characterize:1 answer:1 truthfulness:1 fundamental:1 shortfall:4 invertible:1 together:2 continuously:1 concrete:2 squared:6 central:1 again:2 management:2 dr:1 ek:8 american:3 includes:3 rockafellar:4 z12:1 satisfy:1 notable:2 explicitly:2 kzk2:1 valeria:1 view:1 start:1 bayes:22 recover:3 parallel:1 raf:1 elicited:3 identifiability:3 contribution:1 johanna:4 square:2 variance:18 who:2 efficiently:1 identify:3 identification:5 lambert:6 whenever:5 definition:23 evaluates:1 james:1 e2:3 naturally:1 proof:5 associated:1 chlo:1 degeneracy:1 gain:2 popular:1 ask:1 recall:1 attained:2 higher:1 supervised:1 follow:2 specify:1 modal:1 though:1 xa:18 just:2 hand:2 steinwart:1 web:1 replacing:1 ei:3 banerjee:1 lack:1 maximizer:1 continuity:1 mode:3 infimum:1 reveal:1 perhaps:2 nuanced:1 believe:3 building:1 concept:1 y2:1 hence:3 conditionally:2 
width:1 uniquely:1 harold:1 generalized:2 mina:1 weber:1 consideration:2 novel:1 recently:3 kash:3 began:1 common:1 vectorvalued:1 functional:3 extend:2 discussed:1 association:3 interpret:1 refer:1 buffered:1 ai:1 outlined:1 similarly:1 reliability:1 han:1 depiction:1 something:2 recent:3 showed:3 inf:5 nonconvex:1 inequality:1 scoring:4 integrable:3 seen:3 minimum:8 additional:2 greater:2 arginf:1 truthful:1 redundant:3 july:2 full:13 multiple:1 adapt:1 determination:1 offer:2 long:1 e1:1 a1:1 impact:1 prediction:3 regression:9 essentially:2 expectation:9 arxiv:7 agarwal:1 whereas:1 want:1 interval:2 concluded:1 crucial:2 pennock:1 december:1 split:1 easy:1 zi:2 knowing:1 computable:3 thread:1 motivated:1 whether:1 utility:1 forecasting:1 osband:4 render:1 remark:1 generally:2 useful:3 clear:1 exist:1 deliberate:1 disjoint:1 broadly:1 write:5 dropping:1 incentive:1 key:1 redundancy:1 capital:1 marie:1 miranda:1 econometric:1 subgradient:2 merely:2 sum:1 letter:1 uncertainty:1 throughout:2 reasonable:1 family:2 electronic:2 p3:3 decision:1 appendix:1 coherence:1 bound:30 internet:1 followed:1 replaces:1 identifiable:16 annual:1 adapted:1 x2:2 flat:1 tal:2 aspect:1 min:10 span:2 optimality:1 subgradients:1 performing:2 march:1 making:1 intuitively:1 erm:7 boulder:1 equation:2 describing:1 turn:2 fail:1 discus:1 needed:4 reverting:1 letting:3 tilmann:1 studying:1 operation:1 aharon:1 apply:1 spectral:10 regulate:1 indirectly:1 existence:2 cf:6 ensure:1 giving:2 quantile:4 establish:2 eliciting:12 classical:1 society:1 question:10 already:1 looked:1 surrogate:2 minx:1 subspace:1 fabio:1 elicits:12 link:8 cvar:1 trivial:1 pointwise:2 index:1 illustration:1 providing:2 minimizing:1 regulation:1 robert:1 potentially:2 statement:1 relate:1 suppress:1 susanne:1 countable:1 proper:2 upper:10 observation:4 ingo:1 finite:9 enabling:1 fin:3 truncated:1 immediate:1 dirk:1 rn:1 arbitrary:2 thm:1 community:1 introduced:4 bk:1 namely:2 required:2 pair:4 z1:2 optimized:2 california:1 beyond:1 elicitation:42 gaining:1 max:2 including:3 explanation:1 power:2 event:1 natural:4 difficulty:1 warm:1 quantification:1 brief:1 imply:1 identifies:2 stan:1 raftery:1 b10:1 nonconstant:1 review:1 geometric:1 literature:1 underway:1 determining:1 loss:37 suggestion:1 interesting:1 foundation:3 affine:1 consistent:1 article:1 principle:1 share:1 course:4 placed:1 side:1 allow:2 understand:1 characterizing:1 dimension:3 evaluating:1 rich:1 computes:1 forward:1 author:2 coincide:1 dgr:3 uryasev:2 transaction:1 rafael:3 logic:1 b1:1 conclude:4 alternatively:1 why:1 learn:1 operational:1 argmaxx:1 williamson:1 european:1 marc:1 pk:1 main:2 x1:2 quantiles:7 borel:1 momentum:2 explicit:1 ian:3 rk:20 theorem:19 showing:1 r2:2 exists:6 restricting:1 effectively:1 kr:1 importance:1 conditioned:2 illustrates:1 forecast:2 suited:1 intersection:1 simply:7 likely:1 expressed:2 contained:1 scalar:1 subtlety:1 applies:1 springer:1 corresponds:1 minimizer:4 satisfies:1 acm:2 prop:3 conditional:5 viewed:1 goal:1 consequently:1 exposition:1 specifically:4 determined:2 except:1 tyrrell:1 lemma:13 called:2 accepted:1 e:6 exception:1 support:3 guo:1 violated:1 |
5,339 | 5,833 | Online Learning with Adversarial Delays
Kent Quanrud? and Daniel Khashabi?
Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, IL 61801
{quanrud2,khashab2}@illinois.edu
Abstract
We study the performance of standard online learning algorithms when the feedback is delayed by an adversary. We show that online-gradient-descent
?
[1] and follow-the-perturbed-leader [2] achieve regret O( D) in the
delayed setting, where D is the sum
? of delays of each round?s feedback. This
bound collapses to an optimal O( T ) bound in the usual setting of no delays
(where D = T ). Our main contribution is to show that standard algorithms for
online learning already have simple regret bounds in the most general setting of
delayed feedback, making adjustments to the analysis and not to the algorithms
themselves. Our results help affirm and clarify the success of recent algorithms in
optimization and machine learning that operate in a delayed feedback model.
1
Introduction
Consider the following simple game. Let K be a bounded set, such as the unit `1 ball or a collection
of n experts. Each round t, we pick a point xt ? K. An adversary then gives us a cost function ft ,
PT
and we incur the loss `t = ft (xt ). After T rounds, our total loss is the sum LT = t=1 `t , which
we want to minimize.
We cannot hope to beat the adversary, so to speak, when the adversary picks the cost function after
we select our point. There is margin for optimism, however, if rather than evaluate our total loss in
absolute terms, we compare our strategy to the best fixed point in hindsight. The regret of a strategy
PT
PT
x1 , . . . , xT ? K is the additive difference R(T ) = t=1 ft (xt ) ? arg minx?K t=1 ft (x).
Surprisingly, one can obtain positive results in terms of regret. Kalai and Vempala
? showed that a
simple and randomized follow-the-leader type algorithm achieves R(T ) = O( T ) in expectation
for linear cost functions [2] (here, the big-O notation assumes that the diameter of K and the ft ?s
are bounded by constants). If K is convex, then even if the cost vectors are more generally convex
cost functions (where we incur losses of the form `t = ft (xt ), ?
with ft a convex function), Zinkevich showed that gradient descent achieves regret R(T ) = O( T ) [1]. There is a large body of
theoretical literature about this setting, called online learning (see for example the surveys by Blum
[3], Shalev-Shwartz [4], and Hazan [5]).
Online learning is general enough to be applied to a diverse family of problems. For example, Kalai
and Vempala?s algorithm can be applied to online combinatorial problems such as shortest paths
[6], decision trees [7], and data structures [8, 2]. In addition to basic machine learning problems
with convex loss functions, Zinkevich considers applications to industrial optimization, where the
?
http://illinois.edu/~quanrud2/. Supported in part by NSF grants CCF-1217462, CCF1319376, CCF-1421231, CCF-1526799.
?
http://illinois.edu/~khashab2/. Supported in part by a grant from Google.
1
value of goods is not known until after the goods are produced. Other examples of applications of
online learning include universal portfolios in finance [9] and online topic-ranking for multi-labeled
documents [10].
The standard setting assumes that the cost vector ft (or more generally, the feedback) is given to and
processed by the player before making the next decision in round t + 1. Philosophically, this is not
how decisions are made in real life: we rush through many different things at the same time with no
pause for careful consideration, and we may not realize our mistakes for a while. Unsurprisingly, the
assumption of immediate feedback is too restrictive for many real applications. In online advertising,
online learning algorithms try to predict and serve ads that optimize for clicks [11]. The algorithm
learns by observing whether or not an ad is clicked, but in production systems, a massive number of
ads are served between the moment an ad is displayed to a user and the moment the user has decided
to either click or ignore that ad. In military applications, online learning algorithms are used by radio
jammers to identify efficient jamming strategies [12]. After a jammer attempts to disrupt a packet
between a transmitter and a receiver, it does not know if the jamming attempt succeeded until an
acknowledgement packet is sent by the receiver. In cloud computing, online learning helps devise
efficient resource allocation strategies, such as finding the right mix of cheaper (and inconsistent)
spot instances and more reliable (and expensive) on-demand instances when renting computers for
batch jobs [13]. The learning algorithm does not know how well an allocation strategy worked for
a batch job until the batch job has ended, by which time many more batch jobs have already been
launched. In finance, online learning algorithms managing portfolios are subject to information and
transaction delays from the market, and financial firms invest heavily to minimize these delays.
One strategy to handle delayed feedback is to pool independent copies of a fixed learning algorithm,
each of which acts as an undelayed learner over a subsequence of the rounds. Each round is delegated to a single instance from the pool of learners, and the learner is required to wait for and process
its feedback before rejoining the pool. If there are no learners available, a new copy is instantiated
and added to the pool. The size of the pool is proportional to the maximum number of outstanding
delays at any point of decision, and the overall regret is bounded by the sum of regrets of the individual learners. This approach is analyzed for constant delays by Weinberger and Ordentlich [14], and
a more sophisticated analysis is given by Joulani et al. [15]. If ? is the expected maximum
number
?
of outstanding feedbacks, then Joulani et al. obtain a regret bound on the order of O( ?T ) (in expectation) for the setting considered here. The blackbox nature of this approach begets simultaneous
bounds for other settings such as partial information and stochastic rewards. Although maintaining
copies of learners in proportion to the delay may be prohibitively resource intensive, Joulani et al.
provide a more efficient variant for the stochastic bandit problem, a setting not considered here.
Another line of research is dedicated to scaling gradient descent type algorithms to distributed settings, where asynchronous processors naturally introduce delays in the learning framework. A classic reference in this area is the book of Bertsekas and Tsitskilis [16]. If the data is very sparse, so that
input instances and their gradients are somewhat orthogonal, then intuitively we can apply gradients
out of order without significant interference across rounds. This idea is explored by Recht et al. [17],
who analyze and test parallel algorithm on a restricted class of strongly convex loss functions, and
by Duchi et al. [18] and McMahan and Streeter [19], who design and analyze distributed variants
of adaptive gradient descent [20]. Perhaps the most closely related work in this area is by Langford
et al., who study the online-gradient-descent algorithm of Zinkevich when the delays are
bounded by a constant number of rounds [21]. Research in this area has largely moved on from the
simplistic models considered here; see [22, 23, 24] for more recent developments.
The impact of delayed feedback in learning algorithms is also explored by Riabko [25] under the
framework of ?weak teachers?.
For the sake of concreteness, we establish the following notation for the delayed setting. For each
round t, let dt ? Z+ be a non-negative integer delay. The feedback from round t is delivered at the
end of round t + dt ? 1, and can be used in round t + dt . In the standard setting with no delays,
dt = 1 for all t. For each round t, let Ft = {u ? [T ] : u + du ? 1 = t} be the set of rounds whose
PT
feedback appears at the end of round t. We let D = t=1 dt denote the sum of all delays; in the
standard setting with no delays, we have D = T .
In this paper, we investigate the implications of delayed feedback when the delays are adversarial
(i.e., arbitrary), with no assumptions or restrictions made on the adversary. Rather than design new
2
algorithms that may generate a more involved analysis, we study the performance of the classical
algorithms online-gradient-descent and follow-the-perturbed-leader, essentially unmodified, when the feedback is delayed.
In the delayed setting, we prove that both algo?
rithms
have
a
simple
regret
bound
of
O(
D).
These
bounds collapse to match the well-known
?
O( T ) regret bounds if there are no delays (i.e., where D = T ).
Paper organization In Section 2, we analyze the online-gradient-descent algorithm in
the delayed setting, giving upper bounds on the regret as a function of the sum of delays D. In
Section 3, we analyze the follow-the-perturbed-leader in the delayed setting and derive
a regret bound in terms of D. Due to space constraints, extensions to online-mirror-descent
and follow-the-lazy-leader are deferred to the appendix. We conclude and propose future
directions in Section 4.
2
Delayed gradient descent
Convex optimization In online convex optimization, the input domain K is convex, and each
cost function ft is convex. For this setting, Zinkevich proposed a simple online algorithm, called
online-gradient-descent, designed as follows [1]. The first point, x1 , is picked in K
arbitrarily. After picking the tth point xt , online-gradient-descent computes the gradient
?ft |xt of the loss function at xt , and chooses xt+1 = ?K (xt ? ??ft |xt ) in the subsequent round,
for some parameter ? ? R>0 . Here, ?K is the projection that maps a point x0 to its nearest point
in K (discussed further below). Zinkevich showed that, assuming the Euclidean diameter of K
and the Euclidean lengths of all gradients ?ft?
|x are bounded by constants, online-gradientdescent has an optimal regret bound of O( T ).
Delayed gradient descent In the delayed setting, the loss function ft is not necessarily given by
the adversary before we pick the next point xt+1 (or even at all). The natural generalization of
online-gradient-descent to this setting is to process the convex loss functions and apply
their gradients the moment they are delivered. That is, we update
X
x0t+1 = xt ? ?
?fs |xs ,
s?Ft
for some fixed parameter ?, and then project xt+1 = ?K (x0t+1 ) back into K to choose our (t + 1)th
point. In the setting of Zinkevich, we have Ft = {t} for each t, and this algorithm is exactly
online-gradient-descent. Note that a gradient ?fs |xs does not need to be timestamped by
the round s from which it originates, which is required by the pooling strategies of Weinberger and
Ordentlich [14] and Joulani et al. [15] in order to return the feedback to the appropriate learner.
Theorem 2.1. Let K be a convex set with diameter 1, let f1 , . . . , fT be convex functions over K
with k?ft |x k2 ? L for all x ? K and t ? [T ], and let ? ? R be a fixed parameter. In the presence
of adversarial delays, online-gradient-descent selects points x1 , . . . , xT ? K such that
for all y ? K,
T
T
X
X
1
2
ft (xt ) ?
ft (y) = O
+ ?L (T + D) ,
?
t=1
t=1
where D denotes the sum of delays over all rounds t ? [T ].
?
?
?
For ? = 1/L T + D, Theorem 2.1 implies a regret bound of O(L D + T ) = O(L D). This
choice of ? requires prior knowledge of the final sum D. When this sum is not known, one can
calculate D on the fly: if there are ? outstanding (undelivered) cost functions at a round t, then D
increases by exactly ?. Obviously, ? ? T and T ? D, so D at most doubles. We can therefore
employ the ?doubling trick? of Auer et al. [26] to dynamically adjust ? as D grows.
In the undelayed setting analyzed by Zinkevich, we have D = T , and the regret bound of Theorem
2.1 matches that obtained by Zinkevich.
? If each delay dt is bounded by some fixed value ? , Theorem
2.1 implies a regret bound of O(L ? T ) that matches that of Langford et al. [21]. In both of these
special cases, the regret bound is known to be tight.
3
Before proving Theorem 2.1, we review basic definitions and facts on convexity. A function f :
K ? R is convex if
f ((1 ? ?)x + ?y) ? (1 ? ?)f (x) + ?f (y)
?x, y ? K, ? ? [0, 1].
If f is differentiable, then f is convex iff
f (x) + ?f |x ? (y ? x) ? f (y)
?x, y ? K.
(1)
For f convex but not necessarily differentiable, a subgradient of f at x is any vector that can replace
?f |x in equation (1). The (possible empty) set of gradients of f at x is denoted by ?f (x).
The gradient descent may occasionally update along a gradient that takes us out of the constrained
domain K. If K is convex, then we can simply project the point back into K.
Lemma 2.2. Let K be a closed convex set in a normed linear space X and x ? X a point, and let
x0 ? K be the closest point in K to x. Then, for any point y ? K,
kx ? yk2 ? kx0 ? yk2 .
We let ?K denote the map taking a point x to its closest point in the convex set K.
Proof of Theorem 2.1. Let y = arg minx?K (f1 (x) + ? ? ? + fT (x)) be the best point in hindsight at
the end of all T rounds. For t ? [T ], by convexity of ft , we have,
ft (y) ? ft (xt ) + ?ft |xt ? (y ? xt ).
Fix t ? [T ], and
t+1 and y. By Lemma 2.2, we know that
consider
the distance between x
P
kxt+1 ? yk2 ?
x0t+1 ? y
2 , where x0t+1 = xt ? ? s?Ft ?fs |xs .
We split the sum of gradients applied in a single round P
and consider them one by one. For each
s ? Ft , let Ft,s = {r ? Ft : r < s}, and let xt,s = xt ?? r?Ft,s ?fr |xr . Suppose Ft is nonempty,
and fix s0 = max Ft to be the last index in Ft . By Lemma 2.2, we have,
2
2
2
kxt+1 ? yk2 ?
x0t+1 ? y
2 =
xt,s0 ? ??fs0 |xs0 ? y
2
2
2
= kxt,s0 ? yk2 ? 2? ?fs0 |xs0 ? (xt,s0 ? y) + ? 2
?fs0 |xs0
2 .
Repeatedly unrolling the first term in this fashion gives
X
X
2
2
2
kxt+1 ? yk2 ? kxt ? yk2 ? 2?
?fs |xs ? (xt,s ? y) + ? 2
k?fs |xs k2 .
s?Ft
s?Ft
For each s ? Ft , by convexity of f , we have,
??fs |xs ? (xt,s ? y) = ?fs |xs ? (y ? xt,s ) = ?fs |xs ? (y ? xs ) + ?fs |xs ? (xs ? xt,s )
? fs (y) ? fs (xs ) + ?fs |xs ? (xs ? xt,s ).
By assumption, we also have k?fs |xs k2 ? L for each s ? Ft . With respect to the distance between
xt+1 and y, this gives,
X
2
2
kxt+1 ? yk2 ? kxt ? yk2 + 2?
(fs (y) ? fs (xs ) + ?fs |xs ? (xs ? xt,s )) + ? 2 ? |Ft | ? L2 .
s?Ft
Solving this inequality for the regret terms
over all rounds t ? [T ], we have,
T
X
(ft (xt ) ? ft (y)) =
t=1
T X
X
P
s?Ft
fs (xs ) ? fs (y) and taking the sum of inequalities
fs (xs ) ? fs (y)
t=1 s?Ft
T
X
1 X
2
2
?
?
kxt ? yk2 ? kxt+1 ? yk2 + 2?
?fs |xs ? (xs ? xt,s ) + ? 2 ? |Ft | ? L2
2? t=1
s?Ft
!
T
T X
X
1 X
?
2
2
=
kxt ? yk2 ? kxt+1 ? yk2 + T L2 +
?fs |xs ? (xs ? xt,s )
2? t=1
2
t=1
!
s?Ft
?
T
X
X
1
?
+ T L2 +
?fs |xs ? (xs ? xt,s ).
2?
2
t=1
s?Ft
4
(2)
The first two terms are familiar from the standard analysis of online-gradient-descent. It
remains to analyze the last sum, which we call the delay term.
Each summand ?fs |xs ? (xs ? xt,s ) in the delay term contributes loss proportional to the distance
between the point xs when the gradient ?fs |xs is generated and the point xt,s when the gradient is
applied. This distance is created by the other gradients that are applied in between, and the number
of such in-between gradients are intimately tied to the total delay, as follows. By Cauchy-Schwartz,
the delay term is bounded above by
T X
X
?fs |xs ? (xs ? xt,s ) ?
t=1 s?Ft
T X
X
k?fs |xs k2 kxs ? xt,s k2 ? L
t=1 s?Ft
T X
X
kxs ? xt,s k2 . (3)
t=1 s?Ft
Consider a single term kxs ? xt,s k2 for fixed t ? [T ] and s ? Ft . Intuitively, the difference xt,s ?xs
is roughly the sum of gradients received between round s and when we apply the gradient from round
s in round t. More precisely, by applying the triangle inequality and Lemma 2.2, we have,
kxt,s ? xs k2 ? kxt,s ? xt k2 + kxt ? xs k2 ? kxt,s ? xt k2 + kx0t ? xs k2 .
For the same reason, we have kx0t ? xs k2 ? kx0t ? xt?1 k2 +
x0t?1 ? xs
2 , and unrolling in this
fashion, we have,
kxt,s ? xs k2 ? kxt,s ? xt k2 +
t?1
t?1 X
X
X
X
0
xr+1 ? xr
? ?
?fp |x
+ ?
?fq |x
p 2
q 2
2
r=s
???L?
|Ft,s | +
r=s q?Fr
p?Ft,s
!
t?1
X
|Fr | .
(4)
r=s
PT P
After substituting equation (4) into equation (3), it remains to bound the sum t=1 s?Ft (|Ft,s | +
Pt?1
Pt?1
r=s |Fr |). Consider a single term |Ft,s | +
r=s |Fr | in the sum. This quantity counts, for a
gradient ?fs |xs from round s delivered just before round t ? s, the number of other gradients that
are applied while ?fs |xs is withheld. Fix two rounds s and t, and consider an intermediate round
r ? {s, . . . , t}. If r < t then fix q ? Fr , and if r = t then fix q ? Ft,s . The feedback from round q
is applied in a round r between round s and round t. We divide our analysis into two scenarios. In
one case, q ? s, and the gradient from round q appears only after s, as in the following diagram.
?fq |xq
q
/ ???
?fs |xs
/s
/ ???
/% r
/ ???
"/
/ ???
$/
t
In the other case, q > s, as in the following diagram.
?fs |xs
s
/ ???
?fq |xq
/q
/ ???
r
/) t
For each round u, let du denote the number of rounds the gradient feedback is delayed (so u ?
Fu+du ). There are at most ds instances of the latter case, since q must lie in s+1, . . . , t. The first case
can be charged to dq . To bound the first case, observe that for fixed q, the number of indices s such
that q < s ? dq + q ? ds + s is at most dq . That is, all instances of the second case for a fixed q can
PT P
Pt?1
PT
be charged to dq . Between the two cases, we have t=1 s?Ft (|Ft,s | + r=s |Fr |) ? 2 t=1 dt ,
and the delay term is bounded by
T X
X
?fs |xs ? (xs ? xt,s ) ? 2? ? L2
t=1 s?Ft
T
X
dt .
t=1
With respect to the overall regret, this gives,
T
X
T
X
1
T
(f (xt ) ? f (y)) ?
+ ? ? L2
+2
dt
2?
2
t=1
t=1
!
=O
1
+ ?L2 D ,
?
as desired.
5
PT P
Remark 2.3. The delay term t=1 s?Ft ?fs |xs ? (xs ? xt,s ) is a natural point of entry for a
sharper analysis based on strong sparseness assumptions. The distance xs ? xt,s is measured by its
projection against the gradient ?fs |xs , and the preceding proof assumes the worst case and bounds
the dot product with the Cauchy-Schwartz inequality. If, for example, we assume that gradients
are pairwise orthogonal and analyze online-gradient-descent in the unconstrained setting,
then the dot product ?fs |xs ? (xs ? xt,s ) is 0 and the delay term vanishes altogether.
3
Delaying the Perturbed Leader
Discrete online linear optimization In discrete online linear optimization, the input domain K ?
Rn is a (possibly discrete) set with bounded diameter, and each cost function ft is of the form
ft (x) = ct ? x for a bounded-length cost vector ct . The previous algorithm online-gradientdescent does not apply here because K is not convex.
A natural algorithm for this problem is follow-the-leader. Each round t, let yt =
arg minx?K x?(c1 +? ? ?+ct ) be the optimum choice over the first t cost vectors. The algorithm picking yt in round t is called be-the-leader, and can be shown to have zero regret. Of course, bethe-leader is infeasible since the cost vector ct is revealed after picking yt . follow-theleader tries the next best thing, picking yt?1 in round t. Unfortunately, this strategy can have
linear regret, largely because it is a deterministic algorithm that can be manipulated by an adversary.
Kalai and Vempala [2] gave a simple and elegant correction called follow-the-perturbedleader. Let > 0 be a parameter to be fixed later, and let Q = [0, 1/]n be the cube of length
1/. Each round t, follow-the-perturbed-leader randomly picks a vector c0 ? Q by the
uniform distribution, and then selects xt = arg minx?K x ? (c0 + ? ? ? + ct?1 ) to optimize over the
previous costs plus the random perturbation c0 . With the diameter of K and the lengths kct k of each
cost vector held
? constant, Kalai and Vempala showed that follow-the-perturbed-leader
has regret O( T ) in expectation.
Following the delayed and perturbed leader More generally, follow-the-perturbedleader optimizes over all information available to the algorithm, plus some additional noise to
smoothen the worst-case analysis. If the cost vectors are delayed, we naturally interpret followthe-perturbed-leader to optimize over all cost vectors ct delivered in time for round t when
picking its point xt . That is, the tth leader becomes the best choice with respect to all cost vectors
delivered in the first t rounds:
ytd = arg min
x?K
t X
X
cr ? x
s=1 r?Fs
(we use the superscript d to emphasize the delayed setting). The tth perturbed leader optimizes over
all cost vectors delivered through the first t rounds in addition to the random perturbation c0 ? Q :
!
t X
X
y?td = arg min c0 ? x +
cr ? x .
x?K
s=1 r?Fs
d
In the delayed setting, follow-the-perturbed-leader chooses xt = y?t?1
in round t. We
claim that follow-the-perturbed-leader has a direct ?
and simple regret bound in terms of
the sum of delays D, that collapses to Kalai and Vempala?s O( T ) regret bound in the undelayed
setting.
Theorem 3.1. Let K ? Rn be a set with L1 -diameter ? 1, c1 , . . . , cT ? Rn with kct k1 ? 1 for all
t, and ? > 0. In the presence of adversarial delays, follow-the-perturbed-leader picks
points x1 , . . . , xT ? K such that for all y ? K,
T
X
t=1
E[ct ? xt ] ?
T
X
ct ? y + O ?1 + D .
t=1
?
?
For = 1/ D, Theorem 3.1 implies a regret bound of O( D). When D is not known a priori, the
doubling trick can be used to adjust dynamically (see the discussion following Theorem 2.1).
6
To analyze follow-the-perturbed-leader in the presence of delays, we introduce the notion of a prophet, who is a sort of omniscient leader who sees the feedback immediately. Formally,
the tth prophet is the best point with respect to all the cost vectors over the first t rounds:
zt = arg min(c1 + ? ? ? + ct ) ? x.
x?K
The tth perturbed prophet is the best point with respect to all the cost vectors over the first t rounds,
in addition to a perturbation c0 ? Q :
z?t = arg min(c0 + c1 + ? ? ? + ct ) ? x.
(5)
x?K
The prophets and perturbed prophets behave exactly as the leaders and perturbed leaders in the
setting of Kalai and Vempala with no delays. In particular, we can apply the regret bound of Kalai
and Vempala to the (infeasible) strategy of following the perturbed prophet.
Lemma 3.2 ([2]). Let K ? Rn be a set with L1 -diameter ? 1, let c1 , . . . , cT ? Rn be cost vectors
bounded by kct k1 ? 1 for all t, and let > 0. If z?1 , . . . , z?T ?1 ? K are chosen per equation (5),
PT
PT
then t=1 E[ct ? z?i?1 ] ? t=1 ct ? y + O ?1 + T . for all y ? K.
The analysis by Kalai and Vempala observes that when there are no delays, two consecutive perturbed leaders y?t and y?t+1 are distributed similarly over the random noise [2, Lemma 3.2]. Instead,
we will show that y?td and z?t are distributed in proportion to delays. We first require a technical
lemma that is implicit in [2].
Lemma 3.3. Let K be a set with L1 -diameter ? 1, and let u, v ? Rn be vectors. Let y, z ? Rn
be random vectors defined by y = arg miny?K (q + u) ? y and z = arg minz?K (q + v) ? z, where
Qn
q is chosen uniformly at random from Q = i=1 [0, r], for some fixed length r > 0. Then, for any
vector c,
E[c ? z] ? E[c ? y] ?
kv ? uk1 kck?
.
r
Proof. Let Q0 = v+Q and Q00 = u+Q, and write y = arg miny?K q 00 ?y and z = arg minz?K q 0 ?z,
where q 0 ? Q0 and q 00 ? Q00 are chosen uniformly at random. Then
E[c ? z] ? E[c ? y] = Eq00 ?Q00 [c ? z] ? Eq0 ?Q0 [c ? y].
0
0
Subtracting P[q ? Q ? Q00 ]Eq0 ?Q0 ?Q00 [c ? z] from both terms on the right, we have
Eq00 ?Q00 [c ? z] ? Eq0 ?Q0 [c ? y]
= P[q 00 ? Q00 \ Q0 ] ? Eq00 ?Q00 \Q0 [c ? z] ? P[q 0 ? Q0 \ Q00 ] ? Eq0 ?Q0 \Q00 [c ? y]
By symmetry, P[q 00 ? Q00 \ Q0 ] = P[q 0 ? Q0 \ Q00 ], and we have,
E[c ? z] ? E[c ? y] ? (P[q 00 ? Q00 \ Q0 ])Eq00 ?Q00 \Q0 ,q0 ?Q0 \Q00 [c ? (z ? y)].
By assumption, K has L1 -diameter ? 1, so ky ? zk1 ? 1, and by H?lder?s inequality, we have,
E[c ? z] ? E[c ? y] ? P[q 00 ? Q00 \ Q0 ]kck? .
It remains to bound P[q 00 ? Q00 \ Q0 ] = P[q 0 ? Q0 \ Q00 ]. If kv ? uk1 ? r, we have,
n
n
Y
Y
kv ? uk1
|(vi ? ui )|
0
0
00
0
? vol(Q ) 1 ?
.
vol(Q ? Q ) =
(r ? |vi ? ui |) = vol(Q )
1?
r
r
i=1
i=1
Otherwise, if ku ? vk1 > r, then vol(Q0 ? Q00 ) = 0 ? vol(Q0 )(1 ? kv ? uk1 /r). In either case,
we have,
P[q 0 ? Q0 \ Q00 ] =
kv ? uk1
vol(Q0 ? Q00 )
vol(Q0 ? Q00 )
?1?
?
,
0
0
vol(Q )
vol(Q )
r
and the claim follows.
Lemma 3.3 could also have been proven geometrically in similar fashion to Kalai and Vempala.
7
Lemma 3.4.
vectors.
PT
t=1
d
? D, where D is the sum of delays of all cost
E[ct ? z?t?1 ] ? E ct ? y?t?1
Pt
P
Proof. Let ut = s=1 ct be the sum of all costs through the first t rounds, and vt = s:s+ds ?t ct
be the sum of cost vectors actually delivered through the first t rounds. Then the perturbed prophet
d
z?t?1 optimizes over c0 + ut?1 and y?t?1
optimizes over c0 + vt?1 . By Lemma 3.3, for each t, we
have
d
? ? kut?1 ? vt?1 k1 kct k? ? ? |{s < t : s + ds ? t}|
Ec0 ?Q [ct ? z?t?1 ] ? Ec0 ?Q ct ? y?t?1
Summed over all T rounds, we have,
T
X
Ec0 [ct ? z?t ] ? Ec0 ct ?
y?td
T
X
?
|{s < t : s + ds ? t}|.
t=1
t=1
PT
The sum t=1 |{s < t : s + ds ? t}| charges each cost vector cs once for every round it is delayed,
PT
and therefore equals D. Thus, t=1 Ec0 [ct ? z?t ] ? Ec0 ct ? y?td ? D, as desired.
Now we complete the proof of Theorem 3.1.
Proof of Theorem 3.1. By Lemma 3.4 and Lemma 3.2, we have,
T
T
T
X
X
X
d
E[ct ? x] + O(?1 + D),
E ct ? y?t?1
E[ct ? z?t?1 ] + D ? arg min
?
t=1
x?K
t=1
4
t=1
as desired.
Conclusion
?
We prove O( D) regret bounds for online-gradient-descent
? and follow-theperturbed-leader in the delayed setting, directly extending the O( T ) regret bounds known
in the undelayed setting. More importantly, by deriving a simple bound as a function of the delays, without any restriction on the delays, we establish a simple and intuitive model for measuring
delayed learning. This work suggests natural relationships between the regret bounds of online
learning algorithms and delays in the feedback.
Beyond analyzing existing algorithms, we hope that optimizing over the regret as a function of D
may inspire different (and hopefully simple) algorithms that readily model real world applications
and scale nicely to distributed environments.
Acknowledgements We thank Avrim Blum for introducing us to the area of online learning and
helping us with several valuable discussions. We thank the reviewers for their careful and insightful
reviews: finding errors, referencing relevant works, and suggesting a connection to mirror descent.
References
[1] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proc. 20th
Int. Conf. Mach. Learning (ICML), pages 928?936, 2003.
[2] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. J. Comput. Sys. Sci., 71:291?
307, 2005. Extended abstract in Proc. 16th Ann. Conf. Comp. Learning Theory (COLT), 2003.
[3] A. Blum. On-line algorithms in machine learning. In A. Fiat and G. Woeginger, editors, Online algorithms, volume 1442 of LNCS, chapter 14, pages 306?325. Springer Berlin Heidelberg, 1998.
[4] S. Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn.,
4(2):107?194, 2011.
[5] E. Hazan. Introduction to online convex optimization. Internet draft available at http://ocobook.
cs.princeton.edu, 2015.
[6] E. Takimoto and M. Warmuth. Path kernels and multiplicative updates. J. Mach. Learn. Research, 4:773?
818, 2003.
8
[7] D. Helmbold and R. Schapire. Predicting nearly as well as the best pruning of a decision tree. Mach.
Learn. J., 27(1):61?68, 1997.
[8] A. Blum, S. Chawla, and A. Kalai. Static optimality and dynamic search optimality in lists and trees.
Algorithmica, 36(3):249?260, 2003.
[9] T. M. Cover. Universal portfolios. Math. Finance, 1(1):1?29, 1991.
[10] K. Crammer and Y. Singer. A family of additive online algorithms for category ranking. J. Mach. Learn.
Research, 3:1025?1058, 2003.
[11] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Herbrich, S. Bowers, and J. Qui?onero
Candela. Practical lessons from predicting clicks on ads at facebook. In Proc. 20th ACM Conf. Knowl.
Disc. and Data Mining (KDD), pages 1?9. ACM, 2014.
[12] S. Amuru and R. M. Buehrer. Optimal jamming using delayed learning. In 2014 IEEE Military Comm.
Conf. (MILCOM), pages 1528?1533. IEEE, 2014.
[13] I. Menache, O. Shamir, and N. Jain. On-demand, spot, or both: Dynamic resource allocation for executing
batch jobs in the cloud. In 11th Int. Conf. on Autonomic Comput. (ICAC), 2014.
[14] M.J. Weinberger and E. Ordentlich. On delayed prediction of individual sequences. IEEE Trans. Inf.
Theory, 48(7):1959?1976, 2002.
[15] P. Joulani, A. Gy?rgy, and C. Szepesv?ri. Online learning under delayed feedback. In Proc. 30th Int.
Conf. Mach. Learning (ICML), volume 28, 2013.
[16] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. PrenticeHall, 1989.
[17] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: a lock-free approach to parallelizing stochastic gradient
descent. In Adv. Neural Info. Proc. Sys. 24 (NIPS), pages 693?701, 2011.
[18] J. Duchi, M.I. Jordan, and B. McMahan. Estimation, optimization, and parallelism when data is sparse.
In Adv. Neural Info. Proc. Sys. 26 (NIPS), pages 2832?2840, 2013.
[19] H.B. McMahan and M. Streeter. Delay-tolerant algorithms for asynchronous distributed online learning.
In Adv. Neural Info. Proc. Sys. 27 (NIPS), pages 2915?2923, 2014.
[20] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic
optimization. J. Mach. Learn. Research, 12:2121?2159, July 2011.
[21] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In Adv. Neural Info. Proc. Sys. 22
(NIPS), pages 2331?2339, 2009.
[22] J. Liu, S. J. Wright, C. R?, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordiante
descent algorithm. J. Mach. Learn. Research, 16:285?322, 2015.
[23] J. C. Duchi, T. Chaturapruek, and C. R?. Asynchronous stochastic convex optimization.
abs/1508.00882, 2015. To appear in Adv. Neural Info. Proc. Sys. 28 (NIPS), 2015.
CoRR,
[24] S. J. Wright. Coordinate descent algorithms. Math. Prog., 151(3?34), 2015.
[25] D. Riabko. On the flexibility of theoretical models for pattern recognition. PhD thesis, University of
London, April 2005.
[26] N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helmbold, R.E. Schapire, and M.K. Warmuth. How to use
expert advice. J. Assoc. Comput. Mach., 44(3):426?485, 1997.
9
| 5833 |@word proportion:2 c0:9 kent:1 pick:5 moment:3 liu:2 daniel:1 document:1 omniscient:1 kx0:1 existing:1 must:1 readily:1 realize:1 subsequent:1 numerical:1 additive:2 kdd:1 designed:1 icac:1 update:3 warmuth:2 sys:6 draft:1 math:2 herbrich:1 bittorf:1 along:1 direct:1 prove:2 introduce:2 pairwise:1 x0:2 expected:1 market:1 roughly:1 themselves:1 blackbox:1 multi:1 td:4 unrolling:2 clicked:1 project:2 becomes:1 bounded:11 notation:2 prophet:7 ec0:6 hindsight:2 finding:2 ended:1 every:1 act:1 charge:1 finance:3 exactly:3 prohibitively:1 jamming:3 k2:16 schwartz:2 assoc:1 unit:1 grant:2 originates:1 appear:1 bertsekas:2 positive:1 before:5 mistake:1 mach:9 analyzing:1 niu:1 path:2 plus:2 dynamically:2 suggests:1 collapse:3 decided:1 practical:1 regret:30 xr:3 spot:2 lncs:1 area:4 universal:2 projection:2 wait:1 cannot:1 applying:1 optimize:3 zinkevich:10 restriction:2 map:2 charged:2 yt:4 deterministic:1 reviewer:1 shi:1 normed:1 convex:23 survey:1 immediately:1 helmbold:2 haussler:1 importantly:1 deriving:1 financial:1 classic:1 handle:1 proving:1 notion:1 coordinate:1 delegated:1 pt:17 suppose:1 heavily:1 massive:1 speak:1 user:2 programming:1 shamir:1 trick:2 trend:1 expensive:1 recognition:1 renting:1 labeled:1 ft:63 cloud:2 fly:1 worst:2 calculate:1 adv:5 observes:1 valuable:1 vanishes:1 comm:1 convexity:3 ui:2 reward:1 miny:2 environment:1 dynamic:2 tight:1 solving:1 algo:1 incur:2 serve:1 learner:8 triangle:1 chapter:1 instantiated:1 jain:1 fast:1 london:1 shalev:2 firm:1 whose:1 q00:22 otherwise:1 lder:1 delivered:7 online:44 final:1 obviously:1 kxt:17 differentiable:2 superscript:1 eq0:4 sequence:1 propose:1 subtracting:1 product:2 fr:7 relevant:1 iff:1 flexibility:1 achieve:1 intuitive:1 moved:1 kv:5 rgy:1 ky:1 invest:1 smoothen:1 double:1 empty:1 optimum:1 extending:1 executing:1 help:2 derive:1 measured:1 nearest:1 received:1 job:5 strong:1 c:2 implies:3 direction:1 closely:1 stochastic:6 packet:2 require:1 f1:2 generalization:1 fix:5 quanrud:1 extension:1 helping:1 clarify:1 correction:1 considered:3 wright:3 predict:1 claim:2 substituting:1 achieves:2 consecutive:1 estimation:1 proc:9 combinatorial:1 radio:1 knowl:1 hope:2 khashabi:1 rather:2 kalai:11 cr:2 transmitter:1 fq:3 industrial:1 adversarial:4 bandit:1 selects:2 arg:13 overall:2 colt:1 denoted:1 priori:1 development:1 constrained:1 special:1 summed:1 cube:1 equal:1 once:1 nicely:1 icml:2 nearly:1 future:1 summand:1 employ:1 randomly:1 manipulated:1 individual:2 delayed:26 cheaper:1 familiar:1 algorithmica:1 attempt:2 ab:1 organization:1 investigate:1 mining:1 adjust:2 deferred:1 analyzed:2 held:1 implication:1 chaturapruek:1 succeeded:1 fu:1 partial:1 orthogonal:2 tree:3 euclidean:2 divide:1 desired:3 re:1 rush:1 theoretical:2 instance:6 military:2 cover:1 unmodified:1 measuring:1 cost:25 introducing:1 entry:1 uniform:1 delay:38 too:1 perturbed:19 teacher:1 chooses:2 recht:2 randomized:1 kut:1 uk1:5 picking:5 pool:5 prenticehall:1 thesis:1 cesa:1 choose:1 possibly:1 conf:6 book:1 expert:2 return:1 suggesting:1 gy:1 int:3 ranking:2 ad:6 vi:2 multiplicative:1 later:1 try:2 picked:1 candela:1 hazan:3 observing:1 analyze:7 closed:1 sort:1 hogwild:1 parallel:3 contribution:1 minimize:2 il:1 who:5 largely:2 identify:1 lesson:1 weak:1 produced:1 disc:1 onero:1 advertising:1 served:1 comp:1 processor:1 simultaneous:1 facebook:1 definition:1 infinitesimal:1 against:1 involved:1 kct:4 naturally:2 proof:6 static:1 rithms:1 knowledge:1 ut:2 fiat:1 sophisticated:1 auer:1 back:2 actually:1 appears:2 dt:9 follow:16 inspire:1 
april:1 affirm:1 strongly:1 just:1 implicit:1 smola:1 until:3 langford:3 d:6 hopefully:1 google:1 perhaps:1 grows:1 xs0:3 ccf:3 q0:23 perturbedleader:2 round:50 game:1 generalized:1 complete:1 duchi:4 dedicated:1 l1:4 autonomic:1 consideration:1 x0t:6 volume:2 discussed:1 he:1 interpret:1 significant:1 unconstrained:1 similarly:1 illinois:4 portfolio:3 dot:2 yk2:13 closest:2 recent:2 showed:4 followthe:1 optimizing:1 optimizes:4 inf:1 scenario:1 occasionally:1 inequality:5 success:1 arbitrarily:1 life:1 vt:3 devise:1 additional:1 somewhat:1 preceding:1 managing:1 shortest:1 july:1 mix:1 timestamped:1 champaign:1 technical:1 match:3 impact:1 prediction:1 variant:2 basic:2 simplistic:1 essentially:1 expectation:3 kernel:1 c1:5 addition:3 want:1 szepesv:1 diagram:2 launched:1 operate:1 ascent:1 subject:1 pooling:1 elegant:1 sent:1 thing:2 inconsistent:1 jordan:1 integer:1 call:1 presence:3 intermediate:1 split:1 enough:1 revealed:1 gave:1 click:3 idea:1 intensive:1 ytd:1 whether:1 optimism:1 f:37 repeatedly:1 remark:1 generally:3 processed:1 category:1 diameter:9 tth:5 http:3 generate:1 schapire:2 nsf:1 per:1 diverse:1 discrete:3 write:1 vol:9 kck:2 blum:4 takimoto:1 subgradient:2 concreteness:1 geometrically:1 sum:19 prog:1 family:2 decision:6 appendix:1 scaling:1 qui:1 bound:27 ct:27 internet:1 atallah:1 philosophically:1 constraint:1 worked:1 precisely:1 ri:1 sake:1 min:5 optimality:2 vempala:10 department:1 ball:1 fs0:3 across:1 pan:1 intimately:1 making:2 joulani:5 intuitively:2 restricted:1 referencing:1 interference:1 resource:3 equation:4 remains:3 count:1 nonempty:1 singer:2 know:3 end:3 available:3 apply:5 observe:1 appropriate:1 chawla:1 batch:5 weinberger:3 altogether:1 assumes:3 denotes:1 include:1 lock:1 maintaining:1 giving:1 restrictive:1 k1:3 establish:2 classical:1 already:2 added:1 quantity:1 kx0t:3 strategy:9 usual:1 gradient:41 minx:4 distance:5 thank:2 sci:1 berlin:1 topic:1 considers:1 cauchy:2 reason:1 assuming:1 length:5 index:2 relationship:1 unfortunately:1 sharper:1 menache:1 info:5 negative:1 design:2 zt:1 bianchi:1 upper:1 zk1:1 urbana:2 withheld:1 descent:23 behave:1 displayed:1 beat:1 immediate:1 jin:1 extended:1 delaying:1 rn:7 perturbation:3 arbitrary:1 parallelizing:1 required:2 connection:1 nip:5 trans:1 beyond:1 adversary:7 below:1 parallelism:1 pattern:1 fp:1 reliable:1 max:1 natural:4 predicting:2 pause:1 created:1 xq:2 prior:1 literature:1 acknowledgement:2 review:2 l2:7 unsurprisingly:1 freund:1 loss:10 allocation:3 proportional:2 proven:1 s0:4 dq:4 editor:1 production:1 course:1 surprisingly:1 supported:2 copy:3 asynchronous:4 last:2 undelayed:4 infeasible:2 tsitsiklis:1 free:1 taking:2 absolute:1 sparse:2 distributed:7 feedback:20 world:1 ordentlich:3 computes:1 qn:1 collection:1 made:2 adaptive:2 transaction:1 pruning:1 emphasize:1 ignore:1 tolerant:1 receiver:2 conclude:1 leader:24 shwartz:2 disrupt:1 subsequence:1 search:1 streeter:2 nature:1 bethe:1 ku:1 learn:6 symmetry:1 contributes:1 heidelberg:1 du:3 necessarily:2 domain:3 main:1 big:1 noise:2 sridhar:1 x1:4 body:1 xu:2 advice:1 fashion:3 slow:1 comput:3 mcmahan:3 lie:1 tied:1 bower:1 minz:2 learns:1 gradientdescent:2 theorem:11 xt:56 insightful:1 explored:2 x:52 list:1 avrim:1 corr:1 mirror:2 phd:1 sparseness:1 margin:1 demand:2 kx:1 lt:1 simply:1 vk1:1 lazy:1 adjustment:1 doubling:2 springer:1 acm:2 ann:1 careful:2 replace:1 uniformly:2 lemma:13 total:3 called:4 kxs:3 player:1 select:1 formally:1 latter:1 crammer:1 outstanding:3 evaluate:1 princeton:1 |
5,340 | 5,834 | Structured Estimation with Atomic Norms:
General Bounds and Applications
Sheng Chen
Arindam Banerjee
Dept. of Computer Science & Engg., University of Minnesota, Twin Cities
{shengc,banerjee}@cs.umn.edu
Abstract
For structured estimation problems with atomic norms, recent advances in the literature express sample complexity and estimation error bounds in terms of certain
geometric measures, in particular Gaussian width of the unit norm ball, Gaussian
width of a spherical cap induced by a tangent cone, and a restricted norm compatibility constant. However, given an atomic norm, bounding these geometric
measures can be difficult. In this paper, we present general upper bounds for such
geometric measures, which only require simple information of the atomic norm under consideration, and we establish tightness of these bounds by providing
the corresponding lower bounds. We show applications of our analysis to certain
atomic norms, especially k-support norm, for which existing result is incomplete.
1
Introduction
Accurate recovery of structured sparse signal/parameter vectors from noisy linear measurements has
been extensively studied in the field of compressed sensing, statistics, etc. The goal is to recover
a high-dimensional signal (parameter) ? ? ? Rp which is sparse (only has a few nonzero entries),
possibly with additional structure such as group sparsity. Typically one assume linear models, y =
X? ? + ?, in which X ? Rn?p is the design matrix consisting of n samples, y ? Rn is the observed
response vector, and ? ? Rn is an unknown noise vector. By leveraging the sparsity of ? ? , previous
work has shown that certain L1 -norm based estimators [22, 7, 8] can find a good approximation of
? ? using sample size n p. Recent work has extended the notion of unstructured sparsity to other
structures in ? ? which can be captured or approximated by some norm R(?) [10, 18, 3, 11, 6, 19]
other than L1 , e.g., (non)overlapping group sparsity with L1 /L2 norm [24, 15], etc. In general, two
broad classes of estimators are considered in recovery analysis: (i) Lasso-type estimators [22, 18, 3],
which solve the regularized optimization problem
1
???n = argmin
kX? ? yk22 + ?n R(?) ,
??Rp 2n
(1)
and (ii) Dantzig-type estimators [7, 11, 6], which solve the constrained problem
???n = argmin R(?) s.t. R? (XT (X? ? y)) ? ?n ,
(2)
??Rp
where R? (?) is the dual norm of R(?). Variants of these estimators exist [10, 19, 23], but the recovery
analysis proceeds along similar lines as these two classes of estimators.
To establish recovery guarantees, [18] focused on Lasso-type estimators and R(?) from the class of
decomposable norm, e.g., L1 , non-overlapping L1 /L2 norm. The upper bound for the estimation
error k???n ?? ? k2 for any decomposable norm is characterized in terms of three geometric measures:
(i) a dual norm bound, as an upper bound for R? (XT ?), (ii) sample complexity, the minimal sample
size needed for a certain restricted eigenvalue (RE) condition to be true [4, 18], and (iii) a restricted
1
norm compatibility constant between R(?) and L2 norms [18, 3]. The non-asymptotic estimation
?
error bound typically has the form k???n ? ? ? ||2 ? c/ n, where c depends on a product of dual
norm bound and restricted norm compatibility, whereas the sample complexity characterizes the
minimum number of samples after which the error bound starts to be valid. In recent work, [3]
extended the analysis of Lasso-type estimator for decomposable norm to any norm, and gave a more
succinct characterization of the dual norm bound for R? (XT ?) and the sample complexity for the
RE condition in terms of Gaussian widths [14, 10, 20, 1] of suitable sets where, for any set A ? Rp ,
the Gaussian width is defined as
w(A) = E sup hu, gi ,
(3)
u?A
where g is a standard Gaussian random vector. For Dantzig-type estimators, [11, 6] obtained similar
extensions. To be specific, assume entries in X and ? are i.i.d. normal, and define the tangent cone,
TR (? ? ) = cone {u ? Rp | R(? ? + u) ? R(? ? )} .
(4)
?
Then one can get (high-probability) upper bound for R? (XT ?) as O( nw(?R )) where ?R = {u ?
Rp |R(u) ? 1} is the unit norm ball, and the RE condition is satisfied with O(w2 (TR (? ? ) ? Sp?1 ))
samples, in which Sp?1 is the unit sphere. For convenience, we denote by CR (? ? ) the spherical
cap TR (? ? ) ? Sp?1 throughout the paper. Further, the restricted norm compatibility is given by
?R (? ? ) = supu?TR (?? ) R(u)
kuk2 (see Section 2 for details).
Thus, for any given norm, it suffices to get a characterization of (i) w(?R ), the width of the unit norm ball, (ii) w(CR (? ? )), the width of the spherical cap induced by the tangent cone TR (? ? ),
and (iii) ?R (? ? ), the restricted norm compatibility in the tangent cone. For the special case of L1
norm, accurate characterization of all three measures exist [10, 18]. However, for more general
norms, the literature is rather limited. For w(?R ), the characterization is often reduced to comparison with either w(CR (? ? )) [3] or known results on other norm balls [13]. While w(CR (? ? ))
has been investigated for certain decomposable norms [10, 9, 1], little is known about general nondecomposable norms. One general approach for upper bounding w(CR (? ? )) is via the statistical
dimension [10, 19, 1], which computes the expected squared distance between a Gaussian random
vector and the polar cone of TR (? ? ). To specify the polar, one need full information of the subdifferential ?R(? ? ), which could be difficult to obtain for non-decomposable norms. A notable
bound for (overlapping) L1 /L2 norms is presented in [21], which yields tight bounds for mildly
non-overlapping cases, but is loose for highly overlapping ones. For ?R (? ? ), the restricted norm
compatibility, results are only available for decomposable norms [18, 3].
In this paper, we present a general set of bounds for the width w(?R ) of the norm ball, the width
w(CR (? ? )) of the spherical cap, and the restricted norm compatibility ?R (? ? ). For the analysis,
we consider the class of atomic norms that are invariant under sign-changes, i.e., the norm of a
vector stays unchanged if any entry changes only by flipping its sign. The class is quite general, and
covers most of the popular norms used in practical applications, e.g., L1 norm, ordered weighted
L1 (OWL) norm [5] and k-support norm [2]. Specifically we show that sharp bounds on w(?R )
can be obtained using simple calculation based on a decomposition inequality from [16]. To upper
bound w(CR (? ? )) and ?R (? ? ), instead of a full specification of TR (? ? ), we only require some
information regarding the subgradient of R(? ? ), which is often readily accessible. The key insight
is that bounding statistical dimension often ends up computing the expected distance from Gaussian
vector to a single point rather than to the whole polar cone, thus the full information on ?R(? ? ) is
unnecessary. In addition, we derive the corresponding lower bounds to show the tightness of our
results. As examples, we illustrate the bounds for L1 and OWL norms [5]. Finally, we give sharp
bounds for the recently proposed k-support norm [2], for which existing analysis is incomplete.
The rest of the paper is organized as follows: we first review the relevant background for Dantzigtype estimator and atomic norm in Section 2. In Section 3, we introduce the general bounds for the
geometric measures. In Section 4, we discuss the tightness of our bounds. Section 5 is dedicated to
the example of k-support norm, and we conclude in Section 6.
2
Background
In this section, we briefly review the recovery guarantee for the generalized Dantzig selector in (2)
and the basics on atomic norms. The following lemma, originally [11, Theorem 1], provides an error
bound for k???n ? ? ? k2 . Related results have appeared for other estimators [18, 10, 19, 3, 23].
2
Lemma 1 Assume that y = X? ? + ?,
? where entries of X and ? are i.i.d. copies of standard
Gaussian random variable. If ?n ? c1 nw(?R ) and n > c2 w2 (TR (? ? ) ? Sp?1 ) = w2 (CR (? ? ))
for some constant c1 , c2 > 1, with high probability, the estimate ???n given by (2) satisfies
?
? w(?R )
?
k??n ? ? k2 ? O ?R (? ) ?
.
(5)
n
In this Lemma, there are three geometric measures?w(?R ), w(CR (? ? )) and ?R (? ? )?which need
to be determined for specific R(?) and ? ? . In this work, we focus on general atomic norms R(?).
Given a set of atomic vectors A ? Rp , the corresponding atomic norm of any ? ? Rp is given by
(
)
X
X
k?kA = inf
ca : ? =
ca a, ca ? 0 ? a ? A
(6)
a?A
a?A
In order for k ? kA to be a valid norm, atomic vectors in A has to span Rp , and a ? A iff ?a ? A.
The unit ball of atomic norm k ? kA is given by ?A = conv(A). In addition, we assume that the
atomic set A contains v a for any v ? {?1}p if a belongs to A, where denotes the elementwise
(Hadamard) product for vectors. This assumption guarantees that both k ? kA and its dual norm
are invariant under sign-changes, which is satisfied by many widely used norms, such as L1 norm,
OWL norm [5] and k-support norm [2]. For the rest of the paper, we will use ?A , TA (? ? ), CA (? ? )
and ?A (? ? ) with A replaced by appropriate subscript for specific norms. For any vector u and
coordinate set S, we define uS by zeroing out all the coordinates outside S.
3
General Analysis for Atomic Norms
In this section, we present detailed analysis of the general bounds for the geometric measures,
w(?A ), w(CA (? ? )) and ?A (? ? ). In general, knowing the atomic set A is sufficient for bounding w(?A ). For w(CA (? ? )) and ?A (? ? ), we only need a single subgradient of k? ? kA and some
simple additional calculations.
3.1
Gaussian width of unit norm ball
Although the atomic set A may contain uncountably many vectors, we assume that A can be decomposed as a union of M ?simple? sets, A = A1 ? A2 ? . . . ? AM . By ?simple,? we mean the
Gaussian width of each Ai is easy to compute/bound. Such a decomposition assumption is often
satisfied by commonly used atomic norms, e.g., L1 , L1 /L2 , OWL, k-support norm. The Gaussian
width of the unit norm ball of k ? kA can be easily obtained using the following lemma, which is
essentially the Lemma 2 in [16]. Related results appear in [16].
Lemma 2 Let M > 4, A1 , ? ? ? , AM ? Rp , and A = ?m Am . The Gaussian width of unit norm
ball of k ? kA satisfies
p
w(?A ) = w(conv(A)) = w(A) ? max w(Am ) + 2 sup kzk2 log M
(7)
1?m?M
z?A
Next we illustrate application of this result to bounding the width of the unit norm ball of L1 and
OWL norm.
Example 1.1 (L1 norm): Recall that the L1 norm can be viewed as the atomic norm induced by the
set AL1 = {?ei : 1 ? i ? p}, where {ei }pi=1 is the canonical basis of Rp . Since the Gaussian
width of a singleton is 0, if we treat A as the union of individual {+ei } and {?ei }, we have
p
p
(8)
w(?L1 ) ? 0 + 2 log 2p = O( log p) .
Example 1.2 (OWL norm): A recent variant of L1 norm is the so-called ordered weighted L1
Pp
(OWL) norm [13, 25, 5] defined as k?kowl = i=1 wi |?|?i , where w1 ? w2 ? . . . ? wp ? 0 are
pre-specified ordered weights, and |?|? is the permutation of |?| with entries sorted in decreasing
order. In [25], the OWL norm is proved to be an atomic norm with atomic set
(
)
[
[
[
vS
p
p
Aowl =
Ai =
u ? R : uS c = 0, uS = Pi
, v ? {?1}
. (9)
j=1 wj
1?i?p
1?i?p | supp(S)|=i
3
We first apply Lemma 2 to each set Ai , and note that each Ai contains 2i pi atomic vectors.
s
s
r
r
p
p
i
p
2i
2
i
w(Ai ) ? 0 + 2
log 2
? Pi
2 + log
?
2 + log
,
Pi
i
i
w
?
i
( j=1 wj )2
j=1 wj
where w
? is the average of w1 , . . . , wp . Then we apply the lemma again to Aowl and obtain
?
2p
2p
log p
w(?owl ) = w(Aowl ) ?
2 + log p +
log p = O
,
w
?
w
?
w
?
(10)
which matches the result in [13].
3.2
Gaussian width of the intersection of tangent cone and unit sphere
In this subsection, we consider the computation of general w(CA (? ? )). Using the definition of dual
norm, we can write k? ? kA as k? ? kA = supkuk?A ?1 hu, ? ? i, where k ? k?A denotes the dual norm of
k ? kA . The u? for which hu? , ? ? i = k? ? kA , is a subgradient of k? ? kA . One can obtain u? by
simply solving the so-called polar operator [26] for the dual norm k ? k?A ,
u? ? argmax hu, ? ? i .
(11)
kuk?
A ?1
Based on polar operator, we start with the Lemma 3, which plays a key role in our analysis.
Lemma 3 Let u? bePa solution to the polar operator (11), and define the weighted L1 semi-norm
p
k ? ku? as kvku? = i=1 |u?i | ? |vi |. Then the following relation holds
TA (? ? ) ? Tu? (? ? ) ,
where Tu? (? ? ) = cone{v ? Rp | k? ? + vku? ? k? ? ku? }.
The proof of this lemma is in supplementary material. Note that the solution to (11) may not be
unique. A good criterion for choosing u? is to avoid zeros in u? , as any u?i = 0 will lead to the
unboundedness of unit ball of k ? ku? , which could potentially increase the size of Tu? (? ? ). Next we
present the upper bound for w(CA (? ? )).
Theorem 4 Suppose that u? is one of the solutions to (11), and define the following sets,
Q = {i | u?i = 0},
S = {i | u?i 6= 0, ?i? 6= 0},
R = {i | u?i 6= 0, ?i? = 0} .
The Gaussian width w(CA (? ? )) is upper bounded by
? ?
?
? p , if R is empty
w(CA (? ? )) ?
?
?
q
m + 23 s +
2?2max
s log
?2min
,
p?m
s
(12)
, if R is nonempty
where m = |Q|, s = |S|, ?min = mini?R |u?i | and ?max = maxi?S |u?i |.
Proof: By Lemma 3, we have w(CA (? ? )) ? w(Tu? (? ? ) ? Sp?1 ) , w(Cu? (? ? )). Hence we can
focus on bounding w(Cu? (? ? )). We first analyze the structure of v that satisfies k? ? + vku? ?
k? ? ku? . For the coordinates Q = {i | u?i = 0}, the corresponding entries vi ?s can be arbitrary since
it does not affect the value of k? ? + vku? . Thus all possible vQ form a m-dimensional subspace,
?
? = vS?R , and v
? needs to
where m = |Q|. For S ? R = {i | u?i 6= 0}, we define ?? = ?S?R
and v
satisfy
? u? ? k?k
? u? ,
k?
v + ?k
which is similar to the L1 -norm tangent cone except that coordinates are weighted by |u? |. Therefore
we use the techniques for proving the Proposition 3.10 in [10]. Based on the structure of v, The
normal cone at ? ? for Tu? (? ? ) is given by
N (? ? ) = {z : hz, vi ? 0 ?v s.t. kv + ? ? ku? ? k? ? ku? }
= {z : zi = 0 for i ? Q, zi = |u?i |sign(??i )t for i ? S, |zi | ? |u?i |t for i ? R, for any t ? 0} .
4
Given a standard Gaussian random vector g, using the relation between Gaussian width and statistical dimension (Proposition 2.4 and 10.2 in [1]), we have
X
X
X
w2 (Cu? (? ? )) ? E[ inf ? kz ? gk22 ] = E[ inf ?
gi2 +
(zj ? gj )2 +
(zk ? gk )2 ]
z?N (? )
= |Q| + E[
2
? |Q| + t
z?N (? )
X
inf
zS?R ?N (? ? )
X
|u?j |2
i?Q
j?S
(|u?j |sign(??j )t ? gj )2 +
j?S
(zk ? gk )2 ]
k?R
+ |S| + E[
j?S
k?R
X
X
k?R
inf
(zk ? gk )2 ]
|zk |?|u?
k |t
+?
?g 2
(gk ? |u?k |t)2 exp( k )dgk
2
|u?
k |t
X
X 2
1
|u?k |2 t2
?
? |Q| + t2
|u?j |2 + |S| +
exp
?
(?) .
2
2? |u?k |t
2
?
? |Q| + t2
|u?j |2 + |S| +
2?
j?S
k?R
X
X
j?S
k?R
Z
!
The details for the derivation above can be found in Appendix C of [10]. If R is empty, by taking
t = 0, we have
X
(?) ? |Q| + t2
|u?j |2 + |S| = |Q| + |S| = p .
j?S
If Rris nonempty, we denote ?min = mini?R |u?i | and ?max = maxi?S |u?i |. Taking t =
|S?R|
1
2
log
, we obtain
?min
|S|
2 2
?
t
2
2|R| exp ? min
2
|S ? R|
2?max
2
2
?
log
(?) ? |Q| + |S|(?max t + 1) +
= |Q| + |S|
+1
?2min
|S|
2??min t
|R||S|
2?2max
|S ? R|
3
r
+
?
|Q|
+
|S|
log
+ |S| .
2
?min
|S|
2
|S ? R| ? log |S?R|
|S|
Substituting |Q| = m, |S| = s and |S ? R| = p ? m into the last inequality completes the proof.
Suppose that ? ? is a s-sparse vector. We illustrate the above bound on the Gaussian width of the
spherical cap using L1 norm and OWL norm as examples.
Example 2.1 (L1 norm): The dual norm of L1 is L? norm, and its easy to verify that u? =
[1, 1, . . . , 1]T ? Rp is a solution to (11). Applying Theorem 4 to u? , we have
r
r
p
p
3
?
w(CL1 (? )) ?
s + 2s log
=O
s + s log
.
2
s
s
Example 2.2 (OWL norm): For OWL, its dual norm is given by kuk?owl = maxb?Aowl hb, ui.
W.l.o.g. we assume ? ? = |? ? |? , and a solution to (11) is given by u? = [w1 , . . . , ws , w,
? w,
? . . . , w]
? T,
in which w
? is the average of ws+1 , . . . , wp . If all wi ?s are nonzero, the Gaussian width satisfies
r
p
3
2w2
?
w(Cowl (? )) ?
s + 21 s log
.
2
w
?
s
3.3
Restricted norm compatibility
The next theorem gives general upper bounds for the restricted norm compatibility ?A (? ? ).
Theorem 5 Assume that kukA ? max{?1 kuk1 , ?2 kuk2 } for all u ? Rp . Under the setting of
Theorem 4, the restricted norm compatibility ?A (? ? ) is upper bounded by
(
? , if R isnempty
? o
?
?A (? ) ?
,
(13)
?Q + max ?2 , ?1 1 + ??max
s , if R is nonempty
min
where ? = supu?Rp
kukA
kuk2
and ?Q = supsupp(u)?Q
5
kukA
kuk2 .
Proof: As analyzed in the proof of Theorem 4, vQ for v ? Tu? (? ? ) can be arbitrary, and the
vS?R = vQc satisfies
X
X
X
?
?
kvQc + ?Q
=?
|?i? + vi ||u?i | +
|vj ||u?j | ?
|?i? ||u?i |
c ku? ? k?Qc ku?
i?S
=?
X
(|?i? | ? |vi |) |u?i | +
i?S
X
|vj ||u?j | ?
j?R
j?R
X
|?i? ||u?i |
=?
i?S
?min kvR k1 ? ?max kvS k1
i?S
If R is empty, by Lemma 3, we obtain
?A (? ? ) ? ?u? (? ? ) ,
kvkA
kvkA
? sup
=?.
v?Rp kvk2
v?Tu? (? ? ) kvk2
sup
If R is nonempty, we have
?A (? ? ) ? ?u? (? ? ) ?
kvQ kA + kvQc kA
kvkA + kv0 kA
?
sup
kvk2
kv + v0 k2
v?Tu? (? ? )
supp(v)?Q, supp(v0 )?Qc
sup
0
0
?min kvR
k1 ??max kvS
k1
?
kvkA
+
supp(v)?Q kvk2
sup
? ?Q +
max{?1 kv0 k1 , ?2 kv0 k2 }
kv0 k2
sup
supp(v0 )?Qc
0
0
?min kvR
k1 ??max kvS
k1
?max
?(1 + ?min )kv0 k1
max{?2 ,
sup
}
kv0 k2
supp(v0 )?S
? ?Q + max{?2 , ?1
?max
1+
?min
?
s} ,
in which the last inequality in the first line uses the property of Tu? (? ? ).
Remark: We call ? the unrestricted norm compatibility, and ?Q the subspace norm compatibility,
both of which are often easier to compute than ?A (? ? ). The ?1 and ?2 in the assumption of k ? kA
can have multiple choices, and one has the flexibility to choose the one that yields the tightest bound.
Example 3.1 (L1 norm): To apply the Theorem 5 to L1 norm, we can choose ?1 = 1 and ?2 = 0.
We recall the u? for L1 norm, whose Q is empty while R is nonempty. So we have for s-sparse ? ?
?
1 ?
s =2 s.
?L1 (? ? ) ? 0 + max 0, 1 +
1
Example 3.2 (OWL norm): For OWL, note that k ? kowl ? w1 k ? k1 . Hence we choose ?1 = w1
and ?2 = 0. As a result, we similarly have for s-sparse ? ?
n
w1 ? o 2w12 ?
?owl (? ? ) ? 0 + max 0, w1 1 +
s ?
s.
w
?
w
?
4
Tightness of the General Bounds
So far we have shown that the geometric measures can be upper bounded for general atomic norms.
One might wonder how tight the bounds in Section 3 are for these measures. For w(?A ), as the
result from [16] depends on the decomposition of A for the ease of computation, it might be tricky
to discuss its tightness in general. Hence we will focus on the other two, w(CA (? ? )) and ?A (? ? ).
To characterize the tightness, we need to compare the lower bounds of w(CA (? ? )) and ?A (? ? ),
with their upper bounds determined by u? . While there can be multiple u? , it is easy to see that any
convex combination of them is also a solution to (11). Therefore we can always find a u? that has
the largest support, i.e., supp(u0 ) ? supp(u? ) for any other solution u0 . We will use such u? to
generate the lower bounds. First we need the following lemma for the cone TA (? ? ).
Lemma 6 Consider a solution u? to (11), which satisfies supp(u0 ) ? supp(u? ) for any other
solution u0 . Under the setting of notations in Theorem 4, we define an additional set of coordinates
P = {i | u?i = 0, ?i? = 0}. Then the tangent cone TA (? ? ) satisfies
T1 ? T2 ? cl(TA (? ? )) ,
(14)
where ? denotes the direct (Minkowski) sum operation, cl(?) denotes the closure, T1 = {v ?
Rp | vi = 0 for i ?
/ P} is a |P|-dimensional subspace, and T2 = {v ? Rp | sign(vi ) =
?
?sign(?i ) for i ? supp(? ? ), vi = 0 for i ?
/ supp(? ? )} is a | supp(? ? )|-dimensional orthant.
6
The proof of Lemma 6 is given in supplementary material. The following theorem gives us the lower
bound for w(CA (? ? )) and ?A (? ? ).
Theorem 7 Under the setting of Theorem 4 and Lemma 6, the following lower bounds hold,
?
w(CA (? ? )) ? O( m + s) ,
?A (? ? ) ? ?Q?S .
(15)
(16)
Proof: To lower bound w(CA (? ? )), we use Lemma 6 and the relation between Gaussian width and
statistical dimension (Proposition 10.2 in [1]),
r
w(TA (? ? )) ? w(T1 ? T2 ? Sp?1 ) ? E[
inf ? kz ? gk22 ] ? 1 (?) ,
z?NT1 ?T2 (? )
?
where the normal cone NT1 ?T2 (? ) of T1 ? T2 is given by NT1 ?T2 (? ? ) = {z : zi = 0 for i ?
P, sign(zi ) = sign(?i? ) for i ? supp(? ? )}. Hence we have
r
s X
X
?
| supp(? ? )|
2
2
gj I{gj ?j? <0} ] ? 1 = |P| +
gi +
? 1 = O( m + s) ,
(?) = E[
2
?
i?P
j?supp(? )
where the last equality follows the fact that P ? supp(? ? ) = Q ? S. This completes proof of (15).
To prove (16), we again use Lemma 6 and the fact P ? supp(? ? ) = Q ? S. Noting that k ? kA is
invariant under sign-changes, we get
kvkA
kvkA
kvkA
?A (? ? ) = sup
? sup
=
sup
= ?Q?S .
kvk
kvk
?
?
2
2
v?T
?T
v?TA (? )
supp(v)?P?supp(? ) kvk2
1
2
Remark: We compare the lower bounds (15) (16) with the upper bounds (12) (13). If R is empty,
m + s = p, and the lower bounds actually match the upper bounds up to a constant factor for both
w(CA (? ? )) and ?A (? ? ). If R is nonempty, the lower and upper bounds of w(CA (? ? )) differ by a
2?2
?
multiplicative factor ?2max log( p?m
s ), which can be small in practice. For ?A (? ), as ?Q?S ? ?Q ,
min
?
we usually have at most an additive O( s) term in upper bound, since the assumption on k ? kA
often holds with a constant ?1 and ?2 = 0 for most norms.
5
Application to the k-Support Norm
In this section, we apply our general results on geometric measures to a non-trivial example, ksupport norm [2], which has been proved effective for sparse recovery [11, 17, 12]. The k-support
norm can be viewed as an atomic norm, for which A = {a ? Rp | kak0 ? k, kak2 ? 1}. The
k-support norm can be explicitly expressed as an infimum convolution given by
nX
o
k?ksp
kui k2 kui k0 ? k ,
(17)
k = P inf
i
ui =?
i
and its dual norm is the so-called 2-k symmetric gauge norm defined as
?
k?ksp
= k?k(k) = k|?|?1:k k2 ,
k
(18)
It is straightforward to see that the dual norm is simply the L2 norm of the largest k entries in |?|.
Suppose that all the sets of coordinates with cardinality k can be listed as S1 , S2 , . . . , S(p) . Then A
k
can be written as A = A1 ? . . . ? A(p) , where each Ai = {a ? Rp | supp(a) ? Si , kak2 ? 1}.
k
p
?
It is not difficult to see that w(Ai ) = E supa?Ai ha, gi = EkgSi k2 ? EkgSi k22 ? k. Using
Lemma 2, we know the Gaussian width of the unit ball of k-support norm
s
r
r
p
p
?
?
p
sp
w(?k ) ? k + 2 log
? k + 2 k log
+k =O
k log
+k ,
(19)
k
k
k
?
which matches that in [11]. Now we turn to the calculation of w(Cksp (? ? )) and ?sp
k (? ). As we have
?
seen in the general analysis, the solution u to the polar operator (11) is important in characterizing
the two quantities. We first present a simple procedure in Algorithm 1 for solving the polar operator
?
for k ? ksp
procedure can be utilized to compute
k . The time complexity is only O(p log p + k). This
?
the k-support norm, or be applied to estimation with k ? ksp
using
generalized conditional gradient
k
method [26], which requires solving the polar operator in each iteration.
7
?
Algorithm 1 Solving polar operator for k ? ksp
k
Input: ? ? ? Rp , positive integer k
Output: solution u? to the polar operator (11)
1: z = |? ? |? , t = 0
2: for i = 1 to k do
3:
?1 = kz1:i?1 k2 , ?2 = kzi:p k1 , d = k ? i + 1, ? = ?
4:
if
?12
2?
?2
,
?22 d+?12 d2
? = ? ?1
2
1?? 2 d
,w=
z1:i?1
2?
+ ??2 > t and ? < wi?1 then
?2
5:
t = 2?1 + ??2 , u? = [w, ?1]T (1 is (p ? i + 1)-dimensional vector with all ones)
6:
end if
7: end for
8: change the sign and order of u? to conform with ? ?
9: return u?
?
Theorem 8 For a given ? ? , Algorithm 1 returns a solution to polar operator (11) for k ? ksp
k .
The proof of this theorem is provided in supplementary material. Now we consider w(Cksp (? ? ))
?
?
?
?
and ?sp
k (? ) for s-sparse ? (here s-sparse ? means | supp(? )| = s) in three scenarios: (i) overspecified k, where s < k, (ii) exactly specified k, where s = k, and (iii) under-specified k, where
s > k. The bounds are given in Theorem 9, and the proof is also in supplementary material.
Theorem 9 For given s-sparse ? ? ? Rp , the Gaussian width w(Cksp (? ? )) and the restricted norm
?
compatibility ?sp
k (? ) for a specified k are given by
? q
? ?
p , if s < k
2p
?
?
?
?
k , if s < k
?
?
?
?
r
?
?
?
?
? ?
?
?2
2?max
p
3
??
sp ?
2(1 + ?max
) , if s = k
,
w(Cksp (? ? )) ?
2 s + ? ?2 s log s , if s = k , ?k (? ) ?
?
min
min
?
?
?
?
?
?
?
? q
q
?
?
?
?
2?2
? (1 + ?max ) 2s , if s > k
?
3
s + max s log p , if s > k
2
?2min
?min
s
?
?
= maxi?supp(?? ) |?i? | and ?min
= mini?supp(?? ) |?i? |.
where ?max
k
(20)
sp ?
?
Remark: Previously ?sp
k (? ) is unknown and the bound on w(Ck (? )) given in [11] is loose, as
it used the result in [21]. Based on Theorem 9, we note that the choice of k can affect the recovery
guarantees. Over-specified k leads to a direct dependence on the dimensionality p for w(Cksp (? ? ))
?
and ?sp
k (? ), resulting in a weak error bound. The bounds are sharp for exactly specified or underspecified k. Thus, it is better to under-specify k in practice. where the estimation error satifies
!
r
s
+
s
log
(p/k)
?
(21)
k???n ? ? k2 ? O
n
6
Conclusions
In this work, we study the problem of structured estimation with general atomic norms that are
invariant under sign-changes. Based on Dantzig-type estimators, we provide the general bounds
for the geometric measures. In terms of w(?A ), instead of comparison with other results or direct
calculation, we demonstrate a third way to compute it based on decomposition of atomic set A.
For w(CA (? ? )) and ?A (? ? ), we derive general upper bounds, which only require the knowledge
of a single subgradient of k? ? kA . We also show that these upper bounds are close to the lower
bounds, which makes them practical in general. To illustrate our results, we discuss the application
to k-support norm in details and shed light on the choice of k in practice.
Acknowledgements
The research was supported by NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS1314560, IIS-0953274, IIS-1029711, and by NASA grant NNX12AQ39A.
8
References
[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: Phase transitions in convex
programs with random data. Inform. Inference, 3(3):224?294, 2014.
[2] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In Advances in Neural
Information Processing Systems (NIPS), 2012.
[3] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In Advances
in Neural Information Processing Systems (NIPS), 2014.
[4] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The
Annals of Statistics, 37(4):1705?1732, 2009.
[5] M. Bogdan, E. van den Berg, W. Su, and E. Candes. Statistical estimation and testing via the sorted L1
norm. arXiv:1310.1969, 2013.
[6] T. T. Cai, T. Liang, and A. Rakhlin. Geometrizing Local Rates of Convergence for High-Dimensional
Linear Inverse Problems. arXiv:1404.4408, 2014.
[7] E. Candes and T Tao. The Dantzig selector: statistical estimation when p is much larger than n. The
Annals of Statistics, 35(6):2313?2351, 2007.
[8] E. J. Cand`es, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207?1223, 2006.
[9] E. J. Cands and B. Recht. Simple bounds for recovering low-complexity models. Math. Program., 141(12):577?589, 2013.
[10] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse
problems. Foundations of Computational Mathematics, 12(6):805?849, 2012.
[11] S. Chatterjee, S. Chen, and A. Banerjee. Generalized dantzig selector: Application to the k-support norm.
In Advances in Neural Information Processing Systems (NIPS), 2014.
[12] S. Chen and A. Banerjee. One-bit compressed sensing with the k-support norm. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[13] M. A. T. Figueiredo and R. D. Nowak. Sparse estimation with strongly correlated variables using ordered
weighted l1 regularization. arXiv:1409.4005, 2014.
[14] Y. Gordon. Some inequalities for gaussian processes and applications. Israel Journal of Mathematics,
50(4):265?289, 1985.
[15] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In International
Conference on Machine Learning (ICML), 2009.
[16] A. Maurer, M. Pontil, and B. Romera-Paredes. An Inequality with Applications to Structured Sparsity
and Multitask Dictionary Learning. In Conference on Learning Theory (COLT), 2014.
[17] A. M. McDonald, M. Pontil, and D. Stamos. Spectral k-support norm regularization. In Advances in
Neural Information Processing Systems (NIPS), 2014.
[18] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of
regularized M -estimators. Statistical Science, 27(4):538?557, 2012.
[19] S. Oymak, C. Thrampoulidis, and B. Hassibi. The Squared-Error of Generalized Lasso: A Precise Analysis. arXiv:1311.0830, 2013.
[20] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex
programming approach. IEEE Transactions on Information Theory, 59(1):482?494, 2013.
[21] N. Rao, B. Recht, and R. Nowak. Universal Measurement Bounds for Structured Sparse Signal Recovery.
In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[22] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society,
Series B, 58(1):267?288, 1996.
[23] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In
Sampling Theory, a Renaissance. 2015.
[24] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the
Royal Statistical Society, Series B, 68:49?67, 2006.
[25] X. Zeng and M. A. T. Figueiredo. The Ordered Weighted `1 Norm: Atomic Formulation, Projections, and
Algorithms. arXiv:1409.4271, 2014.
[26] X. Zhang, Y. Yu, and D. Schuurmans. Polar operators for structured sparse estimation. In Advances in
Neural Information Processing Systems (NIPS), 2013.
9
| 5834 |@word multitask:1 cu:3 briefly:1 norm:120 paredes:1 hu:4 closure:1 d2:1 decomposition:4 jacob:1 kz1:1 tr:8 contains:2 series:2 romera:1 existing:2 ka:19 si:1 written:1 nt1:3 readily:1 additive:1 engg:1 v:3 intelligence:2 characterization:4 provides:1 math:1 kv0:6 zhang:1 along:1 c2:2 kvk2:5 direct:3 yuan:1 prove:1 introduce:1 expected:2 cand:1 spherical:5 decomposed:1 decreasing:1 little:1 cardinality:1 conv:2 provided:1 bounded:3 notation:1 israel:1 argmin:2 z:1 unified:1 guarantee:4 shed:1 exactly:2 k2:12 tricky:1 unit:12 grant:2 appear:1 t1:4 positive:1 kuk1:1 treat:1 local:1 subscript:1 sivakumar:1 might:2 studied:1 dantzig:7 ease:1 limited:1 practical:2 unique:1 testing:1 atomic:27 union:2 practice:3 supu:2 procedure:2 pontil:2 nondecomposable:1 universal:1 vert:1 projection:1 pre:1 get:3 convenience:1 close:1 selection:2 operator:10 romberg:1 applying:1 straightforward:1 convex:5 focused:1 qc:3 decomposable:6 recovery:10 unstructured:1 pure:1 estimator:13 insight:1 proving:1 kuka:3 notion:1 coordinate:6 annals:2 play:1 suppose:3 programming:1 us:1 approximated:1 utilized:1 underspecified:1 observed:1 role:1 wj:3 complexity:6 ui:2 tight:2 solving:4 basis:1 easily:1 k0:1 derivation:1 effective:1 artificial:2 outside:1 choosing:1 quite:1 whose:1 widely:1 solve:2 supplementary:4 larger:1 tightness:6 compressed:3 statistic:5 gi:3 noisy:1 eigenvalue:1 cai:1 product:2 tu:9 relevant:1 hadamard:1 iff:1 flexibility:1 kv:5 cl1:1 convergence:1 empty:5 bogdan:1 derive:2 illustrate:4 recovering:1 c:1 differ:1 gi2:1 material:4 owl:16 require:3 suffices:1 proposition:3 extension:1 hold:3 considered:1 normal:3 exp:3 nw:2 substituting:1 al1:1 bickel:1 a2:1 dictionary:1 dgk:1 estimation:14 polar:13 largest:2 grouped:1 gauge:1 city:1 weighted:6 gaussian:23 always:1 ksupport:1 rather:2 ck:1 avoid:1 cr:9 shrinkage:1 renaissance:1 mccoy:1 focus:3 am:4 amelunxen:1 inference:1 inaccurate:1 typically:2 stamos:1 w:2 relation:3 lotz:1 tao:2 compatibility:13 ksp:6 dual:12 gk22:2 colt:1 plan:1 constrained:1 special:1 field:1 sampling:1 broad:1 yu:2 icml:1 t2:11 gordon:1 few:1 geometrizing:1 individual:1 replaced:1 argmax:1 consisting:1 phase:1 geometry:1 highly:1 umn:1 fazayeli:1 analyzed:1 kvk:2 light:1 accurate:2 edge:1 nowak:2 incomplete:3 maurer:1 re:3 minimal:1 rao:1 cover:1 entry:7 wonder:1 characterize:1 shengc:1 vershynin:1 recht:3 international:3 negahban:1 oymak:1 accessible:1 stay:1 w1:7 squared:2 again:2 satisfied:3 choose:3 possibly:1 vku:3 return:2 supp:24 parrilo:1 singleton:1 twin:1 satisfy:1 notable:1 explicitly:1 kzk2:1 depends:2 vi:8 multiplicative:1 analyze:1 characterizes:1 sup:12 start:2 recover:1 candes:2 yield:2 weak:1 simultaneous:1 inform:1 overspecified:1 definition:1 pp:1 proof:10 proved:2 popular:1 recall:2 subsection:1 cap:5 dimensionality:1 knowledge:1 organized:1 actually:1 nasa:1 originally:1 ta:7 response:1 specify:2 ritov:1 formulation:1 strongly:1 sheng:1 tropp:2 ei:4 su:1 zeng:1 banerjee:5 overlapping:5 logistic:1 infimum:1 k22:1 contain:1 true:1 verify:1 ccf:1 hence:4 equality:1 regularization:3 symmetric:1 nonzero:2 wp:3 width:22 criterion:1 generalized:4 demonstrate:1 mcdonald:1 l1:30 dedicated:1 consideration:1 arindam:1 recently:1 elementwise:1 measurement:4 ai:8 mathematics:3 similarly:1 zeroing:1 minnesota:1 specification:1 stable:1 gj:4 etc:2 v0:4 recent:4 inf:7 belongs:1 scenario:1 certain:5 inequality:5 captured:1 minimum:1 additional:3 unrestricted:1 seen:1 living:1 signal:5 ii:8 full:3 semi:1 multiple:2 u0:4 match:3 characterized:1 calculation:4 sphere:2 
lin:1 ravikumar:1 a1:3 prediction:1 variant:2 basic:1 regression:3 essentially:1 arxiv:5 iteration:1 c1:2 whereas:1 subdifferential:1 addition:2 background:2 completes:2 w2:6 rest:2 induced:3 hz:1 leveraging:1 call:1 integer:1 noting:1 yk22:1 iii:3 easy:3 maxb:1 hb:1 affect:2 gave:1 zi:5 lasso:8 regarding:1 knowing:1 remark:3 detailed:1 listed:1 tsybakov:1 extensively:1 reduced:1 generate:1 exist:2 canonical:1 zj:1 nsf:1 sign:12 tibshirani:1 conform:1 write:1 express:1 group:3 key:2 kuk:2 graph:1 subgradient:4 cone:14 sum:1 inverse:2 throughout:1 chandrasekaran:1 w12:1 appendix:1 bit:2 bound:54 span:1 min:20 minkowski:1 structured:8 ball:12 combination:1 wi:3 s1:1 satifies:1 den:1 restricted:12 invariant:4 vq:2 previously:1 foygel:1 discus:3 loose:2 nonempty:6 turn:1 needed:1 know:1 nnx12aq39a:1 end:3 available:1 tightest:1 operation:1 apply:4 appropriate:1 spectral:1 rp:22 denotes:4 k1:10 especially:1 establish:2 society:2 unchanged:1 quantity:1 flipping:1 dependence:1 kak2:2 gradient:1 subspace:3 distance:2 nx:1 trivial:1 willsky:1 mini:3 providing:1 liang:1 difficult:3 potentially:1 gk:4 design:1 unknown:2 upper:18 convolution:1 orthant:1 extended:2 communication:1 precise:1 rn:3 supa:1 sharp:3 arbitrary:2 thrampoulidis:1 specified:6 z1:1 nip:5 proceeds:1 usually:1 appeared:1 sparsity:5 program:2 max:26 royal:2 wainwright:1 suitable:1 overlap:1 regularized:2 review:2 literature:2 geometric:10 tangent:7 l2:6 acknowledgement:1 asymptotic:1 permutation:1 srebro:1 foundation:1 sufficient:1 pi:5 uncountably:1 supported:1 last:3 copy:1 figueiredo:2 taking:2 characterizing:1 sparse:14 van:1 dimension:4 valid:2 transition:1 computes:1 kz:2 commonly:1 far:1 kzi:1 transaction:1 selector:4 unnecessary:1 conclude:1 ku:8 zk:4 robust:1 ca:19 schuurmans:1 kui:2 investigated:1 cl:2 vj:2 sp:14 unboundedness:1 aistats:2 bounding:6 noise:1 whole:1 s2:1 succinct:1 hassibi:1 third:1 theorem:17 kuk2:4 xt:4 specific:3 sensing:3 maxi:3 rakhlin:1 chatterjee:1 kx:1 chen:4 mildly:1 easier:1 intersection:1 simply:2 kvr:3 expressed:1 ordered:5 satisfies:7 obozinski:1 conditional:1 kak0:1 goal:1 viewed:2 sorted:2 change:6 specifically:1 determined:2 except:1 lemma:20 called:3 e:1 berg:1 support:17 dept:1 argyriou:1 correlated:1 |
5,341 | 5,835 | Subsampled Power Iteration: a Unified Algorithm for
Block Models and Planted CSP?s
Will Perkins
University of Birmingham
[email protected]
Vitaly Feldman
IBM Research - Almaden
[email protected]
Santosh Vempala
Georgia Tech
[email protected]
Abstract
We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems (CSP),
via a common generalization in terms of random bipartite graphs. Our algorithm
matches up to a constant factor the best-known bounds for the number of edges (or
constraints) needed for perfect recovery and its running time is linear in the number of edges used. The time complexity is significantly better than both spectral
and SDP-based approaches.
The main contribution of the algorithm is in the case of unequal sizes in the bipartition that arises in our reduction from the planted CSP. Here our algorithm
succeeds at a significantly lower density than the spectral approaches, surpassing
a barrier based on the spectral norm of a random matrix.
Other significant features of the algorithm and analysis include (i) the critical use
of power iteration with subsampling, which might be of independent interest; its
analysis requires keeping track of multiple norms of an evolving solution (ii) the
algorithm can be implemented statistically, i.e., with very limited access to the
input distribution (iii) the algorithm is extremely simple to implement and runs in
linear time, and thus is practical even for very large instances.
1
Introduction
A broad class of learning problems fits into the framework of obtaining a sequence of independent
random samples from a unknown distribution, and then (approximately) recovering this distribution
using as few samples as possible. We consider two natural instances of this framework: the stochastic block model in which a random graph is formed by choosing edges independently at random
with probabilities that depend on whether an edge crosses a planted partition, and planted k-CSP?s
(or planted k-SAT) in which width-k boolean constraints are chosen independently at random with
probabilities that depend on their evaluation on a planted assignment to a set of boolean variables.
We propose a natural bipartite generalization of the stochastic block model, and then show that
planted k-CSP?s can be reduced to this model, thus unifying graph partitioning and planted CSP?s
into one problem. We then give an algorithm for solving random instances of the model. Our
algorithm is optimal up to a constant factor in terms of number of sampled edges and running time
for the bipartite block model; for planted CSP?s the algorithm matches up to log factors the best
possible sample complexity in several restricted computational models and the best-known bounds
for any algorithm. A key feature of the algorithm is that when one side of the bipartition is much
1
larger than the other, then our algorithm succeeds at significantly lower edge densities than using
Singular Value Decomposition (SVD) on the rectangular adjacency matrix. Details are in Sec. 5.
The bipartite block model begins with two vertex sets, V1 and V2 (of possibly unequal size), each
with a balanced partition, (A1 , B1 ) and (A2 , B2 ) respectively. Edges are added independently at
random between V1 and V2 with probabilities that depend on which parts the endpoints are in: edges
between A1 and A2 or B1 and B2 are added with probability ?p, while the other edges are added
with probability (2 ? ?)p, where ? ? [0, 2] and p is the overall edge density. To obtain the stochastic
block model we can identify V1 and V2 . To reduce planted CSP?s to this model, we first reduce the
problem to an instance of noisy r-XOR-SAT, where r is the complexity parameter of the planted
CSP distribution defined in [19] (see Sec. 2 for details). We then identify V1 with literals, and V2
with (r ? 1)-tuples of literals, and add an edge between literal l ? V1 and tuple t ? V2 when the
r-clause consisting of their union appears in the formula. The reduction leads to a bipartition with
V2 much larger than V1 .
Our algorithm is based on applying power iteration with a sequence of matrices subsampled from the
original adjacency matrix. This is in contrast to previous algorithms that compute the eigenvectors
(or singular vectors) of the full adjacency matrix. Our algorithm has several advantages. Such
an algorithm, for the special case of square matrices, was previously proposed and analyzed in a
different context by Korada et al. [25].
? Up to a constant factor, the algorithm matches the best-known (and in some cases the bestpossible) edge or constraint density needed for complete recovery of the planted partition or
assignment. The algorithm for planted CSP?s finds the planted assignment using O(nr/2 ?
log n) clauses for a clause distribution of complexity r (see Sec. 2 for the formal definition),
nearly matching computational lower bounds for SDP hierarchies [30] and the class of
statistical algorithms [19].
? The algorithm is fast, running in time linear in the number of edges or constraints used,
unlike other approaches that require computing eigenvectors or solving semi-definite programs.
? The algorithm is conceptually simple and easy to implement. In fact it can be implemented
in the statistical query model, with very limited access to the input graph [19].
? It is based on the idea of iteration with subsampling which may have further applications
in the design and analysis of algorithms.
? Most notably, the algorithm succeeds where generic spectral approaches fail. For the case
of the planted CSP, when |V2 | |V1 |, our algorithm succeeds at a polynomial factor
sparser density than the approaches of McSherry [28], Coja-Oghlan [7], and Vu [33]. The
algorithm succeeds despite the fact that the ?energy? of the planted vector with respect to
the random adjacency matrix is far below the spectral norm of the matrix. In previous
analyses, this was believed to indicate failure of the spectral approach. See Sec. 5.
1.1
Related work
The algorithm of Mossel, Neeman and Sly [29] for the standard stochastic block model also runs
in near linear time, while other known algorithmic approaches for planted partitioning that succeed
near the optimal edge density [28, 7, 27] perform eigenvector or singular vector computations and
thus require superlinear time, though a careful randomized implementation of low-rank approximations can reduce the running time of McSherry?s algorithm substantially [2].
For planted satisfiability, the algorithm of Flaxman for planted 3-SAT works for a subset of planted
distributions (those with distribution complexity at most 2 in our definition below) using O(n) constraints, while the algorithm of Coja-Oghlan, Cooper, and Frieze [8] works for planted 3-SAT distributions that exclude unsatisfied clauses and uses O(n3/2 ln10 n) constraints.
The only previous algorithm that finds the planted assignment for all distributions of planted kCSP?s is the SDP-based algorithm of Bogdanov and Qiao [5] with the folklore generalization to
? r/2 ) constraints. This
r-wise independent predicates (cf. [30]). Similar to our algorithm, it uses O(n
algorithm effectively solves the noisy r-XOR-SAT instance and therefore can be also used to solve
? r/2 ) clauses (via the reduction in Sec. 4).
our general version of planted satisfiability using O(n
2
Notably for both this algorithm and ours, having a completely satisfying planted assignment plays
no special role: the number of constraints required depends only on the distribution complexity.To
the best of our knowledge, our algorithm is the first for the planted k-SAT problem that runs in linear
time in the number of constraints used.
It is important to note that in planted k-CSP?s, the planted assignment becomes recoverable with
high probability after at most O(n log n) random clauses yet the best known efficient algorithms
require n?(r/2) clauses. Problems exhibiting this type of behavior have attracted significant interest
in learning theory [4, 12, 31, 15, 32, 3, 10, 16] and some of the recent hardness results are based on
the conjectured computational hardness of the k-SAT refutation problem [10, 11].
Our algorithm is arguably simpler than the approach in [5] and substantially improves the running
time even for small k. Another advantage of our approach is that it can be implemented using
restricted access to the distribution of constraints referred to as statistical queries [24, 17]. Roughly
speaking, for the planted SAT problem this access allows an algorithm to evaluate multi-valued
functions of a single clause on randomly drawn clauses or to estimate expectations of such functions,
without direct access to the clauses themselves. Recently, in [19], lower bounds on the number of
clauses necessary for a polynomial-time statistical algorithm to solve planted k-CSPs were proved.
It is therefore important to understand the power of such algorithms for solving planted k-CSPs.
A statistical implementation of our algorithm gives an upper bound that nearly matches the lower
bound for the problem. See [19] for the formal details of the model and statistical implementation
of our algorithm.
Korada, Montanari and Oh [25] analyzed the ?Gossip PCA? algorithm, which for the special case
of an equal bipartition is the same as our subsampled power iteration. The assumptions, model,
and motivation in the two papers are different and the results incomparable. In particular, while
our focus and motivation are on general (nonsquare) matrices, their work considers extracting a
planting of rank k greater than 1 in the square setting. Their results also assume an initial vector
with non-trivial correlation with the planted vector. The nature of the guarantees is also different.
2
Model and results
Bipartite stochastic block model:
Definition 1. For ? ? [0, 2] \ {1}, n1 , n2 even, and P1 = (A1 , B1 ), P2 = (A2 , B2 ) bipartitions of vertex sets V1 , V2 of size n1 , n2 respectively, we define the bipartite stochastic block model
B(n1 , n2 , P1 , P2 , ?, p) to be the random graph in which edges between vertices in A1 and A2 and
B1 and B2 are added independently with probability ?p and edges between vertices in A1 and B2
and B1 and A2 with probability (2 ? ?)p.
Here ? is a fixed constant while p will tend to 0 as n1 , n2 ? ?. Note that setting n1 = n2 = n, and
identifying A1 and A2 and B1 and B2 gives the usual stochastic block model (with loops allowed);
for edge probabilities a/n and b/n, we have ? = 2a/(a + b) and p = (a + b)/2n, the overall edge
density. For our application to k-CSP?s, it will be crucial to allow vertex sets of very different sizes,
i.e. n2 n1 .
The algorithmic task for the bipartite block model is to recover one or both partitions (completely
or partially) using as few edges and as little computational time as possible. In this work we will
assume that n1 ? n2 , and we will be concerned with the algorithmic task of recovering the partition
P1 completely, as this will allow us to solve the planted k-CSP problems described below. We define
complete recovery of P1 as finding the exact partition with high probability over the randomness in
the graph and in the algorithm.
Theorem 1. Assume n1 ? n2 . There is a constant C so that the Subsampled Power Iteration
algorithm described below completely recovers the partition P1 in the bipartite stochastic block
C log
?n1
model B(n1 , n2 , P1 , P2 , ?, p) with probability 1 ? o(1) as n1 ? ? when p ? (??1)
2 n n . Its
1 2
?
log n1
running time is O
n1 n2 ? (??1)2 .
Note that for the usual stochastic block model this gives an algorithm using O(n log n) edges and
O(n log n) time, which is the best possible for complete recovery since that many edges are needed
for every vertex to appear in at least edge. With edge probabilities a log n/n and b log n/n, our
3
results require (a ? b)2 ? C(a + b) for some absolute constant C, matching the dependence on a
and b in [6, 28] (see [1] for a discussion of the best possible threshold for complete recovery).
?
For any n1 , n2 , at least n1 n2 edges are necessary for even non-trivial partial recovery, as below
that threshold the graph consists only of small components (and even if a correct partition is found
on each ?
component, correlating the partitions of different components is impossible). Similarly at
least ?( n1 n2 log n1 ) are needed for complete recover of P1 since below that density, there are
vertices in V1 joined only to vertices of degree 1 in V2 .
For very lopsided graphs, with n2 n1 log2 n1 , the running time is sublinear in the size of V2 ; this
requires careful implementation and is essential to achieving the running time bounds for planted
CSP?s described below.
Planted k-CSP?s: We now describe a general model for planted satisfiability problems introduced in [19]. For an integer k, let Ck be the set of all ordered k-tuples of literals from
x1 , . . . , xn , x1 , . . . , xn with no repetition of variables. For a k-tuple of literals C and an assignment ?, ?(C) denotes the vector of values that ? assigns to the literals in C. A planting distribution
Q : {?1}k ? [0, 1] is a probability distribution over {?1}k .
Definition 2. Given a planting distribution Q : {?1}k ? [0, 1], and an assignment ? ? {?1}n ,
we define the random constraint satisfaction problem FQ,? (n, m) by drawing m k-clauses from Ck
independently according to the distribution
Q(?(C))
Q? (C) = P
0
0
C ?Ck Q(?(C ))
where ?(C) is the vector of values that ? assigns to the k-tuple of literals comprising C.
Definition 3. The distribution complexity r(Q) of the planting distribution Q is the smallest integer
?
r ? 1 so that there is some S ? [k], |S| = r, so that the discrete Fourier coefficient Q(S)
is
non-zero.
In other words, the distribution complexity of Q is r if Q is an (r ? 1)-wise independent distribution
on {?1}k but not an r-wise independent distribution. The uniform distribution over all clauses,
?
Q ? 2?k , has Q(S)
= 0 for all |S| ? 1, and so we define its complexity to be ?. The uniform
distribution does not reveal any information about ?, and so inference is impossible. For any Q that
is not the uniform distribution over clauses, we have 1 ? r(Q) ? k.
Note that the uniform distribution on k-SAT clauses with at least one satisfied literal under ? has
distribution complexity r = 1. r = 1 means that there is a bias towards either true or false literals.
In this case, a very simple algorithm is effective: for each variable, count the number of times it
appears negated and not negated, and take the majority vote. For distributions with complexity
r ? 2, the expected number of true and false literals in the random formula are equal and so this
simple algorithm fails.
Theorem 2. For any planting distribution Q, there exists an algorithm that for any assignment
?, given an instance of FQ,? (n, m) completely recovers the planted assignment ? for m =
O(nr/2 log n) using O(nr/2 log n) time, where r ? 2 is the distribution complexity of Q. For
distribution complexity r = 1, there is an algorithm that gives non-trivial partial recovery with
O(n1/2 ) constraints and complete recovery with O(n log n) constraints.
3
The algorithm
We now present our algorithm for the bipartite stochastic block model. We define vectors u and v
of dimension n1 and n2 respectively, indexed by V1 and V2 , with ui = 1 for i ? A1 , ui = ?1
for i ? B1 , and similarly for v. To recover the partition P1 it suffices to find either u or ?u. We
will find this vector by multiplying a random initial vector x0 by a sequence of centered adjacency
matrices and their transposes.
We form these matrices as follows: let Gp be the random bipartite graph drawn from the model
B(n1 , n2 , P1 , P2 , ?, p), and T a positive integer. Then form T different bipartite graphs G1 , . . . , GT
on the same vertex sets V1 , V2 by placing each edge from Gp uniformly and independently at random
into one of the T graphs. The resulting graphs have the same marginal distribution.
4
Next we form the n1 ? n2 adjacency matrices A1 , . . . , AT for G1 , . . . GT with rows indexed by V1
and columns by V2 with a 1 in entry (i, j) if vertex i ? V1 is joined to vertex j ? V2 . Finally we
center the matrices by defining Mi = Ai ? Tp J where J is the n1 ? n2 all ones matrix.
The basic iterative steps are the multiplications y = M T x and x = M y.
Algorithm: Subsampled Power Iteration.
1. Form T = 10 log n1 matrices M1 , . . . , MT by uniformly and independently assigning
each edge of the bipartite block model to a graph G1 , . . . , GT , then forming the matrices
Mi = Ai ? Tp J, where Ai is the adjacency matrix of Gi and J is the all ones matrix.
2. Sample x ? {?1}n1 uniformly at random and let x0 = ?xn1 .
3. For i = 1 to T /2 let
yi =
T
xi?1
M2i?1
;
T
xi?1 k
kM2i?1
xi =
M2i y i
;
kM2i y i k
z i = sgn(xi ).
4. For each coordinate j ? [n1 ] take the majority vote of the signs of zji for all i ?
{T /4, . . . , T /2} and call this vector v:
?
?
T
X
v j = sgn ?
zji ? .
i=T /2
5. Return the partition indicated by v.
The analysis of the resampled power iteration algorithm proceeds in four phases, during which
we track the progress of two vectors xi and y i , as measured by their inner product with u and v
respectively. We define Ui := u ? xi and Vi := v ? y i . Here we give an overview of each phase:
? Phase 1. Within log n1 iterations, |Ui | reaches log n1 . We show that conditioned on the
value of Ui , there is at least a 1/2 chance that |Ui+1 | ? 2|Ui |; that Ui never gets too small;
and that in log n1 steps, a run of log log n1 doublings pushes the magnitude of Ui above
log n1 .
? Phase 2. After reaching log?n1 , |Ui | makes steady, predictable progress, doubling at each
step whp until it reaches ?( n1 ), at which point we say xi has strong correlation with u.
? Phase 3. Once xi is strongly correlated with u, we show that z i+1 agrees with either u or
?u on a large fraction of coordinates.
? Phase 4. We show that taking the majority vote of the coordinate-by-coordinate signs of z i
over O(log n1 ) additional iterations gives complete recovery whp.
Running time If n2 = ?(n1 ), then a straightforward implementation of the algorithm runs in
time linear in the number of edges used: each entry of xi = M y i (resp. y i = M T xi?1 ) can be
computed as a sum over the edges in the graph associated with M . The rounding and majority vote
are both linear in n1 . However, if n2 n1 , then simply initializing the vector y i will take too much
time. In this case, we have to implement the algorithm more carefully.
Say we have a vector xi?1 and want to compute xi = M2i y i without storing the vector y i . Instead
T
of computing y i = M2i?1
xi?1 , we create a set S i ? V2 of all vertices with degree at least 1 in the
current graph G2i?1 corresponding to the matrix M2i?1 . The size of S i is bounded by the number
of edges in G2i?1 , and checking membership can be done in constant time with a data structure of
size O(|S i |) that requires expected time O(|S i |) to create [21].
Recall that M2i?1 = A2i?1 ? qJ. Then we can write
?
?
n1
X
? 1n2 = y? ? qL1n2 ,
y i = (A2i?1 ? qJ)T xi?1 = y? ? q ?
xi?1
j
j=1
5
where y? is 0 on coordinates j ?
/ Si, L =
i
i
Then to compute x = M2i y , we write
Pn1
j=1
xi?1
j , and 1n2 is the all ones vector of length n2 .
xi = (A2i ? qJ)y i = (A2i ? qJ)(?
y ? qL1n2 )
= (A2i ? qJ)?
y ? qLA2i 1n2 + q 2 LJ1n2
= A2i y? ? qJ y? ? qLA2i 1n2 + q 2 Ln2 1n1
We bound the running time of the computation as follows: we can compute y? in linear time in the
number of edges of G2i?1 using S i . Given y?, computing A2i y? is linear in the number of edges
of G2i and computing qJ y? is linear in the numberPof non-zero entries of y?, which is bounded by
n1
xji?1 is linear in n1 and gives q 2 Ln2 1n1 .
the number of edges of G2i?1 . Computing L = j=1
Computing qLA2i 1n2 is linear in the number of edges of G2i . All together this gives our linear time
implementation.
4
Reduction of planted k-CSP?s to the block model
Here we describe how solving the bipartite block model suffices to solve the planted k-CSP problems. Consider a planted k-SAT problem FQ,? (n, m) with distribution complexity r. Let S ? [k],
?
|S| = r, be such that Q(S)
= ? 6= 0. Such an S exists from the definition of the distribution
complexity. We assume that we know both r and this set S, as trying all possibilities (smallest first)
requires only a constant factor (2r ) more time.
We will restrict each k-clause in the formula to an r-clause, by taking the r literals specified by the
set S. If the distribution Q is known to be symmetric with respect to the order of the k-literals in
each clause, or if clauses are given as unordered sets of literals, then we can simply sample a random
set of r literals (without replacement) from each clause.
We will show that restricting to these r literals from each k-clause induces a distribution on r-clauses
defined by Q? : {?1}r ? R+ of the form Q? (C) = ?/2r for |C| even, Q? (C) = (2 ? ?)/2r for
|C| odd, for some ? ? [0, 2] , ? 6= 1, where |C| is the number of TRUE literals in C under ?. This
reduction allows us to focus on algorithms for the specific case of a parity-based distribution on
r-clauses with distribution complexity r.
Recall that for a function f : {?1, 1}k ? R, its Fourier coefficients are defined for each subset
S ? [k] as
f?(S) =
[f (x)?S (x)]
E
x?{?1,1}k
where ?S are Q
the Walsh basis functions of {?1}k with respect to the uniform probability measure,
i.e., ?S (x) = i?S xi .
Lemma 1. If the function Q : {?1}k ? R+ defines a distribution Q? on k-clauses with distribution
complexity r and planted assignment ?, then for some S ? [k], |S| = r and ? ? [0, 2]\{1}, choosing
r literals with indices in S from a clause drawn randomly from Q? yields a random r-clause from
Q?? .
?
Proof. From Definition 3 we have that there exists an S with |S| = r such that Q(S)
6= 0. Note that
by definition,
1 X
?
Q(S)
=
Q(x)?S (x)
E [Q(x)?S (x)] = k
2
x?{?1}k
x?{?1}k
?
?
X
X
1 ?
Q(x) ?
Q(x)?
= k
2
k
k
x:?{?1} :xS even
=
x:?{?1} :xS odd
1
(Pr[xS even] ? Pr[xS odd])
2k
?
where xS is x restricted to the coordinates in S, and so if we take ? = 1 + 2k Q(S),
the distribution
induced by restricting k-clauses to the r-clauses specified by S is Q?? . Note that by the definition
6
? ) = 0 for any 1 ? |T | < r, and so the original and induced
of the distribution complexity, Q(T
distributions are uniform over any set of r ? 1 coordinates.
First consider the case r = 1. Restricting each clause to S for |S| = 1, induces a noisy 1-XORSAT distribution in which a random true literal appears with probability ? and random false literal
appears with probability 2 ? ?. The simple majority vote algorithm described above suffices: set
each variable to +1 if it appears more often positively than negated in the restricted clauses of the
formula;pto ?1 if it appears more often negated; and choose randomly if it appears equally often.
2
Using c t log(1/) clauses
?for c = O(1/|1??| ) this algorithm will give an assignment that agrees
with ? (or ??) on n/2 + t n variables with probability at least 1 ? ; using cn log n clauses it will
recover ? exactly with probability 1 ? o(1).
Now assume that r ? 2. We describe how the parity distribution Q?? on r-constraints induces a
bipartite block model. Let V1 be the set of 2n literals of the given variable set, and V2 the collection
2n
of all (r ? 1)-tuples of literals. We have n1 = |V1 | = 2n and n2 = |V2 | = r?1
. We partition
each set into two parts as follows: A1 ? V1 is the set of false literals under ?, and B1 the set of true
literals. A2 ? V2 is the set of (r ? 1)-tuples with an even number of true literals under ?, and B2
the set of (r ? 1)-tuples with an odd number of true literals.
For each r-constraint (l1 , l2 , . . . , lr ), we add an edge in the block model between the tuples l1 ? V1
and (l2 , . . . , lr ) ? V2 . A constraint drawn according to Q?? induces a random edge between A1
and A2 or B1 and B2 with probability ?/2 and between A1 and B2 or B1 and A2 with probability
1 ? ?/2, exactly the distribution of a single edge in the bipartite block model. Recovering the
partition P1 = A1 ? B1 in this bipartite block model partitions the literals into true and false sets
giving ? (up to sign). Now the model in Defn. 2 is that of m clauses selected independently with
replacement according to a given distribution, while in Defn. 1, each edge is present independently
with a given probability. Reducing from the first to the second can be done by Poissonization; details
given in the full version [18].
? ?n1 n2 ) edges (i.e. p =
The key feature of our bipartite block model algorithm is that it uses O(
? 1 n2 )?1/2 ), corresponding to O(n
? r/2 ) clauses in the planted CSP.
O((n
5
Comparison with spectral approach
As noted above, many approaches to graph partitioning problems and planted satisfiability problems
use eigenvectors or singular vectors. These algorithms are essentially based on the signs of the
top eigenvector of the centered adjacency matrix being correlated with the planted vector. This is
fairly straightforward to establish when the average degree of the random graph is large enough.
However, in the stochastic block model, for example, when the average degree is a constant, vertices
of large degree dominate the spectrum and the straightforward spectral approach fails (see [26] for
a discussion and references).
In the case of the usual block model, n1 = n2 = n, while our approach has a fast running time, it
does not save on the number of edges required as compared to the standard spectral approach: both
require ?(n log n) edges. However, when n2 n1 , eg. n1 = ?(n), n2 = ?(nk?1 ) as in the case
of the planted k-CSP?s for odd k, this is no longer the case.
Consider the general-purpose partitioning algorithm of [28]. Let G be the matrix of edge probabilities: Gij is the probability that the edge between vertices i and j is present. Let Gu , Gv denote
columns of G corresponding to vertices u, v. Let ? 2 be an upper bound of the variance of an entry
in the adjacency matrix, sm the size of the smallest part in the planted partition, q the number of
parts, ? the failure probability of the algorithm, and c a universal constant. Then the condition for
the success of McSherry?s partitioning algorithm is:
min
u,v in different parts
kGu ? Gv k2 > cq? 2 (n/sm + log(n/?))
In our case, we have q = 4, n = n1 +n2 , sm = n1 /2, ? 2 = ?(p), and kGu ?Gv k2 = 4(??1)2 p2 n2 .
When n2 n1 log n, the condition requires p = ?(1/n1 ), while our algorithm succeeds when
?
2n
p = ?(log n1 / n1 n2 ). In our application to planted CSP?s with odd k and n1 = 2n, n2 = k?1
,
this gives a polynomial factor improvement.
7
In fact, previous spectral approaches to planted CSP?s or random k-SAT refutation worked for even
k using nk/2 constraints [23, 9, 14], while algorithms for odd k only worked for k = 3 and used
considerably more complicated constructions and techniques [13, 22, 8]. In contrast to previous
approaches, our algorithm unifies the algorithm for planted k-CSP?s for odd and even k, works for
odd k > 3, and is particularly simple and fast.
We now describe why previous approaches faced a spectral barrier for odd k, and how our algorithm
surmounts it. The previous spectral algorithms for even k constructed a similar graph to the one in
the reduction above: vertices are k/2-tuples of literals, and with edges between two tuples if their
union appears as a k-clause. The distribution induced in this case is the stochastic block model. For
odd k, such a reduction is not possible, and one might try a bipartite graph, with either the reduction
described above, or with bk/2c-tuples and dk/2e-tuples (our analysis works for this reduction as
?
well). However, with O(k/2)
clauses, the spectral approach of computing the largest or second
largest singular vector of the adjacency matrix does not work.
Consider M from the distribution M (p). Let u be the n1 dimensional vector indexed as the rows
of M whose entries are 1 if the corresponding vertex is in A1 and ?1 otherwise. Define the n2
dimensional vector v analogously. The next propositions summarize properties of M .
Proposition 1. E(M ) = (? ? 1)puv T .
Proposition 2. Let M1 be the rank-1 approximation of M drawn from M (p). Then kM1 ?E(M )k ?
2kM ? E(M )k.
The above propositions suffice to show high correlation between the top singular vector ?
and the
vector u when n2 = ?(n
)
and
p
=
?(log
n
/n
).
This
is
because
the
norm
of
(M
)
is
p
n1 n2 ;
E
1
1
1
?
this is higher than O( pn2 ), the norm of M ? E(M ) for this range of p. Therefore the top singular
vector of M will be correlated with the top singular vector of E(M ). The latter is a rank-1 matrix
with u as its left singular vector.
? 1 n2 )?1/2 ), the norm of the zero-mean matrix
However, when n2 n1 (eg. k odd) and p = O((n
M ? E(M ) is in fact much larger than the norm of E(M ). Letting x(i) be the vector of length
?
n1 with a 1 in the ith coordinate and zeroes elsewhere, we see that kM x(i) k2 ? pn2 , and so
?
?
kM ? E(M )k = ?( pn2 ), while k E(M )k = O(p n1 n2 ); the former is ?((n2 /n1 )1/4 ) while the
latter is O(1)). In other words, the top singular value of M is much larger than the value obtained by
the vector corresponding to the planted assignment! The picture is in fact richer: the straightforward
?2/3 ?1/3
?2/3 ?1/3
spectral approach succeeds for p n1 n2 , while for p n1 n2 , the top left singular
vector of the centered adjacency matrix is asymptotically uncorrelated with the planted vector [20].
In spite of this, one can exploit correlations to recover the planted vector below this threshold with
our resampling algorithm, which in this case provably outperforms the spectral algorithm.
Acknowledgements
S. Vempala supported in part by NSF award CCF-1217793.
References
[1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. arXiv preprint
arXiv:1405.3267, 2014.
[2] D. Achlioptas and F. McSherry. Fast computation of low rank matrix approximations. In STOC, pages
611?618, 2001.
[3] Q. Berthet and P. Rigollet. Complexity theoretic lower bounds for sparse principal component detection.
In COLT, pages 1046?1066, 2013.
[4] A. Blum. Learning boolean functions in an infinite attribute space. Machine Learning, 9:373?386, 1992.
[5] A. Bogdanov and Y. Qiao. On the security of goldreich?s one-way function. In Approximation,
Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 392?405. 2009.
[6] R. B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In FOCS, pages 280?285,
1987.
[7] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability &
Computing, 19(2):227, 2010.
8
[8] A. Coja-Oghlan, C. Cooper, and A. Frieze. An efficient sparse regularity concept. SIAM Journal on
Discrete Mathematics, 23(4):2000?2034, 2010.
[9] A. Coja-Oghlan, A. Goerdt, A. Lanka, and F. Sch?adlich. Certifying unsatisfiability of random 2k-sat
formulas using approximation techniques. In Fundamentals of Computation Theory, pages 15?26.
Springer, 2003.
[10] A. Daniely, N. Linial, and S. Shalev-Shwartz. More data speeds up training time in learning halfspaces
over sparse vectors. In NIPS, pages 145?153, 2013.
[11] A. Daniely and S. Shalev-Shwartz. Complexity theoretic limitations on learning dnf?s. CoRR,
abs/1404.3378, 2014.
[12] S. Decatur, O. Goldreich, and D. Ron. Computational sample complexity. SIAM Journal on Computing,
29(3):854?879, 1999.
[13] U. Feige and E. Ofek. Easily refutable subformulas of large random 3cnf formulas. In Automata,
languages and programming, pages 519?530. Springer, 2004.
[14] U. Feige and E. Ofek. Spectral techniques applied to sparse random graphs. Random Structures &
Algorithms, 27(2):251?275, 2005.
[15] V. Feldman. Attribute efficient and non-adaptive learning of parities and DNF expressions. Journal of
Machine Learning Research, (8):1431?1460, 2007.
[16] V. Feldman. Open problem: The statistical query complexity of learning sparse halfspaces. In COLT,
pages 1283?1289, 2014.
[17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound
for planted clique. In STOC, pages 655?664, 2013.
[18] V. Feldman, W. Perkins, and S. Vempala. Subsampled power iteration: a unified algorithm for block
models and planted csp?s. CoRR, abs/1407.2774, 2014.
[19] V. Feldman, W. Perkins, and S. Vempala. On the complexity of random satisfiability problems with
planted solutions. In STOC, pages 77?86, 2015.
[20] L. Florescu and W. Perkins. Spectral thresholds in the bipartite stochastic block model. arXiv preprint
arXiv:1506.06737, 2015.
[21] M. L. Fredman, J. Koml?os, and E. Szemer?edi. Storing a sparse table with 0 (1) worst case access time.
Journal of the ACM (JACM), 31(3):538?544, 1984.
[22] J. Friedman, A. Goerdt, and M. Krivelevich. Recognizing more unsatisfiable random k-sat instances
efficiently. SIAM Journal on Computing, 35(2):408?430, 2005.
[23] A. Goerdt and M. Krivelevich. Efficient recognition of random unsatisfiable k-sat instances by spectral
methods. In STACS 2001, pages 294?304. Springer, 2001.
[24] M. Kearns. Efficient noise-tolerant learning from statistical queries. JACM, 45(6):983?1006, 1998.
[25] S. B. Korada, A. Montanari, and S. Oh. Gossip pca. In SIGMETRICS, pages 209?220, 2011.
[26] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborov?a, and P. Zhang. Spectral redemption
in clustering sparse networks. PNAS, 110(52):20935?20940, 2013.
[27] L. Massouli?e. Community detection thresholds and the weak ramanujan property. In STOC, pages 1?10,
2014.
[28] F. McSherry. Spectral partitioning of random graphs. In FOCS, pages 529?537, 2001.
[29] E. Mossel, J. Neeman, and A. Sly. A proof of the block model threshold conjecture. arXiv preprint
arXiv:1311.4115, 2013.
[30] R. O?Donnell and D. Witmer. Goldreich?s prg: Evidence for near-optimal polynomial stretch. In
Conference on Computational Complexity, 2014.
[31] R. Servedio. Computational sample complexity and attribute-efficient learning. Journal of Computer
and System Sciences, 60(1):161?178, 2000.
[32] S. Shalev-Shwartz, O. Shamir, and E. Tromer. Using more data to speed-up training time. In AISTATS,
pages 1019?1027, 2012.
[33] V. Vu. A simple svd algorithm for finding hidden partitions. arXiv preprint arXiv:1404.3918, 2014.
9
| 5835 |@word version:2 polynomial:4 norm:7 open:1 km:3 decomposition:1 reduction:9 initial:2 neeman:3 ours:1 outperforms:1 current:1 whp:2 si:1 yet:1 assigning:1 attracted:1 partition:16 gv:3 resampling:1 selected:1 ith:1 bipartitions:1 lr:2 ron:1 simpler:1 zhang:1 constructed:1 direct:1 focs:2 consists:1 krzakala:1 x0:2 notably:2 expected:2 hardness:2 roughly:1 themselves:1 p1:10 sdp:3 multi:1 xji:1 behavior:1 little:1 becomes:1 begin:1 bounded:2 suffice:1 pto:1 substantially:2 eigenvector:2 unified:2 finding:2 guarantee:1 every:1 exactly:2 k2:3 uk:1 partitioning:7 appear:1 arguably:1 positive:1 despite:1 approximately:1 might:2 limited:2 walsh:1 range:1 statistically:1 practical:1 vu:2 union:2 block:30 implement:3 definite:1 bipartition:4 universal:1 evolving:1 significantly:3 matching:2 word:2 spite:1 get:1 superlinear:1 context:1 applying:1 impossible:2 center:1 pn2:3 ramanujan:1 straightforward:4 independently:9 automaton:1 rectangular:1 recovery:10 identifying:1 assigns:2 dominate:1 oh:2 coordinate:8 resp:1 hierarchy:1 play:1 construction:1 shamir:1 exact:2 programming:1 us:3 harvard:1 satisfying:1 particularly:1 recognition:1 stacs:1 role:1 preprint:4 initializing:1 worst:1 lanka:1 halfspaces:2 balanced:1 redemption:1 predictable:1 complexity:25 ui:10 depend:3 solving:4 linial:1 bipartite:19 unsatisfiability:1 completely:5 basis:1 gu:1 easily:1 goldreich:3 fast:4 describe:4 effective:1 dnf:2 query:4 choosing:2 shalev:3 whose:1 richer:1 larger:4 solve:4 valued:1 say:2 drawing:1 otherwise:1 gi:1 g1:3 gp:2 noisy:3 sequence:3 advantage:2 eigenvalue:1 propose:1 product:1 loop:1 reyzin:1 regularity:1 perfect:1 ac:1 measured:1 odd:12 progress:2 strong:1 p2:5 recovering:4 implemented:3 solves:1 indicate:1 exhibiting:1 correct:1 attribute:3 stochastic:15 centered:3 sgn:2 adjacency:11 require:5 suffices:3 generalization:3 randomization:1 proposition:4 stretch:1 hall:1 algorithmic:3 a2:9 smallest:3 purpose:1 birmingham:1 combinatorial:1 agrees:2 largest:2 repetition:1 create:2 sigmetrics:1 csp:24 ck:3 reaching:1 gatech:1 focus:2 improvement:1 rank:5 fq:3 tech:1 contrast:2 inference:1 membership:1 hidden:1 comprising:1 provably:1 overall:2 colt:2 almaden:1 special:3 fairly:1 marginal:1 santosh:1 equal:2 never:1 having:1 once:1 placing:1 broad:1 nearly:2 abbe:1 few:2 randomly:3 frieze:2 subsampled:6 defn:2 phase:6 consisting:1 replacement:2 n1:63 ab:2 friedman:1 detection:2 interest:2 possibility:1 evaluation:1 analyzed:2 mcsherry:5 edge:43 tuple:3 partial:2 necessary:2 indexed:3 refutation:2 instance:8 column:2 boolean:3 tp:2 assignment:13 vertex:17 subset:2 entry:5 daniely:2 uniform:6 recognizing:1 predicate:1 rounding:1 too:2 considerably:1 density:8 fundamental:1 randomized:1 siam:3 donnell:1 together:1 analogously:1 satisfied:1 choose:1 possibly:1 g2i:6 literal:27 return:1 exclude:1 unordered:1 sec:5 b2:9 coefficient:2 combinatorics:1 depends:1 vi:1 try:1 recover:5 complicated:1 contribution:1 square:2 formed:1 xor:2 variance:1 efficiently:1 yield:1 identify:2 conceptually:1 weak:1 unifies:1 bisection:1 multiplying:1 cc:1 randomness:1 prg:1 reach:2 definition:9 failure:2 servedio:1 energy:1 proof:2 associated:1 mi:2 recovers:2 xn1:1 nonsquare:1 sampled:1 proved:1 recall:2 knowledge:1 improves:1 satisfiability:5 oghlan:5 carefully:1 bham:1 appears:8 puv:1 higher:1 done:2 though:1 strongly:1 sly:3 correlation:4 until:1 achlioptas:1 o:1 defines:1 indicated:1 reveal:1 concept:1 true:8 ccf:1 former:1 symmetric:1 moore:1 eg:2 during:1 width:1 noted:1 steady:1 ln2:2 trying:1 complete:7 
theoretic:2 l1:2 wise:3 recently:1 planting:5 common:1 mt:1 clause:36 overview:1 rigollet:1 endpoint:1 korada:3 m1:2 surpassing:1 significant:2 pn1:1 feldman:6 ai:3 mathematics:1 similarly:2 ofek:2 language:1 access:6 longer:1 gt:3 add:2 recent:1 csps:2 conjectured:1 bandeira:1 success:1 yi:1 greater:1 additional:1 semi:1 ii:1 multiple:1 full:2 recoverable:1 pnas:1 match:4 cross:1 believed:1 post:1 equally:1 award:1 a1:13 unsatisfiable:2 basic:1 essentially:1 expectation:1 m2i:7 arxiv:8 iteration:11 want:1 singular:11 crucial:1 sch:1 unlike:1 induced:3 tend:1 vitaly:2 integer:3 extracting:1 call:1 near:3 iii:1 easy:1 concerned:1 enough:1 fit:1 restrict:1 reduce:3 idea:1 incomparable:1 inner:1 cn:1 qj:7 whether:1 expression:1 pca:2 speaking:1 cnf:1 bogdanov:2 krivelevich:2 eigenvectors:3 induces:4 reduced:1 nsf:1 sign:4 track:2 discrete:2 write:2 key:2 four:1 threshold:6 blum:1 achieving:1 drawn:5 decatur:1 v1:17 graph:23 asymptotically:1 fraction:1 sum:1 run:5 refutable:1 massouli:1 bound:11 resampled:1 constraint:18 perkins:5 worked:2 n3:1 certifying:1 fourier:2 speed:2 extremely:1 min:1 vempala:6 conjecture:1 according:3 feige:2 restricted:4 pr:2 grigorescu:1 previously:1 count:1 fail:1 needed:4 know:1 kgu:2 letting:1 koml:1 v2:18 spectral:21 generic:1 save:1 a2i:7 original:2 denotes:1 running:11 include:1 subsampling:2 cf:1 top:6 log2:1 clustering:1 unifying:1 folklore:1 exploit:1 giving:1 establish:1 added:4 planted:57 dependence:1 usual:3 nr:3 zdeborov:1 majority:5 considers:1 trivial:3 length:2 index:1 cq:1 km1:1 stoc:4 design:1 implementation:6 unknown:1 coja:5 perform:1 upper:2 negated:4 sm:3 defining:1 community:1 introduced:1 bk:1 edi:1 required:2 specified:2 security:1 unequal:2 qiao:2 nip:1 proceeds:1 below:8 summarize:1 program:1 power:9 critical:1 satisfaction:2 natural:2 szemer:1 mossel:3 picture:1 flaxman:1 faced:1 l2:2 checking:1 acknowledgement:1 multiplication:1 unsatisfied:1 sublinear:1 limitation:1 degree:5 xiao:1 storing:2 uncorrelated:1 ibm:1 row:2 elsewhere:1 supported:1 parity:3 keeping:1 transpose:1 side:1 formal:2 understand:1 allow:2 bias:1 taking:2 barrier:2 absolute:1 sparse:7 dimension:1 xn:2 berthet:1 collection:1 adaptive:2 far:1 clique:1 correlating:1 tolerant:1 sat:14 b1:11 tuples:10 xi:18 shwartz:3 spectrum:1 iterative:1 why:1 table:1 nature:1 obtaining:1 aistats:1 main:1 montanari:2 motivation:2 noise:1 n2:46 allowed:1 x1:2 positively:1 surmounts:1 referred:1 gossip:2 georgia:1 cooper:2 fails:2 formula:6 theorem:2 specific:1 x:5 dk:1 evidence:1 essential:1 exists:3 false:5 restricting:3 effectively:1 corr:2 magnitude:1 conditioned:1 push:1 nk:2 sparser:1 zji:2 simply:2 jacm:2 forming:1 ordered:1 poissonization:1 partially:1 joined:2 doubling:2 springer:3 chance:1 acm:1 succeed:1 boppana:1 careful:2 towards:1 infinite:1 uniformly:3 reducing:1 lemma:1 principal:1 kearns:1 gij:1 svd:2 succeeds:7 vote:5 latter:2 arises:1 goerdt:3 evaluate:1 correlated:3 |
5,342 | 5,836 | Learning Theory and Algorithms for
Forecasting Non-Stationary Time Series
Vitaly Kuznetsov
Courant Institute
New York, NY 10011
Mehryar Mohri
Courant Institute and Google Research
New York, NY 10011
[email protected]
[email protected]
Abstract
We present data-dependent learning bounds for the general scenario of nonstationary non-mixing stochastic processes. Our learning guarantees are expressed
in terms of a data-dependent measure of sequential complexity and a discrepancy
measure that can be estimated from data under some mild assumptions. We use
our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results.
1
Introduction
Time series forecasting plays a crucial role in a number of domains ranging from weather forecasting and earthquake prediction to applications in economics and finance. The classical statistical
approaches to time series analysis are based on generative models such as the autoregressive moving
average (ARMA) models, or their integrated versions (ARIMA) and several other extensions [Engle, 1982, Bollerslev, 1986, Brockwell and Davis, 1986, Box and Jenkins, 1990, Hamilton, 1994].
Most of these models rely on strong assumptions about the noise terms, often assumed to be i.i.d.
random variables sampled from a Gaussian distribution, and the guarantees provided in their support
are only asymptotic.
An alternative non-parametric approach to time series analysis consists of extending the standard
i.i.d. statistical learning theory framework to that of stochastic processes. In much of this work, the
process is assumed to be stationary and suitably mixing [Doukhan, 1994]. Early work along this
approach consisted of the VC-dimension bounds for binary classification given by Yu [1994] under
the assumption of stationarity and -mixing. Under the same assumptions, Meir [2000] presented
bounds in terms of covering numbers for regression losses and Mohri and Rostamizadeh [2009]
proved general data-dependent Rademacher complexity learning bounds. Vidyasagar [1997] showed
that PAC learning algorithms in the i.i.d. setting preserve their PAC learning property in the -mixing
stationary scenario. A similar result was proven by Shalizi and Kontorovitch [2013] for mixtures
of -mixing processes and by Berti and Rigo [1997] and Pestov [2010] for exchangeable random
variables. Alquier and Wintenberger [2010] and Alquier et al. [2014] also established PAC-Bayesian
learning guarantees under weak dependence and stationarity.
A number of algorithm-dependent bounds have also been derived for the stationary mixing setting.
Lozano et al. [2006] studied the convergence of regularized boosting. Mohri and Rostamizadeh
[2010] gave data-dependent generalization bounds for stable algorithms for '-mixing and -mixing
stationary processes. Steinwart and Christmann [2009] proved fast learning rates for regularized
algorithms with ?-mixing stationary sequences and Modha and Masry [1998] gave guarantees for
certain classes of models under the same assumptions.
However, stationarity and mixing are often not valid assumptions. For example, even for Markov
chains, which are among the most widely used types of stochastic processes in applications, stationarity does not hold unless the Markov chain is started with an equilibrium distribution. Similarly,
1
long memory models such as ARFIMA, may not be mixing or mixing may be arbitrarily slow [Baillie, 1996]. In fact, it is possible to construct first order autoregressive processes that are not mixing
[Andrews, 1983]. Additionally, the mixing assumption is defined only in terms of the distribution
of the underlying stochastic process and ignores the loss function and the hypothesis set used. This
suggests that mixing may not be the right property to characterize learning in the setting of stochastic
processes.
A number of attempts have been made to relax the assumptions of stationarity and mixing. Adams
and Nobel [2010] proved asymptotic guarantees for stationary ergodic sequences. Agarwal and
Duchi [2013] gave generalization bounds for asymptotically stationary (mixing) processes in the
case of stable on-line learning algorithms. Kuznetsov and Mohri [2014] established learning guarantees for fully non-stationary - and '-mixing processes.
In this paper, we consider the general case of non-stationary non-mixing processes. We are not
aware of any prior work providing generalization bounds in this setting. In fact, our bounds appear
to be novel even when the process is stationary (but not mixing). The learning guarantees that we
present hold for both bounded and unbounded memory models. Deriving generalization bounds for
unbounded memory models even in the stationary mixing case was an open question prior to our
work [Meir, 2000]. Our guarantees cover the majority of approaches used in practice, including
various autoregressive and state space models.
The key ingredients of our generalization bounds are a data-dependent measure of sequential complexity (expected sequential covering number or sequential Rademacher complexity [Rakhlin et al.,
2010]) and a measure of discrepancy between the sample and target distributions. Kuznetsov and
Mohri [2014] also give generalization bounds in terms of discrepancy. However, unlike the result
of Kuznetsov and Mohri [2014], our analysis does not require any mixing assumptions which are
hard to verify in practice. More importantly, under some additional mild assumption, the discrepancy measure that we propose can be estimated from data, which leads to data-dependent learning
guarantees for non-stationary non-mixing case.
We devise new algorithms for non-stationary time series forecasting that benefit from our datadependent guarantees. The parameters of generative models such as ARIMA are typically estimated
via the maximum likelihood technique, which often leads to non-convex optimization problems. In
contrast, our objective is convex and leads to an optimization problem with a unique global solution
that can be found efficiently. Another issue with standard generative models is that they address nonstationarity in the data via a differencing transformation which does not always lead to a stationary
process. In contrast, we address the problem of non-stationarity in a principled way using our
learning guarantees.
The rest of this paper is organized as follows. The formal definition of the time series forecasting
learning scenario as well as that of several key concepts is given in Section 2. In Section 3, we
introduce and prove our new generalization bounds. In Section 4, we give data-dependent learning bounds based on the empirical discrepancy. These results, combined with a novel analysis of
kernel-based hypotheses for time series forecasting (Appendix B), are used to devise new forecasting algorithms in Section 5. In Appendix C, we report the results of preliminary experiments using
these algorithms.
2
Preliminaries
We consider the following general time series prediction setting where the learner receives a realization (X1 , Y1 ), . . . , (XT , YT ) of some stochastic process, with (Xt , Yt ) 2 Z = X ? Y. The objective of the learner is to select out of a specified family H a hypothesis h : X ! Y that achieves a
small generalization error E[L(h(XT +1 ), YT +1 )|Z1 , . . . , ZT ] conditioned on observed data, where
L : Y ? Y ! [0, 1) is a given loss function. The path-dependent generalization error that we
consider in this work is a finer measure of the generalization ability than the averaged generalization error E[L(h(XT +1 ), YT +1 )] = E[E[L(h(XT +1 ), YT +1 )|Z1 , . . . , ZT ]] since it only takes into
consideration the realized history of the stochastic process and does not average over the set of all
possible histories. The results that we present in this paper also apply to the setting where the time
parameter t can take non-integer values and prediction lag is an arbitrary number l 0. That is, the
error is defined by E[L(h(XT +l ), YT +l )|Z1 , . . . , ZT ] but for notational simplicity we set l = 1.
2
Our setup covers a larger number of scenarios commonly used in practice. The case X = Y p
p
corresponds to a large class of autoregressive models. Taking X = [1
p=1 Y leads to growing
memory models which, in particular, include state space models. More generally, X may contain
both the history of the process {Yt } and some additional side information.
To simplify the notation, in the rest of the paper, we will use the shorter notation f (z) = L(h(x), y),
for any z = (x, y) 2 Z and introduce the family F = {(x, y) ! L(h(x), y) : h 2 H} containing
such functions f . We will assume a bounded loss function, that is |f | ? M for all f 2 F for
some M 2 R+ . Finally, we will use the shorthand Zba to denote a sequence of random variables
Za , Za+1 , . . . , Zb .
The key quantity of interest in the analysis of generalization is the following supremum of the
empirical process defined as follows:
(ZT1 )
= sup
f 2F
E[f (ZT +1 )|ZT1 ]
T
X
!
qt f (Zt ) ,
t=1
(1)
where q1 , . . . , qT are real numbers, which in the standard learning scenarios are chosen to be uniform. In our general setting, different Zt s may follow different distributions, thus distinct weights
could be assigned to the errors made on different sample points depending on their relevance to
forecasting the future ZT +1 . The generalization bounds that we present below are for an arbitrary
sequence q = (q1 , . . . qT ) which, in particular, covers the case of uniform weights. Remarkably, our
bounds do not even require the non-negativity of q.
Our generalization bounds are expressed in terms of data-dependent measures of sequential complexity such as expected sequential covering number or sequential Rademacher complexity [Rakhlin
et al., 2010]. We give a brief overview of the notion of sequential covering number and refer the
reader to the aforementioned reference for further details. We adopt the following definition of a
complete binary tree: a Z-valued complete binary tree z is a sequence (z1 , . . . , zT ) of T mappings
zt : {?1}t 1 ! Z, t 2 [1, T ]. A path in the tree is = ( 1 , . . . , T 1 ). To simplify the notation we
will write zt ( ) instead of zt ( 1 , . . . , t 1 ), even though zt depends only on the first t 1 elements
of . The following definition generalizes the classical notion of covering numbers to sequential
setting. A set V of R-valued trees of depth T is a sequential ?-cover (with respect to q-weighted `p
norm) of a function class G on a tree z of depth T if for all g 2 G and all 2 {?}T , there is v 2 V
such that
T
X
vt ( )
g(zt ( ))
t=1
p
! p1
? kqkq 1 ?,
where k ? kq is the dual norm. The (sequential) covering number Np (?, G, z) of a function class G
on a given tree z is defined to be the size of the minimal sequential cover. The maximal covering
number is then taken to be Np (?, G) = supz Np (?, G, z). One can check that in the case of uniform
weights this definition coincides with the standard definition of sequential covering numbers. Note
that this is a purely combinatorial notion of complexity which ignores the distribution of the process
in the given learning problem.
Data-dependent sequential covering numbers can be defined as follows. Given a stochastic process
distributed according to the distribution p with pt (?|zt1 1 ) denoting the conditional distribution at
time t, we sample a Z ? Z-valued tree of depth T according to the following procedure. Draw two
independent samples Z1 , Z10 from p1 : in the left child of the root draw Z2 , Z20 according to p2 (?|Z1 )
and in the right child according to p2 (?|Z20 ). More generally, for a node that can be reached by a
path ( 1 , . . . , t ), we draw Zt , Zt0 according to pt (?|S1 ( 1 ), . . . , St 1 ( t 1 )), where St (1) = Zt
and St ( 1) = Zt0 . Let z denote the tree formed using Zt s and define the expected covering number
to be Ez?T (p) [Np (?, G, z)], where T (p) denotes the distribution of z.
In a similar manner, one can define other measures of complexity such as sequential Rademacher
complexity and the Littlestone dimension [Rakhlin et al., 2015] as well as their data-dependent
counterparts [Rakhlin et al., 2011].
3
The final ingredient needed for expressing our learning guarantees is the notion of discrepancy
between target distribution and the distribution of the sample:
?
?
T
X
= sup E[f (ZT +1 )|ZT1 ]
qt E[f (Zt )|Zt1 1 ] .
(2)
f 2F
t=1
The discrepancy
is a natural measure of the non-stationarity of the stochastic process Z with
respect to both the loss function L and the hypothesis set H. In particular, note that if the process
Z is i.i.d., then we simply have
= 0 provided that qt s form a probability distribution. It is also
possible to give bounds on in terms of other natural distances between distribution. For instance,
Pinsker?s inequality yields
r ?
?
PT
PT
t 1
T
? M PT +1 (?|Z1 )
? 12 D PT +1 (?|ZT1 ) k t=1 qt Pt (?|Zt1 1 ) ,
t=1 qt Pt (?|Z1 )
TV
where k ? kTV is the total variation distance and D(? k ?) the relative entropy, Pt+1 (?|Zt1 ) the condiPT
tional distribution of Zt+1 , and t=1 qt Pt (?|Zt1 1 ) the mixture of the sample marginals. Alternatively, if the target distribution at lag l, P = PT +l is a stationary distribution of an asymptotically
stationary process Z [Agarwal and Duchi, 2013, Kuznetsov and Mohri, 2014], then for qt = 1/T
we have
?
T
MX
kP
T t=1
Pt+l (?|Zt
1 )kTV
?
(l),
where (l) = sups supz [kP Pl+s (?|zs 1 )kTV ] is the coefficient of asymptotic stationarity. The
process is asymptotically stationary if liml!1 (l) = 0. However, the most important property of
the discrepancy
is that, as shown later in Section 4, it can be estimated from data under some
additional mild assumptions. [Kuznetsov and Mohri, 2014] also give generalization bounds for
non-stationary mixing processes in terms of a related notion of discrepancy. It is not known if the
discrepancy measure used in [Kuznetsov and Mohri, 2014] can be estimated from data.
3
Generalization Bounds
In this section, we prove new generalization bounds for forecasting non-stationary time series. The
first step consists of using decoupled tangent sequences to establish concentration results for the
supremum of the empirical process (ZT1 ). Given a sequence of random variables ZT1 we say that
T
Z0 1 is a decoupled tangent sequence if Zt0 is distributed according to P(?|Zt1 1 ) and is independent
of Z1
na and Gin?e,
t . It is always possible to construct such a sequence of random variables [De la Pe?
1999]. The next theorem is the main result of this section.
Theorem 1. Let ZT1 be a sequence of random variables distributed according to p. Fix ? > 2? > 0.
Then, the following holds:
?
?
?
?
(? 2?)2
T
P (Z1 )
? ? E
N1 (?, F, v) exp
.
2M 2 kqk22
v?T (p)
Proof. The first step is to observe that, since the difference of the suprema is upper bounded by the
supremum of the difference, it suffices to bound the probability of the following event
(
!
)
T
X
sup
qt (E[f (Zt )|Zt1 1 ] f (Zt ))
? .
f 2F
By Markov?s inequality, for any
P
sup
f 2F
? exp(
T
X
t=1
t=1
> 0, the following inequality holds:
!
!
qt (E[f (Zt )|Zt1 1 ]
"
?) E exp
sup
f 2F
f (Zt ))
T
X
t=1
4
?
qt (E[f (Zt )|Zt1 1 ]
!!#
f (Zt ))
.
T
Since Z0 1 is a tangent sequence the following equalities hold: E[f (Zt )|Zt1 1 ] = E[f (Zt0 )|Zt1 1 ] =
E[f (Zt0 )|ZT1 ]. Using these equalities and Jensen?s inequality, we obtain the following:
?
T
?
?
X
E exp
sup
qt E[f (Zt )|Zt1 1 ] f (Zt )
f 2F t=1
?
T
?
hX
= E exp
sup E
qt f (Zt0 )
f 2F
?
? E exp
?
sup
t=1
T
X
f 2F t=1
qt f (Zt0 )
f (Zt ) |ZT1
?
f (Zt )
i?
,
T
where the last expectation is taken over the joint measure of ZT1 and Z0 1 . Applying Lemma 5
(Appendix A), we can further bound this expectation by
?
?
T
?
??
X
0
E
E exp
sup
f (zt ( ))
t qt f (z t ( ))
0
(z,z )?T (p)
?
?
=
E
0
(z,z )?T (p)
?
f 2F t=1
E exp
E
z?T (p)
?
?
sup
T
X
f 2F t=1
0
t qt f (z t ( )) +
sup
?
T
X
0
E 0 E exp 2 sup
+
t qt f (z t ( ))
1
2 (z,z )
?
?
?
f 2F t=1
?
q
f
(z
(
))
t t
t
T
X
f 2F t=1
1
E0 E
2 (z,z
)
?
T
X
E exp 2 sup
,
t qt f (zt ( ))
?
?
exp 2 sup
T
X
f 2F t=1
?
t qt f (zt ( ))
f 2F t=1
where for the second inequality we used Young?s inequality and for the last equality we used symmetry. Given z let C denote the minimal ?-cover with respect to the q-weighted `1 -norm of F on
z. Then, the following bound holds
sup
T
X
f 2F t=1
t qt f (zt (
)) ? max
c2C
T
X
t q t ct (
) + ?.
t=1
By the monotonicity of the exponential function,
?
?
?
?
?
T
T
X
X
E exp 2 sup
? exp(2 ?) E exp 2 max
t qt f (zt ( ))
c2C
f 2F t=1
? exp(2 ?)
X
c2C
?
?
E exp 2
t=1
T
X
t=1
Since ct ( ) depends only on 1 , . . . , T 1 , by Hoeffding?s bound,
?
? X
?
?
? TX1
? ?
?
T
E exp 2
q
c
(
)
=
E
exp
2
q
c
(
)
E
exp
2
t t t
t t t
t=1
?
?
? E exp 2
T
t=1
T
X1
t q t ct (
t=1
?
) exp(2
?
t q t ct ( )
?
.
t q t ct ( )
?
q
c
(
)
T T T
v?T (p)
Optimizing over
completes the proof.
An immediate consequence of Theorem 1 is the following result.
5
1
2 2
qT M 2 )
and iterating this inequality and using the union bound, we obtain the following:
?
?
T
?
X
P sup
qt (E[f (Zt )|Zt1 1 ] f (Zt )) ? ? E [N1 (?, G, v)] exp
(? 2?)+2
f 2F t=1
T
1
2
?
M 2 kqk22 .
Corollary 2. For any
> 0, with probability at least 1
E[f (ZT +1 )|ZT1 ] ?
T
X
qt f (Zt ) +
t=1
, for all f 2 F and all ? > 0,
r
Ev?T (P) [N1 (?, G, v)]
+ 2? + M kqk2 2 log
.
We are not aware of other finite sample bounds in a non-stationary non-mixing case. In fact, our
bounds appear to be novel even in the stationary non-mixing case. Using chaining techniques
bounds, Theorem 1 and Corollary 2 can be further improved and we will present these results in
the full version of this paper.
While Rakhlin et al. [2015] give high probability bounds for a different quantity than the quantity of
interest in time series prediction,
!
T
X
t 1
sup
qt (E[f (Zt )|Z1 ] f (Zt )) ,
(3)
f 2F
t=1
their analysis of this quantity can also be used in our context to derive high probability bounds for
(ZT1 )
. However, this approach results in bounds that are in terms of purely combinatorial
notions such as maximal sequential covering numbers N1 (?, F). While at first sight, this may seem
as a minor technical detail, the distinction is crucial in the setting of time series prediction. Consider
the following example. Let Z1 be drawn from a uniform distribution on {0, 1} and Zt ? p(?|Zt 1 )
with p(?|y) being a distribution over {0, 1} such that p(x|y) = 2/3 if x = y and 1/3 otherwise. Let G
be defined by G = {g(x) = 1x ? : ? 2 [0, 1]}. Then, one can check that Ev?T (P) [N1 (?, G, v)] = 2,
while N1 (?, G) 2T . The data-dependent bounds of Theorem 1 and Corollary 2 highlight the fact
that the task of time series prediction lies in between the familiar i.i.d. scenario and adversarial
on-line learning setting.
However, the key component of our learning guarantees is the discrepancy term . Note that in the
general non-stationary case, the bounds of Theorem 1 may not converge to zero due to the discrepancy between the target and sample distributions. This is also consistent with the lower bounds of
Barve and Long [1996] that we discuss in more detail in Section 4. However, convergence can be
established in some special cases. In the i.i.d. case our bounds reduce to the standard covering numbers learning guarantees. In the drifting scenario, with ZT1 being a sequence of independent random
variables, our discrepancy measure coincides with the one used and studied in [Mohri and Mu?noz
Medina, 2012]. Convergence can also be established in asymptotically stationary and stationary
mixing cases. However, as we show in Section 4, the most important advantage of our bounds is
that the discrepancy measure we use can be estimated from data.
4
Estimating Discrepancy
In Section 3, we showed that the discrepancy is crucial for forecasting non-stationary time series. In particular, if we could select a distribution q over the sample ZT1 that would minimize the
discrepancy and use it to weight training points, then we would have a better learning guarantee
for an algorithm trained on this weighted sample. In some special cases, the discrepancy
can
be computed analytically. However, in general, we do not have access to the distribution of ZT1
and hence we need to estimate the discrepancy from the data. Furthermore, in practice, we never
observe ZT +1 and it is not possible to estimate without some further assumptions. One natural
assumption is that the distribution Pt of Zt does not change drastically with t on average. Under
this assumption the last s observations ZTT s+1 are effectively drawn from the distribution close to
PT +1 . More precisely, we can write
?
?
T
T
X
1 X
t 1
t 1
? sup
E[f (Zt )|Z1 ]
qt E[f (Zt )|Z1 ]
s
f 2F
t=1
t=T
+ sup
f 2F
?
s+1
1
s
E[f (ZT +1 )|ZT1 ]
T
X
t=T
s+1
?
E[f (Zt )|Zt1 1 ]
.
We will assume that the second term, denoted by s , is sufficiently small and will show that the first
term can be estimated from data. But, we first note that our assumption is necessary for learning in
6
this setting. Observe that
?
sup E[ZT +1 |ZT1 ]
E[f (Zr )|Zr1
f 2F
1
T
? X
?
] ?
sup E[f (Zt+1 )|Zt1 ]
t=r f 2F
?M
for all r = T
T
X
t=r
kPt+1 (?|Zt1 )
?
E[f (Zt )|Zt1 1 ]
Pt (?|Zt1 1 )kTV ,
s + 1, . . . , T . Therefore, we must have
?
? s+1
1 X
sup E[ZT +1 |ZT1 ] E[f (Zt )|Zt1 ] ?
M ,
s ?
s
2
f 2F
t=T
s+1
1
= supt kPt+1 (?|Zt1 )
where
Pt (?|Zt1 1 )kTV . Barve and Long [1996] showed that [VC-dim(H) ] 3
is a lower bound on the generalization error in the setting of binary classification where ZT1 is a
sequence of independent but not identically distributed random variables (drifting). This setting is a
special case of the more general scenario that we are considering.
The following result shows that we can estimate the first term in the upper bound on .
Theorem 3. Let ZT1 be a sequence of random variables. Then, for any > 0, with probability at
least 1
, the following holds for all ? > 0:
!
!
T
T
X
X
t 1
sup
(pt qt ) E[f (Zt )|Z1 ] ? sup
(pt qt )f (Zt ) + B,
f 2F
t=1
q
pk2 2 log
where B = 2? + M kq
the last s points.
f 2F
Ez?T (p) [N1 (?,G,z)]
t=1
and where p is the uniform distribution over
The proof of this result is given in Appendix A. Theorem 1 and Theorem 3 combined with the union
bound yield the following result.
Corollary 4. Let ZT1 be a sequence of random variables. Then, for any > 0, with probability at
least 1
, the following holds for all f 2 F and all ? > 0:
E[f (ZT +1 )|ZT1 ] ?
T
X
t=1
qt f (Zt ) + e +
where e = supf 2F
5
Algorithms
?P
T
t=1 (pt
s
?
+ 4? + M kqk2 + kq
?
qt )f (Zt ) .
pk2
q
?
2 log
2 Ev?T (p) [N1 (?,G,z)]
,
In this section, we use our learning guarantees to devise algorithms for forecasting non-stationary
time series. We consider a broad family of kernel-based hypothesis classes with regression losses.
We present the full analysis of this setting in Appendix B including novel bounds on the sequential
Rademacher complexity. The learning bounds
1 can be generalized
q
? of Theorem
? to hold uniformly
1
over q at the price of an additional term in O kq uk1 log2 log2 kq uk1 . We prove this result
in Theorem 8 (Appendix B). Suppose L is the squared loss and H = {x ! w ? (x) : kwkH ? ?},
where : X ! H is a feature mapping from X to a Hilbert space H. By Lemma 6 (Appendix B),
we can bound the complexity term in our generalization bounds by
?
?
?r
O (log3 T ) p + (log3 T )kq uk1 ,
T
where K is a PDS kernel associated with H such that supx K(x, x) ? r and u is the uniform
distribution over the sample. Then, we can formulate a joint optimization problem over both q and
w based on the learning guarantee of Theorem 8, which holds uniformly over all q:
?X
T
T
X
min
qt (w ? (xt ) yt )2 + 1
dt qt + 2 kwk2H + 3 kq uk1 .
(4)
0?q?1,w
t=1
t=1
7
PT
Here, we have upper bounded the empirical discrepancy term by t=1 dt qt with each dt defined
PT
by supw0 ?? | s=1 ps (w0 ? (xs ) ys )2 (w0 ? (xt ) yt )2 |. Each dt can be precomputed
using DC-programming. For general loss functions, the DC-programming approach only guarantees
convergence to a stationary point. However, for the squared loss, our problem can be cast as an
instance of the trust region problem, which can be solved globally using the DCA algorithm of Tao
and An [1998]. Note that problem (4) is not jointly convex in q and w. However, using the dual
problem associated to w yields the following equivalent problem, it can be rewritten as follows:
min
0?q?1
?
max
?
n
2
T
X
?2
?T K? + 2
t
qt
t=1
T
2? Y
o
+
1 (d?q)
+
3 kq
uk1 ,
(5)
where d = (d1 , . . . , dT )T , K is the kernel matrix and Y = (y1 , . . . , yT )T . We use the change of
variables rt = 1/qt and further upper bound 3 kq uk1 by 03 kr T 2 uk2 , which follows from
|qt ut | = |qt ut (rt T )| and H?older?s inequality. Then, this yields the following optimization
problem:
min
r2D
?
max
?
n
2
T
X
?T K? + 2
rt ?t2
t=1
o
T
?
Y
+
2
1
T
X
dt
t=1
+
rt
3 kr
T 2 uk22 ,
(6)
where D = {r : rt 1, t 2 [1, T ]}. The optimization problem (6) is convex since D is a convex set,
the first term in (6) is convex as a maximum of convex (linear) functions of r. This problem can be
solved using standard descent methods, where, at each iteration, we solve a standard QP in ?, which
admits a closed-form solution. Parameters 1 , 2 , and 3 are selected through cross-validation.
An alternative simpler algorithm based on the data-dependent bounds of Corollary 4 consists of
first finding a distribution q minimizing the (regularized) discrepancy and then using that to find a
hypothesis minimizing the (regularized) weighted empirical risk. This leads to the following twostage procedure. First, we find a solution q? of the following convex optimization problem:
min
q 0
?
sup
w0 ??
T
?X
(pt
t=1
qt )(w0 ? (xt )
?
yt )2 +
1 kq
uk1 ,
(7)
where 1 and ? are parameters that can be selected via cross-validation. Our generalization bounds
hold for arbitrary weights q but we restrict them to being positive sequences. Note that other regularization terms such as kqk22 and kq pk22 from the bound of Corollary 4 can be incorporated
in the optimization problem, but we discard them to minimize the number of parameters. This
problem can be solved using standard descent optimization methods, where, at each step, we use
DC-programming to evaluate the supremum over w0 . Alternatively, one can upper bound the suprePT
mum by t=1 qt dt and then solve the resulting optimization problem.
The solution q? of (7) is then used to solve the following (weighted) kernel ridge regression problem:
min
w
?X
T
t=1
qt? (w ? (xt )
yt )2 +
2
2 kwkH
Note that, in order to guarantee the convexity of this problem, we require q?
6
(8)
.
0.
Conclusion
We presented a general theoretical analysis of learning in the broad scenario of non-stationary nonmixing processes, the realistic setting for a variety of applications. We discussed in detail several
algorithms benefitting from the learning guarantees presented. Our theory can also provide a finer
analysis of several existing algorithms and help devise alternative principled learning algorithms.
Acknowledgments
This work was partly funded by NSF IIS-1117591 and CCF-1535987, and the NSERC PGS D3.
8
References
T. M. Adams and A. B. Nobel. Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling.
The Annals of Probability, 38(4):1345?1367, 2010.
A. Agarwal and J. Duchi. The generalization ability of online algorithms for dependent data. Information
Theory, IEEE Transactions on, 59(1):573?587, 2013.
P. Alquier and O. Wintenberger. Model selection for weakly dependent time series forecasting. Technical
Report 2010-39, Centre de Recherche en Economie et Statistique, 2010.
P. Alquier, X. Li, and O. Wintenberger. Prediction of time series by statistical learning: general losses and fast
rates. Dependence Modelling, 1:65?93, 2014.
D. Andrews. First order autoregressive processes and strong mixing. Cowles Foundation Discussion Papers
664, Cowles Foundation for Research in Economics, Yale University, 1983.
R. Baillie. Long memory processes and fractional integration in econometrics. Journal of Econometrics, 73
(1):5?59, 1996.
R. D. Barve and P. M. Long. On the complexity of learning from drifting distributions. In COLT, 1996.
P. Berti and P. Rigo. A Glivenko-Cantelli theorem for exchangeable random variables. Statistics & Probability
Letters, 32(4):385 ? 391, 1997.
T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. J Econometrics, 1986.
G. E. P. Box and G. Jenkins. Time Series Analysis, Forecasting and Control. Holden-Day, Incorporated, 1990.
P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods. Springer-Verlag, New York, 1986.
V. H. De la Pe?na and E. Gin?e. Decoupling: from dependence to independence: randomly stopped processes,
U-statistics and processes, martingales and beyond. Probability and its applications. Springer, NY, 1999.
P. Doukhan. Mixing: properties and examples. Lecture notes in statistics. Springer-Verlag, New York, 1994.
R. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom
inflation. Econometrica, 50(4):987?1007, 1982.
J. D. Hamilton. Time series analysis. Princeton, 1994.
V. Kuznetsov and M. Mohri. Generalization bounds for time series prediction with non-stationary processes.
In ALT, 2014.
A. C. Lozano, S. R. Kulkarni, and R. E. Schapire. Convergence and consistency of regularized boosting
algorithms with stationary -mixing observations. In NIPS, pages 819?826, 2006.
R. Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, pages
5?34, 2000.
D. Modha and E. Masry. Memory-universal prediction of stationary random processes. Information Theory,
IEEE Transactions on, 44(1):117?133, Jan 1998.
M. Mohri and A. Mu?noz Medina. New analysis and algorithm for learning with drifting distributions. In ALT,
2012.
M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In NIPS, 2009.
M. Mohri and A. Rostamizadeh. Stability bounds for stationary '-mixing and -mixing processes. Journal of
Machine Learning Research, 11:789?814, 2010.
V. Pestov. Predictive PAC learnability: A paradigm for learning from exchangeable input data. In GRC, 2010.
A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and
learnability. In NIPS, 2010.
A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Stochastic, constrained, and smoothed adversaries.
In NIPS, 2011.
A. Rakhlin, K. Sridharan, and A. Tewari. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 2015.
C. Shalizi and A. Kontorovitch. Predictive PAC learning and process decompositions. In NIPS, 2013.
I. Steinwart and A. Christmann. Fast learning from non-i.i.d. observations. In NIPS, 2009.
P. D. Tao and L. T. H. An. A D.C. optimization algorithm for solving the trust-region subproblem. SIAM
Journal on Optimization, 8(2):476?505, 1998.
M. Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control
Systems. Springer-Verlag New York, Inc., 1997.
B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability,
22(1):94?116, 1994.
9
| 5836 |@word mild:3 version:2 norm:3 suitably:1 open:1 decomposition:1 q1:2 series:22 united:1 ktv:5 chervonenkis:1 denoting:1 existing:1 z2:1 must:1 realistic:1 stationary:35 generative:3 selected:2 recherche:1 boosting:2 node:1 simpler:1 unbounded:2 along:1 consists:3 prove:3 shorthand:1 manner:1 introduce:2 expected:3 p1:2 growing:1 globally:1 considering:1 provided:2 estimating:1 bounded:4 underlying:1 notation:3 z:1 finding:1 transformation:1 guarantee:20 finance:1 exchangeable:3 masry:2 control:2 appear:2 hamilton:2 positive:1 consequence:1 modha:2 path:3 studied:2 suggests:1 doukhan:2 averaged:1 unique:1 acknowledgment:1 earthquake:1 practice:4 union:2 procedure:2 jan:1 barve:3 kpt:2 universal:1 suprema:1 empirical:6 weather:1 statistique:1 close:1 selection:2 context:1 applying:1 risk:1 equivalent:1 yt:12 economics:2 convex:8 ergodic:2 formulate:1 simplicity:1 supz:2 importantly:1 deriving:1 pk2:2 stability:1 notion:6 variation:1 annals:2 target:4 play:1 pt:22 suppose:1 programming:3 hypothesis:6 element:1 econometrics:3 observed:1 role:1 subproblem:1 solved:3 region:2 principled:2 pd:1 mu:2 complexity:14 convexity:1 econometrica:1 pinsker:1 trained:1 weakly:1 solving:1 heteroscedasticity:1 predictive:2 purely:2 learner:2 joint:2 various:1 distinct:1 fast:3 kp:2 glivenko:1 lag:2 widely:1 larger:1 valued:3 say:1 relax:1 otherwise:1 solve:3 ability:2 statistic:3 economie:1 jointly:1 final:1 online:3 sequence:17 advantage:1 propose:1 maximal:2 realization:1 mixing:33 brockwell:2 pk22:1 bollerslev:2 convergence:7 p:1 extending:1 rademacher:6 adam:2 help:1 depending:1 andrew:2 derive:1 minor:1 qt:41 strong:2 p2:2 christmann:2 stochastic:10 vc:2 require:3 hx:1 shalizi:2 fix:1 generalization:23 suffices:1 preliminary:3 extension:1 pl:1 hold:11 sufficiently:1 inflation:1 exp:22 equilibrium:1 mapping:2 achieves:1 early:1 adopt:1 combinatorial:3 weighted:5 kontorovitch:2 gaussian:1 always:2 sight:1 supt:1 corollary:6 derived:1 notational:1 modelling:1 likelihood:1 check:2 cantelli:1 contrast:2 adversarial:1 rostamizadeh:4 benefitting:1 dim:1 tional:1 dependent:16 integrated:1 typically:1 holden:1 tao:2 issue:1 classification:2 among:1 aforementioned:1 dual:2 denoted:1 colt:1 constrained:1 special:3 integration:1 field:1 construct:2 aware:2 never:1 sampling:1 broad:2 yu:2 discrepancy:21 future:1 report:3 np:4 simplify:2 t2:1 randomly:1 preserve:1 familiar:1 n1:8 attempt:1 zt0:7 stationarity:8 interest:2 mixture:2 chain:2 necessary:1 shorter:1 decoupled:2 unless:1 tree:8 littlestone:1 arma:1 e0:1 theoretical:1 minimal:2 stopped:1 instance:2 cover:6 rigo:2 uniform:8 kq:11 learnability:2 characterize:1 supx:1 combined:2 st:3 siam:1 uk1:7 na:2 squared:2 containing:1 r2d:1 hoeffding:1 li:1 de:3 coefficient:1 inc:1 depends:2 later:1 root:1 closed:1 sup:27 reached:1 minimize:2 formed:1 variance:1 efficiently:1 yield:4 weak:1 bayesian:1 finer:2 history:3 za:2 nonstationarity:1 definition:5 proof:3 associated:2 sampled:1 proved:3 ut:2 fractional:1 organized:1 hilbert:1 dca:1 courant:2 dt:7 follow:1 day:1 improved:1 box:2 though:1 furthermore:1 steinwart:2 receives:1 trust:2 google:1 alquier:4 consisted:1 concept:1 verify:1 contain:1 lozano:2 counterpart:1 assigned:1 regularization:1 analytically:1 hence:1 equality:3 ccf:1 davis:2 covering:12 chaining:1 coincides:2 generalized:2 complete:2 ridge:1 duchi:3 ranging:1 consideration:1 novel:4 qp:1 overview:1 discussed:1 marginals:1 refer:1 expressing:1 consistency:1 similarly:1 centre:1 funded:1 moving:1 stable:2 access:1 heteroskedasticity:1 mum:1 
berti:2 showed:3 optimizing:1 discard:1 scenario:9 certain:1 verlag:3 inequality:8 binary:4 arbitrarily:1 vt:1 devise:5 additional:4 converge:1 paradigm:1 ii:1 full:2 technical:2 cross:2 long:5 y:1 prediction:10 regression:3 kwk2h:1 expectation:2 iteration:1 kernel:5 agarwal:3 remarkably:1 completes:1 crucial:3 rest:2 unlike:1 vitaly:2 sridharan:3 seem:1 integer:1 nonstationary:1 identically:1 variety:1 independence:1 gave:3 restrict:1 pestov:2 reduce:1 engle:2 c2c:3 tx1:1 forecasting:14 york:5 generally:2 iterating:1 tewari:3 nonparametric:1 schapire:1 meir:3 nsf:1 uk2:1 estimated:7 write:2 arima:2 key:4 drawn:2 d3:1 asymptotically:4 letter:1 family:3 reader:1 draw:3 appendix:7 bound:53 ct:5 yale:1 precisely:1 min:5 tv:1 according:7 s1:1 taken:2 discus:1 precomputed:1 needed:1 generalizes:1 jenkins:2 rewritten:1 z10:1 apply:1 observe:3 alternative:3 drifting:4 denotes:1 include:1 log2:2 establish:1 classical:2 objective:2 question:1 realized:1 quantity:4 liml:1 parametric:1 concentration:1 dependence:3 rt:5 gin:2 mx:1 distance:2 uk22:1 majority:1 w0:5 nobel:2 providing:1 minimizing:2 differencing:1 setup:1 kingdom:1 zt:60 upper:5 observation:3 markov:3 finite:1 descent:2 immediate:1 incorporated:2 y1:2 dc:3 smoothed:1 arbitrary:3 cast:1 specified:1 z1:15 distinction:1 established:4 nip:6 address:2 beyond:1 adversary:1 kwkh:2 below:1 ev:3 including:2 memory:6 max:4 vidyasagar:2 event:1 natural:3 rely:1 regularized:5 zr:1 older:1 brief:1 cim:2 started:1 negativity:1 prior:2 tangent:3 asymptotic:3 relative:1 law:1 loss:10 fully:1 highlight:1 lecture:1 proven:1 ingredient:2 validation:2 foundation:2 consistent:1 mohri:15 last:4 drastically:1 formal:1 side:1 institute:2 noz:2 taking:1 benefit:1 distributed:4 dimension:2 depth:3 valid:1 autoregressive:7 ignores:2 made:2 commonly:1 twostage:1 adaptive:1 log3:2 zt1:43 transaction:2 supremum:4 monotonicity:1 global:1 z20:2 assumed:2 alternatively:2 kqk2:2 grc:1 additionally:1 decoupling:1 symmetry:1 mehryar:1 domain:1 main:1 pgs:1 ztt:1 noise:1 child:2 x1:2 en:1 kqk22:3 martingale:2 ny:3 slow:1 medina:2 exponential:1 lie:1 pe:2 young:1 z0:3 theorem:13 xt:10 pac:5 jensen:1 nyu:2 rakhlin:8 x:1 admits:1 alt:2 vapnik:1 sequential:18 effectively:1 kr:2 conditioned:1 supf:1 baillie:2 entropy:1 simply:1 ez:2 expressed:2 datadependent:1 nserc:1 kuznetsov:8 springer:4 corresponds:1 conditional:3 price:1 hard:1 change:2 uniformly:2 wintenberger:3 zb:1 total:1 lemma:2 zr1:1 partly:1 experimental:1 la:2 select:2 support:1 relevance:1 kulkarni:1 evaluate:1 princeton:1 d1:1 cowles:2 |
5,343 | 5,837 | Empirical Localization of Homogeneous Divergences
on Discrete Sample Spaces
Takashi Takenouchi
Department of Complex and Intelligent Systems
Future University Hakodate
116-2 Kamedanakano, Hakodate, Hokkaido, 040-8655, Japan
[email protected]
Takafumi Kanamori
Department of Computer Science and Mathematical Informatics
Nagoya University
Furocho, Chikusaku, Nagoya 464-8601, Japan
[email protected]
Abstract
In this paper, we propose a novel parameter estimator for probabilistic models on
discrete space. The proposed estimator is derived from minimization of homogeneous divergence and can be constructed without calculation of the normalization
constant, which is frequently infeasible for models in the discrete space. We investigate statistical properties of the proposed estimator such as consistency and
asymptotic normality, and reveal a relationship with the information geometry.
Some experiments show that the proposed estimator attains comparable performance to the maximum likelihood estimator with drastically lower computational
cost.
1
Introduction
Parameter estimation of probabilistic models on discrete space is a popular and important issue in
the fields of machine learning and pattern recognition. For example, the Boltzmann machine (with
hidden variables) [1] [2] [3] is a very popular probabilistic model to represent binary variables, and
attracts increasing attention in the context of Deep learning [4]. A training of the Boltzmann machine, i.e., estimation of parameters is usually done by the maximum likelihood estimation (MLE).
The MLE for the Boltzmann machine cannot be explicitly solved and the gradient-based optimization is frequently used. A difficulty of the gradient-based optimization is that the calculation of the
gradient requires calculation of a normalization constant or a partition function in each step of the optimization and its computational cost is sometimes exponential order. The problem of computational
cost is common to the other probabilistic models on discrete spaces and various kinds of approximation methods have been proposed to solve the difficulty. One approach tries to approximate the
probabilistic model by a tractable model by the mean-field approximation, which considers a model
assuming independence of variables [5]. Another approach such as the contrastive divergence [6]
avoids the exponential time calculation by the Markov Chain Monte Carlo (MCMC) sampling.
In the literature of parameters estimation of probabilistic model for continuous variables, [7] employs a score function which is a gradient of log-density with respect to the data vector rather than
parameters. This approach makes it possible to estimate parameters without calculating the normalization term by focusing on the shape of the density function. [8] extended the method to discrete
variables, which defines information of ?neighbor? by contrasting probability with that of a flipped
1
variable. [9] proposed a generalized local scoring rules on discrete sample spaces and [10] proposed
an approximated estimator with the Bregman divergence.
In this paper, we propose a novel parameter estimator for models on discrete space, which does not
require calculation of the normalization constant. The proposed estimator is defined by minimization
of a risk function derived by an unnormalized model and the homogeneous divergence having a weak
coincidence axiom. The derived risk function is convex for various kind of models including higher
order Boltzmann machine. We investigate statistical properties of the proposed estimator such as the
consistency and reveal a relationship between the proposed estimator and the ?-divergence [11].
2
Settings
Let X be a d-dimensional vector of random variables in a discrete space?
X (typically {+1, ?1}d )
and a bracket ?f ? be summation of a function f (x) on X , i.e., ?f ? = x?X f (x). Let M and
P be a space of all non-negative finite measures on X and a subspace consisting of all probability
measures on X , respectively.
M = {f (x) |?f ? < ?, f (x) ? 0 } , P = {f (x) |?f ? = 1, f (x) ? 0 } .
In this paper, we focus on parameter estimation of a probabilistic model q?? (x) on X , written as
q?? (x) =
q? (x)
Z?
(1)
where ? is an m-dimensional vector of parameters, q? (x) is an unnormalized model in M and
Z? = ?q? ? is a normalization constant. A computation of the normalization constant Z? sometimes
requires calculation of exponential order and is sometimes difficult for models
? on the discrete space.
Note that the unnormalized model q? (x) is not normalized and ?q? ? = x?X q? (x) = 1 does not
necessarily hold. Let ?? (x) be a function on X and throughout the paper, we assume without loss
of generality that the unnormalized model q? (x) can be written as
q? (x) = exp(?? (x)).
(2)
Remark 1. By setting ?? (x) as ?? (x) ? log Z? , the normalized model (1) can be written as (2).
Example 1. The Bernoulli distribution on X = {+1, ?1} is a simplest example of the probabilistic
model (1) with the function ?? (x) = ?x.
Example 2. With a function ??,k (x) = (x1 , . . . , xd , x1 x2 , . . . , xd?1 xd , x1 x2 x3 , . . .)?, we can define a k-th order Boltzmann machine [1, 12].
d1
Example 3. Let xo ? {+1,
?1}d2 be an observed vector and hidden vector,
( T ?1}
) and xh ? {+1,
T
d1 +d2
respectively, and x = xo , xh ? {+1, ?1}
where T indicates the transpose, be a concatenated vector. A function ?h,? (xo ) for the Boltzmann machine with hidden variables is written
as
?
?h,? (xo ) = log
exp(??,2 (x)),
(3)
where
xh
?
xh
is the summation with respect to the hidden variable xh .
Let us assume that a dataset D = {xi }ni=1 generated by an underlying distribution p(x), is given and
Z be a set of all patterns which appear in the dataset D. An empirical distribution p?(x) associated
with the dataset D is defined as
{ nx
x ? Z,
p?(x) = n
0
otherwise,
?n
where nx = i=1 I(xi = x) is a number of pattern x appeared in the dataset D.
Definition 1. For the unnormalized model (2) and distributions p(x) and p?(x) in P , probability
functions r?,? (x) and r??,? (x) on X are defined by
r?,? (x) =
p?(x)? q? (x)1??
p(x)? q? (x)1??
?
? , r??,? (x) =
?
? .
1??
?
p q?
p?? q?1??
2
The distribution r?,? (?
r?,? ) is an e-mixture model of the unnormalized model (2) and p(x) (?
p(x))
with ratio ? [11].
Remark 2. We observe that r0,? (x) = r?0,? (x) = q?? (x), r1,? (x) = p(x), r?1,? (x) = p?(x). Also if
p(x) = q??0 (x), r?,?0 (x) = q??0 (x) holds for an arbitrary ?.
?
To estimate the parameter ? of probabilistic
?n model q?? , the MLE defined by ? mle = argmax? L(?) is
frequently employed, where L(?) = i=1 log q?? (xi ) is the log-likelihood of the parameter ? with
the model q?? . Though the MLE is asymptotically consistent and efficient estimator, a main drawback
of the MLE is that computational cost for probabilistic models on the discrete space sometimes
becomes exponential. Unfortunately the MLE does not have an explicit solution in general, the
estimation of the parameter can be done by the gradient based optimization with a gradient ??
p??? ? ?
??
??
q? ??? ? of log-likelihood, where ??? = ??? . While the first term can be easily calculated, the second
term includes calculation of the normalization term Z? , which requires 2d times summation for
X = {+1, ?1}d and is not feasible when d is large.
3
Homogeneous Divergences for Statistical Inference
Divergences are an extension of the squared distance and are often used in statistical inference. A
formal definition of the divergence D(f, g) is a non-negative valued function on M?M or on P ?P
such that D(f, f ) = 0 holds for arbitrary f . Many popular divergences such as the Kullback-Leilber
(KL) divergence defined on P ? P enjoy the coincidence axiom, i.e., D(f, g) = 0 leads to f = g.
The parameter in the statistical model q?? is estimated by minimizing the divergence D(?
p, q?? ), with
respect to ?.
In the statistical inference using unnormalized models, the coincidence axiom of the divergence is
not suitable, since the probability and the unnormalized model do not exactly match in general. Our
purpose is to estimate the underlying distribution up to a constant factor using unnormalized models.
Hence, divergences having the property of the weak coincidence axiom, i.e., D(f, g) = 0 if and only
if g = cf for some c > 0, are good candidate. As a class of divergences with the weak coincidence
axiom, we focus on homogeneous divergences that satisfy the equality D(f, g) = D(f, cg) for any
f, g ? M and any c > 0.
A representative of homogeneous divergences is the pseudo-spherical (PS) divergence [13], or in
other words, ?-divergence [14], that is defined from the H?older inequality. Assume that ? is a
positive constant. For all non-negative functions f, g in M, the H?older inequality
?
1
? ?+1 ? ?+1
? ?+1 ? ?+1
f
g
? ?f g ? ? ? 0
holds. The inequality becomes an equality if and only if f and g are linearly dependent. The PSdivergence D? (f, g) for f, g ? M is defined by
?
?
?
?
1
?
D? (f, g) =
log f ?+1 +
log g ?+1 ? log ?f g ? ? , ? > 0.
(4)
1+?
1+?
The PS divergence is homogeneous, and the H?older inequality ensures the non-negativity and the
weak coincidence axiom of the PS-divergence. One can confirm that the scaled PS-divergence,
? ?1 D? , converges to the extended KL-divergence defined on M?M, as ? ? 0. The PS-divergence
is used to obtain a robust estimator [14].
As shown in (4), the standard PS-divergence from the empirical distribution p? to the unnormalized
model q? requires the computation of ?q??+1 ?, that may be infeasible in our setup. To circumvent
such an expensive computation, we employ a trick and substitute a model p?q? localized by the
empirical distribution for q? , which makes it possible to replace the total sum in ?q??+1 ? with the
1
empirical mean. More precisely, let us consider the PS-divergence from f = (p? q 1?? ) 1+? to
1
?
?
g = (p? q 1?? ) 1+? for the probability distribution p ? P and the unnormalized model q ? M,
where ?, ?? are two distinct real numbers. Then, the divergence vanishes if and only if p? q 1?? ?
?
?
p? q 1?? , i.e., q ? p. We define the localized PS-divergence S?,?? ,? (p, q) by
?
?
S?,?? ,? (p, q) = D? ((p? q 1?? )1/(1+?) , (p? q 1?? )1/(1+?) )
?
?
?
?
?
?
1
?
=
log p? q 1?? +
log?p? q 1?? ? ? log p? q 1?? ,
1+?
1+?
3
(5)
where ? = (? + ??? )/(1 + ?). Substituting the empirical distribution
p, the( total
? ? 1?? ? p? into
) sum over
?
nx ? 1??
X is replaced with a variant of the empirical mean such as p? q
=
q
(x)
x?Z
n
for a non-zero real number ?. Since S?,?? ,? (p, q) = S?? ,?,1/? (p, q) holds, we can assume ? > ??
without loss of generality. In summary, the conditions of the real parameters ?, ?? , ? are given by
? > 0, ? > ?? , ? ?= 0, ?? ?= 0, ? + ??? ?= 0,
where the last condition denotes ? ?= 0.
Let us consider another aspect of the computational issue about the localized PS-divergence. For the
probability distribution p and the unnormalized exponential model q? , we show that the localized
PS-divergence S?,?? ,? (p, q? ) is convex in ?, when the parameters ?, ?? and ? are properly chosen.
Theorem 1. Let p ? P be any probability distribution, and let q? be the unnormalized exponential
model q? (x) = exp(? T ?(x)), where ?(x) is any vector-valued function corresponding to the sufficient statistic in the (normalized) exponential model q?? . For a given ?, the localized PS-divergence
S?,?? ,? (p, q? ) is convex in ? for any ?, ?? , ? satisfying ? = (? + ??? )/(1 + ?) if and only if ? = 1.
? 2 log?p? q?1?? ?
Proof. After some calculation, we have
= (1 ? ?)2 Vr?,? [?], where Vr?,? [?] is the
???? T
covariance matrix of ?(x) under the probability r?,? (x). Thus, the Hessian matrix of S?,?? ,? (p, q? )
is written as
(1 ? ?)2
?(1 ? ?? )2
?2
? ,? (p, q? ) =
S
V
[?]
+
Vr?? ,? [?] ? (1 ? ?)2 Vr?,? [?].
?,?
r
?,?
1+?
1+?
???? T
The Hessian matrix is non-negative definite if ? = 1. The converse direction is deferred to the
supplementary material.
Up to a constant factor, the localized PS-divergence with ? = 1 characterized by Theorem 1 is
denotes as S?,?? (p, q) that is defined by
?
?
?
?
1
1
S?,?? (p, q) =
log p? q 1?? +
log?p? q 1?? ?
?
??1
1??
?
?
for ? > 1 > ? ?= 0. The parameter ? can be negative if p is positive on X . Clearly, S?,?? (p, q)
satisfies the homogeneity and the weak coincidence axiom as well as S?,?? ,? (p, q).
4
Estimation with the localized pseudo-spherical divergence
Given the empirical distribution p? and the unnormalized model q? , we define a novel estimator with
the localized PS-divergence S?,?? ,? (or S?,?? ). Though the localized PS-divergence plugged-in
the empirical distribution is not well-defined when ?? < 0, we can formally define the following
estimator by restricting the domain X to the observed set of examples Z, even for negative ?? :
?? = argmin S?,?? ,? (?
p, q? )
?
(6)
? ( n x )?
? ( n x )? ?
?
1
?
log
q? (x)1?? +
log
q? (x)1??
1+?
n
1+?
n
x?Z
x?Z
? ( n x )?
? log
q? (x)1?? .
n
= argmin
?
x?Z
Remark 3. The summation in (6) is defined on Z and then is computable even when ?, ?? , ? < 0.
Also the summation includes only Z(? n) terms and its computational cost is O(n).
Proposition 1. For the unnormalized model (2), the estimator (6) is Fisher consistent.
Proof. We observe
(
)
?
?
? + ??? ?
S?,?? ,? (?
q?0 , q? )
= ??
q??0 ??? 0 = 0
??
1+?
?=? 0
?
implying the Fisher consistency of ?.
4
Theorem 2. Let q? (x) be the unnormalized model (2), and ? 0 be the true parameter of underlying
distribution p(x) = q??0 (x). Then an asymptotic distribution of the estimator (6) is written as
?
n(?? ? ? 0 ) ? N (0, I(? 0 )?1 )
where I(? 0 ) = Vq??0 [??? 0 ] is the Fisher information matrix.
Proof. We shall sketch a proof and the detailed proof is given in supplementary material. Let us
assume that the empirical distribution is written as
p?(x) = q??0 (x) + ?(x).
Note that ??? = 0 because p?, q??0 ? P. The asymptotic expansion of the equilibrium condition for
the estimator (6) around ? = ? 0 leads to
?
S?,?? ,? (?
p, q? )
0=
??
?
?=?
?
?2
=
+
S?,?? ,? (?
p, q? )
(?? ? ? 0 ) + O(||?? ? ? 0 ||2 )
S?,?? ,? (?
p, q? )
T
??
????
?=? 0
?=? 0
By the delta method [15], we have
?
?
?
?
?
?
??
S?,?? ,? (?
p, q? )
S?,?? ,? (p, q? )
(? ? ?? )2 ??? 0 ?
2
??
??
(1 + ?)
?=? 0
?=? 0
and from the central limit theorem, we observe that
n
( ?
?
?)
? ? ? ? ? 1?
??0 (xi ) ? q??0 ??? 0
n ?? 0 ? = n
n i=1
asymptotically follows the normal distribution with mean 0, and variance I(? 0 ) = Vq??0 [??? 0 ], which
is known as the Fisher information matrix. Also from the law of large numbers, we have
?2
?
S ? (?
p, q? )
(? ? ?? )2 I(? 0 ),
(?? ? ? 0 ) ?
T ?,? ,?
2
(1
+
?)
????
?=? 0
in the limit of n ? ?. Consequently, we observe that (2).
Remark 4. The asymptotic distribution of (6) is equal to that of the MLE, and its variance does not
depend on ?, ?? , ?.
Remark 5. As shown in Remark 1, the normalized model (1) is a special case of the unnormalized
model (2) and then Theorem 2 holds for the normalized model.
5
Characterization of localized pseudo-spherical divergence S?,??
Throughout this section, we assume that ? = 1 holds and investigate properties of the localized PSdivergence S?,?? . We discuss influence of selection of ?, ?? and characterization of the localized
PS-divergence S?,?? in the following subsections.
5.1
Influence of selection of ?, ??
We investigate influence of selection of ?, ?? for the localized PS-divergence S?,?? with a view of
the estimating equation. The estimator ?? derived from S?,?? satisfies
?
? ?
?
?S?,?? (?
p, q? )
?
?
?
r
?
?
?
r
?
?
= 0.
(7)
?
?
?
?
?
?
,
?
?,
?
?
?
?
??
?=?
which is a moment matching with respect to two distributions r??,? and r??? ,? (?, ?? ?= 0, 1). On the
other hand, the estimating equation of the MLE is written as
?
?
?
? ?
?
?L(?)
? p???? mle ? ??
q?mle ??mle ? = r?1,?mle ??? mle ? r?0,?mle ??? mle = 0, (8)
??
?=? mle
which is a moment matching with respect to the empirical distribution p? = r?1,?mle and the
normalized model q?? = r?0,?mle . While the localized PS-divergence S?,?? is not defined with
(?, ?? ) = (0, 1), comparison of (7) with (8) implies that behavior the estimator ?? becomes similar to that of the MLE in the limit of ? ? 1 and ?? ? 0.
5
5.2
Relationship with the ?-divergence
The ?-divergence between two positive measures f, g ? M is defined as
?
?
1
D? (f, g) =
?f + (1 ? ?)g ? f ? g 1?? ,
?(1 ? ?)
where ? is a real number. Note that D? (f, g) ? 0 and 0 if and only if f = g, and the ?-divergence
reduces to KL(f, g) and KL(g, f ) in the limit of ? ? 1 and 0, respectively.
Remark 6. An estimator defined by minimizing ?-divergence D? (?
p, q?? ) between the empirical
distribution and normalized model, satisfies
?
?D? (?
p, q?? ) ? ? 1?? ?
? p? q? (?? ? ??
q? ??? ?) = 0
??
and requires calculation proportional to |X | which is infeasible. Also the same hold for an estimator
defined by minimizing ?-divergence D? (?
p, q? ) between the empirical distribution and unnormalized
?
?
?D? (p,q
? ?)
model, satisfying
? (1 ? ?)q? ??? ? p?? q?1?? = 0.
??
Here, we assume that ?, ?? ?= 0, 1 and consider a trick to cancel out the term ?g? by mixing two
?-divergences as follows.
(
)
???
D?,?? (f, g) =D? (f, g) +
D?? (f, g)
?
?(
)
?
1
??
1
1
? 1??
?? 1???
=
?
f
?
f
g
+
f
g
.
1 ? ? ?(1 ? ?? )
?(1 ? ?)
?(1 ? ?? )
Remark 7. D?,?? (f, g) ? 0 is divergence when ??? < 0 holds, i.e., D?,?? (f, g) ? 0 and
D?,?? (f, g) = 0 if and only if f = g. Without loss of generality, we assume ? > 0 > ?? for
D?,?? .
Firstly, we consider an estimator defined by the minmizer of
}
? { 1 ( nx )??
1 ( n x )?
1???
1??
min
q
(x)
?
q
(x)
.
?
?
?
1 ? ?? n
1?? n
(9)
x?Z
Note that the summation in (9) includes only Z(? n) terms. We remark the following.
Remark 8. Let q??0 (x) be the underlying distribution and q? (x) be the unnormalized model (2).
Then an estimator defined by minimizing D?,?? (?
q?0 , q? ) is not in general Fisher consistent, i.e.,
?
? (
)
?
?
?
?D?,?? (?
q? 0 , q ? )
???
?? ?
? q???0 q?1??
??? 0 ? q???0 q?1??
??? 0 = ?q?0 ?
? ?q?0 ?
q?0 ??? 0 ?= 0.
0
0
??
?=? 0
This remark shows that an estimator associated with D?,?? (?
p, q? ) does not have suitable properties
such as (asymptotic) unbiasedness and consistency while required computational cost is drastically
reduced. Intuitively, this is because the (mixture of) ?-divergence satisfies the coincidence axiom.
To overcome this drawback, we consider the following minimization problem for estimation of the
parameter ? of model q?? (x).
? r?) = argmin D?,?? (?
(?,
p, rq? )
?,r
where r is a constant corresponding to an inverse of the normalization term Z? = ?q? ?.
Proposition 2. Let q? (x) be the unnormalized model (2). For ? > 1 and 0 > ?? , the minimization
of D?,?? (?
p, rq? ) is equivalent to the minimization of
S?,?? (?
p, q? ).
Proof. For a given ?, we observe that
1
(?
? ) ???
?
p?? q?1??
.
r?? = argmin D?,?? (?
p, rq? ) = ? ? 1??? ?
r
p?? q?
6
(10)
Note that computation of (10) requires only sample order O(n) calculation. By plugging (10) into
D?,?? (?
p, rq? ), we observe
?? = argmin D?,?? (?
p, r?? q? ) = argmin S?,?? (?
p, q? ).
?
(11)
?
If ? > 1 and ?? < 0 hold, the estimator (11) is equivalent to the estimator associated with the
localized PS-divergence S?,?? , implying that S?,?? is characterized by the mixture of ?-divergences.
Remark 9. From a viewpoint of the information geometry [11], a metric (information geometrical
structure) induced by the ?-divergence is the Fisher metric induced by the KL-divergence. This implies that the estimation based on the (mixture of) ?-divergence is Fisher efficient and is an intuitive
explanation of the Theorem 2. The localized PS divergence S?,?? ,? and S?,?? with ??? > 0 can be
interpreted as an extension of the ?-divergence, which preserves Fisher efficiency.
6
Experiments
We especially focus on a setting of ? = 1, i.e., convexity of the risk function with the unnormalized
model exp(? T ?(x)) holds (Theorem 1) and examined performance of the proposed estimator.
6.1
Fully visible Boltzmann machine
In the first experiment, we compared the proposed estimator with parameter settings (?, ?? ) =
(1.01, 0.01), (1.01, ?0.01), (2, ?1), with the MLE and the ratio matching method [8]. Note that
the ratio matching method also does not require calculation of the normalization constant, and the
proposed method with (?, ?? ) = (1.01, ?0.01) may behave like the MLE as discussed in section
5.1.
All methods were optimized with the optim function in R language [16]. The dimension d of input
was set to 10 and the synthetic dataset was randomly generated from the second order Boltzmann
machine (Example 2) with a parameter ? ? ? N (0, I). We repeated comparison 50 times and
observed averaged performance. Figure 1 (a) shows median of the root mean square errors (RMSEs)
between ? ? and ?? of each method over 50 trials, against the number n of examples. We observe that
the proposed estimator works well and is superior to the ratio matching method. In this experiment,
the MLE outperforms the proposed method contrary to the prediction of Theorem 2. This is because
observed patterns were only a small portion of all possible patterns, as shown in Figure 1 (b). Even
in such a case, the MLE can take all possible patterns (210 = 1024) into account through the
normalization term log Z? ? Const + 12 ||?||2 that works like a regularizer. On the other hand, the
proposed method genuinely uses only the observed examples, and the asymptotic analysis would not
be relevant in this case. Figure 1 (c) shows median of computational time of each method against
n. The computational time of the MLE does not vary against n because the computational cost is
dominated by the calculation of the normalization constant. Both the proposed estimator and the
ratio matching method are significantly faster than the MLE, and the ratio matching method is faster
than the proposed estimator while the RMSE of the proposed estimator is less than that of the ratio
matching.
6.2
Boltzmann machine with hidden variables
In this subsection, we applied the proposed estimator for the Boltzmann machine with hidden variables whose associated function is written as (3). The proposed estimator with parameter settings
(?, ?? ) = (1.01, 0.01), (1.01, ?0.01), (2, ?1) was compared with the MLE. The dimension d1 of
observed variables was fixed to 10 and d2 of hidden variables was set to 2, and the parameter ? ?
was generated as ? ? ? N (0, I) including parameters corresponding to hidden variables. Note
that the Boltzmann machine with hidden variables is not identifiable and different values of the parameter do not necessarily generate different probability distributions, implying that estimators are
influenced by local minimums. Then we measured performance of each estimator by the averaged
7
500
300
200
250
20
50
Time[s]
100
200
150
MLE
Ratio matching
a1=1.01,a2=0.01
a1=1.01,a2=?0.01
a1=2,a2=?1
0
2
0.1
5
50
10
0.2
100
Number |Z| of unique patterns
1.0
RMSE
0.5
MLE
Ratio matching
a1=1.01,a2=0.01
a1=1.01,a2=?0.01
a1=2,a2=?1
0
5000
10000
15000
20000
25000
100
200
400
800
1600
n
3200
6400
0
12800 25600
5000
10000
15000
20000
25000
n
n
Figure 1: (a) Median of RMSEs of each method against n, in log scale. (b) Box-whisker plot of
number |Z| of unique patterns in the dataset D against n. (c) Median of computational time of each
method against n, in log scale.
1000
200
MLE
a1=1.01,a2=0.01
a1=1.01,a2=?0.01
a1=2,a2=?1
5
10
20
?15
50
MLE
a1=1.01,a2=0.01
a1=1.01,a2=?0.01
a1=2,a2=?1
100
Time[s]
500
?5
?10
Averaged Log likelihood
?10
MLE
a1=1.01,a2=0.01
a1=1.01,a2=?0.01
a1=2,a2=?1
?15
Averaged Log likelihood
?5
?n
log-likelihood n1 i=1 log q???(xi ) rather than the RMSE. An initial value of the parameter was set
by N (0, I) and commonly used by all methods. We repeated the comparison 50 times and observed the averaged performance. Figure 2 (a) shows median of averaged log-likelihoods of each
method over 50 trials, against the number n of example. We observe that the proposed estimator is
comparable with the MLE when the number n of examples becomes large. Note that the averaged
log-likelihood of MLE once decreases when n is samll, and this is due to overfitting of the model.
Figure 2 (b) shows median of averaged log-likelihoods of each method for test dataset consists of
10000 examples, over 50 trials. Figure 2 (c) shows median of computational time of each method
against n, and we observe that the proposed estimator is significantly faster than the MLE.
0
5000
10000
15000
20000
25000
0
5000
10000
15000
n
n
20000
25000
0
5000
10000
15000
20000
25000
n
Figure 2: (a) Median of averaged log-likelihoods of each method against n. (b) Median of averaged
log-likelihoods of each method calculated for test dataset against n. (c) Median of computational
time of each method against n, in log scale.
7
Conclusions
We proposed a novel estimator for probabilistic model on discrete space, based on the unnormalized model and the localized PS-divergence which has the homogeneous property. The proposed
estimator can be constructed without calculation of the normalization constant and is asymptotically
efficient, which is the most important virtue of the proposed estimator. Numerical experiments show
that the proposed estimator is comparable to the MLE and required computational cost is drastically
reduced.
8
References
[1] Hinton, G. E. & Sejnowski, T. J. (1986) Learning and relearning in boltzmann machines. MIT
Press, Cambridge, Mass, 1:282?317.
[2] Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. (1985) A learning algorithm for boltzmann
machines. Cognitive Science, 9(1):147?169.
[3] Amari, S., Kurata, K. & Nagaoka, H. (1992) Information geometry of Boltzmann machines.
In IEEE Transactions on Neural Networks, 3: 260?271.
[4] Hinton, G. E. & Salakhutdinov, R. R. (2012) A better way to pretrain deep boltzmann machines. In Advances in Neural Information Processing Systems, pp. 2447?2455 Cambridge,
MA: MIT Press.
[5] Opper, M. & Saad, D. (2001) Advanced Mean Field Methods: Theory and Practice. MIT
Press, Cambridge, MA.
[6] Hinton, G.E. (2002) Training Products of Experts by Minimizing Contrastive Divergence.
Neural Computation, 14(8):1771?1800.
[7] Hyv?arinen, A. (2005) Estimation of non-normalized statistical models by score matching.
Journal of Machine Learning Research, 6:695?708.
[8] Hyv?arinen, A. (2007) Some extensions of score matching. Computational statistics & data
analysis, 51(5):2499?2512.
[9] Dawid, A. P., Lauritzen, S. & Parry, M. (2012) Proper local scoring rules on discrete sample
spaces. The Annals of Statistics, 40(1):593?608.
[10] Gutmann, M. & Hirayama, H. (2012) Bregman divergence as general framework to estimate
unnormalized statistical models. arXiv preprint arXiv:1202.3727.
[11] Amari, S & Nagaoka, H. (2000) Methods of Information Geometry, volume 191 of Translations of Mathematical Monographs. Oxford University Press.
[12] Sejnowski, T. J. (1986) Higher-order boltzmann machines. In American Institute of Physics
Conference Series, 151:398?403.
[13] Good, I. J. (1971) Comment on ?measuring information and uncertainty,? by R. J. Buehler.
In Godambe, V. P. & Sprott, D. A. editors, Foundations of Statistical Inference, pp. 337?339,
Toronto: Holt, Rinehart and Winston.
[14] Fujisawa, H. & Eguchi, S. (2008) Robust parameter estimation with a small bias against heavy
contamination. Journal of Multivariate Analysis, 99(9):2053?2081.
[15] Van der Vaart, A. W. (1998) Asymptotic Statistics. Cambridge University Press.
[16] R Core Team. (2013) R: A Language and Environment for Statistical Computing. R Foundation
for Statistical Computing, Vienna, Austria.
9
| 5837 |@word trial:3 hyv:2 d2:3 covariance:1 contrastive:2 moment:2 initial:1 series:1 score:3 outperforms:1 hakodate:2 optim:1 written:9 numerical:1 visible:1 partition:1 shape:1 plot:1 implying:3 core:1 characterization:2 toronto:1 firstly:1 mathematical:2 constructed:2 consists:1 behavior:1 frequently:3 salakhutdinov:1 spherical:3 increasing:1 becomes:4 estimating:2 underlying:4 mass:1 kind:2 argmin:6 interpreted:1 contrasting:1 pseudo:3 fun:1 xd:3 exactly:1 scaled:1 converse:1 enjoy:1 appear:1 positive:3 local:3 limit:4 oxford:1 examined:1 averaged:10 unique:2 practice:1 definite:1 x3:1 axiom:8 empirical:13 significantly:2 matching:12 word:1 holt:1 cannot:1 selection:3 context:1 risk:3 influence:3 godambe:1 equivalent:2 attention:1 convex:3 rinehart:1 estimator:42 rule:2 annals:1 homogeneous:8 us:1 trick:2 dawid:1 recognition:1 approximated:1 expensive:1 satisfying:2 genuinely:1 observed:7 ackley:1 preprint:1 coincidence:8 solved:1 ensures:1 gutmann:1 decrease:1 contamination:1 rq:4 monograph:1 vanishes:1 convexity:1 environment:1 depend:1 localization:1 efficiency:1 easily:1 various:2 regularizer:1 distinct:1 monte:1 sejnowski:3 whose:1 supplementary:2 solve:1 valued:2 otherwise:1 amari:2 statistic:4 vaart:1 nagaoka:2 propose:2 product:1 relevant:1 mixing:1 intuitive:1 p:20 r1:1 converges:1 ac:2 hirayama:1 measured:1 lauritzen:1 implies:2 direction:1 drawback:2 material:2 require:2 arinen:2 proposition:2 summation:6 extension:3 hold:11 around:1 normal:1 exp:4 equilibrium:1 substituting:1 vary:1 a2:15 purpose:1 estimation:11 minimization:5 mit:3 clearly:1 rather:2 buehler:1 takashi:1 derived:4 focus:3 properly:1 bernoulli:1 likelihood:12 indicates:1 pretrain:1 attains:1 cg:1 inference:4 dependent:1 typically:1 hidden:9 issue:2 special:1 field:3 equal:1 once:1 having:2 sampling:1 flipped:1 cancel:1 future:1 intelligent:1 employ:2 randomly:1 preserve:1 divergence:57 homogeneity:1 replaced:1 geometry:4 consisting:1 argmax:1 n1:1 investigate:4 deferred:1 mixture:4 bracket:1 chain:1 bregman:2 plugged:1 measuring:1 cost:8 synthetic:1 unbiasedness:1 density:2 probabilistic:11 physic:1 informatics:1 squared:1 central:1 sprott:1 cognitive:1 expert:1 american:1 japan:2 account:1 includes:3 satisfy:1 explicitly:1 try:1 view:1 root:1 portion:1 rmse:3 square:1 ni:1 variance:2 weak:5 carlo:1 influenced:1 definition:2 against:12 pp:2 associated:4 proof:6 dataset:8 popular:3 austria:1 subsection:2 focusing:1 higher:2 done:2 though:2 box:1 generality:3 sketch:1 hand:2 defines:1 hokkaido:1 reveal:2 normalized:8 true:1 hence:1 equality:2 unnormalized:23 generalized:1 geometrical:1 novel:4 common:1 superior:1 jp:2 volume:1 discussed:1 cambridge:4 consistency:4 language:2 multivariate:1 nagoya:3 inequality:4 binary:1 der:1 scoring:2 minimum:1 employed:1 r0:1 reduces:1 match:1 characterized:2 calculation:13 faster:3 mle:36 plugging:1 a1:15 prediction:1 variant:1 metric:2 fujisawa:1 arxiv:2 represent:1 sometimes:4 normalization:12 median:10 saad:1 comment:1 induced:2 contrary:1 independence:1 attracts:1 computable:1 hessian:2 remark:12 deep:2 detailed:1 parry:1 simplest:1 reduced:2 generate:1 estimated:1 delta:1 discrete:13 shall:1 asymptotically:3 sum:2 inverse:1 uncertainty:1 throughout:2 comparable:3 winston:1 identifiable:1 precisely:1 x2:2 dominated:1 aspect:1 min:1 department:2 intuitively:1 xo:4 equation:2 vq:2 discus:1 tractable:1 observe:9 kurata:1 substitute:1 denotes:2 cf:1 vienna:1 const:1 calculating:1 concatenated:1 especially:1 gradient:6 subspace:1 distance:1 nx:4 considers:1 assuming:1 
relationship:3 ratio:9 minimizing:5 difficult:1 unfortunately:1 setup:1 negative:6 proper:1 boltzmann:16 markov:1 finite:1 behave:1 extended:2 hinton:4 team:1 arbitrary:2 required:2 kl:5 optimized:1 usually:1 pattern:8 appeared:1 including:2 explanation:1 suitable:2 difficulty:2 circumvent:1 advanced:1 normality:1 older:3 negativity:1 literature:1 asymptotic:7 law:1 loss:3 fully:1 whisker:1 proportional:1 localized:17 rmses:2 foundation:2 sufficient:1 consistent:3 viewpoint:1 editor:1 heavy:1 translation:1 summary:1 last:1 transpose:1 kanamori:2 infeasible:3 drastically:3 formal:1 bias:1 institute:1 neighbor:1 van:1 overcome:1 calculated:2 dimension:2 opper:1 avoids:1 commonly:1 transaction:1 approximate:1 kullback:1 confirm:1 overfitting:1 xi:5 continuous:1 robust:2 expansion:1 complex:1 necessarily:2 domain:1 main:1 linearly:1 repeated:2 x1:3 representative:1 vr:4 explicit:1 xh:5 exponential:7 candidate:1 theorem:8 virtue:1 restricting:1 relearning:1 satisfies:4 ma:2 consequently:1 replace:1 fisher:8 feasible:1 total:2 formally:1 eguchi:1 takafumi:1 mcmc:1 d1:3 |
5,344 | 5,838 | Multi-Layer Feature Reduction for Tree Structured
Group Lasso via Hierarchical Projection
Jie Wang1 , Jieping Ye1,2
Computational Medicine and Bioinformatics
2
Department of Electrical Engineering and Computer Science
University of Michigan, Ann Arbor, MI 48109
{jwangumi, jpye}@umich.edu
1
Abstract
Tree structured group Lasso (TGL) is a powerful technique in uncovering the tree
structured sparsity over the features, where each node encodes a group of features.
It has been applied successfully in many real-world applications. However, with
extremely large feature dimensions, solving TGL remains a significant challenge
due to its highly complicated regularizer. In this paper, we propose a novel MultiLayer Feature reduction method (MLFre) to quickly identify the inactive nodes
(the groups of features with zero coefficients in the solution) hierarchically in a
top-down fashion, which are guaranteed to be irrelevant to the response. Thus, we
can remove the detected nodes from the optimization without sacrificing accuracy. The major challenge in developing such testing rules is due to the overlaps
between the parents and their children nodes. By a novel hierarchical projection algorithm, MLFre is able to test the nodes independently from any of their
ancestor nodes. Moreover, we can integrate MLFre?that has a low computational cost?with any existing solvers. Experiments on both synthetic and real data
sets demonstrate that the speedup gained by MLFre can be orders of magnitude.
1
Introduction
Tree structured group Lasso (TGL) [13, 30] is a powerful regression technique in uncovering the
hierarchical sparse patterns among the features. The key of TGL, i.e., the tree guided regularization,
is based on a pre-defined tree structure and the group Lasso penalty [29], where each node represents
a group of features. In recent years, TGL has achieved great success in many real-world applications
such as brain image analysis [10, 18], gene data analysis [14], natural language processing [27, 28],
and face recognition [12]. Many algorithms have been proposed to improve the efficiency of TGL
[1, 6, 11, 7, 16]. However, the application of TGL to large-scale problems remains a challenge due
to its highly complicated regularizer.
As an emerging and promising technique in scaling large-scale problems, screening has received
much attention in the past few years. Screening aims to identify the zero coefficients in the sparse
solutions by simple testing rules such that the corresponding features can be removed from the
optimization. Thus, the size of the data matrix can be significantly reduced, leading to substantial
savings in computational cost and memory usage. Typical examples include TLFre [25], FLAMS
[22], EDPP [24], Sasvi [17], DOME [26], SAFE [8], and strong rules [21]. We note that strong rules
are inexact in the sense that features with nonzero coefficients may be mistakenly discarded, while
the others are exact. Another important direction of screening is to detect the non-support vectors for
support vector machine (SVM) and least absolute deviation (LAD) [23, 19]. Empirical studies have
shown that the speedup gained by screening methods can be several orders of magnitude. Moreover,
the exact screening methods improve the efficiency without sacrificing optimality.
However, to the best of our knowledge, existing screening methods are only applicable to sparse
models with simple structures such as Lasso, group Lasso, and sparse group Lasso. In this paper, we
1
propose a novel Multi-Layer Feature reduction method, called MLFre, for TGL. MLFre is exact and
it tests the nodes hierarchically from the top level to the bottom level to quickly identify the inactive
nodes (the groups of features with zero coefficients in the solution vector), which are guaranteed to
be absent from the sparse representation. To the best of our knowledge, MLFre is the first screening
method that is applicable to TGL with the highly complicated tree guided regularization.
The major technical challenges in developing MLFre for TGL lie in two folds. The first is that
most existing exact screening methods are based on evaluating the norm of the subgradients of the
sparsity-inducing regularizers with respect to the variables or groups of variables of interests. However, for TGL, we only have access to a mixture of the subgradients due to the overlaps between
parents and their children nodes. Therefore, our first major technical contribution is a novel hierarchical projection algorithm that is able to exactly and efficiently recover the subgradients with
respect to every node from the mixture (Sections 3 and 4). The second technical challenge is that
most existing exact screening methods need to estimate an upper bound involving the dual optimum.
This turns out to be a complicated nonconvex optimization problem for TGL. Thus, our second major
technical contribution is to show that this highly nontrivial nonconvex optimization problem admits
closed form solutions (Section 5). Experiments on both synthetic and real data sets demonstrate that
the speedup gained by MLFre can be orders of magnitude (Section 6). Please see supplements for
detailed proofs of the results in the main text.
? = [p]\G.
Notation: Let k?k be the `2 norm, [p] = {1, . . . , p} for a positive integer p, G ? [p], and G
For u ? Rp , let ui be its ith component. For G ? [p], we denote uG = [u]G = {v : vi = ui if i ?
G, vi = 0 otherwise} and HG = {u ? Rp : uG? = 0}. If G1 , G2 ? [n] and G1 ? G2 , we
emphasize that G2 \ G1 6= ?. For a set C, let int C, ri C, bd C, and rbd C be its interior, relative
interior, boundary, and relative boundary, respectively [5]. If C is closed and convex, the projection
operator is PC (z) := argminu?C kz ? uk, and its indicator function is IC (?), which is 0 on C and ?
elsewhere. Let ?0 (Rp ) be the class of proper closed convex functions on Rp . For f ? ?0 (Rp ), let
?f be its subdifferential and dom f := {z : f (z) < ?}. We denote by ?+ = max(?, 0).
2
Basics
We briefly review some basics of TGL. First, we introduce the so-called index tree.
Definition 1. [16] For an index tree T of depth d, we denote the node(s) of depth i by Ti =
{Gi1 , . . . , Gini }, where n0 = 1, G01 = [p], Gij ? [p], and ni ? 1, ? i ? [d]. We assume that
(i): Gij1 ? Gij2 = ?, ? i ? [d] and j1 6= j2 (different nodes of the same depth do not overlap).
i+1
(ii): If Gij is a parent node of Gi+1
? Gij .
` , then G`
When the tree structure is available (see supplement for an example), the TGL problem is
Xd Xni
min 21 ky ? X?k2 + ?
wji k?Gij k,
(TGL)
i=0
?
j=1
N ?p
N
where y ? R is the response vector, X ? R
is the data matrix, ?Gij and wji are the coefficients
vector and positive weight corresponding to node Gij , respectively, and ? > 0 is the regularization
parameter. We derive the Lagrangian dual problem of TGL as follows.
Pd Pni
Theorem 2. For the TGL problem, let ?(?) = i=0 j=1
wji k?Gij k. The following hold:
(i): Let ?ij (?) = k?Gij k and Bji = {? ? HGij : k?k ? wji }. We can write ??(0) as
Xd Xni
Xd Xni
??(0) =
wji ??ij (0) =
Bji .
(1)
i=0
j=1
i=0
j=1
T
(ii): Let F = {? : X ? ? ??(0)}. The Lagrangian dual of TGL is
sup 21 kyk2 ? 12 k y? ? ?k2 : ? ? F .
(2)
?
(iii): Let ? ? (?) and ?? (?) be the optimal solution of problems (TGL) and (2), respectively. Then,
y = X? ? (?) + ??? (?),
(3)
Xd Xni
XT ?? (?) ?
wji ??ij (? ? (?)).
(4)
i=0
j=1
The dual problem of TGL in (2) is equivalent to a projection problem, i.e., ?? (?) = PF (y/?). This
geometric property plays a fundamentally important role in developing MLFre (see Section 5).
2
3
Testing Dual Feasibility via Hierarchical Projection
Although the dual problem in (2) has nice geometric properties, it is challenging to determine the
feasibility of a given ? due to the complex dual feasible set F. An alternative approach is to test
if XT ? = P??(0) (XT ?). Although ??(0) is very complicated, we show that P??(0) (?) admits a
closed form solution by hierarchically splitting P??(0) (?) into a sum of projection operators with
respect to a collection of simpler sets. We first introduce some notations. For an index tree T , let
nX
o
Aij =
Bkt : Gtk ? Gij , ? i ? 0 ? [d], j ? [ni ],
(5)
t,k
nX
o
Cji =
(6)
Bkt : Gtk ? Gij , ? i ? 0 ? [d], j ? [ni ].
t,k
For a node Gij , the set Aij is the sum of Bkt corresponding to all its descendant nodes and itself, and
the set Cji the sum excluding itself. Therefore, by the definitions of Aij , Bji , and Cji , we have
??(0) = A01 ,
Aij = Bji + Cji , ? non-leaf node Gij ,
Aij = Bji , ? leaf node Gij ,
(7)
which implies that P??(0) (?) = PA01 (?) = PB10 +C10 (?). This motivates the first pillar of this paper,
i.e., Lemma 3, which splits PB10 +C10 (?) into the sum of two projections onto B10 and C10 , respectively.
Lemma 3. Let G ? [p], B = {u ? HG : kuk ? ?} with ? > 0, C ? HG a nonempty closed convex
set, and z an arbitrary point in HG . Then, the following hold:
(i): [2] PB (z) = min{1, ?/kzk}z if z 6= 0. Otherwise, PB (z) = 0.
(ii): IB+C (z) = IB (z ? PC (z)), i.e., PC (z) ? argminu?C IB (z ? u).
(iii): PB+C (z) = PC (z) + PB (z ? PC (z)).
By part (iii) of Lemma 3, we can split PA01 (XT ?) in the following form:
PA01 (XT ?) = PC10 (XT ?) + PB10 (XT ? ? PC10 (XT ?)).
(8)
T
As PB10 (?) admits a closed form solution by part (i) of Lemma 3, we can compute PA01 (X ?) if we
have PC10 (XT ?) computed. By Eq. (5) and Eq. (6), for a non-leaf node Gij , we note that
X
i+1
i
Cji =
Ai+1
? Gij }.
(9)
k , where Ic (Gj ) = {k : Gk
i
k?Ic (Gj )
Inspired by (9), we have the following result.
Lemma 4. Let {G` ? [p]}` be a setPof nonoverlapping index
P sets, {C` ? HG` }` be a set of
nonempty closed convex sets, and C = ` C` . Then, PC (z) = ` PC` (zG` ) for z ? Rp .
Remark 1. For Lemma 4, if all C` are balls centered at 0, then PC (z) admits a closed form solution.
By Lemma 4 and Eq. (9), we can further splits PC10 (XT ?) in Eq. (8) in the following form.
X
PC10 (XT ?) =
PA1k ([XT ?]G1k ), where Ic (G01 ) = {k : G1k ? G01 }.
0
k?Ic (G1 )
(10)
Consider the right hand side of Eq. (10). If G1k is a leaf node, Eq. (7) implies that A1k = Bk1 and
thus PA1k (?) admits a closed form solution by part (i) of Lemma 3. Otherwise, we continue to split
PA1k (?) by Lemmas (3) and (4). This procedure will always terminate as we reach the leaf nodes
[see the last equality in Eq. (7)]. Therefore, by a repeated application of Lemmas (3) and (4), the
following algorithm computes the closed form solution of PA01 (?).
Algorithm 1 Hierarchical Projection: PA01 (?).
Input: z ? Rp , the index tree T as in Definition 1, and positive weights wji for all nodes Gij in T .
Output: u0 = PA01 (z), vi for ? i ? 0 ? [d].
1: Set ui ? 0 ? Rp , ? i ? 0 ? [d + 1], vi ? 0 ? Rp , ? i ? 0 ? [d].
2: for i = d to 0 do
/*hierarchical projection*/
3:
for j = 1 to ni do
i+1
i
vG
(11)
4:
i = PB i (zGi ? u i ),
G
j
j
j
j
i
uiGi ? ui+1
+ vG
i.
Gi
j
j
5:
end for
6: end for
3
j
(12)
The time complexity of Algorithm 1 is similar to that of solving its proximal operator [16],
Pd Pni
|Gij |), where |Gij | is the number of features contained in the node Gij . As
i.e., O( i=0 j=1
Pni
i
j=1 |Gj | ? p by Definition 1, the time complexity of Algorithm 1 is O(pd), and thus O(p log p)
for a balanced tree, where d = O(log p). The next result shows that u0 returned by Algorithm 1 is
the projection of z onto A01 . Indeed, we have more general results as follows.
Theorem 5. ForAlgorithm
1, the following hold:
i
(i): uGi = PAij zGij , ? i ? 0 ? [d], j ? [ni ].
j
= PCji zGij , for any non-leaf node Gij .
(ii): ui+1
Gi
j
4
MLFre Inspired by the KKT Conditions and Hierarchical Projection
In this section, we motivate MLFre via the KKT condition in Eq. (4) and the hierarchical projection
in Algorithm 1. Note that for any node Gij , we have
(
{? ? HGij : k?k ? wji },
if [? ? (?)]Gij = 0,
i
i
?
(13)
wj ??j (? (?)) =
wji [? ? (?)]Gij /k[? ? (?)]Gij k, otherwise.
Moreover, the KKT condition in Eq. (4) implies that
? {?ji ? wji ??ij (? ? (?)) : ? i ? 0 ? [d], j ? [ni ]} such that XT ?? (?) =
Xd
i=0
Xn i
j=1
?ji .
(14)
Thus, if k?ji k < wji , we can see that [? ? (?)]Gij = 0. However, we do not have direct access to ?ji
even if ?? (?) is known, because XT ?? (?) is a mixture (sum) of all ?ji as shown in Eq. (14). Indeed,
Algorithm 1 turns out to be much more useful than testing the feasibility of a given ?: it is able to
split all ?ji ? wji ??ij (? ? (?)) from XT ?? (?). This will serve as a cornerstone in developing MLFre.
Theorem 6 rigorously shows this property of Algorithm 1.
Theorem 6. Let vi , i ? 0 ? [d] be the output of Algorithm 1 with input XT ?? (?), and {?ji : i ?
0 ? [d], j ? [ni ]} be the set of vectors that satisfy Eq. (14). Then, the following hold.
(i) If [? ? (?)]Gij = 0, and [? ? (?)]Glr 6= 0 for all Glr ? Gij , then
P
PAij [XT ?? (?)]Gij = {(k,t):Gt ?Gi } ?kt .
k
j
(ii) If Gij is a non-leaf node, and [? ? (?)]Gij 6= 0, then
P
PCji [XT ?? (?)]Gij = {(k,t):Gt ?Gi } ?kt .
k
j
i
i
i
?
(iii) vG
i ? wj ??j (? (?)), ? i ? 0 ? [d], j ? [ni ].
j
Combining Eq. (13) and part (iii) of Theorem 6, we can see that
i
i
?
kvG
i k < wj ? [? (?)]Gi = 0.
j
j
By plugging Eq. (11) and part (ii) of Theorem 5 into (15), we have [? ? (?)]Gij = 0 if
(a):
PBji [XT ?? (?)]Gij ? PCji [XT ?? (?)]Gij
< wji , if Gij is a non-leaf node,
(b):
PBji [XT ?? (?)]Gij
< wji ,
if Gij is a leaf node.
(15)
(R1)
(R2)
Moreover, the definition of PBji implies that we can simplify (R1) and (R2) to the following form:
T ?
[X ? (?)]Gij ? PCji [XT ?? (?)]Gij
< wji ? [? ? (?)]Gij = 0, if Gij is a non-leaf node, (R1?)
T ?
if Gij is a leaf node.
(R2?)
[X ? (?)]Gij
< wji ? [? ? (?)]Gij = 0,
However, (R1?) and (R2?) are not applicable to detect inactive nodes as they involve ?? (?). Inspired
by SAFE [8], we first estimate a set ? containing ?? (?). Let [XT ?]Gij = {[XT ?]Gij : ? ? ?} and
Sij (z) = zGij ? PCji zGij .
(16)
4
Then, we can relax (R1?) and (R2?) as
n
o
sup?
Sij (?)
: ?Gij ? ?ij ? [XT ?]Gij < wji ? [? ? (?)]Gij = 0, if Gij is a non-leaf node, (R1? )
o
n
if Gij is a leaf node.
(R2? )
sup?
?Gij
: ?Gij ? [XT ?]Gij < wji ? [? ? (?)]Gij = 0,
In view of (R1? ) and (R2? ), we sketch the procedure to develop MLFre in the following three steps.
Step 1 We estimate a set ? that contains ?? (?).
Step 2 We solve for the supreme values in (R1? ) and (R2? ), respectively.
Step 3 We develop MLFre by plugging the supreme values obtained in Step 2 to (R1? ) and (R2? ).
4.1 The Effective Interval of the Regularization Parameter ?
The geometric property of the dual problem in (2), i.e., ?? (?) = PF (y/?), implies that ?? (?) =
y/? if y/? ? F. Moreover, (R1) for the root node G01 leads to ? ? (?) = 0 if y/? is an interior point
of F. Indeed, the following theorem presents stronger results.
Theorem 7. For TGL, let ?max = max {? : y/? ? F} and S01 (?) be defined by Eq. (16). Then,
(i): ?max = {? : kS01 (XT y/?)k = w10 }.
(ii): y? ? F ? ? ? ?max ? ?? (?) = y? ? ? ? (?) = 0.
For more discussions on ?max , please refer to Section H in the supplements.
5
The Proposed Multi-Layer Feature Reduction Method for TGL
We follow the three steps in Section 4 to develop MLFre. Specifically, we first present an accurate
estimation of the dual optimum in Section 5.1, then we solve for the supreme values in (R1? ) and
(R2? ) in Section 5.2, and finally we present the proposed MLFre in Section 5.3.
5.1 Estimation of the Dual Optimum
We estimate the dual optimum by the geometric properties of projection operators [recall that
?? (?) = PF (y/?)]. We first introduce a useful tool to characterize the projection operators.
Definition 8. [2] For a closed convex set C and a point z0 ? C, the normal cone to C at z0 is
NC (z0 ) = {? : h?, z ? z0 i ? 0, ? z ? C}.
Theorem 7 implies that ?? (?) is known with ? ? ?max . Thus, we can estimate ?? (?) in terms of a
known ?? (?0 ). This leads to Theorem 9 that bounds the dual optimum by a small ball.
Theorem 9. For TGL, suppose that ?? (?0 ) is known with ?0 ? ?max . For ? ? (0, ?0 ), we define
(y
?? (?0 ),
if ?0 < ?max ,
?0 ?
n(?0 ) =
0
T y
XS1 X ?max , if ?0 = ?max ,
r(?, ?0 ) =
y
?
? ?? (?0 ),
hr(?,?0 ),n(?0 )i
n(?0 ).
kn(?0 )k2
r? (?, ?0 ) = r(?, ?0 ) ?
Then, the following hold:
(i): n(?0 ) ? NF (?? (?0 )).
(ii): k?? (?) ? (?? (?0 ) + 21 r? (?, ?0 ))k ? 12 kr? (?, ?0 )k.
Theorem 9 indicates that ?? (?) lies inside the ball of radius 21 kr? (?, ?0 )k centered at
o(?, ?0 ) = ?? (?0 ) + 12 r? (?, ?0 ).
5.2 Solving the Nonconvex Optimization Problems in (R1? ) and (R2? )
We solve for the supreme values in (R1? ) and (R2? ). For notational convenience, let
? = {? : k? ? o(?, ?0 )k ? 12 kr? (?, ?0 )k},
?ij
T
= {? : ? ? HGij , k? ? [X o(?, ?0 )]Gij k ?
?
T
?
1
i
2 kr (?, ?0 )kkXGj k2 }.
?ij
Theorem 9 implies that ? (?) ? ?, and thus [X ?]Gij ?
for all non-leaf nodes
?
?
MLFre by (R1 ) and (R2 ), we need to solve the following optimization problems:
sij (?, ?0 ) = sup? {kSij (?)k : ? ? ?ij }, if Gij is a non-leaf node,
sij (?, ?0 )
= sup? {k?k : ? ?
?ij },
5
if
Gij
is a leaf node.
Gij .
(17)
(18)
To develop
(19)
(20)
Before we solve problems (19) and (20), we first introduce some notations.
Definition 10. For a non-leaf node Gij of an index tree T , let Ic (Gij ) = {k : Gi+1
? Gij }. If
k
i+1
i+1
i
i
i
for
Gj \ ?k?Ic (Gij ) Gk 6= ?, we define a virtual child node of Gj by Gj 0 = Gj \ ?k?Ic (Gij ) Gi+1
k
0
0
0
j ? {ni+1 + 1, ni+1 + 2, . . . , ni+1 + ni+1 }, where ni+1 is the number of virtual nodes of depth
i + 1. We set the weights wji 0 = 0 for all virtual nodes Gij 0 .
Another useful concept is the so-called unique path between the nodes in the tree.
Lemma 11. [16] For any non-root node Gij , we can find a unique path from Gij to the root G01 . Let
the nodes on this path be Glrl , where l ? 0 ? [i], r0 = 1, and ri = j. Then, the following hold:
Gij ? Glrl , ? l ? 0 ? [i ? 1].
(21)
Gij
(22)
?
Glr
= ?, ? r 6= rl , l ? [i ? 1], r ? [ni ].
Solving Problem (19) We consider the following equivalent problem of (19).
2
1 i
2 (sj (?, ?0 ))
= sup? { 12 kSij (?)k2 : ? ? ?ij }, if Gij is a non-leaf node.
(23)
Although both the objective function and feasible set of problem (23) are convex, it is nonconvex as
we need to find the supreme value. We derive the closed form solutions of (19) and (23) as follows.
Theorem 12. Let c = [XT o(?, ?0 )]Gij , ? =
output of Algorithm 1 with input XT o(?, ?0 ).
1
?
i
2 kr (?, ?0 )kkXGj k2 ,
and vi , i ? 0 ? [d] be the
i
(i): Suppose that c ?
/ Cji . Then, sij (?, ?0 ) = kvG
i k + ?.
j
(ii): Suppose that node Gij has a virtual child node. Then, for any c ? Cji , sij (?, ?0 ) = ?.
(iii): Suppose that node Gij has no virtual child node. Then, the following hold.
(iii.a): If c ? rbd Cji , then sij (?, ?0 ) = ?.
(iii.b): If c ? ri Cji , then, for any node Gtk ? Gij , where t ? {i + 1, . . . , d} and k ? [nt + n0t ], let
the nodes on the path from Gtk to Gij be Glrl , where l = i, . . . , t, ri = j, and rt = k, and
Xt
t
l
?(Gi+1
wrl l ? kvG
.
(24)
l k
ri+1 , Gk ) =
r
l=i+1
l
t
Then, sij (?, ?0 ) = ? ? min{(k,t):Gtk ?Gij } ?(Gi+1
.
ri+1 , Gk )
+
Solving Problem (20) We can solve problem (20) by the Cauchy-Schwarz inequality.
Theorem 13. For problem (20), we have sij (?, ?0 ) = k[XT o(?, ?0 )]Gij k + 21 kr? (?, ?0 )kkXGij k2 .
5.3 The Multi-Layer Screening Rule
In real-world applications, the optimal parameter values are usually unknown. Commonly used
approaches to determine an appropriate parameter value, such as cross validation and stability selection, solve TGL many times along a grid of parameter values. This process can be very time
consuming. Motivated by this challenge, we present MLFre in the following theorem by plugging
the supreme values found by Theorems 12 and 13 into (R1? ) and (R2? ), respectively.
Theorem 14. For the TGL problem, suppose that we are given a sequence of parameter values
?max = ?0 > ?1 > ? ? ? > ?K . For each integer k = 0, . . . , K ? 1, we compute ?? (?k ) from a given
? ? (?k ) via Eq. (3). Then, for i = 1, . . . , d, MLFre takes the form of
sij (?k+1 , ?k ) < wji ? [? ? (?)]Gij = 0, ? j ? [ni ].
(MLFre)
Remark 2. We apply MLFre to identify inactive nodes hierarchically in a top-down fashion. Note
that, we do not need to apply MLFre to node Gij if one of its ancestor nodes passes the rule.
Remark 3. To simplify notations, we consider TGL with a single tree, in the proof. However, all
major results are directly applicable to TGL with multiple trees, as they are independent from each
other. We note that, many sparse models, such as Lasso, group Lasso, and sparse group Lasso, are
special cases of TGL with multiple trees.
6
(a) synthetic 1, p = 20000
(b) synthetic 1, p = 50000
(c) synthetic 1, p = 100000
(d) synthetic 2, p = 20000
(e) synthetic 2, p = 50000
(f) synthetic 2, p = 100000
Figure 1: Rejection ratios of MLFre on two synthetic data sets with different feature dimensions.
6
Experiments
We evaluate MLFre on both synthetic and real data sets by two measurements. The first measure is
the rejection ratios of MLFre for each level of the tree. Let p0 be the number of zero coefficients in
the solution vector and G i be the index set of the inactive nodes withPdepth i identified by MLFre.
i
|Gi |
The rejection ratio of the ith layer of MLFre is defined by ri = k?Gp0 k , where |Gik | is the
number of features contained in node Gik . The second measure is speedup, namely, the ratio of the
running time of the solver without screening to the running time of solver with MLFre.
For each data set, we run the solver combined with MLFre along a sequence of 100 parameter values
equally spaced on the logarithmic scale of ?/?max from 1.0 to 0.05. The solver for TGL is from the
SLEP package [15]. It also provides an efficient routine to compute ?max .
6.1 Simulation Studies
We perform experiments on two synthetic data Table 1: Running time (in seconds) for solving
sets, named synthetic 1 and synthetic 2, which TGL along a sequence of 100 tuning parameare commonly used in the literature [21, 31]. ter values of ? equally spaced on the logarithmic
The true model is y = X? ? + 0.01, ? scale of ?/?max from 1.0 to 0.05 by (a): the solver
N (0, 1). For each of the data set, we fix N = [15] without screening (see the third column); (b):
250 and select p = 20000, 50000, 100000. We the solver with MLFre (see the fifth column).
create a tree with height 4, i.e., d = 3. The
Dataset
p
solver MLFre MLFre+solver speedup
average sizes of the nodes with depth 1, 2 and
20000
483.96
1.03
30.17
16.04
3 are 50, 10, and 1, respectively. Thus, if
synthetic 1 50000 1175.91 2.95
39.49
29.78
p = 100000, we have roughly n1 = 2000,
100000 2391.43 6.57
58.91
40.60
n2 = 10000, and n3 = 100000. For synthet20000
470.54
1.19
37.87
12.43
ic 1, the entries of the data matrix X are i.i.d.
synthetic 2 50000 1122.30 3.13
43.97
25.53
standard Gaussian with zero pair-wise correla100000 2244.06 6.18
60.96
36.81
tion, i.e., corr (xi , xj ) = 0 for the ith and j th
ADNI+GMV 406262 20911.92 81.14
492.08
42.50
columns of X with i 6= j. For synthetic 2,
ADNI+WMV 406262 21855.03 80.83
556.19
39.29
the entries of X are drawn from standard GausADNI+WBV 406262 20812.06 82.10
564.36
36.88
sian with pair-wise correlation corr (xi , xj ) =
0.5|i?j| . To construct ? ? , we first randomly select 50% of the nodes with depth 1, and then randomly select 20% of the children nodes (with depth 2) of the remaining nodes with depth 1. The
components of ? ? corresponding to the remaining nodes are populated from a standard Gaussian,
and the remaining ones are set to zero.
7
(a) ADNI+GMV
(b) ADNI+WMV
(c) ADNI+WBV
Figure 2: Rejection ratios of MLFre on ADNI data set with grey matter volume (GMV), white mater
volume (WMV), and whole brain volume (WBV) as response vectors, respectively.
Fig. 1 shows the rejection ratios of all three layers of MLFre. We can see that MLFre identifies
P3
almost all of the inactive nodes, i.e., i=1 ri ? 90%, and the first layer contributes the most.
Moreover, Fig. 1 also indicates that, as the feature dimension (and the number of nodes in each level)
P3
increases, MLFre identifies more inactive nodes, i.e., i=1 ri ? 100%. Thus, we can expect a more
significant capability of MLFre in identifying inactive nodes on data sets with higher dimensions.
Table 1 shows the running time of the solver with and without MLFre. We can observe significant
speedups gained by MLFre, which are up to 40 times. Take synthetic 1 with p = 100000 for
example. The solver without MLFre takes about 40 minutes to solve TGL at 100 parameter values.
Combined with MLFre, the solver only needs less than one minute for the same task. Table 1 also
shows that the computational cost of MLFre is very low?that is negligible compared to that of the
solver without MLFre. Moreover, as MLFre identifies more inactive nodes with increasing feature
dimensions, Table 1 shows that the speedup gained by MLFre becomes more significant as well.
6.2 Experiments on ADNI data set
We perform experiments on the Alzheimers Disease Neuroimaging Initiative (ADNI) data set
(http://adni.loni.usc.edu/). The data set consists of 747 patients with 406262 single
nucleotide polymorphisms (SNPs). We create the index tree such that n1 = 4567, n2 = 89332,
and n3 = 406262. Fig. 2 presents the rejection ratios of MLFre on the ADNI data set with grey
matter volume (GMV), white matter volume (WMV), and whole brain volume (WBV) as response,
P3
respectively. We can see that MLFre identifies almost all inactive nodes, i.e., i=1 ri ? 100%. As
a result, we observe significant speedups gained by MLFre?that are about 40 times?from Table
1. Specifically, with GMV as response, the solver without MLFre takes about six hours to solve
TGL at 100 parameter values. However, combined with MLFre, the solver only needs about eight
minutes for the same task. Moreover, Table 1 also indicates that the computational cost of MLFre is
very low?that is negligible compared to that of the solver without MLFre.
7
Conclusion
In this paper, we propose a novel multi-layer feature reduction (MLFre) method for TGL. Our major
technical contributions lie in two folds. The first is the novel hierarchical projection algorithm that
is able to exactly and efficiently recover the subgradients of the tree-guided regularizer with respect
to each node from their mixture. The second is that we show a highly nontrivial nonconvex problem
admits a closed form solution. To the best of our knowledge, MLFre is the first screening method
that is applicable to TGL. An appealing feature of MLFre is that it is exact in the sense that the
identified inactive nodes are guaranteed to be absent from the sparse representations. Experiments
on both synthetic and real data sets demonstrate that MLFre is very effective in identifying inactive
nodes, leading to substantial savings in computational cost and memory usage without sacrificing
accuracy. Moreover, the capability of MLFre in identifying inactive nodes on higher dimensional
data sets is more significant. We plan to generalize MLFre to more general and complicated sparse
models, e.g., over-lapping group Lasso with logistic loss. In addition, we plan to apply MLFre to
other applications, e.g., brain image analysis [10, 18] and natural language processing [27, 28].
Acknowledgments
This work is supported in part by research grants from NIH (R01 LM010730, U54 EB020403) and
NSF (IIS- 0953662, III-1539991, III-1539722).
8
References
[1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1?106, Jan. 2012.
[2] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces.
Springer, 2011.
[3] M. Bazaraa, H. Sherali, and C. Shetty. Nonlinear Programming: Theory and Algorithms. WileyInterscience, 2006.
[4] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization, Second Edition. Canadian
Mathematical Society, 2006.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] X. Chen, Q. Lin, S. Kim, J. Carbonell, and E. Xing. Smoothing proximal gradient method for general
structured sparse regression. Annals of Applied Statistics, pages 719?752, 2012.
[7] W. Deng, W. Yin, and Y. Zhang. Group sparse optimization by alternating direction method. Technical
report, Rice CAAM Report TR11-06, 2011.
[8] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific
Journal of Optimization, 8:667?698, 2012.
[9] J.-B. Hiriart-Urruty. From convex optimization to nonconvex optimization. necessary and sufficient conditions for global optimality. In Nonsmooth optimization and related topics. Springer, 1988.
[10] R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach, and B. Thirion. Multiscale mining of
fmri data with hierarchical structured sparsity. SIAM Journal on Imaging Science, pages 835?856, 2012.
[11] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding.
Journal of Machine Learning Research, 12:2297?2334, 2011.
[12] K. Jia, T. Chan, and Y. Ma. Robust and practical face recognition via structured sparsity. In European
Conference on Computer Vision, 2012.
[13] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In
International Conference on Machine Learning, 2010.
[14] S. Kim and E. Xing. Tree-guided group lasso for multi-response regression with structured sparsity, with
an application to eqtl mapping. The Annals of Applied Statistics, 2012.
[15] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
[16] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances in neural
information processing systems, 2010.
[17] J. Liu, Z. Zhao, J. Wang, and J. Ye. Safe screening with variational inequalities and its application to
lasso. In International Conference on Machine Learning, 2014.
[18] M. Liu, D. Zhang, P. Yap, and D. Shen. Tree-guided sparse coding for brain disease classification. In
Medical Image Computing and Computer-Assisted Intervention, 2012.
[19] K. Ogawa, Y. Suzuki, and I. Takeuchi. Safe screening of non-support vectors in pathwise SVM computation. In ICML, 2013.
[20] A. Ruszczy?nski. Nonlinear Optimization. Princeton University Press, 2006.
[21] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for
discarding predictors in lasso-type problems. Journal of the Royal Statistical Society Series B, 74:245?
266, 2012.
[22] J. Wang, W. Fan, and J. Ye. Fused lasso screening rules via the monotonicity of subdifferentials. IEEE
Transactions on Pattern Analysis and Machine Intelligence, PP(99):1?1, 2015.
[23] J. Wang, P. Wonka, and J. Ye. Scaling svm and least absolute deviations via exact data reduction. In
International Conference on Machine Learning, 2014.
[24] J. Wang, P. Wonka, and J. Ye. Lasso screening rules via dual polytope projection. Journal of Machine
Learning Research, 16:1063?1101, 2015.
[25] J. Wang and J. Ye. Two-Layer feature reduction for sparse-group lasso via decomposition of convex sets.
Advances in neural information processing systems, 2014.
[26] Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representation of high dimensional data on large
scale dictionaries. In NIPS, 2011.
[27] D. Yogatama, M. Faruqui, C. Dyer, and N. Smith. Learning word representations with hierarchical sparse
coding. In International Conference on Machine Learning, 2015.
[28] D. Yogatama and N. Smith. Linguistic structured sparsity in text categorization. In Proceedings of the
Annual Meeting of the Association for Computational Linguistics, 2014.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the
Royal Statistical Society Series B, 68:49?67, 2006.
[30] P. Zhao, G. Rocha, and B. Yu. The composite absolute penalties family for grouped and hierarchical
variable selection. Annals of Statistics, 2009.
[31] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal
Statistical Society Series B, 67:301?320, 2005.
9
| 5838 |@word briefly:1 stronger:1 norm:2 pillar:1 grey:2 simulation:1 decomposition:1 p0:1 reduction:7 liu:4 contains:1 series:3 sherali:1 past:1 existing:4 nt:1 bd:1 j1:1 remove:1 n0:1 a1k:1 intelligence:1 leaf:18 ith:3 smith:2 provides:1 node:72 simpler:1 zhang:2 height:1 mathematical:1 along:3 rabbani:1 direct:1 initiative:1 descendant:1 consists:1 yuan:1 inside:1 introduce:4 indeed:3 roughly:1 multi:7 brain:5 inspired:3 paij:2 pf:3 solver:16 increasing:1 becomes:1 moreover:9 notation:4 emerging:1 every:1 nf:1 ti:1 xd:5 exactly:2 k2:7 uk:1 grant:1 medical:1 intervention:1 positive:3 before:1 engineering:1 negligible:2 bazaraa:1 path:4 rbd:2 ye1:1 challenging:1 ramadge:1 unique:2 acknowledgment:1 practical:1 testing:4 procedure:2 jan:1 uigi:1 empirical:1 significantly:1 composite:1 projection:18 boyd:1 pre:1 word:1 onto:2 interior:3 convenience:1 operator:6 selection:4 faruqui:1 equivalent:2 lagrangian:2 jieping:1 attention:1 independently:1 convex:11 shen:1 splitting:1 identifying:3 rule:9 vandenberghe:1 rocha:1 stability:1 annals:3 play:1 suppose:5 exact:7 programming:1 trend:1 yap:1 recognition:2 bottom:1 role:1 electrical:1 wang:5 wj:3 removed:1 substantial:2 balanced:1 pd:3 disease:2 ui:5 complexity:2 yosida:1 rigorously:1 dom:1 motivate:1 solving:6 serve:1 efficiency:2 pbji:3 regularizer:3 effective:2 detected:1 gini:1 solve:9 relax:1 otherwise:4 statistic:3 gi:11 g1:4 itself:2 sequence:3 net:1 propose:3 hiriart:1 j2:1 supreme:6 combining:1 eger:1 inducing:2 ky:1 ogawa:1 parent:3 optimum:5 r1:15 categorization:1 derive:2 develop:4 ij:11 received:1 c10:3 eq:15 strong:3 implies:7 direction:2 guided:6 safe:5 radius:1 centered:2 virtual:5 elimination:1 glrl:3 fix:1 polymorphism:1 assisted:1 hold:7 ic:9 normal:1 great:1 mapping:1 major:6 dictionary:1 estimation:3 applicable:5 eqtl:1 schwarz:1 grouped:3 create:2 successfully:1 tool:1 always:1 gaussian:2 aim:1 linguistic:1 notational:1 indicates:3 kim:3 sense:2 wang1:1 detect:2 a01:2 el:1 ancestor:2 uncovering:2 among:1 dual:13 classification:1 plan:2 smoothing:1 special:1 gramfort:1 construct:1 saving:2 represents:1 yu:1 icml:1 fmri:1 others:1 report:2 fundamentally:1 simplify:2 few:1 tlfre:1 nonsmooth:1 randomly:2 mater:1 usc:1 n1:2 friedman:1 screening:17 interest:1 highly:5 mining:1 mixture:4 pc:8 regularizers:1 hg:5 glr:3 kt:2 xni:4 ugi:1 accurate:1 necessary:1 gp0:1 alzheimers:1 nucleotide:1 tree:26 taylor:1 sacrificing:3 column:3 cost:5 deviation:2 entry:2 predictor:1 slep:2 characterize:1 bauschke:1 wrl:1 kn:1 proximal:3 synthetic:18 combined:3 nski:1 international:4 siam:1 quickly:2 fused:1 borwein:1 containing:1 zhao:2 leading:2 michel:1 nonoverlapping:1 coding:3 coefficient:6 int:1 matter:3 satisfy:1 vi:6 tion:1 root:3 view:1 closed:13 sup:6 xing:3 recover:2 complicated:6 capability:2 simon:1 jia:1 contribution:3 ni:15 accuracy:2 takeuchi:1 efficiently:2 spaced:2 identify:4 generalize:1 bk1:1 reach:1 definition:7 inexact:1 pp:1 proof:2 mi:1 n0t:1 dataset:1 recall:1 knowledge:3 hilbert:1 routine:1 jenatton:3 higher:2 supervised:1 follow:1 response:6 loni:1 correlation:1 hand:1 sketch:1 mistakenly:1 nonlinear:3 multiscale:1 logistic:1 usage:2 ye:7 concept:1 true:1 subdifferentials:1 regularization:6 equality:1 alternating:1 nonzero:1 white:2 edpp:1 kyk2:1 please:2 demonstrate:3 snp:1 image:3 wise:2 variational:1 novel:6 nih:1 ug:2 ji:8 rl:1 volume:6 association:1 lad:1 significant:6 refer:1 measurement:1 cambridge:1 ai:1 tuning:1 grid:1 populated:1 language:2 access:2 gj:7 gt:2 setpof:1 recent:1 chan:1 irrelevant:1 jpye:1 
nonconvex:6 inequality:2 success:1 continue:1 meeting:1 wji:20 deng:1 r0:1 determine:2 ii:10 u0:2 multiple:2 technical:6 adni:10 cross:1 bach:3 lin:2 equally:2 plugging:3 feasibility:3 involving:1 regression:5 basic:2 multilayer:1 patient:1 vision:1 achieved:1 subdifferential:1 addition:1 interval:1 kvg:3 pass:1 integer:2 ter:1 canadian:1 iii:10 split:5 xj:2 hastie:2 lasso:18 identified:2 absent:2 inactive:13 motivated:1 six:1 cji:9 penalty:3 returned:1 remark:3 jie:1 cornerstone:1 useful:3 detailed:1 involve:1 reduced:1 http:1 nsf:1 tibshirani:2 write:1 group:18 key:1 pb:5 drawn:1 kuk:1 viallon:1 imaging:1 monotone:1 year:2 sum:5 cone:1 run:1 package:1 powerful:2 named:1 tgl:34 almost:2 family:1 p3:3 scaling:2 bound:2 layer:9 guaranteed:3 fold:2 zgi:1 arizona:1 fan:1 annual:1 nontrivial:2 ri:10 n3:2 encodes:1 extremely:1 optimality:2 min:3 subgradients:4 speedup:8 structured:10 department:1 developing:4 pacific:1 ball:3 lm010730:1 appealing:1 caam:1 sij:10 ghaoui:1 yogatama:2 remains:2 turn:2 nonempty:2 thirion:1 urruty:1 dyer:1 end:2 umich:1 available:1 apply:3 observe:2 hierarchical:14 eight:1 appropriate:1 alternative:1 shetty:1 rp:9 s01:1 top:3 running:4 include:1 remaining:3 linguistics:1 bkt:3 medicine:1 society:4 r01:1 objective:1 ruszczy:1 rt:1 gradient:1 gmv:5 nx:2 carbonell:1 topic:1 polytope:1 cauchy:1 index:8 ratio:7 nc:1 neuroimaging:1 gk:4 wonka:2 proper:1 motivates:1 unknown:1 perform:2 upper:1 discarded:1 excluding:1 arbitrary:1 namely:1 pair:2 hour:1 nip:1 able:4 usually:1 pattern:2 sparsity:8 challenge:6 bien:1 max:15 memory:2 royal:3 overlap:3 natural:2 indicator:1 hr:1 sian:1 improve:2 identifies:4 b10:1 text:2 review:1 geometric:4 nice:1 literature:1 relative:2 xiang:1 loss:1 expect:1 vg:3 validation:1 foundation:1 integrate:1 sufficient:1 elsewhere:1 supported:1 last:1 aij:5 side:1 xs1:1 pni:3 face:2 absolute:3 sparse:18 fifth:1 moreau:1 boundary:2 dimension:5 depth:8 world:3 evaluating:1 kzk:1 kz:1 computes:1 xn:1 collection:1 commonly:2 suzuki:1 dome:1 u54:1 transaction:1 sj:1 emphasize:1 gene:1 monotonicity:1 global:1 kkt:3 mairal:2 consuming:1 xi:2 wmv:4 table:6 promising:1 terminate:1 robust:1 elastic:1 contributes:1 complex:1 european:1 zou:1 hierarchically:4 main:1 whole:2 edition:1 n2:2 gtk:5 child:6 repeated:1 xu:1 fig:3 fashion:2 combettes:1 lie:3 ib:3 third:1 down:2 theorem:18 z0:4 minute:3 xt:31 discarding:1 r2:14 svm:3 admits:6 gik:2 corr:2 gained:6 kr:6 supplement:3 magnitude:3 chen:1 rejection:6 michigan:1 logarithmic:2 yin:1 g1k:3 gi1:1 contained:2 pathwise:1 g2:3 springer:2 lewis:1 bji:5 w10:1 obozinski:3 rice:1 ma:1 ann:1 feasible:2 typical:1 specifically:2 lemma:11 argminu:2 called:3 gij:80 arbor:1 g01:5 zg:1 select:3 support:3 bioinformatics:1 evaluate:1 princeton:1 wileyinterscience:1 |
5,345 | 5,839 | Optimal Testing for Properties of Distributions
Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath
EECS, MIT
{jayadev, costis, g}@mit.edu
Abstract
Given samples from an unknown discrete distribution p, is it possible to distinguish whether p belongs to some class of distributions C versus p being far from
every distribution in C? This fundamental question has received tremendous attention in statistics, focusing primarily on asymptotic analysis, as well as in information theory and theoretical computer science, where the emphasis has been
on small sample size and computational complexity. Nevertheless, even for basic properties of discrete distributions such as monotonicity, independence, logconcavity, unimodality, and monotone-hazard rate, the optimal sample complexity
is unknown.
We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our
approach is an algorithm which solves the following problem: Given samples
from an unknown distribution p, and a known distribution q, are p and q close in
2
-distance, or far in total variation distance?
The optimality of our testers is established by providing matching lower bounds,
up to constant factors. Finally, a necessary building block for our testers and
an important byproduct of our work are the first known computationally efficient
proper learners for discrete log-concave, monotone hazard rate distributions.
1
Introduction
The quintessential scientific question is whether an unknown object has some property, i.e. whether a
model from a specific class fits the object?s observed behavior. If the unknown object is a probability
distribution, p, to which we have sample access, we are typically asked to distinguish whether p
belongs to some class C or whether it is sufficiently far from it.
This question has received tremendous attention in the field of statistics (see, e.g., [1, 2]), where test
statistics for important properties such as the ones we consider here have been proposed. Nevertheless, the emphasis has been on asymptotic analysis, characterizing the rates of convergence of test
statistics under null hypotheses, as the number of samples tends to infinity. In contrast, we wish to
study the following problem in the small sample regime:
?(C, "): Given a family of distributions C, some " > 0, and sample access to an unknown
distribution p over a discrete support, how many samples are required to distinguish between
p 2 C versus dTV (p, C) > "?1
The problem has been studied intensely in the literature on property testing and sublinear algorithms
[3, 4, 5], where the emphasis has been on characterizing the optimal tradeoff between p?s support
size and the accuracy " in the number of samples. Several results have been obtained, roughly
1
We want success probability at least 2/3, which can be boosted to 1
times and taking the majority.
1
by repeating the test O(log(1/ ))
clustering into three groups, where (i) C is the class of monotone distributions over [n], or more
generally a poset [6, 7]; (ii) C is the class of independent, or k-wise independent distributions over a
hypergrid [8, 9]; and (iii) C contains a single-distribution q, and the problem becomes that of testing
whether p equals q or is far from it [8, 10, 11, 13].
With respect to (iii), [13] exactly characterizes the number of samples required to test identity to each
distribution q, providing a single tester matching this bound simultaneously for all q. Nevertheless,
this tester and its precursors are not applicable to the composite identity testing problem that we
consider. If our class C were finite, we could test against each element in the class, albeit this
would not necessarily be sample optimal. If our class C were a continuum, we would need tolerant
identity testers, which tend to be more expensive in terms of sample complexity [12], and result
in substantially suboptimal testers for the classes we consider. Or we could use approaches related
to generalized likelihood ratio test, but their behavior is not well-understood in our regime, and
optimizing likelihood over our classes becomes computationally intense.
Our Contributions We obtain sample-optimal and computationally efficient testers for ?(C, ")
for the most fundamental shape restrictions to a distribution. Our contributions are the following:
1. For a known distribution q over [n], and sample access to p, we show that distinguishing the
2
cases: (a) whether the 2 -distance between
p p and q is at most " /2, versus (b) the `1 distance
between p and q is at least 2", requires ?( n/"2 ) samples.
As
a
corollary,
we obtain an alternate
p
argument that shows that identity testing requires ?( n/"2 ) samples (previously shown in [13]).
2. For the class C = Mdn of monotone distributions over [n]d we require an optimal ? nd/2 /"2
p
number of samples, where prior work requires ? n log n/"6 samples for d = 1 and
? nd 1/2 poly (1/") for d > 1 [6, 7]. Our results improve the exponent of n with respect
?
to d, shave all logarithmic factors in n, and improve the exponent of " by at least a factor of 2.
(a) A useful building block and interesting byproduct of our analysis is extending Birg?e?s oblivious decomposition for single-dimensional monotone distributions [14] to monotone distributions in d 1, and to the stronger notion of 2 -distance. See Section C.1.
(b) Moreover, we show that O(logd n) samples suffice to learn a monotone distribution over
[n]d in 2 -distance. See Lemma 3 for the precise statement.
3. For the
C =P
?d of product distributions over [n1 ] ? ? ? ? ? [nd ], our algorithm requires
Q class1/2
O ( ` n` ) + ` n` /"2 samples. We note that a product distribution is one where all
marginals are independent, so this is equivalent to testing if a collection of random variables are
all independent. In the case where n` ?s are large, then the first term dominates, and the sample
Q
1/2
complexity is O(( ` n` ) /"2 ). In particular, when d is a constant and all n` ?s are equal to n,
we achieve the optimal sample complexity of ?(nd/2 /"2 ). To the best of our knowledge, this
is the first result for d
3, and when d = 2, this improves the previously known complexity
from O "n6 polylog(n/") [8, 15], significantly improving the dependence on " and shaving all
logarithmic factors.
4. For the classes C = LCDn , C = MHRn and C = Un of log-concave,
monotone-hazard-rate
p
and unimodal distributions over [n], we require an optimal ? n/"2 number of samples. Our
testers for LCDn and C = MHRn are to our knowledge the first for these classes for the
low sample regime we are studying?see [16] and its references for statistics literature on the
asymptotic regime. Our tester for Un improves the dependence of the sample complexity on " by
at least a factor of 2 in the exponent, and shaves all logarithmic factors in n, compared to testers
based on testing monotonicity.
(a) A useful building block and important byproduct of our analysis are the first computationally efficient algorithms for properly learning log-concave and monotone-hazard-rate distributions, to within " in total variation distance, from poly(1/") samples, independent of
the domain size n. See Corollaries 4 and 6. Again, these are the first computationally efficient algorithms to our knowledge in the low sample regime. [17] provide algorithms for
density estimation, which are non-proper, i.e. will approximate an unknown distribution
from these classes with a distribution that does not belong to these classes. On the other
hand, the statistics literature focuses on maximum-likelihood estimation in the asymptotic
regime?see e.g. [18] and its references.
2
5. For all the above classes we obtain matching lower bounds, showing that the sample complexity
of our testers is optimal with respect to n, " and when applicable d. See Section 8. Our lower
bounds are based on extending Paninski?s lower bound for testing uniformity [10].
Our Techniques At the heart of our tester lies a novel use of the 2 statistic. Naturally, the 2
and its related `2 statistic have been used in several of the afore-cited results. We propose a new
use of the 2 statistic enabling our optimal sample complexity. The essence of our approach is to
first draw a small number of samples (independent of n for log-concave and monotone-hazard-rate
distributions and only logarithmic in n for monotone and unimodal distributions) to approximate the
unknown distribution p in 2 distance. If p 2 C, our learner is required to output a distribution q
that is O(")-close to C in total variation and O("2 )-close to p in 2 distance. Then some analysis
reduces our testing problem to distinguishing the following cases:
? p and q are O("2 )-close in 2 distance; this case corresponds to p 2 C.
? p and q are ?(")-far in total variation distance; this case corresponds to dTV (p, C) > ".
We draw a comparison with robust identity testing, in which one must distinguish whether p and q are
c1 "-close or c2 "-far in total variation distance, for constants c2 > c1 > 0. In [12], Valiant and Valiant
show that ?(n/ log n) samples are required for this problem ? a nearly-linear sample complexity,
which may be prohibitively large in many settings. In comparison, the problem we study tests for
2
closeness rather than total variation closeness: a relaxation of the previous problem. However,
our tester
p demonstrates that this relaxation allows us to achieve a substantially sublinear complexity
of O( n/"2 ). On the other hand, this relaxation is still tight enough to be useful, demonstrated by
our application in obtaining sample-optimal testers.
We note that while the 2 statistic for testing hypothesis is prevalent in statistics providing optimal error exponents in the large-sample regime, to the best of our knowledge, in the small-sample
regime, modified-versions of the 2 statistic have only been recently used for closeness-testing
in [19, 20, 21] and for testing uniformity of monotone distributions in [22]. In particular, [19]
design an unbiased statistic for estimating the 2 distance between two unknown distributions.
Organization In Section 4, we show that a version of the 2 statistic, appropriately excluding
certain elements of the support, is sufficiently well-concentrated to distinguish between the above
cases. Moreover, the sample complexity of our algorithm is optimal for most classes. Our base tester
is combined with the afore-mentioned extension of Birg?e?s decomposition theorem to test monotone
distributions in Section 5 (see Theorem 2 and Corollary 1), and is also used to test independence of
distributions in Section 6 (see Theorem 3).
In Section 7, we give our results on testing unimodal, log-concave and monotone hazard rate distributions. Naturally, there are several bells and whistles that we need to add to the above skeleton
to accommodate all classes of distributions that we are considering. In Remark 1 we mention the
additional modifications for these classes.
Related Work. For the problems that we study in this paper, we have provided the related works
in the previous section along with our contributions. We cannot do justice to the role of shape restrictions of probability distributions in probabilistic modeling and testing. It suffices to say that
the classes of distributions that we study are fundamental, motivating extensive literature on their
learning and testing [23]. In the recent times, there has been work on shape restricted statistics, pioneered by Jon Wellner, and others. [24, 25] study estimation of monotone and k-monotone densities,
and [26, 27] study estimation of log-concave distributions. Due to the sheer volume of literature in
statistics in this field, we will restrict ourselves to those already referenced.
As we have mentioned, statistics has focused on the asymptotic regime as the number of samples
tends to infinity. Instead we are considering the low sample regime and are more stringent about the
behavior of our testers, requiring 2-sided guarantees. We want to accept if the unknown distribution
is in our class of interest, and also reject if it is far from the class. For this problem, as discussed
above, there are few results when C is a whole class of distributions. Closer related to our paper is
the line of papers [6, 7, 28] for monotonicity testing, albeit these papers have sub-optimal sample
complexity as discussed above. Testing independence of random variables has a long history in
statisics [29, 30]. The theoretical computer science community has also considered the problem of
3
testing independence of two random variables [8, 15]. While our results sharpen the case where the
variables are over domains of equal size, they demonstrate an interesting asymmetric upper bound
when this is not the case. More recently, Acharya and Daskalakis provide optimal testers for the
family of Poisson Binomial Distributions [31].
Finally, contemporaneous work of Canonne et al [32] provides a generic algorithm and lower bounds
for the single-dimensional families of distributions considered here. We note that their algorithm
has a sample complexity which is suboptimal in both n and ", while our algorithms are optimal.
Their algorithm also extends to mixtures of these classes, though some of these extensions are not
computationally efficient. They also provide a framework for proving lower bounds, giving the
optimal bounds for many classes when " is sufficiently large with respect to 1/n. In comparison, we
provide these lower bounds unconditionally by modifying Paninski?s construction [10] to suit the
classes we consider.
2
Preliminaries
We use the following probability distances in our paper.
def
The total variation distance between distributions p and q is dTV (p, q) = supA |p(A) q(A)| =
2
def P
1
2
-distance between p and q over [n] is defined as 2 (p, q) = i2[n] (pi qiqi ) . The
2 kp qk1 . The
Kolmogorov distance between two probability measures p and q over an ordered set (e.g., R) with
def
cumulative density functions Fp and Fq is dK (p, q) = supx2R |Fp (x) Fq (x)|.
Our paper is primarily concerned with testing against classes of distributions, defined formally as:
Definition 1. Given " 2 (0, 1] and sample access to a distribution p, an algorithm is said to test a
class C if it has the following guarantees:
? If p 2 C, the algorithm outputs ACCEPT with probability at least 2/3;
? If dTV (p, C)
", the algorithm outputs R EJECT with probability at least 2/3.
We note the following useful relationships between these distances [33]:
Proposition 1. dK (p, q)2 ? dTV (p, q)2 ? 14 2 (p, q).
Definition 2. An ?-effective support of a distribution p is any set S such that p(S)
1
?.
The flattening of a function f over a subset S is the function f? such that f?i = p(S)/|S|.
Definition 3. Let p be a distribution, and support I1 , . . . is a partition of the domain. The flattening
of p with respect to I1 , . . . is the distribution p? which is the flattening of p over the intervals I1 , . . ..
Poisson Sampling Throughout this paper, we use the standard Poissonization approach. Instead
of drawing exactly m samples from a distribution p, we first draw m0 ? Poisson(m), and then
draw m0 samples from p. As a result, the number of times different elements in the support of p
occur in the sample become independent, giving much simpler analyses. In particular, the number
of times we will observe domain element i will be distributed as Poisson(mpi ), independently for
each i. Since Poisson(m) is tightly concentrated around m, this additional flexibility comes only at
a sub-constant cost in the sample complexity with an inversely exponential in m, additive increase
in the error probability.
3
The Testing Algorithm ? An Overview
Our algorithm for testing a class C can be decomposed into three steps.
Near-proper learning in 2 -distance. Our first step requires learning with very specific guarantees.
Given sample access to p 2 C, we wish to output q such that (i) q is close to C in total variation
distance, and (ii) p and q are O("2 )-close in 2 -distance on an " effective support2 of p. When
2
We also require the algorithm to output a description of an effective support for which this property holds.
This requirement can be slightly relaxed, as we show in our results for testing unimodality.
4
p is not in C, we do not guarantee anything about q. From an information theoretic standpoint,
this problem is harder than learning the distribution in total variation, since 2 -distance is more
restrictive than total variation distance. Nonetheless, for the structured classes we consider, we are
able to learn in 2 by modifying the approaches to learn in total variation.
Computation of distance to class. The next step is to see if the hypothesis q is close to the class C
or not. Since we have an explicit description of q, this step requires no further samples from p, i.e.
it is purely computational. If we find that q is far from the class C, then it must be that p 62 C, as
otherwise the guarantees from the previous step would imply that q is close to C. Thus, if it is not,
we can terminate the algorithm at this point.
2
-testing. At this point, the previous two steps guarantee that our distribution q is such that:
- If p 2 C, then p and q are close in
2
distance on a (known) effective support of p;
- If dTV (p, C)
", then p and q are far in total variation distance.
p
We can distinguish between these two cases using O( n/"2 ) samples with a simple statistical
test, that we describe in Section 4.
2
-
Using the above three-step approach, our tester, as described in the next section, can directly test
monotonicity, log-concavity, and monotone hazard rate. With an extra trick, using Kolmogorov?s
max inequality, it can also test unimodality.
4
A Robust
2
-`1 Identity Test
Our main result in this section is Theorem 1.
Theorem 1. Given " 2 (0, 1], a class of probability distributions C, sample access to a distribution
p, and an explicit description of a distribution q, both over [n] with the following properties:
Property 1. dTV (q, C) ? 2" .
Property 2. If p 2 C, then
2
(p, q) ?
"2
500 .
Then there exists an algorithm such that: If p 2 C, it outputs ACCEPT with probability at least 2/3;
If dTV (p, C) ", it outputs
p R EJECT with probability at least 2/3. The time and sample complexity
of this algorithm are O n/"2 .
Proof. Algorithm 1 describes a
2
testing procedure that gives the guarantee of the theorem.
Algorithm 1 Chi-squared testing algorithm
1: Input: "; an explicit distribution q; (Poisson) m samples from a distribution p, where Ni denotes
the number of occurrences of the ith domain element.
2: A
{i : qi "2 /50n}
P
(Ni mqi )2 Ni
3: Z
i2A
mqi
4: if Z ? m"2 /10 return close
5: else return far
In Section A we compute the mean and variance of the statistic Z (defined in Algorithm 1) as:
X (pi qi )2
X ? p2
pi ? (pi qi )2
2
E [Z] = m ?
= m ? (pA , qA ), Var [Z] =
2 2i + 4m ?
(1)
qi
qi
qi2
i2A
i2A
where by pA and qA we denote respectively the vectors p and q restricted to the coordinates in
A, and we slightly abuse notation when we write 2 (pA , qA ), as these do not then correspond to
probability distributions.
Lemma 1 demonstrates the separation in the means of the statistic Z in the two cases of interest,
i.e., p 2 C versus dTV (p, C)
", and Lemma 2 shows the separation in the variances in the two
cases. These two results are proved in Section B.
5
Lemma 1. If p 2 C, then E [Z] ? m"2 /500. If dTV (p, C) ", then E [Z] m"2 /5.
p
1
Lemma 2. Let m 20000 n/"2 . If p 2 C then Var [Z] ? 500000
m2 "4 . If dTV (p, C)
1
2
Var [Z] ? 100 E[Z] .
", then
Assuming Lemmas 1 and 2, Theorem 1 is now a simple application of Chebyshev?s inequality.
?
?
p
p
When p 2 C, we have that E [Z] + 3 Var [Z] ? 1/500 + 3/500000 m"2 ? m"2 /200. Thus,
Chebyshev?s inequality gives
h
i
p
?
?
?
?
1/2
Pr Z m"2 /10 ? Pr Z m"2 /200 ? Pr Z E [Z]
3 Var [Z]
? 1/3.
When dTV (p, C)
", E [Z]
p
3 Var [Z]
?
?
p
3/100 E[Z]
1
h
?
?
?
?
Pr Z ? m"2 /10 ? Pr Z ? 3m"2 /20 ? Pr Z
E [Z] ?
3m"2 /20. Therefore,
p
3 Var [Z]
1/2
i
? 1/3.
This proves the correctness of Algorithm 1. For the running time, we divide the summation in Z
into the elements for which Ni > 0 and Ni = 0. When Ni = 0, the contribution of the term to
the summation is mqi , and we can sum them up by subtracting the total probability of all elements
appearing at least once from 1.
Remark 1. To apply Theorem 1, we need to learn distribution in C and find a q that is O("2 )-close
in 2 -distance to p. For the class of monotone distributions, we are able to efficiently obtain such
a q, which immediately implies sample-optimal learning algorithms for this class. However, for
some classes, we may not be able to learn a q with such strong guarantees, and we must consider
modifications to our base testing algorithm.
For example, for log-concave and monotone hazard rate distributions, we can obtain a distribution
q and a set S with the following guarantees:
- If p 2 C, then
2
(pS , qS ) ? O("2 ) and p(S)
1
O(");
- If dTV (p, C)
", then dTV (p, q)
"/2. In this scenario, the tester will simply pretend that the
support of p and q is S, ignoring any samples and support elements in [n] \ S. Analysis of this
tester is extremely similar to Theorem 1. In particular, we can still show that the statistic Z will be
separated in the two cases. When p 2 C, excluding [n] \ S will only reduce Z. On the other hand,
when dTV (p, C) ", since p(S) 1 O("), p and q must still be far on the remaining support, and
we can show that Z is still sufficiently large. Therefore,
a small modification allows us to handle
p
this case with the same sample complexity of O( n/"2 ).
For unimodal distributions, we are even unable to identify a large enough subset of the support
where the 2 approximation is guaranteed to be tight. But we can show that there exists a light
enough piece of the support (in terms of probability mass under p) that we can exclude to make the
2
approximation tight. Given that we only use Chebyshev?s inequality to prove the concentration
of the test statistic, it would seem that our lack of knowledge of the piece to exclude would involve a
union bound and a corresponding increase in the required number of samples. We avoid this through
a careful application of Kolmogorov?s max inequality in our setting. See Theorem 7 of Section 7.
5
Testing Monotonicity
As the first application of our testing framework, we will demonstrate how to test for monotonicity.
Let d 1, and i = (i1 , . . . , id ), j = (j1 , . . . , jd ) 2 [n]d . We say i < j if il jl for l = 1, . . . , d. A
distribution p over [n]d is monotone (decreasing) if for all i < j, pi ? pj 3 .
We follow the steps in the overview. The learning result we show is as follows (proved in Section C).
3
This definition describes monotone non-increasing distributions. By symmetry, identical results hold for
monotone non-decreasing distributions.
6
Lemma 3. Let d 1. There is an algorithm that takes m = O((d log(n)/"2 )d /"2 ) samples from a
distribution p over [n]d , and outputs a distribution q such that if p is monotone, then with probability
"2
at least 5/6, 2 (p, q) ? 500
. Furthermore, the distance of q to monotone distributions can be
computed in time poly(m).
This accomplishes the first two steps in the overview. In particular, if the distance of q from monotone distributions is more than "/2, we declare that p is not monotone. Therefore, Property 1 in
Theorem 1 is satisfied, and the lemma states that Property 2 holds with probability at least 5/6. We
then proceed to the 2 `1 test. At this point, we have precisely the guarantees needed to apply
Theorem 1 over [n]d , directly implying our main result of this section:
Theorem 2. For any d
complexity
1, there exists an algorithm for testing monotonicity over [n]d with sample
!
?
?d
nd/2
d log n
1
O
+
? 2
"2
"2
"
and time complexity O nd/2 /"2 + poly(log n, 1/")d .
In particular, this implies the following optimal algorithms for monotonicity testing for all d 1:
p
Corollary 1. Fix any d 1, and suppose " > d log n/n1/4 . Then there exists an algorithm for
testing monotonicity over [n]d with sample complexity O nd/2 /"2 .
We note that the class of monotone distributions is the simplest of the classes we consider. We
now consider testing for log-concavity, monotone hazard rate, and unimodality, all of which are
much more challenging to test. In particular, these classes require a more sophisticated structural
understanding, more complex proper 2 -learning algorithms, and non-trivial modifications to our
2
-tester. We have already given some details on the required adaptations to the tester in Remark 1.
Our algorithms for learning these classes use convex programming. One of the main challenges is
to enforce log-concavity of the PDF when learning LCDn (respectively, of the CDF when learning
MHRn ), while simultaneously enforcing closeness in total variation distance. This involves a
careful choice of our variables, and we exploit structural properties of the classes to ensure the
soundness of particular Taylor approximations. We encourage the reader to refer to the proofs of
Theorems 7, 8, and 9 for more details.
6
Testing Independence of Random Variables
def
Let X = [n1 ] ? . . . ? [nd ], and let ?d be the class of all product distributions over X . Similar to
learning monotone distributions in 2 distance we prove the following result in Section E.
?P
?
d
Lemma 4. There is an algorithm that takes O ( `=1 n` )/"2 samples from a distribution p and
outputs a q 2 ?d such that if p 2 ?d , then with probability at least 5/6,
2
(p, q) ? O("2 ).
The distribution q always satisfies Property 1 since it is in ?d , and by this lemma, with probability
at least 5/6 satisfies Property 2 in Theorem 1. Therefore, we obtain the following result.
Theorem 3. For any d 1, there exists an algorithm for??
testing independence of random
variables
?
?
Qd
Pd
1/2
2
over [n1 ] ? . . . [nd ] with sample and time complexity O ( `=1 n` ) + `=1 n` /" .
When d = 2 and n1 = n2 = n this improves the result of [8] for testing independence of two
random variables.
Corollary 2. Testing if two distributions over [n] are independent has sample complexity ?(n/"2 ).
7
Testing Unimodality, Log-Concavity and Monotone Hazard Rate
Unimodal distributions over [n] (denoted by Un ) are all distributions p for which there exists an i?
such that pi is non-decreasing for i ? i? and non-increasing for i i? . Log-concave distributions
over [n] (denoted by LCDn ), is the sub-class of unimodal distributions for which pi 1 pi+1 ? p2i .
7
Monotone hazard rate (MHR) distributions over [n] (denoted by MHRn ), are distributions p with
f
CDF F for which i < j implies 1 fiFi ? 1 jFj .
The following theorem bounds the complexity of testing these classes (for moderate ").
Theorem 4. Suppose " > n 1/5 . For each of the classes, unimodal, log-concave,
and MHR, there
p
exists an algorithm for testing the class over [n] with sample complexity O( n/"2 ).
This result is a corollary of the specific results for each class, which is proved in the appendix.
In particular, a more complete statement for unimodality, log-concavity and monotone-hazard rate,
with precise dependence on both n and " is given in Theorems 7, 8 and 9 respectively. We mention
some key points about each class, and refer the reader to the respective appendix for further details.
Testing Unimodality Using a unionpbound argument, one can use the results on testing monotonicity to give an algorithm with O n log n/"2 samples. However, this is unsatisfactory, since
p
our lower bound (and as we will demonstrate, the true complexity of this problem) is n/"2 . We
overcome the logarithmic barrier introduced by the union bound, by employing a non-oblivious
decomposition of the domain, and using Kolmogorov?s max-inequality.
Testing Log-Concavity The key step is to design an algorithm to learn a log-concave distribution
in 2 distance. We formulate the problem as a linear program in the logarithms of the distribution
and show that using O(1/"5 ) samples, it is possible to output a log-concave distribution that has a
2
distance at most O("2 ) from the underlying log-concave distribution.
Testing Monotone Hazard Rate For learning MHR distributions in 2 distance, we formulate a
linear program in the logarithms of the CDF and show that using O(log(n/")/"5 ) samples, it is
possible to output a MHR distribution that has a 2 distance at most O("2 ) from the underlying
MHR distribution.
8
Lower Bounds
We now prove sharp lower bounds for the classes of distributions we consider. We show that the
example studied by Paninski [10] to prove lower bounds on testing uniformity can be used to prove
lower bounds for the classes we consider. They consider a class Q consisting of 2n/2 distributions
defined as follows. Without loss of generality assume that n is even. For each of the 2n/2 vectors
z0 z1 . . . zn/2 1 2 { 1, 1}n/2 , define a distribution q 2 Q over [n] as follows.
(
(1+z` c")
for i = 2` + 1
qi = (1 nz` c")
(2)
for i = 2`.
n
Each distribution in Q has a total variation distance c"/2 from Un , the uniform distribution over
[n]. By choosing c to be an appropriate constant, Paninski [10] showed that apdistribution picked
uniformly at random from Q cannot be distinguished from Un with fewer than n/"2 samples with
probability at least 2/3.
Suppose C is a class of distributions such that (i) The uniform distribution Un is in C, (ii) For
appropriately chosen c, dTV (C, Q) ", then testing C is not easier than
p distinguishing Un from Q.
Invoking [10] immediately implies that testing the class C requires ?( n/"2 ) samples.
The lower bounds for all the one dimensional distributions will follow directly from this construction, and for testing monotonicity in higher dimensions, we extend this construction to d
1,
appropriately. These arguments are proved in Section H, leading to the following lower bounds for
testing these classes:
Theorem 5.
? For any d
1, any algorithm for testing monotonicity over [n]d requires ?(nd/2 /"2 ) samples.
? For d
1, testing independence over [n1 ]?? ? ??[nd ] requires ? (n1 n2 . . . nd )1/2 /"2 samples.
p
? Testing unimodality, log-concavity, or monotone hazard rate over [n] needs ?( n/"2 ) samples.
8
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
R. A. Fisher, Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd, 1925.
E. Lehmann and J. Romano, Testing statistical hypotheses. Springer Science & Business Media, 2006.
E. Fischer, ?The art of uninformed decisions: A primer to property testing,? Science, 2001.
R. Rubinfeld, ?Sublinear-time algorithms,? in International Congress of Mathematicians, 2006.
C. L. Canonne, ?A survey on distribution testing: your data is big, but is it blue,? ECCC, 2015.
T. Batu, R. Kumar, and R. Rubinfeld, ?Sublinear algorithms for testing monotone and unimodal distributions,? in Proceedings of STOC, 2004.
A. Bhattacharyya, E. Fischer, R. Rubinfeld, and P. Valiant, ?Testing monotonicity of distributions over
general partial orders,? in ICS, 2011, pp. 239?252.
T. Batu, E. Fischer, L. Fortnow, R. Kumar, R. Rubinfeld, and P. White, ?Testing random variables for
independence and identity,? in Proceedings of FOCS, 2001.
N. Alon, A. Andoni, T. Kaufman, K. Matulef, R. Rubinfeld, and N. Xie, ?Testing k-wise and almost
k-wise independence,? in Proceedings of STOC, 2007.
L. Paninski, ?A coincidence-based test for uniformity given very sparsely sampled discrete data.? IEEE
Transactions on Information Theory, vol. 54, no. 10, 2008.
D. Huang and S. Meyn, ?Generalized error exponents for small sample universal hypothesis testing,?
IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 8157?8181, Dec 2013.
G. Valiant and P. Valiant, ?Estimating the unseen: An n/ log n-sample estimator for entropy and support
size, shown optimal via new CLTs,? in Proceedings of STOC, 2011.
??, ?An automatic inequality prover and instance optimal identity testing,? in FOCS, 2014.
L. Birg?e, ?Estimating a density under order restrictions: Nonasymptotic minimax risk,? The Annals of
Statistics, vol. 15, no. 3, pp. 995?1012, September 1987.
R. Levi, D. Ron, and R. Rubinfeld, ?Testing properties of collections of distributions,? Theory of Computing, vol. 9, no. 8, pp. 295?347, 2013.
P. Hall and I. Van Keilegom, ?Testing for monotone increasing hazard rate,? Annals of Statistics, pp.
1109?1137, 2005.
S. O. Chan, I. Diakonikolas, R. A. Servedio, and X. Sun, ?Learning mixtures of structured distributions
over discrete domains,? in Proceedings of SODA, 2013.
M. Cule and R. Samworth, ?Theoretical properties of the log-concave maximum likelihood estimator of
a multidimensional density,? Electronic Journal of Statistics, vol. 4, pp. 254?270, 2010.
J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. T. Suresh, ?Competitive classification and
closeness testing,? in COLT, 2012, pp. 22.1?22.18.
S. Chan, I. Diakonikolas, G. Valiant, and P. Valiant, ?Optimal algorithms for testing closeness of discrete
distributions,? in SODA, 2014, pp. 1193?1203.
B. B. Bhattacharya and G. Valiant, ?Testing closeness with unequal sized samples,? in NIPS, 2015.
J. Acharya, A. Jafarpour, A. Orlitsky, and A. Theertha Suresh, ?A competitive test for uniformity of
monotone distributions,? in Proceedings of AISTATS, 2013, pp. 57?65.
R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk, Statistical Inference under Order
Restrictions. New York: Wiley, 1972.
H. K. Jankowski and J. A. Wellner, ?Estimation of a discrete monotone density,? Electronic Journal of
Statistics, vol. 3, pp. 1567?1605, 2009.
F. Balabdaoui and J. A. Wellner, ?Estimation of a k-monotone density: characterizations, consistency and
minimax lower bounds,? Statistica Neerlandica, vol. 64, no. 1, pp. 45?70, 2010.
F. Balabdaoui, H. Jankowski, and K. Rufibach, ?Maximum likelihood estimation and confidence bands
for a discrete log-concave distribution,? 2011. [Online]. Available: http://arxiv.org/abs/1107.3904v1
A. Saumard and J. A. Wellner, ?Log-concavity and strong log-concavity: a review,? Statistics Surveys,
vol. 8, pp. 45?114, 2014.
M. Adamaszek, A. Czumaj, and C. Sohler, ?Testing monotone continuous distributions on highdimensional real cubes,? in SODA, 2010, pp. 56?65.
J. N. Rao and A. J. Scott, ?The analysis of categorical data from complex sample surveys,? Journal of the
American Statistical Association, vol. 76, no. 374, pp. 221?230, 1981.
A. Agresti and M. Kateri, Categorical data analysis. Springer, 2011.
J. Acharya and C. Daskalakis, ?Testing Poisson Binomial Distributions,? in SODA, 2015, pp. 1829?1840.
C. Canonne, I. Diakonikolas, T. Gouleakis, and R. Rubinfeld, ?Testing shape restrictions of discrete
distributions,? in STACS, 2016.
A. L. Gibbs and F. E. Su, ?On choosing and bounding probability metrics,? International Statistical
Review, vol. 70, no. 3, pp. 419?435, dec 2002.
J. Acharya, A. Jafarpour, A. Orlitsky, and A. T. Suresh, ?Efficient compression of monotone and m-modal
distributions,? in ISIT, 2014.
S. Kamath, A. Orlitsky, D. Pichapati, and A. T. Suresh, ?On learning distributions from their samples,? in
COLT, 2015.
P. Massart, ?The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality,? The Annals of Probability,
vol. 18, no. 3, pp. 1269?1283, 07 1990.
9
| 5839 |@word version:2 compression:1 clts:1 stronger:1 nd:12 justice:1 decomposition:3 invoking:1 mention:2 jafarpour:3 harder:1 accommodate:1 contains:1 bhattacharyya:1 must:4 mqi:3 class1:1 additive:1 partition:1 j1:1 shape:4 implying:1 fewer:1 ith:1 core:1 provides:1 characterization:1 gautam:1 ron:1 org:1 simpler:1 along:1 c2:2 become:1 focs:2 prove:5 roughly:1 behavior:3 whistle:1 chi:1 decomposed:1 decreasing:3 precursor:1 considering:2 increasing:3 becomes:2 provided:1 estimating:3 moreover:2 suffice:1 notation:1 mass:1 medium:1 null:1 underlying:2 kaufman:1 substantially:2 mathematician:1 guarantee:10 every:1 multidimensional:1 orlitsky:4 concave:14 exactly:2 prohibitively:1 demonstrates:2 declare:1 understood:1 referenced:1 tends:2 congress:1 id:1 abuse:1 emphasis:3 nz:1 studied:2 challenging:1 testing:69 poset:1 block:3 union:2 procedure:1 suresh:4 universal:1 bell:1 significantly:1 composite:1 matching:3 reject:1 boyd:1 confidence:1 cannot:2 close:12 risk:1 restriction:5 equivalent:1 demonstrated:1 attention:2 independently:1 convex:1 focused:1 formulate:2 survey:3 immediately:2 m2:1 q:1 estimator:2 meyn:1 proving:1 handle:1 notion:1 variation:14 coordinate:1 annals:3 construction:3 suppose:3 pioneered:1 programming:1 distinguishing:3 hypothesis:5 trick:1 element:8 pa:3 expensive:1 asymmetric:1 sparsely:1 stacs:1 observed:1 role:1 coincidence:1 mhr:5 sun:1 mentioned:2 pd:1 complexity:25 skeleton:1 asked:1 support2:1 uniformity:5 tight:4 purely:1 learner:2 unimodality:8 kolmogorov:4 separated:1 effective:4 describe:1 kp:1 choosing:2 agresti:1 say:2 drawing:1 otherwise:1 statistic:26 soundness:1 fischer:3 unseen:1 online:1 czumaj:1 propose:1 subtracting:1 product:3 adaptation:1 flexibility:1 achieve:2 description:3 convergence:1 requirement:1 extending:2 p:1 object:3 polylog:1 alon:1 uninformed:1 received:2 strong:2 p2:1 solves:1 involves:1 come:1 implies:4 qd:1 tester:23 modifying:2 stringent:1 require:4 suffices:1 fix:1 preliminary:1 proposition:1 isit:1 summation:2 extension:2 hold:3 sufficiently:4 considered:2 around:1 ic:1 hall:1 m0:2 continuum:1 estimation:7 samworth:1 applicable:2 correctness:1 mit:2 always:1 modified:1 rather:1 avoid:1 boosted:1 corollary:6 focus:1 properly:1 unsatisfactory:1 prevalent:1 likelihood:5 fq:2 contrast:1 inference:1 typically:1 accept:3 i1:4 classification:1 colt:2 denoted:3 exponent:5 art:1 cube:1 field:2 equal:3 once:1 sampling:1 sohler:1 identical:1 constantinos:1 nearly:1 jon:1 others:1 acharya:6 primarily:2 oblivious:2 few:1 simultaneously:2 tightly:1 neerlandica:1 ourselves:1 consisting:1 n1:7 suit:1 ab:1 organization:1 interest:2 mixture:2 light:1 oliver:1 closer:1 partial:1 necessary:1 byproduct:3 encourage:1 respective:1 worker:1 intense:1 divide:1 taylor:1 logarithm:2 theoretical:3 instance:1 modeling:1 rao:1 zn:1 cost:1 subset:2 uniform:2 motivating:1 shave:2 eec:1 combined:1 density:7 fundamental:3 cited:1 international:2 probabilistic:1 again:1 squared:1 satisfied:1 huang:1 american:1 leading:1 return:2 exclude:2 nonasymptotic:1 piece:2 picked:1 characterizes:1 competitive:2 contribution:4 il:1 ni:6 accuracy:1 kiefer:1 variance:2 efficiently:1 correspond:1 identify:1 pichapati:1 history:1 definition:4 against:2 servedio:1 nonetheless:1 pp:17 naturally:2 proof:2 sampled:1 proved:4 knowledge:5 improves:3 sophisticated:1 focusing:1 dvoretzky:1 higher:1 xie:1 follow:2 brunk:1 modal:1 though:1 generality:1 furthermore:1 hand:3 su:1 lack:1 scientific:1 building:3 requiring:1 true:1 barlow:1 unbiased:1 i2:1 white:1 essence:1 anything:1 
mpi:1 generalized:2 pdf:1 theoretic:1 demonstrate:3 complete:1 logd:1 wise:3 novel:1 recently:2 overview:3 fortnow:1 volume:1 jl:1 belong:1 discussed:2 extend:1 intensely:1 marginals:1 association:1 refer:2 gibbs:1 automatic:1 consistency:1 sharpen:1 bartholomew:1 access:6 base:2 add:1 recent:1 showed:1 chan:2 optimizing:1 belongs:2 moderate:1 scenario:1 certain:1 inequality:8 success:1 additional:2 relaxed:1 accomplishes:1 wolfowitz:1 ii:3 unimodal:8 afore:2 reduces:1 long:1 hazard:15 qi:6 basic:1 i2a:3 poisson:7 metric:1 arxiv:1 dec:2 c1:2 want:2 interval:1 else:1 standpoint:1 appropriately:3 extra:1 massart:1 tend:1 seem:1 structural:2 near:1 iii:2 enough:3 concerned:1 independence:10 fit:1 dtv:16 restrict:1 suboptimal:2 reduce:1 tradeoff:1 chebyshev:3 whether:8 wellner:4 proceed:1 york:1 romano:1 remark:3 generally:1 useful:4 involve:1 repeating:1 band:1 concentrated:2 simplest:1 http:1 qi2:1 jankowski:2 blue:1 discrete:10 write:1 vol:11 group:1 key:2 levi:1 sheer:1 nevertheless:3 pj:1 qk1:1 v1:1 relaxation:3 monotone:41 sum:1 lehmann:1 soda:4 extends:1 family:4 throughout:1 reader:2 almost:1 electronic:2 separation:2 draw:4 decision:1 appendix:2 bound:21 def:4 guaranteed:1 distinguish:6 occur:1 infinity:2 precisely:1 your:1 balabdaoui:2 argument:3 optimality:1 extremely:1 kumar:2 eject:2 structured:2 rubinfeld:7 alternate:1 describes:2 slightly:2 pan:1 modification:4 restricted:2 pr:6 sided:1 heart:1 computationally:7 previously:2 needed:1 studying:1 available:1 apply:2 observe:1 birg:3 generic:1 enforce:1 appropriate:1 occurrence:1 appearing:1 distinguished:1 bhattacharya:1 primer:1 jd:1 binomial:2 clustering:1 denotes:1 running:1 remaining:1 ensure:1 mdn:1 exploit:1 giving:2 restrictive:1 pretend:1 prof:1 jayadev:2 question:3 already:2 prover:1 concentration:1 dependence:3 diakonikolas:3 said:1 september:1 distance:36 unable:1 majority:1 trivial:1 enforcing:1 bremner:1 assuming:1 relationship:1 providing:3 ratio:1 kamath:2 statement:2 stoc:3 design:2 proper:4 unknown:10 upper:1 finite:1 enabling:1 excluding:2 precise:2 supa:1 sharp:1 community:1 introduced:1 required:6 extensive:1 z1:1 unequal:1 tremendous:2 established:1 nip:1 qa:3 able:3 scott:1 regime:10 fp:2 challenge:1 program:2 max:3 business:1 eccc:1 minimax:2 improve:2 inversely:1 imply:1 categorical:2 n6:1 unconditionally:1 prior:1 literature:5 understanding:1 batu:2 review:2 asymptotic:5 loss:1 sublinear:4 interesting:2 versus:4 var:7 pi:8 qiqi:1 characterizing:2 taking:1 barrier:1 distributed:1 edinburgh:1 overcome:1 dimension:1 van:1 cumulative:1 concavity:9 collection:2 far:11 employing:1 contemporaneous:1 transaction:2 approximate:2 monotonicity:13 tolerant:1 quintessential:1 daskalakis:3 un:7 continuous:1 learn:6 terminate:1 robust:2 ignoring:1 obtaining:1 symmetry:1 improving:1 necessarily:1 poly:4 complex:2 domain:7 da:1 logconcavity:1 flattening:3 aistats:1 main:3 statistica:1 whole:1 big:1 bounding:1 n2:2 p2i:1 wiley:1 sub:3 wish:2 explicit:3 exponential:1 lie:1 theorem:20 z0:1 specific:3 showing:1 dk:2 theertha:1 dominates:1 closeness:7 exists:7 albeit:2 andoni:1 valiant:8 easier:1 entropy:1 logarithmic:5 paninski:5 simply:1 ordered:1 poissonization:1 springer:2 corresponds:2 satisfies:2 cdf:3 identity:8 sized:1 careful:2 fisher:1 uniformly:1 lemma:10 total:15 formally:1 highdimensional:1 support:14 |
5,346 | 584 | Software for ANN training on a Ring Array Processor
Phil Kohn, Jeff Bilmes, Nelson Morgan, James Beck
International Computer Science Institute,
1947 Center St., Berkeley CA 94704, USA
Abstract
Experimental research on Artificial Neural Network (ANN) algorithms requires
either writing variations on the same program or making one monolithic program
with many parameters and options. By using an object-oriented library, the size
of these experimental programs is reduced while making them easier to read,
write and modify. An efficient and flexible realization of this idea is Connectionist Layered Object-oriented Network Simulator (CLONES). CLONES runs on
UNIX 1 workstations and on the 100-1000 MFLOP Ring Array Processor (RAP)
that we built with ANN algorithms in mind. In this report we describe CLONES
and show how it is implemented on the RAP.
1 Overview
As we continue to experiment with Artificial Neural Networks (ANNs) to generate phoneme
probabilities for speech recognition (Bourlard & Morgan, 1991), two things have become
increasingly clear:
1. Because of the diversity and continuing evolution of ANN algorithms, the programming environment must be both powerful and flexible.
2. These algorithms are very computationally intensive when applied to large databases
of training patterns.
Ideally we would like to implement and test ideas at about the same rate that we come up
with them. We have approached this goal both by developing application specific parallel
lUNIX is a trademark of AT&T
781
782
Kahn, Bilrnes, Morgan, and Beck
System
Languages Supported
Perfonnance
Assem
~
~
SparcStation 2
2MFLOP
SPERTBowd
~~~L7
c++
Sather pSather
-/ -/ -/ -/ -/
S
o
Desktop RAP +
u
r
-I -I -I -I
Nctwmcd RAP
(1-10 Boards)
1 GFLOP
SparcSlation +
SPERT Board
lOOP
CNS-l System
200 GOP
- / Completed
c
c
lOOMFLOP
Sun 4/330 Host
RAP9jstam
~~mID
C
C
o
m
p
?t
~ In Design
if if if
~
~
~
~
~
<,...-__
i
b
I
c
~
L_in_kC_r_c_om-::pa_lI_b_IC_ _ _
o
>
Figure 1: Hardware and software configurations
hardware, the Ring Array Processor (RAP) (Morgan et al., 1990; Beck, 1990; Morgan et al.,
1992), and by building an object-oriented software environment, the Connectionist Layered
Object-oriented Network Simulator (CLONES) (Kohn, 1991). By using an object-oriented
library, the size of experimental ANN programs can be greatly reduced while making them
easier to read, write and modify. CLONES is written in C++ and utilizes libraries previously
written in C and assem bIer.
Our ANN research currently encompasses two hardware platforms and several languages.
shown in Figure 1. Two new hardware platforms, the SPERT board (Asanovic et al., 1991)
and the CNS-l system are in design (unfilled check marks), and will support source code
compatibility with the existing machines. The SPERT design is a custom VLSI parallel
processor installed on an SBUS card plugged into a SPARC workstation. Using variable
precision fixed point arithmetic, a single SPERT board will have performance comparable
to a 10 board RAP system with 40 processors. The CNS-l system is based on multiple
VLSI parallel processors interconnected by high speed communication rings.
Because the investment in software is generally large, we insiston source level compatibility
across hardware platforms at the level of the system libraries. These libraries include matrix
and vector classes that free the user from concern about the hardware configuration. It is
also considered important to allow routines in different languages to be linked together.
This includes support for Sather, an object-oriented language that has been developed at
ICSI for workstations. The parallel version of Sather, called pSather, will be supported on
Sottware for ANN training on a Ring Array Processor
theCNS-l.
CLONES is seen as the ANN researcher's interface to this multiplatform, multi language
environment. Although CLONES is an application written specifically for ANN algorithms,
it's object-orientation gives it the ability to easily include previously developed libraries.
CLONES currently runs on UNIX workstations and the RAP; this paper focuses on the
RAP implementation.
2 RAP hardware
The RAP consists of cards that are added to a UNIX host machine (currently a VME based
Sun SPARe). A RAP card has four 32 MFlop Digital Signal Processor (DSP) chips (TI
TMS32OC30), each with its own local 256KB or 1MB of fast static RAM and 16MB of
DRAM.
Instead of sharing memory, the processors communicate on a high speed ring that shifts
data in a single machine cycle. For each board, the peak transfer rate between 4 nodes
is 64 million words/sec (256 Mbytes/second). This is a good balance to the 64 million
multiply-accumulates per second (128 MFLOPS) peak performance of the computational
elements.
Up to 16 of these boards can be interconnected and used as one Single Program operating
on Multiple Data stream (SPMD) machine. In this style of parallel computation, all the
processors run the same program and are doing the same operations to different pieces of the
same matrix or vector 2. The RAP can run other styles of parallel computation, including
pipelines where each processor is doing a different operation on different data streams.
However, for fully connected back-propagation networks, SPMD parallelism works well
and is also much easier to program since there is only one flow of control to worry about.
A reasonable design for networks in which all processors need all unit outputs is a single
broadcast bus. However, this design is not appropriate for other related algorithms such
as the backward phase of the back-propagation learning algorithm. By using a ring, backpropagation can be efficiently parallelized without the need to have the complete weight
matrix on all processors. The number of ring operations required for each complete matrix
update cycle is of the same order as the number of units, not the square of the number of
units. It should also be noted that we are using a stochastic or on-line learning algorithm.
The training examples are not di viding among the processors then the weights batch updated
after a complete pass. All weights are updated for each training example. This procedure
greatly decreases the training time for large redundant training sets since more steps are
being taken in the weight-space per training example.
We have empirically derived formulae that predict the performance improvement on backpropagation training as a function of the number of boards. Theoretical peak performance is
128 MFlops/board, with sustained performance of 30-90% for back-propagation problems
of interest to us. Systems with up to 40 nodes have been tested, for which throughputs
1'he hardware does not automatically keep the processors in lock step; for example, they may
become out of sync because of branches conditioned on the processor's node number or on the
data. However, when the processors must communicate with each other through the ring, hardware
synchronization automatically occurs. A node that attempts to read before data is ready. or to write
when there is already data waiting. will stop executing until the data can be moved.
783
784
Kahn, Bilrnes, Morgan, and Beck
of up to 574 Million Connections Per Second (MCPS) have been measured, as well as
learning rates of up to 106 Million Connection Updates Per Second (MCUPS) for training.
Practical considerations such as workstation address space and clock skew restrict current
implementations to 64 nodes, but in principle the architecture scales to about 16,000 nodes
for back-propagation.
We now have considerable experience with the RAP as a day-to-day computational tool for
our research. With the aid of the RAP hardware and software, we have done network training
studies that would have over a century on a UNIX workstation such as the SPARCstation-2.
We have also used the RAP to simulate variable precision arithmetic to guide us in the
design of higher performance hardware such as SPERT.
The RAP hardware remains very flexible because of the extensive use of programmable
logic arrays. These parts are automatically downloaded when the host machine boots up.
By changing the download files, the functionality of the communications ring and the host
interface can be modified or extended without any physical changes to the board.
3
RAP software
The RAP DSP software is built in three levels (Kohn & Bilmes, 1990; Bilmes & Kohn,
(990). At the lowest level are hand coded assembler routines for matrix, vector and ring
operations. Many standard matrix and vector operations are currently supported as well
as some operations specialized for efficient back-propagation. These matrix and vector
routines do not use the communications ring or split up data among processing nodes.
There is also a UNIX compatible library including most standard C functions for file, math
and string operations. All UNIX kernel calls (such as file input or output) cause requests
to be made to the host SPARC over the VMEbus. A RAP dremon process running under
UNIX has all of the RAP memory mapped into its virtual address space. It responds to the
RAP system call interrupts (from the RAP device driver) and can access RAP memory with
a direct memory copy function or assignment statement.
An intermediate level consists of matrix and vector object classes coded in C++. A
programmer writing at this level or above can program the RAP as if it were a conventional
serial machine. These object classes divide the data and processing among the available
processing nodes, using the communication ring to redistribute data as needed. For example,
to multiply a matrix by a vector, each processor would have its own subset of the matrix
rows that must be multiplied. This is equivalent to partitioning the output vector elements
among the processors. If the complete output vector is needed by all processors, a ring
broadcast routine is called to redistribute the part of the output vector from each processor
to all the other processors.
The top level of RAP software is the CLONES environment. CLONES is an object-oriented
library for constructing, training and utilizing connectionist networks. It is designed to
run efficiently on data parallel computers as well as uniprocessor workstations. While
efficiency and portability to parallel computers are the primary goals, there are several
secondary design goals:
1. minimize the learning curve for using CLONES;
2. minimize the additional code required for new experiments;
3. maximize the variety of artificial neural network algorithms supported;
Software for ANN training on a Ring Array Processor
4. allow heterogeneous algorithms and training procedures to be interconnected and
trained together;
5. allow the trained network to be easily embedded into other programs.
The size of experimental ANN programs is greatly reduced by using an object-oriented
library; at the same time these programs are easier to read, write and evolve.
Researchers often generate either a proliferation of versions of the same basic program,
or one giant program with a large number of options and many potential interactions and
side-effects. Some simulator programs include (or worse, evolve) their own language
for describing networks. We feel that a modem object-oriented language (such as C++)
has all the functionality needed to build and train ANNs. By using an object-oriented
design, we attempt to make the most frequently changed parts of the program very small
and well localized. The parts that rarely change are in a centralized library. One of the
many advantages of an object-oriented library for experimental work is that any part can
be specialized by making a new class of object that inherits the desired operations from a
library class.
4
CLONES overview
To make CLONES easier to learn, we restrict ourselves to a subset of the many features
of C++. Excluded features include multiple inheritance, operator overloading (however,
function overloading is used) and references. Since the multiple inheritance feature of C++
is not used, CLONES classes can be viewed as a collection of simple inheritance trees.
This means that all classes of objects in CLONES either have no parent class (top of a class
tree) or inherit the functions and variables of a single parent class.
CLONES consists of a library of C++ classes that represent networks (Net), their components (Net-part) and training procedures. There are also utility classes used during
training such as: databases of training data (Database), tables of parameters and arguments
(Param), and perfonnance statistics (Stats). Database and Param do not inherit from any
other class. Their class trees are independent of the rest of CLONES and each other. The
Stats class inherits from Net-behavior.
The top level of the CLONES class tree is a class called NeLbehavior. It defines function
interfaces for many general functions including file save or restore and debugging. It also
contains behavior functions that are called during different phases of running or training a
network. For example, there are functions that are called before or after a complete training
run (pre_training, posLtraining), before or after a pass over the database (pre_epoch,
post-epoch) and before or after a forward or backward run of the network (pre_forw-pass,
post1orw_pass, pre_back_pass, posLback_pass). The Net, NeLpart and Stats classes
inherit from this class.
All network components used to construct ANNs are derived from the two classes Layer
and Connect. Both of these inherit from class NeLpart. A CLONES network can be
viewed as a graph where the nodes are Layer objects and the arcs are Connect objects.
Each Connect connects a single input Layer with a single output Layer. A Layer holds
the data for a set of units (such as an activation vector), while a Connect transforms the
data as it passes between Layers. Data flows along Connects between the pair of Layers
by calling forw_propagate (input to output) or back_propagate (output to input) behavior
785
786
Kahn, Bilrnes, Morgan, and Beck
functions in the Connect object.
CLONES does not have objects that represent single units (or artificial neurons). Instead t
Layer objects are used to represent a set of units. Because arrays of units are passed
down to the lowest level routines t most of the computation time is focused into a few small
assembly coded loops that easily fit into the processor instruction cache. Time spent in all
of the levels of control code that call these loops becomes less significant as the size of the
Layer is increased.
The Layer class does not place any restrictions on the representation of its internal information. For example t the representation for activations may be a floating point number
for each unit (AnalogJayer)t or it may be a set of unit indices t indicating which units
are active (BinaryJayer). AnalogJayer and BinaryJayer are built into the CLONES
library as subclasses of the class Layer. The AnalogJayer class specifies the representation of activations t but it still leaves open the procedures that use and update the
activation array. BP...analogJayer is a subclass of AnalogJayer that specify these procedures for the back-propagation algorithm. Subclasses of AnalogJayer may also add
new data structures to hold extra internal state such as the error vector in the case of
BP...analogJayer. The BP-AnalogJaycr class has subclasses for various transfer functions such as BP...sigmoidJayer and BPJinear Jayer.
Layer classes also have behavior functions that are called in the course of running the
network. For example t one of these functions (pre_forw-propagate) initializes the Layer
for a forward pass t perhaps by clearing its activation vector. After all of the connections
coming into it are runt another Layer behavior function (postJorw_propagate) is called
that computes the activation vector from the partial results left by these connections. For
example t this function may apply a transfer function such as the sigmoid to the accumulated
sum of all the input activations.
These behavior functions can be changed by making a subclass. BP...analogJayer leaves
open the activation transfer function (or squashing function) and its derivative. Subclasses
define new transfer functions to be applied to the activations. A new class of backpropagation layer with a customized transfer function (instead of the default sigmoid) can
be created with the following C++ code:
My_new_BP_layer_class(int number_of_units)
: BP_analog_layer(number_of_units)i
II constructor
void transfer (Fvec *activation) {
1* apply forward transfer function to my activation vector *1
void d_transfer(Fvec *activation, Fvec *err)
1* apply backward error transfer to err (given activation)
*1
}i
A Connect class includes two behavior functions: one that transforms activations from the
incoming Layer into partial results in the outgoing Layer (forw-propagate) and one that
takes outgoing errors and generates partial results in the incoming Layer (back-propagate).
Software for ANN training on a Ring Array Processor
The structure of a partial result is part of the Layer class. The subclasses of Connect include:
Bus_connect (one to one), Full_connect (all to all) and Sparse_connect (some to some).
Each subclass of Connect may contain a set of internal parameters such as the weight
matrix in a BPJulLconnect. Subclasses of Connect also specify which pairs of Layer
subclasses can be connected. When a pair of Layer objects are connected, type checking
by the C++ compiler insures that the input and output Layer subclasses are supported by
the Connect object.
In order to do its job efficiently, a Connect must know something about the internal
representation of the layers that are connected. By using C++ overloading, the Connect
function selected depends not only on the class of Connect, but also on the classes of
the two layers that are connected. Not all Connect classes are defined for all pairs of
Layer classes. However, Connects that convert between Layer classes can be utilized to
compensate for missing functions.
CLONES allows the user to view layers and connections much like tinker-toy wheels and
rods. ANNs are built up by creating Layer objects and passing them to the create functions
of the desired Connect classes. Changing the interconnection pattern does not require any
changes to the Layer classes or objects and vice-versa.
At the highest level, a Net object delineates a subset of a network and controls its training.
Operations can be performed on these subsets by calling functions on their Net objects. The
Layers of a Net are specified by calling one of new_inputJayer, new_hidden.Jayer, or
new_outputJayer on the Net object for each Layer. Given the Layers, the Connects that
belong to the Net are deduced by the Net-order objects (see below). Layer and Connect
objects can belong to any number of Nets.
The Net labels all of its Layers as one of input, output or hidden. These labels are
used by the NeLorder objects to determine the order in which the behavior functions of
the NeLparts are called. For example, a Net object contains NeLorder objects called
forward_pass_order and backward_pass_order that control the execution sequence for a
forward or backward pass. The Net object also has functions that call a function by the
same name on all of its component parts (for example set.Jearning-.rate).
When a Net-order object is built it scans the connectivity of the Net. The rules that relate
topology to order of execution are centralized and encapsulated in subclasses of NeLorder.
Changes to the structure of the Net are localized to just the code that creates the Layers
and Connects; one does not need to update separate code that contains explict knowledge
about the order of evaluation for running a forward or backward pass.
The training procedure is divided into a series of steps, each of which is a call to a function
in the Net object. At the top level, calling run_training on a Net performs a complete
training run. In addition to calling pre_training, posLtraining behavior functions, it calls
run_epoch in a loop until the the nextJearning-.rate function returns zero. The run_epoch
function calls run_forward and run_backward.
At a lower level there are functions that interface the database(s) of the Net object to the
Layers of the Net. For example, seLinput sets the activations of the input Layers for a
given pattern number of the database. Another of these sets the error vector of the output
layer (seLerror). Some of these functions, such as is_correct evaluate the performance of
the Net on the current pattern.
787
788
Kahn, Bilmes, Morgan, and Beck
In addition to database related functions, the Net object also contains useful global variables
for all of its components. A pointer to the Net object is always passed to all behavior
functions of its Layers and Connects when they are called. One of these variables is a
Param object that contains a table of parameter names, each with a list of values. These
parameters usually come from the command line and/or parameter files. Other variables
include: the current pattern, the correct target output, the epoch number, etc.
5 Conclusions
CLONES is a useful tool for training ANNs especially when working with large training
databases and networks. It runs efficiently on a variety of parallel hardware as well as on
UNIX workstations.
Acknowledgements
Special thanks to Steve Renals for daring to be the first CLONES user and making significant
contributions to the design and implementation. Others who provided valuable input to
this work were: Krste Asanovi~, Steve Omohundro, Jerry Feldman, Heinz Schmidt and
Chuck Wooters. Support from the International Computer Science Institute is gratefully
acknowledged.
References
Asanovi~,
K., Beck, J., Kingsbury, B., Kohn, P., Morgan, N., & Wawrzynek, J. (1991).
SPERT: A VLIWISIMD Microprocessor for Artificial Neural Network Computations.
Tech. rep. TR-91-072, International Computer Science Institute.
Beck, J. (1990). The Ring Array Processor (RAP): Hardware. Tech. rep. TR-90-048,
International Computer Science Institute.
Bilmes, J. & Kohn, P. (1990). The Ring Array Processor (RAP): Software Architecture.
Tech. rep. TR-90-050, International Computer Science Institute.
Bourlard, H. & Morgan, N. (1991). Connectionist approaches to the use of Markov models
for continuous speech recognition. In Touretzky, D. S. (Ed.), Advances in Neural
Information Processing Systems, Vol. 3. Morgan Kaufmann, San Mateo CA.
Kohn, P. & Bilmes, J. (1990). The Ring Array Processor (RAP): Software Users Manual
Version 1.0. Tech. rep. TR-90-049, International Computer Science Institute.
Kohn, P. (1991). CLONES: Connectionist Layered Object-oriented NEtwork Simulator.
Tech. rep. TR-91-073, International Computer Science Institute.
Morgan, N., Beck, J., Kohn, P., Bilmes, J., Allman, E., & Beer, J. (1990). The RAP: a ring
array processor for layered network calculations. In Proceedings IEEE International
Conference on Application Specific Array Processors, pp. 296-308 Princeton NI.
Morgan, N., Beck, J., Kohn, P., & Bilmes, J. (1992). Neurocomputing on the RAP. In
Przytula, K. W. & Prasanna, V. K. (Eds.), Digital Parallellmplemencations of Neural
Networks. Prentice-Hall, Englewood Cliffs NJ.
| 584 |@word version:3 open:2 instruction:1 propagate:3 tr:5 configuration:2 contains:5 series:1 daring:1 existing:1 err:2 current:3 activation:15 must:4 written:3 designed:1 fvec:3 update:4 leaf:2 device:1 selected:1 desktop:1 pointer:1 math:1 node:9 tinker:1 kingsbury:1 along:1 direct:1 become:2 driver:1 consists:3 sustained:1 sync:1 behavior:10 proliferation:1 frequently:1 simulator:4 multi:1 heinz:1 automatically:3 param:3 cache:1 becomes:1 provided:1 lowest:2 string:1 developed:2 sparcstation:2 giant:1 nj:1 berkeley:1 ti:1 subclass:12 control:4 unit:10 partitioning:1 before:4 monolithic:1 modify:2 local:1 installed:1 accumulates:1 cliff:1 mateo:1 mcups:1 practical:1 investment:1 implement:1 backpropagation:3 procedure:6 spmd:2 word:1 wheel:1 layered:4 operator:1 prentice:1 przytula:1 writing:2 restriction:1 conventional:1 equivalent:1 center:1 phil:1 missing:1 focused:1 stats:3 rule:1 array:14 utilizing:1 century:1 variation:1 constructor:1 updated:2 feel:1 target:1 user:4 programming:1 element:2 recognition:2 utilized:1 database:9 connected:5 cycle:2 sun:2 decrease:1 highest:1 icsi:1 valuable:1 environment:4 ideally:1 trained:2 mbytes:1 creates:1 efficiency:1 easily:3 chip:1 various:1 train:1 fast:1 describe:1 artificial:5 approached:1 interconnection:1 ability:1 statistic:1 advantage:1 sequence:1 net:24 interconnected:3 mb:2 interaction:1 coming:1 renals:1 loop:4 realization:1 moved:1 parent:2 ring:20 executing:1 object:41 spent:1 measured:1 job:1 implemented:1 come:2 correct:1 functionality:2 stochastic:1 kb:1 programmer:1 virtual:1 spare:1 require:1 hold:2 considered:1 hall:1 predict:1 encapsulated:1 label:2 currently:4 vice:1 create:1 tool:2 always:1 modified:1 command:1 derived:2 focus:1 interrupt:1 dsp:2 improvement:1 inherits:2 check:1 greatly:3 tech:5 accumulated:1 hidden:1 vlsi:2 kahn:4 compatibility:2 l7:1 flexible:3 orientation:1 among:4 platform:3 special:1 construct:1 throughput:1 connectionist:5 report:1 others:1 asanovi:2 few:1 wooters:1 oriented:12 neurocomputing:1 beck:10 floating:1 phase:2 ourselves:1 cns:3 connects:6 attempt:2 interest:1 centralized:2 mflop:5 englewood:1 multiply:2 custom:1 evaluation:1 redistribute:2 partial:4 experience:1 sather:3 perfonnance:2 tree:4 continuing:1 plugged:1 divide:1 desired:2 theoretical:1 rap:30 increased:1 assignment:1 subset:4 krste:1 connect:16 my:1 st:1 deduced:1 international:8 clone:25 peak:3 thanks:1 together:2 connectivity:1 broadcast:2 worse:1 creating:1 derivative:1 style:2 return:1 toy:1 potential:1 diversity:1 sec:1 includes:2 int:1 depends:1 stream:2 piece:1 performed:1 view:1 linked:1 doing:2 compiler:1 option:2 parallel:9 contribution:1 minimize:2 square:1 ni:1 phoneme:1 who:1 efficiently:4 kaufmann:1 vme:1 bilmes:8 researcher:2 processor:30 anns:5 uniprocessor:1 touretzky:1 sharing:1 ed:2 manual:1 pp:1 james:1 di:1 workstation:8 static:1 stop:1 knowledge:1 routine:5 back:7 worry:1 steve:2 higher:1 day:2 specify:2 done:1 just:1 until:2 clock:1 hand:1 working:1 propagation:6 defines:1 perhaps:1 usa:1 effect:1 name:2 contain:1 building:1 evolution:1 jerry:1 read:4 excluded:1 unfilled:1 during:2 noted:1 complete:6 omohundro:1 performs:1 interface:4 consideration:1 sigmoid:2 specialized:2 empirically:1 overview:2 physical:1 million:4 belong:2 he:1 significant:2 versa:1 feldman:1 language:7 gratefully:1 access:1 operating:1 etc:1 add:1 something:1 own:3 sparc:2 gop:1 rep:5 continue:1 chuck:1 morgan:13 seen:1 additional:1 parallelized:1 determine:1 maximize:1 redundant:1 signal:1 arithmetic:2 branch:1 multiple:4 ii:1 
calculation:1 compensate:1 divided:1 host:5 serial:1 post:1 coded:3 basic:1 heterogeneous:1 kernel:1 represent:3 addition:2 void:2 source:2 spert:6 rest:1 extra:1 file:5 pass:1 thing:1 flow:2 call:7 allman:1 intermediate:1 split:1 variety:2 fit:1 architecture:2 restrict:2 topology:1 idea:2 intensive:1 shift:1 rod:1 kohn:10 utility:1 passed:2 assembler:1 speech:2 passing:1 cause:1 programmable:1 generally:1 useful:2 clear:1 transforms:2 mid:1 hardware:14 reduced:3 generate:2 specifies:1 per:4 write:4 vol:1 waiting:1 four:1 acknowledged:1 changing:2 explict:1 backward:5 ram:1 graph:1 sum:1 convert:1 run:9 unix:8 powerful:1 communicate:2 place:1 reasonable:1 utilizes:1 comparable:1 layer:39 bp:5 software:12 calling:5 generates:1 speed:2 simulate:1 argument:1 developing:1 debugging:1 request:1 across:1 increasingly:1 wawrzynek:1 making:6 mcps:1 pipeline:1 taken:1 computationally:1 previously:2 bus:1 skew:1 remains:1 describing:1 needed:3 mind:1 know:1 sbus:1 available:1 operation:9 multiplied:1 apply:3 appropriate:1 save:1 batch:1 schmidt:1 top:4 running:4 include:6 assembly:1 completed:1 lock:1 build:1 especially:1 initializes:1 added:1 already:1 occurs:1 primary:1 responds:1 separate:1 card:3 mapped:1 nelson:1 code:6 index:1 balance:1 statement:1 relate:1 dram:1 design:9 implementation:3 boot:1 neuron:1 modem:1 markov:1 arc:1 extended:1 communication:4 download:1 pair:4 required:2 specified:1 extensive:1 connection:5 address:2 parallelism:1 pattern:5 below:1 usually:1 encompasses:1 program:15 built:5 including:3 memory:4 restore:1 bourlard:2 customized:1 library:14 created:1 ready:1 epoch:2 inheritance:3 checking:1 acknowledgement:1 evolve:2 delineates:1 synchronization:1 fully:1 embedded:1 localized:2 digital:2 downloaded:1 beer:1 principle:1 prasanna:1 squashing:1 row:1 compatible:1 changed:2 course:1 supported:5 clearing:1 free:1 copy:1 guide:1 allow:3 side:1 institute:7 curve:1 asanovic:1 default:1 computes:1 forward:5 made:1 collection:1 san:1 keep:1 logic:1 global:1 active:1 incoming:2 portability:1 continuous:1 table:2 learn:1 transfer:9 ca:2 constructing:1 microprocessor:1 inherit:4 board:10 aid:1 precision:2 formula:1 down:1 specific:2 trademark:1 list:1 concern:1 overloading:3 execution:2 conditioned:1 easier:5 insures:1 goal:3 viewed:2 ann:12 jeff:1 considerable:1 change:4 specifically:1 called:10 pas:6 secondary:1 experimental:5 rarely:1 indicating:1 internal:4 mark:1 support:3 scan:1 evaluate:1 outgoing:2 princeton:1 tested:1 |
5,347 | 5,840 | Market Scoring Rules Act As Opinion Pools For
Risk-Averse Agents
Mithun Chakraborty, Sanmay Das
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
{mithunchakraborty,sanmay}@wustl.edu
Abstract
A market scoring rule (MSR) ? a popular tool for designing algorithmic prediction
markets ? is an incentive-compatible mechanism for the aggregation of probabilistic beliefs from myopic risk-neutral agents. In this paper, we add to a growing
body of research aimed at understanding the precise manner in which the price
process induced by a MSR incorporates private information from agents who deviate from the assumption of risk-neutrality. We first establish that, for a myopic
trading agent with a risk-averse utility function, a MSR satisfying mild regularity conditions elicits the agent?s risk-neutral probability conditional on the latest
market state rather than her true subjective probability. Hence, we show that a
MSR under these conditions effectively behaves like a more traditional method of
belief aggregation, namely an opinion pool, for agents? true probabilities. In particular, the logarithmic market scoring rule acts as a logarithmic pool for constant
absolute risk aversion utility agents, and as a linear pool for an atypical budgetconstrained agent utility with decreasing absolute risk aversion. We also point out
the interpretation of a market maker under these conditions as a Bayesian learner
even when agent beliefs are static.
1
Introduction
How should we combine opinions (or beliefs) about hidden truths (or uncertain future events) furnished by several individuals with potentially diverse information sets into a single group judgment
for decision or policy-making purposes? This has been a fundamental question across disciplines
for a long time (Surowiecki [2005]). One simple, principled approach towards achieving this end is
the opinion pool (OP) which directly solicits inputs from informants in the form of probabilities (or
distributions) and then maps this vector of inputs to a single probability (or distribution) based on
certain axioms (Genest and Zidek [1986]). However, this technique abstracts away from the issue
of providing proper incentives to a selfish-rational agent to reveal her private information honestly.
Financial markets approach the problem differently, offering financial incentives for traders to supply their information about valuations and aggregating this information into informative prices. A
prediction market is a relatively novel tool that builds upon this idea, offering trade in a financial security whose final monetary worth is tied to the future revelation of some currently unknown ground
truth. Hanson [2003] introduced a family of algorithms for designing automated prediction markets
called the market scoring rule (MSR) of which the Logarithmic Market Scoring Rule (LMSR) is
arguably the most widely used and well-studied. A MSR effectively acts as a cost function-based
market maker always willing to take the other side of a trade with any willing buyer or seller, and
re-adjusting its quoted price after every transaction.
One of the most attractive properties of a MSR is its incentive-compatibility for a myopic risk-neutral
trader. But this also means that, every time a MSR trades with such an agent, the updated market
1
price is reset to the subjective probability of that agent; the market mechanism itself does not play an
active role in unifying pieces of information gleaned from the entire trading history into its current
price. Ostrovsky [2012] and Iyer et al. [2014] have shown that, with differentially informed Bayesian
risk-neutral and risk-averse agents respectively, trading repeatedly, ?information gets aggregated? in
a MSR-based market in a perfect Bayesian equilibrium. However, if agent beliefs themselves do not
converge, can the price process emerging out of their interaction with a MSR still be viewed as an
aggeragator of information in some sense? Intuitively, even if an agent does not revise her belief
based on her inference about her peers? information from market history, her conservative attitude
towards risk should compel her to trade in such a way as to move the market price not all the way
to her private belief but to some function of her belief and the most recent price; thus, the evolving
price should always retain some memory of all agents? information sequentially injected into the
market. Therefore, the assumption of belief-updating agents may not be indispensable for providing
theoretical guarantees on how the market incorporates agent beliefs. A few attempts in this vein
can be found in the literature, typically embedded in a broader context (Sethi and Vaughan [2015],
Abernethy et al. [2014]), but there have been few general results; see Section 1.1 for a review.
In this paper, we develop a new unified understanding of the information aggregation characteristics
of a market with risk-averse agents mediated by a MSR, with no regard to how the agents? beliefs are
formed. In fact, we demonstrate an equivalence between such MSR-mediated markets and opinion
pools. We do so by first proving, in Section 3, that for any MSR interacting with myopic risk-averse
traders, the revised instantaneous price after every trade equals the latest trader?s risk-neutral probability conditional on the preceding market state. We then show that this price update rule satisfies an
axiomatic characterization of opinion pooling functions from the literature, establishing the equivalence. In Sections 3.1, and 3.2, we focus on a specific MSR, the commonly used logarithmic variety
(LMSR). We demonstrate that a LMSR-mediated market with agents having constant absolute risk
aversion (CARA) utilities is equivalent to a logarithmic opinion pool, and that a LMSR-mediated
market with budget-constrained agents having a specific concave utility with decreasing absolute
risk aversion is equivalent to a linear opinion pool. We also demonstrate how the agents? utility
function parameters acquire additional significance with respect to this pooling operation, and that
in these two scenarios the market maker can be interpreted as a Bayesian learning algorithm even
if agents never update beliefs. Our results are reminiscent of similar findings about competitive
equilibrium prices in markets with rational, risk-averse agents (Pennock [1999], Beygelzimer et al.
[2012], Millin et al. [2012] etc.), but those models require that agents learn from prices and also
abstract away from any consideration of microstructure and the dynamics of actual price formation
(how the agents would reach the equilibrium is left open). By contrast, our results do not presuppose
any kind of generative model for agent signals, and also do not involve an equilibrium analysis ?
hence they can be used as tools to analyze the convergence characteristics of the market price in
non-equilibrium situations with potentially fixed-belief or irrational agents.
1.1
Related Work
Given the plethora of experimental and empirical evidence that prediction markets are at least as effective as more traditional means of belief aggregation (Wolfers and Zitzewitz [2004], Cowgill and
Zitzewitz [2013]), there has been considerable work on understanding how such a market formulates its own consensus belief from individual signals. An important line of research (Beygelzimer
et al. [2012], Millin et al. [2012], Hu and Storkey [2014], Storkey et al. [2015]) has focused on a
competitive equilibrium analysis of prediction markets under various trader models, and found an
equivalence between the market?s equilibrium price and the outcome of an opinion pool with the
same agents. Seminal work in this field was done by Pennock [1999] who showed that linear and
logarithmic opinion pools arise as special cases of the equilibrium of his intuitive model of securities
markets when all agents have generalized logarithmic and negative exponential utilities respectively.
Unlike these analyses that abstract away from the microstructure, Ostrovsky [2012] and Iyer et al.
[2014] show that certain market structures (including market scoring rules) satisfying mild conditions perform ?information aggregation? (i.e. the market?s belief measure converges in probability
to the ground truth) for repeatedly trading and learning agents with risk-neutral and risk-averse utilities respectively. Our contribution, while drawing inspiration from these sources, differs in that
we delve into the characteristics of the evolution of the price rather than the properties of prices
in equilibrium, and examine the manner in which the microstructure induces aggregation even if
the agents are not Bayesian. While there has also been significant work on market properties in
2
continuous double auctions or markets mediated by sophisticated market-making algorithms (e.g.
Cliff and Bruten [1997], Farmer et al. [2005], Brahma et al. [2012] and references therein) when the
agents are ?zero intelligence? or derivatives thereof (and therefore definitely not Bayesian), this line
of literature has not looked at market scoring rules in detail, and analytical results have been rare.
In recent years, the literature focusing on the market scoring rule (or, equivalently, the cost functionbased market maker) family has grown substantially. Chen and Vaughan [2010] and Frongillo et al.
[2012] have uncovered isomorphisms between this type of market structure and well-known machine learning algorithms. We, on the other hand, are concerned with the similarities between price
evolution in MSR-mediated markets and opinion pooling methods (see e.g. Garg et al. [2004]). Our
work comes close to that of Sethi and Vaughan [2015] who show analytically that the price sequence
of a cost function-based market maker with budget-limited risk-averse traders is ?convergent under
general conditions?, and by simulation that the limiting price of LMSR with multi-shot but myopic
logarithmic utility agents is approximately a linear opinion pool of agent beliefs. Abernethy et al.
[2014] show that a risk-averse exponential utility agent with an exponential family belief distribution updates the state vector of a generalization of LMSR that they propose to a convex combination
of the current market state vector and the natural parameter vector of the agent?s own belief distribution (see their Theorem 5.2, Corollary 5.3) ? this reduces to a logarithmic opinion pool (LogOP)
for classical LMSR. The LMSR-LogOP connection was also noted by Pennock and Xia [2011] (in
their Theorem 1) but with respect to an artificial probability distribution based on an agent?s observed trade that the authors defined instead of considering traders? belief structure or strategies. We
show how results of this type arise as special cases of a more general MSR-OP equivalence that we
establish in this paper.
2
Model and definitions
Consider a decision-maker or principal interested in the ?opinions? / ?beliefs? / ?forecasts? of a
group of n agents about an extraneous random binary event X ? {0, 1}, expressed in the form of
point probabilities ?i ? (0, 1), i = 1, 2, . . . , n, i.e. ?i is agent i?s subjective probability Pr(X = 1).
X can represent a proposition such as ?A Republican will win the next U.S. presidential election?
or ?The favorite will beat the underdog by more than a pre-determined point spread in a game of
football? or ?The next Avengers movie will hit a certain box office target in its opening week.? In this
section, we briefly describe two approaches towards the aggregation of such private beliefs: (1) the
opinion pool, which disregards the problem of incentivizing truthful reports, and focuses simply on
unifying multiple probabilistic reports on a topic, and (2) the market scoring rule, an incentive-based
mechanism for extracting honest beliefs from selfish-rational agents.
2.1
Opinion Pool (OP)
This family of methods takes as input the vector of probabilistic reports pi , i = 1, 2, ? ? ? , n submitted
by n agents, also called experts in this context, and computes an aggregate or consensus operator
pb = f (p1 , p2 , ? ? ? , pn ) ? [0, 1]. Garg et al. [2004] identified three desiderata for an opinion pool
(other criteria are also recognized in the literature, but the following are the most basic and natural):
1. Unanimity: If all experts agree, the aggregate also agrees with them.
2. Boundedness: The aggregate is bounded by the extremes of the inputs.
3. Monotonicity: If one expert changes her opinion in a particular direction while all other
experts? opinions remain unaltered, then the aggregate changes in the same direction.
Definition 1. We call pb = f (p1 , p2 , ? ? ? , pn ) a valid opinion pool for n probabilistic reports if it
possesses properties 1, 2, and 3 listed above.
It is easy to derive the following result for recursively defined pooling functions that will prove
useful for establishing an equivalence between market scoring rules and opinion pools. The proof is
in Section 1 of the Supplementary Material.
Lemma 1. For a two-outcome scenario, if f2 (r1 , r2 ) and fn?1 (q1 , q2 , . . . , qn?1 ) are valid opinion
pools for two probabilistic reports r1 , r2 and n?1 probabilistic reports q1 , q2 , . . . , qn?1 respectively,
then f (p1 , p2 , . . . , pn ) = f2 (fn?1 (p1 , p2 , . . . , pn?1 ), pn ) is also a valid opinion pool for n reports.
3
Two popular opinion pooling methods are the Linear Opinion Pool (LinOP) and the Logarithmic
Opinion Pool (LogOP) which are essentially a weighted average (or convex combination) and a
renormalized weighted geometric mean of the experts? probability reports respectively.
Pn
LinOP(p1 , p2 , ? ? ? , pn )= i=1 ?ilin pi ,
. Q
Qn
Qn
? log
?ilog
n
?ilog
LogOP(p1 , p2 , ? ? ? , pn )= i=1 pi i
p
+
(1
?
p
)
,
i
i=1 i
i=1
for a two-outcome scenario, where ?ilin , ?ilog ? 0 ?i = 1, 2, . . . , n,
2.2
Pn
i=1
?ilin = 1,
Pn
i=1
?ilog = 1.
Market Scoring Rule (MSR)
In general, a scoring rule is a function of two variables s(p, x) ? R ? {??, ?}, where p is an
agent?s probabilistic prediction (density or mass function) about an uncertain event, x is the realized
or revealed outcome of that event after the prediction has been made, and the resulting value of s
is the agent?s ex post compensation for prediction. For a binary event X, a scoring rule can just be
represented by the pair (s1 (p), s0 (p)) which is the vector of agent compensations for {X = 1} and
{X = 0} respectively, p ? [0, 1] being the agent?s reported probability of {X = 1} which may or
may not be equal to her true subjective probability, say, ? = Pr(X = 1). A scoring rule is defined
to be strictly proper if it is incentive-compatible for a risk-neutral agent, i.e. an agent maximizes
her subjective expectation of her ex post compensation by reporting her true subjective probability:
? = arg maxp?[0,1] [?s1 (p) + (1 ? ?)s0 (p)], ?? ? [0, 1].
In addition, a two-outcome scoring rule is regular if sj (?) is real-valued except possibly that s0 (1)
or s1 (0) is ??; any regular strictly proper scoring rule can written in the following form (Gneiting
and Raftery [2007]):
sj (p) = G(p) + G0 (p)(j ? p),
j ? {0, 1}, p ? [0, 1],
(1)
0
G : [0, 1] ? R is a strictly convex function with G (?) as a sub-gradient which is real-valued expect
possibly that ?G0 (0) or G0 (1) is ?; if G(?) is differentiable in (0, 1), G0 (?) is simply its derivative.
A classic example of a regular strictly proper scoring rule is the logarithmic scoring rule:
s1 (p) = b ln p;
s0 (p) = b ln(1 ? p),
where b > 0 is a free parameter.
(2)
Hanson [2003] introduced an extension of a scoring rule wherein the principal initiates the process
of information elicitation by making a baseline report p0 , and then elicits publicly declared reports
pi sequentially from n agents; the ex post compensation cx (pi , pi?1 ) received by agent i from the
principal, where x is the realized outcome of event X, is the difference between the scores assigned
to the reports made by herself and her predecessor:
cx (pi , pi?1 ) , sx (pi ) ? sx (pi?1 ),
x ? {0, 1}.
(3)
If each agent acts non-collusively, risk-neutrally, and myopically (as if her current interaction with
the principal is her last), then the incentive compatibility property of a strictly proper score still holds
for the sequential version. Moreover, it is easy to show that the principal?s worst-case payout (loss)
is bounded regardless of agent behavior. In particular, for the binary-outcome logarithmic score, the
loss bound for p0 = 1/2 is b ln 2; b can be referred to as the principal?s loss parameter.
A sequentially shared strictly proper scoring rule of the above form can also be interpreted as a
cost function-based prediction market mechanism offering trade in an Arrow-Debreu (i.e. (0, 1)valued) security written on the event X, hence the name ?market scoring rule?. The cost function
is a strictly convex function of the total outstanding quantity of the security that determines all
execution costs; its first derivative (the cost per share of buying or the proceeds per share from
selling an infinitesimal quantity of the security) is called the market?s ?instantaneous price?, and can
be interpreted as the market maker?s current risk-neutral probability (Chen and Pennock [2007]) for
{X = 1}, the starting price being equal to the principal?s baseline report p0 . Trading occurs in
discrete episodes 1, 2, . . . , n, in each of which an agent orders a quantity of the security to buy or
sell given the market?s cost function and the (publicly displayed) instantaneous price. Since there is
a one-to-one correspondence between agent i?s order size and pi , the market?s revised instantaneous
price after trading with agent i, an agent?s ?action? or trading decision in this setting is identical to
making a probability report by selecting a pi ? [0, 1]. If agent i is risk-neutral, then pi is, by design,
her subjective probability ?i (see Hanson [2003], Chen and Pennock [2007] for further details).
4
Definition 2. We call a market scoring rule well-behaved if the underlying scoring rule is regular
and strictly proper, and the associated convex function G(?) (as in (1)) is continuous and thricedifferentiable, with 0 < G00 (p) < ? and |G000 (p)| < ? for 0 < p < 1.
3
MSR behavior with risk-averse myopic agents
We first present general results on the connection between sequential trading in a MSR-mediated
market with risk-averse agents and opinion pooling, and then give a more detailed picture for two
representative utility functions without and with budget constraints respectively. Please refer to
Section 2 of the Supplementary Material for detailed proofs of all results in this section.
Suppose that, in addition to a belief ?i = Pr(X = 1), each agent i has a continuous utility function
of wealth ui (c), where c ? [cmin
, ?] denotes her (ex post) wealth, i.e. her net compensation from
i
the market mechanism after the realization of X defined in (3), and cmin
? [??, 0] is her minimum
i
acceptable wealth (a negative value suggests tolerance of debt); ui (?) satisfies the usual criteria of
non-satiation i.e. u0i (c) > 0 except possibly that u0i (?) = 0, and risk aversion, i.e. u00i (c) < 0
except possibly that u00i (?) = 0, through out its domain (Mas-Colell et al. [1995]); in other words
ui (?) is strictly increasing and strictly concave. Additionally, we require its first two derivatives to
be finite and continuous on [cmin
, ?] except that we tolerate u0i (cmin
) = ?, u00i (cmin
) = ??. Note
i
i
i
that, by choosing a finite lower bound cmin
on
the
agent?s
wealth,
we
can account for any starting
i
wealth or budget constraint that effectively restricts the agent?s action space.
Lemma 2. If |cmin
| < ?, then there exist lower and upper bounds, pmin
? [0, pi?1 ] and pmax
?
i
i
i
[pi?1 , 1] respectively, on the feasible values of the price pi to which agent i can drive the market
min
min
regardless of her belief ?i , where pmin
= s?1
+ s1 (pi?1 )) and pmax
= s?1
+ s0 (pi?1 )).
i
i
1 (ci
0 (ci
Since the latest price pi?1 can be viewed as the market?s current ?state? from myopic agent i?s
perspective, the agent?s final utility depends not only on her own action pi and the extraneously
determined outcome x but also on the current market state pi?1 she encounters, her rational action
being given by pi = arg maxp?[0,1] [?i ui (c1 (p, pi?1 )) + (1 ? ?i )ui (c0 (pi , pi?1 ))]. This leads us
to the main result of this section.
Theorem 1. If a well-behaved market scoring rule for an Arrow-Debreu security with a starting
instantaneous price p0 ? (0, 1) trades with a sequence of n myopic agents with subjective probabilities ?1 , . . . , ?n ? (0, 1) and risk-averse utility functions of wealth u1 (?), . . . , un (?) as above, then
the updated market price pi after every trading episode i ? {1, 2, . . . , n} is equivalent to a valid
opinion pool for the market?s initial baseline report p0 and the subjective probabilities ?1 , ?2 , . . . , ?i
of all agents who have traded up to (and including) that episode.
Proof sketch.
For every trading epsiode i, by setting the first derivative of agent i?s expected
utility to zero, and analyzing the resulting equation, we can arrive at the following lemmas.
Lemma 3. Under the conditions of Theorem 1, if pi?1 ? (0, 1), then the revised price pi after agent
i trades is the unique solution in (0, 1) to the fixed-point equation:
pi =
?i u0i (c1 (pi , pi?1 ))
.
0
?i ui (c1 (pi , pi?1 )) + (1 ? ?i )u0i (c0 (pi , pi?1 ))
(4)
Since p0 ? (0, 1), and ?i ? (0, 1) ?i, pi is also confined to (0, 1) ?i, by induction.
Lemma 4. The implicit function pi (pi?1 , ?i ) described by (4) has the following properties:
1. pi = ?i (or pi?1 ) if and only if ?i = pi?1 .
2. 0 < min{pi?1 , ?i } < pi < max{pi?1 , ?i } < 1 whenever ?i 6= pi?1 , 0 < ?i , pi?1 < 1.
3. For any given pi?1 (resp. ?i ), pi is a strictly increasing function of ?i (resp. pi?1 ).
Evidently, properties 1, 2, and 3 above correspond to axioms of unanimity, boundedness, and monotonicity respectively, defined in Section 2. Hence, pi (pi?1 , ?i ) is a valid opinion pooling function for
pi?1 , ?i . Finally, since (4) defines the opinion pool pi recursively in terms of pi?1 ?i = 1, 2, . . . , n,
we can invoke Lemma 1 to obtain the desired result.
5
There are several points worth noting about this result. First, since the updated market price pi is
also equivalent to agent i?s action (Section 2.2), the R.H.S. of (4) is agent i?s risk-neutral probability
(Pennock [1999]) of {X = 1}, given her utility function, her action, and the current market state.
Thus, Lemma 3 is a natural extension of the elicitation properties of a MSR. MSRs, by design, elicit
subjective probabilities from risk-neutral agents in an incentive compatible manner; we show that, in
general, they elicit risk-neutral probabilities when they interact with risk-averse agents. Lemma 3 is
also consistent with the observation of Pennock [1999] that, for all belief elicitation schemes based
on monetary incentives, an external observer can only assess a participant?s risk-neutral probability
uniquely; she cannot discern the participant?s belief and utility separately. Second, observe that
this pooling operation is accomplished by a MSR even without direct revelation. Finally, notice
the presence of the market maker?s own initial baseline p0 as a component in the final aggregate;
however, for the examples we study below, the impact of p0 diminishes with the participation of
more and more informed agents, and we conjecture that this is a generic property.
In general, the exact form of this pooling function is determined by the complex interaction between
the MSR and agent utility, and a closed form of pi from (4) might not be attainable in many cases.
However, given a paticular MSR, we can venture to identify agent utility functions which give rise
to well-known opinion pools. Hence, for the rest of this paper, we focus on the logarithmic market
scoring rule (LMSR), one of the most popular tools for implementing real-world prediction markets.
3.1
LMSR as LogOP for constant absolute risk aversion (CARA) utility
Theorem 2. If myopic agent i, having a subjective belief ?i ? (0, 1) and a risk-averse utility function satisfying our criteria, trades with a LMSR market with parameter b and current instantaneous
price pi?1 , then the market?s updated price pi is identical to a logarithmic opinion pool between the
current price and the agent?s subjective belief, i.e.
?i 1??i
i
pi = ?i?i p1??
?i pi?1 + (1 ? ?i )?i (1 ? pi?1 )1??i , ?i ? (0, 1),
(5)
i?1
if and only if agent i?s utility function is of the form
ui (c) = ?i (1 ? exp (?c/?i )) ,
c ? R ? {??, ?},
the aggregation weight being given by ?i =
constant ?i ? (0, ?),
(6)
?i /b
1+?i /b .
The proof is in Section 2.1 of the Supplementary Material. Note that (6) is a standard formulation
of the CARA (or negative exponential) utility function with risk tolerance ?i ; smaller the value of
?i , higher is agent i?s aversion to risk. The unbounded domain of ui (?) indicates a lack of budget
constraints; risk aversion comes about from the fact that the range of the function is bounded above
(by its risk tolerance ?i ) but not bounded below.
Moreover, the LogOP equation (5) can alternatively be expressed as a linear update in terms of
log-odds ratios, another popular means of formulating one?s belief about a binary event:
p
l(pi ) = ?i l(?i ) + (1 ? ?i )l(pi?1 ), l(p) = ln 1?p
? [??, ?] for p ? [0, 1].
(7)
Aggregation weight and risk tolerance: Since ?i is an increasing function of an agent?s risk
tolerance relative to the market?s loss parameter (the latter being, in a way, a measure of how much
risk the market maker is willing to take), identity (7) implies that the higher an agent?s risk tolerance,
the larger is the contribution of her belief towards the changed market price, which agrees with
intuition. Also note the interesting manner in which the market?s loss parameter effectively scales
down an agent?s risk tolerance, enhancing the inertia factor (1 ? ?i ) of the price process.
Bayesian interpretation: The Bayesian interpretation of LogOP in general is well-known (Bordley
[1982]); we restate it here in
a formthat is
more appropriate
for our prediction
market
? i
?i i setting. We
?i h
?i
?i
1??i
can recast (5) as pi = pi?1 pi?1
pi?1 pi?1
+ (1 ? pi?1 ) 1?pi?1
. This shows
that, over the ith trading episode ?i, the LMSR-CARA agent market environment is equivalent
to a Bayesian learner performing inference on the point estimate of the probability of the forecast
event X, starting with the common-knowledge prior Pr(X = 1) = pi?1 , and having direct access
to ?i (which corresponds to the ?observation? for the inference
problem),
the likelihood function
1?x??i ?i
associated with this observation being L (X = x|?i ) ? 1?x?pi?1 , x ? {0, 1}.
6
Sequence of one-shot traders: If all n agents in the system have CARA utilities with potentially
different risk tolerances, and trade with LMSR myopically only once each in the order 1, . . . , n, then
the ?final? market log-odds
ratio after these n trades, on unfolding
Pn
Qn the recursion in (7), is given by
l(pn ) = ?
e0n l(p0 )+ i=1 ?
ein l(?i ). This is a LogOP where ?
e0n = i=1 (1??i ) determines the inertia
of the market?s initial price, which diminishes as more and more traders interact with the market, and
?
ejn , j ? 1 quantifies the degree to which an individual trader impacts the final (aggregate) market
Qn
belief; ?
ejn = ?j i=j+1 (1 ? ?i ), j = 1, . . . , n ? 1, and ?
enn = ?n . Interestingly, the weight of an
agent?s belief depends not only on her own risk tolerance but also on those of all agents succeeding
her in the trading sequence (lower weight for a more risk tolerant successor, ceteris paribus), and is
independent of her predecessors? utility parameters. This is sensible since, by the design of a MSR,
trader i?s belief-dependent action influences the action of each of (rational) traders i + 1, i + 2, . . .
so that the action of each of these successors, in turn, has a role to play in determining the market
impact of trader i?s belief. In particular, if ?j = ? > 0 ?j ? 1, then the aggregation weights satisfy
n
the inequalities ?
ej+1
/e
?jn = 1 + ? /b > 1 ?j = 1, ? ? ? , n ? 1, i.e. LMSR assigns progessively higher
weights to traders arriving later in the market?s lifetime when they all exhibit identical constant risk
aversion. This seems to be a reasonable aggregation principle in most scenarios wherein the amount
of information in the world improves over time. Moreover, in this situation, ?
e1n /e
?0n = ? /b which
indicates that the weight of the market?s baseline belief in the aggregate may be higher than those of
some of the trading agents if the market maker has a comparatively high loss parameter. This strong
effect of the trading sequence on the weights of agents? beliefs is a significant difference between the
one-shot trader setting and the market equilibrium setting where each agent?s weight is independent
of the utility function parameters of her peers.
Convergence: If agents? beliefs are themselves independent samples from the same distribution P
over [0, 1], i.e. ?i ?i.i.d. P ?i, then by the sum laws of expectation and variance,
Pn
E [l(pn )] = ?
e0n l(p0 ) + (1 ? ?
e0n )E??P [l(?)] ; Var [l(pn )] = Var??P [l(?)] i=1 (e
?in )2 .
Hence, using an appropriate concentration inequality (Boucheron et al. [2004]) and the properties of
the ?
ein ?s, we can show that, as n increases, the market log-odds ratio l(pn ) converges to E??P [l(?)]
with a high probability; this convergence guarantee does not require the agents to be Bayesian.
3.2
LMSR as LinOP for an atypical utility with decreasing absolute risk aversion
Theorem 3. If myopic agent i, having a subjective belief ?i ? (0, 1) and a risk-averse utility function satisfying our criteria, trades with a LMSR market with parameter b and current instantaneous
price pi?1 , then the market?s updated price pi is identical to a linear opinion pool between the
current price and the agent?s subjective belief, i.e.
pi = ?i ?i + (1 ? ?i )pi?1 ,
for some constant ?i ? (0, 1),
(8)
if and only if agent i?s utility function is of the form
ui (c) = ln(exp((c + Bi )/b) ? 1),
c ? ?Bi ,
(9)
where Bi > 0 represents agent i?s budget, the aggregation weight being ?i = 1 ? exp(?Bi /b).
The proof is in Section 2.2 of the Supplementary Material. The above atypical utility function has
its domain bounded below, and possesses a positive but strictly decreasing Arrow-Pratt absolute
1
risk aversion measure (Mas-Colell et al. [1995]) Ai (c) = ?u00i (c)/u0i (c) = b(exp((c+B
for
i )/b)?1)
any b, Bi > 0. It shares these characteristics with the well-known logarithmic utility function.
Moreover, although this function is approximately linear for large (positive) values of the wealth c,
it is approximately logarithmic when (c + Bi ) b.
Theorem 3 is somewhat surprising since it is logarithmic utility that has traditionally been found to
effect a LinOP in a market equilibrium (Pennock [1999], Beygelzimer et al. [2012], Storkey et al.
[2015], etc.). Of course in this paper, we are not in an equilibrium / convergence setting, but in light
of the above similarities between utility function (9) and logarithmic utility, it is perhaps not unreasonable to ask whether the logarithmic utility-LinOP connection is still maintained approximately
for LMSR price evolution under some conditions. We have extensively explored this idea, both analytically and by simulations, and have found that a small agent budget compared to the LMSR loss
parameter b seems to produce the desired result (see Section 3 of the Supplementary Material).
7
Note that, unlike in Theorem 2, the equivalence here requires the agent utility function to depend on
the market maker?s loss parameter b (the scaling factor in the exponential). Since the microstructure
is assumed to be common knowledge, as in traditional MSR settings, the consideration of an agent
utility that takes into account the market?s pricing function is not unreasonable.
Since the domain of utility function (9) is bounded below, we can derive ?i -independent bounds on
= ?i + (1 ? ?i )pi?1 . Hence,
= (1 ? ?i )pi?1 , pmax
possible values of pi from Lemma 2: pmin
i
i
min
, i.e. the revised price is a linear interpolation
+
(1
?
?
equation (8) becomes pi = ?i pmax
i )pi
i
between the agent?s price bounds, her subjective probability itself acting as the interpolation factor.
Aggregation weight and budget constraint: Evidently, the aggregation weight of agent i?s belief, ?i = (1 ? exp(?Bi /b)), is an increasing function of her budget normalized with respect to
the market?s loss parameter; it is, in a way, a measure of her relative risk tolerance. Thus, broad
characteristics analogous to the ones in Section 3.1 apply to these aggregation weights as well, with
the log-odds ratio replaced by the actual market price.
Bayesian interpretation: Under the mild technical assumption that agent i?s belief ?i ? (0, 1) is
rational, and her budget Bi > 0 is such that ?i ? (0, 1) is also rational, it is possible to obtain positive
integers ri , Ni and a positive rational number mi?1 such that ?i = ri /Ni and ?i = Ni /(mi?1 +Ni ).
+pi?1 mi?1
Then, we can rewrite the LinOP equation (8) as pi = ri m
, which is equivalent to the postei?1 +Ni
rior expectation of a beta-binomial Bayesian inference procedure described as follows: The forecast
event X is modeled as the (future) final flip of a biased coin with an unknown probability of heads.
In episode i, the principal (or aggregator) has a prior distribution B ETA(?i?1 , ?i?1 ) over this probability, with ?i?1 = pi?1 mi?1 , ?i?1 = (1 ? pi?1 )mi?1 . Thus, pi?1 is the prior mean and mi?1
the corresponding ?pseudo-sample size? parameter. Agent i is non-Bayesian, and her subjective
probability ?i , accessible to the aggregator, is her maximum likelihood estimate associated with the
(binomial) likelihood of observing ri heads out of a private sample of Ni independent flips of the
above coin (Ni is common knowledge). Note that mi?1 , Ni are measures of certainty of the aggregator and the trading agent respectively, and the latter?s normalized budget Bi /b = ln(1+Ni /mi?1 )
becomes a measure of her certainty relative to the aggregator?s current state in this interpretation.
Sequence of one-shot traders and convergence: If all agents have utility (9) with potentially
different budgets, and trade with LMSR myopically once each, then the final aggregate market
Pn
Qn
price is given by pn = ?e0n p0 + i=1 ?ein ?i , which is a LinOP where ?e0n = i=1 (1 ? ?i ), ?ejn =
Qn
?j i=j+1 (1 ? ?i ) ?j = 1, . . . , n ? 1, ?enn = ?n . Again, all intuitions about ?
ejn from Section 3.1
carry over to ?en . Moreover, if ?i ?i.i.d. P ?i, then we can proceed exactly as in Section 3.1 to show
j
that, as n increases, pn converges to E??P [?] with a high probability.
4
Discussion and future work
We have established the correspondence of a well-known securities market microstructure to a class
of traditional belief aggregation methods and, by extension, Bayesian inference procedures in two
important cases. An obvious next step is the identification of general conditions under which a MSR
and agent utility combination is equivalent to a given pooling operation. Another research direction
is extending our results to a sequence of agents who trade repeatedly until ?convergence?, taking into
account issues such as the order in which agents trade when they return, the effects of the updated
wealth after the first trade for agents with budgets, etc.
Acknowledgments
We are grateful for support from NSF IIS awards 1414452 and 1527037.
References
Jacob Abernethy, Sindhu Kutty, S?ebastien Lahaie, and Rahul Sami. Information aggregation in
exponential family markets. In Proc. ACM Conference on Economics and Computation, pages
395?412. ACM, 2014.
8
Alina Beygelzimer, John Langford, and David M Pennock. Learning performance of prediction
markets with Kelly bettors. In Proc. AAMAS, pages 1317?1318, 2012.
Robert F. Bordley. A multiplicative formula for aggregating probability assessments. Management
Science, 28(10):1137?1148, 1982.
St?ephane Boucheron, G?abor Lugosi, and Olivier Bousquet. Concentration inequalities. In Advanced
Lectures on Machine Learning, pages 208?240. Springer, 2004.
Aseem Brahma, Mithun Chakraborty, Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail. A
Bayesian market maker. In Proc. ACM Conference on Electronic Commerce, pages 215?232.
ACM, 2012.
Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In Proc.
UAI-07, 2007.
Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via noregret learning. In Proc. ACM Conference on Electronic Commerce, pages 189?198. ACM, 2010.
Dave Cliff and Janet Bruten. Zero is not enough: On the lower limit of agent intelligence for
continuous double auction markets. Technical report, HPL-97-141, Hewlett-Packard Laboratories
Bristol. 105, 1997.
Bo Cowgill and Eric Zitzewitz. Corporate prediction markets: Evidence from Google, Ford, and
Koch industries. Technical report, Working paper, 2013.
J. Doyne Farmer, Paolo Patelli, and Ilija I. Zovko. The predictive power of zero intelligence in
financial markets. PNAS, 102(6):2254?2259, 2005.
Rafael M. Frongillo, Nicolas D. Penna, and Mark D. Reid. Interpreting prediction markets: A
stochastic approach. In Proc. NIPS, pages 3266?3274, 2012.
Ashutosh Garg, T.S. Jayram, Shivakumar Vaithyanathan, and Huaiyu Zhu. Generalized opinion
pooling. In Proc. 8th Intl. Symp. on Artificial Intelligence and Mathematics. Citeseer, 2004.
Christian Genest and James V. Zidek. Combining probability distributions: A critique and an annotated bibliography. Statistical Science, pages 114?135, 1986.
Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation.
Journal of the American Statistical Association, 102(477):359?378, 2007.
Robin D. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):
107?119, 2003.
Jinli Hu and Amos J. Storkey. Multi-period trading prediction markets with connections to machine
learning. Proc. ICML, 2014.
Krishnamurthy Iyer, Ramesh Johari, and Ciamac C. Moallemi. Information aggregation and allocative efficiency in smooth markets. Management Science, 60(10):2509?2524, 2014.
Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic theory, volume 1.
New York: Oxford University Press, 1995.
Jono Millin, Krzysztof Geras, and Amos J. Storkey. Isoelastic agents and wealth updates in machine
learning markets. In Proc. ICML, pages 1815?1822, 2012.
Michael Ostrovsky. Information aggregation in dynamic markets with strategic traders. Econometrica, 80(6):2595?2647, 2012.
David M. Pennock. Aggregating probabilistic beliefs: Market mechanisms and graphical representations. PhD thesis, The University of Michigan, 1999.
David M. Pennock and Lirong Xia. Price updating in combinatorial prediction markets with
Bayesian networks. In Proc. UAI, pages 581?588, 2011.
Rajiv Sethi and Jennifer Wortman Vaughan. Belief aggregation with automated market makers.
Computational Economics, Forthcoming, 2015. Available at SSRN 2288670.
Amos J Storkey, Zhanxing Zhu, and Jinli Hu. Aggregation under bias: R?enyi divergence aggregation
and its implementation via machine learning markets. In Machine Learning and Knowledge
Discovery in Databases, pages 560?574. Springer, 2015.
James Surowiecki. The wisdom of crowds. Anchor, 2005.
Justin Wolfers and Eric Zitzewitz. Prediction markets. J. Econ. Perspectives, 18(2):107?126, 2004.
9
| 5840 |@word mild:3 msr:26 unaltered:1 private:5 briefly:1 chakraborty:2 version:1 seems:2 c0:2 open:1 adrian:1 willing:3 hu:3 simulation:2 noregret:1 jacob:1 p0:11 citeseer:1 q1:2 attainable:1 shot:4 boundedness:2 recursively:2 carry:1 initial:3 uncovered:1 score:3 selecting:1 offering:3 interestingly:1 subjective:16 current:12 beygelzimer:4 surprising:1 reminiscent:1 written:2 john:1 fn:2 informative:1 compel:1 christian:1 succeeding:1 update:5 ashutosh:1 generative:1 intelligence:4 ith:1 characterization:1 unbounded:1 direct:2 predecessor:2 supply:1 beta:1 ilin:3 prove:1 combine:1 symp:1 manner:4 expected:1 market:114 behavior:2 themselves:2 examine:1 e1n:1 p1:7 growing:1 multi:2 buying:1 decreasing:4 actual:2 election:1 considering:1 increasing:4 becomes:2 bounded:7 moreover:5 maximizes:1 underlying:1 mass:1 geras:1 kind:1 interpreted:3 substantially:1 emerging:1 q2:2 informed:2 unified:1 finding:1 guarantee:2 pseudo:1 certainty:2 every:5 act:4 concave:2 exactly:1 revelation:2 ostrovsky:3 hit:1 farmer:2 louis:2 arguably:1 reid:1 positive:4 engineering:1 aggregating:3 gneiting:2 limit:1 analyzing:1 cliff:2 establishing:2 critique:1 oxford:1 interpolation:2 approximately:4 lugosi:1 might:1 garg:3 therein:1 studied:1 equivalence:6 suggests:1 delve:1 limited:1 range:1 bi:9 unique:1 acknowledgment:1 commerce:2 differs:1 procedure:2 axiom:2 evolving:1 empirical:1 elicit:2 pre:1 wustl:1 regular:4 word:1 get:1 cannot:1 close:1 operator:1 janet:1 context:2 risk:53 seminal:1 influence:1 vaughan:5 equivalent:7 map:1 latest:3 regardless:2 starting:4 economics:2 convex:5 focused:1 rajiv:1 assigns:1 rule:27 financial:4 his:1 proving:1 classic:1 satiation:1 traditionally:1 krishnamurthy:1 analogous:1 updated:6 limiting:1 target:1 play:2 suppose:1 resp:2 exact:1 olivier:1 designing:2 sethi:3 storkey:6 satisfying:4 updating:2 database:1 vein:1 observed:1 role:2 worst:1 averse:15 episode:5 unanimity:2 trade:17 principled:1 intuition:2 environment:1 ui:9 econometrica:1 seller:1 dynamic:2 renormalized:1 irrational:1 depend:1 rewrite:1 grateful:1 predictive:1 upon:1 f2:2 learner:2 paribus:1 eric:2 selling:1 efficiency:1 microeconomic:1 differently:1 various:1 represented:1 herself:1 grown:1 attitude:1 enyi:1 effective:1 describe:1 artificial:2 presuppose:1 aggregate:8 formation:1 outcome:8 choosing:1 abernethy:3 peer:2 whose:1 crowd:1 widely:1 supplementary:5 valued:3 say:1 drawing:1 larger:1 football:1 presidential:1 maxp:2 aseem:1 itself:2 cara:5 final:7 ford:1 sequence:7 differentiable:1 evidently:2 analytical:1 net:1 propose:1 yiling:2 interaction:3 reset:1 combining:1 monetary:2 realization:1 ismail:1 intuitive:1 venture:1 differentially:1 convergence:6 regularity:1 double:2 plethora:1 r1:2 produce:1 extending:1 perfect:1 converges:3 intl:1 derive:2 develop:1 op:3 received:1 strong:1 p2:6 trading:16 implies:1 come:2 direction:3 restate:1 annotated:1 stochastic:1 successor:2 opinion:34 material:5 cmin:7 implementing:1 g000:1 require:3 microstructure:5 generalization:1 proposition:1 strictly:13 extension:3 frontier:1 hold:1 koch:1 ground:2 exp:5 equilibrium:12 algorithmic:1 week:1 mo:1 traded:1 purpose:1 estimation:1 diminishes:2 proc:10 axiomatic:1 combinatorial:2 currently:1 maker:14 lmsr:19 agrees:2 tool:4 weighted:2 amos:3 zidek:2 unfolding:1 always:2 rather:2 pn:19 frongillo:2 ej:1 broader:1 office:1 corollary:1 focus:3 she:2 indicates:2 likelihood:3 contrast:1 baseline:5 sense:1 inference:5 dependent:1 entire:1 typically:1 abor:1 her:38 hidden:1 interested:1 compatibility:2 issue:2 arg:2 
extraneous:1 constrained:1 special:2 equal:3 field:1 never:1 having:5 washington:1 u0i:6 once:2 identical:4 sell:1 represents:1 broad:1 icml:2 future:4 report:16 ephane:1 few:2 opening:1 divergence:1 individual:3 neutrality:1 replaced:1 attempt:1 extreme:1 light:1 hewlett:1 myopic:10 moallemi:1 lahaie:1 bettor:1 re:1 desired:2 theoretical:1 uncertain:2 industry:1 eta:1 formulates:1 cost:8 strategic:1 neutral:13 rare:1 trader:17 wortman:2 colell:3 reported:1 st:3 density:1 fundamental:1 definitely:1 accessible:1 retain:1 probabilistic:8 invoke:1 pool:26 discipline:1 michael:2 again:1 thesis:1 management:2 possibly:4 payout:1 external:1 expert:5 derivative:5 american:1 return:1 pmin:3 account:3 satisfy:1 depends:2 piece:1 multiplicative:1 later:1 observer:1 closed:1 analyze:1 observing:1 johari:1 competitive:2 aggregation:22 participant:2 contribution:2 ass:1 formed:1 ni:9 publicly:2 variance:1 who:5 characteristic:5 judgment:1 correspond:1 identify:1 wisdom:1 bayesian:16 identification:1 worth:2 drive:1 enn:2 dave:1 history:2 submitted:1 bristol:1 penna:1 reach:1 whenever:1 aggregator:4 definition:3 infinitesimal:1 james:2 thereof:1 obvious:1 proof:5 associated:3 mi:8 static:1 rational:8 adjusting:1 popular:4 revise:1 ask:1 knowledge:4 improves:1 sophisticated:1 focusing:1 tolerate:1 higher:4 wherein:2 rahul:1 formulation:1 done:1 box:1 lifetime:1 just:1 implicit:1 hpl:1 until:1 langford:1 hand:1 sketch:1 working:1 assessment:1 lack:1 google:1 defines:1 e0n:6 reveal:1 behaved:2 perhaps:1 pricing:1 name:1 effect:3 normalized:2 true:4 evolution:3 hence:7 inspiration:1 analytically:2 assigned:1 boucheron:2 laboratory:1 jerry:1 attractive:1 game:1 uniquely:1 please:1 maintained:1 noted:1 kutty:1 criterion:4 generalized:2 demonstrate:3 gleaned:1 allen:1 interpreting:1 auction:2 instantaneous:7 novel:1 consideration:2 common:3 behaves:1 wolfers:2 volume:1 association:1 interpretation:5 significant:2 refer:1 ai:1 mathematics:1 access:1 similarity:2 etc:3 add:1 functionbased:1 own:5 recent:2 showed:1 perspective:2 scenario:4 indispensable:1 certain:3 inequality:3 binary:4 accomplished:1 lirong:1 scoring:26 zitzewitz:4 minimum:1 additional:1 somewhat:1 preceding:1 recognized:1 aggregated:1 converge:1 period:1 truthful:1 signal:2 ii:1 multiple:1 corporate:1 pnas:1 reduces:1 debreu:2 smooth:1 technical:3 long:1 post:4 neutrally:1 award:1 jinli:2 impact:3 prediction:19 desideratum:1 basic:1 essentially:1 expectation:3 enhancing:1 represent:1 confined:1 c1:3 addition:2 separately:1 wealth:9 allocative:1 source:1 myopically:3 biased:1 rest:1 unlike:2 posse:2 pennock:12 induced:1 pooling:11 epsiode:1 incorporates:2 integer:1 call:2 extracting:1 odds:4 noting:1 presence:1 revealed:1 paticular:1 easy:2 concerned:1 automated:2 variety:1 pratt:1 sami:1 enough:1 forthcoming:1 identified:1 idea:2 honest:1 whether:1 utility:41 isomorphism:1 proceed:1 york:1 repeatedly:3 action:9 useful:1 detailed:2 aimed:1 involve:1 listed:1 zhanxing:1 amount:1 extensively:1 induces:1 exist:1 restricts:1 nsf:1 notice:1 per:2 econ:1 diverse:1 discrete:1 incentive:9 paolo:1 group:2 pb:2 achieving:1 alina:1 krzysztof:1 lavoie:1 year:1 sum:1 injected:1 furnished:1 mithun:2 reporting:1 family:5 arrive:1 discern:1 reasonable:1 electronic:2 decision:3 acceptable:1 scaling:1 solicits:1 whinston:1 bound:5 convergent:1 correspondence:2 shivakumar:1 constraint:4 millin:3 g00:1 ri:4 bibliography:1 bousquet:1 declared:1 u1:1 min:4 formulating:1 performing:1 relatively:1 conjecture:1 ssrn:1 department:1 combination:3 across:1 remain:1 
smaller:1 making:4 s1:5 avenger:1 intuitively:1 pr:4 ln:6 equation:5 agree:1 jennifer:2 turn:1 mechanism:6 tilmann:1 initiate:1 flip:2 end:1 available:1 operation:3 magdon:1 unreasonable:2 apply:1 observe:1 away:3 generic:1 appropriate:2 encounter:1 coin:2 jn:1 denotes:1 binomial:2 graphical:1 unifying:2 build:1 establish:2 classical:1 comparatively:1 malik:1 move:1 g0:4 question:1 realized:2 looked:1 quantity:3 strategy:1 occurs:1 concentration:2 usual:1 traditional:4 exhibit:1 gradient:1 win:1 elicits:2 sensible:1 topic:1 valuation:1 consensus:2 induction:1 modeled:1 providing:2 ratio:4 acquire:1 equivalently:1 robert:1 potentially:4 negative:3 rise:1 pmax:4 design:4 ebastien:1 proper:8 policy:1 unknown:2 perform:1 implementation:1 upper:1 observation:3 revised:4 ilog:4 honestly:1 finite:2 compensation:5 ramesh:1 displayed:1 beat:1 situation:2 precise:1 head:2 andreu:1 interacting:1 introduced:2 david:4 namely:1 pair:1 connection:4 security:8 hanson:4 established:1 nip:1 justin:1 elicitation:3 proceeds:1 below:4 jayram:1 recast:1 green:1 memory:1 including:2 belief:45 debt:1 max:1 event:10 packard:1 natural:3 power:1 participation:1 recursion:1 advanced:1 zhu:2 scheme:1 movie:1 republican:1 surowiecki:2 picture:1 raftery:2 huaiyu:1 mediated:7 deviate:1 review:1 kelly:1 understanding:4 literature:5 geometric:1 prior:3 determining:1 relative:3 law:1 embedded:1 loss:10 expect:1 lecture:1 interesting:1 var:2 aversion:11 agent:112 degree:1 consistent:1 s0:5 principle:1 pi:86 share:3 compatible:3 changed:1 course:1 last:1 free:1 arriving:1 side:1 rior:1 bias:1 taking:1 absolute:7 tolerance:10 regard:1 xia:2 valid:5 world:2 computes:1 qn:8 author:1 commonly:1 made:2 inertia:2 transaction:1 sj:2 rafael:1 monotonicity:2 active:1 sequentially:3 buy:1 tolerant:1 uai:2 anchor:1 assumed:1 quoted:1 alternatively:1 discovery:1 continuous:5 un:1 quantifies:1 robin:1 ilija:1 additionally:1 learn:1 favorite:1 nicolas:1 genest:2 interact:2 complex:1 domain:4 da:2 significance:1 spread:1 main:1 arrow:3 arise:2 sanmay:3 aamas:1 body:1 referred:1 representative:1 en:1 bordley:2 ein:3 sub:1 exponential:6 atypical:3 tied:1 theorem:8 incentivizing:1 down:1 formula:1 specific:2 vaithyanathan:1 sindhu:1 ciamac:1 r2:2 explored:1 evidence:2 sequential:2 effectively:4 ci:2 phd:1 execution:1 iyer:3 budget:13 sx:2 forecast:3 chen:5 cx:2 logarithmic:19 michigan:1 selfish:2 simply:2 expressed:2 bo:1 springer:2 corresponds:1 truth:3 satisfies:2 determines:2 acm:6 ma:3 conditional:2 viewed:2 identity:1 towards:4 price:46 shared:1 considerable:1 change:2 feasible:1 determined:3 except:4 acting:1 principal:8 conservative:1 called:3 lemma:9 total:1 experimental:1 buyer:1 disregard:1 ceteris:1 support:1 mark:1 latter:2 outstanding:1 ex:4 |
5,348 | 5,841 | Information-theoretic lower bounds for convex
optimization with erroneous oracles
Jan Vondr?ak
IBM Almaden Research Center
San Jose, CA 95120
[email protected]
Yaron Singer
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
We consider the problem of optimizing convex and concave functions with access
to an erroneous zeroth-order oracle. In particular, for a given function x ? f (x)
we consider optimization when one is given access to absolute error oracles that
return values in [f (x) ? , f (x) + ] or relative error oracles that return value in
[(1 ? )f (x), (1 + )f (x)], for some > 0. We show stark information theoretic
impossibility results for minimizing convex functions and maximizing concave
functions over polytopes in this model.
1
Introduction
Consider the problem of minimizing a convex function over some convex domain. It is well known
that this problem is solvable in the sense that there are algorithms which make polynomially-many
calls to an oracle that evaluates the function at every given point, and return a point which is arbitrarily close to the true minimum of the function. But suppose that instead of the true value of
the function, the oracle has some small error. Would it still be possible to optimize the function
efficiently? To formalize the notion of error, we can consider two types of erroneous oracles:
? For a given function f : [0, 1]n ? [0, 1] we say that fe : [0, 1]n ? [0, 1] is an absolute
-erroneous oracle if ?x ? [0, 1]n we have that: fe(x) = f (x) + ?x where ?x ? [?, ].
? For a given function f : [0, 1]n ? R we say that fe : [0, 1]n ? R is a relative -erroneous
oracle if ?x ? [0, 1]n we have that: fe(x) = ?x f (x) where ?x ? [1 ? , 1 + ].
Note that we intentionally do not make distributional assumptions about the errors. This is in contrast to noise, where the errors are assumed to be random and independently generated from some
distribution. In such cases, under reasonable conditions on the distribution, one can obtain arbitrarily good approximations of the true function value by averaging polynomially many points in some
-ball around the point of interest. Stated in terms of noise, in this paper we consider oracles that
have some small adversarial noise, and wish to understand whether desirable optimization guarantees are obtainable. To avoid ambiguity, we refrain from using the term noise altogether, and refer
to such as inaccuracies in evaluation as error.
While distributional i.i.d. assumptions are often reasonable models, evaluating our dependency on
these assumptions seems necessary. From a practical perspective, there are cases in which noise
can be correlated, or where the data we use to estimate the function is corrupted in some arbitrary
way. Furthermore, since we often optimize over functions that we learn from data, the process of
fitting a model to a function may also introduce some bias that does not necessarily vanish, or may
vanish. But more generally, it seems like we should morally know the consequences that modest
inaccuracies may have.
1
f(x)
x
Figure 1: An illustration of an erroneous oracle to a convex function that fools a gradient descent algorithm.
Benign cases. In the special case of a linear function f (x) = c| x, for some c ? Rn , a relative -error has little effect on the optimization. By querying f (ei ), for every i ? [n] we can extract
e
ci ? [(1?)ci , (1+)ci ] and then optimize over f 0 (x) = e
c| x. This results in a (1?)-multiplicative
approximation. Alternatively, if the erroneous oracle fe happens to be a convex function, optimizing f?(x) directly retains desirable optimization guarantees, up to either additive and multiplicative
errors. We are therefore interested in scenarios where the error does not necessarily have nice properties.
Gradient descent fails with error. For a simple example, consider the function illustrated in
Figure 1. The figure illustrates a convex function (depicted in blue) and an erroneous version of
it (dotted red), s.t. on every point, the oracle is at most some additive > 0 away from the true
function value (the margins of the function are depicted in grey). If we assume that a gradient
descent algorithm is given access to the erroneous version (dotted red) instead of the true function
(blue), the algorithm will be trapped in a local minimum that can be arbitrarily far from the true
minimum. But the fact that a naive gradient descent algorithm fails does not necessarily mean that
there isn?t an algorithm that can overcome small errors. This narrates the main question in this paper.
Is convex optimization robust to error?
Main Results. Our results are largely spoilers. We present stark information-theoretic lower
bounds for both relative and absolute -erroneous oracles, for any constant and even sub-constant
> 0. In particular, we show that:
? For minimizing a convex function (or maximizing a concave function) f : [0, 1]n ? [0, 1]
over [0, 1]n : we show that for any fixed ? > 0, no algorithm can achieve an additive
approximation within 1/2 ? ? of the optimum, using a subexponential number of calls to
an absolute n?1/2+? -erroneous oracle.
? For minimizing a convex function f : [0, 1]n ? [0, 1] over a polytope P ? [0, 1]n : for any
fixed > 0, no algorithm can achieve a finite multiplicative factor using a subexponential
number of calls to a relative -erroneous oracle.
? For maximizing a concave function f : [0, 1]n ? [0, 1] over a polytope P ? [0, 1]n : for
any fixed > 0, no algorithm can achieve a multiplicative factor better than ?(n?1/2+ )
using a subexponential number of calls to a relative -erroneous oracle;
? For maximizing a concave function f : [0, 1]n ? [0, 1] over [0, 1]n : for any fixed > 0,
no algorithm can obtain a multiplicative factor better than 1/2 + using a subexponential
number of calls to a relative -erroneous oracle. (And there is a trivial 1/2-approximation
without asking any queries.)
Somewhat surprisingly, many of the impossibility results listed above are shown for a class of extremely simple convex and concave functions, namely, affine functions: f (x) = c| x + b. This is
2
in sharp contrast to the case of linear functions (without the constant term b) with relative erroneous
oracles as discussed above. In addition, we note that our results extend to strongly convex functions.
1.1
Related work
The oracle models we study here fall in the category of zeroth-order or derivative free. Derivativefree methods have a rich history in convex optimization and were among the earliest to numerically
solve unconstrained optimization problems. Recently these approaches have enjoyed increasing
interest, as they are useful in scenarios where black-box access is given to the function or cases in
which gradient information is difficult to compute or does not exist [9, 8, 11, 15, 14, 6]
There has been a rich line of work for noisy oracles, where the oracles return some erroneous version
of the function value which is random. In a stochastic framework, these settings correspond to
repeatedly choosing points in some convex domain, obtaining noisy realizations of some underlying
convex function?s value. Most frequently, the assumption is that one is given a first-order noisy
oracle with some assumptions about the distribution that generates the error [13, 12]. In the learning
theory community, optimization with stochastic noisy oracles is often motivated by multi-armed
bandits settings [4, 1], and regret minimization with zeroth-order feedback [2]. All these models
consider the case in which the error is drawn from a distribution.
The model of adversarial noise in zeroth order oracles has been mentioned in [10] which considers
a related model of erroneous oracles and informally argues that exponentially many queries are
required to approximately minimize a convex function in this model (under an `2 -ball constraint).
In recent work, Belloni et al. [3] study convex optimization with erroneous oracles. Interestingly,
Belloni et al. show positive results. In their work they develop a novel algorithm that is based on
sampling from an approximately log-concave distribution using the Hit-and-Run method and show
that their method has polynomial query complexity. In contrast to the negative results we show in
this work, the work of Belloni et al. assumes the (absolute) erroneous oracle returns f (x) + ?x
with ?x ? [? n , n ]. That is, the error is not a constant term, but rather is inversely proportional
to the dimension. Our lower bounds for additive approximation hold when the oracle error is not
1
1
necessarily a constant but ?x ? [ n1/2??
, n1/2??
] for a constant 0 < ? < 1/2.
2
Preliminaries
Optimization and convexity. For a minimization problem, given a nonnegative objective function
f and a polytope P we will say that an algorithm provides a (multiplicative) ?-approximation (? >
? ? P s.t. f (?
1) if it finds a point x
x) ? ? minx?P f (x). For a maximization problem, an algorithm
? s.t. f (?
provides an ?-approximation (? < 1) if it finds a point x
x) ? ? maxx?P f (x).
For absolute erroneous oracles, given an objective function f and a polytope P we will aim to find
? ? P which is within an additive error of ? from the optimum, with ? as small as possible.
a point x
? s.t. |f (?
That is, for a ? > 0 we aim to find a point x
x)?minx f (x)| < ? in the case of minimization.
A function f : P ? R is convex on P if f (tx + (1 ? t)y) ? tf (x) + (1 ? t)f (y) (or concave if
f (tx + (1 ? t)y) ? tf (x) + (1 ? t)f (y)) for every x, y ? P and t ? [0, 1].
Chernoff bounds. Throughout the paper we appeal to the Chernoff bounds. We note that while
typically stated for independent random variables X1 , . . . , Xm , Chernoff bounds also hold for negatively associated random variables.
Definition 2.1 ([5], Definition 1). Random variables X1 , . . . , Xn are negatively associated, if for
?
every I ? [n] and every non-decreasing f : RI ? R, g : RI ? R,
? ? E[f (Xi , i ? I)]E[g(Xj , j ? I)].
?
E[f (Xi , i ? I)g(Xj , j ? I)]
Claim 2.2 ([5], Theorem 14).
Pn Let X1 , . . . , Xn be negatively associated random variables that take
values in [0, 1] and ? = E[ i=1 Xi ]. Then, for any ? ? [0, 1] we have that:
n
X
2
Pr[
Xi > (1 + ?)?] ? e?? ?/3 ,
i=1
3
n
X
2
Pr[
Xi < (1 ? ?)?] ? e?? ?/2 .
i=1
We apply this to random variables that are formed by selecting a random subset of a fixed size. In
particular, we use the following.
Claim 2.3. Let x1 , . . . , xn ? 0 be fixed. For 1 ? k ? n, let R be a uniformly random subset of k
elements out of [n]. Let Xi = xi if i ? R and Xi = 0 otherwise. Then X1 , . . . , Xn are negatively
associated.
Proof. For x1 = x2 = . . . = xn = 1, the statement holds by Corollary 11 of [5] (which refers
to this distribution as the Fermi-Dirac model). The generalization to arbitrary xi ? 0 follows from
Proposition 4 of [5] with Ij = {j} and hj (t) = xj t.
3
Optimization over the unit cube
We start with optimization over [0, 1]n , arguably the simplest possible polytope. We show that
already in this setting, the presence of adversarial noise prevents us from achieving much more than
trivial results.
3.1
Convex minimization
First let us consider convex minimization over [0, 1]n . In this setting, we show that errors as small
as n?(1??)/2 prevent us from optimizing within a constant additive error.
Theorem 3.1. Let ? > 0 be a constant. There are instances of a convex function f : [0, 1]n ?
[0, 1] accessible through an absolute n?(1??)/2 -erroneous oracle, such that a (possibly randomized)
?
algorithm that makes eO(n ) queries cannot find a solution of value better than within additive
?
1/2 ? o(1) of the optimum with probability more than e??(n ) .
We remark that the proof of this theorem is inspired by the proof of hardness of ( 21 + )approximation for unconstrained submodular maximization [7]; in particular it can be viewed as
a simple application of the ?symmetry gap? argument (see [16] for a more general exposition).
Proof. Let = n?(1??)/2 ; we can assume that < 12 , otherwise n is constant and the statement
is trivial. We will construct an -erroneous oracle (both in the relative and absolute sense) for a
convex function f : [0, 1]n ? [0, 1]. Consider a partition of [n] into two subsets A, B of size
|A| = |B| = n/2 (which will be eventually chosen randomly). We define the following function:
? f (x) =
1
2
P
P
+ n1 ( i?A xi ? j?B xj ).
This is a convex (in fact linear) function. Next, we define the following modification of f , which
could be the function returned by an -erroneous oracle.
? If |
P
xi ?
P
xj | > 12 n, then f?(x) = f (x) =
? If |
P
xi ?
P
xj | ? 12 n, then f?(x) = 12 .
i?A
i?A
j?B
j?B
1
2
P
P
+ n1 ( i?A xi ? j?B xj ).
P
P
Note that f (x) and f?(x) differ only in the region where | i?A xi ? j?B xj | ? 12 n. In particular,
1+
1
?
the value of f (x) in this region is within [ 1?
2 , 2 ], while f (x) = 2 , so an -erroneous oracle for
f (x) (both in the relative and absolute sense) could very well return f?(x) instead.
Now assume that (A, B) is a random partition, unknown to the algorithm. We argue
Pthat with
high probability, a fixed query x issued by the algorithm will have the property that | i?A xi ?
P
1
j?B xj | ? 2 n. More precisely, since (A, B) is chosen at random subject to |A| = |B| = n/2,
4
P
we have that i?A xi is a sum of negatively associated random variables in [0, 1] (by Claim 2.3).
P
Pn
The expectation of this quantity is ? = E[ i?A xi ] = 12 i=1 xi ? 21 n. By Claim 2.2, we have
X
X
2
2
1
n
Pr[
xi > ? + n] = Pr[
xi > (1 +
)?] < e?(n/(4?)) ?/3 ? e? n/24 .
4
4?
i?A
i?A
P
P
P
n
Since 12 i?A xi + 12 i?B xi = 12 i=1 xi = ?, we get
X
X
X
2
1
1
Pr[
xi ?
xi > n] = Pr[
xi ? ? > n] < e? n/24 .
2
4
i?A
i?B
i?A
By symmetry,
Pr[|
X
i?B
xi ?
X
xj | >
j?A
2
1
n] < 2e? n/24 .
2
We emphasize that this holds for a fixed query x.
Recall that we assumed the algorithm to be deterministic. Hence, as long as its queries satisfy the
property above, the answers will be f?(x) = 1/2, and the algorithm will follow the same path of
computation, no matter what the choice of (A, B) is. (Effectively we will not learn anything about
A and B.) Considering the sequence of queries on this computation path, if the number of queries is
2
t then with probability at least 1?2te? n/24 the queries will indeed fall in the region where f?(x) =
2
1/2 and the algorithm will follow this path. If t ? e n/48 , this happens with probability at least
2
1 ? 2e? n/48 . In this case, all the points queried by the algorithm as well as the returned solution
xout (by the same argument) satisfies f?(xout ) = 1/2, and hence f (xout ) ? 1?
2 . In contrast, the
?(1??)/2
actual optimum is f (1B ) = 0. Recall that = n
; hence, f (xout ) ? 12 (1 ? n?(1??)/2 )
and the bounds on the number of queries and probability of success are as in the statement of the
theorem.
Finally, consider a randomized algorithm. Denote by (R1 , R2 , . . . , ...) the random variables used
by the algorithm in its decisions. We can condition on a fixed choice of (R1 = r1 , R2 = r2 , . . .)
which makes the algorithm deterministic. By our proof, the algorithm conditioned on this choice
?
cannot succeed with probability more than e??(n ) . Since this is true for each particular choice of
(r1 , r2 , . . .), by averaging it is also true for a random choice of (R1 , R2 , . . .). Hence, we obtain the
same result for randomized algorithms as well.
3.2
Concave maximization
Here we consider the problem of maximizing a concave function f : [0, 1]n ? [0, 1]. One can
obtain a result for concave maximization analogous to Theorem 3.1, which we do not state; in
terms of additive errors, there is really no difference between convex minimization and concave
maximization. However, in the case of concave maximization we can also formulate the following
hardness result for multiplicative approximation.
Theorem 3.2. If a concave function f : [0, 1]n ? [0, 1] is accessible through a relative-?-erroneous
2
oracle, then for any ? [0, ?], an algorithm that makes less than e n/48 queries cannot find a
?2 n/48
solution of value greater than 1+
.
2 OP T with probability more than 2e
Proof. This result follows from the same construction as Theorem 3.1. Recall that f (x) is a linear
function, hence also concave. As we mentioned in the proof of Theorem 3.1, f?(x) could be the
values returned by a relative -erroneous oracle. Now we consider an arbitrary > 0; note that for
? ? it still holds that f?(x) is a relative ?-erroneous oracle.
n
By the same proof, an algorithm querying less than e n/48 points cannot find a solution of value
?2 n/48
. In contrast, the optimum of the maximization
better than 1+
2 with probability more than 2e
problem is supx?[0,1]n f (x) = 1. Therefore, the algorithm cannot achieve multiplicative approximation better than 1+
2 .
We note that this hardness result is optimal due to the following easy observation.
5
Theorem 3.3. For any concave function f : [0, 1]n ? R+ , let OP T = supx?[0,1]n f (x). Then
1 1
1
1
f
, ,...,
? OP T.
2 2
2
2
Proof. By compactness, the optimum is attained at a point: let OP T = f (x? ). Let also x0 =
(1, 1, . . . , 1) ? x? . We have x0 ? [0, 1]n and hence f (x0 ) ? 0. By concavity, we obtain
?
1
x + x0
f (x? ) + f (x0 )
1
1
1 1
f
, ,...,
=f
?
? f (x? ) = OP T.
2 2
2
2
2
2
2
In other words, a multiplicative 12 -approximation for this problem is trivial to obtain ? even without
asking any queries about f . We just return the point ( 12 , 12 , . . . , 12 ). Thus we can conclude that for
concave maximization, a relative -erroneous oracle is not useful at all.
4
Optimization over polytopes
In this section we consider optimization of convex and concave functions over a polytope
P = {x ? 0 : Ax = b}. We will show inappoximability results for the relative error model. Note
that for the absolute error case, the lower bound on convex minimization from the previous section
holds, and can be applied to show a lower bound for concave maximization with absolute errors.
Theorem 4.1. Let , ? ? (0, 1/2) be some constants. There are convex functions for which no
?
algorithm can obtain a finite approximation ratio to minx?P f (x) using ?(en ) queries to a relative
-erroneous oracle of the function.
P
1/2+?
Proof. We will prove our theorem for the case in which P = {x ? 0 :
}. Let
i xi ? n
H be a subset of indices chosen uniformly at random from all subsets of size exactly n1/2+? . We
construct two functions:
X
f (x) = n1+? ? n1/2
xi
i?H
g(x) = n
1+?
?n
?
X
xi
i
Observe that both these functions are convex and non-negative. Also, observe that
P the minimizer
1/2+?
of f is x? = 1H and f (x? ) = 0, while the minimizer of g is any vector x0 :
i xi = n
0
1+?
1/2+2?
1+?
and g(x ) = n
?n
= ?(n ). Therefore, the ratio between these two functions is
unbounded. We will now construct the erroneous oracle in the following manner:
g(x), if (1 ? )f (x) ? g(x) ? (1 + )f (x)
e
f (x) =
f (x) otherwise
By definition, fe is an -erroneous oracle to f . The claim will follow from the fact that given access
to fe one cannot distinguish between f and g using a subexponential number of queries. This implies
the inapproximability result since an approximation algorithm which guarantees a finite approximation ratio using a subexponential number of queries could be used to distinguish between the two
functions: if the algorithm returns an answer strictly greater than 0 then we know the underlying
function is g and otherwise it is f.
Given a query x ? [0, 1]n to the oracle, we will consider two cases.
? In case the query x is such that
P
i
xi ? n1/2 then we have that:
n1+? ? n ? f (x) ? n1+?
n1+? ? n?+1/2 ? g(x) ? n1+?
6
Since for any , ? > P
0 there is a large enough n s.t. n? > (1 + )/, this implies that for
any query for which i xi ? n1/2 then we have that g(x) ? [(1 ? )f (x), (1 + )f (x)]
and thus the oracle returns g(x).
P
1/2
? In
then we can interpret the value of
i xi > n
P case the query is such that
x
which
determines
value
of
f
as
a
sum
of
negatively
associated random variables
i
i?H
X1 , . . . , Xn where Xi realizes with probability n?1/2+? and takes value xi if realized
(see Claim 2.3). WePcan then apply the Chernoff bound (Claim 2.2), using the fact that
E[f (x)] = n1/2?? i xi , and get that for any constant 0 < ? < 1 we have that with
?
probability 1 ? e??(n ) :
P x
P x
X
i i
i i
1 ? ? 1/2??
?
xi ? 1 + ? 1/2??
n
n
i?H
?
By using ? ? /(1 + ), this implies that with probability at least 1 ? e??(n ) we get that:
(1 ? )f (x) ? g(x) ? (1 + )f (x)
Since the likelihood of distinguishing between f and g on a single query is exponentially
small in n? , the same arguments used throughout the paper imply that it takes an exponential number of queries to distinguish between f and g.
?
To conclude, for any query x ? [0, 1]n it takes ?(en ) queries to distinguish between f and g.
As discussed above, due to the fact that the ratio between the optima of these two functions is
unbounded, this concludes the proof.
Theorem 4.2. ? constants , ? ? (0, 1/2) there is a concave function f : [0, 1]n ? R+ for which
no algorithm can obtain an approximation strictly better than O(n?1/2+? ) to maxx?P f (x) using
?
?(en ) queries to a relative -erroneous oracle of the function.
Proof. We follow a similar methodology as in the proof of Theorem 4.1. We again we select a
P
1/2+?
set H of size n1/2+? u.a.r. and construct two functions: f (x) = n1/2 i?H xi + n
and
P
n1/2+?
?
e
. As in the proof of Theorem 4.1 the noisy oracle f (x) = g(x) when
g(x) = n
i xi +
(1 ? )f (x) ? g(x) ? (1 + )f (x) and otherwise fe(x) = f (x). Note that both functions are concave
and non-negative, and by its definition the oracle is -erroneous for the function f . For b = n1/2+?
it is easy to see that the optimal value when the objective is f is n1+? while the optimal value is
O(n1/2+? ) when the objective is g, which implies that one cannot obtain an approximation better
?1/2+?
than ?(n
) with a subexponential number of queries. In case the query to the oracle is a point
P
1/2
x s.t.
x
?
n
, then by Chernoff bound arguments, similar to the ones we used above, with
i i
?
probability
1 ? e??(n ) we get (1 ? )f (x) ? g(x) ? (1 + )f (x). Thus, for any query in
P at least1/2
which i xi ? n , the likelihood of the oracle returning f is exponentially small in n? .
P
In case the query is a point x s.t. i xi > n1/2 standard concentration bound arguments as before,
?
imply that with probability at least 1 ? e??(n ) we get (1 ? )f (x) ? g(x) ? (1 + )f (x). Since
the likelihood of distinguishing between f and g on a single query is exponentially small in n? , we
can conclude that it takes an exponential number of queries to distinguish between f and g.
5
Optimization over assignments
In this section, we consider the concave maximization problem over a more specific polytope,
?
?
k
?
?
X
Pn,k = x ? Rn?k
:
xij = 1 ?i ? [n] .
+
?
?
j=1
This can be viewed as the matroid polytope for a partition matroid on n blocks of k elements, or
alternatively the convex hull of assignments of n items to k agents. In this case, there is a trivial
1
1
k -approximation, similar to the 2 -approximation in the case of a unit cube.
7
Theorem 5.1. For any k ? 2 and a concave function f : Pn,k ? R+ , let OP T = supx?Pn,k f (x).
Then
1 1
1
1
f
, ,...,
? OP T.
k k
k
k
(`)
Proof. By compactness, the optimum is attained at a point: let OP T = f (x? ). Let xij =
x?i,(j+` mod k) ; i.e., x(`) is a cyclic shift of the coordinates of x? by ` in each block. We have
Pk?1 (`)
Pk
x(`) ? Pn,k and k1 `=0 xij = k1 j=1 x?ij = k1 . By concavity and nonnegativity of f , we obtain
!
k?1
1
1 1
1 X (`)
1
1
, ,...,
=f
x
f
? f (x(0) ) = OP T.
k k
k
k
k
k
`=0
We show that this approximation is best possible, if we have access only to a ?-erroneous oracle.
Theorem 5.2. If k ? 2 and a concave function f : Pn,k ? [0, 1] is accessible through a relative-?2
erroneous oracle, then for any ? [0, ?], an algorithm that makes less than e n/6k queries cannot
?2 n/6k
find a solution of value greater than 1+
.
k OP T with probability more than 2e
Note that this result is nontrivial only for n k. In other words, the hardness factor of k is never
worse than a square root of the dimension of the problem. Therefore, this result can be viewed as
interpolating between the hardness of 1+
2 -approximation over the unit cube (Theorem 3.2), and the
hardness of n??1/2 -approximation over a general polytope (Theorem 4.2).
Proof. Given ? : [n] ? [k], we construct a function f ? : Pn,k ? [0, 1] (where ? describes the
intended optimal solution):
? f ? (x) =
1
n
Pn
i=1
xi,?(i) .
Next we define a modified function f?? as follows:
? If |f ? (x) ? k1 | >
k
then f?? (x) = f ? (x)
? If |f ? (x) ? k1 | ?
k
then f?? (x) = k1 .
1+
By definition, f ? (x) and f?? (x) differ only if |f ? (x) ? k1 | ? k , and then f ? (x) ? [ 1?
k , k ] while
f?? (x) = k1 . Therefore, f?? (x) is a valid relative -erroneous oracle for f ? .
Similarly to the proofs above, we argue that if ? is chosen uniformly at random, then with high
probability f?? (x) = k1 for any fixed query x ? Pn,k . This holds again by a Chernoff bound: For a
Pk
Pn
fixed xij such that j=1 xij = 1, we have that f ? (x) = n1 i=1 xi,?(i) = n1 Z where Z is a sum of
P
independent random variables with expectation k1 i,j xij = nk . The random variables attain values
2
in [0, 1]. By the Chernoff bound, Pr[|Z ? nk | > nk ] < 2e? n/3k . This gives
2
1
Pr |f ? (x) ? | >
< 2e? n/3k .
k
k
2
By the same arguments as before, if the algorithm asks less than e n/6k queries, then it will not
2
detect a point such that |f ? (x) ? k1 | > k with probability more than 2e? n/6k . Then the query answers will all be f?? (x) = k1 and the value of the returned solution will be at most 1+
k . Meanwhile,
the optimum solution is x?i,?(i) = 1 for all i, which gives f (x? ) = 1.
Acknowledgements. YS was supported by NSF grant CCF-1301976, CAREER CCF-1452961 and a Google
Faculty Research Award.
8
References
[1] Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with
multi-point bandit feedback. In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel,
June 27-29, 2010, pages 28?40, 2010.
[2] Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Alexander Rakhlin. Stochastic convex
optimization with bandit feedback. SIAM Journal on Optimization, 23(1):213?240, 2013.
[3] Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, and Alexander Rakhlin. Escaping the local
minima via simulated annealing: Optimization of approximately convex functions. COLT 2015.
[4] S?ebastien Bubeck and Nicol`o Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed
bandit problems. Foundations and Trends in Machine Learning, 5(1):1?122, 2012.
[5] Devdatt Dubhashi, Volker Priebe, and Desh Ranjan. Negative dependence through the FKG inequality.
In Research report MPI-I-96-1-020, Max-Planck Institut f?ur Informatik, Saarbr?ucken, 1996.
[6] John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zeroorder convex optimization: The power of two function evaluations. IEEE Transactions on Information
Theory, 61(5):2788?2806, 2015.
[7] Uriel Feige, Vahab S. Mirrokni, and Jan Vondr?ak. Maximizing non-monotone submodular functions.
SIAM J. Comput., 40(4):1133?1153, 2011.
[8] Abraham Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the
bandit setting: gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM
Symposium on Discrete Algorithms, SODA 2005, Vancouver, British Columbia, Canada, January 23-25,
2005, pages 385?394, 2005.
[9] Kevin G. Jamieson, Robert D. Nowak, and Benjamin Recht. Query complexity of derivative-free optimization. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural
Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe,
Nevada, United States., pages 2681?2689, 2012.
[10] A.S. Nemirovsky and D.B. Yudin. Problem Complexity and Method Efficiency in Optimization. J. Wiley
& Sons, New York, 1983.
[11] Yurii Nesterov. Random gradient-free minimization of convex functions. CORE Discussion Papers
2011001, Universit?e catholique de Louvain, Center for Operations Research and Econometrics (CORE),
2011.
[12] Aaditya Ramdas, Barnab?as P?oczos, Aarti Singh, and Larry A. Wasserman. An analysis of active learning
with uniform feature noise. In Proceedings of the Seventeenth International Conference on Artificial
Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014, pages 805?813, 2014.
[13] Aaditya Ramdas and Aarti Singh. Optimal rates for stochastic convex optimization under tsybakov noise
condition. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 365?373, 2013.
[14] Ohad Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. In COLT
2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ,
USA, pages 3?24, 2013.
[15] Sebastian U. Stich, Christian L. M?uller, and Bernd G?artner. Optimization of convex functions with random
pursuit. CoRR, abs/1111.0194, 2011.
[16] Jan Vondr?ak. Symmetry and approximability of submodular maximization problems. SIAM J. Comput.,
42(1):265?304, 2013.
9
| 5841 |@word faculty:1 version:3 polynomial:1 seems:2 dekel:1 grey:1 nemirovsky:1 xout:4 asks:1 cyclic:1 selecting:1 united:1 daniel:1 interestingly:1 com:1 john:1 additive:8 partition:3 benign:1 christian:1 intelligence:1 item:1 desh:1 core:2 provides:2 tahoe:1 unbounded:2 symposium:1 prove:1 artner:1 fitting:1 manner:1 introduce:1 x0:6 indeed:1 hardness:6 frequently:1 multi:3 inspired:1 decreasing:1 little:1 armed:2 actual:1 considering:1 increasing:1 ucken:1 underlying:2 what:1 israel:1 nj:1 guarantee:3 every:6 concave:24 exactly:1 returning:1 universit:1 hit:1 unit:3 grant:1 jamieson:1 planck:1 arguably:1 positive:1 before:2 local:2 consequence:1 ak:3 path:3 approximately:3 black:1 zeroth:4 seventeenth:1 practical:1 pthat:1 regret:2 block:2 jan:3 maxx:2 attain:1 word:2 refers:1 get:5 cannot:8 ga:1 close:1 fkg:1 optimize:3 deterministic:2 dean:1 center:2 maximizing:6 ranjan:1 independently:1 convex:39 formulate:1 wasserman:1 notion:1 coordinate:1 analogous:1 construction:1 suppose:1 shamir:1 distinguishing:2 harvard:2 element:2 trend:1 econometrics:1 distributional:2 iceland:1 region:3 devdatt:1 mentioned:2 benjamin:1 convexity:1 complexity:4 nesterov:1 singh:2 negatively:6 efficiency:1 tx:2 query:36 artificial:1 kevin:1 choosing:1 solve:1 say:3 otherwise:5 statistic:1 noisy:5 online:2 sequence:1 nevada:1 realization:1 achieve:4 sixteenth:1 dirac:1 fermi:1 optimum:9 r1:5 sea:1 adam:1 develop:1 ij:2 op:10 implies:4 differ:2 stochastic:6 hull:1 larry:1 barnab:1 generalization:1 really:1 preliminary:1 proposition:1 strictly:2 hold:7 around:1 claim:7 aarti:2 realizes:1 tf:2 minimization:8 uller:1 aim:2 modified:1 rather:1 kalai:1 avoid:1 pn:11 hj:1 volker:1 earliest:1 corollary:1 ax:1 june:3 likelihood:3 impossibility:2 contrast:5 adversarial:3 brendan:1 sense:3 detect:1 typically:1 compactness:2 bandit:6 interested:1 among:1 subexponential:7 colt:3 almaden:1 special:1 cube:3 construct:5 never:1 sampling:1 chernoff:7 icml:1 report:1 randomly:1 intended:1 n1:23 ab:1 atlanta:1 interest:2 evaluation:2 held:1 nowak:1 necessary:1 ohad:1 modest:1 institut:1 haifa:1 instance:1 vahab:1 asking:2 morally:1 retains:1 assignment:2 maximization:11 subset:5 uniform:1 dependency:1 answer:3 supx:3 corrupted:1 recht:1 international:2 randomized:3 siam:4 accessible:3 wepcan:1 michael:1 again:2 ambiguity:1 cesa:1 possibly:1 worse:1 derivative:3 return:9 stark:2 de:1 matter:1 satisfy:1 multiplicative:9 root:1 red:2 start:1 yaron:2 minimize:1 formed:1 square:1 hariharan:1 largely:1 efficiently:1 correspond:1 informatik:1 history:1 andre:1 sebastian:1 definition:5 evaluates:1 intentionally:1 associated:6 proof:17 hsu:1 recall:3 formalize:1 obtainable:1 alexandre:1 attained:2 follow:4 methodology:1 april:1 box:1 strongly:1 furthermore:1 just:1 uriel:1 ei:1 google:1 usa:2 effect:1 true:8 ccf:2 hence:6 illustrated:1 anything:1 mpi:1 theoretic:3 argues:1 duchi:1 aaditya:2 novel:1 recently:1 exponentially:4 discussed:2 extend:1 numerically:1 interpret:1 refer:1 cambridge:1 queried:1 enjoyed:1 unconstrained:2 rd:1 similarly:1 submodular:3 access:6 alekh:2 recent:1 perspective:1 optimizing:3 scenario:2 issued:1 inequality:1 oczos:1 refrain:1 arbitrarily:3 success:1 meeting:1 minimum:4 greater:3 somewhat:1 eo:1 desirable:2 sham:1 reykjavik:1 long:1 lin:1 y:1 award:1 expectation:2 agarwal:2 addition:1 annealing:1 subject:1 december:1 tengyuan:1 mod:1 jordan:1 call:5 presence:1 easy:2 enough:1 xj:10 matroid:2 nonstochastic:1 escaping:1 shift:1 whether:1 motivated:1 derivativefree:1 returned:4 york:1 repeatedly:1 remark:1 
useful:2 generally:1 fool:1 listed:1 informally:1 tsybakov:1 narayanan:1 category:1 simplest:1 exist:1 xij:6 nsf:1 dotted:2 trapped:1 blue:2 discrete:1 achieving:1 drawn:1 prevent:1 monotone:1 sum:3 run:1 jose:1 soda:1 throughout:2 reasonable:2 lake:1 decision:1 bound:14 distinguish:5 oracle:51 nonnegative:1 nontrivial:1 annual:3 constraint:1 belloni:4 precisely:1 ri:2 x2:1 generates:1 argument:6 extremely:1 approximability:1 martin:1 ball:2 describes:1 feige:1 son:1 ur:1 kakade:1 modification:1 happens:2 pr:9 eventually:1 singer:1 know:2 yurii:1 ofer:1 operation:1 pursuit:1 apply:2 observe:2 away:1 altogether:1 assumes:1 k1:12 dubhashi:1 objective:4 question:1 already:1 quantity:1 realized:1 concentration:1 dependence:1 mirrokni:1 gradient:8 minx:3 simulated:1 polytope:9 argue:2 considers:1 trivial:5 index:1 illustration:1 ratio:4 minimizing:4 liang:1 difficult:1 fe:8 statement:3 robert:1 stated:2 negative:4 priebe:1 ebastien:1 unknown:1 bianchi:1 observation:1 finite:3 descent:5 january:1 rn:2 arbitrary:3 sharp:1 community:1 canada:1 namely:1 required:1 bernd:1 louvain:1 polytopes:2 saarbr:1 inaccuracy:2 xm:1 max:1 wainwright:1 power:1 solvable:1 inversely:1 imply:2 concludes:1 extract:1 naive:1 flaxman:1 isn:1 columbia:1 nice:1 acknowledgement:1 nicol:1 vancouver:1 relative:19 proportional:1 querying:2 foundation:1 agent:1 affine:1 xiao:1 foster:1 ibm:2 surprisingly:1 supported:1 free:4 catholique:1 bias:1 understand:1 fall:2 absolute:11 tauman:1 overcome:1 feedback:3 dimension:2 evaluating:1 xn:6 rich:2 concavity:2 valid:1 yudin:1 san:1 far:1 polynomially:2 transaction:1 vondr:3 emphasize:1 active:1 assumed:2 conclude:3 xi:44 alternatively:2 learn:2 robust:1 ca:1 career:1 obtaining:1 symmetry:3 necessarily:4 interpolating:1 meanwhile:1 domain:2 aistats:1 pk:3 main:2 abraham:1 noise:9 ramdas:2 x1:7 en:3 wiley:1 fails:2 sub:1 nonnegativity:1 wish:1 exponential:2 comput:2 mcmahan:1 vanish:2 theorem:18 british:1 erroneous:36 specific:1 appeal:1 r2:5 rakhlin:2 effectively:1 corr:1 ci:3 te:1 illustrates:1 conditioned:1 margin:1 nk:3 gap:1 depicted:2 bubeck:1 prevents:1 inapproximability:1 minimizer:2 satisfies:1 determines:1 acm:1 ma:1 succeed:1 viewed:3 exposition:1 stich:1 uniformly:3 averaging:2 select:1 alexander:2 wibisono:1 princeton:1 correlated:1 |
5,349 | 5,842 | Bandit Smooth Convex Optimization:
Improving the Bias-Variance Tradeoff
Ofer Dekel
Microsoft Research
Redmond, WA
[email protected]
Ronen Eldan
Weizmann Institute
Rehovot, Israel
[email protected]
Tomer Koren
Technion
Haifa, Israel
[email protected]
Abstract
Bandit convex optimization is one of the fundamental problems in the field of
online learning. The best algorithm for the general bandit convex optimizae 5/6 ), while the best known lower bound
tion problem guarantees a regret of O(T
1/2
is ?(T ). Many attempts have been made to bridge the huge gap between these
bounds. A particularly interesting special case of this problem assumes that the
loss functions are smooth. In this case, the best known algorithm guarantees a ree 2/3 ). We present an efficient algorithm for the bandit smooth convex
gret of O(T
e 5/8 ). Our result rules out
optimization problem that guarantees a regret of O(T
an ?(T 2/3 ) lower bound and takes a significant step towards the resolution of this
open problem.
1
Introduction
Bandit convex optimization [11, 5] is the following online learning problem. First, an adversary
privately chooses a sequence of bounded and convex loss functions f1 , . . . , fT defined over a convex domain K in d-dimensional Euclidean space. Then, a randomized decision maker iteratively
chooses a sequence of points x1 , . . . , xT , where each xt 2 K. On iteration t, after choosing the
point xt , the decision maker incurs a loss of ft (xt ) and receives bandit feedback: he observes the
value of his loss but he does not receive any other information about the function ft . The decision
maker uses the feedback to make better choices on subsequent rounds. His goal is to minimize regret, which is the difference between his loss and the loss incurred by the best fixed point in K. If
the regret grows sublinearly with T , it indicates that the decision maker?s performance improves as
the length of the sequence increases, and therefore we say that he is learning.
Finding an optimal algorithm for bandit convex optimization is an elusive open problem. The first
algorithm for this problem was presented in Flaxman et al. [11] and guarantees a regret of R(T ) =
e 5/6 ) for any sequence of loss functions (here and throughout, the asymptotic O
e notation hides
O(T
a polynomial dependence on the dimension d as well as logarithmic factors). Despite the ongoing
effort to improve on this rate, it remains the state of the art. On the other hand, Dani et al. [9]
proves that for any algorithm there exists a worst-case sequence of loss functions for which R(T ) =
?(T 1/2 ), and the gap between the upper and lower bounds is huge.
While no progress has been made on the general form of the problem, some progress has been made
in interesting special cases. Specifically, if the bounded convex loss functions are also assumed to be
e 3/4 ). If the loss funcLipschitz, Flaxman et al. [11] improves their regret guarantee to R(T ) = O(T
tions are smooth (namely, their gradients are Lipschitz), Saha and Tewari [15] present an algorithm
e 2/3 ). Similarly, if the loss functions are bounded, Lipschitz, and
with a guaranteed regret of O(T
e 2/3 ) [3]. If even stronger assumptions are made, an
strongly convex, the guaranteed regret is O(T
1/2
e
optimal regret rate of ?(T ) can be guaranteed; namely, when the loss functions are both smooth
1
and strongly-convex [12], when they are Lipschitz and linear [2], and when Lipschitz loss functions
are not generated adversarially but drawn i.i.d. from a fixed and unknown distribution [4].
Recently, Bubeck et al. [8] made progress that did not rely on additional assumptions, such as
Lipschitz, smoothness, or strong convexity, but instead considered the general problem in the onee 1/2 ) regret
dimensional case. That result proves that there exists an algorithm with optimal ?(T
for arbitrary univariate convex functions ft : [0, 1] 7! [0, 1]. Subsequently, and after the current
paper was written, Bubeck and Eldan [7] generalized this result to bandit convex optimization in
general Euclidean spaces (albeit requiring a Lipschitz assumption). However, the proofs in both
papers are non-constructive and do not give any hint on how to construct a concrete algorithm, nor
any indication that an efficient algorithm exists.
The current state of the bandit convex optimization problem has given rise to two competing conjectures. Some believe that there exists an efficient algorithm that matches the current lower bound.
Meanwhile, others are trying to prove larger lower bounds, in the spirit of [10], even under the assumption that the loss functions are smooth; if the ?(T 1/2 ) lower bound is loose, a natural guess
e 2/3 ).1 In this paper, we take an important step towards the
of the true regret rate would be ?(T
e 5/8 ) against
resolution of this problem by presenting an algorithm that guarantees a regret of ?(T
any sequence of bounded, convex, smooth loss functions. Compare this result to the previous statee 2/3 ) (noting that 2/3 = 0.666... and 5/8 = 0.625). This result rules out
of-the-art result of ?(T
the possibility of proving a lower bound of ?(T 2/3 ) with smooth functions. While there remains
a sizable gap with the T 1/2 lower bound, our result brings us closer to finding the elusive optimal
algorithm for bandit convex optimization, at least in the case of smooth functions.
Our algorithm is a variation on the algorithms presented in [11, 1, 15], with one new idea. These
algorithms all follow the same template: on each round, the algorithm computes an estimate of
rft (xt ), the gradient of the current loss function at the current point, by applying a random perturbation to xt . The sequence of gradient estimates is then plugged into a first-order online optimization
technique. The technical challenge in the analysis of these algorithms is to bound the bias and the
variance of these gradient estimates. Our idea is take a window of consecutive gradient estimates
and average them, producing a new gradient estimate with lower variance and higher bias. Overall,
the new bias-variance tradeoff works in our favor and allows us to improve the regret upper-bound.
Averaging uncorrelated random vectors to reduce variance is a well-known technique, but applying
it in the context of bandit convex optimization algorithm is easier said than done and requires us to
overcome a number of technical difficulties. For example, the gradient estimates in our window are
taken at different points, which introduces a new type of bias. Another example is the difficulty that
arrises when the sequence xs , . . . , xt travels adjacent to the boundary of the convex set K (imagine
transitioning from one face of a hypercube to another); the random perturbation applied to xs and
xt could be supported on orthogonal directions, yet we average the resulting gradient estimates
and expect to get a meaningful low-variance gradient estimate. While the basic idea is simple, our
non-trivial technical analysis is not, and may be of independent interest.
2
Preliminaries
We begin by defining smooth bandit convex optimization more formally, and recalling several basic
results from previous work on the problem (Flaxman et al. [11], Abernethy et al. [2], Saha and Tewari
[15]) that we use in our analysis. We also review the necessary background on self-concordant
barrier functions.
2.1
Smooth Bandit Convex Optimization
In the bandit convex optimization problem, an adversary first chooses a sequence of convex functions
f1 , . . . , fT : K 7! [0, 1], where K is a closed and convex domain in Rd . Then, on each round
t = 1, . . . , T , a randomized decision maker has to choose a point xt 2 K, and after committing
to his decision he incurs a loss of ft (xt ), and observes this loss as feedback. The P
decision maker?s
T
expected loss (where expectation is taken with respect to his random choices) is E[ t=1 ft (xt )] and
1
In fact, we are aware of at least two separate research groups that invested time trying to prove such
an ?(T 2/3 ) lower bound.
2
his regret is
"
R(T ) = E
T
X
ft (xt )
t=1
#
min
x2K
T
X
ft (x) .
t=1
Throughout, we use the notation Et [?] to indicate expectations conditioned on all randomness up to
and including round t 1.
We make the following assumptions. First, we assume that each of the functions f1 , . . . , fT is LLipschitz with respect to the Euclidean norm k?k2 , namely that |ft (x) ft (y)| ? Lkx yk2 for all
x, y 2 K. We further assume that ft is H-smooth with respect to k?k2 , which is to say that
8 x, y 2 K ,
krft (x) rft (y)k2 ? H kx yk2 .
In particular, this implies that ft is continuously differentiable over K. Finally, we assume that the
Euclidean diameter of the decision domain K is bounded by D > 0.
2.2
First Order Algorithms with Estimated Gradients
The online convex optimization problem becomes much easier in the full information setting, where
the decision maker?s feedback includes the vector gt = rft (xt ), the gradient (or subgradient) of
ft at the point xt . In this setting, the decision maker can use a first-order online algorithm, such as
the projected online gradient descent algorithm [17] or dual averaging [13] (sometimes known as
follow the regularized leader [16]), and guarantee a regret of O(T 1/2 ). The dual averaging approach
sets xt to be the solution to the following optimization problem,
( t 1
)
X
xt = arg min x ?
?s,t gs + R(x) ,
(1)
x2K
s=1
where R is a suitably chosen regularizer, and for all t = 1, . . . , T and s = 1, . . . , t we define a
non-negative weight ?s,t . Typically, all of the weights (?s,t ) are set to a constant value ?, called the
learning rate parameter.
However, since we are not in the full information setting and the decision maker does not observe gt ,
the algorithms mentioned above cannot be used directly. The key observation of Flaxman et al. [11],
which is later reused in all of the follow-up work, is that gt can be estimated by randomly perturbing
the point xt . Specifically, on round t, the algorithm chooses the point
yt = xt + At ut ,
(2)
instead of the original point xt , where > 0 is a parameter that controls the magnitude of the
perturbation, At is a positive definite d ? d matrix, and ut is drawn from the uniform distribution
on the unit sphere. In Flaxman et al. [11], At is simply set to the identity matrix whereas in Saha
and Tewari [15], At is more carefully tailored to the point xt (see details below). In any case, care
should be taken to ensure that the perturbed point yt remains in the convex set K. The observed
value ft (yt ) is then used to compute the gradient estimate
d
g?t = ft (yt )At 1 ut ,
(3)
and this estimate is fed to the first-order optimization algorithm. While g?t is not an unbiased estimator of rft (xt ), it is an unbiased estimator for the gradient of a different function, f?t , defined
by
f?t (x) = Et [ft (x + At v)] ,
(4)
where v 2 Rd is uniformly drawn from the unit ball. The function f?t (x) is a smoothed version of
ft , which plays a key role in our analysis and in many of the previous results on this topic. The main
property of f?t is summarized in the following lemma.
Lemma 1 (Flaxman et al. [11], Saha and Tewari [15, Lemma 5]). For any differentiable function
f : Rd 7! R, positive definite matrix A, x 2 Rd , and 2 (0, 1], define g? = (d/ )f (x+ Au)?A 1 u,
where u is uniform on the unit sphere. Also, let f?(x) = E[f (x + Av)] where v is uniform on the
unit ball. Then E[?
g ] = rf?(x).
The difference between rft (xt ) and rf?t (xt ) is the bias of the gradient estimator g?t . The analysis
in Flaxman et al. [11], Abernethy et al. [2], Saha and Tewari [15] focuses on bounding the bias and
the variance of g?t and their effect on the first-order optimization algorithm.
3
2.3
Self-Concordant Barriers
Following [2, 1, 15], our algorithm and analysis rely on the properties of self-concordant barrier
functions. Intuitively, a barrier is a function defined on the interior of the convex body K, which is
rather flat in most of the interior of K and explodes to 1 as we approach its boundary. Additionally, a self-concordant barrier has some technical properties that are useful in our setting. Before
giving the formal definition of a self-concordant barrier, we define the local norm defined by a
self-concordant barrier.
Definition 2 (Local Norm Induced by a Self-Concordant Barrier [14]). Let R : int(K) 7! R be a
self-concordant barrier.pThe local norm induced by R at the pointpx 2 int(K) is denoted by kzkx
and defined as kzkx = zT r2 R(x)z. Its dual norm is kzkx,? = zT (r2 R(x)) 1 z.
In words, the local norm at x is the Mahalanobis norm defined by the Hessian of R at the point x,
namely, r2 R(x). We now give a formal definition of a self-concordant barrier.
Definition 3 (Self-Concordant Barrier [14]). Let K ? Rd be a convex body. A function R :
int(K) 7! R is a #-self-concordant barrier for K if (i) R is three times continuously differentiable,
(ii) R(x) ! 1 as x ! @K, and (iii) for all x 2 int(K) and y 2 Rd , R satisfies
p
|r3 R(x)[y, y, y]| ? 2kyk3x and |rR(x) ? y| ? #kykx .
This definition is given for completeness, and is not directly used in our analysis. Instead, we rely
on some useful properties of self-concordant barriers. First and foremost, there exists a O(d)-selfconcordant barrier for any convex body [14, 6]. Efficiently-computable self-concordant barriers
are only known for specific classes of convex bodies, such as polytopes, yet we make the standard
assumption that we have an efficiently computable #-self-concordant barrier for the set K.
Another key feature of a self-concordant barrier is the set of Dikin ellipsoids that it defines. The
Dikin ellipsoid at x 2 int(K) is simply the unit ball with respect to the local norm at x. A key
feature of the Dikin ellipsoid is that it is entirely contained in the convex body K, for any x [see 14,
Theorem 2.1.1]. Another technical property of a self-concordant barriers is that its Hessian changes
slowly with respect to its local norm.
Theorem 4 (Nesterov and Nemirovskii [14, Theorem 2.1.1]). Let K be a convex body with selfconcordant barrier R. For any x 2 int(K) and z 2 Rd such that kzkx < 1, it holds that
(1
kzkx )2 r2 R(x)
r2 R(x + z)
(1
kzkx )
2
r2 R(x) .
While the self-concordant barrier explodes to infinity at the boundary of K, it is quite flat at points
that are far from the boundary. To make this statement formal, we define an operation that multiplicatively shrinks the set K toward the minimizer of R (called the analytic center of K). Let
y = arg min R(x) and assume without loss of generality that R(y) = 0. For any ? 2 (0, 1) let Ky,?
denote the set {y + (1 ?)(x y) : x 2 K}. The next theorem states that the barrier is flat in Ky,?
and explodes to 1 in the thin shell between Ky,? and K.
Theorem 5 (Nesterov and Nemirovskii [14, Propositions 2.3.2-3]). Let K be a convex body with
#-self-concordant barrier R, let y = arg min R(x), and assume that R(y) = 0. For any ? 2 (0, 1],
it holds that
1
8 x 2 Ky,?
R(x) ? # log .
?
Our assumptions on the loss functions, as the Lipschitz assumption or the smoothness assumption,
are stated in terms of the standard Euclidean norm (which we denote by k ? k2 ). Therefore, we will
need to relate the Euclidean norm to the local norms defined by the self-concordant barrier. This is
accomplished by the following lemma (whose proof appears in the supplementary material).
Lemma 6. Let K be a convex body with self-concordant barrier R and let D be the (Euclidean)
diameter of K. For any x 2 K, it holds that D 1 kzkx,? ? kzk2 ? Dkzkx for all z 2 Rd .
2.4
Self-Concordant Barrier as a Regularizer
Looking back at the dual averaging strategy defined in Eq. (1), we can now fill in some of the details
that were left unspecified: [1, 15] set the regularization R in Eq. (1) to be a #-self-concordant barrier
for the set K. We use the following useful lemma from Abernethy and Rakhlin [1] in our analysis.
4
Algorithm 1: Bandit Smooth Convex Optimization
Parameters: perturbation parameter 2 (0, 1], dual averaging weights (?s,t ), self-concordant
barrier R : int(K) 7! R
Initialize: y1 2 K arbitrarily
for t = 1, . . . , T
At
(r2 R(xt )) 1/2
draw ut uniformly from the unit sphere
yt
xt + At ut
choose yt , receive feedback ft (yt )
g?t
(d/ )ft (yt ) ? At 1 ut
Pt
xt+1
arg minx2K {x ? s=1 ?s,t g?s + R(x)}
Lemma 7 (Abernethy and Rakhlin [1]). Let K be a convex body with #-self-concordant barrier
R, let g1 , . . . , gT be vectors in Rd , and let ? > 0 be such that ?kgt kxt ,? ? 14 for all t. Define
Pt 1
xt = arg minx2K {x ? s=1 ?gs + R(x)}. Then,
(i) for all t it holds that kxt xt+1 kxt ? 2?kgt kxt ,? ;
PT
PT
(ii) for any x? 2 K it holds that t=1 gt ? (xt x? ) ? ?1 R(x? ) + 2? t=1 kgt k2xt ,? .
Algorithms for bandit convex optimization that use a self-concordant regularizer also use the same
self-concordant barrier to obtain gradient estimates. Namely, these algorithms perturb the dual averaging solution xt as in Eq. (3), with the perturbation matrix At set to (r2 R(xt )) 1/2 , the root of
the inverse Hessian of R at the point xt . In other words, the distribution of yt is supported on the
Dikin ellipsoid centered at xt , scaled by . Since 2 (0, 1], this form of perturbation guarantees
that yt 2 K. Moreover, if yt is generated in this way and used to construct the gradient estimator
g?t , then the local norm of g?t is bounded, as specified in the following lemma.
Lemma 8 (Saha and Tewari [15, Lemma 5]). Let K ? Rd be a convex body with self-concordant
barrier R. For any differentiable function f : K 7! [0, 1], 2 (0, 1], and x 2 int(K), define
g? = (d/ )f (y) ? A 1 u, where A = (r2 R(x)) 1/2 , y = x + Au, and u is drawn uniformly from
the unit sphere. Then k?
g kx ? d/ .
3
Main Result
Our algorithm for the bandit smooth convex optimization problem is a variant of the algorithm in
Saha and Tewari [15], and appears in Algorithm 1. Following Abernethy and Rakhlin [1], Saha and
Tewari [15], we use a self-concordant function as the dual averaging regularizer and we use its Dikin
ellipsoids to perturb the points xt . The difference between our algorithm and previous ones is the
introduction of dual averaging weights (?s,t ), for t = 1, . . . , T and s = 1, . . . , t, which allow us to
vary the weight of each gradient in the dual averaging objective function.
In addition to the parameters , ?, and ?, we introduce a new buffering parameter k, which takes
non-negative integer values. We set the dual averaging weights in Algorithm 1 to be
(
?
if s ? t k
?s,t =
(5)
t s+1
?
if s > t k ,
k+1
where ? > 0 is a global learning rate parameter. This choice of (?s,t ) effectively decreases the
influence of the feedback received on the most recent k rounds. If k = 0, all of the (?s,t ) become
equal to ? and Algorithm 1 reduces to the algorithm in Saha and Tewari [15]. The surprising result
is that there exists a different setting of k > 0 that gives a better regret bound.
We introduce a slight abuse of notation, which helps us simplify the presentation of our regret bound.
We will eventually achieve the desired regret bound by setting the parameters ?, , and k to be some
functions of T . Therefore, from now on, we treat the notation ?, , and k as an abbreviation for the
functional forms ?(T ), (T ), and k(T ) respectively. The benefit is that we can now use asymptotic
notation (e.g., O(?k)) to sweep meaningless low-order terms under the rug.
5
We prove the following regret bound for this algorithm.
Theorem 9. Let f1 , . . . , fT be a sequence of loss functions where each ft : K 7! [0, 1] is differentiable, convex, H-smooth and L-Lipschitz, and where K ? Rd is a convex body of diameter
D > 0 with #-self-concordant barrier R. For any , ? 2 (0, 1] and k 2 {0, 1, . . . , T } assume that
Algorithm 1 is run with these parameters and with the weights defined in Eq. (5) (using k and ?) to
generate the sequences x1 , . . . , xT and y1 , . . . , yT . If 12k?d ? and for any ? 2 (0, 1) it holds that
p
?
?
# log 1?
64d2 ?T
12(HD2 + DL)d? kT
T?
R(T ) ? HD2 2 T +
+ 2
+
+O
+ T ?k .
?
(k + 1)
Specifically, if wepset = d1/4 T
that R(T ) = O( d T 5/8 log T ).
3/16
,? =d
1/2
T
5/8
, k = d1/2 T 1/8 , and ? = T
100
, we get
e 2/3 ) bound in Saha and Tewari [15] up
Note that if we set k = 0 in our theorem, we recover the O(T
to a small numerical constant (namely, the dependence on L, H, D, #, d, and T is the same).
4
Analysis
PT
Using the notation x? = arg minx2K t=1 ft (x), the decision maker?s regret becomes R(T ) =
? PT
?
E
ft (x? ) . Following Flaxman et al. [11], Saha and Tewari [15], we rewrite the
t=1 ft (yt )
regret as
" T
#
" T
#
" T
#
X
X
X
?
?
?
?
?
?
?
R(T ) = E
ft (yt ) f t (xt ) + E
f t (xt ) f t (x ) + E
f t (x ) ft (x ) . (6)
|
t=1
{z
a
}
|
t=1
{z
b
}
t=1
|
{z
c
}
This decomposition essentially adds a layer of hallucination to the analysis: we pretend that the
loss functions are f?1 , . . . , f?T instead of f1 , . . . , fT and we also pretend that we chose the points
x1 , . . . , xT rather than y1 , . . . , yT . We then analyze the regret in this pretend world (this regret is
the expression in Eq. (6b)). Finally, we tie our analysis back to the real world by bounding the
difference between that which we analyzed and the regret of the actual problem (this difference is
the sum of Eq. (6a) and Eq. (6c)). The advantage of our pretend world over the real world is that we
have unbiased gradient estimates g?1 , . . . , g?T that can plug into the dual averaging algorithm.
The algorithm in Saha and Tewari [15] sets all of the dual averaging weights (?s,t ) equal to the
constant learning rate ? > 0. It decomposes the regret as in Eq. (6) and their main technical result is
the following bound for the individual terms:
Theorem 10 (Saha and Tewari [15]). Let f1 , . . . , fT be a sequence of loss functions where each
ft : K 7! [0, 1] is differentiable, convex, H-smooth and L-Lipschitz, and where K ? Rd is a convex
body of diameter D > 0 and #-self-concordant barrier R. Assume that Algorithm 1 is run with
perturbation parameter 2 (0, 1] and generates the sequences x1 , . . . , xT and y1 , . . . , yT . Then
for any ? 2 (0, 1) it holds that (6a) + (6c) ? (HD2 2 + ?L)T . If, additionally, the dual averaging
weights (?s,t ) are all set to the constant learning rate ? then (6b) ? # log(1/?)? 1 + d2 2 ?T .
e 2/3 ) by choosing
The analysis in Saha and Tewari [15] goes on to obtain a regret bound of O(T
optimal values for the parameters ?, and ? and plugging those values into Theorem 10. Our
analysis uses the first part of Theorem 10 to bound (6a) + (6c) and shows that our careful choice of
the dual averaging weights (?s,t ) results in the following improved bound on (6b).
We begin our analysis by defining a moving average of the functions f?1 , . . . , f?T , as follows:
k
8 t = 1, . . . , T ,
f?t (x) =
1 X?
f t i (x) ,
k + 1 i=0
(7)
where, for soundness, we let f?s ? 0 for s ? 0. Also, define a moving average of gradient estimates:
k
8 t = 1, . . . , T ,
g?t =
6
1 X
g?t
k + 1 i=0
i
,
again, with g?s = 0 for s ? 0. In Section 4 below, we show how each g?t can be used as a biased
estimate of rf?t (xt ). Also note that the choice of the dual averaging weights (?s,t ) in Eq. (5) is such
Pt
Pt
that s=1 ?s,t g?s = ? s=1 g?s for all t. Therefore, the last step in Algorithm 1 basically performs
dual averaging with the gradient estimates g?1 , . . . , g?T uniformly weighted by ?.
We use the functions f?t to rewrite Eq. (6b) as
" T
#
" T
#
" T
#
X
X
X
E
f?t (xt ) f?t (xt ) + E
f?t (xt ) f?t (x? ) + E
f?t (x? ) f?t (x? ) .
(8)
|
t=1
{z
a
}
|
t=1
{z
b
}
|
t=1
{z
c
}
This decomposition essentially adds yet another layer of hallucination to the analysis: we pretend
that the loss functions are f?1 , . . . , f?T instead of f?1 , . . . , f?T (which are themselves pretend loss
functions, as described above). Eq. (8b) is the regret in our new pretend scenario, while Eq. (8a) +
Eq. (8c) is the difference between this regret and the regret in Eq. (6b).
The following lemma bounds each of the terms in Eq. (8) separately, and summarizes the main
technical contribution of our paper.
Lemma 11. Under the conditions of Theorem 9, for any ? 2 (0, 1) it holds that (8c) ? 0 ,
p
12DLd? kT
(8a) ?
+ O (1 + ?)k , and
p
?
?
1
# log ?
64d2 ?T
12HD2 d? kT
T?
(8b) ?
+ 2
+
+ O
+ T ?k .
?
(k + 1)
4.1
Proof Sketch of Lemma 11
As mentioned above, the basic intuition of our technique is quite simple: average the gradients to
decrease their variance. Yet, applying this idea in the analysis is tricky. We begin by describing the
main source of difficulty in proving Lemma 11.
Recall that our strategy is to pretend that the loss functions are f?1 , . . . , f?T and to use the random
vector g?t as a biased estimator of rf?t (xt ). Naturally, one of our goals is to show that this bias
is small. Recall that each g?s is an unbiased estimator of rf?s (xs ) (conditioned on the history up
to round t). Specifically, note that each vector in the sequence g?t k , . . . , g?t is a gradient estimate
at a different point. Yet, we average these vectors and claim that they accurately estimate rf?t at
the current point, xt . Luckily, f?t is H-smooth, so rf?t i (xt i ) should not be much different than
rf?t i (xt ), provided that we show that xt i and xt are close to each other in Euclidean distance.
To show that xt i and xt are close, we exploit the stability of the dual averaging algorithm. Particularly, the first claim in Lemma 7 states that kxs xs+1 kxs is controlled by k?
g s kxs ,? for all s, so now
we need to show that k?
g s kxs ,? is small. However, g?t is the average of k + 1 gradient estimates taken
at different points; each g?t i is designed to have a small norm with respect to its own local norm
k ? kxt i ,? ; for all we know, it may be very large with respect to the current local norm k ? kxt ,? . So
now we need to show that the local norms at xt i and xt are similar. We could prove this if we knew
that xt i and xt are close to each other?which is exactly what we set out to prove in the beginning.
This chicken-and-egg situation complicates our analysis considerably.
Another non-trivial component of our proof is the variance reduction analysis. The motivation
to average g?t k , . . . , g?t is to generate new gradient estimates with a smaller variance. While the
random vectors g?1 , . . . , g?T are not independent, we show that their randomness is uncorrelated.
Therefore, the variance of g?t is k + 1 times smaller than the variance of each g?t . However, to make
this argument formal, we again require the local norms at xt i and xt to be similar.
To make things more complicated, there is the recurring need to move back and forth between local
norms and the Euclidean norm, since the latter is used in the definition of Lipschitz and smoothness.
All of this has to do with bounding Eq. (8b), the regret with respect to the pretend loss functions
f?1 , . . . , f?T - an additional bias term appears in the analysis of Eq. (8a).
We conclude the paper by stating our main lemmas and sketching the proof Lemma 11. The full
technical proofs are all deferred to the supplementary material and replaced with some high level
commentary.
7
To break the chicken-and-egg situation described above, we begin with a crude bound on k?
g t kxt ,? ,
which does not benefit at all from the averaging operation. We simultaneously prove that the local
norms at xt i and xt are similar.
Lemma 12. If the parameters k, ?, and are chosen such that 12k?d ?
(i) k?
g t kxt ,? ? 2d/ ;
(ii) for any 0 ? i ? k such that t
1 it holds that 12 kzkxt
i
i ,?
then for all t,
? kzkxt ,? ? 2kzkxt
i ,?
.
Lemma 12 itself has a chicken-and-egg aspect, which we resolve using an inductive proof technique.
Armed with the knowledge
? that2 the? local norms at xt i and xt are similar, we go on to prove the
more refined bound on Et k?
g t kxt ,? , which does benefit from averaging.
Lemma 13. If the parameters k, ?, and are chosen such that 12k?d ?
?
?
Et k?
g t k2xt ,? ? 2D2 L2 +
then
32d2
.
2 (k + 1)
The proof constructs a martingale difference sequence and uses the fact that its increments are uncorrelated. Compare the above to Lemma 8, which proves that k?
g t i k2xt i ,? ? d2 / 2 and note the
extra k + 1 in our denominator?all of our hard work was aimed at getting this factor.
Next, we set out to bound the expected Euclidean distance between xt i and xt . This bound is later
needed to exploit the L-Lipschitz and H-smooth assumptions. The crude bound on k?
g s kxs ,? from
Lemma 12 is enough to satisfy the conditions of Lemma 7, which then tells us that E[kxs xs+1 kxs ]
is controlled by E[k?
g s kxs ,? ]. The latter enjoys the improved bound due to Lemma 13. Integrating
the resulting bound over time, we obtain the following lemma.
Lemma 14. If the parameters k, ?, and
0 ? i ? k such that t i 1, we have
E[kxt
are chosen such that 12k?d ?
x t k2 ] ?
i
p
12Dd? k
then for all t and any
+ O(?k) .
Notice that xt i and xt may be k rounds apart, but the bound scales only with
the work of the averaging technique.
p
k. Again, this is
Finally, we have all the tools in place to prove our main result, Lemma 11.
Pk ?
1
Proof sketch. The first term, Eq. (8a), is bounded by rewriting f?t (xt ) = k+1
i=0 f t i (xt ) and
then proving that f?t i (xt ) is not very far from f?t (xt ). This follows from the fact that f?t is LLipschitz and from Lemma 14. To bound the second term, Eq. (8b), we use the convexity of each f?t
to write
" T
#
" T
#
X
X
E
f?t (xt ) f?t (x? ) ? E
rf?t (xt ) ? (xt x? ) .
t=1
t=1
We relate the right-hand side above to
E
"
T
X
t=1
g?t ? (xt
?
x )
#
,
using the fact that f?t is H-smooth and again using Lemma 14. Then, we upper bound the above
using Lemma 7, Theorem 5, and Lemma 13.
?
Acknowledgments
We thank Jian Ding for several critical contributions during the early stages of this research. Parts
of this work were done while the second and third authors were at Microsoft Research, the support
of which is gratefully acknowledged.
8
References
[1] J. Abernethy and A. Rakhlin. Beating the adaptive bandit with high probability. In Information
Theory and Applications Workshop, 2009, pages 280?289. IEEE, 2009.
[2] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for
bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory
(COLT), 2008.
[3] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with
multi-point bandit feedback. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[4] A. Agarwal, D. P. Foster, D. Hsu, S. M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. In Advances in Neural Information Processing Systems (NIPS),
2011.
[5] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed
bandit problems. Foundations and Trends in Machine Learning, 5(1):1?122, 2012.
[6] S. Bubeck and R. Eldan. The entropic barrier: a simple and optimal universal self-concordant
barrier. arXiv preprint arXiv:1412.1587, 2015.
[7] S. Bubeck and R. Eldan. Multi-scale exploration of convex functions and bandit convex optimization. arXiv preprint arXiv:1507.06580, 2015.
p
[8] S. Bubeck, O. Dekel, T. Koren, and Y. Peres. Bandit convex optimization: T regret in one
dimension. In In Proceedings of the 28st Annual Conference on Learning Theory (COLT),
2015.
[9] V. Dani, T. Hayes, and S. M. Kakade. The price of bandit information for online optimization.
In Advances in Neural Information Processing Systems (NIPS), 2008.
[10] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T 2/3 regret. In
Proceedings of the 46th Annual Symposium on the Theory of Computing, 2014.
[11] A. D. Flaxman, A. Kalai, and H. B. McMahan. Online convex optimization in the bandit
setting: gradient descent without a gradient. In Proceedings of the sixteenth annual ACMSIAM symposium on Discrete algorithms, pages 385?394. Society for Industrial and Applied
Mathematics, 2005.
[12] E. Hazan and K. Levy. Bandit convex optimization: Towards tight bounds. In Advances in
Neural Information Processing Systems (NIPS), 2014.
[13] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical programming, 120(1):221?259, 2009.
[14] Y. Nesterov and A. Nemirovskii. Interior-point polynomial algorithms in convex programming,
volume 13. SIAM, 1994.
[15] A. Saha and A. Tewari. Improved regret guarantees for online smooth convex optimization with
bandit feedback. In International Conference on Artificial Intelligence and Statistics (AISTAT),
pages 636?642, 2011.
[16] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends
in Machine Learning, 4(2):107?194, 2011.
[17] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In
Proceedings of the 20th International Conference on Machine Learning (ICML?03), pages
928?936, 2003.
9
| 5842 |@word version:1 polynomial:2 stronger:1 norm:22 dekel:4 suitably:1 open:2 reused:1 d2:6 that2:1 decomposition:2 incurs:2 reduction:1 current:7 com:2 dikin:5 surprising:1 gmail:1 yet:5 written:1 subsequent:1 numerical:1 analytic:1 designed:1 intelligence:1 guess:1 beginning:1 completeness:1 mathematical:1 become:1 symposium:2 prove:8 acmsiam:1 introduce:2 expected:2 sublinearly:1 themselves:1 nor:1 multi:3 resolve:1 actual:1 armed:2 window:2 becomes:2 begin:4 provided:1 bounded:7 notation:6 moreover:1 israel:2 what:1 unspecified:1 finding:2 guarantee:9 tie:1 exactly:1 k2:5 scaled:1 tricky:1 control:1 unit:7 producing:1 positive:2 before:1 local:15 treat:1 switching:1 despite:1 minx2k:3 ree:1 abuse:1 chose:1 au:2 weizmann:1 acknowledgment:1 regret:34 definite:2 llipschitz:2 universal:1 word:2 integrating:1 get:2 cannot:1 interior:3 close:3 context:1 applying:3 influence:1 zinkevich:1 yt:16 center:1 elusive:2 go:2 convex:54 resolution:2 rule:2 estimator:6 fill:1 his:6 proving:3 stability:1 variation:1 increment:1 imagine:1 play:1 pt:8 programming:3 us:3 trend:2 particularly:2 observed:1 ft:31 role:1 preprint:2 ding:2 worst:1 decrease:2 observes:2 mentioned:2 intuition:1 convexity:2 nesterov:4 rewrite:2 tight:1 regularizer:4 committing:1 artificial:1 tell:1 choosing:2 refined:1 abernethy:7 shalev:1 quite:2 whose:1 larger:1 supplementary:2 say:2 favor:1 soundness:1 statistic:1 g1:1 invested:1 itself:1 online:13 sequence:15 indication:1 differentiable:6 rr:1 kxt:10 advantage:1 pthe:1 achieve:1 sixteenth:1 forth:1 ky:4 getting:1 aistat:1 tions:1 help:1 ac:1 stating:1 received:1 progress:3 eq:19 strong:1 sizable:1 indicate:1 implies:1 direction:1 kgt:3 subsequently:1 luckily:1 centered:1 stochastic:2 exploration:1 material:2 require:1 f1:6 preliminary:1 proposition:1 hold:9 considered:1 claim:2 vary:1 consecutive:1 early:1 entropic:1 travel:1 maker:10 bridge:1 tool:1 weighted:1 dani:2 rather:2 kalai:1 focus:1 indicates:1 industrial:1 dld:1 typically:1 bandit:29 overall:1 dual:18 arg:6 colt:3 denoted:1 art:2 special:2 rft:5 initialize:1 field:1 construct:3 aware:1 equal:2 adversarially:1 buffering:1 icml:1 thin:1 others:1 simplify:1 hint:1 saha:15 randomly:1 simultaneously:1 individual:1 replaced:1 microsoft:3 attempt:1 recalling:1 huge:2 interest:1 possibility:1 hallucination:2 deferred:1 introduces:1 analyzed:1 primal:1 kt:3 closer:1 necessary:1 orthogonal:1 euclidean:10 plugged:1 haifa:1 desired:1 complicates:1 cost:1 uniform:3 technion:2 perturbed:1 considerably:1 chooses:4 st:2 fundamental:1 randomized:2 siam:1 international:2 continuously:2 concrete:1 sketching:1 again:4 cesa:1 choose:2 slowly:1 summarized:1 includes:1 int:8 satisfy:1 kzk2:1 tion:1 later:2 root:1 closed:1 break:1 analyze:1 hazan:2 recover:1 complicated:1 gret:1 contribution:2 minimize:1 il:1 variance:12 efficiently:2 ronen:1 accurately:1 basically:1 randomness:2 history:1 definition:6 infinitesimal:1 against:1 naturally:1 proof:9 hsu:1 recall:2 knowledge:1 ut:6 improves:2 carefully:1 back:3 appears:3 higher:1 follow:3 improved:3 done:2 shrink:1 strongly:2 generality:1 stage:1 hand:2 receives:1 sketch:2 defines:1 brings:1 believe:1 grows:1 effect:1 requiring:1 true:1 unbiased:4 inductive:1 regularization:1 iteratively:1 round:8 adjacent:1 during:1 self:31 mahalanobis:1 generalized:2 trying:2 presenting:1 performs:1 recently:1 functional:1 perturbing:1 volume:1 he:4 slight:1 significant:1 smoothness:3 rd:13 mathematics:1 similarly:1 gratefully:1 moving:2 yk2:2 lkx:1 gt:5 add:2 own:1 hide:1 recent:1 apart:1 scenario:1 
arbitrarily:1 accomplished:1 additional:2 care:1 rug:1 commentary:1 ii:3 full:3 reduces:1 smooth:20 technical:8 match:1 plug:1 sphere:4 plugging:1 controlled:2 variant:1 basic:3 denominator:1 essentially:2 expectation:2 foremost:1 arxiv:4 iteration:1 sometimes:1 tailored:1 agarwal:2 chicken:3 receive:2 background:1 whereas:1 addition:1 separately:1 source:1 jian:1 biased:2 meaningless:1 extra:1 explodes:3 ascent:1 induced:2 thing:1 spirit:1 integer:1 noting:1 iii:1 enough:1 nonstochastic:1 competing:2 reduce:1 idea:4 tradeoff:2 computable:2 expression:1 effort:1 hessian:3 useful:3 tewari:15 aimed:1 dark:1 diameter:4 generate:2 notice:1 estimated:2 rehovot:1 write:1 discrete:1 group:1 key:4 acknowledged:1 drawn:4 rewriting:1 subgradient:2 sum:1 run:2 inverse:1 place:1 throughout:2 draw:1 decision:12 summarizes:1 x2k:2 entirely:1 bound:33 layer:2 guaranteed:3 koren:3 g:2 annual:5 infinity:1 flat:3 generates:1 aspect:1 argument:1 min:4 selfconcordant:2 conjecture:1 ball:3 smaller:2 kakade:2 intuitively:1 taken:4 remains:3 describing:1 loose:1 r3:1 eventually:1 needed:1 know:1 fed:1 ofer:1 operation:2 hd2:4 observe:1 original:1 assumes:1 ensure:1 exploit:2 giving:1 pretend:9 perturb:2 prof:3 hypercube:1 society:1 sweep:1 objective:1 move:1 strategy:2 dependence:2 said:1 gradient:28 distance:2 separate:1 thank:1 topic:1 trivial:2 toward:1 length:1 ellipsoid:5 multiplicatively:1 statement:1 relate:2 negative:2 rise:1 stated:1 zt:2 unknown:1 bianchi:1 upper:3 av:1 observation:1 descent:2 defining:2 situation:2 looking:1 peres:2 nemirovskii:3 y1:4 perturbation:7 smoothed:1 arbitrary:1 tomer:1 namely:6 specified:1 polytopes:1 nip:3 redmond:1 adversary:2 below:2 recurring:1 beating:1 challenge:1 rf:9 including:1 kykx:1 critical:1 difficulty:3 rely:3 natural:1 regularized:1 improve:2 flaxman:9 review:1 l2:1 asymptotic:2 k2xt:3 loss:28 expect:1 interesting:2 foundation:2 incurred:1 xiao:1 dd:1 foster:1 uncorrelated:3 eldan:4 supported:2 last:1 enjoys:1 bias:9 formal:4 allow:1 side:1 institute:1 template:1 face:1 barrier:34 benefit:3 feedback:9 dimension:2 overcome:1 boundary:4 world:4 computes:1 author:1 made:5 adaptive:1 projected:1 far:2 global:1 hayes:1 assumed:1 conclude:1 knew:1 leader:1 shwartz:1 decomposes:1 additionally:2 improving:1 meanwhile:1 domain:3 did:1 pk:1 main:7 privately:1 oferd:1 bounding:3 motivation:1 x1:4 body:12 egg:3 martingale:1 mcmahan:1 crude:2 levy:1 third:1 theorem:12 transitioning:1 xt:73 specific:1 r2:9 x:5 rakhlin:6 dl:1 exists:6 workshop:1 albeit:1 effectively:1 magnitude:1 conditioned:2 kx:2 gap:3 easier:2 logarithmic:1 simply:2 univariate:1 bubeck:6 contained:1 tomerk:1 minimizer:1 satisfies:1 shell:1 abbreviation:1 goal:2 identity:1 presentation:1 careful:1 towards:3 lipschitz:11 price:1 change:1 hard:1 specifically:4 uniformly:4 averaging:20 lemma:31 called:2 kxs:8 concordant:31 meaningful:1 formally:1 support:1 latter:2 ongoing:1 constructive:1 d1:2 |
5,350 | 5,843 | Accelerated Mirror Descent
in Continuous and Discrete Time
Walid Krichene
UC Berkeley
Alexandre M. Bayen
UC Berkeley
Peter L. Bartlett
UC Berkeley and QUT
[email protected]
[email protected]
[email protected]
Abstract
We study accelerated mirror descent dynamics in continuous and discrete time.
Combining the original continuous-time motivation of mirror descent with a recent ODE interpretation of Nesterov?s accelerated method, we propose a family of
continuous-time descent dynamics for convex functions with Lipschitz gradients,
such that the solution trajectories converge to the optimum at a O(1/t2 ) rate. We
then show that a large family of first-order accelerated methods can be obtained as
a discretization of the ODE, and these methods converge at a O(1/k 2 ) rate. This
connection between accelerated mirror descent and the ODE provides an intuitive
approach to the design and analysis of accelerated first-order algorithms.
1
Introduction
We consider a convex optimization problem, minimizex?X f (x), where X ? Rn is convex and
closed, f is a C 1 convex function, and ?f is assumed to be Lf -Lipschitz. Let f ? be the minimum
of f on X . Many convex optimization methods can be interpreted as the discretization of an ordinary
differential equation, the solutions of which are guaranteed to converge to the set of minimizers. Perhaps the simplest such method is gradient descent, given by the iteration x(k+1) = x(k) ?s?f (x(k) )
?
for some step size s, which can be interpreted as the discretization of the ODE X(t)
= ??f (X(t)),
with discretization step s. The well-established theory of ordinary differential equations can provide
guidance in the design and analysis of optimization algorithms, and has been used for unconstrained
optimization [8, 7, 13], constrained optimization [27] and stochastic optimization [25]. In particular,
proving convergence of the solution trajectories of an ODE can often be achieved using simple and
elegant Lyapunov arguments. The ODE can then be carefully discretized to obtain an optimization algorithm for which the convergence rate can be analyzed by using an analogous Lyapunov
argument in discrete time.
In this article, we focus on two families of first-order methods: Nesterov?s accelerated method [22],
and Nemirovski?s mirror descent method [19]. First-order methods have become increasingly important for large-scale optimization problems that arise in machine learning applications. Nesterov?s
accelerated method [22] has been applied to many problems and extended in a number of ways, see
for example [23, 20, 21, 4]. The mirror descent method also provides an important generalization of the gradient descent method to non-Euclidean geometries, as discussed in [19, 3], and has
many applications in convex optimization [6, 5, 12, 15], as well as online learning [9, 11]. An intuitive understanding of these methods is of particular importance for the design and analysis of
new algorithms. Although Nesterov?s method has been notoriously hard to explain intuitively [14],
progress has been made recently: in [28], Su et al. give an ODE interpretation of Nesterov?s method.
However, this interpretation is restricted to the original method [22], and does not apply to its extensions to non-Euclidean geometries. In [1], Allen-Zhu and Orecchia give another interpretation
of Nesterov?s method, as performing, at each iteration, a convex combination of a mirror step and a
gradient step. Although it covers a broader family of algorithms (including non-Euclidean geometries), this interpretation still requires an involved analysis, and lacks the simplicity and elegance of
1
ODEs. We provide a new interpretation which has the benefits of both approaches: we show that a
broad family of accelerated methods (which includes those studied in [28] and [1]) can be obtained
as a discretization of a simple ODE, which converges at a O(1/t2 ) rate. This provides a unified
interpretation, which could potentially simplify the design and analysis of first-order accelerated
methods.
The continuous-time interpretation [28] of Nesterov?s method and the continuous-time motivation
of mirror descent [19] both rely on a Lyapunov argument. They are reviewed in Section 2. By
combining these ideas, we propose, in Section 3, a candidate Lyapunov function V (X(t), Z(t), t)
that depends on two state variables: X(t), which evolves in the primal space E = Rn , and Z(t),
which evolves in the dual space E ? , and we design coupled dynamics of (X, Z) to guarantee that
d
dt V (X(t), Z(t), t) ? 0. Such a function is said to be a Lyapunov function, in reference to [18];
see also [16]. This leads to a new family of ODE systems, given in Equation (5). We prove the
existence and uniqueness of the solution to (5) in Theorem 1. Then we prove in Thereom 2, using
the Lyapunov function V , that the solution trajectories are such that f (X(t)) ? f ? = O(1/t2 ). In
Section 4, we give a discretization of these continuous-time dynamics, and obtain a family of accelerated mirror descent methods, for which we prove the same O(1/k 2 ) convergence rate (Theorem 3)
using a Lyapunov argument analogous to (though more involved than) the continuous-time case. We
give, as an example, a new accelerated method on the simplex, which can be viewed as performing,
at each step, a convex combination of two entropic projections with different step sizes. This ODE
interpretation of accelerated mirror descent gives new insights and allows us to extend recent results
such as the adaptive restarting heuristics proposed by O?Donoghue and Cand`es in [24], which are
known to empirically improve the convergence rate. We test these methods on numerical examples
in Section 5 and comment on their performance.
2
ODE interpretations of Nemirovski?s mirror descent method and
Nesterov?s accelerated method
Proving convergence of the solution trajectories of an ODE often involves a Lyapunov argument. For
?
example, to prove convergence of the solutions to the gradient descent ODE X(t)
= ??f (X(t)),
1
? 2
consider the Lyapunov function V (X(t)) = 2 kX(t) ? x k for some minimizer x? . Then the time
derivative of V (X(t)) is given by
D
E
d
?
V (X(t)) = X(t),
X(t) ? x? = h??f (X(t)), X(t) ? x? i ? ?(f (X(t)) ? f ? ),
dt
where the last inequality is by convexity of f .Integrating, we
? V (x0 ) ? tf ? ?
have V (X(t))
Rt
R
R
t
t
f (X(? ))d? , thus by Jensen?s inequality, f 1t 0 X(? )d? ? f ? ? 1t 0 f (X(? ))d? ? f ? ?
0
R
V (x0 )
1 t
?
t , which proves that f t 0 X(? )d? converges to f at a O(1/t) rate.
2.1
Mirror descent ODE
The previous argument was extended by Nemirovski and Yudin in [19] to a family of methods
called mirror descent. The idea is to start from a non-negative function V , then to design dynamics
for which V is a Lyapunov function. Nemirovski and Yudin argue that one can replace the Lyapunov
function V (X(t)) = 21 kX(t) ? x? k2 by a function on the dual space, V (Z(t)) = D?? (Z(t), z ? ),
where Z(t) ? E ? is a dual variable for which we will design the dynamics (z ? is the value of Z at
equilibrium), and the corresponding trajectory in the primal space is X(t) = ?? ? (Z(t)). Here ? ? is
a convex function defined on E ? , such that ?? ? maps E ? to X , and D?? (Z(t), z ? ) is the Bregman
divergence associated with ? ? , defined as D?? (z, y) = ? ? (z) ? ? ? (y) ? h?? ? (y), z ? yi. The
?
function ? ? is said to be `-strongly convex w.r.t. a reference norm k ? k? if D?
(z, y) ? 2` kz ? yk2?
L
for all y, z, and it is said to be L-smooth w.r.t. k ? k? if D?? (z, y) ? 2 kz ? yk2? . For a review of
properties of Bregman divergences, see Chapter 11.2 in [11], or Appendix A in [2].
By definition of the Bregman divergence, we have
d
d
d
V (Z(t)) = D?? (Z(t), z ? ) =
(? ? (Z(t)) ? ? ? (z ? ) ? h?? ? (z ? ), Z(t) ? z ? i)
dt
dt
dt
D
E D
E
?
?
= ?? ? (Z(t)) ? ?? ? (z ? ), Z(t)
= X(t) ? x? , Z(t)
.
2
Therefore, if the dual variable Z obeys the dynamics Z? = ??f (X), then
d
V (Z(t)) = ? h?f (X(t)), X(t) ? x? i ? ?(f (X(t)) ? f ? )
dt
and
the same
by
argument as in the gradient descent ODE, V is a Lyapunov function and
R
1 t
f t 0 X(? )d? ? f ? converges to 0 at a O(1/t) rate. The mirror descent ODE system can be
summarized by
?
?
?
? X = ?? (Z)
?
Z = ??f (X)
?
? X(0) = x , Z(0) = z with ?? ? (z ) = x
0
0
0
0
(1)
Note that since ?? ? maps into X , X(t) = ?? ? (Z(t)) remains in X . Finally, the unconstrained
gradient descent ODE can be obtained as a special case of the mirror descent ODE (1) by taking
? ? (z) = 12 kzk2 , for which ?? ? is the identity, in which case X and Z coincide.
2.2
ODE interpretation of Nesterov?s accelerated method
In [28], Su et al. show that Nesterov?s accelerated method [22] can be interpreted as a discretization
of a second-order differential equation, given by
(
? + r+1 X? + ?f (X) = 0
X
t
?
X(0) = x0 , X(0)
=0
(2)
2
The argument uses the following Lyapunov function (up to reparameterization), E(t) = tr (f (X) ?
f ? ) + 2r kX + rt X? ? x? k2 , which is proved to be a Lyapunov function for the ODE (2) whenever
r ? 2. Since E is decreasing along trajectories of the system, it follows that for all t > 0, E(t) ?
? 2
2
k
E(0) = 2r kx0 ? x? k2 , therefore f (X(t)) ? f ? ? tr2 E(t) ? tr2 E(0) ? rt2 kx0 ?x
, which proves
2
that f (X(t)) converges to f ? at a O(1/t2 ) rate. One should note in particular that the squared
Euclidean norm is used in the definition of E(t) and, as a consequence, discretizing the ODE (2)
leads to a family of unconstrained, Euclidean accelerated methods. In the next section, we show
that by combining this argument with Nemirovski?s idea of using a general Bregman divergence
as a Lyapunov function, we can construct a much more general family of ODE systems which
have the same O(1/t2 ) convergence guarantee. And by discretizing the resulting dynamics, we
obtain a general family of accelerated methods that are not restricted to the unconstrained Euclidean
geometry.
3
3.1
Continuous-time Accelerated Mirror Descent
Derivation of the accelerated mirror descent ODE
We consider a pair of dual convex functions, ? defined on X and ? ? defined on E ? , such that
?? ? : E ? ? X . We assume that ? ? is L?? -smooth with respect to k ? k? , a reference norm on the
dual space. Consider the function
V (X(t), Z(t), t) =
t2
(f (X(t)) ? f ? ) + rD?? (Z(t), z ? )
r
(3)
?
where Z is a dual variable for which we will design the dynamics, and z is its value at equilibrium.
Taking the time-derivative of V , we have
E
D
E
d
2t
t2 D
? ?? ? (Z) ? ?? ? (z ? ) .
V (X(t), Z(t), t) = (f (X) ? f ? ) +
?f (X), X? + r Z,
dt
r
r
Assume that Z? = ? rt ?f (X). Then, the time-derivative of V becomes
d
2t
t ?
?
?
? ?
V (X(t), Z(t), t) = (f (X) ? f ) ? t ?f (X), ? X + ?? (Z) ? ?? (z ) .
dt
r
r
? and ?? ? (z ? ) = x? , then,
Therefore, if Z is such that ?? ? (Z) = X + rt X,
d
2t
2t
V (X(t), Z(t), t) = (f (X) ? f ? ) ? t h?f (X), X ? x? i ? (f (X) ? f ? ) ? t(f (X) ? f ? )
dt
r
r
r?2
? ?t
(f (X) ? f ? )
(4)
r
3
and it follows that V is a Lyapunov function whenever r ? 2. The proposed ODE system is then
?
?
r
?
? X? = t (?? (Z) ? X),
t
?
Z = ? r ?f (X),
?
? X(0) = x , Z(0) = z , with ?? ? (z ) = x .
0
0
0
0
(5)
1
2
?
In the unconstrained Euclidean case, taking ? ? (z)
= 2 kzk
, we have ?? (z) = z, thus Z =
t ?
t ?
d
t
X + r X, and the ODE system is equivalent to dt X + r X = ? r ?f (X), which is equivalent to
the ODE (2) studied in [28], which we recover as a special case.
We also give another interpretation of ODE (5): theR first equation is equivalent to tr X? + rtr?1 X =
t
rtr?1 ?? ? (Z), or, in integral form, tr X(t) = r 0 ? r?1 ?? ? (Z(? ))d? , which can be written as
X(t) =
Rt
0
w(? )?? ? (Z(? ))d?
Rt
,
w(? )d?
0
with w(? ) = ? r?1 . Therefore the coupled dynamics of (X, Z) can
be interpreted as follows: the dual variable Z accumulates gradients with a rt rate, while the primal
variable X is a weighted average of ?? ? (Z(? )) (the ?mirrored? dual trajectory), with weights
proportional to ? r?1 . This also gives an interpretation of r as a parameter controlling the weight
distribution. It is also interesting to observe that the weights are increasing if and only if r ? 2.
Finally, with this averaging interpretation, it becomes clear that the primal trajectory X(t) remains
in X , since ?? ? maps into X and X is convex.
3.2
Solution of the proposed dynamics
First, we prove existence and uniqueness of a solution to the ODE system (5), defined for all t >
0. By assumption, ? ? is L?? -smooth w.r.t. k ? k? , which is equivalent (see e.g. [26]) to ?? ? is
? the function (X, Z, t) 7?
L?? -Lipschitz. Unfortunately, due to the rt term in the expression of X,
? Z)
? is not Lipschitz at t = 0, and we cannot directly apply the Cauchy-Lipschitz existence and
(X,
uniqueness theorem. However, one can work around it by considering a sequence of approximating
ODEs, similarly to the argument used in [28].
Theorem 1. Suppose f is C 1 , and that ?f is Lf -Lipschitz, and let (x0 , z0 ) ? X ? E ? such that
?? ? (z0 ) = x0 . Then the accelerated mirror descent ODE system (5) with initial condition (x0 , z0 )
has a unique solution (X, Z), in C 1 ([0, ?), Rn ).
We will show existence of a solution on any given interval [0, T ] (uniqueness is proved in the supplementary material). Let ? > 0, and consider the smoothed ODE system
?
?
r
?
? X? = max(t,?) (?? (Z) ? X),
t
?
Z = ? r ?f (X),
?
? X(0) = x , Z(0) = z with ?? ? (z ) = x .
0
0
0
0
(6)
r
Since the functions (X, Z) 7? ? rt ?f (X) and (X, Z) 7? max(t,?)
(?? ? (Z) ? X) are Lipschitz for
all t ? [0, T ], by the Cauchy-Lipschitz theorem (Theorem 2.5 in [29]), the system (6) has a unique
solution (X? , Z? ) in C 1 ([0, T ]). In order to show the existence of a solution to the original ODE,
we use the following Lemma (proved in the supplementary material).
Lemma 1. Let t0 = ? 2
. Then the family of solutions (X? , Z? )|[0,t0 ] ??t is equi-LipschitzLf L??
0
continuous and uniformly bounded.
Proof of existence. Consider the family of solutions (X?i , Z?i ), ?i = t0 2?i i?N restricted to [0, t0 ].
By Lemma 1, this family is equi-Lipschitz-continuous and uniformly bounded, thus by the Arzel`aAscoli theorem, there exists a subsequence ((X?i , Z?i ))i?I that converges uniformly on [0, t0 ]
? Z)
? be its limit. Then we prove that (X,
? Z)
?
(where I ? N is an infinite set of indices). Let (X,
is a solution to the original ODE (5) on [0, t0 ].
?
First, since for all i ? I, X?i (0) = x0 and Z?i (0) = z0 , it follows that X(0)
=
?
? Z)
? satisfies the initial
limi??,i?I X?i (0) = x0 and Z(0)
= limi??,i?I Z?i (0) = z0 , thus (X,
? Z)
? be the solution of the ODE (5) on t ? t1 , with
conditions. Next, let t1 ? (0, t0 ), and let (X,
? 1 ), Z(t
? 1 )). Since (X? (t1 ), Z? (t1 ))i?I ? (X(t
? 1 ), Z(t
? 1 )) as i ? ?, then by
initial condition (X(t
i
i
4
continuity of the solution w.r.t. initial conditions (Theorem 2.8 in [29]), we have that for some > 0,
? uniformly on [t1 , t1 + ). But we also have X? ? X
? uniformly on [0, t0 ], therefore X
?
X?i ? X
i
?
?
and X coincide on [t1 , t1 + ), therefore X satisfies the ODE on [t1 , t1 + ). And since t1 is arbitrary
in (0, t0 ), this concludes the proof of existence.
3.3
Convergence rate
It is now straightforward to establish the convergence rate of the solution.
Theorem 2. Suppose that f has Lipschitz gradient, and that ? ? is a smooth distance generating
function. Let (X(t), Z(t)) be the solution to the accelerated mirror descent ODE (5) with r ? 2.
Then for all t > 0, f (X(t)) ? f ? ?
r 2 D?? (z0 ,z ? )
.
t2
2
Proof. By construction of the ODE, V (X(t), Z(t), t) = tr (f (X(t)) ? f ? ) + rD?? (Z(t), z ? ) is
2
a Lyapunov function. It follows that for all t > 0, tr (f (X(t)) ? f ? ) ? V (X(t), Z(t), t) ?
V (x0 , z0 , 0) = rD?? (z0 , z ? ).
4
Discretization
Next, we show that with a careful discretization of this continuous-time dynamics, we can obtain a
general family of accelerated mirror descent methods for constrained optimization. Using a mixed
forward/backward
(5)
? system
?Euler scheme (see e.g. Chapter 2 in [10]), we can discretize the ODE
(k)
of
the
ODE
(5),
let
t
=
k
s,
and
x
=
using a step size s as follows. Given a solution (X, Z)
k
?
?
? k ) with X(tk + ?s)?X(tk ) , we propose the discretization
X(tk ) = X(k s). Approximating X(t
s
(
(k)
x(k+1)
??x
s
(k)
z (k+1)
??z
s
=
+
r
?
?? ? (z (k) ) ? x(k+1)
k? s
k s
(k+1)
) = 0.
r ?f (x
,
(7)
The first equation can be rewritten as x(k+1) = x(k) + kr ?? ? (z (k) ) / 1 + kr (note the independence on s, due to the time-scale invariance of the first ODE). In other words, x(k+1) is a convex
r
k
combination of ?? ? (z (k) ) and x(k) with coefficients ?k = r+k
and 1 ? ?k = r+k
. To summarize,
our first discrete scheme can be written as
(
r
x(k+1) = ?k ?? ? (z (k) ) + (1 ? ?k )x(k) , ?k = r+k
,
(8)
ks
(k+1)
(k+1)
(k)
).
z
= z ? r ?f (x
Since ?? ? maps into the feasible set X , starting from x(0) ? X guarantees that x(k) remains in X
for all k (by convexity of X ). Note that by duality, we have ?? ? (x? ) = arg maxx?X hx, x? i??(x),
and if we additionally assume that ? is differentiable on the image of ?? ? , then ?? = (?? ? )?1
(Theorem 23.5 in [26]), thus if we write z?(k) = ?? ? (z (k) ), the second equation can be written as
ks
ks
z (k) ) ?
?f (x(k+1) )) = arg min ?(x) ? ??(?
?f (x(k+1) ), x
r
r
x?X
D
E
ks
= arg min
?f (x(k+1) ), x + D? (x, z?(k) ).
r
x?X
z?(k+1) = ?? ? (??(?
z (k) ) ?
We will eventually modify this scheme in order to be able to prove the desired $O(1/k^2)$ convergence rate. However, we start by analyzing this version. Motivated by the continuous-time Lyapunov function (3), and using the correspondence $t = k\sqrt{s}$, we consider the potential function $E^{(k)} = V(x^{(k)}, z^{(k)}, k\sqrt{s}) = \frac{k^2 s}{r}(f(x^{(k)}) - f^\star) + r D_{\psi^*}(z^{(k)}, z^\star)$. Then we have
\[
E^{(k+1)} - E^{(k)} = \frac{(k+1)^2 s}{r}\bigl(f(x^{(k+1)}) - f^\star\bigr) - \frac{k^2 s}{r}\bigl(f(x^{(k)}) - f^\star\bigr) + r\bigl(D_{\psi^*}(z^{(k+1)}, z^\star) - D_{\psi^*}(z^{(k)}, z^\star)\bigr)
\]
\[
= \frac{k^2 s}{r}\bigl(f(x^{(k+1)}) - f(x^{(k)})\bigr) + \frac{s(1+2k)}{r}\bigl(f(x^{(k+1)}) - f^\star\bigr) + r\bigl(D_{\psi^*}(z^{(k+1)}, z^\star) - D_{\psi^*}(z^{(k)}, z^\star)\bigr).
\]
And through simple algebraic manipulation, the last term can be bounded as follows:
\[
D_{\psi^*}(z^{(k+1)}, z^\star) - D_{\psi^*}(z^{(k)}, z^\star)
= D_{\psi^*}(z^{(k+1)}, z^{(k)}) + \bigl\langle \nabla\psi^*(z^{(k)}) - \nabla\psi^*(z^\star),\, z^{(k+1)} - z^{(k)} \bigr\rangle
\]
(by definition of the Bregman divergence)
\[
= D_{\psi^*}(z^{(k+1)}, z^{(k)}) + \Bigl\langle \frac{k}{r}\bigl(x^{(k+1)} - x^{(k)}\bigr) + x^{(k+1)} - x^\star,\; -\frac{ks}{r}\nabla f(x^{(k+1)}) \Bigr\rangle
\]
(by the discretization (8))
\[
\leq D_{\psi^*}(z^{(k+1)}, z^{(k)}) + \frac{k^2 s}{r^2}\bigl(f(x^{(k)}) - f(x^{(k+1)})\bigr) + \frac{ks}{r}\bigl(f^\star - f(x^{(k+1)})\bigr)
\]
(by convexity of $f$). Therefore we have $E^{(k+1)} - E^{(k)} \leq -\frac{s[(r-2)k - 1]}{r}\bigl(f(x^{(k+1)}) - f^\star\bigr) + r D_{\psi^*}(z^{(k+1)}, z^{(k)})$. Comparing this expression with the expression (4) of $\frac{d}{dt}V(X(t), Z(t), t)$ in the continuous-time case, we see that we obtain an analogous expression, except for the additional Bregman divergence term $r D_{\psi^*}(z^{(k+1)}, z^{(k)})$, and we cannot immediately conclude that $V$ is a Lyapunov function. This can be remedied by the following modification of the discretization scheme.
4.1 A family of discrete-time accelerated mirror descent methods
In the expression (8) of $x^{(k+1)} = \lambda_k \tilde z^{(k)} + (1-\lambda_k)x^{(k)}$, we propose to replace $x^{(k)}$ with $\tilde x^{(k)}$, obtained as the solution to a minimization problem $\tilde x^{(k)} = \arg\min_{x\in\mathcal{X}} \bigl\langle \gamma s \nabla f(x^{(k)}), x \bigr\rangle + R(x, x^{(k)})$, where $R$ is a regularization function that satisfies the following assumptions: there exist $0 < \ell_R \leq L_R$ such that for all $x, x' \in \mathcal{X}$, $\frac{\ell_R}{2}\|x - x'\|^2 \leq R(x, x') \leq \frac{L_R}{2}\|x - x'\|^2$.
In the Euclidean case, one can take $R(x, x') = \frac{\|x - x'\|^2}{2}$, in which case $\ell_R = L_R = 1$ and the $\tilde x$ update becomes a prox-update. In the general case, one can take $R(x, x') = D_\phi(x, x')$ for some distance generating function $\phi$ which is $\ell_R$-strongly convex and $L_R$-smooth, in which case the $\tilde x$ update becomes a mirror update. The resulting method is summarized in Algorithm 1. This algorithm is a generalization of Allen-Zhu and Orecchia's interpretation of Nesterov's method in [1], where $x^{(k+1)}$ is a convex combination of a mirror descent update and a gradient descent update.
Algorithm 1 Accelerated mirror descent with distance generating function $\psi^*$, regularizer $R$, step size $s$, and parameter $r \geq 3$
1: Initialize $\tilde x^{(0)} = x_0$, $\tilde z^{(0)} = x_0$, or $z^{(0)} \in (\nabla\psi)^{-1}(x_0)$.
2: for $k \in \mathbb{N}$ do
3:   $x^{(k+1)} = \lambda_k \tilde z^{(k)} + (1 - \lambda_k)\tilde x^{(k)}$, with $\lambda_k = \frac{r}{r+k}$.
4:   $\tilde z^{(k+1)} = \arg\min_{\tilde z \in \mathcal{X}} \bigl\langle \frac{ks}{r}\nabla f(x^{(k+1)}), \tilde z \bigr\rangle + D_\psi(\tilde z, \tilde z^{(k)})$.
     (If $\psi$ is non-differentiable, $z^{(k+1)} = z^{(k)} - \frac{ks}{r}\nabla f(x^{(k+1)})$ and $\tilde z^{(k+1)} = \nabla\psi^*(z^{(k+1)})$.)
5:   $\tilde x^{(k+1)} = \arg\min_{\tilde x \in \mathcal{X}} \bigl\langle \gamma s \nabla f(x^{(k+1)}), \tilde x \bigr\rangle + R(\tilde x, x^{(k+1)})$.
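Below is a sketch of Algorithm 1 in Python for the simplex setting of Section 4.4, with the entropy mirror map for step 4 and, for illustration only, the Euclidean choice $R(x, x') = \|x - x'\|^2/2$ for step 5 (so the $\tilde x$ update is a Euclidean projection rather than the smoothed-entropy projection of [17]). The projection helper and the default constants are assumptions of the sketch.

```python
import numpy as np

def softmax(z):
    w = np.exp(z - z.max())
    return w / w.sum()

def project_simplex(v):
    # Euclidean projection onto the simplex (standard sorting-based method);
    # stands in for the prox associated with R(x, x') = ||x - x'||^2 / 2.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def accelerated_md(grad_f, x0, r=3.0, s=1e-3, gamma=1.0, iters=500):
    x_tld, z_tld = x0.copy(), x0.copy()  # tilde x^{(0)} = tilde z^{(0)} = x0
    z = np.log(x0)                       # z^{(0)} with softmax(z^{(0)}) = x0 (x0 interior)
    for k in range(1, iters + 1):
        lam = r / (r + k)
        x = lam * z_tld + (1 - lam) * x_tld        # step 3
        g = grad_f(x)
        z = z - (k * s / r) * g                    # dual accumulation ...
        z_tld = softmax(z)                         # ... equals the entropy mirror step 4
        x_tld = project_simplex(x - gamma * s * g) # step 5 with Euclidean R
    return x_tld
```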
4.2 Consistency of the modified scheme
One can show that, given our assumptions on $R$, $\tilde x^{(k)} = x^{(k)} + O(s)$. Indeed, we have
\[
\frac{\ell_R}{2}\|\tilde x^{(k)} - x^{(k)}\|^2 \leq R(\tilde x^{(k)}, x^{(k)}) \leq R(x^{(k)}, x^{(k)}) + \gamma s \bigl\langle \nabla f(x^{(k)}),\, x^{(k)} - \tilde x^{(k)} \bigr\rangle \leq \gamma s \|\nabla f(x^{(k)})\|_* \,\|\tilde x^{(k)} - x^{(k)}\|,
\]
therefore $\|\tilde x^{(k)} - x^{(k)}\| \leq s \cdot \frac{2\gamma\|\nabla f(x^{(k)})\|_*}{\ell_R}$, which proves the claim. Using this observation, we can show that the modified discretization scheme is consistent with the original ODE (5), that is, the difference equations defining $x^{(k)}$ and $z^{(k)}$ converge, as $s$ tends to 0, to the ordinary differential equations of the continuous-time system (5). The difference equations of Algorithm 1 are equivalent to (7) in which $x^{(k)}$ is replaced by $\tilde x^{(k)}$, i.e.
\[
\frac{x^{(k+1)} - \tilde x^{(k)}}{\sqrt{s}} = \frac{r}{k\sqrt{s}}\bigl(\nabla\psi^*(z^{(k)}) - x^{(k+1)}\bigr), \qquad
\frac{z^{(k+1)} - z^{(k)}}{\sqrt{s}} = -\frac{k\sqrt{s}}{r}\,\nabla f(x^{(k+1)}).
\]
Now suppose there exist $C^1$ functions $(X, Z)$, defined on $\mathbb{R}_+$, such that $X(t_k) \approx x^{(k)}$ and $Z(t_k) \approx z^{(k)}$ for $t_k = k\sqrt{s}$. Then, using the fact that $\tilde x^{(k)} = x^{(k)} + O(s)$, we have
\[
\frac{X(t_k + \sqrt{s}) - X(t_k)}{\sqrt{s}} = \frac{x^{(k+1)} - x^{(k)}}{\sqrt{s}} + O(\sqrt{s}) = \frac{x^{(k+1)} - \tilde x^{(k)}}{\sqrt{s}} + O(\sqrt{s}) = \dot X(t_k) + o(1),
\]
and similarly $\frac{z^{(k+1)} - z^{(k)}}{\sqrt{s}} = \dot Z(t_k) + o(1)$, therefore the difference equation system can be written as
\[
\dot X(t_k) + o(1) = \frac{r}{t_k}\bigl(\nabla\psi^*(Z(t_k)) - X(t_k + \sqrt{s})\bigr), \qquad
\dot Z(t_k) + o(1) = -\frac{t_k}{r}\,\nabla f(X(t_k + \sqrt{s})),
\]
which converges to the ODE (5) as $s \to 0$.
4.3 Convergence rate
To prove convergence of the algorithm, consider the modified potential function
\[
\tilde E^{(k)} = V(\tilde x^{(k)}, z^{(k)}, k\sqrt{s}) = \frac{k^2 s}{r}\bigl(f(\tilde x^{(k)}) - f^\star\bigr) + r D_{\psi^*}(z^{(k)}, z^\star).
\]
Lemma 2. If $\gamma \geq L_R L_{\psi^*}$ and $s \leq \frac{\ell_R}{2 L_f \gamma}$, then for all $k \geq 0$,
\[
\tilde E^{(k+1)} - \tilde E^{(k)} \leq \frac{(2k + 1 - kr)\,s}{r}\bigl(f(\tilde x^{(k+1)}) - f^\star\bigr).
\]
As a consequence, if $r \geq 3$, $\tilde E$ is a Lyapunov function for $k \geq 1$.
This lemma is proved in the supplementary material.
Theorem 3. The discrete-time accelerated mirror descent Algorithm 1 with parameter $r \geq 3$ and step sizes $\gamma \geq L_R L_{\psi^*}$, $s \leq \frac{\ell_R}{2 L_f \gamma}$, guarantees that for all $k > 0$,
\[
f(\tilde x^{(k)}) - f^\star \leq \frac{r \tilde E^{(1)}}{s k^2} \leq \frac{r^2 D_{\psi^*}(z_0, z^\star)}{s k^2} + \frac{f(x_0) - f^\star}{k^2}.
\]
Proof. The first inequality follows immediately from Lemma 2. The second inequality follows from a simple bound on $\tilde E^{(1)}$, proved in the supplementary material.
4.4 Example: accelerated entropic descent
We give an instance of Algorithm 1 for simplex-constrained problems. Suppose that $\mathcal{X} = \Delta_n = \{x \in \mathbb{R}_+^n : \sum_{i=1}^n x_i = 1\}$ is the $n$-simplex. Taking $\psi$ to be the negative entropy on $\Delta$, we have for $x \in \mathcal{X}$, $z \in E^*$,
\[
\psi(x) = \sum_{i=1}^n x_i \ln x_i + \delta(x|\Delta), \quad
\psi^*(z) = \ln\Bigl(\sum_{i=1}^n e^{z_i}\Bigr), \quad
\nabla\psi(x) = (1 + \ln x_i)_i + \mathbb{R}u, \quad
\nabla\psi^*(z)_i = \frac{e^{z_i}}{\sum_{j=1}^n e^{z_j}},
\]
where $\delta(\cdot|\Delta)$ is the indicator function of the simplex ($\delta(x|\Delta) = 0$ if $x \in \Delta$ and $+\infty$ otherwise), and $u \in \mathbb{R}^n$ is a normal vector to the affine hull of the simplex. The resulting mirror descent update is a simple entropy projection and can be computed exactly in $O(n)$ operations, and $\psi^*$ can be shown to be 1-smooth w.r.t. $\|\cdot\|_\infty$, see for example [3, 6]. For the second update, we take $R(x, y) = D_\phi(x, y)$ where $\phi$ is a smoothed negative entropy function defined as follows: let $\epsilon > 0$, and let $\phi(x) = \sum_{i=1}^n (x_i + \epsilon)\ln(x_i + \epsilon) + \delta(x|\Delta)$. Although no simple, closed-form expression is known for $\nabla\phi^*$, it can be computed efficiently, in $O(n \log n)$ time using a deterministic algorithm, or $O(n)$ expected time using a randomized algorithm, see [17]. Additionally, $\phi$ satisfies our assumptions: it is $\frac{1}{1+\epsilon n}$-strongly convex and 1-smooth w.r.t. $\|\cdot\|_1$. The resulting accelerated mirror descent method on the simplex can then be implemented efficiently, and by Theorem 3 it is guaranteed to converge in $O(1/k^2)$ whenever $\gamma \geq 1$ and $s \leq \frac{1}{2(1+\epsilon n)L_f \gamma}$.
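The Bregman projection onto the simplex for the smoothed negative entropy reduces, via the KKT conditions, to a one-dimensional root-finding problem in the normalization constant. The sketch below solves it by bisection; it is a simple illustrative stand-in for (not a reproduction of) the $O(n\log n)$ deterministic and $O(n)$ randomized routines of [17].

```python
import numpy as np

def smoothed_entropy_prox(g, y, eps=1e-2, tol=1e-10):
    """Approximately solve argmin_{x in simplex} <g, x> + D_phi(x, y)
    for phi(x) = sum_i (x_i + eps) log(x_i + eps).
    KKT gives x_i = max(w_i * c - eps, 0) with w_i = (y_i + eps) * exp(-g_i)
    and c > 0 chosen so that sum_i x_i = 1; we find c by bisection."""
    w = (y + eps) * np.exp(g.min() - g)  # shifting g only rescales c
    lo, hi = 0.0, (1.0 + len(y) * eps) / w.sum() + 1.0
    while hi - lo > tol:
        c = 0.5 * (lo + hi)
        if np.maximum(w * c - eps, 0.0).sum() < 1.0:
            lo = c
        else:
            hi = c
    return np.maximum(w * 0.5 * (lo + hi) - eps, 0.0)
```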
5 Numerical Experiments
We test the accelerated mirror descent method in Algorithm 1 on simplex-constrained problems in $\mathbb{R}^n$, $n = 100$, with two different objective functions: a simple quadratic $f(x) = \langle x - x^\star, Q(x - x^\star)\rangle$, for a random positive semi-definite matrix $Q$, and a log-sum-exp function
[Figure 1: three semi-log plots of $f(x^{(k)}) - f^\star$ versus $k$, comparing mirror descent, accelerated mirror descent, speed restart, and gradient restart on (a) a weakly convex quadratic of rank 10 and (b) a log-sum-exp objective; panel (c) shows the effect of the parameter $r \in \{3, 10, 30, 90\}$.]
Figure 1: Evolution of $f(x^{(k)}) - f^\star$ on simplex-constrained problems, using different accelerated mirror descent methods with entropy distance generating functions.
Algorithm 2 Accelerated mirror descent with restart
1: Initialize $l = 0$, $\tilde x^{(0)} = \tilde z^{(0)} = x_0$.
2: for $k \in \mathbb{N}$ do
3:   $x^{(k+1)} = \lambda_l \tilde z^{(k)} + (1 - \lambda_l)\tilde x^{(k)}$, with $\lambda_l = \frac{r}{r+l}$
4:   $\tilde z^{(k+1)} = \arg\min_{\tilde z \in \mathcal{X}} \bigl\langle \frac{ks}{r}\nabla f(x^{(k+1)}), \tilde z \bigr\rangle + D_\psi(\tilde z, \tilde z^{(k)})$
5:   $\tilde x^{(k+1)} = \arg\min_{\tilde x \in \mathcal{X}} \bigl\langle \gamma s \nabla f(x^{(k+1)}), \tilde x \bigr\rangle + R(\tilde x, x^{(k+1)})$
6:   $l \leftarrow l + 1$
7:   if Restart condition then
8:     $\tilde z^{(k+1)} \leftarrow x^{(k+1)}$, $l \leftarrow 0$

given by $f(x) = \ln\bigl(\sum_{i=1}^I e^{\langle a_i, x\rangle + b_i}\bigr)$, where each entry in $a_i \in \mathbb{R}^n$ and $b_i \in \mathbb{R}$ is i.i.d. normal. We implement the accelerated entropic descent algorithm proposed in Section 4.4, and include the (non-accelerated) entropic descent for reference. We also adapt the gradient restarting heuristic proposed by O'Donoghue and Candès in [24], as well as the speed restart heuristic proposed by Su et al. in [28]. The generic restart method is given in Algorithm 2. The restart conditions are the following: (i) gradient restart: $\bigl\langle x^{(k+1)} - x^{(k)},\, \nabla f(x^{(k)}) \bigr\rangle > 0$, and (ii) speed restart: $\|x^{(k+1)} - x^{(k)}\| < \|x^{(k)} - x^{(k-1)}\|$.
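The two restart tests and the test objectives are easy to reproduce. The sketch below uses illustrative sizes: the paper takes $n = 100$, but the number of log-sum-exp terms (here $I = 50$) is an assumption, as it is not specified in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
n, I = 100, 50
A = rng.normal(size=(I, n))   # rows a_i, entries iid normal
b = rng.normal(size=I)

def f_logsumexp(x):
    u = A @ x + b
    m = u.max()                # stabilized log-sum-exp
    return m + np.log(np.exp(u - m).sum())

def gradient_restart(x_next, x_cur, grad_cur):
    # (i) restart when the objective increases along the last step
    return np.dot(x_next - x_cur, grad_cur) > 0

def speed_restart(x_next, x_cur, x_prev):
    # (ii) restart when the iterates slow down
    return np.linalg.norm(x_next - x_cur) < np.linalg.norm(x_cur - x_prev)
```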
The results are given in Figure 1. The accelerated mirror descent method exhibits a polynomial convergence rate, which is empirically faster than the $O(1/k^2)$ rate predicted by Theorem 3. The method also exhibits oscillations around the set of minimizers; increasing the parameter $r$ seems to reduce the period of the oscillations, and results in a trajectory that is initially slower but faster for large $k$, see Figure 1(c). The restarting heuristics alleviate the oscillations and empirically speed up the convergence. We also visualized, for each experiment, the trajectory of the iterates $x^{(k)}$ for each method, projected on a 2-dimensional hyperplane. The corresponding videos are included in the supplementary material.
6 Conclusion
By combining the Lyapunov argument that motivated mirror descent and the recent ODE interpretation [28] of Nesterov's method, we proposed a family of ODE systems for minimizing convex functions with a Lipschitz gradient, which are guaranteed to converge at a $O(1/t^2)$ rate, and proved existence and uniqueness of a solution. Then, by discretizing the ODE, we proposed a family of accelerated mirror descent methods for constrained optimization and proved an analogous $O(1/k^2)$ rate when the step size is small enough. The connection with the continuous-time dynamics motivates a more detailed study of the ODE (5), such as studying the oscillatory behavior of its solution trajectories, its convergence rates under additional assumptions such as strong convexity, and a rigorous study of the restart heuristics.
Acknowledgments
We gratefully acknowledge the NSF (CCF-1115788, CNS-1238959, CNS-1238962, CNS-1239054,
CNS-1239166), the ARC (FL110100281 and ACEMS), and the Simons Institute Fall 2014 Algorithmic Spectral Graph Theory Program.
References
[1] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint, 2014.
[2] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. J. Mach. Learn. Res., 6:1705–1749, December 2005.
[3] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3):167–175, May 2003.
[4] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. SIAM, 2001.
[6] Aharon Ben-Tal, Tamar Margalit, and Arkadi Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. SIAM J. on Optimization, 12(1):79–108, January 2001.
[7] Anthony Bloch, editor. Hamiltonian and gradient flows, algorithms, and control. American Mathematical Society, 1994.
[8] A. A. Brown and M. C. Bartholomew-Biggs. Some effective methods for unconstrained optimization based on the solution of systems of ordinary differential equations. Journal of Optimization Theory and Applications, 62(2):211–224, 1989.
[9] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[10] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, Ltd, 2008.
[11] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge, 2006.
[12] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction. In Proceedings of the 28th International Conference on Machine Learning (ICML), June 2011.
[13] U. Helmke and J.B. Moore. Optimization and dynamical systems. Communications and control engineering series. Springer-Verlag, 1994.
[14] Anatoli Juditsky. Convex Optimization II: Algorithms, Lecture Notes. 2013.
[15] Anatoli Juditsky, Arkadi Nemirovski, and Claire Tauvel. Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst., 1(1):17–58, 2011.
[16] H.K. Khalil. Nonlinear systems. Macmillan Pub. Co., 1992.
[17] Walid Krichene, Syrine Krichene, and Alexandre Bayen. Efficient Bregman projections onto the simplex. In 54th IEEE Conference on Decision and Control, 2015.
[18] A.M. Lyapunov. General Problem of the Stability Of Motion. Control Theory and Applications Series. Taylor & Francis, 1992.
[19] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience series in discrete mathematics. Wiley, 1983.
[20] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[21] Yu. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[22] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[23] Yurii Nesterov. Introductory Lectures on Convex Optimization, volume 87. Springer Science & Business Media, 2004.
[24] Brendan O'Donoghue and Emmanuel Candès. Adaptive restart for accelerated gradient schemes. Foundations of Computational Mathematics, 15(3):715–732, 2015.
[25] M. Raginsky and J. Bouvrie. Continuous-time stochastic mirror descent on a network: Variance reduction, consensus, convergence. In CDC 2012, pages 6793–6800, 2012.
[26] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[27] J. Schropp and I. Singer. A dynamical systems approach to constrained minimization. Numerical Functional Analysis and Optimization, 21(3-4):537–551, 2000.
[28] Weijie Su, Stephen Boyd, and Emmanuel Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In NIPS, 2014.
[29] Gerald Teschl. Ordinary differential equations and dynamical systems, volume 140. American Mathematical Soc., 2012.
5,351 | 5,844 | Adaptive Online Learning
Dylan J. Foster∗
Cornell University
Alexander Rakhlin†
University of Pennsylvania
Karthik Sridharan∗
Cornell University
Abstract
We propose a general framework for studying adaptive regret bounds in the online
learning setting, subsuming model selection and data-dependent bounds. Given a
data- or model-dependent bound we ask, "Does there exist some algorithm achieving this bound?" We show that modifications to recently introduced sequential
complexity measures can be used to answer this question by providing sufficient
conditions under which adaptive rates can be achieved. In particular each adaptive
rate induces a set of so-called offset complexity measures, and obtaining small
upper bounds on these quantities is sufficient to demonstrate achievability. A
cornerstone of our analysis technique is the use of one-sided tail inequalities to
bound suprema of offset random processes.
Our framework recovers and improves a wide variety of adaptive bounds including
quantile bounds, second order data-dependent bounds, and small loss bounds. In
addition we derive a new type of adaptive bound for online linear optimization
based on the spectral norm, as well as a new online PAC-Bayes theorem.
1 Introduction
Some of the recent progress on the theoretical foundations of online learning has been motivated by
the parallel developments in the realm of statistical learning. In particular, this motivation has led to
martingale extensions of empirical process theory, which were shown to be the "right" notions for
online learnability. Two topics, however, have remained elusive thus far: obtaining data-dependent
bounds and establishing model selection (or, oracle-type) inequalities for online learning problems.
In this paper we develop new techniques for addressing both these questions.
Oracle inequalities and model selection have been topics of intense research in statistics in the last two decades [1, 2, 3]. Given a sequence of models $\mathcal{M}_1, \mathcal{M}_2, \ldots$ whose union is $\mathcal{M}$, one aims to derive a procedure that selects, given an i.i.d. sample of size $n$, an estimator $\hat f$ from a model $\mathcal{M}_m$ that trades off bias and variance. Roughly speaking, the desired oracle bound takes the form
\[
\mathrm{err}(\hat f) \leq \inf_m \Bigl\{ \inf_{f\in\mathcal{M}_m} \mathrm{err}(f) + \mathrm{pen}_n(m) \Bigr\},
\]
where $\mathrm{pen}_n(m)$ is a penalty for the model $m$. Such oracle inequalities are attractive because they can be shown to hold even if the overall model $\mathcal{M}$ is too large. A central idea in the proofs of such statements (and an idea that will appear throughout the present paper) is that $\mathrm{pen}_n(m)$ should be "slightly larger" than the fluctuations of the empirical process for the model $m$. It is therefore not surprising that concentration inequalities, and particularly Talagrand's celebrated inequality for the supremum of the empirical process, have played an important role in attaining oracle bounds. In order to select a good model in a data-driven manner, one establishes non-asymptotic data-dependent bounds on the fluctuations of an empirical process indexed by elements in each model [4].
∗ Department of Computer Science
† Department of Statistics
Lifting the ideas of oracle inequalities and data-dependent bounds from statistical to online learning
is not an obvious task. For one, there is no concentration inequality available, even for the simple
case of sequential Rademacher complexity. (For the reader already familiar with this complexity:
a change of the value of one Rademacher variable results in a change of the remaining path, and
hence an attempt to use a version of a bounded difference inequality grossly fails). Luckily, as we
show in this paper, the concentration machinery is not needed and one only requires a one-sided tail
inequality. This realization is motivated by the recent work of [5, 6, 7]. At a high level, our approach
will be to develop one-sided inequalities for the suprema of certain offset processes [7], where the
offset is chosen to be ?slightly larger? than the complexity of the corresponding model. We then show
that these offset processes determine which data-dependent adaptive rates are achievable for online
learning problems, drawing strong connections to the ideas of statistical learning described earlier.
1.1 Framework
Let $\mathcal{X}$ be the set of observations, $\mathcal{D}$ the space of decisions, and $\mathcal{Y}$ the set of outcomes. Let $\Delta(S)$ denote the set of distributions on a set $S$. Let $\ell : \mathcal{D} \times \mathcal{Y} \to \mathbb{R}$ be a loss function. The online learning framework is defined by the following process: For $t = 1, \ldots, n$, Nature provides input instance $x_t \in \mathcal{X}$; Learner selects prediction distribution $q_t \in \Delta(\mathcal{D})$; Nature provides label $y_t \in \mathcal{Y}$, while the learner draws prediction $\hat y_t \sim q_t$ and suffers loss $\ell(\hat y_t, y_t)$.
Two important settings are supervised learning ($\mathcal{Y} \subseteq \mathbb{R}$, $\mathcal{D} \subseteq \mathbb{R}$) and online linear optimization ($\mathcal{X} = \{0\}$ is a singleton set, $\mathcal{Y}$ and $\mathcal{D}$ are balls in dual Banach spaces and $\ell(\hat y, y) = \langle \hat y, y\rangle$). For a class $\mathcal{F} \subseteq \mathcal{D}^{\mathcal{X}}$, we define the learner's cumulative regret to $\mathcal{F}$ as
\[
\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}} \sum_{t=1}^n \ell(f(x_t), y_t).
\]
A uniform regret bound $B_n$ is achievable if there exists a randomized algorithm selecting $\hat y_t$ such that
\[
\mathbb{E}\Bigl[\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}} \sum_{t=1}^n \ell(f(x_t), y_t)\Bigr] \leq B_n \quad \forall x_{1:n}, y_{1:n}, \tag{1}
\]
where $a_{1:n}$ stands for $\{a_1, \ldots, a_n\}$. Achievable rates $B_n$ depend on the complexity of the function class $\mathcal{F}$. For example, sequential Rademacher complexity of $\mathcal{F}$ is one of the tightest achievable uniform rates for a variety of loss functions [8, 7].
rates for a variety of loss functions [8, 7].
An adaptive regret bound has the form Bn (f ; x1?n , y1?n ) and is said to be achievable if there exists a
randomized algorithm for selecting y?t such that
n
n
t=1
t=1
E?? `(?
yt , yt ) ? ? `(f (xt ), yt )? ? Bn (f ; x1?n , y1?n ) ?x1?n , y1?n , ?f ? F.
(2)
We distinguish three types of adaptive bounds, according to whether Bn (f ; x1?n , y1?n ) depends only
on f , only on (x1?n , y1?n ), or on both quantities. Whenever Bn depends on f , an adaptive regret can
be viewed as an oracle inequality which penalizes each f according to a measure of its complexity
(e.g. the complexity of the smallest model to which it belongs). As in statistical learning, an oracle
inequality (2) may be proved for certain functions Bn (f ; x1?n , y1?n ) even if a uniform bound (1)
cannot hold for any nontrivial Bn .
1.2 Related Work
The case when $B_n(f; x_{1:n}, y_{1:n}) = B_n(x_{1:n}, y_{1:n})$ does not depend on $f$ has received most of the attention in the literature. The focus is on bounds that can be tighter for "nice sequences," yet maintain near-optimal worst-case guarantees. An incomplete list of prior work includes [9, 10, 11, 12], couched in the setting of online linear/convex optimization, and [13] in the experts setting.
A bound of type $B_n(f)$ was studied in [14], which presented an algorithm that competes with all experts simultaneously, but with varied regret with respect to each of them depending on the quantile of the expert. Another bound of this type was given by [15], who consider online linear optimization with an unbounded set and provide oracle inequalities with an appropriately chosen function $B_n(f)$.
Finally, the third category of adaptive bounds are those that depend on both the hypothesis $f \in \mathcal{F}$ and the data. The bounds that depend on the loss of the best function (so-called "small-loss" bounds, [16, Sec. 2.4], [17, 13]) fall in this category trivially, since one may overbound the loss of the best
function by the performance of f . We draw attention to the recent result of [18] who show an adaptive
bound in terms of both the loss of comparator and the KL divergence between the comparator and
some pre-fixed prior distribution over experts. An MDL-style bound in terms of the variance of the
loss of the comparator (under the distribution induced by the algorithm) was recently given in [19].
Our study was also partly inspired by Cover [20] who characterized necessary and sufficient conditions
for achievable bounds in prediction of binary sequences. The methods in [20], however, rely on the
structure of the binary prediction problem and do not readily generalize to other settings.
The framework we propose recovers the vast majority of known adaptive rates in literature, including
variance bounds, quantile bounds, localization-based bounds, and fast rates for small losses. It should
be noted that while existing literature on adaptive online learning has focused on simple hypothesis
classes such as finite experts and finite-dimensional p-norm balls, our results extend to general
hypothesis classes, including large nonparametric ones discussed in [7].
2 Adaptive Rates and Achievability: General Setup
The first step in building a general theory for adaptive online learning is to identify what adaptive regret bounds are possible to achieve. Recall that an adaptive regret bound $B_n : \mathcal{F} \times \mathcal{X}^n \times \mathcal{Y}^n \to \mathbb{R}$ is said to be achievable if there exists an online learning algorithm such that (2) holds.
In the rest of this work, we use the notation $\langle\!\langle \ldots \rangle\!\rangle_{t=1}^{n}$ to denote the interleaved application of the operators inside the brackets, repeated over $t = 1, \ldots, n$ rounds (see [21]). Achievability of an adaptive rate can be formalized by the following minimax quantity.
Definition 1. Given an adaptive rate $B_n$, we define the offset minimax value:
\[
\mathcal{A}_n(\mathcal{F}, B_n) \triangleq \Bigl\langle\!\Bigl\langle \sup_{x_t\in\mathcal{X}}\, \inf_{q_t\in\Delta(\mathcal{D})}\, \sup_{y_t\in\mathcal{Y}}\, \mathbb{E}_{\hat y_t\sim q_t} \Bigr\rangle\!\Bigr\rangle_{t=1}^{n} \Bigl[ \sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}} \Bigl\{ \sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n}) \Bigr\} \Bigr].
\]
$\mathcal{A}_n(\mathcal{F}, B_n)$ quantifies how $\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}}\{\sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n})\}$ behaves when the optimal learning algorithm that minimizes this difference is used against Nature trying to maximize it. Directly from this definition,
An adaptive rate $B_n$ is achievable if and only if $\mathcal{A}_n(\mathcal{F}, B_n) \leq 0$.
If $B_n$ is a uniform rate, i.e., $B_n(f; x_{1:n}, y_{1:n}) = B_n$, achievability reduces to the minimax analysis explored in [8]. The uniform rate $B_n$ is achievable if and only if $B_n \geq \mathcal{V}_n(\mathcal{F})$, where $\mathcal{V}_n(\mathcal{F})$ is the minimax value of the online learning game.
We now focus on understanding the minimax value $\mathcal{A}_n(\mathcal{F}, B_n)$ for general adaptive rates. We first show that the minimax value is bounded by an offset version of the sequential Rademacher complexity studied in [8]. The symmetrization Lemma 1 below provides us with the first step towards a probabilistic analysis of achievable rates. Before stating the lemma, we need to define the notion of a tree and the notion of sequential Rademacher complexity.
Given a set $\mathcal{Z}$, a $\mathcal{Z}$-valued tree $\mathbf{z}$ of depth $n$ is a sequence $(\mathbf{z}_t)_{t=1}^n$ of functions $\mathbf{z}_t : \{\pm 1\}^{t-1} \to \mathcal{Z}$. One may view $\mathbf{z}$ as a complete binary tree decorated by elements of $\mathcal{Z}$. Let $\sigma = (\sigma_t)_{t=1}^n$ be a sequence of independent Rademacher random variables. Then $(\mathbf{z}_t(\sigma))$ may be viewed as a predictable process with respect to the filtration $\mathcal{S}_t = \sigma(\sigma_1, \ldots, \sigma_t)$. For a tree $\mathbf{z}$, the sequential Rademacher complexity of a function class $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$ on $\mathbf{z}$ is defined as
\[
\mathcal{R}_n(\mathcal{G}, \mathbf{z}) \triangleq \mathbb{E}_\sigma \sup_{g\in\mathcal{G}} \sum_{t=1}^n \sigma_t g(\mathbf{z}_t(\sigma))
\quad\text{and}\quad
\mathcal{R}_n(\mathcal{G}) \triangleq \sup_{\mathbf{z}} \mathcal{R}_n(\mathcal{G}, \mathbf{z}).
\]
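For a finite class and a concrete tree, $\mathcal{R}_n(\mathcal{G}, \mathbf{z})$ can be estimated by Monte Carlo directly from the definition. The following Python sketch is illustrative; the tree encoding (a list of functions of the sign prefix) is an assumption of the sketch, not a construction from the paper.

```python
import numpy as np

def seq_rademacher(tree, G, trials=2000, seed=0):
    """Monte Carlo estimate of R_n(G, z) = E_sigma sup_g sum_t sigma_t g(z_t(sigma)).
    tree[t] maps the sign prefix (sigma_1, ..., sigma_t) to an element of Z,
    so tree[0] takes the empty prefix; G is a finite list of maps Z -> R."""
    rng = np.random.default_rng(seed)
    n, total = len(tree), 0.0
    for _ in range(trials):
        sigma = rng.choice([-1, 1], size=n)
        path = [tree[t](tuple(sigma[:t])) for t in range(n)]
        total += max(sum(sigma[t] * g(path[t]) for t in range(n)) for g in G)
    return total / trials
```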
Lemma 1. For any lower semi-continuous loss $\ell$, and any adaptive rate $B_n$ that only depends on outcomes (i.e. $B_n(f; x_{1:n}, y_{1:n}) = B_n(y_{1:n})$), we have that
\[
\mathcal{A}_n \leq \sup_{\mathbf{x}, \mathbf{y}} \mathbb{E}_\sigma \Bigl[ \sup_{f\in\mathcal{F}} \Bigl\{ 2\sum_{t=1}^n \sigma_t\, \ell(f(\mathbf{x}_t(\sigma)), \mathbf{y}_t(\sigma)) \Bigr\} - B_n(\mathbf{y}_{1:n}(\sigma)) \Bigr]. \tag{3}
\]
Further, for any general adaptive rate $B_n$,
\[
\mathcal{A}_n \leq \sup_{\mathbf{x}, \mathbf{y}, \mathbf{y}'} \mathbb{E}_\sigma \Bigl[ \sup_{f\in\mathcal{F}} \Bigl\{ 2\sum_{t=1}^n \sigma_t\, \ell(f(\mathbf{x}_t(\sigma)), \mathbf{y}_t(\sigma)) - B_n(f; \mathbf{x}_{1:n}(\sigma), \mathbf{y}'_{2:n+1}(\sigma)) \Bigr\} \Bigr]. \tag{4}
\]
Finally, if one considers the supervised learning problem where $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$, $\mathcal{Y} \subseteq \mathbb{R}$ and $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a loss that is convex and $L$-Lipschitz in its first argument, then for any adaptive rate $B_n$,
\[
\mathcal{A}_n \leq \sup_{\mathbf{x}, \mathbf{y}} \mathbb{E}_\sigma \Bigl[ \sup_{f\in\mathcal{F}} \Bigl\{ 2L\sum_{t=1}^n \sigma_t f(\mathbf{x}_t(\sigma)) - B_n(f; \mathbf{x}_{1:n}(\sigma), \mathbf{y}_{1:n}(\sigma)) \Bigr\} \Bigr]. \tag{5}
\]
The above lemma tells us that to check whether an adaptive rate is achievable, it is sufficient to check that the corresponding adaptive sequential complexity measures are non-positive. We remark that if the above complexities are bounded by some positive quantity of a smaller order, one can form a new achievable rate $B_n'$ by adding the positive quantity to $B_n$.

3 Probabilistic Tools
As mentioned in the introduction, our technique rests on certain one-sided probabilistic inequalities. We now state the first building block: a rather straightforward maximal inequality.
Proposition 2. Let $I = \{1, \ldots, N\}$, $N \leq \infty$, be a set of indices and let $(X_i)_{i\in I}$ be a sequence of random variables satisfying the following tail condition: for any $\tau > 0$,
\[
\mathbb{P}(X_i - B_i > \tau) \leq C_1 \exp\bigl(-\tau^2/(2\sigma_i^2)\bigr) + C_2 \exp(-\tau s_i) \tag{6}
\]
for some positive sequence $(B_i)$, nonnegative sequence $(\sigma_i)$, and nonnegative sequence $(s_i)$ of numbers, and for constants $C_1, C_2 \geq 0$. Then for any $\delta \leq \sigma_1$, $\bar s \leq s_1$, and
\[
\Delta_i = \max\Bigl\{ \frac{\sigma_i}{B_i}\sqrt{2\log(\sigma_i/\delta) + 4\log i},\; (B_i s_i)^{-1}\log\bigl(i^2(\bar s\, s_i)^{-1}\bigr) \Bigr\} + 1,
\]
it holds that
\[
\mathbb{E} \sup_{i\in I} \{X_i - B_i \Delta_i\} \leq 3C_1\delta + 2C_2(\bar s)^{-1}. \tag{7}
\]
We remark that $B_i$ need not be the expected value of $X_i$, as we are not interested in two-sided deviations around the mean.
One of the approaches to obtaining oracle-type inequalities is to split a large class into smaller
ones according to a "complexity radius" and control a certain stochastic process separately on each
subset (also known as the peeling technique). In the applications below, Xi will often stand for the
(random) supremum of this process on subset i, and Bi will be an upper bound on its typical size.
Given deviation bounds for $X_i$ above $B_i$, the dilated size $B_i\Delta_i$ then allows one to pass to maximal
inequalities (7) and thus verify achievability in Lemma 1. The same strategy works for obtaining
data-dependent bounds, where we first prove tail bounds for the given size of the data-dependent
quantity, then appeal to (7).
A simple yet powerful example for the control of the supremum of a stochastic process is an inequality
due to Pinelis [22] for the norm (which is a supremum over the dual ball) of a martingale in a 2-smooth
Banach space. Here we state a version of this result that can be found in [23, Appendix A].
Lemma 3. Let $\mathcal{Z}$ be a unit ball in a separable $(2, D)$-smooth Banach space $\mathcal{H}$. For any $\mathcal{Z}$-valued tree $\mathbf{z}$, and any $\tau > \sqrt{4D^2 n}$,
\[
\mathbb{P}\Bigl( \Bigl\| \sum_{t=1}^n \sigma_t \mathbf{z}_t(\sigma) \Bigr\| \geq \tau \Bigr) \leq 2\exp\Bigl( -\frac{\tau^2}{8 D^2 n} \Bigr).
\]
When the class of functions is not linear, we may no longer appeal to the above lemma. Instead, we
make use of a result from [24] that extends Lemma 3 at a price of a poly-logarithmic factor. Before
stating this lemma, we briefly define the relevant complexity measures (see [24] for more details).
First, a set $V$ of $\mathbb{R}$-valued trees is called an $\alpha$-cover of $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$ on $\mathbf{z}$ with respect to $\ell_p$ if
\[
\forall g \in \mathcal{G},\ \forall \sigma \in \{\pm 1\}^n,\ \exists v \in V \text{ s.t. } \sum_{t=1}^n \bigl( g(\mathbf{z}_t(\sigma)) - v_t(\sigma) \bigr)^p \leq n\alpha^p.
\]
The size of the smallest $\alpha$-cover is denoted by $\mathcal{N}_p(\mathcal{G}, \alpha, \mathbf{z})$, and $\mathcal{N}_p(\mathcal{G}, \alpha, n) \triangleq \sup_{\mathbf{z}} \mathcal{N}_p(\mathcal{G}, \alpha, \mathbf{z})$. The set $V$ is an $\alpha$-cover of $\mathcal{G}$ on $\mathbf{z}$ with respect to $\ell_\infty$ if
\[
\forall g \in \mathcal{G},\ \forall \sigma \in \{\pm 1\}^n,\ \exists v \in V \text{ s.t. } |g(\mathbf{z}_t(\sigma)) - v_t(\sigma)| \leq \alpha \quad \forall t \in [n].
\]
We let $\mathcal{N}_\infty(\mathcal{G}, \alpha, \mathbf{z})$ be the size of the smallest such cover and set $\mathcal{N}_\infty(\mathcal{G}, \alpha, n) = \sup_{\mathbf{z}} \mathcal{N}_\infty(\mathcal{G}, \alpha, \mathbf{z})$.
Lemma 4 ([24]). Let $\mathcal{G} \subseteq [-1, 1]^{\mathcal{Z}}$. Suppose $\mathcal{R}_n(\mathcal{G})/n \to 0$ as $n \to \infty$ and that the following mild assumptions hold: $\mathcal{R}_n(\mathcal{G}) \geq 1/n$, $\mathcal{N}_\infty(\mathcal{G}, 2^{-1}, n) \geq 4$, and there exists a constant $\kappa$ such that $\kappa \geq \sum_{j=1}^\infty \mathcal{N}_\infty(\mathcal{G}, 2^{-j}, n)^{-1}$. Then for any $\theta > 12/n$ and any $\mathcal{Z}$-valued tree $\mathbf{z}$ of depth $n$,
\[
\mathbb{P}\Bigl( \sup_{g\in\mathcal{G}} \Bigl|\sum_{t=1}^n \sigma_t g(\mathbf{z}_t(\sigma))\Bigr| > 8\Bigl(1 + \theta\sqrt{8n\log^3(en^2)}\Bigr)\,\mathcal{R}_n(\mathcal{G}) \Bigr)
\leq \mathbb{P}\Bigl( \sup_{g\in\mathcal{G}} \Bigl|\sum_{t=1}^n \sigma_t g(\mathbf{z}_t(\sigma))\Bigr| > n \inf_{\alpha>0}\Bigl\{ 4\alpha + 6\theta \int_\alpha^1 \sqrt{\log \mathcal{N}_\infty(\mathcal{G}, \delta, n)}\, d\delta \Bigr\} \Bigr)
\leq 2\kappa\, e^{-\frac{n\theta^2}{4}}.
\]
The above lemma yields a one-sided control on the size of the supremum of the sequential Rademacher process, as required for our oracle-type inequalities.
Next, we turn our attention to an offset Rademacher process, where the supremum is taken over a collection of negative-mean random variables. The behavior of this offset process was shown to govern the optimal rates of convergence for online nonparametric regression [7]. Such a one-sided control of the supremum will be necessary for some of the data-dependent upper bounds we develop.
Lemma 5. Let $\mathbf{z}$ be a $\mathcal{Z}$-valued tree of depth $n$, and let $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$. For any $\lambda \geq 1/n$ and $\theta > 0$,
\[
\mathbb{P}\Bigl( \sup_{g\in\mathcal{G}} \sum_{t=1}^n \bigl[ \sigma_t g(\mathbf{z}_t(\sigma)) - 2\lambda g^2(\mathbf{z}_t(\sigma)) \bigr] - 12\sqrt{2} \int_{1/n}^1 \sqrt{n\log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta - 1 > \theta \Bigr)
\leq \beta\Bigl( \exp\Bigl(-\frac{\theta^2}{2\Lambda^2}\Bigr) + \exp\Bigl(-\frac{\lambda\theta}{2}\Bigr) \Bigr),
\]
where $\beta = \sum_{j=1}^{\log_2(2n^2)} \mathcal{N}_2(\mathcal{G}, 2^{-j}, \mathbf{z})^{-2}$ and $\Lambda = 12 \int_{1/n}^1 \sqrt{n\log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta$.
We observe that the probability of deviation has both subgaussian and subexponential components. Using the above result and Proposition 2 leads to useful bounds on the quantities in Lemma 1 for specific types of adaptive rates. Given a tree $\mathbf{z}$, we obtain a bound on the expected size of the sequential Rademacher process when we subtract off the data-dependent $\ell_2$-norm of the function on the tree $\mathbf{z}$, adjusted by logarithmic terms.
Corollary 6. Suppose $\mathcal{G} \subseteq [-1, 1]^{\mathcal{Z}}$, and let $\mathbf{z}$ be any $\mathcal{Z}$-valued tree of depth $n$. Assume $\log \mathcal{N}_2(\mathcal{G}, \delta, n) \leq \delta^{-p}$ for some $p < 2$. Then
\[
\mathbb{E} \sup_{g\in\mathcal{G},\,\gamma} \Bigl[ \sum_{t=1}^n \sigma_t g(\mathbf{z}_t(\sigma)) - 4\sqrt{2(\log n)\log \mathcal{N}_2(\mathcal{G}, \gamma/2, \mathbf{z})} \cdot \sqrt{\sum_{t=1}^n g^2(\mathbf{z}_t(\sigma)) + 1} \Bigr]
\leq 24\sqrt{2}\,\log n \int_{1/n}^1 \sqrt{n \log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta + 7 + 2\log n.
\]
The next corollary yields slightly faster rates than Corollary 6 when $|\mathcal{G}| < \infty$.
Corollary 7. Suppose $\mathcal{G} \subseteq [-1, 1]^{\mathcal{Z}}$ with $|\mathcal{G}| = N$, and let $\mathbf{z}$ be any $\mathcal{Z}$-valued tree of depth $n$. Then
\[
\mathbb{E} \sup_{g\in\mathcal{G}} \Bigl[ \sum_{t=1}^n \sigma_t g(\mathbf{z}_t(\sigma)) - 2\sqrt{\log\Bigl(\log N \sum_{t=1}^n g^2(\mathbf{z}_t(\sigma)) + e\Bigr)} \cdot \sqrt{32 \log N \sum_{t=1}^n g^2(\mathbf{z}_t(\sigma)) + e} \Bigr] \leq 1.
\]

4 Achievable Bounds
In this section we use Lemma 1 along with the probabilistic tools from the previous section to obtain
an array of achievable adaptive bounds for various online learning problems. We subdivide the section
into one subsection for each category of adaptive bound described in Section 1.1.
4.1 Adapting to Data
Here we consider adaptive rates of the form $B_n(x_{1:n}, y_{1:n})$, uniform over $f \in \mathcal{F}$. We show the power of the developed tools on the following example.
Example 4.1 (Online Linear Optimization in $\mathbb{R}^d$). Consider the problem of online linear optimization where $\mathcal{F} = \{f \in \mathbb{R}^d : \|f\|_2 \leq 1\}$, $\mathcal{Y} = \{y : \|y\|_2 \leq 1\}$, $\mathcal{X} = \{0\}$, and $\ell(\hat y, y) = \langle \hat y, y\rangle$. The following adaptive rate is achievable:
\[
B_n(y_{1:n}) = 16\sqrt{d \log n}\, \Bigl\| \sum_{t=1}^n y_t y_t^\top \Bigr\|^{1/2} + 16\sqrt{d \log n},
\]
where $\|\cdot\|$ is the spectral norm. Let us deduce this result from Corollary 6. First, observe that
\[
\Bigl\| \sum_{t=1}^n y_t y_t^\top \Bigr\|^{1/2} = \sup_{f:\|f\|_2 \leq 1} \sqrt{f^\top \Bigl( \sum_{t=1}^n y_t y_t^\top \Bigr) f} = \sup_{f\in\mathcal{F}} \sqrt{\sum_{t=1}^n \ell^2(f, y_t)}.
\]
The linear function class $\mathcal{F}$ can be covered point-wise at any scale $\epsilon$ with $(3/\epsilon)^d$ balls, and thus $\mathcal{N}(\ell \circ \mathcal{F}, 1/(2n), \mathbf{z}) \leq (6n)^d$ for any $\mathcal{Y}$-valued tree $\mathbf{z}$. We apply Corollary 6 with $\gamma = 1/n$, and the integral term in the corollary vanishes, yielding the claimed statement.
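The adaptive rate in Example 4.1 is directly computable from the observed outcomes; here is a small numpy sketch, with the constants as reconstructed above:

```python
import numpy as np

def spectral_adaptive_bound(Y):
    """Evaluate B_n(y_{1:n}) from Example 4.1; Y has shape (n, d), rows y_t."""
    n, d = Y.shape
    spec = np.linalg.norm(Y.T @ Y, ord=2)  # spectral norm of sum_t y_t y_t^T
    c = 16.0 * np.sqrt(d * np.log(n))
    return c * np.sqrt(spec) + c
```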
4.2 Model Adaptation
In this subsection we focus on achievable rates for oracle inequalities and model selection, but without dependence on data. The form of the rate is therefore $B_n(f)$. Assume we have a class $\mathcal{F} = \bigcup_{R\geq 1} \mathcal{F}(R)$, with the property that $\mathcal{F}(R) \subseteq \mathcal{F}(R')$ for any $R \leq R'$. If we are told by an oracle that regret will be measured with respect to those hypotheses $f \in \mathcal{F}$ with $R(f) \triangleq \inf\{R : f \in \mathcal{F}(R)\} \leq R^\star$, then using the minimax algorithm one can guarantee a regret bound of at most the sequential Rademacher complexity $\mathcal{R}_n(\mathcal{F}(R^\star))$. On the other hand, given the optimality of the sequential Rademacher complexity for online learning problems for commonly encountered losses, we can argue that for any $f \in \mathcal{F}$ chosen in hindsight, one cannot expect a regret better than order $\mathcal{R}_n(\mathcal{F}(R(f)))$. In this section we show that simultaneously for all $f \in \mathcal{F}$, one can attain an adaptive upper bound of $O\bigl(\mathcal{R}_n(\mathcal{F}(R(f)))\,\log(\mathcal{R}_n(\mathcal{F}(R(f))))\,\log^{3/2} n\bigr)$. That is, we may predict as if we knew the optimal radius, at the price of a logarithmic factor. This is the price of adaptation.
Corollary 8. For any class of predictors $\mathcal{F}$ with $\mathcal{F}(1)$ non-empty, if one considers the supervised learning problem with a 1-Lipschitz loss $\ell$, the following rate is achievable:
\[
B_n(f) = \log^{3/2} n \Biggl[ K_1\, \mathcal{R}_n(\mathcal{F}(2R(f))) \Biggl( 1 + \sqrt{\log\Bigl( \frac{\log(2R(f))\, \mathcal{R}_n(\mathcal{F}(2R(f)))}{\mathcal{R}_n(\mathcal{F}(1))} \Bigr)} \Biggr) + K_2\, \mathcal{R}_n(\mathcal{F}(1)) \Biggr],
\]
for absolute constants $K_1, K_2$, and $\kappa$ defined in Lemma 4.
In fact, this statement is true more generally with $\mathcal{F}(2R(f))$ replaced by $\ell \circ \mathcal{F}(2R(f))$. It is tempting to attempt to prove the above statement with the exponential weights algorithm running as an aggregation procedure over the solutions for each $R$. In general, this approach will fail for two reasons. First, if function values grow with $R$, the exponential weights bound will scale linearly with this value. Second, an experts bound yields only a slower $\sqrt{n}$ rate.
As a special case of the above lemma, we obtain an online PAC-Bayesian theorem. We postpone this example to the next sub-section, where we give a data-dependent version of this result. We now provide a bound for online linear optimization in 2-smooth Banach spaces that automatically adapts to the norm of the comparator. To prove it, we use the concentration bound from [22] (Lemma 3) within the proof of the above corollary to remove the extra logarithmic factors.
Example 4.2 (Unconstrained Linear Optimization). Consider linear optimization with $\mathcal{Y}$ being the unit ball of some reflexive Banach space with norm $\|\cdot\|_*$. Let $\mathcal{F} = \mathcal{D}$ be the dual space and the loss $\ell(\hat y, y) = \langle \hat y, y\rangle$ (where we are using $\langle\cdot,\cdot\rangle$ to represent the application of the linear functional in the first argument to the second argument). Define $\mathcal{F}(R) = \{f : \|f\| \leq R\}$, where $\|\cdot\|$ is the norm dual to $\|\cdot\|_*$. If the unit ball of $\mathcal{Y}$ is $(2, D)$-smooth, then the following rate is achievable for all $f$ with $\|f\| \geq 1$:
\[
B(f) = D\sqrt{n}\,\Bigl( 8\|f\|\bigl(1 + \sqrt{\log(2\|f\|) + \log\log(2\|f\|)}\bigr) + 12 \Bigr).
\]
For the case of a Hilbert space, the above bound was achieved by [15].
4.3 Adapting to Data and Model Simultaneously
We now study achievable bounds that perform online model selection in a data-adaptive way. Of specific interest is our online optimistic PAC-Bayesian bound. This bound should be compared to [18, 19], with the reader noting that it is independent of the number of experts, is algorithm-independent, and depends quadratically on the expected loss of the expert we compare against.
Example 4.3 (Generalized Predictable Sequences (Supervised Learning)). Consider an online supervised learning problem with a convex 1-Lipschitz loss. Let $(M_t)_{t\geq 1}$ be any predictable sequence that the learner can compute at round $t$ based on information provided so far, including $x_t$ (one can think of the predictable sequence $M_t$ as a prior guess for the hypothesis we would compare with in hindsight). Then the following adaptive rate is achievable:
\[
B_n(f; x_{1:n}) = \inf_{\gamma} \Bigl\{ K_1 \sqrt{\log n \cdot \log \mathcal{N}_2(\mathcal{F}, \gamma/2, n)} \cdot \sqrt{\sum_{t=1}^n (f(x_t) - M_t)^2 + 1} + K_2 \log n \int_{1/n}^1 \sqrt{n \log \mathcal{N}_2(\mathcal{F}, \delta, n)}\, d\delta + 2\log n + 7 \Bigr\},
\]
for constants $K_1 = 4\sqrt{2}$, $K_2 = 24\sqrt{2}$ from Corollary 6. The achievability is a direct consequence of Eq. (5) in Lemma 1, followed by Corollary 6 (one can include any predictable sequence in the Rademacher average part because $\sigma_t M_t$ is zero mean). In particular, if we assume that the sequential covering of the class $\mathcal{F}$ grows as $\log \mathcal{N}_2(\mathcal{F}, \epsilon, n) \leq \epsilon^{-p}$ for some $p < 2$, we get that
\[
B_n(f) = \tilde O\Bigl( \Bigl( \sum_{t=1}^n (f(x_t) - M_t)^2 + 1 \Bigr)^{\frac{1}{2}\bigl(1 - \frac{p}{2}\bigr)} \cdot n^{\frac{p}{4}} \Bigr).
\]
As $p$ gets closer to 0, we get full adaptivity and replace $n$ by $\sum_{t=1}^n (f(x_t) - M_t)^2 + 1$. On the other hand, as $p$ gets closer to 2 (i.e. more complex function classes), we do not adapt and get a uniform bound in terms of $n$. For $p \in (0, 2)$, we attain a natural interpolation.
Example 4.4 (Regret to Fixed vs. Regret to Best (Supervised Learning)). Consider an online supervised learning problem with a convex 1-Lipschitz loss and let $|\mathcal{F}| = N$. Let $f^\star \in \mathcal{F}$ be a fixed expert chosen in advance. The following bound is achievable:
\[
B_n(f, x_{1:n}) = 4\sqrt{\log\Bigl( \log N \sum_{t=1}^n (f(x_t) - f^\star(x_t))^2 + e \Bigr)} \cdot \sqrt{32 \log N \sum_{t=1}^n (f(x_t) - f^\star(x_t))^2 + e} + 2.
\]
In particular, against $f^\star$ we have $B_n(f^\star, x_{1:n}) = O(1)$, and against an arbitrary expert we have $B_n(f, x_{1:n}) = O\bigl(\sqrt{n \log N \cdot \log(n \log N)}\bigr)$. This bound follows from Eq. (5) in Lemma 1, followed by Corollary 7. This extends the study of [25] to supervised learning and a general class of experts $\mathcal{F}$.
Example 4.5 (Optimistic PAC-Bayes). Assume that we have a countable set of experts and that the loss of each expert on any round is non-negative and bounded by 1. The function class $\mathcal{F}$ is the set of all distributions over these experts, and $\mathcal{X} = \{0\}$. This setting can be formulated as online linear optimization where the loss of mixture $f$ over experts, given instance $y$, is $\langle f, y\rangle$, the expected loss under the mixture. The following adaptive bound is achievable:
\[
B_n(f; y_{1:n}) = \sqrt{50\bigl(\mathrm{KL}(f\|\pi) + \log n\bigr) \sum_{t=1}^n \mathbb{E}_{i\sim f}\langle e_i, y_t\rangle^2} + 50\bigl(\mathrm{KL}(f\|\pi) + \log n\bigr) + 10.
\]
This adaptive bound is an online PAC-Bayesian bound. The rate adapts not only to the KL divergence of $f$ with a fixed prior $\pi$ but also replaces $n$ with $\sum_{t=1}^n \mathbb{E}_{i\sim f}\langle e_i, y_t\rangle^2$. Note that we have $\sum_{t=1}^n \mathbb{E}_{i\sim f}\langle e_i, y_t\rangle^2 \leq \sum_{t=1}^n \langle f, y_t\rangle$, yielding the small-loss type bound described earlier. This is an improvement over the bound in [18] in that the bound is independent of the number of experts, and so holds even for countably infinite sets of experts. The KL term in our bound may be compared to the MDL-style term in the bound of [19]. If we have a large (but finite) number of experts and take $\pi$ to be uniform, the above bound provides an improvement over both [14]¹ and [18].
Evaluating the above bound with a distribution $f$ that places all its weight on any one expert appears to address the open question posed by [13] of obtaining algorithm-independent oracle-type variance bounds for experts. The proof of achievability of the above rate is shown in the appendix because it requires a slight variation on the symmetrization lemma specific to the problem.
¹See [18] for a comparison of KL-based bounds and quantile bounds.
5 Relaxations for Adaptive Learning
To design algorithms for achievable rates, we extend the framework of online relaxations from [26]. A relaxation $\mathbf{Rel}_n : \bigcup_{t=0}^n \mathcal{X}^t \times \mathcal{Y}^t \to \mathbb{R}$ that satisfies the initial condition
\[
\mathbf{Rel}_n(x_{1:n}, y_{1:n}) \geq -\inf_{f\in\mathcal{F}} \Bigl\{ \sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n}) \Bigr\}, \tag{8}
\]
and the recursive condition
\[
\mathbf{Rel}_n(x_{1:t-1}, y_{1:t-1}) \geq \sup_{x_t\in\mathcal{X}}\, \inf_{q_t\in\Delta(\mathcal{D})}\, \sup_{y_t\in\mathcal{Y}}\, \mathbb{E}_{\hat y_t\sim q_t}\bigl[ \ell(\hat y_t, y_t) + \mathbf{Rel}_n(x_{1:t}, y_{1:t}) \bigr], \tag{9}
\]
is said to be admissible for the adaptive rate $B_n$. The relaxation's corresponding strategy is $\hat q_t = \arg\min_{q_t\in\Delta(\mathcal{D})} \sup_{y_t\in\mathcal{Y}} \mathbb{E}_{\hat y_t\sim q_t}[\ell(\hat y_t, y_t) + \mathbf{Rel}_n(x_{1:t}, y_{1:t})]$, which enjoys the adaptive bound
\[
\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}} \Bigl\{ \sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n}) \Bigr\} \leq \mathbf{Rel}_n(\emptyset) \quad \forall x_{1:n}, y_{1:n}.
\]
It follows immediately that the strategy achieves the rate $B_n(f; x_{1:n}, y_{1:n}) + \mathbf{Rel}_n(\emptyset)$. Our goal is then to find relaxations for which the strategy is computationally tractable and $\mathbf{Rel}_n(\emptyset) \leq 0$, or at least has smaller order than $B_n$. Similar to [26], conditional versions of the offset minimax values $\mathcal{A}_n$ yield admissible relaxations, but solving these relaxations may not be computationally tractable.
Example 5.1 (Online PAC-Bayes). Consider the experts setting in Example 4.5 with:
\[
B_n(f) = 3\sqrt{2n\max\{\mathrm{KL}(f\|\pi), 1\}} + 4\sqrt{n}.
\]
Let $R_i = 2^{i-1}$ and let $q^R_t(y)$ denote the exponential weights distribution with learning rate $\sqrt{R/n}$: $q^R(y_{1:t})_k \propto \pi_k \exp\bigl(-\sqrt{R/n}\,(\sum_{s=1}^t y_s)_k\bigr)$. The following is an admissible relaxation achieving $B_n$:
\[
\mathbf{Rel}_n(y_{1:t}) = \inf_{\lambda > 0} \frac{1}{\lambda} \log\Bigl( \sum_i \exp\Bigl( -\lambda \Bigl[ \sum_{s=1}^t \bigl\langle q^{R_i}(y_{1:s-1}), y_s \bigr\rangle + \sqrt{n R_i} \Bigr] \Bigr) \Bigr) + 2\lambda(n - t).
\]
Let $q^*_t$ be a distribution with $(q^*_t)_i \propto \exp\bigl( -\frac{1}{\sqrt{n}} \bigl[ \sum_{s=1}^{t-1} \bigl\langle q^{R_i}(y_{1:s-1}), y_s \bigr\rangle + \sqrt{n R_i} \bigr] \bigr)$. We predict by drawing $i$ according to $q^*_t$, then drawing an expert according to $q^{R_i}(y_{1:t-1})$.
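A direct (if naive, quadratic-time) implementation of the two-level sampling strategy in Example 5.1 is sketched below, assuming a finite expert set and a finite grid of radii; the grid size, prior, and function names are illustrative assumptions of the sketch.

```python
import numpy as np

def exp_weights(prior, losses, eta):
    # q^R: exponential weights over experts with learning rate eta = sqrt(R/n);
    # `losses` has shape (t, K) with row s the loss vector y_s.
    cum = losses.sum(axis=0) if losses.size else np.zeros_like(prior)
    w = prior * np.exp(-eta * cum)
    return w / w.sum()

def pac_bayes_predict(prior, losses, n, n_radii=10, seed=0):
    """Sample a radius index i from q*, then an expert from q^{R_i}(y_{1:t-1})."""
    rng = np.random.default_rng(seed)
    losses = np.atleast_2d(np.asarray(losses, dtype=float))
    t = losses.shape[0] if losses.size else 0
    R = 2.0 ** np.arange(n_radii)  # grid R_i = 2^{i-1}
    scores = np.zeros(n_radii)
    for i, Ri in enumerate(R):
        eta = np.sqrt(Ri / n)
        # cumulative mixture loss sum_s <q^{R_i}(y_{1:s-1}), y_s>, plus sqrt(n R_i)
        mix = sum(np.dot(exp_weights(prior, losses[:s], eta), losses[s])
                  for s in range(t))
        scores[i] = mix + np.sqrt(n * Ri)
    q_star = np.exp(-(scores - scores.min()) / np.sqrt(n))  # shift for stability
    q_star /= q_star.sum()
    i = rng.choice(n_radii, p=q_star)
    q = exp_weights(prior, losses, np.sqrt(R[i] / n))
    return rng.choice(len(prior), p=q)
```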
While in general the problem of obtaining an efficient adaptive relaxation might be hard, one can ask the question: "If an efficient relaxation $\mathbf{Rel}^R_n$ is available for each $\mathcal{F}(R)$, can one obtain an adaptive model selection algorithm for all of $\mathcal{F}$?" To this end, for the supervised learning problem with convex Lipschitz loss, we delineate a meta approach which utilizes existing relaxations for each $\mathcal{F}(R)$.
Lemma 9. Let $q^R_t(y_1, \ldots, y_{t-1})$ be the randomized strategy corresponding to $\mathbf{Rel}^R_n$, obtained after observing outcomes $y_1, \ldots, y_{t-1}$, and let $\tau : \mathbb{R} \to \mathbb{R}$ be nonnegative. The following relaxation is admissible for the rate $B_n(R) = \mathbf{Rel}^R_n(\emptyset)\,\tau(\mathbf{Rel}^R_n(\emptyset))$:
\[
\mathbf{Ada}_n(x_{1:t}, y_{1:t}) = \sup_{\mathbf{x}, \mathbf{y}, \mathbf{y}'} \mathbb{E}_{\sigma_{t+1:n}} \sup_{R\geq 1} \Bigl[ \mathbf{Rel}^R_n(x_{1:t}, y_{1:t}) - \mathbf{Rel}^R_n(\emptyset)\,\tau(\mathbf{Rel}^R_n(\emptyset)) + 2\sum_{s=t+1}^n \sigma_s\, \mathbb{E}_{\hat y_s \sim q^R_s(y_{1:t},\, \mathbf{y}_{t+1:s-1}(\sigma))}\, \ell(\hat y_s, \mathbf{y}_s(\sigma)) \Bigr].
\]
Playing according to the strategy for $\mathbf{Ada}_n$ will guarantee a regret bound of $B_n(R) + \mathbf{Ada}_n(\emptyset)$, and $\mathbf{Ada}_n(\emptyset)$ can be bounded using Proposition 2 when the form of $\tau$ is as in that proposition.
We remark that the above strategy is not necessarily obtained by running a high-level experts algorithm over the discretized values of $R$. It is an interesting question to determine the cases when such a strategy is optimal. More generally, when the adaptive rate $B_n$ depends on data, it is not possible to obtain the rates we show non-constructively in this paper using the exponential weights algorithm with meta-experts, as the required weighting over experts would be data-dependent (and hence is not a prior over experts). Further, the bounds from exponential-weights-type algorithms are akin to having sub-exponential tails in Proposition 2, but for many problems we may have sub-gaussian tails.
Obtaining computationally efficient methods from the proposed framework is an interesting research direction. Proposition 2 provides a useful non-constructive tool to establish achievable adaptive bounds, and a natural question to ask is if one can obtain a constructive counterpart for the proposition.
References
[1] Lucien Birgé, Pascal Massart, et al. Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli, 4(3):329–375, 1998.
[2] Gábor Lugosi and Andrew B. Nobel. Adaptive model selection using empirical complexities. Annals of Statistics, pages 1830–1864, 1999.
[3] Peter L. Bartlett, Stéphane Boucheron, and Gábor Lugosi. Model selection and error estimation. Machine Learning, 48(1-3):85–113, 2002.
[4] Pascal Massart. Concentration inequalities and model selection, volume 10. Springer, 2007.
[5] Shahar Mendelson. Learning without Concentration. In Conference on Learning Theory, 2014.
[6] Tengyuan Liang, Alexander Rakhlin, and Karthik Sridharan. Learning with square loss: Localization through offset Rademacher complexity. Proceedings of The 28th Conference on Learning Theory, 2015.
[7] Alexander Rakhlin and Karthik Sridharan. Online nonparametric regression. Proceedings of The 27th Conference on Learning Theory, 2014.
[8] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In Advances in Neural Information Processing Systems 23, 2010.
[9] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2):165–188, 2010.
[10] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In COLT, 2012.
[11] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[13] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[14] Kamalika Chaudhuri, Yoav Freund, and Daniel J. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems, pages 297–305, 2009.
[15] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. Proceedings of The 27th Conference on Learning Theory, 2014.
[16] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[17] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems, pages 2199–2207, 2010.
[18] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: Adaptive normalhedge. CoRR, abs/1502.05934, 2015.
[19] Wouter M. Koolen and Tim van Erven. Second-order quantile methods for experts and combinatorial games. In Proceedings of the 28th Annual Conference on Learning Theory (COLT), pages 1155–1175, 2015.
[20] Thomas M. Cover. Behavior of sequential predictors of binary sequences. In Trans. 4th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263–272. Publishing House of the Czechoslovak Academy of Sciences, 1967.
[21] Alexander Rakhlin and Karthik Sridharan. Statistical learning theory and sequential prediction, 2012. Available at http://stat.wharton.upenn.edu/~rakhlin/book_draft.pdf.
[22] Iosif Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, 22(4):1679–1706, 1994.
[23] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Beyond regret. arXiv preprint arXiv:1011.3168, 2010.
[24] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 2014.
[25] Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
[26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.
5,352 | 5,845 | Deep Visual Analogy-Making
Scott Reed Yi Zhang Yuting Zhang Honglak Lee
University of Michigan, Ann Arbor, MI 48109, USA
{reedscot,yeezhang,yutingzh,honglak}@umich.edu
Abstract
In addition to identifying the content within a single image, relating images and
generating related images are critical tasks for image understanding. Recently,
deep convolutional networks have yielded breakthroughs in predicting image labels, annotations and captions, but have only just begun to be used for generating high-quality images. In this paper we develop a novel deep network trained
end-to-end to perform visual analogy making, which is the task of transforming a
query image according to an example pair of related images. Solving this problem
requires both accurately recognizing a visual relationship and generating a transformed query image accordingly. Inspired by recent advances in language modeling, we propose to solve visual analogies by learning to map images to a neural
embedding in which analogical reasoning is simple, such as by vector subtraction
and addition. In experiments, our model effectively models visual analogies on
several datasets: 2D shapes, animated video game sprites, and 3D car models.
1
Introduction
Humans are good at considering ?what-if?? questions about objects in their environment. What if
this chair were rotated 30 degrees clockwise? What if I dyed my hair blue? We can easily imagine
roughly how objects would look according to various hypothetical questions. However, current
generative models of images struggle to perform this kind of task without encoding significant prior
knowledge about the environment and restricting the allowed transformations.
Infer Relationship
Transform query
Often, these visual hypothetical questions
can be effectively answered by analogical reasoning.1 Having observed many
similar objects rotating, one could learn
to mentally rotate new objects. Having
observed objects with different colors (or
textures), one could learn to mentally reFigure 1: Visual analogy making concept. We learn
color (or re-texture) new objects.
an encoder function f mapping images into a space
Solving the analogy problem requires the in which analogies can be performed, and a decoder
ability to identify relationships among im- g mapping back to the image space.
ages and transform query images accordingly. In this paper, we propose to solve the problem by directly training on visual analogy completion; that is, to generate the transformed image output. Note that we do not make any claim about
how humans solve the problem, but we show that in many cases thinking by analogy is enough to
solve it, without exhaustively encoding first principles into a complex model.
We denote a valid analogy as a 4-tuple A : B :: C : D, often spoken as ?A is to B as C is to D?. Given
such an analogy, there are several questions one might ask:
? A ? B :: C ? D - What is the common relationship?
? A : B ? C : D - Are A and B related in the same way that C and D are related?
? A : B :: C : ? - What is the result of applying the transformation A : B to C?
1
See [2] for a deeper philosophical discussion of analogical reasoning.
1
The first two questions can be viewed as discriminative tasks, and could be formulated as classification problems. The third question requires generating an appropriate image to make a valid analogy.
Since a model with this capability would be of practical interest, we focus on this question.
Our proposed approach is to learn a deep encoder function f : RD ? RK that maps images to an
embedding space suitable for reasoning about analogies, and a deep decoder function g : RK ? RD
that maps from the embedding back to the image space. (See Figure 1.) Our encoder function is
inspired by word2vec [21], GloVe [22] and other embedding methods that map inputs to a space
supporting analogies by vector addition. In those models, analogies could be performed via
d = arg maxw?V cos(f (w), f (b) ? f (a) + f (c))
where V is the vocabulary and (a, b, c, d) form an analogy tuple such that a : b :: c : d. Other
variations, such as a multiplicative version [18], on this inference have been proposed. The vector
f (b) ? f (a) represents the transformation, which is applied to a query c by vector addition in the
embedding space. In the case of images, we can modify this naturally by replacing the cosine
similarity and argmax over the vocabulary with application of a decoder function mapping from
the embedding back to the image space.
Clearly, this simple vector addition will not accurately model transformations for low-level representations such as raw pixels, and so in this work we seek to learn a high-level representation. In
our experiments, we parametrize the encoder f and decoder g as deep convolutional neural networks (CNN), but in principle other methods could be used to model f and g. In addition to vector
addition, we also propose more powerful methods of applying the inferred transformations to new
images, such as higher-order multiplicative interactions and multi-layer additive interactions.
We first demonstrate visual analogy making on a 2D shapes benchmark, with variation in shape,
color, rotation, scaling and position, and evaluate the performance on analogy completion. Second,
we generate a dataset of animated 2D video game character sprites using graphics assets from the
Liberated Pixel Cup [1]. We demonstrate the capability of our model to transfer animations onto
novel characters from a single frame, and to perform analogies that traverse the manifold induced
by an animation. Third, we apply our model to the task of analogy making on 3D car models, and
show that our model can perform 3D pose transfer and rotation by analogy.
2
Related Work
Hertzmann et al. [12] developed a method for applying new textures to images by analogy. This
problem is of practical interest, e.g., for stylizing animations [3]. Our model can also synthesize
new images by analogy to examples, but we study global transformations rather than only changing
the texture of the image.
Doll?ar et al. [9] developed Locally-Smooth Manifold Learning to traverse image manifolds. We
share a similar motivation when analogical reasoning requires walking along a manifold (e.g. pose
analogies), but our model leverages a deep encoder and decoder trainable by backprop.
Memisevic and Hinton [19] proposed the Factored Gated Boltzmann Machine for learning to represent transformations between pairs of images. This and related models [25, 8, 20] use 3-way tensors
or their factorization to infer translations, rotations and other transformations from a pair of images,
and apply the same transformation to a new image. In this work, we share a similar goal, but we
directly train a deep predictive model for the analogy task without requiring 3-way multiplicative
connections, with the intent to scale to bigger images and learn more subtle relationships involving
articulated pose, multiple attributes and out-of-plane rotation.
Our work is related to several previous works on disentangling factors of variation, for which a
common application is analogy-making. As an early example, bilinear models [27] were proposed
to separate style and content factors of variation in face images and speech signals. Tang et al. [26]
developed the tensor analyzer which uses a factor loading tensor to model the interaction among
latent factor groups, and was applied to face modeling. Several variants of higher-order Boltzmann
machine were developed to tackle the disentangling problem, featuring multiple groups of hidden
units, with each group corresponding to a single factor [23, 7]. Disentangling was also considered
in the discriminative case in the Contractive Discriminative Analysis model [24]. Our work differs
from these in that we train a deep end-to-end network for generating images by analogy.
Recently several methods were proposed to generate high-quality images using deep networks.
Dosovitskiy et al. [10] used a CNN to generate chair images with controllable variation in appear2
ance, shape and 3D pose. Contemporary to our work, Kulkarni et al. [17] proposed the Deep Convolutional Inverse Graphics Network, which is a form of variational autoencoder (VAE) [15] in which
the encoder disentangles factors of variation. Other works have considered a semi-supervised extension of the VAE [16] incorporating class labels associated to a subset of the training images, which
can control the label units to perform some visual analogies. Cohen and Welling [6] developed a
generative model of commutative Lie groups (e.g. image rotation, translation) that produced invariant and disentangled representations. In [5], this work is extended to model the non-commutative
3D rotation group SO(3). Zhu et al. [30] developed the multi-view perceptron for modeling face
identity and viewpoint, and generated high quality faces subject to view changes. Cheung et al.
[4] also use a convolutional encoder-decoder model, and develop a regularizer to disentangle latent
factors of variation from a discriminative target.
Analogies have been well-studied in the NLP community; Turney [28] used analogies from SAT
tests to evaluate the performance of text analogy detection methods. In the visual domain, Hwang
et al. [13] developed an analogy-preserving visual-semantic embedding model that could both detect
analogies and as a regularizer improve visual recognition performance. Our work is related to these,
but we focus mainly on generating images to complete analogies rather than detecting analogies.
3
Method
Suppose that A is the set of valid analogy tuples in the training set. For example, (a, b, c, d) ? A
implies the statement ?a is to b as c is to d?. Let the input image space for images a, b, c, d be RD ,
and the embedding space be RK (typically K < D). Denote the encoder as f : RD ? RK and the
decoder as g : RK ? RD . Figure 2 illustrates our architectures for visual analogy making.
3.1 Making analogies by vector addition
Neural word representations (e.g., [21, 22]) have been shown to be capable of analogy-making by
addition and subtraction of word embeddings. Analogy making capability appears to be an emergent
property of these embeddings, but for images we propose to directly train on the objective of analogy
completion. Concretely, we propose the following objective for vector-addition-based analogies:
X
Ladd =
||d ? g(f (b) ? f (a) + f (c))||22
(1)
a,b,c,d?A
This objective has the advantage of being very simple to implement and train. In addition, with a
modest number of labeled relations, a large number of training analogies can be mined.
3.2 Making analogy transformations dependent on the query context
In some cases, a purely additive model of applying transformations may not be ideal. For example,
in the case of rotation, the manifold of a rotated object is circular, and after enough rotation has
been applied, one returns to the original point. In the vector-addition model, we can add the same
rotation vector f (b) ? f (a) multiple times to a query f (c), but we will never return to the original
point (except when f (b) = f (a)). The decoder g could (in principle) solve this problem by learning
to perform a ?modulus? operation, but this would make the training significantly more difficult.
Instead, we propose to parametrize the transformation increment to f (c) as a function of both f (b)?
f (a) and f (c) itself. In this way, analogies can be applied in a context-dependent way.
We present two variants of our training objective to solve this problem. The first, which we will call
Lmul , uses multiplicative interactions between f (b) ? f (a) and f (c) to generate the increment. The
second, which we call Ldeep , uses multiple fully connected layers to form a multi-layer perceptron
(MLP) without using multiplicative interactions:
X
Lmul =
||d ? g(f (c) + W ?1 [f (b) ? f (a)]?2 f (c))||22
(2)
a,b,c,d?A
Ldeep =
X
||d ? g(f (c) + h([f (b) ? f (a); f (c)]))||22 .
(3)
a,b,c,d?A
For Lmul , W ? RK?K?K is a 3-way tensor.2 In practice, to reduce the number of weights we
P
(1)
(2)
(3)
used a factorized tensor parametrized as Wijl = f Wif Wjf Wlf . Multiplicative interactions
2
R
K
For a tensor W ? RK?K?K and vectors v, w ? RK , we define the tensor multiplication W ?1 v ?2 w ?
P
PK
as (W ?1 v ?2 w)l = K
i=1
j=1 Wijl vi wj , ?l ? {1, ..., K}.
3
Encoder network f
a
Increment
function T
b
add
mul
deep
Decoder
network g
c
d
f(b)
add
f(b)
mul
f(b)
f(a)
f(a)
f(a)
f(c)
f(c)
f(c)
deep
Figure 2: Illustration of the network structure for analogy making. The top portion shows the
encoder, transformation module, andN
decoder. The bottom portion illustrates the transformations
used for Ladd , Lmul and Ldeep . The
icon in Lmul indicates a tensor product. We share weights
with all three encoder networks shown on the top left.
were similarly used in bilinear models [27], disentangling Boltzmann Machines [23] and Tensor
Analyzers [26]. Note that our multiplicative interaction in Lmul is different from [19] in that we use
the difference between two encoding vectors (i.e., f (b) ? f (a)) to infer about the transformation (or
relation), rather than using a higher-order interaction (e.g., tensor product) for this inference.
For Ldeep , h : R2K ? RK is an MLP (deep Algorithm 1: Manifold traversal by analogy,
network without 3-way multiplicative interactions) with transformation function T (Eq. 5).
and [f (b) ? f (a); f (c)] denotes concatenation of the
Given images a, b, c, and N (# steps)
transformation vector with the query embedding.
z ? f (c)
Optimizing the above objectives teaches the model for i = 1 to N do
to predict analogy completions in image space, but
z ? z + T (f (a), f (b), z)
in order to traverse image manifolds (e.g. for rexi ? g(z)
peated analogies) as in Algorithm 1, we also want return generated images xi (i = 1, ..., N )
accurate analogy completions in the embedding
space. To encourage this property, we introduce a regularizer to make the predicted transformation increment T (f (a), f (b), f (c)) match the difference of encoder embeddings f (d) ? f (c):
X
R=
||f (d) ? f (c) ? T (f (a), f (b), f (c))||22 , where
(4)
a,b,c,d?A
?
?y ? x
T (x, y, z) = W ?1 [y ? x] ?2 z
?
M LP ([y ? x; z])
when using Ladd
when using Lmul
when using Ldeep
(5)
The overall training objective is a weighted combination of analogy prediction and the above regularizer, e.g. Ldeep +?R. We set ? = 0.01 by cross validation on the shapes data and found it worked
well for all models on sprites and 3D cars as well. All parameters were trained with backpropagation
using stochastic gradient descent (SGD).
3.3 Analogy-making with a disentangled feature representation
Visual analogies change some aspects of a query image, and leave others unchanged; for example,
changing the viewpoint but preserving the shape and texture of an object. To exploit this fact,
we incorporate disentangling into our analogy prediction model. A disentangled representation is
simply a concatenation of coordinates along each underlying factor of variation. If one can reliably
infer these disentangled coordinates, a subset of analogies can be solved simply by swapping sets
of coordinates among a reference and query embedding, and projecting back into the image space.
However, in general, disentangling alone cannot solve analogies that require traversing the manifold
structure of a given factor, and by itself does not capture image relationships.
In this section we show how to incorporate disentangled features into our analogy model. The
disentangling component makes each group of embedding features encode its respective factor of
variation and be invariant to the others. The analogy component enables the model to traverse the
manifold of a given factor or subset of factors.
4
Identity
a
switches s
Pitch
c
Elevation
b
Identity
Pitch
Elevation
Algorithm 2: Disentangling training update. The
switches s determine which units from f (a) and
f (b) are used to reconstruct image c.
Given input images a, b and target c
Given switches s ? {0, 1}K
z ? s ? f (a) + (1 ? s) ? f(b)
?? ? ?/?? ||g(z) ? c||22
Figure 3: The encoder f learns a disentangled representation, in this case for pitch, elevation and
identity of 3D car models. In the example above, switches s would be a block [0; 1; 1] vector.
For learning a disentangled representation, we require three-image tuples: a pair from which to
extract hidden units, and a third to act as a target for prediction. As shown in Figure 3, We use a
vector of switch units s that decides which elements from f (a) and which from f (b) will be used
to form the hidden representation z ? RK . Typically s will have a block structure according to the
groups of units associated to each factor of variation. Once z has been extracted, it is projected back
into the image space via the decoder g(z).
The key to learning disentangled features is that images a, b, c should be distinct, so that there is no
path from any image to itself. This way, the reconstruction target forces the network to separate the
visual concepts shared by (a, c) and (b, c), respectively, rather than learning the identity mapping.
Concretely, the disentangling objective can be written as:
X
Ldis =
||c ? g(s ? f (a) + (1 ? s) ? f (b))||22
(6)
a,b,c,s?D
Note that unlike analogy training, disentangling only requires a dataset D of 3-tuple of images a, b, c
along with a switch unit vector s. Intuitively, s describes the sense in which a, b and c are related.
Algorithm 2 describes the learning update we used to learn a disentangled representation.
4
Experiments
We evaluated our methods using three datasets. The first is a set of 2D colored shapes, which is
a simple yet nontrivial benchmark for visual analogies. The second is a set of 2D sprites from the
open-source video game project called Liberated Pixel Cup [1], which we chose in order to get
controlled variation in a large number of character attributes and animations. The third is a set of
3D car model renderings [11], which allowed us to train a model to perform out-of-plane rotation.
We used Caffe [14] to train our encoder and decoder networks, with a custom Matlab wrapper
implementing our analogy sampling and training objectives. Many additional qualitative results of
images generated by our model are presented in the supplementary material.
4.1 Transforming shapes: comparison of analogy models
The shapes dataset was used to benchmark performance on rotation, scaling and translation analogies. Specifically, we generated 48 ? 48 images scaled to [0, 1] with four shapes, eight colors, four
scales, five row and column positions, and 24 rotation angles.
We compare the performance of our models trained with Ladd , Lmul and Ldeep objectives, respectively. We did not perform disentangling training in this experiment. The encoder f consisted
of 4096-1024-512-dimensional fully connected layers, with rectified linear nonlinearities (relu) for
intermediate layers. The final embedding layer did not use any nonlinearity. The decoder g architecture mirrors the encoder, but did not share weights. We trained for 200k steps with mini-batch
size 25 (i.e. 25 analogy 4-tuples per mini-batch). We used SGD with momentum 0.9, base learning
rate 0.001 and decayed the learning rate by factor 0.1 every 100k steps.
Model
Rotation steps
Scaling steps
Translation steps
1
2
3
4
1
2
3
4
1
2
3
4
Ladd
8.39 11.0 15.1 21.5 5.57 6.09 7.22 14.6 5.44 5.66 6.25 7.45
Lmul
8.04 11.2 13.5 14.2 4.36 4.70 5.78 14.8 4.24 4.45 5.24 6.90
Ldeep
1.98 2.19 2.45 2.87 3.97 3.94 4.37 11.9 3.84 3.81 3.96 4.61
Table 1: Comparison of squared pixel prediction error of Ladd , Lmul and Ldeep on shape analogies.
5
ref
+rot (gt)
query
+rot
+rot
+rot
+rot
ref
+scl (gt)
query
+scl
+scl
+scl
+scl
ref
+trans (gt)
query
+trans
+trans
+trans
+trans
Figure 4: Analogy predictions made by Ldeep for
rotation, scaling and translation, respectively by
row. Ladd and Lmul perform as well for scaling
and transformation, but fail for rotation.
Figure 5: Mean-squared prediction error on
repeated application of rotation analogies.
Figure 4 shows repeated predictions from Ldeep on rotation, scaling and translation test set analogies,
showing that our model has learned to traverse these manifolds. Table 1 shows that Ladd and Lmul
perform similarly for scaling and translation, but only Ldeep can perform accurate rotation analogies.
Further extrapolation results with repeated rotations are shown in Figure 5. Though both Lmul and
Ldeep are in principle capable of learning the circular pose manifold, we suspect that Ldeep has
much better performance due to the difficulty of training multiplicative models such as Lmul .
4.2 Generating 2D video game sprites
Game developers often use what are known as ?sprites? to portray characters and objects in 2D video
games (more commonly on older systems, but still seen on phones and indie games). This entails
significant human effort to draw each frame of each common animation for each character.3 In this
section we show how animations can be transferred to new characters by analogy.
Our dataset consists of 60 ? 60 color images of sprites scaled to [0, 1], with 7 attributes: body type,
sex, hair type, armor type, arm type, greaves type, and weapon type, with 672 total unique characters.
For each character, there are 5 animations each from 4 viewpoints: spellcast, thrust, walk, slash and
shoot. Each animation has between 6 and 13 frames. We split the data by characters: 500 training,
72 validation and 100 for testing.
We conducted experiments using the Ladd and Ldeep variants of our objective, with and without disentangled features. We also experimented with a disentangled feature version in which the identity
units are taken to be the 22-dimensional character attribute vector, from which the pose is disentangled. In this case, the encoder for identity units acts as multiple softmax classifiers, one for each
attribute, hence we refer to this objective in experiments as Ldis+cls .
The encoder network consisted of two layers of 5 ? 5 convolution with stride 2 and relu, followed by
two fully-connected and relu layers, followed by a projection onto the 1024-dimensional embedding.
The decoder mirrors the encoder. To increase the spatial dimension we use simple upsampling in
which we copy each input cell value to the upper-left corner of its corresponding 2 ? 2 output.
For Ldis , we used 512 units for identity and 512 for pose. For Ldis+cls , we used 22 categorical
units for identity, which is the attribute vector, and the remaining 490 for pose. During training for
Ldis+cls , we did not backpropagate reconstruction error through the identity units; we only used
the attribute classification objective for those units. When Ldeep is used, the internal layers of the
transformation function T (see Figure 2) had dimension 300, and were each followed by relu. We
trained the models using SGD with momentum 0.9 and learning rate 0.00001 decayed by factor 0.1
every 100k steps. Training was conducted for 200k steps with mini-batch size 25.
Figure 6 demonstrates the task of animation transfer, with predictions from a model trained on Ladd .
Table 2 provides a quantitative comparison of Ladd , Ldis and Ldis+cls . We found that the disentangling and additive analogy models perform similarly, and that using attributes for disentangled
identity features provides a further gain. We conjecture that Ldis+cls wins because changes in certain aspects of appearance, such as arm color, have a very small effect in pixel space yielding a weak
signal for pixel prediction, but still provides a strong signal to an attribute classifier.
3
In some cases the work may be decreased by projecting 3D models to 2D or by other heuristics, but in
general the work scales with the number of animations and characters.
6
Figure 6: Transferring animations. The top row shows the reference, and the bottom row shows the
transferred animation, where the first frame (in red) is the starting frame of a test set character.
Model
spellcast thrust walk slash shoot average
Ladd
41.0
53.8
55.7
52.1
77.6
56.0
Ldis
40.8
55.8
52.6
53.5
79.8
56.5
Ldis+cls
13.3
24.6
17.2
18.9
40.8
23.0
Table 2: Mean-squared pixel error on test analogies, by animation.
From a practical perspective, the ability to transfer poses accurately to unseen characters could help
decrease manual labor of drawing (at least of drawing the assets comprising each character in each
animation frame). However, training this model required that each transferred animation already
has hundreds of examples. Ideally, the model could be shown a small number of examples for a
new animation, and transfer it to the existing character database. We call this setting ?few-shot?
analogy-making because only a small number of the target animations are provided.
Model
Ladd
Ldis
Ldis+cls
Num.
6
42.8
19.3
15.0
Reference
of few-shot examples
12
24
48
42.7 42.3
41.0
18.9 17.4
16.3
12.0 11.3
10.4
Table 3: Mean-squared pixel-prediction error
for few-shot analogy transfer of the ?spellcast?
animation from each of 4 viewpoints. Ldis outperforms Ladd , and Ldis+cls performs the best
even with only 6 examples.
Output
Query
Prediction
Figure 7: Few shot prediction with 48 examples.
Table 3 provides a quantitative comparison and figure 7 provides a qualitative comparison of our
proposed models in this task. We find that Ldis+cls provides the best performance by a wide margin.
Unlike in Table 2, Ldis outperforms Ladd , suggesting that disentangling may allow new animations
to be learned in a more data-efficient manner. However, Ldis has an advantage in that it can average
the identity features of multiple views of a query character, which Ladd cannot do.
The previous analogies only required us to combine disentangled features from two characters, e.g.
the identity from one and the pose from another, and so disentangling was sufficient. However,
our analogy method enables us to perform more challenging analogies by learning the manifold of
character animations, defined by the sequence of frames in each animation. Adjacent frames are thus
neighbors on the manifold and each animation sequence can be viewed as a fiber in this manifold.
We trained a model by forming analogy tuples across animations as depicted in Fig. 8, using disentangled identity and pose features. Pose transformations were modeled by deep additive interactions,
and we used Ldis+cls to disentangle pose from identity units. Figure 9 shows the result of several
analogies and their extrapolations, including character rotation for which we created animations.
Figure 8: A cartoon visualization of the ?shoot? animation manifold for two different characters
in different viewpoints. The model can learn the structure of the animation manifold by forming
analogy tuples during training; example tuples are circled in red and blue above.
7
ref.
output query predictions
walk
thrust
rotate
Figure 9: Extrapolating by analogy. The model sees the reference / output pair and repeatedly
applies the inferred transformation to the query. This inference requires learning the manifold of
animation poses, and cannot be done by simply combining and decoding disentangled features.
4.3 3D car analogies
In this section we apply our model to analogy-making on 3D car renderings subject to changes in
appearance and rotation angle. Unlike in the case of shapes, this requires the ability of the model to
perform out-of-plane rotation, and the depicted objects are more complex.
Features
Pose units
ID units
Combined
Pose AUC
95.6
50.1
94.6
ID AUC
85.2
98.5
98.4
Pose
Table 4: Measuring the disentangling performance on 3D
cars. Pose AUC refers to area under the ROC curve for
same-or-different pose verification, and ID AUC for sameor-different car verification on pairs of test set images.
ID
GT
Prediction
Figure 10: 3D car analogies. The
column ?GT? denotes ground truth.
We use the car CAD models from [11]. For each of the 199 car models, we generated 64 ? 64 color
renderings from 24 rotation angles each offset by 15 degrees. We split the models into 100 training,
49 validation and 50 testing. The same convolutional network architecture was used as in the sprites
experiments, and we used 512 units for identity and 128 for pose.
ref
output
-4
-3
-2
-1
query
+1
+2
+3
+4
Figure 11: Repeated rotation analogies in forward and reverse directions, starting from frontal pose.
Figure 10 shows test set predictions of our model trained on Ldis , where images in the fourth column
combine pose units from the first column and identity units from the second. Table 4 shows that the
learned features are in fact disentangled, and discriminative for identity and pose matching despite
not being discriminatively trained. Figure 11 shows repeated rotation analogies on test set cars using
a model trained on Ldeep , demonstrating that our model can perform out-of-plane rotation. This type
of extrapolation is difficult because the query image shows a different car from a different starting
pose. We expect that a recurrent architecture can further improve the results, as shown in [29].
5
Conclusions
We studied the problem of visual analogy making using deep neural networks, and proposed several
new models. Our experiments showed that our proposed models are very general and can learn to
make analogies based on appearance, rotation, 3D pose, and various object attributes. We provide
connection between analogy making and disentangling factors of variation, and showed that our
proposed analogy representations can overcome certain limitations of disentangled representations.
Acknowledgements This work was supported in part by NSF GRFP grant DGE-1256260, ONR
grant N00014-13-1-0762, NSF CAREER grant IIS-1453651, and NSF grant CMMI-1266184. We
thank NVIDIA for donating a Tesla K40 GPU.
8
References
[1] Liberated pixel cup. http://lpc.opengameart.org/. Accessed: 2015-05-21.
[2] P. Bartha. Analogy and analogical reasoning. In The Stanford Encyclopedia of Philosophy. Fall 2013
edition, 2013.
[3] P. B?enard, F. Cole, M. Kass, I. Mordatch, J. Hegarty, M. S. Senn, K. Fleischer, D. Pesare, and K. Breeden.
Stylizing animation by example. ACM Transactions on Graphics, 32(4):119, 2013.
[4] B. Cheung, J. A. Livezey, A. K. Bansal, and B. A. Olshausen. Discovering hidden factors of variation in
deep networks. In ICLR Workshop, 2015.
[5] T. Cohen and M. Welling. Learning the irreducible representations of commutative Lie groups. In ICML,
2014.
[6] T. Cohen and M. Welling. Transformation properties of learned visual representations. In ICLR, 2015.
[7] G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative entangling.
arXiv preprint arXiv:1210.5474, 2012.
[8] W. Ding and G. W. Taylor. Mental rotation by optimizing transforming distance. arXiv preprint
arXiv:1406.3010, 2014.
[9] P. Doll?ar, V. Rabaud, and S. Belongie. Learning to traverse image manifolds. In NIPS, 2007.
[10] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In CVPR, 2015.
[11] S. Fidler, S. Dickinson, and R. Urtasun. 3d object detection and viewpoint estimation with a deformable
3d cuboid model. In NIPS, 2012.
[12] A. Hertzmann, C. Jacobs, N. Oliver, B. Curless, and D. Salesin. Image analogies. In SIGGRAPH, 2001.
[13] S. J. Hwang, K. Grauman, and F. Sha. Analogy-preserving semantic embedding for visual object categorization. In NIPS, 2013.
[14] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[15] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[16] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[17] T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network.
In NIPS, 2015.
[18] O. Levy, Y. Goldberg, and I. Ramat-Gan. Linguistic regularities in sparse and explicit word representations. In CoNLL-2014, 2014.
[19] R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order
boltzmann machines. Neural Computation, 22(6):1473?1492, 2010.
[20] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent grammar cells. In NIPS, 2014.
[21] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and
phrases and their compositionality. In NIPS, 2013.
[22] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP,
2014.
[23] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold
interaction. In ICML, 2014.
[24] S. Rifai, Y. Bengio, A. Courville, P. Vincent, and M. Mirza. Disentangling factors of variation for facial
expression recognition. In ECCV. 2012.
[25] J. Susskind, R. Memisevic, G. Hinton, and M. Pollefeys. Modeling the joint density of two images under
a variety of transformations. In CVPR, 2011.
[26] Y. Tang, R. Salakhutdinov, and G. Hinton. Tensor analyzers. In ICML, 2013.
[27] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural computation, 12(6):1247?1283, 2000.
[28] P. D. Turney. Similarity of semantic relations. Computational Linguistics, 32(3):379?416, 2006.
[29] J. Yang, S. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In NIPS, 2015.
[30] Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity
and view representations. In NIPS, 2014.
9
| 5845 |@word kohli:1 cnn:2 version:2 loading:1 sex:1 open:1 seek:1 jacob:1 sgd:3 wjf:1 shot:4 wrapper:1 animated:2 outperforms:2 greave:1 existing:1 current:1 ka:1 guadarrama:1 cad:1 luo:1 yet:1 written:1 gpu:1 additive:4 thrust:3 shape:12 enables:2 extrapolating:1 update:2 alone:1 generative:4 discovering:1 accordingly:2 plane:4 colored:1 num:1 grfp:1 detecting:1 provides:6 mental:1 traverse:6 yuting:1 org:1 zhang:3 five:1 accessed:1 along:3 qualitative:2 consists:1 combine:2 manner:1 introduce:1 roughly:1 multi:4 inspired:2 salakhutdinov:1 freeman:1 considering:1 project:1 provided:1 underlying:1 factorized:1 reedscot:1 what:6 kind:1 developer:1 developed:7 spoken:1 transformation:26 temporal:1 quantitative:2 every:2 hypothetical:2 act:2 tackle:1 grauman:1 classifier:2 scaled:2 demonstrates:1 control:1 unit:19 grant:4 modify:1 struggle:1 bilinear:3 encoding:4 despite:1 id:4 path:1 might:1 chose:1 studied:2 challenging:1 co:1 ramat:1 factorization:1 contractive:1 practical:3 unique:1 testing:2 practice:1 block:2 implement:1 differs:1 ance:1 backpropagation:1 susskind:1 area:1 significantly:1 projection:1 matching:1 word:5 refers:1 get:1 onto:2 cannot:3 context:2 applying:4 disentangles:1 map:4 dean:1 starting:3 identifying:1 factored:2 disentangled:18 embedding:16 variation:16 increment:4 coordinate:3 imagine:1 target:5 suppose:1 caption:1 dickinson:1 us:3 goldberg:1 synthesize:1 element:1 recognition:2 walking:1 labeled:1 database:1 observed:2 bottom:2 module:1 preprint:3 ding:1 solved:1 capture:1 wang:1 wj:1 connected:3 k40:1 decrease:1 contemporary:1 transforming:3 environment:2 hertzmann:2 ideally:1 exhaustively:1 traversal:1 trained:10 weakly:1 solving:2 predictive:1 purely:1 easily:1 siggraph:1 joint:1 emergent:1 various:2 fiber:1 regularizer:4 train:6 articulated:1 distinct:1 fast:1 armor:1 query:19 caffe:2 heuristic:1 supplementary:1 solve:7 stanford:1 cvpr:2 drawing:2 reconstruct:1 encoder:19 ability:3 spellcast:3 grammar:1 unseen:1 transform:2 itself:3 final:1 advantage:2 sequence:2 karayev:1 michalski:1 propose:6 reconstruction:2 interaction:11 product:2 combining:1 deformable:1 indie:1 analogical:5 sutskever:1 regularity:1 darrell:1 generating:7 categorization:1 leave:1 rotated:2 object:13 help:1 develop:2 completion:5 pose:25 recurrent:3 eq:1 strong:1 predicted:1 implies:1 direction:1 attribute:10 stochastic:1 human:3 material:1 implementing:1 backprop:1 require:2 elevation:3 im:1 extension:1 considered:2 ground:1 mapping:4 predict:1 slash:2 claim:1 desjardins:1 early:1 estimation:1 label:3 cole:1 weighted:1 clearly:1 rather:4 vae:2 linguistic:1 encode:1 rezende:1 focus:2 indicates:1 mainly:1 detect:1 sense:1 inference:3 dependent:2 typically:2 transferring:1 hidden:4 relation:3 transformed:2 comprising:1 pixel:9 arg:1 among:3 classification:2 overall:1 spatial:2 breakthrough:1 softmax:1 brox:1 once:1 never:1 having:2 sampling:1 cartoon:1 represents:1 look:1 icml:3 thinking:1 others:2 mirza:1 dosovitskiy:2 few:4 irreducible:1 argmax:1 detection:2 interest:2 mlp:2 circular:2 wlf:1 custom:1 yielding:1 swapping:1 word2vec:1 accurate:2 oliver:1 tuple:3 capable:2 encourage:1 respective:1 facial:1 modest:1 traversing:1 taylor:1 rotating:1 re:1 walk:3 girshick:1 column:4 modeling:5 ar:2 measuring:1 whitney:1 phrase:1 subset:3 hundred:1 recognizing:1 conducted:2 graphic:4 dependency:1 my:1 combined:1 decayed:2 density:1 memisevic:4 lee:3 decoding:1 synthesis:1 squared:4 emnlp:1 corner:1 style:2 return:3 suggesting:1 nonlinearities:1 stride:1 vi:1 performed:2 multiplicative:9 view:6 
extrapolation:3 portion:2 red:2 bayes:1 capability:3 annotation:1 jia:1 convolutional:8 identify:1 salesin:1 weak:1 raw:1 vincent:1 curless:1 accurately:3 produced:1 asset:2 icon:1 rectified:1 r2k:1 manual:1 mohamed:1 naturally:1 associated:2 mi:1 donating:1 gain:1 dataset:4 begun:1 ask:1 knowledge:1 car:14 color:7 subtle:1 back:5 appears:1 higher:4 supervised:3 evaluated:1 though:1 done:1 just:1 dyed:1 replacing:1 quality:3 hwang:2 dge:1 olshausen:1 usa:1 modulus:1 effect:1 concept:2 requiring:1 consisted:2 hence:1 fidler:1 semantic:3 adjacent:1 game:7 during:2 auc:4 cosine:1 bansal:1 complete:1 demonstrate:2 performs:1 reasoning:6 image:62 variational:2 shoot:3 novel:2 recently:2 common:3 rotation:28 mentally:2 cohen:3 relating:1 significant:2 refer:1 honglak:2 cup:3 rd:5 similarly:3 analyzer:3 nonlinearity:1 language:1 had:1 rot:5 entail:1 similarity:2 gt:5 add:3 base:1 disentangle:3 recent:1 showed:2 perspective:1 optimizing:2 phone:1 reverse:1 certain:2 n00014:1 nvidia:1 onr:1 yi:1 preserving:3 seen:1 additional:1 subtraction:2 determine:1 corrado:1 clockwise:1 signal:3 semi:2 multiple:6 ii:1 infer:4 smooth:1 match:1 cross:1 long:1 bigger:1 controlled:1 prediction:15 involving:1 variant:3 hair:2 pitch:3 arxiv:6 represent:2 cell:2 addition:12 want:1 liberated:3 decreased:1 source:1 weapon:1 unlike:3 induced:1 subject:2 suspect:1 call:3 yang:2 leverage:1 ideal:1 intermediate:1 enough:2 embeddings:3 wif:1 switch:6 rendering:3 relu:4 split:2 bengio:2 architecture:5 variety:1 reduce:1 rifai:1 fleischer:1 expression:1 effort:1 sprite:8 speech:1 repeatedly:1 matlab:1 deep:21 encyclopedia:1 locally:1 tenenbaum:2 sohn:1 generate:6 http:1 nsf:3 senn:1 per:1 blue:2 pollefeys:1 group:8 key:1 four:2 demonstrating:1 changing:2 inverse:2 angle:3 powerful:1 fourth:1 springenberg:1 draw:1 scaling:7 conll:1 layer:9 followed:3 mined:1 courville:2 yielded:1 nontrivial:1 worked:1 aspect:2 answered:1 chair:3 mikolov:1 conjecture:1 transferred:3 according:3 combination:1 manning:1 describes:2 across:1 character:20 lp:1 making:17 projecting:2 invariant:2 intuitively:1 taken:1 visualization:1 fail:1 scl:5 end:4 umich:1 parametrize:2 operation:1 doll:2 apply:3 eight:1 appropriate:1 batch:3 original:2 top:3 denotes:2 nlp:1 remaining:1 gan:1 linguistics:1 konda:1 exploit:1 unchanged:1 tensor:11 objective:12 question:7 already:1 cmmi:1 sha:1 gradient:1 win:1 iclr:3 distance:1 separate:2 thank:1 separating:1 concatenation:2 decoder:14 parametrized:1 upsampling:1 manifold:19 urtasun:1 modeled:1 reed:3 relationship:6 illustration:1 mini:3 difficult:2 disentangling:19 statement:1 teach:1 intent:1 reliably:1 boltzmann:4 perform:15 gated:1 upper:1 convolution:1 datasets:2 benchmark:3 descent:1 supporting:1 hinton:4 extended:1 frame:8 community:1 inferred:2 compositionality:1 pair:6 required:2 connection:2 philosophical:1 learned:4 kingma:2 nip:9 trans:5 mordatch:1 scott:1 lpc:1 including:1 video:5 critical:1 suitable:1 difficulty:1 force:1 predicting:1 zhu:2 arm:2 older:1 improve:2 created:1 portray:1 categorical:1 autoencoder:1 extract:1 auto:1 livezey:1 text:1 prior:1 understanding:1 circled:1 acknowledgement:1 multiplication:1 fully:3 expect:1 discriminatively:1 limitation:1 analogy:93 age:1 validation:3 shelhamer:1 degree:2 sufficient:1 verification:2 principle:4 viewpoint:6 share:4 translation:7 row:4 eccv:1 featuring:1 supported:1 copy:1 allow:1 deeper:1 perceptron:3 wide:1 neighbor:1 face:5 fall:1 sparse:1 distributed:1 curve:1 dimension:2 vocabulary:2 valid:3 overcome:1 concretely:2 made:1 commonly:1 
projected:1 forward:1 rabaud:1 welling:5 transaction:1 wijl:2 cuboid:1 global:2 decides:1 sat:1 belongie:1 tuples:6 discriminative:5 xi:1 latent:2 ladd:16 table:9 learn:9 transfer:6 controllable:1 career:1 complex:2 cl:10 domain:1 did:4 pk:1 motivation:1 animation:28 edition:1 allowed:2 mul:2 ref:5 repeated:5 body:1 tesla:1 fig:1 roc:1 position:2 momentum:2 explicit:1 lie:2 entangling:1 third:4 levy:1 learns:1 donahue:1 tang:3 rk:10 showing:1 offset:1 experimented:1 incorporating:1 workshop:1 socher:1 restricting:1 kulkarni:2 effectively:2 pennington:1 mirror:2 texture:5 illustrates:2 commutative:3 margin:1 chen:1 backpropagate:1 depicted:2 michigan:1 simply:3 appearance:3 forming:2 visual:20 labor:1 maxw:1 ldis:19 applies:1 truth:1 extracted:1 acm:1 viewed:2 formulated:1 goal:1 ann:1 identity:19 cheung:2 shared:1 content:3 change:4 glove:2 except:1 specifically:1 called:1 total:1 arbor:1 turney:2 andn:1 internal:1 rotate:2 frontal:1 philosophy:1 incorporate:2 evaluate:2 trainable:1 |
5,353 | 5,846 | End-To-End Memory Networks
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University
[email protected]
Arthur Szlam
Jason Weston
Rob Fergus
Facebook AI Research
New York
{aszlam,jase,robfergus}@fb.com
Abstract
We introduce a neural network with a recurrent attention model over a possibly
large external memory. The architecture is a form of Memory Network [23]
but unlike the model in that work, it is trained end-to-end, and hence requires
significantly less supervision during training, making it more generally applicable
in realistic settings. It can also be seen as an extension of RNNsearch [2] to the
case where multiple computational steps (hops) are performed per output symbol.
The flexibility of the model allows us to apply it to tasks as diverse as (synthetic)
question answering [22] and to language modeling. For the former our approach
is competitive with Memory Networks, but with less supervision. For the latter,
on the Penn TreeBank and Text8 datasets our approach demonstrates comparable
performance to RNNs and LSTMs. In both cases we show that the key concept
of multiple computational hops yields improved results.
1
Introduction
Two grand challenges in artificial intelligence research have been to build models that can make
multiple computational steps in the service of answering a question or completing a task, and
models that can describe long term dependencies in sequential data.
Recently there has been a resurgence in models of computation using explicit storage and a notion
of attention [23, 8, 2]; manipulating such a storage offers an approach to both of these challenges.
In [23, 8, 2], the storage is endowed with a continuous representation; reads from and writes to the
storage, as well as other processing steps, are modeled by the actions of neural networks.
In this work, we present a novel recurrent neural network (RNN) architecture where the recurrence
reads from a possibly large external memory multiple times before outputting a symbol. Our model
can be considered a continuous form of the Memory Network implemented in [23]. The model in
that work was not easy to train via backpropagation, and required supervision at each layer of the
network. The continuity of the model we present here means that it can be trained end-to-end from
input-output pairs, and so is applicable to more tasks, i.e. tasks where such supervision is not available, such as in language modeling or realistically supervised question answering tasks. Our model
can also be seen as a version of RNNsearch [2] with multiple computational steps (which we term
?hops?) per output symbol. We will show experimentally that the multiple hops over the long-term
memory are crucial to good performance of our model on these tasks, and that training the memory
representation can be integrated in a scalable manner into our end-to-end neural network model.
2
Approach
Our model takes a discrete set of inputs x1 , ..., xn that are to be stored in the memory, a query q, and
outputs an answer a. Each of the xi , q, and a contains symbols coming from a dictionary with V
words. The model writes all x to the memory up to a fixed buffer size, and then finds a continuous
representation for the x and q. The continuous representation is then processed via multiple hops to
output a. This allows backpropagation of the error signal through multiple memory accesses back
to the input during training.
1
2.1
Single Layer
We start by describing our model in the single layer case, which implements a single memory hop
operation. We then show it can be stacked to give multiple hops in memory.
Input memory representation: Suppose we are given an input set x1 , .., xi to be stored in memory.
The entire set of {xi } are converted into memory vectors {mi } of dimension d computed by
embedding each xi in a continuous space, in the simplest case, using an embedding matrix A (of
size d?V ). The query q is also embedded (again, in the simplest case via another embedding matrix
B with the same dimensions as A) to obtain an internal state u. In the embedding space, we compute
the match between u and each memory mi by taking the inner product followed by a softmax:
pi = Softmax(uT mi ).
where Softmax(zi ) = ezi /
P
j
(1)
ezj . Defined in this way p is a probability vector over the inputs.
Output memory representation: Each xi has a corresponding output vector ci (given in the
simplest case by another embedding matrix C). The response vector from the memory o is then a
sum over the transformed inputs ci , weighted by the probability vector from the input:
X
p i ci .
(2)
o=
i
Because the function from input to output is smooth, we can easily compute gradients and backpropagate through it. Other recently proposed forms of memory or attention take this approach,
notably Bahdanau et al. [2] and Graves et al. [8], see also [9].
Generating the final prediction: In the single layer case, the sum of the output vector o and the
input embedding u is then passed through a final weight matrix W (of size V ? d) and a softmax
to produce the predicted label:
a
? = Softmax(W (o + u))
(3)
The overall model is shown in Fig. 1(a). During training, all three embedding matrices A, B and C,
as well as W are jointly learned by minimizing a standard cross-entropy loss between a
? and the true
label a. Training is performed using stochastic gradient descent (see Section 4.2 for more details).
W
Softmax
{xi}
A2
Input
mi
A1
Embedding B
Question
q
(a)
u1
Predicted
Answer
o1
C1
Inner Product
u
u2
^
a
o2
C2
Sentences
Embedding A
u3
Out1 In1
Weights
pi
Sentences
{xi}
C3
Out2 In2
ci
W
o3
A3
Output
Embedding C
Predicted
Answer
^
a
Out3 In3
u
Weighted Sum
Softmax
o
B
(b)
Question q
Figure 1: (a): A single layer version of our model. (b): A three layer version of our model. In
practice, we can constrain several of the embedding matrices to be the same (see Section 2.2).
2.2
Multiple Layers
We now extend our model to handle K hop operations. The memory layers are stacked in the
following way:
? The input to layers above the first is the sum of the output ok and the input uk from layer k
(different ways to combine ok and uk are proposed later):
uk+1 = uk + ok .
2
(4)
? Each layer has its own embedding matrices Ak , C k , used to embed the inputs {xi }. However, as
discussed below, they are constrained to ease training and reduce the number of parameters.
? At the top of the network, the input to W also combines the input and the output of the top
memory layer: a
? = Softmax(W uK+1 ) = Softmax(W (oK + uK )).
We explore two types of weight tying within the model:
1. Adjacent: the output embedding for one layer is the input embedding for the one above,
i.e. Ak+1 = C k . We also constrain (a) the answer prediction matrix to be the same as the
final output embedding, i.e W T = C K , and (b) the question embedding to match the input
embedding of the first layer, i.e. B = A1 .
2. Layer-wise (RNN-like): the input and output embeddings are the same across different
layers, i.e. A1 = A2 = ... = AK and C 1 = C 2 = ... = C K . We have found it useful to
add a linear mapping H to the update of u between hops; that is, uk+1 = Huk + ok . This
mapping is learnt along with the rest of the parameters and used throughout our experiments
for layer-wise weight tying.
A three-layer version of our memory model is shown in Fig. 1(b). Overall, it is similar to the
Memory Network model in [23], except that the hard max operations within each layer have been
replaced with a continuous weighting from the softmax.
Note that if we use the layer-wise weight tying scheme, our model can be cast as a traditional
RNN where we divide the outputs of the RNN into internal and external outputs. Emitting an
internal output corresponds to considering a memory, and emitting an external output corresponds
to predicting a label. From the RNN point of view, u in Fig. 1(b) and Eqn. 4 is a hidden state, and
the model generates an internal output p (attention weights in Fig. 1(a)) using A. The model then
ingests p using C, updates the hidden state, and so on1 . Here, unlike a standard RNN, we explicitly
condition on the outputs stored in memory during the K hops, and we keep these outputs soft,
rather than sampling them. Thus our model makes several computational steps before producing an
output meant to be seen by the ?outside world?.
3
Related Work
A number of recent efforts have explored ways to capture long-term structure within sequences
using RNNs or LSTM-based models [4, 7, 12, 15, 10, 1]. The memory in these models is the state
of the network, which is latent and inherently unstable over long timescales. The LSTM-based
models address this through local memory cells which lock in the network state from the past. In
practice, the performance gains over carefully trained RNNs are modest (see Mikolov et al. [15]).
Our model differs from these in that it uses a global memory, with shared read and write functions.
However, with layer-wise weight tying our model can be viewed as a form of RNN which only
produces an output after a fixed number of time steps (corresponding to the number of hops), with
the intermediary steps involving memory input/output operations that update the internal state.
Some of the very early work on neural networks by Steinbuch and Piske[19] and Taylor [21] considered a memory that performed nearest-neighbor operations on stored input vectors and then fit
parametric models to the retrieved sets. This has similarities to a single layer version of our model.
Subsequent work in the 1990?s explored other types of memory [18, 5, 16]. For example, Das
et al. [5] and Mozer et al. [16] introduced an explicit stack with push and pop operations which has
been revisited recently by [11] in the context of an RNN model.
Closely related to our model is the Neural Turing Machine of Graves et al. [8], which also uses
a continuous memory representation. The NTM memory uses both content and address-based
access, unlike ours which only explicitly allows the former, although the temporal features that we
will introduce in Section 4.1 allow a kind of address-based access. However, in part because we
always write each memory sequentially, our model is somewhat simpler, not requiring operations
like sharpening. Furthermore, we apply our memory model to textual reasoning tasks, which
qualitatively differ from the more abstract operations of sorting and recall tackled by the NTM.
1
Note that in this view, the terminology of input and output from Fig. 1 is flipped - when viewed as a
traditional RNN with this special conditioning of outputs, A becomes part of the output embedding of the
RNN and C becomes the input embedding.
3
Our model is also related to Bahdanau et al. [2]. In that work, a bidirectional RNN based encoder
and gated RNN based decoder were used for machine translation. The decoder uses an attention
model that finds which hidden states from the encoding are most useful for outputting the next
translated word; the attention model uses a small neural network that takes as input a concatenation
of the current hidden state of the decoder and each of the encoders hidden states. A similar attention
model is also used in Xu et al. [24] for generating image captions. Our ?memory? is analogous to
their attention mechanism, although [2] is only over a single sentence rather than many, as in our
case. Furthermore, our model makes several hops on the memory before making an output; we will
see below that this is important for good performance. There are also differences in the architecture
of the small network used to score the memories compared to our scoring approach; we use a simple
linear layer, whereas they use a more sophisticated gated architecture.
We will apply our model to language modeling, an extensively studied task. Goodman [6] showed
simple but effective approaches which combine n-grams with a cache. Bengio et al. [3] ignited
interest in using neural network based models for the task, with RNNs [14] and LSTMs [10, 20]
showing clear performance gains over traditional methods. Indeed, the current state-of-the-art is
held by variants of these models, for example very large LSTMs with Dropout [25] or RNNs with
diagonal constraints on the weight matrix [15]. With appropriate weight tying, our model can be
regarded as a modified form of RNN, where the recurrence is indexed by memory lookups to the
word sequence rather than indexed by the sequence itself.
4 Synthetic Question and Answering Experiments
We perform experiments on the synthetic QA tasks defined in [22] (using version 1.1 of the dataset).
A given QA task consists of a set of statements, followed by a question whose answer is typically
a single word (in a few tasks, answers are a set of words). The answer is available to the model at
training time, but must be predicted at test time. There are a total of 20 different types of tasks that
probe different forms of reasoning and deduction. Here are samples of three of the tasks:
Sam walks into the kitchen.
Sam picks up an apple.
Sam walks into the bedroom.
Sam drops the apple.
Q: Where is the apple?
A. Bedroom
Brian is a lion.
Julius is a lion.
Julius is white.
Bernhard is green.
Q: What color is Brian?
A. White
Mary journeyed to the den.
Mary went back to the kitchen.
John journeyed to the bedroom.
Mary discarded the milk.
Q: Where was the milk before the den?
A. Hallway
Note that for each question, only some subset of the statements contain information needed for
the answer, and the others are essentially irrelevant distractors (e.g. the first sentence in the first
example). In the Memory Networks of Weston et al. [22], this supporting subset was explicitly
indicated to the model during training and the key difference between that work and this one is that
this information is no longer provided. Hence, the model must deduce for itself at training and test
time which sentences are relevant and which are not.
Formally, for one of the 20 QA tasks, we are given example problems, each having a set of I
sentences $\{x_i\}$ where $I \le 320$; a question sentence q and answer a. Let the jth word of sentence
i be xij , represented by a one-hot vector of length V (where the vocabulary is of size V = 177,
reflecting the simplistic nature of the QA language). The same representation is used for the
question q and answer a. Two versions of the data are used, one that has 1000 training problems
per task and a second larger one with 10,000 per task.
4.1 Model Details
Unless otherwise stated, all experiments used a K = 3 hops model with the adjacent weight sharing
scheme. For all tasks that output lists (i.e. the answers are multiple words), we take each possible
combination of possible outputs and record them as a separate answer vocabulary word.
Sentence Representation: In our experiments we explore two different representations for the sentences. The first is the bag-of-words (BoW) representation that takes the sentence $x_i = \{x_{i1}, x_{i2}, ..., x_{in}\}$, embeds each word and sums the resulting vectors: e.g. $m_i = \sum_j A x_{ij}$ and $c_i = \sum_j C x_{ij}$. The input vector $u$ representing the question is also embedded as a bag of words: $u = \sum_j B q_j$. This has the drawback that it cannot capture the order of the words in the sentence, which is important for some tasks.
We therefore propose a second representation that encodes the position of words within the sentence. This takes the form: $m_i = \sum_j l_j \cdot A x_{ij}$, where $\cdot$ is an element-wise multiplication. $l_j$ is a column vector with the structure $l_{kj} = (1 - j/J) - (k/d)(1 - 2j/J)$ (assuming 1-based indexing), with $J$ being the number of words in the sentence, and $d$ is the dimension of the embedding. This sentence representation, which we call position encoding (PE), means that the order of the words now affects $m_i$. The same representation is used for questions, memory inputs and memory outputs.
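To make the two representations concrete, the following NumPy sketch (ours, not from the paper) builds the position-encoding weights and compares the BoW and PE embeddings of a toy sentence; the embedding matrix A and the word ids are random stand-ins.

```python
import numpy as np

def position_encoding(J, d):
    # l_kj = (1 - j/J) - (k/d) * (1 - 2j/J), with 1-based j (word) and k (dim)
    l = np.zeros((J, d))
    for j in range(1, J + 1):
        for k in range(1, d + 1):
            l[j - 1, k - 1] = (1 - j / J) - (k / d) * (1 - 2 * j / J)
    return l

d, V, J = 20, 177, 6                       # embedding dim, vocab size, words
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((V, d))      # stand-in for the learned matrix A
sentence = rng.integers(0, V, size=J)      # toy sentence of word ids

m_bow = A[sentence].sum(axis=0)                             # BoW: plain sum
m_pe = (position_encoding(J, d) * A[sentence]).sum(axis=0)  # PE: order matters
```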
Temporal Encoding: Many of the QA tasks require some notion of temporal context, i.e. in the first example of Section 2, the model needs to understand that Sam is in the bedroom after he is in the kitchen. To enable our model to address them, we modify the memory vector so that $m_i = \sum_j A x_{ij} + T_A(i)$, where $T_A(i)$ is the $i$th row of a special matrix $T_A$ that encodes temporal information. The output embedding is augmented in the same way with a matrix $T_C$ (e.g. $c_i = \sum_j C x_{ij} + T_C(i)$). Both $T_A$ and $T_C$ are learned during training. They are also subject to the same sharing constraints as $A$ and $C$. Note that sentences are indexed in reverse order, reflecting their relative distance from the question so that $x_1$ is the last sentence of the story.
Learning time invariance by injecting random noise: we have found it helpful to add "dummy" memories to regularize $T_A$. That is, at training time we can randomly add 10% of empty memories to the stories. We refer to this approach as random noise (RN).
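A minimal sketch of how the temporal rows $T_A(i)$ and the RN regularization could be combined; the helper name build_memory, the initialization, and the exact positions at which empty memories are inserted are our assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, max_mem = 20, 50
T_A = 0.1 * rng.standard_normal((max_mem, d))   # learned temporal matrix (stand-in)

def build_memory(sent_embs, train=True, noise=0.1):
    """sent_embs: list of d-dim sentence vectors, oldest first."""
    embs = list(sent_embs)
    if train:                                   # RN: insert ~10% empty memories
        for _ in range(max(1, int(noise * len(embs)))):
            embs.insert(int(rng.integers(0, len(embs) + 1)), np.zeros(d))
    embs = embs[-max_mem:][::-1]                # keep last 50, index in reverse
    return np.stack([m + T_A[i] for i, m in enumerate(embs)])

mem = build_memory([rng.standard_normal(d) for _ in range(8)])
```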
4.2 Training Details
10% of the bAbI training set was held-out to form a validation set, which was used to select the optimal model architecture and hyperparameters. Our models were trained using a learning rate of η = 0.01, with anneals every 25 epochs by η/2 until 100 epochs were reached. No momentum or weight decay was used. The weights were initialized randomly from a Gaussian distribution with zero mean and σ = 0.1. When trained on all tasks simultaneously with 1k training samples (10k training samples), 60 epochs (20 epochs) were used with learning rate anneals of η/2 every 15 epochs (5 epochs). All training uses a batch size of 32 (but cost is not averaged over a batch), and gradients with an ℓ2 norm larger than 40 are divided by a scalar to have norm 40. In some of our experiments, we explored commencing training with the softmax in each memory layer removed, making the model entirely linear except for the final softmax for answer prediction. When the validation loss stopped decreasing, the softmax layers were re-inserted and training recommenced.
We refer to this as linear start (LS) training. In LS training, the initial learning rate is set to
η = 0.005. The capacity of memory is restricted to the most recent 50 sentences. Since the number
of sentences and the number of words per sentence varied between problems, a null symbol was
used to pad them all to a fixed size. The embedding of the null symbol was constrained to be zero.
On some tasks, we observed a large variance in the performance of our model (i.e. sometimes failing
badly, other times not, depending on the initialization). To remedy this, we repeated each training
10 times with different random initializations, and picked the one with the lowest training error.
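The two training devices described above, per-matrix gradient rescaling and linear start (LS), can be sketched as follows; the stand-in validation curve and the plateau test are our simplifications, not the authors' exact schedule.

```python
import numpy as np

def clip_grad(g, max_norm=40.0):
    """Divide by a scalar so the l2 norm is at most 40 (per weight matrix)."""
    n = np.linalg.norm(g)
    return g * (max_norm / n) if n > max_norm else g

# Linear start: run with the inner softmaxes removed, then re-insert them
# once the validation loss stops decreasing. The curve below is a stand-in.
val_curve = [3.0, 2.1, 1.7, 1.65, 1.66, 1.60]
linear_phase, best_val = True, float("inf")
for val_loss in val_curve:
    if linear_phase and val_loss >= best_val:
        linear_phase = False   # plateau: restore softmax in each memory layer
    best_val = min(best_val, val_loss)
```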
4.3 Baselines
We compare our approach2 (abbreviated to MemN2N) to a range of alternate models:
• MemNN: The strongly supervised AM+NG+NL Memory Networks approach, proposed in [22].
This is the best reported approach in that paper. It uses a max operation (rather than softmax) at
each layer which is trained directly with supporting facts (strong supervision). It employs n-gram
modeling, nonlinear layers and an adaptive number of hops per query.
• MemNN-WSH: A weakly supervised heuristic version of MemNN where the supporting sentence labels are not used in training. Since we are unable to backpropagate through the max
operations in each layer, we enforce that the first memory hop should share at least one word with
the question, and that the second memory hop should share at least one word with the first hop and
at least one word with the answer. All those memories that conform are called valid memories,
and the goal during training is to rank them higher than invalid memories using the same ranking
criteria as during strongly supervised training.
• LSTM: A standard LSTM model, trained using question / answer pairs only (i.e. also weakly
supervised). For more detail, see [22].
2 MemN2N source code is available at https://github.com/facebook/MemNN.
4.4 Results
We report a variety of design choices: (i) BoW vs Position Encoding (PE) sentence representation;
(ii) training on all 20 tasks independently vs jointly training (joint training used an embedding
dimension of d = 50, while independent training used d = 20); (iii) two phase training: linear start
(LS) where softmaxes are removed initially vs training with softmaxes from the start; (iv) varying
memory hops from 1 to 3.
The results across all 20 tasks are given in Table 1 for the 1k training set, along with the mean
performance for 10k training set3. They show a number of interesting points:
• The best MemN2N models are reasonably close to the supervised models (e.g. 1k: 6.7% for MemNN vs 12.6% for MemN2N with position encoding + linear start + random noise, jointly trained and 10k: 3.2% for MemNN vs 4.2% for MemN2N with position encoding + linear start + random noise + non-linearity4), although the supervised models are still superior.
• All variants of our proposed model comfortably beat the weakly supervised baseline methods.
• The position encoding (PE) representation improves over bag-of-words (BoW), as demonstrated by clear improvements on tasks 4, 5, 15 and 18, where word ordering is particularly important.
• The linear start (LS) to training seems to help avoid local minima. See task 16 in Table 1, where PE alone gets 53.6% error, while using LS reduces it to 1.6%.
• Jittering the time index with random empty memories (RN) as described in Section 4.1 gives a small but consistent boost in performance, especially for the smaller 1k training set.
• Joint training on all tasks helps.
• Importantly, more computational hops give improved performance. We give examples of the hops performed (via the values of eq. (1)) over some illustrative examples in Fig. 2 and in the supplementary material.
Baselines: MemNN [22] (strongly supervised), LSTM [22], MemNN-WSH. MemN2N variants: BoW, PE, PE LS, PE LS RN (trained per task), followed by the jointly trained columns (all use PE LS unless otherwise noted).

Task | MemNN [22] | LSTM [22] | MemNN-WSH | BoW | PE | PE LS | PE LS RN | 1 hop joint | 2 hops joint | 3 hops joint | PE LS RN joint | PE LS LW joint
1: 1 supporting fact | 0.0 | 50.0 | 0.1 | 0.6 | 0.1 | 0.2 | 0.0 | 0.8 | 0.0 | 0.1 | 0.0 | 0.1
2: 2 supporting facts | 0.0 | 80.0 | 42.8 | 17.6 | 21.6 | 12.8 | 8.3 | 62.0 | 15.6 | 14.0 | 11.4 | 18.8
3: 3 supporting facts | 0.0 | 80.0 | 76.4 | 71.0 | 64.2 | 58.8 | 40.3 | 76.9 | 31.6 | 33.1 | 21.9 | 31.7
4: 2 argument relations | 0.0 | 39.0 | 40.3 | 32.0 | 3.8 | 11.6 | 2.8 | 22.8 | 2.2 | 5.7 | 13.4 | 17.5
5: 3 argument relations | 2.0 | 30.0 | 16.3 | 18.3 | 14.1 | 15.7 | 13.1 | 11.0 | 13.4 | 14.8 | 14.4 | 12.9
6: yes/no questions | 0.0 | 52.0 | 51.0 | 8.7 | 7.9 | 8.7 | 7.6 | 7.2 | 2.3 | 3.3 | 2.8 | 2.0
7: counting | 15.0 | 51.0 | 36.1 | 23.5 | 21.6 | 20.3 | 17.3 | 15.9 | 25.4 | 17.9 | 18.3 | 10.1
8: lists/sets | 9.0 | 55.0 | 37.8 | 11.4 | 12.6 | 12.7 | 10.0 | 13.2 | 11.7 | 10.1 | 9.3 | 6.1
9: simple negation | 0.0 | 36.0 | 35.9 | 21.1 | 23.3 | 17.0 | 13.2 | 5.1 | 2.0 | 3.1 | 1.9 | 1.5
10: indefinite knowledge | 2.0 | 56.0 | 68.7 | 22.8 | 17.4 | 18.6 | 15.1 | 10.6 | 5.0 | 6.6 | 6.5 | 2.6
11: basic coreference | 0.0 | 38.0 | 30.0 | 4.1 | 4.3 | 0.0 | 0.9 | 8.4 | 1.2 | 0.9 | 0.3 | 3.3
12: conjunction | 0.0 | 26.0 | 10.1 | 0.3 | 0.3 | 0.1 | 0.2 | 0.4 | 0.0 | 0.3 | 0.1 | 0.0
13: compound coreference | 0.0 | 6.0 | 19.7 | 10.5 | 9.9 | 0.3 | 0.4 | 6.3 | 0.2 | 1.4 | 0.2 | 0.5
14: time reasoning | 1.0 | 73.0 | 18.3 | 1.3 | 1.8 | 2.0 | 1.7 | 36.9 | 8.1 | 8.2 | 6.9 | 2.0
15: basic deduction | 0.0 | 79.0 | 64.8 | 24.3 | 0.0 | 0.0 | 0.0 | 46.4 | 0.5 | 0.0 | 0.0 | 1.8
16: basic induction | 0.0 | 77.0 | 50.5 | 52.0 | 52.1 | 1.6 | 1.3 | 47.4 | 51.3 | 3.5 | 2.7 | 51.0
17: positional reasoning | 35.0 | 49.0 | 50.9 | 45.4 | 50.1 | 49.0 | 51.0 | 44.4 | 41.2 | 44.5 | 40.4 | 42.6
18: size reasoning | 5.0 | 48.0 | 51.3 | 48.1 | 13.6 | 10.1 | 11.1 | 9.6 | 10.3 | 9.2 | 9.4 | 9.2
19: path finding | 64.0 | 92.0 | 100.0 | 89.7 | 87.4 | 85.6 | 82.8 | 90.7 | 89.9 | 90.2 | 88.0 | 90.6
20: agent's motivation | 0.0 | 9.0 | 3.6 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.2
Mean error (%) | 6.7 | 51.3 | 40.2 | 25.1 | 20.3 | 16.3 | 13.9 | 25.8 | 15.6 | 13.3 | 12.4 | 15.2
Failed tasks (err. > 5%) | 4 | 20 | 18 | 15 | 13 | 12 | 11 | 17 | 11 | 11 | 11 | 10

On 10k training data:
Mean error (%) | 3.2 | 36.4 | 39.2 | 15.4 | 9.4 | 7.2 | 6.6 | 24.5 | 10.9 | 7.9 | 7.5 | 11.0
Failed tasks (err. > 5%) | 2 | 16 | 17 | 9 | 6 | 4 | 4 | 16 | 7 | 6 | 6 | 6
Table 1: Test error rates (%) on the 20 QA tasks for models using 1k training examples (mean
test errors for 10k training examples are shown at the bottom). Key: BoW = bag-of-words
representation; PE = position encoding representation; LS = linear start training; RN = random
injection of time index noise; LW = RNN-style layer-wise weight tying (if not stated, adjacent
weight tying is used); joint = joint training on all tasks (as opposed to per-task training).
5
Language Modeling Experiments
The goal in language modeling is to predict the next word in a text sequence given the previous
words x. We now explain how our model can easily be applied to this task.
3
4
More detailed results for the 10k training set can be found in the supplementary material.
Following [17] we found adding more non-linearity solves tasks 17 and 19, see the supplementary material.
6
Story (1: 1 supporting fact) | Support | Hop 1 | Hop 2 | Hop 3
Daniel went to the bathroom. | | 0.00 | 0.00 | 0.03
Mary travelled to the hallway. | | 0.00 | 0.00 | 0.00
John went to the bedroom. | | 0.37 | 0.02 | 0.00
John travelled to the bathroom. | yes | 0.60 | 0.98 | 0.96
Mary went to the office. | | 0.01 | 0.00 | 0.00
Where is John? Answer: bathroom  Prediction: bathroom

Story (2: 2 supporting facts) | Support | Hop 1 | Hop 2 | Hop 3
John dropped the milk. | | 0.06 | 0.00 | 0.00
John took the milk there. | yes | 0.88 | 1.00 | 0.00
Sandra went back to the bathroom. | | 0.00 | 0.00 | 0.00
John moved to the hallway. | yes | 0.00 | 0.00 | 1.00
Mary went back to the bedroom. | | 0.00 | 0.00 | 0.00
Where is the milk? Answer: hallway  Prediction: hallway

Story (16: basic induction) | Support | Hop 1 | Hop 2 | Hop 3
Brian is a frog. | yes | 0.00 | 0.98 | 0.00
Lily is gray. | | 0.07 | 0.00 | 0.00
Brian is yellow. | yes | 0.07 | 0.00 | 1.00
Julius is green. | | 0.06 | 0.00 | 0.00
Greg is a frog. | yes | 0.76 | 0.02 | 0.00
What color is Greg? Answer: yellow  Prediction: yellow

Story (18: size reasoning) | Support | Hop 1 | Hop 2 | Hop 3
The suitcase is bigger than the chest. | yes | 0.00 | 0.88 | 0.00
The box is bigger than the chocolate. | | 0.04 | 0.05 | 0.10
The chest is bigger than the chocolate. | yes | 0.17 | 0.07 | 0.90
The chest fits inside the container. | | 0.00 | 0.00 | 0.00
The chest fits inside the box. | | 0.00 | 0.00 | 0.00
Does the suitcase fit in the chocolate? Answer: no  Prediction: no

Figure 2: Example predictions on the QA tasks of [22]. We show the labeled supporting facts (support) from the dataset which MemN2N does not use during training, and the probabilities p of each hop used by the model during inference. MemN2N successfully learns to focus on the correct supporting sentences.
Penn Treebank:
Model | # of hidden | # of hops | memory size | Valid. perp. | Test perp.
RNN [15] | 300 | - | - | 133 | 129
LSTM [15] | 100 | - | - | 120 | 115
SCRN [15] | 100 | - | - | 120 | 115
MemN2N | 150 | 2 | 100 | 128 | 121
MemN2N | 150 | 3 | 100 | 129 | 122
MemN2N | 150 | 4 | 100 | 127 | 120
MemN2N | 150 | 5 | 100 | 127 | 118
MemN2N | 150 | 6 | 100 | 122 | 115
MemN2N | 150 | 7 | 100 | 120 | 114
MemN2N | 150 | 6 | 25 | 125 | 118
MemN2N | 150 | 6 | 50 | 121 | 114
MemN2N | 150 | 6 | 75 | 122 | 114
MemN2N | 150 | 6 | 100 | 122 | 115
MemN2N | 150 | 6 | 125 | 120 | 112
MemN2N | 150 | 6 | 150 | 121 | 114
MemN2N | 150 | 7 | 200 | 118 | 111

Text8 (the 7-hop, memory-200 configuration was not run on Text8):
Model | # of hidden | # of hops | memory size | Valid. perp. | Test perp.
RNN [15] | 500 | - | - | - | 184
LSTM [15] | 500 | - | - | 122 | 154
SCRN [15] | 500 | - | - | - | 161
MemN2N | 500 | 2 | 100 | 152 | 187
MemN2N | 500 | 3 | 100 | 142 | 178
MemN2N | 500 | 4 | 100 | 129 | 162
MemN2N | 500 | 5 | 100 | 123 | 154
MemN2N | 500 | 6 | 100 | 124 | 155
MemN2N | 500 | 7 | 100 | 118 | 147
MemN2N | 500 | 6 | 25 | 131 | 163
MemN2N | 500 | 6 | 50 | 132 | 166
MemN2N | 500 | 6 | 75 | 126 | 158
MemN2N | 500 | 6 | 100 | 124 | 155
MemN2N | 500 | 6 | 125 | 125 | 157
MemN2N | 500 | 6 | 150 | 123 | 154

Table 2: The perplexity on the test sets of Penn Treebank and Text8 corpora. Note that increasing the number of memory hops improves performance.
Figure 3: Average activation weight of memory positions during 6 memory hops. White color
indicates where the model is attending during the k-th hop. For clarity, each row is normalized to
have maximum value of 1. A model is trained on (left) Penn Treebank and (right) Text8 dataset.
We now operate on word level, as opposed to the sentence level. Thus the previous N words in the
sequence (including the current) are embedded into memory separately. Each memory cell holds
only a single word, so there is no need for the BoW or linear mapping representations used in the
QA tasks. We employ the temporal embedding approach of Section 4.1.
Since there is no longer any question, q in Fig. 1 is fixed to a constant vector 0.1 (without embedding). The output softmax predicts which word in the vocabulary (of size V) is next in the sequence. A cross-entropy loss is used to train the model by backpropagating the error through multiple memory layers, in the same manner as the QA tasks. To aid training, we apply ReLU operations to half of the units in each layer. We use layer-wise (RNN-like) weight sharing, i.e. the query weights of each layer are the same; the output weights of each layer are the same. As noted in Section 2.2, this makes our architecture closely related to an RNN which is traditionally used for language modeling tasks; however here the "sequence" over which the network is recurrent is not in the text, but in the memory hops. Furthermore, the weight tying restricts the number of parameters in the model, helping generalization for the deeper models which we find to be effective for this task. (A minimal sketch of one such word-level step appears after the dataset descriptions below.) We use two different datasets:
Penn Tree Bank [13]: This consists of 929k/73k/82k train/validation/test words, distributed over a vocabulary of 10k words. The same preprocessing as [25] was used.

Text8 [15]: This is a pre-processed version of the first 100M characters dumped from Wikipedia. This is split into 93.3M/5.7M/1M character train/validation/test sets. All words occurring less than 5 times are replaced with the <UNK> token, resulting in a vocabulary size of ~44k.
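A minimal sketch (ours) of one word-level prediction step under these choices; it keeps the constant 0.1 query, the per-word memories and the layer-wise weight tying, but omits the temporal matrix T_A and the ReLU applied to half of the units, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, N, K = 10_000, 150, 100, 6        # vocab, dim, memory size, hops
A = 0.05 * rng.standard_normal((V, d))  # input word embeddings (stand-ins)
C = 0.05 * rng.standard_normal((V, d))  # output word embeddings
W = 0.05 * rng.standard_normal((V, d))  # final prediction matrix

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_next(word_ids):
    """word_ids: previous word indices, oldest first."""
    ctx = list(word_ids[-N:])[::-1]     # each memory cell holds a single word
    m, c = A[ctx], C[ctx]
    u = np.full(d, 0.1)                 # no question: constant query vector 0.1
    for _ in range(K):                  # layer-wise (RNN-like) tied weights
        p = softmax(m @ u)              # attention over the previous N words
        u = u + c.T @ p
    return softmax(W @ u)               # distribution over the next word

probs = predict_next(rng.integers(0, V, size=300))
```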
5.1 Training Details
The training procedure we use is the same as the QA tasks, except for the following. For each mini-batch update, the ℓ2 norm of the whole gradient of all parameters is measured5 and if larger than L = 50, then it is scaled down to have norm L. This was crucial for good performance. We use the learning rate annealing schedule from [15], namely, if the validation cost has not decreased after one epoch, then the learning rate is scaled down by a factor 1.5. Training terminates when the learning rate drops below $10^{-5}$, i.e. after 50 epochs or so. Weights are initialized using N(0, 0.05) and batch size is set to 128. On the Penn Treebank dataset, we repeat each training 10 times with different random initializations and pick the one with smallest validation cost. However, we have done only a single training run on Text8 dataset due to limited time constraints.
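A sketch of the whole-gradient rescaling and the annealing schedule from [15] as described above; the initial learning rate and the validation curve are stand-ins.

```python
import numpy as np

def rescale_global_grad(grads, L=50.0):
    """Scale the whole gradient (all parameter arrays jointly) to l2 norm <= L."""
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    return [g * (L / total) for g in grads] if total > L else grads

# Anneal: divide the learning rate by 1.5 whenever validation cost stalls.
lr, prev = 0.5, float("inf")
for val_cost in [120.0, 118.5, 118.6, 118.2, 118.3]:   # stand-in curve
    if val_cost >= prev:
        lr /= 1.5
    prev = val_cost
    if lr < 1e-5:
        break                          # training terminates below 1e-5
```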
5.2 Results
Table 2 compares our model to RNN, LSTM and Structurally Constrained Recurrent Nets (SCRN)
[15] baselines on the two benchmark datasets. Note that the baseline architectures were tuned in
[15] to give optimal perplexity6 . Our MemN2N approach achieves lower perplexity on both datasets
(111 vs 115 for RNN/SCRN on Penn and 147 vs 154 for LSTM on Text8). Note that MemN2N
has ~1.5x more parameters than RNNs with the same number of hidden units, while LSTM has ~4x more parameters. We also vary the number of hops and memory size of our MemN2N,
showing the contribution of both to performance; note in particular that increasing the number of
hops helps. In Fig. 3, we show how MemN2N operates on memory with multiple hops. It shows
the average weight of the activation of each memory position over the test set. We can see that
some hops concentrate only on recent words, while other hops have more broad attention over all
memory locations, which is consistent with the idea that succesful language models consist of a
smoothed n-gram model and a cache [15]. Interestingly, it seems that those two types of hops tend
to alternate. Also note that unlike a traditional RNN, the cache does not decay exponentially: it
has roughly the same average activation across the entire memory. This may be the source of the
observed improvement in language modeling.
6 Conclusions and Future Work
In this work we showed that a neural network with an explicit memory and a recurrent attention
mechanism for reading the memory can be successfully trained via backpropagation on diverse tasks
from question answering to language modeling. Compared to the Memory Network implementation
of [23] there is no supervision of supporting facts and so our model can be used in a wider range
of settings. Our model approaches the same performance of that model, and is significantly better
than other baselines with the same level of supervision. On language modeling tasks, it slightly
outperforms tuned RNNs and LSTMs of comparable complexity. On both tasks we can see that
increasing the number of memory hops improves performance.
However, there is still much to do. Our model is still unable to exactly match the performance of
the memory networks trained with strong supervision, and both fail on several of the 1k QA tasks.
Furthermore, smooth lookups may not scale well to the case where a larger memory is required. For
these settings, we plan to explore multiscale notions of attention or hashing, as proposed in [23].
Acknowledgments
The authors would like to thank Armand Joulin, Tomas Mikolov, Antoine Bordes and Sumit Chopra
for useful comments and valuable discussions, and also the FAIR Infrastructure team for their help
and support.
5 In the QA tasks, the gradient of each weight matrix is measured separately.
6 They tuned the hyper-parameters on Penn Treebank and used them on Text8 without additional tuning, except for the number of hidden units. See [15] for more detail.
References
[1] C. G. Atkeson and S. Schaal. Memory-based neural networks for robot learning. Neurocomputing, 9:243-269, 1995.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.
[3] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137-1155, Mar. 2003.
[4] J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint: 1412.3555, 2014.
[5] S. Das, C. L. Giles, and G.-Z. Sun. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In Proceedings of The Fourteenth Annual Conference of Cognitive Science Society, 1992.
[6] J. Goodman. A bit of progress in language modeling. CoRR, cs.CL/0108005, 2001.
[7] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint: 1308.0850, 2013.
[8] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv preprint: 1410.5401, 2014.
[9] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. CoRR, abs/1502.04623, 2015.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[11] A. Joulin and T. Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. NIPS, 2015.
[12] J. Koutník, K. Greff, F. J. Gomez, and J. Schmidhuber. A clockwork RNN. In ICML, 2014.
[13] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Comput. Linguist., 19(2):313-330, June 1993.
[14] T. Mikolov. Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology, 2012.
[15] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint: 1412.7753, 2014.
[16] M. C. Mozer and S. Das. A connectionist symbol manipulator that discovers the structure of context-free languages. NIPS, pages 863-863, 1993.
[17] B. Peng, Z. Lu, H. Li, and K. Wong. Towards neural network-based reasoning. arXiv preprint: 1508.05508, 2015.
[18] J. Pollack. The induction of dynamical recognizers. Machine Learning, 7(2-3):227-252, 1991.
[19] K. Steinbuch and U. Piske. Learning matrices and their applications. IEEE Transactions on Electronic Computers, 12:846-862, 1963.
[20] M. Sundermeyer, R. Schlüter, and H. Ney. LSTM neural networks for language modeling. In Interspeech, pages 194-197, 2012.
[21] W. K. Taylor. Pattern recognition by means of automatic analogue apparatus. Proceedings of The Institution of Electrical Engineers, 106:198-209, 1959.
[22] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint: 1502.05698, 2015.
[23] J. Weston, S. Chopra, and A. Bordes. Memory networks. In International Conference on Learning Representations (ICLR), 2015.
[24] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint: 1502.03044, 2015.
[25] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint: 1409.2329, 2014.
5,354 | 5,847 | Attention-Based Models for Speech Recognition
Dzmitry Bahdanau
Jacobs University Bremen, Germany
Jan Chorowski
University of Wrocław, Poland
[email protected]
Dmitriy Serdyuk
Université de Montréal
Kyunghyun Cho
Université de Montréal
Yoshua Bengio
Université de Montréal
CIFAR Senior Fellow
Abstract
Recurrent sequence generators conditioned on input data through an attention
mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1, 2] and image caption generation [3]. We extend the attention-mechanism with features needed for speech
recognition. We show that while an adaptation of the model used for machine
translation in [2] reaches a competitive 18.7% phoneme error rate (PER) on the
TIMIT phoneme recognition task, it can only be applied to utterances which are
roughly as long as the ones it was trained on. We offer a qualitative explanation of
this failure and propose a novel and generic method of adding location-awareness
to the attention mechanism to alleviate this issue. The new method yields a model
that is robust to long inputs and achieves 18% PER in single utterances and 20%
in 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames,
which further reduces PER to 17.6% level.
1 Introduction
Recently, attention-based recurrent networks have been successfully applied to a wide variety of
tasks, such as handwriting synthesis [1], machine translation [2], image caption generation [3] and
visual object classification [4].1 Such models iteratively process their input by selecting relevant
content at every step. This basic idea significantly extends the applicability range of end-to-end
training methods, for instance, making it possible to construct networks with external memory [6, 7].
We introduce extensions to attention-based recurrent networks that make them applicable to speech
recognition. Learning to recognize speech can be viewed as learning to generate a sequence (transcription) given another sequence (speech). From this perspective it is similar to machine translation
and handwriting synthesis tasks, for which attention-based methods have been found suitable [2, 1].
However, compared to machine translation, speech recognition principally differs by requesting
much longer input sequences (thousands of frames instead of dozens of words), which introduces a
challenge of distinguishing similar speech fragments2 in a single utterance. It is also different from
handwriting synthesis, since the input sequence is much noisier and does not have as clear structure.
For these reasons speech recognition is an interesting testbed for developing new attention-based
architectures capable of processing long and noisy inputs.
Application of attention-based models to speech recognition is also an important step toward building fully end-to-end trainable speech recognition systems, which is an active area of research. The
1 An early version of this work was presented at the NIPS 2014 Deep Learning Workshop [5].
2 Explained in more detail in Sec. 2.1.
dominant approach is still based on hybrid systems consisting of a deep neural acoustic model, a triphone HMM model and an n-gram language model [8, 9]. This requires dictionaries of hand-crafted
pronunciation and phoneme lexicons, and a multi-stage training procedure to make the components
work together. Excellent results by an HMM-less recognizer have recently been reported, with the
system consisting of a CTC-trained neural network and a language model [10]. Still, the language
model was added only at the last stage in that work, thus leaving open a question of how much an
acoustic model can benefit from being aware of a language model during training.
In this paper, we evaluate attention-based models on the widely-used TIMIT phoneme recognition
task. For each generated phoneme, an attention mechanism selects or weighs the signals produced
by a trained feature extraction mechanism at potentially all of the time steps in the input sequence
(speech frames). The weighted feature vector then helps to condition the generation of the next
element of the output sequence. Since the utterances in this dataset are rather short (mostly under
5 seconds), we measure the ability of the considered models in recognizing much longer utterances
which were created by artificially concatenating the existing utterances.
We start with a model proposed in [2] for the machine translation task as the baseline. This model
seems entirely vulnerable to the issue of similar speech fragments but despite our expectations it
was competitive on the original test set, reaching 18.7% phoneme error rate (PER). However, its
performance degraded quickly with longer, concatenated utterances. We provide evidence that this
model adapted to track the absolute location in the input sequence of the content it is recognizing, a
strategy feasible for short utterances from the original test set but inherently unscalable.
In order to circumvent this undesired behavior, in this paper, we propose to modify the attention
mechanism such that it explicitly takes into account both (a) the location of the focus from the
previous step, as in [6] and (b) the features of the input sequence, as in [2]. This is achieved by
adding as inputs to the attention mechanism auxiliary convolutional features which are extracted by
convolving the attention weights from the previous step with trainable filters. We show that a model
with such convolutional features performs significantly better on the considered task (18.0% PER).
More importantly, the model with convolutional features robustly recognized utterances many times
longer than the ones from the training set, always staying below 20% PER.
Therefore, the contribution of this work is three-fold. For one, we present a novel purely neural
speech recognition architecture based on an attention mechanism, whose performance is comparable to that of the conventional approaches on the TIMIT dataset. Moreover, we propose a generic
method of adding location awareness to the attention mechanism. Finally, we introduce a modification of the attention mechanism to avoid concentrating the attention on a single frame, and thus
avoid obtaining less "effective training examples", bringing the PER down to 17.6%.
2 Attention-Based Model for Speech Recognition
2.1 General Framework
An attention-based recurrent sequence generator (ARSG) is a recurrent neural network that stochastically generates an output sequence $(y_1, \ldots, y_T)$ from an input $x$. In practice, $x$ is often processed by an encoder which outputs a sequential input representation $h = (h_1, \ldots, h_L)$ more suitable for the attention mechanism to work with.

In the context of this work, the output $y$ is a sequence of phonemes, and the input $x = (x_1, \ldots, x_{L'})$ is a sequence of feature vectors. Each feature vector is extracted from a small overlapping window of audio frames. The encoder is implemented as a deep bidirectional recurrent network (BiRNN), to form a sequential representation $h$ of length $L = L'$.
At the $i$-th step an ARSG generates an output $y_i$ by focusing on the relevant elements of $h$:
$$\alpha_i = \mathrm{Attend}(s_{i-1}, \alpha_{i-1}, h) \quad (1)$$
$$g_i = \sum_{j=1}^{L} \alpha_{i,j} h_j \quad (2)$$
$$y_i \sim \mathrm{Generate}(s_{i-1}, g_i) \quad (3)$$
Figure 1: [diagram: two generation steps, with nodes $y_i$, $s_i$, $g_i$ and attention weights $\alpha_{i,j}$ over the encoded frames $h_j$] Two steps of the proposed attention-based recurrent sequence generator (ARSG) with a hybrid attention mechanism (computing $\alpha$), based on both content ($h$) and location (previous $\alpha$) information. The dotted lines correspond to Eq. (1), thick solid lines to Eq. (2) and dashed lines to Eqs. (3)-(4).
where $s_{i-1}$ is the $(i-1)$-th state of the recurrent neural network to which we refer as the generator, $\alpha_i \in \mathbb{R}^L$ is a vector of the attention weights, also often called the alignment [2]. Using the terminology from [4], we call $g_i$ a glimpse. The step is completed by computing a new generator state:
$$s_i = \mathrm{Recurrency}(s_{i-1}, g_i, y_i) \quad (4)$$
Long short-term memory units (LSTM, [11]) and gated recurrent units (GRU, [12]) are typically used as a recurrent activation, to which we refer as a recurrency. The process is graphically illustrated in Fig. 1.
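The following NumPy sketch (ours, not from the paper) walks through one generation step, Eqs. (1)-(4), using the content-based Attend of Eqs. (5)-(7) below; a plain tanh recurrency stands in for the GRU, and all dimensions, weights and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_h, n_s, n_a, n_y = 120, 512, 256, 64, 62   # frames, dims, phone set size
h = rng.standard_normal((L, n_h))               # stand-in for the BiRNN encoding
W_att = 0.01 * rng.standard_normal((n_a, n_s))
V_att = 0.01 * rng.standard_normal((n_a, n_h))
w_att = 0.01 * rng.standard_normal(n_a)
W_ss = 0.01 * rng.standard_normal((n_s, n_s))
W_gs = 0.01 * rng.standard_normal((n_h, n_s))
W_sy = 0.01 * rng.standard_normal((n_s, n_y))
W_gy = 0.01 * rng.standard_normal((n_h, n_y))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def content_attend(s_prev, alpha_prev, h):
    # Eqs. (5)-(7) below: e_ij = w^T tanh(W s_{i-1} + V h_j); alpha = softmax(e)
    e = np.tanh(W_att @ s_prev + h @ V_att.T) @ w_att
    return softmax(e)

def arsg_step(s_prev, alpha_prev, attend):
    alpha = attend(s_prev, alpha_prev, h)                            # Eq. (1)
    g = alpha @ h                                                    # Eq. (2)
    y = rng.choice(n_y, p=softmax(W_sy.T @ s_prev + W_gy.T @ g))     # Eq. (3)
    s = np.tanh(W_ss @ s_prev + W_gs.T @ g)   # Eq. (4); GRU in the paper
    return y, s, alpha

y, s, alpha = arsg_step(np.zeros(n_s), np.full(L, 1.0 / L), content_attend)
```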
Inspired by [6] we distinguish between location-based, content-based and hybrid attention mechanisms. Attend in Eq. (1) describes the most generic, hybrid attention. If the term $\alpha_{i-1}$ is dropped from Attend arguments, i.e., $\alpha_i = \mathrm{Attend}(s_{i-1}, h)$, we call it content-based (see, e.g., [2] or [3]). In this case, Attend is often implemented by scoring each element in $h$ separately and normalizing the scores:
$$e_{i,j} = \mathrm{Score}(s_{i-1}, h_j), \quad (5)$$
$$\alpha_{i,j} = \exp(e_{i,j}) \Big/ \sum_{j=1}^{L} \exp(e_{i,j}). \quad (6)$$
The main limitation of such scheme is that identical or very similar elements of h are scored equally
regardless of their position in the sequence. This is the issue of "similar speech fragments" raised
above. Often this issue is partially alleviated by an encoder such as e.g. a BiRNN [2] or a deep
convolutional network [3] that encode contextual information into every element of h . However,
capacity of h elements is always limited, and thus disambiguation by context is only possible to a
limited extent.
Alternatively, a location-based attention mechanism computes the alignment from the generator state and the previous alignment only such that $\alpha_i = \mathrm{Attend}(s_{i-1}, \alpha_{i-1})$. For instance, Graves [1] used the location-based attention mechanism using a Gaussian mixture model in his handwriting synthesis model. In the case of speech recognition, this type of location-based attention mechanism would have to predict the distance between consequent phonemes using $s_{i-1}$ only, which we expect to be hard due to large variance of this quantity.

For these limitations associated with both content-based and location-based mechanisms, we argue that a hybrid attention mechanism is a natural candidate for speech recognition. Informally, we would like an attention model that uses the previous alignment $\alpha_{i-1}$ to select a short list of elements from $h$, from which the content-based attention, in Eqs. (5)-(6), will select the relevant ones without confusion.
2.2 Proposed Model: ARSG with Convolutional Features
We start from the ARSG-based model with the content-based attention mechanism proposed in [2]. This model can be described by Eqs. (5)-(6), where
$$e_{i,j} = w^\top \tanh(W s_{i-1} + V h_j + b). \quad (7)$$
$w$ and $b$ are vectors, $W$ and $V$ are matrices.
We extend this content-based attention mechanism of the original model to be location-aware by making it take into account the alignment produced at the previous step. First, we extract $k$ vectors $f_{i,j} \in \mathbb{R}^k$ for every position $j$ of the previous alignment $\alpha_{i-1}$ by convolving it with a matrix $F \in \mathbb{R}^{k \times r}$:
$$f_i = F * \alpha_{i-1}. \quad (8)$$
These additional vectors $f_{i,j}$ are then used by the scoring mechanism $e_{i,j}$:
$$e_{i,j} = w^\top \tanh(W s_{i-1} + V h_j + U f_{i,j} + b) \quad (9)$$
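A sketch of the convolutional attention features of Eqs. (8)-(9); np.convolve with 'same' padding stands in for the convolution (note it requires L > r), and all shapes and parameter names are our assumptions.

```python
import numpy as np

def conv_features(alpha_prev, F):
    # Eq. (8): f_i = F * alpha_{i-1}; one 1-d convolution per row of F (k x r)
    return np.stack([np.convolve(alpha_prev, F[k], mode="same")
                     for k in range(F.shape[0])], axis=1)        # shape (L, k)

def hybrid_scores(s_prev, alpha_prev, h, W, V, U, w, b, F):
    f = conv_features(alpha_prev, F)
    # Eq. (9): e_ij = w^T tanh(W s_{i-1} + V h_j + U f_{i,j} + b)
    return np.tanh(W @ s_prev + h @ V.T + f @ U.T + b) @ w

rng = np.random.default_rng(0)
L, n_h, n_s, n_a, k, r = 500, 512, 256, 64, 10, 201   # note L > r
e = hybrid_scores(np.zeros(n_s), np.full(L, 1.0 / L),
                  rng.standard_normal((L, n_h)),
                  0.01 * rng.standard_normal((n_a, n_s)),
                  0.01 * rng.standard_normal((n_a, n_h)),
                  0.01 * rng.standard_normal((n_a, k)),
                  0.01 * rng.standard_normal(n_a),
                  np.zeros(n_a),
                  0.01 * rng.standard_normal((k, r)))
```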
2.3 Score Normalization: Sharpening and Smoothing
There are three potential issues with the normalization in Eq. (6).
First, when the input sequence $h$ is long, the glimpse $g_i$ is likely to contain noisy information from many irrelevant feature vectors $h_j$, as the normalized scores $\alpha_{i,j}$ are all positive and sum to 1. This makes it difficult for the proposed ARSG to focus clearly on a few relevant frames at each time $i$. Second, the attention mechanism is required to consider all the $L$ frames each time it decodes a single output $y_i$ while decoding the output of length $T$, leading to a computational complexity of $O(LT)$. This may easily become prohibitively expensive when input utterances are long (an issue that is less serious for machine translation, because in that case the input sequence is made of words, not of 20ms acoustic frames).
The other side of the coin is that the use of softmax normalization in Eq. (6) prefers to mostly focus on only a single feature vector $h_j$. This prevents the model from aggregating multiple top-scored frames to form a glimpse $g_i$.
Sharpening There is a straightforward way to address the first issue of a noisy glimpse by "sharpening" the scores $\alpha_{i,j}$. One way to sharpen the weights is to introduce an inverse temperature $\beta > 1$ to the softmax function such that $\alpha_{i,j} = \exp(\beta e_{i,j}) / \sum_{j=1}^{L} \exp(\beta e_{i,j})$, or to keep only the top-$k$ frames according to the scores and re-normalize them. These sharpening methods, however, still require us to compute the score of every frame each time ($O(LT)$), and they worsen the second issue, of overly narrow focus.
We also propose and investigate a windowing technique. At each time $i$, the attention mechanism considers only a subsequence $\tilde{h} = (h_{p_i - w}, \ldots, h_{p_i + w - 1})$ of the whole sequence $h$, where $w \ll L$ is the predefined window width and $p_i$ is the median of the alignment $\alpha_{i-1}$. The scores for $h_j \notin \tilde{h}$ are not computed, resulting in a lower complexity of $O(L + T)$. This windowing technique is similar to taking the top-$k$ frames, and similarly, has the effect of sharpening.

The proposed sharpening based on windowing can be used both during training and evaluation. Later, in the experiments, we only consider the case where it is used during evaluation.
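A sketch of the windowing technique combined with an inverse-temperature sharpening; the median computation via the cumulative sum, the score_fn interface and the default w and beta values are our choices, not the paper's exact code.

```python
import numpy as np

def windowed_attend(score_fn, s_prev, alpha_prev, h, w=75, beta=2.0):
    """Score only the 2w frames around the median of alpha_{i-1}, then apply
    a sharpened softmax with inverse temperature beta (> 1)."""
    L = len(h)
    p = int(np.searchsorted(np.cumsum(alpha_prev), 0.5))  # median of alignment
    lo, hi = max(0, p - w), min(L, p + w)
    e = np.full(L, -np.inf)                # frames outside the window get weight 0
    e[lo:hi] = [score_fn(s_prev, h[j]) for j in range(lo, hi)]
    z = np.exp(beta * (e - e[lo:hi].max()))
    return z / z.sum()

rng = np.random.default_rng(0)
h, v = rng.standard_normal((500, 8)), rng.standard_normal(8)
alpha = windowed_attend(lambda s, hj: float(v @ hj), None,
                        np.full(500, 1.0 / 500), h)
```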
Smoothing We observed that the proposed sharpening methods indeed helped with long utterances. However, all of them, and especially selecting the frame with the highest score, negatively affected the model's performance on the standard development set which mostly consists of short utterances. This observation lets us hypothesize that it is helpful for the model to aggregate selections from multiple top-scored frames. In a sense this brings more diversity, i.e., more effective training examples, to the output part of the model, as more input locations are considered. To facilitate this effect, we replace the unbounded exponential function of the softmax function in Eq. (6) with the bounded logistic sigmoid $\sigma$ such that $\alpha_{i,j} = \sigma(e_{i,j}) / \sum_{j=1}^{L} \sigma(e_{i,j})$. This has the effect of smoothing the focus found by the attention mechanism.
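The smoothing substitution is a one-line change; the toy scores below (ours) illustrate how the sigmoid keeps weight on several top-scoring frames where the softmax concentrates on a single one.

```python
import numpy as np

def smoothed_alignment(e):
    # replace exp with the bounded logistic sigmoid, then normalize
    s = 1.0 / (1.0 + np.exp(-e))
    return s / s.sum()

e = np.array([4.0, 1.0, -2.0, -3.0])
print(smoothed_alignment(e))          # mass spread over the two top frames
print(np.exp(e) / np.exp(e).sum())    # softmax concentrates on the single best
```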
3 Related Work
Speech recognizers based on the connectionist temporal classification (CTC, [13]) and its extension,
RNN Transducer [14], are the closest to the ARSG model considered in this paper. They follow
earlier work on end-to-end trainable deep learning over sequences with gradient signals flowing through the alignment process [15]. They have been shown to perform well on the phoneme recognition task [16]. Furthermore, the CTC was recently found to be able to directly transcribe text from speech without any intermediate phonetic representation [17].

Figure 2: [plot: phoneme error rate (%) on the dev and test sets as a function of beam width (1-100) for the baseline, convolutional-features and smooth-focus models] Decoding performance w.r.t. the beam size. For rigorous comparison, if decoding failed to generate <eos>, we considered it wrongly recognized without retrying with a larger beam size. The models, especially with smooth focus, perform well even with a beam width as small as 1.
The considered ARSG is different from both the CTC and RNN Transducer in two ways. First,
whereas the attention mechanism deterministically aligns the input and the output sequences, the
CTC and RNN Transducer treat the alignment as a latent random variable over which MAP (maximum a posteriori) inference is performed. This deterministic nature of the ARSG's alignment
mechanism allows beam search procedure to be simpler. Furthermore, we empirically observe that a
much smaller beam width can be used with the deterministic mechanism, which allows faster decoding (see Sec. 4 and Fig. 2). Second, the alignment mechanism of both the CTC and RNN Transducer
is constrained to be "monotonic" to keep marginalization of the alignment tractable. On the other
hand, the proposed attention mechanism can result in non-monotonic alignment, which makes it
suitable for a larger variety of tasks other than speech recognition.
A hybrid attention model using a convolution operation was also proposed in [6] for neural Turing
machines (NTM). At each time step, the NTM computes content-based attention weights which are
then convolved with a predicted shifting distribution. Unlike the NTM's approach, the hybrid mechanism proposed here lets learning figure out how the content-based and location-based addressing should be combined by a deep, parametric function (see Eq. (9)).
Sukhbaatar et al. [18] describes a similar hybrid attention mechanism, where location embeddings
are used as input to the attention model. This approach has an important disadvantage that the model
cannot work with an input sequence longer than those seen during training. Our approach, on the
other hand, works well on sequences many times longer than those seen during training (see Sec. 5.)
4 Experimental Setup
We closely followed the procedure in [16]. All experiments were performed on the TIMIT corpus
[19]. We used the train-dev-test split from the Kaldi [20] TIMIT s5 recipe. We trained on the
standard 462 speaker set with all SA utterances removed and used the 50 speaker dev set for early
stopping. We tested on the 24 speaker core test set. All networks were trained on 40 mel-scale filterbank features together with the energy in each frame, and first and second temporal differences,
yielding in total 123 features per frame. Each feature was rescaled to have zero mean and unit
variance over the training set. Networks were trained on the full 61-phone set extended with an
extra ?end-of-sequence? token that was appended to each target sequence. Similarly, we appended
an all-zero frame at the end of each input sequence to indicate the end of the utterance. Decoding
was performed using the 61+1 phoneme set, while scoring was done on the 39 phoneme set.
Training Procedure One property of ARSG models is that different subsets of parameters are
reused different number of times; L times for those of the encoder, LT for the attention weights
and T times for all the other parameters of the ARSG. This makes the scales of derivatives w.r.t.
parameters vary significantly. We used an adaptive learning rate algorithm, AdaDelta [21], which has two hyperparameters $\epsilon$ and $\rho$. All the weight matrices were initialized from a normal Gaussian
distribution with its standard deviation set to 0.01. Recurrent weights were orthogonalized.
As TIMIT is a relatively small dataset, proper regularization is crucial. We used the adaptive weight
noise as a main regularizer [22]. We first trained our models with a column norm constraint [23] with the maximum norm 1 until the lowest development negative log-likelihood is achieved.3

Figure 3: [alignment plot for utterance FDHC0_SX209, "Michael colored the bedroom wall with crayons."] Alignments produced by the baseline model. The vertical bars indicate ground truth phone location from TIMIT. Each row of the upper image indicates frames selected by the attention mechanism to emit a phone symbol. The network has clearly learned to produce a left-to-right alignment with a tendency to look slightly ahead, and does not confuse between the repeated "kcl-k" phrase. Best viewed in color.

During this time, $\epsilon$ and $\rho$ are set to $10^{-8}$ and 0.95, respectively. At this point, we began using the adaptive
weight noise, with the model complexity cost $L_C$ divided by 10, while disabling the column norm
constraints. Once the new lowest development log-likelihood was reached, we fine-tuned the model
with a smaller $\epsilon = 10^{-10}$, until we did not observe the improvement in the development phoneme
error rate (PER) for 100K weight updates. Batch size 1 was used throughout the training.
Evaluated Models We evaluated the ARSGs with different attention mechanisms. The encoder
was a 3-layer BiRNN with 256 GRU units in each direction, and the activations of the 512 top-layer
units were used as the representation h. The generator had a single recurrent layer of 256 GRU units.
Generate in Eq. (3) had a hidden layer of 64 maxout units. The initial states of both the encoder
and generator were treated as additional parameters.
Our baseline model is the one with a purely content-based attention mechanism (see Eqs. (5)-(7)).
The scoring network in Eq. (7) had 512 hidden units. The other two models use the convolutional
features in Eq. (8) with k = 10 and r = 201. One of them uses the smoothing from Sec. 2.3.
Decoding Procedure A left-to-right beam search over phoneme sequences was used during decoding [24]. Beam search was stopped when the "end-of-sequence" token <eos> was emitted. We started with a beam width of 10, increasing it up to 40 when the network failed to produce <eos> with the narrower beam. As shown in Fig. 2, decoding with a wider beam gives little-to-none benefit.
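A sketch of such a left-to-right beam search; the step_fn interface is our simplification (the real decoder carries the full ARSG state, including the previous alignment), and the toy generator only serves to make the snippet runnable.

```python
import numpy as np

def beam_search(step_fn, init_state, eos, width=10, max_len=100):
    """step_fn(state, y_prev) -> (log-probs over symbols, next state)."""
    beams, done = [(0.0, [], init_state)], []
    for _ in range(max_len):
        cand = []
        for lp, seq, st in beams:
            logp, nst = step_fn(st, seq[-1] if seq else None)
            cand += [(lp + float(logp[y]), seq + [int(y)], nst)
                     for y in np.argsort(logp)[-width:]]
        cand.sort(key=lambda c: -c[0])
        beams = []
        for c in cand:
            (done if c[1][-1] == eos else beams).append(c)  # <eos> ends a beam
            if len(beams) >= width:
                break
        if not beams:
            break
    return max(done or beams, key=lambda c: c[0])[1]

rng = np.random.default_rng(0)
n_y, EOS = 62, 61
def toy_step(state, y_prev):           # toy generator: <eos> grows more likely
    logits = rng.standard_normal(n_y)
    logits[EOS] += state
    return logits - np.log(np.exp(logits).sum()), state + 0.5

print(beam_search(toy_step, 0.0, EOS, width=4, max_len=20))
```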
5 Results
Table 1: Phoneme error rates (PER). The bold-faced PER corresponds to the best error rate with an attention-based recurrent sequence generator (ARSG) incorporating convolutional attention features and a smooth focus.

Model | Dev | Test
Baseline Model | 15.9% | 18.7%
Baseline + Conv. Features | 16.1% | 18.0%
Baseline + Conv. Features + Smooth Focus | 15.8% | 17.6%
RNN Transducer [16] | N/A | 17.7%
HMM over Time and Frequency Convolutional Net [25] | 13.9% | 16.7%
All the models achieved competitive PERs (see Table 1). With the convolutional features, we see
3.7% relative improvement over the baseline and further 5.9% with the smoothing.
To our surprise (see Sec. 2.1), the baseline model learned to align properly. An alignment produced by the baseline model on a sequence with repeated phonemes (utterance FDHC0_SX209) is presented in Fig. 3 which demonstrates that the baseline model is not confused by short-range repetitions. We can also see from the figure that it prefers to select frames that are near the beginning or even slightly before the phoneme location provided as a part of the dataset. The alignments produced by the other models were very similar visually.

3 Applying the weight noise from the beginning of training caused severe underfitting.

Figure 4: Results of force-aligning the concatenated utterances. Each dot represents a single utterance created by either concatenating multiple copies of the same utterance, or of different, randomly chosen utterances. We clearly see that the highest robustness is achieved when the hybrid attention mechanism is combined with the proposed sharpening technique (see the bottom-right plot).
5.1 Forced Alignment of Long Utterances
The good performance of the baseline model led us to the question of how it distinguishes between
repetitions of similar phoneme sequences and how reliably it decodes longer sequences with more
repetitions. We created two datasets of long utterances; one by repeating each test utterance, and the
other by concatenating randomly chosen utterances. In both cases, the waveforms were cross-faded
with a 0.05s silence inserted as the ?pau? phone. We concatenated up to 15 utterances.
First, we checked the forced alignment with these longer utterances by forcing the generator to emit
the correct phonemes. Each alignment was considered correct if 90% of the alignment weight lies
inside the ground-truth phoneme window extended by 20 frames on each side. Under this definition,
all phones but the <eos> shown in Fig. 3 are properly aligned.
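This correctness criterion is easy to state in code; the function below (ours) checks whether 90% of the attention mass for one emitted phone falls inside the extended ground-truth window.

```python
import numpy as np

def alignment_ok(alpha, start, end, margin=20, mass=0.9):
    """alpha: attention weights over input frames for one emitted phone;
    [start, end): ground-truth frame window for that phone."""
    lo, hi = max(0, start - margin), min(len(alpha), end + margin)
    return alpha[lo:hi].sum() >= mass * alpha.sum()

alpha = np.zeros(300)
alpha[100:110] = 0.1                 # toy alignment peaked on frames 100-109
print(alignment_ok(alpha, 95, 112))  # True: all weight inside the window
```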
The first column of Fig. 4 shows the number of correctly aligned frames w.r.t. the utterance length
(in frames) for some of the considered models. One can see that the baseline model was able to
decode sequences up to about 120 phones when a single utterance was repeated, and up to about 150
phones when different utterances were concatenated. Even when it failed, it correctly aligned about
50 phones. On the other hand, the model with the hybrid attention mechanism with convolutional
features was able to align sequences up to 200 phones long. However, once it began to fail, the
model was not able to align almost all phones. The model with the smoothing behaved similarly to
the one with convolutional features only.
We examined failed alignments to understand these two different modes of failure. Some of the
examples are shown in the Supplementary Materials.
We found that the baseline model properly aligns about the first 40 phones, then makes a jump to the
end of the recording and cycles over the last 10 phones. This behavior suggests that it learned to
track its approximate location in the source sequence. However, the tracking capability is limited to
the lengths observed during training. Once the tracker saturates, it jumps to the end of the recording.
In contrast, when the location-aware network failed it just stopped aligning: no particular frames
were selected for each phone. We attribute this behavior to the noisy-glimpse issue discussed in
Sec. 2.3. With a long utterance there are many irrelevant frames negatively affecting the weight assigned to the correct frames. In line with this conjecture, the location-aware network works slightly
better on the repetition of the same utterance, where all frames are somehow relevant, than on the
concatenation of different utterances, where each misaligned frame is irrelevant.
To gain more insight we applied the alignment sharpening schemes described in Sec. 2.3. In the
remaining columns of Fig. 4, we see that the sharpening methods help the location-aware network
to find proper alignments, while they show little effect on the baseline network.
[Figure 5 plot: phoneme error rate (%) on long utterances for the Baseline, Conv Feats, and Smooth Focus models decoded with the Keep 20, Keep 50, Win ± 75, and Win ± 150 algorithms, plotted against the number of repetitions (3, 6, 9) on the Mixed Utt. and Same Utt. datasets.]
Figure 5: Phoneme error rates obtained on decoding long sequences. Each network was decoded with alignment sharpening techniques that produced proper forced alignments. The proposed ARSGs are clearly more robust to the length of the utterances than the baseline one is.
The windowing technique helps both the baseline and location-aware networks, with the location-aware network
properly aligning nearly all sequences.
During visual inspection, we noticed that in the middle of very long utterances the baseline model
was confused by repetitions of similar content within the window, and that such confusions did not
happen in the beginning. This supports our conjecture above.
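For reference, the two families of sharpening used above, keeping only the k largest weights ('Keep 20', 'Keep 50') and restricting attention to a window around the previous alignment position ('Win ± 75', 'Win ± 150'), can be sketched as follows (NumPy; this is our reading of the Sec. 2.3 schemes, and the renormalization step is assumed):

import numpy as np

def sharpen_top_k(alpha, k):
    """'Keep k': zero all but the k largest attention weights, renormalize."""
    out = np.zeros_like(alpha)
    top = np.argsort(alpha)[-k:]
    out[top] = alpha[top]
    return out / out.sum()

def sharpen_window(alpha, prev_pos, half_width):
    """'Win +/- w': keep only weights within half_width frames of the
    previous alignment position, then renormalize."""
    out = np.zeros_like(alpha)
    lo = max(0, prev_pos - half_width)
    hi = min(len(alpha), prev_pos + half_width + 1)
    out[lo:hi] = alpha[lo:hi]
    return out / out.sum()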
5.2 Decoding Long Utterances
We evaluated the models on long sequences. Each model was decoded using the alignment sharpening techniques that helped to obtain proper forced alignments. The results are presented in Fig. 5.
The baseline model fails to decode long utterances, even when a narrow window is used to constrain
the alignments it produces. The two other location-aware networks are able to decode utterances
formed by concatenating up to 11 test utterances. Better results were obtained with a wider window,
presumably because it more closely resembles the training conditions, in which at each step the attention mechanism was seeing the whole input sequence. With the wide window, both of the networks scored
about 20% PER on the long utterances, indicating that the proposed location-aware attention mechanism can scale to sequences much longer than those in the training set with only minor modifications
required at the decoding stage.
6 Conclusions
We proposed and evaluated a novel end-to-end trainable speech recognition architecture based on a
hybrid attention mechanism which combines both content and location information in order to select
the next position in the input sequence for decoding. One desirable property of the proposed model
is that it can recognize utterances much longer than the ones it was trained on. In the future, we
expect this model to be used to directly recognize text from speech [10, 17], in which case it may become important to incorporate a monolingual language model into the ARSG architecture [26].
This work has contributed two novel ideas for attention mechanisms: a better normalization approach yielding smoother alignments and a generic principle for extracting and using features from
the previous alignments. Both of these can potentially be applied beyond speech recognition. For
instance, the proposed attention can be used without modification in neural Turing machines, or by
using 2-D convolution instead of 1-D, for improving image caption generation [3].
Acknowledgments
All experiments were conducted using Theano [27, 28], PyLearn2 [29], and Blocks [30] libraries.
The authors would like to acknowledge the support of the following agencies for research funding
and computing support: National Science Center (Poland) grant Sonata 8 2014/15/D/ST6/04402,
NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. D. Bahdanau
also thanks Planet Intelligent Systems GmbH and Yandex.
References
[1] A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, August 2013.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In Proc. of the 3rd ICLR, 2015. arXiv:1409.0473.
[3] K. Xu, J. Ba, R. Kiros, et al. Show, attend and tell: Neural image caption generation with visual attention.
In Proc. of the 32nd ICML, 2015. arXiv:1502.03044.
[4] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In Proc. of the 27th NIPS,
2014. arXiv:1406.6247.
[5] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio. End-to-end continuous speech recognition using
attention-based recurrent NN: First results. arXiv:1412.1602 [cs, stat], December 2014.
[6] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv:1410.5401, 2014.
[7] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv:1410.3916, 2014.
[8] M. Gales and S. Young. The application of hidden Markov models in speech recognition. Found. Trends Signal Process., 1(3):195–304, January 2007.
[9] G. Hinton, L. Deng, D. Yu, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, November 2012.
[10] A. Hannun, C. Case, J. Casper, et al. Deepspeech: Scaling up end-to-end speech recognition.
arXiv:1412.5567, 2014.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural. Comput., 9(8):1735–1780, 1997.
[12] K. Cho, B. van Merriënboer, C. Gulcehre, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, October 2014. To appear.
[13] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling
unsegmented sequence data with recurrent neural networks. In Proc. of the 23rd ICML-06, 2006.
[14] A. Graves. Sequence transduction with recurrent neural networks. In Proc. of the 29th ICML, 2012.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient based learning applied to document recognition.
Proc. IEEE, 1998.
[16] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In
ICASSP 2013, pages 6645–6649. IEEE, 2013.
[17] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proc.
of the 31st ICML, 2014.
[18] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. Weakly supervised memory networks. arXiv:1503.08895, 2015.
[19] J. S. Garofolo, L. F. Lamel, W. M. Fisher, et al. DARPA TIMIT acoustic phonetic continuous speech
corpus, 1993.
[20] D. Povey, A. Ghoshal, G. Boulianne, et al. The kaldi speech recognition toolkit. In Proc. ASRU, 2011.
[21] M. D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv:1212.5701, 2012.
[22] A. Graves. Practical variational inference for neural networks. In Proc of the 24th NIPS, 2011.
[23] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural
networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. of
the 27th NIPS, 2014. arXiv:1409.3215.
[25] L. Tóth. Combining time- and frequency-domain convolution in convolutional neural network-based
phone recognition. In Proc. ICASSP, 2014.
[26] C. Gulcehre, O. Firat, K. Xu, et al. On using monolingual corpora in neural machine translation.
arXiv:1503.03535, 2015.
[27] J. Bergstra, O. Breuleux, F. Bastien, et al. Theano: a CPU and GPU math expression compiler. In Proc.
SciPy, 2010.
[28] F. Bastien, P. Lamblin, R. Pascanu, et al. Theano: new features and speed improvements. Deep Learning
and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[29] I. J. Goodfellow, D. Warde-Farley, P. Lamblin, et al. Pylearn2: a machine learning research library. arXiv
preprint arXiv:1308.4214, 2013.
[30] B. van Merriënboer, D. Bahdanau, V. Dumoulin, et al. Blocks and fuel: Frameworks for deep learning.
arXiv:1506.00619 [cs, stat], June 2015.
Where are they looking?
Adri`a Recasens?
Aditya Khosla?
Carl Vondrick
Massachusetts Institute of Technology
Antonio Torralba
{recasens, khosla, vondrick, torralba}@csail.mit.edu
(* - indicates equal contribution)
Abstract
Humans have the remarkable ability to follow the gaze of other people to identify
what they are looking at. Following eye gaze, or gaze-following, is an important
ability that allows us to understand what other people are thinking, the actions they
are performing, and even predict what they might do next. Despite the importance
of this topic, this problem has only been studied in limited scenarios within the
computer vision community. In this paper, we propose a deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for
thorough evaluation. Given an image and the location of a head, our approach
follows the gaze of the person and identifies the object being looked at. Our deep
network is able to discover how to extract head pose and gaze orientation, and to
select objects in the scene that are in the predicted line of sight and likely to be
looked at (such as televisions, balls and food). The quantitative evaluation shows
that our approach produces reliable results, even when viewing only the back of
the head. While our method outperforms several baseline approaches, we are still
far from reaching human performance on this task. Overall, we believe that gaze-following is a challenging and important problem that deserves more attention
from the community.
1 Introduction
You step out of your house and notice a group of people looking up. You look up and realize they are
looking at an aeroplane in the sky. Despite the object being far away, humans have the remarkable
ability to precisely follow the gaze direction of another person, a task commonly referred to as gaze-following (see [3] for a review). Such an ability is a key element to understanding what people are
doing in a scene and their intentions. Similarly, it is crucial for a computer vision system to have
this ability to better understand and interpret people. For instance, a person might be holding a book
but looking at the television, or a group of people might be looking at the same object which can
indicate that they are collaborating at some task, or they might be looking at different places which
can indicate that they are not familiar with each other or that they are performing unrelated tasks
Figure 1: Gaze-following: We present a model that learns to predict where people in images are
looking. We also introduce GazeFollow, a new large-scale annotated dataset for gaze-following.
(see Figure 1). Gaze-following has applications in robotics and human interaction interfaces where
it is important to understand the object of interest of a person. Gaze-following can also be used to
predict what a person will do next as people tend to attend to objects they are planning to interact
with even before they start an action.
Despite the importance of this topic, only a few works in computer vision have explored gaze-following [5, 16, 14, 15, 18]. Previous work on gaze-following addresses the problem by limiting
the scope (e.g., people looking at each other only [14]), by restricting the situations (e.g., scenes
with multiple people only or synthetic scenarios [9, 7]), or by using complex inputs (multiple images
[5, 15, 18] or eye-tracking data [6]). Only [16] tackles the unrestricted gaze-following scenario but
relies on face detectors (and therefore cannot handle situations such as people looking away from the
camera) and is not evaluated on a gaze-following task. Our goal is to perform gaze-following in
natural settings without making restrictive assumptions and when only a single view is available.
We want to address the general gaze-following problem to be able to handle situations in which
several people are looking at each other, and one or more people are interacting with one or more
objects.
In this paper, we formulate the problem of gaze-following as: given a single picture containing one
or more people, the task is to predict the location that each person in a scene is looking at. To
address this problem, we introduce a deep architecture that learns to combine information about
the head orientation and head location with the scene content in order to follow the gaze of a person
inside the picture. The input to our model is a picture and the location of the person for whom we want
to follow the gaze, and the output is a distribution over possible locations that the selected person
might be looking at. This output distribution can be seen as a saliency map from the point of view
of the person inside the picture. To train and evaluate our model, we also introduce GazeFollow,
a large-scale benchmark dataset for gaze-following. Our model, code and dataset are available for
download at http://gazefollow.csail.mit.edu.
Related Work (Saliency): Although strongly related, there are a number of important distinctions
between gaze-following [3] and saliency models of attention [8]. In traditional models of visual
attention, the goal is to predict the eye fixations of an observer looking at a picture, while in gaze-following the goal is to estimate what is being looked at by a person inside a picture. Most saliency
models focus on predicting fixations while an observer is free-viewing an image [8, 11] (see [2] for
a review). However, in gaze-following, the people in the picture are generally engaged in a task
or navigating an environment and, therefore, are not free-viewing and might fixate on objects even
when they are not the most salient. A model for gaze-following has to be able to follow the line
of sight and then select, among all possible elements that cross the line of sight, which objects are
likely to be the center of attention. Both tasks (gaze-following and saliency modeling) are related in
several interesting ways. For instance, [1] showed that gaze-following of people inside a picture can
influence the fixations of an observer looking at the picture as the object being fixated by the people
inside the picture will attract the attention of the observer of the picture.
Related Work (Gaze): The work on gaze-following in computer vision is very limited. Gaze-following is used in [16] to improve models of free-viewing saliency prediction. However, they only
estimate the gaze direction without identifying the object being attended. Further, their reliance
on a face detector [23] prevents them from being able to estimate gaze for people looking away
from the camera. Another way of approaching gaze-following is using a wearable eye-tracker to
precisely measure the gaze of several people in a scene. For instance, [6] used an eye tracker to
predict the next object the user will interact with, and to improve action recognition in egocentric
vision. In [14] they propose detecting people looking at each other in a movie in order to better
identify interactions between people. As in [16], this work only relies on the direction of gaze
without estimating the object being attended, and, therefore, cannot address the general problem of
gaze-following, in which a person is interacting with an object. In [5], they perform gaze-following
in scenes with multiple observers in an image by finding the regions in which multiple lines of
sight intersect. Their method needs multiple people in the scene, each with an egocentric camera,
used to get 3D head location, as the model only uses head orientation information and does not
incorporate knowledge about the content of the scene. In [15, 18], the authors propose a system to
infer the region attracting the attention of a group of people (social saliency prediction). As in [5]
their method takes as input a set of pictures taken from the viewpoint of each of the people present
in the image and it does not perform gaze-following. Our method only uses a single third-person
view of the scene to infer gaze.
(a) Example test images and annotations
(b) Test set statistics
Figure 2: GazeFollow Dataset: We introduce a new dataset for gaze-following in natural images.
On the left, we show several example annotations and images. In the graphs on the right, we summarize a few statistics about the test partition of the dataset. The top three heat maps show the probability
density for the location of the head, the fixation location, and the fixation location normalized with
respect to the head position. The bottom shows the average gaze direction for various head positions.
2 GazeFollow: A Large-Scale Gaze-Following Dataset
In order to both train and evaluate models, we built GazeFollow, a large-scale dataset annotated with
the location of where people in images are looking. We used several major datasets that contain
people as a source of images: 1,548 images from SUN [19], 33,790 images from MS COCO [13], 9,135 images from Actions 40 [20], 7,791 images from PASCAL [4], 508 images from the ImageNet detection challenge [17] and 198,097 images from the Places dataset [22]. This concatenation
results in a challenging and large image collection of people performing diverse activities in many
everyday scenarios.
Since the source datasets do not have gaze ground-truth, we annotated it using Amazon's Mechanical Turk (AMT). Workers used our online tool to mark the center of a person's eyes and where the worker believed the person was looking. Workers could indicate if the person was looking outside the image or if the person's head was not visible. To control quality, we included images with
known ground-truth, and we used these to detect and discard poor annotations. Finally, we obtained 130,339 people in 122,143 images, with gaze locations inside the image.
We use about 4,782 people of our dataset for testing and the rest for training. We ensured that every
person in an image is part of the same split, and to avoid bias, we picked images for testing such
that the fixation locations were uniformly distributed across the image. Further, to evaluate human
consistency on gaze-following, we collected 10 gaze annotations per person for the test set.
We show some example annotations and statistics of the dataset in Fig. 2. We designed our dataset
to capture various fixation scenarios. For example, some images contain several people with joint
attention while others contain people looking at each other. The number of people in the image can
vary, ranging from a single person to a crowd of people. Moreover, we observed that while some
people have consistent fixation locations others have bimodal or largely inconsistent distributions,
suggesting that solutions to the gaze-following problem could be multimodal.
3 Learning to Follow Gaze
At a high level, our model is inspired by how humans tend to follow gaze. When people infer where
another person is looking, they often first look at the person's head and eyes to estimate their field of view, and subsequently reason about salient objects in their perspective to predict where they are
looking. In this section, we present a model that emulates this approach.
Figure 3: Network architecture: We show the architecture of our deep network for gaze-following.
Our network has two main components: the saliency pathway (top) to estimate saliency and the gaze
pathway (bottom) to estimate gaze direction. See Section 3 for details.
3.1
Gaze and Saliency Pathways
Suppose we have an image x_i and a person for whom we want to predict gaze. We parameterize this person with a quantized spatial location of the person's head x_p and a cropped, close-up image of their head x_h. Given x, we seek to predict the spatial location of the person's fixation y. Encouraged by progress in deep learning, we also use deep networks to predict a person's fixation.
Keeping the motivation from Section 3 in mind, we design our network to have two separate pathways for gaze and saliency. The gaze pathway only has access to the close-up image of the person's head and their location, and produces a spatial map, G(x_h, x_p), of size D × D. The saliency pathway sees the full image but not the person's location, and produces another spatial map, S(x_i), of the same size D × D. We then combine the pathways with an element-wise product:
ŷ = F(G(x_h, x_p) ⊙ S(x_i)),
where ⊙ denotes the element-wise product and F(·) is a fully connected layer that uses the multiplied pathways to predict where the person is looking, ŷ.
Since the two network pathways only receive a subset of the inputs, they cannot themselves solve the
full problem during training, and instead are forced to solve subproblems. Our intention is that, since
the gaze pathway only has access to the person's head, x_h, and location, x_p, we expect it will learn to predict the direction of gaze. Likewise, since the saliency pathway does not know which person to follow, we hope it learns to find objects that are salient, independent of the person's viewpoint. The
element-wise product allows these two pathways to interact in a way that is similar to how humans
approach this task. In order for a location in the element-wise product to be activated, both the gaze
and saliency pathways must have large activations.
Saliency map: To form the saliency pathway, we use a convolutional network on the full image to
produce a hidden representation of size D × D × K. Since [21] shows that objects tend to emerge in
these deep representations, we can create a gaze-following saliency map by learning the importance
of these objects. To do this, we add a convolutional layer that convolves the hidden representation
with a filter w ∈ R^{1×1×K}, which produces the D × D saliency map. Here, the sign and magnitude of w can be interpreted as weights indicating an object's importance for gaze-following saliency.
Gaze mask: In the gaze pathway, we use a convolutional network on the head image. We concatenate its output with the head position and use several fully connected layers and a final sigmoid to predict the D × D gaze mask.
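A minimal sketch of how the two pathways fit together (PyTorch-style, for illustration; the paper's implementation used Caffe, alexnet_conv_stack() is a hypothetical helper standing in for the five AlexNet-style convolutional layers, and the flattened head-feature and head-position dimensions are our assumptions):

import torch
import torch.nn as nn

class GazeFollowSketch(nn.Module):
    def __init__(self, D=13, K=256, head_feat_dim=9216, pos_dim=169):
        super().__init__()
        self.D = D
        # Saliency pathway: conv stack on the full image -> D x D x K maps,
        # then the 1x1xK convolution w scoring object importance.
        self.saliency_convs = alexnet_conv_stack()      # hypothetical helper
        self.w = nn.Conv2d(K, 1, kernel_size=1)
        # Gaze pathway: conv stack on the head crop, concatenated with the
        # quantized head position, then FC layers of sizes 100, 400, 200, 169
        # and a final sigmoid producing the D x D gaze mask (Sec. 3.3).
        self.head_convs = alexnet_conv_stack()
        self.gaze_fc = nn.Sequential(
            nn.Linear(head_feat_dim + pos_dim, 100), nn.ReLU(),
            nn.Linear(100, 400), nn.ReLU(),
            nn.Linear(400, 200), nn.ReLU(),
            nn.Linear(200, D * D), nn.Sigmoid())
        # Stand-in for F(.): maps the multiplied maps to the five shifted
        # 5 x 5 grids (output structure per Secs. 3.2-3.3).
        self.shifted_grid_fc = nn.Linear(D * D, 5 * 25)

    def forward(self, image, head_crop, head_pos):
        S = self.w(self.saliency_convs(image))              # B x 1 x D x D
        h = self.head_convs(head_crop).flatten(1)
        G = self.gaze_fc(torch.cat([h, head_pos], dim=1))
        G = G.view(-1, 1, self.D, self.D)                   # gaze mask
        prod = (S * G).flatten(1)                           # element-wise product
        return self.shifted_grid_fc(prod).view(-1, 5, 25)   # 5 shifted grids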
Pathway visualization: Fig. 4 shows examples of the (a) gaze masks and (b) saliency maps learned
by our network. Fig. 4(b) also compares the saliency maps of our network with the saliency computed using a state-of-the-art saliency model [11]. Note that our model learns a notion of saliency
that is relevant for the gaze-following task and places emphasis on certain objects that people tend
to look at (e.g., balls and televisions). In the third example, the red light coming from the computer
mouse is salient in the Judd et al [11] model but that object is not relevant in a gaze-following task
as the computer monitor is more likely to be the target of attention of the person inside the picture.
Figure 4: Pathway visualization: (a) The gaze mask output by our network for various head poses.
(b) Each triplet of images shows, from left to right, the input image, its free-viewing saliency estimated using [11], and the gaze-following saliency estimated using our network. These examples
clearly illustrate the differences between free-viewing saliency [11] and gaze-following saliency.
3.2 Multimodal Predictions
Although humans can often follow gaze reliably, predicting gaze is sometimes ambiguous. If there
are several salient objects in the image, or the eye pose cannot be accurately perceived, then humans
may disagree when predicting gaze. We can observe this for several examples in Fig. 2. Consequently, we want to design our model to support multimodal predictions.
We could formulate our problem as a regression task (i.e., regress the Cartesian coordinates of
fixations) but then our predictions would be unimodal. Instead, we can formulate our problem
as a classification task, which naturally supports multimodal outputs because each category has a
confidence value. To do this, we quantize the fixation location y into an N × N grid. Then, the job of the network is to classify the inputs x into one of N² classes. The model output ŷ ∈ R^{N×N} is the
confidence that the person is fixating in each grid cell.
Shifted grids: For classification, we must choose the number of grid cells, N . If we pick a small N ,
our predictions will suffer from poor precision. If we pick a large N , there will be more precision,
but the learning problem becomes harder because standard classification losses do not gradually
penalize spatial categories ? a misclassification that is off by just one cell should be penalized less
than errors multiple cells away. To alleviate this trade-off, we propose the use of shifted grids,
as illustrated in Fig. 3, where the network solves several overlapping classification problems. The
network predicts locations in multiple grids where each grid is shifted such that cells in one grid
overlap with cells in other grids. We then average the shifted outputs to produce the final prediction.
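A sketch of the shifted-grid targets and of the test-time averaging (NumPy; the exact grid offsets are not given in the text, so the fractional shifts below are an assumption; during training, each grid simply gets its own softmax loss on its label, as described in Sec. 3.3):

import numpy as np

OFFSETS = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05)]  # assumed

def grid_label(y, offset, N=5):
    """Class index of a fixation y in [0,1]^2 within one shifted N x N grid."""
    i = int(np.clip(np.floor((y[0] - offset[0]) * N), 0, N - 1))
    j = int(np.clip(np.floor((y[1] - offset[1]) * N), 0, N - 1))
    return i * N + j

def average_shifted_outputs(grid_probs, N=5, res=100):
    """Paint each grid's class probabilities onto a common res x res map
    (displaced by its offset) and average, yielding the final prediction."""
    heat = np.zeros((res, res))
    for probs, (ox, oy) in zip(grid_probs, OFFSETS):
        for c in range(N * N):
            i, j = divmod(c, N)
            x0, x1 = int((i / N + ox) * res), int(((i + 1) / N + ox) * res)
            y0, y1 = int((j / N + oy) * res), int(((j + 1) / N + oy) * res)
            heat[max(x0, 0):min(x1, res), max(y0, 0):min(y1, res)] += probs[c]
    return heat / len(grid_probs)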
3.3 Training
We train our network end-to-end using backpropagation. We use a softmax loss for each shifted
grid and average their losses. Since we only supervise the network with gaze fixations, we do not
enforce that the gaze and saliency pathways solve their respective subproblems. Rather, we expect
that the proposed network structure encourages these roles to emerge automatically (which they do,
as shown in Fig. 6).
Implementation details: We implemented the network using Caffe [10]. The convolutional layers
in both the gaze and saliency pathways follow the architecture of the first five layers of the AlexNet
architecture [12]. In our experiments, we initialize these convolutional layers of the saliency pathway with the Places-CNN [22] and those of the gaze pathway with ImageNet-CNN [12]. The last
convolutional layer of the saliency pathway has a 1 × 1 × 256 convolution kernel (i.e., K = 256). The remaining fully connected layers in the gaze pathway are of sizes 100, 400, 200, and 169 respectively. The saliency map and gaze mask are 13 × 13 in size (i.e., D = 13), and we use 5 shifted grids of size 5 × 5 each (i.e., N = 5). For learning, we augment our training data with flips and
random crops with the fixation locations adjusted accordingly.
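The coordinate bookkeeping for this augmentation can be sketched as follows (the crop-scale range is an assumption; fixation and head position are in normalized [0, 1] coordinates):

import random

def augment(image, fixation, head_pos):
    """Random horizontal flip and random crop, with fixation and head
    coordinates adjusted accordingly. image: H x W x C array."""
    H, W = image.shape[:2]
    if random.random() < 0.5:                      # horizontal flip
        image = image[:, ::-1]
        fixation = (1.0 - fixation[0], fixation[1])
        head_pos = (1.0 - head_pos[0], head_pos[1])
    s = random.uniform(0.8, 1.0)                   # crop scale: assumption
    cw, ch = int(s * W), int(s * H)
    x0, y0 = random.randint(0, W - cw), random.randint(0, H - ch)
    image = image[y0:y0 + ch, x0:x0 + cw]
    # Re-normalize coordinates to the cropped image (crops that cut off the
    # fixation would be rejected or re-sampled in practice).
    fixation = ((fixation[0] * W - x0) / cw, (fixation[1] * H - y0) / ch)
    head_pos = ((head_pos[0] * W - x0) / cw, (head_pos[1] * H - y0) / ch)
    return image, fixation, head_pos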
Figure 5: Qualitative results: We show several examples of successes and failures of our model.
The red lines indicate ground truth gaze, and the yellow, our predicted gaze.
Table 1: Evaluation: (a) We evaluate our model against baselines and (b) analyze how it performs with some components disabled. AUC refers to the area under the ROC curve (higher is better). Dist. refers to the L2 distance to the average of ground truth fixation, while Min Dist. refers to the L2 distance to the nearest ground truth fixation (lower is better). Ang. is the angular error of predicted gaze in degrees (lower is better). See Section 4 for details.

(a) Main Evaluation
Model            AUC    Dist.   Min Dist.   Ang.
Our              0.878  0.190   0.113       24°
SVM+shift grid   0.788  0.268   0.186       40°
SVM+one grid     0.758  0.276   0.193       43°
Judd [11]        0.711  0.337   0.250       54°
Fixed bias       0.674  0.306   0.219       48°
Center           0.633  0.313   0.230       49°
Random           0.504  0.484   0.391       69°
One human        0.924  0.096   0.040       11°

(b) Model Diagnostics
Model            AUC    Dist.   Min Dist.   Ang.
No image         0.821  0.221   0.142       27°
No position      0.837  0.238   0.158       32°
No head          0.822  0.264   0.179       41°
No eltwise       0.876  0.193   0.117       25°
5 × 5 grid       0.839  0.245   0.164       36°
10 × 10 grid     0.873  0.218   0.138       30°
L2 loss          0.768  0.245   0.169       34°
Our full         0.878  0.190   0.113       24°
4 Experiments
4.1 Setup
We evaluate the ability of our model to predict where people in images are looking. We use the disjoint train and test sets from GazeFollow, as described in Section 2, to train and evaluate our model.
The test set was randomly sampled such that the fixation location was approximately uniform, and
ignored people who were looking outside the picture or at the camera. Similar to PASCAL VOC
Action Recognition [4] where ground-truth person bounding boxes are available both during training
and testing, we assume that we are given the head location at both train and test time. This allows
us to focus our attention on the primary task of gaze-following. In Section 4.3, we show that our
method performs well even when using a simple head detector.
Our primary evaluation metric compares the ground truth annotations¹ against the distribution predicted by our model. We use the Area Under Curve (AUC) criteria from [11] where the predicted
heatmap is used as confidences to produce an ROC curve. The AUC is the area under this ROC
curve. If our model behaves perfectly, the AUC will be 1 while chance performance is 0.5. L2 distance: We evaluate the Euclidean distance between our prediction and the average of ground truth
annotations. We assume each image is of size 1 × 1 when computing the L2 distance. Additionally,
as the ground truth may be multimodal, we also report the minimum L2 distance between our prediction and all ground truth annotations.
¹ Note that, as mentioned in Section 2, we obtain 10 annotations per person in the test set.
Angular error: Using the ground truth eye position from
the annotation we compute the gaze vectors for the average ground truth fixations and our prediction,
and report the angular difference between them.
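All three metrics can be computed directly from the annotations (NumPy sketch; pred, eye, and the ground-truth fixations are in normalized coordinates, so the image is treated as 1 × 1):

import numpy as np

def l2_metrics(pred, fixations):
    """Dist.: distance to the mean fixation; Min Dist.: distance to the
    closest individual annotation."""
    fixations = np.asarray(fixations)
    dist = np.linalg.norm(pred - fixations.mean(axis=0))
    min_dist = np.linalg.norm(pred - fixations, axis=1).min()
    return dist, min_dist

def angular_error_deg(eye, pred, fixations):
    """Angle between the predicted gaze vector and the vector to the mean
    ground-truth fixation, both anchored at the annotated eye position."""
    g_pred = np.asarray(pred) - np.asarray(eye)
    g_true = np.asarray(fixations).mean(axis=0) - np.asarray(eye)
    cos = g_pred @ g_true / (np.linalg.norm(g_pred) * np.linalg.norm(g_true) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))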
We compare our approach against several baselines ranging from simple (center, fixed bias) to more
complex (SVM, free-viewing saliency) as described below. Center: The prediction is always the
center of the image. Fixed bias: The prediction is given by the average of fixations from the training
set for heads in similar locations as the test image. SVM: We generate features by concatenating
the quantized eye position with pool5 of the ImageNet-CNN [12] for both the full image and the
head image. We train an SVM on these features to predict gaze using a similar classification grid setup as our model. We evaluate this approach with both a single grid and shifted grids. Free-viewing saliency: We use a state-of-the-art free-viewing saliency model [11] as a predictor of gaze.
Although free-viewing saliency models ignore head orientation and location, they may still identify
important objects in the image.
4.2 Results
We compare our model against baselines in Tbl. 1(a). Our method achieves an AUC of 0.878 and a mean Euclidean error of 0.190, outperforming all baselines significantly on all the evaluation metrics. The SVM model using shifted grids shows the best baseline performance, surpassing the one
grid baseline by a reasonable margin. This verifies the effectiveness of the shifted grids approach
proposed in this work.
Fig.5 shows some example outputs of our method. These qualitative results show that our method
is able to distinguish people in the image by using the gaze pathway to model a person's point of
view, as it produces different outputs for different people in the same image. Furthermore, it is also
able to find salient objects in images, such as balls or food. However, the method still has certain
limitations. The lack of 3D understanding generates some wrong predictions, as illustrated by the
1st image in the 2nd row of Fig. 5, where one of the predictions is in a different plane of depth.
To obtain an approximate upper bound on prediction performance, we evaluate human performance
on this task. Since we annotated our test set 10 times, we can quantify how well one annotation
predicts the mean of the remaining 9 annotations. A single human is able to achieve an AUC of
0.924 and a mean Euclidean error of 0.096. While our approach outperforms all baselines, it is still
far from reaching human performance. We hope that the availability of GazeFollow will motivate
further research in this direction, allowing machines to reach human level performance.
4.3 Analysis
Ablation study: In Tbl. 1(b), we report the performance after removing different components of our
model, one at a time, to better understand their significance. In general, all three of inputs (image,
position and head) contribute to the performance of our model. Interestingly, the model with only
the head and its position achieves comparable angular error to our full method, suggesting that the
gaze pathway is largely responsible for estimating the gaze direction. Further, we show the results
of our model with single output grids (5 × 5 and 10 × 10). Removing shifted grids hurts performance
significantly as shifted grids have a spatially graded loss function, which is important for learning.
Internal representation: In Fig. 6, we visualize the various stages of our network. We show the
output of each of the pathways as well as the element wise product. For example, in the second row
we have two different girls writing on the blackboard. The gaze mask effectively creates a heat map
of the field of view for the girl on the right, while the saliency map identifies the salient spots in the image. The element-wise multiplication of the saliency map and gaze mask removes the responses of the girl on the left and attenuates the saliency of the right girl's head. Finally, our shifted grids
approach accurately predicts where the girl is looking.
Further, we apply the technique from [21] to visualize the top activations for different units in the
fifth convolutional layer of the saliency pathway. We use filter weights from the sixth convolutional
layer to rank their contribution to the saliency map. Fig. 7 shows four units with positive (left) and
negative (right) contributions to the saliency map. Interestingly, w learns positive weights for salient
objects such as switched on TV monitors and balls, and negative weights for non-salient objects.
[Figure 6 columns: input image, gaze mask, saliency, product, output prediction]
Figure 6: Visualization of internal representations: We visualize the output of different components of our model. The green circle indicates the person whose gaze we are trying to predict, the
red dots/lines show the ground truth gaze, and the yellow line is our predicted gaze.
[Figure 7 panels: positive weight units (ball, screen, head, pizza); negative weight units (surface, horizon, lights, floor)]
Figure 7: Visualization of saliency units: We visualize several units in our saliency pathway by
finding images with high scoring activations, similar to [21]. We sort the units by w, the weights of
the sixth convolutional layer (See Section 3.1 for more details). Positive weights tend to correspond
to salient everyday objects, while negative weights tend to correspond to background objects.
Automatic head detection: To evaluate the impact of imperfect head locations on our system, we
built a simple head detector, and input its detections into our model. For detections surpassing the
intersection over union threshold of 0.5, our model achieved an AUC of 0.868, as compared to an
AUC of 0.878 when using ground-truth head locations. This demonstrates that our model is robust
to inaccurate head detections, and can easily be made fully-automatic.
5 Conclusion
Accurate gaze-following achieving human-level performance will be an important tool to enable
systems that can interpret human behavior and social situations. In this paper, we have introduced a
model that learns to do gaze-following using GazeFollow, a large-scale dataset of human annotated
gaze. Our model automatically learns to extract the line of sight from heads, without using any
supervision on head pose, and to detect salient objects that people are likely to interact with, without
requiring object-level annotations during training. We hope that our model and dataset will serve as
important resources to facilitate further research in this direction.
Acknowledgements. We thank Andrew Owens for helpful discussions. Funding for this research
was partially supported by the Obra Social 'la Caixa' Fellowship for Post-Graduate Studies to AR
and a Google PhD Fellowship to CV.
References
[1] A. Borji, D. Parks, and L. Itti. Complementary effects of gaze direction and early saliency in guiding
fixations during free viewing. Journal of vision, 14(13):3, 2014.
[2] A. Borji, D. N. Sihite, and L. Itti. Salient object detection: A benchmark. In ECCV. 2012.
[3] N. Emery. The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience &
Biobehavioral Reviews, 2000.
[4] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object
Classes (VOC) Challenge. IJCV, 2010.
[5] A. Fathi, J. K. Hodgins, and J. M. Rehg. Social interactions: A first-person perspective. In CVPR, 2012.
[6] A. Fathi, Y. Li, and J. M. Rehg. Learning to recognize daily actions using gaze. In ECCV. 2012.
[7] M. W. Hoffman, D. B. Grimes, A. P. Shon, and R. P. Rao. A probabilistic model of gaze imitation and
shared attention. Neural Networks, 2006.
[8] L. Itti and C. Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2001.
[9] H. Jasso, J. Triesch, and G. Deák. Using eye direction cues for gaze following: a developmental model. In ICDL, 2006.
[10] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org, 2013.
[11] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In CVPR, 2009.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[13] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft
coco: Common objects in context. In ECCV. 2014.
[14] M. J. Marin-Jimenez, A. Zisserman, M. Eichner, and V. Ferrari. Detecting people looking at each other
in videos. IJCV, 2014.
[15] H. Park, E. Jain, and Y. Sheikh. Predicting primary gaze behavior using social saliency fields. In ICCV,
2013.
[16] D. Parks, A. Borji, and L. Itti. Augmented saliency model using automatic 3d head pose detection and
learned gaze following in natural scenes. Vision Research, 2014.
[17] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015.
[18] H. Soo Park and J. Shi. Social saliency prediction. In CVPR, 2015.
[19] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition
from abbey to zoo. In CVPR, 2010.
[20] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning
bases of action attributes and parts. In ICCV, 2011.
[21] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns.
In ICLR, 2015.
[22] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition
using places database. In NIPS, 2014.
[23] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In
CVPR, 2012.
5,356 | 5,849 | Semi-supervised Convolutional Neural Networks for
Text Categorization via Region Embedding
Rie Johnson
RJ Research Consulting
Tarrytown, NY, USA
[email protected]
Tong Zhang*
Baidu Inc., Beijing, China
Rutgers University, Piscataway, NJ, USA
[email protected]
Abstract
This paper presents a new semi-supervised framework with convolutional neural
networks (CNNs) for text categorization. Unlike the previous approaches that rely
on word embeddings, our method learns embeddings of small text regions from
unlabeled data for integration into a supervised CNN. The proposed scheme for
embedding learning is based on the idea of two-view semi-supervised learning,
which is intended to be useful for the task of interest even though the training
is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.
1 Introduction
Convolutional neural networks (CNNs) [15] are neural networks that can make use of the internal
structure of data such as the 2D structure of image data through convolution layers, where each
computation unit responds to a small region of input data (e.g., a small square of a large image). On
text, CNN has been gaining attention, used in systems for tagging, entity search, sentence modeling,
and so on [4, 5, 26, 7, 21, 12, 25, 22, 24, 13], to make use of the 1D structure (word order) of text
data. Since CNN was originally developed for image data, which is fixed-sized, low-dimensional and
dense, without modification it cannot be applied to text documents, which are variable-sized, highdimensional and sparse if represented by sequences of one-hot vectors. In many of the CNN studies
on text, therefore, words in sentences are first converted to low-dimensional word vectors. The word
vectors are often obtained by some other method from an additional large corpus, which is typically
done in a fashion similar to language modeling though there are many variations [3, 4, 20, 23, 6, 19].
Use of word vectors obtained this way is a form of semi-supervised learning and leaves us with
the following questions. Q1. How effective is CNN on text in a purely supervised setting without
the aid of unlabeled data? Q2. Can we use unlabeled data with CNN more effectively than using
general word vector learning methods? Our recent study [11] addressed Q1 on text categorization
and showed that CNN without a word vector layer is not only feasible but also beneficial when
not aided by unlabeled data. Here we address Q2 also on text categorization: building on [11], we
propose a new semi-supervised framework that learns embeddings of small text regions (instead of
words) from unlabeled data, for use in a supervised CNN.
The essence of CNN, as described later, is to convert small regions of data (e.g., "love it" in a document) to feature vectors for use in the upper layers; in other words, through training, a convolution
layer learns an embedding of small regions of data. Here we use the term "embedding" loosely to
mean a structure-preserving function, in particular, a function that generates low-dimensional features that preserve the predictive structure. [11] applies CNN directly to high-dimensional one-hot
vectors, which leads to directly learning an embedding of small text regions (e.g., regions of size 3
* Tong Zhang would like to acknowledge NSF IIS-1250985, NSF IIS-1407939, and NIH R01AI116744 for
supporting his research.
like phrases, or regions of size 20 like sentences), eliminating the extra layer for word vector conversion. This direct learning of region embedding was noted to have the merit of higher accuracy
with a simpler system (no need to tune hyper-parameters for word vectors) than supervised word
vector-based CNN in which word vectors are randomly initialized and trained as part of CNN training. Moreover, the performance of [11]'s best CNN rivaled or exceeded the previous best results on
the benchmark datasets.
Motivated by this finding, we seek effective use of unlabeled data for text categorization through
direct learning of embeddings of text regions. Our new semi-supervised framework learns a region embedding from unlabeled data and uses it to produce additional input (additional to one-hot
vectors) to supervised CNN, where a region embedding is trained with labeled data. Specifically,
from unlabeled data, we learn tv-embeddings ("tv" stands for "two-view"; defined later) of a text
region through the task of predicting its surrounding context. According to our theoretical finding,
a tv-embedding has desirable properties under ideal conditions on the relations between two views
and the labels. While in reality the ideal conditions may not be perfectly met, we consider them as
guidance in designing the tasks for tv-embedding learning.
We consider several types of tv-embedding learning task trained on unlabeled data; e.g., one task
is to predict the presence of the concepts relevant to the intended task (e.g., "desire to recommend
the product") in the context, and we indirectly use labeled data to set up this task. Thus, we seek
to learn tv-embeddings useful specifically for the task of interest. This is in contrast to the previous
word vector/embedding learning methods, which typically produce a word embedding for general
purposes so that all aspects (e.g., either syntactic or semantic) of words are captured. In a sense,
the goal of our region embedding learning is to map text regions to high-level concepts relevant
to the task. This cannot be done by word embedding learning since individual words in isolation
are too primitive to correspond to high-level concepts. For example, "easy to use" conveys positive
sentiment, but "use" in isolation does not. We show that our models with tv-embeddings outperform the previous best results on sentiment classification and topic classification. Moreover, a more
direct comparison confirms that our region tv-embeddings provide more compact and effective representations of regions for the task of interest than what can be obtained by manipulation of a word
embedding.
1.1 Preliminary: one-hot CNN for text categorization [11]
A CNN is a feed-forward network equipped with convolution layers interleaved with pooling layers.
A convolution layer consists of computation units, each of which responds to a small region of
input (e.g., a small square of an image), and the small regions collectively cover the entire data. A
computation unit associated with the `-th region of input x computes:
σ(W · rℓ(x) + b),    (1)
where rℓ(x) ∈ R^q is the input region vector that represents the ℓ-th region. Weight matrix W ∈ R^{m×q} and bias vector b ∈ R^m are shared by all the units in the same layer, and they are learned
through training. In [11], input x is a document represented by one-hot vectors (Figure 1); therefore,
we call [11]'s CNN one-hot CNN; rℓ(x) can be either a concatenation of one-hot vectors, a bag-of-word vector (bow), or a bag-of-n-gram vector: e.g., for a region "love it" over the vocabulary (I, it, love),
rℓ(x) = [ 0 0 1 | 0 1 0 ]ᵀ    (concatenation)    (2)
rℓ(x) = [ 0 1 1 ]ᵀ    (bow)    (3)
where each block of (2) indexes the vocabulary (I, it, love) for one word position.
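To make (1)-(3) concrete, here is a minimal numpy sketch (not from the paper) that builds both region representations for the toy vocabulary above and pushes the bow vector through one convolution unit; the region text, the neuron count m, and the randomly initialized W and b are illustrative assumptions.

```python
import numpy as np

vocab = ["I", "it", "love"]                 # toy vocabulary from the example above
idx = {w: i for i, w in enumerate(vocab)}

def seq_vector(region_words):
    # Concatenation of one-hot vectors, eq. (2): one |V|-dim block per position.
    v = np.zeros(len(region_words) * len(vocab))
    for pos, w in enumerate(region_words):
        v[pos * len(vocab) + idx[w]] = 1.0
    return v

def bow_vector(region_words):
    # Bag-of-words vector, eq. (3): word order within the region is discarded.
    v = np.zeros(len(vocab))
    for w in region_words:
        v[idx[w]] = 1.0
    return v

r_seq = seq_vector(["love", "it"])          # [0 0 1 | 0 1 0]
r_bow = bow_vector(["love", "it"])          # [0 1 1]

# One convolution unit, eq. (1), with ReLU as the component-wise sigma.
m = 4                                        # number of neurons (an assumption)
rng = np.random.default_rng(0)
W = rng.normal(size=(m, r_bow.size))
b = np.zeros(m)
u = np.maximum(W @ r_bow + b, 0.0)          # the m-dimensional embedded region
```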
The bow representation (3) loses word order within the region but is more robust to data sparsity,
enables a large region size such as 20, and speeds up training by having fewer parameters. This is
what we mainly use for embedding learning from unlabeled data. CNN with (2) is called seq-CNN
and CNN with (3) bow-CNN. The region size and stride (distance between the region centers) are
meta-parameters. Note that we used a tiny three-word vocabulary for the vector examples above to
save space, but a vocabulary of typical applications could be much larger. σ in (1) is a componentwise non-linear function (e.g., applying σ(x) = max(x, 0) to each vector component). Thus, each
computation unit generates an m-dimensional vector where m is the number of weight vectors (W's
rows) or neurons. In other words, a convolution layer embodies an embedding of text regions, which
produces an m-dim vector for each text region.

[Figure 1: One-hot CNN example. Region size 2, stride 1.]
[Figure 2: Tv-embedding learning by training to predict adjacent regions.]

In essence, a region embedding uses co-presence and absence of words in a region as input to produce predictive features, e.g., if presence of "easy to use" with absence of "not" is a predictive indicator, it can be turned into a large feature value by having a negative weight on "not" (to penalize its presence) and positive weights on the other three
words in one row of W. A more formal argument can be found in the supplementary material. The
m-dim vectors from all the text regions of each document are aggregated by the pooling layer, by
either component-wise maximum (max-pooling) or average (average-pooling), and used by the top
layer (a linear classifier) as features for classification. Here we focused on the convolution layer; for
other details, [11] should be consulted.
2 Semi-supervised CNN with tv-embeddings for text categorization
It was shown in [11] that one-hot CNN is effective on text categorization, where the essence is direct
learning of an embedding of text regions aided by new options of input region vector representation.
We go further along this line and propose a semi-supervised learning framework that learns an embedding of text regions from unlabeled data and then integrates the learned embedding in supervised
training. The first step is to learn an embedding with the following property.
Definition 1 (tv-embedding). A function f1 is a tv-embedding of X1 w.r.t. X2 if there exists a
function g1 such that P(X2|X1) = g1(f1(X1), X2) for any (X1, X2) ∈ X1 × X2.
A tv-embedding ("tv" stands for two-view) of a view (X1), by definition, preserves everything required to predict another view (X2), and it can be trained on unlabeled data. The motivation of tv-embedding is our theoretical finding (formalized in the Appendix) that, essentially, a tv-embedded
feature vector f1 (X1 ) is as useful as X1 for the purpose of classification under ideal conditions.
The conditions essentially state that there exists a set H of hidden concepts such that two views
and labels of the classification task are related to each other only through the concepts in H. The
concepts in H might be, for example, "pricey", "handy", "hard to use", and so on for sentiment
classification of product reviews. While in reality the ideal conditions may not be completely met,
we consider them as guidance and design tv-embedding learning accordingly.
Tv-embedding learning is related to two-view feature learning [2] and ASO [1], which learn a linear
embedding from unlabeled data through tasks such as predicting a word (or predicted labels) from
the features associated with its surrounding words. These studies were, however, limited to a linear
embedding. A related method in [6] learns a word embedding so that left context and right context
maximally correlate in terms of canonical correlation analysis. While we share with these studies
the general idea of using the relations of two views, we focus on nonlinear learning of region embeddings useful for the task of interest, and the resulting methods are very different. An important
difference of tv-embedding learning from co-training is that it does not involve label guessing, thus
avoiding risk of label contamination. [8] used a Stacked Denoising Auto-encoder to extract features
invariant across domains for sentiment classification from unlabeled data. It is for fully-connected
neural networks, which underperformed CNNs in [11].
Now let B be the base CNN model for the task of interest, and assume that B has one convolution
layer with region size p. Note, however, that the restriction of having only one convolution layer is
merely for simplifying the description. We propose a semi-supervised framework with the following
two steps.
1. Tv-embedding learning: Train a neural network U to predict the context from each region
of size p so that U's convolution layer generates feature vectors for each text region of size
p for use in the classifier in the top layer. It is this convolution layer, which embodies the
tv-embedding, that we transfer to the supervised learning model in the next step. (Note that
U differs from CNN in that each small region is associated with its own target/output.)
2. Final supervised learning: Integrate the learned tv-embedding (the convolution layer of U)
into B, so that the tv-embedded regions (the output of U's convolution layer) are used as an
additional input to B's convolution layer. Train this final model with labeled data.
These two steps are described in more detail in the next two sections.
2.1 Learning tv-embeddings from unlabeled data
We create a task on unlabeled data to predict the context (adjacent text regions) from each region of
size p defined in B?s convolution layer. To see the correspondence to the definition of tv-embeddings,
it helps to consider a sub-task that assigns a label (e.g., positive/negative) to each text region (e.g., ?,
fun plot?) instead of the ultimate task of categorizing the entire document. This is sensible because
CNN makes predictions by building up from these small regions. In a document ?good acting, fun
plot :)? as in Figure 2, the clues for predicting a label of ?, fun plot? are ?, fun plot? itself (view1: X1 ) and its context ?good acting? and ?:)? (view-2: X2 ). U is trained to predict X2 from X1 ,
i.e., to approximate P (X2 |X1 ) by g1 (f1 (X1 ), X2 )) as in Definition 1, and functions f1 and g1 are
embodied by the convolution layer and the top layer, respectively.
Given a document x, for each text region indexed by ℓ, U's convolution layer computes:
uℓ(x) = σ^(U)( W^(U) · rℓ^(U)(x) + b^(U) ),    (4)
which is the same as (1) except for the superscript "(U)" to indicate that these entities belong to U. The top layer (a linear model for classification) uses uℓ(x) as features for prediction. W^(U) and b^(U) (and the top-layer parameters) are learned through training. The input region vector representation rℓ^(U)(x) can be either sequential, bow, or bag-of-n-gram, independent of rℓ(x) in B.
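The following sketch shows, under stated assumptions, what one SGD step of this tv-embedding training could look like with the unsupervised bow target: the region bow vector is view-1, its context bow vector is view-2, f1 is the convolution layer, and g1 is the linear top layer. The layer sizes, learning rate, and plain squared loss (the paper's target weighting and vocabulary control are omitted here) are placeholders, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
V, m = 30000, 100        # context vocabulary and embedding sizes (placeholders)
q = V                    # the region itself is also a |V|-dim bow vector here
W_U = rng.normal(0.0, 0.01, size=(m, q)); b_U = np.zeros(m)   # f1: tv-embedding layer
A   = rng.normal(0.0, 0.01, size=(V, m)); c   = np.zeros(V)   # g1: linear top layer
lr = 0.1

def sgd_step(r, z):
    """One step on the squared loss ||z - (A @ relu(W_U @ r + b_U) + c)||^2,
    where r is the bow vector of a region (view-1) and z is the bow vector
    of its context (view-2)."""
    global W_U, b_U, A, c
    h = np.maximum(W_U @ r + b_U, 0.0)       # tv-embedded region, cf. eq. (4)
    p = A @ h + c                            # predicted context
    g = 2.0 * (p - z)                        # d(loss)/dp
    gh = (A.T @ g) * (h > 0)                 # backprop through the ReLU
    A -= lr * np.outer(g, h); c -= lr * g
    W_U -= lr * np.outer(gh, r); b_U -= lr * gh
```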
The goal here is to learn an embedding of text regions (X1 ), shared with all the text regions at
every location. Context (X2 ) is used only in tv-embedding learning as prediction target (i.e., not
transferred to the final model); thus, the representation of context should be determined to optimize
the final outcome without worrying about the cost at prediction time. Our guidance is the conditions
on the relationships between the two views mentioned above; ideally, the two views should be
related to each other only through the relevant concepts. We consider the following two types of
target/context representation.
Unsupervised target A straightforward vector encoding of context/target X2 is bow vectors of
the text regions on the left and right to X1 . If we distinguish the left and right, the target vector is
2|V |-dimensional with vocabulary V , and if not, |V |-dimensional. One potential problem of this
encoding is that adjacent regions often have syntactic relations (e.g., "the" is often followed by an
adjective or a noun), which are typically irrelevant to the task (e.g., to identify positive/negative
sentiment) and therefore undesirable. A simple remedy we found effective is vocabulary control
of context to remove function words (or stop-words if available) from (and only from) the target
vocabulary.
Partially-supervised target Another context representation that we consider is partially supervised in the sense that it uses labeled data. First, we train a CNN with the labeled data for the
intended task and apply it to the unlabeled data. Then we discard the predictions and only retain
the internal output of the convolution layer, which is an m-dimensional vector for each text region
where m is the number of neurons. We use these m-dimensional vectors to represent the context.
[11] has shown, by examples, that each dimension of these vectors roughly represents concepts relevant to the task, e.g., "desire to recommend the product", "report of a faulty product", and so on.
Therefore, an advantage of this representation is that there is no obvious noise between X1 and X2
since context X2 is represented only by the concepts relevant to the task. A disadvantage is that it
is only as good as the supervised CNN that produced it, which is not perfect and in particular, some
relevant concepts would be missed if they did not appear in the labeled data.
2.2 Final supervised learning: integration of tv-embeddings into supervised CNN
We use the tv-embedding obtained from unlabeled data to produce additional input to B's convolution layer, by replacing σ(W · rℓ(x) + b) (1) with:
σ(W · rℓ(x) + V · uℓ(x) + b),    (5)
where uℓ(x) is defined by (4), i.e., uℓ(x) is the output of the tv-embedding applied to the ℓ-th
region. We train this model with the labeled data of the task; that is, we update the weights W, V,
bias b, and the top-layer parameters so that the designated loss function is minimized on the labeled
training data. W^(U) and b^(U) can be either fixed or updated for fine-tuning, and in this work we fix
them for simplicity.
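A direct transcription of (5), and of (6) below, as numpy functions may help; this is a sketch under the assumption that the tv-embedding outputs u have already been computed with frozen W^(U) and b^(U), and the argument names are invented for illustration.

```python
import numpy as np

def supervised_embedding(r, u, W, V, b):
    # Eq. (5): the supervised region embedding takes the one-hot region
    # vector r plus its frozen tv-embedding u as additional input.
    return np.maximum(W @ r + V @ u + b, 0.0)

def supervised_embedding_multi(r, us, W, Vs, b):
    # Eq. (6): the same with k tv-embeddings us[i] and matrices Vs[i].
    s = W @ r + b
    for V_i, u_i in zip(Vs, us):
        s += V_i @ u_i
    return np.maximum(s, 0.0)
```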
Note that while (5) takes a tv-embedded region as input, (5) itself is also an embedding of text
regions; let us call it (and also (1)) a supervised embedding, as it is trained with labeled data, to
distinguish it from tv-embeddings. That is, we use tv-embeddings to improve the supervised embedding. Note that (5) can be naturally extended to accommodate multiple tv-embeddings by
σ( W · rℓ(x) + Σ_{i=1}^{k} V^(i) · uℓ^(i)(x) + b ),    (6)
so that, for example, two types of tv-embedding (i.e., k = 2) obtained with the unsupervised target
and the partially-supervised target can be used at once, which can lead to performance improvement
as they complement each other, as shown later.
3 Experiments
Our code and the experimental settings are available at riejohnson.com/cnn_download.html.
Data We used the three datasets used in [11]: IMDB, Elec, and RCV1, as summarized in Table
1. IMDB (movie reviews) [17] comes with an unlabeled set. To facilitate comparison with previous
studies, we used a union of this set and the training set as unlabeled data. Elec consists of Amazon
reviews of electronics products. To use as unlabeled data, we chose 200K reviews from the same
data source so that they are disjoint from the training and test sets, and that the reviewed products
are disjoint from the test set. On the 55-way classification of the second-level topics on RCV1
(news), unlabeled data was chosen to be disjoint from the training and test sets. On the multi-label
categorization of 103 topics on RCV1, since the official LYRL04 split for this task divides the entire
corpus into a training set and a test set, we used the entire test set as unlabeled data (the transductive
learning setting).
      | #train | #test   | #unlabeled        | #class       | output
IMDB  | 25,000 | 25,000  | 75K (20M words)   | 2            | Positive/negative sentiment
Elec  | 25,000 | 25,000  | 200K (24M words)  | 2            | Positive/negative sentiment
RCV1  | 15,564 | 49,838  | 669K (183M words) | 55 (single)  | Topic(s)
RCV1  | 23,149 | 781,265 | 781K (214M words) | 103 (multi)* | Topic(s)

Table 1: Datasets. *The multi-label RCV1 is used only in Table 6.
Implementation We used the one-layer CNN models found to be effective in [11] as our base
models B, namely, seq-CNN on IMDB/Elec and bow-CNN on RCV1. Tv-embedding training minimized the weighted square loss Σ_{i,j} α_{i,j}(z_i[j] − p_i[j])² where i goes through the regions, z represents
the target regions, and p is the model output. The weights α_{i,j} were set to balance the loss originating from the presence and absence of words (or concepts in case of the partially-supervised target)
and to speed up training by eliminating some negative examples, similar to negative sampling of
[19]. To experiment with the unsupervised target, we set z to be bow vectors of adjacent regions
on the left and right, while only retaining the 30K most frequent words with vocabulary control;
on sentiment classification, function words were removed, and on topic classification, numbers and
stop-words provided by [16] were removed. Note that these words were removed from (and only
from) the target vocabulary. To produce the partially-supervised target, we first trained the supervised CNN models with 1000 neurons and applied the trained convolution layer to unlabeled data
to generate 1000-dimensional vectors for each region. The rest of implementation follows [11]; i.e.,
supervised models minimized square loss with L2 regularization and optional dropout [9]; σ and
σ^(U) were the rectifier; response normalization was performed; optimization was done by SGD.
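As a rough illustration of this weighted loss, the sketch below computes Σ_{i,j} α_{i,j}(z_i[j] − p_i[j])² over a batch of target rows Z and predictions P; the particular weight constants and the random dropping of absent-word terms are assumptions standing in for the paper's unspecified settings.

```python
import numpy as np

def weighted_square_loss(Z, P, pos_w=1.0, neg_w=0.1, neg_keep=0.2, seed=0):
    """sum_{i,j} alpha_{i,j} (z_i[j] - p_i[j])^2 over target rows Z and
    predictions P. alpha up-weights present target words (z = 1), down-weights
    absent ones, and randomly drops most absent-word terms, loosely mimicking
    negative sampling; all constants here are assumptions."""
    rng = np.random.default_rng(seed)
    keep = rng.random(Z.shape) < neg_keep
    alpha = np.where(Z > 0, pos_w, neg_w * keep)
    return float(np.sum(alpha * (Z - P) ** 2))
```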
Model selection On all the tested methods, tuning of meta-parameters was done by testing the
models on the held-out portion of the training data, and then the models were re-trained with the
chosen meta-parameters using the entire training data.
3.1 Performance results
Overview After confirming the effectiveness of our new models in comparison with the supervised
CNN, we report the performances of [13]'s CNN, which relies on word vectors pre-trained with a
very large corpus (Table 3). Besides comparing the performance of approaches as a whole, it is
also of interest to compare the usefulness of what was learned from unlabeled data; therefore, we
show how it performs if we integrate the word vectors into our base model one-hot CNNs (Figure
3). In these experiments we also test word vectors trained by word2vec [19] on our unlabeled data
(Figure 4). We then compare our models with two standard semi-supervised methods, transductive
SVM (TSVM) [10] and co-training (Table 3), and with the previous best results in the literature
(Tables 4?6). In all comparisons, our models outperform the others. In particular, our region tvembeddings are shown to be more compact and effective than region embeddings obtained by simple
manipulation of word embeddings, which supports our approach of using region embedding instead
of word embedding.
names in Table 3 | X1: rℓ^(U)(x)              | X2: target of U training
unsup-tv.        | bow vector                 | bow vector
parsup-tv.       | bow vector                 | output of supervised embedding
unsup3-tv.       | bag-of-{1,2,3}-gram vector | bow vector

Table 2: Tested tv-embeddings.
   | model                                 | IMDB   | Elec   | RCV1
 1 | linear SVM with 1-3grams [11]         | 10.14  | 9.16   | 10.68
 2 | linear TSVM with 1-3grams             | 9.99   | 16.41  | 10.77
 3 | [13]'s CNN                            | 9.17   | 8.03   | 10.44
 4 | One-hot CNN (simple) [11]             | 8.39   | 7.64   | 9.17
 5 | One-hot CNN (simple) co-training best | (8.06) | (7.63) | (8.73)
 6 | Our CNN, unsup-tv., 100-dim           | 7.12   | 6.96   | 8.10
 7 | Our CNN, unsup-tv., 200-dim           | 6.81   | 6.69   | 7.97
 8 | Our CNN, parsup-tv., 100-dim          | 7.12   | 6.58   | 8.19
 9 | Our CNN, parsup-tv., 200-dim          | 7.13   | 6.57   | 7.99
10 | Our CNN, unsup3-tv., 100-dim          | 7.05   | 6.66   | 8.13
11 | Our CNN, unsup3-tv., 200-dim          | 6.96   | 6.84   | 8.02
12 | Our CNN, all three, 100×3             | 6.51   | 6.27   | 7.71

Table 3: Error rates (%). For comparison, all the CNN models were constrained to have 1000 neurons. The
parentheses around the error rates indicate that co-training meta-parameters were tuned on test data.
Our CNN with tv-embeddings We tested three types of tv-embedding as summarized in Table
2. The first thing to note is that all of our CNNs (Table 3, rows 6–12) outperform their supervised
counterpart in row 4. This confirms the effectiveness of the framework we propose. In Table 3, for
meaningful comparison, all the CNNs are constrained to have exactly one convolution layer (except
for [13]?s CNN) with 1000 neurons. The best-performing supervised CNNs within these constraints
(row 4) are: seq-CNN (region size 3) on IMDB and Elec and bow-CNN (region size 20) on RCV1¹.
They also served as our base models B (with region size parameterized on IMDB/Elec). More
complex supervised CNNs from [11] will be reviewed later. On sentiment classification (IMDB
and Elec), the region size chosen by model selection for our models was 5, larger than 3 for the
supervised CNN. This indicates that unlabeled data enabled effective use of larger regions which are
more predictive but might suffer from data sparsity in supervised settings.
"unsup3-tv." (rows 10–11) uses a bag-of-n-gram vector to initially represent each region, thus, retains word order partially within the region. When used individually, unsup3-tv. did not outperform
the other tv-embeddings, which use bow instead (rows 6–9). But we found that it contributed to
error reduction when combined with the others (not shown in the table). This implies that it learned
from unlabeled data predictive information that the other two embeddings missed. The best performances (row 12) were obtained by using all the three types of tv-embeddings at once according to
(6). By doing so, the error rates were improved by nearly 1.9% (IMDB) and 1.4% (Elec and RCV1)
compared with the supervised CNN (row 4), as a result of the three tv-embeddings with different
strengths complementing each other.
¹ The error rate on RCV1 in row 4 slightly differs from [11] because here we did not use the stopword list.
[Figure 3: GN word vectors integrated into our base models. Better than [13]'s CNN (Table 3, row 3). Error rates (%): IMDB: concat 8.31, avg 7.83; Elec: concat 7.37, avg 7.24; RCV1: concat 8.70, avg 8.62.]
[Figure 4: Region tv-embeddings vs. word2vec word embeddings, trained on our unlabeled data. x-axis: dimensionality of the additional input to the supervised region embedding ("r:": region, "w:": word); curves compare supervised, w: concat, w: average, r: unsup-tv, and r: 3 tv-embed.]
[13]'s CNN It was shown in [13] that CNN that uses the Google News word vectors as input is
competitive on a number of sentence classification tasks. These vectors (300-dimensional) were
trained by the authors of word2vec [19] on a very large Google News (GN) corpus (100 billion
words; 500–5K times larger than our unlabeled data). [13] argued that these vectors can be useful
for various tasks, serving as "universal feature extractors". We tested [13]'s CNN, which is equipped
with three convolution layers with different region sizes (3, 4, and 5) and max-pooling, using the
GN vectors as input. Although [13] used only 100 neurons for each layer, we changed it to 400,
300, and 300 to match the other models, which use 1000 neurons. Our models clearly outperform
these models (Table 3, row 3) with relatively large differences.
Comparison of embeddings Besides comparing the performance of the approaches as a whole,
it is also of interest to compare the usefulness of what was learned from unlabeled data. For this
purpose, we experimented with integration of a word embedding into our base models using two
methods; one takes the concatenation, and the other takes the average, of word vectors for the words
in the region. These provide additional input to the supervised embedding of regions in place of
u` (x) in (5). That is, for comparison, we produce a region embedding from a word embedding
to replace a region tv-embedding. We show the results with two types of word embeddings: the
GN word embedding above (Figure 3), and word embeddings that we trained with the word2vec
software on our unlabeled data, i.e., the same data as used for tv-embedding learning and all others
(Figure 4). Note that Figure 4 plots error rates in relation to the dimensionality of the produced
additional input; a smaller dimensionality has an advantage of faster training/prediction.
On the results, first, the region tv-embedding is more useful for these tasks than the tested word
embeddings since the models with a tv-embedding clearly outperform all the models with a word
embedding. Word vector concatenations of much higher dimensionality than those shown in the
figure still underperformed 100-dim region tv-embedding. Second, since our region tv-embedding
takes the form of σ(W · rℓ(x) + b) with rℓ(x) being a bow vector, the columns of W correspond
to words, and therefore, W · rℓ(x) is the sum of W's columns whose corresponding words are
in the ℓ-th region. Based on that, one might wonder why we should not simply use the sum or
average of word vectors obtained by an existing tool such as word2vec instead. The suboptimal
performances of "w: average" (Figure 4) tell us that this is a bad idea. We attribute it to the fact that
region embeddings learn predictiveness of co-presence and absence of words in a region; a region
embedding can be more expressive than averaging of word vectors. Thus, an effective and compact
region embedding cannot be trivially obtained from a word embedding. In particular, effectiveness
of the combination of three tv-embeddings ("r: 3 tv-embed." in Figure 4) stands out.
Additionally, our mechanism of using information from unlabeled data is more effective than [13]'s
CNN since our CNNs with GN (Figure 3) outperform [13]'s CNNs with GN (Table 3, row 3). This
is because in our model, one-hot vectors (the original features) compensate for potential information
loss in the embedding learned from unlabeled data. This, as well as region-vs-word embedding, is a
major difference between our model and [13]'s model.
Standard semi-supervised methods Many of the standard semi-supervised methods are not applicable to CNN as they require bow vectors as input. We tested TSVM with bag-of-{1,2,3}-gram
vectors using SVMlight. TSVM underperformed the supervised SVM² on two of the three datasets

² Note that for feasibility, we only used the 30K most frequent n-grams in the TSVM experiments, thus, showing the SVM results also with 30K vocabulary for comparison, though on some datasets SVM performance can be improved by use of all the n-grams (e.g., 5 million n-grams on IMDB) [11]. This is because the computational cost of TSVM (single-core) turned out to be high, taking several days even with 30K vocabulary.
model                     | error | extra resource
NB-LM 1-3grams [18]       | 8.13  | -
[11]'s best CNN           | 7.67  | -
Paragraph vectors [14]    | 7.46  | Unlab. data
Ensemble of 3 models [18] | 7.43  | Ens.+unlab.
Our best                  | 6.51  | Unlab. data

Table 4: IMDB: previous error rates (%).

model                     | error | extra resource
SVM 1-3grams [11]         | 8.71  | -
dense NN 1-3grams [11]    | 8.48  | -
NB-LM 1-3grams [11]       | 8.11  | -
[11]'s best CNN           | 7.14  | -
Our best                  | 6.27  | Unlab. data

Table 5: Elec: previous error rates (%).

models                     | micro-F | macro-F | extra resource
SVM [16]                   | 81.6    | 60.7    | -
bow-CNN [11]               | 84.0    | 64.8    | -
bow-CNN w/ three tv-embed. | 85.7    | 67.1    | Unlabeled data

Table 6: RCV1 micro- and macro-averaged F on the multi-label task (103 topics) with the LYRL04 split.
(Table 3, rows 1–2). Since co-training is a meta-learner, it can be used with CNN. Random split of
vocabulary and split into the first and last half of each document were tested. To reduce the computational burden, we report the best (and unrealistic) co-training performances obtained by optimizing
the meta-parameters including when to stop on the test data. Even with this unfair advantage to co-training, co-training (Table 3, row 5) clearly underperformed our models. The results demonstrate
the difficulty of effectively using unlabeled data on these tasks, given that the size of the labeled data
is relatively large.
Comparison with the previous best results We compare our models with the previous best results on IMDB (Table 4). Our best model with three tv-embeddings outperforms the previous best
results by nearly 0.9%. All of our models with a single tv-embedding (Table 3, rows 6–11) also perform
better than the previous results. Since Elec is a relatively new dataset, we are not aware of any previous semi-supervised results. Our performance is better than [11]'s best supervised CNN, which has
a complex network architecture of three convolution-pooling pairs in parallel (Table 5). To compare
with the benchmark results in [16], we tested our model on the multi-label task with the LYRL04
split [16] on RCV1, in which more than one out of 103 categories can be assigned to each document.
Our model outperforms the best SVM of [16] and the best supervised CNN of [11] (Table 6).
4 Conclusion
This paper proposed a new semi-supervised CNN framework for text categorization that learns embeddings of text regions with unlabeled data and then labeled data. As discussed in Section 1.1,
a region embedding is trained to learn the predictiveness of co-presence and absence of words in
a region. In contrast, a word embedding is trained to only represent individual words in isolation.
Thus, a region embedding can be more expressive than simple averaging of word vectors in spite
of their seeming similarity. Our comparison of embeddings confirmed its advantage; our region tvembeddings, which are trained specifically for the task of interest, are more effective than the tested
word embeddings. Using our new models, we were able to achieve higher performances than the
previous studies on sentiment classification and topic classification.
Appendix A   Theory of tv-embedding
Suppose that we observe two views (X1, X2) ∈ X1 × X2 of the input, and a target label Y ∈ Y of
interest, where X1 and X2 are finite discrete sets.
Assumption 1. Assume that there exists a set of hidden states H such that X1 , X2 , and Y are
conditionally independent given h in H, and that the rank of matrix [P (X1 , X2 )] is |H|.
Theorem 1. Consider a tv-embedding f1 of X1 w.r.t. X2 . Under Assumption 1, there exists
a function q1 such that P (Y |X1 ) = q1 (f1 (X1 ), Y ). Further consider a tv-embedding f2 of
X2 w.r.t. X1 . Then, under Assumption 1, there exists a function q such that P (Y |X1 , X2 ) =
q(f1 (X1 ), f2 (X2 ), Y ).
The proof can be found in the supplementary material.
References
[1] Rie K. Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, 2005.
[2] Rie K. Ando and Tong Zhang. Two-view feature generation model for semi-supervised learning. In Proceedings of ICML, 2007.
[3] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
[4] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, 2008.
[5] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
[6] Paramveer S. Dhillon, Dean Foster, and Lyle Ungar. Multi-view learning of word embeddings via CCA. In Proceedings of NIPS, 2011.
[7] Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, and Li Deng. Modeling interestingness with deep neural networks. In Proceedings of EMNLP, 2014.
[8] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of ICML, 2011.
[9] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[10] Thorsten Joachims. Transductive inference for text classification using support vector machines. In Proceedings of ICML, 1999.
[11] Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of NAACL HLT, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modeling sentences. In Proceedings of ACL, pages 655–665, 2014.
[13] Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751, 2014.
[14] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proceedings of ICML, 2014.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
[17] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of ACL, 2011.
[18] Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv:1412.5335v5 (4 Feb 2015 version), 2014.
[19] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, 2013.
[20] Andriy Mnih and Geoffrey E. Hinton. A scalable hierarchical distributed language model. In NIPS, 2008.
[21] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of CIKM, 2014.
[22] Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of ACL, pages 1555–1565, 2014.
[23] Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394, 2010.
[24] Jason Weston, Sumit Chopra, and Keith Adams. #tagspace: Semantic embeddings from hashtags. In Proceedings of EMNLP, pages 1822–1827, 2014.
[25] Liheng Xu, Kang Liu, Siwei Lai, and Jun Zhao. Product feature mining: Semantic clues versus syntactic constituents. In Proceedings of ACL, pages 336–346, 2014.
[26] Puyang Xu and Ruhi Sarikaya. Convolutional neural network based triangular CRF for joint intent detection and slot filling. In ASRU, 2013.
| 5849 |@word multitask:1 cnn:62 version:1 eliminating:2 confirms:2 seek:2 simplifying:1 pavel:1 q1:4 sgd:1 accommodate:1 reduction:1 electronics:1 liu:2 tuned:1 document:11 outperforms:2 existing:1 com:2 comparing:2 gmail:1 ronan:2 confirming:1 enables:1 christian:1 remove:1 plot:8 update:1 v:2 half:1 leaf:1 fewer:1 generative:1 complementing:1 accordingly:1 concat:2 patric:1 core:1 consulting:1 location:1 simpler:1 zhang:5 along:1 direct:4 baidu:1 yelong:1 consists:2 dan:1 paragraph:1 ofword:1 tagging:1 kuksa:1 roughly:1 love:6 multi:6 salakhutdinov:1 ming:1 equipped:2 provided:1 moreover:2 what:4 q2:2 developed:1 unified:1 finding:3 nj:1 every:1 fun:6 exactly:1 rm:2 classifier:2 control:2 unit:5 appear:1 interestingness:1 positive:6 encoding:2 lev:1 might:3 chose:1 blunsom:1 acl:5 china:1 co:11 limited:1 averaged:1 lecun:1 testing:1 lyle:1 union:1 differs:2 handy:1 universal:1 word:78 pre:1 spite:1 cannot:3 unlabeled:40 undesirable:1 selection:2 faulty:1 context:15 applying:1 risk:1 nb:2 restriction:1 optimize:1 map:1 dean:2 center:1 phil:1 primitive:1 attention:1 go:2 straightforward:1 focused:1 shen:1 tomas:3 formalized:1 simplicity:1 assigns:1 amazon:1 unlab:4 his:1 enabled:1 embedding:74 variation:1 updated:1 target:17 suppose:1 us:6 designing:1 recognition:1 labeled:11 yoon:1 region:94 connected:1 news:3 ranzato:1 mesnil:1 contamination:1 removed:3 mentioned:1 rose:1 ideally:1 stopword:1 trained:17 predictive:6 purely:1 unsup:3 imdb:14 learner:1 completely:1 f2:2 joint:1 represented:3 various:1 surrounding:2 stacked:1 train:5 elec:13 effective:12 tell:1 hyper:1 outcome:1 jianfeng:2 kalchbrenner:1 whose:1 larger:4 supplementary:2 ducharme:1 kai:1 encoder:1 triangular:1 g1:5 syntactic:3 transductive:3 itself:2 final:5 superscript:1 sequence:1 advantage:4 propose:4 product:7 adaptation:2 frequent:2 macro:2 relevant:6 turned:2 qin:1 bow:18 achieve:2 pantel:1 description:1 constituent:1 billion:1 sutskever:2 produce:7 categorization:12 perfect:1 adam:1 yiming:1 hashtags:1 help:1 andrew:2 stat:1 keith:1 edward:1 ejean:1 predicted:1 indicate:2 come:1 met:2 implies:1 attribute:1 cnns:10 material:2 everything:1 argued:1 require:1 ungar:1 f1:9 fix:1 really:1 preliminary:1 dent:1 pham:1 predictiveness:2 around:1 predict:6 lm:2 tor:1 major:1 purpose:3 ruslan:1 daly:1 integrates:1 applicable:1 bag:6 label:12 individually:1 create:1 tool:1 aso:1 weighted:1 clearly:3 zhou:1 categorizing:1 focus:1 joachim:1 improvement:1 potts:1 rank:1 indicates:1 mainly:1 contrast:2 kim:1 sense:2 dim:12 inference:1 twitter:1 jauvin:1 nn:1 typically:3 entire:5 integrated:1 initially:1 hidden:2 relation:4 originating:1 classification:21 html:1 pascal:1 retaining:1 noun:1 integration:3 constrained:2 once:2 aware:1 having:3 ng:1 sampling:1 svm2:1 koray:1 represents:3 unsupervised:3 nearly:2 icml:5 filling:1 minimized:3 report:3 recommend:2 others:3 micro:2 yoshua:5 randomly:1 preserve:2 individual:2 intended:3 jeffrey:1 ando:2 detection:1 interest:9 mining:1 mnih:1 held:1 word2vec:5 indexed:1 loosely:1 divide:1 initialized:1 re:1 guidance:3 theoretical:2 column:2 modeling:4 gn:6 cover:1 disadvantage:1 retains:1 phrase:2 cost:2 usefulness:2 krizhevsky:1 wonder:1 johnson:2 sumit:1 too:1 gr:2 combined:1 retain:1 probabilistic:1 michael:2 ilya:2 huang:1 emnlp:3 zhao:1 li:3 converted:1 potential:2 stride:2 summarized:2 seeming:1 inc:1 collobert:2 later:4 view:15 performed:1 jason:3 doing:1 portion:1 competitive:1 option:1 tsvm:6 parallel:1 square:4 greg:1 convolutional:8 accuracy:1 ensemble:2 correspond:2 identify:1 
vincent:1 kavukcuoglu:1 produced:2 served:1 confirmed:1 detector:1 siwei:1 hlt:1 definition:4 obvious:1 conveys:1 naturally:1 associated:3 proof:1 stop:3 dataset:1 view1:1 dimensionality:4 feed:1 exceeded:1 originally:1 higher:3 supervised:50 day:1 response:1 maximally:1 improved:2 rie:4 wei:1 done:5 though:3 correlation:1 replacing:1 expressive:2 nonlinear:1 christopher:1 google:2 xiaodong:2 usa:2 building:2 concept:11 facilitate:1 remedy:1 name:1 counterpart:1 regularization:1 assigned:1 xavier:1 naacl:1 dhillon:1 paramveer:1 semantic:4 conditionally:1 adjacent:4 essence:3 noted:1 crf:1 demonstrate:1 performs:1 image:4 wise:1 nih:1 overview:1 egoire:2 million:1 belong:1 discussed:1 he:2 vec:1 tuning:2 trivially:1 language:5 similarity:1 base:6 patrick:1 feb:1 own:1 recent:1 showed:1 optimizing:1 irrelevant:1 discard:1 manipulation:2 meta:6 preserving:1 captured:1 additional:11 deng:1 aggregated:1 corrado:1 semi:16 ii:2 multiple:2 desirable:1 rj:1 karlen:1 match:1 faster:1 compensate:1 retrieval:1 lai:1 parenthesis:1 feasibility:1 prediction:6 scalable:1 essentially:2 rutgers:2 arxiv:2 represent:3 normalization:1 penalize:1 fine:1 addressed:1 source:1 extra:2 rest:1 unlike:1 pooling:8 thing:1 effectiveness:3 call:2 chopra:1 presence:7 ideal:4 svmlight:1 split:5 embeddings:38 easy:2 bengio:5 yang:2 isolation:3 zi:1 architecture:2 perfectly:1 suboptimal:1 andriy:1 reduce:1 idea:3 puyang:1 haffner:1 motivated:1 ultimate:1 sentiment:15 suffer:1 peter:1 deep:3 useful:6 involve:1 tune:1 category:1 generate:1 marchine:2 outperform:7 nsf:2 canonical:1 cikm:1 disjoint:3 serving:1 discrete:1 nal:1 worrying:1 merely:1 convert:1 beijing:1 sum:2 parameterized:1 tzhang:1 place:1 almost:1 yann:1 seq:3 missed:2 appendix:2 interleaved:1 layer:40 dropout:1 cca:1 followed:1 distinguish:2 nan:1 correspondence:1 fan:1 strength:1 constraint:1 alex:1 x2:25 software:1 generates:3 aspect:1 speed:2 argument:1 nitish:1 rcv1:14 performing:1 mikolov:3 relatively:3 transferred:1 tv:65 according:2 piscataway:1 designated:1 combination:1 beneficial:1 across:1 slightly:1 smaller:1 joseph:1 modification:1 quoc:1 gamon:1 invariant:1 thorsten:1 resource:1 bing:1 mechanism:1 merit:1 available:2 apply:1 observe:1 hierarchical:1 indirectly:1 save:1 original:1 top:8 tony:1 embodies:2 eon:1 ting:1 question:1 v5:1 responds:2 guessing:1 antoine:1 gradient:1 distance:1 entity:2 concatenation:4 sensible:1 topic:8 code:1 besides:2 relationship:1 balance:1 negative:6 intent:1 design:1 implementation:2 contributed:1 perform:1 upper:1 conversion:1 convolution:24 neuron:7 datasets:5 benchmark:3 acknowledge:1 finite:1 supporting:1 optional:1 extended:1 hinton:2 consulted:1 download:1 compositionality:1 david:1 complement:1 namely:1 required:1 pair:1 sentence:7 componentwise:1 learned:8 kang:1 nip:3 address:1 able:1 sparsity:2 adjective:1 gaining:1 max:3 including:1 hot:14 unrealistic:1 difficulty:1 rely:1 natural:2 predicting:3 indicator:1 scheme:1 improve:1 movie:2 axis:1 jun:1 auto:1 extract:1 embodied:1 raymond:1 text:38 review:5 literature:1 l2:1 embedded:3 fully:1 loss:5 generation:1 geoffrey:2 versus:1 integrate:2 foster:1 tiny:1 share:1 pi:1 bordes:1 row:16 changed:1 maas:1 last:1 bias:2 formal:1 taking:1 sparse:1 distributed:3 dimension:1 vocabulary:10 stand:3 gram:14 computes:2 preventing:1 forward:1 author:1 clue:2 avg:1 collection:1 correlate:1 approximate:1 compact:3 corpus:4 discriminative:1 search:1 latent:1 why:1 reviewed:2 reality:2 underperformed:4 learn:7 transfer:1 robust:1 table:25 additionally:1 improving:1 bottou:2 
complex:2 domain:2 official:1 marc:1 did:3 dense:2 aurelio:1 motivation:1 noise:1 whole:2 turian:1 x1:28 xu:2 en:1 fashion:1 ny:1 tong:5 aid:1 sub:1 unfair:1 cotraining:1 extractor:1 learns:7 tang:1 theorem:1 embed:4 bad:1 rectifier:1 specific:1 showing:1 list:1 experimented:1 svm:7 glorot:1 exists:5 burden:1 sequential:1 effectively:2 chen:1 simply:1 gao:2 desire:2 partially:6 applies:1 collectively:1 loses:1 relies:1 lewis:1 weston:3 grefenstette:1 slot:1 sized:2 goal:2 asru:1 shared:2 absence:5 feasible:1 hard:1 aided:2 replace:1 specifically:3 typical:1 except:2 determined:1 acting:4 averaging:2 denoising:1 called:1 experimental:1 meaningful:1 highdimensional:1 internal:2 support:2 avoiding:1 tested:9 scratch:1 srivastava:1 |
5,357 | 585 | Fast, Robust Adaptive Control by Learning only
Forward Models
Andrew W. Moore
MIT Artificial Intelligence Laboratory
545 Technology Square, Cambridge, MA 02139
awmGai.JD.it.edu
Abstract
A large class of motor control tasks requires that on each cycle the controller is told its current state and must choose an action to achieve a
specified, state-dependent, goal behaviour. This paper argues that the
optimization of learning rate, the number of experimental control decisions before adequate performance is obtained, and robustness is of prime
importance-if necessary at the expense of computation per control cycle and memory requirement. This is motivated by the observation that
a robot which requires two thousand learning steps to achieve adequate
performance, or a robot which occasionally gets stuck while learning, will
always be undesirable, whereas moderate computational expense can be
accommodated by increasingly powerful computer hardware. It is not unreasonable to assume the existence of inexpensive 100 Mflop controllers
within a few years and so even processes with control cycles in the low
tens of milliseconds will have millions of machine instructions in which to
make their decisions. This paper outlines a learning control scheme which
aims to make effective use of such computational power.
1 MEMORY BASED LEARNING
Memory-based learning is an approach applicable to both classification and function learning in which all experiences presented to the learning box are explicitly remembered. The memory, Mem, is a set of input-output pairs, Mem =
{(x1, y1), (x2, y2), ..., (xk, yk)}. When a prediction is required of the output of a
novel input Xquery, the memory is searched to obtain experiences with inputs close to
Xquery. These local neighbours are used to determine a locally consistent output for
the query. Three memory-based techniques, Nearest Neighbour, Kernel Regression,
and Local Weighted Regression, are shown in the accompanying figure.
[Figure: the three memory-based techniques, each illustrated on a scatter of input-output experiences.]
Nearest Neighbour: y_predict(x_query) = y_i where i minimizes {(x_i − x_query)² : (x_i, y_i) ∈ Mem}. There is a general introduction in [5], some recent applications in [11], and recent robot learning work in [9, 3].
Kernel Regression: Also known as Shepard's interpolation or Local Weighted Averages. y_predict(x_query) = (Σ w_i y_i)/(Σ w_i) where w_i = exp(−(x_i − x_query)²/K_width²). [6] describes some variants.
Local Weighted Regression: finds the linear mapping y = Ax to minimize the sum of weighted squares of residuals Σ w_i (y_i − A x_i)²; y_predict is then A x_query. LWR was introduced for robot learning control by [1].
An inverse model maps State x Behaviour ~ Action (8 x b ~ a). Behaviour is
the output of the system, typically the next state or time derivative of state. The
learned inverse model provides a conceptually simple controller:
1. Observe 8 and b goa1 .
2. a : - inverse-model(s, bgoal)
3. Perform action a and observe actual behaviour bactual.
4. Update MEM with (8, b actual - a): If we are ever again in state 8 and
require behaviour bactual we should apply action a.
Memory-based versions of this simple algorithm have used nearest neighbour [9]
and LWR [3]. bgoal is the goal behaviour: depending on the task it may be fixed
or it may vary between control cycles, perhaps as a function of state or time. The
algorithm provides aggressive learning: during repeated attempts to achieve the
same goal behaviour, the action which is applied is not an incrementally adjusted
version of the previous action, but is instead the action which the memory and the
memory-based learner predicts will directly achieve the required behaviour. If the
function is locally linear then the sequence of actions which are chosen are closely
related to the Secant method [4] for numerically finding the zero of a function by
bisecting the line between the closest approximations that bracket the y = 0 axis. If
learning begins with an initial error E0 in the action choice, and we wish to reduce
this error to E0/K, the number of learning steps is O(log log K): subject to benign
conditions, the learner jumps to actions close to the ideal action very quickly.
A common objection to learning the inverse model is that it may be ill-defined. For
a memory-based method the problems are particularly serious because of its update
rule. It updates the inverse model near bactual and therefore in those cases in which
bgoal and bactual differ greatly, the mapping near bgoal may not change. As a result,
Fast, Robust Adaptive Control by Learning only Forward Models
subsequent cycles will make identical mistakes. [10] discusses this further.
3 A MEMORY-BASED FORWARD MODEL
One fix for the problem of inverses becoming stuck is the addition of random noise
to actions prior to their application. However, this can result in a large proportion
of control cycles being wasted on experiments which the robot should have been able
to predict as valueless, defeating the initial aim of learning as quickly as possible.
An alternative technique using multilayer neural nets has been to learn a forward
model, which is necessarily well defined, to train a partial inverse. Updates to the
forward model are obtained by standard supervised training, but updates to the
inverse model are more sophisticated. The local Jacobian of the forward model
is obtained and this value is used to drive an incremental change to the inverse
model [8]. In conjunction with memory-based methods such an approach has the
disadvantage that incremental changes to the inverse model loses the one-shot learning behaviour, and introduces the danger of becoming trapped in a local minimum.
Instead, this investigation only relies on learning the forward model. Then the
inverse model is implicitly obtained from it by online numerical inversion instead of
direct lookup. This is illustrated by the following algorithm:
1. Observe s and bgoal.
2. Perform numerical inversion: search among a series of candidate actions a1, a2, ..., ak:
   b_predict_1 := forward-model(s, a1, MEM)
   b_predict_2 := forward-model(s, a2, MEM)
   ...
   b_predict_k := forward-model(s, ak, MEM)
   until TIME-OUT or b_predict_k = bgoal
3. If TIME-OUT then perform experimental action else perform ak.
4. Update MEM with (s, ak → bactual)
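The sketch below renders this inversion loop in numpy, with kernel regression as the forward model and pure global random search standing in for the paper's combined global search plus local hill climbing; the candidate count and tolerance are invented constants.

```python
import numpy as np

def forward_model(s, a, mem, k_width=1.0):
    # Kernel-regression prediction of behaviour from (state, action);
    # mem is a list of (s_i, a_i, b_i) experiences.
    X = np.array([np.concatenate([si, ai]) for si, ai, bi in mem])
    B = np.array([bi for si, ai, bi in mem])
    w = np.exp(-np.sum((X - np.concatenate([s, a])) ** 2, axis=1) / k_width ** 2)
    return w @ B / np.sum(w)

def invert(s, b_goal, mem, sample_action, n_candidates=200, tol=1e-2):
    # Global random search over candidate actions; returns the best action
    # found and whether its predicted behaviour came within tolerance.
    best_a, best_err = None, np.inf
    for _ in range(n_candidates):
        a = sample_action()
        err = float(np.sum((forward_model(s, a, mem) - b_goal) ** 2))
        if err < best_err:
            best_a, best_err = a, err
        if best_err < tol:                   # b_predict ~= b_goal
            break
    return best_a, best_err < tol
```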
A nice feature of this method is the absence of a preliminary training phase such
as random flailing or feedback control. A variety of search techniques for numerical
inversion can be applied. Global random search avoids local minima but is very slow
for obtaining accurate actions, hill climbing is a robust local procedure and more
aggressive procedures such as Newton's method can use partial derivative estimates
from the forward model to make large second-order steps. The implementation used
for subsequent results had a combination of global search and local hill climbing.
In very high speed applications in which there is only time to make a small number
of forward model predictions, it is not difficult to regain much of the speed advantage
of directly using an inverse model by commencing the action search with ao as the
action predicted by a learned inverse model.
4 OTHER CONSIDERATIONS
Actions selected by a forward memory-based learner can be expected to converge
very quickly to the correct action in benign cases, and will not become stuck in difficult cases, provided that the memory based representation can fit the true forward
573
574
Moore
model. This proviso is weak compared with incremental learning control techniques
which typically require stronger prior assumptions about the environment, such as
near-linearity, or that an iterative function approximation procedure will avoid local
minima. One-shot methods have an advantage in terms of number of control cycles before adequate performance whereas incremental methods have the advantage
of only requiring trivial amounts of computation per cycle. However, the simple
memory-based formalism described so far suffers from two major problems which
some forms of adaptive and neural controllers may avoid:
• Brittle behaviour in the presence of outliers.
• Poor resistance to non-stationary environments.
Many incremental methods implicitly forget all experiences beyond a certain horizon. For example, in the delta rule ΔW_ij = η(y_i^actual − y_i^predict) x_j, the age beyond
which experiences have a negligible effect is determined by the learning rate η. As
a result, the detrimental effect of misleading experiences is present for only a fixed
amount of time and then fades away¹. In contrast, memory-based methods remember everything for ever. Fortunately, two statistical techniques, robust regression
and cross-validation, offer extensions to the numerical inversion method in which
we can have our cake and eat it too.
5 USING ROBUST REGRESSION
We can judge the quality of each experience (x_i, y_i) ∈ Mem by how well it is
predicted by the rest of the experiences. A simple measure of the ith error is the
cross validation error, in which the experience is first removed from the memory
before prediction: e_i^cve = | y_i − Predict(x_i, Mem − {(x_i, y_i)}) |. With the memory-based formalism, in which all work takes place at prediction time, it is no more
expensive to predict a value with one datapoint removed than with it included.
Once we have the measure e_i^cve of the quality of each experience, we can decide
if it is worth keeping. Robust statistics [7] offers a wide range of methods: this
implementation uses the Median Absolute Deviation (MAD) procedure.
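Leave-one-out errors and MAD-based rejection can be written in a few lines; `predict` can be any memory-based learner of the form sketched earlier, and the k·MAD threshold with k = 3 is a common convention rather than the paper's exact rule.

```python
import numpy as np

def cross_val_errors(X, Y, predict):
    # e_i^cve: predict experience i from all the others; cheap for
    # memory-based learners since there is no training pass.
    n = len(X)
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        errs[i] = np.sum(np.abs(predict(X[keep], Y[keep], X[i]) - Y[i]))
    return errs

def mad_outliers(errs, k=3.0):
    # Flag experiences whose error exceeds median + k * MAD.
    med = np.median(errs)
    mad = np.median(np.abs(errs - med))
    return errs > med + k * mad
```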
6 FULL CROSS VALIDATION

The value e^total = Σ e_i^cve, summed over all "good" experiences, provides a measure
of how well the current representation fits the data. By optimizing this value with
respect to internal learner parameters, such as the width of the local weighting
function K_width used by kernel regression and LWR, the internal parameters can be
found automatically. Another important set of parameters that can be optimized is
the relative scaling of each input variable: an example of this procedure applied to a
two-joint arm task may be found in Reference [2]. A useful feature of this procedure
is its quick discovery (and subsequent ignoring) of irrelevant input variables.
Cross-validation can also be used to selectively forget old inaccurate experiences
caused by a slowly drifting or suddenly changing environment. We have already
seen that adaptive control algorithms such as the LMS rule can avoid such problems
because the effects of experiences decay with time. Memory based methods can also
forget things according to a forgetfulness parameter¹: all observations are weighted by not only the distance to x_query but also by their age:

w_i = exp( −(x_i − x_query)² / K_width² − (n − i) / K_recall )    (1)

where we assume the ordering of the experiences' indices i is temporal, with experience n the most recent.

¹ This also has disadvantages: persistence of excitation is required and multiple tasks can often require relearning if they have not been practised recently.
We find the K recall that minimizes the recen t weighted average cross validation error
L:?=o efve exp(
i)/,), where, is a human assigned 'meta-forgetfulness' constant, reflecting how many experiences the learner would need in order to benefit
from observation of an environmental change. It should be noted that, is a substantially less task dependent prescription of how far back to forget than would be
a human specified Krecall. Some initial tests of this technique are included among
the experiments of Section 8.
-en -
Architecture selection is another use of cross validation. Given a family of learners,
the member with the least cross validation error is used for subsequent predictions.
7
COMPUTATIONAL CONSIDERATIONS
Unless the real time between control cycles is longer than a few seconds, cross validation is too expensive to perform after every cycle. Instead it can be performed as
a separate parallel process, updating the best parameter values and removing outliers every few real control cycles. The usefulness of breaking a learning control task
into an online realtime processes and offline mental simulation was noted by [12].
Initially, the small number of experiences means that cross validation optimizes
the parameters very frequently, but the time between updates increases with the
memory size. The decreasing frequency of cross validation updates is little cause
for concern, because as time progresses, the estimated optimal parameter values are
expected to become decreasingly variable.
If there is no time to make more than one memory based query per cycle, then
memory based learning can nevertheless proceed by pushing even more of the computation into the offline component. If the offline process can identify meaningful
states relevant to the task, then it can compute, for each of them, what the optimal
action would be. The resulting state-action pairs are then used as a policy. The
online process then need only look up the recommended action in the policy, apply
it and then insert (s, a, b) into the memory.
8
COMPARATIVE TESTS
The ultimate goal of the investigation is to produce a learning control algorithm
which can learn to control a fairly wide family of different tasks. Some basic, very
different, tasks have been used for the initial tests.
The HARD task, graphed in Figure 1, is a one-dimensional direct relationship between
action and behaviour which is both non-monotonic and discontinuous. The VARIER
task (Figure 2) is a sinusoidal relation for which the phase continuously drifts, and
occasionally alters catastrophically.
LINEAR is a noisy linear relation between 4-d states, 4-d actions and 4-d behaviours.
For these first three tasks, the goal behaviour is selected randomly on each control
cycle. ARM (Figure 3) is a simulated noisy dynamic two-joint arm acting under
gravity in which state is perceived in cartesian coordinates and actions are produced
575
576
Moore
in joint-torque coordinates. Its task is to follow the circular trajectory. BILLIARDS is
a simulation of the real billiards robot described shortly in which 5% of experiences
are entirely random outliers.
-.-----------------,
.. ..
. ..
G1I ?
G1I ?
Goal Trajectory
~
~
o
o
J
J
01114"'.'.
Action
Figure 1: The HARD relation.
Figure 2: VARIER relation.
Figure 3: The ARM task.
The following learning methods were tested: nearest neighbour, kernel regression
and LWR, all searching the forward model and using a form of uncertainty-based intelligent experimentation [10] when the forward search proved inadequate. Another
method under test was sole use of the inverse, learned by LWR. Finally a "bestpossible" value was obtained by numerically inverting the real simulated forward
model instead of a learned model.
All tasks were run for only 200 control cycles. In each case the quality of the learner
was measured by the number of successful actions in the final hundred cycles, where
"successful" was defined as producing behaviour within a small tolerance of b goal .
Results are displayed in Table 1. There is little space to discuss them in detail,
but they generally support the arguments of the previous sections. The inverse
model on its own was generally inferior to the forward method, even in those cases
in which the inverse is well-defined. Outlier removal improved performance on
the BILLIARDS task over non-robustified versions. Interestingly, outlier removal
also greatly benefited the inverse only method. The selectively forgetful methods
performed better than than their non-forgetful counterparts on the VARIER task, but
in the stationary environments they did not pay a great penalty. Cross validation
for K width was useful: for the HARD task, LWR found a very small K width but in the
LINEAR task it unsurprisingly preferred an enormous Kwidth.
Some experiments were also performed with a real billiards robot shown in Figure 4.
Sensing is visual: one camera looks along the cue stick and the other looks down
at the table. The cue stick swivels around the cue ball, which starts each shot
at the same position. At the start of each attempt the object ball is placed at a
random position in the half of the table opposite the cue stick. The camera above
the table obtains the (x, y) image coordinates of the object ball, which constitute
the state. The action is the x-coordinate of the image of the object ball on the cue
stick camera. A motor swivels the cue stick until the centroid of the actual image
of the object ball coincides with the chosen x-coordinate value. The shot is then
performed and observed by the overhead camera. The behaviour is defined as the
cushion and position on the cushion with which the object ball first collides.
Fast, Robust Adaptive Control by Learning only Forward Models
Controller type. (K = use MAD
outlier removal, X
use crossvalidation for K width, R = use crossvalidation for K recall , IF
obtain
=
VARIER
HARD
LINEAR
ARM
BIL'DS
100 ?O
100 ?O
75 ? 3
94? 1
82 ? 4
15 ? 9
48 ? 16
14? 10
19? 9
22 ? 15
54? 8
56 ? 9
8?2
15 ? 8
22? 4
26 ? 10
44? 8
43 ? 8
24 ? 11
72? 8
11 ? 5
72? 4
51 ? 27
65 ?28
53 ? 17
6?2
42 ? 21
92? 2
69? 4
68? 3
66? 5
7?6
70 ? 4
58 ? 4
70 ? 4
73 ? 3
70 ? 5
73 ? 1
13 ? 3
14? 2
O?O
O?O
O?O
O?O
76 ? 28
89? 4
83 ? 4
89 ? 3
90? 3
89? 2
89? 1
3?2
23 ? 10
44? 6
40? 6
40? 7
37 ? 3
71 ? 5
70? 10
55? 12
61 ? 9
75 ? 7
69 ? 7
69? 7
1?1
30? 5
10 ? 2
9?3
11 ? 3
8?1
=
initial candidate action from the inverse model then search the forward
model.)
Best Possible: Obtamed from numerically inverting simulated world
Inverse only, learned WIth LWR
Inverse only, learned WIth LW R, KRX
LWR: IF
LWR: IF X
LWK: IF KX
LWK: IF KRX
L WK: r'orward only, KRX
Kernel KegresslOn: IF
Kernel RegreSSion: IF KRX
Nearest Neigh bour: IF
Nearest Nelghbour: IF K
Nearest Neigh bour: IF KR
Nearest Neighbour: Forward only,
KR
Global Lmear RegresslOn: IF
74? 5 60 ? 17
23 ? 6
8?3
7?3
Global Lmear RegresslOn: IF KR
20 ? 13
73 ? 4
9?2
72? 3
21 ? 4
Global Quadrattc RegresslOn: IF
14?7
5?3
64? 2 70 ? 22 40? 11
Table 1: Relative p erformance of a family o learners on a famil y of tasks. Each
combination of learner and task was run ten times to provide the mean number
of successes and standard deviation shown in the table.
The controller uses the memory based learner to choose the action to maximize the
probability that the ball will enter the nearer of the two pockets at the end of the
table. A histogram of the number of successes against trial number is shown in
Figure 5. In this experiment, the learner was LWR using outlier removal and cross
validation for [(width. After 100 experiences, control choice running on a Sun-4 was
taking 0.8 seconds 2 . Sinking the ball requires better than 1% accuracy in the choice
of action, the world contains discontinuities and there are random outliers in the
data and so it is encouraging that within less than 100 experiences the robot had
reached a 70% success rate--substantially better than the author can achieve.
ACKNOWLEDGEMENTS
Some of the work discussed in this paper is being performed in collaboration with Chris
Atkeson. The robot cue stick was designed and built by Wes Huang with help from Gerrit van Zyl. Dan Hill also helped considerably with the billiards robot. The author is
supported by a Postdoctoral Fellowship from SERC/NATO. Support was provided under Air Force Office of Scien tific Research gran t AFOSR-89-0500 and a National Science
Foundation Presidential Young Investigator Award to Christopher G. Atkeson.
2This could have been greatly improved with more appropriate hardware or better
software techniques such as kd-trees for structuring data [11, 9].
577
578
Moore
10
9
!
8
7
&"
r- 5
J ..
e::I 3
Z :z
1
o
o
1
:zo
40
60
80
100
Trial number (batches of 10)
Figure 4: The billiards robot. In the
foreground is the cue stick which attempts to sink balls in the far pockets.
Figure 5: Frequency of successes versus
con trol cycle for the billiards task.
References
[1] C. G. Atkeson. Using Local Models to Control Movement. In Proceedings of Neural
Information Processing Systems Conference, November 1989.
[2] C. G. Atkeson. Memory-Based Approaches to Approximating Continuous Functions.
Technical report, M. I. T. Artificial Intelligence Laboratory, 1990.
[3] C. G. Atkeson and D. J. Reinkensmeyer. Using Associative Content-Addressable
Memories to Control Robots. In Miller, Sutton, and Werbos, editors, Neural Networks
for Control. MIT Press, 1989.
[4] S. D. Conte and C. De Boor. Elementary Numerical Analysis. McGraw Hill, 1980.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley
& Sons, 1973.
[6] R. Franke. Scattered Data Interpolation: Tests of Some Methods. Mathematics of
Computation, 38(157), January 1982.
[7] F. Hampbell, P. Rousseeuw, E. Ronchetti, and W. Stahel. Robust Statistics. Wiley
International, 1985.
[8] M. 1. Jordan and D. E. Rumelhart. Forward Models: Supervised Learning with a
Distal Teacher. Technical report, M. I. T., July 1990.
[9] A. W. Moore. Efficient Memory-based Learning for Robot Control. PhD. Thesis;
Technical Report No. 209, Computer Laboratory, University of Cambridge, October
1990.
[10] A. W. Moore. Knowledge of Knowledge and Intelligent Experimentation for Learning
Control. In Proceedings of the 1991 Seattle International Joint Conference on Neural
Networks, July 1991.
[11] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal of
Complex Systems, 1(2):273-347, 1987.
[12] R. S. Sutton. Integrated Architecture for Learning, Planning, and Reacting Based
on Approximating Dynamic Programming. In Proceedings of the 7th International
Conference on Machine Learning. Morgan Kaufman, June 1990.
| 585 |@word trial:2 version:3 inversion:4 proportion:1 stronger:1 duda:1 lwk:2 instruction:1 simulation:2 ronchetti:1 shot:4 catastrophically:1 initial:5 series:1 contains:1 interestingly:1 current:2 must:1 john:1 subsequent:4 numerical:5 benign:2 motor:2 designed:1 update:8 stationary:2 intelligence:2 selected:2 cue:8 half:1 ith:1 ial:1 stahel:1 mental:1 provides:3 along:1 direct:2 become:2 overhead:1 dan:1 boor:1 expected:2 frequently:1 planning:1 torque:1 decreasing:1 automatically:1 actual:3 little:2 encouraging:1 begin:1 provided:2 linearity:1 reinkensmeyer:1 what:1 kaufman:1 minimizes:2 substantially:2 finding:1 temporal:1 remember:1 every:2 gravity:1 stick:7 control:29 producing:1 before:3 negligible:1 local:12 mistake:1 sutton:2 ak:4 reacting:1 becoming:2 yd:1 interpolation:1 scien:1 range:1 camera:4 procedure:6 erformance:1 addressable:1 secant:1 danger:1 persistence:1 get:1 undesirable:1 close:2 selection:1 franke:1 map:1 quick:1 fade:1 rule:3 searching:1 coordinate:5 programming:1 us:2 rumelhart:1 expensive:2 particularly:1 updating:1 werbos:1 predicts:1 observed:1 thousand:1 wj:1 cycle:16 sun:1 ordering:1 movement:1 removed:2 yk:1 environment:4 dynamic:2 trol:1 learner:11 bisecting:1 sink:1 joint:4 train:1 zo:1 fast:4 effective:1 artificial:2 query:3 presidential:1 statistic:2 noisy:2 final:1 online:3 associative:1 sequence:1 advantage:3 net:1 g1i:2 regain:1 relevant:1 achieve:5 crossvalidation:2 seattle:1 requirement:1 produce:1 comparative:1 incremental:5 object:5 help:1 depending:1 andrew:1 measured:1 nearest:8 sole:1 progress:1 predicted:2 judge:1 differ:1 closely:1 correct:1 discontinuous:1 human:2 proviso:1 everything:1 sand:1 require:3 behaviour:16 fix:1 ao:1 investigation:2 preliminary:1 memorybased:1 elementary:1 adjusted:1 extension:1 insert:1 accompanying:1 around:1 exp:3 great:1 mapping:1 predict:3 lm:1 major:1 vary:1 a2:2 perceived:1 applicable:1 defeating:1 weighted:6 mit:2 always:1 aim:2 avoid:3 sion:1 office:1 conjunction:1 structuring:1 ax:1 june:1 greatly:3 contrast:1 centroid:1 dependent:2 inaccurate:1 typically:2 integrated:1 initially:1 relation:4 wij:1 classification:2 ill:1 among:2 summed:1 fairly:1 once:1 lwr:10 identical:1 look:3 neigh:2 forgetfulness:2 foreground:1 report:3 intelligent:2 serious:1 few:3 bil:1 randomly:1 neighbour:6 national:1 phase:2 attempt:3 mflop:1 circular:1 introduces:1 bracket:1 xb:1 accurate:1 partial:2 necessary:1 experience:19 unless:1 tree:1 old:1 accommodated:1 formalism:2 disadvantage:2 deviation:2 hundred:1 usefulness:1 successful:2 inadequate:1 too:2 teacher:1 considerably:1 swivel:2 international:3 told:1 quickly:3 continuously:1 again:1 thesis:1 choose:2 slowly:1 huang:1 derivative:2 yp:1 li:1 aggressive:2 sinusoidal:1 de:1 lookup:1 wk:1 explicitly:1 caused:1 performed:5 helped:1 reached:1 start:2 parallel:1 minimize:1 square:2 air:1 accuracy:1 miller:1 identify:1 climbing:2 conceptually:1 weak:1 produced:1 trajectory:2 worth:1 drive:1 ping:1 datapoint:1 suffers:1 inexpensive:1 against:1 frequency:2 krx:4 con:1 proved:1 recall:2 knowledge:2 pocket:2 sophisticated:1 reflecting:1 back:1 supervised:2 follow:1 improved:2 box:1 cushion:2 until:2 d:1 christopher:1 incrementally:1 billiards:7 quality:3 perhaps:1 graphed:1 effect:3 requiring:1 true:1 y2:1 counterpart:1 assigned:1 moore:7 laboratory:3 illustrated:1 distal:1 during:1 width:7 inferior:1 noted:2 excitation:1 coincides:1 hill:4 outline:1 omohundro:1 argues:1 image:3 consideration:2 novel:1 recently:1 common:1 shepard:1 million:1 sinking:1 discussed:1 numerically:3 
cambridge:2 enter:1 mathematics:1 had:2 robot:13 longer:1 closest:1 own:1 recent:3 optimizing:1 moderate:1 irrelevant:1 prime:1 optimizes:1 occasionally:2 certain:1 meta:1 remembered:1 success:4 yi:4 seen:1 minimum:3 fortunately:1 morgan:1 eo:2 determine:1 converge:1 maximize:1 recommended:1 july:2 full:1 multiple:1 technical:3 cross:12 offer:1 prescription:1 hart:1 award:1 a1:2 prediction:5 variant:1 regression:8 basic:1 controller:6 multilayer:1 histogram:1 kernel:6 whereas:2 addition:1 fellowship:1 objection:1 else:1 median:1 rest:1 collides:1 subject:1 thing:1 member:1 jordan:1 near:3 presence:1 ideal:1 variety:1 fit:2 architecture:2 opposite:1 reduce:1 motivated:1 ultimate:1 penalty:1 resistance:1 proceed:1 cause:1 constitute:1 action:31 adequate:3 useful:2 generally:2 amount:2 rousseeuw:1 ten:2 locally:2 hardware:2 wes:1 millisecond:1 alters:1 trapped:1 delta:1 per:3 estimated:1 nevertheless:1 enormous:1 changing:1 wasted:1 year:1 sum:1 run:2 inverse:21 powerful:1 uncertainty:1 place:1 family:3 decide:1 realtime:1 decision:2 scaling:1 entirely:1 pay:1 scene:1 software:1 conte:1 speed:2 argument:1 forgetful:2 eat:1 robustified:1 according:1 gran:1 combination:2 poor:1 ball:9 kd:1 describes:1 increasingly:1 son:1 wi:2 outlier:8 discus:2 end:1 experimentation:2 unreasonable:1 apply:2 observe:3 appropriate:1 alternative:1 robustness:1 shortly:1 batch:1 drifting:1 jd:1 existence:1 cake:1 running:1 x21:1 newton:1 kwidth:2 pushing:1 serc:1 approximating:2 suddenly:1 already:1 xquery:7 detrimental:1 distance:1 separate:1 simulated:3 chris:1 trivial:1 mad:2 index:1 relationship:1 difficult:2 october:1 expense:2 implementation:2 policy:2 perform:5 av:1 observation:3 november:1 displayed:1 january:1 ever:2 drift:1 inverting:2 pair:2 required:3 specified:2 optimized:1 learned:6 decreasingly:1 nearer:1 discontinuity:1 able:1 beyond:2 pattern:1 built:1 memory:27 power:1 dict:1 force:1 arm:5 scheme:1 technology:1 misleading:1 axis:1 commencing:1 prior:2 nice:1 discovery:1 removal:4 hen:1 acknowledgement:1 relative:2 unsurprisingly:1 afosr:1 brittle:1 versus:1 age:2 validation:12 foundation:1 consistent:1 editor:1 collaboration:1 placed:1 supported:1 keeping:1 offline:3 allow:1 wide:2 taking:1 absolute:1 benefit:1 tolerance:1 feedback:1 axi:1 lmear:2 world:2 avoids:1 van:1 forward:20 stuck:3 adaptive:6 jump:1 author:2 atkeson:5 far:3 obtains:1 implicitly:2 preferred:1 nato:1 mcgraw:1 global:5 mem:10 xi:7 postdoctoral:1 search:7 iterative:1 continuous:1 table:7 learn:2 robust:9 ignoring:1 obtaining:1 necessarily:1 complex:1 did:1 noise:1 repeated:1 benefited:1 en:1 scattered:1 slow:1 wiley:2 position:3 wish:1 xl:1 candidate:2 breaking:1 lw:1 jacobian:1 weighting:1 young:1 removing:1 down:1 redict:1 sensing:1 decay:1 concern:1 importance:1 kr:3 phd:1 cartesian:1 horizon:1 relearning:1 kx:1 forget:4 yin:1 lap:2 orward:1 visual:1 flailing:1 monotonic:1 loses:1 environmental:1 relies:1 ma:1 goal:7 absence:1 content:1 change:4 hard:4 included:2 determined:1 acting:1 ithis:1 experimental:2 meaningful:1 selectively:2 internal:2 searched:1 support:2 investigator:1 tested:1 |
5,358 | 5,850 | Training Very Deep Networks
Rupesh Kumar Srivastava
Klaus Greff
?
Jurgen
Schmidhuber
The Swiss AI Lab IDSIA / USI / SUPSI
{rupesh, klaus, juergen}@idsia.ch
Abstract
Theoretical and empirical evidence indicates that the depth of neural networks
is crucial for their success. However, training becomes more difficult as depth
increases, and training of very deep networks remains an open problem. Here we
introduce a new architecture designed to overcome this. Our so-called highway
networks allow unimpeded information flow across many layers on information
highways. They are inspired by Long Short-Term Memory recurrent networks and
use adaptive gating units to regulate the information flow. Even with hundreds of
layers, highway networks can be trained directly through simple gradient descent.
This enables the study of extremely deep and efficient architectures.
1
Introduction & Previous Work
Many recent empirical breakthroughs in supervised machine learning have been achieved through
large and deep neural networks. Network depth (the number of successive computational layers) has
played perhaps the most important role in these successes. For instance, within just a few years, the
top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ?84%
[1] to ?95% [2, 3] using deeper networks with rather small receptive fields [4, 5]. Other results on
practical machine learning problems have also underscored the superiority of deeper networks [6]
in terms of accuracy and/or performance.
In fact, deep networks can represent certain function classes far more efficiently than shallow ones.
This is perhaps most obvious for recurrent nets, the deepest of them all. For example, the n bit
parity problem can in principle be learned by a large feedforward net with n binary input units, 1
output unit, and a single but large hidden layer. But the natural solution for arbitrary n is a recurrent
net with only 3 units and 5 weights, reading the input bit string one bit at a time, making a single
recurrent hidden unit flip its state whenever a new 1 is observed [7]. Related observations hold for
Boolean circuits [8, 9] and modern neural networks [10, 11, 12].
To deal with the difficulties of training deep networks, some researchers have focused on developing
better optimizers (e.g. [13, 14, 15]). Well-designed initialization strategies, in particular the normalized variance-preserving initialization for certain activation functions [16, 17], have been widely
adopted for training moderately deep networks. Other similarly motivated strategies have shown
promising results in preliminary experiments [18, 19]. Experiments showed that certain activation
functions based on local competition [20, 21] may help to train deeper networks. Skip connections between layers or to output layers (where error is ?injected?) have long been used in neural
networks, more recently with the explicit aim to improve the flow of information [22, 23, 2, 24].
A related recent technique is based on using soft targets from a shallow teacher network to aid in
training deeper student networks in multiple stages [25], similar to the neural history compressor
for sequences, where a slowly ticking teacher recurrent net is ?distilled? into a quickly ticking student recurrent net by forcing the latter to predict the hidden units of the former [26]. Finally, deep
networks can be trained layer-wise to help in credit assignment [26, 27], but this approach is less
attractive compared to direct training.
1
Very deep network training still faces problems, albeit perhaps less fundamental ones than the problem of vanishing gradients in standard recurrent networks [28]. The stacking of several non-linear
transformations in conventional feed-forward network architectures typically results in poor propagation of activations and gradients. Hence it remains hard to investigate the benefits of very deep
networks for a variety of problems.
To overcome this, we take inspiration from Long Short Term Memory (LSTM) recurrent networks
[29, 30]. We propose to modify the architecture of very deep feedforward networks such that information flow across layers becomes much easier. This is accomplished through an LSTM-inspired
adaptive gating mechanism that allows for computation paths along which information can flow
across many layers without attenuation. We call such paths information highways. They yield highway networks, as opposed to traditional ?plain? networks.1
Our primary contribution is to show that extremely deep highway networks can be trained directly
using stochastic gradient descent (SGD), in contrast to plain networks which become hard to optimize as depth increases (Section 3.1). Deep networks with limited computational budget (for which
a two-stage training procedure mentioned above was recently proposed [25]) can also be directly
trained in a single stage when converted to highway networks. Their ease of training is supported
by experimental results demonstrating that highway networks also generalize well to unseen data.
2
Highway Networks
Notation We use boldface letters for vectors and matrices, and italicized capital letters to denote
transformation functions. 0 and 1 denote vectors of zeros and ones respectively, and I denotes an
identity matrix. The function ?(x) is defined as ?(x) = 1+e1?x , x ? R. The dot operator (?) is used
to denote element-wise multiplication.
A plain feedforward neural network typically consists of L layers where the lth layer (l ?
{1, 2, ..., L}) applies a non-linear transformation H (parameterized by WH,l ) on its input xl to
produce its output yl . Thus, x1 is the input to the network and yL is the network?s output. Omitting
the layer index and biases for clarity,
y = H(x, WH ).
(1)
H is usually an affine transform followed by a non-linear activation function, but in general it may
take other forms, possibly convolutional or recurrent. For a highway network, we additionally define
two non-linear transforms T (x, WT ) and C(x, WC ) such that
y = H(x, WH )? T (x, WT ) + x ? C(x, WC ).
(2)
We refer to T as the transform gate and C as the carry gate, since they express how much of the
output is produced by transforming the input and carrying it, respectively. For simplicity, in this
paper we set C = 1 ? T , giving
y = H(x, WH )? T (x, WT ) + x ? (1 ? T (x, WT )).
(3)
The dimensionality of x, y, H(x, WH ) and T (x, WT ) must be the same for Equation 3 to be valid.
Note that this layer transformation is much more flexible than Equation 1. In particular, observe that
for particular values of T ,
y=
x,
if T (x, WT ) = 0,
H(x, WH ), if T (x, WT ) = 1.
(4)
Similarly, for the Jacobian of the layer transform,
1
This paper expands upon a shorter report on Highway Networks [31]. More recently, a similar LSTMinspired model was also proposed [32].
2
Figure 1: Comparison of optimization of plain networks and highway networks of various depths.
Left: The training curves for the best hyperparameter settings obtained for each network depth.
Right: Mean performance of top 10 (out of 100) hyperparameter settings. Plain networks become
much harder to optimize with increasing depth, while highway networks with up to 100 layers can
still be optimized well. Best viewed on screen (larger version included in Supplementary Material).
dy
=
dx
I,
if T (x, WT ) = 0,
0
H (x, WH ), if T (x, WT ) = 1.
(5)
Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior
between that of H and that of a layer which simply passes its inputs through. Just as a plain layer
consists of multiple computing units such that the ith unit computes yi = Hi (x), a highway network
consists of multiple blocks such that the ith block computes a block state Hi (x) and transform
gate output Ti (x). Finally, it produces the block output yi = Hi (x) ? Ti (x) + xi ? (1 ? Ti (x)),
which is connected to the next layer.2
2.1
Constructing Highway Networks
As mentioned earlier, Equation 3 requires that the dimensionality of x, y, H(x, WH ) and
T (x, WT ) be the same. To change the size of the intermediate representation, one can replace
x with x
? obtained by suitably sub-sampling or zero-padding x. Another alternative is to use a plain
layer (without highways) to change dimensionality, which is the strategy we use in this study.
Convolutional highway layers utilize weight-sharing and local receptive fields for both H and T
transforms. We used the same sized receptive fields for both, and zero-padding to ensure that the
block state and transform gate feature maps match the input size.
2.2
Training Deep Highway Networks
We use the transform gate defined as T (x) = ?(WT T x + bT ), where WT is the weight matrix
and bT the bias vector for the transform gates. This suggests a simple initialization scheme which
is independent of the nature of H: bT can be initialized with a negative value (e.g. -1, -3 etc.) such
that the network is initially biased towards carry behavior. This scheme is strongly inspired by the
proposal [30] to initially bias the gates in an LSTM network, to help bridge long-term temporal
dependencies early in learning. Note that ?(x) ? (0, 1), ?x ? R, so the conditions in Equation 4
can never be met exactly.
In our experiments, we found that a negative bias initialization for the transform gates was sufficient
for training to proceed in very deep networks for various zero-mean initial distributions of WH
and different activation functions used by H. In pilot experiments, SGD did not stall for networks
with more than 1000 layers. Although the initial bias is best treated as a hyperparameter, as a
general guideline we suggest values of -1, -2 and -3 for convolutional highway networks of depth
approximately 10, 20 and 30.
2
Our pilot experiments on training very deep networks were successful with a more complex block design
closely resembling an LSTM block ?unrolled in time?. Here we report results only for a much simplified form.
3
Network
No. of parameters
Test Accuracy (in %)
Highway Networks
10-layer (width 16) 10-layer (width 32)
39 K
151 K
99.43 (99.4?0.03) 99.55 (99.54?0.02)
Maxout [20]
DSN [24]
420 K
99.55
350 K
99.61
Table 1: Test set classification accuracy for pilot experiments on the MNIST dataset.
Network
No. of Layers
Fitnet Results (reported by Romero et. al.[25])
Teacher
5
Fitnet A
11
Fitnet B
19
Highway networks
Highway A (Fitnet A) 11
Highway B (Fitnet B) 19
Highway C
32
No. of Parameters
Accuracy (in %)
?9M
?250K
?2.5M
90.18
89.01
91.61
?236K
?2.3M
?1.25M
89.18
92.46 (92.28?0.16)
91.20
Table 2: CIFAR-10 test set accuracy of convolutional highway networks. Architectures tested were
based on fitnets trained by Romero et. al. [25] using two-stage hint based training. Highway networks were trained in a single stage without hints, matching or exceeding the performance of fitnets.
3
Experiments
All networks were trained using SGD with momentum. An exponentially decaying learning rate was
used in Section 3.1. For the rest of the experiments, a simpler commonly used strategy was employed
where the learning rate starts at a value ? and decays according to a fixed schedule by a factor ?.
?, ? and the schedule were selected once based on validation set performance on the CIFAR-10
dataset, and kept fixed for all experiments. All convolutional highway networks utilize the rectified
linear activation function [16] to compute the block state H. To provide a better estimate of the
variability of classification results due to random initialization, we report our results in the format
Best (mean ? std.dev.) based on 5 runs wherever available. Experiments were conducted using
Caffe [33] and Brainstorm (https://github.com/IDSIA/brainstorm) frameworks. Source
code, hyperparameter search results and related scripts are publicly available at http://people.
idsia.ch/?rupesh/very_deep_learning/.
3.1
Optimization
To support the hypothesis that highway networks do not suffer from increasing depth, we conducted
a series of rigorous optimization experiments, comparing them to plain networks with normalized
initialization [16, 17].
We trained both plain and highway networks of varying varying depths on the MNIST digit classification dataset. All networks are thin: each layer has 50 blocks for highway networks and 71
units for plain networks, yielding roughly identical numbers of parameters (?5000) per layer. In
all networks, the first layer is a fully connected plain layer followed by 9, 19, 49, or 99 fully connected plain or highway layers. Finally, the network output is produced by a softmax layer. We
performed a random search of 100 runs for both plain and highway networks to find good settings
for the following hyperparameters: initial learning rate, momentum, learning rate exponential decay
factor & activation function (either rectified linear or tanh). For highway networks, an additional
hyperparameter was the initial value for the transform gate bias (between -1 and -10). Other weights
were initialized using the same normalized initialization as plain networks.
The training curves for the best performing networks for each depth are shown in Figure 1. As expected, 10 and 20-layer plain networks exhibit very good performance (mean loss < 1e?4 ), which
significantly degrades as depth increases, even though network capacity increases. Highway networks do not suffer from an increase in depth, and 50/100 layer highway networks perform similar
to 10/20 layer networks. The 100-layer highway network performed more than 2 orders of magnitude better compared to a similarly-sized plain network. It was also observed that highway networks
consistently converged significantly faster than plain ones.
4
Network
Maxout [20]
dasNet [36]
NiN [35]
DSN [24]
All-CNN [37]
Highway Network
CIFAR-10 Accuracy (in %)
90.62
90.78
91.19
92.03
92.75
92.40 (92.31?0.12)
CIFAR-100 Accuracy (in %)
61.42
66.22
64.32
65.43
66.29
67.76 (67.61?0.15)
Table 3: Test set accuracy of convolutional highway networks on the CIFAR-10 and CIFAR-100
object recognition datasets with typical data augmentation. For comparison, we list the accuracy
reported by recent studies in similar experimental settings.
3.2
Pilot Experiments on MNIST Digit Classification
As a sanity check for the generalization capability of highway networks, we trained 10-layer convolutional highway networks on MNIST, using two architectures, each with 9 convolutional layers
followed by a softmax output. The number of filter maps (width) was set to 16 and 32 for all the
layers. We obtained test set performance competitive with state-of-the-art methods with much fewer
parameters, as show in Table 1.
3.3
Experiments on CIFAR-10 and CIFAR-100 Object Recognition
3.3.1
Comparison to Fitnets
Fitnet training Maxout networks can cope much better with increased depth than those with traditional activation functions [20]. However, Romero et. al. [25] recently reported that training on
CIFAR-10 through plain backpropogation was only possible for maxout networks with a depth up
to 5 layers when the number of parameters was limited to ?250K and the number of multiplications
to ?30M. Similar limitations were observed for higher computational budgets. Training of deeper
networks was only possible through the use of a two-stage training procedure and addition of soft
targets produced from a pre-trained shallow teacher network (hint-based training).
We found that it was easy to train highway networks with numbers of parameters and operations
comparable to those of fitnets in a single stage using SGD. As shown in Table 2, Highway A and
Highway B, which are based on the architectures of Fitnet A and Fitnet B, respectively, obtain
similar or higher accuracy on the test set. We were also able to train thinner and deeper networks:
for example a 32-layer highway network consisting of alternating receptive fields of size 3x3 and
1x1 with ?1.25M parameters performs better than the earlier teacher network [20].
3.3.2
Comparison to State-of-the-art Methods
It is possible to obtain high performance on the CIFAR-10 and CIFAR-100 datasets by utilizing
very large networks and extensive data augmentation. This approach was popularized by Ciresan
et. al. [5] and recently extended by Graham [34]. Since our aim is only to demonstrate that deeper
networks can be trained without sacrificing ease of training or generalization ability, we only performed experiments in the more common setting of global contrast normalization, small translations
and mirroring of images. Following Lin et. al. [35], we replaced the fully connected layer used
in the networks in the previous section with a convolutional layer with a receptive field of size one
and a global average pooling layer. The hyperparameters from the last section were re-used for both
CIFAR-10 and CIFAR-100, therefore it is quite possible to obtain much better results with better
architectures/hyperparameters. The results are tabulated in Table 3.
4
Analysis
Figure 2 illustrates the inner workings of the best3 50 hidden layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show
3
obtained via random search over hyperparameters to minimize the best training set error achieved using
each configuration
5
Figure 2: Visualization of best 50 hidden-layer highway networks trained on MNIST (top row) and
CIFAR-100 (bottom row). The first hidden layer is a plain layer which changes the dimensionality
of the representation to 50. Each of the 49 highway layers (y-axis) consists of 50 blocks (x-axis).
The first column shows the transform gate biases, which were initialized to -2 and -4 respectively.
In the second column the mean output of the transform gate over all training examples is depicted.
The third and fourth columns show the output of the transform gates and the block outputs (both
networks using tanh) for a single random training sample. Best viewed in color.
the bias, the mean activity over all training samples, and the activity for a single random sample for
each transform gate respectively. Block outputs for the same single sample are displayed in the last
column.
The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that contrary to our expectations most biases decreased further during training. For
the CIFAR-100 network the biases increase with depth forming a gradient. Curiously this gradient
is inversely correlated with the average activity of the transform gates, as seen in the second column.
This indicates that the strong negative biases at low depths are not used to shut down the gates, but to
make them more selective. This behavior is also suggested by the fact that the transform gate activity
for a single example (column 3) is very sparse. The effect is more pronounced for the CIFAR-100
network, but can also be observed to a lesser extent in the MNIST network.
The last column of Figure 2 displays the block outputs and visualizes the concept of ?information
highways?. Most of the outputs stay constant over many layers forming a pattern of stripes. Most of
the change in outputs happens in the early layers (? 15 for MNIST and ? 40 for CIFAR-100).
4.1
Routing of Information
One possible advantage of the highway architecture over hard-wired shortcut connections is that the
network can learn to dynamically adjust the routing of the information based on the current input.
This begs the question: does this behaviour manifest itself in trained networks or do they just learn
a static routing that applies to all inputs similarly. A partial answer can be found by looking at the
mean transform gate activity (second column) and the single example transform gate outputs (third
column) in Figure 2. Especially for the CIFAR-100 case, most transform gates are active on average,
while they show very selective activity for the single example. This implies that for each sample
only a few blocks perform transformation but different blocks are utilized by different samples.
This data-dependent routing mechanism is further investigated in Figure 3. In each of the columns
we show how the average over all samples of one specific class differs from the total average shown
in the second column of Figure 2. For MNIST digits 0 and 7 substantial differences can be seen
6
Figure 3: Visualization showing the extent to which the mean transform gate activity for certain
classes differs from the mean activity over all training samples. Generated using the same 50-layer
highway networks on MNIST on CIFAR-100 as Figure 2. Best viewed in color.
within the first 15 layers, while for CIFAR class numbers 0 and 1 the differences are sparser and
spread out over all layers. In both cases it is clear that the mean activity pattern differs between
classes. The gating system acts not just as a mechanism to ease training, but also as an important
part of the computation in a trained network.
4.2
Layer Importance
Since we bias all the transform gates towards being closed, in the beginning every layer mostly
copies the activations of the previous layer. Does training indeed change this behaviour, or is the
final network still essentially equivalent to a network with a much fewer layers? To shed light on this
issue, we investigated the extent to which lesioning a single layer affects the total performance of
trained networks from Section 3.1. By lesioning, we mean manually setting all the transform gates
of a layer to 0 forcing it to simply copy its inputs. For each layer, we evaluated the network on the
full training set with the gates of that layer closed. The resulting performance as a function of the
lesioned layer is shown in Figure 4.
For MNIST (left) it can be seen that the error rises significantly if any one of the early layers is
removed, but layers 15 ? 45 seem to have close to no effect on the final performance. About 60% of
the layers don?t learn to contribute to the final result, likely because MNIST is a simple dataset that
doesn?t require much depth.
We see a different picture for the CIFAR-100 dataset (right) with performance degrading noticeably
when removing any of the first ? 40 layers. This suggests that for complex problems a highway
network can learn to utilize all of its layers, while for simpler problems like MNIST it will keep
many of the unneeded layers idle. Such behavior is desirable for deep networks in general, but
appears difficult to obtain using plain networks.
5
Discussion
Alternative approaches to counter the difficulties posed by depth mentioned in Section 1 often have
several limitations. Learning to route information through neural networks with the help of competitive interactions has helped to scale up their application to challenging problems by improving
credit assignment [38], but they still suffer when depth increases beyond ?20 even with careful initialization [17]. Effective initialization methods can be difficult to derive for a variety of activation
functions. Deep supervision [24] has been shown to hurt performance of thin deep networks [25].
Very deep highway networks, on the other hand, can directly be trained with simple gradient descent methods due to their specific architecture. This property does not rely on specific non-linear
transformations, which may be complex convolutional or recurrent transforms, and derivation of
a suitable initialization scheme is not essential. The additional parameters required by the gating
mechanism help in routing information through the use of multiplicative connections, responding
differently to different inputs, unlike fixed ?skip? connections.
7
Figure 4: Lesioned training set performance (y-axis) of the best 50-layer highway networks on
MNIST (left) and CIFAR-100 (right), as a function of the lesioned layer (x-axis). Evaluated on
the full training set while forcefully closing all the transform gates of a single layer at a time. The
non-lesioned performance is indicated as a dashed line at the bottom.
A possible objection is that many layers might remain unused if the transform gates stay closed.
Our experiments show that this possibility does not affect networks adversely?deep and narrow
highway networks can match/exceed the accuracy of wide and shallow maxout networks, which
would not be possible if layers did not perform useful computations. Additionally, we can exploit
the structure of highways to directly evaluate the contribution of each layer as shown in Figure 4.
For the first time, highway networks allow us to examine how much computation depth is needed
for a given problem, which can not be easily done with plain networks.
Acknowledgments
We thank NVIDIA Corporation for their donation of GPUs and acknowledge funding from the
EU project NASCENCE (FP7-ICT-317662). We are grateful to Sepp Hochreiter and Thomas Unterthiner for helpful comments and Jan Koutn??k for help in conducting experiments.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems, 2012.
[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842
[cs], September 2014.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs], September 2014.
[4] DC Ciresan, Ueli Meier, Jonathan Masci, Luca M Gambardella, and J?urgen Schmidhuber. Flexible, high
performance convolutional neural networks for image classification. In IJCAI, 2011.
[5] Dan Ciresan, Ueli Meier, and J?urgen Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[6] Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. Feature learning in deep neural
networks-studies on speech recognition tasks. arXiv preprint arXiv:1301.3605, 2013.
[7] Sepp Hochreiter and Jurgen Schmidhuber. Bridging long time lags by weight guessing and ?long shortterm memory?. Spatiotemporal models in biological and artificial systems, 37:65?72, 1996.
[8] Johan H?astad. Computational limitations of small-depth circuits. MIT press, 1987.
[9] Johan H?astad and Mikael Goldmann. On the power of small-depth threshold circuits. Computational
Complexity, 1(2):113?129, 1991.
[10] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison
between shallow and deep architectures. IEEE Transactions on Neural Networks, 2014.
8
[11] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear
regions of deep neural networks. In Advances in Neural Information Processing Systems. 2014.
[12] James Martens and Venkatesh Medabalimi. On the expressive efficiency of sum product networks.
arXiv:1411.7717 [cs, stat], November 2014.
[13] James Martens and Ilya Sutskever. Training deep and recurrent networks with hessian-free optimization.
Neural Networks: Tricks of the Trade, pages 1?58, 2012.
[14] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization
and momentum in deep learning. pages 1139?1147, 2013.
[15] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In
Advances in Neural Information Processing Systems 27, pages 2933?2941. 2014.
[16] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural
networks. In International Conference on Artificial Intelligence and Statistics, pages 249?256, 2010.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing
human-level performance on ImageNet classification. arXiv:1502.01852 [cs], February 2015.
[18] David Sussillo and L. F. Abbott. Random walk initialization for training very deep feedforward networks.
arXiv:1412.6558 [cs, stat], December 2014.
[19] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of
learning in deep linear neural networks. arXiv:1312.6120 [cond-mat, q-bio, stat], December 2013.
[20] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout
networks. arXiv:1302.4389 [cs, stat], February 2013.
[21] Rupesh K. Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and J?urgen Schmidhuber.
Compete to compute. In Advances in Neural Information Processing Systems, pages 2310?2318, 2013.
[22] Tapani Raiko, Harri Valpola, and Yann LeCun. Deep learning made easier by linear transformations in
perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924?932, 2012.
[23] Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
[24] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised
nets. pages 562?570, 2015.
[25] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua
Bengio. FitNets: Hints for thin deep nets. arXiv:1412.6550 [cs], December 2014.
[26] J?urgen Schmidhuber. Learning complex, extended sequences using the principle of history compression.
Neural Computation, 4(2):234?242, March 1992.
[27] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets.
Neural computation, 18(7):1527?1554, 2006.
[28] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Masters thesis, Technische Universit?at M?unchen, M?unchen, 1991.
[29] Sepp Hochreiter and J?urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735?
1780, November 1997.
[30] Felix A. Gers, J?urgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with
LSTM. In ICANN, volume 2, pages 850?855, 1999.
[31] Rupesh Kumar Srivastava, Klaus Greff, and J?urgen Schmidhuber. Highway networks. arXiv:1505.00387
[cs], May 2015.
[32] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long Short-Term memory. arXiv:1507.01526
[cs], July 2015.
[33] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding.
arXiv:1408.5093 [cs], 2014.
[34] Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv:1409.6070, September 2014.
[35] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2014.
[36] Marijn F Stollenga, Jonathan Masci, Faustino Gomez, and J?urgen Schmidhuber. Deep networks with
internal selective attention through feedback connections. In NIPS. 2014.
[37] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv:1412.6806 [cs], December 2014.
[38] Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, and J?urgen Schmidhuber. Understanding
locally competitive networks. In International Conference on Learning Representations, 2015.
9
| 5850 |@word cnn:1 version:1 compression:1 suitably:1 open:1 shuicheng:1 sgd:4 harder:1 carry:2 initial:4 configuration:1 series:1 liu:1 current:1 com:1 comparing:1 guadarrama:1 activation:10 dx:1 must:1 romero:4 enables:1 christian:1 designed:2 intelligence:2 selected:1 fewer:2 shut:1 ivo:1 beginning:1 ith:2 vanishing:1 short:4 pascanu:2 contribute:1 successive:1 simpler:2 zhang:2 along:1 direct:1 become:2 dsn:2 consists:4 seide:1 dan:1 introduce:1 indeed:1 expected:1 roughly:1 behavior:4 examine:1 multi:1 inspired:3 increasing:2 becomes:2 project:1 notation:1 circuit:3 unimpeded:1 string:1 degrading:1 transformation:7 corporation:1 temporal:1 every:1 attenuation:1 expands:1 ti:3 act:1 shed:1 exactly:1 universit:1 classifier:1 continual:1 bio:1 unit:9 superiority:1 danihelka:1 felix:1 local:2 modify:1 thinner:1 path:2 approximately:1 might:1 alexey:1 fitnets:5 initialization:12 dynamically:1 suggests:2 challenging:1 ease:3 limited:2 practical:1 acknowledgment:1 lecun:1 block:15 differs:3 swiss:1 x3:1 optimizers:1 digit:3 procedure:2 razvan:2 jan:1 evan:1 riedmiller:1 empirical:2 yan:1 significantly:3 matching:1 pre:1 idle:1 jui:1 suggest:1 lesioning:2 close:1 operator:1 yee:1 optimize:2 conventional:1 map:2 equivalent:1 marten:3 resembling:1 sepp:4 attention:1 convex:1 focused:1 simplicity:2 identifying:1 usi:1 marijn:1 utilizing:1 embedding:1 hurt:1 target:2 netzen:1 guido:1 exact:1 hypothesis:1 goodfellow:1 trick:1 element:1 idsia:4 recognition:5 kahou:1 utilized:1 std:1 stripe:1 observed:4 role:1 bottom:3 preprint:1 region:1 connected:5 montufar:1 sun:1 eu:1 counter:1 removed:1 trade:1 deeply:1 mentioned:3 substantial:1 transforming:1 benjamin:1 complexity:2 moderately:1 lesioned:4 warde:1 dynamic:1 tobias:1 trained:17 carrying:1 grateful:1 upon:1 efficiency:1 easily:1 differently:1 various:2 harri:1 derivation:1 train:3 fast:2 effective:1 artificial:3 klaus:3 caffe:2 sanity:1 quite:1 lag:1 widely:1 larger:1 supplementary:1 posed:1 kalchbrenner:1 ability:1 simonyan:1 statistic:2 unseen:1 transform:25 itself:1 final:3 sequence:3 advantage:1 karayev:1 net:9 propose:1 interaction:1 product:1 tu:1 pronounced:1 competition:1 sutskever:3 ijcai:1 darrell:1 nin:1 produce:2 wired:1 generating:1 object:2 help:6 depending:1 recurrent:12 derive:1 stat:4 donation:1 sohrob:1 andrew:3 sussillo:1 jurgen:2 astad:2 strong:1 c:11 skip:2 implies:1 met:1 closely:1 filter:1 stochastic:1 human:1 routing:5 saxe:1 material:1 forcefully:1 noticeably:1 seltzer:1 require:1 behaviour:2 generalization:2 preliminary:1 koutn:1 biological:1 hold:1 credit:2 ueli:2 predict:1 vary:1 early:3 faustino:3 tanh:2 ross:1 bridge:1 highway:59 mit:1 aim:2 rather:1 varying:2 consistently:1 indicates:2 check:1 contrast:2 rigorous:1 helpful:1 ganguli:2 rupesh:6 dependent:1 typically:2 bt:3 unneeded:1 initially:2 hidden:6 selective:3 going:1 issue:1 classification:9 flexible:2 dauphin:1 art:2 breakthrough:1 softmax:2 urgen:9 brox:1 field:5 once:1 distilled:1 never:1 sampling:1 manually:1 untersuchungen:1 qiang:1 identical:1 yu:2 thin:3 report:3 yoshua:5 mirza:1 hint:4 few:2 dosovitskiy:1 modern:1 scarselli:1 replaced:1 consisting:1 investigate:1 possibility:1 adjust:1 yielding:1 light:1 farley:1 stollenga:1 partial:1 shorter:1 unterthiner:1 initialized:4 walk:1 re:1 sacrificing:1 girshick:1 theoretical:1 instance:1 column:13 increased:2 soft:2 boolean:1 dev:1 earlier:2 assignment:2 juergen:1 rabinovich:1 stacking:1 technische:1 hundred:1 krizhevsky:1 successful:1 conducted:2 supsi:1 samira:1 osindero:1 reported:3 dependency:1 
answer:1 teacher:5 spatiotemporal:1 cho:2 fundamental:1 lstm:5 international:3 stay:2 lee:1 yl:2 dong:1 michael:1 quickly:1 ilya:3 monica:1 augmentation:2 thesis:1 opposed:1 huang:1 slowly:1 possibly:1 adversely:1 li:1 szegedy:1 converted:1 student:2 script:1 performed:3 helped:1 lab:1 closed:3 multiplicative:1 start:1 decaying:1 competitive:3 capability:1 simon:1 jia:2 contribution:2 minimize:1 publicly:1 accuracy:12 convolutional:16 variance:1 conducting:1 efficiently:1 yield:1 generalize:1 vincent:1 zhengyou:1 produced:3 ren:1 carlo:1 researcher:1 rectified:2 chassang:1 visualizes:1 history:2 converged:1 whenever:1 sharing:1 trevor:1 james:4 obvious:1 static:1 dynamischen:1 pilot:4 dataset:6 wh:9 manifest:1 color:2 dimensionality:4 schedule:2 appears:1 feed:1 higher:2 supervised:2 xie:1 unchen:2 zisserman:1 wei:1 evaluated:2 though:1 strongly:1 done:1 just:4 stage:7 working:1 hand:1 expressive:1 mehdi:1 nonlinear:1 propagation:1 perhaps:3 indicated:1 omitting:1 effect:2 normalized:3 concept:1 former:1 hence:1 inspiration:1 kyunghyun:2 alternating:1 xavier:1 spatially:1 deal:1 attractive:1 during:1 width:3 whye:1 demonstrate:1 performs:1 dragomir:1 greff:2 image:5 wise:2 recently:5 funding:1 common:1 exponentially:1 ballas:1 volume:1 he:1 surpassing:1 refer:1 jinyu:1 anguelov:1 backpropogation:1 ai:1 grid:1 similarly:4 closing:1 dot:1 supervision:1 etc:1 patrick:1 sergio:1 recent:3 showed:1 forcing:2 schmidhuber:11 route:1 certain:4 nvidia:1 binary:1 success:2 accomplished:1 yi:2 preserving:1 seen:3 additional:2 george:1 tapani:1 employed:1 attacking:1 gambardella:1 xiangyu:1 dashed:1 cummins:1 july:1 multiple:3 full:2 desirable:1 match:2 faster:1 long:9 cifar:23 lin:2 luca:1 e1:1 prediction:1 jost:1 essentially:1 expectation:1 vision:1 arxiv:17 represent:1 normalization:1 sergey:1 achieved:2 hochreiter:4 proposal:1 addition:1 fitnet:8 decreased:1 objection:1 adriana:1 source:1 jian:1 crucial:1 biased:1 rest:1 unlike:1 pass:1 comment:1 pooling:1 december:4 contrary:1 flow:5 seem:1 call:1 unused:1 feedforward:5 intermediate:1 easy:1 exceed:1 bengio:5 variety:2 affect:2 zhuowen:1 architecture:12 ciresan:3 stall:1 inner:1 lesser:1 motivated:1 curiously:1 bridging:1 padding:2 tabulated:1 suffer:3 karen:1 speech:1 proceed:1 hessian:1 shaoqing:1 deep:37 mirroring:1 useful:1 clear:1 transforms:3 ticking:2 locally:1 mcclelland:1 http:2 per:1 hyperparameter:5 mat:1 express:1 demonstrating:1 threshold:1 yangqing:2 capital:1 clarity:1 abbott:1 dahl:1 nal:1 utilize:3 kept:1 year:1 sum:1 run:2 compete:1 letter:2 injected:1 parameterized:1 fourth:1 master:1 springenberg:1 yann:2 dy:1 comparable:1 bit:3 graham:2 layer:74 hi:3 followed:3 played:1 display:1 courville:1 gomez:3 activity:9 alex:3 wc:2 franco:1 extremely:2 min:1 kumar:3 performing:1 format:1 gpus:1 martin:1 developing:1 according:1 popularized:1 march:1 poor:1 across:3 remain:1 shallow:5 making:1 wherever:1 happens:1 equation:4 visualization:2 remains:2 mechanism:4 needed:1 flip:1 fp7:1 adopted:1 available:2 operation:1 gulcehre:1 goldmann:1 observe:1 regulate:1 pierre:1 alternative:2 gate:27 thomas:2 ebrahimi:1 top:4 underscored:1 denotes:1 ensure:1 responding:1 mikael:1 exploit:1 giving:1 ting:1 especially:1 february:2 question:1 receptive:5 strategy:4 primary:1 degrades:1 traditional:2 guessing:1 antoine:1 exhibit:1 gradient:7 september:3 thank:1 valpola:1 capacity:1 italicized:1 extent:3 boldface:1 code:1 brainstorm:2 index:1 reed:1 unrolled:1 sermanet:1 difficult:3 mostly:1 frank:1 negative:3 rise:1 design:1 guideline:1 perform:3 teh:1 
observation:1 convolution:1 datasets:2 acknowledge:1 caglar:1 descent:3 november:2 displayed:1 extended:2 variability:1 looking:1 hinton:3 dc:1 arbitrary:1 david:2 venkatesh:1 meier:2 required:1 extensive:1 connection:5 imagenet:3 optimized:1 learned:1 narrow:1 nip:1 able:1 suggested:1 beyond:1 usually:1 pattern:3 scott:1 saining:1 reading:1 memory:5 belief:1 dasnet:1 power:1 suitable:1 natural:1 difficulty:3 treated:1 rely:1 scheme:3 improve:1 github:1 inversely:1 picture:1 axis:4 raiko:1 shortterm:1 ict:1 deepest:1 understanding:2 multiplication:2 graf:2 fully:4 loss:1 interesting:1 limitation:3 nascence:1 geoffrey:3 validation:1 shelhamer:1 vanhoucke:1 affine:1 sufficient:1 principle:2 begs:1 neuronalen:1 translation:1 row:4 supported:1 parity:1 last:3 copy:2 free:1 bias:13 allow:2 deeper:8 wide:1 face:1 sparse:2 benefit:1 feedback:1 overcome:2 plain:21 depth:23 valid:1 curve:2 fred:1 computes:2 doesn:1 forward:1 commonly:1 adaptive:2 made:1 simplified:1 far:1 erhan:1 cope:1 transaction:1 keep:1 global:2 active:1 xi:1 don:1 surya:2 search:3 table:6 additionally:2 promising:1 nature:1 learn:4 johan:2 delving:1 nicolas:1 improving:1 investigated:2 complex:4 constructing:1 did:2 icann:1 spread:1 hyperparameters:4 x1:2 screen:1 aid:1 sub:1 momentum:3 explicit:1 exceeding:1 exponential:1 xl:1 gers:1 jacobian:1 third:2 donahue:1 masci:4 ian:1 down:1 removing:1 dumitru:1 specific:3 rectifier:1 zu:1 gating:4 showing:1 list:1 decay:2 striving:1 evidence:1 glorot:1 essential:1 mnist:14 albeit:1 importance:2 magnitude:1 gallagher:1 budget:2 illustrates:1 sparser:1 easier:2 chen:2 smoothly:1 depicted:1 forget:1 simply:2 likely:1 saddle:1 forming:2 gatta:1 kaiming:1 compressor:1 srivastava:4 applies:2 ch:2 lth:1 identity:1 viewed:3 sized:2 careful:1 towards:2 maxout:6 jeff:1 replace:1 shortcut:1 hard:3 change:5 included:1 typical:1 wt:12 kazerounian:1 called:1 total:2 experimental:2 cond:1 perceptrons:1 aaron:1 internal:1 people:1 support:1 latter:1 jonathan:5 evaluate:1 tested:1 correlated:1 |
5,359 | 5,851 | Deep Convolutional Inverse Graphics Network
Tejas D. Kulkarni*1 , William F. Whitney*2 ,
Pushmeet Kohli3 , Joshua B. Tenenbaum4
1,2,4
Massachusetts Institute of Technology, Cambridge, USA
3
Microsoft Research, Cambridge, UK
1
2
[email protected] [email protected] 3 [email protected] 4 [email protected]
* First two authors contributed equally and are listed alphabetically.
Abstract
This paper presents the Deep Convolution Inverse Graphics Network (DCIGN), a model that aims to learn an interpretable representation of images,
disentangled with respect to three-dimensional scene structure and viewing
transformations such as depth rotations and lighting variations. The DCIGN model is composed of multiple layers of convolution and de-convolution
operators and is trained using the Stochastic Gradient Variational Bayes
(SGVB) algorithm [10]. We propose a training procedure to encourage
neurons in the graphics code layer to represent a specific transformation
(e.g. pose or light). Given a single input image, our model can generate
new images of the same object with variations in pose and lighting. We
present qualitative and quantitative tests of the model?s efficacy at learning
a 3D rendering engine for varied object classes including faces and chairs.
1
Introduction
Deep learning has led to remarkable breakthroughs in learning hierarchical representations
from images. Models such as Convolutional Neural Networks (CNNs) [13], Restricted Boltzmann Machines, [8, 19], and Auto-encoders [2, 23] have been successfully applied to produce
multiple layers of increasingly abstract visual representations. However, there is relatively
little work on characterizing the optimal representation of the data. While Cohen et al.
[4] have considered this problem by proposing a theoretical framework to learn irreducible
representations with both invariances and equivariances, coming up with the best representation for any given task is an open question.
Various work [3, 4, 7] has been done on the theory and practice of representation learning,
and from this work a consistent set of desiderata for representations has emerged: invariance,
interpretability, abstraction, and disentanglement. In particular, Bengio et al. [3] propose
that a disentangled representation is one for which changes in the encoded data are sparse
over real-world transformations; that is, changes in only a few latents at a time should be
able to represent sequences which are likely to happen in the real world.
The ?vision as inverse graphics? paradigm suggests a representation for images which provides these features. Computer graphics consists of a function to go from compact descriptions of scenes (the graphics code) to images, and this graphics code is typically disentangled
to allow for rendering scenes with fine-grained control over transformations such as object
location, pose, lighting, texture, and shape. This encoding is designed to easily and interpretably represent sequences of real data so that common transformations may be compactly
represented in software code; this criterion is conceptually identical to that of Bengio et al.,
and graphics codes conveniently align with the properties of an ideal representation.
1
Q(zi |x)
graphics code
Unpooling (Nearest Neighbor) +
Convolution
Convolution + Pooling
pose
..
..
observed
image
Filters = 96
kernel size (KS) = 5
Filters = 64
KS = 5
Filters = 32
KS = 5
shape
150x150
P (x|z)
light
x
7200
Filters = 32
KS = 7
Filters = 64
KS = 7
Filters = 96
KS = 7
{?200 , ?200 }
Encoder
(De-rendering)
Decoder
(Renderer)
Figure 1: Model Architecture: Deep Convolutional Inverse Graphics Network (DC-IGN)
has an encoder and a decoder. We follow the variational autoencoder [10] architecture
with variations. The encoder consists of several layers of convolutions followed by maxpooling and the decoder has several layers of unpooling (upsampling using nearest neighbors)
followed by convolution. (a) During training, data x is passed through the encoder to
produce the posterior approximation Q(zi |x), where zi consists of scene latent variables
such as pose, light, texture or shape. In order to learn parameters in DC-IGN, gradients
are back-propagated using stochastic gradient descent using the following variational object
function: ?log(P (x|zi )) + KL(Q(zi |x)||P (zi )) for every zi . We can force DC-IGN to learn
a disentangled representation by showing mini-batches with a set of inactive and active
transformations (e.g. face rotating, light sweeping in some direction etc). (b) During test,
data x can be passed through the encoder to get latents zi . Images can be re-rendered to
different viewpoints, lighting conditions, shape variations, etc by setting the appropriate
graphics code group (zi ), which is how one would manipulate an off-the-shelf 3D graphics
engine.
Recent work in inverse graphics [15, 12, 11] follows a general strategy of defining a probabilistic with latent parameters, then using an inference algorithm to find the most appropriate
set of latent parameters given the observations. Recently, Tieleman et al. [21] moved beyond
this two-stage pipeline by using a generic encoder network and a domain-specific decoder
network to approximate a 2D rendering function. However, none of these approaches have
been shown to automatically produce a semantically-interpretable graphics code and to learn
a 3D rendering engine to reproduce images.
In this paper, we present an approach which attempts to learn interpretable graphics codes
for complex transformations such as out-of-plane rotations and lighting variations. Given
a set of images, we use a hybrid encoder-decoder model to learn a representation that is
disentangled with respect to various transformations such as object out-of-plane rotations
and lighting variations. We employ a deep directed graphical model with many layers
of convolution and de-convolution operators that is trained using the Stochastic Gradient
Variational Bayes (SGVB) algorithm [10].
We propose a training procedure to encourage each group of neurons in the graphics code
layer to distinctly represent a specific transformation. To learn a disentangled representation, we train using data where each mini-batch has a set of active and inactive transformations, but we do not provide target values as in supervised learning; the objective function
remains reconstruction quality. For example, a nodding face would have the 3D elevation
transformation active but its shape, texture and other transformations would be inactive.
We exploit this type of training data to force chosen neurons in the graphics code layer to
specifically represent active transformations, thereby automatically creating a disentangled
representation. Given a single face image, our model can re-generate the input image with
a different pose and lighting. We present qualitative and quantitative results of the model?s
efficacy at learning a 3D rendering engine.
2
z=
corresponds to
z1
z2
z3
?
?
?L
z[4,n]
intrinsic properties (shape, texture, etc)
Figure 2: Structure of the representation vector. ? is the azimuth of the face, ? is the
elevation of the face with respect to the camera, and ?L is the azimuth of the light source.
2
Related Work
?1
?1
As mentioned previously, a?number
of generative models have been proposed in the litera1
L
ture to obtain abstract visual
representations. Unlike most RBM-based models [8, 19, 14],
our approach is trained using back-propagation with objective function consisting of data
reconstruction and the variational bound.
Relatively recently, Kingma et al. [10] proposed the SGVB algorithm to learn generative
models with continuous latent variables. In this work, a feed-forward neural network (encoder) is used to approximate the posterior distribution and a decoder network serves to
enable stochastic reconstruction of observations. In order to handle fine-grained geometry of
faces, we work with relatively large scale images (150 ? 150 pixels). Our approach extends
and applies the SGVB algorithm to jointly train and utilize many layers of convolution
and de-convolution operators for the encoder and decoder network respectively. The decoder network is a function that transform a compact graphics code ( 200 dimensions) to
a 150 ? 150 image. We propose using unpooling (nearest neighbor sampling) followed by
convolution to handle
massive increase in dimensionality with a manageabletonumber
of
fromthe
encoder
encoder
parameters.
Output
Backpropagation
Recently, [6] proposed using CNNs to generate images given object-specific parameters in
a supervised setting. As their approach requires ground-truth labels for the graphics code
layer, it cannot be directly applied to image interpretation tasks. Our work is similar to
1
z1 z2 z3
z1 z2 z3
ple in batch x
Ranzato et al. [18], whosez[4,n]
work was amongst the first to use
a generic encoder-decoderz[4,n]
architecture for feature learning. However, in comparison to our proposal their model was
trained layer-wise, the intermediate representations were not disentangled like a graphics
code, and their approach does not use the variational auto-encoder loss to approximate
thesignal
zero error
unique for each
posterior distribution. Our work 1is also similar in spirit to [20], but in comparison
our
for clamped outputs
same
as output
for x
xi in batch
model does not
assume
a Lambertian
reflectance model and implicitly constructs the 3D
representations. Another piece of related work is Desjardins et al. [5], who used a spike and
slab prior to factorize representations in a generative deep network.
les in batch xi
In comparison to existing approaches, it is important to note that our encoder network
produces the interpretable and disentangled representations necessary to learn a meaningful
z1 3Dz2graphics
z3
z[4,n] of inverse-graphics inspired zmethods
engine. A number
1 z2 z3 have recently beenz[4,n]
proposed in the literature [15]. However, most such methods rely on hand-crafted rendering
engines. The exception to this is work by Hinton et al. [9] and Tieleman [21] on transforming
autoencoders which use a domain-specific decoder to reconstruct input images. zero
Ourerror
worksignal
is similar in spirit to these works but has some key differences: (a) It uses a very generic
for clamped
convolutional architecture in the encoder and decoder networks to enable efficient
learningoutputs
on large datasets and image sizes; (b) it can handle single static frames as opposed to pair
of images required in [9]; and (c) it is generative.
3
Model
Caption: Training on a minibatch in which only ?, the azimuth angle of the face,
As shown in Figure 1, the basic structure of the Deep Convolutional Inverse Graphics Netchanges.
work (DC-IGN) consists of two parts: an encoder network which captures a distribution over
graphics
codes
Z given
datastep,
x and the
a decoder
network
a conditional
During
the
forward
output
from which
each learns
component
z_k distribution
!= z_1 of the
to produce an approximation x
? given Z. Z can be a disentangled representation containing
encoder
to be thez same
for each sample in the batch. This reflects the fact
a factored
setisofforced
latent variables
i ? Z such as pose, light and shape. This is important
that the generating variables of the image which correspond to the desired values of
these latents are unchanged throughout
the batch. By holding these outputs
3
constant throughout the batch, z_1 is forced to explain all the variance within the
Forward
Backward
Decoder
Decoder
?out1
outi = mean zki
out1 = z1
k ? batch
clamped
unclamped
z1 z2
grad1 = ?z1
z[4,n]
z3
Encoder
grad i = zki mean zki
k ? batch
Encoder
Figure 3: Training on a minibatch in which only ?, the azimuth angle of the
face, changes. During the forward step, the output from each component zi 6= z1 of the
encoder is altered to be the same for each sample in the batch. This reflects the fact that
the generating variables of the image (e.g. the identity of the face) which correspond to
the desired values of these latents are unchanged throughout the batch. By holding these
outputs constant throughout the batch, the single neuron z1 is forced to explain all the
variance within the batch, i.e. the full range of changes to the image caused by changing ?.
During the backward step z1 is the only neuron which receives a gradient signal from the
attempted reconstruction, and all zi 6= z1 receive a signal which nudges them to be closer
to their respective averages over the batch. During the complete training process, after this
batch, another batch is selected at random; it likewise contains variations of only one of
?, ?, ?L , intrinsic; all neurons which do not correspond to the selected latent are clamped;
and the training proceeds.
in learning a meaningful approximation of a 3D graphics engine and helps tease apart the
generalization capability of the model with respect to different types of transformations.
Let us denote the encoder output of DC-IGN to be ye = encoder(x). The encoder output
is used to parametrize the variational approximation Q(zi |ye ), where Q is chosen to be a
multivariate normal distribution. There are two reasons for using this parametrization: (1)
Gradients of samples with respect to parameters ? of Q can be easily obtained using the
reparametrization trick proposed in [10], and (2) Various statistical shape models trained
on 3D scanner data such as faces have the same multivariate normal latent distribution
[17]. Given that model parameters We connect ye and zi , the distribution parameters
? = (?zi , ?zi ) and latents Z can then be expressed as:
?z = We ye , ?z = diag(exp(We ye ))
?i, zi ? N (?zi , ?zi )
(1)
(2)
We present a novel training procedure which allows networks to be trained to have disentangled and interpretable representations.
3.1
Training with Specific Transformations
The main goal of this work is to learn a representation of the data which consists of disentangled and semantically interpretable latent variables. We would like only a small subset
of the latent variables to change for sequences of inputs corresponding to real-world events.
One natural choice of target representation for information about scenes is that already
designed for use in graphics engines. If we can deconstruct a face image by splitting it
into variables for pose, light, and shape, we can trivially represent the same transformations
that these variables are used for in graphics applications. Figure 2 depicts the representation
which we will attempt to learn.
With this goal in mind, we perform a training procedure which directly targets this definition
of disentanglement. We organize our data into mini-batches corresponding to changes in
only a single scene variable (azimuth angle, elevation angle, azimuth angle of the light
4
(a)
(b)
Figure 4: Manipulating light and elevation variables: Qualitative results showing the
generalization capability of the learned DC-IGN decoder to re-render a single input image
with different pose directions. (a) We change the latent zlight smoothly leaving all 199
other latents unchanged. (b) We change the latent zelevation smoothly leaving all 199 other
latents unchanged.
source); these are transformations which might occur in the real world. We will term these
the extrinsic variables, and they are represented by the components z1,2,3 of the encoding.
We also generate mini-batches in which the three extrinsic scene variables are held fixed
but all other properties of the face change. That is, these batches consist of many different
faces under the same viewing conditions and pose. These intrinsic properties of the model,
which describe identity, shape, expression, etc., are represented by the remainder of the
latent variables z[4,200] . These mini-batches varying intrinsic properties are interspersed
stochastically with those varying the extrinsic properties.
We train this representation using SGVB, but we make some key adjustments to the outputs
of the encoder and the gradients which train it. The procedure (Figure 3) is as follows.
1. Select at random a latent variable ztrain which we wish to correspond to one of
{azimuth angle, elevation angle, azimuth of light source, intrinsic properties}.
2. Select at random a mini-batch in which that only that variable changes.
3. Show the network each example in the minibatch and capture its latent representation for that example z k .
4. Calculate the average of those representation vectors over the entire batch.
5. Before putting the encoder?s output into the decoder, replace the values zi 6= ztrain
with their averages over the entire batch. These outputs are ?clamped?.
6. Calculate reconstruction error and backpropagate as per SGVB in the decoder.
7. Replace the gradients for the latents zi 6= ztrain (the clamped neurons) with their
difference from the mean (see Section 3.2). The gradient at ztrain is passed through
unchanged.
8. Continue backpropagation through the encoder using the modified gradient.
Since the intrinsic representation is much higher-dimensional than the extrinsic ones, it
requires more training. Accordingly we select the type of batch to use in a ratio of about
1:1:1:10, azimuth : elevation : lighting : intrinsic; we arrived at this ratio after extensive
testing, and it works well for both of our datasets.
This training procedure works to train both the encoder and decoder to represent certain
properties of the data in a specific neuron. By clamping the output of all but one of the
neurons, we force the decoder to recreate all the variation in that batch using only the
changes in that one neuron?s value. By clamping the gradients, we train the encoder to put
all the information about the variations in the batch into one output neuron.
This training method leads to networks whose latent variables have a strong equivariance
with the corresponding generating parameters, as shown in Figure 6. This allows the value
5
of the true generating parameter (e.g. the true angle of the face) to be trivially extracted
from the encoder.
3.2
Invariance Targeting
By training with only one transformation at a time, we are encouraging certain neurons to
contain specific information; this is equivariance. But we also wish to explicitly discourage
them from having other information; that is, we want them to be invariant to other transformations. Since our mini-batches of training data consist of only one transformation per
batch, then this goal corresponds to having all but one of the output neurons of the encoder
give the same output for every image in the batch.
To encourage this property of the DC-IGN, we train all the neurons which correspond to
the inactive transformations with an error gradient equal to their difference from the mean.
It is simplest to think about this gradient as acting on the set of subvectors zinactive from
the encoder for each input in the batch. Each of these zinactive ?s will be pointing to a
close-together but not identical point in a high-dimensional space; the invariance training
signal will push them all closer together. We don?t care where they are; the network can
represent the face shown in this batch however it likes. We only care that the network always
represents it as still being the same face, no matter which way it?s facing. This regularizing
force needs to be scaled to be much smaller than the true training signal, otherwise it can
overwhelm the reconstruction goal. Empirically, a factor of 1/100 works well.
4
Experiments
We trained our model on about 12,000 batches
of faces generated from a 3D face model obtained from Paysan et al. [17], where each batch
consists of 20 faces with random variations on
face identity variables (shape/texture), pose, or
lighting. We used the rmsprop [22] learning algorithm during training and set the meta learning
rate equal to 0.0005, the momentum decay to
0.1 and weight decay to 0.01.
To ensure that these techniques work on other
types of data, we also trained networks to perform reconstruction on images of widely varied
3D chairs from many perspectives derived from
the Pascal Visual Object Classes dataset as extracted by Aubry et al. [16, 1]. This task tests
the ability of the DC-IGN to learn a rendering
function for a dataset with high variation between the elements of the set; the chairs vary
from office chairs to wicker to modern designs,
and viewpoints span 360 degrees and two elevations. These networks were trained with the
same methods and parameters as the ones above.
4.1
Figure 5:
Manipulating azimuth
(pose) variables: Qualitative results
showing the generalization capability of
the learnt DC-IGN decoder to render
original static image with different azimuth (pose) directions. The latent neuron zazimuth is changed to random values
but all other latents are clamped.
3D Face Dataset
The decoder network learns an approximate rendering engine as shown in Figures (4,7).
Given a static test image, the encoder network produces the latents Z depicting scene
variables such as light, pose, shape etc. Similar to an off-the-shelf rendering engine, we
can independently control these to generate new images with the decoder. For example, as
shown in Figure 7, given the original test image, we can vary the lighting of an image by
keeping all the other latents constant and varying zlight . It is perhaps surprising that the
fully-trained decoder network is able to function as a 3D rendering engine.
6
(a)
(b)
(c)
Figure 6: Generalization of decoder to render images in novel viewpoints and
lighting conditions: We generated several datasets by varying light, azimuth and elevation, and tested the invariance properties of DC-IGN?s representation Z. We show quantitative performance on three network configurations as described in section 4.1. (a,b,c)
All DC-IGN encoder networks reasonably predicts transformations from static test images.
Interestingly, as seen in (a), the encoder network seems to have learnt a switch node to
separately process azimuth on left and right profile side of the face.
We also quantitatively illustrate the network?s ability to represent pose and light
on a smooth linear manifold as shown in Figure 6, which directly demonstrates
our training algorithm?s ability to disentangle complex transformations.
In these
plots, the inferred and ground-truth transformation values are plotted for a random
subset of the test set.
Interestingly, as shown in Figure 6(a), the encoder network?s representation of azimuth has a discontinuity at 0? (facing straight forward).
4.1.1
Comparison with Entangled Representations
To explore how much of a difference the DC-IGN training
procedure makes, we compare the novel-view reconstruction
performance of networks with entangled representations (baseline) versus disentangled representations (DC-IGN). The baseline network is identical in every way to the DC-IGN, but was
trained with SGVB without using our proposed training procedure. As in Figure 4, we feed each network a single input
image, then attempt to use the decoder to re-render this image
at different azimuth angles. To do this, we first must figure
out which latent of the entangled representation most closely
corresponds to the azimuth. This we do rather simply. First,
we encode all images in an azimuth-varied batch using the
baseline?s encoder. Then we calculate the variance of each of
the latents over this batch. The latent with the largest variance is then the one most closely associated with the azimuth
of the face, and we will call it zazimuth . Once that is found,
the latent zazimuth is varied for both the models to render a
novel view of the face given a single image of that face. Figure
7 shows that explicit disentanglement is critical for novel-view
reconstruction.
4.2
Figure 7: Entangled versus disentangled representations. First column:
Original images.
Second
column: transformed image
using DC-IGN. Third column: transformed image using normally-trained network.
Chair Dataset
We performed a similar set of experiments on the 3D chairs dataset described above. This
dataset contains still images rendered from 3D CAD models of 1357 different chairs, each
model skinned with the photographic texture of the real chair. Each of these models is
rendered in 60 different poses; at each of two elevations, there are 30 images taken from 360
degrees around the model. We used approximately 1200 of these chairs in the training set
and the remaining 150 in the test set; as such, the networks had never seen the chairs in the
test set from any angle, so the tests explore the networks? ability to generalize to arbitrary
7
(a)
(b)
Figure 8: Manipulating rotation: Each row was generated by encoding the input image
(leftmost) with the encoder, then changing the value of a single latent and putting this
modified encoding through the decoder. The network has never seen these chairs before at
any orientation. (a) Some positive examples. Note that the DC-IGN is making a conjecture
about any components of the chair it cannot see; in particular, it guesses that the chair in
the top row has arms, because it can?t see that it doesn?t. (b) Examples in which the
network extrapolates to new viewpoints less accurately.
chairs. We resized the images to 150 ? 150 pixels and made them grayscale to match our
face dataset.
We trained these networks with the azimuth (flat rotation) of the chair as a disentangled variable represented by a single node z1 ; all other variation between images is undifferentiated
and represented by z[2,200] . The DC-IGN network succeeded in achieving a mean-squared
error (MSE) of reconstruction of 2.7722 ? 10?4 on the test set. Each image has grayscale
values in the range [0, 1] and is 150 ? 150 pixels.
In Figure 8 we have included examples of the network?s ability to re-render previouslyunseen chairs at different angles given a single image. For some chairs it is able to render
fairly smooth transitions, showing the chair at many intermediate poses, while for others it
seems to only capture a sort of ?keyframes? representation, only having distinct outputs for
a few angles. Interestingly, the task of rotating a chair seen only from one angle requires
speculation about unseen components; the chair might have arms, or not; a curved seat or
a flat one; etc.
5
Discussion
We have shown that it is possible to train a deep convolutional inverse graphics network with
a fairly disentangled, interpretable graphics code layer representation from static images. By
utilizing a deep convolution and de-convolution architecture within a variational autoencoder
formulation, our model can be trained end-to-end using back-propagation on the stochastic
variational objective function [10]. We proposed a training procedure to force the network
to learn disentangled and interpretable representations. Using 3D face and chair analysis as
a working example, we have demonstrated the invariant and equivariant characteristics of
the learned representations.
Acknowledgements: We thank Thomas Vetter for access to the Basel face model. We are
grateful for support from the MIT Center for Brains, Minds, and Machines (CBMM). We
also thank Geoffrey Hinton and Ilker Yildrim for helpful feedback and discussions.
8
References
[1] M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar partbased 2d-3d alignment using a large dataset of cad models. In CVPR, 2014.
R in Machine Learning,
[2] Y. Bengio. Learning deep architectures for ai. Foundations and trends
2(1):1?127, 2009.
[3] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798?1828,
2013.
[4] T. Cohen and M. Welling. Learning the irreducible representations of commutative lie groups.
arXiv preprint arXiv:1402.4437, 2014.
[5] G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative
entangling. arXiv preprint arXiv:1210.5474, 2012.
[6] A. Dosovitskiy, J. Springenberg, and T. Brox. Learning to generate chairs with convolutional
neural networks. arXiv:1411.5928, 2015.
[7] I. Goodfellow, H. Lee, Q. V. Le, A. Saxe, and A. Y. Ng. Measuring invariances in deep
networks. In Advances in neural information processing systems, pages 646?654, 2009.
[8] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural
computation, 18(7):1527?1554, 2006.
[9] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In Artificial Neural
Networks and Machine Learning?ICANN 2011, pages 44?51. Springer, 2011.
[10] D. P. Kingma and M. Welling.
arXiv:1312.6114, 2013.
Auto-encoding variational bayes.
arXiv preprint
[11] T. D. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pages 4390?4399, 2015.
[12] T. D. Kulkarni, V. K. Mansinghka, P. Kohli, and J. B. Tenenbaum. Inverse graphics with
probabilistic cad models. arXiv preprint arXiv:1407.1339, 2014.
[13] Y. LeCun and Y. Bengio. Convolutional networks for images, speech, and time series. The
handbook of brain theory and neural networks, 3361, 1995.
[14] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual
International Conference on Machine Learning, pages 609?616. ACM, 2009.
[15] V. Mansinghka, T. D. Kulkarni, Y. N. Perov, and J. Tenenbaum. Approximate bayesian
image interpretation using generative probabilistic graphics programs. In Advances in Neural
Information Processing Systems, pages 1520?1528, 2013.
[16] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The
role of context for object detection and semantic segmentation in the wild. In IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2014.
[17] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and
illumination invariant face recognition. Genova, Italy, 2009. IEEE.
[18] M. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant
feature hierarchies with applications to object recognition. In Computer Vision and Pattern
Recognition, 2007. CVPR?07. IEEE Conference on, pages 1?8. IEEE, 2007.
[19] R. Salakhutdinov and G. E. Hinton. Deep boltzmann machines. In International Conference
on Artificial Intelligence and Statistics, pages 448?455, 2009.
[20] Y. Tang, R. Salakhutdinov, and G. Hinton.
arXiv:1206.6445, 2012.
Deep lambertian networks.
arXiv preprint
[21] T. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, University of
Toronto, 2014.
[22] T. Tieleman and G. Hinton. Lecture 6.5 - rmsprop, coursera: Neural networks for machine
learning. 2012.
[23] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
The Journal of Machine Learning Research, 11:3371?3408, 2010.
9
| 5851 |@word kohli:2 seems:2 open:1 out1:2 thereby:1 configuration:1 contains:2 efficacy:2 series:1 liu:1 interestingly:3 existing:1 com:1 z2:5 surprising:1 cad:3 must:1 unpooling:3 happen:1 shape:12 designed:2 interpretable:8 plot:1 generative:6 selected:2 guess:1 intelligence:2 accordingly:1 plane:2 parametrization:1 provides:1 node:2 location:1 toronto:1 qualitative:4 consists:6 wild:1 undifferentiated:1 equivariant:1 brain:2 inspired:1 salakhutdinov:2 automatically:2 little:1 encouraging:1 subvectors:1 proposing:1 transformation:24 quantitative:3 every:3 scaled:1 demonstrates:1 uk:1 control:2 normally:1 organize:1 before:2 positive:1 local:1 sgvb:7 pkohli:1 encoding:5 approximately:1 might:2 k:6 suggests:1 range:2 directed:1 unique:1 camera:1 lecun:2 testing:1 practice:1 backpropagation:2 procedure:9 vetter:2 seeing:1 get:1 cannot:2 targeting:1 close:1 operator:3 put:1 context:1 demonstrated:1 center:1 go:1 independently:1 splitting:1 factored:1 seat:1 utilizing:1 disentangled:17 handle:3 variation:13 target:3 hierarchy:1 massive:1 caption:1 programming:1 us:1 goodfellow:1 trick:1 element:1 trend:1 recognition:5 predicts:1 observed:1 role:1 preprint:5 wang:1 capture:3 nudge:1 calculate:3 coursera:1 ranzato:2 russell:1 mentioned:1 transforming:2 rmsprop:2 trained:14 grateful:1 yuille:1 compactly:1 easily:2 various:3 represented:5 train:8 stacked:1 forced:2 distinct:1 describe:1 fast:1 artificial:2 whose:1 emerged:1 encoded:1 widely:1 cvpr:3 reconstruct:1 otherwise:1 encoder:37 ability:5 statistic:1 unseen:1 think:1 jointly:1 transform:1 sequence:3 net:1 propose:4 reconstruction:10 coming:1 remainder:1 interpretably:1 description:1 moved:1 produce:6 generating:4 object:9 help:1 illustrate:1 maturana:1 pose:18 exemplar:1 nearest:3 mansinghka:3 strong:1 larochelle:1 direction:3 closely:2 cnns:2 stochastic:5 filter:6 saxe:1 viewing:2 enable:2 generalization:4 elevation:9 disentanglement:3 scanner:1 around:1 considered:1 ground:2 normal:2 exp:1 cbmm:1 slab:1 pointing:1 desjardins:2 vary:2 efros:1 label:1 largest:1 successfully:1 reflects:2 mit:4 always:1 aim:1 modified:2 rather:1 shelf:2 resized:1 varying:4 office:1 encode:1 derived:1 unclamped:1 skinned:1 baseline:3 helpful:1 inference:1 abstraction:1 typically:1 entire:2 manipulating:3 reproduce:1 transformed:2 pixel:3 orientation:1 pascal:1 breakthrough:1 fairly:2 brox:1 equal:2 construct:1 once:1 having:3 never:2 sampling:1 ng:2 identical:3 represents:1 unsupervised:2 others:1 quantitatively:1 dosovitskiy:1 few:2 irreducible:2 employ:1 modern:1 composed:1 zki:3 geometry:1 consisting:1 william:1 microsoft:2 attempt:3 detection:1 alignment:1 light:13 held:1 paysan:2 encourage:3 closer:2 necessary:1 succeeded:1 respective:1 rotating:2 re:5 desired:2 plotted:1 theoretical:1 column:3 whitney:1 measuring:1 perov:1 subset:2 latents:12 krizhevsky:1 azimuth:19 graphic:30 osindero:1 encoders:2 connect:1 learnt:2 cho:1 international:2 probabilistic:4 off:2 lee:3 together:2 thesis:1 squared:1 opposed:1 containing:1 huang:1 stochastically:1 creating:1 de:5 matter:1 caused:1 explicitly:1 piece:1 performed:1 view:3 bayes:3 sort:1 capability:3 reparametrization:1 convolutional:9 variance:4 who:1 likewise:1 characteristic:1 correspond:5 conceptually:1 generalize:1 bayesian:1 vincent:2 accurately:1 none:1 lighting:11 straight:1 explain:2 romdhani:1 definition:1 associated:1 rbm:1 deconstruct:1 static:5 propagated:1 dataset:8 aubry:2 massachusetts:1 dimensionality:1 segmentation:1 back:3 feed:2 higher:1 supervised:2 follow:1 formulation:1 done:1 stage:1 
autoencoders:2 hand:1 receives:1 working:1 propagation:2 minibatch:3 quality:1 perhaps:1 usa:1 ye:5 contain:1 true:3 fidler:1 jbt:1 semantic:1 during:7 criterion:2 leftmost:1 arrived:1 complete:1 image:51 variational:10 wise:1 novel:5 recently:4 regularizing:1 common:1 rotation:5 empirically:1 keyframes:1 cohen:2 interspersed:1 interpretation:2 cambridge:2 ai:1 trivially:2 language:1 had:1 access:1 maxpooling:1 renderer:1 align:1 etc:6 disentangle:1 posterior:3 multivariate:2 recent:1 perspective:2 italy:1 optimizing:1 apart:1 certain:2 meta:1 continue:1 joshua:1 mottaghi:1 seen:4 care:2 paradigm:1 signal:4 multiple:2 full:1 photographic:1 smooth:2 match:1 ign:17 equally:1 manipulate:1 desideratum:1 basic:1 scalable:1 vision:4 arxiv:11 represent:9 kernel:1 proposal:1 receive:1 want:1 fine:2 separately:1 entangled:4 source:3 leaving:2 unlike:1 pooling:1 spirit:2 call:1 ideal:1 intermediate:2 bengio:7 ture:1 rendering:11 switch:1 zi:20 architecture:6 grad:1 inactive:4 recreate:1 expression:1 passed:3 render:7 speech:1 deep:16 useful:1 listed:1 tenenbaum:3 simplest:1 generate:7 extrinsic:4 per:2 group:3 key:2 putting:2 achieving:1 changing:2 utilize:1 backward:2 inverse:9 angle:13 springenberg:1 extends:1 throughout:4 genova:1 layer:12 bound:1 followed:3 courville:2 annual:1 extrapolates:1 occur:1 scene:9 software:1 flat:2 chair:23 span:1 rendered:3 relatively:3 conjecture:1 smaller:1 increasingly:1 making:1 restricted:1 invariant:4 pipeline:1 taken:1 remains:1 previously:1 overwhelm:1 mind:2 serf:1 end:2 parametrize:1 hierarchical:2 lambertian:2 generic:3 appropriate:2 batch:36 original:3 thomas:1 top:1 remaining:1 ensure:1 graphical:1 exploit:1 reflectance:1 unchanged:5 objective:3 question:1 already:1 spike:1 strategy:1 gradient:13 amongst:1 thank:2 upsampling:1 decoder:25 lajoie:1 manifold:1 urtasun:1 reason:1 code:16 mini:7 z3:6 ratio:2 manzagol:1 disentangling:1 holding:2 partbased:1 design:1 boltzmann:2 basel:1 contributed:1 perform:2 teh:1 convolution:14 neuron:15 observation:2 datasets:3 descent:1 curved:1 defining:1 hinton:7 dc:17 frame:1 varied:4 sweeping:1 arbitrary:1 inferred:1 pair:1 required:1 kl:1 extensive:1 z1:13 speculation:1 sivic:1 engine:11 learned:2 kingma:2 discontinuity:1 able:3 beyond:1 proceeds:1 pattern:4 perception:1 program:1 including:1 interpretability:1 belief:2 event:1 critical:1 natural:1 force:5 hybrid:1 rely:1 arm:2 altered:1 technology:1 picture:1 auto:4 autoencoder:2 ilker:1 prior:1 literature:1 acknowledgement:1 review:1 loss:1 fully:1 lecture:1 x150:1 facing:2 remarkable:1 versus:2 geoffrey:1 foundation:1 degree:2 consistent:1 viewpoint:4 row:2 changed:1 keeping:1 tease:1 side:1 allow:1 institute:1 neighbor:3 face:31 characterizing:1 sparse:1 distinctly:1 feedback:1 depth:1 dimension:1 world:4 transition:1 doesn:1 author:1 forward:5 made:1 ple:1 pushmeet:1 alphabetically:1 transaction:1 welling:2 approximate:5 compact:2 ranganath:1 implicitly:1 active:4 handbook:1 xi:2 factorize:1 don:1 grayscale:2 continuous:1 latent:20 learn:14 reasonably:1 depicting:1 mse:1 complex:2 discourage:1 domain:2 diag:1 equivariance:2 icann:1 main:1 profile:1 crafted:1 depicts:1 grosse:1 momentum:1 wish:2 explicit:1 lie:1 clamped:7 entangling:1 third:1 learns:2 grained:2 tang:1 specific:8 showing:4 decay:2 intrinsic:7 consist:2 texture:6 phd:1 illumination:1 push:1 commutative:1 clamping:2 boureau:1 chen:1 amberg:1 backpropagate:1 smoothly:2 knothe:1 led:1 simply:1 likely:1 explore:2 visual:3 conveniently:1 expressed:1 adjustment:1 applies:1 springer:1 corresponds:3 
tieleman:4 truth:2 extracted:2 acm:1 tejas:1 conditional:1 identity:3 goal:4 replace:2 change:11 included:1 specifically:1 semantically:2 acting:1 denoising:2 invariance:6 attempted:1 meaningful:2 exception:1 select:3 support:1 kulkarni:4 tested:1 |
5,360 | 5,852 | Learning to Segment Object Candidates
Pedro O. Pinheiro?
Ronan Collobert
Piotr Doll?ar
[email protected]
[email protected]
[email protected]
Facebook AI Research
Abstract
Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals
is then passed to an object classifier. Such approaches have been shown they can
be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach
based on a discriminative convolutional network. Our model is trained jointly
with two objectives: given an image patch, the first part of the system outputs a
class-agnostic segmentation mask, while the second part of the system outputs the
likelihood of the patch being centered on a full object. At test time, the model
is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood
score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches,
our model obtains substantially higher object recall using fewer proposals. We
also show that our model is able to generalize to unseen categories it has not seen
during training. Unlike all previous approaches for generating object masks, we
do not rely on edges, superpixels, or any other form of low-level segmentation.
1
Introduction
Object detection is one of the most foundational tasks in computer vision [21]. Until recently, the
dominant paradigm in object detection was the sliding window framework: a classifier is applied at
every object location and scale [4, 8, 32]. More recently, Girshick et al. [10] proposed a two-phase
approach. First, a rich set of object proposals (i.e., a set of image regions which are likely to contain
an object) is generated using a fast (but possibly imprecise) algorithm. Second, a convolutional
neural network classifier is applied on each of the proposals. This approach provides a notable gain
in object detection accuracy compared to classic sliding window approaches. Since then, most stateof-the-art object detectors for both the PASCAL VOC [7] and ImageNet [5] datasets rely on object
proposals as a first preprocessing step [10, 15, 33].
Object proposal algorithms aim to find diverse regions in an image which are likely to contain
objects. For efficiency and detection performance reasons, an ideal proposal method should possess
three key characteristics: (i) high recall (i.e., the proposed regions should contain the maximum
number of possible objects), (ii) the high recall should be achieved with the minimum number of
regions possible, and (iii) the proposed regions should match the objects as accurately as possible.
In this paper, we present an object proposal algorithm based on Convolutional Networks (ConvNets) [20] that satisfies these constraints better than existing approaches. ConvNets are an important class of algorithms which have been shown to be state of the art in many large scale object
recognition tasks. They can be seen as a hierarchy of trainable filters, interleaved with non-linearities
?
Pedro O. Pinheiro is with the Idiap Research Institute in Martigny, Switzerland and Ecole Polytechnique
F?ed?erale de Lausanne (EPFL) in Lausanne, Switzerland. This work was done during an internship at FAIR.
1
and pooling. ConvNets saw a resurgence after Krizhevsky et al. [18] demonstrated that they perform very well on the ImageNet classification benchmark. Moreover, these models learn sufficiently
general image features, which can be transferred to many different tasks [10, 11, 3, 22, 23].
Given an input image patch, our algorithm generates a class-agnostic mask and an associated score
which estimates the likelihood of the patch fully containing a centered object (without any notion
of an object category). The core of our model is a ConvNet which jointly predicts the mask and the
object score. A large part of the network is shared between those two tasks: only the last few network
layers are specialized for separately outputting a mask and score prediction. The model is trained by
optimizing a cost function that targets both tasks simultaneously. We train on MS COCO [21] and
evaluate the model on two object detection datasets, PASCAL VOC [7] and MS COCO.
By leveraging powerful ConvNet feature representations trained on ImageNet and adapted on the
large amount of segmented training data available in COCO, we are able to beat the state of the art
in object proposals generation under multiple scenarios. Our most notable achievement is that our
approach beats other methods by a large margin while considering a smaller number of proposals.
Moreover, we demonstrate the generalization capabilities of our model by testing it on object categories not seen during training. Finally, unlike all previous approaches for generating segmentation
proposals, we do not rely on edges, superpixels, or any other form of low-level segmentation. Our
approach is the first to learn to generate segmentation proposals directly from raw image data.
The paper is organized as follows: ?2 presents related work, ?3 describes our architecture choices,
and ?4 describes our experiments in different datasets. We conclude in ?5.
2
Related Work
In recent years, ConvNets have been widely used in the context of object recognition. Notable
systems are AlexNet [18] and more recently GoogLeNet [29] and VGG [27], which perform exceptionally well on ImageNet. In the setting of object detection, Girshick et al. [10] proposed
R-CNN, a ConvNet-based model that beats by a large margin models relying on hand-designed
features. Their approach can be divided into two steps: selection of a set of salient object proposals [31], followed by a ConvNet classifier [18, 27]. Currently, most state-of-the-art object detection
approaches [30, 12, 9, 25] rely on this pipeline. Although they are slightly different in the classification step, they all share the first step, which consist of choosing a rich set of object proposals.
Most object proposal approaches leverage low-level grouping and saliency cues. These approaches
usually fall into three categories: (1) objectness scoring [1, 34], in which proposals are extracted by
measuring the objectness score of bounding boxes, (2) seed segmentation [14, 16, 17], where models
start with multiple seed regions and generate separate foreground-background segmentation for each
seed, and (3) superpixel merging [31, 24], where multiple over-segmentations are merged according
to various heuristics. These models vary in terms of the type of proposal generated (bounding boxes
or segmentation masks) and if the proposals are ranked or not. For a more complete survey of object
proposal methods, we recommend the recent survey from Hosang et al. [13].
Although our model shares high level similarities with these approaches (we generate a set of ranked
segmentation proposals), these results are achieved quite differently. All previous approaches for
generating segmentation masks, including [17] which has a learning component, rely on low-level
segmentations such as superpixels or edges. Instead, we propose a data-driven discriminative approach based on a deep-network architecture to obtain our segmentation proposals.
Most closely related to our approach, Multibox [6, 30] proposed to train a ConvNet model to generate bounding box object proposals. Their approach, similar to ours, generates a set of ranked
class-agnostic proposals. However, our model generates segmentation proposals instead of the less
informative bounding box proposals. Moreover, the model architectures, training scheme, etc., are
quite different between our approach and [30]. More recently, Deepbox [19] proposed a ConvNet
model that learns to rerank proposals generated by EdgeBox, a bottom-up method for bounding box
proposals. This system shares some similarities to our scoring network. Our model, however, is
able to generate the proposals and rank them in one shot from the test image, directly from the pixel
space. Finally, concurrently with this work, Ren et al. [25] proposed ?region proposal networks? for
generating box proposals that shares similarities with our work. We emphasize, however, that unlike
all these approaches our method generates segmentation masks instead of bounding boxes.
2
1x1#
conv#
56x56#
512x14x14# 512x1x1#
VGG#
2x2#
pool#
#
512x14x14#
x:#3x224x224#
fsegm(x):#224x224#
fscore(x):#1x1#
512x7x7#
512x1x1#
1024x1x1#
Figure 1: (Top) Model architecture: the network is split into two branches after the shared feature
extraction layers. The top branch predicts a segmentation mask for the the object located at the
center while the bottom branch predicts an object score for the input patch. (Bottom) Examples
of training triplets: input patch x, mask m and label y. Green patches contain objects that satisfy
the specified constraints and therefore are assigned the label y = 1. Note that masks for negative
examples (shown in red) are not used and are shown for illustrative purposes only.
3
DeepMask Proposals
Our object proposal method predicts a segmentation mask given an input patch, and assigns a score
corresponding to how likely the patch is to contain an object.
Both mask and score predictions are achieved with a single convolutional network. ConvNets are
flexible models which can be applied to various computer vision tasks and they alleviate the need
for manually designed features. Their flexible nature allows us to design a model in which the two
tasks (mask and score predictions) can share most of the layers of the network. Only the last layers
are task-specific (see Figure 1). During training, the two tasks are learned jointly. Compared to a
model which would have two distinct networks for the two tasks, this architecture choice reduces
the capacity of the model and increases the speed of full scene inference at test time.
Each sample k in the training set is a triplet containing (1) the RGB input patch xk , (2) the binary
mask corresponding to the input patch mk (with mij
k ? {?1}, where (i, j) corresponds to a pixel
location on the input patch) and (3) a label yk ? {?1} which specifies whether the patch contains
an object. Specifically, a patch xk is given label yk = 1 if it satisfies the following constraints:
(i) the patch contains an object roughly centered in the input patch
(ii) the object is fully contained in the patch and in a given scale range
Otherwise, yk = ?1, even if an object is partially present. The positional and scale tolerance used in
our experiments are given shortly. Assuming yk = 1, the ground truth mask mk has positive values
only for the pixels that are part of the single object located in the center of the patch. If yk = ?1 the
mask is not used. Figure 1, bottom, shows examples of training triplets.
Figure 1, top, illustrates an overall view of our model, which we call DeepMask. The top branch is
responsible for predicting a high quality object segmentation mask and the bottom branch predicts
the likelihood that an object is present and satisfies the above two constraints. We next describe in
detail each part of the architecture, the training procedure, and the fast inference procedure.
3.1
Network Architecture
The parameters for the layers shared between the mask prediction and the object score prediction are
initialized with a network that was pre-trained to perform classification on the ImageNet dataset [5].
This model is then fine-tuned for generating object proposals during training. We choose the VGGA architecture [27] which consists of eight 3 ? 3 convolutional layers (followed by ReLU nonlinearities) and five 2 ? 2 max-pooling layers and has shown excellent performance.
3
As we are interested in inferring segmentation masks, the spatial information provided in the convolutional feature maps is important. We therefore remove all the final fully connected layers of the
VGG-A model. Additionally we also discard the last max-pooling layer. The output of the shared
layers has a downsampling factor of 16 due to the remaining four 2 ? 2 max-pooling layers; given
h
w
an input image of dimension 3 ? h ? w, the output is a feature map of dimensions 512 ? 16
? 16
.
Segmentation: The branch of the network dedicated to segmentation is composed of a single 1 ? 1
convolution layer (and ReLU non-linearity) followed by a classification layer. The classification
layer consists of h?w pixel classifiers, each responsible for indicating whether a given pixel belongs
to the object in the center of the patch. Note that each pixel classifier in the output plane must be
able to utilize information contained in the entire feature map, and thus have a complete view of the
object. This is critical because unlike in semantic segmentation, our network must output a mask for
a single object even when multiple objects are present (e.g., see the elephants in Fig. 1).
For the classification layer one could use either locally or fully connected pixel classifiers. Both
options have drawbacks: in the former each classifier has only a partial view of the object while
in the latter the classifiers have a massive number of redundant parameters. Instead, we opt to
decompose the classification layer into two linear layers with no non-linearity in between. This
can be viewed as a ?low-rank? variant of using fully connected linear classifiers. Such an approach
massively reduces the number of network parameters while allowing each pixel classifier to leverage
information from the entire feature map. Its effectiveness is shown in the experiments. Finally, to
further reduce model capacity, we set the output of the classification layer to be ho ?wo with ho < h
and wo < w and upsample the output to h ? w to match the input dimensions.
Scoring: The second branch of the network is dedicated to predicting if an image patch satisfies
constraints (i) and (ii): that is if an object is centered in the patch and at the appropriate scale. It is
composed of a 2 ? 2 max-pooling layer, followed by two fully connected (plus ReLU non-linearity)
layers. The final output is a single ?objectness? score indicating the presence of an object in the
center of the input patch (and at the appropriate scale).
3.2 Joint Learning
Given an input patch xk ? I, the model is trained to jointly infer a pixel-wise segmentation mask and
an object score. The loss function is a sum of binary logistic regression losses, one for each location
of the segmentation network and one for the object score, over all training triplets (xk , mk , yk ):
X
X
ij
1+yk
?mij
k fsegm (xk ) ) + ? log(1 + e?yk fscore (xk ) )
log(1
+
e
(1)
L(?) =
2wo ho
k
ij
ij
Here ? is the set of parameters, fsegm
(xk ) is the prediction of the segmentation network at location
(i, j), and fscore (xk ) is the predicted object score. We alternate between backpropagating through
1
). For the scoring branch, the data is
the segmentation branch and scoring branch (and set ? = 32
sampled such that the model is trained with an equal number of positive and negative samples.
Note that the factor multiplying the first term of Equation 1 implies that we only backpropagate the
error over the segmentation branch if yk = 1. An alternative would be to train the segmentation
branch using negatives as well (setting mij
k = 0 for all pixels if yk = 0). However, we found that
training with positives only was critical for generalizing beyond the object categories seen during
training and for achieving high object recall. This way, during inference the network attempts to
generate a segmentation mask at every patch, even if no known object is present.
3.3 Full Scene Inference
During full image inference, we apply the model densely at multiple locations and scales. This
is necessary so that for each object in the image we test at least one patch that fully contains the
object (roughly centered and at the appropriate scale), satisfying the two assumptions made during
training. This procedure gives a segmentation mask and object score at each image location. Figure 2
illustrates the segmentation output when the model is applied densely to an image at a single scale.
The full image inference procedure is efficient since all computations can be computed convolutionally. The VGG features can be computed densely in a fraction of a second given a typical input
image. For the segmentation branch, the last fully connected layer can be computed via convolutions applied to the VGG features. The scores are likewise computed by convolutions on the VGG
features followed by two 1 ? 1 convolutional layers. Exact runtimes are given in ?4.
4
Figure 2: Output of segmentation model applied densely to a full image with a 16 pixel stride (at a
single scale at the central horizontal image region). Multiple locations give rise to good masks for
each of the three monkeys (scores not shown). Note that no monkeys appeared in our training set.
Finally, note that the scoring branch of the network has a downsampling factor 2? larger than the
segmentation branch due to the additional max-pooling layer. Given an input test image of size
t
t
ht
wt
ht ? wt , the segmentation and object network generate outputs of dimension h16 ? w
16 and 32 ? 32 ,
respectively. In order to achieve a one-to-one mapping between the mask prediction and object
score, we apply the interleaving trick right before the last max-pooling layer for the scoring branch
to double its output resolution (we use exactly the implementation described in [26]).
3.4
Implementation Details
During training, an input patch xk is considered to contain a ?canonical? positive example if an object
is precisely centered in the patch and has maximal dimension equal to exactly 128 pixels. However,
having some tolerance in the position of an object within a patch is critical as during full image
inference most objects will be observed slightly offset from their canonical position. Therefore,
during training, we randomly jitter each ?canonical? positive example to increase the robustness of
our model. Specifically, we consider translation shift (of ?16 pixels), scale deformation (of 2?1/4 ),
and also horizontal flip. In all cases we apply the same transformation to both the image patch xk
and the ground truth mask mk and assign the example a positive label yk = 1. Negative examples
(yk = ?1) are any patches at least ?32 pixels or 2?1 in scale from any canonical positive example.
During full image inference we apply the model densely at multiple locations (with a stride of 16
pixels) and scales (scales 2?2 to 21 with a step of 21/2 ). This ensures that there is at least one tested
image patch that fully contains each object in the image (within the tolerances used during training).
As in the original VGG-A network [27], our model is fed with RGB input patches of dimension
3 ? 224 ? 224. Since we removed the fifth pooling layer, the common branch outputs a feature map
of dimensions 512 ? 14 ? 14. The score branch of our network is composed of 2 ? 2 max pooling
followed by two fully connected layers (with 512 and 1024 hidden units, respectively). Both of these
layers are followed by ReLU non-linearity and a dropout [28] procedure with a rate of 0.5. A final
linear layer then generates the object score.
The segmentation branch begins with a single 1 ? 1 convolutional layer with 512 units. This feature
map is then fully connected to a low dimensional output of size 512, which is further fully connected
to each pixel classifier to generate an output of dimension 56 ? 56. As discussed, there is no nonlinearity between these two layers. In total, our model contains around 75M parameters.
A final bilinear upsampling layer is added to transform the 56 ? 56 output prediction to the full
224 ? 224 resolution of the ground-truth (directly predicting the full resolution output would have
been much slower). We opted for a non-trainable layer as we observed that a trainable one simply
learned to bilinearly upsample. Alternatively, we tried downsampling the ground-truth instead of
upsampling the network output; however, we found that doing so slightly reduced accuracy.
Design architecture and hyper-parameters were chosen using a subset of the MS COCO validation
data [21] (non-overlapping with the data we used for evaluation). We considered a learning rate
of .001. We trained our model using stochastic gradient descent with a batch size of 32 examples,
momentum of .9, and weight decay of .00005. Aside from the pre-trained VGG features, weights
are initialized randomly from a uniform distribution. Our model takes around 5 days to train on a
Nvidia Tesla K40m. To binarize predicted masks we simply threshold the continuous output (using
a threshold of .1 for PASCAL and .2 for COCO). All the experiments were conducted using Torch71 .
1
http://torch.ch
5
Figure 3: DeepMask proposals with highest IoU to the ground truth on selected images from COCO.
Missed objects (no matching proposals with IoU > 0.5) are marked with a red outline.
4
Experimental Results
In this section, we evaluate the performance of our approach on the PASCAL VOC 2007 test set [7]
and on the first 5000 images of the MS COCO 2014 validation set [21]. Our model is trained on
the COCO training set which contains about 80,000 images and a total of nearly 500,000 segmented
objects. Although our model is trained to generate segmentation proposals, it can also be used to
provide box proposals by taking the bounding boxes enclosing the segmentation masks. Figure 3
shows examples of generated proposals with highest IoU to the ground truth on COCO.
Metrics: We measure accuracy using the common Intersection over Union (IoU) metric. IoU is the
intersection of a candidate proposal and ground-truth annotation divided by the area of their union.
This metric can be applied to both segmentation and box proposals. Following Hosang et al. [13],
we evaluate the performance of the proposal methods considering the average recall (AR) between
IoU 0.5 and 1.0 for a fixed number of proposals. AR has been shown to correlate extremely well
with detector performance (recall at a single IoU threshold is far less predictive) [13].
Methods: We compare to the current top-five publicly-available proposal methods including: EdgeBoxes [34], SelectiveSearch [31], Geodesic [16], Rigor [14], and MCG [24]. These methods achieve
top results on object detection (when coupled with R-CNNs [10]) and also obtain the best AR [13].
Results: Figure 4 (a-c) compares the performance of our approach, DeepMask, to existing proposal
methods on PASCAL (using boxes) and COCO (using both boxes and segmentations). Shown is the
AR of each method as a function of the number of generated proposals. Under all scenarios DeepMask (and its variants) achieves substantially better AR for all numbers of proposals considered. AR
at selected proposal counts and averaged across all counts (AUC) is reported in Tables 1 and 2 for
COCO and PASCAL, respectively. Notably, DeepMask achieves an order of magnitude reduction
in the number of proposals necessary to reach a given AR under most scenarios. For example, with
100 segmentation proposals DeepMask achieves an AR of .245 on COCO while competing methods
require nearly 1000 segmentation proposals to achieve similar AR.
6
0.6
DeepMask
MCG
SelectiveSearch
Rigor
Geodesic
EdgeBoxes
0.4
0.3
0.2
0.4
0.5
0.3
0.2
0.1
0.1
0
10 0
10
1
10
2
10
10
1
# proposals
0.3
0.2
0
10 0
10 1
10 2
0.4
0.5
0.3
0.2
0.4
(d) Small objects (area< 322 ).
10 1
10 2
0.3
0.2
0
10 0
10 3
0.5
recall
0.4
0.3
0.5
0.4
0.3
0.4
0.3
0.2
0.2
0.1
0.1
0.1
0.9
1
0
0.5
0.6
0.7
0.8
IoU
0.9
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.6
0.2
0.8
1
0
0.5
0.6
0.7
IoU
(g) Recall with 10 proposals.
10 3
0.7
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.6
recall
0.5
10 2
(f) Large objects (area> 962 ).
0.7
0.7
10 1
# proposals
(e) Medium objects.
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.6
0.6
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
# proposals
0.7
10 3
0.1
0
10 0
10 3
10 2
0.6
# proposals
0
0.5
10 1
(c) Segm. proposals on COCO.
0.1
0
10 0
0.2
# proposals
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.5
average recall
average recall
10
3
0.6
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.1
recall
10
2
(b) Box proposals on COCO.
0.6
0.4
0.3
# proposals
(a) Box proposals on PASCAL.
0.5
0.4
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
0.1
0
10 0
3
average recall
average recall
0.5
0.6
DeepMask
DeepMaskZoom
MCG
SelectiveSearch
Rigor
Geodesic
EdgeBoxes
0.5
average recall
0.6
average recall
0.7
0.8
0.9
1
IoU
(h) Recall with 100 proposals.
(i) Recall with 1000 proposals.
Figure 4: (a-c) Average recall versus number of box and segmentation proposals on various datasets.
(d-f) AR versus number of proposals for different object scales on segmentation proposals in COCO.
(g-h) Recall versus IoU threshold for different number of segmentation proposals in COCO.
Box Proposals
Segmentation Proposals
AR@10
AR@100
AR@1000
AUC
AR@10
AR@100
AR@1000
AUCS
AUCM
AUCL
AUC
EdgeBoxes [34]
Geodesic [16]
Rigor [14]
SelectiveSearch [31]
MCG [24]
.074
.040
.052
.101
.178
.180
.133
.163
.246
.338
.359
.337
.357
.398
.139
.126
.101
.126
.180
.023
.025
.077
.123
.094
.095
.186
.253
.253
.230
.299
.013
.022
.006
.031
.086
.060
.055
.129
.205
.178
.214
.324
.085
.074
.074
.137
DeepMask20
DeepMask20?
DeepMaskZoom
DeepMaskFull
.139
.152
.150
.149
.286
.306
.326
.310
.431
.432
.482
.442
.217
.228
.242
.231
.109
.123
.127
.118
.215
.233
.261
.235
.314
.314
.366
.323
.020
.020
.068
.020
.227
.257
.263
.244
.317
.321
.308
.342
.164
.175
.194
.176
DeepMask
.153
.313
.446
.233
.126
.245
.331
.023
.266
.336
.183
Table 1: Results on the MS COCO dataset for both bounding box and segmentation proposals. We
report AR at different number of proposals (10, 100 and 1000) and also AUC (AR averaged across
all proposal counts). For segmentation proposals we report overall AUC and also AUC at different
scales (small/medium/large objects indicated by superscripts S/M/L). See text for details.
Scale: The COCO dataset contains objects in a wide range of scales. In order to analyze performance
in more detail, we divided the objects in the validation set into roughly equally sized sets according
to object pixel area a: small (a < 322 ), medium (322 ? a ? 962 ), and large (a > 962 ) objects.
Figure 4 (d-f) shows performance at each scale; all models perform poorly on small objects. To
improve accuracy of DeepMask we apply it at an additional smaller scale (DeepMaskZoom). This
boosts performance (especially for small objects) but at a cost of increased inference time.
7
PASCAL VOC07
EdgeBoxes [34]
Geodesic [16]
Rigor [14]
SelectiveSearch [31]
MCG [24]
DeepMask
AR@10
.203
.121
.164
.085
.232
AR@100
.407
.364
.321
.347
.462
AR@1000
.601
.596
.589
.618
.634
AUC
.309
.230
.239
.241
.344
.337
.561
.690
.433
Table 2: Results on PASCAL VOC 2007 test.
Figure 5: Fast R-CNN results on PASCAL.
Localization: Figure 4 (g-i) shows the recall each model achieves as the IoU varies, shown for
different number of proposals per image. DeepMask achieves a higher recall in virtually every
scenario, except at very high IoU, in which it falls slightly below other models. This is likely due
to the fact that our method outputs a downsampled version of the mask at each location and scale; a
multiscale approach or skip connections could improve localization at very high IoU.
Generalization: To see if our approach can generalize to unseen classes [2, 19], we train two additional versions of our model, DeepMask20 and DeepMask20? . DeepMask20 is trained only with
objects belonging to one of the 20 PASCAL categories (subset of the full 80 COCO categories).
DeepMask20? is similar, except we use the scoring network from the original DeepMask. Results
for the two models when evaluated on all 80 COCO categories (as in all other experiments) are
shown in Table 1. Compared to DeepMask, DeepMask20 exhibits a drop in AR (but still outperforms all previous methods). DeepMask20? , however, matches the performance of DeepMask. This
surprising result demonstrates that the drop in accuracy is due to the discriminatively trained scoring
branch (DeepMask20 is inadvertently trained to assign low scores to the other 60 categories); the
segmentation branch generalizes extremely well even when trained on a reduced set of categories.
Architecture: In the segmentation branch, the convolutional features are fully connected to a 512
?low-rank? layer which is in turn connected to the 56?56 output (with no intermediate non-linearity),
see ?3. We also experimented with a ?full-rank? architecture (DeepMaskFull) with over 300M parameters where each of the 56 ? 56 outputs was directly connected to the convolutional features. As
can be seen in Table 1, DeepMaskFull is slightly inferior to our final model (and much slower).
Detection: As a final validation, we evaluate how DeepMask performs when coupled with an object
detector on PASCAL VOC 2007 test. We re-train and evaluate the state-of-the-art Fast R-CNN [9]
using proposals generated by SelectiveSearch [31] and our method. Figure 5 shows the mean average
precision (mAP) for Fast R-CNN with varying number of proposals. Most notably, with just 100
DeepMask proposals Fast R-CNN achieves mAP of 68.2% and outperforms the best results obtained
with 2000 SelectiveSearch proposals (mAP of 66.9%). We emphasize that with 20? fewer proposals
DeepMask outperforms SelectiveSearch (this is consistent with the AR numbers in Table 1). With
500 DeepMask proposals, Fast R-CNN improves to 69.9% mAP, after which performance begins to
degrade (a similar effect was observed in [9]).
Speed: Inference takes an average of 1.6s per image in the COCO dataset (1.2s on the smaller
PASCAL images). Our runtime is competitive with the fastest segmentation proposal methods
(Geodesic [16] runs at ?1s per PASCAL image) and substantially faster than most (e.g., MCG [24]
takes ?30s). Inference time can further be dropped by ?30% by parallelizing all scales in a single
batch (eliminating GPU overhead). We do, however, require use of a GPU for efficient inference.
5
Conclusion
In this paper, we propose an innovative framework to generate segmentation object proposals directly from image pixels. At test time, the model is applied densely over the entire image at multiple
scales and generates a set of ranked segmentation proposals. We show that learning features for
object proposal generation is not only feasible but effective. Our approach surpasses the previous
state of the art by a large margin in both box and segmentation proposal generation. In future work,
we plan on coupling our proposal method more closely with state-of-the-art detection approaches.
Acknowledgements: We would like to thank Ahmad Humayun and Tsung-Yi Lin for help with generating experimental results, Andrew Tulloch, Omry Yadan and Alexey Spiridonov for help with computational
infrastructure, and Rob Fergus, Yuandong Tian and Soumith Chintala for valuable discussions.
8
References
[1] B. Alexe, T. Deselaers, and V. Ferrari. Measuring the objectness of image windows. PAMI, 2012. 2
[2] N. Chavali, H. Agrawal, A. Mahendru, and D. Batra. Object-proposal evaluation protocol is ?gameable?.
arXiv:1505.05836, 2015. 8
[3] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with
deep convolutional nets and fully connected crfs. ICLR, 2015. 2
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. 1
[5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009. 1, 3
[6] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks.
In CVPR, 2014. 2
[7] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object
classes (VOC) challenge. IJCV, 2010. 1, 2, 6
[8] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 2010. 1
[9] R. Girshick. Fast R-CNN. arXiv:1504.08083, 2015. 2, 8
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014. 1, 2, 6
[11] B. Hariharan, P. Arbel?aez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and finegrained localization. In CVPR, 2015. 2
[12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual
recognition. In ECCV, 2014. 2
[13] J. Hosang, R. Benenson, P. Doll?ar, and B. Schiele. What makes for effective detection proposals?
arXiv:1502.05082, 2015. 2, 6
[14] A. Humayun, F. Li, and J. M. Rehg. RIGOR: Reusing Inference in Graph Cuts for generating Object
Regions. In CVPR, 2014. 2, 6, 7, 8
[15] H. Kaiming, Z. Xiangyu, R. Shaoqing, and S. Jian. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014. 1
[16] P. Kr?ahenb?uhl and V. Koltun. Geodesic object proposals. In ECCV, 2014. 2, 6, 7, 8
[17] P. Kr?ahenb?uhl and V. Koltun. Learning to propose objects. In CVPR, 2015. 2
[18] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. 2
[19] W. Kuo, B. Hariharan, and J. Malik. Deepbox: Learning objectness with convolutional networks. In
arXiv:505.02146v1, 2015. 2, 8
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 1998. 1
[21] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick,
and P. Doll?ar. Microsoft COCO: Common objects in context. arXiv:1405.0312, 2015. 1, 2, 5, 6
[22] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free? ? Weakly-supervised learning
with convolutional neural networks. In CVPR, 2015. 2
[23] P. O. Pinheiro and R. Collobert. Recurrent conv. neural networks for scene labeling. In ICML, 2014. 2
[24] J. Pont-Tuset, P. Arbel?aez, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping for
image segmentation and object proposal generation. In arXiv:1503.00848, 2015. 2, 6, 7, 8
[25] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region
proposal networks. In arXiv:1506.01497, 2015. 2
[26] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition,
localization and detection using convolutional networks. In ICLR, 2014. 5
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015. 2, 3, 5
[28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. JMLR, 2014. 5
[29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 2
[30] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov. Scalable, high-quality object detection. In
arXiv:1412.1441, 2014. 2
[31] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recog. IJCV, 2013.
2, 6, 7, 8
[32] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 2004. 1
[33] Z. Y. Zhu, R. Urtasun, R. Salakhutdinov, and S. Fidler. segdeepm: Exploiting segmentation and context
in deep neural networks for object detection. In CVPR, 2015. 1
[34] C. L. Zitnick and P. Doll?ar. Edge boxes: Locating object proposals from edges. In ECCV, 2014. 2, 6, 7, 8
9
| 5852 |@word cnn:8 version:2 eliminating:1 dalal:1 kokkinos:1 everingham:1 triggs:1 tried:1 rgb:2 shot:1 reduction:1 liu:1 contains:7 score:21 ecole:1 ours:1 tuned:1 document:1 outperforms:3 existing:2 current:1 com:3 surprising:1 must:2 gpu:2 ronan:1 informative:1 remove:1 designed:2 drop:2 aside:1 cue:1 fewer:2 selected:2 plane:1 xk:10 core:1 infrastructure:1 provides:1 location:9 zhang:2 five:2 koltun:2 consists:2 ijcv:3 overhead:1 notably:2 mask:31 roughly:3 salakhutdinov:2 voc:6 relying:1 soumith:1 window:3 considering:2 conv:2 provided:1 begin:2 linearity:6 moreover:3 agnostic:3 alexnet:1 medium:3 what:1 substantially:3 monkey:2 transformation:1 every:3 runtime:1 exactly:2 classifier:12 demonstrates:1 unit:2 ramanan:2 positive:7 before:1 dropped:1 bilinear:1 pami:2 plus:1 alexey:1 lausanne:2 fastest:1 bilinearly:1 range:2 tian:1 averaged:2 responsible:2 lecun:2 testing:1 union:2 procedure:5 maire:1 foundational:1 area:4 matching:1 imprecise:1 pre:2 downsampled:1 selection:1 context:3 map:10 demonstrated:1 x56:1 center:4 crfs:1 williams:1 survey:2 resolution:3 assigns:1 rehg:1 classic:1 notion:1 ferrari:1 hierarchy:2 target:1 massive:1 exact:1 superpixel:1 trick:1 recognition:7 satisfying:1 located:2 cut:1 predicts:5 database:1 bottom:5 observed:3 recog:1 region:10 ensures:1 connected:12 sun:2 removed:1 highest:2 ahmad:1 yk:12 valuable:1 schiele:1 geodesic:14 trained:15 weakly:1 segment:1 laptev:1 predictive:1 yuille:1 localization:5 efficiency:1 joint:1 differently:1 various:3 train:6 distinct:1 fast:9 describe:1 effective:2 labeling:1 hyper:1 choosing:1 aez:2 quite:2 heuristic:1 widely:1 larger:1 cvpr:10 otherwise:1 elephant:1 simonyan:1 unseen:2 jointly:4 transform:1 final:6 superscript:1 agrawal:1 net:1 arbel:2 propose:4 outputting:1 maximal:1 erale:1 poorly:1 achieve:3 achievement:1 sutskever:2 exploiting:1 double:1 darrell:1 generating:7 object:110 help:2 coupling:1 andrew:1 bourdev:1 recurrent:1 ij:3 predicted:3 idiap:1 implies:1 skip:1 iou:14 switzerland:2 merged:1 closely:2 drawback:1 filter:1 stochastic:1 cnns:1 centered:6 human:1 mcallester:1 require:2 assign:2 generalization:2 alleviate:1 decompose:1 opt:1 sufficiently:1 considered:3 ground:7 around:2 seed:3 mapping:1 alexe:1 vary:1 achieves:6 purpose:1 label:5 currently:1 combinatorial:1 saw:1 concurrently:1 aim:1 varying:1 deselaers:1 improvement:1 rank:4 likelihood:4 superpixels:3 opted:1 inference:13 epfl:1 integrated:1 entire:3 torch:1 voc07:1 hidden:1 perona:1 going:1 selective:1 interested:1 x224:1 pont:1 pixel:18 overall:2 classification:9 flexible:2 pascal:15 stateof:1 overfeat:1 plan:1 art:8 spatial:3 uhl:2 equal:2 extraction:1 piotr:1 having:1 manually:1 runtimes:1 jones:1 icml:1 nearly:2 theart:1 foreground:1 future:1 report:2 recommend:1 few:1 randomly:2 oriented:1 composed:3 simultaneously:1 densely:6 murphy:1 phase:1 microsoft:1 attempt:1 detection:22 evaluation:2 accurate:1 edge:5 partial:1 necessary:2 initialized:2 re:1 deformation:1 girshick:8 mk:4 increased:1 ar:27 papandreou:1 measuring:2 rabinovich:1 cost:2 introducing:1 surpasses:1 subset:2 uniform:1 krizhevsky:3 conducted:1 reported:1 varies:1 hypercolumns:1 dong:1 pool:1 central:1 containing:2 choose:1 possibly:1 li:3 szegedy:3 reusing:1 nonlinearities:1 de:2 stride:2 satisfy:1 notable:3 hosang:3 collobert:2 tsung:1 view:3 doing:1 analyze:1 red:2 start:1 competitive:1 option:1 capability:1 annotation:1 gevers:1 jia:1 smeulders:1 hariharan:2 accuracy:5 convolutional:18 multibox:1 characteristic:1 efficiently:2 likewise:1 yield:1 saliency:1 
publicly:1 generalize:2 raw:1 accurately:1 ren:3 multiplying:1 detector:3 reach:1 ed:1 facebook:1 internship:1 pdollar:1 chintala:1 associated:1 gain:1 sampled:1 dataset:4 finegrained:1 recall:22 improves:1 segmentation:60 organized:1 higher:2 day:1 supervised:1 zisserman:2 done:1 box:19 evaluated:1 just:1 until:1 convnets:5 hand:1 horizontal:2 multiscale:2 overlapping:1 logistic:1 quality:2 indicated:1 effect:1 contain:6 former:1 fidler:1 assigned:2 edgeboxes:5 semantic:3 during:14 auc:8 backpropagating:1 illustrative:1 inferior:1 m:5 outline:1 complete:2 polytechnique:1 demonstrate:1 performs:1 dedicated:2 image:38 wise:1 recently:4 common:3 specialized:1 googlenet:1 discussed:1 yuandong:1 he:2 significant:1 anguelov:3 ai:1 nonlinearity:1 similarity:3 etc:1 dominant:1 recent:3 optimizing:1 belongs:1 driven:1 coco:22 scenario:4 discard:1 massively:1 nvidia:1 hay:1 sande:1 binary:2 yi:1 scoring:9 seen:5 minimum:1 additional:3 oquab:1 deng:1 xiangyu:1 paradigm:1 redundant:1 ii:3 sliding:2 full:12 multiple:8 branch:22 reduces:2 infer:1 segmented:2 match:3 faster:2 convolutionally:1 lin:2 divided:3 equally:1 prediction:8 variant:2 regression:1 scalable:2 vision:2 metric:3 arxiv:8 histogram:1 pyramid:2 achieved:3 ahenb:2 proposal:94 background:1 separately:1 fine:1 winn:1 jian:1 benenson:1 pinheiro:3 unlike:4 posse:1 humayun:2 pooling:11 virtually:1 leveraging:1 effectiveness:1 call:1 leverage:2 ideal:1 presence:1 iii:1 split:1 intermediate:1 bengio:1 relu:4 architecture:11 competing:1 reduce:1 haffner:1 vgg:8 shift:1 whether:2 passed:1 wo:3 locating:1 shaoqing:1 deep:8 amount:1 locally:1 category:10 reduced:2 generate:11 specifies:1 http:1 canonical:4 per:3 diverse:1 key:1 salient:1 four:1 threshold:4 achieving:2 prevent:1 ht:2 utilize:1 v1:1 graph:1 fraction:1 year:1 sum:1 run:1 powerful:1 jitter:1 patch:32 missed:1 interleaved:1 layer:33 dropout:2 followed:7 adapted:1 constraint:5 precisely:1 fei:2 x2:1 scene:3 generates:7 toshev:1 speed:2 extremely:2 innovative:1 transferred:1 according:2 alternate:1 belonging:1 smaller:3 describes:2 slightly:5 across:2 rob:1 pipeline:1 equation:1 turn:1 count:3 flip:1 fed:1 available:2 generalizes:1 doll:4 eight:1 apply:5 hierarchical:1 barron:1 appropriate:3 alternative:1 robustness:1 shortly:1 ho:3 slower:2 batch:2 original:2 eigen:1 top:6 remaining:1 especially:1 objective:1 malik:4 added:1 exhibit:1 gradient:3 iclr:3 convnet:6 separate:1 thank:1 capacity:2 upsampling:2 degrade:1 binarize:1 reason:1 urtasun:1 assuming:1 reed:2 downsampling:3 sermanet:2 negative:4 resurgence:1 martigny:1 rise:1 design:2 implementation:2 enclosing:1 perform:4 allowing:1 convolution:4 datasets:4 benchmark:1 x14x14:2 descent:1 beat:3 marque:1 viola:1 hinton:2 parallelizing:1 trainable:3 specified:1 connection:1 imagenet:7 sivic:1 learned:2 boost:1 nip:1 able:4 beyond:1 usually:1 below:1 selectivesearch:15 appeared:1 challenge:1 including:2 green:1 max:7 gool:1 critical:4 ranked:4 rely:6 predicting:3 zhu:1 scheme:1 improve:2 mcg:13 mathieu:1 coupled:2 text:1 acknowledgement:1 fully:14 loss:2 rerank:1 discriminatively:2 generation:4 versus:3 validation:4 vanhoucke:1 consistent:1 share:5 translation:1 eccv:4 last:5 free:1 deeper:1 institute:1 fall:2 wide:1 taking:1 felzenszwalb:1 face:1 fifth:1 tolerance:3 van:1 dimension:8 k40m:1 tuset:1 rich:3 fb:2 made:1 preprocessing:1 far:1 erhan:3 correlate:1 obtains:1 emphasize:2 overfitting:1 conclude:1 belongie:1 discriminative:2 fergus:2 alternatively:1 continuous:1 search:1 triplet:4 table:6 additionally:1 learn:2 nature:1 robust:1 
excellent:1 bottou:2 uijlings:1 protocol:1 zitnick:2 whole:1 bounding:8 fair:1 tesla:1 x1:2 fig:1 precision:1 inferring:1 position:2 momentum:1 candidate:3 jmlr:1 learns:1 interleaving:1 donahue:1 specific:1 offset:1 decay:1 experimented:1 grouping:2 consist:1 socher:1 merging:1 kr:2 magnitude:1 illustrates:2 fscore:3 margin:3 chen:1 backpropagate:1 generalizing:1 intersection:2 simply:2 likely:4 positional:1 visual:3 contained:2 upsample:2 partially:1 kaiming:1 pedro:3 mij:3 corresponds:1 satisfies:4 truth:7 extracted:1 ch:1 x1x1:3 viewed:1 marked:1 sized:1 towards:1 shared:4 exceptionally:1 feasible:1 objectness:5 specifically:2 typical:1 deepmask:27 except:2 wt:2 total:2 tulloch:1 rigor:13 batra:1 experimental:2 kuo:1 inadvertently:1 indicating:2 latter:1 evaluate:5 tested:1 srivastava:1 |
5,361 | 5,853 | The Return of the Gating Network:
Combining Generative Models and Discriminative
Training in Natural Image Priors
Yair Weiss
School of Computer Science and Engineering
Hebrew University of Jerusalem
Dan Rosenbaum
School of Computer Science and Engineering
Hebrew University of Jerusalem
Abstract
In recent years, approaches based on machine learning have achieved state-of-theart performance on image restoration problems. Successful approaches include
both generative models of natural images as well as discriminative training of
deep neural networks. Discriminative training of feed forward architectures allows explicit control over the computational cost of performing restoration and
therefore often leads to better performance at the same cost at run time. In contrast, generative models have the advantage that they can be trained once and then
adapted to any image restoration task by a simple use of Bayes? rule.
In this paper we show how to combine the strengths of both approaches by training
a discriminative, feed-forward architecture to predict the state of latent variables
in a generative model of natural images. We apply this idea to the very successful
Gaussian Mixture Model (GMM) of natural images. We show that it is possible
to achieve comparable performance as the original GMM but with two orders of
magnitude improvement in run time while maintaining the advantage of generative
models.
1
Introduction
Figure 1 shows an example of an image restoration problem. We are given a degraded image (in
this case degraded with Gaussian noise) and seek to estimate the clean image. Image restoration
is an extremely well studied problem and successful systems for specific scenarios have been built
without any explicit use of machine learning. For example, approaches based on ?coring? can be
used to successfully remove noise from an image by transforming to a wavelet basis and zeroing
out coefficients that are close to zero [7]. More recently the very successful BM3D method removes
noise from patches by finding similar patches in the noisy image and combining all similar patches
in a nonlinear way [4].
In recent years, machine learning based approaches are starting to outperform the hand engineered
systems for image restoration. As in other areas of machine learning, these approaches can be
divided into generative approaches which seek to learn probabilistic models of clean images versus
discriminative approaches which seek to learn models that map noisy images to clean images while
minimizing the training loss between the predicted clean image and the true one.
Two influential generative approaches are the fields of experts (FOE) approach [16] and KSVD [5]
which assume that filter responses to natural images should be sparse and learn a set of filters under this assumption. While very good performance can be obtained using these methods, when
they are trained generatively they do not give performance that is as good as BM3D. Perhaps the
most successful generative approach to image restoration is based on Gaussian Mixture Models
(GMMs) [22]. In this approach 8x8 image patches are modeled as 64 dimensional vectors and a
1
Noisy image
full model gating
(200 ? 64 dot-products per patch)
fast gating
(100 dot-products per patch)
29.16dB
29.12dB
Figure 1: Image restoration with a Gaussian mixture model. Middle: the most probable component
of every patch calculated using a full posterior calculation vs. a fast gating network (color coded
by embedding in a 2-dimensional space). Bottom: the restored image: the gating network achieves
almost identical results but in 2 orders of magnitude faster.
simple GMM with 200 components is used to model the density in this space. Despite its simplicity, this model remains among the top performing models in terms of likelihood given to left out
patches and also gives excellent performance in image restoration [23, 20]. In particular, it outperforms BM3D on image denoising and has been successfully used for other image restoration
problems such as deblurring [19]. The performance of generative models in denoising can be much
improved by using an ?empirical Bayes? approach where the parameters are estimated from the
noisy image [13, 21, 14, 5].
Discriminative approaches for image restoration typically assume a particular feed forward structure
and use training to optimize the parameters of the structure. Hel-Or and Shaked used discriminative training to optimize the parameters of coring [7]. Chen et al. [3] discriminatively learn the
parameters of a generative model to minimize its denoising error. They show that even though the
model was trained for a specific noise level, it acheives similar results as the GMM for different
noise levels. Jain and Seung trained a convolutional deep neural network to perform image denoising. Using the same training set as was used by the FOE and GMM papers, they obtained better
results than FOE but not as good as BM3D or GMM [9]. Burger et al. [2] trained a deep (nonconvolutional) multi layer perceptron to perform denoising. By increasing the size of the training
set by two orders of magnitude relative to previous approaches, they obtained what is perhaps the
2
best stand-alone method for image denoising. Fanello et al. [6] trained a random forest architecture
to optimize denoising performance. They obtained results similar to the GMM but at a much smaller
computational cost.
Which approach is better, discriminative or generative? First it should be said that the best performing methods in both categories give excellent performance. Indeed, even the BM3D approach
(which can be outperformed by both types of methods) has been said to be close to optimal for image
denoising [12]. The primary advantage of the discriminative approach is its efficiency at run-time.
By defining a particular feed-forward architecture we are effectively constraining the computational
cost at run-time and during learning we seek the best performing parameters for a fixed computational cost. The primary advantage of the generative approach, on the other hand, is its modularity.
Learning only requires access to clean images, and after learning a density model for clean images,
Bayes? rule can be used to peform restoration on any image degradation and can support different
loss functions at test time. In contrast, discriminative training requires separate training (and usually
separate architectures) for every possible image degradation. Given that there are literally an infinite number of ways to degrade images (not just Gaussian noise with different noise levels but also
compression artifacts, blur etc.), one would like to have a method that maintains the modularity of
generative models but with the computational cost of discriminative models.
In this paper we propose such an approach. Our method is based on the observation that the most
costly part of inference with many generative models for natural images is in estimating latent variables. These latent variables can be abstract representations of local image covariance (e.g. [10])
or simply a discrete variable that indicates which Gaussian most likely generated the data in a
GMM. We therefore discriminatively train a feed-forward architecture, or a ?gating network? to
predict these latent variables using far less computation. The gating network need only be trained on
?clean? images and we show how to combine it during inference with Bayes? rule to perform image
restoration for any type of image degradation. Our results show that we can maintain the accuracy
and the modularity of generative models but with a speedup of two orders of magnitude in run time.
In the rest of the paper we focus on the Gaussian mixture model although this approach can be used
for other generative models with latent variables like the one proposed by Karklin and Lewicki [10].
Code implementing our proposed algorithms for the GMM prior and Karklin and Lewicki?s prior is
available online at www.cs.huji.ac.il/?danrsm.
2
Image restoration with Gaussian mixture priors
Modeling image patches with Gaussian mixtures has proven to be very effective for image restoration [22]. In this model, the prior probability of an image patch x is modeled by: Pr(x) =
P
h ?h N (x; ?h , ?h ). During image restoration, this prior is combined with a likelihood function Pr(y|x) and restoration is based on the posterior probability Pr(x|y) which is computed using Bayes? rule. Typically, MAP estimators are used [22] although for some problems the more
expensive BLS estimator has been shown to give an advantage [17].
In order to maximize the posterior probability different numerical optimizations can be used. Typically they require computing the assignment probabilities:
?h N (x; ?h , ?h )
Pr(h|x) = P
k ?k N (x; ?k , ?k )
(1)
These assignment probabilities play a central role in optimizing the posterior. For example, it is easy
to see that the gradient of the log of the posterior involves a weighted sum of gradients where the
assignment probabilities give the weights:
? log Pr(x|y)
?x
? [log Pr(x) + log Pr(y|x) ? log Pr(y)]
?x
X
? log Pr(y|x)
= ?
Pr(h|x)(x ? ?h )> ??1
h +
?x
=
(2)
h
Similarly, one can use a version of the EM algorithm to iteratively maximize the posterior probability
by solving a sequence of reweighted least squares problems. Here the assignment probabilities
define the weights for the least squares problems [11]. Finally, in auxiliary samplers for performing
3
BLS estimation, each iteration requires sampling the hidden variables according to the current guess
of the image [17].
For reasons of computational efficiency, the assignment probabilities are often used to calculate a
hard assignment of a patch to a component:
?
h(x)
= arg max Pr(h|x)
h
(3)
Following the literature on ?mixtures of experts? [8] we call this process gating. As we now show,
this process is often the most expensive part of performing image restoration with a GMM prior.
2.1
Running time of inference
The successful EPLL algorithm [22] for image restoration with patch priors defines a cost function
based on the simplifying assumption that the patches of an image are independent:
X
J(x) = ?
log Pr(xi ) ? ? log Pr(y|x)
(4)
i
where {xi } are the image patches, x is the full image and ? is a parameter that compensates for
the simplifying assumption. Minimizing this cost when the prior is a GMM, is done by alternating
between three steps. We give here only a short representation of each step but the full algorithm is
given in the supplementary material. The three steps are:
? i)
? Gating. For each patch, the current guess xi is assigned to one of the components h(x
? i ), a least squares problem is
? Filtering. For each patch, depending on the assignments h(x
solved.
? Mixing. Overlapping patches are averaged together with the noisy image y.
It can be shown that after each iteration of the three steps, the EPLL splitting cost function (a
relaxation of equation 4) is decreased.
In terms of computation time, the gating step is by far the most expensive one. The filtering step
multiplies each d dimensional patch by a single d?d matrix which is equivalent to d dot-products or
d2 flops per patch. Assuming a local noise model, the mixing step involves summing up all patches
back to the image and solving a local cost on the image (equivalent to 1 dot-product or d flops per
patch).1 In the gating step however, we compute the probability of all the Gaussian components for
every patch. Each computation performs d dot-products, and so for K components we get a total of
d ? K dot-products or d2 ? K flops per patch. For a GMM with 200 components like the one used
in [22], this results in a gating step which is 200 times slower than the filtering and mixing steps.
3
The gating network
Figure 2: Architecture of the gating step in GMM inference (left) vs. a more efficient gating network.
1
For non-local noise models like in image deblurring there is an additional factor of the square of the kernel
dimension. If the kernel dimension is in the order of d, the mixing step performs d dot-products or d2 flops.
4
The left side of figure 2 shows the computation involved in a naive computing of the gating. In the
GMM used in [22], the Gaussians are zero mean so computing the most likely component involves
multiplying each patch with all the eigenvectors of the covariance matrix and squaring the results:
X 1
log P r(x|h) = ?x> ??1
(v h x)2 + consth
(5)
h x + consth = ?
h i
?
i
i
where ?ih and vih are the eigenvalues and eigenvectors of the covariance matrix. The eigenvectors can
be viewed as templates, and therefore, the gating is performed according to weighted sums of dotproducts with different templates. Every component has a different set of templates and a different
weighting of their importance (the eigenvalues). Framing this process as a feed-forward network
starting with a patch of dimension d and using K Gaussian components, the first layer computes
d ? K dot-products (followed by squaring), and the second layer performs K dot-products.
Viewed this way, it is clear that the naive computation of the gating is inefficient. There is no
?sharing? of dot-products between different components and the number of dot-products that are
required for deciding about the appropriate component, may be much smaller than is done with this
naive computation.
3.1
Discriminative training of the gating network
In order to obtain a more efficient gating network we use discriminative training. We rewrite equation 5 as:
X
log Pr(x|h) ? ?
wih (viT x)2 + consth
(6)
i
Note that the vectors vi are required to be shared and do not depend on h. Only the weights wih
depend on h.
Given a set of vectors vi and the weights w the posterior probability of a patch assignment is approximated by:
P
exp(? i wih (viT x)2 + consth )
P k T 2
Pr(h|x) ? P
(7)
k exp(?
i wi (vi x) + constk )
We minimize the cross entropy between the approximate posterior probability and the exact posterior
probability given by equation 1. The training is done on 500 mini-batches of 10K clean image
patches each, taken randomly from the 200 images in the BSDS training set. We minimize the
training loss for each mini-batch using 100 iterations of minimize.m [15] before moving to the
next mini-batch.
Results of the training are shown in figure 3. Unlike the eigenvectors of the GMM covariance
matrices which are often global Fourier patterns or edge filters, the learned vectors are more localized
in space and resemble Gabor filters.
generatively trained:
discriminatively trained:
27
PSNR
26.5
26
25.5
generative
discriminative
25
1
2
3
4
log 10 # of dot-products
5
Figure 3: Left: A subset of the 200 ? 64 eigenvectors used for the full posterior calculation. Center:
The first layer of the discriminatively trained gating network which serves as a shared pool of 100
eigenvectors. Right: The number of dot-products versus the resulting PSNR for patch denoising
using different models. Discrimintively training smaller gating networks is better than generatively
training smaller GMMs (with less components).
5
Figure 1 compares the gating performed by the full network and the discriminatively trained one.
Each pixel shows the predicted component for a patch centered around that pixel. Components are
color coded so that dark pixels correspond to components with low variance and bright pixels to
high variance. The colors denote the preferred orientation of the covariance. Although the gating
network requires far less dot-products it gives similar (although not identical) gating.
Figure 4 shows sample patches arranged according to the gating with either the full model (top) or
the gating network (bottom). We classify a set of patches by their assignment probabilities. For 60
of the 200 components we display 10 patches that are classified to that component. It can be seen
that when the classification is done using the gating network or the full posterior, the results are
visually similar.
The right side of figure 3 compares between two different ways to reduce computation time. The
green curve shows gating networks with different sizes (containing 25 to 100 vectors) trained on top
of the 200 component GMM. The blue curve shows GMMs with a different number of components
(from 2 to 200). Each of the models is used to perform patch denoising (using MAP inference) with
noise level of 25. It is clearly shown that in terms of the number of dot-products versus the resulting
PSNR, discriminatively training a small gating network on top of a GMM with 200 components is
much better than a pure generative training of smaller GMMs.
gating with the full model
gating with the learned network
Figure 4: Gating with the full posterior computation vs. the learned gating network. Top: Patches
from clean images arranged according to the component with maximum probability. Every column
represents a different component (showing 60 out of 200). Bottom: Patches arranged according to
the component with maximum gating score. Both gating methods have a very similar behavior.
4
Results
We compare the image restoration performance of our proposed method to several other methods
proposed in the literature. The first class of methods used for denoising are ?internal? methods that
do not require any learning but are specific to image denoising. A prime example is BM3D. The
second class of methods are generative models which are only trained on clean images. The original
EPLL algorithm is in this class. Finally, the third class of models are discriminative which are
trained ?end-to-end?. These typically have the best performance but need to be trained in advance
for any image restoration problem.
In the right hand side of table 1 we show the denoising results of our implementation of EPLL with
a GMM of 200 components. It can be seen that the difference between doing the full inference and
using a learned gating network (with 100 vectors) is about 0.1dB to 0.3dB which is comparable to
the difference between different published values of performance for a single algorithm. Even with
the learned gating network the EPLL?s performance is among the top performing methods for all
noise levels. The fully discriminative MLP method is the best performing method for each noise
level but it is trained explicitly and separately for each noise level.
The right hand side of table 1 also shows the run times of our Matlab implementation of EPLL on a
standard CPU. Although the number of dot-products in the gating has been decreased by a factor of
6
?
internal
BM3D[22]
BM3D[1]
BM3D[6]
LSSC[22]
LSSC[6]
KSVD[22]
generative
FoE[22]
KSVDG[22]
EPLL[22]
EPLL[1]
EPLL[6]
discriminative
CSF57?7 [18]
MLP[1]
FF[6]
20
25
30
50
75
EPLL with different gating methods
28.57
28.35
29.25
27.32
28.70
29.40
27.39
28.20
27.77
28.28
28.71
28.47
29.38
27.44
23.29
25.18
25.72
25.50
25.22
27.48
25.83
25.25
28.72
28.75
29.65
25.63
25.45
25.09
25.73
25.09
25.15
23.96
?
25
50
75
sec.
full
28.52
25.53
24.02
91
gating
28.40
25.37
23.79
5.6
gating3
28.36
25.30
23.71
0.7
24.16
24.42
full:
naive posterior computation.
gating: the learned gating network.
gating3 : the learned network calculated
with a stride of 3.
Table 1: Average PSNR (dB) for image denoising. Left: Values for different denoising methods
as reported by different papers. Right: Comparing different gating methods for our EPLL implementation, computed over 100 test images of BSDS. Using a fast gating method results in a PSNR
difference comparable to the difference between different published values of the same algorithm.
noisy: 20.19
MLP: 27.31
full: 27.01
gating: 26.99
noisy: 20.19
MLP: 30.37
full: 30.14
gating: 30.06
Figure 5: Image denoising examples. Using the fast gating network or the full inference computation, is visually indistinguishable.
128, the effect on the actual run times is more complex. Still, by only switching to the new gating
network, we obtain a speedup factor of more than 15 on small images. We also show that further
speedup can be achieved by simply working with less overlapping patches (?stride?). The results
show that using a stride of 3 (i.e. working on every 9?th patch) leads to almost no loss in PSNR.
Although the ?stride? speedup can be achieved by any patch based method, it emphasizes another
important trade-off between accuracy and running-time. In total, we see that a speedup factor of
more than 100, lead to very similar results than the full inference. We expect even more dramatic
speedups are possible with more optimized and parallel code.
Figures 5 gives a visual comparison of denoised images. As can be expected from the PSNR values,
the results with full EPLL and the gating network EPLL are visually indistinguishable.
To highlight the modularity advantage of generative models, figure 6 shows results of image deblurring using the same prior. Even though all the training of the EPLL and the gating was done on clean
sharp images, the prior can be combined with a likelihood for deblurring to obtain state-of-the-art
deblurring results. Again, the full and the gating results are visually indistinguishable.
7
9 ? 9 blur: 19.12
Hyper-Laplacian: 24.69
full: 26.25
gating: 26.15
9 ? 9 blur: 22.50
Hyper-Laplacian: 25.03
full: 25.77
gating: 25.75
Figure 6: Image deblurring examples. Using the learned gating network maintains the modularity
property, allowing it to be used for different restoration tasks. Once again, results are very similar to
the full inference computation.
noisy
CSF57?7
EPLLgating
PSNR: 20.17
running-time:
30.49
230sec.
30.51
83sec.
Figure 7: Denoising of a 18mega-pixel image. Using the learned gating network and a stride of 3,
we get very fast inference with comparable results to discriminatively ?end-to-end? trained models.
Finally, figure 7 shows the result of performing resotration on an 18 mega-pixel image. EPLL with
a gating network achieves comparable results to a discriminatively trained method (CSF) [18] but is
even more efficient while maintaining the modularity of the generative approach.
5
Discussion
Image restoration is a widely studied problem with immediate practical applications. In recent years,
approaches based on machine learning have started to outperform handcrafted methods. This is true
both for generative approaches and discriminative approaches. While discriminative approaches often give the best performance for a fixed computational budget, the generative approaches have the
advantage of modularity. They are only trained on clean images and can be used to perform one of
an infinite number of possible resotration tasks by using Bayes? rule. In this paper we have shown
how to combine the best aspects of both approaches. We discriminatively train a feed-forward architecture to perform the most expensive part of inference using generative models. Our results indicate
that we can still obtain state-of-the-art performance with two orders of magnitude improvement in
run times while maintaining the modularity advantage of generative models.
Acknowledgements
Support by the ISF, Intel ICRI-CI and the Gatsby Foundation is greatfully acknowledged.
8
References
[1] Harold Christopher Burger, Christian Schuler, and Stefan Harmeling. Learning how to combine internal
and external denoising methods. In Pattern Recognition, pages 121?130. Springer, 2013.
[2] Harold Christopher Burger, Christian J Schuler, and Stefan Harmeling. Image denoising with multilayer perceptrons, part 1: comparison with existing algorithms and with bounds. arXiv preprint
arXiv:1211.1544, 2012.
[3] Yunjin Chen, Thomas Pock, Ren?e Ranftl, and Horst Bischof. Revisiting loss-specific training of filterbased mrfs for image restoration. In Pattern Recognition, pages 271?281. Springer, 2013.
[4] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse
3-d transform-domain collaborative filtering. Image Processing, IEEE Transactions on, 16(8):2080?2095,
2007.
[5] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned
dictionaries. Image Processing, IEEE Transactions on, 15(12):3736?3745, 2006.
[6] Sean Ryan Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Shotton, Antonio Criminisi, Ugo
Pattacini, and Tim Paek. Filter forests for learning data-dependent convolutional kernels. In Computer
Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1709?1716. IEEE, 2014.
[7] Yacov Hel-Or and Doron Shaked. A discriminative approach for wavelet denoising. Image Processing,
IEEE Transactions on, 17(4):443?457, 2008.
[8] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local
experts. Neural computation, 3(1):79?87, 1991.
[9] Viren Jain and Sebastian Seung. Natural image denoising with convolutional networks. In Advances in
Neural Information Processing Systems, pages 769?776, 2009.
[10] Yan Karklin and Michael S Lewicki. Emergence of complex cell properties by learning to generalize in
natural scenes. Nature, 457(7225):83?86, 2009.
[11] Effi Levi. Using natural image priors-maximizing or sampling? PhD thesis, The Hebrew University of
Jerusalem, 2009.
[12] Anat Levin and Boaz Nadler. Natural image denoising: Optimality and inherent bounds. In Computer
Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2833?2840. IEEE, 2011.
[13] Siwei Lyu and Eero P Simoncelli. Statistical modeling of images with fields of gaussian scale mixtures.
In Advances in Neural Information Processing Systems, pages 945?952, 2006.
[14] Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse
models for image restoration. In Computer Vision, 2009 IEEE 12th International Conference on, pages
2272?2279. IEEE, 2009.
[15] Carl E Rassmusen. minimize.m, 2006. http://learning.eng.cam.ac.uk/carl/code/minimize/.
[16] Stefan Roth and Michael J Black. Fields of experts: A framework for learning image priors. In Computer
Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2,
pages 860?867. IEEE, 2005.
[17] Uwe Schmidt, Qi Gao, and Stefan Roth. A generative perspective on mrfs in low-level vision. In Computer
Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 1751?1758. IEEE, 2010.
[18] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Computer Vision and
Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2774?2781. IEEE, 2014.
[19] Libin Sun, Sunghyun Cho, Jue Wang, and James Hays. Edge-based blur kernel estimation using patch
priors. In Computational Photography (ICCP), 2013 IEEE International Conference on, pages 1?8. IEEE,
2013.
[20] Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive densityestimator. In Advances in Neural Information Processing Systems, pages 2175?2183, 2013.
[21] Guoshen Yu, Guillermo Sapiro, and St?ephane Mallat. Solving inverse problems with piecewise linear
estimators: From gaussian mixture models to structured sparsity. Image Processing, IEEE Transactions
on, 21(5):2481?2499, 2012.
[22] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration.
In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 479?486. IEEE, 2011.
[23] Daniel Zoran and Yair Weiss. Natural images, gaussian mixtures and dead leaves. In NIPS, pages 1745?
1753, 2012.
9
| 5853 |@word kohli:1 version:1 middle:1 compression:1 d2:3 seek:4 covariance:5 simplifying:2 jacob:1 eng:1 dramatic:1 generatively:3 score:1 daniel:2 outperforms:1 existing:1 current:2 comparing:1 michal:1 nowlan:1 rnade:1 uria:1 numerical:1 blur:4 christian:2 remove:2 v:3 alone:1 generative:27 leaf:1 guess:2 short:1 doron:1 ksvd:2 dan:1 combine:4 expected:1 indeed:1 behavior:1 multi:1 bm3d:9 cpu:1 actual:1 increasing:1 burger:3 estimating:1 what:1 finding:1 sapiro:2 every:6 uk:1 control:1 before:1 engineering:2 local:6 pock:1 switching:1 despite:1 black:1 studied:2 averaged:1 practical:1 harmeling:2 yunjin:1 area:1 empirical:1 yan:1 gabor:1 get:2 close:2 shaked:2 optimize:3 www:1 map:3 equivalent:2 center:1 maximizing:1 roth:3 jerusalem:3 starting:2 vit:2 simplicity:1 splitting:1 pure:1 rule:5 estimator:3 iain:1 embedding:1 play:1 mallat:1 exact:1 carl:2 epll:15 deblurring:6 viren:1 expensive:4 approximated:1 recognition:7 bottom:3 role:1 steven:1 preprint:1 solved:1 wang:1 calculate:1 revisiting:1 sun:1 trade:1 alessandro:1 transforming:1 seung:2 cam:1 trained:19 depend:2 solving:3 rewrite:1 zoran:2 efficiency:2 basis:1 train:2 jain:2 fast:5 effective:2 hyper:2 jean:1 supplementary:1 widely:1 elad:1 cvpr:5 valued:1 compensates:1 transform:1 noisy:8 nonconvolutional:1 emergence:1 online:1 advantage:8 sequence:1 eigenvalue:2 propose:1 wih:3 jamie:1 product:16 combining:2 iccp:1 mixing:4 achieve:1 tim:1 depending:1 andrew:1 ac:2 school:2 lssc:2 auxiliary:1 predicted:2 rosenbaum:1 c:1 involves:3 resemble:1 indicate:1 larochelle:1 csf:1 filter:5 criminisi:1 centered:1 engineered:1 material:1 implementing:1 require:2 benigno:1 probable:1 ryan:1 around:1 deciding:1 exp:2 bsds:2 visually:4 predict:2 nadler:1 lyu:1 achieves:2 dictionary:1 estimation:2 outperformed:1 successfully:2 weighted:2 stefan:5 clearly:1 gaussian:14 shrinkage:1 focus:1 ponce:1 improvement:2 likelihood:3 indicates:1 contrast:2 inference:11 mrfs:2 dependent:1 squaring:2 typically:4 hidden:1 greatfully:1 pixel:6 arg:1 among:2 orientation:1 classification:1 uwe:2 multiplies:1 art:2 field:4 once:2 shahram:1 sampling:2 identical:2 represents:1 yu:1 theart:1 ephane:1 piecewise:1 inherent:1 randomly:1 fanello:2 maintain:1 mlp:4 acheives:1 mixture:11 edge:2 literally:1 classify:1 modeling:2 column:1 restoration:27 assignment:9 cost:10 subset:1 successful:6 levin:1 reported:1 combined:2 cho:1 st:1 density:2 international:3 huji:1 probabilistic:1 off:1 pool:1 michael:4 together:1 again:2 central:1 thesis:1 containing:1 external:1 dead:1 expert:4 inefficient:1 return:1 stride:5 sec:3 coefficient:1 explicitly:1 vi:3 performed:2 doing:1 francis:1 bayes:6 maintains:2 parallel:1 denoised:1 collaborative:1 minimize:6 square:4 il:1 egiazarian:1 degraded:2 convolutional:3 accuracy:2 variance:2 bright:1 correspond:1 generalize:1 foi:1 emphasizes:1 ren:1 multiplying:1 dabov:1 published:2 foe:4 classified:1 siwei:1 sharing:1 sebastian:1 involved:1 james:1 color:3 psnr:8 sean:1 back:1 feed:7 response:1 wei:3 improved:1 zisserman:1 arranged:3 done:5 though:2 just:1 hand:4 working:2 christopher:2 nonlinear:1 overlapping:2 defines:1 artifact:1 perhaps:2 icri:1 effect:1 true:2 vih:1 assigned:1 alternating:1 iteratively:1 reweighted:1 indistinguishable:3 during:3 harold:2 performs:3 image:93 photography:1 recently:1 hugo:1 handcrafted:1 volume:1 isf:1 similarly:1 zeroing:1 dot:16 moving:1 access:1 etc:1 posterior:13 recent:3 perspective:1 optimizing:1 dotproducts:1 prime:1 scenario:1 hay:1 seen:2 additional:1 maximize:2 redundant:1 full:22 
simoncelli:1 faster:1 calculation:2 cross:1 bach:1 divided:1 coded:2 laplacian:2 qi:1 multilayer:1 vision:8 arxiv:2 iteration:3 kernel:4 achieved:3 cell:1 separately:1 decreased:2 rest:1 unlike:1 db:5 gmms:4 jordan:1 call:1 constraining:1 shotton:1 easy:1 architecture:8 reduce:1 idea:1 karen:1 matlab:1 deep:3 hel:2 antonio:1 clear:1 eigenvectors:6 dark:1 category:1 http:1 outperform:2 sunghyun:1 estimated:1 per:5 mega:2 blue:1 discrete:1 bls:2 levi:1 acknowledged:1 gmm:18 clean:12 relaxation:1 year:3 sum:2 run:8 inverse:1 almost:2 patch:40 comparable:5 layer:4 bound:2 followed:1 display:1 jue:1 adapted:1 strength:1 scene:1 fourier:1 aspect:1 extremely:1 optimality:1 performing:9 speedup:6 influential:1 structured:1 according:5 smaller:5 em:1 wi:1 iccv:1 pr:15 taken:1 izadi:1 equation:3 remains:1 serf:1 end:4 available:1 gaussians:1 aharon:1 apply:1 appropriate:1 batch:3 yair:3 schmidt:2 slower:1 original:2 thomas:1 top:6 running:3 include:1 paek:1 maintaining:3 effi:1 murray:1 society:1 restored:1 primary:2 costly:1 said:2 gradient:2 separate:2 degrade:1 reason:1 assuming:1 code:3 modeled:2 mini:3 minimizing:2 hebrew:3 vladimir:1 robert:1 implementation:3 perform:6 allowing:1 observation:1 immediate:1 defining:1 flop:4 hinton:1 sharp:1 required:2 optimized:1 bischof:1 framing:1 learned:10 nip:1 usually:1 pattern:8 sparsity:1 built:1 max:1 green:1 natural:12 karklin:3 csf57:2 julien:1 started:1 x8:1 naive:4 prior:14 literature:2 acknowledgement:1 relative:1 loss:5 fully:1 discriminatively:9 expect:1 highlight:1 filtering:4 proven:1 versus:3 localized:1 geoffrey:1 foundation:1 coring:2 guillermo:2 side:4 katkovnik:1 perceptron:1 template:3 sparse:4 curve:2 calculated:2 dimension:3 stand:1 computes:1 autoregressive:1 forward:7 horst:1 adaptive:1 far:3 pushmeet:1 transaction:4 approximate:1 boaz:1 preferred:1 global:1 cem:1 mairal:1 summing:1 eero:1 discriminative:20 xi:3 latent:5 modularity:8 table:3 learn:4 schuler:2 nature:1 forest:2 excellent:2 complex:2 domain:1 whole:1 noise:13 intel:1 ff:1 gatsby:1 explicit:2 weighting:1 third:1 anat:1 wavelet:2 specific:4 gating:57 showing:1 ih:1 effectively:1 importance:1 ci:1 phd:1 magnitude:5 budget:1 chen:2 entropy:1 simply:2 likely:2 gao:1 visual:1 lewicki:3 springer:2 viewed:2 shared:2 hard:1 infinite:2 sampler:1 denoising:24 degradation:3 total:2 perceptrons:1 internal:3 support:2 |
5,362 | 5,854 | Spatial Transformer Networks
Max Jaderberg
Karen Simonyan
Andrew Zisserman
Koray Kavukcuoglu
Google DeepMind, London, UK
{jaderberg,simonyan,zisserman,korayk}@google.com
Abstract
Convolutional Neural Networks define an exceptionally powerful class of models,
but are still limited by the lack of ability to be spatially invariant to the input data
in a computationally and parameter efficient manner. In this work we introduce a
new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be
inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map
itself, without any extra training supervision or modification to the optimisation
process. We show that the use of spatial transformers results in models which
learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of
classes of transformations.
1
Introduction
Over recent years, the landscape of computer vision has been drastically altered and pushed forward
through the adoption of a fast, scalable, end-to-end learning framework, the Convolutional Neural
Network (CNN) [18]. Though not a recent invention, we now see a cornucopia of CNN-based
models achieving state-of-the-art results in classification, localisation, semantic segmentation, and
action recognition tasks, amongst others.
A desirable property of a system which is able to reason about images is to disentangle object
pose and part deformation from texture and shape. The introduction of local max-pooling layers in
CNNs has helped to satisfy this property by allowing a network to be somewhat spatially invariant
to the position of features. However, due to the typically small spatial support for max-pooling
(e.g. 2 × 2 pixels) this spatial invariance is only realised over a deep hierarchy of max-pooling and
convolutions, and the intermediate feature maps (convolutional layer activations) in a CNN are not
actually invariant to large transformations of the input data [5, 19]. This limitation of CNNs is due
to having only a limited, pre-defined pooling mechanism for dealing with variations in the spatial
arrangement of data.
In this work we introduce the Spatial Transformer module, that can be included into a standard
neural network architecture to provide spatial transformation capabilities. The action of the spatial
transformer is conditioned on individual data samples, with the appropriate behaviour learnt during
training for the task in question (without extra supervision). Unlike pooling layers, where the receptive fields are fixed and local, the spatial transformer module is a dynamic mechanism that can
actively spatially transform an image (or a feature map) by producing an appropriate transformation
for each input sample. The transformation is then performed on the entire feature map (non-locally)
and can include scaling, cropping, rotations, as well as non-rigid deformations. This allows networks
which include spatial transformers to not only select regions of an image that are most relevant (attention), but also to transform those regions to a canonical, expected pose to simplify inference in
the subsequent layers. Notably, spatial transformers can be trained with standard back-propagation,
allowing for end-to-end training of the models they are injected in.
Figure 1: The result of using a spatial transformer as the first layer of a fully-connected network trained for distorted MNIST digit classification. (a) The input to the spatial transformer network is an image of an MNIST digit that is distorted with random translation, scale, rotation, and clutter. (b) The localisation network of the spatial transformer predicts a transformation to apply to the input image. (c) The output of the spatial transformer, after applying the transformation. (d) The classification prediction produced by the subsequent fully-connected network on the output of the spatial transformer (the panels shown are classified as 7, 5, 6 and 4). The spatial transformer network (a CNN including a spatial transformer module) is trained end-to-end with only class labels; no knowledge of the ground-truth transformations is given to the system.
Spatial transformers can be incorporated into CNNs to benefit multifarious tasks, for example:
(i) image classification: suppose a CNN is trained to perform multi-way classification of images
according to whether they contain a particular digit, where the position and size of the digit may
vary significantly with each sample (and are uncorrelated with the class); a spatial transformer that
crops out and scale-normalizes the appropriate region can simplify the subsequent classification
task, and lead to superior classification performance, see Fig. 1; (ii) co-localisation: given a set of
images containing different instances of the same (but unknown) class, a spatial transformer can be
used to localise them in each image; (iii) spatial attention: a spatial transformer can be used for
tasks requiring an attention mechanism, such as in [11, 29], but is more flexible and can be trained
purely with backpropagation without reinforcement learning. A key benefit of using attention is that
transformed (and so attended), lower resolution inputs can be used in favour of higher resolution
raw inputs, resulting in increased computational efficiency.
The rest of the paper is organised as follows: Sect. 2 discusses some work related to our own, we
introduce the formulation and implementation of the spatial transformer in Sect. 3, and finally give
the results of experiments in Sect. 4. Additional experiments and implementation details are given
in the supplementary material or can be found in the arXiv version.
2
Related Work
In this section we discuss the prior work related to the paper, covering the central ideas of modelling
transformations with neural networks [12, 13, 27], learning and analysing transformation-invariant
representations [3, 5, 8, 17, 19, 25], as well as attention and detection mechanisms for feature selection [1, 6, 9, 11, 23].
Early work by Hinton [12] looked at assigning canonical frames of reference to object parts, a theme
which recurred in [13] where 2D affine transformations were modeled to create a generative model
composed of transformed parts. The targets of the generative training scheme are the transformed
input images, with the transformations between input images and targets given as an additional
input to the network. The result is a generative model which can learn to generate transformed
images of objects by composing parts. The notion of a composition of transformed parts is taken
further by Tieleman [27], where learnt parts are explicitly affine-transformed, with the transform
predicted by the network. Such generative capsule models are able to learn discriminative features
for classification from transformation supervision.
The invariance and equivariance of CNN representations to input image transformations are studied
in [19] by estimating the linear relationships between representations of the original and transformed
images. Cohen & Welling [5] analyse this behaviour in relation to symmetry groups, which is also
exploited in the architecture proposed by Gens & Domingos [8], resulting in feature maps that are
more invariant to symmetry groups. Other attempts to design transformation invariant representations are scattering networks [3], and CNNs that construct filter banks of transformed filters [17, 25].
Stollenga et al. [26] use a policy based on a network's activations to gate the responses of the network's filters for a subsequent forward pass of the same image, and so can allow attention to specific
features. In this work, we aim to achieve invariant representations by manipulating the data rather
than the feature extractors, something that was done for clustering in [7].
Neural networks with selective attention manipulate the data by taking crops, and so are able to learn
translation invariance. Work such as [1, 23] are trained with reinforcement learning to avoid the
Figure 2: The architecture of a spatial transformer module. The input feature map U is passed to a localisation network which regresses the transformation parameters θ. The regular spatial grid G over V is transformed to the sampling grid T_θ(G), which is applied to U as described in Sect. 3.3, producing the warped output feature map V. The combination of the localisation network and sampling mechanism defines a spatial transformer.
need for a differentiable attention mechanism, while [11] use a differentiable attention mechanism
by utilising Gaussian kernels in a generative model. The work by Girshick et al. [9] uses a region
proposal algorithm as a form of attention, and [6] show that it is possible to regress salient regions
with a CNN. The framework we present in this paper can be seen as a generalisation of differentiable
attention to any spatial transformation.
3
Spatial Transformers
In this section we describe the formulation of a spatial transformer. This is a differentiable module
which applies a spatial transformation to a feature map during a single forward pass, where the
transformation is conditioned on the particular input, producing a single output feature map. For
multi-channel inputs, the same warping is applied to each channel. For simplicity, in this section we
consider single transforms and single outputs per transformer, however we can generalise to multiple
transformations, as shown in experiments.
The spatial transformer mechanism is split into three parts, shown in Fig. 2. In order of computation,
first a localisation network (Sect. 3.1) takes the input feature map, and through a number of hidden
layers outputs the parameters of the spatial transformation that should be applied to the feature map; this gives a transformation conditional on the input. Then, the predicted transformation parameters
are used to create a sampling grid, which is a set of points where the input map should be sampled to
produce the transformed output. This is done by the grid generator, described in Sect. 3.2. Finally,
the feature map and the sampling grid are taken as inputs to the sampler, producing the output map
sampled from the input at the grid points (Sect. 3.3).
The combination of these three components forms a spatial transformer and will now be described
in more detail in the following sections.
3.1
Localisation Network
The localisation network takes the input feature map U ∈ R^{H×W×C}, with width W, height H and C channels, and outputs θ, the parameters of the transformation T_θ to be applied to the feature map: θ = f_loc(U). The size of θ can vary depending on the transformation type that is parameterised, e.g. for an affine transformation θ is 6-dimensional as in (1).
The localisation network function f_loc() can take any form, such as a fully-connected network or a convolutional network, but should include a final regression layer to produce the transformation parameters θ.
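As a concrete illustration, the sketch below shows a minimal localisation network in plain numpy. The two-layer fully-connected architecture, the hidden width, and the function names are illustrative assumptions of ours, not the configuration used in the paper; the identity-transform initialisation of the final regression layer is, however, a common practical choice so that training starts from an unwarped input.

```python
import numpy as np

def init_localisation_net(in_dim, hidden=32, n_params=6, seed=0):
    """Parameters of a small two-layer MLP that regresses the affine theta.
    The final layer starts with zero weights and an identity-transform bias,
    so the spatial transformer initially applies the identity warp."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.01, size=(hidden, in_dim))
    b1 = np.zeros(hidden)
    W2 = np.zeros((n_params, hidden))               # zero weights
    b2 = np.array([1, 0, 0, 0, 1, 0], dtype=float)  # identity affine bias
    return W1, b1, W2, b2

def localisation_forward(params, u_flat):
    """u_flat: flattened input feature map U; returns theta, shape (6,)."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, W1 @ u_flat + b1)  # ReLU hidden layer
    return W2 @ h + b2                     # regressed transformation parameters
```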
3.2
Parameterised Sampling Grid
To perform a warping of the input feature map, each output pixel is computed by applying a sampling kernel centered at a particular location in the input feature map (this is described fully in the next section). By pixel we refer to an element of a generic feature map, not necessarily an image. In general, the output pixels are defined to lie on a regular grid G = {G_i} of pixels G_i = (x_i^t, y_i^t), forming an output feature map V ∈ R^{H'×W'×C}, where H' and W' are the height and width of the grid, and C is the number of channels, which is the same in the input and output.
For clarity of exposition, assume for the moment that T_θ is a 2D affine transformation A_θ. We will discuss other transformations below. In this affine case, the pointwise transformation is

$$
\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix}
= \mathcal{T}_\theta(G_i)
= A_\theta \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix}
= \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}
\begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix}
\qquad (1)
$$
Figure 3: Two examples of applying the parameterised sampling grid to an image U, producing the output V. (a) The sampling grid is the regular grid G = T_I(G), where I is the identity transformation parameters. (b) The sampling grid is the result of warping the regular grid with an affine transformation T_θ(G).
where (x_i^t, y_i^t) are the target coordinates of the regular grid in the output feature map, (x_i^s, y_i^s) are the source coordinates in the input feature map that define the sample points, and A_θ is the affine transformation matrix. We use height and width normalised coordinates, such that −1 ≤ x_i^t, y_i^t ≤ 1 when within the spatial bounds of the output, and −1 ≤ x_i^s, y_i^s ≤ 1 when within the spatial bounds of the input (and similarly for the y coordinates). The source/target transformation and sampling is equivalent to the standard texture mapping and coordinates used in graphics.
The transform defined in (1) allows cropping, translation, rotation, scale, and skew to be applied to the input feature map, and requires only 6 parameters (the 6 elements of A_θ) to be produced by the localisation network. It allows cropping because if the transformation is a contraction (i.e. the determinant of the left 2 × 2 sub-matrix has magnitude less than unity) then the mapped regular grid will lie in a parallelogram of area less than the range of x_i^s, y_i^s. The effect of this transformation on the grid compared to the identity transform is shown in Fig. 3.
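A minimal numpy sketch of the grid generator for this affine case follows; the function name and the coordinate conventions (row-major grids, inclusive endpoints) are our own assumptions.

```python
import numpy as np

def affine_grid(theta, H_out, W_out):
    """Grid generator: map a regular H_out x W_out target grid through
    A_theta as in eq. (1). theta holds (t11, t12, t13, t21, t22, t23).
    Returns source coordinates (xs, ys) in normalised [-1, 1] units."""
    A = np.asarray(theta, dtype=float).reshape(2, 3)
    yt, xt = np.meshgrid(np.linspace(-1, 1, H_out),
                         np.linspace(-1, 1, W_out), indexing="ij")
    G = np.stack([xt.ravel(), yt.ravel(), np.ones(xt.size)])  # homogeneous grid
    src = A @ G                                               # 2 x (H_out * W_out)
    return src[0].reshape(H_out, W_out), src[1].reshape(H_out, W_out)
```

Passing the identity parameters (1, 0, 0, 0, 1, 0) reproduces the regular grid of Fig. 3(a), while any other θ warps it as in Fig. 3(b).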
The class of transformations T_θ may be more constrained, such as that used for attention,

$$
A_\theta = \begin{bmatrix} s & 0 & t_x \\ 0 & s & t_y \end{bmatrix}
\qquad (2)
$$

allowing cropping, translation, and isotropic scaling by varying s, t_x, and t_y. The transformation T_θ can also be more general, such as a plane projective transformation with 8 parameters, piecewise affine, or a thin plate spline. Indeed, the transformation can have any parameterised form, provided that it is differentiable with respect to the parameters; this crucially allows gradients to be backpropagated through from the sample points T_θ(G_i) to the localisation network output θ. If the transformation is parameterised in a structured, low-dimensional way, this reduces the complexity of the task assigned to the localisation network. For instance, a generic class of structured and differentiable transformations, which is a superset of attention, affine, projective, and thin plate spline transformations, is T_θ = M_θ B, where B is a target grid representation (e.g. in (1), B is the regular grid G in homogeneous coordinates), and M_θ is a matrix parameterised by θ. In this case it is possible to not only learn how to predict θ for a sample, but also to learn B for the task at hand.
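For the constrained attention case only three numbers need to be regressed; the hypothetical helper below expands them into the 6-vector consumed by the affine grid generator sketched above.

```python
import numpy as np

def attention_theta(s, tx, ty):
    """Constrained attention transform of eq. (2): isotropic scale s and
    translation (tx, ty), flattened to the 6-parameter affine form."""
    return np.array([s, 0.0, tx,
                     0.0, s, ty])
```

With s < 1 the sampled parallelogram covers a sub-window of the input, i.e. an attentional crop.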
3.3
Differentiable Image Sampling
To perform a spatial transformation of the input feature map, a sampler must take the set of sampling points T_θ(G), along with the input feature map U, and produce the sampled output feature map V. Each (x_i^s, y_i^s) coordinate in T_θ(G) defines the spatial location in the input where a sampling kernel is applied to get the value at a particular pixel in the output V. This can be written as

$$
V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, k(x_i^s - m;\, \Phi_x) \, k(y_i^s - n;\, \Phi_y)
\quad \forall i \in [1 \ldots H'W'] \;\; \forall c \in [1 \ldots C]
\qquad (3)
$$
where Φ_x and Φ_y are the parameters of a generic sampling kernel k() which defines the image interpolation (e.g. bilinear), U_{nm}^c is the value at location (n, m) in channel c of the input, and V_i^c is the output value for pixel i at location (x_i^t, y_i^t) in channel c. Note that the sampling is done identically for each channel of the input, so every channel is transformed in an identical way (this preserves spatial consistency between channels).
In theory, any sampling kernel can be used, as long as (sub-)gradients can be defined with respect to x_i^s and y_i^s. For example, using the integer sampling kernel reduces (3) to

$$
V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, \delta(\lfloor x_i^s + 0.5 \rfloor - m) \, \delta(\lfloor y_i^s + 0.5 \rfloor - n)
\qquad (4)
$$
where ⌊x + 0.5⌋ rounds x to the nearest integer and δ() is the Kronecker delta function. This sampling kernel equates to just copying the value at the nearest pixel to (x_i^s, y_i^s) to the output location (x_i^t, y_i^t). Alternatively, a bilinear sampling kernel can be used, giving

$$
V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, \max(0, 1 - |x_i^s - m|) \, \max(0, 1 - |y_i^s - n|)
\qquad (5)
$$
To allow backpropagation of the loss through this sampling mechanism we can define the gradients with respect to U and G. For bilinear sampling (5) the partial derivatives are

$$
\frac{\partial V_i^c}{\partial U_{nm}^c} = \sum_{n=1}^{H} \sum_{m=1}^{W} \max(0, 1 - |x_i^s - m|) \, \max(0, 1 - |y_i^s - n|)
\qquad (6)
$$

$$
\frac{\partial V_i^c}{\partial x_i^s} = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, \max(0, 1 - |y_i^s - n|)
\begin{cases} 0 & \text{if } |m - x_i^s| \ge 1 \\ 1 & \text{if } m \ge x_i^s \\ -1 & \text{if } m < x_i^s \end{cases}
\qquad (7)
$$

and similarly to (7) for ∂V_i^c/∂y_i^s.
This gives us a (sub-)differentiable sampling mechanism, allowing loss gradients to flow back not only to the input feature map (6), but also to the sampling grid coordinates (7), and therefore back to the transformation parameters θ and localisation network, since ∂x_i^s/∂θ and ∂y_i^s/∂θ can be easily derived from (1) for example. Due to discontinuities in the sampling functions, sub-gradients must be used. This sampling mechanism can be implemented very efficiently on GPU, by ignoring the sum over all input locations and instead just looking at the kernel support region for each output pixel.
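The following numpy sketch shows the forward pass of this bilinear sampler for a single channel. The mapping from normalised to pixel coordinates and the clamping of out-of-range coordinates at the border are simplifying assumptions of ours (eq. (5) instead assigns such samples zero weight), and the backward pass of (6)-(7) is omitted.

```python
import numpy as np

def bilinear_sample(U, xs, ys):
    """Sampler: bilinear kernel of eq. (5) for a single-channel map U (H, W).
    xs, ys hold normalised source coordinates in [-1, 1]. Only the four
    support pixels of each output location contribute - the kernel-support
    shortcut described above."""
    H, W = U.shape
    x = (xs + 1.0) * (W - 1) / 2.0  # to pixel units
    y = (ys + 1.0) * (H - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    x1, y1 = x0 + 1, y0 + 1
    wx1 = np.clip(x - x0, 0.0, 1.0); wx0 = 1.0 - wx1  # max(0, 1 - |x - m|)
    wy1 = np.clip(y - y0, 0.0, 1.0); wy0 = 1.0 - wy1
    return (U[y0, x0] * wy0 * wx0 + U[y0, x1] * wy0 * wx1 +
            U[y1, x0] * wy1 * wx0 + U[y1, x1] * wy1 * wx1)
```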
3.4
Spatial Transformer Networks
The combination of the localisation network, grid generator, and sampler form a spatial transformer
(Fig. 2). This is a self-contained module which can be dropped into a CNN architecture at any point,
and in any number, giving rise to spatial transformer networks. This module is computationally very
fast and does not impair the training speed, causing very little time overhead when used naively, and
even potential speedups in attentive models due to subsequent downsampling that can be applied to
the output of the transformer.
Placing spatial transformers within a CNN allows the network to learn how to actively transform
the feature maps to help minimise the overall cost function of the network during training. The
knowledge of how to transform each training sample is compressed and cached in the weights of
the localisation network (and also the weights of the layers previous to a spatial transformer) during
training. For some tasks, it may also be useful to feed the output of the localisation network, θ,
forward to the rest of the network, as it explicitly encodes the transformation, and hence the pose, of
a region or object.
It is also possible to use spatial transformers to downsample or oversample a feature map, as one can
define the output dimensions H' and W' to be different to the input dimensions H and W. However,
with sampling kernels with a fixed, small spatial support (such as the bilinear kernel), downsampling
with a spatial transformer can cause aliasing effects.
Finally, it is possible to have multiple spatial transformers in a CNN. Placing multiple spatial transformers at increasing depths of a network allows transformations of increasingly abstract representations, and also gives the localisation networks potentially more informative representations to base the predicted transformation parameters on. One can also use multiple spatial transformers in parallel; this can be useful if there are multiple objects or parts of interest in a feature map that should be
focussed on individually. A limitation of this architecture in a purely feed-forward network is that
the number of parallel spatial transformers limits the number of objects that the network can model.
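Composing the three sketches above gives one forward pass of the complete module of Fig. 2; this assumes the helper functions defined earlier in this section and a single-channel input, and is a sketch rather than the authors' implementation.

```python
import numpy as np

def spatial_transformer(U, loc_params, H_out, W_out):
    """Forward pass: localisation net (Sect. 3.1) -> grid generator
    (Sect. 3.2) -> bilinear sampler (Sect. 3.3)."""
    theta = localisation_forward(loc_params, U.ravel())
    xs, ys = affine_grid(theta, H_out, W_out)
    return bilinear_sample(U, xs, ys)
```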
    Model        |  R   RTS    P    E
    -------------+--------------------
    FCN          | 2.1  5.2  3.1  3.2
    CNN          | 1.2  0.8  1.5  1.4
    ST-FCN Aff   | 1.2  0.8  1.5  2.7
    ST-FCN Proj  | 1.3  0.9  1.4  2.6
    ST-FCN TPS   | 1.1  0.8  1.4  2.4
    ST-CNN Aff   | 0.7  0.5  0.8  1.2
    ST-CNN Proj  | 0.8  0.6  0.8  1.3
    ST-CNN TPS   | 0.7  0.5  0.8  1.1

Table 1: Left: The percentage errors for different models on different distorted MNIST datasets. The different distorted MNIST datasets we test are TC: translated and cluttered, R: rotated, RTS: rotated, translated, and scaled, P: projective distortion, E: elastic distortion. All the models used for each experiment have the same number of parameters, and same base structure for all experiments. Right: Some example test images where a spatial transformer network correctly classifies the digit but a CNN fails. (a) The inputs to the networks. (b) The transformations predicted by the spatial transformers, visualised by the grid T_θ(G). (c) The outputs of the spatial transformers. E and RTS examples use thin plate spline spatial transformers (ST-CNN TPS), while R examples use affine spatial transformers (ST-CNN Aff) with the angles of the affine transformations given (58°, -65° and 93° in the examples shown). For videos showing animations of these experiments and more see https://goo.gl/qdEhUu.
4
Experiments
In this section we explore the use of spatial transformer networks on a number of supervised learning tasks. In Sect. 4.1 we begin with experiments on distorted versions of the MNIST handwriting
dataset, showing the ability of spatial transformers to improve classification performance through
actively transforming the input images. In Sect. 4.2 we test spatial transformer networks on a challenging real-world dataset, Street View House Numbers [21], for number recognition, showing state-of-the-art results using multiple spatial transformers embedded in the convolutional stack of a CNN.
Finally, in Sect. 4.3, we investigate the use of multiple parallel spatial transformers for fine-grained
classification, showing state-of-the-art performance on CUB-200-2011 birds dataset [28] by automatically discovering object parts and learning to attend to them. Further experiments with MNIST
addition and co-localisation can be found in the supplementary material.
4.1
Distorted MNIST
In this section we use the MNIST handwriting dataset as a testbed for exploring the range of transformations to which a network can learn invariance by using a spatial transformer.
We begin with experiments where we train different neural network models to classify MNIST data
that has been distorted in various ways: rotation (R); rotation, scale and translation (RTS); projective transformation (P); elastic warping (E). Note that elastic warping is destructive and cannot
be inverted in some cases. The full details of the distortions used to generate this data are given
in the supplementary material. We train baseline fully-connected (FCN) and convolutional (CNN)
neural networks, as well as networks with spatial transformers acting on the input before the classification network (ST-FCN and ST-CNN). The spatial transformer networks all use bilinear sampling, but variants use different transformation functions: an affine transformation (Aff), projective
transformation (Proj), and a 16-point thin plate spline transformation (TPS) with a regular grid of
control points. The CNN models include two max-pooling layers. All networks have approximately
the same number of parameters, are trained with identical optimisation schemes (backpropagation,
SGD, scheduled learning rate decrease, with a multinomial cross entropy loss), and all with three
weight layers in the classification network.
The results of these experiments are shown in Table 1 (left). Looking at any particular type of distortion of the data, it is clear that a spatial transformer enabled network outperforms its counterpart
base network. For the case of rotation, translation, and scale distortion (RTS), the ST-CNN achieves
0.5% and 0.6% depending on the class of transform used for T_θ, whereas a CNN, with two max-pooling layers to provide spatial invariance, achieves 0.8% error. This is in fact the same error that
the ST-FCN achieves, which is without a single convolution or max-pooling layer in its network,
showing that using a spatial transformer is an alternative way to achieve spatial invariance. ST-CNN
models consistently perform better than ST-FCN models due to max-pooling layers in ST-CNN providing even more spatial invariance, and convolutional layers better modelling local structure. We
also test our models in a noisy environment, on 60 × 60 images with translated MNIST digits and
    Model            | 64px | 128px
    -----------------+------+------
    Maxout CNN [10]  |  4.0 |   -
    CNN (ours)       |  4.0 |  5.6
    DRAM* [1]        |  3.9 |  4.5
    ST-CNN Single    |  3.7 |  3.9
    ST-CNN Multi     |  3.6 |  3.9

Table 2: Left: The sequence error (%) for SVHN multi-digit recognition on crops of 64 × 64 pixels (64px), and inflated crops of 128 × 128 (128px) which include more background. *The best reported result from [1] uses model averaging and Monte Carlo averaging, whereas the results from other models are from a single forward pass of a single model. Right: (a) The schematic of the ST-CNN Multi model. The transformations of each spatial transformer (ST) are applied to the convolutional feature map produced by the previous layer. (b) The result of the composition of the affine transformations predicted by the four spatial transformers in ST-CNN Multi, visualised on the input image.
background clutter (see Fig. 1 third row for an example): an FCN gets 13.2% error, a CNN gets
3.5% error, while an ST-FCN gets 2.0% error and an ST-CNN gets 1.7% error.
Looking at the results between different classes of transformation, the thin plate spline transformation (TPS) is the most powerful, being able to reduce error on elastically deformed digits by
reshaping the input into a prototype instance of the digit, reducing the complexity of the task for the
classification network, and does not overfit on simpler data, e.g. R. Interestingly, the transformation of inputs for all ST models leads to a "standard" upright posed digit; this is the mean pose found
in the training data. In Table 1 (right), we show the transformations performed for some test cases
where a CNN is unable to correctly classify the digit, but a spatial transformer network can.
4.2
Street View House Numbers
We now test our spatial transformer networks on a challenging real-world dataset, Street View House
Numbers (SVHN) [21]. This dataset contains around 200k real world images of house numbers, with
the task to recognise the sequence of numbers in each image. There are between 1 and 5 digits in
each image, with a large variability in scale and spatial arrangement.
We follow the experimental setup as in [1, 10], where the data is preprocessed by taking 64 × 64 crops around each digit sequence. We also use an additional, more loosely cropped 128 × 128 dataset
as in [1]. We train a baseline character sequence CNN model with 11 hidden layers leading to five
independent softmax classifiers, each one predicting the digit at a particular position in the sequence.
This is the character sequence model used in [16], where each classifier includes a null-character
output to model variable length sequences. This model matches the results obtained in [10].
We extend this baseline CNN to include a spatial transformer immediately following the input (ST-CNN Single), where the localisation network is a four-layer CNN. We also define another extension
where before each of the first four convolutional layers of the baseline CNN, we insert a spatial
transformer (ST-CNN Multi). In this case, the localisation networks are all two-layer fully connected networks with 32 units per layer. In the ST-CNN Multi model, the spatial transformer before
the first convolutional layer acts on the input image as with the previous experiments, however the
subsequent spatial transformers deeper in the network act on the convolutional feature maps, predicting a transformation from them and transforming these feature maps (this is visualised in Table 2
(right) (a)). This allows deeper spatial transformers to predict a transformation based on richer features rather than the raw image. All networks are trained from scratch with SGD and dropout [14],
with randomly initialised weights, except for the regression layers of spatial transformers which are
initialised to predict the identity transform. Affine transformations and bilinear sampling kernels are
used for all spatial transformer networks in these experiments.
The results of this experiment are shown in Table 2 (left): the spatial transformer models obtain state-of-the-art results, reaching 3.6% error on 64 × 64 images compared to the previous state-of-the-art of 3.9% error. Interestingly, on 128 × 128 images, while other methods degrade in performance, an ST-CNN achieves 3.9% error, while the previous state of the art at 4.5% error is a recurrent attention model that uses an ensemble of models with Monte Carlo averaging; in contrast, the ST-CNN models require only a single forward pass of a single model. This accuracy is achieved due to
the fact that the spatial transformers crop and rescale the parts of the feature maps that correspond
to the digit, focussing resolution and network capacity only on these areas (see Table 2 (right) (b)
    Model             | Accuracy (%)
    ------------------+-------------
    Cimpoi '15 [4]    | 66.7
    Zhang '14 [30]    | 74.9
    Branson '14 [2]   | 75.7
    Lin '15 [20]      | 80.9
    Simon '15 [24]    | 81.0
    CNN (ours) 224px  | 82.3
    2×ST-CNN 224px    | 83.1
    2×ST-CNN 448px    | 83.9
    4×ST-CNN 448px    | 84.1

Table 3: Left: The accuracy (%) on the CUB-200-2011 bird classification dataset. Spatial transformer networks with two spatial transformers (2×ST-CNN) and four spatial transformers (4×ST-CNN) in parallel outperform other models. 448px resolution images can be used with the ST-CNN without an increase in computational cost due to downsampling to 224px after the transformers. Right: The transformation predicted by the spatial transformers of 2×ST-CNN (top row) and 4×ST-CNN (bottom row) on the input image. Notably for the 2×ST-CNN, one of the transformers (shown in red) learns to detect heads, while the other (shown in green) detects the body, and similarly for the 4×ST-CNN.
for some examples). In terms of computation speed, the ST-CNN Multi model is only 6% slower
(forward and backward pass) than the CNN.
4.3
Fine-Grained Classification
In this section, we use a spatial transformer network with multiple transformers in parallel to perform
fine-grained bird classification. We evaluate our models on the CUB-200-2011 birds dataset [28],
containing 6k training images and 5.8k test images, covering 200 species of birds. The birds appear
at a range of scales and orientations, are not tightly cropped, and require detailed texture and shape
analysis to distinguish. In our experiments, we only use image class labels for training.
We consider a strong baseline CNN model, an Inception architecture with batch normalisation [15] pre-trained on ImageNet [22] and fine-tuned on CUB, which by itself achieves state-of-the-art accuracy of 82.3% (previous best result is 81.0% [24]). We then train a spatial transformer network,
ST-CNN, which contains 2 or 4 parallel spatial transformers, parameterised for attention and acting
on the input image. Discriminative image parts, captured by the transformers, are passed to the part
description sub-nets (each of which is also initialised by Inception). The resulting part representations are concatenated and classified with a single softmax layer. The whole architecture is trained
on image class labels end-to-end with backpropagation (details in supplementary material).
The results are shown in Table 3 (left). The 4×ST-CNN achieves an accuracy of 84.1%, outperforming the baseline by 1.8%. In the visualisations of the transforms predicted by 2×ST-CNN (Table 3
(right)) one can see interesting behaviour has been learnt: one spatial transformer (red) has learnt
to become a head detector, while the other (green) fixates on the central part of the body of a bird.
The resulting output from the spatial transformers for the classification network is a somewhat pose-normalised representation of a bird. While previous work such as [2] explicitly define parts of the
bird, training separate detectors for these parts with supplied keypoint training data, the ST-CNN is
able to discover and learn part detectors in a data-driven manner without any additional supervision.
In addition, spatial transformers allow for the use of 448px resolution input images without any
impact on performance, as the output of the transformed 448px images are sampled at 224px before
being processed.
5
Conclusion
In this paper we introduced a new self-contained module for neural networks: the spatial transformer. This module can be dropped into a network and perform explicit spatial transformations of features, opening up new ways for neural networks to model data, and is learnt in an end-to-end fashion, without making any changes to the loss function. While CNNs provide an incredibly
strong baseline, we see gains in accuracy using spatial transformers across multiple tasks, resulting in state-of-the-art performance. Furthermore, the regressed transformation parameters from the
spatial transformer are available as an output and could be used for subsequent tasks. While we
only explore feed-forward networks in this work, early experiments show spatial transformers to be
powerful in recurrent models, and useful for tasks requiring the disentangling of object reference
frames.
References
[1] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. ICLR, 2015.
[2] S. Branson, G. Van Horn, S. Belongie, and P. Perona. Bird species categorization using pose normalized
deep convolutional nets. BMVC., 2014.
[3] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE PAMI, 35(8):1872-1886, 2013.
[4] M. Cimpoi, S. Maji, and A. Vedaldi. Deep filter banks for texture recognition and segmentation. In CVPR,
2015.
[5] T. S. Cohen and M. Welling. Transformation properties of learned visual representations. ICLR, 2015.
[6] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks.
In CVPR, 2014.
[7] B. J. Frey and N. Jojic. Fast, large-scale transformation-invariant clustering. In NIPS, 2001.
[8] R. Gens and P. M. Domingos. Deep symmetry networks. In NIPS, 2014.
[9] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014.
[10] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet. Multi-digit number recognition from street
view imagery using deep convolutional neural networks. arXiv:1312.6082, 2013.
[11] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image
generation. ICML, 2015.
[12] G. E. Hinton. A parallel computation that assigns canonical object-based frames of reference. In IJCAI,
1981.
[13] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In ICANN. 2011.
[14] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. ICML, 2015.
[16] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks
for natural scene text recognition. NIPS DLW, 2014.
[17] A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. In NIPS,
2014.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278-2324, 1998.
[19] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
[20] T. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition.
arXiv:1504.07889, 2015.
[21] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with
unsupervised feature learning. In NIPS DLW, 2011.
[22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, et al. Imagenet large scale visual recognition challenge. arXiv:1409.0575, 2014.
[23] P. Sermanet, A. Frome, and E. Real. Attention for fine-grained categorization. arXiv:1412.7054, 2014.
[24] M. Simon and E. Rodner. Neural activation constellations: Unsupervised part model discovery with
convolutional networks. arXiv:1504.08289, 2015.
[25] K. Sohn and H. Lee. Learning invariant representations with local transformations. arXiv:1206.6418,
2012.
[26] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention
through feedback connections. In NIPS, 2014.
[27] T. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, University of Toronto, 2014.
[28] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset.
2011.
[29] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend
and tell: Neural image caption generation with visual attention. ICML, 2015.
[30] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear
convolutional networks. arXiv:1411.4229, 2014.
| 5854 | [vw_text: bag-of-words token counts for paper 5854, omitted] |
5,363 | 5,855 | A Reduced-Dimension fMRI Shared Response Model
Po-Hsuan Chen^1, Janice Chen^2, Yaara Yeshurun^2,
Uri Hasson^2, James V. Haxby^3, Peter J. Ramadge^1
1
Department of Electrical Engineering, Princeton University
2
Princeton Neuroscience Institute and Department of Psychology, Princeton University
3
Department of Psychological and Brain Sciences
and Center for Cognitive Neuroscience, Dartmouth College
Abstract
Multi-subject fMRI data is critical for evaluating the generality and validity
of findings across subjects, and its effective utilization helps improve analysis
sensitivity. We develop a shared response model for aggregating multi-subject
fMRI data that accounts for different functional topographies among anatomically
aligned datasets. Our model demonstrates improved sensitivity in identifying a
shared response for a variety of datasets and anatomical brain regions of interest.
Furthermore, by removing the identified shared response, it allows improved detection of group differences. The ability to identify what is shared and what is not
shared opens the model to a wide range of multi-subject fMRI studies.
1
Introduction
Many modern fMRI studies of the human brain use data from multiple subjects. The use of multiple
subjects is critical for assessing the generality and validity of the findings across subjects. It is also
increasingly important since from one subject one can gather at most a few thousand noisy instances
of functional response patterns. To increase the power of multivariate statistical analysis, one therefore needs to aggregate response data across multiple subjects. However, the successful aggregation
of fMRI brain imaging data across subjects requires resolving the major problem that both anatomical structure and functional topography vary across subjects [1, 2, 3, 4]. Moreover, it is well known
that standard methods of anatomical alignment [1, 4, 5] do not adequately align functional topography [4, 6, 7, 8, 9]. Hence anatomical alignment is often followed by spatial smoothing of the data
to blur functional topographies. Recently, functional spatial registration methods have appeared that
use cortical warping to maximize inter-subject correlation of time series [7] or inter-subject correlation of functional connectivity [8, 9]. A more radical approach learns a latent multivariate feature
that models the shared component of each subject's response [10, 11, 12].
Multivariate statistical analysis often begins by identifying a set of features that capture the informative aspects of the data. For example, in fMRI analysis one might select a subset of voxels within an
anatomical region of interest (ROI), or select a subset of principal components of the ROI, then use
these features for subsequent analysis. In a similar way, one can think of the fMRI data aggregation
problem as a two-step process. First, use training data to learn a mapping of each subject's measured
data to a shared feature space in a way that captures the across-subject shared response. Then use
these learned mappings to project held out data for each subject into the shared feature space and
perform a statistical analysis.
To make this more precise, let {X_i ∈ R^{v×d}}_{i=1}^m denote matrices of training data (v voxels in the ROI, over d TRs) for m subjects. We propose using this data to learn subject-specific bases W_i ∈ R^{v×k}, where k is to be selected, and a shared matrix S ∈ R^{k×d} of feature responses such that X_i = W_i S + E_i, where E_i is an error term corresponding to unmodeled aspects of the subject's response. One can think of the bases W_i as representing the individual functional topographies and S as a latent feature that captures the component of the response shared across subjects. We don't claim that S is a sufficient statistic, but that is a useful analogy.
Figure 1: Comparison of training objective value and testing accuracy for problems (1) and (2) over various k on the raider dataset, with 500 voxels of ventral temporal cortex (VT), in the image stimulus classification experiment (details in Sec. 4). Panels show problem (1) and problem (2) at k = 50, 100 and 500, plotting the train objective and the test accuracy against the iteration number (0 to 10). In all cases, error bars show ±1 standard error.
The contribution of the paper is twofold: First, we propose a probabilistic generative framework for
modeling and estimating the subject-specific bases W_i and the shared response latent variable S. A critical aspect of the model is that it directly estimates k ≪ v shared features. This is in contrast to
methods where the number of features equals the number of voxels [10, 11]. Moreover, the Bayesian
nature of the approach provides a natural means of incorporating prior domain knowledge. Second,
we give a demonstration of the robustness and effectiveness of our data aggregation model using a
variety of fMRI datasets captured on different MRI machines, employing distinct analysis pathways,
and based on various brain ROIs.
2
Preliminaries
fMRI time-series data X_i ∈ R^{v×d}, i = 1:m, is collected for m subjects as they are presented with identical, time-synchronized stimuli. Here d is the number of time samples in TRs (Time of Repetition), and v is the number of voxels. Our objective is to model each subject's response as X_i = W_i S + E_i, where W_i ∈ R^{v×k} is a basis of topographies for subject i, k is a parameter selected by the experimenter, S ∈ R^{k×d} is a corresponding time series of shared response coordinates, and E_i is an error term, i = 1:m. To ensure uniqueness of coordinates it is necessary that W_i has linearly independent columns. We make the stronger assumption that each W_i has orthonormal columns, W_i^T W_i = I_k.
Two approaches for estimating the bases Wi and the shared response S are illustrated below:
$$
\min_{W_i, S} \; \sum_i \|X_i - W_i S\|_F^2 \quad \text{s.t.} \quad W_i^T W_i = I_k, \qquad (1)
$$

$$
\min_{W_i, S} \; \sum_i \|W_i^T X_i - S\|_F^2 \quad \text{s.t.} \quad W_i^T W_i = I_k, \qquad (2)
$$
where ‖·‖_F denotes the Frobenius norm. For k ≤ v, (1) can be solved iteratively by first selecting initial conditions for W_i, i = 1:m, and optimizing (1) with respect to S by setting S = (1/m) Σ_i W_i^T X_i. With S fixed, (1) becomes m separate subproblems of the form min ‖X_i − W_i S‖_F^2, with solution $W_i = \tilde{U}_i \tilde{V}_i^T$, where $\tilde{U}_i \tilde{\Sigma}_i \tilde{V}_i^T$ is an SVD of X_i S^T [13]. These two steps can be iterated until a stopping criterion is satisfied. Similarly, for k ≤ v, (2) can also be solved iteratively. However, for k < v, there is no known fast update of W_i given S. Hence this must be done using local gradient descent on the Stiefel manifold [14]. Both approaches yield the same solution when k = v, but are not equivalent in the more interesting situation k ≪ v (Sup. Mat.). What is most important, however, is that problem (2) with k < v often learns an uninformative shared response S. This is illustrated in Fig. 1, which plots the value of the training objective and the test accuracy for a stimulus classification experiment versus iteration count (image classification using the raider fMRI dataset, see Sec. 4). For problem (1), test accuracy increases with decreasing training error, whereas for problem (2), test accuracy decreases with decreasing training error (this can be explained analytically, see Sup. Mat.). We therefore base our approach on a generalization of problem (1). We call the resulting S and {W_i}_{i=1}^m a shared response model (SRM).
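A compact numpy sketch of this alternating scheme for problem (1) is given below; the function name, the random orthonormal initialisation via QR, and the fixed iteration count are our own assumptions rather than details from the paper.

```python
import numpy as np

def srm_fit(X, k, n_iter=10, seed=0):
    """Alternating minimisation for problem (1). X is a list of m arrays,
    each (v_i, d). Returns orthonormal bases W_i (v_i, k) and the shared
    response S (k, d)."""
    rng = np.random.default_rng(seed)
    m = len(X)
    W = [np.linalg.qr(rng.normal(size=(Xi.shape[0], k)))[0] for Xi in X]
    for _ in range(n_iter):
        S = sum(Wi.T @ Xi for Wi, Xi in zip(W, X)) / m  # S-update
        for i, Xi in enumerate(X):
            # Procrustes W-update: W_i = U V^T from the SVD of X_i S^T
            U_, _, Vt = np.linalg.svd(Xi @ S.T, full_matrices=False)
            W[i] = U_ @ Vt
    return W, S
```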
Before extending this simple model, we note a few important properties. First, a solution of (1) is not unique. If S, {W_i}_{i=1}^m is a solution, then so is QS, {W_i Q^T}_{i=1}^m, for any k × k orthogonal matrix Q. This is not a problem as long as we only learn one template and one set of subject bases. Any new subjects or new data will be referenced to the original SRM. However, if we independently learn two SRMs, the group shared responses S_1, S_2 may not be registered (use the same Q). We register S_1 to S_2 by finding a k × k orthogonal matrix Q to minimize ‖S_2 − Q S_1‖_F^2; then use Q S_1 in place of S_1 and W_j Q^T in place of W_j for subjects in the first SRM.
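The registration step is a standard orthogonal Procrustes problem; a short numpy sketch (function name ours) is:

```python
import numpy as np

def register_templates(S1, S2):
    """The k x k orthogonal Q minimising ||S2 - Q S1||_F, obtained from
    the SVD of S2 S1^T (orthogonal Procrustes)."""
    U_, _, Vt = np.linalg.svd(S2 @ S1.T, full_matrices=False)
    return U_ @ Vt
```

The first SRM is then updated by replacing S_1 with Q S_1 and each W_j with W_j Q^T.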
Next, when projected onto the span of its basis, each subject's training data X_i has coordinates S_i = W_i^T X_i, and the learning phase ensures S = (1/m) Σ_i S_i. The projection to k shared features and the averaging across subjects in feature space both contribute to across-subject denoising during the learning phase. By mapping S back into voxel space we obtain the voxel space manifestation W_i S of the denoised, shared component of each subject's training data. The training data of subject j can also be mapped through the shared response model to the functional topography and anatomy of subject i by the mapping X̂_{i,j} = W_i W_j^T X_j.
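These projections are simple matrix products; a sketch with our own naming conventions:

```python
import numpy as np

def shared_coordinates(W_i, X_i):
    """Project subject i's data into the shared space: S_i = W_i^T X_i."""
    return W_i.T @ X_i

def map_between_subjects(W_i, W_j, X_j):
    """Map subject j's data into subject i's voxel space via the shared
    space: X_hat_{i,j} = W_i W_j^T X_j."""
    return W_i @ (W_j.T @ X_j)
```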
New subjects are easily added to an existing SRM S, {W_i}_{i=1}^m. We refer to S as the training template. To introduce a new subject j = m + 1 with training data X_j, form its orthonormal basis by minimizing the mean squared modeling error min_{W_j : W_j^T W_j = I_k} ‖X_j − W_j S‖_F^2. We solve this for the least-norm solution. Note that S and the existing W_{1:m} do not change; we simply add a new subject by using its training data for the same stimulus and the template S to determine its basis of functional topographies. We can also add new data to an SRM. Let X_i', i = 1:m, denote new data collected under a distinct stimulus from the same subjects. This is added to the study by forming S_i' = W_i^T X_i', then averaging these projections to form the shared response for the new data: S' = (1/m) Σ_{i=1}^m W_i^T X_i'. This assumes the learned subject-specific topographies W_i generalize to the new data. This usually requires a sufficiently rich stimulus in the learning phase.
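Both extension steps can be sketched in a few lines of numpy; as above, the least-norm orthonormal minimiser is the Procrustes solution, and the function names are ours.

```python
import numpy as np

def add_subject(X_new, S):
    """Orthonormal basis for a new subject: minimiser of ||X_new - W S||_F
    with W^T W = I, from the SVD of X_new S^T."""
    U_, _, Vt = np.linalg.svd(X_new @ S.T, full_matrices=False)
    return U_ @ Vt

def add_data(W, X_new):
    """Shared response for new data: S' = (1/m) sum_i W_i^T X_i'
    (assumes the learned topographies W_i generalise)."""
    return sum(Wi.T @ Xi for Wi, Xi in zip(W, X_new)) / len(W)
```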
3
Probabilistic Shared Response Model
We now extend our simple shared response model to a probabilistic setting. Let x_it ∈ R^v denote the observed pattern of voxel responses of the i-th subject at time t. For the moment, assume these observations are centered over time. Let s_t ∈ R^k be a hyperparameter modeling the shared response at time t = 1:d, and model the observation at time t for dataset i as the outcome of a random vector:

$$
x_{it} \sim \mathcal{N}(W_i s_t, \rho^2 I), \quad \text{with } W_i^T W_i = I_k, \qquad (3)
$$

where x_it takes values in R^v, W_i ∈ R^{v×k}, i = 1:m, and ρ² is a subject-independent hyperparameter. The negative log-likelihood of this model is

$$
\mathcal{L} = \sum_t \sum_i \left( \tfrac{v}{2}\log 2\pi + \tfrac{v}{2}\log \rho^2 + \tfrac{\rho^{-2}}{2}\,(x_{it} - W_i s_t)^T (x_{it} - W_i s_t) \right).
$$

Noting that x_it is the t-th column of X_i, we see that minimizing L with respect to W_i and S = [s_1, ..., s_d] requires the solution of:

$$
\min \sum_t \sum_i (x_{it} - W_i s_t)^T (x_{it} - W_i s_t) = \min \sum_i \|X_i - W_i S\|_F^2.
$$

Thus maximum likelihood estimation for this model matches (1).
In our fMRI datasets, and most multi-subject fMRI datasets available today, d ≫ m. Since s_t is time-specific but shared across the m subjects, we see that there is palpable value in regularizing its estimation. In contrast, subject-specific variables such as W_i are shared across time, a dimension in which data is relatively plentiful. Hence, a natural extension of (3) is to make s_t a shared latent random vector s_t ∼ N(0, Σ_s) taking values in R^k. The observation for dataset i at time t then has the conditional density p(x_it | s_t) = N(W_i s_t + μ_i, ρ_i² I), where the subject-specific mean μ_i allows for a non-zero mean and we assume subject-dependent isotropic noise covariance ρ_i² I. This is an extended multi-subject form of factor analysis, but in factor analysis one normally assumes Σ_s = I.
T
To form a joint model, let xTt = [x1t T . . . xmt T ], W T = [W1T . . . Wm
], ?T = [?T1 . . . ?Tm ], ? =
diag(?21 I, . . . , ?2m I), ? N (0, ?), and ?x = W ?s W T + ?. Then
xt = W st + ? + ,
(4)
with xt ? N (?, ?x ) taking values in Rmv . For this joint model, we formulate SRM as:
?s
st ? N (0, ?s ),
xit |st ? N (Wi st +
WiT Wi
= Ik ,
?i , ?2i I),
st
Wi , ?i , ?i
(5)
xit
d
Figure 2: Graphical model
for SRM. Shaded nodes: observations, unshaded nodes:
latent variables, and black
squares: hyperparameters.
m
where st takes values in Rk , xit takes values in Rv , and the hyperparameters Wi are matrices in
Rv?k , i = 1:m. The latent variable st , with covariance ?s , models a shared elicited response across
the subjects at time t. By applying the same orthogonal transform to each of the Wi , we can assume,
without loss of generality, that ?s is diagonal. The SRM graphical model is displayed in Fig. 2.
3
3.1
Parameter Estimation for SRM
To estimate the parameters of the SRM model we apply a constrained EM algorithm to find maximum likelihood solutions. Let ? denote the vector of all parameters. In the E-step, given initial
value or estimated value ?old from the previous M-step, we calculate the sufficient statistics by taking expectation with respect to p(st |xt , ?old ):
Es|x [st ] = (W ?s )T (W ?s W T + ?)?1 (xt ? ?),
Es|x [st sTt ]
(6)
T
= Vars|x [st ] + Es|x [st ]Es|x [st ]
= ?s ? ?Ts W T (W ?s W T + ?)?1 W ?s + Es|x [st ]Es|x [st ]T .
(7)
In the M-step, we update the parameter estimate to ?new by maximizing Q with respect to Wi , ?i ,
?2i , i = 1:m, and ?s . This is given by ?new = arg max? Q(?, ?old ), where
Pd R
Q(?, ?old ) = d1 t=1 p(st |xt , ?old ) log p(xt , st |?)dst .
Due to the model structure, Q can be maximized with respect to each parameter separately. To
enforce the orthogonality of Wi , we bring a symmetric matrix ?i of Lagrange multipliers and add
the constraint term tr(?i (WiT Wi ? I)) to the objective function. Setting the derivatives of the
modified objective to zero, we obtain the following update equations:
P
?new
= d1 t xit ,
(8)
i
P
1
new
T
new
T
?1/2
,
(9)
Wi = Ai (Ai Ai )
, Ai = 2
t (xit ? ?i )Es|x [st ]
P
1
2 new
new 2
new T
new
T
?i
= dv t kxit ? ?i k ? 2(xit ? ?i ) Wi Es|x [st ] + tr(Es|x [st st ]) ,
(10)
P
1
T
new
(11)
?s = d t (Es|x [st st ]).
The orthonormal constraint WiT Wi = Ik in SRM is similar to that of PCA. In general, there is no
reason to believe that key brain response patterns are orthogonal. So, the orthonormal bases found
via SRM are a computational tool to aid statistical analysis within an ROI. From a computational
viewpoint, orthogonality has the advantage of robustness and preserving temporal geometry.
3.2
Connections with related methods
For one subject, SRM is similar to a variant of pPCA [15] that imposes an orthogonality constraint
on the loading matrix. pPCA yields an orthogonal loading matrix. However, due to the increase in
model complexity to handle multiple datasets, SRM has an explicit constraint of orthogonal loading
matrices. Topographic Factor Analysis (TFA) [16] is a factor model using a topographic basis composed of spherical Gaussians with different centers and widths. This choice of basis is constraining
but since each factor is an ?blob? in the brain it has the advantage of providing a simple spatial interpretation. Hyperalignment (HA) [10], learns a shared representational by rotating subjects? time
series responses to maximize inter-subject time series correlation. The formulation in [10] is based
on problem (2) with k = v and Wi a v ? v orthogonal matrix (Sup. Mat.). So this method does
not directly reduce the dimension of the feature space, nor does it directly extend to this case (see
Fig. 1). Although dimensionality reduction can be done posthoc using PCA, [10] shows that this
doesn?t lead to performance improvement. In contrast, we show in ?4 that selecting k v can
improve the performance of SRM beyond that attained by HA. The GICA, IVA algorithms [17] do
not assume time-synchronized stimulus and hence concatenate data along the time dimension (implying spatial consistency) and learn spatial independent components. We use the assumption of a
time-synchronized stimulus for anchoring the shared response to overcome a spatial mismatch in
functional topographies. Finally, SRM can be regarded as a refinement of the concept of hyperalignment [10] cast into a probabilistic framework. The HA approach has connections with regularized
CCA [18]. Additional details of these connections and connections with Canonical Correlation
Analysis (CCA) [19], ridge regression, Independent Component Analysis (ICA) [20], regularized
Hyperalignment [18] are discussed in the supplementary material.
4
Experiments
We assess the performance and robustness of SRM using fMRI datasets (Table 1) collected using
different MRI machines, subjects, and preprocessing pipelines. The sherlock dataset was collected
4
Dataset
Subjs TRs (s/TR)
Region of interest (ROI)
Voxels
sherlock (audio-visual movie) [21]
16
1976 (2)
posterior medial cortex (PMC) [22]
813
raider (audio-visual movie) [10]
10
2203 (3)
ventral temporal cortex (VT) [23]
500/H
forrest (audio movie) [24]
18
3599 (2)
planum temporale (PT) [25]
1300/H
audiobook (narrated story) [26]
40
449 (2)
default mode network (DMN) [27] 2500/H
Table 1: fMRI datasets are shown in the left four columns, and the ROIs are shown in right two columns. The
ROIs vary in functional from visual, language, memory, to mental states. H stands for hemisphere.
while subjects watched an episode of the BBC TV series ?Sherlock? (66 mins).The raider dataset
was collected while subjects viewed the movie ?Raiders of the Lost Ark? (110 mins) and a series
of still images (7 categories, 8 runs). The forrest dataset was collected while subjects listened to
an auditory version of the film ?Forrest Gump? (120 mins). The audiobook dataset was collected
while subjects listened to a narrated story (15 mins) with two possible interpretations. Half of the
subjects had a prior context favoring one interpretation, the other half had a prior context favoring
the other interpretation. Post scanning questionnaires showed no difference in comprehension but a
significant difference in interpretations between groups.
Experiment 1: SRM and spatial smoothing. We first use spatial smoothing to determine if we can
detect a shared response in PMC for the sherlock dataset. The subjects are randomly partitioned into
two equal sized groups, the data for each group is averaged, we calculate the Pearson correlation
over voxels between these averaged responses for each time, then average these correlations over
time. This is a measure of similarity of the sequence of brain maps in the two average responses. We
repeat this for five random subject divisions and average the results. If there is a shared response,
we expect a positive average correlation between the groups, but if functional topographies differ
significantly across subjects, this correlation may be small. If the result not distinct from zero, a
shared response is not detected. The computation yields the benchmark value 0.26 ? 0.006 shown
as the purple bar in the right plot in Fig. 3. This is support for a shared response in PMC, but we posit
that the subject?s functional topographies in PMC are misaligned. To test this, we use a Gaussian
filter, with width at half height of 3, 4, 5 and 6mm, to spatially smooth each subject?s fMRI data,
then recalculate the average Pearson correlation as described above. The results, shown as blue
bars in Fig. 3, indicate higher correlations with greater spatial smoothing. This indicates greater
average correlation of the responses at lower spatial frequencies, suggesting a fine scale mismatch
of functional topographies across subjects.
We now test the robustness of SRM using the unsmoothed data. The subjects are randomly partitioned into two equal sized groups. The data in each group is divided in time into two halves, and the
same half in each group is used to learn a shared response model for the group. The independently
obtained group templates S1 , S2 , are then registered using a k ? k orthogonal matrix Q (method
outlined in ?2). For each group, the second half of the data is projected to feature space using the
subject-specific bases and averaged. Then the Pearson correlation over features is calculated between the group averaged shared responses, and averaged over time. This is repeated using the other
the halves of the subject?s data for training and the results are averaged. The average results over 5
random subject divisions are report as the green bars in Fig. 3. With k = 813 there is no reduction
of dimension and SRM achieves a correlation equivalent to 6mm spatial smoothing. This strong
average correlation between groups, suggests some form of shared response. As expected, if the
dimension of the feature space k is reduced, the correlation increases. A smaller value of k, forces
Learning Subject Specific Bases
half
movie
data
subject
1
learn
bases
?
subject
m
Wm
W1
shared
Group 1
response
half
movie
data
learn
bases
Q
subject
1
?
subject
m
Wm
W1
shared
Group 2
response
Computing Correlation between Groups
half
movie
data
subject
1
project
to bases
?
subject
m
Wm QT
W1 QT
shared
R
Group 1
response
subject
1
?
subject
m
W1
Wm
shared
Group 2
response
Figure 3: Experiment 1. Left: Learn using half of the data, then compute between group correlation on other
half. Right: Pearson correlation after spatial smoothing, and SRM with various k. Error bars: ?1 stand. error.
5
Learning Subject Specific Bases
movie
data
subject 1
?
subject m-1
W1
learn bases
Wm
TR
subject m
movie
1
Wm
1
2
?
9
?
train
output
subject m-1
Wm
?
classifier
?
t
+8
t
+9
?
subject m-1
subject m
Wm
1
subject m
testing
subject?s
projected
response
seg
1
seg
2
?
seg
t-9
shared
response
seg
1
seg
2
?
seg
t-9
?
seg
t
?
subject 1
t
+1
?
W1
t
subject 1
t
-1
Segment t-9
overlapping segment
testing segment
overlapping segment
Segment t+9
Testing on Held-out Subject
project to
bases
projected
data
?
?
shared response
(template)
image data/
movie data
10
segment 1
segment 2
Seg
t+9
?
Seg
t+9
?
test
overlapping
seg. excluded
seg
t
overlapping
seg. excluded
Figure 4: Left: Experiment 2. Learn subject specific bases. Test on held out subject and data.
Right: Experiment 2. Time segment matching by correlating with 9 TR segments in the shared response.
Figure 5: Experiment 2. Top: Comparison of 18s time segment classification on three datasets using distinct
ROIs. Bottom: (Left) SRM time segment classification accuracy vs k. (Right) Learn bases from movie response, classify stimulus category using still image response. For raider and forrest, we conduct experiment
on ROI in each hemisphere separately and then average the results. For sherlock, we conduct experiment over
whole PMC. The TAL results for the raider dataset are from [10]. Error bars: ?1 stand. error.
SRM to focus on shared features yielding the best data representation and gives greater noise rejection. Learning 50 features achieves a 33% higher average correlation in feature space than is
achieved by 6mm spatial smoothing in voxel space. A commensurate improvement occurs when
SRM is applied to the spatially smoothed data.
Experiment 2 : Time segment matching and image classification. We test if the shared response
estimated by SRM generalizes to new subjects and new data using versions of two experiments from
[10] (unlike in [10], here the held out subject is not included in learning phase). The first experiment
tests if an 18s time segment from a held-out subject?s new data can be located in the corresponding
new data of the training subjects. A shared response and subject specific bases are learned using half
of the data, and the held out subject?s basis is estimated using the shared response as a template. Then
a random 18s test segment from the unused half of the held out subject?s data is projected onto the
subject?s basis, and we locate the 18s segment in the averaged shared response of the other subject?s
new data that is maximally correlated with the test segment (see Fig. 4). The held out subject?s
test segment is correctly located (matched) if its correlation with the average shared response at
the same time point is the highest; segments overlapping with the test segment are excluded. We
record the average accuracy and standard error by two-fold cross-validation over the data halves
and leave-one-out over subjects. The results using three different fMRI datasets with distinct ROIs
are shown in the top plot of Fig. 5. The accuracy is compared using: anatomical alignment (MNI
[4], Talairach (TAL) [1]); standard PCA, and ICA feature selection (FastICA implementation [20]);
the Hyperalignment (HA) method [10]; and SRM. PCA and ICA are directly applied on joint data
T
T
matrix X T = [X1T . . . Xm
] for learning W and S, where X ? W S and W T = [W1T . . . Wm
]. SRM
demonstrates the best matching of the estimated shared temporal features of the methods tested. This
suggests that the learned shared response is more informative of the shared brain state trajectory at
an 18s time scale. Moreover, the experiment verifies generalization of the estimated shared features
to subjects not included in the training phase and new (but similar) data collected during the other
half of the movie stimulus. Since we expect accuracy to improve as the time segment is lengthened,
6
Group 1!
(c)!
?!
subj m!
subj 1!
Group 2!
?!
TR!
shared response across all subjects!
in voxel space (k1)!
TR!
residual!
shared !
by all!
shared response within group!
in voxel space (k2)!
voxel!
test!
train!
train!
train!
group classification!
accuracy with SRM!
0.72?0.06!
k1 = 3
0.54?0.06!
(c)!
k1 = 3
0.70?0.04!
group 1!
(d)!
shared response within group!
in voxel space (k2)!
test!
individual!
(b)!
group 2!
group 1!
TR!
(d)!
within group!
(a)!
subj m!
voxel!
(b)!
subj 1!
original data!
voxel!
(a)!
voxel!
TR!
train!
group 2!
k1 = 0
k2 = 100 0.72?0.04!
k1 = 10
k2 = 100 0.82?0.04!
Fig. 6.2
Fig. 6.1
Fig. 6.3
Figure 6: Experiment 3. Fig. 6.1: Experimental procedure. Fig 6.2: Data components (left) and group
classification performance with SRM (right) in different steps of the procedure. Fig. 6.3: Group classification
on audiobook dataset in DMN before and after removing an estimated shared response for various values of k1
and k2 with SRM, PCA and ICA. Error bars: ?1 stand. error.
what is important is the relative accuracy of the compared methods. The method in (1) can be viewed
as non-probabilistic SRM. In this experiment, it performs worse than SRM but better than the other
compared methods. The effect of the number of features used in SRM is shown in Fig. 5, lower left.
This can be used to select k. A similar test on the number of features used in PCA and ICA indicates
lower performance than SRM (results not shown).
We now use the image viewing data and the movie data from the raider dataset to test the generalizability of a learned shared response to a held-out subject and new data under a very distinct stimulus.
The raider movie data is used to learn a shared response model, while excluding a held-out subject.
The held-out subject?s basis is estimated by matching its movie response data to the estimated shared
response. The effectiveness of the learned bases is then tested using the image viewing dataset [10].
After projecting the image data using the subject bases to feature space, an SVM classifier is trained
and the average classifier accuracy and standard error is recorded by leave-one-out across subject
testing. The results, lower right plot in Fig. 5, support the effectiveness of SRM in generalizing to
a new subject and a distinct new stimulus. Under SRM, the image stimuli can be slightly more accurately identified using other subjects? data for training than using a subject?s own data, indicating
that the learned shared response is informative of image category.
Experiment 3: Differentiating between groups. Now consider the audiobook dataset and the
DMN ROI. If subjects are given group labels according to the two prior contexts, a linear SVM
classifier trained on labeled voxel space data and tested on the voxel space data of held out subjects,
can distinguish the two groups at an above chance level. This is shown as the leftmost bar in the
bottom figure of Fig. 6.3. This is consistent with previous similar studies [28].
We test if SRM can distinguish the two subject groups with a higher rate of success. To do so we use
g1
g2
the procedure outlined in rows of Fig. 6.1. We first use the original data X1:m
, X1:m
of all subjects
all
(Fig. 6.1 (a)) to learn a k1 -dimensional shared response S all and subject bases Wgj,1:m
. This shared
response is then mapped to voxel space using each subject?s learned topography (Fig. 6.1 (b)) and
all
subtracted from the subject?s data to form the residual response Xigj ?Wgj,i
S all for subject i in group
j (Fig. 6.1 (c)). Leaving out one subject from each group, we use two within-group applications of
g1
g2
, W1:m
SRM to find k2 -dimensional within-group shared responses S g1 , S g2 , and subject bases W1:m
7
for the residual response. These are mapped into voxel space Wigj S gj for each subject (Fig. 6.1 (d)).
The first application of SRM yields an estimate of the response shared by all subjects. This is used
to form the residual response. The subsequent within-group applications of SRM to the residual give
estimates of the within-group shared residual response. Both applications of SRM seek to remove
components of the original response that are uninformative of group membership. Finally, a linear
SVM classifier is trained using the voxel space group-labeled data, and tested on the voxel space
data of held out subjects. The results are shown as the red bars in Fig. 6.3. When using k1 = 10 and
k2 = 100, we observe significant improvement in distinguishing the groups.
One can visualize why this works using the cartoon in Fig. 6.2 showing the data for one subject
modeled as the sum of three components: the response shared by all subjects, the response shared
by subjects in the same group after the response shared by all subjects is removed, and a final residual
term called the individual response (Fig. 6.2(a)). We first identify the response shared by all subjects
(Fig. 6.2(b)); subtracting this from the subject response gives the residual (Fig. 6.2(c)). The second
within-group application of SRM removes the individual response (Fig. 6.2(d)). By tuning k1 in the
first application of SRM and tuning k2 in the second application of SRM, we estimate and remove
the uninformative components while keeping the informative component.
Classification using the estimated shared response (k1 ? 10) results in accuracy around chance
(Fig. 6.2(b)), indicating that it is uninformative for distinguishing the groups. The classification
accuracy using the residual response is statistically equivalent to using the original data (Fig. 6.2(c)),
indicating that only removing the response shared by all subjects is insufficient for improvement.
The classification accuracy that results by not removing the shared response (k1 = 0) and only
applying within-group SRM (Fig. 6.2(d)), is also statistically equivalent to using the original data.
This indicates that only removing the individual response is also insufficient for improvement. By
combining both applications of SRM we remove both the response shared by all subjects and the
individual responses, keeping only the responses shared within groups. For k1 = 10, k2 = 100, this
leads to significant improvement in performance (Fig. 6.2(d) and Fig. 6.3).
We performed the same experiment using PCA and ICA (Fig. 6.3). In this case, after removing the
estimated shared response (k1 ? 1) group identification quickly drops to chance since the shared
response is informative of group difference (around 70% accuracy for distinguishing the groups
(Sup. Mat.)). A detailed comparison of all three methods on the different steps of the procedure is
given in the supplementary material.
5
Discussion and Conclusion
The vast majority of fMRI studies require aggregation of data across individuals. By identifying
shared responses between the brains of different individuals, our model enhances fMRI analyses that
use aggregated data to evaluate cognitive states. A key attribute of SRM is its built-in dimensionality
reduction leading to a reduced-dimension shared feature space. We have shown that by tuning this
dimensionality, the data-driven aggregation achieved by SRM demonstrates higher sensitivity in
distinguishing multivariate functional responses across cognitive states. This was shown across
a variety of datasets and anatomical brain regions of interest. This also opens the door for the
identification of shared and individual responses. The identification of shared responses after SRM
is of great interest, as it allows us to assess the degree to which functional topography is shared
across subjects. Furthermore, the SRM allows the detection of group specific responses. This was
demonstrated by removing an estimated shared response to increase sensitivity in detecting group
differences. We posit that this technique can be adapted to examine an array of situations where
group differences are the key experimental variable. The method can facilitate studies of how neural
representations are influenced by cognitive manipulations or by factors such as genetics, clinical
disorders, and development.
Successful decoding of a particular cognitive state (such as a stimulus category) in a given brain area
provides evidence that information relevant to that cognitive state is present in the neural activity of
that brain area. Conducting such analyses in locations spanning the brain, e.g., using a searchlight
approach, can facilitate the discovery of information pathways. In addition, comparison of decoding
accuracies between searchlights can suggest what kind of information is present and where it is
concentrated in the brain. SRM provides a more sensitive method for conducting such investigations.
This may also have direct application in designing better noninvasive brain-computer interfaces [29].
8
References
[1] J. Talairach and P. Tournoux. Co-planar stereotaxic atlas of the human brain. 3-Dimensional proportional
system: an approach to cerebral imaging. Thieme, 1988.
[2] J.D.G. Watson, R. Myers, et al. Area V5 of the human brain: evidence from a combined study using
positron emission tomography and magnetic resonance imaging. Cereb. Cortex, 3:79?94, 1993.
[3] R. B. H. Tootell, J. B. Reppas, et al. Visual motion aftereffect in human cortical area MT revealed by
functional magnetic resonance imaging. Nature, 375(6527):139?141, 05 1995.
[4] J. Mazziotta, A. Toga, et al. A probabilistic atlas and reference system for the human brain. Philosophical
Transactions of the Royal Society B: Biological Sciences, 356(1412):1293?1322, 2001.
[5] B. Fischl, M. I. Sereno, R.B.H. Tootell, and A. M. Dale. High-resolution intersubject averaging and a
coordinate system for the cortical surface. Human brain mapping, 8(4):272?284, 1999.
[6] M. Brett, I. S. Johnsrude, and A. M. Owen. The problem of functional localization in the human brain.
Nat Rev Neurosci, 3(3):243?249, 03 2002.
[7] M. R. Sabuncu, B. D. Singer, B. Conroy, R. E. Bryan, P. J. Ramadge, and J. V. Haxby. Function-based
intersubject alignment of human cortical anatomy. Cerebral Cortex, 20(1):130?140, 2010.
[8] B. R. Conroy, B. D. Singer, J. V. Haxby, and P. J. Ramadge. fMRI-based inter-subject cortical alignment
using functional connectivity. In Advances in Neural Information Processing Systems, 2009.
[9] B. R. Conroy, B. D. Singer, J. S. Guntupalli, P. J. Ramadge, and J. V. Haxby. Inter-subject alignment of
human cortical anatomy using functional connectivity. NeuroImage, 2013.
[10] J. V. Haxby, J. S. Guntupalli, et al. A common, high-dimensional model of the representational space in
human ventral temporal cortex. Neuron, 72(2):404?416, 2011.
[11] A. Lorbert and P. J. Ramadge. Kernel hyperalignment. In Adv. in Neural Inform. Proc. Systems, 2012.
[12] A. G. Huth, T. L. Griffiths, F. E. Theunissen, and J. L. Gallant. PrAGMATiC: a Probabilistic and Generative Model of Areas Tiling the Cortex. ArXiv 1504.03622, 2015.
[13] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge university press, 2012.
[14] A. Edelman, T. A. Arias, and S. T Smith. The geometry of algorithms with orthogonality constraints.
SIAM journal on Matrix Analysis and Applications, 20(2):303?353, 1998.
[15] J.-H. Ahn and J.-H. Oh. A constrained EM algorithm for principal component analysis. Neural Computation, 15(1):57?65, 2003.
[16] J. R. Manning, R. Ranganath, K. A. Norman, and D. M. Blei. Topographic factor analysis: a bayesian
model for inferring brain networks from neural data. PLoS One, 9(5):e94914, 2014.
[17] A. M. Michael, M. Anderson, et al. Preserving subject variability in group fMRI analysis: performance
evaluation of GICA vs. IVA. Frontiers in systems neuroscience, 8, 2014.
[18] H. Xu, A. Lorbert, P. J. Ramadge, J. S. Guntupalli, and J. V. Haxby. Regularized hyperalignment of
multi-set fMRI data. In Proc. Statistical Signal Processing Workshop, pages 229?232. IEEE, 2012.
[19] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3-4):321?377, 1936.
[20] A. Hyv?rinen, J. Karhunen, and E. Oja. Independent component analysis. John Wiley & Sons, 2004.
[21] J. Chen, Y. C. Leong, K. A. Norman, and U. Hasson. Reinstatement of neural patterns during narrative
free recall. Abstracts of the Cognitive Neuroscience Society, 2014.
[22] D. S. Margulies, J. L. Vincent, and et al. Precuneus shares intrinsic functional architecture in humans and
monkeys. Proceedings of the National Academy of Sciences, 106(47):20069?20074, 2009.
[23] J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2001.
[24] M. Hanke, F. J. Baumgartner, et al. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Scientific Data, 1, 2014.
[25] T. D. Griffiths and J. D. Warren. The planum temporale as a computational hub. Trends in neurosciences,
25(7):348?353, 2002.
[26] Y. Yeshurun, S. Swanson, J. Chen, E. Simony, C. Honey, P. C. Lazaridi, and U. Hasson. How does the
brain represent different ways of understanding the same story? Society for Neuroscience Abstracts, 2014.
[27] M. E. Raichle. The brain?s default mode network. Annual Review of Neuroscience, 38(1), 2015.
[28] D. L. Ames, C. J. Honey, M. Chow, A. Todorov, and U. Hasson. Contextual alignment of cognitive and
neural dynamics. Journal of cognitive neuroscience, 2014.
[29] M.T. deBettencourt, J. D. Cohen, R. F. Lee, K. A. Norman, and N. B. Turk-Browne. Closed-loop training
of attention with real-time brain imaging. Nature neuroscience, 18(3):470?475, 2015.
9
| 5855 |@word version:2 mri:2 loading:3 stronger:1 norm:2 open:2 hyv:1 seek:1 covariance:2 tr:9 reduction:3 moment:1 plentiful:1 series:7 initial:2 selecting:2 existing:2 contextual:1 si:2 must:1 john:1 subsequent:2 concatenate:1 blur:1 informative:5 haxby:6 remove:4 plot:4 atlas:2 update:3 medial:1 v:2 implying:1 generative:2 selected:2 half:16 drop:1 isotropic:1 positron:1 smith:1 record:1 blei:1 mental:1 provides:3 detecting:1 contribute:1 node:2 location:1 precuneus:1 ames:1 raider:9 five:1 height:1 along:1 direct:1 ik:6 edelman:1 pathway:2 introduce:1 inter:5 expected:1 ica:6 nor:1 examine:1 multi:6 brain:25 anchoring:1 decreasing:2 spherical:1 becomes:1 begin:1 estimating:2 moreover:3 project:3 matched:1 brett:1 furey:1 what:5 kind:1 thieme:1 monkey:1 iva:2 finding:3 temporal:6 honey:2 biometrika:1 demonstrates:3 classifier:5 k2:9 utilization:1 normally:1 xmt:1 before:2 t1:1 engineering:1 aggregating:1 local:1 referenced:1 sd:1 positive:1 might:1 black:1 suggests:2 shaded:1 misaligned:1 co:1 ramadge:5 range:1 statistically:2 averaged:7 unique:1 horn:1 testing:5 lost:1 narrated:2 procedure:4 dmn:3 area:5 significantly:1 projection:2 matching:4 griffith:2 suggest:1 onto:2 selection:1 tootell:2 context:3 applying:2 equivalent:4 unshaded:1 hyperalignment:6 center:2 maximizing:1 map:1 demonstrated:1 attention:1 independently:2 formulate:1 wit:8 hsuan:1 identifying:3 disorder:1 resolution:2 chen2:1 q:1 array:1 regarded:1 orthonormal:4 d1:2 oh:1 handle:1 coordinate:4 pt:1 today:1 rinen:1 distinguishing:4 designing:1 trend:1 ks2:1 located:2 ark:1 labeled:2 theunissen:1 observed:1 bottom:2 electrical:1 capture:3 solved:2 thousand:1 calculate:2 region:4 wj:4 ensures:1 recalculate:1 episode:1 seg:12 adv:1 decrease:1 highest:1 removed:1 plo:1 pd:1 questionnaire:1 ui:2 complexity:1 dynamic:1 trained:3 segment:20 localization:1 bbc:1 division:2 basis:9 po:1 easily:1 kxj:1 joint:3 yeshurun:1 various:4 train:6 distinct:7 fast:1 effective:1 detected:1 aggregate:1 outcome:1 pearson:4 supplementary:2 solve:1 film:1 ability:1 statistic:2 eachp:1 topographic:3 g1:3 think:2 transform:1 noisy:1 final:1 advantage:2 blob:1 sequence:1 myers:1 propose:2 subtracting:1 aligned:1 combining:1 relevant:1 loop:1 representational:2 academy:1 frobenius:1 x1t:2 assessing:1 extending:1 leave:2 object:1 help:1 develop:1 radical:1 measured:1 qt:3 intersubject:2 strong:1 indicate:1 synchronized:3 differ:1 posit:2 anatomy:3 attribute:1 filter:1 centered:1 human:11 viewing:2 material:2 require:1 generalization:2 preliminary:1 investigation:1 biological:1 comprehension:1 extension:1 frontier:1 mm:3 sufficiently:1 stt:1 around:2 roi:12 great:1 mapping:5 visualize:1 claim:1 major:1 achieves:2 vary:2 ventral:4 uniqueness:1 estimation:3 proc:2 narrative:1 label:1 guntupalli:3 si0:1 sensitive:1 repetition:1 tool:1 gaussian:1 modified:1 xit:15 focus:1 emission:1 improvement:6 likelihood:3 indicates:3 contrast:3 detect:1 dependent:1 stopping:1 membership:1 chow:1 relation:1 favoring:2 raichle:1 arg:1 among:1 classification:11 development:1 resonance:2 spatial:13 smoothing:7 constrained:2 equal:3 cartoon:1 identical:1 k2f:1 fmri:23 report:1 stimulus:14 few:2 modern:1 randomly:2 oja:1 composed:1 national:1 individual:9 phase:5 geometry:2 detection:2 interest:5 evaluation:1 alignment:7 yielding:1 held:13 necessary:1 tfa:1 skf:3 orthogonal:8 conduct:2 old:5 rotating:1 planum:2 psychological:1 instance:1 column:5 modeling:3 classify:1 reinstatement:1 subset:2 srm:51 successful:2 fastica:1 johnson:1 listened:2 ishai:1 scanning:1 
generalizability:1 kxi:3 combined:1 st:34 density:1 sensitivity:4 siam:1 probabilistic:7 lee:1 decoding:2 michael:1 quickly:1 connectivity:3 squared:1 w1:9 satisfied:1 gump:1 recorded:1 worse:1 cognitive:9 derivative:1 leading:1 account:1 suggesting:1 sec:2 register:1 toga:1 vi:2 performed:1 closed:1 sup:4 red:1 wm:10 aggregation:5 denoised:1 elicited:1 temporale:2 hanke:1 contribution:1 minimize:1 square:1 wgj:2 accuracy:17 ass:2 purple:1 conducting:2 maximized:1 yield:4 identify:2 generalize:1 bayesian:2 identification:3 iterated:1 accurately:1 vincent:1 trajectory:1 lorbert:2 influenced:1 inform:1 frequency:1 turk:1 james:1 ppca:2 dataset:16 experimenter:1 auditory:1 recall:1 knowledge:1 dimensionality:3 back:1 attained:1 higher:4 planar:1 response:95 improved:2 maximally:1 formulation:1 done:2 generality:3 furthermore:2 anderson:1 correlation:20 until:1 ei:4 overlapping:6 mode:2 mazziotta:1 scientific:1 believe:1 facilitate:2 effect:1 validity:2 concept:1 multiplier:1 norman:3 adequately:1 hence:4 analytically:1 spatially:2 symmetric:1 iteratively:2 excluded:3 illustrated:2 during:3 width:2 criterion:1 manifestation:1 leftmost:1 ridge:1 cereb:1 performs:1 motion:1 bring:1 interface:1 stiefel:1 sabuncu:1 image:11 regularizing:1 recently:1 common:1 srms:1 functional:22 mt:1 stimulation:1 cohen:1 cerebral:2 extend:2 xi0:3 interpretation:5 discussed:1 refer:1 significant:3 cambridge:1 ai:4 tuning:3 consistency:1 outlined:2 similarly:1 language:1 had:2 cortex:8 similarity:1 gj:1 surface:1 align:1 base:22 add:3 ahn:1 multivariate:4 posterior:1 showed:1 own:1 optimizing:1 hemisphere:2 driven:1 manipulation:1 success:1 watson:1 vt:2 posthoc:1 captured:1 preserving:2 additional:1 greater:3 determine:2 maximize:2 aggregated:1 signal:1 rmv:1 resolving:1 multiple:4 rv:9 smooth:1 match:1 cross:1 long:1 clinical:1 divided:1 rameter:1 post:1 watched:1 variant:1 regression:1 expectation:1 arxiv:1 iteration:2 kernel:1 represent:1 achieved:2 whereas:1 uninformative:4 separately:2 fine:1 addition:1 leaving:1 unlike:1 kwi:1 subject:136 effectiveness:3 call:1 noting:1 unused:1 leong:1 constraining:1 door:1 decent:1 revealed:1 todorov:1 variety:3 xj:2 psychology:1 variate:1 dartmouth:1 identified:2 architecture:1 browne:1 reduce:1 tm:1 classficiation:1 pca:7 baumgartner:1 peter:1 useful:1 detailed:1 concentrated:1 tomography:1 category:4 reduced:3 canonical:1 neuroscience:9 estimated:11 correctly:1 bryan:1 anatomical:7 blue:1 fischl:1 hyperparameter:1 mat:4 group:57 key:3 four:1 registration:1 imaging:5 vast:1 sum:1 run:1 dst:1 place:2 forrest:4 cca:2 followed:1 distinguish:2 fold:1 trs:3 mni:1 activity:1 adapted:1 hasson:3 subj:4 orthogonality:4 constraint:5 annual:1 tal:2 aspect:3 min:7 span:1 unsmoothed:1 relatively:1 department:3 tv:1 according:1 manning:1 across:20 smaller:1 increasingly:1 em:2 slightly:1 wi:50 partitioned:2 son:1 rev:1 s1:5 anatomically:1 explained:1 dv:1 projecting:1 pipeline:1 equation:1 aftereffect:1 count:1 singer:3 tiling:1 available:1 gaussians:1 generalizes:1 apply:1 observe:1 v2:2 enforce:1 magnetic:2 hotelling:1 subtracted:1 robustness:4 pietrini:1 original:6 denotes:1 assumes:2 ensure:1 top:2 graphical:2 johnsrude:1 k1:13 society:3 warping:1 objective:6 added:2 v5:1 occurs:1 gobbini:1 diagonal:1 enhances:1 gradient:1 separate:1 mapped:3 majority:1 manifold:1 collected:8 reason:1 spanning:1 swanson:1 modeled:1 insufficient:2 providing:1 demonstration:1 ramadge1:1 minimizing:2 pmc:5 subproblems:1 negative:1 huth:1 implementation:1 perform:1 tournoux:1 gallant:1 observation:4 
neuron:1 datasets:11 commensurate:1 benchmark:1 t:1 displayed:1 situation:2 extended:1 excluding:1 precise:1 variability:1 locate:1 smoothed:1 unmodeled:1 princeton:3 lengthened:1 cast:1 searchlight:2 connection:4 philosophical:1 learned:8 registered:2 chen1:1 conroy:3 beyond:1 bar:9 below:1 pattern:4 usually:1 mismatch:2 xm:1 appeared:1 sherlock:5 royal:1 max:1 memory:1 green:1 built:1 power:1 critical:3 natural:3 force:1 regularized:3 residual:9 sk2f:2 representing:1 improve:3 movie:16 prior:4 voxels:7 discovery:1 understanding:1 kf:1 xtt:1 review:1 relative:1 loss:1 expect:2 topography:15 interesting:1 proportional:1 analogy:1 versus:1 var:1 validation:1 degree:1 gather:1 sufficient:2 consistent:1 imposes:1 viewpoint:1 story:3 share:1 row:1 genetics:1 repeat:1 keeping:2 free:1 schouten:1 warren:1 institute:1 wide:1 template:6 taking:3 face:1 differentiating:1 distributed:1 overcome:1 dimension:7 cortical:6 evaluating:1 default:2 rich:1 doesn:1 stand:4 calculated:1 noninvasive:1 refinement:1 projected:5 preprocessing:1 dale:1 employing:1 voxel:17 transaction:1 ranganath:1 correlating:1 xi:10 don:1 latent:6 why:1 table:2 learn:14 nature:3 complex:1 domain:1 diag:1 linearly:1 neurosci:1 s2:3 noise:2 hyperparameters:2 whole:1 sereno:1 w1t:2 verifies:1 repeated:1 tesla:1 x1:2 xu:1 fig:34 aid:1 wiley:1 neuroimage:1 inferring:1 explicit:1 learns:3 removing:7 rk:5 specific:12 xt:6 showing:1 hub:1 svm:3 evidence:2 incorporating:1 workshop:1 intrinsic:1 aria:1 nat:1 karhunen:1 uri:1 chen:2 rejection:1 generalizing:1 simply:1 forming:1 visual:4 lagrange:1 g2:3 talairach:2 chance:3 conditional:1 viewed:2 sized:2 wjt:1 twofold:1 shared:87 owen:1 change:1 included:2 averaging:3 denoising:1 principal:2 called:1 svd:1 e:10 experimental:2 indicating:3 select:3 college:1 pragmatic:1 support:2 evaluate:1 stereotaxic:1 audio:4 tested:4 correlated:1 |
5,364 | 5,856 | Attractor Network Dynamics Enable Preplay and
Rapid Path Planning in Maze?like Environments
Wulfram Gerstner
Laboratory of Computational Neuroscience
?
Ecole
Polytechnique F?ed?erale de Lausanne
CH-1015 Lausanne, Switzerland
[email protected]
Dane Corneil
Laboratory of Computational Neuroscience
?
Ecole
Polytechnique F?ed?erale de Lausanne
CH-1015 Lausanne, Switzerland
[email protected]
Abstract
Rodents navigating in a well?known environment can rapidly learn and revisit observed reward locations, often after a single trial. While the mechanism for rapid
path planning is unknown, the CA3 region in the hippocampus plays an important
role, and emerging evidence suggests that place cell activity during hippocampal ?preplay? periods may trace out future goal?directed trajectories. Here, we
show how a particular mapping of space allows for the immediate generation of
trajectories between arbitrary start and goal locations in an environment, based
only on the mapped representation of the goal. We show that this representation
can be implemented in a neural attractor network model, resulting in bump?like
activity profiles resembling those of the CA3 region of hippocampus. Neurons
tend to locally excite neurons with similar place field centers, while inhibiting
other neurons with distant place field centers, such that stable bumps of activity
can form at arbitrary locations in the environment. The network is initialized to
represent a point in the environment, then weakly stimulated with an input corresponding to an arbitrary goal location. We show that the resulting activity can
be interpreted as a gradient ascent on the value function induced by a reward at
the goal location. Indeed, in networks with large place fields, we show that the
network properties cause the bump to move smoothly from its initial location to
the goal, around obstacles or walls. Our results illustrate that an attractor network
with hippocampal?like attributes may be important for rapid path planning.
1
Introduction
While early human case studies revealed the importance of the hippocampus in episodic memory [1,
2], the discovery of ?place cells? in rats [3] established its role for spatial representation. Recent
results have further suggested that, along with these functions, the hippocampus is involved in active
spatial planning: experiments in ?one?shot learning? have revealed the critical role of the CA3
region [4, 5] and the intermediate hippocampus [6] in returning to goal locations that the animal has
seen only once. This poses the question of whether and how hippocampal dynamics could support
a representation of the current location, a representation of a goal, and the relation between the two.
In this article, we propose that a model of CA3 as a ?bump attractor? [7] can be be used for path
planning. The attractor map represents not only locations within the environment, but also the spatial
relationship between locations. In particular, broad activity profiles (like those found in intermediate
and ventral hippocampus [8]) can be viewed as a condensed map of a particular environment. The
planned path presents as rapid sequential activity from the current position to the goal location,
similar to the ?preplay? observed experimentally in hippocampal activity during navigation tasks [9,
10], including paths that require navigating around obstacles. In the model, the activity is produced
by supplying input to the network consistent with the sensory input that would be provided at the
1
goal site. Unlike other recent models of rapid goal learning and path planning [11, 12], there is
no backwards diffusion of a value signal from the goal to the current state during the learning or
planning process. Instead, the sequential activity results from the representation of space in the
attractor network, even in the presence of obstacles.
The recurrent structure in our model is derived from the ?successor representation? [13], which
represents space according to the number and length of paths connecting different locations. The
resulting network can be interpreted as an attractor manifold in a low?dimensional space, where the
dimensions correspond to weighted version of the most relevant eigenvectors of the environment?s
transition matrix. Such low?frequency functions have recently found support as a viable basis for
place cell activity [14?16]. We show that, when the attractor network operates in this basis and is
stimulated with a goal location, the network activity traces out a path to that goal. Thus, the bump
attractor network can act as a spatial path planning system as well as a spatial memory system.
2
The successor representation and path?finding
A key problem in reinforcement learning is assessing the value of a particular state, given the expected returns from that state in both the immediate and distant future. Several model?free algorithms exist for solving this task [17], but they are slow to adjust when the reward landscape is
rapidly changing. The successor representation, proposed by Dayan [13], addresses this issue.
Given a Markov chain described by the transition matrix P, where each element P (s, s0 ) gives the
probability of transitioning from state s to state s0 in a single time step; a reward vector r, where
each element r(s0 ) gives the expected immediate returns from state s0 ; and a discount factor ?, the
expected returns v from each state can be described by
v = r + ?Pr + ? 2 P2 r + ? 3 P3 r + . . .
?1
= (I ? ?P)
= Lr.
(1)
r
The successor representation L provides an efficient means of representing the state space according
to the expected (discounted) future occupancy of each state s0 , given that the chain is initialized from
state s. An agent employing a policy described by the matrix P can immediately update the value
function when the reward landscape r changes, without any further exploration.
The successor representation is particularly useful for representing many reward landscapes in the
same state space. Here we consider the set of reward functions where returns are confined to a single
state s0 ; i.e. r(s0 ) = ?s0 g where ? denotes the Kronecker delta function and the index g denotes a
particular goal state. From Eq. 1, we see that the value function is then given by the column s0
of the matrix L. Indeed, when we consider only a single goal, we can see the elements of L as
L(s, s0 ) = v(s|s0 = g). We will use this property to generate a spatial mapping that allows for a
rapid approximation of the shortest path between any two points in an environment.
2.1
Representing space using the successor representation
In the spatial navigation problems considered here, we assume that the animal has explored the environment sufficiently to learn its natural topology. We represent the relationship between locations
with a Gaussian affinity metric a: given states s(x, y) and s0 (x, y) in the 2D plane, their affinity is
0
0
a(s(x, y), s (x, y)) = a(s (x, y), s(x, y)) = exp
?d2
2?s2
(2)
where d is the length of the shortest traversable path between s and s0 , respecting walls and obstacles.
We define ? to be small enough that the metric is localized (Fig. 1) such that a(s(x, y), ?) resembles
a small bump in space, truncated by walls. Normalizing the affinity metric gives
a(s, s0 )
.
p(s, s0 ) = P
0
s0 a(s, s )
2
(3)
The normalized metric can be interpreted as a transition probability for an agent exploring the environment randomly. In this case, a spectral analysis of the successor representation [14, 18] gives
v(s|s0 = g) = ?(s0 )
n
X
(1 ? ??l )?1 ?l (s)?l (s0 )
(4)
l=0
where ?l are the right eigenvectors of the transition matrix P, 1 = |?0 | ? |?1 | ? |?2 | ? ? ? ?
|?n | are the eigenvalues [18], and ?(s0 ) denotes the steady?state occupancy of state s0 resulting
from P. Although the affinity metric is defined locally, large?scale features of the environment are
represented in the eigenvectors associated with the largest eigenvalues (Fig. 1).
We now express the position in the 2D space using a set of ?successor coordinates?, such that
s(x, y) 7? ?
s=
q
(1 ? ??0 )
?1
q
?0 (s),
(1 ? ??1 )
?1
?1 (s), . . . ,
q
(1 ? ??q )
?1
?q (s)
(5)
= (?0 (s), ?1 (s), . . . , ?q (s))
q
?1
where ?l = (1 ? ??l ) ?l . This is similar to the ?diffusion map? framework by Coifman and
Lafon [18]; with the useful property that, if q = n, the value of a given state when considering
a given goal is proportional to the scalar product of their respective mappings: v(s|s0 = g) =
?(s0 )h?
s, ?
s0 i. We will use this property to show how a network operating in the successor coordinate
space can rapidly generate prospective trajectories between arbitrary locations.
Note that the mapping can also be defined using the eigenvectors ?l of a related measure of the
space, the normalized graph Laplacian [19]. The eigenvectors ?l serve as the objective functions for
slow feature analysis [20], and approximations have been extracted through hierarchical slow feature
analysis on visual data [15, 16], where they have been used to generate place cell?like behaviour.
2.2
Path?finding using the successor coordinate mapping
Successor coordinates provide a means of mapping a set of locations in a 2D environment to a new
space based on the topology of the environment. In the new representation, the value landscape
is particularly simple. To move from a location ?
s towards a goal position ?
s0 , we can consider a
constrained gradient ascent procedure on the value landscape:
h
i
2
?
st+1 = arg min (?
s ? (?
st + ??v(?
st )))
(6)
?
?
s ?S
h
i
2
= arg min (?
s ? (?
st + ?
??
s0 ))
?
?
s ?S
where ?(s0 ) has been absorbed into the parameter ?
? . At each time step, the state closest to an
? In the
incremental ascent of the value gradient is selected amongst all states in the environment S.
following, we will consider how the step ?
st + ?
??
s0 can be approximated by a neural attractor network
acting in successor coordinate space.
Due to the properties of the transition matrix, ?0 is constant across the state space and does not
contribute
to the value gradient in Eq. 6. As such, we substituted a free parameter for the coefficient
p
(1 ? ??0 )?1 , which controlled the overall level of activity in the network simulations.
3
Encoding successor coordinates in an attractor network
The bump attractor network is a common model of place cell activity in the hippocampus [7, 21].
Neurons in the attractor network strongly excite other neurons with similar place field centers, and
weakly inhibit the neurons within the network with distant place field centers. As a result, the
network allows a stable bump of activity to form at an arbitrary location within the environment.
3
30
20
10
0
-10
-20
-30
-40
-50
-40
-30
-20
-10
0
10
20
30
Figure 1: [Left] A rat explores a maze?like environment and passively learns its topology. We assume a process such as hierarchical slow feature analysis, that preliminarily extracts slowly changing
functions in the environment (here, the vectors ?1 . . . ?q ). The vector ?1 for the maze is shown in
the top left. In practice, we extracted the vectors directly from a localized Gaussian transition function (bottom center, for an arbitrary location). [Right] This basis can be used to generate a value
map approximation over the environment for a given reward (goal) position and discount factor ?
(inset). Due to the walls, the function is highly discontinuous in the xy spatial dimensions. The
goal position is circled in white. In the scatter plot, the same array of states and value function are
shown in the first two non?trivial successor coordinate dimensions. In this space, the value function
is proportional to the scalar product between the states and the goal location. The grey and black
dots show corresponding states between the inset and the scatter plot.
Such networks typically represent a periodic (toroidal) environment [7, 21], using a local excitatory
weight profile that falls off exponentially. Here, we show how the spatial mapping of Eq. 5 can be
used to represent bounded environments with arbitrary obstacles. The resulting recurrent weights
induce stable firing fields that decrease with distance from the place field center, around walls and
obstacles, in a manner consistent with experimental observations [22]. In addition, the network
dynamics can be used to perform rapid path planning in the environment.
We will use the techniques introduced in the attractor network models by Eliasmith and Anderson
[23] to generalize the bump attractor. We first consider a purely feed?forward network, composed of
a population of neurons with place field centers scattered randomly throughout the environment. We
assume that the input is highly preprocessed, potentially by several layers of neuronal processing
in
(Fig. 1), and given directly by units k whose activities s?in
k (t) = ?k (s (t)) represent the input in the
successor coordinate dimensions introduced above. The activity ai of neuron i in response to the m
inputs s?in
k (t) can be described by
"m
#
X ff
dai (t)
in
?
= ?ai (t) + g
wik s?k (t)
dt
k=1
(7)
+
ff
where g is a gain factor, [?]+ represents a rectified linear function, and wik
are the feed?forward
weights. Each neuron is particularly responsive to a ?bump? in the environment given by its encoding vector ei = ||??ssii || , the normalized successor coordinates of a particular point in space, which
corresponds to its place field center. The input to neuron i in the network is then given by
ff
wik
= [ei ]k ,
m
X
f f in
wik
s?k (t) = ei ? ?
sin (t).
(8)
k=1
A neuron is therefore maximally active when the input coordinates are nearly parallel to its encoding
vector. Although we assume the input is given directly in the basis vectors ?l for convenience, a
neural encoding using an (over)complete basis based on a linear combination of the eigenvectors ?l
or ?l is also possible given a corresponding transformation in the feed?forward weights.
4
Figure 2: [Left] The attractor network structure for the maze?like environment in Fig. 1. The inputs
give a low?dimensional approximation of the successor coordinates of a point in space. The network
is composed of 500 neurons with encoding vectors representing states scattered randomly throughout the environment. Each neuron?s activation is proportional to the scalar product of its encoding
vector and the input, resulting in a large ?bump? of activity. Recurrent weights are generated using a
least?squares error decoding of the successor coordinates from the neural activities, projected back
on to the neural encoding vectors. [Right] The generated recurrent weights for the network. The
plot shows the incoming weights from each neuron to the unit at the circled position, where neurons
are plotted according to their place field centers.
If the input ?
sin (t) represents a location in the environment, a bump of activity forms in the network
(Fig. 2). These activities give a (non?linear) encoding of the input. Given the response properties of
the neurons, we can find a set of linear decoding weights dj that recovers an approximation of the
input given to the network from the neural activities [23]:
?
srec (t) =
n
X
dj ? aj (t).
(9)
j=1
These decoding weights dj were derived by minimizing the least?squares estimation error of a set
of example inputs from their resulting steady?state activities, where the example inputs correspond
to the successor coordinates of points evenly spaced throughout the environment. The minimization
can be performed by taking the Moore?Penrose pseudoinverse of the matrix of neural activities in
response to the example inputs (with singular values below a certain tolerance removed to avoid
overfitting). The vector dj therefore gives the contribution of aj (t) to a linear population code for
the input location.
rec
We now introduce the recurrent weights wij
to allow the network to maintain a memory of past
input in persistent activity. The recurrent weights are determined by projecting the decoded location
back on to the neuron encoding vectors such that
rec
wij
= (1 ? ) ? ei ? dj ,
n
X
(10)
rec
wij
aj (t) = (1 ? ) ? ei ? ?
srec (t).
j=1
Here, the factor 1 determines the timescale on which the network activity fades. Since the
encoding and decoding vectors for the same neuron tend to be similar, recurrent weights are highest
between neurons representing similar successor coordinates, and the weight profile decreases with
the distance between place field centers (Fig. 2). The full neuron?level description is given by
?
?
n
m
X
X
dai (t)
f f in
rec
?
= ?ai (t) + g ?
wij
aj (t) + ?
wik
s?k (t)?
dt
j=1
k=1
+
rec
in
= ?ai (t) + g ei ? (1 ? ) ? ?
s (t) + ? ? ?
s (t) +
5
(11)
where the ? parameter corresponds to the input strength. If we consider the estimate of ?
srec (t)
recovered from decoding the activities of the network, we arrive at the update equation
?
d?
srec (t)
? ? ??
sin (t) ? ? ?
srec (t).
dt
(12)
Given a location ?
sin (t) as an initial input, the recovered representation ?
srec (t) approximates the
input and reinforces it, allowing a persistent bump of activity to form. When ?
sin (t) then changes
to a new (goal) location, the input and recovered coordinates conflict. By Eq. 12, the recovered
location moves in the direction of the new input, giving us an approximation of the initial gradient
ascent step in Eq. 6 with the addition of a decay controlled by . As we will show, the attractor
dynamics typically cause the network activity to manifest as a movement of the bump towards the
goal location, through locations intermediate to the starting position and the goal (as observed in
experiments [9, 10]). After a short stimulation period, the network activity can be decoded to give a
state nearby the starting position that is closer to the goal. Note that, with no decay , the network
activity will tend to grow over time. To induce stable activity when the network representation
matches the goal position (?
srec (t) ? ?
sin (t)), we balanced the decay and input strength ( = ?).
In the following, we consider networks where the successor coordinate representation was truncated
to the first q dimensions, where q n. This was done because the network is composed of a limited
number of neurons, representing only the portion of the successor coordinate space corresponding
to actual locations in the environment. In a very high?dimensional space, the network can rapidly
move into a regime far from any actual locations, and the integration accuracy suffers. In effect, the
weight profiles and feed?forward activation profile become very narrow, and as a result the bump
of activity simply disappears from the original position and reappears at the goal. Conversely, low?
dimensional representations tend to result in broad excitatory weight profiles and activity profiles
(Fig. 2). The high degree of excitatory overlap across the network causes the activity profile to move
smoothly between distant points, as we will show.
4
Results
We generated attractor networks according to the layout of multiple environments containing walls
and obstacles, and stimulated them successively with arbitrary startpoints and goals. We used
n = 500 neurons to represent each environment, with place field centers selected randomly throughout the environment. The successor coordinates were generated using ? = 1. We adjusted q to
control the dimensionality of the representation. The network activity resembles a bump across a
portion of the environment (Fig. 3). Low?dimensional representations (low q) produced large activity bumps across significant portions of the environment; when a weak stimulus was provided at the
goal, the overall activity decreased while the center of the bump moved towards the goal through
the intervening areas of the environment. With a high?dimensional representation, activity bumps
became more localized, and shifted discontinuously to the goal (Fig. 3, bottom row).
For several networks representing different environments, we initialized the activity at points evenly spaced throughout the environment and provided weak feed-forward stimulation corresponding to a fixed goal location (Fig. 4). After a short delay (5τ), we decoded the successor coordinates from the network activity to determine the closest state (Eq. 6). The shifts in the network representation are shown by the arrows in Fig. 4. For two networks, we show the effect of different feed-forward stimuli representing different goal locations. The movement of the activity profile was similar to the shortest path towards the goal (Fig. 4, bottom left), including reversals at equidistant points (center bottom of the maze). Irregularities were still present, however, particularly near the edges of the environment and in the immediate vicinity of the goal (where high-frequency components play a larger role in determining the value gradient).
5 Discussion
We have presented a spatial bump attractor model generalized to represent environments with arbitrary obstacles, and shown how, with large activity profiles relative to the size of the environment, the network dynamics can be used for path-finding. This provides a possible correlate for goal-directed
Figure 3: Attractor network activities illustrated over time for different inputs and networks, in multiples of the membrane time constant τ. Purple boxes indicate the most active unit at each point in time. [First row] Activities are shown for a network representing a maze-like environment in a low-dimensional space (q = 5). The network was initially stimulated with a bump of activation representing the successor coordinates of the state at the black circle; recurrent connections maintain a similar yet fading profile over time. [Second row] For the same network and initial conditions, a weak constant stimulus was provided representing the successor coordinates at the grey circle; the activities transiently decrease and the center of the profile shifts over time through the environment. [Third row] Two positions (black and grey circles) were sequentially activated in a network representing a second environment in a low-dimensional space (q = 4). [Bottom row] For a higher-dimensional representation (q = 50), the activity profile fades rapidly and reappears at the stimulated position.
activity observed in the hippocampus [9, 10] and a hypothesis for the role that the hippocampus and the CA3 region play in rapid goal-directed navigation [4–6], as a complement to an additional (e.g. model-free) system enabling incremental goal learning in unfamiliar environments [4].
Recent theoretical work has linked the bump-like firing behaviour of place cells to an encoding of the environment based on its natural topology, including obstacles [22], and specifically to the successor representation [14]. As well, recent work has proposed that place cell behaviour can be learned by processing visual data using hierarchical slow feature analysis [15, 16], a process which can extract the lowest frequency eigenvectors of the graph Laplacian generated by the environment [20] and therefore provide a potential input for successor representation-based activity. We provide the first link between these theoretical analyses and attractor-based models of CA3.
Slow feature analysis has been proposed as a natural outcome of a plasticity rule based on Spike-Timing-Dependent Plasticity (STDP) [24], albeit on the timescale of a standard postsynaptic
Figure 4: Large-scale, low-dimensional attractor network activities can be decoded to determine local trajectories to long-distance goals. Arrows show the initial change in the location of the activity profile by determining the state closest to the decoded network activity (at t = 5τ) after weakly stimulating with the successor coordinates at the black dot (input strength and decay both set to 0.05). Pixels show the place field centers of the 500 neurons representing each environment, coloured according to their activity at the stimulated goal site. [Top left] Change in position of the activity profile in a maze-like environment with low-dimensional activity (q = 5) compared to [Bottom left] the true shortest path towards the goal at each point in the environment. [Additional plots] Various environments and stimulated goal sites using low-dimensional successor coordinate representations.
potential rather than the behavioural timescale we consider here. However, STDP can be extended to behavioural timescales when combined with sustained firing and slowly decaying potentials [25] of the type observed on the single-neuron level in the input pathway to CA3 [26], or as a result of network effects. Within the attractor network, learning could potentially be addressed by a rule that trains recurrent synapses to reproduce feed-forward inputs during exploration (e.g. [27]).
Our model assigns a key role to neurons with large place fields in generating long-distance goal-directed trajectories. This suggests that such trajectories in dorsal hippocampus (where place fields are much smaller [8]) must be inherited from dynamics in ventral or intermediate hippocampus. The model predicts that ablating the intermediate/ventral hippocampus [6] will result in a significant reduction in goal-directed preplay activity in the remaining dorsal region. In an intact hippocampus, the model predicts that long-distance goal-directed preplay in the dorsal hippocampus is preceded by preplay tracing a similar path in intermediate hippocampus. However, these large-scale networks lack the specificity to consistently generate useful trajectories in the immediate vicinity of the goal. Therefore, higher-dimensional (dorsal) representations may prove useful in generating trajectories close to the goal location, or alternative methods of navigation may become more important.
If an assembly of neurons projecting to the attractor network is active while the animal searches the environment, reward-modulated Hebbian plasticity provides a potential mechanism for reactivating a goal location. In particular, the presence of a reward-induced neuromodulator could allow for potentiation between the assembly and the attractor network neurons active when the animal receives a reward at a particular location. Activating the assembly would then provide stimulation to the goal location in the network; the same mechanism could allow an arbitrary number of assemblies to become selective for different goal locations in the same environment. Unlike traditional model-free methods of learning which generate a static value map, this would give a highly configurable means of navigating the environment (e.g. visiting different goal locations based on thirst vs. hunger needs), providing a link between spatial navigation and higher cognitive functioning.
Acknowledgements
This research was supported by the Swiss National Science Foundation (grant agreement no. 200020_147200).
We thank Laureline Logiaco and Johanni Brea for valuable discussions.
8
References
[1] William Beecher Scoville and Brenda Milner. Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, and Psychiatry, 20(1):11, 1957.
[2] Howard Eichenbaum. Memory, amnesia, and the hippocampal system. MIT Press, 1993.
[3] John O'Keefe and Jonathan Dostrovsky. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1):171–175, 1971.
[4] Kazu Nakazawa, Linus D Sun, Michael C Quirk, Laure Rondi-Reig, Matthew A Wilson, and Susumu Tonegawa. Hippocampal CA3 NMDA receptors are crucial for memory acquisition of one-time experience. Neuron, 38(2):305–315, 2003.
[5] Toshiaki Nakashiba, Jennie Z Young, Thomas J McHugh, Derek L Buhl, and Susumu Tonegawa. Transgenic inhibition of synaptic transmission reveals role of CA3 output in hippocampal learning. Science, 319(5867):1260–1264, 2008.
[6] Tobias Bast, Iain A Wilson, Menno P Witter, and Richard GM Morris. From rapid place learning to behavioral performance: a key role for the intermediate hippocampus. PLoS Biology, 7(4):e1000089, 2009.
[7] Alexei Samsonovich and Bruce L McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. The Journal of Neuroscience, 17(15):5900–5920, 1997.
[8] Kirsten Brun Kjelstrup, Trygve Solstad, Vegard Heimly Brun, Torkel Hafting, Stefan Leutgeb, Menno P Witter, Edvard I Moser, and May-Britt Moser. Finite scale of spatial representation in the hippocampus. Science, 321(5885):140–143, 2008.
[9] Brad E Pfeiffer and David J Foster. Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497(7447):74–79, 2013.
[10] Andrew M Wikenheiser and A David Redish. Hippocampal theta sequences reflect current goals. Nature Neuroscience, 2015.
[11] Louis-Emmanuel Martinet, Denis Sheynikhovich, Karim Benchenane, and Angelo Arleo. Spatial learning and action planning in a prefrontal cortical network model. PLoS Computational Biology, 7(5):e1002045, 2011.
[12] Filip Ponulak and John J Hopfield. Rapid, parallel path planning by propagating wavefronts of spiking neural activity. Frontiers in Computational Neuroscience, 7, 2013.
[13] Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
[14] Kimberly L Stachenfeld, Matthew Botvinick, and Samuel J Gershman. Design principles of the hippocampal cognitive map. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2528–2536. Curran Associates, Inc., 2014.
[15] Mathias Franzius, Henning Sprekeler, and Laurenz Wiskott. Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Computational Biology, 3(8):e166, 2007.
[16] Fabian Schoenfeld and Laurenz Wiskott. Modeling place field activity with hierarchical slow feature analysis. Frontiers in Computational Neuroscience, 9:51, 2015.
[17] Richard S Sutton and Andrew G Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
[18] Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[19] Sridhar Mahadevan. Learning Representation and Control in Markov Decision Processes, volume 3. Now Publishers Inc, 2009.
[20] Henning Sprekeler. On the relation of slow feature analysis and Laplacian eigenmaps. Neural Computation, 23(12):3287–3302, 2011.
[21] John Conklin and Chris Eliasmith. A controlled attractor network model of path integration in the rat. Journal of Computational Neuroscience, 18(2):183–203, 2005.
[22] Nicholas J Gustafson and Nathaniel D Daw. Grid cells, place cells, and geodesic generalization for spatial reinforcement learning. PLoS Computational Biology, 7(10):e1002235, 2011.
[23] Chris Eliasmith and Charles H Anderson. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. MIT Press, 2004.
[24] Henning Sprekeler, Christian Michaelis, and Laurenz Wiskott. Slowness: an objective for spike-timing-dependent plasticity. PLoS Computational Biology, 3(6):e112, 2007.
[25] Patrick J Drew and LF Abbott. Extending the effects of spike-timing-dependent plasticity to behavioral timescales. Proceedings of the National Academy of Sciences, 103(23):8876–8881, 2006.
[26] Phillip Larimer and Ben W Strowbridge. Representing information in cell assemblies: persistent activity mediated by semilunar granule cells. Nature Neuroscience, 13(2):213–222, 2010.
[27] Robert Urbanczik and Walter Senn. Learning by the dendritic prediction of somatic spiking. Neuron, 81(3):521–528, 2014.
Inferring Algorithmic Patterns with
Stack-Augmented Recurrent Nets
Tomas Mikolov
Facebook AI Research
770 Broadway, New York, USA.
[email protected]
Armand Joulin
Facebook AI Research
770 Broadway, New York, USA.
[email protected]
Abstract
Despite the recent achievements in machine learning, we are still very far from
achieving real artificial intelligence. In this paper, we discuss the limitations of
standard deep learning approaches and show that some of these limitations can be
overcome by learning how to grow the complexity of a model in a structured way.
Specifically, we study the simplest sequence prediction problems that are beyond
the scope of what is learnable with standard recurrent networks, algorithmically
generated sequences which can only be learned by models which have the capacity
to count and to memorize sequences. We show that some basic algorithms can be
learned from sequential data using a recurrent network associated with a trainable
memory.
1 Introduction
Machine learning aims to find regularities in data to perform various tasks. Historically there have
been two major sources of breakthroughs: scaling up the existing approaches to larger datasets, and
development of novel approaches [5, 14, 22, 30]. In the recent years, a lot of progress has been
made in scaling up learning algorithms, by either using alternative hardware such as GPUs [9] or by
taking advantage of large clusters [28]. While improving computational efficiency of the existing
methods is important to deploy the models in real world applications [4], it is crucial for the research
community to continue exploring novel approaches able to tackle new problems.
Recently, deep neural networks have become very successful at various tasks, leading to a shift in
the computer vision [21] and speech recognition communities [11]. This breakthrough is commonly
attributed to two aspects of deep networks: their similarity to the hierarchical, recurrent structure of
the neocortex and the theoretical justification that certain patterns are more efficiently represented
by functions employing multiple non-linearities instead of a single one [1, 25].
This paper investigates which patterns are difficult to represent and learn with the current state of the
art methods. This would hopefully give us hints about how to design new approaches which will advance machine learning research further. In the past, this approach has led to crucial breakthrough
results: the well-known XOR problem is an example of a trivial classification problem that cannot
be solved using linear classifiers, but can be solved with a non-linear one. This popularized the use
of non-linear hidden layers [30] and kernels methods [2]. Another well-known example is the parity
problem described by Papert and Minsky [25]: it demonstrates that while a single non-linear hidden
layer is sufficient to represent any function, it is not guaranteed to represent it efficiently, and in
some cases can even require exponentially many more parameters (and thus, also training data) than
what is sufficient for a deeper model. This lead to use of architectures that have several layers of
non-linearities, currently known as deep learning models.
Following this line of work, we study basic patterns which are difficult to represent and learn for
standard deep models. In particular, we study learning regularities in sequences of symbols gen1
Sequence generator                 Example
{a^n b^n | n > 0}                  aabbaaabbbabaaaaabbbbb
{a^n b^n c^n | n > 0}              aaabbbcccabcaaaaabbbbbccccc
{a^n b^n c^n d^n | n > 0}          aabbccddaaabbbcccdddabcd
{a^n b^2n | n > 0}                 aabbbbaaabbbbbbabb
{a^n b^m c^(n+m) | n, m > 0}       aabcccaaabbcccccabcc
n ∈ [1, k], X → nXn, X → =         (k = 2) 12=212122=221211121=12111
Table 1: Examples generated from the algorithms studied in this paper. In bold, the characters which
can be predicted deterministically. During training, we do not have access to this information and at
test time, we evaluate only on deterministically predictable characters.
erated by simple algorithms. Interestingly, we find that these regularities are difficult to learn even
for some advanced deep learning methods, such as recurrent networks. We attempt to increase the
learning capabilities of recurrent nets by allowing them to learn how to control an infinite structured
memory. We explore two basic topologies of the structured memory: pushdown stack, and a list.
Our structured memory is defined by constraining part of the recurrent matrix in a recurrent net [24].
We use multiplicative gating mechanisms as learnable controllers over the memory [8, 19] and show
that this allows our network to operate as if it was performing simple read and write operations, such
as PUSH or POP for a stack.
Among recent work with similar motivation, we are aware of the Neural Turing Machine [17] and
Memory Networks [33]. However, our work can be considered more as a follow up of the research
done in the early nineties, when similar types of memory augmented neural networks were studied [12, 26, 27, 37].
2 Algorithmic Patterns
We focus on sequences generated by simple, short algorithms. The goal is to learn regularities in
these sequences by building predictive models. We are mostly interested in discrete patterns related
to those that occur in the real world, such as various forms of long-term memory.
More precisely, we suppose that during training we have only access to a stream of data which is
obtained by concatenating sequences generated by a given algorithm. We do not have access to the
boundary of any sequence nor to sequences which are not generated by the algorithm. We denote
the regularities in these sequences of symbols as Algorithmic patterns. In this paper, we focus on
algorithmic patterns which involve some form of counting and memorization. Examples of these
patterns are presented in Table 1. For simplicity, we mostly focus on the unary and binary numeral
systems to represent patterns. This allows us to focus on designing a model which can learn these
algorithms when the input is given in its simplest form.
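As a concrete illustration of this training data, the sketch below samples sequences from the first five generators of Table 1 and concatenates them into a single stream with no boundary markers, matching the unsupervised setting just described. The function names and the uniform sampling of n and m are our own choices.

```python
import random

def sample_pattern(name, n_max=19):
    # One sequence from the generators of Table 1 (n, m < 20 during training).
    n, m = random.randint(1, n_max), random.randint(1, n_max)
    if name == "anbn":     return "a" * n + "b" * n
    if name == "anbncn":   return "a" * n + "b" * n + "c" * n
    if name == "anbncndn": return "a" * n + "b" * n + "c" * n + "d" * n
    if name == "anb2n":    return "a" * n + "b" * (2 * n)
    if name == "anbmcnm":  return "a" * n + "b" * m + "c" * (n + m)
    raise ValueError(name)

def training_stream(name, n_sequences=1000):
    # Concatenation without boundaries: the model never sees where one
    # sequence ends and the next begins.
    return "".join(sample_pattern(name) for _ in range(n_sequences))

print(training_stream("anb2n", 3))  # e.g. an 'aabbbbaaabbbbbbabb'-style stream
```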
Some algorithms can be given as context-free grammars; however, we are interested in the more general case of sequential patterns that have a short description length in some general Turing-complete
computational system. Of particular interest are patterns relevant to develop a better language understanding. Finally, this study is limited to patterns whose symbols can be predicted in a single
computational step, leaving out algorithms such as sorting or dynamic programming.
3 Related work
Some of the algorithmic patterns we study in this paper are closely related to context-free and context-sensitive grammars which were widely studied in the past. Some works used recurrent networks
with hardwired symbolic structures [10, 15, 18]. These networks are continuous implementations of symbolic systems, and can deal with recursive patterns in computational linguistics. While these
approaches are interesting to understand the link between symbolic and sub-symbolic systems such
as neural networks, they are often hand designed for each specific grammar.
Wiles and Elman [34] show that simple recurrent networks are able to learn sequences of the form a^n b^n and generalize on a limited range of n. While this is a promising result, their model does not
2
truly learn how to count but instead relies mostly on memorization of the patterns seen in the training
data. Rodriguez et al. [29] further studied the behavior of this network. Grünwald [18] designs a
hardwired second order recurrent network to tackle similar sequences. Christiansen and Chater [7]
extended these results to grammars with larger vocabularies. This work shows that this type of
architectures can learn complex internal representation of the symbols but it cannot generalize to
longer sequences generated by the same algorithm. Beside using simple recurrent networks, other
structures have been used to deal with recursive patterns, such as pushdown dynamical automata [31]
or sequential cascaded networks [3, 27].
Hochreiter and Schmidhuber [19] introduced the Long Short-Term Memory network (LSTM) architecture. While this model was originally developed to address the vanishing and exploding gradient
problems, LSTM is also able to learn simple context-free and context-sensitive grammars [16, 36].
This is possible because its hidden units can choose through a multiplicative gating mechanism to
be either linear or non-linear. The linear units allow the network to potentially count (one can easily
add and subtract constants) and store a finite amount of information for a long period of time. These
mechanisms are also used in the Gated Recurrent Unit network [8]. In our work we investigate the
use of a similar mechanism in a context where the memory is unbounded and structured. As opposed
to previous work, we do not need to "erase" our memory to store a new unit. More recently, Graves et al. [17] have extended LSTM with an attention mechanism to build a model which roughly resembles a Turing machine with limited tape. Their memory controller works with a fixed-size memory
and it is not clear if its complexity is necessary for the simple problems they study.
Finally, many works have also used external memory modules with a recurrent network, such as
stacks [12, 13, 20, 26, 37]. Zheng et al. [37] use a discrete external stack which may be hard
to learn on long sequences. Das et al. [12] learn a continuous stack which has some similarities with ours. The mechanisms used in their work are quite different from ours. Their memory cells are associated with weights to allow a continuous representation of the stack, in order to train it with
continuous optimization scheme. On the other hand, our solution is closer to a standard RNN with
special connectivities which simulate a stack with unbounded capacity. We tackle problems which
are closely related to the ones addressed in these works and try to go further by exploring more
challenging problems such as binary addition.
4 Model
4.1 Simple recurrent network
We consider sequential data that comes in the form of discrete tokens, such as characters or words.
The goal is to design a model able to predict the next symbol in a stream of data. Our approach is
based on a standard model called recurrent neural network (RNN) and popularized by Elman [14].
RNN consists of an input layer, a hidden layer with a recurrent time-delayed connection and an
output layer. The recurrent connection allows the propagation of information through time. Given a
sequence of tokens, RNN takes as input the one-hot encoding x_t of the current token and predicts the probability y_t of the next symbol. There is a hidden layer with m units which stores additional information about the previous tokens seen in the sequence. More precisely, at each time t, the state of the hidden layer h_t is updated based on its previous state h_{t−1} and the encoding x_t of the current
token, according to the following equation:
h_t = σ(U x_t + R h_{t−1}),    (1)
where σ(x) = 1/(1 + exp(−x)) is the sigmoid activation function applied coordinate-wise, U is the d × m token embedding matrix and R is the m × m matrix of recurrent weights. Given the state of
these hidden units, the network then outputs the probability vector y_t of the next token, according to
the following equation:
y_t = f(V h_t),    (2)
where f is the softmax function [6] and V is the m × d output matrix, where d is the number of
different tokens. This architecture is able to learn relatively complex patterns similar in nature to
the ones captured by N-grams. While this has made the RNNs interesting for language modeling
[23], they may not have the capacity to learn how algorithmic patterns are generated. In the next
section, we show how to add an external memory to RNNs which has the theoretical capability to
learn simple algorithmic patterns.
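A single step of this network can be written in a few lines; the sketch below follows Eqs. (1)-(2), with matrix shapes chosen so that the products are well defined (this orientation of U and V is our own choice).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_step(x_t, h_prev, U, R, V):
    # x_t: (d,) one-hot token; h_prev: (m,) hidden state.
    # U: (m, d) embedding, R: (m, m) recurrent, V: (d, m) output weights.
    h_t = sigmoid(U @ x_t + R @ h_prev)   # Eq. (1)
    y_t = softmax(V @ h_t)                # Eq. (2): distribution over next token
    return h_t, y_t
```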
(a)
(b)
Figure 1: (a) Neural network extended with push-down stack and a controlling mechanism that
learns what action (among PUSH, POP and NO-OP) to perform. (b) The same model extended with
a doubly-linked list with actions INSERT, LEFT, RIGHT and NO-OP.
4.2
Pushdown network
In this section, we describe a simple structured memory inspired by a pushdown automaton, i.e., an
automaton which employs a stack. We train our network to learn how to operate this memory with
standard optimization tools.
A stack is a type of persistent memory which can be only accessed through its topmost element.
Three basic operations can be performed with a stack: POP removes the top element, PUSH adds
a new element on top of the stack and NO-OP does nothing. For simplicity, we first consider a
simplified version where the model can only choose between a PUSH or a POP at each time step.
We suppose that this decision is made by a 2-dimensional variable a_t which depends on the state of the hidden variable h_t:
a_t = f(A h_t),    (3)
where A is a 2 × m matrix (m is the size of the hidden layer) and f is a softmax function. We denote
by a_t[PUSH] the probability of the PUSH action, and by a_t[POP] the probability of the POP action. We suppose that the stack is stored at time t in a vector s_t of size p. Note that p could be increased on demand and does not have to be fixed, which allows the capacity of the model to grow. The top element is stored at position 0, with value s_t[0]:
s_t[0] = a_t[PUSH] σ(D h_t) + a_t[POP] s_{t−1}[1],    (4)
where D is a 1 × m matrix. If a_t[POP] is equal to 1, the top element is replaced by the value below (all values are moved by one position up in the stack structure). If a_t[PUSH] is equal to 1, we move all values down in the stack and add a value on top of the stack. Similarly, for an element stored at a depth i > 0 in the stack, we have the following update rule:
s_t[i] = a_t[PUSH] s_{t−1}[i − 1] + a_t[POP] s_{t−1}[i + 1].    (5)
We use the stack to carry information to the hidden layer at the next time step. When the stack is empty, s_t is set to −1. The hidden layer h_t is now updated as:
h_t = σ(U x_t + R h_{t−1} + P s^k_{t−1}),    (6)
where P is an m × k recurrent matrix and s^k_{t−1} denotes the k top-most elements of the stack at time t − 1.
In our experiments, we set k to 2. We call this model Stack RNN, and show it in Figure 1-a without
the recurrent matrix R for clarity.
Stack with a no-operation. Adding the NO-OP action allows the stack to keep the same value on
top by a minor change of the stack update rule. Eq. (4) is replaced by:
s_t[0] = a_t[PUSH] σ(D h_t) + a_t[POP] s_{t−1}[1] + a_t[NO-OP] s_{t−1}[0].
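Putting Eqs. (3)-(6) together, a single Stack RNN step with one stack and the PUSH/POP/NO-OP controller can be sketched as below. The boundary handling at the bottom of the stack and the use of a weight vector d in place of the 1 × m matrix D are our own simplifications.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stack_rnn_step(x_t, h_prev, s_prev, U, R, P, A, d, k=2):
    # s_prev: (p,) continuous stack, top at index 0; empty cells hold -1.
    h_t = sigmoid(U @ x_t + R @ h_prev + P @ s_prev[:k])      # Eq. (6)
    a = softmax(A @ h_t)                                      # Eq. (3), A is (3, m)
    push, pop, noop = a
    s_t = np.empty_like(s_prev)
    s_t[0] = push * sigmoid(d @ h_t) + pop * s_prev[1] + noop * s_prev[0]    # Eq. (4)
    s_t[1:-1] = push * s_prev[:-2] + pop * s_prev[2:] + noop * s_prev[1:-1]  # Eq. (5)
    s_t[-1] = push * s_prev[-2] + noop * s_prev[-1]           # bottom boundary cell
    return h_t, s_t
```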
Extension to multiple stacks. Using a single stack has serious limitations, especially considering
that at each time step, only one action can be performed. We increase capacity of the model by
using multiple stacks in parallel. The stacks can interact through the hidden layer allowing them to
process more challenging patterns.
method                        a^n b^n   a^n b^n c^n   a^n b^n c^n d^n   a^n b^2n   a^n b^m c^(n+m)
RNN                           25%       23.3%         13.3%             23.3%      33.3%
LSTM                          100%      100%          68.3%             75%        100%
List RNN 40+5                 100%      33.3%         100%              100%       100%
Stack RNN 40+10               100%      100%          100%              100%       43.3%
Stack RNN 40+10 + rounding    100%      100%          100%              100%       100%
Table 2: Comparison with RNN and LSTM on sequences generated by counting algorithms. The
sequences seen during training are such that n < 20 (and n + m < 20), and we test on sequences
up to n = 60. We report the percent of n for which the model was able to correctly predict the
sequences. Performance above 33.3% means it is able to generalize to never seen sequence lengths.
Doubly-linked lists. While in this paper we mostly focus on an infinite memory based on stacks, it
is straightforward to extend the model to other forms of infinite memory, for example the doubly-linked list. A list is a one-dimensional memory where each node is connected to its left and right
neighbors. There is a read/write head associated with the list. The head can move between nearby
nodes and insert a new node at its current position. More precisely, we consider three different
actions: INSERT, which inserts an element at the current position of the head, LEFT, which moves
the head to the left, and RIGHT which moves it to the right. Given a list L and a fixed head position
HEAD, the updates are:
L_t[i] = a_t[RIGHT] L_{t−1}[i+1] + a_t[LEFT] L_{t−1}[i−1] + a_t[INSERT] σ(D h_t)       if i = HEAD,
L_t[i] = a_t[RIGHT] L_{t−1}[i+1] + a_t[LEFT] L_{t−1}[i−1] + a_t[INSERT] L_{t−1}[i+1]   if i < HEAD,
L_t[i] = a_t[RIGHT] L_{t−1}[i+1] + a_t[LEFT] L_{t−1}[i−1] + a_t[INSERT] L_{t−1}[i]     if i > HEAD.
Note that we can add a NO-OP operation as well. We call this model List RNN, and show it in
Figure 1-b without the recurrent matrix R for clarity.
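The list memory admits a similarly compact sketch. The version below implements the piecewise rule above for a fixed head position, with simple edge padding at the two ends of the list (the padding is our own choice).

```python
import numpy as np

def list_step(L_prev, head, a, v):
    # a = (insert, left, right) action probabilities; v = sigma(D h_t).
    left = np.concatenate(([L_prev[0]], L_prev[:-1]))    # L_{t-1}[i-1], edge-padded
    right = np.concatenate((L_prev[1:], [L_prev[-1]]))   # L_{t-1}[i+1], edge-padded
    ins = np.where(np.arange(len(L_prev)) < head, right, L_prev)
    ins[head] = v                                        # write the new value at the head
    return a[0] * ins + a[1] * left + a[2] * right
```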
Optimization. The models presented above are continuous and can thus be trained with the stochastic gradient descent (SGD) method and back-propagation through time [30, 32, 35]. As patterns become more complex, a more complex memory controller must be learned. In practice, we observe that these more complex controllers are harder to learn with SGD. Using several random restarts seems to solve the problem in our case. We have also explored other types of search-based procedures, as discussed in the supplementary material.
Rounding. Continuous operators on stacks introduce small imprecisions leading to numerical issues on very long sequences. While simply discretizing the controllers partially solves this problem, we design a more robust rounding procedure tailored to our model. We slowly make the controllers converge to discrete values by multiplying their weights by a constant which slowly goes to infinity. We fine-tune the weights of our network as this multiplicative variable increases, leading to a smoother rounding of our network. Finally, we remove unused stacks by exploring models which use only a subset of the stacks. While brute force would be exponential in the number of stacks, we can do it efficiently by building a tree of removable stacks and exploring it with depth-first search.
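The effect of this rounding can be seen directly: scaling the controller pre-activations by a growing constant drives the softmax towards a one-hot, hence discrete, action. A toy illustration (the numbers are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.2, 0.7, 0.1])        # controller pre-activations A h_t
for beta in (1, 2, 5, 20, 100):      # the slowly growing multiplier
    print(beta, np.round(softmax(beta * z), 3))
# As beta grows, the distribution converges to the one-hot vector [1, 0, 0].
```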
5 Experiments and results
First, we consider various sequences generated by simple algorithms, where the goal is to learn their
generation rule [3, 12, 29]. We hope to understand the scope of algorithmic patterns each model can
capture. We also evaluate the models on a standard language modeling dataset, Penn Treebank.
Implementation details. Stack and List RNNs are trained with SGD and backpropagation through
time with 50 steps [32], a hard clipping of 15 to prevent gradient explosions [23], and an initial
learning rate of 0.1. The learning rate is divided by 2 each time the entropy on the validation set is
not decreasing. The depth k defined in Eq. (6) is set to 2. The free parameters are the number of
hidden units, stacks and the use of NO-OP. The baselines are RNNs with 40, 100 and 500 units, and
LSTMs with 1 and 2 layers with 50, 100 and 200 units. The hyper-parameters of the baselines are
selected on the validation sets.
5.1 Learning simple algorithmic patterns
Given an algorithm with short description length, we generate sequences and concatenate them into
longer sequences. This is an unsupervised task, since the boundaries of each generated sequence
current   next   prediction   proba(next)   action       stack1[top]   stack2[top]
b         a      a            0.99          POP  POP     -1            0.53
a         a      a            0.99          PUSH POP     0.01          0.97
a         a      a            0.95          PUSH PUSH    0.18          0.99
a         a      a            0.93          PUSH PUSH    0.32          0.98
a         a      a            0.91          PUSH PUSH    0.40          0.97
a         a      a            0.90          PUSH PUSH    0.46          0.97
a         b      a            0.10          PUSH PUSH    0.52          0.97
b         b      b            0.99          PUSH PUSH    0.57          0.97
b         b      b            1.00          POP  PUSH    0.52          0.56
b         b      b            1.00          POP  PUSH    0.46          0.01
b         b      b            1.00          POP  PUSH    0.40          0.00
b         b      b            1.00          POP  PUSH    0.32          0.00
b         b      b            1.00          POP  PUSH    0.18          0.00
b         b      b            0.99          POP  PUSH    0.01          0.00
b         b      b            0.99          POP  POP     -1            0.00
b         b      b            0.99          POP  POP     -1            0.00
b         b      b            0.99          POP  POP     -1            0.00
b         b      b            0.99          POP  POP     -1            0.01
b         a      a            0.99          POP  POP     -1            0.56
Table 3: Example of the Stack RNN with 20 hidden units and 2 stacks on a sequence a^n b^2n with n = 6. −1 means that the stack is empty. The depth k is set to 1 for clarity. We see that the first stack pushes an element every time it sees a and pops when it sees b. The second stack pushes when it sees a. When it sees b, it pushes if the first stack is not empty and pops otherwise. This shows how the two stacks interact to correctly predict the deterministic part of the sequence (shown in bold).
Figure 2 (left: memorization; right: binary addition): Comparison of RNN, LSTM, List RNN and Stack RNN on memorization, and the performance of Stack RNN on binary addition. The accuracy is the proportion of correctly predicted sequences generated with a given n. We use 100 hidden units and 10 stacks.
are not known. We study patterns related to counting and memorization as shown in Table 1. To
evaluate if a model has the capacity to understand the generation rule used to produce the sequences,
it is tested on sequences it has not seen during training. Our experimental setting is the following:
the training and validation sets are composed of sequences generated with n up to N < 20 while
the test set is composed of sequences generated with n up to 60. During training, we incrementally
increase the parameter n every few epochs until it reaches some N . At test time, we measure the
performance by counting the number of correctly predicted sequences. A sequence is considered correctly predicted if we correctly predict its deterministic part, shown in bold in Table 1. On these
toy examples, the recurrent matrix R defined in Eq. (1) is set to 0 to isolate the mechanisms that
Stack and list can capture.
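The evaluation criterion can be phrased as a small helper: a sequence counts as correct only if every deterministically predictable character is predicted exactly. Constructing the mask of predictable positions is pattern-specific, so here it is passed in explicitly (a simplification of ours).

```python
def sequence_correct(predictions, sequence, deterministic):
    # predictions: the model's next-character guesses, aligned with `sequence`;
    # deterministic: True where the character is predictable (bold in Table 1).
    return all(p == c for p, c, d in zip(predictions, sequence, deterministic) if d)

def percent_correct(results):
    # results: list of booleans, one per tested value of n (up to n = 60).
    return 100.0 * sum(results) / len(results)
```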
Counting. Results on patterns generated by "counting" algorithms are shown in Table 2. We report
the percentage of sequence lengths for which a method is able to correctly predict sequences of
that length. List RNN and Stack RNN have 40 hidden units and either 5 lists or 10 stacks. For
these tasks, the NO-OP operation is not used. Table 2 shows that RNNs are unable to generalize to
longer sequences, and they only correctly predict sequences seen during training. LSTM is able to
generalize to longer sequences, which shows that it is able to count, since the hidden units in an LSTM
can be linear [16]. With a finer hyper-parameter search, the LSTM should be able to achieve 100%
6
on all of these tasks. Despite the absence of linear units, these models are also able to generalize.
For a^n b^m c^(n+m), rounding is required to obtain the best performance.
Table 3 shows an example of the actions performed by a Stack RNN with two stacks on a sequence of the form a^n b^2n. For clarity, we show a sequence generated with n equal to 6, and we use discretization.
Stack RNN pushes an element on both stacks when it sees a. The first stack pops elements when the
input is b and the second stack starts popping only when the first one is empty. Note that the second
stack pushes a special value to keep track of the sequence length, i.e. 0.56.
Memorization. Figure 2 shows results on memorization for a dictionary with two elements. Stack
RNN has 100 units and 10 stacks, and List RNN has 10 lists. We use random restarts and we repeat
this process multiple times. Stack RNN and List RNN are able to learn memorization, while RNN
and LSTM do not seem to generalize. In practice, List RNN is more unstable than Stack RNN and
overfits on the training set more frequently. This unstability may be explained by the higher number
of actions the controler can choose from (4 versus 3). For this reason, we focus on Stack RNN in
the rest of the experiments.
Figure 3: An example of a learned Stack RNN that performs binary addition. The last column
is our interpretation of the functionality learned by the different stacks. The color code is: green
means PUSH, red means POP and grey means actions equivalent to NO-OP. We show the current
(discretized) value on the top of the each stack at each given time. The sequence is read from left
to right, one character at a time. In bold is the part of the sequence which has to be predicted. Note
that the result is written in reverse.
Binary addition. Given a sequence representing a binary addition, e.g., "101+1=", the goal is to predict the result, e.g., "110.", where "." represents the end of the sequence. As opposed to the previous tasks, this task is supervised, i.e., the location of the deterministic tokens is provided. The result of the addition is asked in reverse order, e.g., "011." in the previous example. As
previously, we train on short sequences and test on longer ones. The length of the two input numbers
is chosen such that the sum of their lengths is equal to n (less than 20 during training and up to 60
at test time). Their most significant digit is always set to 1. Stack RNN has 100 hidden units with
10 stacks. The right panel of Figure 2 shows the results averaged over multiple runs (with random
restarts). While Stack RNNs are generalizing to longer numbers, it overfits for some runs on the
validation set, leading to a larger error bar than in the previous experiments.
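For completeness, here is a sketch of how such supervised examples can be generated (leading digits forced to 1 and the result reversed, as in the text; the way the total length is split between the two operands is our own choice).

```python
import random

def addition_example(total_len):
    # total_len = combined length of the two operands (n in the text).
    la = random.randint(1, total_len - 1)
    lb = total_len - la
    a = "1" + "".join(random.choice("01") for _ in range(la - 1))
    b = "1" + "".join(random.choice("01") for _ in range(lb - 1))
    result = bin(int(a, 2) + int(b, 2))[2:]
    return a + "+" + b + "=" + result[::-1] + "."   # result written in reverse

print(addition_example(8))   # e.g. '1001+110=1111.' (9 + 6 = 15)
```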
Figure 3 shows an example of a model which generalizes to long sequences of binary addition. This
example illustrates the moderately complex behavior that the Stack RNN learns to solve this task: the
first stack keeps track of where we are in the sequence, i.e., either reading the first number, reading
the second number or writing the result. Stack 6 keeps in memory the first number. Interestingly, the
first number is first captured by the stacks 3 and 5 and then copied to stack 6. The second number is
stored on stack 3, while its length is captured on stack 4 (by pushing a one and then a set of zeros).
When producing the result, the values stored on these three stacks are popped. Finally stack 5 takes
care of the carry: it switches between two states (0 or 1) which explicitly say if there is a carry over
or not. While this use of stacks is not optimal in the sense of minimal description length, it is able
to generalize to sequences never seen before.
5.2 Language modeling
Model           Validation perplexity   Test perplexity
Ngram           -                       141
Ngram + Cache   -                       125
RNN             137                     129
LSTM            120                     115
SRCN [24]       120                     115
Stack RNN       124                     118
Table 4: Comparison of RNN, LSTM, SRCN [24] and Stack RNN on Penn Treebank Corpus. We
use the recurrent matrix R in Stack RNN as well as 100 hidden units and 60 stacks.
We compare Stack RNN with RNN, LSTM and SRCN [24] on the standard language modeling
dataset Penn Treebank Corpus. SRCN is a standard RNN with additional self-connected linear
units which capture long-term dependencies, similar to a bag of words. The models have only one
hidden layer with 100 hidden units. Table 4 shows that Stack RNN performs better than RNN with
a comparable number of parameters, but not as well as LSTM and SRCN. Empirically, we observe
that Stack RNN learns to store exponentially decaying bag of words similar in nature to the memory
of SRCN.
6 Discussion and future work
Continuous versus discrete model and search. Certain simple algorithmic patterns can be efficiently learned using a continuous optimization approach (stochastic gradient descent) applied to a
continuous model representation (in our case RNN). Note that Stack RNN works better than prior
work based on RNNs from the nineties [12, 34, 37]. It also seems simpler than many other approaches designed for these tasks [3, 17, 31]. However, it is not clear if a continuous representation
is completely appropriate for learning algorithmic patterns. It may be more natural to attempt to
solve these problems with a discrete model. This motivates us to try to combine continuous and
discrete optimization. It is possible that the future of learning of algorithmic patterns will involve
such combination of discrete and continuous optimization.
Long-term memory. While in theory using multiple stacks for representing memory is as powerful
as a Turing complete computational system, intricate interactions between stacks need to be learned
to capture more complex algorithmic patterns. Stack RNN also requires the input and output sequences to be in the right format (e.g., memorization is in reversed order). It would be interesting
to consider in the future other forms of memory which may be more flexible, as well as additional
mechanisms which allow to perform multiple steps with the memory, such as loop or random access.
Finally, complex algorithmic patterns can be more easily learned by composing simpler algorithms.
Designing a model which possesses a mechanism to compose algorithms automatically and training
it on incrementally harder tasks is a very important research direction.
7 Conclusion
We have shown that certain difficult pattern recognition problems can be solved by augmenting a
recurrent network with structured, growing (potentially unlimited) memory. We studied very simple
memory structures such as a stack and a list, but the same approach can be used to learn how to operate more complex ones (for example a multi-dimensional tape). While currently the topology of the long-term memory is fixed, we think that it should be learned from the data as well.
Acknowledgment. We would like to thank Arthur Szlam, Keith Adams, Jason Weston, Yann LeCun
and the rest of the Facebook AI Research team for their useful comments.
References
[1] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 2007.
[2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer New York, 2006.
[3] M. Bodén and J. Wiles. Context-free and context-sensitive dynamics in recurrent neural networks. Connection Science, 2000.
[4] L. Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT. Springer, 2010.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[6] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer, 1990.
[7] M. H. Christiansen and N. Chater. Toward a connectionist model of recursion in human linguistic performance. Cognitive Science, 23(2):157–205, 1999.
[8] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. arXiv, 2015.
[9] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint, 2011.
[10] M. W. Crocker. Mechanisms for Sentence Processing. University of Edinburgh, 1996.
[11] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, 20(1):30–42, 2012.
[12] S. Das, C. Giles, and G. Sun. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In ACCSS, 1992.
[13] S. Das, C. Giles, and G. Sun. Using prior knowledge in a NNPDA to learn context-free languages. NIPS, 1993.
[14] J. L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.
[15] M. Fanty. Context-free parsing in connectionist networks. Parallel Natural Language Processing, 1994.
[16] F. A. Gers and J. Schmidhuber. LSTM recurrent networks learn simple context-free and context-sensitive languages. Transactions on Neural Networks, 12(6):1333–1340, 2001.
[17] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv preprint, 2014.
[18] P. Grünwald. A recurrent network that performs a context-sensitive prediction task. In ACCSS, 1996.
[19] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[20] S. Holldobler, Y. Kalinke, and H. Lehmann. Designing a counter: Another case study of dynamics and activation landscapes in recurrent networks. In Advances in Artificial Intelligence, 1997.
[21] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. 1998.
[23] T. Mikolov. Statistical language models based on neural networks. PhD thesis, Brno University of Technology, 2012.
[24] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. A. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint, 2014.
[25] M. Minsky and S. Papert. Perceptrons. MIT Press, 1969.
[26] M. C. Mozer and S. Das. A connectionist symbol manipulator that discovers the structure of context-free languages. NIPS, 1993.
[27] J. B. Pollack. The induction of dynamical recognizers. Machine Learning, 7(2-3):227–252, 1991.
[28] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[29] P. Rodriguez, J. Wiles, and J. L. Elman. A recurrent neural network that learns to count. Connection Science, 1999.
[30] D. E. Rumelhart, G. Hinton, and R. J. Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
[31] W. Tabor. Fractal encoding of context-free grammars in connectionist networks. Expert Systems, 2000.
[32] P. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1(4):339–356, 1988.
[33] J. Weston, S. Chopra, and A. Bordes. Memory networks. In ICLR, 2015.
[34] J. Wiles and J. Elman. Learning to count without a counter: A case study of dynamics and activation landscapes in recurrent networks. In ACCSS, 1995.
[35] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. Back-propagation: Theory, Architectures and Applications, pages 433–486, 1995.
[36] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint, 2014.
[37] Z. Zeng, R. M. Goodman, and P. Smyth. Discrete recurrent neural networks for grammatical inference. Transactions on Neural Networks, 5(2):320–330, 1994.
The code is available at https://github.com/facebook/Stack-RNN
5,366 | 5,858 | Decoupled Deep Neural Network for
Semi-supervised Semantic Segmentation
Seunghoon Hong*, Hyeonwoo Noh*, Bohyung Han
Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea
{maga33,hyeonwoonoh,bhhan}@postech.ac.kr
Abstract
We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image
are identified by classification network, and binary segmentation is subsequently
performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately
based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm
shows outstanding performance compared to other semi-supervised approaches
with much less training images with strong annotations in PASCAL VOC dataset.
1 Introduction
Semantic segmentation is a technique to assign structured semantic labels, typically object class labels, to individual pixels in images. This problem has been studied extensively over decades,
yet remains challenging since object appearances involve significant variations that are potentially
originated from pose variations, scale changes, occlusion, background clutter, etc. However, in spite
of such challenges, the techniques based on Deep Neural Network (DNN) demonstrate impressive
performance in the standard benchmark datasets such as PASCAL VOC [1].
Most DNN-based approaches pose semantic segmentation as pixel-wise classification problem [2,
3, 4, 5, 6]. Although these approaches have achieved good performance compared to previous
methods, training DNN requires a large number of segmentation ground-truths, which result from
tremendous annotation efforts and costs. For this reason, reliable pixel-wise segmentation annotations are typically available only for a small number of classes and images, which makes supervised DNNs difficult to apply to semantic segmentation tasks involving various kinds of objects.
Semi- or weakly-supervised learning approaches [7, 8, 9, 10] alleviate the problem in lack of training
data by exploiting weak label annotations per bounding box [10, 8] or image [7, 8, 9]. They often
assume that a large set of weak annotations is available during training while training examples with
strong annotations are missing or limited. This is a reasonable assumption because weak annotations
such as class labels for bounding boxes and images require only a fraction of efforts compared to
strong annotations, i.e., pixel-wise segmentations. The standard approach in this setting is to update
the model of a supervised DNN by iteratively inferring and refining hypothetical segmentation labels using weakly annotated images. Such iterative techniques often work well in practice [8, 10],
but training methods rely on ad-hoc procedures and there is no guarantee of convergence; implementation may be tricky and the algorithm may not be straightforward to reproduce.
* Both authors contributed equally to this paper.
We propose a novel decoupled architecture of DNN appropriate for semi-supervised semantic segmentation, which exploits heterogeneous annotations with a small number of strong annotations (full segmentation masks) as well as a large number of weak annotations (object class labels per image). Our algorithm stands out from the traditional DNN-based techniques because the architecture is composed of two separate networks; one is for classification and the other is for segmentation.
In the proposed network, object labels associated with an input image are identified by classification
network while figure-ground segmentation of each identified label is subsequently obtained by segmentation network. Additionally, there are bridging layers, which deliver class-specific information
from classification to segmentation network and enable segmentation network to focus on the single
label identified by classification network at a time.
Training is performed on each network separately, where networks for classification and segmentation are trained with image-level and pixel-wise annotations, respectively; training does not require
an iterative procedure, and the algorithm is easy to reproduce. More importantly, decoupling classification
and segmentation reduces search space for segmentation significantly, which makes it feasible to
train the segmentation network with a handful number of segmentation annotations. Inference in
our network is also simple and does not involve any post-processing. Extensive experiments show
that our network substantially outperforms existing semi-supervised techniques based on DNNs
even with much smaller segmentation annotations, e.g., 5 or 10 strong annotations per class.
The rest of the paper is organized as follows. We briefly review related work and introduce overall
algorithm in Section 2 and 3, respectively. The detailed configuration of the proposed network is
described in Section 4, and training algorithm is presented in Section 5. Section 6 presents experimental results on a challenging benchmark dataset.
2 Related Work
Recent breakthroughs in semantic segmentation are mainly driven by supervised approaches relying on Convolutional Neural Network (CNN) [2, 3, 4, 5, 6]. Based on CNNs developed for image
classification, they train networks to assign semantic labels to local regions within images such
as pixels [2, 3, 4] or superpixels [5, 6]. Notably, Long et al. [2] propose an end-to-end system
for semantic segmentation by transforming a standard CNN for classification into a fully convolutional network. Later approaches improve segmentation accuracy through post-processing based
on fully-connected CRF [3, 11]. Another branch of semantic segmentation is to learn a multi-layer
deconvolution network, which also provides a complete end-to-end pipeline [12]. However, training
these networks requires a large number of segmentation ground-truths, but the collection of such
dataset is a difficult task due to excessive annotation efforts.
To mitigate heavy requirement of training data, weakly-supervised learning approaches start to draw
attention recently. In weakly-supervised setting, the models for semantic segmentation have been
trained with only image-level labels [7, 8, 9] or bounding box class labels [10]. Given weakly
annotated training images, they infer latent segmentation masks based on Multiple Instance Learning
(MIL) [7, 9] or Expectation-Maximization (EM) [8] framework based on the CNNs for supervised
semantic segmentation. However, performance of weakly supervised learning approaches except
[10] is substantially lower than supervised methods, mainly because there is no direct supervision for
segmentation during training. Note that [10] requires bounding box annotations as weak supervision,
which are already pretty strong and significantly more expensive to acquire than image-level labels.
Semi-supervised learning is an alternative to bridge the gap between fully- and weakly-supervised
learning approaches. In the standard semi-supervised learning framework, given only a small number of training images with strong annotations, one needs to infer the full segmentation labels for the
rest of the data. However, it is not plausible to learn a huge number of parameters in deep networks
reliably in this scenario. Instead, [8, 10] train the models based on heterogeneous annotations: a large number of weak annotations as well as a small number of strong annotations. This approach is
motivated from the facts that the weak annotations, i.e., object labels per bounding box or image, is
much more easily accessible than the strong ones and that the availability of the weak annotations
is useful to learn a deep network by mining additional training examples with full segmentation
masks. Based on supervised CNN architectures, they iteratively infer and refine pixel-wise segmentation labels of weakly annotated images with guidance of strongly annotated images, where
image-level labels [8] and bounding box annotations [10] are employed as weak annotations. They
Figure 1: The architecture of the proposed network. While classification and segmentation networks
are decoupled, bridging layers deliver critical information from classification network to segmentation network.
claim that exploiting few strong annotations substantially improves the accuracy of semantic segmentation while it reduces annotations efforts for supervision significantly. However, they rely on
iterative training procedures, which are often ad-hoc and heuristic and increase complexity to reproduce results in general. Also, these approaches still need a fairly large number of strong annotations
to achieve reliable performance.
3 Algorithm Overview
Figure 1 presents the overall architecture of the proposed network. Our network is composed of
three parts: classification network, segmentation network and bridging layers connecting the two
networks. In this model, semantic segmentation is performed by separate but successive operations
of classification and segmentation. Given an input image, classification network identifies labels
associated with the image, and segmentation network produces pixel-wise figure-ground segmentation corresponding to each identified label. This formulation may suffer from loose connection
between classification and segmentation, but we address this challenge by adding bridging layers
between the two networks and delivering class-specific information from classification network to
segmentation network. Then, it is possible to optimize the two networks using separate objective
functions while the two decoupled tasks collaborate effectively to accomplish the final goal.
Training our network is very straightforward. We assume that a large number of image-level annotations are available while there are only a few training images with segmentation annotations. Given
these heterogeneous and unbalanced training data, we first learn the classification network using
rich image-level annotations. Then, with the classification network fixed, we jointly optimize the
bridging layers and the segmentation network using a small number of training examples with strong
annotations. There are only a small number of strongly annotated training data, but we alleviate this
challenging situation by generating many artificial training examples through data augmentation.
The contributions and characteristics of the proposed algorithm are summarized below:
• We propose a novel DNN architecture for semi-supervised semantic segmentation using heterogeneous annotations. The new architecture decouples classification and segmentation tasks, which enables us to employ pre-trained models for classification network and train only segmentation network and bridging layers using a few strongly annotated data.
• The bridging layers construct class-specific activation maps, which are delivered from classification network to segmentation network. These maps provide strong priors for segmentation, and reduce search space dramatically for training and inference.
• Overall training procedure is very simple since the two networks are trained separately. Our algorithm does not infer segmentation labels of weakly annotated images through iterative heuristics¹, which are common in semi-supervised learning techniques [8, 10].

¹ Due to this property, our framework is different from standard semi-supervised learning but close to few-shot learning with heterogeneous annotations. Nonetheless, we refer to it as semi-supervised learning in this paper since we have a fraction of strongly annotated data in our training dataset but complete annotations of weak labels. Note that our level of supervision is similar to the semi-supervised learning case in [8].
The proposed algorithm provides a concept to make up for the lack of strongly annotated training
data using a large number of weakly annotated data. This concept is interesting because the assumption about the availability of training data is desirable for real situations. We estimate figure-ground
segmentation maps only for the relevant classes identified by classification network, which improves
scalability of algorithm in terms of the number of classes. Finally, our algorithm outperforms the
comparable semi-supervised learning method with substantial margins in various settings.
4 Architecture
This section describes the detailed configurations of the proposed network, including classification
network, segmentation network and bridging layers between the two networks.
4.1 Classification Network
The classification network takes an image x as its input, and outputs a normalized score vector
S(x; θ_c) ∈ R^L representing a set of relevance scores of the input x based on the trained classification model θ_c for predefined L categories. The objective of the classification network is to minimize the error between ground-truths and estimated class labels, which is formally written as

    \min_{\theta_c} \sum_i e_c(y_i, S(x_i; \theta_c)),    (1)

where y_i ∈ {0, 1}^L denotes the ground-truth label vector of the i-th example and e_c(y_i, S(x_i; θ_c)) is the classification loss of S(x_i; θ_c) with respect to y_i.
We employ VGG 16-layer net [13] as the base architecture for our classification network. It consists of 13 convolutional layers, followed by rectification and optional pooling layers, and 3 fully
connected layers for domain-specific projection. Sigmoid cross-entropy loss function is employed
in Eq. (1), which is a typical choice in multi-class classification tasks.
Given output scores S(x_i; θ_c), our classification network identifies a set of labels L_i associated with input image x_i. The region in x_i corresponding to each label l ∈ L_i is predicted by the segmentation
network discussed next.
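As an illustration of Eq. (1), the following is a minimal NumPy sketch of the sigmoid cross-entropy loss e_c for one image; the function name and the use of NumPy are our own, not the authors' Caffe implementation.

    import numpy as np

    def sigmoid_cross_entropy(scores, labels):
        # scores: raw (pre-sigmoid) outputs of the classification network, shape (L,)
        # labels: binary ground-truth vector y in {0,1}^L, shape (L,)
        probs = 1.0 / (1.0 + np.exp(-scores))        # element-wise sigmoid
        eps = 1e-12                                  # numerical stability
        # Independent Bernoulli cross-entropy per class, summed over L classes.
        return -np.sum(labels * np.log(probs + eps)
                       + (1 - labels) * np.log(1 - probs + eps))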
4.2 Segmentation Network
The segmentation network takes a class-specific activation map g_i^l of input image x_i, which is obtained from the bridging layers, and produces a two-channel class-specific segmentation map M(g_i^l; θ_s) after applying a softmax function, where θ_s is the model parameter of the segmentation network. Note that M(g_i^l; θ_s) has foreground and background channels, which are denoted by M_f(g_i^l; θ_s) and M_b(g_i^l; θ_s), respectively. The segmentation task is formulated as per-pixel regression to the ground-truth segmentation, which minimizes

    \min_{\theta_s} \sum_i e_s(z_i^l, M(g_i^l; \theta_s)),    (2)

where z_i^l denotes the binary ground-truth segmentation mask for category l of the i-th image x_i, and e_s(z_i^l, M(g_i^l; θ_s)) is the segmentation loss of M_f(g_i^l; θ_s) (or equivalently the segmentation loss of M_b(g_i^l; θ_s)) with respect to z_i^l.
The recently proposed deconvolution network [12] is adopted for our segmentation network. Given
an input activation map gil corresponding to input image xi , the segmentation network generates
a segmentation mask in the same size to xi by multiple series of operations of unpooling, deconvolution and rectification. Unpooling is implemented by importing the switch variable from every
pooling layer in the classification network, and the number of deconvolutional and unpooling layers
are identical to the number of convolutional and pooling layers in the classification network. We
employ the softmax loss function to measure per-pixel loss in Eq. (2).
Note that the objective function in Eq. (2) corresponds to pixel-wise binary classification; it infers
whether each pixel belongs to the given class l or not. This is the major difference from the existing
networks for semantic segmentation including [12], which aim to classify each pixel to one of the
L predefined classes. By decoupling classification from segmentation and posing the objective
of segmentation network as binary classification, our algorithm reduces the number of parameters
Figure 2: Examples of class-specific activation maps (output of bridging layers). We show the most representative channel for visualization. Despite significant variations in input images, the class-specific activation maps share similar properties.
in the segmentation network significantly. Specifically, this is because we identify the relevant
labels using classification network and perform binary segmentation for each of the labels, where the
number of output channels in the segmentation network is set to two (for foreground and background)
regardless of the total number of candidate classes. This property is especially advantageous in our
challenging scenario, where only a few pixel-wise annotations (typically 5 to 10 annotations per
class) are available for training segmentation network.
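To make the two-channel objective of Eq. (2) concrete, here is a hedged NumPy sketch of the per-pixel softmax loss; the (2, H, W) logit layout and the function name are assumptions for illustration, not the paper's exact implementation.

    import numpy as np

    def binary_segmentation_loss(logits, mask):
        # logits: (2, H, W); channel 0 = background, channel 1 = foreground
        # mask:   (H, W) binary ground-truth z_i^l for class l
        shifted = logits - logits.max(axis=0, keepdims=True)   # stable softmax
        probs = np.exp(shifted) / np.exp(shifted).sum(axis=0, keepdims=True)
        bg, fg = probs[0], probs[1]
        eps = 1e-12
        per_pixel = -(mask * np.log(fg + eps) + (1 - mask) * np.log(bg + eps))
        return per_pixel.mean()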
4.3 Bridging Layers
To enable the segmentation network described in Section 4.2 to produce the segmentation mask of
a specific class, the input to the segmentation network should involve class-specific information as
well as spatial information required for shape generation. To this end, we have additional layers
underneath the segmentation network, referred to as bridging layers, to construct the class-specific activation map g_i^l for each identified label l ∈ L_i.
To encode the spatial configuration of objects present in an image, we exploit outputs from an intermediate layer in the classification network. We take the outputs from the last pooling layer (pool5) since
the activation patterns of convolution and pooling layers often preserve spatial information effectively while the activations in the higher layers tend to capture more abstract and global information.
We denote the activation map of the pool5 layer by f_spat afterwards.
Although activations in f_spat maintain useful information for shape generation, they contain mixed information of all relevant labels in x_i, so we should additionally identify class-specific activations in f_spat. For this purpose, we compute class-specific saliency maps using the back-propagation technique proposed in [14]. Let f^(i) be the output of the i-th layer (i = 1, ..., M) in the classification network. The relevance of activations in f^(k) with respect to a specific class l is computed by the chain rule of partial derivatives, similar to error back-propagation in optimization:

    f_{cls}^{l} = \frac{\partial S_l}{\partial f^{(k)}} = \frac{\partial f^{(M)}}{\partial f^{(M-1)}} \cdot \frac{\partial f^{(M-1)}}{\partial f^{(M-2)}} \cdots \frac{\partial f^{(k+1)}}{\partial f^{(k)}},    (3)

where f_cls^l denotes the class-specific saliency map and S_l is the classification score of class l. Intuitively, Eq. (3) means that the values in f_cls^l depend on how much the activations in f^(k) are relevant to class l; this is measured by computing the partial derivative of the class score S_l with respect to the activations in f^(k). We back-propagate the class-specific information until the pool5 layer.
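In a modern autograd framework, the chain rule of Eq. (3) is applied automatically. The following PyTorch sketch (our own illustration, not the authors' Caffe code) assumes hypothetical `model.features` and `model.classifier` sub-networks split at pool5.

    import torch

    def class_specific_saliency(model, image, class_idx):
        pool5 = model.features(image)        # f^(k): pool5 activations
        pool5.retain_grad()                  # keep gradient of a non-leaf tensor
        scores = model.classifier(pool5)     # class scores S
        scores[0, class_idx].backward()      # chain rule down to pool5, Eq. (3)
        return pool5.grad                    # f_cls^l, same shape as pool5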
The class-specific activation map g_i^l is obtained by combining both f_spat and f_cls^l. We first concatenate f_spat and f_cls^l in their channel direction, and forward-propagate the result through the fully-connected bridging layers, which discover the optimal combination of f_spat and f_cls^l using the trained weights. The resultant class-specific activation map g_i^l, which contains both spatial and class-specific information, is given to the segmentation network to produce a class-specific segmentation map. Note that the changes in g_i^l depend only on f_cls^l since f_spat is fixed for all classes in an input image.
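A hedged sketch of this combination step is given below; the layer sizes and class names are placeholders rather than the paper's exact configuration.

    import torch
    import torch.nn as nn

    class BridgingLayers(nn.Module):
        # Concatenate f_spat and f_cls^l channel-wise, then mix them with
        # fully-connected layers to produce the class-specific map g_i^l.
        def __init__(self, channels, height, width, out_dim):
            super().__init__()
            in_dim = 2 * channels * height * width   # f_spat and f_cls^l stacked
            self.fc = nn.Sequential(
                nn.Linear(in_dim, out_dim), nn.ReLU(),
                nn.Linear(out_dim, out_dim), nn.ReLU())

        def forward(self, f_spat, f_cls):
            x = torch.cat([f_spat, f_cls], dim=1)     # channel-wise concatenation
            return self.fc(x.flatten(start_dim=1))    # g_i^l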
[Figure 3: Input image (left) and its segmentation maps (right) of individual classes; the foreground maps M_f(g_i^person), M_f(g_i^table), and M_f(g_i^plant) are shown for the identified labels L_i = {person, table, plant}.]
Figure 2 visualizes examples of class-specific activation maps g_i^l obtained from several validation images. The activations from images in the same class share similar patterns despite substantial appearance variations, which shows that the outputs of the bridging layers capture class-specific information effectively; this property makes it possible to obtain figure-ground segmentation maps for individual relevant classes in the segmentation network. More importantly, it reduces the variations of input distributions for the segmentation network, which makes it possible to achieve good generalization performance in segmentation even with a small number of training examples.
For inference, we compute a class-specific activation map g_i^l for each identified label l ∈ L_i and obtain class-specific segmentation maps {M(g_i^l; θ_s)} for all l ∈ L_i. In addition, we obtain M(g_i^*; θ_s), where g_i^* is the activation map from the bridging layers for all identified labels. The final label estimation is given by identifying the label with the maximum score in each pixel out of {M_f(g_i^l; θ_s)}, l ∈ L_i, and M_b(g_i^*; θ_s). Figure 3 illustrates the output segmentation map of each g_i^l for x_i, where each map identifies the high-response area of its label successfully.
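The per-pixel label estimation can be summarized by the following NumPy sketch, assuming each map has already passed through the softmax and labels are integer ids; all names are illustrative.

    import numpy as np

    def combine_maps(fg_maps, bg_map, labels):
        # fg_maps: dict {label id: (H, W) foreground map M_f(g_i^l)}
        # bg_map:  (H, W) background map M_b(g_i^*)
        stacked = np.stack([bg_map] + [fg_maps[l] for l in labels])
        winner = stacked.argmax(axis=0)        # channel 0 = background
        lookup = np.array([0] + list(labels))  # channel index -> label id
        return lookup[winner]                  # (H, W) final label map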
5 Training
In our semi-supervised learning scenario, we have mixed training examples with weak and strong
annotations. Let W = {1, ..., N_w} and S = {1, ..., N_s} denote the index sets of images with image-level and pixel-wise class labels, respectively, where N_w ≫ N_s. We first train the classification network using the images in W by optimizing the loss function in Eq. (1). Then, fixing the weights in the classification network, we jointly train the bridging layers and the segmentation network using images in S by optimizing Eq. (2). For training the segmentation network, we need to obtain the class-specific activation map g_i^l from the bridging layers using the ground-truth class labels associated with x_i, i ∈ S. Note that we can reduce complexity in training by optimizing the two networks separately.
Although the proposed algorithm has several advantages in training segmentation network with few
training images, it would still be better to have more training examples with strong annotations.
Hence, we propose an effective data augmentation strategy, combinatorial cropping. Let L_i^* denote the set of ground-truth labels associated with image x_i, i ∈ S. We enumerate all possible combinations of labels in P(L_i^*), where P(L_i^*) denotes the powerset of L_i^*. For each P ∈ P(L_i^*) except the empty set (P ≠ ∅), we construct a binary ground-truth segmentation mask z_i^P by setting the pixels corresponding to every label l ∈ P as foreground and the rest as background. Then, we generate N_p random sub-images enclosing the foreground areas based on a region proposal method [15] and sampling. Through this simple data augmentation technique, we effectively have

    N_t = N_s + N_p \cdot \sum_{i \in S} \left( 2^{|L_i^*|} - 1 \right)

training examples with strong annotations, where N_t ≫ N_s.
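The subset enumeration behind this count can be sketched in a few lines of Python; the generator name is our own.

    from itertools import combinations

    def label_subsets(labels):
        # Non-empty elements of the powerset P(L_i^*); each subset P defines
        # one binary mask z_i^P. For |L_i^*| labels there are 2^|L_i^*| - 1.
        for r in range(1, len(labels) + 1):
            for subset in combinations(labels, r):
                yield set(subset)

    # Example: 3 labels -> 2^3 - 1 = 7 subsets, so N_p * 7 extra crops per image.
    assert sum(1 for _ in label_subsets(['person', 'table', 'plant'])) == 7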
6 Experiments
6.1 Implementation Details
Dataset We employ PASCAL VOC 2012 dataset [1] for training and testing of the proposed deep
network. The dataset with extended annotations from [16], which contains 12,031 images with
pixel-wise class labels, is used in our experiment. To simulate semi-supervised learning scenario, we
divide the training images into two non-disjoint subsets: W with weak annotations only and S with
strong annotations as well. There are 10,582 images with image-level class labels, which are used to
train our classification network. We also construct training datasets with strongly annotated images;
Table 1: Evaluation results on PASCAL VOC 2012 validation set.

# of strongs      | DecoupledNet | WSSL-Small FoV [8] | WSSL-Large-FoV [8] | DecoupledNet-Str | DeconvNet [12]
Full              | 67.5         | 63.9               | 67.6               | 67.5             | 67.1
25 (×20 classes)  | 62.1         | 56.9               | 54.2               | 50.3             | 38.6
10 (×20 classes)  | 57.4         | 47.6               | 38.9               | 41.7             | 21.5
5 (×20 classes)   | 53.1         | -                  | -                  | 32.7             | 15.3
Table 2: Evaluation results on PASCAL VOC 2012 test set.

Models            | bkg  | areo | bike | bird | boat | bottle | bus  | car  | cat  | chair | cow  | table | dog  | horse | mbk  | prsn | plnt | sheep | sofa | train | tv   | mean
DecoupledNet-Full | 91.5 | 78.8 | 39.9 | 78.1 | 53.8 | 68.3   | 83.2 | 78.2 | 80.6 | 25.8  | 62.6 | 55.5  | 75.1 | 77.2  | 77.1 | 76.0 | 47.8 | 74.1  | 47.5 | 66.4  | 60.4 | 66.6
DecoupledNet-25   | 90.1 | 75.8 | 41.7 | 70.4 | 46.4 | 66.2   | 83.0 | 69.9 | 76.7 | 23.1  | 61.2 | 43.3  | 70.4 | 75.7  | 74.1 | 65.7 | 46.2 | 73.8  | 39.7 | 61.9  | 57.6 | 62.5
DecoupledNet-10   | 88.5 | 73.8 | 40.1 | 68.1 | 45.5 | 59.5   | 76.4 | 62.7 | 71.4 | 17.7  | 60.4 | 39.9  | 64.5 | 73.0  | 68.5 | 56.0 | 43.4 | 70.8  | 37.8 | 60.3  | 54.2 | 58.7
DecoupledNet-5    | 87.4 | 70.4 | 40.9 | 60.4 | 36.3 | 61.2   | 67.3 | 67.7 | 64.6 | 12.8  | 60.2 | 26.4  | 63.2 | 69.6  | 64.8 | 53.1 | 34.7 | 65.3  | 34.4 | 57.0  | 50.5 | 54.7
the number of images with segmentation labels per class is controlled to evaluate the impact of
supervision level. In our experiment, three different cases (5, 10, or 25 training images with strong annotations per class) are tested to show the effectiveness of our semi-supervised framework. We
evaluate the performance of the proposed algorithm on 1,449 validation images.
Data Augmentation We employ different strategies to augment training examples in the two
datasets with weak and strong annotations. For the images with weak annotations, simple data augmentation techniques such as random cropping and horizontal flipping are employed as suggested in
[13]. We perform combinatorial cropping proposed in Section 5 for the images with strong annotations, where EdgeBox [15] is adopted to generate region proposals and the N_p (= 200) sub-images
are generated for each label combination.
Optimization The proposed network is implemented based on Caffe library [17]. The standard
Stochastic Gradient Descent (SGD) with momentum is employed for optimization, where all parameters are identical to [12]. We initialize the weights of the classification network using VGG
16-layer net pre-trained on ILSVRC [18] dataset. When we train the deep network with full annotations, the network converges after approximately 5.5K and 17.5K SGD iterations with mini-batches
of 64 examples in training classification and segmentation networks, respectively; training takes 3
days (0.5 day for classification network and 2.5 days for segmentation network) in a single Nvidia
GTX Titan X GPU with 12G memory. Note that training segmentation network is much faster in
our semi-supervised setting while there is no change in training time of classification network.
6.2 Results on PASCAL VOC Dataset
Our algorithm denoted by DecoupledNet is compared with two variations in WSSL [8], which is another algorithm based on semi-supervised learning with heterogeneous annotations. We also test the
performance of DecoupledNet-Str² and DeconvNet [12], which only utilize examples with strong
annotations, to analyze the benefit of image-level weak annotations. All learned models in our experiment are based only on the training set (not including the validation set) in PASCAL VOC 2012
dataset. All algorithms except WSSL [8] report the results without CRF. Segmentation accuracy is
measured by Intersection over Union (IoU) between ground-truth and predicted segmentation, and
the mean IoU over 20 semantic categories is employed for the final performance evaluation.
Table 1 summarizes quantitative results on PASCAL VOC 2012 validation set. Given the same
amount of supervision, DecoupledNet presents substantially better performance even without any
post-processing than WSSL [8], which is a directly comparable method. In particular, our algorithm has great advantage over WSSL when the number of strong annotations is extremely small.
We believe that this is because DecoupledNet reduces search space for segmentation effectively by
employing the bridging layers and the deep network can be trained with a smaller number of images
with strong annotations consequently. Our results are even more meaningful since training procedure of DecoupledNet is very straightforward compared to WSSL and does not involve heuristic
iterative procedures, which are common in semi-supervised learning methods.
When there are only a small number of strongly annotated training data, our algorithm obviously
outperforms DecoupledNet-Str and DeconvNet [12] by exploiting the rich information of weakly an-

² DecoupledNet-Str is identical to DecoupledNet except that its classification and segmentation networks are trained with the same images, where image-level weak annotations are generated from pixel-wise segmentation annotations.
Figure 4: Semantic segmentation results of several PASCAL VOC 2012 validation images based
on the models trained on a different number of pixel-wise segmentation annotations.
notated images. It is interesting that DecoupledNet-Str is clearly better than DeconvNet, especially
when the number of training examples is small. For reference, the best accuracy of the algorithm
based only on the examples with image-level labels is 42.0% [7], which is much lower than our
result with five strongly annotated images per class, even though [7] requires significant efforts for
heuristic post-processing. These results show that even little strong supervision can improve semantic segmentation performance dramatically.
Table 2 presents more comprehensive results of our algorithm on the PASCAL VOC test set. Our algorithm works well in general and approaches the empirical upper bound fast with a small number of strongly annotated images. A drawback of our algorithm is that it does not achieve the state-of-the-art performance [3, 11, 12] when the (almost³) full supervision is provided in PASCAL VOC
dataset. This is probably because our method optimizes classification and segmentation networks
separately although joint optimization of two objectives is more desirable. However, note that our
strategy is more appropriate for semi-supervised learning scenario as shown in our experiment.
Figure 4 presents several qualitative results from our algorithm. Note that our model trained only
with five strong annotations per class already shows good generalization performance, and that more
training examples with strong annotations improve segmentation accuracy and reduce label confusions substantially. Refer to our project website⁴ for more comprehensive qualitative evaluation.
7 Conclusion
We proposed a novel deep neural network architecture for semi-supervised semantic segmentation
with heterogeneous annotations, where classification and segmentation networks are decoupled for
both training and inference. The decoupled network is conceptually appropriate for exploiting heterogeneous and unbalanced training data with image-level class labels and/or pixel-wise segmentation annotations, and simplifies training procedure dramatically by discarding complex iterative
procedures for intermediate label inferences. Bridging layers play a critical role in reducing the output space of segmentation, and facilitate learning the segmentation network using a handful of segmentation annotations. Experimental results validate the effectiveness of our decoupled network,
which outperforms existing semi- and weakly-supervised approaches with substantial margins.
Acknowledgement This work was partly supported by the ICT R&D program of MSIP/IITP
[B0101-15-0307; ML Center, B0101-15-0552; DeepView] and Samsung Electronics Co., Ltd.
³ We did not include the validation set for training and have fewer training examples than the competitors.
⁴ http://cvlab.postech.ac.kr/research/decouplednet/
References
[1] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The
Pascal Visual Object Classes (VOC) challenge. IJCV, 88(2):303-338, 2010.
[2] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic
image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[4] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[5] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Simultaneous detection and segmentation. In ECCV, 2014.
[6] Mohammadreza Mostajabi, Payman Yadollahpour, and Gregory Shakhnarovich. Feedforward semantic
segmentation with zoom-out features. CVPR, 2015.
[7] Pedro O. Pinheiro and Ronan Collobert. Weakly supervised semantic segmentation with convolutional
networks. In CVPR, 2015.
[8] George Papandreou, Liang-Chieh Chen, Kevin Murphy, and Alan L Yuille. Weakly-and semi-supervised
learning of a DCNN for semantic image segmentation. In ICCV, 2015.
[9] Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell. Fully convolutional multi-class
multiple instance learning. In ICLR, 2015.
[10] Jifeng Dai, Kaiming He, and Jian Sun. BoxSup: Exploiting bounding boxes to supervise convolutional
networks for semantic segmentation. In ICCV, 2015.
[11] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong
Du, Chang Huang, and Philip Torr. Conditional random fields as recurrent neural networks. In ICCV,
2015.
[12] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic
segmentation. In ICCV, 2015.
[13] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[14] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
[15] C. Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
[16] Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic
contours from inverse detectors. In ICCV, 2011.
[17] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio
Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv
preprint arXiv:1408.5093, 2014.
[18] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
Action-Conditional Video Prediction
using Deep Networks in Atari Games
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh
University of Michigan, Ann Arbor, MI 48109, USA
{junhyuk,guoxiao,honglak,rickl,baveja}@umich.edu
Abstract
Motivated by vision-based reinforcement learning (RL) problems, in particular
Atari games from the recent benchmark Arcade Learning Environment (ALE),
we consider spatio-temporal prediction problems where future image-frames depend on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can
involve tens of objects with one or more objects being controlled by the actions
directly and many other objects being influenced indirectly, can involve entry and
departure of objects, and can involve deep partial observability. We propose and
evaluate two deep neural network architectures that consist of encoding, actionconditional transformation, and decoding layers based on convolutional neural
networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some
games. To the best of our knowledge, this paper is the first to make and evaluate
long-term predictions on high-dimensional video conditioned by control inputs.
1 Introduction
Over the years, deep learning approaches (see [5, 26] for survey) have shown great success in many
visual perception problems (e.g., [16, 7, 32, 9]). However, modeling videos (building a generative
model) is still a very challenging problem because it often involves high-dimensional natural-scene
data with complex temporal dynamics. Thus, recent studies have mostly focused on modeling simple
video data, such as bouncing balls or small patches, where the next frame is highly-predictable given
the previous frames [29, 20, 19]. In many applications, however, future frames depend not only on
previous frames but also on control or action variables. For example, the first-person-view in a vehicle is affected by wheel-steering and acceleration. The camera observation of a robot is similarly
dependent on its movement and changes of its camera angle. More generally, in vision-based reinforcement learning (RL) problems, learning to predict future images conditioned on actions amounts
to learning a model of the dynamics of the agent-environment interaction, an essential component
of model-based approaches to RL. In this paper, we focus on Atari games from the Arcade Learning Environment (ALE) [1] as a source of challenging action-conditional video modeling problems.
While not composed of natural scenes, frames in Atari games are high-dimensional, can involve tens
of objects with one or more objects being controlled by the actions directly and many other objects
being influenced indirectly, can involve entry and departure of objects, and can involve deep partial
observability. To the best of our knowledge, this paper is the first to make and evaluate long-term
predictions on high-dimensional images conditioned by control inputs.
This paper proposes, evaluates, and contrasts two spatio-temporal prediction architectures based on
deep networks that incorporate action variables (See Figure 1). Our experimental results show that
our architectures are able to generate realistic frames over 100-step action-conditional future frames
without diverging in some Atari games. We show that the representations learned by our architectures 1) approximately capture natural similarity among actions, and 2) discover which objects are
directly controlled by the agent's actions and which are only indirectly influenced or not controlled.
We evaluated the usefulness of our architectures for control in two ways: 1) by replacing emulator
frames with predicted frames in a previously-learned model-free controller (DQN; DeepMind's state
[Figure 1: Proposed Encoding-Transformation-Decoding network architectures. (a) Feedforward encoding; (b) Recurrent encoding. Both variants pass frames and the action through encoding, action-conditional transformation, and decoding stages.]
of the art Deep-Q-Network for Atari Games [21]), and 2) by using the predicted frames to drive a
more informed than random exploration strategy to improve a model-free controller (also DQN).
2 Related Work
Video Prediction using Deep Networks. The problem of video prediction has led to a variety of
architectures in deep learning. A recurrent temporal restricted Boltzmann machine (RTRBM) [29]
was proposed to learn temporal correlations from sequential data by introducing recurrent connections in RBM. A structured RTRBM (sRTRBM) [20] scaled up RTRBM by learning dependency
structures between observations and hidden variables from data. More recently, Michalski et al. [19]
proposed a higher-order gated autoencoder that defines multiplicative interactions between consecutive frames and mapping units, and showed that temporal prediction problem can be viewed as
learning and inferring higher-order interactions between consecutive images. Srivastava et al. [28]
applied a sequence-to-sequence learning framework [31] to a video domain, and showed that long
short-term memory (LSTM) [12] networks are capable of generating video of bouncing handwritten digits. In contrast to these previous studies, this paper tackles problems where control variables
affect temporal dynamics, and in addition scales up spatio-temporal prediction to larger-size images.
ALE: Combining Deep Learning and RL. Atari 2600 games provide challenging environments
for RL because of high-dimensional visual observations, partial observability, and delayed rewards.
Approaches that combine deep learning and RL have made significant advances [21, 22, 11]. Specifically, DQN [21] combined Q-learning [36] with a convolutional neural network (CNN) and achieved
state-of-the-art performance on many Atari games. Guo et al. [11] used the ALE-emulator for making action-conditional predictions with slow UCT [15], a Monte-Carlo tree search method, to generate training data for a fast-acting CNN, which outperformed DQN on several domains. Throughout
this paper we will use DQN to refer to the architecture used in [21] (a more recent work [22] used a
deeper CNN with more data to produce the currently best-performing Atari game players).
Action-Conditional Predictive Model for RL. The idea of building a predictive model for
vision-based RL problems was introduced by Schmidhuber and Huber [27]. They proposed a neural
network that predicts the attention region given the previous frame and an attention-guiding action.
More recently, Lenz et al. [17] proposed a recurrent neural network with multiplicative interactions
that predicts the physical coordinate of a robot. Compared to this previous work, our work is evaluated on much higher-dimensional data with complex dependencies among observations. There have
been a few attempts to learn from ALE data a transition-model that makes predictions of future
frames. One line of work [3, 4] divides game images into patches and applies a Bayesian framework
to predict patch-based observations. However, this approach assumes that neighboring patches are
enough to predict the center patch, which is not true in Atari games because of many complex interactions. The evaluation in this prior work is 1-step prediction loss; in contrast, here we make and
evaluate long-term predictions both for quality of pixels generated and for usefulness to control.
3 Proposed Architectures and Training Method
The goal of our architectures is to learn a function f : x_{1:t}, a_t → x_{t+1}, where x_t and a_t are the frame and action variables at time t, and x_{1:t} are the frames from time 1 to time t. Figure 1 shows our two architectures that are each composed of encoding layers that extract spatio-temporal features from the input frames (§3.1), action-conditional transformation layers that transform the encoded features into a prediction of the next frame in high-level feature space by introducing action variables as additional input (§3.2), and finally decoding layers that map the predicted high-level features into pixels (§3.3). Our contributions are in the novel action-conditional deep convolutional architectures for high-dimensional, long-term prediction as well as in the novel use of the architectures in vision-based RL domains.
3.1 Two Variants: Feedforward Encoding and Recurrent Encoding
Feedforward encoding takes a fixed history of previous frames as an input, which is concatenated
through channels (Figure 1a), and stacked convolution layers extract spatio-temporal features directly from the concatenated frames. The encoded feature vector h_t^enc ∈ R^n at time t is:

    h_t^{enc} = \mathrm{CNN}(x_{t-m+1:t}),    (1)

where x_{t-m+1:t} ∈ R^{(m·c)×h×w} denotes m frames of h × w pixel images with c color channels.
CNN is a mapping from raw pixels to a high-level feature vector using multiple convolution layers
and a fully-connected layer at the end, each of which is followed by a non-linearity. This encoding
can be viewed as early-fusion [14] (other types of fusions, e.g., late-fusion or 3D convolution [35]
can also be applied to this architecture).
Recurrent encoding takes one frame as an input for each time-step and extracts spatio-temporal
features using an RNN in which the temporal dynamics is modeled by the recurrent layer on top
of the high-level feature vector extracted by convolution layers (Figure 1b). In this paper, LSTM
without peephole connection is used for the recurrent layer as follows:
    [h_t^{enc}, c_t] = \mathrm{LSTM}\left(\mathrm{CNN}(x_t), h_{t-1}^{enc}, c_{t-1}\right),    (2)

where c_t ∈ R^n is a memory cell that retains information from a deep history of inputs. Intuitively,
CNN (xt ) is given as input to the LSTM so that the LSTM captures temporal correlations from
high-level spatial features.
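A hedged PyTorch sketch of the recurrent encoding in Eq. (2) follows; the tiny CNN stands in for the paper's four-layer convolutional encoder, and all names are illustrative.

    import torch
    import torch.nn as nn

    class RecurrentEncoder(nn.Module):
        def __init__(self, n=2048):
            super().__init__()
            self.cnn = nn.Sequential(                     # placeholder CNN
                nn.Conv2d(3, 64, 8, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, n), nn.ReLU())
            self.lstm = nn.LSTMCell(n, n)

        def forward(self, frames):                        # frames: (T, B, 3, H, W)
            B, n = frames.size(1), self.lstm.hidden_size
            h, c = frames.new_zeros(B, n), frames.new_zeros(B, n)
            for x_t in frames:                            # unroll over time
                h, c = self.lstm(self.cnn(x_t), (h, c))   # Eq. (2)
            return h, c                                   # h_t^enc and cell c_t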
3.2 Multiplicative Action-Conditional Transformation
We use multiplicative interactions between the encoded feature vector and the control variables:
    h_{t,i}^{dec} = \sum_{j,l} W_{ijl} \, h_{t,j}^{enc} \, a_{t,l} + b_i,    (3)

where h_t^enc ∈ R^n is an encoded feature, h_t^dec ∈ R^n is an action-transformed feature, a_t ∈ R^a is the action-vector at time t, W ∈ R^{n×n×a} is a 3-way tensor weight, and b ∈ R^n is a bias. When the
action a is represented using one-hot vector, using a 3-way tensor is equivalent to using different
weight matrices for each action. This enables the architecture to model different transformations
for different actions. The advantages of multiplicative interactions have been explored in image and
text processing [33, 30, 18]. In practice the 3-way tensor is not scalable because of its large number
of parameters. Thus, we approximate the tensor by factorizing into three matrices as follows [33]:
    h_t^{dec} = W^{dec}\left(W^{enc} h_t^{enc} \odot W^{a} a_t\right) + b,    (4)

where W^dec ∈ R^{n×f}, W^enc ∈ R^{f×n}, W^a ∈ R^{f×a}, b ∈ R^n, and f is the number of factors.
Unlike the 3-way tensor, the above factorization shares the weights between different actions by
mapping them to the size-f factors. This sharing may be desirable relative to the 3-way tensor when
there are common temporal dynamics in the data across different actions (discussed further in ?4.3).
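The factored transformation of Eq. (4) amounts to three linear maps and an element-wise product, as in this hedged PyTorch sketch (our own illustration, not the authors' implementation).

    import torch.nn as nn

    class ActionTransform(nn.Module):
        def __init__(self, n, a, f):
            super().__init__()
            self.W_enc = nn.Linear(n, f, bias=False)  # W^enc: f x n
            self.W_a = nn.Linear(a, f, bias=False)    # W^a:   f x a
            self.W_dec = nn.Linear(f, n)              # W^dec: n x f, plus bias b

        def forward(self, h_enc, action):
            factors = self.W_enc(h_enc) * self.W_a(action)  # element-wise product
            return self.W_dec(factors)                      # h^dec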
3.3 Convolutional Decoding
It has been recently shown that a CNN is capable of generating an image effectively using upsampling followed by convolution with stride of 1 [8]. Similarly, we use the "inverse" operation of convolution, called deconvolution, which maps a 1 × 1 spatial region of the input to d × d using deconvolution kernels. The effect of s × s upsampling can be achieved without explicitly upsampling
the feature map by using stride of s. We found that this operation is more efficient than upsampling
followed by convolution because of the smaller number of convolutions with larger stride.
In the proposed architecture, the transformed feature vector hdec is decoded into pixels as follows:
    \hat{x}_{t+1} = \mathrm{Deconv}\left(\mathrm{Reshape}\left(h^{dec}\right)\right),    (5)
where Reshape is a fully-connected layer where hidden units form a 3D feature map, and Deconv
consists of multiple deconvolution layers, each of which is followed by a non-linearity except for
the last deconvolution layer.
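A hedged PyTorch sketch of the decoder in Eq. (5) follows; the filter sizes match the configuration described in Section 4, while padding and output cropping are omitted for brevity.

    import torch.nn as nn

    class Decoder(nn.Module):
        def __init__(self, n=2048):
            super().__init__()
            self.reshape = nn.Linear(n, 128 * 11 * 8)      # "Reshape" layer
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(128, 128, 4, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(128, 128, 6, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(128, 128, 6, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(128, 3, 8, stride=2))   # no final non-linearity

        def forward(self, h_dec):
            x = self.reshape(h_dec).view(-1, 128, 11, 8)   # 3D feature map
            return self.deconv(x)                          # predicted frame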
3.4 Curriculum Learning with Multi-Step Prediction
It is almost inevitable for a predictive model to make noisy predictions of high-dimensional images.
When the model is trained on a 1-step prediction objective, small prediction errors can compound
through time. To alleviate this effect, we use a multi-step prediction objective. More specifically, given the training data D = {(x_1^{(i)}, a_1^{(i)}, ..., x_{T_i}^{(i)}, a_{T_i}^{(i)})}_{i=1}^{N}, the model is trained to minimize the average squared error over K-step predictions as follows:

    L_K(\theta) = \frac{1}{2K} \sum_i \sum_t \sum_{k=1}^{K} \left\| \hat{x}_{t+k}^{(i)} - x_{t+k}^{(i)} \right\|^2,    (6)

where \hat{x}_{t+k}^{(i)} is a k-step future prediction. Intuitively, the network is repeatedly unrolled through K time steps by using its prediction as an input for the next time-step.
The model is trained in multiple phases based on increasing K as suggested by Michalski et al. [19].
In other words, the model is trained to predict short-term future frames and fine-tuned to predict
longer-term future frames after the previous phase converges. We found that this curriculum learning [6] approach is necessary to stabilize the training. A stochastic gradient descent with backpropagation through time (BPTT) is used to optimize the parameters of the network.
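A hedged sketch of the K-step objective in Eq. (6) for one sequence is given below; `model(frame, action)` is assumed to return the next-frame prediction, and all names are our own.

    def multi_step_loss(model, frames, actions, K):
        # frames: ground-truth frames x_t, ..., x_{t+K}; actions: a_t, a_{t+1}, ...
        total = 0.0
        x_hat = frames[0]
        for k in range(K):
            x_hat = model(x_hat, actions[k])             # feed prediction back in
            total = total + ((x_hat - frames[k + 1]) ** 2).sum()
        return total / (2 * K)                           # Eq. (6), one sequence

    # Curriculum: train with K=1 until convergence, then fine-tune with K=3 and K=5.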
4 Experiments
In the experiments that follow, we have the following goals for our two architectures. 1) To evaluate
the predicted frames in two ways: qualitatively evaluating the generated video, and quantitatively
evaluating the pixel-based squared error, 2) To evaluate the usefulness of predicted frames for control
in two ways: by replacing the emulator's frames with predicted frames for use by DQN, and by using
the predictions to improve exploration in DQN, and 3) To analyze the representations learned by our
architectures. We begin by describing the details of the data, model architecture, and baselines.
Data and Preprocessing. We used our replication of DQN to generate game-play video datasets
using an ε-greedy policy with ε = 0.3, i.e., DQN is forced to choose a random action with 30% probability. For each game, the dataset consists of about 500,000 training frames and 50,000 test
frames with actions chosen by DQN. Following DQN, actions are chosen once every 4 frames which
reduces the video from 60fps to 15fps. The number of actions available in games varies from 3 to
18, and they are represented as one-hot vectors. We used full-resolution RGB images (210 × 160)
and preprocessed the images by subtracting mean pixel values and dividing each pixel value by 255.
Network Architecture. Across all game domains, we use the same network architecture as follows. The encoding layers consist of 4 convolution layers and one fully-connected layer with 2048
hidden units. The convolution layers use 64 (8 × 8), 128 (6 × 6), 128 (6 × 6), and 128 (4 × 4) filters with stride of 2. Every layer is followed by a rectified linear function [23]. In the recurrent encoding network, an LSTM layer with 2048 hidden units is added on top of the fully-connected layer. The number of factors in the transformation layer is 2048. The decoding layers consist of one fully-connected layer with 11264 (= 128 × 11 × 8) hidden units followed by 4 deconvolution layers. The deconvolution layers use 128 (4 × 4), 128 (6 × 6), 128 (6 × 6), and 3 (8 × 8) filters with stride of
2. For the feedforward encoding network, the last 4 frames are given as an input for each time-step.
The recurrent encoding network takes one frame for each time-step, but it is unrolled through the
last 11 frames to initialize the LSTM hidden units before making a prediction. Our implementation
is based on Caffe toolbox [13].
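For concreteness, here is a hedged PyTorch sketch of the feedforward encoder together with the factored action transformation (Equation (4), as referenced in Section 4.3). Padding is unstated in the paper and left at zero here, and `conv_out_dim` depends on the input resolution; for 210 × 160 inputs with zero padding it works out to 128 × 10 × 7 = 8960 by our calculation, which is an inference rather than a reported number.

```python
import torch
import torch.nn as nn

class FeedforwardEncoder(nn.Module):
    """Encoder plus action-conditional transformation, sized as in Section 4."""
    def __init__(self, num_actions, conv_out_dim=8960, factors=2048):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=8, stride=2), nn.ReLU(),  # 4 RGB frames = 12 channels
            nn.Conv2d(64, 128, kernel_size=6, stride=2), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=6, stride=2), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(conv_out_dim, 2048), nn.ReLU(),
        )
        # Factored multiplicative interaction: h_dec = W_dec((W_enc h) * (W_a a)).
        self.w_enc = nn.Linear(2048, factors, bias=False)
        self.w_a = nn.Linear(num_actions, factors, bias=False)
        self.w_dec = nn.Linear(factors, 2048)

    def forward(self, frames, action_onehot):
        h = self.trunk(frames)
        f = self.w_enc(h) * self.w_a(action_onehot)  # elementwise product of factors
        return self.w_dec(f)  # h_dec, fed to the deconvolution decoder above
```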
Details of Training. We use the curriculum learning scheme above with three phases of increasing
prediction step objectives of 1, 3 and 5 steps, and learning rates of 10^{-4}, 10^{-5}, and 10^{-5}, respectively. RMSProp [34, 10] is used with momentum of 0.9, (squared) gradient momentum of 0.95,
and min squared gradient of 0.01. The batch size for each training phase is 32, 8, and 8 for the feedforward encoding network and 4, 4, and 4 for the recurrent encoding network, respectively. When
the recurrent encoding network is trained on the 1-step prediction objective, the network is unrolled
through 20 steps and predicts the last 10 frames by taking ground-truth images as input. Gradients
are clipped at [−0.1, 0.1] before the non-linearity of each gate of the LSTM as suggested by [10].
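These hyperparameters plug into an RMSProp-with-momentum update along the lines of the Graves [10] variant; the sketch below is our reading, not the authors' exact optimizer code, and `state` is assumed to start as {'sq': 0.0, 'mom': 0.0} per parameter.

```python
def rmsprop_step(w, grad, state, lr, mom=0.9, sq_mom=0.95, min_sq=0.01):
    """One RMSProp update with momentum (hyperparameters from the text)."""
    # Running average of squared gradients; min_sq floors the denominator.
    state['sq'] = sq_mom * state['sq'] + (1 - sq_mom) * grad ** 2
    denom = (state['sq'] + min_sq) ** 0.5
    # Momentum applied to the scaled gradient.
    state['mom'] = mom * state['mom'] - lr * grad / denom
    return w + state['mom']
```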
Two Baselines for Comparison. The first baseline is a multi-layer perceptron (MLP) that takes
the last frame as input and has 4 hidden layers with 400, 2048, 2048, and 400 units. The action
input is concatenated to the second hidden layer. This baseline uses approximately the same number
of parameters as the recurrent encoding model. The second baseline, no-action feedforward (or
naFf ), is the same as the feedforward encoding model (Figure 1a) except that the transformation
layer consists of one fully-connected layer that does not get the action as input.
[Figure 2 image: predicted frames from MLP, naFf, Feedforward, Recurrent, and Ground Truth at prediction steps 255, 256, and 257, with the action taken at each step (no-op at step 257)]
Figure 2: Example of predictions over 250 steps in Freeway. The "Step" and "Action" columns show the
number of prediction steps and the actions taken respectively. The white boxes indicate the object controlled
by the agent. From prediction step 256 to 257 the controlled object crosses the top boundary and reappears at
the bottom; this non-linear shift is predicted by our architectures and is not predicted by MLP and naFf. The
horizontal movements of the uncontrolled objects are predicted by our architectures and naFf but not by MLP.
[Figure 3 plots: mean squared error versus prediction step (0–100) for MLP, naFf, Feedforward, and Recurrent on (a) Seaquest, (b) Space Invaders, (c) Freeway, (d) QBert, (e) Ms Pacman]
Figure 3: Mean squared error over 100-step predictions
4.1 Evaluation of Predicted Frames
Qualitative Evaluation: Prediction video. The prediction videos of our models and baselines are
available in the supplementary material and at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/action-conditional-video-prediction. As seen in the
videos, the proposed models make qualitatively reasonable predictions over 30–500 steps depending
on the game. In all games, the MLP baseline quickly diverges, and the naFf baseline fails to predict
the controlled object. An example of long-term predictions is illustrated in Figure 2. We observed
that both of our models predict complex local translations well such as the movement of vehicles
and the controlled object. They can predict interactions between objects such as collision of two
objects. Since our architectures effectively extract hierarchical features using CNN, they are able to
make a prediction that requires a global context. For example, in Figure 2, the model predicts the
sudden change of the location of the controlled object (from the top to the bottom) at 257-step.
However, both of our models have difficulty in accurately predicting small objects, such as bullets in
Space Invaders. The reason is that the squared error signal is small when the model fails to predict
small objects during training. Another difficulty is in handling stochasticity. In Seaquest, e.g., new
objects appear from the left side or right side randomly, and so are hard to predict. Although our
models do generate new objects with reasonable shapes and movements (e.g., after appearing they
move as in the true frames), the generated frames do not necessarily match the ground-truth.
Quantitative Evaluation: Squared Prediction Error. Mean squared error over 100-step predictions is reported in Figure 3. Our predictive models outperform the two baselines for all domains.
However, the gap between our predictive models and naFf baseline is not large except for Seaquest.
This is due to the fact that the object controlled by the action occupies only a small part of the image.
[Figure 4 images: Feedforward, Recurrent, and True frame sequences for (a) Ms Pacman (28 × 28 cropped) and (b) Space Invaders (90 × 90 cropped)]
Figure 4: Comparison between two encoding models (feedforward and recurrent). (a) Controlled object is
moving along a horizontal corridor. As the recurrent encoding model makes a small translation error at the 4th
frame, the true position of the object is in the crossroad while the predicted position is still in the corridor. The
(true) object then moves upward which is not possible in the predicted position and so the predicted object
keeps moving right. This is less likely to happen in feedforward encoding because its position prediction is
more accurate. (b) The objects move down after staying at the same location for the first five steps. The
feedforward encoding model fails to predict this movement because it only gets the last four frames as input,
while the recurrent model predicts this downwards movement more correctly.
[Figure 5 plots: average game score versus number of prediction steps before re-initialization (0–100) for Emulator, Rand, MLP, naFf, Feedforward, and Recurrent on (a) Seaquest, (b) Space Invaders, (c) Freeway, (d) QBert, (e) Ms Pacman]
Figure 5: Game play performance using the predictive model as an emulator. "Emulator" and "Rand" correspond
to the performance of DQN with true frames and random play respectively. The x-axis is the number of steps
of prediction before re-initialization. The y-axis is the average game score measured from 30 plays.
Qualitative Analysis of Relative Strengths and Weaknesses of Feedforward and Recurrent
Encoding. We hypothesize that feedforward encoding can model more precise spatial transformations because its convolutional filters can learn temporal correlations directly from pixels in the
concatenated frames. In contrast, convolutional filters in recurrent encoding can learn only spatial
features from the one-frame input, and the temporal context has to be captured by the recurrent layer
on top of the high-level CNN features without localized information. On the other hand, recurrent
encoding is potentially better for modeling arbitrarily long-term dependencies, whereas feedforward
encoding is not suitable for long-term dependencies because it requires more memory and parameters as more frames are concatenated into the input.
As evidence, in Figure 4a we show a case where feedforward encoding is better at predicting the
precise movement of the controlled object, while recurrent encoding makes a 1-2 pixel translation
error. This small error leads to entirely different predicted frames after a few steps. Since the
feedforward and recurrent architectures are identical except for the encoding part, we conjecture
that this result is due to the failure of precise spatio-temporal encoding in recurrent encoding. On
the other hand, recurrent encoding is better at predicting when the enemies move in Space Invaders
(Figure 4b). This is due to the fact that the enemies move after 9 steps, which is hard for feedforward
encoding to predict because it takes only the last four frames as input. We observed similar results
showing that feedforward encoding cannot handle long-term dependencies in other games.
4.2 Evaluating the Usefulness of Predictions for Control
Replacing Real Frames with Predicted Frames as Input to DQN. To evaluate how useful the
predictions are for playing the games, we implement an evaluation method that uses the predictive
model to replace the game emulator. More specifically, a DQN controller that takes the last four
frames is first pre-trained using real frames and then used to play the games based on an ε-greedy
policy with ε = 0.05, where the input frames are generated by our predictive model instead of the game
emulator. To evaluate how the depth of predictions influences the quality of control, we re-initialize
the predictions using the true last frames after every n steps of prediction for 1 ≤ n ≤ 100. Note
that the DQN controller never takes a true frame, just the outputs of our predictive models.
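In outline, the protocol looks like the following hedged sketch; `env`, `dqn.act`, and `model.predict` are hypothetical interfaces standing in for the actual implementation.

```python
def play_with_predicted_frames(env, dqn, model, n, epsilon=0.05):
    """Pre-trained DQN acts on model-predicted frames; predictions are
    re-initialized from the true last frames every n steps (1 <= n <= 100)."""
    true_history = env.reset_and_get_last_frames(4)   # hypothetical helper
    pred_history = list(true_history)
    score, done, step = 0, False, 0
    while not done:
        action = dqn.act(pred_history[-4:], epsilon)  # controller never sees true frames
        frame, reward, done = env.step(action)        # emulator still advances the game
        true_history.append(frame)
        score += reward
        step += 1
        if step % n == 0:
            pred_history = list(true_history[-4:])    # depth-n re-initialization
        else:
            pred_history.append(model.predict(pred_history[-4:], action))
    return score
```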
The results are shown in Figure 5. Unsurprisingly, replacing real frames with predicted frames
reduces the score. However, in all the games using the model to repeatedly predict only a few time
Table 1: Average game score of DQN over 100 plays with standard error. The first row and the second row
show the performance of our DQN replication with different exploration strategies.

Model | Seaquest | S. Invaders | Freeway | QBert | Ms Pacman
DQN - Random exploration | 13119 (538) | 698 (20) | 30.9 (0.2) | 3876 (106) | 2281 (53)
DQN - Informed exploration | 13265 (577) | 681 (23) | 32.2 (0.2) | 8238 (498) | 2522 (57)

[Figure 6 heat maps: (a) Random exploration, (b) Informed exploration] Figure 6: Comparison between two exploration methods on Ms Pacman. Each heat map shows the trajectories of the controlled object measured over 2500 steps for the corresponding method.

[Figure 7 matrix: cosine similarities between action factors, axes labeled N, F, and movement arrows with/without fire] Figure 7: Cosine similarity between every pair of action factors (see text for details).
steps yields a score very close to that of using real frames. Our two architectures produce much
better scores for deep predictions than the two baselines, a larger gap than the small differences in
squared error would suggest. The likely cause of this is that our models are better able to
predict the movement of the controlled object relative to the baselines, even though such an ability
may not always lead to better squared error. In three out of the five games the score remains much
better than the score of random play even when using 100 steps of prediction.
Improving DQN via Informed Exploration. To learn control in an RL domain, exploration of
actions and states is necessary because without it the agent can get stuck in a bad sub-optimal policy.
In DQN, the CNN-based agent was trained using an ε-greedy policy in which the agent chooses
either a greedy action or a random action by flipping a coin with probability ε. Such random
exploration is a basic strategy that produces sufficient exploration, but can be slower than more
informed exploration strategies. Thus, we propose an informed exploration strategy that follows
the ε-greedy policy, but chooses exploratory actions that lead to a frame that has been visited least
often (in the last d time steps), rather than random actions. Implementing this strategy requires a
predictive model because the next frame for each possible action has to be considered.
The method works as follows. The most recent d frames are stored in a trajectory memory, denoted
D = {x^{(i)}}_{i=1}^{d}. The predictive model is used to get the next frame x^{(a)} for every action a. We
estimate the visit-frequency for every predicted frame by summing the similarity between the predicted frame and the d most recent frames stored in the trajectory memory using a Gaussian kernel
as follows:

n_D(x^{(a)}) = Σ_{i=1}^{d} k(x^{(a)}, x^{(i)});  k(x, y) = exp(− Σ_j min(max((x_j − y_j)² − δ, 0), 1) / σ)   (7)
where δ is a threshold, and σ is a kernel bandwidth. The trajectory memory size is 200 for QBert
and 20 for the other games, δ = 0 for Freeway and 50 for the others, and σ = 100 for all games. For
computational efficiency, we trained a new feedforward encoding network on 84 × 84 gray-scaled
images as they are used as input for DQN. The details of the network architecture are provided in
the supplementary material. Table 1 summarizes the results. The informed exploration improves
DQN's performance using our predictive model in three of five games, with the most significant
improvement in QBert. Figure 6 shows how the informed exploration strategy improves the initial
experience of DQN.
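A hedged NumPy sketch of the visit-frequency estimate of Equation (7) and the resulting action choice on exploration steps; `model.predict` is a hypothetical one-step predictor, and the default parameter values mirror the δ and σ settings above.

```python
import numpy as np

def visit_frequency(x_pred, trajectory, delta, sigma):
    """n_D(x^(a)): summed Gaussian-kernel similarity of a predicted frame to
    the last d frames in the trajectory memory (Equation 7)."""
    total = 0.0
    for x_past in trajectory:                      # the d most recent frames
        sq = (x_pred - x_past) ** 2
        clipped = np.minimum(np.maximum(sq - delta, 0.0), 1.0)
        total += np.exp(-clipped.sum() / sigma)
    return total

def informed_explore_action(model, last_frames, actions, trajectory,
                            delta=0.0, sigma=100.0):
    """On an exploration step, pick the action whose predicted next frame has
    been visited least often, instead of a uniformly random action."""
    preds = [model.predict(last_frames, a) for a in actions]
    counts = [visit_frequency(p, trajectory, delta, sigma) for p in preds]
    return actions[int(np.argmin(counts))]
```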
4.3 Analysis of Learned Representations
Similarity among Action Representations. In the factored multiplicative interactions, every action is linearly transformed to f factors (W_a a in Equation 4). In Figure 7 we present the cosine
similarity between every pair of action-factors after training in Seaquest. "N" and "F" correspond
to "no-operation" and "fire". Arrows correspond to movements with (black) or without (white) "fire".
There are positive correlations between actions that have the same movement directions (e.g., "up"
and "up+fire"), and negative correlations between actions that have opposing directions. These results are reasonable and discovered automatically in learning good predictions.
Distinguishing Controlled and Uncontrolled Objects is itself a hard and interesting problem.
Bellemare et al. [2] proposed a framework to learn contingent regions of an image affected by agent
action, suggesting that contingency awareness is useful for model-free agents. We show that our
architectures implicitly learn contingent regions as they learn to predict the entire image.
In our architectures, a factor f_i = (W^a_{i,:})^T a with higher variance measured over all possible actions, Var(f_i) = E_a[(f_i − E_a[f_i])²], is more likely to transform an image differently depending on actions, and so we assume such factors are responsible for transforming the parts of the image related to actions. We therefore collected the high-variance (referred to as "highvar") factors from the model trained on Seaquest (around 40% of factors), and collected the remaining factors into a low-variance ("lowvar") subset. Given an image and an action, we did two controlled forward propagations: giving only highvar factors (by setting the other factors to zeros) and vice versa. The results are visualized as "Action" and "Non-Action" in Figure 8. Interestingly, given only highvar factors (Action), the model predicts sharply the movement of the object controlled by actions, while the other parts are mean pixel values. In contrast, given only lowvar factors (Non-Action), the model predicts the movement of the other objects and the background (e.g., oxygen), and the controlled object stays at its previous location. This result implies that our model learns to distinguish between controlled objects and uncontrolled objects and transform them using disentangled representations (see [25, 24, 37] for related work on disentangling factors of variation).
[Figure 8 images: Prev. frame, Next frame, and Prediction panels for the Action and Non-Action conditions] Figure 8: Distinguishing controlled and uncontrolled objects. The Action image shows a prediction given only learned action-factors with high variance; the Non-Action image is given only low-variance factors.
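A hedged sketch of this analysis: with one-hot actions, factor i takes the values W^a[i, :] across actions, so Var_a(f_i) reduces to a row variance. `encode_factors` and `decode_from_factors` are hypothetical hooks into the model, not part of any stated API.

```python
import numpy as np

def split_factors_by_variance(W_a, keep_fraction=0.4):
    """W_a: (num_factors, num_actions). Returns boolean masks for the
    high-variance ('highvar') factors and the remaining ('lowvar') ones."""
    var = W_a.var(axis=1)                     # Var_a(f_i) per factor, for one-hot a
    cutoff = np.quantile(var, 1.0 - keep_fraction)
    high = var >= cutoff                      # roughly 40% of factors in the paper
    return high, ~high

def controlled_forward(model, frame, action, mask):
    """Forward pass with the factors outside `mask` zeroed, to visualize
    which parts of the image those factors transform (Figure 8)."""
    factors = model.encode_factors(frame, action)       # hypothetical hook
    return model.decode_from_factors(factors * mask)    # elementwise zeroing
```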
5 Conclusion
This paper introduced two novel deep architectures that predict future frames that are dependent on actions and showed qualitatively and quantitatively that they are able to predict visually-realistic and useful-for-control frames over 100-step futures on several Atari game domains. To
our knowledge, this is the first paper to show good deep predictions in Atari games. Since our architectures were domain independent we expect that they will generalize to many vision-based RL
problems. In future work we will learn models that predict future reward in addition to predicting
future frames and evaluate the performance of our architectures in model-based RL.
Acknowledgments. This work was supported by NSF grant IIS-1526059, Bosch Research, and
ONR grant N00014-13-1-0762. Any opinions, findings, conclusions, or recommendations expressed
here are those of the authors and do not necessarily reflect the views of the sponsors.
References
[1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
[2] M. G. Bellemare, J. Veness, and M. Bowling. Investigating contingency awareness using Atari 2600
games. In AAAI, 2012.
[3] M. G. Bellemare, J. Veness, and M. Bowling. Bayesian learning of recursively factored environments. In
ICML, 2013.
[4] M. G. Bellemare, J. Veness, and E. Talvitie. Skip context tree switching. In ICML, 2014.
[5] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127,
2009.
[6] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[7] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification.
In CVPR, 2012.
[8] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In CVPR, 2015.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014.
[10] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using
offline Monte-Carlo tree search planning. In NIPS, 2014.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780,
1997.
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. In ACM Multimedia, 2014.
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
[15] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In ECML, 2006.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[17] I. Lenz, R. Knepper, and A. Saxena. DeepMPC: Learning deep latent features for model predictive
control. In RSS, 2015.
[18] R. Memisevic. Learning to relate images. IEEE TPAMI, 35(8):1829–1846, 2013.
[19] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent grammar cells. In NIPS, 2014.
[20] R. Mittelman, B. Kuipers, S. Savarese, and H. Lee. Structured recurrent temporal restricted Boltzmann
machines. In ICML, 2014.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing
Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[22] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller,
A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature,
518(7540):529–533, 2015.
[23] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[24] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold
interaction. In ICML, 2014.
[25] S. Rifai, Y. Bengio, A. Courville, P. Vincent, and M. Mirza. Disentangling factors of variation for facial
expression recognition. In ECCV, 2012.
[26] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
[27] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2:125–134, 1991.
[28] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using
LSTMs. In ICML, 2015.
[29] I. Sutskever, G. E. Hinton, and G. W. Taylor. The recurrent temporal restricted Boltzmann machine. In
NIPS, 2009.
[30] I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks. In ICML,
2011.
[31] I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[33] G. W. Taylor and G. E. Hinton. Factored conditional restricted Boltzmann machines for modeling motion
style. In ICML, 2009.
[34] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent
magnitude. Coursera, 2012.
[35] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D
convolutional networks. In ICCV, 2015.
[36] C. J. Watkins and P. Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
[37] J. Yang, S. Reed, M.-H. Yang, and H. Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
5,368 | 586 | ANN Based Classification for Heart Defibrillators
Sydney University Electrical Engineering
NSW 2006 Australia
Abstract
Current Intra-Cardia defibrillators make use of simple classification algorithms to determine patient conditions and subsequently to enable proper
therapy. The simplicity is primarily due to the constraints on power dissipation and area available for implementation. Sub-threshold implementation
of artificial neural networks offers potential classifiers with higher performance than commercially available defibrillators. In this paper we explore
several classifier architectures and discuss micro-electronic implementation
issues.
1.0 INTRODUCTION
Intra-Cardia Defibrillators (ICDs) represent an important therapy for people with heart disease. These devices are implanted and perform three types of actions:
1. monitor the heart
2. pace the heart
3. apply a high energy/high voltage electric shock
They sense the electrical activity of the heart through leads attached to the heart tissue. Two
types of sensing are commonly used:
Single Chamber: Lead attached to the Right Ventricular Apex (RVA)
Dual Chamber : An additional lead is attached to the High Right Atrium (HRA).
The actions performed by defibrillators are based on the outcome of a classification procedure
based on the heart rhythms of different heart diseases (abnormal rhythms or "arrhythmias").
637
638
Jabri, Pickard, Leong, Chi, Flower, and Xie
There are tens of different arrhythmias of interest to cardiologists. They are clustered into
three groups according to the three therapies (actions) that ICDs perform.
Figure 1 shows an example of a Normal Sinus Rhythm. Note the regularity in the beats. Of
interest to us is what is called the QRS complex which represents the electrical activity in the
ventricle during a beat. The R point represents the peak, and the distance between two heart
beats is usually referred to as the RR interval.
FIGURE 1. A Normal Sinus Rhythm (NSR) waveform
Figure 2 shows an example of a Ventricular Tachycardia (more precisely a Ventricular Tachycardia Slow or VTS). Note that the beats are faster in comparison with an NSR.
Ventricular Fibrillation (VF) is shown in Figure 3. Note the chaotic behavior and the absence
of well defined heart beats.
FIGURE 2. A Ventricular Tachycardia (VT) waveform
The three waveforms discussed above are examples of Intra-Cardia Electro-Grams (ICEGs).
NSR, VT and VF are representative of the types of action a defibrillator has to take. For an
NSR, an action of "continue monitoring" is used. For a VT an action of "pacing" is performed, whereas for VF a high energy/high voltage shock is issued. Because they are near-field signals, ICEGs are different from external Electro-Cardio-Grams (ECGs). As a result, classification algorithms developed for ECG patterns may not necessarily be valuable for ICEG recognition.
The difficulties in ICEG classification lie in that many arrhythmias share similar features and
fuzzy situations often need to be dealt with. For instance, many ICDs make use of the heart
rate as a fundamental feature in the arrhythmia classification process. But several arrhythmias
that require different type of therapeutic actions have similar heart rates. For example. a Sinus
Tachycardia (ST) is an arrhythmia characterized by a heart rate that is higher than that of an
NSR and in the vicinity of a VT. Many classifiers would classify an ST as VT, leading to a therapy of pacing, whereas an ST is supposed to be grouped under an NSR type of therapy.
Another example is a fast VT which may be associated with heart rates that are indicative of
VF. In this case the defibrillator would apply a VF type of therapy when only a VT type of therapy is required (pacing).
FIGURE 3. A Ventricular Fibrillation (VF) waveform.
The overlap of the classes when only heart rate is used as the main classification feature highlights the necessity of considering further features with higher discrimination capabilities. Features that are commonly used in addition to the heart rate are:
1. average heart rate (over a period of time)
2. arrhythmia onset
3. arrhythmia probability density functions
Because of the limited power budget and area, arrhythmia classifiers in ICDs are kept
extremely simple with respect to what could be achieved with more relaxed implementation
constraints. As a result, false positives (pacing or defibrillation when none is required) may be
high and error rates may reach 13%.
Artificial neural network techniques offer the potential of higher classification performance. In
order to maintain as low a power consumption as possible, VLSI micro-power implementation
techniques need to be considered.
In this paper, we discuss several classifier architectures and sub-threshold implementation
techniques. Both single and dual chamber based classifications are considered.
2.0 DATA
Data used in our experiments were collected from Electro-Physiological Studies (EPS) at hospitals in Australia and the UK. Altogether, and depending on whether single or dual chamber
is considered, data from over 70 patients is available. Cardiologists from our commercial collaborator have labelled this data. All tests were performed on a testing set that was not used in
classifier training. Arrhythmias recorded during EPS are produced by stimulation. As a result,
no natural transitions are captured.
3.0 SINGLE CHAMBER CLASSIFICATION
We have evaluated several approaches for single chamber classification. It is important to note
here that in the case of single chamber, not all arrhythmias could be correctly classified (not
even by human experts). This is because data from the RVA lead represents mainly the ventricular electrical activity, and many atrial arrhythmias require atrial information for proper diagnosis.
3.1 MULTI-LAYER PERCEPTRONS
Table 1 shows the performance of multi-layer perceptrons (MLPs) trained using vanilla backpropagation, conjugate gradient and a specialized limited precision training algorithm that we
call the Combined Search Algorithm (Xie and Jabri, 91). The inputs to the MLP are 21 features
extracted from the time domain. There are three outputs representing three main groupings:
NSR, VT and VF. We do not have the space here to elaborate on the choice of the input features. Interested readers are referred to (Chi and Jabri, 91; Leong and Jabri, 91).
TABLE 1. Performance of Multi-layer Perceptron Based Classifiers

Network | Training Algorithm | Precision | Average Performance
21-5-3 | backprop. | unlimited | 96%
21-5-3 | conj.-grad | unlimited | 95.5%
21-5-3 | CSA | 6 bits | 94.8%
The summary here indicates that a high performance single chamber based classification can
be achieved for ventricular arrhythmias. It also indicates that limited precision training does
not significantly degrade this performance. In the case of limited precision MLP, 6 bits plus a
sign bit are used to represent network activities and weights.
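As an illustration of what a 6-bit-plus-sign representation involves, here is a minimal sketch; the actual quantization used by the Combined Search Algorithm is only cited here, so the rounding scheme below is an assumption.

```python
def quantize(w, bits=6, w_max=1.0):
    """Map a real weight to 6 magnitude bits plus a sign bit (assumed scheme)."""
    levels = 2 ** bits - 1                   # 63 representable magnitude levels
    mag = min(abs(w), w_max) / w_max         # clip and normalize to [0, 1]
    q = round(mag * levels) / levels         # nearest representable magnitude
    return (q if w >= 0 else -q) * w_max
```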
3.2 INDUCTION OF DECISION TREES
The same training data used to train the MLP was used to create a decision tree using the C4.5
program developed by Ross Quinlan (a derivative of the ID3 algorithm). The resultant tree
was then tested, and the performance was 95% correct classification. In order to achieve this
high performance, the whole training data had to be used in the induction process (windowing
disabled). This has a negative side effect in that the trees generated tend to be large.
The implementation of decision trees in VLSI is not a difficult procedure. The problem however is that, because of the binary decision process, the branching thresholds are difficult to implement in digital (for large trees) and even more difficult to implement in micro-power analog. The latter implementation technique would be possible if the induction process could take advantage of the hardware characteristics, in a similar way that "in-loop" training of sub-threshold VLSI MLPs achieves the same objective.
4.0 DUAL CHAMBER BASED CLASSIFIERS
Two architectures for dual chamber based classifiers have been investigated: Multi-Module Neural Networks and a hybrid Decision Tree/MLP. The difference between the classifier architectures is a function of which arrhythmia group is being targeted for classification.
4.1 MULTI-MODULE NEURAL NETWORK
The multi-module neural network architecture aims at improving the performance with respect
to the classification of Supra-Ventricular Tachycardia (SVT). The architecture is shown in
Figure 4 with the classifier being the right most block.
[Figure 4 diagram]
FIGURE 4. Multi-Module Neural Network Classifier
The idea behind this architecture is to divide the classification problem into that of discriminating between NSR and SVT on one hand and VF and VT on the other. The details of the
operation and the training of this structured classifier can be found in (Chi and Jabri, 91).
In order to evaluate the performance of the MMNN classifier, a single large MLP was also
developed. The single large MLP makes use of the same input features as the MMNN and targets the same classes. The performance comparison is shown in Table 2 which clearly shows
that a higher performance is achieved using the MMNN.
4.2 HYBRID DECISION TREE/MLP
The hybrid decision tree/multi-layer perceptron "mimics" the classification process as performed by cardiologists. The architecture of the classifier is shown in Figure 5. The decision
tree is used to produce a judgement on:
1. The rate aspects of the ventricular and atrial channels,
2. The relative timing between the atrial and ventricular beats.
In parallel with the decision tree, a morphology based classifier is used to perform template
matching. The morphology classifier is a simple MLP with inputs that are signal samples (sampled at half the speed of the normal sampling rate of the signal).
The output of the timing and morphology classifiers are fed into an arbitrator which produces
the class of the arrhythmia being observed. An "X out of Y" classifier is used to smooth out the
TABLE 2. Performance of Multi-Module Neural Network Classifier and comparison with that of a single large MLP.

Rhythms | MMNN Best % | MMNN Worst % | Single MLP %
NSR | 95.3 | 93.8 | 93.4
ST | 98.6 | 98.6 | 97.5
SVT | 96.4 | 93.3 | 95.4
AT | 95.9 | 93.2 | 71.2
AF | 86.7 | 85.4 | 77.5
VT | 99.4 | 99.4 | 100
VTF | 100 | 100 | 80.3
VF | 97 | 97 | 99.4
Average | 96.2 | 95.1 | 89.3
SD | 4.18 | 4.8 | 11.31
classification output by the arbitrator and to produce an "averaged" final output class. Further
details on the implementation and the operation of the hybrid classifier can be found in (Leong and Jabri, 91).
This classifier achieves high classification performance over several types of arrhythmias. Table 3 shows the performance on a multi-patient database and indicates a performance of over 99% correct classification.
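The "X out of Y" stage admits a simple reading, sketched below; the actual X and Y used in the implementation are not stated in this paper, so they are left as parameters, and the exact rule is one plausible interpretation rather than the authors' specification.

```python
from collections import Counter, deque

class XOutOfY:
    """Declare a class once it appears at least X times among the last Y
    arbitrator outputs; otherwise keep the previous smoothed class."""
    def __init__(self, x, y, initial="NSR"):
        self.x = x
        self.window = deque(maxlen=y)
        self.current = initial

    def update(self, arbitrator_class):
        self.window.append(arbitrator_class)
        label, count = Counter(self.window).most_common(1)[0]
        if count >= self.x:
            self.current = label
        return self.current
```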
[Figure 5 diagram: the RVA channel feeds a QRS Detect stage and a Neural Network Classifier (10,5,1) @ 125 Hz; the atrial (P) channel feeds a Detect stage and a Timing Classifier; both feed an Arbitrator followed by an X-out-of-Y stage producing the FINAL CLASS]
FIGURE 5. Architecture of the hybrid decision tree/neural network classifier.
5.0 MICROELECTRONIC IMPLEMENTATIONS
In all our classifier architecture investigations, micro-electronic implementation considerations were a constant constraint. Many other architectures that can achieve competitive performance were not discussed in this paper because of their unsuitability for low power/small area implementation. The main challenge in a low power/small area VLSI implementation of classifiers similar to those discussed above is how to implement in very low power an MLP architecture that can reliably learn and achieve a performance comparable to that of the functional simulations. Several design strategies can achieve the low power and small area objectives.
TABLE 3. Performance of the hybrid decision tree/MLP classifier for dual chamber classification.

SubClass | Class | NSR | SVT | VT | VF
NSR | NSR | 5247 | 4 | 2 | 0
ST | NSR | 1535 | 24 | 2 | 1
SVT | SVT | 0 | 1022 | 0 | 0
AT | SVT | 0 | 52 | 0 | 0
AP | SVT | 0 | 165 | 0 | 0
VT | VT | 0 | 0 | 322 | 0
VT 1:1 | VT | 2 | 0 | 555 | 0
VF | VF | 0 | 0 | 2 | 196
VTF | VF | 0 | 2 | 0 | 116
Both digital and analog implementation techniques are being investigated and we report here
on our analog implementation efforts only. Our analog implementations make use of the subthreshold operation mode of MOS transistors in order to maintain a very low power dissipation.
5.1 MASTER PERTURBATOR CHIP
The architecture of this chip is shown in Figure 6(a). Weights are implemented using a differential capacitor scheme refreshed from digital RAM. Synapses are implemented as four-quadrant Gilbert multipliers (Pickard et al, 92). The chip has been fabricated and is currently being tested. The building blocks have so far been successfully tested. Two networks are implemented: a 7-5-3 (total of 50 synapses) and a small single layer network. The single layer network has been successfully trained to perform simple logic operations using the Weight Perturbation algorithm (Jabri and Flower, 91).
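For context, here is a sketch of the Weight Perturbation rule (Jabri and Flower, 91) that trains such a chip: perturb a weight, measure the change in the network's scalar training error, and step against the finite-difference gradient estimate. Whether updates are applied per weight or in parallel is simplified here.

```python
def weight_perturbation_step(weights, error_fn, pert=1e-3, lr=0.1):
    """Gradient-free update suited to analog hardware: only the scalar
    training error of the physical network needs to be measured."""
    base = error_fn(weights)
    new = list(weights)
    for i in range(len(weights)):
        trial = list(weights)
        trial[i] += pert
        grad_est = (error_fn(trial) - base) / pert   # finite-difference gradient
        new[i] = weights[i] - lr * grad_est
    return new
```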
5.2 THE BOURKE CHIP
The BOURKE chip (Leong and Jabri, 92) makes use of Multiplying Digital to Analog Converters to implement the synapses. Weights are stored in digital registers. All neurons were
implemented as external resistors for the sake of evaluation. Figure 6(b) shows the schematics
of a synapse. The BOURKE chip has a small 3-3-1 network and has been successfully tested
(it was successfully trained to perform an XOR). A larger version of this chip with a 10-6-4
network is being fabricated.
6.0 Conclusions
We have presented in this paper several architectures for single and dual chamber arrhythmia
[Figure 6 schematics]
FIGURE 6. (a) Architecture of the Master Perturbator chip. (b) Schematics of the BOURKE chip synapse implementation.
classifiers. In both cases a good classification performance was achieved. In particular, for the
case of dual chamber classification, the complexity of the problem calls on more structured
classifier architectures. Two microelectronic low power implementations were briefly presented. Progress so far indicates that micro-power VLSI ANNs offer a technology that will
enable the use of powerful classification strategies in implantable defibrillators.
Acknowledgment
Work presented in this paper was supported by the Australian Department of Industry Technology & Commerce, Telectronics Pacing Systems, and the Australian Research Council.
References
Z. Chi and M. Jabri (1991), "Identification of Supraventricular and Ventricular Arrhythmias
Using a Combination of Three Neural Networks". Proceedings of the Computers in Cardiology Conference, Venice, Italy, September 1991.
M. Jabri and B. Flower (1991), "Weight Perturbations: An optimal architecture and learning
technique for analog VLSI feed-forward and recurrent multi-layer networks" , Neural Computation, Vol. 3, No.4, MIT Press.
P.H.W. Leong and M. Jabri (1991), "Arrhythmia Classification Using Two Intracardiac Leads", Proceedings of the Computers in Cardiology Conference, Venice, Italy.
P.H.W. Leong and M. Jabri (1992), "An Analogue Low Power VLSI Neural Network", Proceedings of the Third Australian Conference on Neural Networks, pp. 147-150, Canberra, Australia.
S. Pickard, M. Jabri, P.H.W. Leong, B.O. Flower and P. Henderson (1992), "Low Power Analogue VLSI Implementation of A Feed-Forward Neural Network", Proceedings of the Third Australian Conference on Neural Networks, pp. 88-91, Canberra, Australia.
Y. Xie and M. Jabri (1991), "Training Algorithms for Limited Precision Feed-forward Neural
Networks", submitted to IEEE Transactions on Neural Networks and Neural Computation.
5,369 | 5,860 | On-the-Job Learning with Bayesian Decision Theory
Department of Computer Science
Stanford University
[email protected]
Arun Chaganty
Department of Computer Science
Stanford University
[email protected]
Percy Liang
Department of Computer Science
Stanford University
[email protected]
Christopher D. Manning
Department of Computer Science
Stanford University
[email protected]
Abstract
Our goal is to deploy a high-accuracy system starting with zero training examples.
We consider an on-the-job setting, where as inputs arrive, we use real-time crowdsourcing to resolve uncertainty where needed and output our prediction when confident. As the model improves over time, the reliance on crowdsourcing queries
decreases. We cast our setting as a stochastic game based on Bayesian decision
theory, which allows us to balance latency, cost, and accuracy objectives in a principled way. Computing the optimal policy is intractable, so we develop an approximation based on Monte Carlo Tree Search. We tested our approach on three
datasets?named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of magnitude reduction
in cost compared to full human annotation, while boosting performance relative to
the expert provided labels. We also achieve an 8% F1 improvement over having a
single human label the whole set, and a 28% F1 improvement over online learning.
"Poor is the pupil who does not surpass his master."
– Leonardo da Vinci
1 Introduction
There are two roads to an accurate AI system today: (i) gather a huge amount of labeled training
data [1] and do supervised learning [2]; or (ii) use crowdsourcing to directly perform the task [3, 4].
However, both solutions require non-trivial amounts of time and money. In many situations, one
wishes to build a new system (e.g., to do Twitter information extraction [5] to aid in disaster relief
efforts or monitor public opinion) but one simply lacks the resources to follow either the pure ML
or pure crowdsourcing road.
In this paper, we propose a framework called on-the-job learning (formalizing and extending ideas
first implemented in [6]), in which we produce high quality results from the start without requiring
a trained model. When a new input arrives, the system can choose to asynchronously query the
crowd on parts of the input it is uncertain about (e.g. query about the label of a single token in a
sentence). After collecting enough evidence the system makes a prediction. The goal is to maintain
high accuracy by initially using the crowd as a crutch, but gradually becoming more self-sufficient
as the model improves. Online learning [7] and online active learning [8, 9, 10] are different in that
they do not actively seek new information prior to making a prediction, and cannot maintain high
accuracy independent of the number of data instances seen so far. Active classification [11], like us,
[Figure 1 diagram: (1) get beliefs under the learned model for the tweet "Soup on George str. #Katrina", e.g., marginals for `George` of 44% PERSON, 32% LOCATION, 12% RESOURCE, 2% NONE; (2) decide to ask a crowd worker in real time via http://www.crowd-workers.com ("What is `George` here?"); (3) incorporate feedback and return a prediction, e.g., Soup labeled RESOURCE and George str. labeled LOCATION]
Figure 1: Named entity recognition on tweets in on-the-job learning.
strategically seeks information (by querying a subset of labels) prior to prediction, but it is based on
a static policy, whereas we improve the model during test time based on observed data.
To determine which queries to make, we model on-the-job learning as a stochastic game based on
a CRF prediction model. We use Bayesian decision theory to tradeoff latency, cost, and accuracy
in a principled manner. Our framework naturally gives rise to intuitive strategies: To achieve high
accuracy, we should ask for redundant labels to offset the noisy responses. To achieve low latency,
we should issue queries in parallel, whereas if latency is unimportant, we should issue queries sequentially in order to be more adaptive. Computing the optimal policy is intractable, so we develop
an approximation based on Monte Carlo tree search [12] and progressive widening to reason about
continuous time [13].
We implemented and evaluated our system on three different tasks: named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of
magnitude reduction in cost compared to full human annotation, while boosting performance relative to the expert provided labels. We also achieve an 8% F1 improvement over having a single human
label the whole set, and a 28% F1 improvement over online learning. An open-source implementation of our system, dubbed LENSE for "Learning from Expensive Noisy Slow Experts", is available
at http://www.github.com/keenon/lense.
2 Problem formulation
Consider a structured prediction problem from input x = (x1, ..., xn) to output y = (y1, ..., yn).
For example, for named-entity recognition (NER) on tweets, x is a sequence of words in the tweet
(e.g., "on George str.") and y is the corresponding sequence of labels (e.g., NONE LOCATION
LOCATION). The full set of labels is PERSON, LOCATION, RESOURCE, and NONE.
In the on-the-job learning setting, inputs arrive in a stream. On each input x, we make zero or more
queries q1 , q2 , . . . on the crowd to obtain labels (potentially more than once) for any positions in
x. The responses r1 , r2 , . . . come back asynchronously, which are incorporated into our current
prediction model p_θ. Figure 2 (left) shows one possible outcome: We query positions q1 = 2
("George") and q2 = 3 ("str."). The first query returns r1 = LOCATION, upon which we make
another query on the same position q3 = 2 ("George"), and so on. When we have sufficient
confidence about the entire output, we return the most likely prediction ŷ under the model. Each
query qi is issued at time si and the response comes back at time ti. Assume that each query costs
m cents. Our goal is to choose queries to maximize accuracy and minimize latency and cost.
We make several remarks about this setting: First, we must make a prediction ŷ on each input x in
the stream, unlike in active learning, where we are only interested in the pool or stream of examples
for the purposes of building a good model. Second, the responses are used to update the prediction
[Figure 2 graphics: (a) per-token marginals over the labels none/res/loc/per for "Soup on George str. #Katrina" evolving along two timelines as queries (e.g., q1 = 1, q2 = 3) are issued and responses (e.g., r1 = res, r2 = loc) arrive; (b) a partial game tree whose nodes alternate between system actions (issue a query, wait ∅_W, or return ∅_R) and stochastic crowd responses]
(a) Incorporating information from responses. The bar graphs
represent the marginals over the labels for each token (indicated
by the first character) at different points in time. The two timelines show how the system updates its confidence over labels
based on the crowd?s responses. The system continues to issue
queries until it has sufficient confidence on its labels. See the
paragraph on behavior in Section 3 for more information.
(b) Game tree. An example of a partial
game tree constructed by the system when
deciding which action to take in the state
? = (1, (3), (0), (?), (?)), i.e. the query
q1 = 3 has already been issued and the
system must decide whether to issue another query or wait for a response to q1 .
Figure 2: Example behavior while running structure prediction on the tweet ?Soup on George str.?
We omit the RESOURCE from the game tree for visual clarity.
model, like in online learning. This allows the number of queries needed (and thus cost and latency)
to decrease over time without compromising accuracy.
3
Model
We model on-the-job learning as a stochastic game with two players: the system and the crowd.
The game starts with the system receiving input x and ends when the system turns in a set of labels
y = (y1 , . . . , yn ). During the system?s turn, the system may choose a query action q ? {1, . . . , n}
to ask the crowd to label yq . The system may also choose the wait action (q = ?W ) to wait for the
crowd to respond to a pending query or the return action (q = ?R ) to terminate the game and return
its prediction given responses received thus far. The system can make as many queries in a row (i.e.
simultaneously) as it wants, before deciding to wait or turn in.1 When the wait action is chosen,
the turn switches to the crowd, which provides a response r to one pending query, and advances
the game clock by the time taken for the crowd to respond. The turn then immediately reverts back
to the system. When the game ends (the system chooses the return action), the system evaluates a
utility that depends on the accuracy of its prediction, the number of queries issued and the total time
taken. The system should choose query and wait actions to maximize the utility of the prediction
eventually returned.
In the rest of this section, we describe the details of the game tree, our choice of utility and specify
models for crowd responses, followed by a brief exploration of behavior admitted by our model.
Game tree. Let us now formalize the game tree in terms of its states, actions, transitions and
rewards; see Figure 2b for an example. The game state ? = (tnow , q, s, r, t) consists of the current
time tnow , the actions q = (q1 , . . . , qk?1 ) that have been issued at times s = (s1 , . . . , sk?1 ) and the
responses r = (r1 , . . . , rk?1 ) that have been received at times t = (t1 , . . . , tk?1 ). Let rj = ? and
tj = ? iff qj is not a query action or its responses have not been received by time tnow .
During the system?s turn, when the system chooses an action qk , the state is updated to ? 0 =
(tnow , q0 , s0 , r0 , t0 ), where q0 = (q1 , . . . , qk ), s0 = (s1 , . . . , sk?1 , tnow ), r0 = (r1 , . . . , rk?1 , ?) and
t0 = (t1 , . . . , tk?1 , ?). If qk ? {1, . . . n}, then the system chooses another action from the new state
? 0 . If qk = ?W , the crowd makes a stochastic move from ? 0 . Finally, if qk = ?R , the game ends,
1
This rules out the possibility of launching a query midway through waiting for the next response. However,
we feel like this is a reasonable limitation that significantly simplifies the search space.
3
and the system returns its best estimate of the labels using the responses it has received and obtains
a utility U (?) (defined later).
Let F = {1 ? j ? k ? 1 | qj 6= ?W ? rj = ?} be the set of in-flight requests. During the crowd?s
turn (i.e. after the system chooses ?W ), the next response from the crowd, j ? ? F , is chosen: j ? =
arg minj?F t0j where t0j is sampled from the response-time model, t0j ? pT (t0j | sj , t0j > tnow ), for
each j ? F . Finally, a response is sampled using a response model, rj0 ? ? p(rj0 ? | x, r), and the state
is updated to ? 0 = (tj ? , q, s, r0 , t0 ), where r0 = (r1 , . . . , rj0 ? , . . . , rk ) and t0 = (t1 , . . . , t0j ? , . . . , tk ).
Utility. Under Bayesian decision theory, the optimal choice for an action in state ? =
(tnow , q, r, s, t) is the one that attains the maximum expected utility (i.e. value) for the game starting
at ?. Recall that the system can return at any time, at which point it receives a utility that trades
off two things: The first is the accuracy of the MAP estimate according to the model?s best guess
of y incorporating all responses received by time ? . The second is the cost of making queries: a
(monetary) cost wM per query made and penalty of wT per unit of time taken. Formally, we define
the utility to be:
U (?) , ExpAcc(p(y | x, q, s, r, t)) ? (nQ wM + tnow wT ),
0
ExpAcc(p) = Ep(y) [Accuracy(arg max
p(y ))],
0
y
(1)
(2)
where nQ = |{j | qj ? {1, . . . , n}| is the number of queries made, p(y | x, q, s, r, t) is a prediction
model that incorporates the crowd?s responses.
The utility of wait and return actions is computed by taking expectations over subsequent trajectories
in the game tree. This is intractable to compute exactly, so we propose an approximate algorithm in
Section 4.
Environment model. The final component is a model of the environment (crowd). Given input
x and queries q = (q1 , . . . , qk ) issued at times s = (s1 , . . . , sk ), we define a distribution over the
output y, responses r = (r1 , . . . , rk ) and response times t = (t1 , . . . , tk ) as follows:
p(y, r, t | x, q, s) , p? (y | x)
k
Y
pR (ri | yqi )pT (ti | si ).
(3)
i=1
The three components are as follows: p? (y | x) is the prediction model (e.g. a standard linear-chain
CRF); pR (r | yq ) is the response model which describes the distribution of the crowd?s response
r for a given a query q when the true answer is yq ; and pT (ti | si ) specifies the latency of query
qi . The CRF model p? (y | x) is learned based on all actual responses (not simulated ones) using
AdaGrad. To model annotation errors, we set pR (r | yq ) = 0.7 iff r = yq ,2 and distribute the
remaining probability for r uniformly. Given this full model, we can compute p(r0 | x, r, q) simply
by marginalizing out y and t from Equation 3. When conditioning on r, we ignore responses that
have not yet been received (i.e. when rj = ? for some j).
Behavior. Let?s look at typical behavior that we expect the model and utility to capture. Figure 2a
shows how the marginals over the labels change as the crowd provides responses for our running
example, i.e. named entity recognition for the sentence ?Soup on George str.?. In the both timelines,
the system issues queries on ?Soup? and ?George? because it is not confident about its predictions
for these tokens. In the first timeline, the crowd correctly responds that ?Soup? is a resource and
that ?George? is a location. Integrating these responses, the system is also more confident about
its prediction on ?str.?, and turns in the correct sequence of labels. In the second timeline, a crowd
worker makes an error and labels ?George? to be a person. The system still has uncertainty on
?George? and issues an additional query which receives a correct response, following which the
system turns in the correct sequence of labels. While the answer is still correct, the system could
have taken less time to respond by making an additional query on ?George? at the very beginning.
2
We found the humans we hired were roughly 70% accurate in our experiments
4
4
Game playing
In Section 3 we modeled on-the-job learning as a stochastic game played between the system and
the crowd. We now turn to the problem of actually finding a policy that maximizes the expected
utility, which is, of course, intractable because of the large state space.
Our algorithm (Algorithm 1) combines ideas from Monte Carlo tree search [12] to systematically
explore the state space and progressive widening [13] to deal with the challenge of continuous variables (time). Some intuition about the algorithm is provided below. When simulating the system?s
turn, the next state (and hence action) is chosen using the upper confidence tree (UCT) decision
rule that trades off maximizing the value of the next state (exploitation) with the number of visits
(exploration). The crowd?s turn is simulated based on transitions defined in Section 3. To handle the
unbounded fanout during the crowd?s turn, we use progressive widening that maintains a current set
of ?active? or ?explored? states, which is gradually grown with time. Let N (?) be the number of
times a state has been visited, and C(?) be all successor states that the algorithm has sampled.
Algorithm 1 Approximating expected utility with MCTS and progressive widening
1: For all ?, N (?) ? 0, V (?) ? 0, C(?) ? [ ]
2: function MONTE C ARLOVALUE(state ?)
3:
increment N (?)
4:
if system?s turn then n
q
o
V (? 0 )
log N (?)
5:
? 0 ? arg max?0 N
+
c
0
0
(? )
N (? )
. Initialize visits, utility sum, and children
. Choose next state ? 0 using UCT
6:
v ?MONTE C ARLOVALUE(? 0 )
7:
V (?) ? V (?) + v
. Record observed utility
8:
return v
9:
else if crowd?spturn then
. Restrict continuous samples using PW
10:
if max(1, N (?)) ? |C(?)| then
11:
? 0 is sampled from set of already visited C(?) based on (3)
12:
else
13:
? 0 is drawn based on (3)
14:
C(?) ? C(?) ? {[? 0 ]}
15:
end if
16:
return MONTE C ARLOVALUE(? 0 )
17:
else if game terminated then
18:
return utility U of ? according to (1)
19:
end if
20: end function
5
Experiments
In this section, we empirically evaluate our approach on three tasks. While the on-the-job setting we
propose is targeted at scenarios where there is no data to begin with, we use existing labeled datasets
(Table 1) to have a gold standard.
Baselines. We evaluated the following four methods on each dataset:
1. Human n-query: The majority vote of n human crowd workers was used as a prediction.
2. Online learning: Uses a classifier that trains on the gold output for all examples seen so
far and then returns the MLE as a prediction. This is the best possible offline system: it
sees perfect information about all the data seen so far, but can not query the crowd while
making a prediction.
3. Threshold baseline: Uses the following heuristic: For each label, yi , we ask for m queries
such that (1?p? (yi | x))?0.3m ? 0.98. Instead of computing the expected marginals over
the responses to queries in flight, we simply count the in-flight requests for a given variable,
and reduces the uncertainty on that variable by a factor of 0.3. The system continues
launching requests until the threshold (adjusted by number of queries in flight) is crossed.
5
Dataset (Examples)
NER (657)
Task and notes
We evaluate on the CoNLL-2003
NER task3 , a sequence labeling
problem over English sentences.
We only consider the four tags corresponding to persons, locations,
organizations or none4 .
Sentiment (1800)
We evaluate on a subset of the
IMDB sentiment dataset [15] that
consists of 2000 polar movie reviews; the goal is binary classification of documents into classes POS
and NEG.
We evaluate on a celebrity face
classification task [17]. Each image must be labeled as one of the
following four choices: Andersen
Cooper, Daniel Craig, Scarlet Johansson or Miley Cyrus.
Face (1784)
Features
We used standard features [14]: the
current word, current lemma, previous and next lemmas, lemmas in
a window of size three to the left
and right, word shape and word
prefix and suffixes, as well as word
embeddings.
We used two feature sets, the
first (UNIGRAMS) containing only
word unigrams, and the second
(RNN) that also contains sentence
vector embeddings from [16].
We used the last layer of a 11layer AlexNet [2] trained on ImageNet as input feature embeddings,
though we leave back-propagating
into the net to future work.
Table 1: Datasets used in this paper and number of examples we evaluate on.
System
1-vote
3-vote
5-vote
Online
Threshold
LENSE
Delay/tok
467 ms
750 ms
1350 ms
n/a
414 ms
267 ms
Named Entity Recognition
Qs/tok PER F1 LOC F1
1.0
90.2
78.8
3.0
93.6
85.1
5.0
95.5
87.7
n/a
56.9
74.6
0.61
95.2
89.8
0.45
95.2
89.7
ORG F1
71.5
74.5
78.7
51.4
79.8
81.7
F1
80.2
85.4
87.3
60.9
88.3
88.8
Face Identification
Latency Qs/ex Acc.
1216 ms
1.0 93.6
1782 ms
3.0 99.1
2103 ms
5.0 99.8
n/a
n/a 79.9
1680 ms
2.66 93.5
1590 ms
2.37 99.2
Table 2: Results on NER and Face tasks comparing latencies, queries per token (Qs/tok) and performance metrics (F1 for NER and accuracy for Face).
Predictions are made using MLE on the model given responses. The baseline does not
reason about time and makes all its queries at the very beginning.
4. LENSE: Our full system as described in Section 3.
Implementation and crowdsourcing setup. We implemented the retainer model of [18] on Amazon Mechanical Turk to create a ?pool? of crowd workers that could respond to queries in real-time.
The workers were given a short tutorial on each task before joining the pool to minimize systematic
errors caused by misunderstanding the task. We paid workers $1.00 to join the retainer pool and
an additional $0.01 per query (for NER, since response times were much faster, we paid $0.005
per query). Worker response times were generally in the range of 0.5?2 seconds for NER, 10?15
seconds for Sentiment, and 1?4 seconds for Faces.
When running experiments, we found that the results varied based on the current worker quality. To
control for variance in worker quality across our evaluations of the different methods, we collected
5 worker responses and their delays on each label ahead of time5 . During simulation we sample the
worker responses and delays without replacement from this frozen pool of worker responses.
Summary of results. Table 2 and Table 3 summarize the performance of the methods on the three
tasks. On all three datasets, we found that on-the-job learning outperforms machine and human-only
3
http://www.cnts.ua.ac.be/conll2003/ner/
The original also includes a fifth tag for miscellaneous, however the definition for miscellaneos is complex,
making it very difficult for non-expert crowd workers to provide accurate labels.
5
These datasets are available in the code repository for this paper
4
6
3.9
System
1-vote
3-vote
5-vote
Unigrams
Unigrams + RNN embeddings
3.8
Queries per example
3.7
Qs/ex
1.00
3.00
5.00
Acc.
89.2
95.8
98.7
n/a
10.9 s
11.7 s
n/a
2.99
3.48
78.1
95.9
98.6
n/a
11.0 s
11.0 s
n/a
2.85
3.19
85.0
96.0
98.6
UNIGRAMS
3.6
Online
Threshold
LENSE
3.5
3.4
RNN
3.3
3.2
Latency
6.6 s
10.9 s
13.5 s
0
20
40
60
80
100
120
140
160
180
Online
Threshold
LENSE
200
Time
Figure 3: Queries per example for LENSE on
Sentiment. With simple UNIGRAM features, the
model quickly learns it does not have the capacity to answer confidently and must query the
crowd. With more complex RNN features, the
model learns to be more confident and queries
the crowd less over time.
Table 3: Results on the Sentiment task comparing latency, queries per example and accuracy.
1
0.8
1.2
Queries per token
0.7
0.6
F1
LENSE
1 vote baseline
1.4
0.9
0.5
0.4
0.3
1
0.8
0.6
0.4
0.2
0.1
0
0.2
LENSE
online learning
0
100
200
300
400
500
600
0
700
Time
0
100
200
300
400
500
600
700
Time
Figure 4: Comparing F1 and queries per token on the NER task over time. The left graph compares
LENSE to online learning (which cannot query humans at test time). This highlights that LENSE
maintains high F1 scores even with very small training set sizes, by falling back the crowd when it
is unsure. The right graph compares query rate over time to 1-vote. This clearly shows that as the
model learns, it needs to query the crowd less.
comparisons on both quality and cost. On NER, we achieve an F1 of 88.4% at more than an order of
magnitude reduction on the cost of achieving comporable quality result using the 5-vote approach.
On Sentiment and Faces, we reduce costs for a comparable accuracy by a factor of around 2. For the
latter two tasks, both on-the-job learning methods perform less well than in NER. We suspect this
is due to the presence of a dominant class (?none?) in NER that the model can very quickly learn to
expend almost no effort on. LENSE outperforms the threshold baseline, supporting the importance
of Bayesian decision theory.
Figure 4 tracks the performance and cost of LENSE over time on the NER task. LENSE is not only
able to consistently outperform other baselines, but the cost of the system steadily reduces over time.
On the NER task, we find that LENSE is able to trade off time to produce more accurate results than
the 1-vote baseline with fewer queries by waiting for responses before making another query.
While on-the-job learning allows us to deploy quickly and ensure good results, we would like to
eventually operate without crowd supervision. Figure 3, we show the number of queries per example
on Sentiment with two different features sets, UNIGRAMS and RNN (as described in Table 1). With
simpler features (UNIGRAMS), the model saturates early and we will continue to need to query to
the crowd to achieve our accuracy target (as specified by the loss function). On the other hand,
using richer features (RNN) the model is able to learn from the crowd and the amount of supervision
needed reduces over time. Note that even when the model capacity is limited, LENSE is able to
guarantee a consistent, high level of performance.
7
Reproducibility. All code, data, and experiments for this paper are available on CodaLab at
https://www.codalab.org/worksheets/0x2ae89944846444539c2d08a0b7ff3f6f/.
6
Related Work
On-the-job learning draws ideas from many areas: online learning, active learning, active classification, crowdsourcing, and structured prediction.
Online learning. The fundamental premise of online learning is that algorithms should improve
with time, and there is a rich body of work in this area [7]. In our setting, algorithms not only
improve over time, but maintain high accuracy from the beginning, whereas regret bounds only
achieve this asymptotically.
Active learning. Active learning (see [19] for a survey) algorithms strategically select most informative examples to build a classifier. Online active learning [8, 9, 10] performs active learning
in the online setting. Several authors have also considered using crowd workers as a noisy oracle
e.g. [20, 21]. It differs from our setup in that it assumes that labels can only be observed after
classification, which makes it nearly impossible to maintain high accuracy in the beginning.
Active classification. Active classification [22, 23, 24] asks what are the most informative features
to measure at test time. Existing active classification algorithms rely on having a fully labeled
dataset which is used to learn a static policy for when certain features should be queried, which does
not change at test time. On-the-job learning differs from active classification in two respects: true
labels are never observed, and our system improves itself at test time by learning a stronger model.
A notable exception is Legion:AR [6], which like us operates in on-the-job learning setting to for
real-time activity classification. However, they do not explore the machine learning foundations
associated with operating in this setting, which is the aim of this paper.
Crowdsourcing. A burgenoning subset of the crowdsourcing community overlaps with machine
learning. One example is Flock [25], which first crowdsources the identification of features for an
image classification task, and then asks the crowd to annotate these features so it can learn a decision
tree. In another line of work, TurKontrol [26] models individual crowd worker reliability to optimize
the number of human votes needed to achieve confident consensus using a POMDP.
Structured prediction. An important aspect our prediction tasks is that the output is structured,
which leads to a much richer setting for one-the-job learning. Since tags are correlated, the importance of a coherent framework for optimizing querying resources is increased. Making active partial
observations on structures and has been explored in the measurements framework of [27] and in the
distant supervision setting [28].
7
Conclusion
We have introduced a new framework that learns from (noisy) crowds on-the-job to maintain high
accuracy, and reducing cost significantly over time. The technical core of our approach is modeling
the on-the-job setting as a stochastic game and using ideas from game playing to approximate the
optimal policy. We have built a system, LENSE, which obtains significant cost reductions over a
pure crowd approach and significant accuracy improvements over a pure ML approach.
Acknowledgments
We are grateful to Kelvin Guu and Volodymyr Kuleshov for useful feedback regarding the calibration of our models and Amy Bearman for providing the image embeddings for the face classification
experiments. We would also like to thank our anonymous reviewers for their helpful feedback. Finally, our work was sponsored by a Sloan Fellowship to the third author.
References
[1] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image
database. In Computer Vision and Pattern Recognition (CVPR), pages 248?255, 2009.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097?1105, 2012.
8
[3] M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, and
K. Panovich. Soylent: a word processor with a crowd inside. In Symposium on User Interface Software
and Technology, pages 313?322, 2010.
[4] N. Kokkalis, T. K?ohn, C. Pfeiffer, D. Chornyi, M. S. Bernstein, and S. R. Klemmer. Emailvalet: Managing email overload through private, accountable crowdsourcing. In Conference on Computer Supported
Cooperative Work, pages 1291?1300, 2013.
[5] C. Li, J. Weng, Q. He, Y. Yao, A. Datta, A. Sun, and B. Lee. Twiner: named entity recognition in targeted
twitter stream. In ACM Special Interest Group on Information Retreival (SIGIR), pages 721?730, 2012.
[6] Walter S Lasecki, Young Chol Song, Henry Kautz, and Jeffrey P Bigham. Real-time crowd labeling for
deployable activity recognition. In Proceedings of the 2013 conference on Computer supported cooperative work, 2013.
[7] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[8] D. Helmbold and S. Panizza. Some label efficient learning results. In Conference on Learning Theory
(COLT), pages 218?230, 1997.
[9] D. Sculley. Online active learning methods for fast label-efficient spam filtering. In Conference on Email
and Anti-spam (CEAS), 2007.
[10] W. Chu, M. Zinkevich, L. Li, A. Thomas, and B. Tseng. Unbiased online active learning in data streams.
In International Conference on Knowledge Discovery and Data Mining (KDD), pages 195?203, 2011.
[11] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural Information
Processing Systems (NIPS), pages 1062?1070, 2011.
[12] L. Kocsis and C. Szepesv?ari. Bandit based Monte-Carlo planning. In European Conference on Machine
Learning (ECML), pages 282?293, 2006.
[13] R. Coulom. Computing elo ratings of move patterns in the game of go. Computer Games Workshop,
2007.
[14] J. R. Finkel, T. Grenager, and C. Manning. Incorporating non-local information into information extraction systems by Gibbs sampling. In Association for Computational Linguistics (ACL), pages 363?370,
2005.
[15] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
Learning word vectors for sentiment analysis. In ACL: HLT, pages 142?150, Portland, Oregon, USA,
June 2011. Association for Computational Linguistics.
[16] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language
Processing (EMNLP), 2013.
[17] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and Simile Classifiers for Face
Verification. In ICCV, Oct 2009.
[18] M. S. Bernstein, J. Brandt, R. C. Miller, and D. R. Karger. Crowds in two seconds: Enabling realtime
crowd-powered interfaces. In User Interface Software and Technology, pages 33?42, 2011.
[19] B. Settles. Active learning literature survey. Technical report, University of Wisconsin, Madison, 2010.
[20] P. Donmez and J. G. Carbonell. Proactive learning: cost-sensitive active learning with multiple imperfect
oracles. In Conference on Information and Knowledge Management (CIKM), pages 619?628, 2008.
[21] D. Golovin, A. Krause, and D. Ray. Near-optimal Bayesian active learning with noisy observations. In
Advances in Neural Information Processing Systems (NIPS), pages 766?774, 2010.
[22] R. Greiner, A. J. Grove, and D. Roth. Learning cost-sensitive active classifiers. Artificial Intelligence,
139(2):137?174, 2002.
[23] X. Chai, L. Deng, Q. Yang, and C. X. Ling. Test-cost sensitive naive Bayes classification. In International
Conference on Data Mining, pages 51?58, 2004.
[24] S. Esmeir and S. Markovitch. Anytime induction of cost-sensitive trees. In Advances in Neural Information Processing Systems (NIPS), pages 425?432, 2007.
[25] J. Cheng and M. S. Bernstein. Flock: Hybrid Crowd-Machine learning classifiers. In Proceedings of the
18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 600?611,
2015.
[26] P. Dai, Mausam, and D. S. Weld. Decision-theoretic control of crowd-sourced workflows. In Association
for the Advancement of Artificial Intelligence (AAAI), 2010.
[27] P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In International Conference on Machine Learning (ICML), 2009.
[28] G. Angeli, J. Tibshirani, J. Y. Wu, and C. D. Manning. Combining distant and partial supervision for
relation extraction. In Empirical Methods in Natural Language Processing (EMNLP), 2014.
9
| 5860 |@word private:1 exploitation:1 repository:1 pw:1 stronger:1 johansson:1 open:1 seek:2 simulation:1 q1:9 paid:2 asks:2 rj0:3 reduction:4 loc:7 contains:1 score:1 karger:2 daniel:1 document:1 prefix:1 outperforms:2 existing:2 current:6 com:2 comparing:3 si:3 yet:1 chu:1 must:4 subsequent:1 distant:2 informative:2 midway:1 shape:1 kdd:1 sponsored:1 update:2 intelligence:2 fewer:1 guess:1 nq:2 advancement:1 beginning:4 short:1 core:1 record:1 provides:2 boosting:2 location:9 launching:2 org:2 simpler:1 brandt:1 unbounded:1 constructed:1 symposium:1 consists:2 combine:1 dan:1 ray:1 paragraph:1 inside:1 manner:1 expected:4 roughly:1 behavior:5 planning:1 resolve:1 actual:1 little:1 str:8 window:1 ua:1 provided:3 begin:1 formalizing:1 maximizes:1 alexnet:1 what:2 q2:4 finding:1 dubbed:1 guarantee:1 y3:2 collecting:1 ti:3 exactly:1 classifier:6 control:2 unit:1 omit:1 yn:2 kelvin:1 before:3 ner:16 t1:4 soylent:1 local:1 joining:1 becoming:1 lugosi:1 acl:2 r4:1 limited:1 range:1 acknowledgment:1 recursive:1 regret:1 differs:2 x3:2 area:2 rnn:6 empirical:2 significantly:2 word:8 road:2 confidence:4 integrating:1 wait:7 time5:1 get:1 cannot:2 impossible:1 www:4 optimize:1 map:1 reviewer:1 zinkevich:1 maximizing:1 roth:1 go:1 starting:2 survey:2 pomdp:1 sigir:1 amazon:1 immediately:1 pure:4 helmbold:1 yqi:1 rule:2 q:4 amy:1 esmeir:1 his:1 handle:1 markovitch:1 crowdsourcing:9 increment:1 updated:2 feel:1 pt:3 deploy:2 today:1 target:1 user:2 us:2 kuleshov:1 cnts:1 recognition:9 expensive:1 continues:2 labeled:4 database:1 observed:4 ep:1 cooperative:3 capture:1 sun:1 decrease:2 trade:3 principled:2 intuition:1 environment:2 reward:1 trained:2 grateful:1 panizza:1 upon:1 imdb:1 po:1 grown:1 train:1 walter:1 fast:1 describe:1 monte:7 query:58 artificial:2 labeling:2 outcome:1 crowd:48 sourced:1 heuristic:1 stanford:8 richer:2 cvpr:1 katrina:3 grenager:1 noisy:5 itself:1 asynchronously:2 online:18 final:1 ceas:1 sequence:5 kocsis:1 frozen:1 net:1 mausam:1 propose:3 ackerman:1 combining:1 monetary:1 iff:2 reproducibility:1 achieve:8 gold:2 t0j:6 intuitive:1 sutskever:1 chai:1 extending:1 r1:11 produce:2 perfect:1 leave:1 tk:4 develop:2 ac:1 propagating:1 andrew:2 ex:2 received:6 job:18 implemented:3 c:4 come:2 correct:4 compromising:1 attribute:1 stochastic:6 exploration:2 human:10 successor:1 settle:1 opinion:1 public:1 require:1 premise:1 f1:13 anonymous:1 adjusted:1 pham:1 around:1 considered:1 deciding:2 elo:1 early:1 purpose:1 polar:1 daly:1 label:27 visited:2 sensitive:4 create:1 arun:1 clearly:1 aim:1 finkel:1 q3:1 june:1 improvement:5 consistently:1 potts:2 portland:1 attains:1 baseline:7 helpful:1 twitter:2 suffix:1 entire:1 initially:1 koller:1 bandit:1 relation:1 interested:1 issue:6 classification:19 arg:3 hartmann:1 colt:1 special:1 initialize:1 once:1 never:1 having:3 extraction:3 legion:1 sampling:1 x4:2 progressive:4 ng:2 look:1 icml:1 nearly:1 future:1 report:1 strategically:2 simultaneously:1 individual:1 relief:1 replacement:1 jeffrey:1 maintain:5 organization:1 huge:1 interest:1 possibility:1 mining:2 evaluation:1 weng:1 arrives:1 tj:2 chain:1 accurate:4 grove:1 worker:16 partial:3 tree:13 re:3 uncertain:1 instance:1 increased:1 modeling:1 ar:1 cost:20 subset:3 delay:3 krizhevsky:1 answer:3 chooses:4 confident:5 person:5 fundamental:1 international:3 systematic:1 off:3 receiving:1 dong:1 pool:5 lee:1 quickly:3 yao:1 andersen:1 aaai:1 cesa:1 management:1 containing:1 choose:6 huang:1 emnlp:2 expert:4 return:13 actively:1 li:4 volodymyr:1 distribute:1 includes:1 oregon:1 
notable:1 sloan:1 caused:1 depends:1 stream:5 crossed:1 later:1 proactive:1 unigrams:7 start:2 wm:2 maintains:2 parallel:1 kautz:1 annotation:3 misunderstanding:1 bayes:1 minimize:2 hired:1 accuracy:19 convolutional:1 qk:7 who:1 variance:1 miller:2 bayesian:6 identification:2 craig:1 none:6 carlo:4 trajectory:1 processor:1 acc:2 minj:1 hlt:1 email:2 definition:1 codalab:2 evaluates:1 turk:1 steadily:1 naturally:1 associated:1 static:2 sampled:4 dataset:4 ask:4 recall:1 knowledge:2 anytime:1 improves:3 formalize:1 actually:1 back:5 supervised:1 follow:1 ohn:1 response:37 specify:1 formulation:1 evaluated:2 though:1 uct:2 until:2 clock:1 flight:4 receives:2 hand:1 christopher:2 celebrity:1 lack:1 quality:5 indicated:1 building:1 usa:1 requiring:1 y2:2 true:2 unbiased:1 hence:1 q0:2 semantic:1 deal:1 x5:2 during:6 game:25 self:1 m:10 crf:3 theoretic:1 performs:1 percy:1 interface:3 soup:7 image:6 ari:1 donmez:1 empirically:1 conditioning:1 association:3 he:1 marginals:3 measurement:2 significant:2 cambridge:1 gibbs:1 ai:1 queried:1 chaganty:2 language:2 flock:2 reliability:1 henry:1 calibration:1 supervision:4 money:1 operating:1 dominant:1 optimizing:1 scenario:1 issued:5 certain:1 klemmer:1 binary:1 continue:1 yi:2 neg:1 seen:3 george:15 additional:3 dai:1 deng:2 r0:5 managing:1 determine:1 maximize:2 redundant:1 belhumeur:1 angeli:1 ii:1 full:5 multiple:1 rj:3 reduces:3 technical:2 faster:1 mle:2 visit:2 qi:2 prediction:27 vision:1 expectation:1 metric:1 bigham:1 annotate:1 represent:1 disaster:1 worksheet:1 whereas:3 want:1 fellowship:1 szepesv:1 krause:1 else:3 source:1 rest:1 unlike:1 operate:1 suspect:1 thing:1 legend:1 incorporates:1 jordan:1 near:1 presence:1 yang:1 bernstein:4 enough:1 embeddings:5 switch:1 restrict:1 imperfect:1 reduce:1 idea:4 simplifies:1 tradeoff:1 regarding:1 qj:3 t0:4 whether:1 utility:15 effort:2 fanout:1 sentiment:11 penalty:1 song:1 peter:1 returned:1 remark:1 action:15 deep:2 workflow:1 generally:1 latency:11 useful:1 unimportant:1 chol:1 amount:3 http:4 specifies:1 outperform:1 tutorial:1 accountable:1 cikm:1 per:14 correctly:1 track:1 klein:1 tibshirani:1 waiting:2 ogs:7 group:1 four:3 reliance:1 threshold:6 monitor:1 drawn:1 falling:1 clarity:1 achieving:1 graph:3 asymptotically:1 tweet:4 sum:1 uncertainty:3 master:1 respond:4 named:7 arrive:2 almost:1 reasonable:1 decide:2 wu:2 family:1 realtime:1 draw:1 decision:8 conll:1 comparable:1 layer:2 bound:1 followed:1 played:1 cheng:1 oracle:2 activity:2 ahead:1 fei:2 leonardo:1 x2:2 ri:1 software:2 tag:3 weld:1 aspect:1 kumar:1 simile:1 department:4 structured:4 according:2 request:3 manning:5 poor:1 unsure:1 describes:1 across:1 character:1 making:7 s1:3 gradually:2 pr:3 iccv:1 taken:4 resource:8 equation:1 turn:14 eventually:2 count:1 needed:4 end:6 available:3 hierarchical:1 simulating:1 deployable:1 cent:1 original:1 assumes:1 running:3 remaining:1 ensure:1 thomas:1 linguistics:2 chuang:1 madison:1 build:2 approximating:1 objective:1 move:2 already:2 strategy:1 responds:1 thank:1 simulated:2 entity:7 majority:1 capacity:2 carbonell:1 y5:2 collected:1 consensus:1 trivial:1 reason:2 tseng:1 induction:1 code:2 modeled:1 y4:2 providing:1 balance:1 coulom:1 liang:2 setup:2 difficult:1 potentially:1 perelygin:1 rise:1 implementation:2 policy:6 pliang:1 perform:2 upper:1 miley:1 observation:2 bianchi:1 datasets:5 enabling:1 tok:3 anti:1 supporting:1 ecml:1 situation:1 saturates:1 incorporated:1 hinton:1 y1:4 varied:1 community:1 datta:1 rating:1 introduced:1 compositionality:1 cast:1 mechanical:1 specified:1 
sentence:4 imagenet:3 coherent:1 learned:2 timeline:4 nip:4 able:4 bar:1 below:1 pattern:2 challenge:1 summarize:1 confidently:1 reverts:1 built:1 max:3 belief:1 overlap:1 widening:4 rely:1 natural:2 hybrid:1 pfeiffer:1 retreival:1 improve:3 github:1 movie:1 brief:1 yq:5 technology:2 mcts:1 vinci:1 naive:1 raymond:1 prior:2 review:1 discovery:1 literature:1 powered:1 adagrad:1 relative:2 marginalizing:1 wisconsin:1 loss:1 expect:1 highlight:1 fully:1 limitation:1 filtering:1 querying:2 foundation:1 gather:1 sufficient:3 consistent:1 s0:2 verification:1 treebank:1 systematically:1 playing:2 row:1 course:1 summary:1 token:6 supported:3 last:1 maas:1 english:1 offline:1 taking:1 face:9 expend:1 fifth:1 feedback:3 xn:1 transition:2 rich:1 author:2 made:3 adaptive:1 spam:2 far:4 social:1 sj:1 approximate:2 obtains:2 ignore:1 ml:2 active:22 sequentially:1 q4:1 cyrus:1 search:4 continuous:3 sk:3 table:7 terminate:1 learn:4 pending:2 correlated:1 golovin:1 complex:2 european:1 da:1 terminated:1 whole:2 ling:1 child:1 x1:3 body:1 join:1 pupil:1 slow:1 aid:1 cooper:1 position:3 wish:1 exponential:1 third:1 learns:4 young:1 rk:4 unigram:1 offset:1 r2:3 explored:2 evidence:1 intractable:4 incorporating:3 socher:2 workshop:1 importance:2 magnitude:3 admitted:1 simply:3 likely:1 explore:2 gao:1 visual:1 greiner:1 acm:2 oct:1 goal:4 targeted:2 sculley:1 miscellaneous:1 change:2 typical:1 uniformly:1 operates:1 wt:2 surpass:1 reducing:1 lemma:3 called:1 total:1 player:1 vote:12 exception:1 formally:1 select:1 berg:1 latter:1 overload:1 incorporate:1 evaluate:5 tested:1 nayar:1 |
5,370 | 5,861 | Learning Wake-Sleep Recurrent Attention Models
Jimmy Ba
University of Toronto
Roger Grosse
University of Toronto
[email protected]
[email protected]
Ruslan Salakhutdinov
University of Toronto
Brendan Frey
University of Toronto
[email protected]
[email protected]
Abstract
Despite their success, convolutional neural networks are computationally expensive because they must examine all image locations. Stochastic attention-based
models have been shown to improve computational efficiency at test time, but they
remain difficult to train because of intractable posterior inference and high variance in the stochastic gradient estimates. Borrowing techniques from the literature
on training deep generative models, we present the Wake-Sleep Recurrent Attention Model, a method for training stochastic attention networks which improves
posterior inference and which reduces the variability in the stochastic gradients.
We show that our method can greatly speed up the training time for stochastic
attention networks in the domains of image classification and caption generation.
1
Introduction
Convolutional neural networks, trained end-to-end, have been shown to substantially outperform
previous approaches to various supervised learning tasks in computer vision (e.g. [1])). Despite their
wide success, convolutional nets are computationally expensive when processing high-resolution
input images, because they must examine all image locations at a fine scale. This has motivated
recent work on visual attention-based models [2, 3, 4], which reduce the number of parameters and
computational operations by selecting informative regions of an image to focus on. In addition to
computational speedups, attention-based models can also add a degree of interpretability, as one can
understand what signals the algorithm is using by seeing where it is looking. One such approach
was recently used by [5] to automatically generate image captions and highlight which image region
was relevant to each word in the caption.
There are two general approaches to attention-based image understanding: hard and soft attention.
Soft attention based models (e.g. [5]) obtain features from a weighted average of all image locations,
where locations are weighted based on a model?s saliency map. By contrast, a hard attention model
(e.g. [2, 3]) chooses, typically stochastically, a series of discrete glimpse locations. Soft attention
models are computationally expensive, as they have to examine every image location; we believe
that the computational gains of attention require a hard attention model. Unfortunately, this comes
at a cost: while soft attention models can be trained with standard backpropagation [6, 5], this does
not work for hard attention models, whose glimpse selections are typically discrete.
Training stochastic hard attention models is difficult because the loss gradient involves intractable
posterior expectations, and because the stochastic gradient estimates can have high variance. (The
latter problem was also observed by [7] in the context of memory networks.) In this work, we propose the Wake-Sleep Recurrent Attention Model (WS-RAM), a method for training stochastic recurrent attention models which deals with the problems of intractable inference and high-variance gradients by taking advantage of several advances from the literature on training deep generative models:
1
inference networks [8], the reweighted wake-sleep algorithm [9], and control variates [10, 11]. During training, the WS-RAM approximates posterior expectations using importance sampling, with a
proposal distribution computed by an inference network. Unlike the prediction network, the inference network has access to the object category label, which helps it choose better glimpse locations.
As the name suggests, we train both networks using the reweighted wake-sleep algorithm. In addition, we reduce the variance of the stochastic gradient estimates using carefully chosen control
variates. In combination, these techniques constitute an improved training procedure for stochastic
attention models.
The main contributions of our work are the following. First, we present a new learning algorithm for
stochastic attention models and compare it with a training method based on variational inference [2].
Second, we develop a novel control variate technique for gradient estimation which further speeds
up training. Finally, we demonstrate that our stochastic attention model can learn to (1) classify
translated and scaled MNIST digits, and (2) generate image captions by attending to the relevant
objects in images and their corresponding scale. Our model achieves similar performance to the
variational method [2], but with much faster training times.
2
Related work
In recent years, there has been a flurry of work on attention-based neural networks. Such models
have been applied successfully in image classification [12, 4, 3, 2], object tracking [13, 3], machine
translation [6], caption generation [5], and image generation [14, 15]. Attention has been shown
both to improve computational efficiency [2] and to yield insight into the network?s behavior [5].
Our work is most closely related to stochastic hard attention models (e.g. [2]). A major difficulty
of training such models is that computing the gradient requires taking expectations with respect
to the posterior distribution over saccades, which is typically intractable. This difficulty is closely
related to the problem of posterior inference in training deep generative models such as sigmoid
belief networks [16]. Since our proposed method draws heavily from the literature on training deep
generative models, we overview various approaches here.
One of the challenges of training a deep (or recurrent) generative model is that posterior inference
is typically intractable due to the explaining away effect. One way to deal with intractable inference
is to train a separate inference network whose job it is to predict the posterior distribution. A classic
example was the Helmholtz machine [8], where the inference network predicts a mean field approximation to the posterior.1 The generative and inference networks are trained with the wake-sleep
algorithm: in the wake phase, the generative model is updated to increase a variational lower bound
on the data likelihood. In the sleep phase, data are generated from the model, and the inference
network is trained to predict the latent variables used to generate the observations.
The wake-sleep approach was limited by the fact that the wake and sleep phases were minimizing
two unrelated objective functions. More recently, various methods have been proposed which unify
the training of the generative and inference networks into a single objective function. Neural variational inference and learning (NVIL) [11] trains both networks to maximize a variational lower
bound on the log-likelihood. Since the stochastic gradient estimates in NVIL are very noisy, the
method of control variates is used to reduce the variance. In particular, one uses an algortihm from
reinforcement learning called REINFORCE [17], which attempts to infer a reward baseline for each
instance. The choice of baseline is crucial to good performance; NVIL uses a separate neural network to compute the baseline, an approach also used by [3] in the context of attention networks.
Control variates are discussed in more detail in Section 4.4.
The reweighted wake-sleep approach [9] is similar to traditional wake-sleep, but uses importance
sampling in place of mean field inference to approximate the posterior. Reweighted wake-sleep is
described more formally in Section 4.3. Another method based on inference networks is variational
autoencoders [18, 19], which exploit a clever reparameterization of the probabilistic model in order
to improve the signal in the stochastic gradients. NVIL, reweighted wake-sleep, and variational
autoencoders have all been shown to achieve considerably higher test log-likelihoods compared to
1
In the literature, the inference network is often called a recognition network; we avoid this terminology to
prevent confusion with the task of image classification.
2
q(a1 |y, I, ?)
y
q(a2 |y, a1 , I, ?)
y
p(a1 |I, ?)
q(a3 |y, a1:2 , I, ?)
q(a4 |y, a1:3 , I, ?)
y
p(a2 |a1 , I, ?)
inference
network
y
p(a3 |a1:2 , I, ?)
p(a4 |a1:3 , I, ?)
Ilow-resolution
prediction
network
p(y|a, I, ?)
(x1 , a1 )
(x2 , a2 )
(x3 , a3 )
(xN , aN )
Figure 1: The Wake-Sleep Recurrent Attention Model.
traditional wake-sleep. The term ?Helmholtz machine? is often used loosely to refer to the entire
collection of techniques which simultaneously learn a generative network and an inference network.
3
Wake-Sleep Recurrent Attention Model
We now describe our wake-sleep recurrent attention model (WS-RAM). Given an image I, the network first chooses a sequence of glimpses a = (a1 , . . . , aN ), and after each glimpse, receives an
observation xn computed by a mapping g(an , I). This mapping might, for instance, extract an
image patch at a given scale. The first glimpse is based on a low-resolution version of the input,
while subsequent glimpses are chosen based on information acquired from previous glimpses. The
glimpses are chosen stochastically according to a distribution p(an | a1:n?1 , I, ?), where ? denotes
the parameters of the network. This is in contrast with soft attention models, which deterministically allocate attention across all image locations. After the last glimpse, the network predicts a
distribution p(y | a, I, ?) over the target y (for instance, the caption or image category).
As shown in Figure 1, the core of the attention network is a two-layer recurrent network, which we
term the ?prediction network?, where the output at each time step is an action (saccade) which is
used to compute the input at the next time step. A low-resolution version of the input image is fed
to the network at the first time step, and the network predicts the class label at the final time step.
Importantly, the low-resolution input is fed to the second layer, while the class label prediction is
made by the first layer, preventing information from propagating directly from the low-resolution
image to the output. This prevents local optima where the network learns to predict y directly from
the low-resolution input, disregarding attention completely.
On top of the prediction network is an inference network, which receives both the class label and
the attention network?s top layer representation as inputs. It tries to predict the posterior distribution q(an+1 | y, a1:n , I, ?), parameterized by ?, over the next saccade, conditioned on the image
category being correctly predicted. Its job is to guide the posterior sampler during training time,
thereby acting as a ?teacher? for the attention network. The inference network is described further
in Section 4.3.
One of the benefits of stochastic attention models is that the mapping g can be localized to a small
image region or coarse granularity, which means it can potentially be made very efficient. Furthermore, g need not be differentiable, which allows for operations (such as choosing a scale) which
would be difficult to implement in a soft attention network. The cost of this flexibility is that standard backpropagation cannot be applied, so instead we use novel algorithms described in the next
section.
3
4
Learning
In this work, we assume that we have a dataset with labels y for the supervised prediction task
(e.g. object category). In contrast to the supervised saliency prediction task (e.g. [20, 21]), there are
no labels for where to attend. Instead, we learn an attention policy based on the idea that the best
locations to attend to are the ones which most robustly lead the model to predict the correct category.
In particular, we aim to maximize the probability of the class label (or equivalently, minimize the
cross-entropy) by marginalizing over the actions at each glimpse:
X
` = log p(y | I, ?) = log
p(a | I, ?)p(y | a, I, ?).
(1)
a
We train the attention model by maximizing a lower bound on `. In Section 4.1, we first describe
a previous approach which minimized a variational lower bound. We then introduce our proposed
method which directly estimates the gradients of `. As shown in Section 4.2, our method can be
seen as maximizing a tighter lower bound on `.
4.1
Variational lower bound
We first outline the approach of [2], who trained the model to maximize a variational lower bound
on `. Let q(a | y, I) be an approximating distribution. The lower bound on ` is then given by:
X
X
` = log
p(a | I, ?)p(y | a, I, ?) ?
q(a | y, I) log p(y, a | I, ?) + H[q] = F.
(2)
a
a
In the case where q(a | y, I) = p(a | I, ?) is the prior, as considered by [2], this reduces to
X
F=
p(a | I, ?) log p(y | a, I, ?).
(3)
a
The learning rules can be derived by taking derivatives of Eqn. 3 with respect to the model parameters:
X
?F
? log p(y | a, I, ?)
? log p(a | I, ?)
=
p(a | I, ?)
+ log p(y | a, I, ?)
.
(4)
??
??
??
a
?m from p(a | I, ?):
The summation can be approximated using M Monte Carlo samples a
M
?m , I, ?)
1 X ? log p(y | a
? log p(?
am | I, ?)
?F
?m , I, ?)
?
+ log p(y | a
.
??
M m=1
??
??
(5)
The partial derivative terms can each be computed using standard backpropagation. This suggests a
?m from
simple gradient-based training algorithm: For each image, one first computes the samples a
the prior p(a | I, ?), and then updates the parameters according to Eqn. 5. As observed by [2], one
must carefully use control variates in order to make this technique practical; we defer discussion of
control variates to Section 4.4.
4.2
An improved lower bound on the log-likelihood
The variational method described above has some counterintuitive properties early in training. First,
because it averages the log-likelihood over actions, it greatly amplifies the differences in probabilities assigned to the true category by different bad glances. For instance, a glimpse sequence which
leads to 0.01 probability assigned to the correct class is considered much worse than one which leads
to 0.02 probability under the variational objective, even though in practice they may be equally bad
since they have both missed the relevant information. A second odd behavior is that all glimpse
sequences are weighted equally in the log-likelihood gradient. It would be better if the training
procedure focused its effort on using those glances which contain the relevant information. Both of
these effects contribute noise in the training procedure, especially in the early stages of training.
4
Instead, we adopt an approach based on the wake-p step of reweighted wake-sleep [9], where we
attempt to maximize the marginal log-probability ` directly. We differentiate the marginal loglikelihood objective in Eqn. 1 with respect to the model parameters:
X
?`
1
? log p(y | a, I, ?) ? log p(a | I, ?)
=
p(a | I, ?)p(y | a, I, ?)
+
. (6)
??
p(y | I, ?) a
??
??
The summation and normalizing constant are both intractable to evaluate, so we estimate them using
importance sampling. We must define a proposal distribution q(a | y, I), which ideally should be
close to the posterior p(a | y, I, ?). One reasonable choice is the prior p(a | I, ?), but another choice
is described in Section 4.3. Normalized importance sampling gives a biased but consistent estimator
?1 , . . . , a
?M from q(a | y, I), the (unnormalized) importance
of the gradient of `. Given samples a
weights are computed as:
w
?m =
?m , I, ?)
p(?
am | I, ?)p(y | a
.
q(?
am | y, I)
(7)
The Monte Carlo estimate of the gradient is given by:
M
X
?m , I, ?) ? log p(?
am | I, ?)
?`
m ? log p(y | a
?
+
,
(8)
w
?? m=1
??
??
PM i
where wm = w
? m / i=1 w
? are the normalized importance weights. When q is chosen to be the
prior, this approach is equivalent to the method of [22] for learning generative feed-forward networks.
Our importance sampling
based estimator
h
i can also be viewed as the gradient ascent update on the
PM
1
m
w
?
. Combining Jensen?s inequality with the unbiasedness of
objective function E log M
m=1
m
the w
? shows that this is a lower bound on the log-likelihood:
#
"
#
"
M
M
1 X m
1 X m
w
?
? log E
w
?
= log E [w
? m ] = `.
(9)
E log
M m=1
M m=1
We relate this to the previous section by noting that F = E[log w
? m ]. Another application of Jensen?s
inequality shows that our proposed bound is at least as accurate as F:
"
#
"
#
M
M
1 X
1 X m
m
m
F = E [log w
? ]=E
log w
?
? E log
w
?
.
(10)
M m=1
M m=1
Burda et al. [23] further analyzed a closely related importance sampling based estimator in the context of generative models, bounding the mean absolute deviation and showing that the bias decreases
monotonically with the number of samples.
4.3
Training an inference network
Late in training, once the attention model has learned an effective policy, the prior distribution
p(a | I, ?) is a reasonable choice for the proposal distribution q(a | y, I), as it puts significant probability mass on good actions. But early in training, the model may have only a small probability of
choosing a good set of glimpses, and the prior may have little overlap with the posterior. To deal
with this, we train an inference network to predict, given the observations as well as the class label,
where the network should look to correctly predict that class (see Figure 1). With this additional
information, the inference network can act as a ?teacher? for the attention policy.
The inference network predicts a sequence of glimpses stochastically:
q(a | y, I, ?) =
N
Y
n=1
q(an | y, I, ?, a1:n?1 ).
(11)
This distribution is analogous to the prior, except that each decision also takes into account the class
label y. We denote the parameters for the inference network as ?. During training, the prediction net?m ? q(a | y, I, ?)
work is learnt by following the gradient of the estimator in Eqn. 8 with samples a
drawn from the inference network output.
5
Our training procedure for the inference network parallels the wake-q step of reweighted wakesleep [9]. Intuitively, the inference network is most useful if it puts large probability density over
locations in an image that are most informative for predicting class labels. We therefore train the
inference weights ? to minimize the Kullback-Leibler divergence between the recognition model
prediction q(a | y, I, ?) and posterior distribution from the attention model p(a | y, I, ?):
X
min DKL (p k q) = min ?
p(a | y, I, ?) log q(a | y, I, ?).
(12)
?
?
a
The gradient update for the recognition weights can be obtained by taking the derivatives of Eq. (12)
with respect to the recognition weights ?:
?DKL (p k q)
? log q(a | y, I, ?)
= Ep(a | y,I,?)
.
(13)
??
??
Since the posterior expectation is intractable, we estimate it with importance sampling. In fact, we
reuse the importance weights computed for the prediction network update (see Eqn. 7) to obtain the
following gradient estimate for the recognition network:
M
X
?DKL (p k q)
? log q(?
am | y, I, ?)
?
.
wm
??
??
m=1
4.4
(14)
Control variates
The speed of convergence of gradient ascent with the gradients defined in Eqns. 8 and 14 suffers
from high variance of the stochastic gradient estimates. Past work using similar gradient updates has
found significant benefit from the use of control variates, or reward baselines, to reduce the variance
[17, 10, 3, 11, 2]. Choosing effective control variates for the stochastic gradient estimators amounts
to finding a function that is highly correlated with the gradient vectors, and whose expectation is
known or tractable to compute [10, 24]. Unfortunately, a good choice of control variate is highly
model-dependent.
We first note that:
p(a | I, ?) ? log p(a | I, ?)
Eq(a | y,I,?)
= 0,
q(a | y, I, ?)
??
Eq(a | y,I,?)
? log q(a | y, I, ?)
= 0. (15)
??
The terms inside the expectation are very similar to the gradients in Eqns. 8 and 14, suggesting
that stochastic estimates of these expectations would make good control variates. To increase the
correlation between the gradients and the control variates, we reuse the same set of samples and
importance weights for the gradients and control variates. Using these control variates results in the
gradient estimates for the prediction and recognition networks, we obtain:
?
?
p(?
am | I,?)
M
X
m
? log p(a | I, ?)
? log p(?
am | I, ?)
?wm ? P q(?a | y,I,?)
?
?
,
(16)
M
p(?
ai | I,?)
??
??
i
m=1
i=1 q(?
a | y,I,?)
M
X
?DKL (p k q)
1
?
wm ?
??
M
m=1
? log q(?
am | y, I, ?)
.
??
(17)
Our use of control variates does not bias the gradient estimates (beyond the bias which is present
due to importance sampling). However, as we show in the experiments, the resulting estimates have
much lower variance than those of Eqns. 8 and 14.
Following the analogy with reinforcement learning highlighted by [11], these control variates can
also be viewed as reward baselines:
bp =
bq =
p(a | I,?)
q(a | y,I,?) Eq(a | y,I,?)
M ? Eq(a | y,I,?)
h
[p(y | a, I, ?)]
p(a | I,?)
q(a | y,I,?) Eq(a | y,I,?)
Ep(a | I,?) [p(y | a, I, ?)]
1
=
,
M ? Ep(a | I,?) [p(y | a, I, ?)]
M
i?
[p(y | a, I, ?)]
p(?
am | I,?)
q(?
am | y,I,?)
PM p(?ai | I,?)
i=1 q(?
ai | y,I,?)
,
(18)
(19)
where M is the number of samples drawn for proposal q.
Figure 2: Left: Training error as a function of the number of updates. Middle: variance of the gradient
estimates. Right: effective sample size (max = 5). Horizontal axis: thousands of updates. VAR: variational
baseline; WS-RAM: our proposed method; +q: uses the inference networks for the proposal distribution; +c:
uses control variates.
4.5 Encouraging exploration
Similarly to other methods based on reinforcement learning, stochastic attention networks face the
problem of encouraging the method to explore different actions. Since the gradient in Eqn. 8 only
rewards or punishes glimpse sequences which are actually performed, any part of the space which
is never visited will receive no reward signal. [2] introduced several heuristics to encourage exploration, including: (1) raising the temperature of the proposal distribution, (2) regularizing the
attention policy to encourage viewing all image locations, and (3) adding a regularization term to
encourage high entropy in the action distribution. We have implemented all three heuristics for
the WS-RAM and for the baselines. While these heuristics are important for good performance of
the baselines, we found that they made little difference to the WS-RAM because the basic method
already explores adequately.
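As a hedged illustration of heuristic (3) only, an entropy bonus over a discrete action distribution
might be added to the objective as follows; beta and the loss structure here are our own placeholders,
not values from the paper.

    import numpy as np

    def entropy_regularized_loss(task_loss, action_probs, beta=0.01):
        """Add an entropy bonus over discrete glimpse actions (heuristic 3).

        action_probs: shape (T, A), one distribution over A actions per glimpse.
        """
        entropy = -(action_probs * np.log(action_probs + 1e-12)).sum(axis=-1).mean()
        return task_loss - beta * entropy             # reward high-entropy policies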
5 Experimental results
To measure the effectiveness of the proposed WS-RAM method, we first investigated a toy classification task involving a variant of the MNIST handwritten digits dataset [25] where transformations
were applied to the images. We then evaluated the proposed method on a substantially more difficult
image caption generation task using the Flickr8k [26] dataset.
5.1 Translated scaled MNIST
We generated a dataset of randomly translated and scaled handwritten digits from the MNIST
dataset [25]. Each digit was placed in a 100x100 black background image at a random location
and scale. The task was to identify the digit class. The attention models were allowed four glimpses
before making a classification prediction. The goal of this experiment was to evaluate the effectiveness of our proposed WS-RAM model compared with the variational approach of [2].
For both the WS-RAM and the baseline, the architecture was a stochastic attention model which
used ReLU units in all recurrent layers. The actions included both continuous and discrete latent
variables, corresponding to glimpse scale and location, respectively. The distribution over actions
was represented as a Gaussian random variable for the location and an independent multinomial
random variable for the scale. All networks were trained using Adam [27], with the learning rate set
to the highest value that allowed the model to successfully converge to a sensible attention policy.
The classification performance results are shown in Table 1. In Figure 2, the WS-RAM is compared
with the variational baseline, each using the same number of samples (in order to make computation
time roughly equivalent). We also show comparisons against ablated versions of the WS-RAM
where the control variates and inference network were removed. When the inference network was
removed, the prior p(a | I, θ) was used for the proposal distribution.
In addition to the classification results, we measured the effective sample size (ESS) of our method
with and without control variates and the inference network. ESS is a standard metric for evaluating
importance samplers, and is defined as 1/\sum_m (w_m)^2, where w_m denotes the normalized importance
weights. Results are shown in Figure 2. Using the inference network reduced the variances in
gradient estimation, although this improvement did not reflect itself in the ESS. Control variates
improved both metrics.
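The ESS computation itself is a one-liner; a minimal sketch:

    import numpy as np

    def effective_sample_size(w):
        """ESS = 1 / sum_m (w_m)^2 for normalized importance weights w."""
        return 1.0 / np.sum(w ** 2)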
Table 1: Classification error rate comparison for the attention models trained using different
algorithms on translated scaled MNIST. The numbers are reported after 10 million updates using
5 samples.

                 Test err. (no c.v.)   Test err. (+c.v.)
    VAR               3.11%                 1.81%
    WS-RAM            4.23%                 1.85%
    WS-RAM + q        2.59%                 1.62%

Figure 3: The effect of the exploration heuristics on the variational baseline and the WS-RAM.
(Curves: training error for VAR+c and WS-RAM+q+c, each with and without exploration.)

Table 2: BLEU score performance on the Flickr8K dataset for our WS-RAM and the variational
method.

                   BLEU1   BLEU2   BLEU3   BLEU4
    VAR             62.3    41.6    26.9    17.2
    WS-RAM+Qnet     61.1    40.4    26.9    17.8

Figure 4: Training negative log-likelihood on Flickr8K for the first 10,000 updates. See Figure 2
for the labels.
In Section 4.5, we described heuristics which encourage the models to explore the action space. Figure 3 compares the training with and without these heuristics. Without the heuristics, the variational
method quickly fell into a local minimum where the model predicted only one glimpse scale over
all images; the exploration heuristics fixed this problem. By contrast, the WS-RAM did not appear
to have this problem, so the heuristics were not necessary.
5.2 Generating captions using multi-scale attention
We also applied the WS-RAM method to learn a stochastic attention model similar to [5] for generating image captions. We report results on the widely-used Flickr8k dataset. The training/valid/test
split followed the same protocol as used in previous work [28].
The goal of this experiment was to examine the improvement of the WS-RAM over the variational
method for learning with realistic images. Similarly to [5], we first ran a convolutional network,
and the attention network then determined which part of the convolutional net representation to
attend to. The attention network predicted both which layer to attend to and a location within the
layer, in contrast with [5], where the scale was held fixed. Because a convolutional net shrinks
the representation with max-pooling, choosing a layer is analogous to choosing a scale. At each
glimpse, the inference network was given the immediately preceding word in the target sentences.
We compare the BLEU scores of our WS-RAM and the variational method in Table 2. Figure 4
shows training curves for both models. We observe that WS-RAM obtained similar performance to
the variational method, but trained more efficiently.
6 Conclusions
In this paper, we introduced the Wake-Sleep Recurrent Attention Model (WS-RAM), an efficient
method for training stochastic attention models. This method improves upon prior work by using the
reweighted wake-sleep algorithm [9] to approximate expectations from the posterior over glimpses.
We also introduced control variates to reduce the variability of the stochastic gradients. Our method
reduces the variance in the gradient estimates and accelerates training of attention networks for both
invariant handwritten digit recognition and image caption generation.
Acknowledgments
This work was supported by the Fields Institute, Samsung, ONR Grant N00014-14-1-0232 and the
hardware donation of NVIDIA Corporation.
References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In Neural Information Processing Systems, 2012.
[2] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. In International
Conference on Learning Representations, 2015.
[3] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Neural
Information Processing Systems, 2014.
[4] Y. Tang, N. Srivastava, and R. Salakhutdinov. Learning generative models with visual attention. In Neural
Information Processing Systems, 2014.
[5] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend,
and tell: neural image caption generation with visual attention. In International Conference on Machine
Learning, 2015.
[6] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In International Conference on Learning Representations, 2015.
[7] W. Zaremba and I. Sutskever. Reinforcement learning neural Turing machines. arXiv:1505.00521, 2015.
[8] P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel. The Helmholtz machine. Neural Computation,
7:889–904, 1995.
[9] J. Bornschein and Y. Bengio. Reweighted wake-sleep. arXiv:1406.2751, 2014.
[10] J. Paisley, D. M. Blei, and M. I. Jordan. Variational Bayesian inference with stochastic search. In International Conference on Machine Learning, 2012.
[11] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In International
Conference on Machine Learning, 2014.
[12] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann
machine. In Neural Information Processing Systems, 2010.
[13] M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas. Learning where to attend with deep architectures
for image tracking. Neural Computation, 24(8):2151–84, April 2012.
[14] A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2014.
[15] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: a recurrent neural network for image
generation. arXiv:1502.04623, 2015.
[16] Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 1992.
[17] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Machine Learning, 8:229–256, 1992.
[18] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning
Representations, 2014.
[19] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in
deep generative models. In International Conference on Machine Learning, 2014.
[20] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–59, November 1998.
[21] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In International
Conference on Computer Vision, 2009.
[22] Y. Tang and R. Salakhutdinov. Learning stochastic feedforward neural networks. In Neural Information
Processing Systems, 2013.
[23] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv:1509.00519, 2015.
[24] Lex Weaver and Nigel Tao. The optimal reward baseline for gradient-based reinforcement learning.
In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence, pages 538–545.
Morgan Kaufmann Publishers Inc., 2001.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[26] Micah Hodosh, Peter Young, and Julia Hockenmaier. Framing image description as a ranking task: Data,
models and evaluation metrics. Journal of Artificial Intelligence Research, pages 853–899, 2013.
[27] D. Kingma and J. L. Ba. Adam: a method for stochastic optimization. arXiv:1412.6980, 2014.
[28] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions.
arXiv preprint arXiv:1412.2306, 2014.
Backpropagation for
Energy-Efficient Neuromorphic Computing
Steve K. Esser
IBM Research?Almaden
650 Harry Road, San Jose, CA 95120
[email protected]
Rathinakumar Appuswamy
IBM Research?Almaden
650 Harry Road, San Jose, CA 95120
[email protected]
Paul A. Merolla
IBM Research?Almaden
650 Harry Road, San Jose, CA 95120
[email protected]
John V. Arthur
IBM Research?Almaden
650 Harry Road, San Jose, CA 95120
[email protected]
Dharmendra S. Modha
IBM Research?Almaden
650 Harry Road, San Jose, CA 95120
[email protected]
Abstract
Solving real world problems with embedded neural networks requires both training algorithms that achieve high performance and compatible hardware that runs
in real time while remaining energy efficient. For the former, deep learning using
backpropagation has recently achieved a string of successes across many domains
and datasets. For the latter, neuromorphic chips that run spiking neural networks
have recently achieved unprecedented energy efficiency. To bring these two advances together, we must first resolve the incompatibility between backpropagation, which uses continuous-output neurons and synaptic weights, and neuromorphic designs, which employ spiking neurons and discrete synapses. Our approach
is to treat spikes and discrete synapses as continuous probabilities, which allows
training the network using standard backpropagation. The trained network naturally maps to neuromorphic hardware by sampling the probabilities to create one
or more networks, which are merged using ensemble averaging. To demonstrate,
we trained a sparsely connected network that runs on the TrueNorth chip using the
MNIST dataset. With a high performance network (ensemble of 64), we achieve
99.42% accuracy at 108 µJ per image, and with a high efficiency network (ensemble of 1) we achieve 92.7% accuracy at 0.268 µJ per image.
1 Introduction
Neural networks today are achieving state-of-the-art performance in competitions across a range of
fields [1][2][3]. Such success raises hope that we can now begin to move these networks out of
the lab and into embedded systems that can tackle real world problems. This necessitates a shift
DARPA: Approved for Public Release, Distribution Unlimited
in thinking to system design, where both neural network and hardware substrate must collectively
meet performance, power, space, and speed requirements.
On a neuron-for-neuron basis, the most efficient substrates for neural network operation today are
dedicated neuromorphic designs [4][5][6][7]. To achieve high efficiency, neuromorphic architectures can use spikes to provide event based computation and communication that consumes energy
only when necessary, can use low precision synapses to colocate memory with computation keeping
data movement local and allowing for parallel distributed operation, and can use constrained connectivity to implement neuron fan-out efficiently thus dramatically reducing network traffic on-chip.
However, such design choices introduce an apparent incompatibility with the backpropagation algorithm [8] used for training today?s most successful deep networks, which uses continuous-output
neurons and high-precision synapses, and typically operates with no limits on the number of inputs
per neuron. How then can we build systems that take advantage of algorithmic insights from deep
learning, and the operational efficiency of neuromorphic hardware?
As our main contribution here, we demonstrate a learning rule and a network topology that reconciles
the apparent incompatibility between backpropagation and neuromorphic hardware. The essence
of the learning rule is to train a network offline with hardware supported connectivity, as well as
continuous valued input, neuron output, and synaptic weights, but values constrained to the range
[0, 1]. We further impose that such constrained values represent probabilities, either of a spike
occurring or of a particular synapse being on. Such a network can be trained using backpropagation,
but also has a direct representation in the spiking, low synaptic precision deployment system, thereby
bridging these two worlds. The network topology uses a progressive mixing approach, where each
neuron has access to a limited set of inputs from the previous layer, but sources are chosen such that
neurons in successive layers have access to progressively more network input.
Previous efforts have shown success with subsets of the elements we bring together here. Backpropagation has been used to train networks with spiking neurons but with high-precision weights
[9][10][11][12], and the converse, networks with trinary synapses but with continuous output neurons [13]. Other probabilistic backpropagation approaches have been demonstrated for networks
with binary neurons and binary or trinary synapses but full inter-layer connectivity [14][15].
The work presented here is novel in that i) we demonstrate for the first time an offline training
methodology using backpropagation to create a network that employs spiking neurons, synapses requiring fewer bits of precision than even trinary weights, and constrained connectivity, ii) we achieve
the best accuracy to date on MNIST (99.42%) when compared to networks that use spiking neurons,
even with high precision synapses (99.12%) [12], as well as networks that use binary synapses and
neurons (97.88%) [15], and iii) we demonstrate the network running in real-time on the TrueNorth
chip [7], achieving by far the best published power efficiency for digit recognition (4 µJ per classification at 95% accuracy running 1000 images per second) compared to other low power approaches
(6 mJ per classification at 95% accuracy running 50 images per second) [16].
2 Deployment Hardware
We use the TrueNorth neurosynaptic chip [7] as our example deployment system, though the approach here could be generalized to other neuromorphic hardware [4][5][6]. The TrueNorth chip
consists of 4096 cores, with each core containing 256 axons (inputs), a 256 ? 256 synapse crossbar, and 256 spiking neurons. Information flows via spikes from a neuron to one axon between
any two cores, and from the axon to potentially all neurons on the core, gated by binary synapses
in the crossbar. Neurons can be considered to take on a variety of dynamics [17], including those
described below. Each axon is assigned 1 of 4 axon types, which is used as an index into a lookup
table of s-values, unique to each neuron, that provides a signed 9-bit integer synaptic strength to the
corresponding synapse. This approach requires only 1 bit per synapse for the on/off state and an
additional 0.15 bits per synapse for the lookup table scheme.
3 Network Training
In our approach, we employ two types of multilayer networks. The deployment network runs on a
platform supporting spiking neurons, discrete synapses with low precision, and limited connectivity.
The training network is used to learn binary synaptic connectivity states and biases. This network
shares the same topology as the deployment network, but represents input data, neuron outputs,
and synaptic connections using continuous values constrained to the range [0, 1] (an overview is
provided in Figure 1 and Table 1). These values correspond to probabilities of a spike occurring
or of a synapse being ?on?, providing a means of mapping the training network to the deployment
network, while providing a continuous and differentiable space for backpropagation. Below, we
describe the deployment network, our training methodology, and our procedure for mapping the
training network to the deployment network.
3.1 Deployment network
Our deployment network follows a feedforward methodology where neurons are sequentially updated from the first to the last
layer. Input to the network is represented using stochastically generated spikes, where the
value of each input unit is 0 or 1 with some
probability. We write this as P(x_i = 1) ≜ x̄_i, where x_i is the spike state of input unit i and
x̄_i is a continuous value in the range [0, 1] derived by re-scaling the input data (pixels). This
scheme allows representation of data using binary spikes, while preserving data precision in
the expectation.
Figure 1: Diagram showing input, synapses, and output for one neuron in the deployment and
training network. For simplicity, only three synapses are depicted.
Summed neuron input is computed as
I_j = \sum_i x_i c_{ij} s_{ij} + b_j, \qquad (1)
where j is the target neuron index, cij is a binary indicator variable representing whether a
synapse is on, sij is the synaptic strength, and bj is the bias term. This is identical to common practice in neural networks, except that we have factored the synaptic weight into cij and sij , such that
we can focus our learning efforts on the former for reasons described below. The neuron activation
function follows a history-free thresholding equation
n_j = \begin{cases} 1 & \text{if } I_j > 0, \\ 0 & \text{otherwise.} \end{cases}
These dynamics are implemented in TrueNorth by setting each neuron's leak equal to the learned
bias term (dropping any fractional portion), its threshold to 0, its membrane potential floor to 0, and
setting its synapse parameters using the scheme described below.
We represent each class label using multiple output neurons in the last layer of the network, which
we found improves prediction performance. The network prediction for a class is simply the average
of the output of all neurons assigned to that class.
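A minimal NumPy sketch of this forward pass for a single layer, with hardware details (axon types,
the lookup-table scheme) abstracted into a dense strength matrix; the shapes and names are our own.

    import numpy as np

    def deployment_layer(x, c, s, b):
        """One feedforward layer of the deployment network.

        x: binary input spikes, shape (N_in,).
        c: binary synapse on/off states, shape (N_in, N_out).
        s: synapse strengths in {-1, +1}, shape (N_in, N_out).
        b: integer biases, shape (N_out,).
        """
        I = (x[:, None] * c * s).sum(axis=0) + b      # Eq. (1)
        return (I > 0).astype(np.int8)                # history-free threshold neurons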
Table 1: Network components

                          Deployment Network                          Training Network
                          Variable   Values    Correspondence        Variable   Values
    Network input         x          {0, 1}    P(x = 1) ≜ x̄          x̄          [0, 1]
    Synaptic connection   c          {0, 1}    P(c = 1) ≜ c̄          c̄          [0, 1]
    Synaptic strength     s          {−1, 1}   s ↔ s                 s          {−1, 1}
    Neuron output         n          {0, 1}    P(n = 1) ≜ n̄          n̄          [0, 1]
3.2 Training network
Training follows the backpropagation methodology by iteratively i) running a forward pass from
the first layer to the last layer, ii) comparing the network output to desired output using a loss
function, iii) propagating the loss backwards through the network to determine the loss gradient
at each synapse and bias term, and iv) using this gradient to update the network parameters. The
training network forward pass is a probabilistic representation of the deployment network forward
pass.
Synaptic connections are represented as probabilities using c̄_ij, where P(c_ij = 1) ≜ c̄_ij, while
synaptic strength is represented using s_ij as in the deployment network. It is assumed that s_ij
can be drawn from a limited set of values and we consider the additional constraint that it is set
in "blocks" such that multiple synapses share the same value, as done in TrueNorth for efficiency.
While it is conceivable to learn optimal values for sij under such conditions, this requires stepwise
changes between allowed values and optimization that is not local to each synapse. We take a simpler
approach here, which is to learn biases and synapse connection probabilities, and to intelligently fix
the synapse strengths using an approach described in the Network Initialization section.
Input to the training network is represented using x̄_i, which is the probability of an input spike
occurring in the deployment network. For neurons, we note that Equation 1 is a summation of
weighted Bernoulli variables plus a bias term. If we assume independence of these inputs and have
sufficient numbers, then we can approximate the probability distribution of this summation as a
Gaussian with mean
\mu_j = b_j + \sum_i \bar{x}_i \bar{c}_{ij} s_{ij}
and variance
\sigma_j^2 = \sum_i \bar{x}_i \bar{c}_{ij} (1 - \bar{x}_i \bar{c}_{ij}) s_{ij}^2. \qquad (2)
We can then derive the probability of such a neuron firing using the complementary cumulative
distribution function of a Gaussian:
\bar{n}_j = 1 - \frac{1}{2}\left[ 1 + \mathrm{erf}\!\left( \frac{\alpha - \mu_j}{\sqrt{2\sigma_j^2}} \right) \right], \qquad (3)
where erf is the error function, α = 0 and P(n_j = 1) ≜ n̄_j. For layers after the first, x̄_i is replaced
by the input from the previous layer, n̄_i, which represents the probability that a neuron produces a
spike.
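A sketch of the training-network forward pass implied by Eqs. (2) and (3), under our own naming
and shape conventions:

    import numpy as np
    from scipy.special import erf

    def spike_probability(x_bar, c_bar, s, b, alpha=0.0):
        """Spike probabilities of one layer under the Gaussian approximation.

        x_bar: input spike probabilities, shape (N_in,).
        c_bar: synapse connection probabilities, shape (N_in, N_out).
        s, b: strengths and biases as in the deployment network.
        """
        p = x_bar[:, None] * c_bar                    # mean of each Bernoulli term
        mu = b + (p * s).sum(axis=0)                  # mean of summed input
        var = (p * (1.0 - p) * s ** 2).sum(axis=0)    # Eq. (2)
        return 1.0 - 0.5 * (1.0 + erf((alpha - mu) / np.sqrt(2.0 * var)))  # Eq. (3)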
A variety of loss functions are suitable for our approach, but we found that training converged the
fastest when using log loss,
X
E=?
[yk log(pk ) + (1 ? yk ) log(1 ? pk )] ,
k
where for each class k, yk is a binary class label that is 1 if the class is present and 0 otherwise, and
pk is the probability that the the average spike count for the class is greater than 0.5. Conveniently,
we can use the Gaussian approximation in Equation 3 for this, with ? = 0.5 and the mean and
variance terms set by the averaging process.
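The paper does not spell out the averaging algebra, so the following sketch encodes one plausible
reading: the class average of K independent neuron outputs has mean equal to the average of the
n̄ values and variance equal to the summed Bernoulli variances divided by K².

    import numpy as np
    from scipy.special import erf

    def class_probability(n_bar_class):
        """P(average spike count of a class > 0.5), via Eq. (3) with alpha = 0.5."""
        K = len(n_bar_class)
        m = n_bar_class.mean()                                   # mean of the average
        v = (n_bar_class * (1.0 - n_bar_class)).sum() / K ** 2   # variance of the average
        return 1.0 - 0.5 * (1.0 + erf((0.5 - m) / np.sqrt(2.0 * v)))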
The training network backward pass is an adaptation of backpropagation using the neuron and
synapse equations above. To get the gradient at each synapse, we use the chain rule to compute
\frac{\partial E}{\partial \bar{c}_{ij}} = \frac{\partial E}{\partial \bar{n}_j} \frac{\partial \bar{n}_j}{\partial \bar{c}_{ij}}.
For the bias, a similar computation is made by replacing c̄_ij in the above equation with b_j.
We can then differentiate Equation 3 to produce
\frac{\partial \bar{n}_j}{\partial \bar{c}_{ij}} = \frac{\bar{x}_i s_{ij}}{\sigma_j \sqrt{2\pi}} \, e^{-\frac{(\alpha - \mu_j)^2}{2\sigma_j^2}} - \frac{\bar{x}_i s_{ij}^2 - \bar{x}_i^2 \bar{c}_{ij} s_{ij}^2}{\sigma_j^3 \sqrt{2\pi}} \, (\alpha - \mu_j) \, e^{-\frac{(\alpha - \mu_j)^2}{2\sigma_j^2}}. \qquad (4)
As described below, we will assume that the synapse strengths to each neuron are balanced between
positive and negative values and that each neuron receives 256 inputs, so we can expect μ to be
close to zero, and α, n̄_i and c̄_ij to be much less than σ. Therefore, the right term of Equation 4,
containing the denominator σ_j³, can be expected to be much smaller than the left term containing the
denominator σ_j. Under these conditions, for computational efficiency we can approximate Equation
4 by dropping the right term and factoring out the remainder as
\frac{\partial \bar{n}_j}{\partial \bar{c}_{ij}} \approx \frac{\partial \bar{n}_j}{\partial \mu_j} \frac{\partial \mu_j}{\partial \bar{c}_{ij}},

where

\frac{\partial \bar{n}_j}{\partial \mu_j} = \frac{1}{\sigma_j \sqrt{2\pi}} \, e^{-\frac{(\alpha - \mu_j)^2}{2\sigma_j^2}},

and

\frac{\partial \mu_j}{\partial \bar{c}_{ij}} = \bar{x}_i s_{ij}.
A similar treatment can be used to show that the corresponding gradient with respect to the bias term
equals one.
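Putting the kept term of Eq. (4) together with the chain rule gives the synapse-probability gradient;
a sketch under the same conventions as above:

    import numpy as np

    def connection_gradient(x_bar, c_bar, s, b, dE_dn_bar, alpha=0.0):
        """Approximate dE/dc_bar via the chain rule and the kept term of Eq. (4)."""
        p = x_bar[:, None] * c_bar
        mu = b + (p * s).sum(axis=0)
        var = (p * (1.0 - p) * s ** 2).sum(axis=0)
        dn_dmu = np.exp(-(alpha - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        # dE/dc_bar_ij ~= (dE/dn_bar_j) (dn_bar_j/dmu_j) (x_bar_i s_ij)
        return dE_dn_bar * dn_dmu * (x_bar[:, None] * s)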
The network is updated using the loss gradient at each synapse and bias term. For each iteration,
synaptic connection probability changes according to
\Delta \bar{c}_{ij} = -\eta \, \frac{\partial E}{\partial \bar{c}_{ij}},
where η is the learning rate. Any synaptic connection probabilities that fall outside of the range
[0, 1] as a result of the update rule are "snapped" to the nearest valid value. Changes to the bias
term are handled in a similar fashion, with values clipped to fall in the range [−255, 255], the largest
values supported using TrueNorth neuron parameters.
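The clipped update is then a few lines; eta = 0.1 below simply mirrors the initial learning rate
quoted in the next paragraph, and the function name is our own:

    import numpy as np

    def update_parameters(c_bar, b, dE_dc_bar, dE_db, eta=0.1):
        """One gradient step with the snapping/clipping described above."""
        c_bar = np.clip(c_bar - eta * dE_dc_bar, 0.0, 1.0)   # keep probabilities valid
        b = np.clip(b - eta * dE_db, -255.0, 255.0)          # TrueNorth-supported range
        return c_bar, b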
The training procedure described here is amenable to methods and heuristics applied in standard
backpropagation. For the results shown below, we used mini batch size 100, momentum 0.9, dropout
0.5 [18], learning rate decay on a fixed schedule across training iterations starting at 0.1 and multiplying by 0.1 every 250 epochs, and transformations of the training data for each iteration with
rotation up to ±15°, shift up to ±5 pixels and rescale up to ±15%.
3.3 Mapping training network to deployment network
Training is performed offline, and the resulting network is mapped to the deployment network for
hardware operation. For deployment, depending on system requirements, we can utilize an ensemble
of one or more samplings of the training network to increase overall output performance. Unlike
other ensemble methods, we train only once then sample the training network for each member. The
system output for each class is determined by averaging across all neurons in all member networks
assigned to the class. Synaptic connection states are set on or off according to P(c_ij = 1) ≜ c̄_ij,
using independent random number draws for each synapse in each ensemble member. Data is
converted into a spiking representation for input using P(x_i = 1) ≜ x̄_i, using independent random
number draws for each input to each member of the ensemble.
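Sampling a deployment ensemble from the trained probabilities is direct; a sketch, with a random
c_bar standing in for trained values:

    import numpy as np

    def sample_deployment(c_bar, rng):
        """Draw one deployment network: c_ij ~ Bernoulli(c_bar_ij)."""
        return (rng.random(c_bar.shape) < c_bar).astype(np.int8)

    def sample_input(x_bar, rng):
        """Draw one spiking presentation of the input: x_i ~ Bernoulli(x_bar_i)."""
        return (rng.random(x_bar.shape) < x_bar).astype(np.int8)

    rng = np.random.default_rng(0)
    c_bar = rng.random((256, 256))                   # stand-in for trained probabilities
    ensemble = [sample_deployment(c_bar, rng) for _ in range(64)]   # e.g., 64 members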
3.4 Network initialization
The approach for network initialization described here allows us to optimize for efficient neuromorphic hardware that employs less than 2 bits per synapse. In our approach, each synaptic connection
probability is initialized from a uniform random distribution over the range [0, 1]. To initialize
synapse strength values, we begin from the principle that each core should maximize information
transfer by maximizing information per neuron and minimizing redundancy between neurons. Such
methods have been explored in detail in approaches such as infomax [19]. While the first of these
goals is data dependent, we can pursue the second at initialization time by tuning the space of possible weights for a core, represented by the matrix of synapse strength values, S.
In our approach, we wish to minimize redundancy
between neurons on a core by attempting to induce a
product distribution on the outputs for every pair of
neurons. To simplify the problem, we note that the
summed weighted inputs to a pair of neurons is well-approximated by a bi-variate Gaussian distribution.
Thus, forcing the covariance between the summed
weighted inputs to zero guarantees that the inputs are
independent. Furthermore, since functions of pairwise independent random variables remain pair-wise
independent, the neuron outputs are guaranteed to be
independent.
Figure 2: Synapse strength values depicted as an axons (rows) × neurons (columns) array. The
learning procedure fixes these values when the network is initialized and learns the probability that
each synapse is in a transmitting state. The blocky appearance of the strength matrix is the result of
the shared synaptic strength approach used by TrueNorth to reduce memory footprint.

The summed weighted input to the j-th neuron is given by Equation 1. It is desirable for the purposes
of maintaining balance in neuron dynamics to configure its weights using a mix of positive and
negative values that sum to zero. Thus for all j,

\sum_i s_{ij} = 0, \qquad (5)

which implies that E[I_j] ≈ 0, assuming inputs and synaptic connection states are both decorrelated
and the bias term is near 0. This simplifies the covariance between the inputs to any two neurons on
a core to

E[I_j I_r] = E\!\left[ \sum_{i,q} x_i c_{ij} s_{ij} \, x_q c_{qr} s_{qr} \right].
Rearranging terms, we get
E[I_j I_r] = \sum_i c_{ij} s_{ij} c_{ir} s_{ir} E[x_i^2] + \sum_i c_{ij} s_{ij} \sum_{q \neq i} c_{qr} s_{qr} E[x_i x_q]. \qquad (6)
Next, we note from the equation for covariance that E[x_i x_q] = ρ(x_i, x_q) + E[x_i]E[x_q]. Under
the assumption that inputs have equal mean and variance, then for any i, E[x_i²] = β, where β =
ρ(x_i, x_i) + E[x_i]E[x_i] is a constant. Further assuming that the covariance between x_i and x_q where
i ≠ q is the same for all inputs, then E[x_i x_q] = γ, where γ = ρ(x_i, x_q) + E[x_i]E[x_q] is a constant.
Using this and Equation (5), Equation 6 becomes

E[I_j I_r] = \beta \langle c_j s_j, c_r s_r \rangle + \gamma \sum_i c_{ij} s_{ij} (-c_{ir} s_{ir})
           = \beta \langle c_j s_j, c_r s_r \rangle - \gamma \langle c_j s_j, c_r s_r \rangle
           = (\beta - \gamma) \langle c_j s_j, c_r s_r \rangle.
So minimizing the absolute value of the inner product between columns of W forces Ij and Ir to be
maximally uncorrelated under the constraints.
Inspired by this observation, we a priori (i.e., without any knowledge of the input data) choose the
strength values such that the absolute value of the inner product between columns of the effective
weight matrix is minimized, and the sum of effective weights to each neuron is zero. Practically,
this is achieved by assigning half of each neuron?s s-values to ?1 and the other half to 1, balancing
the possible permutations of such assignments so they occur as equally as possible across neurons
on a core, and evenly distributing the four possible axon types amongst the axons on a core. The
resulting matrix of synaptic strength values can be seen in Figure 2. This configuration thus provides
an optimal weight subspace, given the constraints, in which backpropagation can operate in a data-driven fashion to find desirable synaptic on/off states.
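A sketch of one way to realize this initialization; the contiguous type blocks and the cycling of the
six balanced rows are our simplified reading of the description and of Figure 2, not a specification
from the paper.

    import itertools
    import numpy as np

    def init_strengths(n_axons=256, n_neurons=256):
        """Balanced {-1, +1} strength matrix via axon types and per-neuron rows.

        Four axon types in contiguous blocks; each neuron's type-to-strength
        row holds two -1s and two +1s, and the six balanced rows are cycled
        across neurons so every column of S sums to zero.
        """
        axon_type = np.repeat(np.arange(4), n_axons // 4)
        rows = [p for p in itertools.product((-1, 1), repeat=4) if sum(p) == 0]
        S = np.empty((n_axons, n_neurons), dtype=np.int8)
        for j in range(n_neurons):
            S[:, j] = np.asarray(rows[j % len(rows)], dtype=np.int8)[axon_type]
        return S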
Figure 3: A) Two network configurations used for the results described here, a 5 core network
designed to minimize core count and a 30 core network designed to maximize accuracy. B) Board
with a socketed TrueNorth chip used to run the deployment networks. The chip is 4.3 cm², runs
in real time (1 ms neuron updates), and consumes 63 mW running a benchmark network that uses
all of its 1 million neurons [7]. C) Measured accuracy and measured energy for the two network
configurations running on the chip. Ensemble size is shown to the right of each data point.
4 Network topology
The network topology is designed to support neurons with responses to local, regional or global
features while respecting the "core-to-core" connectivity of the TrueNorth architecture, namely
that all neurons on a core share access to the same set of inputs, and that the number of such inputs
is limited. The network uses a multilayer feedforward scheme, where the first layer consists of input
elements in a rows ? columns ? channels array, such as an image, and the remaining layers consist
of TrueNorth cores. Connections between layers are made using a sliding window approach.
Input to each core in a layer is drawn from an R × R × F input window (Figure 3A), where R
represents the row and column dimensions, and F represents the feature dimension. For input from
the first layer, rows and columns are in units of input elements and features are input channels,
while for input from the remaining layers rows and columns are in units of cores and features are
neurons. The first core in a given target layer locates its input window in the upper left corner of its
source layer, and the next core in the target layer shifts its input window to the right by a stride of
S. Successive cores slide the window over by S until the edge of the source layer is reached, then
the window is returned to the left, and shifted down by S and the process is repeated. Features are
sub-selected randomly, with the constraint that each neuron can only be selected by one target core.
We allow input elements to be selected multiple times. This scheme is similar in some respects to
that used by a convolution network, but we employ independent synapses for each location. The
specific networks employed here, and associated parameters, are shown in Figure 3A.
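A sketch of the window placement; the 28 × 28 input size below is our assumption (the raw MNIST
image size), under which the arithmetic reproduces the stated core counts: 4 + 1 = 5 cores for the
small network and 16 + 9 + 4 + 1 = 30 for the large one.

    def window_origins(src_rows, src_cols, R, S):
        """Top-left corners of the R x R input windows placed at stride S."""
        return [(r, c)
                for r in range(0, src_rows - R + 1, S)
                for c in range(0, src_cols - R + 1, S)]

    # R = 16, S = 12 gives the four first-layer windows of the 5 core network:
    print(window_origins(28, 28, 16, 12))   # [(0, 0), (0, 12), (12, 0), (12, 12)]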
5 Results
We applied the training method described above to the MNIST dataset [20], examining accuracy vs.
energy tradeoffs using two networks running on the TrueNorth chip (Figure 3B). The first network is
the smallest multilayer TrueNorth network possible for the number of pixels present in the dataset,
consisting of 5 cores distributed in 2 layers, corresponding to 512 neurons. The second network
was built with a primary goal of maximizing accuracy, and is composed of 30 cores distributed in
4 layers (Figure 3A), corresponding to 3840 neurons. Networks are configured with a first layer
using R = 16 and F = 1 in both networks, and S = 12 in the 5 core network and S = 4 in the 30
core network, while all subsequent layers in both networks use R = 2, F = 64, and S = 1. These
parameters result in a "pyramid" shape, where all cores from layer 2 to the final layer draw input
from 4 source cores and 64 neurons in each of those sources. Each core employs 64 neurons per
core it targets, up to a maximum of 256 neurons.
We tested each network in an ensemble of 1, 4, 16, or 64 members running on a TrueNorth chip in
real-time. Each image was encoded using a single time step (1 ms), with a different spike sampling
used for each input line targeted by a pixel. The instrumentation available measures active power
for the network in operation and leakage power for the entire chip, which consists of 4096 cores.
We report energy numbers as active power plus the fraction of leakage power for the cores in use.
The highest overall performance we observed of 99.42% was achieved with a 30 core trained network using a 64 member ensemble, for a total of 1920 cores, that was measured using 108 µJ per
classification. The lowest energy was achieved by the 5 core network operating in an ensemble of
1, that was measured using 0.268 µJ per classification while achieving 92.70% accuracy. Results
are plotted showing accuracy vs. energy in Figure 3C. Both networks classified 1000 images per
second.
6 Discussion
Our results show that backpropagation operating in a probabilistic domain can be used to train
networks that naturally map to neuromorphic hardware with spiking neurons and extremely low-precision synapses. Our approach can be succinctly summarized as constrain-then-train, where
we first constrain our network to provide a direct representation of our deployment system and
then train within those constraints. This can be contrasted with a train-then-constrain approach,
where a network agnostic to the final deployment system is first trained, and following training is
constrained through normalization and discretization methods to provide a spiking representation or
low precision weights. While requiring a customized training rule, the constrain-then-train approach
offers the advantage that a decrease in training error has a direct correspondence to a decrease in
error for the deployment network. Conversely, the train-then-constrain approach allows use of off
the shelf training methods, but unconstrained training is not guaranteed to produce a reduction in
error after hardware constraints are applied.
Looking forward, we see several avenues for expanding this approach to more complex datasets.
First, deep convolution networks [20] have seen a great deal of success by using backpropagation
to learn the weights of convolutional filters. The learning method introduced here is independent of
the specific network structure beyond the given sparsity constraint, and could certainly be adapted
for use in convolution networks. Second, biology provides a number of examples, such as the retina
or cochlea, for mapping high-precision sensory data into a binary spiking representation. Drawing
inspiration from such approaches may improve performance beyond the linear mapping scheme used
in this work. Third, this approach may also be adaptable to other gradient based learning methods,
or to methods with existing probabilistic components such as contrastive divergence [21]. Further,
while we describe the use of this approach with TrueNorth to provide a concrete use case, we see
no reason why this training approach cannot be used with other spiking neuromorphic hardware
[4][5][6].
We believe this work is particularly timely, as in recent years backpropagation has achieved a high
level of performance on a number of tasks reflecting real world tasks, including object detection in
complex scenes [1], pedestrian detection [2], and speech recognition [3]. A wide range of sensors
are found in mobile devices ranging from phones to automobiles, and platforms like TrueNorth
provide a low power substrate for processing that sensory data. By bridging backpropagation and
energy efficient neuromorphic computing, we hope that the work here provides an important step
towards building low-power, scalable brain-inspired systems with real world applicability.
Acknowledgments
This research was sponsored by the Defense Advanced Research Projects Agency under contracts
No. HR0011- 09-C-0002 and No. FA9453-15-C-0055. The views, opinions, and/or findings contained in this paper are those of the authors and should not be interpreted as representing the official
views or policies of the Department of Defense or the U.S. Government.
References
[1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision, 2015.
[2] W. Ouyang and X. Wang, "Joint deep learning for pedestrian detection," in International Conference on Computer Vision, pp. 2056–2063, 2013.
[3] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al., "DeepSpeech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014.
[4] B. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A. Chandrasekaran, J.-M. Bussat, R. Alvarez-Icaza, J. Arthur, P. Merolla, and K. Boahen, "Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations," Proceedings of the IEEE, vol. 102, no. 5, pp. 699–716, 2014.
[5] E. Painkras, L. Plana, J. Garside, S. Temple, F. Galluppi, C. Patterson, D. Lester, A. Brown, and S. Furber, "SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation," IEEE Journal of Solid-State Circuits, vol. 48, no. 8, pp. 1943–1953, 2013.
[6] T. Pfeil, A. Grübl, S. Jeltsch, E. Müller, P. Müller, M. A. Petrovici, M. Schmuker, D. Brüderle, J. Schemmel, and K. Meier, "Six networks on a universal neuromorphic computing substrate," Frontiers in Neuroscience, vol. 7, 2013.
[7] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668–673, 2014.
[8] D. Rumelhart, G. Hinton, and R. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[9] P. Moerland and E. Fiesler, "Neural network adaptations to hardware implementations," in Handbook of Neural Computation (E. Fiesler and R. Beale, eds.), New York: Institute of Physics Publishing and Oxford University Publishing, 1997.
[10] E. Fiesler, A. Choudry, and H. J. Caulfield, "Weight discretization paradigm for optical neural networks," in The Hague '90, 12–16 April, pp. 164–173, International Society for Optics and Photonics, 1990.
[11] Y. Cao, Y. Chen, and D. Khosla, "Spiking deep convolutional neural networks for energy-efficient object recognition," International Journal of Computer Vision, vol. 113, no. 1, pp. 54–66, 2015.
[12] P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer, "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing," in International Joint Conference on Neural Networks, 2015, in press.
[13] L. K. Muller and G. Indiveri, "Rounding methods for neural networks with low resolution synaptic weights," arXiv preprint arXiv:1504.05767, 2015.
[14] J. Zhao, J. Shawe-Taylor, and M. van Daalen, "Learning in stochastic bit stream neural networks," Neural Networks, vol. 9, no. 6, pp. 991–998, 1996.
[15] Z. Cheng, D. Soudry, Z. Mao, and Z. Lan, "Training binary multilayer neural networks for image classification using expectation backpropagation," arXiv preprint arXiv:1503.03562, 2015.
[16] E. Stromatias, D. Neil, F. Galluppi, M. Pfeiffer, S. Liu, and S. Furber, "Scalable energy-efficient, low-latency implementations of spiking deep belief networks on SpiNNaker," in International Joint Conference on Neural Networks, IEEE, 2015, in press.
[17] A. S. Cassidy, P. Merolla, J. V. Arthur, S. Esser, B. Jackson, R. Alvarez-Icaza, P. Datta, J. Sawada, T. M. Wong, V. Feldman, A. Amir, D. Rubin, F. Akopyan, E. McQuinn, W. Risk, and D. S. Modha, "Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores," in International Joint Conference on Neural Networks, 2013.
[18] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
[19] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[21] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.
5,372 | 5,863 | A Tractable Approximation to Optimal Point Process
Filtering: Application to Neural Encoding
Yuval Harel, Ron Meir
Department of Electrical Engineering
Technion – Israel Institute of Technology
Technion City, Haifa, Israel
{yharel@tx,rmeir@ee}.technion.ac.il
Manfred Opper
Department of Artificial Intelligence
Technical University Berlin
Berlin 10587, Germany
[email protected]
Abstract
The process of dynamic state estimation (filtering) based on point process observations is in general intractable. Numerical sampling techniques are often
practically useful, but lead to limited conceptual insight about optimal encoding/decoding strategies, which are of significant relevance to Computational Neuroscience. We develop an analytically tractable Bayesian approximation to optimal filtering based on point process observations, which allows us to introduce
distributional assumptions about sensory cell properties, that greatly facilitate the
analysis of optimal encoding in situations deviating from common assumptions of
uniform coding. The analytic framework leads to insights which are difficult to
obtain from numerical algorithms, and is consistent with experiments about the
distribution of tuning curve centers. Interestingly, we find that the information
gained from the absence of spikes may be crucial to performance.
1 Introduction
The task of inferring a hidden dynamic state based on partial noisy observations plays an important
role within both applied and natural domains. A widely studied problem is that of online inference
of the hidden state at a given time based on observations up to that time, referred to as filtering
[1]. For the linear setting with Gaussian noise and quadratic cost, the solution is well known since
the early 1960s both for discrete and continuous times, leading to the celebrated Kalman and the
Kalman-Bucy filters [2, 3], respectively. In these cases the exact posterior distribution is Gaussian,
resulting in closed form recursive update equations for the mean and variance of this distribution,
implying finite-dimensional filters. However, beyond some very specific settings [4], the optimal
filter is infinite-dimensional and impossible to compute in closed form, requiring either approximate
analytic techniques (e.g., the extended Kalman filter (e.g., [1]), the unscented filter [5]) or numerical
procedures (e.g., particle filters [6]). The latter usually require time discretization and a finite number
of particles, resulting in loss of precision. For many practical tasks (e.g., queuing [7] and optical
communication [8]) and biologically motivated problems (e.g., [9]) a natural observation process is
given by a point process observer, leading to a nonlinear infinite-dimensional optimal filter (except
in specific settings, e.g., finite state spaces, [7, 10]).
We consider a continuous-state and continuous-time multivariate hidden Markov process observed
through a set of sensory neuron-like elements characterized by multi-dimensional unimodal tuning
functions, representing the elements' average firing rate. The tuning function parameters are characterized by a distribution allowing much flexibility. The actual firing of each cell is random and is given by a Poisson process with rate determined by the input and by the cell's tuning function. Inferring the hidden state under such circumstances has been widely studied within the Computational
Neuroscience literature, mostly for static stimuli. In the more challenging and practically important
dynamic setting, much work has been devoted to the development of numerical sampling techniques
for fast and effective approximation of the posterior distribution (e.g., [11]). In this work we are less
concerned with algorithmic issues, and more with establishing closed-form analytic expressions for
an approximately optimal filter (see [10, 12, 13] for previous work in related, but more restrictive
settings), and using these to characterize the nature of near-optimal encoders, namely determining
the structure of the tuning functions for optimal state inference. A significant advantage of the closed
form expressions over purely numerical techniques is the insight and intuition that is gained from
them about qualitative aspects of the system. Moreover, the leverage gained by the analytic computation contributes to reducing the variance inherent to Monte Carlo approaches. Technically, given
the intractable infinite-dimensional nature of the posterior distribution, we use a projection method
replacing the full posterior at each point in time by a projection onto a simple family of distributions
(Gaussian in our case). This approach, originally developed in the Filtering literature [14, 15], and
termed Assumed Density Filtering (ADF), has been successfully used more recently in Machine
Learning [16, 17]. As far as we are aware, this is the first application of this methodology to point
process filtering.
The main contributions of the paper are the following: (i) Derivation of closed form recursive expressions for the continuous time posterior mean and variance within the ADF approximation, allowing
for the incorporation of distributional assumptions over sensory variables. (ii) Characterization of
the optimal tuning curves (encoders) for sensory cells in a more general setting than hitherto considered. Specifically, we study the optimal shift of tuning curve centers, providing an explanation
for observed experimental phenomena [18]. (iii) Demonstration that absence of spikes is informative, and that, depending on the relationship between the tuning curve distribution and the dynamic
process (the "prior"), may significantly improve the inference. This issue has not been emphasized
in previous studies focusing on homogeneous populations.
We note that most previous work in the field of neural encoding/decoding has dealt with static
observations and was based on the Fisher information, which often leads to misleading qualitative
results (e.g., [19, 20]). Our results address the full dynamic setting in continuous time, and provide
results for the posterior variance, which is shown to yield an excellent approximation of the posterior
Mean Square Error (MSE). Previous work addressing non-uniform distributions over tuning curve
parameters [21] used static univariate observations and was based on Fisher information rather than
the MSE itself.
2 Problem formulation
2.1 Dense Gaussian neural code
We consider a dynamical system with state X_t ∈ R^n, observed through an observation process N describing the firing patterns of sensory neurons in response to the process X. The observed process is a diffusion process obeying the Stochastic Differential Equation (SDE)

dX_t = A(X_t) dt + D(X_t) dW_t,   (t ≥ 0)

where A(·), D(·) are arbitrary functions and W_t is standard Brownian motion. The initial condition X_0 is assumed to have a continuous distribution with a known density. The observation process N is a marked point process [8] defined on [0, ∞) × R^m, meaning that each point, representing the firing of a neuron, is identified by its time t ∈ [0, ∞), and a mark θ ∈ R^m. In this work the mark is interpreted as a parameter of the firing neuron, which we refer to as the neuron's preferred stimulus. Specifically, a neuron with parameter θ is taken to have firing rate
λ(x; θ) = h exp( −(1/2) ‖Hx − θ‖²_{Σ_tc⁻¹} ),

in response to state x, where H ∈ R^{m×n} and Σ_tc ∈ R^{m×m}, m ≤ n, are fixed matrices, and the notation ‖y‖²_M denotes yᵀMy.
The inclusion of the matrix H allows using high-dimensional models where only some dimensions
are observed, for example when the full state includes velocities but only locations are directly
observable. We also define Nt , N ([0, t) ? Rm ), i.e., Nt is the total number of points up to time
t, regardless of their location ?, and denote by Nt the sequence of points up to time t ? formally,
2
the process N restricted to [0, t) ? Rm . Following [8], we use the notation
?
b
?
f (t, ?) N (dt ? d?) ,
a
U
X
1 {ti ? [a, b] , ?i ? U } f (ti , ?i ) ,
(1)
i
for U ? Rm and any function f , where (ti , ?i ) are respectively the time and mark of the i-th point
of the process N .
Consider a network with M sensory neurons, having random preferred stimuli θ = {θ_i}_{i=1}^M that are drawn independently from a common distribution with probability density f(θ), which we refer to as the population density. Positing a distribution for the preferred stimuli allows us to obtain simple closed form solutions, and to optimize over distribution parameters rather than over the higher-dimensional space of all the θ_i. The total rate of spikes in a set A ⊆ R^m, with preferred stimuli θ, given X_t = x, is then

λ_A(x; θ) = h Σ_i 1{θ_i ∈ A} exp( −(1/2) ‖Hx − θ_i‖²_{Σ_tc⁻¹} ).

Averaging over f(θ), we have the expected rate

λ̄_A(x) ≜ E λ_A(x; θ) = hM ∫_A f(θ) exp( −(1/2) ‖Hx − θ‖²_{Σ_tc⁻¹} ) dθ.

We now obtain an infinite neural network by considering the limit M → ∞ while holding λ₀ = hM fixed. In the limit we have λ_A(x; θ) → λ̄_A(x), so that the process N has density

λ_t(θ, X_t) = λ₀ f(θ) exp( −(1/2) ‖HX_t − θ‖²_{Σ_tc⁻¹} ),   (2)

meaning that the expected number of points in a small rectangle [t, t + dt] × Π_i [θ_i, θ_i + dθ_i], conditioned on the history X_[0,t], 𝒩_t, is λ_t(θ, X_t) dt Π_i dθ_i + o(dt, |dθ|). A finite network can be obtained as a special case by taking f to be a sum of delta functions.
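As a concrete illustration of this generative model, here is a minimal simulation sketch (ours, not from the paper; parameter values and names are assumptions) for the scalar case m = n = 1, H = 1: the state follows the SDE via Euler–Maruyama, spikes are drawn by thinning with the total rate ∫ λ_t(θ, x) dθ implied by (2) with Gaussian f, and each spike's mark is sampled from the resulting Gaussian over θ.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values only
a, d = -0.1, 2.0          # linear drift and diffusion: dX = a X dt + d dW
lam0 = 10.0               # base rate (λ0 = hM in the M → ∞ limit)
c, s2_pop = 0.0, 2.0      # population center and variance of f(θ)
s2_tc = 0.2               # tuning-curve width Σ_tc
dt, T = 1e-3, 10.0

x = 0.0
spikes = []               # list of (time, mark θ) pairs
for k in range(int(T / dt)):
    t = k * dt
    # Euler–Maruyama step of the state SDE
    x += a * x * dt + d * np.sqrt(dt) * rng.standard_normal()
    # Total rate ∫ λ_t(θ, x) dθ for Gaussian f and Gaussian tuning
    total_rate = lam0 * np.sqrt(s2_tc / (s2_tc + s2_pop)) * \
        np.exp(-0.5 * (x - c) ** 2 / (s2_tc + s2_pop))
    if rng.random() < total_rate * dt:    # thinning over a small time bin
        # Mark density ∝ f(θ) exp(-(x-θ)²/(2 s2_tc)): product of Gaussians in θ
        v = 1.0 / (1.0 / s2_pop + 1.0 / s2_tc)
        m = v * (c / s2_pop + x / s2_tc)
        spikes.append((t, m + np.sqrt(v) * rng.standard_normal()))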
For analytic tractability, we assume that f(θ) is Gaussian with center c and covariance Σ_pop, namely f(θ) = N(θ; c, Σ_pop). We refer to c as the population center. Previous work [22, 20, 23] considered the case where neurons' preferred stimuli uniformly cover the space, obtained by removing the factor f(θ) from (2). Then, the total firing rate ∫ λ_t(θ, x) dθ is independent of x, which simplifies the analysis, and leads to a Gaussian posterior (see [22]). We refer to the assumption that ∫ λ_t(θ, x) dθ is independent of x as uniform coding. The uniform coding case may be obtained from our model by taking the limit Σ_pop⁻¹ → 0 with λ₀ / √(det Σ_pop) held constant.
2.2 Optimal encoding and decoding
We consider the question of optimal encoding and decoding under the above model. The process of neural decoding is assumed to compute (exactly or approximately) the full posterior distribution of X_t given 𝒩_t. The problem of neural encoding is then to choose the parameters φ = (c, Σ_pop, Σ_tc), which govern the statistics of the observation process N, given a specific decoding scheme.

To quantify the performance of the encoding-decoding system, we summarize the result of decoding using a single estimator X̂_t = X̂_t(𝒩_t), and define the Mean Square Error (MSE) as ε_t ≜ trace[(X_t − X̂_t)(X_t − X̂_t)ᵀ]. We seek X̂_t and φ that solve min_φ lim_{t→∞} min_{X̂_t} E[ε_t] = min_φ lim_{t→∞} E[ min_{X̂_t} E[ε_t | 𝒩_t] ]. The inner minimization problem in this equation is solved by the MSE-optimal decoder, which is the posterior mean X̂_t = μ_t ≜ E[X_t | 𝒩_t]. The posterior mean may be computed from the full posterior obtained by decoding. The outer minimization problem is solved by the optimal encoder. In principle, the encoding/decoding problem can be solved for any value of t. In order to assess performance it is convenient to consider the steady-state limit t → ∞ for the encoding problem.

Below, we find a closed form approximate solution to the decoding problem for any t using ADF. We then explore the problem of choosing the steady-state optimal encoding parameters φ using Monte Carlo simulations. Note that if decoding is exact, the problem of optimal encoding becomes that of minimizing the expected posterior variance.
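To make the evaluation criterion concrete, here is a small sketch (ours; function and variable names are assumptions) of the steady-state MSE estimate used for comparing encoders: average the squared error of a decoder's posterior-mean output over a late time window and over independent trials.

import numpy as np

def steady_state_mse(simulate_trial, n_trials=1000, t_window=(5.0, 10.0)):
    """simulate_trial() -> (times, x_true, mu_hat): one run of state + filter.

    Returns the squared error averaged over t in t_window and over trials,
    approximating lim_{t->inf} E[eps_t] for the given encoder parameters.
    """
    errs = []
    for _ in range(n_trials):
        t, x, mu = simulate_trial()
        mask = (t >= t_window[0]) & (t < t_window[1])
        errs.append(np.mean((x[mask] - mu[mask]) ** 2))
    return float(np.mean(errs))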
3 Neural decoding
3.1 Exact filtering equations
Let P(·, t) denote the posterior density of X_t given 𝒩_t, and E_P^t[·] the posterior expectation given 𝒩_t. The prior density P(·, 0) is assumed to be known.

The problem of filtering a diffusion process X from a doubly stochastic Poisson process driven by X is formally solved in [24]. The result is extended to marked point processes in [22], where the authors derive a stochastic PDE for the posterior density¹,

dP(x, t) = L*P(x, t) dt + P(x, t) ∫_{θ∈R^m} [ (λ_t(θ, x) − λ̂_t(θ)) / λ̂_t(θ) ] ( N(dt × dθ) − λ̂_t(θ) dθ dt ),   (3)

where the integral with respect to N is interpreted as in (1), L is the state's infinitesimal generator (Kolmogorov's backward operator), defined as Lf(x) = lim_{Δt→0⁺} ( E[f(X_{t+Δt}) | X_t = x] − f(x) ) / Δt, L* is L's adjoint operator (Kolmogorov's forward operator), and λ̂_t(θ) ≜ E_P^t[λ_t(θ, X_t)] = ∫ P(x, t) λ_t(θ, x) dx.
The stochastic PDE (3) is usually intractable. In [22, 23] the authors consider linear dynamics with uniform coding and Gaussian prior. In this case, the posterior is Gaussian, and (3) leads to closed form ODEs for its moments. When the uniform coding assumption is violated, the posterior is no longer Gaussian. Still, we can obtain exact equations for the posterior moments, as follows.

Let μ_t = E_P^t[X_t], X̃_t = X_t − μ_t, Σ_t = E_P^t[X̃_t X̃_tᵀ]. Using (3), and the known results for L for diffusion processes (see supplementary material), the first two posterior moments can be shown to obey the following equations between spikes (see [23] for the finite population case):

dμ_t/dt = E_P^t[A(X_t)] + E_P^t[ X_t ∫ ( λ̂_t(θ) − λ_t(θ, X_t) ) dθ ]

dΣ_t/dt = E_P^t[A(X_t) X̃_tᵀ] + E_P^t[X̃_t A(X_t)ᵀ] + E_P^t[D(X_t) D(X_t)ᵀ]
        + E_P^t[ X̃_t X̃_tᵀ ∫ ( λ̂_t(θ) − λ_t(θ, X_t) ) dθ ].   (4)
3.2 ADF approximation
While equations (4) are exact, they are not practical, since they require computation of E_P^t[·]. We now proceed to find an approximate closed form for (4). Here we present the main ideas of the derivation. The formulation presented here assumes, for simplicity, an open-loop setting where the system is passively observed. It can be readily extended to a closed-loop control-based setting, and is presented in this more general framework in the supplementary material, including full details.

To bring (4) to a closed form, we use ADF with an assumed Gaussian density (see [16] for details). Conceptually, this may be envisioned as integrating (4) while replacing the distribution P by its approximating Gaussian "at each time step". Assuming the moments are known exactly, the Gaussian is obtained by matching the first two moments of P [16]. Note that the solution of the resulting equations does not in general match the first two moments of the exact solution, though it may approximate it.

Abusing notation, in the sequel we use μ_t, Σ_t to refer to the ADF approximation rather than to the exact values. Substituting the normal distribution N(x; μ_t, Σ_t) for P(x, t) to compute the expectations involving λ_t in (4), and using (2) and the Gaussian form of f(θ), results in computable Gaussian integrals. Other terms may also be computed in closed form if the functions A, D can be expanded as power series. This computation yields approximate equations for μ_t, Σ_t between spikes. The updates at spike times can similarly be computed in closed form either from (3) or directly from a Bayesian update of the posterior (see supplementary material, or e.g., [13]).
¹The model considered in [22] assumes linear dynamics and uniform coding, meaning that the total rate of N_t, namely ∫ λ_t(θ, X_t) dθ, is independent of X_t. However, these assumptions are only relevant to establish other propositions in that paper. The proof of equation (3) still holds as is in our more general setting.
[Figure 1 graphic: left panel plots dμ_t/dt and dσ_t²/dt against μ_t, with the population density and the firing rate for θ = 0 shown below; right panel plots X_t, the estimate μ_t ± σ_t, and spikes N(t; θ) against t, with the posterior variance shown below.]
t
Figure 1: Left Changes to the posterior moments between spikes as a function of the current pos2
terior mean estimate, for a static 1-d state. The parameters are a = d = 0, H = 1, ?pop
=
2
1, ?tc
= 0.2, c = 0, ?0 = 10, ?t = 1. The bottom plot shows the density of preferred stimuli f (?) and tuning curve for a neuron with preferred stimulus ? = 0. Right An example of
filtering a linear one-dimensional process. Each dot correspond to a spike with the vertical location indicating the preferred stimulus ?. The curves to the right of the graph show the preferred stimulus density (black), and a tuning curve centered at ? = 0 (gray). The tuning curve
and preferred stimulus density are normalized to the same height for visualization. The bottom
graph shows the posterior variance, with the vertical lines showing spike times. Parameters are:
2
2
= 0.2, c = 0, ?0 = 10, ?0 = 0, ?02 = 1. Note the decrease
= 2, ?tc
a = ?0.1, d = 2, H = 1, ?pop
of the posterior variance following t = 4 even though no spikes are observed.
For simplicity, we assume that the dynamics are linear, dX_t = AX_t dt + D dW_t, resulting in the filtering equations

dμ_t = Aμ_t dt + g_t Σ_t Hᵀ S_t (Hμ_t − c) dt + Σ_{t⁻} Hᵀ S_{t⁻}^{tc} ∫_{θ∈R^m} (θ − Hμ_{t⁻}) N(dt × dθ)   (5)

dΣ_t = ( AΣ_t + Σ_t Aᵀ + DDᵀ ) dt + g_t [ Σ_t Hᵀ ( S_t − S_t (Hμ_t − c)(Hμ_t − c)ᵀ S_t ) HΣ_t ] dt − Σ_{t⁻} Hᵀ S_{t⁻}^{tc} HΣ_{t⁻} dN_t,   (6)

where S_t^{tc} ≜ ( Σ_tc + HΣ_t Hᵀ )⁻¹, S_t ≜ ( Σ_tc + Σ_pop + HΣ_t Hᵀ )⁻¹, and

g_t ≜ ∫ λ̂_t(θ) dθ = E_P^t[ ∫ λ_t(θ, X_t) dθ ] = λ₀ √( det(Σ_tc S_t) ) exp( −(1/2) ‖Hμ_t − c‖²_{S_t} )

is the posterior expected total firing rate. Expressions including t⁻ are to be interpreted as left limits f(t⁻) = lim_{s→t⁻} f(s), which are necessary since the solution is discontinuous at spike times.

The last term in (5) is to be interpreted as in (1). It contributes an instantaneous jump in μ_t at the time of a spike with preferred stimulus θ, moving Hμ_t closer to θ. Similarly, the last term in (6) contributes an instantaneous jump in Σ_t at each spike time, which is the same regardless of spike location. All other terms describe the evolution of the posterior between spikes: the first few terms in (5)-(6) are the same as in the dynamics of the prior, as in [13, 23], whereas the terms involving g_t correspond to information from the absence of spikes. Note that the latter scale with g_t, the expected total firing rate, i.e., lack of spikes becomes "more informative" the higher the expected rate of spikes.
It is illustrative to consider these equations in the scalar case m = n = 1, with H = 1. Letting σ_t² = Σ_t, σ_tc² = Σ_tc, σ_pop² = Σ_pop, a = A, d = D yields

dμ_t = aμ_t dt + g_t [ σ_t² / (σ_t² + σ_tc² + σ_pop²) ] (μ_t − c) dt + ∫_{θ∈R} [ σ_{t⁻}² / (σ_{t⁻}² + σ_tc²) ] (θ − μ_{t⁻}) N(dt × dθ)   (7)

dσ_t² = ( 2aσ_t² + d² ) dt + g_t [ σ_t² / (σ_t² + σ_tc² + σ_pop²) ] [ 1 − (μ_t − c)² / (σ_t² + σ_tc² + σ_pop²) ] σ_t² dt − [ σ_{t⁻}² / (σ_{t⁻}² + σ_tc²) ] σ_{t⁻}² dN_t,   (8)
[Figure 2 graphic: left panel "Uniform coding filter vs. ADF" plots the true state and both estimates x against t; right panel "Accumulated MSE for ADF and uniform-coding filter" plots MSE (log scale) against σ_pop² for the ADF and uniform-coding filters.]
Figure 2: Left Illustration of information gained between spikes. A static state X_t = 0.5, shown in a dotted line, is observed and filtered twice: with the correct value σ_pop² = 0.5 ("ADF", solid blue line), and with σ_pop² = ∞ ("Uniform coding filter", dashed line). The curves to the right of the graph show the preferred stimulus density (black), and a tuning curve centered at θ = 0 (gray). Both filters are initialized with μ₀ = 0, σ₀² = 1. Right Comparison of MSE for the ADF filter and the uniform coding filter. The vertical axis shows the squared error integrated over the time interval [5, 10], averaged over 1000 trials. Shaded areas indicate estimated errors, computed as the sample standard deviation divided by the square root of the number of trials. Parameters in both plots are a = d = 0, c = 0, σ_pop² = 0.5, σ_tc² = 0.1, H = 1, λ₀ = 10.
where g_t = λ₀ √(2πσ_tc²) N(μ_t; c, σ_t² + σ_tc² + σ_pop²). Figure 1 (left) shows how μ_t, σ_t² change between spikes for a static 1-dimensional state (a = d = 0). In this case, all terms in the filtering equations drop out except those involving g_t. The term involving g_t in dμ_t pushes μ_t away from c in the absence of spikes. This effect weakens as |μ_t − c| grows due to the factor g_t, consistent with the idea that far from c, the lack of spikes is less surprising, hence less informative. The term involving g_t in dσ_t² increases the variance when μ_t is near c, otherwise decreases it.
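A minimal Euler-discretized sketch of the scalar ADF filter (7)-(8) (ours; the step size, the per-bin spike handling, and the names are assumptions, not the authors' code):

import numpy as np

def adf_filter_1d(spike_times, spike_marks, a, d, lam0, c, s2_pop, s2_tc,
                  mu0=0.0, s20=1.0, dt=1e-3, T=10.0):
    """Scalar ADF filter for equations (7)-(8); returns arrays mu_t, s2_t."""
    n = int(T / dt)
    mu, s2 = np.empty(n), np.empty(n)
    m, v = mu0, s20
    spikes = list(zip(spike_times, spike_marks))
    for k in range(n):
        t = k * dt
        denom = v + s2_tc + s2_pop
        # g_t = lam0 sqrt(2 pi s2_tc) N(m; c, denom)
        g = lam0 * np.sqrt(2 * np.pi * s2_tc) * \
            np.exp(-0.5 * (m - c) ** 2 / denom) / np.sqrt(2 * np.pi * denom)
        # continuous part of (7)-(8), including the no-spike terms ~ g
        dm = a * m + g * (v / denom) * (m - c)
        dv = 2 * a * v + d ** 2 + g * (v / denom) * (1 - (m - c) ** 2 / denom) * v
        m, v = m + dm * dt, v + dv * dt
        # discontinuous updates at spikes falling in this time bin
        while spikes and spikes[0][0] < t + dt:
            _, theta = spikes.pop(0)
            gain = v / (v + s2_tc)
            m, v = m + gain * (theta - m), v - gain * v
        mu[k], s2[k] = m, v
    return mu, s2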
3.3 Information from lack of spikes
An interesting aspect of the filtering equations (5)-(6) is that the dynamics of the posterior density between spikes differ from the prior dynamics. This is in contrast to previous models which assumed uniform coding: the (exact) filtering equations appearing in [22] and [23] have the same form as (5)-(6) except that they do not include the correction terms involving g_t, so that between spikes the dynamics are identical to the prior dynamics. This reflects the fact that lack of spikes in a time interval is an indication that the total firing rate is low; in the uniform coding case, this is not informative, since the total firing rate is independent of the state.

Figure 2 (left) illustrates the information gained from lack of spikes. A static scalar state is observed by a process with rate (2), and filtered twice: once with the correct value of σ_pop², and once with σ_pop² → ∞, as in the uniform coding filter of [23]. Between spikes, the ADF estimate moves away from the population center c = 0, whereas the uniform coding estimate remains fixed. The size of this effect decreases with time, as the posterior variance estimate (not shown) decreases. The reduction in filtering errors gained from the additional terms in (5)-(6) is illustrated in Figure 2 (right). Despite the approximation involved, the full filter significantly outperforms the uniform coding filter. The difference disappears as σ_pop² increases and the population becomes uniform.

Special cases To gain additional insight into the filtering equations, we consider their behavior in several limits. (i) As σ_pop² → ∞, spikes become rare as the density f(θ) approaches 0 for any θ. The total expected rate of spikes g_t also approaches 0, and the terms corresponding to information from lack of spikes vanish. Other terms in the equations are unaffected. (ii) In the limit σ_tc² → ∞, each neuron fires as a Poisson process with a constant rate independent of the observed state. The total expected firing rate g_t saturates at its maximum, λ₀. Therefore the preferred stimuli of spiking neurons provide no information, nor does the presence or absence of spikes. Accordingly, all terms other than those related to the prior dynamics vanish. (iii) The uniform coding case [22, 23] is obtained as a special case in the limit σ_pop² → ∞ with λ₀/σ_pop constant. In this limit the terms involving g_t drop out, recovering the (exact) filtering equations in [22].
4 Optimal neural encoding
We model the problem of optimal neural encoding as choosing the parameters c, Σ_pop, Σ_tc of the population and tuning curves, so as to minimize the steady-state MSE. As noted above, when the estimate is exactly the posterior mean, this is equivalent to minimizing the steady-state expected posterior variance. The posterior variance has the advantage of being less noisy than the square error itself, since by definition it is the mean of the square error (of the posterior mean) under conditioning by 𝒩_t. We explore the question of optimal neural encoding by measuring the steady-state variance through Monte Carlo simulations of the system dynamics and the filtering equations (5)-(6). Since the posterior mean and variance computed by ADF are approximate, we verified numerically that the variance closely matches the MSE in the steady state when averaged across many trials (see supplementary material), suggesting that asymptotically the error in estimating μ_t and Σ_t is small.
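A sketch of this consistency check (ours; it reuses the hypothetical simulator and the adf_filter_1d sketch above, with the trial count and window taken from the figures):

import numpy as np

# Hypothetical helper assumed from the earlier simulation sketch:
#   simulate(c, s2_pop, s2_tc) -> (t, x_true, spike_times, spike_marks)
def compare_variance_and_mse(simulate, c, s2_pop, s2_tc, n_trials=1000):
    sq_err, post_var = [], []
    for _ in range(n_trials):
        t, x, st, sm = simulate(c, s2_pop, s2_tc)
        mu, s2 = adf_filter_1d(st, sm, a=-1.0, d=0.5, lam0=50.0,
                               c=c, s2_pop=s2_pop, s2_tc=s2_tc)
        mask = t >= 5.0                      # steady-state window [5, 10]
        sq_err.append(np.mean((x[mask] - mu[mask]) ** 2))
        post_var.append(np.mean(s2[mask]))
    # If the ADF approximation is accurate, these averages nearly agree.
    return float(np.mean(sq_err)), float(np.mean(post_var))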
4.1 Optimal population center
We now consider the question of the optimal value for the population center c. Intuitively, if the prior distribution of the process X is unimodal with mode x₀, the optimal population center is at Hx₀, to produce the most spikes. On the other hand, the terms involving g_t in the filtering equations (5)-(6) suggest that the lack of spikes is also informative. Moreover, as seen in Figure 1 (left), the posterior variance is reduced between spikes only when the current estimate is far enough from c. These considerations suggest that there is a trade-off between maximizing the frequency of spikes and maximizing the information obtained from lack of spikes, yielding an optimal value for c that differs from Hx₀.

We simulated a simple one-dimensional process to determine the optimal value of c which minimizes the approximate posterior variance σ_t². Figure 3 (left) shows the posterior variance for varying values of the population center c and base firing rate λ₀. For each firing rate, we note the value of c minimizing the posterior variance (the optimal population center), as well as the value of c_m = argmin_c ( dσ_t²/dt |_{μ_t = 0} ), which maximizes the reduction in the posterior variance when the current state estimate μ_t is at the process equilibrium x₀ = 0. Consistent with the discussion above, the optimal value lies between 0 (where spikes are most abundant) and c_m (where lack of spikes is most informative). As could be expected, the optimal center is closer to 0 the higher the base firing rate. Similarly, wide tuning curves, which render the spikes less informative, lead to an optimal center farther from 0 (Figure 3, right).

A shift of the population center relative to the prior mode has been observed physiologically in encoding of inter-aural time differences for localization of sound sources [25]. In [18], this phenomenon was explained in a finite population model based on maximization of Fisher information. This is in contrast to the results of [21], which consider a heterogeneous population where the tuning curve width scales roughly inversely with neuron density. In this case, the population density maximizing the Fisher information is shown to be monotonic with the prior, i.e., more neurons should be assigned to more probable states. This apparent discrepancy may be due to the scaling of tuning curve widths in [21], which produces roughly constant total firing rate, i.e., uniform coding. This demonstrates that a non-constant total firing rate, which renders lack of spikes informative, may be necessary to explain the physiologically observed shift phenomenon.
4.2 Optimization of population distribution
Next, we consider the optimization of the population distribution, namely, the simultaneous optimization of the population center c and the population variance σ_pop² in the case of a static scalar state. Previous work using a finite neuron population and a Fisher information-based criterion [18] has shown that the optimal distribution of preferred stimuli depends on the prior variance. When it is small relative to the tuning curve width, optimal encoding is achieved by placing all preferred stimuli at a fixed distance from the prior mean. On the other hand, when the prior variance is large relative to the tuning curve width, optimal encoding is uniform (see figure 2 in [18]).

Similar results are obtained with our model, as shown in Figure 4. Here, a static scalar state drawn from N(0, σ_p²) is filtered by a population with tuning curve width σ_tc = 1 and preferred stimulus density N(c, σ_pop²). In Figure 4 (left), the prior distribution is narrow relative to the tuning curve width, leading to an optimal population with a narrow population distribution far from the origin.
[Figures 3 and 4 graphics. Figure 3: heat maps of the steady-state posterior stdev relative to the prior stdev over the population center c and the base rate λ₀ (left) or the tuning-curve width σ_tc (right), with curves marking argmin_c σ_t² and argmin_c dσ_t²/dt |_{μ_t = 0}. Figure 4: heat maps "Steady-state variance, narrow prior" and "Steady-state variance, wide prior" of log₁₀(posterior stdev / prior stdev) over c and σ_pop.]
Figure 3: Optimal population center location for filtering a linear one-dimensional process. Both graphs show the ratio of posterior standard deviation to the prior steady-state standard deviation of the process, along with the value of c minimizing the posterior variance (blue line), and the value of c minimizing dσ_t²/dt when μ_t = 0 (yellow line). The process is initialized from its steady-state distribution. The posterior variance is estimated by averaging over the time interval [5, 10] and across 1000 trials for each data point. Parameters for both graphs: a = −1, d = 0.5, σ_pop² = 0.1. In the graph on the left, σ_tc² = 0.01; on the right, λ₀ = 50.
A static scalar state drawn from N (0, ?p2 ) is filtered with tuning curve ?tc = 1 and preferred stimulus
2
). Both graphs show the posterior standard deviation relative to the prior standard
density N (c, ?pop
deviation ?p . In the left graph, the prior distribution is narrow, ?p2 = 0.1, whereas on the right, it is
wide, ?p2 = 10. In both cases the filter is initialized with the correct prior, and the square error is
averaged over the time interval [5, 10] and across 100 trials for each data point.
In Figure 4 (right), the prior is wide relative to the tuning curve width, leading to an optimal population with variance that roughly matches the prior variance. When both the tuning curves and the population density are narrow relative to the prior, so that spikes are rare (low values of σ_pop in Figure 4 (right)), the ADF approximation becomes poor, resulting in MSEs larger than the prior variance.
5 Conclusions
We have introduced an analytically tractable Bayesian approximation to point process filtering, allowing us to gain insight into the generally intractable infinite-dimensional filtering problem. The
approach enables the derivation of near-optimal encoding schemes going beyond previously studied
uniform coding assumptions. The framework is presented in continuous time, circumventing temporal discretization errors and numerical imprecisions in sampling-based methods, applies to fully
dynamic setups, and directly estimates the MSE rather than lower bounds to it. It successfully explains observed experimental results, and opens the door to many future predictions. Future work
will include a development of previously successful mean field approaches [13] within our more
general framework, leading to further analytic insight. Moreover, the proposed strategy may lead to
practically useful decoding of spike trains.
References
[1] B.D. Anderson and J.B. Moore. Optimal Filtering. Dover, 2005.
[2] R.E. Kalman and R.S. Bucy. New results in linear filtering and prediction theory. J. of Basic Eng., Trans. ASME, Series D, 83(1):95-108, 1961.
[3] R.E. Kalman. A new approach to linear filtering and prediction problems. J. Basic Eng., Trans. ASME, Series D, 82(1):35-45, 1960.
[4] F. Daum. Nonlinear filters: beyond the Kalman filter. Aerospace and Electronic Systems Magazine, IEEE, 20(8):57-69, 2005.
[5] S. Julier, J. Uhlmann, and H. Durrant-Whyte. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control, 45(3):477-482, 2000.
[6] A. Doucet and A.M. Johansen. A tutorial on particle filtering and smoothing: fifteen years later. In D. Crisan and B. Rozovskii, editors, Handbook of Nonlinear Filtering, pages 656-704. Oxford, UK: Oxford University Press, 2009.
[7] P. Brémaud. Point Processes and Queues: Martingale Dynamics. Springer, New York, 1981.
[8] D.L. Snyder and M.I. Miller. Random Point Processes in Time and Space. Springer, second edition, 1991.
[9] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2005.
[10] O. Bobrowski, R. Meir, and Y.C. Eldar. Bayesian filtering in spiking neural networks: noise, adaptation, and multisensory integration. Neural Comput, 21(5):1277-1320, May 2009.
[11] Y. Ahmadian, J.W. Pillow, and L. Paninski. Efficient Markov chain Monte Carlo methods for decoding neural spike trains. Neural Comput, 23(1):46-96, Jan 2011.
[12] A.K. Susemihl, R. Meir, and M. Opper. Analytical results for the error in filtering of Gaussian processes. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2303-2311. 2011.
[13] A.K. Susemihl, R. Meir, and M. Opper. Dynamic state estimation based on Poisson spike trains - towards a theory of optimal encoding. Journal of Statistical Mechanics: Theory and Experiment, 2013(03):P03009, 2013.
[14] P.S. Maybeck. Stochastic Models, Estimation, and Control. Academic Press, 1979.
[15] D. Brigo, B. Hanzon, and F. LeGland. A differential geometric approach to nonlinear filtering: the projection filter. Automatic Control, IEEE Transactions on, 43:247-252, 1998.
[16] M. Opper. A Bayesian approach to online learning. In D. Saad, editor, Online Learning in Neural Networks, pages 363-378. Cambridge University Press, 1998.
[17] T.P. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 362-369. Morgan Kaufmann Publishers Inc., 2001.
[18] N.S. Harper and D. McAlpine. Optimal neural population coding of an auditory spatial cue. Nature, 430(7000):682-686, Aug 2004.
[19] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Comput, 14(10):2317-2351, Oct 2002.
[20] S. Yaeli and R. Meir. Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons. Front Comput Neurosci, 4:130, 2010.
[21] D. Ganguli and E.P. Simoncelli. Efficient sensory encoding and Bayesian inference with heterogeneous neural populations. Neural Comput, 26(10):2103-2134, 2014.
[22] I. Rhodes and D. Snyder. Estimation and control performance for space-time point-process observations. IEEE Transactions on Automatic Control, 22(3):338-346, 1977.
[23] A.K. Susemihl, R. Meir, and M. Opper. Optimal neural codes for control and estimation. Advances in Neural Information Processing Systems, pages 1-9, 2014.
[24] D. Snyder. Filtering and detection for doubly stochastic Poisson processes. IEEE Transactions on Information Theory, 18(1):91-102, January 1972.
[25] A. Brand, O. Behrend, T. Marquardt, D. McAlpine, and B. Grothe. Precise inhibition is essential for microsecond interaural time difference coding. Nature, 417(6888):543-547, 2002.
| 5863 |
5,373 | 5,864 | Color Constancy by Learning to Predict
Chromaticity from Luminance
Ayan Chakrabarti
Toyota Technological Institute at Chicago
6045 S. Kenwood Ave., Chicago, IL 60637
[email protected]
Abstract
Color constancy is the recovery of true surface color from observed color, and requires estimating the chromaticity of scene illumination to correct for the bias it induces. In this paper, we show that the per-pixel color statistics of natural scenes, without any spatial or semantic context, can by themselves be a powerful cue for color constancy. Specifically, we describe an illuminant estimation method that is built around a "classifier" for identifying the true chromaticity of a pixel given its luminance (absolute brightness across color channels). During inference, each pixel's observed color restricts its true chromaticity to those values that can be explained by one of a candidate set of illuminants, and applying the classifier over these values yields a distribution over the corresponding illuminants. A global estimate for the scene illuminant is computed through a simple aggregation of these distributions across all pixels. We begin by simply defining the luminance-to-chromaticity classifier by computing empirical histograms over discretized chromaticity and luminance values from a training set of natural images. These histograms reflect a preference for hues corresponding to smooth reflectance functions, and for achromatic colors in brighter pixels. Despite its simplicity, the resulting estimation algorithm outperforms current state-of-the-art color constancy methods. Next, we propose a method to learn the luminance-to-chromaticity classifier "end-to-end". Using stochastic gradient descent, we set chromaticity-luminance likelihoods to minimize errors in the final scene illuminant estimates on a training set. This leads to further improvements in accuracy, most significantly in the tail of the error distribution.
1 Introduction
The spectral distribution of light reflected off a surface is a function of an intrinsic material property
of the surface?its reflectance?and also of the spectral distribution of the light illuminating the
surface. Consequently, the observed color of the same surface under different illuminants in different
images will be different. To be able to reliably use color computationally for identifying materials
and objects, researchers are interested in deriving an encoding of color from an observed image that
is invariant to changing illumination. This task is known as color constancy, and requires resolving
the ambiguity between illuminant and surface colors in an observed image. Since both of these
quantities are unknown, much of color constancy research is focused on identifying models and
statistical properties of natural scenes that are informative for color constancy. While pschophysical
experiments have demonstrated that the human visual system is remarkably successful at achieving
color constancy [1], it remains a challenging task computationally.
Early color constancy algorithms were based on relatively simple models for pixel colors. For
example, the gray world method [2] simply assumed that the average true intensities of different
color channels across all pixels in an image would be equal, while the white-patch retinex method [3]
assumed that the true color of the brightest pixels in an image is white. Most modern color constancy
methods, however, are based on more complex reasoning with higher-order image features. Many
methods [4, 5, 6] use models for image derivatives instead of individual pixels. Others are based on
recognizing and matching image segments to those in a training set to recover true color [7]. A recent
method proposes the use of a multi-layer convolutional neural network (CNN) to regress from image
patches to illuminant color. There are also many "combination-based" color constancy algorithms, that combine illuminant estimates from a number of simpler "unitary" algorithms [8, 9, 10, 11],
sometimes using image features to give higher weight to the outputs of some subset of methods.
In this paper, we demonstrate that by appropriately modeling and reasoning with the statistics of
individual pixel colors, one can computationally recover illuminant color with high accuracy. We
consider individual pixels in isolation, where the color constancy task reduces to discriminating between the possible choices of true color for the pixel that are feasible given the observed color and a
candidate set of illuminants. Central to our method is a function that gives us the relative likelihoods
of these true colors, and therefore a distribution over the corresponding candidate illuminants. Our
global estimate for the scene illuminant is then computed by simply aggregating these distributions
across all pixels in the image.
We formulate the likelihood function as one that measures the conditional likelihood of true pixel
chromaticity given observed luminance, in part to be agnostic to the scalar (i.e., color channel-independent) ambiguity in observed color intensities. Moreover, rather than committing to a parametric form, we quantize the space of possible chromaticity and luminance values, and define the
function over this discrete domain. We begin by setting the conditional likelihoods purely empirically, based simply on the histograms of true color values over all pixels in all images across a
training set. Even with this purely empirical approach, our estimation algorithm yields estimates
with higher accuracy than current state-of-the-art methods. Then, we investigate learning the per-pixel belief function by optimizing an objective based on the accuracy of the final global illuminant
estimate. We carry out this optimization using stochastic gradient descent, and using a sub-sampling
approach (similar to "dropout" [12]) to improve generalization beyond the training set. This further
improves estimation accuracy, without adding to the computational cost of inference.
2 Preliminaries
Assuming Lambertian reflection, the spectral distribution of light reflected by a material is a product of the distribution of the incident light and the material's reflectance function. The color intensity vector v(n) ∈ R³ recorded by a tri-chromatic sensor at each pixel n is then given by

v(n) = ∫ κ(n, λ) ℓ(n, λ) s(n) Π(λ) dλ,   (1)

where κ(n, λ) is the reflectance at n, ℓ(n, λ) is the spectral distribution of the incident illumination, s(n) is a geometry-dependent shading factor, and Π(λ) ∈ R³ denotes the spectral sensitivities of the color sensors. Color constancy is typically framed as the task of computing from v(n) the corresponding color intensities x(n) ∈ R³ that would have been observed under some canonical illuminant ℓ_ref (typically chosen to be ℓ_ref(λ) = 1). We will refer to x(n) as the "true color" at n.

Since (1) involves a projection of the full incident light spectrum on to the three filters Π(λ), it is not generally possible to recover x(n) from v(n) even with knowledge of the illuminant ℓ(n, λ). However, a commonly adopted approximation (shown to be reasonable under certain assumptions [13]) is to relate the true and observed colors x(n) and v(n) by a simple per-channel adaptation:
v(n) = m(n) ∘ x(n),   (2)

where ∘ refers to the element-wise Hadamard product, and m(n) ∈ R³ depends on the illuminant ℓ(n, λ) (for ℓ_ref, m = [1, 1, 1]ᵀ). With some abuse of terminology, we will refer to m(n) as the illuminant in the remainder of the paper. Moreover, we will focus on the single-illuminant case in this paper, and assume m(n) = m, ∀n in an image. Our goal during inference will be to estimate this global illuminant m from the observed image v(n). The true color image x(n) can then simply be recovered as m⁻¹ ∘ v(n), where m⁻¹ ∈ R³ denotes the element-wise inverse of m.
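For concreteness, a minimal sketch (ours; function and variable names are assumptions) of this diagonal per-channel correction:

import numpy as np

def correct_image(v, m):
    """Invert the per-channel model (2): x(n) = m^{-1} o v(n).

    v: H x W x 3 observed image (linear intensities); m: length-3 illuminant.
    The result is the true-color image, defined only up to a global scale.
    """
    return v / np.asarray(m, dtype=v.dtype).reshape(1, 1, 3)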
Note that color constancy algorithms seek to resolve the ambiguity between m and x(n) in (2) only up to a channel-independent scalar factor. This is because scalar ambiguities show up in m between ℓ and ℓ_ref due to light attenuation, between x(n) and κ(n) due to the shading factor s(n), and in the observed image v(n) itself due to varying exposure settings. Therefore, the performance metric typically used is the angular error cos⁻¹( mᵀm̂ / (‖m‖₂ ‖m̂‖₂) ) between the true and estimated illuminant vectors m and m̂.
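A small sketch of this metric (ours; the clipping guard is an implementation detail, not from the paper):

import numpy as np

def angular_error_degrees(m_true, m_est):
    """Angular error between true and estimated illuminant vectors, in degrees."""
    m_true, m_est = np.asarray(m_true, float), np.asarray(m_est, float)
    cos = np.dot(m_true, m_est) / (np.linalg.norm(m_true) * np.linalg.norm(m_est))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))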
Database For training and evaluation, we use the database of 568 natural indoor and outdoor
images captured under various illuminants by Gehler et al. [14]. We use the version from Shi and
Funt [15] that contains linear images (without gamma correction) generated from the RAW camera
data. The database contains images captured with two different cameras (86 images with a Canon
1D, and 482 with a Canon 5D). Each image contains a color checker chart placed in the image, with
its position manually labeled. The colors of the gray squares in the chart are taken to be the value
of the true illuminant m for each image, which can then be used to correct the image to get true
colors at each pixel (of course, only up to scale). The chart is masked out during evaluation. We use
k-fold cross-validation over this dataset in our experiments. Each fold contains images from both
cameras corresponding to one of k roughly-equal partitions of each camera?s image set (ordered
by file name/order of capture). Estimates for images in each fold are based on training only with
data from the remaining folds. We report results with three- and ten-fold cross-validation. These
correspond to average training set sizes of 379 and 511 images respectively.
3 Color Constancy with Pixel-wise Chromaticity Statistics
A color vector x ∈ R³ can be characterized in terms of (1) its luminance ‖x‖₁, or absolute brightness across color channels; and (2) its chromaticity, which is a measure of the relative ratios between intensities in different channels. While there are different ways of encoding chromaticity, we will do so in terms of the unit vector x̂ = x/‖x‖₂ in the direction of x. Note that since intensities can not be negative, x̂ is restricted to lie on the non-negative eighth of the unit sphere S²₊. Remember from Sec. 2 that our goal is to resolve the ambiguity between the true colors x(n) and the illuminant m only up to scale. In other words, we need only estimate the illuminant chromaticity m̂ and true chromaticities x̂(n) from the observed image v(n), which we can relate from (2) as

x̂(n) = x(n) / ‖x(n)‖₂ = ( m̂⁻¹ ∘ v(n) ) / ‖ m̂⁻¹ ∘ v(n) ‖₂ ≜ g(v(n), m̂).   (3)
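A sketch of evaluating the map g in (3) over a candidate illuminant set (ours; array conventions are assumptions):

import numpy as np

def candidate_true_chromaticities(v, M_cand):
    """g(v, m_i) for one observed color v (shape (3,)) and candidates M_cand (K, 3).

    Returns a (K, 3) array of unit vectors: the true chromaticities that
    would explain v under each candidate illuminant chromaticity.
    """
    x = v[None, :] / M_cand                    # m^{-1} o v, per candidate
    return x / np.linalg.norm(x, axis=1, keepdims=True)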
A key property of natural illuminant chromaticities is that they are known to take a fairly restricted set of values, close to a one-dimensional locus predicted by Planck's radiation law [16]. To be able to exploit this, we denote M = {m̂_i}_{i=1}^M as the set of possible values for illuminant chromaticity m̂, and construct it from a training set. Specifically, we quantize¹ the chromaticity vectors {m̂_t}_{t=1}^T of the illuminants in the training set, and let M be the set of unique chromaticity values. Additionally, we define a "prior" b_i = log(n_i / T) over this candidate set, based on the number n_i of training illuminants that were quantized to m̂_i.
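A sketch of this construction (ours; quantize_bin is a hypothetical helper standing in for the uniform S²₊ binning described in footnote 1):

import numpy as np

def build_candidate_set(train_illums, quantize_bin):
    """train_illums: (T, 3) training illuminant chromaticities (unit vectors).

    quantize_bin maps a unit vector to a discrete bin id (hypothetical helper).
    Returns candidate chromaticities M_cand (K, 3) and log-priors b (K,).
    """
    bins = {}
    for m in train_illums:
        bins.setdefault(quantize_bin(m), []).append(m)
    M_cand = np.array([np.mean(v, axis=0) for v in bins.values()])
    M_cand /= np.linalg.norm(M_cand, axis=1, keepdims=True)
    counts = np.array([len(v) for v in bins.values()], dtype=float)
    b = np.log(counts / len(train_illums))
    return M_cand, b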
Given the observed color v(n) at a single pixel n, the ambiguity in m̂ across the illuminant set M translates to a corresponding ambiguity in the true chromaticity x̂(n) over the set {g(v(n), m̂_i)}_i. Figure 1(a) illustrates this ambiguity for a few different observed colors v. We note that while there is significant angular deviation within the set of possible true chromaticity values for any observed color, values in each set lie close to a one dimensional locus in chromaticity space. This suggests that the illuminants in our training set are indeed a good fit to Planck's law².
The goal of our work is to investigate the extent to which we can resolve the above ambiguity in true chromaticity on a per-pixel basis, without having to reason about the pixel's spatial neighborhood or semantic context. Our approach is based on computing a likelihood distribution over the possible values of x̂(n), given the observed luminance ‖v(n)‖₁. But as mentioned in Sec. 2, there is considerable ambiguity in the scale of observed color intensities. We address this partially by applying a simple per-image global normalization to the observed luminance to define y(n) = ‖v(n)‖₁ / median{ ‖v(n′)‖₁ }_{n′}.

¹Quantization is over uniformly sized bins in S²₊. See supplementary material for details.
²In fact, the chromaticities appear to lie on two curves, that are slightly separated from each other. This separation is likely due to differences in the sensor responses of the two cameras in the Gehler-Shi dataset.
[Figure 1 graphic: (a) "Ambiguity with Observed Color": sets of possible true chromaticities on S²₊ for several observed colors, with a legend over hue combinations; (b) "from Empirical Statistics" and (c) "from End-to-end Learning": chromaticity distributions for luminance bins y ∈ [0, 0.2) through [3.8, ∞).]
Figure 1: Color constancy with per-pixel chromaticity-luminance distributions of natural scenes. (a) Ambiguity in true chromaticity given observed color: each set of points corresponds to the possible true chromaticity values (location in S²₊, see legend) consistent with the pixel's observed chromaticity (color of the points) and different candidate illuminants m̂_i. (b) Distributions over different values for true chromaticity of a pixel conditioned on its observed luminance, computed as empirical histograms over the training set. Values y are normalized per-image by the median luminance value over all pixels. (c) Corresponding distributions learned with end-to-end training to maximize accuracy of overall illuminant estimation.
y(n) = kv(n)k1 /median{kv(n? )k1 }n? . This very roughly compensates for variations across images due to exposure settings, illuminant brightness, etc. However, note that since the normalization
is global, it does not compensate for variations due to shading.
The central component of our inference method is a function L[?
x, y] that encodes the belief that a
? . This function is defined over
pixel with normalized observed luminance y has true chromaticity x
a discrete domain by quantizing both chromaticity and luminance values: we clip luminance values
y to four (i.e., four times the median luminance of the image) and quantize them into twenty equal
? , we use a much finger quantization with 214 equal-sized bins in
sized bins; and for chromaticity x
2
S+ (see supplementary material for details). In this section, we adopt a purely empirical approach
4
P
and define L[?
x, y] as L[?
x, y] = log (Nx? ,y / x? ? Nx? ? ,y ) , where Nx? ,y is the number of pixels across
? and observed luminance y.
all pixels in a set of images in a training set that have true chromaticity x
We visualize these empirical versions of L[?
x, y] for a subset of the luminance quantization levels
in Fig. 1(b). We find that in general, desaturated chromaticities with similar intensity values in all
color channels are most common. This is consistent with findings of statistical analysis of natural
spectra [17], which shows the ?DC? component (flat across wavelength) to be the one with most
variance. We also note that the concentration of the likelihood mass in these chromaticities increasing for higher values of luminance y. This phenomenon is also predicted by traditional intuitions
in color science: materials are brightest when they reflect most of the incident light, which typically occurs when they have a flat reflectance function with all values of ?(?) close to one. Indeed,
this is what forms the basis of the white-patch retinex method [3]. Amongst saturated colors, we
find that hues which combine green with either red or blue occur more frequently than primary colors, with pure green and combinations of red and blue being the least common. This is consistent
with findings that reflectance functions are usually smooth (PCA on pixel spectra in [17] revealed a
Fourier-like basis). Both saturated green and red-blue combinations would require the reflectance to
have either a sharp peak or crest, respectively, in the middle of the visible spectrum.
We now describe a method that exploits the belief function L[?
x, y] for illuminant estimation. Given
? i ), y(n)]}i over
the observed color v(n) at a pixel n, we can obtain a distribution {L[g(v(n), m
? i )}i , which can also be interpreted as a
the set of possible true chromaticity values {g(v(n), m
? i . We then simply aggregate these distributions
distribution over the corresponding illuminants m
? i being the scene illuminant
across all pixels n in P
the image, and define the global probability of m
m as pi = exp(li )/ ( i? exp(li? )), where
? X
? i ), y(n)] + ?bi ,
li =
L[g(v(n), m
(4)
N n
N is the total number of pixels in the image, and ? and ? are scalar parameters. The final illuminant
? is then computed as
chromaticity estimate m
P
?1
T
?
T
?
i pi m i
P
? = arg min E [cos (m m )] ? arg max E[m m ] =
.
(5)
m
k
?
?
?
?
m ,km k2 =1
m ,km k2 =1
i pi mi k2
Note that (4) also incorporates the prior bi over illuminants. We set the parameters ? and ? using
a grid search, to values that minimize mean illuminant estimation error over the training set. The
primary computational cost of inference is in computing the values of {li }. We pre-compute values
? and the candi? using (3) over the discrete domain of quantized chromaticity values for x
of g(?
x, m)
? Therefore, computing each li essentially only requires the addition of
date illuminant set M for m.
N numbers from a look-up table. We need to do this for all M = |M| illuminants, where summations for different illuminants can be carried out in parallel. Our implementation takes roughly 0.3
seconds for a 9 mega-pixel image, on a modern Intel 3.3GHz CPU with 6 cores, and is available at
http://www.ttic.edu/chakrabarti/chromcc/.
This empirical version of our approach bears some similarity to the Bayesian method of [14] that
is based on priors for illuminants, and for the likelihood of different true reflectance values being
present in a scene. However, the key difference is our modeling of true chromaticity conditioned
on luminance that explicitly makes estimation agnostic to the absolute scale of intensity values. We
also reason with all pixels, rather than the set of unique colors in the image.
Experimental Results. Table 1 compares the performance of illuminant estimation with our
method (see rows labeled ?Empirical?) to the current state-of-the-art, using different quantiles of
angular error across the Gehler-Shi database [14, 15]. Results for other methods are from the survey
by Li et al. [18]. (See the supplementary material for comparisons to some other recent methods).
We show results with both three- and ten-fold cross-validation. We find that our errors with threefold cross-validation have lower mean, median, and tri-mean values than those of the best performing
state-of-the-art method from [8], which combines illuminant estimates from twelve different ?unitary? color-constancy method (many of which are also listed in Table 1) using support-vector regression. The improvement in error is larger with respect to the other combination methods [8, 9, 10, 11],
as well as those based the statistics of image derivatives [4, 5, 6]. Moreover, since our method has
more parameters than most previous algorithms (L[?
x, y] has 214 ? 20 ? 300k entries), it is likely
5
Table 1: Quantiles of Angular Error for Different Methods on the Gehler-Shi Database [14, 15]
Method
Mean
Median
Tri-mean
25%-ile
75%-ile
90%-ile
Bayesian [14]
Gamut Mapping [20]
Deriv. Gamut Mapping [4]
Gray World [2]
Gray Edge(1,1,6) [5]
SV-Regression [21]
Spatio-Spectral [6]
Scene Geom. Comb. [9]
Nearest-30% Comb. [10]
Classifier-based Comb. [11]
Neural Comb. (ELM) [8]
SVR-based Comb. [8]
6.74?
6.00?
5.96?
4.77?
4.19?
4.14?
3.99?
4.56?
4.26?
3.83?
3.43?
2.98?
5.14?
3.98?
3.83?
3.63?
3.28?
3.23?
3.24?
3.15?
2.95?
2.75?
2.37?
1.97?
5.54?
4.52?
4.32?
3.92?
3.54?
3.35?
3.45?
3.46?
3.19?
2.93?
2.62?
2.35?
2.42?
1.71?
1.68?
1.81?
1.87?
1.68?
2.38?
1.41?
1.49?
1.34?
1.21?
1.13?
9.47?
8.42?
7.95?
6.63?
5.72?
5.27?
4.97?
6.12?
5.39?
4.89?
4.53?
4.33?
14.71?
14.74?
14.72?
10.59?
8.60?
8.87?
7.50?
10.39?
9.67?
8.19?
6.97?
6.37?
Proposed
(3-Fold)
Empirical
End-to-end Trained
(10-Fold)
Empirical
End-to-end Trained
2.89?
2.56?
2.55?
2.20?
1.89?
1.67?
1.58?
1.37?
2.15?
1.89?
1.83?
1.53?
1.15?
0.91?
0.85?
0.69?
3.68?
3.30?
3.30?
2.68?
6.24?
5.56?
5.74?
4.89?
to benefit from more training data. We find this to indeed be the case, and observe a considerable
decrease in error quantiles when we switch to ten-fold cross-validation.
Figure. 2 shows estimation results with our method for a few sample images. For each image, we
show the input image (indicating the ground truth color chart being masked out) and the output image
with colors corrected by the global illuminant estimate. To visualize the quality of contributions from
individual pixels, we also show a map of angular errors for illuminant estimates from individual
pixels. These estimates are based on values of li computed by restricting the summation in (4) to
individual pixels. We find that even these pixel-wise estimates are fairly accurate for a lot of pixels,
even when it?s true color is saturated (see cart in first row). Also, to evaluate the weight of these
per-pixel distributions to the global li , we show a map of their variance on a per-pixel basis. As
expected from Fig. 1(b), we note higher variances in relatively brighter pixels. The image in the last
row represents one of the poorest estimates across the entire dataset (higher than 90%?ile). Note
that much of the image is in shadow, and contain only a few distinct (and likely atypical) materials.
4
Learning L[?
x, y] End-to-end
While the empirical approach in the previous section would be optimal if pixel chromaticities in a
typical image were infact i.i.d., that is clearly not the case. Therefore, in this section we propose an
alternate approach method to setting the beliefs in L[?
x, y], that optimizes for the accuracy of the final
global illuminant estimate. However, unlike previous color constancy methods that explicitly model
statistical co-dependencies between pixels?for example, by modeling spatial derivatives [4, 5, 6],
or learning functions on whole-image histograms [21]?we retain the overall parametric ?form? by
x, y] itself is learned through
which we compute the illuminant in (4). Therefore, even though L[?
knowledge of co-occurence of chromaticities in natural images, estimation of the illuminant during
inference is still achieved through a simple aggregation of per-pixel distributions.
Specifically, we set the entries of L[?
x, y] to minimize a cost function C over a set of training images:
C(L) =
T
X
C t (L),
Ct =
t=1
X
i
6
? Ti m
? t ) pti ,
cos?1 (m
(6)
Per-Pixel Estimate Error
Belief Variance
Empirical
Global Estimate
Error = 0.56?
End-to-end
Input+Mask
Error = 0.24?
Empirical
Ground Truth
Error = 4.32?
End-to-end
Input+Mask
Error = 3.15?
Empirical
Ground Truth
Error = 16.22?
End-to-end
Input+Mask
Ground Truth
Error = 10.31?
Figure 2: Estimation Results on Sample Images. Along with output images corrected with the global
illuminant estimate from our methods, we also visualize illuminant information extracted at a local
level. We show a map of the angular error of pixel-wise illuminant estimates (i.e., computed with li
based on distributions from only a single pixel). We also show a map of the variance Var({li }i ) of
these beliefs, to gauge the weight of their contributions to the global illuminant estimate.
? t is the true illuminant chromaticity of the tth training image, and pti is computed from the
where m
observed colors v t (n) using (4). We augment the training data available to us by ?re-lighting? each
image with different illuminants from the training set. We use the original image set and six re-lit
copies for training, and use a seventh copy for validation.
We use stochastic gradient descent to minimize (6). We initialize L to empirical values as described
in the previous section (for convenience, we multiply the empirical values by ?, and then set ? = 1
for computing li ), and then consider individual images from the training set at each iteration. We
make multiple passes through the training set, and at each iteration, we randomly sub-sample the
pixels from each training image. Specifically, we only retain 1/128 of the total pixels in the image
by randomly sub-sampling 16 ? 16 patches at a time. This approach, which can be interpreted as
being similar to ?dropout? [12], prevents over-fitting and improves generalization.
7
Derivatives of the cost function C t with respect to the current values of beliefs L[?
x, y] are given by
!
1 X X
?C t
?C t
? i) = x
? ? y t (n) = y
(7)
=
? g(v t (n), m
? t ,
?L[?
x, y]
N i
?li
n
?C t
? Ti m
? t) ? C t .
= pti cos?1 (m
(8)
t
?li
We use momentum to update the values of L[?
x, y] at each iteration based on these derivative as
?C t
L[?
x, y] = L[?
x, y] ? L? [?
x, y], L? [?
x, y] = r
+ ?L?
x, y],
(9)
? [?
?L[?
x, y]
where L?
x, y] is the previous update value, r is the learning rate, and ? is the momentum factor. In
? [?
our experiments, we set ? = 0.9, run stochastic gradient descent for 20 epochs with r = 100, and
another 10 epochs with r = 10. We retain the values of L from each epoch, and our final output is
the version that yields the lowest mean illuminant estimation error on the validation set.
where
We show the belief values learned in this manner in Fig. 1(c). Notice that although they retain the
overall biases towards desaturated colors and combined green-red and green-blue hues, they are less
?smooth? than their empirical counterparts in Fig. 1(b)?in many instances, there are sharp changes
in the values L[?
x, y] for small changes in chromaticity. While harder to interpret, we hypothesize
that these variations result from shifting beliefs of specific (?
x, y) pairs to their neighbors, when they
correspond to incorrect choices within the ambiguous set of specific observed colors.
Experimental Results. We also report errors when using these end-to-end trained versions of the
belief function L in Table 1, and find that they lead to an appreciable reduction in error in comparison
to their empirical counterparts. Indeed, the errors with end-to-end training using three-fold crossvalidation begin to approach those of the empirical version with ten-fold cross-validation, which
has access to much more training data. Also note that the most significant improvements (for both
three- and ten-fold cross-validation) are in ?outlier? performance, i.e., in the 75 and 90%-ile error
values. Color constancy methods perform worst on images that are dominated by a small number of
materials with ambiguous chromaticity, and our results indicate that end-to-end training increases
the reliability of our estimation method in these cases.
We also include results for the end-to-end case for the example images in Figure. 2. For all three
images, there is an improvement in the global estimation error. More interestingly, we see that the
per-pixel error and variance maps now have more high-frequency variation, since L now reacts more
sharply to slight chromaticity changes from pixel to pixel. Moreover, we see that a larger fraction of
pixels generate fairly accurate estimates by themselves (blue shirt in row 2). There is also a higher
disparity in belief variance, including within regions that visually look homogeneous in the input,
indicating that the global estimate is now more heavily influenced by a smaller fraction of pixels.
5
Conclusion and Future Work
In this paper, we introduced a new color constancy method that is based on a conditional likelihood
function for the true chromaticity of a pixel, given its luminance. We proposed two approaches to
learning this function. The first was based purely on empirical pixel statistics, while the second
was based on maximizing accuracy of the final illuminant estimate. Both versions were found to
outperform state-of-the-art color constancy methods, including those that employed more complex
features and semantic reasoning. While we assumed a single global illuminant in this paper, the
underlying per-pixel reasoning can likely be extended to the multiple-illuminant case, especially
since, as we saw in Fig. 2, our method was often able to extract reasonable illuminant estimates
from individual pixels. Another useful direction for future research is to investigate the benefits of
using likelihood functions that are conditioned on lightness?estimated using an intrinsic image decomposition method?instead of normalized luminance. This would factor out the spatially-varying
scalar ambiguity caused by shading, which could lead to more informative distributions.
Acknowledgments
We thank the authors of [18] for providing estimation results of other methods for comparison. The
author was supported by a gift from Adobe.
8
References
[1] D.H. Brainard and A Radonjic. Color constancy. In The new visual neurosciences, 2014.
[2] G. Buchsbaum. A spatial processor model for object colour perception. J. Franklin Inst, 1980.
[3] E.H. Land. The retinex theory of color vision. Scientific American, 1971.
[4] A. Gijsenij, T. Gevers, and J. van de Weijer. Generalized gamut mapping using image derivative structures
for color constancy. IJCV, 2010.
[5] J. van de Weijer, T. Gevers, and A. Gijsenij. Edge-Based Color Constancy. IEEE Trans. Image Proc.,
2007.
[6] A. Chakrabarti, K. Hirakawa, and T. Zickler. Color Constancy with Spatio-spectral Statistics. PAMI,
2012.
[7] H. R. V. Joze and M. S. Drew. Exemplar-based color constancy and multiple illumination. PAMI, 2014.
[8] B. Li, W. Xiong, and D. Xu. A supervised combination strategy for illumination chromaticity estimation.
ACM Trans. Appl. Percept., 2010.
[9] R. Lu, A. Gijsenij, T. Gevers, V. Nedovic, and D. Xu. Color constancy using 3D scene geometry. In
ICCV, 2009.
[10] S. Bianco, F. Gasparini, and R. Schettini. Consensus-based framework for illuminant chromaticity estimation. J. Electron. Imag., 2008.
[11] S. Bianco, G. Ciocca, C. Cusano, and R. Schettini. Automatic color constancy algorithm selection and
combination. Pattern recognition, 2010.
[12] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. JMLR, 2014.
[13] H. Chong, S. Gortler, and T. Zickler. The von Kries hypothesis and a basis for color constancy. In Proc.
ICCV, 2007.
[14] P.V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian Color Constancy Revisited. In CVPR,
2008.
[15] L. Shi and B. Funt. Re-processed version of the gehler color constancy dataset of 568 images. Accessed
from http://www.cs.sfu.ca/~colour/data/.
[16] D.B. Judd, D.L. MacAdam, G. Wyszecki, H.W. Budde, H.R. Condit, S.T. Henderson, and J.L. Simonds.
Spectral distribution of typical daylight as a function of correlated color temperature. JOSA, 1964.
[17] A. Chakrabarti and T. Zickler. Statistics of Real-World Hyperspectral Images. In Proc. CVPR, 2011.
[18] B. Li, W. Xiong, W. Hu, and B. Funt. Evaluating combinational illumination estimation methods on
real-world images. IEEE Trans. Imag. Proc., 2014. Data at http://www.escience.cn/people/
BingLi/Data_TIP14.html.
[19] S. Bianco, C. Cusano, and R. Schettini. Color constancy using cnns. arXiv:1504.04548[cs.CV], 2015.
[20] D. Forsyth. A novel algorithm for color constancy. IJCV, 1990.
[21] W. Xiong and B. Funt.
J. Imag. Sci. Technol., 2006.
Estimating illumination chromaticity via support vector regression.
[22] A. Gijsenij, T. Gevers, and J. van de Weijer. Computational color constancy: Survey and experiments.
IEEE Trans. Image Proc., 2011. Data at http://www.colorconstancy.com/.
9
| 5864 |@word cnn:1 middle:1 version:8 hu:1 km:3 seek:1 decomposition:1 brightness:3 harder:1 shading:4 carry:1 reduction:1 contains:4 disparity:1 interestingly:1 franklin:1 outperforms:1 kmk:1 current:4 recovered:1 com:1 chicago:2 partition:1 informative:2 visible:1 hypothesize:1 update:2 cue:1 core:1 quantized:2 revisited:1 location:1 preference:1 simpler:1 accessed:1 along:1 zickler:3 chakrabarti:4 incorrect:1 ijcv:2 combine:3 fitting:1 combinational:1 comb:5 manner:1 mask:3 expected:1 indeed:4 roughly:3 themselves:2 frequently:1 multi:1 shirt:1 discretized:1 salakhutdinov:1 resolve:3 cpu:1 increasing:1 gift:1 begin:3 estimating:2 moreover:4 underlying:1 agnostic:2 mass:1 lowest:1 what:1 interpreted:2 finding:2 elm:1 remember:1 attenuation:1 ti:2 classifier:5 k2:5 unit:2 imag:3 appear:1 planck:2 gortler:1 aggregating:1 local:1 despite:1 encoding:2 ree:1 abuse:1 pami:2 suggests:1 challenging:1 appl:1 co:6 bi:3 unique:2 camera:5 acknowledgment:1 empirical:20 significantly:1 matching:1 projection:1 word:1 pre:1 refers:1 get:1 svr:1 close:3 convenience:1 selection:1 context:2 applying:2 www:4 map:5 demonstrated:1 shi:5 maximizing:1 exposure:2 focused:1 formulate:1 survey:2 simplicity:1 recovery:1 identifying:3 pure:1 deriving:1 variation:4 heavily:1 homogeneous:1 hypothesis:1 element:2 recognition:1 database:5 gehler:6 observed:29 constancy:34 labeled:2 kxk1:1 capture:1 worst:1 region:1 decrease:1 technological:1 mentioned:1 intuition:1 trained:3 segment:1 purely:4 basis:5 various:1 finger:1 separated:1 distinct:1 committing:1 describe:2 aggregate:1 neighborhood:1 supplementary:3 larger:2 kenwood:1 cvpr:2 compensates:1 achromatic:1 statistic:8 itself:2 final:6 quantizing:1 propose:2 product:2 adaptation:1 remainder:1 hadamard:1 date:1 kv:3 crossvalidation:1 sutskever:1 object:2 brainard:1 radiation:1 exemplar:1 nearest:1 predicted:2 involves:1 shadow:1 indicate:1 c:2 direction:2 correct:2 gasparini:1 filter:1 stochastic:4 cnns:1 human:1 material:10 bin:3 require:1 generalization:2 preliminary:1 summation:2 correction:1 around:1 ground:4 brightest:2 exp:2 visually:1 blake:1 mapping:3 predict:1 visualize:3 electron:1 early:1 adopt:1 estimation:19 proc:5 saw:1 gauge:1 clearly:1 sensor:3 rather:2 chromatic:1 varying:2 focus:1 improvement:4 likelihood:10 ave:1 inst:1 inference:6 dependent:1 typically:4 entire:1 interested:1 pixel:59 overall:3 arg:2 html:1 augment:1 proposes:1 spatial:4 art:5 fairly:3 initialize:1 weijer:3 equal:4 construct:1 having:1 sampling:2 manually:1 represents:1 lit:1 look:2 future:2 others:1 report:2 few:3 modern:2 randomly:2 gamma:1 individual:8 geometry:2 investigate:3 multiply:1 evaluation:2 saturated:3 chong:1 henderson:1 light:7 accurate:2 edge:2 re:5 instance:1 modeling:3 cost:4 deviation:1 subset:2 entry:2 masked:2 krizhevsky:1 successful:1 recognizing:1 seventh:1 gr:1 dependency:1 sv:1 combined:1 peak:1 sensitivity:1 discriminating:1 twelve:1 retain:4 off:1 von:1 reflect:2 ambiguity:13 central:2 recorded:1 american:1 derivative:6 li:15 de:3 sec:2 forsyth:1 explicitly:2 caused:1 depends:1 lot:1 red:5 aggregation:2 recover:3 parallel:1 gevers:4 contribution:2 minimize:4 il:1 chart:4 accuracy:8 convolutional:1 square:1 ni:2 variance:7 percept:1 yield:3 correspond:2 raw:1 bayesian:3 lu:2 lighting:1 researcher:1 processor:1 influenced:1 frequency:1 regress:1 minka:1 mi:1 josa:1 dataset:4 kmk2:1 color:91 knowledge:2 improves:2 higher:7 supervised:1 reflected:2 response:1 though:1 angular:6 quality:1 gray:4 scientific:1 name:1 normalized:3 true:34 contain:1 counterpart:2 
spatially:1 semantic:3 white:3 chromaticity:51 during:4 ambiguous:2 generalized:1 tt:1 demonstrate:1 reflection:1 temperature:1 reasoning:4 image:68 wise:5 novel:1 common:2 mt:1 empirically:1 tail:1 slight:1 interpret:1 refer:2 significant:2 cv:1 framed:1 automatic:1 grid:1 reliability:1 access:1 similarity:1 surface:6 etc:1 recent:2 optimizing:1 optimizes:1 certain:1 captured:2 canon:2 speci:1 employed:1 maximize:1 resolving:1 full:1 multiple:3 reduces:1 smooth:3 characterized:1 cross:7 sphere:1 compensate:1 adobe:1 ile:5 regression:3 essentially:1 metric:1 vision:1 funt:4 arxiv:1 histogram:5 sometimes:1 normalization:2 iteration:3 achieved:1 addition:1 remarkably:1 ayan:1 median:5 appropriately:1 unlike:1 checker:1 tri:3 file:1 cart:1 pass:1 legend:2 incorporates:1 unitary:2 ee:1 revealed:1 reacts:1 switch:1 buchsbaum:1 isolation:1 fit:1 brighter:2 cn:1 translates:1 six:1 pca:1 colour:2 generally:1 useful:1 listed:1 hue:3 ten:5 induces:1 clip:1 processed:1 tth:1 http:4 generate:1 outperform:1 restricts:1 canonical:1 notice:1 estimated:2 neuroscience:1 per:12 mega:1 blue:7 discrete:3 threefold:1 key:2 four:2 terminology:1 achieving:1 changing:1 prevent:1 luminance:23 fraction:2 run:1 inverse:1 powerful:1 reasonable:2 patch:4 separation:1 sfu:1 poorest:1 dropout:3 layer:1 ct:1 fold:12 occur:1 sharply:1 scene:11 flat:2 encodes:1 dominated:1 fourier:1 min:1 performing:1 relatively:2 alternate:1 combination:6 across:13 slightly:1 smaller:1 pti:3 explained:1 invariant:1 restricted:2 outlier:1 iccv:2 taken:1 computationally:3 remains:1 r3:5 locus:2 end:26 adopted:1 available:2 observe:1 lambertian:1 spectral:8 xiong:3 original:1 denotes:2 remaining:1 include:1 exploit:2 reflectance:8 k1:3 especially:1 objective:1 quantity:1 occurs:1 parametric:2 concentration:1 primary:2 strategy:1 traditional:1 gradient:4 amongst:1 thank:1 bianco:3 sci:1 nx:3 extent:1 consensus:1 reason:2 assuming:1 rother:1 condit:1 ratio:1 providing:1 daylight:1 relate:2 negative:2 implementation:1 reliably:1 unknown:1 twenty:1 perform:1 descent:4 technol:1 defining:1 extended:1 hinton:1 dc:1 sharp:3 intensity:9 ttic:2 introduced:1 pair:1 learned:3 perpixel:1 trans:4 address:1 able:3 beyond:1 usually:1 perception:1 pattern:1 indoor:1 eighth:1 geom:1 built:1 green:6 max:1 including:2 belief:11 shifting:1 natural:8 escience:1 improve:1 lightness:1 gamut:3 carried:1 extract:1 occurence:1 prior:3 epoch:3 relative:2 law:1 bear:1 var:1 validation:9 illuminating:1 incident:4 consistent:3 pi:3 land:1 row:4 course:1 placed:1 last:1 copy:2 supported:1 bias:2 institute:1 neighbor:1 absolute:3 ghz:1 benefit:2 curve:1 van:3 judd:1 world:4 evaluating:1 author:2 commonly:1 crest:1 global:16 overfitting:1 assumed:3 spatio:2 spectrum:4 infact:1 search:1 table:5 additionally:1 channel:7 learn:1 ca:1 quantize:2 complex:2 domain:3 s2:3 whole:1 ref:4 xu:2 fig:5 intel:1 quantiles:3 sub:3 position:1 momentum:2 candidate:5 outdoor:1 kxk2:1 lie:3 atypical:1 toyota:1 jmlr:1 specific:2 deriv:1 intrinsic:2 quantization:3 restricting:1 adding:1 drew:1 hyperspectral:1 illumination:7 illustrates:1 conditioned:3 candi:1 kx:1 simply:6 likely:4 wavelength:1 visual:2 prevents:1 ordered:1 partially:1 scalar:5 ayanc:1 srivastava:1 corresponds:1 truth:4 extracted:1 acm:1 conditional:3 goal:3 sized:3 consequently:1 towards:1 appreciable:1 feasible:1 considerable:2 change:3 specifically:4 typical:2 uniformly:1 corrected:2 total:2 experimental:2 indicating:2 support:2 retinex:3 people:1 illuminant:63 evaluate:1 phenomenon:1 correlated:1 |
5,374 | 5,865 | Efficient Exact Gradient Update for training Deep
Networks with Very Large Sparse Targets
Pascal Vincent? , Alexandre de Br?bisson, Xavier Bouthillier
D?partement d?Informatique et de Recherche Op?rationnelle
Universit? de Montr?al, Montr?al, Qu?bec, CANADA
?
and CIFAR
Abstract
An important class of problems involves training deep neural networks with sparse
prediction targets of very high dimension D. These occur naturally in e.g. neural
language models or the learning of word-embeddings, often posed as predicting
the probability of next words among a vocabulary of size D (e.g. 200 000). Computing the equally large, but typically non-sparse D-dimensional output vector
from a last hidden layer of reasonable dimension d (e.g. 500) incurs a prohibitive
O(Dd) computational cost for each example, as does updating the D ? d output
weight matrix and computing the gradient needed for backpropagation to previous
layers. While efficient handling of large sparse network inputs is trivial, the case
of large sparse targets is not, and has thus so far been sidestepped with approximate alternatives such as hierarchical softmax or sampling-based approximations
during training. In this work we develop an original algorithmic approach which,
for a family of loss functions that includes squared error and spherical softmax,
can compute the exact loss, gradient update for the output weights, and gradient for backpropagation, all in O(d2 ) per example instead of O(Dd), remarkably
without ever computing the D-dimensional output. The proposed algorithm yields
D
a speedup of 4d
, i.e. two orders of magnitude for typical sizes, for that critical part
of the computations that often dominates the training time in this kind of network
architecture.
1
Introduction
Many modern applications of neural networks have to deal with data represented, or representable,
as very large sparse vectors. Such representations arise in natural language related tasks, where
the dimension D of that vector is typically (a multiple of) the size of the vocabulary, and also in
the sparse user-item matrices of collaborative-filtering applications. It is trivial to handle very large
sparse inputs to a neural network in a computationally efficient manner: the forward propagation
and update to the input weight matrix after backpropagation are correspondingly sparse. By contrast, training with very large sparse prediction targets is problematic: even if the target is sparse, the
computation of the equally large network output and the corresponding gradient update to the huge
output weight matrix are not sparse and thus computationally prohibitive. This has been a practical
problem ever since Bengio et al. [1] first proposed using a neural network for learning a language
model, in which case the computed output vector represents the probability of the next word and
is the size of the considered vocabulary, which is becoming increasingly large in modern applications [2]. Several approaches have been proposed to attempt to address this difficulty essentially by
sidestepping it. They fall in two categories:
? Sampling or selection based approximations consider and compute only a tiny fraction of the
output?s dimensions sampled at random or heuristically chosen. The reconstruction sampling of
Dauphin et al. [3], the efficient use of biased importance sampling in Jean et al. [4], the use of
1
Noise Contrastive Estimation [5] in Mnih and Kavukcuoglu [6] and Mikolov et al. [7] all fall
under this category. As does the more recent use of approximate Maximum Inner Product Search
based on Locality Sensitive Hashing techniques[8, 9] to select a good candidate subset.
? Hierarchical softmax [10, 7] imposes a heuristically defined hierarchical tree structure for the
computation of the normalized probability of the target class.
Compared to the initial problem of considering all D output dimensions, both kinds of approaches
are crude approximations. In the present work, we will instead investigate a way to actually perform
the exact gradient update that corresponds to considering all D outputs, but do so implicitly, in a
computationally efficient manner, without actually computing the D outputs. This approach works
for a relatively restricted class of loss functions, the simplest of which is linear output with squared
error (a natural choice for sparse real-valued regression targets). The most common choice for
multiclass classification, the softmax loss is not part of that family, but we may use an alternative
spherical softmax, which will also yield normalized class probabilities. For simplicity and clarity,
our presentation will focus on squared error and on an online setting. We will briefly discuss its
extension to minibatches and to the class of possible loss functions in sections 3.5 and 3.6.
2
2.1
The problem
Problem definition and setup
We are concerned with gradient-descent based training of a deep feed-forward neural network with
target vectors of very high dimension D (e.g. D = 200 000) but that are sparse, i.e. a comparatively
small number, at most K D, of the elements of the target vector are non-zero. Such a Ksparse vector will typically be stored and represented compactly as 2K numbers corresponding
to pairs (index, value). A network to be trained with such targets will naturally have an equally
large output layer of dimension D. We can also optionally allow the input to the network to be a
similarly high dimensional sparse vector of dimension Din . Between the large sparse target, output,
and (optionally large sparse) input, we suppose the network?s intermediate hidden layers to be of
smaller, more typically manageable, dimension d D (e.g. d = 500)1 .
Mathematical notation: Vectors are denoted using lower-case letters, e.g. h, and are considered
column-vectors; corresponding row vectors are denoted with a transpose, e.g. hT . Matrices are
denoted using upper-case letters, e.g. W , with W T the transpose of W . The ith column of W is
T
denoted Wi , and its ith row W:i (both viewed as a column vector). U ?T = U ?1 denotes the
transpose of the inverse of a square matrix. Id is the d ? d identity matrix.
Network architecture: We consider a standard feed forward neural network architecture as depicted in Figure 1. An input vector x ? RDin is linearly transformed into a linear activation
a(1) = W (1)T x + b(1) through a Din ? d input weight matrix W (1) (and an optional bias vector
b(1) ? Rd ). This is typically followed by a non-linear transformation s to yield the representation of
the first hidden layer h(1) = s(a(1) ). This first hidden layer representation is then similarly transformed through a number of subsequent non-linear layers (that can be of any usual kind amenable to
backpropagation) e.g. h(k) = s(a(k) ) with a(k) = W (k)T h(k?1) + b(k) until we obtain last hidden
layer representation h = h(m) . We then obtain the final D-dimensional network output as o = W h
where W is a D ? d output weight matrix, which will be our main focus in this work. Finally, the
network?s D-dimensional output o is compared to the D-dimensional target vector y associated with
input x using squared error, yielding loss L = ko ? yk2 .
Training procedure: This architecture is a typical (possibly deep) multi-layer feed forward neural
network architecture with a linear output layer and squared error loss. Its parameters (weight matrices and bias vectors) will be trained by gradient descent, using gradient backpropagation [11, 12, 13]
to efficiently compute the gradients. The procedure is shown in Figure 1. Given an example from
the training set as an (input,target) pair (x, y), a pass of forward propagation proceeds as outlined above, computing the hidden representation of each hidden layer in turn based on the previous one, and finally the network?s predicted output o and associated loss L. A pass of gradient
backpropagation then works in the opposite direction, starting from ?o = ?L
?o = 2(o ? y) and
1
Our approach does not impose any restriction on the architecture nor size of the hidden layers, as long as
they are amenable to usual gradient backpropagation.
2
? Training deep neural networks with very large sparse targets is an important problem
? Arises e.g. in Neural Language Models [1] with large vocabulary size (e.g. D = 500 000 one-hot target).
? Efficient handling of large sparse inputs is trivial.
? But backprop training with large sparse targets is prohibitively expensive.
? Focus on output layer: maps last hidden representation h of reasonable dimension d (e.g. 500)
to very large output o of dimension D (e.g. 500 000) with a Dxd parameter matrix W:
Problem: expensive computation
Output o
o = Wh
O(Dd)
Backpropagation
(small d)
O(Kd)
Ex: d = 500
W(1)
(D d)
x
Input x
b) W ? W + 2?yh
We will now see how
only U and V respe
date
versions
Q=
5.3
E?cient
with
Q
= W T Wof gra
hSupposing
in Equations
we ????
hav
The for
gradient
the
squ
updateof
b)
).will
2
d?d matrix
(we
h?y?
?L
W? is ?W
= ??W?W
h = Qh
haswould
a comp
update
to
W
b
Solution:
representation
ofSeco
y,co
learning
rate. Again,
vector,
so that
a) Unew
=U
?comp
2?(
computational
complex
O(K).
So the all
overal
then
to
update
the
b) Vnew
=
V
+
2?y
O(Kd
+ d2 ) = To
O((K
N
will
be sparse).
ove
of magnitude
cheape
(a
Proof:
If we define inter
computation of L caW
Vnew Un
...
W(2)(d d)
x
?
Forward propagation
O(Dd)
!h = W T !o
O(d2)
hidden 1
we suppose K << d << D
Prohibitivley expensive!
O(d2) O(d2)
hidden 2
(small d)
O(D)
!o = 2(o-y)
...
h
(small d)
O(D)
large D, not sparse
O(Dd)
W "W- ! !o hT
last hidden
Loss L = ?o ? y?2
a) W ? W ? 2?W
Altogether: O(3Dd )
Note that we can d
Prohibitive!
O(d2) O(d2)
Boo
W(1) "W (1)- ! x !aT
O(Kd) cheap!
...
??
(large D, but K-sparse)
Ex: D = 500 000, K=5
?
Us
com
!
?
...
??
Target y (large D, but K-sparse)
?
Current workarounds are approximations:
Figure 1: ?The
computational
problem
posed
very
large
sparse
Dealing
sparse inSampling
based approximations
compute
only by
a tiny
fraction
of the
output?stargets.
dimensions
sampled atwith
random.
put efficiently
is trivial,sampling
with both
the
and
backward
propagation
achieved in
Reconstruction
[2] and
theforward
use of Noise
Contrastive
Estimation
[3] in [4, 5]phases
fall undereasily
this category.
O(Kd). However
this is not the case with large sparse targets. They incur a prohibitive compu? Hierarchical softmax [6, 4] imposes a heuristically defined hierarchical tree structure for the computation of the
tational costnormalized
of O(Dd)
at the
output
layer as forward propagation, gradient backpropagation and
probability
of the
target class.
weight update each require accessing all D ? d elements of the large output weight matrix.
! Ne
" ac
SHORT FORMU
LINE CASE: !
Ne
" ac
Qn
a) First update
implicitly by updat
Proof:
[1] Bengio, Y., Ducharme, R., and Vincent, P. (2001). A neural probabilistic language model. NIPS 2000.
?L Y. (2011). Large-scale?L
Dauphin,
Y., Glorot,
learning
of embeddings with reconstruction sampling. ICML 2011.
back[2]the
gradients
?hX.,
= Bengio,
(k) and
(k) and ?a(k) =
(k) upstream through the network.
References:
propagating
?h
?a
The corresponding gradient contributions on parameters (weights and biases), collected along the
way, are straightforward once we have the associated ?a(k) . Specifically they are ?b(k) = ?a(k)
and ?W (k) = h(k?1) (?a(k) )T . Similarly for the input layer ?W (1) = x(?a(1) )T , and for the
output layer ?W = (o ? y)hT . Parameters are then updated through a gradient descent step
W (k) ? W (k) ? ??W (k) and b(k) ? b(k) ? ??b(k) , where ? is a positive learning-rate. Similarly
for the output layer which will be our main focus here: W ? W ? ??W .
2.2
The easy part: input layer forward propagation and weight update
It is easy and straightforward to efficiently compute the forward propagation, and the backpropagation and weight update part for the input layer when we have a very large Din -dimensional but
K?sparse input vector x with appropriate sparse representation. Specifically we suppose that x is
represented as a pair of vectors u, v of length (at most) K, where u contains integer indexes and v
the associated real values of the elements of x such that xi = 0 if i ?
/ u, and xuk = vk .
? Forward propagation through the input layer: The sparse representation of x as the positions
of K elements together with their value makes it cheap to compute W (1)T x. Even though W (1)
may be a huge full Din ? d matrix, only K of its rows (those corresponding to the non-zero
entries of x) need to be visited and summed to compute W (1)T x. Precisely, with our (u, v) sparse
PK
(1)
(1)
representation of x this operation can be written as W (1)T x = k=1 vk W:uk where each W:uk
is a d-dimensional vector, making this an O(Kd) operation rather than O(Dd).
? Gradient and update through input layer: Let us for now suppose that we were able to get
gradients (through backpropagation) up to the first hidden layer activations a(1) ? Rd in the form
of gradient vector ?a(1) = ?a?L
(1) . The corresponding gradient-based update to input layer weights
W (1) is simply W (1) ? W (1) ? ?x(?a(1) )T . This is a rank-one update to W (1) . Here again, we
see that only the K rows of W (1) associated to the (at most) K non-zero entries of x need to be
(1)
(1)
modified. Precisely this operation can be written as: W:uk ? W:uk ??vk ?a(1) ?k ? {1, . . . , K}
making this again a O(Kd) operation rather than O(Dd).
3
[3] G
[4] M
2.3
The hard part: output layer propagation and weight update
Given some network input x, we suppose we can compute without difficulty through forward propagation the associated last hidden layer representation h ? Rd . From then on:
? Computing the final output o = W h incurs a prohibitive computational cost of O(Dd) since W
is a full D ? d matrix. Note that there is a-priori no reason for representation h to be sparse (e.g.
with a sigmoid non-linearity) but even if it was, this would not fundamentally change the problem
since it is D that is extremely large, and we supposed d reasonably sized already. Computing the
residual (o ? y) and associated squared error loss ko ? yk2 incurs an additional O(D) cost.
T
? The gradient on h that we need to backpropagate to lower layers is ?h = ?L
?h = 2W (o ? y)
which is another O(Dd) matrix-vector product.
? Finally, when performing the corresponding output weight update W ? W ? ?(o ? y)hT we see
that it is a rank-one update that updates all D ? d elements of W , which again incurs a prohibitive
O(Dd) computational cost.
For very large D, all these three O(Dd) operations are prohibitive, and the fact that y is sparse, seen
from this perspective, doesn?t help, since neither o nor o ? y will be sparse.
3
A computationally efficient algorithm for performing the exact online
gradient update
Previously proposed workarounds are approximate or use stochastic sampling. We propose a different approach that results in the exact same, yet efficient gradient update, remarkably without ever
having to compute large output o.
3.1
Computing the squared error loss L and the gradient with respect to h efficiently
Suppose that, we have, for a network input example x, computed the last hidden representation
h ? Rd through forward propagation. The network?s D dimensional output o = W h is then in
principle compared to the high dimensional target y ? RD . The corresponding squared error loss
2
is L = kW h ? yk . As we saw in Section 2.3, computing it in the direct naive way would have
a prohibitive computational complexity of O(Dd + D) = O(Dd) because computing output W h
with a full D ? d matrix W and a typically non-sparse h is O(Dd). Similarly, to backpropagate
the gradient through the network, we need to compute the gradient of loss L with respect to last
?kW h?yk2
hidden layer representation h. This is ?h = ?L
= 2W T (W h ? y). So again, if
?h =
?h
we were to compute it directly in this manner, the computational complexity would be a prohibitive
O(Dd). Provided we have maintained an up-to-date matrix Q = W T W , which is of reasonable
size d ? d and can be cheaply maintained as we will see in Section 3.3, we can rewrite these two
operations so as to perform them in O(d2 ):
Loss computation:
Gradient on h:
O(Dd)
L =
=
z}|{
k W h ?yk2
?h =
T
(W h ? y) (W h ? y)
= hT W T W h ? y T W h ? hT W T y + y T y
= hT Qh ? 2hT (W T y) + y T y
= hT ( Qh ?2 W T y ) + y T y
|{z}
| {z }
|{z}
O(d2 )
O(Kd)
(1)
?L
?h
=
=
=
=
?kW h ? yk2
?h
2W T (W h ? y)
2 WTWh ? WTy
2( Qh ? W T y )
|{z} | {z }
O(d2 )
O(K)
(2)
O(Kd)
The terms in O(Kd) and O(K) are due to leveraging the K-sparse representation of target vector
y. With K D and d D, we get altogether a computational cost of O(d2 ) which can be several
orders of magnitude cheaper than the prohibitive O(Dd) of the direct approach.
4
3.2
Efficient gradient update of W
The gradient of the squared error loss with respect to output layer weight matrix W is
?kW h?yk2
?W
?L
?W
=
= 2(W h ? y)h . And the corresponding gradient descent update to W would be
Wnew ? W ? 2?(W h ? y)hT , where ? is a positive learning rate. Again, computed in this manner,
this induces a prohibitive O(Dd) computational complexity, both to compute output and residual
W h ? y, and then to update all the Dd elements of W (since generally neither W h ? y nor h will
be sparse). All D ? d elements of W must be accessed during this update. On the surface this seems
hopeless. But we will now see how we can achieve the exact same update on W in O(d2 ). The trick
is to represent W implicitly as the factorization |{z}
W = |{z}
V |{z}
U and update U and V instead
T
D?d
a) Unew
b) Vnew
D?d d?d
= U ? 2?(U h)hT
= V +
(3)
?T
2?y(Unew
h)T
(4)
This results in implicitly updating W as we did explicitly in the naive approach as we now prove:
Vnew Unew
?T
(V + 2?y(Unew
h)T ) Unew
=
?T
= V Unew + 2?y(Unew
h)T Unew
?1
= V Unew + 2?yhT Unew
Unew
?1
= V (U ? 2?(U h)hT ) + 2?yhT (Unew
Unew )
= V U ? 2?V U hhT + 2?yhT
= V U ? 2?(V U h ? y)hT
= W ? 2?(W h ? y)T hT
= Wnew
We see that the update of U in Eq. 3 is a simple O(d2 ) operation. Following this simple rank-one
update to U , we can use the Sherman-Morrison formula to derive the corresponding rank-one update
to U ?T which will also be O(d2 ):
?T
Unew
=
U ?T +
2?
2 (U
1 ? 2? khk
?T
h)hT
(5)
?T
h, an O(d2 ) operation needed in Eq. 4. The ensuing rank-one
It is then easy to compute the Unew
update of V in Eq 4, thanks to the K-sparsity of y is only O(Kd): only the K rows V associated
to non-zero elements in y are accessed and updated, instead of all D rows of W we had to modify
in the naive update! Note that with the factored representation of W as V U , we only have W
implicitly, so the W T y terms that entered in the computation of L and ?h in the previous paragraph
need to be adapted slightly as y? = W T y = U T (V T y), which becomes O(d2 + Kd) rather than
O(Kd) in computational complexity. But this doesn?t change the overall O(d2 ) complexity of these
computations.
3.3
Bookkeeping: keeping an up-to-date Q and U ?T
We have already seen, in Eq. 5, how we can cheaply maintain an up-to-date U ?T following our
update of U . Similarly, following our updates to U and V , we need to keep an up-to-date Q =
W T W which is needed to efficiently compute the loss L (Eq. 1) and gradient ?h (Eq. 2). We have
shown that updates to U and V in equations 3 and 4 are equivalent to implicitly updating W as
Wnew ? W ? 2?(W h ? y)hT , and this translates into the following update to Q = W T W :
z? =
Qnew
=
Qh ? U T (V T y)
Q ? 2? h?
z T + z?hT + (4? 2 L)hhT
(6)
The proof is straightforward but due to space constraints we put it in supplementary material. One
can see that this last bookkeeping operation also has a O(d2 ) computational complexity.
5
3.4
Putting it all together: detailed algorithm and expected benefits
We have seen that we can efficiently compute cost L, gradient with respect to h (to be later backpropagated further) as well as updating U and V and performing the bookkeeping for U ?T and
Q. Algorithm 1 describes the detailed algorithmic steps that we put together from the equations
derived above. Having K d D we see that the proposed algorithm requires O(d2 ) operations,
whereas the standard approach required O(Dd) operations. If we take K ? d , we may state more
precisely that the proposed algorithm, for computing the loss and the gradient updates will require
roughly 12d2 operations whereas the standard approach required roughly 3Dd operations. So overD
all the proposed algorithm change corresponds to a computational speedup by a factor of 4d
. For
D = 200 000 and d = 500 the expected speedup is thus 100. Note that the advantage is not only
in computational complexity, but also in memory access. For each example, the standard approach
needs to access and change all D ? d elements of matrix W , whereas the proposed approach only
accesses the much smaller number K ? d elements of V as well as the three d ? d matrices U , U ?T ,
and Q. So overall we have a substantially faster algorithm, which, while doing so implicitly, will
nevertheless perform the exact same gradient update as the standard approach. We want to emphasize here that our approach is completely different from simply chaining 2 linear layers U and V
and performing ordinary gradient descent updates on them: this would result in the same prohibitive
computational complexity as the standard approach, and such ordinary separate gradient updates to
U and V would not be equivalent to the ordinary gradient update to W = V U .
Algorithm 1 Efficient computation of cost L, gradient on h, and update to parameters U and V
Step Operation
Computational Number of
#
complexity
multiply-adds
? = Qh
1:
O(d2 )
d2
h
2:
y? = U T (V T y)
O(Kd + d2 )
Kd + d2
? ? y?
3:
z? = h
O(d)
d
4:
?h = 2?
z
O(d)
d
? ? 2hT y? + y T y
5:
L = hT h
O(2d + K)
2d + K + 1
6:
Unew = U ? 2?(U h)hT
O(d2 )
2d2 + d
?T
2
2
7:
Unew =
O(d )
2d + 2d + 3
2?
?T
U ?T + 1?2?khk
h)hT
2 (U
?T
8:
Vnew = V + 2?y(Unew
h)T
O(d2 + Kd)
d2 + K + Kd
2
9:
Qnew =
O(d )
4 + 2d + 3d2
Q ? 2? h?
z T + z?hT +
(4? 2 L)hhT
Altogether:
O(d2 )
? 12d2
provided
elementary
K<dD
operations
3.5
Controlling numerical stability and extension to the minibatch case
The update of U in Equation 3 may over time lead U to become ill-conditioned. To prevent this,
we regularly (every 100 updates) monitor its conditioning number. If either the smallest or largest
singular value moves outside an acceptable range2 , we bring it back to 1 by doing an appropriate
rank-1 update to V (which costs Dd operations, but is only done rarely). Our algorithm can also
be straightforwardly extended to the minibatch case (the derivations are given in the supplementary material section) and yields the same theoretical speedup factor with respect to the standard
naive approach. But one needs to be careful in order to keep the computation of U ?T h reasonably
efficient: depending on the size of the minibatch m, it may be more efficient to solve the corresponding linear equation for each minibatch from scratch rather than updating U ?T with the Woodbury
equation (which generalizes the Sherman-Morrison formula for m > 1).
2
More details on our numerical stabilization procedure can be found in the supplementary material
6
3.6
Generalization to a broader class of loss functions
The approach that we just detailed for linear output and squared error can be extended to a broader,
though restricted, family of loss functions. We call it the spherical family of loss functions because
it includes the spherical alternative to the softmax, thus named in [14]. Basically it contains any loss
2
function
P 2 that can be expressed as a function of only the oc associated to non-zero yc and of kok =
j oj the squared norm of the whole output vector, which we can compute cheaply, irrespective of
D, as we did above3 . This family does not include the standard softmax loss log
o2 +
P c 2
j (oj +)
Pexp(oc ) ,
j exp(oj )
but it
does include the spherical softmax4 : log
. Due to space constraints we will not detail this
extension here, only give a sketch of how it can be obtained. Deriving it may not appear obvious at
first, but it is relatively straightforward once we realize that: a) the gain in computing the squared
error loss comes from being able to very cheaply compute the sum of squared activations kok2 (a
scalar quantity), and will thus apply equally well to other losses that can be expressed based on that
quantity (like the spherical softmax). b) generalizing our gradient update trick to such losses follows
naturally from gradient backpropagation: the gradient is first backpropagated from the final loss to
the scalar sum of squared activations, and from there on follows the same path and update procedure
as for the squared error loss.
4
Experimental validation
We implemented both a CPU version using blas and a parallel GPU (Cuda) version using cublas
of the proposed algorithm5 . We evaluated the GPU and CPU implementations by training word
embeddings with simple neural language models, in which a probability map of the next word
given its preceding n-gram is learned by a neural network. We used a Nvidia Titan Black GPU
and a i7-4820K @ 3.70GHz CPU and ran experiments on the one billion word dataset[15], which
is composed of 0.8 billions words belonging to a vocabulary of 0.8 millions words. We evaluated
the resulting word embeddings with the recently introduced Simlex-999 score [16], which measures
the similarity between words. We also compared our approach to unfactorised versions and to a
two-layer hierarchical softmax. Figure 2 and 3 (left) illustrate the practical speedup of our approach
for the output layer only. Figure 3 (right) shows that out LST (Large Sparse Target) models are
much faster to train than the softmax models and converge to only slightly lower Simlex-999 scores.
Table 1 summarizes the speedups for the different output layers we tried, both on CPU and GPU.
We also empirically verified that our proposed factored algorithm learns the model weights (V U ) as
the corresponding naive unfactored algorithm?s W , as it theoretically should, and followed the same
learning curves (as a function of number of iterations, not time!).
5
Conclusion and future work
We introduced a new algorithmic approach to efficiently compute the exact gradient updates for
training deep networks with very large sparse targets. Remarkably the complexity of the algorithm
is independent of the target size, which allows tackling very large problems. Our CPU and GPU
implementations yield similar speedups to the theoretical one and can thus be used in practical
applications, which could be explored in further work. In particular, neural language models seem
good candidates. But it remains unclear how using a loss function other than the usual softmax might
affect the quality of the resulting word embeddings so further research needs to be carried out in this
direction. This includes empirically investigating natural extensions of the approach we described
to other possible losses in the spherical family such as the spherical-softmax.
Acknowledgements: We wish to thank Yves Grandvalet for stimulating discussions, ?a?glar
G?l?ehre for pointing us to [14], the developers of Theano [17, 18] and Blocks [19] for making
these libraries available to build on, and NSERC and Ubisoft for their financial support.
3
P
In addition loss functions in this family are also allowed to depend on sum(o) = j oj which we can also
P
P
compute cheaply without computing o, by tracking w
? = j W:j whereby sum(o) = j W:jT h = w
? T h.
4
where c is the correct class label, and is a small positive constant that we added to the spherical interpretation in [14] for numerical stability: to guarantee we never divide by 0 nor take the log of 0.
5
Open source code is available at: https://github.com/pascal20100/factored_output_layer
7
Table 1: Speedups with respect to the baseline naive model on CPU, for a minibatch of 128 and the
whole vocabulary of D = 793471 words. This is a model with two hidden layers of d = 300 neurons.
Model
output layer only speedup whole model speedup
cpu unfactorised (naive)
1
1
gpu unfactorised (naive)
6.8
4.7
gpu hierarchical softmax
125.2
178.1
cpu factorised
763.3
501
gpu factorised
3257.3
1852.3
1
10
un-factorised CPU
un-factorised GPU
factorised GPU
factorised CPU
h_softmax GPU
0.008
Timing (sec) of a minibatch of size 128
Timing (sec) of a minibatch of size 128
0.010
0.006
0.004
0.002
un-factorised CPU
un-factorised GPU
factorised GPU
factorised CPU
h_softmax GPU
0
10
-1
10
-2
10
-3
0.000
0
2000
4000
6000
Size of the vocabulary D
8000
10
10000
1
2
10
10
4
3
10
10
Size of the vocabulary D
5
10
6
10
Figure 2: Timing of different algorithms. Time taken by forward and backward propagations in the
output layer, including weight update, on a minibatch of size 128 for different sizes of vocabulary
D on both CPU and GPU. The input size d is fixed to 300. The Timing of a 2 layer hierarchical
softmax efficient GPU implementation (h_softmax) is also provided for comparison. Right plot is
in log-log scale. As expected, the timings of factorized versions are independent of vocabulary size.
0.25
cpu_unfact / cpu_fact, experimental
gpu_unfact / gpu_fact, experimental
unfact / fact, theoretical
cpu_unfact / gpu_fact, experimental
cpu_unfact / gpu_unfact, experimental
1400
1200
0.15
SimLex-999
Speedup
1000
0.20
800
600
0.05
LST CPU
LST GPU
Softmax CPU
Softmax GPU
H-Softmax GPU
0.00
400
?0.05
200
0
0
0.10
100
200
300
400
500
600
700
Size of the vocabulary D (in thousands)
?0.10
800
-1
10
0
10
1
10
Training time (hours)
2
10
3
10
Figure 3: Left: Practical and theoretical speedups for different sizes of vocabulary D and fixed input
size d=300. The practical unfact / fact speedup is similar to the theoretical one. Right: Evolution
of the Simlex-999 score obtained with different models as a function of training time (CPU softmax
times were extrapolated from fewer iterations). Softmax models are zero hidden-layer models, while
our large sparse target (LST) models have two hidden layers. These were the best architectures
retained in both cases (surprisingly the softmax models with hidden layers performed no better on
this task). The extra non-linear layers in LST may help compensate for the lack of a softmax. LST
models converge to slightly lower scores at similar speed as the hierarchical softmax model but
significantly faster than softmax models.
8
References
[1] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances in Neural
Information Processing Systems 13 (NIPS?00), pages 932?938, 2001.
[2] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493?2537, 2011.
[3] Y. Dauphin, X. Glorot, and Y. Bengio. Large-scale learning of embeddings with reconstruction sampling.
In Proceedings of the 28th International Conference on Machine learning, ICML ?11, 2011.
[4] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine
translation. In ACL-IJCNLP?2015, 2015. arXiv:1412.2007.
[5] M. Gutmann and A. Hyvarinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS?10), 2010.
[6] A. Mnih and K. Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation.
In Advances in Neural Information Processing Systems 26, pages 2265?2273. 2013.
[7] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and
phrases and their compositionality. In NIPS?2013, pages 3111?3119. 2013.
[8] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search
(MIPS). In Advances in Neural Information Processing Systems 27, pages 2321?2329. 2014.
[9] S. Vijayanarasimhan, J. Shlens, R. Monga, and J. Yagnik. Deep networks with large output spaces.
arxiv:1412.7479, 2014.
[10] F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In Proceedings of the
Tenth International Workshop on Artificial Intelligence and Statistics, pages 246?252, 2005.
[11] D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature,
323:533?536, 1986.
[12] Y. LeCun. Une proc?dure d?apprentissage pour R?seau ? seuil assym?trique. In Cognitiva 85: A la
Fronti?re de l?Intelligence Artificielle, des Sciences de la Connaissance et des Neurosciences, pages 599?
604, 1985.
[13] Y. LeCun. Learning processes in an asymmetric threshold network. In Disordered Systems and Biological
Organization, pages 233?240. Les Houches 1985, 1986.
[14] Y. Ollivier. Riemannian metrics for neural networks. CoRR, abs/1303.0818, 2013.
[15] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word
benchmark for measuring progress in statistical language modeling. In INTERSPEECH 2014, 15th Annual
Conference of the International Speech Communication Association, Singapore, September 14-18, 2014,
pages 2635?2639, 2014.
[16] F. Hill, R. Reichart, and A. Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity
estimation. CoRR, abs/1408.3456, 2014.
[17] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley,
and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for
Scientific Computing Conference (SciPy), 2010. Oral Presentation.
[18] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio.
Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS
2012 Workshop, 2012.
[19] B. van Merri?nboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio. Blocks and Fuel: Frameworks for deep learning. ArXiv e-prints, June 2015.
9
| 5865 |@word version:5 briefly:1 manageable:1 seems:1 norm:1 open:1 d2:31 heuristically:3 tried:1 contrastive:4 incurs:4 initial:1 contains:2 score:4 o2:1 current:1 com:2 activation:4 tackling:1 yet:1 written:2 must:1 gpu:20 realize:1 subsequent:1 numerical:3 cheap:2 plot:1 update:48 intelligence:3 prohibitive:12 fewer:1 item:1 une:1 ith:2 short:1 recherche:1 pascanu:2 math:1 accessed:2 mathematical:1 along:1 direct:2 become:1 prove:1 khk:2 paragraph:1 manner:4 theoretically:1 inter:1 houches:1 expected:3 kuksa:1 pour:1 nor:4 chelba:1 multi:1 roughly:2 spherical:9 cpu:17 considering:2 becomes:1 provided:3 notation:1 linearity:1 factorized:1 fuel:1 kind:3 substantially:1 developer:1 transformation:1 guarantee:1 every:1 universit:1 prohibitively:1 uk:4 appear:1 positive:3 timing:5 modify:1 id:1 path:1 becoming:1 black:1 might:1 acl:1 co:1 factorization:1 practical:5 woodbury:1 lecun:2 block:2 backpropagation:12 procedure:4 significantly:1 word:15 bergeron:1 morin:1 get:2 selection:1 put:3 vijayanarasimhan:1 restriction:1 equivalent:2 map:2 dean:1 straightforward:4 williams:1 starting:1 simplicity:1 scipy:1 factored:2 seuil:1 deriving:1 shlens:1 lamblin:2 financial:1 stability:2 handle:1 merri:1 updated:2 target:26 suppose:6 qh:6 user:1 exact:8 controlling:1 goodfellow:1 trick:2 element:10 rumelhart:1 expensive:3 updating:5 asymmetric:2 bec:1 thousand:1 gutmann:1 yk:1 ran:1 accessing:1 complexity:10 warde:2 trained:2 depend:1 rewrite:1 ove:1 oral:1 incur:1 completely:1 compactly:1 formu:1 lst:6 represented:3 derivation:1 train:1 informatique:1 artificial:2 outside:1 jean:2 posed:2 valued:1 ducharme:2 supplementary:3 solve:1 yht:3 koehn:1 statistic:2 final:3 online:2 unfactored:1 advantage:1 reconstruction:4 propose:1 product:3 date:5 entered:1 glar:1 achieve:1 supposed:1 billion:3 sutskever:1 help:2 derive:1 develop:1 ac:2 propagating:2 depending:1 pexp:1 illustrate:1 op:1 progress:1 eq:6 implemented:1 predicted:1 involves:1 come:1 direction:2 unew:19 correct:1 stochastic:1 stabilization:1 disordered:1 dure:1 material:3 backprop:1 require:2 hx:1 generalization:1 biological:1 elementary:1 extension:4 ijcnlp:1 dxd:1 considered:2 exp:1 algorithmic:3 pointing:1 desjardins:1 smallest:1 estimation:6 proc:1 label:1 visited:1 sensitive:1 saw:1 largest:1 hav:1 sidestepped:1 modified:1 rather:4 broader:2 derived:1 focus:4 june:1 vk:3 improvement:1 rank:6 contrast:1 baseline:1 typically:6 hidden:20 transformed:2 overall:2 among:1 classification:1 pascal:1 dauphin:3 denoted:4 priori:1 ill:1 softmax:24 summed:1 genuine:1 once:2 never:1 having:2 sampling:8 represents:1 kw:4 icml:2 unsupervised:1 future:1 partement:1 fundamentally:1 modern:2 composed:1 cheaper:1 phase:1 maintain:1 attempt:1 ab:2 montr:2 huge:2 organization:1 investigate:1 mnih:2 multiply:1 yielding:1 farley:2 amenable:2 tree:2 divide:1 re:1 theoretical:5 column:3 modeling:1 measuring:1 cublas:1 ordinary:3 cost:8 phrase:1 subset:1 entry:2 stored:1 straightforwardly:1 rationnelle:1 cho:1 thanks:1 international:4 memisevic:1 probabilistic:3 together:3 squared:15 again:6 xuk:1 possibly:1 compu:1 li:1 chorowski:1 de:7 factorised:10 bergstra:2 sec:2 includes:3 titan:1 explicitly:1 collobert:1 later:1 performed:1 dumoulin:1 doing:2 compiler:1 parallel:1 bouchard:1 collaborative:1 contribution:1 square:1 yves:1 efficiently:8 yield:5 vincent:3 kavukcuoglu:3 basically:1 comp:2 definition:1 obvious:1 naturally:3 associated:9 proof:3 riemannian:1 sampled:2 gain:1 dataset:1 wh:1 actually:2 back:3 alexandre:1 feed:3 hashing:1 done:1 though:2 evaluated:2 
just:1 until:1 sketch:1 propagation:12 lack:1 minibatch:8 quality:1 scientific:1 normalized:2 xavier:1 evolution:1 din:4 semantic:1 deal:1 during:2 interspeech:1 maintained:2 whereby:1 chaining:1 oc:2 unnormalized:1 hill:1 bring:1 recently:1 common:1 sigmoid:1 bookkeeping:3 empirically:2 conditioning:1 blas:1 million:1 interpretation:1 association:1 caw:1 rd:5 outlined:1 similarly:6 language:11 sherman:2 had:1 lsh:1 access:3 similarity:2 yk2:6 surface:1 add:1 recent:1 perspective:1 nvidia:1 yagnik:1 seen:3 additional:1 impose:1 preceding:1 converge:2 corrado:1 morrison:2 multiple:1 full:3 karlen:1 faster:3 long:1 cifar:1 compensate:1 equally:4 prediction:2 regression:1 ko:2 essentially:1 metric:1 arxiv:3 iteration:2 represent:1 monga:1 achieved:1 whereas:3 remarkably:3 want:1 addition:1 thirteenth:1 singular:1 source:1 biased:1 extra:1 breuleux:1 cognitiva:1 bahdanau:1 regularly:1 leveraging:1 seem:1 integer:1 call:1 intermediate:1 bengio:10 embeddings:7 concerned:1 boo:1 easy:3 affect:1 mips:1 architecture:7 opposite:1 bisson:1 inner:2 br:1 multiclass:1 translates:1 i7:1 expression:1 speech:1 deep:9 generally:1 detailed:3 kok:1 backpropagated:2 induces:1 category:3 simplest:1 http:1 problematic:1 singapore:1 cuda:1 neuroscience:1 per:1 sidestepping:1 putting:1 nevertheless:1 threshold:1 monitor:1 alsh:1 clarity:1 prevent:1 neither:2 verified:1 tenth:1 ht:22 ollivier:1 backward:2 tational:1 fraction:2 sum:4 inverse:1 letter:2 named:1 gra:1 family:7 reasonable:3 almost:1 acceptable:1 summarizes:1 layer:40 followed:2 annual:1 adapted:1 occur:1 precisely:3 constraint:2 speed:2 extremely:1 mikolov:3 performing:4 nboer:1 relatively:2 speedup:13 representable:1 kd:16 belonging:1 smaller:2 slightly:3 increasingly:1 describes:1 wi:1 qu:1 making:3 restricted:2 handling:2 theano:3 taken:1 computationally:4 equation:6 previously:1 remains:1 discus:1 turn:1 needed:3 ge:1 qnew:2 generalizes:1 operation:16 available:2 apply:1 hierarchical:10 appropriate:2 alternative:3 altogether:3 original:1 denotes:1 include:2 brant:1 build:1 comparatively:1 move:1 already:2 quantity:2 added:1 print:1 usual:3 unclear:1 september:1 gradient:43 separate:1 thank:1 ensuing:1 collected:1 trivial:4 reason:1 length:1 code:1 index:2 retained:1 optionally:2 setup:1 implementation:3 perform:3 upper:1 neuron:1 benchmark:1 descent:5 optional:1 extended:2 ever:3 hinton:1 communication:1 workarounds:2 canada:1 compositionality:1 introduced:2 pair:3 required:2 reichart:1 learned:1 hour:1 nip:4 robinson:1 address:1 able:2 proceeds:1 yc:1 sparsity:1 oj:4 ksparse:1 memory:1 including:1 hot:1 critical:1 natural:4 difficulty:2 predicting:1 residual:2 github:1 library:1 ne:2 irrespective:1 carried:1 naive:8 acknowledgement:1 python:1 unfact:2 loss:30 sublinear:1 filtering:1 validation:1 imposes:2 apprentissage:1 dd:24 principle:2 grandvalet:1 tiny:2 translation:1 ehre:1 row:6 wty:1 hopeless:1 extrapolated:1 surprisingly:1 last:8 transpose:3 keeping:1 bias:3 allow:1 fall:3 correspondingly:1 sparse:40 benefit:1 ghz:1 curve:1 dimension:12 vocabulary:13 gram:1 distributed:1 evaluating:1 qn:1 doesn:2 forward:13 far:1 hyvarinen:1 approximate:3 emphasize:1 implicitly:7 keep:2 dealing:1 investigating:1 xi:1 search:2 un:5 table:2 scratch:2 nature:1 reasonably:2 shrivastava:1 serdyuk:1 bottou:1 complex:1 upstream:1 did:2 pk:1 main:2 aistats:1 linearly:1 whole:3 noise:4 arise:1 turian:1 allowed:1 cient:1 position:1 wish:1 candidate:2 crude:1 yh:1 learns:1 formula:2 bastien:2 jt:1 explored:1 dominates:1 glorot:2 vnew:5 workshop:2 corr:2 importance:1 
magnitude:3 conditioned:1 chen:1 updat:1 locality:1 backpropagate:2 depicted:1 generalizing:1 simply:2 cheaply:5 expressed:2 nserc:1 tracking:1 scalar:2 van:1 corresponds:2 wnew:3 minibatches:1 stimulating:1 weston:1 viewed:1 presentation:2 identity:1 sized:1 careful:1 simlex:5 hard:1 change:4 typical:2 specifically:2 korhonen:1 rdin:1 ubisoft:1 pas:2 hht:3 experimental:5 la:2 rarely:1 select:1 support:1 arises:1 schuster:1 ex:2 |
5,375 | 5,866 | Pointer Networks
Oriol Vinyals?
Google Brain
Meire Fortunato?
Department of Mathematics, UC Berkeley
Navdeep Jaitly
Google Brain
Abstract
We introduce a new neural architecture to learn the conditional probability of an
output sequence with elements that are discrete tokens corresponding to positions
in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes in each step of the output depends on the length
of the input, which is variable. Problems such as sorting variable sized sequences,
and various combinatorial optimization problems belong to this class. Our model
solves the problem of variable size output dictionaries using a recently proposed
mechanism of neural attention. It differs from the previous attention attempts in
that, instead of using attention to blend hidden units of an encoder to a context
vector at each decoder step, it uses attention as a pointer to select a member of
the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net).
We show Ptr-Nets can be used to learn approximate solutions to three challenging
geometric problems ? finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem ? using training examples
alone. Ptr-Nets not only improve over sequence-to-sequence with input attention,
but also allow us to generalize to variable size output dictionaries. We show that
the learnt models generalize beyond the maximum lengths they were trained on.
We hope our results on these tasks will encourage a broader exploration of neural
learning for discrete problems.
1
Introduction
Recurrent Neural Networks (RNNs) have been used for learning functions over sequences from
examples for more than three decades [3]. However, their architecture limited them to settings
where the inputs and outputs were available at a fixed frame rate (e.g. [4]). The recently introduced
sequence-to-sequence paradigm [1] removed these constraints by using one RNN to map an input
sequence to an embedding and another (possibly the same) RNN to map the embedding to an output
sequence. Bahdanau et. al. augmented the decoder by propagating extra contextual information
from the input using a content-based attentional mechanism [5, 2, 6, 7]. These developments have
made it possible to apply RNNs to new domains, achieving state-of-the-art results in core problems
in natural language processing such as translation [1, 5] and parsing [8], image and video captioning
[9, 10], and even learning to execute small programs [2, 11].
Nonetheless, these methods still require the size of the output dictionary to be fixed a priori. Because
of this constraint we cannot directly apply this framework to combinatorial problems where the size
of the output dictionary depends on the length of the input sequence. In this paper, we address this
limitation by repurposing the attention mechanism of [5] to create pointers to input elements. We
show that the resulting architecture, which we name Pointer Networks (Ptr-Nets), can be trained to
output satisfactory solutions to three combinatorial optimization problems ? computing planar convex hulls, Delaunay triangulations and the symmetric planar Travelling Salesman Problem (TSP).
The resulting models produce approximate solutions to these problems in a purely data driven fash?
Equal contribution
1
(b) Ptr-Net
(a) Sequence-to-Sequence
Figure 1: (a) Sequence-to-Sequence - An RNN (blue) processes the input sequence to create a code
vector that is used to generate the output sequence (purple) using the probability chain rule and
another RNN. The output dimensionality is fixed by the dimensionality of the problem and it is the
same during training and inference [1]. (b) Ptr-Net - An encoding RNN converts the input sequence
to a code (blue) that is fed to the generating network (purple). At each step, the generating network
produces a vector that modulates a content-based attention mechanism over inputs ([5, 2]). The
output of the attention mechanism is a softmax distribution with dictionary size equal to the length
of the input.
ion (i.e., when we only have examples of inputs and desired outputs). The proposed approach is
depicted in Figure 1.
The main contributions of our work are as follows:
? We propose a new architecture, that we call Pointer Net, which is simple and effective. It
deals with the fundamental problem of representing variable length dictionaries by using a
softmax probability distribution as a ?pointer?.
? We apply the Pointer Net model to three distinct non-trivial algorithmic problems involving
geometry. We show that the learned model generalizes to test problems with more points
than the training problems.
? Our Pointer Net model learns a competitive small scale (n ? 50) TSP approximate solver.
Our results demonstrate that a purely data driven approach can learn approximate solutions
to problems that are computationally intractable.
2
Models
We review the sequence-to-sequence [1] and input-attention models [5] that are the baselines for this
work in Sections 2.1 and 2.2. We then describe our model - Ptr-Net in Section 2.3.
2.1
Sequence-to-Sequence Model
Given a training pair, (P, C P ), the sequence-to-sequence model computes the conditional probability p(C P |P; ?) using a parametric model (an RNN with parameters ?) to estimate the terms of the
probability chain rule (also see Figure 1), i.e.
m(P)
p(C P |P; ?) =
Y
p(Ci |C1 , . . . , Ci?1 , P; ?).
i=1
2
(1)
Here P = {P1 , . . . , Pn } is a sequence of n vectors and C P = {C1 , . . . , Cm(P) } is a sequence of
m(P) indices, each between 1 and n (we note that the target sequence length m(P) is, in general, a
function of P).
The parameters of the model are learnt by maximizing the conditional probabilities for the training
set, i.e.
X
?? = arg max
log p(C P |P; ?),
(2)
?
P,C P
where the sum is over training examples.
As in [1], we use an Long Short Term Memory (LSTM) [12] to model p(Ci |C1 , . . . , Ci?1 , P; ?).
The RNN is fed Pi at each time step, i, until the end of the input sequence is reached, at which time
a special symbol, ? is input to the model. The model then switches to the generation mode until
the network encounters the special symbol ?, which represents termination of the output sequence.
Note that this model makes no statistical independence assumptions. We use two separate RNNs
(one to encode the sequence of vectors Pj , and another one to produce or decode the output symbols
Ci ). We call the former RNN the encoder and the latter the decoder or the generative RNN.
During inference, given a sequence P, the learnt parameters ?? are used to select the sequence
C?P with the highest probability, i.e., C?P = arg max p(C P |P; ?? ). Finding the optimal sequence C?
CP
is computationally impractical because of the combinatorial number of possible output sequences.
Instead we use a beam search procedure to find the best possible sequence given a beam size.
In this sequence-to-sequence model, the output dictionary size for all symbols Ci is fixed and equal
to n, since the outputs are chosen from the input. Thus, we need to train a separate model for each
n. This prevents us from learning solutions to problems that have an output dictionary with a size
that depends on the input sequence length.
Under the assumption that the number of outputs is O(n) this model has computational complexity
of O(n). However, exact algorithms for the problems we are dealing with are more costly. For example, the convex hull problem has complexity O(n log n). The attention mechanism (see Section 2.2)
adds more ?computational capacity? to this model.
2.2
Content Based Input Attention
The vanilla sequence-to-sequence model produces the entire output sequence C P using the fixed
dimensional state of the recognition RNN at the end of the input sequence P. This constrains
the amount of information and computation that can flow through to the generative model. The
attention model of [5] ameliorates this problem by augmenting the encoder and decoder RNNs with
an additional neural network that uses an attention mechanism over the entire sequence of encoder
RNN states.
For notation purposes, let us define the encoder and decoder hidden states as (e1 , . . . , en ) and
(d1 , . . . , dm(P) ), respectively. For the LSTM RNNs, we use the state after the output gate has
been component-wise multiplied by the cell activations. We compute the attention vector at each
output time i as follows:
uij
= v T tanh(W1 ej + W2 di ) j ? (1, . . . , n)
aij
= softmax(uij )
n
X
=
aij ej
d0i
j ? (1, . . . , n)
(3)
j=1
where softmax normalizes the vector ui (of length n) to be the ?attention? mask over the inputs,
and v, W1 , and W2 are learnable parameters of the model. In all our experiments, we use the same
hidden dimensionality at the encoder and decoder (typically 512), so v is a vector and W1 and W2
are square matrices. Lastly, d0i and di are concatenated and used as the hidden states from which we
make predictions and which we feed to the next time step in the recurrent model.
Note that for each output we have to perform n operations, so the computational complexity at
inference time becomes O(n2 ).
3
This model performs significantly better than the sequence-to-sequence model on the convex hull
problem, but it is not applicable to problems where the output dictionary size depends on the input.
Nevertheless, a very simple extension (or rather reduction) of the model allows us to do this easily.
2.3
Ptr-Net
We now describe a very simple modification of the attention model that allows us to apply the
method to solve combinatorial optimization problems where the output dictionary size depends on
the number of elements in the input sequence.
The sequence-to-sequence model of Section 2.1 uses a softmax distribution over a fixed sized output
dictionary to compute p(Ci |C1 , . . . , Ci?1 , P) in Equation 1. Thus it cannot be used for our problems
where the size of the output dictionary is equal to the length of the input sequence. To solve this
problem we model p(Ci |C1 , . . . , Ci?1 , P) using the attention mechanism of Equation 3 as follows:
uij
p(Ci |C1 , . . . , Ci?1 , P)
= v T tanh(W1 ej + W2 di ) j ? (1, . . . , n)
= softmax(ui )
where softmax normalizes the vector ui (of length n) to be an output distribution over the dictionary
of inputs, and v, W1 , and W2 are learnable parameters of the output model. Here, we do not blend
the encoder state ej to propagate extra information to the decoder, but instead, use uij as pointers
to the input elements. In a similar way, to condition on Ci?1 as in Equation 1, we simply copy
the corresponding PCi?1 as the input. Both our method and the attention model can be seen as an
application of content-based attention mechanisms proposed in [6, 5, 2, 7].
We also note that our approach specifically targets problems whose outputs are discrete and correspond to positions in the input. Such problems may be addressed artificially ? for example we could
learn to output the coordinates of the target point directly using an RNN. However, at inference,
this solution does not respect the constraint that the outputs map back to the inputs exactly. Without the constraints, the predictions are bound to become blurry over longer sequences as shown in
sequence-to-sequence models for videos [13].
3
Motivation and Datasets Structure
In the following sections, we review each of the three problems we considered, as well as our data
generation protocol.1
In the training data, the inputs are planar point sets P = {P1 , . . . , Pn } with n elements each, where
Pj = (xj , yj ) are the cartesian coordinates of the points over which we find the convex hull, the Delaunay triangulation or the solution to the corresponding Travelling Salesman Problem. In all cases,
we sample from a uniform distribution in [0, 1] ? [0, 1]. The outputs C P = {C1 , . . . , Cm(P) } are
sequences representing the solution associated to the point set P. In Figure 2, we find an illustration
of an input/output pair (P, C P ) for the convex hull and the Delaunay problems.
3.1
Convex Hull
We used this example as a baseline to develop our models and to understand the difficulty of solving
combinatorial problems with data driven approaches. Finding the convex hull of a finite number
of points is a well understood task in computational geometry, and there are several exact solutions
available (see [14, 15, 16]). In general, finding the (generally unique) solution has complexity
O(n log n), where n is the number of points considered.
The vectors Pj are uniformly sampled from [0, 1] ? [0, 1]. The elements Ci are indices between 1
and n corresponding to positions in the sequence P, or special tokens representing beginning or end
of sequence. See Figure 2 (a) for an illustration. To represent the output as a sequence, we start
from the point with the lowest index, and go counter-clockwise ? this is an arbitrary choice but helps
reducing ambiguities during training.
1
We will release all the datasets at hidden for reference.
4
P5
P1
P3
P4
P2
(a) Input P = {P1 , . . . , P10 }, and the output sequence C P = {?, 2, 4, 3, 5, 6, 7, 2, ?} representing its convex hull.
(b) Input P = {P1 , . . . , P5 }, and the output C P =
{?, (1, 2, 4), (1, 4, 5), (1, 3, 5), (1, 2, 3), ?} representing its Delaunay Triangulation.
Figure 2: Input/output representation for (a) convex hull and (b) Delaunay triangulation. The tokens
? and ? represent beginning and end of sequence, respectively.
3.2
Delaunay Triangulation
A Delaunay triangulation for a set P of points in a plane is a triangulation such that each circumcircle
of every triangle is empty, that is, there is no point from P in its interior. Exact O(n log n) solutions
are available [17], where n is the number of points in P.
In this example, the outputs C P = {C1 , . . . , Cm(P) } are the corresponding sequences representing
the triangulation of the point set P . Each Ci is a triple of integers from 1 to n corresponding to the
position of triangle vertices in P or the beginning/end of sequence tokens. See Figure 2 (b).
We note that any permutation of the sequence C P represents the same triangulation for P, additionally each triangle representation Ci of three integers can also be permuted. Without loss of
generality, and similarly to what we did for convex hulls at training time, we order the triangles Ci
by their incenter coordinates (lexicographic order) and choose the increasing triangle representation2 . Without ordering, the models learned were not as good, and finding a better ordering that the
Ptr-Net could better exploit is part of future work.
3.3
Travelling Salesman Problem (TSP)
TSP arises in many areas of theoretical computer science and is an important algorithm used for
microchip design or DNA sequencing. In our work we focused on the planar symmetric TSP: given
a list of cities, we wish to find the shortest possible route that visits each city exactly once and
returns to the starting point. Additionally, we assume the distance between two cities is the same
in each opposite direction. This is an NP-hard problem which allows us to test the capabilities and
limitations of our model.
The input/output pairs (P, C P ) have a similar format as in the Convex Hull problem described in
Section 3.1. P will be the cartesian coordinates representing the cities, which are chosen randomly
in the [0, 1] ? [0, 1] square. C P = {C1 , . . . , Cn } will be a permutation of integers from 1 to n
representing the optimal path (or tour). For consistency, in the training dataset, we always start in
the first city without loss of generality.
To generate exact data, we implemented the Held-Karp algorithm [18] which finds the optimal
solution in O(2n n2 ) (we used it up to n = 20). For larger n, producing exact solutions is extremely
costly, therefore we also considered algorithms that produce approximated solutions: A1 [19] and
A2 [20], which are both O(n2 ), and A3 [21] which implements the O(n3 ) Christofides algorithm.
The latter algorithm is guaranteed to find a solution within a factor of 1.5 from the optimal length.
Table 2 shows how they performed in our test sets.
2
We choose Ci = (1, 2, 4) instead of (2,4,1) or any other permutation.
5
4
4.1
Empirical Results
Architecture and Hyperparameters
No extensive architecture or hyperparameter search of the Ptr-Net was done in the work presented
here, and we used virtually the same architecture throughout all the experiments and datasets. Even
though there are likely some gains to be obtained by tuning the model, we felt that having the same
model hyperparameters operate on all the problems makes the main message of the paper stronger.
As a result, all our models used a single layer LSTM with either 256 or 512 hidden units, trained with
stochastic gradient descent with a learning rate of 1.0, batch size of 128, random uniform weight
initialization from -0.08 to 0.08, and L2 gradient clipping of 2.0. We generated 1M training example
pairs, and we did observe overfitting in some cases where the task was simpler (i.e., for small n).
Training generally converged after 10 to 20 epochs.
4.2
Convex Hull
We used the convex hull as the guiding task which allowed us to understand the deficiencies of
standard models such as the sequence-to-sequence approach, and also setting up our expectations
on what a purely data driven model would be able to achieve with respect to an exact solution.
We reported two metrics: accuracy, and area covered of the true convex hull (note that any simple
polygon will have full intersection with the true convex hull). To compute the accuracy, we considered two output sequences C 1 and C 2 to be the same if they represent the same polygon. For
simplicity, we only computed the area coverage for the test examples in which the output represents
a simple polygon (i.e., without self-intersections). If an algorithm fails to produce a simple polygon
in more than 1% of the cases, we simply reported FAIL.
The results are presented in Table 1. We note that the area coverage achieved with the Ptr-Net is
close to 100%. Looking at examples of mistakes, we see that most problems come from points that
are aligned (see Figure 3 (d) for a mistake for n = 500) ? this is a common source of errors in most
algorithms to solve the convex hull.
It was seen that the order in which the inputs are presented to the encoder during inference affects
its performance. When the points on the true convex hull are seen ?late? in the input sequence, the
accuracy is lower. This is possibly the network does not have enough processing steps to ?update?
the convex hull it computed until the latest points were seen. In order to overcome this problem,
we used the attention mechanism described in Section 2.2, which allows the decoder to look at
the whole input at any time. This modification boosted the model performance significantly. We
inspected what attention was focusing on, and we observed that it was ?pointing? at the correct
answer on the input side. This inspired us to create the Ptr-Net model described in Section 2.3.
More than outperforming both the LSTM and the LSTM with attention, our model has the key
advantage of being inherently variable length. The bottom half of Table 1 shows that, when training
our model on a variety of lengths ranging from 5 to 50 (uniformly sampled, as we found other forms
of curriculum learning to not be effective), a single model is able to perform quite well on all lengths
it has been trained on (but some degradation for n = 50 can be observed w.r.t. the model trained only
on length 50 instances). More impressive is the fact that the model does extrapolate to lengths that it
has never seen during training. Even for n = 500, our results are satisfactory and indirectly indicate
that the model has learned more than a simple lookup. Neither LSTM or LSTM with attention can
be used for any given n0 6= n without training a new model on n0 .
4.3
Delaunay Triangulation
The Delaunay Triangulation test case is connected to our first problem of finding the convex hull. In
fact, the Delaunay Triangulation for a given set of points triangulates the convex hull of these points.
We reported two metrics: accuracy and triangle coverage in percentage (the percentage of triangles
the model predicted correctly). Note that, in this case, for an input point set P, the output sequence
C(P) is, in fact, a set. As a consequence, any permutation of its elements will represent the same
triangulation.
6
Table 1: Comparison between LSTM, LSTM with attention, and our Ptr-Net model on the convex
hull problem. Note that the baselines must be trained on the same n that they are tested on. 5-50
means the dataset had a uniform distribution over lengths from 5 to 50.
M ETHOD
LSTM [1]
+ ATTENTION [5]
P TR -N ET
LSTM [1]
P TR -N ET
LSTM [1]
P TR -N ET
P TR -N ET
P TR -N ET
P TR -N ET
P TR -N ET
Ground Truth
Predictions
(a) LSTM, m=50, n=50
Ground Truth
Predictions
(d) Ptr-Net, m=5-50, n=500
TRAINED
n
50
50
50
5
5-50
10
5-50
5-50
5-50
5-50
5-50
n
ACCURACY
A REA
50
50
50
5
5
10
10
50
100
200
500
1.9%
38.9%
72.6%
87.7%
92.0%
29.9%
87.0%
69.6%
50.3%
22.1%
1.3%
FAIL
99.7%
99.9%
99.6%
99.6%
FAIL
99.8%
99.9%
99.9%
99.9%
99.2%
Ground Truth
Ground Truth: tour length is 3.518
(b) Truth, n=50
(c) Truth, n=20
Predictions
Predictions: tour length is 3.523
(e) Ptr-Net , m=50, n=50
(f) Ptr-Net , m=5-20, n=20
Figure 3: Examples of our model on Convex hulls (left), Delaunay (center) and TSP (right), trained
on m points, and tested on n points. A failure of the LSTM sequence-to-sequence model for Convex
hulls is shown in (a). Note that the baselines cannot be applied to a different length from training.
Using the Ptr-Net model for n = 5, we obtained an accuracy of 80.7% and triangle coverage of
93.0%. For n = 10, the accuracy was 22.6% and the triangle coverage 81.3%. For n = 50, we
did not produce any precisely correct triangulation, but obtained 52.8% triangle coverage. See the
middle column of Figure 3 for an example for n = 50.
4.4
Travelling Salesman Problem
We considered the planar symmetric travelling salesman problem (TSP), which is NP-hard as the
third problem. Similarly to finding convex hulls, it also has sequential outputs. Given that the PtrNet implements an O(n2 ) algorithm, it was unclear if it would have enough capacity to learn a
useful algorithm solely from data.
As discussed in Section 3.3, it is feasible to generate exact solutions for relatively small values
of n to be used as training data. For larger n, due to the importance of TSP, good and efficient
algorithms providing reasonable approximate solutions exist. We used three different algorithms in
our experiments ? A1, A2, and A3 (see Section 3.3 for references).
7
Table 2: Tour length of the Ptr-Net and a collection of algorithms on a small scale TSP problem.
n
5
10
50 (A1 TRAINED )
50 (A3 TRAINED )
5 (5-20 TRAINED )
10 (5-20 TRAINED )
20 (5-20 TRAINED )
25 (5-20 TRAINED )
30 (5-20 TRAINED )
40 (5-20 TRAINED )
50 (5-20 TRAINED )
O PTIMAL
A1
A2
A3
P TR -N ET
2.12
2.87
N/A
N/A
2.12
2.87
3.83
N/A
N/A
N/A
N/A
2.18
3.07
6.46
6.46
2.18
3.07
4.24
4.71
5.11
5.82
6.46
2.12
2.87
5.84
5.84
2.12
2.87
3.86
4.27
4.63
5.27
5.84
2.12
2.87
5.79
5.79
2.12
2.87
3.85
4.24
4.60
5.23
5.79
2.12
2.88
6.42
6.09
2.12
2.87
3.88
4.30
4.72
5.91
7.66
Table 2 shows all of our results on TSP. The number reported is the length of the proposed tour.
Unlike the convex hull and Delaunay triangulation cases, where the decoder was unconstrained, in
this example we set the beam search procedure to only consider valid tours. Otherwise, the Ptr-Net
model would sometimes output an invalid tour ? for instance, it would repeat two cities or decided
to ignore a destination. This procedure was relevant for n > 20: for n ? 20, the unconstrained
decoding failed less than 1% of the cases, and thus was not necessary. For 30, which goes beyond
the longest sequence seen in training, failure rate went up to 35%, and for 40, it went up to 98%.
The first group of rows in the table show the Ptr-Net trained on optimal data, except for n = 50,
since that is not feasible computationally (we trained a separate model for each n). Interestingly,
when using the worst algorithm (A1) data to train the Ptr-Net, our model outperforms the algorithm
that is trying to imitate.
The second group of rows in the table show how the Ptr-Net trained on optimal data with 5 to
20 cities can generalize beyond that. The results are virtually perfect for n = 25, and good for
n = 30, but it seems to break for 40 and beyond (still, the results are far better than chance). This
contrasts with the convex hull case, where we were able to generalize by a factor of 10. However,
the underlying algorithms have greater complexity than O(n log n), which could explain this.
5
Conclusions
In this paper we described Ptr-Net, a new architecture that allows us to learn a conditional probability of one sequence C P given another sequence P, where C P is a sequence of discrete tokens
corresponding to positions in P. We show that Ptr-Nets can be used to learn solutions to three different combinatorial optimization problems. Our method works on variable sized inputs (yielding
variable sized output dictionaries), something the baseline models (sequence-to-sequence with or
without attention) cannot do directly. Even more impressively, they outperform the baselines on
fixed input size problems - to which both the models can be applied.
Previous methods such as RNNSearch, Memory Networks and Neural Turing Machines [5, 6? ]
have used attention mechanisms to process inputs. However these methods do not directly address
problems that arise with variable output dictionaries. We have shown that an attention mechanism
can be applied to the output to solve such problems. In so doing, we have opened up a new class
of problems to which neural networks can be applied without artificial assumptions. In this paper,
we have applied this extension to RNNSearch, but the methods are equally applicable to Memory
Networks and Neural Turing Machines.
Future work will try and show its applicability to other problems such as sorting where the outputs
are chosen from the inputs. We are also excited about the possibility of using this approach to other
combinatorial optimization problems.
Acknowledgments
We would like to thank Rafal Jozefowicz, Ilya Sutskever, Quoc Le and Samy Bengio for useful
discussions. We would also like to thank Daniel Gillick for his help with the final manuscript.
8
References
[1] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural
networks. In Advances in Neural Information Processing Systems, pages 3104?3112, 2014.
[2] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint
arXiv:1410.5401, 2014.
[3] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
[4] Anthony J Robinson. An application of recurrent nets to phone probability estimation. Neural
Networks, IEEE Transactions on, 5(2):298?305, 1994.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by
jointly learning to align and translate. In ICLR 2015, arXiv preprint arXiv:1409.0473, 2014.
[6] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLEAR 2015, arXiv
preprint arXiv:1410.3916, 2014.
[7] Alex Graves. Generating sequences with recurrent neural networks.
arXiv:1308.0850, 2013.
arXiv preprint
[8] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton.
Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014.
[9] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural
image caption generator. In CVPR 2015, arXiv preprint arXiv:1411.4555, 2014.
[10] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for
visual recognition and description. In CVPR 2015, arXiv preprint arXiv:1411.4389, 2014.
[11] Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615,
2014.
[12] Sepp Hochreiter and J?urgen Schmidhuber. Long short-term memory. Neural computation,
9(8):1735?1780, 1997.
[13] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of
video representations using lstms. In ICML 2015, arXiv preprint arXiv:1502.04681, 2015.
[14] Ray A Jarvis. On the identification of the convex hull of a finite set of points in the plane.
Information Processing Letters, 2(1):18?21, 1973.
[15] Ronald L. Graham. An efficient algorith for determining the convex hull of a finite planar set.
Information processing letters, 1(4):132?133, 1972.
[16] Franco P. Preparata and Se June Hong. Convex hulls of finite sets of points in two and three
dimensions. Communications of the ACM, 20(2):87?93, 1977.
[17] S1 Rebay. Efficient unstructured mesh generation by means of delaunay triangulation and
bowyer-watson algorithm. Journal of computational physics, 106(1):125?138, 1993.
[18] Richard Bellman. Dynamic programming treatment of the travelling salesman problem. Journal of the ACM (JACM), 9(1):61?63, 1962.
[19] Suboptimal travelling salesman
https://github.com/dmishin/tsp-solver.
problem
(tsp)
solver.
[20] Traveling
salesman
problem
c++
implementation.
https://github.com/samlbest/traveling-salesman.
Available
at
Available
at
[21] C++ implementation of traveling salesman problem using christofides and 2-opt. Available at
https://github.com/beckysag/traveling-salesman.
9
| 5866 |@word middle:1 stronger:1 seems:1 termination:1 propagate:1 excited:1 tr:8 reduction:1 daniel:1 document:1 interestingly:1 outperforms:1 guadarrama:1 contextual:1 com:3 anne:1 activation:1 must:1 parsing:1 mesh:1 ronald:2 update:1 n0:2 alone:1 generative:2 half:1 imitate:1 ivo:1 plane:2 beginning:3 core:1 short:2 pointer:10 simpler:1 become:1 microchip:1 ray:1 introduce:1 mask:1 p1:5 elman:1 brain:2 bellman:1 inspired:1 salakhutdinov:1 solver:3 increasing:1 becomes:1 notation:1 underlying:1 lowest:1 what:3 cm:3 finding:7 impractical:1 berkeley:1 every:1 zaremba:1 exactly:2 mansimov:1 unit:2 wayne:1 producing:1 danihelka:1 understood:1 mistake:2 consequence:1 encoding:1 path:1 solely:1 koo:1 rnns:5 initialization:1 challenging:1 limited:1 decided:1 unique:1 acknowledgment:1 yj:1 implement:2 differs:1 procedure:3 area:4 empirical:1 rnn:12 significantly:2 cannot:5 interior:1 close:1 context:1 map:3 center:1 maximizing:1 go:2 attention:28 starting:1 latest:1 convex:30 focused:1 williams:1 sepp:1 simplicity:1 unstructured:1 rule:2 d1:1 his:1 embedding:2 coordinate:4 target:4 inspected:1 decode:1 exact:7 caption:1 programming:1 us:3 samy:2 jaitly:1 element:7 rumelhart:1 recognition:2 approximated:1 observed:2 bottom:1 p5:2 preprint:9 worst:1 connected:1 went:2 ordering:2 counter:1 removed:1 highest:1 complexity:5 constrains:1 ui:3 dynamic:1 existent:1 trained:20 solving:1 purely:3 triangle:10 easily:1 incenter:1 various:1 polygon:4 train:2 distinct:1 effective:2 describe:2 artificial:1 tell:1 pci:1 d0i:2 whose:1 larger:2 solve:4 quite:1 cvpr:2 otherwise:1 encoder:8 grammar:1 jointly:1 tsp:12 christofides:2 final:1 sequence:80 advantage:1 net:29 propose:1 meire:1 p4:1 jarvis:1 aligned:1 relevant:1 translate:1 achieve:1 description:1 sutskever:4 empty:1 darrell:1 captioning:1 produce:7 generating:3 perfect:1 help:2 recurrent:5 develop:1 augmenting:1 propagating:1 p2:1 solves:1 implemented:1 coverage:6 predicted:1 come:1 indicate:1 direction:1 ptimal:1 correct:2 hull:30 stochastic:1 exploration:1 opened:1 require:1 opt:1 extension:2 considered:5 ground:4 algorithmic:1 pointing:1 dictionary:15 a2:3 purpose:1 estimation:1 ruslan:1 applicable:2 combinatorial:8 tanh:2 create:3 city:7 hope:1 lexicographic:1 always:1 rather:1 pn:2 ej:4 boosted:1 broader:1 karp:1 encode:1 release:1 june:1 longest:1 sequencing:1 contrast:1 baseline:6 inference:5 foreign:1 entire:2 typically:1 hidden:6 uij:4 arg:2 priori:1 rnnsearch:2 development:1 art:1 softmax:7 special:3 uc:1 urgen:1 equal:4 once:1 never:1 having:1 represents:3 look:1 unsupervised:1 icml:1 future:2 np:2 report:1 yoshua:1 preparata:1 richard:1 randomly:1 geometry:2 attempt:1 message:1 possibility:1 yielding:1 held:1 chain:2 encourage:1 necessary:1 desired:1 theoretical:1 instance:2 column:1 clipping:1 applicability:1 vertex:1 tour:7 uniform:3 sumit:1 reported:4 answer:1 learnt:3 cho:1 fundamental:1 lstm:14 destination:1 physic:1 decoding:1 ilya:4 w1:5 ambiguity:1 rafal:1 choose:2 possibly:2 lukasz:1 return:1 wojciech:1 lookup:1 kate:1 depends:5 performed:1 break:1 try:1 jason:1 doing:1 reached:1 competitive:1 start:2 capability:1 contribution:2 purple:2 square:2 accuracy:7 greg:1 convolutional:1 correspond:1 generalize:4 identification:1 venugopalan:1 converged:1 explain:1 trevor:1 failure:2 petrov:1 nonetheless:1 dm:1 associated:1 di:3 sampled:2 gain:1 dataset:2 treatment:1 dimensionality:3 back:1 focusing:1 feed:1 manuscript:1 planar:8 execute:2 done:1 though:1 generality:2 lastly:1 until:3 traveling:4 lstms:1 propagation:1 google:2 mode:1 
name:1 true:3 former:1 kyunghyun:1 symmetric:3 satisfactory:2 deal:1 during:5 self:1 ptr:24 hong:1 trying:1 demonstrate:1 performs:1 cp:1 image:2 wise:1 ranging:1 recently:2 common:1 permuted:1 belong:1 discussed:1 jozefowicz:1 tuning:1 vanilla:1 trivially:1 mathematics:1 similarly:2 consistency:1 unconstrained:2 language:2 had:1 longer:1 impressive:1 add:1 delaunay:14 something:1 align:1 sergio:1 triangulation:17 driven:4 phone:1 schmidhuber:1 route:1 outperforming:1 watson:1 seen:6 p10:1 additional:1 greater:1 paradigm:1 shortest:1 clockwise:1 full:1 technical:1 long:3 e1:1 visit:1 equally:1 a1:5 ameliorates:1 involving:1 prediction:6 expectation:1 navdeep:1 metric:2 arxiv:18 represent:4 sometimes:1 achieved:1 ion:1 c1:9 beam:3 cell:1 rea:1 hochreiter:1 addressed:2 source:1 extra:2 w2:5 operate:1 unlike:1 virtually:2 bahdanau:2 member:1 flow:1 call:3 integer:3 chopra:1 bengio:3 enough:2 switch:1 independence:1 xj:1 affect:1 variety:1 architecture:9 opposite:1 suboptimal:1 cn:1 generally:2 useful:2 covered:1 se:1 amount:1 dna:1 generate:3 http:3 outperform:1 percentage:2 exist:1 correctly:1 blue:2 discrete:4 hyperparameter:1 group:2 key:1 nevertheless:1 achieving:1 pj:3 neither:1 convert:1 sum:1 turing:4 letter:2 throughout:1 reasonable:1 p3:1 graham:1 bowyer:1 bound:1 layer:1 guaranteed:1 constraint:4 deficiency:1 precisely:1 alex:2 n3:1 gillick:1 felt:1 toshev:1 nitish:1 extremely:1 franco:1 format:1 relatively:1 department:1 slav:1 subhashini:1 modification:2 s1:1 quoc:2 computationally:3 equation:3 mechanism:12 fail:3 fed:2 end:5 travelling:8 salesman:12 available:6 generalizes:1 operation:1 multiplied:1 apply:4 observe:1 indirectly:1 blurry:1 batch:1 encounter:1 gate:1 exploit:1 concatenated:1 blend:2 parametric:1 costly:2 kaiser:1 antoine:1 unclear:1 gradient:2 iclr:1 distance:1 attentional:1 separate:3 thank:2 capacity:2 decoder:9 trivial:1 dzmitry:1 marcus:1 length:22 code:2 index:3 illustration:2 providing:1 fortunato:1 design:1 ethod:1 implementation:2 perform:2 datasets:3 finite:4 descent:1 hinton:2 looking:1 communication:1 frame:1 arbitrary:1 introduced:1 david:1 pair:4 extensive:1 learned:3 robinson:1 address:2 beyond:4 able:3 hendricks:1 program:1 max:2 memory:5 video:3 terry:1 natural:1 difficulty:1 curriculum:1 representing:8 improve:1 github:3 review:2 geometric:1 l2:1 epoch:1 determining:1 graf:2 loss:2 permutation:4 generation:3 limitation:2 impressively:1 geoffrey:2 triple:1 generator:1 pi:1 bordes:1 translation:2 normalizes:2 row:2 token:5 repeat:1 copy:1 aij:2 side:1 allow:1 understand:2 lisa:1 overcome:1 dimension:1 valid:1 computes:1 made:1 collection:1 far:1 erhan:1 transaction:1 approximate:5 ignore:1 dealing:1 overfitting:1 search:3 decade:1 table:8 additionally:2 learn:7 inherently:1 artificially:1 anthony:1 domain:1 protocol:1 did:3 main:2 motivation:1 whole:1 hyperparameters:2 arise:1 n2:4 allowed:1 augmented:1 en:1 fails:1 position:5 guiding:1 wish:1 late:1 third:1 learns:1 donahue:1 dumitru:1 symbol:4 learnable:2 list:1 a3:4 intractable:1 sequential:1 modulates:1 ci:18 importance:1 cartesian:2 dtic:1 sorting:2 depicted:1 intersection:2 simply:2 likely:1 jacm:1 rohrbach:1 visual:1 failed:1 vinyals:4 prevents:1 srivastava:1 truth:6 chance:1 acm:2 weston:1 conditional:4 sized:4 invalid:1 jeff:1 content:4 hard:2 feasible:2 specifically:1 except:1 uniformly:2 reducing:1 degradation:1 saenko:1 select:2 internal:1 latter:2 arises:1 alexander:1 oriol:4 tested:2 extrapolate:1 |
5,376 | 5,867 | Precision-Recall-Gain Curves:
PR Analysis Done Right
Meelis Kull
Intelligent Systems Laboratory
University of Bristol, United Kingdom
[email protected]
Peter A. Flach
Intelligent Systems Laboratory
University of Bristol, United Kingdom
[email protected]
Abstract
Precision-Recall analysis abounds in applications of binary classification where
true negatives do not add value and hence should not affect assessment of the
classifier?s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracybased performance assessment, many researchers have taken to report PrecisionRecall (PR) curves and associated areas as performance metric. We demonstrate
in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions ? e.g., the area under a PR curve takes the arithmetic
mean of precision values whereas the F? score applies the harmonic mean. We
show how to fix this by plotting PR curves in a different coordinate system, and
demonstrate that the new Precision-Recall-Gain curves inherit all key advantages
of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a PrecisionRecall-Gain curve allows us to calibrate the classifier?s scores so as to determine,
for each operating point on the convex hull, the interval of ? values for which the
point optimises F? . We demonstrate experimentally that the area under traditional
PR curves can easily favour models with lower expected F1 score than others, and
so the use of Precision-Recall-Gain curves will result in better model selection.
1
Introduction and Motivation
In machine learning and related areas we often need to optimise multiple performance measures,
such as per-class classification accuracies, precision and recall in information retrieval, etc. We then
have the option to fix a particular way to trade off these performance measures: e.g., we can use
overall classification accuracy which gives equal weight to correctly classified instances regardless
of their class; or we can use the F1 score which takes the harmonic mean of precision and recall.
However, multi-objective optimisation suggests that to delay fixing a trade-off for as long as possible
has practical benefits, such as the ability to adapt a model or set of models to changing operating
contexts. The latter is essentially what receiver operating characteristic (ROC) curves do for binary classification. In an ROC plot we plot true positive rate (the proportion of correctly classified
positives, also denoted tpr) on the y-axis against false positive rate (the proportion of incorrectly
classified negatives, also denoted fpr) on the x-axis. A categorical classifier evaluated on a test set
gives rise to a single ROC point, while a classifier which outputs scores (henceforth called a model)
can generate a set of points (commonly referred to as the ROC curve) by varying the decision threshold (Figure 1 (left)).
ROC curves are widely used in machine learning and their main properties are well understood [3].
These properties can be summarised as follows.
1
1
0.9
0.8
0.8
0.7
0.7
0.6
0.6
Precision
True positive rate
1
0.9
0.5
0.4
0.5
0.4
0.3
0.3
0.2
0.2
0.1
0.1
0
0
0.2
0.4
0.6
False positive rate
0.8
0
0
1
0.2
0.4
0.6
0.8
1
Recall
Figure 1: (left) ROC curve with non-dominated points (red circles) and convex hull (red dotted line).
(right) Corresponding Precision-Recall curve with non-dominated points (red circles).
Universal baselines: the major diagonal of an ROC plot depicts the line of random performance
which can be achieved without training. More specifically, a random classifier assigning the
positive class with probability p and the negative class with probability 1 ? p has expected
true positive rate of p and true negative rate of 1 ? p, represented by the ROC point (p, p).
The upper-left (lower-right) triangle of ROC plots hence denotes better (worse) than random performance. Related baselines include the always-negative and always-positive classifier which occupy fixed points in ROC plots (the origin and the upper right-hand corner,
respectively). These baselines are universal as they don?t depend on the class distribution.
Linear interpolation: any point on a straight line between two points representing the performance
of two classifiers (or thresholds) A and B can be achieved by making a suitably biased random choice between A and B [14]. Effectively this creates an interpolated contingency
table which is a linear combination of the contingency tables of A and B, and since all
three tables involve the same numbers of positives and negatives it follows that the interpolated accuracy as well as true and false positive rates are also linear combinations of
the corresponding quantities pertaining to A and B. The slope of the connecting line determines the trade-off between the classes under which any linear combination of A and
B would yield equivalent performance. In particular, test set accuracy assuming uniform
misclassification costs is represented by accuracy isometrics with slope (1 ? ?)/?, where
? is the proportion of positives [5].
Optimality: a point D dominates another point E if D?s tpr and fpr are not worse than E?s and
at least one of them is strictly better. The set of non-dominated points ? the Pareto front
? establishes the set of classifiers or thresholds that are optimal under some trade-off between the classes. Due to linearity any interpolation between non-dominated points is both
achievable and non-dominated, giving rise to the convex hull (ROCCH) which can be easily
constructed both algorithmically and by visual inspection.
Area: the proportion of the unit square which falls under an ROC curve (AUROC) has a well-known
meaning as a ranking performance measure: it estimates the probability that a randomly
chosen positive is ranked higher by the model than a randomly chosen negative [7]. More
importantly in a classification context, there is a linear relationship between AUROC =
R1
0 tpr d fpr and the expected accuracy acc = ?tpr + (1 ? ?)(1 ? fpr) averaged over all
possible predicted positive rates rate = ?tpr + (1 ? ?)fpr which can be established by a
R
change of variable: E [acc] = 01 acc d rate = ?(1 ? ?)(2AUROC ? 1) + 1/2 [8].
Calibration: slopes of convex hull segments can be interpreted as empirical likelihood ratios associated with a particular interval of raw classifier scores. This gives rise to a non-parametric
calibration procedure which is also called isotonic regression [19] or pool adjacent violators [4] and results in a calibration map which maps each segment of ROCCH with slope r
to a calibrated score c = ?r/(?r + (1 ? ?)) [6]. Define a skew-sensitive version of accuracy
as accc , 2c?tpr + 2(1 ? c)(1 ? ?)(1 ? fpr) (i.e., standard accuracy is accc=1/2 ) then a per2
fectly calibrated classifier outputs, for every instance, the value of c for which the instance
is on the accc decision boundary.
Alternative solutions for each of these exist. For example, parametric alternatives to ROCCH calibration exist based on the logistic function, e.g. Platt scaling [13]; as do alternative ways to aggregate
classification performance across different operating points, e.g. the Brier score [8]. However, the
power of ROC analysis derives from the combination of the above desirable properties, which helps
to explain its popularity across the machine learning discipline.
This paper presents fundamental improvements in Precision-Recall analysis, inspired by ROC analysis, as follows. (i) We identify in Section 2 the problems with current practice in Precision-Recall
curves by demonstrating that they fail to satisfy each of the above properties in some respect. (ii) We
propose a principled way to remedy all these problems by means of a change of coordinates in Section 3. (iii) In particular, our improved Precision-Recall-Gain curves enclose an area that is directly
related to expected F1 score ? on a harmonic scale ? in a similar way as AUROC is related to expected accuracy. (iv) Furthermore, with Precision-Recall-Gain curves it is possible to calibrate a
model for F? in the sense that the predicted score for any instance determines the value of ? for
which the instance is on the F? decision boundary. (v) We give experimental evidence in Section 4
that this matters by demonstrating that the area under traditional Precision-Recall curves can easily
favour models with lower expected F1 score than others.
Proofs of the formal results are found in the Supplementary Material; see also http://www.cs.
bris.ac.uk/?flach/PRGcurves/.
2
Traditional Precision-Recall Analysis
Over-abundance of negative examples is a common phenomenon in many subfields of machine
learning and data mining, including information retrieval, recommender systems and social network
analysis. Indeed, most web pages are irrelevant for most queries, and most links are absent from
most networks. Classification accuracy is not a sensible evaluation measure in such situations, as it
over-values the always-negative classifier. Neither does adjusting the class imbalance through costsensitive versions of accuracy help, as this will not just downplay the benefit of true negatives but
also the cost of false positives. A good solution in this case is to ignore true negatives altogether
and use precision, defined as the proportion of true positives among the positive predictions, as
performance metric instead of false positive rate. In this context, the true positive rate is usually
renamed to recall. More formally, we define precision as prec = TP/(TP + FP) and recall as rec =
TP/(TP + FN), where TP, FP and FN denote the number of true positives, false positives and false
negatives, respectively.
Perhaps motivated by the appeal of ROC plots, many researchers have begun to produce Precision-Recall or PR plots with precision on the y-axis against recall on the x-axis. Figure 1 (right) shows the
PR curve corresponding to the ROC curve on the left. Clearly there is a one-to-one correspondence
between the two plots as both are based on the same contingency tables [2]. In particular, precision
associated with an ROC point is proportional to the angle between the line connecting the point with
the origin and the x-axis. However, this is where the similarity ends as PR plots have none of the
aforementioned desirable properties of ROC plots.
Non-universal baselines: a random classifier has precision π and hence baseline performance is a
horizontal line which depends on the class distribution. The always-positive classifier is at
the right-most end of this baseline (the always-negative classifier has undefined precision).
Non-linear interpolation: the main reason for this is that precision in a linearly interpolated contingency table is only a linear combination of the original precision values if the two classifiers have the same predicted positive rate (which is impossible if the two contingency
tables arise from different decision thresholds on the same model). [2] discusses this further and also gives an interpolation formula. More generally, it isn't meaningful to take the arithmetic average of precision values.
Non-convex Pareto front: the set of non-dominated operating points continues to be well-defined
(see the red circles in Figure 1 (right)) but in the absence of linear interpolation this set isn't
convex for PR curves, nor is it straightforward to determine by visual inspection.
Uninterpretable area: although many authors report the area under the PR curve (AUPR), it doesn't have a meaningful interpretation beyond the geometric one of expected precision when uniformly varying the recall (and even then the use of the arithmetic average cannot be justified). Furthermore, PR plots have unachievable regions at the lower right-hand side, the size of which depends on the class distribution [1].
No calibration: although some results exist regarding the relationship between calibrated scores and F1 score (more about this below), these are unrelated to the PR curve. To the best of our knowledge there is no published procedure to output scores that are calibrated for Fβ; that is, which give the value of β for which the instance is on the Fβ decision boundary.
2.1 The Fβ measure
The standard way to combine precision and recall into a single performance measure is through the F1 score [16]. It is commonly defined as the harmonic mean of precision and recall:

    F1 ≜ 2 / (1/prec + 1/rec) = 2·prec·rec / (prec + rec) = TP / (TP + (FP + FN)/2)    (1)
The last form demonstrates that the harmonic mean is natural here as it corresponds to taking the arithmetic mean of the numbers of false positives and false negatives. Another way to understand the F1 score is as the accuracy in a modified contingency table which copies the true positive count to the true negatives:

                   Actual ⊕    Actual ⊖
    Predicted ⊕    TP          FP                  TP + FP
    Predicted ⊖    FN          TP                  Pos
                   Pos         Neg − (TN − TP)     2TP + FP + FN
We can take a weighted harmonic mean, which is commonly parametrised as follows:

    Fβ ≜ 1 / ( (1/(1+β²))·(1/prec) + (β²/(1+β²))·(1/rec) ) = (1+β²)·TP / ((1+β²)·TP + FP + β²·FN)    (2)
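Both definitions follow directly from the contingency counts; a minimal Python sketch (naming ours) of equations (1) and (2):

    def f_beta(tp, fp, fn, beta=1.0):
        """F_beta from contingency counts, eq. (2); beta = 1 gives eq. (1)."""
        b2 = beta ** 2
        return (1 + b2) * tp / ((1 + b2) * tp + fp + b2 * fn)

    # check against the harmonic-mean form of eq. (1)
    tp, fp, fn = 60, 20, 40
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    assert abs(f_beta(tp, fp, fn) - 2 / (1 / prec + 1 / rec)) < 1e-12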
There is a range of recent papers studying the F-score, several of which in last year's NIPS conference [12, 9, 11]. Relevant results include the following: (i) non-decomposability of the Fβ score, meaning it is not an average over instances (it is a ratio of such averages, called a pseudo-linear function by [12]); (ii) estimators exist that are consistent: i.e., they are unbiased in the limit [9, 11]; (iii) given a model, operating points that are optimal for Fβ can be achieved by thresholding the model's scores [18]; (iv) a classifier yielding perfectly calibrated posterior probabilities has the property that the optimal threshold for F1 is half the optimal F1 at that point (first proved by [20] and later by [10], while generalised to Fβ by [9]). The latter results tell us that optimal thresholds for Fβ are lower than optimal thresholds for accuracy (or equal only in the case of the perfect model). They don't, however, tell us how to find such thresholds other than by tuning (and [12] propose a method inspired by cost-sensitive classification). The analysis in the next section significantly extends these results by demonstrating how we can identify all Fβ-optimal thresholds for any β in a single calibration procedure.
3 Precision-Recall-Gain Curves
In this section we demonstrate how Precision-Recall analysis can be adapted to inherit all the benefits
of ROC analysis. While technically straightforward, the implications of our results are far-reaching.
For example, even something as seemingly innocuous as reporting the arithmetic average of F1
values over cross-validation folds is methodologically misguided: we will define the corresponding
performance measure that can safely be averaged.
3.1 Baseline
A random classifier that predicts positive with probability p has Fβ score (1 + β²)pπ/(p + β²π). This is monotonically increasing in p ∈ [0, 1], hence reaches its maximum for p = 1, the always-positive classifier.
Figure 2: (left) Conventional PR curve with hyperbolic F1 isometrics (dotted lines) and the baseline performance of the always-positive classifier (solid hyperbola). (right) Precision-Recall-Gain curve with minor diagonal as baseline, parallel F1 isometrics and a convex Pareto front.
Hence Precision-Recall analysis differs from classification accuracy in that the baseline to beat is the always-positive classifier rather than any random classifier. This baseline has prec = π and rec = 1, and it is easily seen that any model with prec < π or rec < π loses against this baseline. Hence it makes sense to consider only precision and recall values in the interval [π, 1].
Any real-valued variable x ∈ [min, max] can be rescaled by the mapping x ↦ (x − min)/(max − min). However, the linear scale is inappropriate here and we should use a harmonic scale instead, hence map to

    (1/x − 1/min) / (1/max − 1/min) = max·(x − min) / ((max − min)·x)    (3)
Taking max = 1 and min = π we arrive at the following definition.

Definition 1 (Precision Gain and Recall Gain).

    precG = (prec − π) / ((1 − π)·prec) = 1 − (π/(1 − π))·(FP/TP)
    recG  = (rec − π) / ((1 − π)·rec)  = 1 − (π/(1 − π))·(FN/TP)    (4)
A Precision-Recall-Gain curve plots Precision Gain on the y-axis against Recall Gain on the x-axis
in the unit square (i.e., negative gains are ignored).
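A minimal sketch of this change of coordinates, given precision, recall and the proportion of positives π:

    def precision_gain(prec, pi):
        """Harmonically rescaled precision, eq. (4)."""
        return (prec - pi) / ((1 - pi) * prec)

    def recall_gain(rec, pi):
        """Harmonically rescaled recall, eq. (4)."""
        return (rec - pi) / ((1 - pi) * rec)

    # The always-positive classifier (prec = pi, rec = 1) lands at the
    # lower right-hand corner of PRG space, whatever the class distribution:
    pi = 0.3
    print(precision_gain(pi, pi), recall_gain(1.0, pi))  # 0.0 1.0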
An example PRG curve is given in Figure 2 (right). The always-positive classifier has recG = 1 and precG = 0 and hence gets plotted in the lower right-hand corner of Precision-Recall-Gain space, regardless of the class distribution. Since we show in the next section that F1 isometrics have slope −1 in this space, it follows that all classifiers with baseline F1 performance end up on the minor diagonal in Precision-Recall-Gain space. In contrast, the corresponding F1 isometric in PR space is hyperbolic (Figure 2 (left)) and its exact location depends on the class distribution.
3.2 Linearity and optimality
One of the main benefits of PRG space is that it allows linear interpolation. This manifests itself in two ways: any point on a straight line between two endpoints is achievable by random choice between the endpoints (Theorem 1), and Fβ isometrics are straight lines with slope −β² (Theorem 2).
Theorem 1. Let P1 = (precG1, recG1) and P2 = (precG2, recG2) be points in Precision-Recall-Gain space representing the performance of Models 1 and 2 with contingency tables C1 and C2. Then a model with an interpolated contingency table Cλ = λ·C1 + (1 − λ)·C2 has precision gain precGλ = γ·precG1 + (1 − γ)·precG2 and recall gain recGλ = γ·recG1 + (1 − γ)·recG2, where γ = λ·TP1 / (λ·TP1 + (1 − λ)·TP2).
Theorem 2. precG + β²·recG = (1 + β²)·FGβ, with

    FGβ = (Fβ − π) / ((1 − π)·Fβ) = 1 − (π/(1 − π)) · (FP + β²·FN) / ((1 + β²)·TP).
FGβ is a linearised version of Fβ in the same way as precG and recG are linearised versions of precision and recall. FGβ measures the gain in performance (on a linear scale) relative to a classifier with both precision and recall (and hence Fβ) equal to π. F1 isometrics are indicated in Figure 2 (right). By increasing (decreasing) β² these lines of constant Fβ become steeper (flatter), and hence we are putting more emphasis on recall (precision).
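The identity of Theorem 2 can be checked numerically in a few lines (the counts below are made up for illustration):

    def fg_beta(tp, fp, fn, pi, beta=1.0):
        """FG_beta = 1 - (pi/(1-pi)) * (FP + beta^2 FN) / ((1+beta^2) TP)."""
        b2 = beta ** 2
        return 1 - (pi / (1 - pi)) * (fp + b2 * fn) / ((1 + b2) * tp)

    tp, fp, fn, pi, beta = 60, 20, 40, 0.4, 2.0
    prec_gain = 1 - (pi / (1 - pi)) * fp / tp
    rec_gain = 1 - (pi / (1 - pi)) * fn / tp
    b2 = beta ** 2
    assert abs(prec_gain + b2 * rec_gain
               - (1 + b2) * fg_beta(tp, fp, fn, pi, beta)) < 1e-12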
With regard to optimality, we already knew that every classifier or threshold optimal for Fβ for some β² is optimal for acc_c for some c. The reverse also holds, except for the ROC convex hull points below the baseline (e.g., the always-negative classifier). Due to linearity the PRG Pareto front is convex and easily constructed by visual inspection. We will see in Section 3.4 that these segments of the PRG convex hull can be used to obtain classifier scores specifically calibrated for F-scores, thereby pre-empting the need for any more threshold tuning.
3.3 Area
Define the area under the Precision-Recall-Gain curve as AUPRG = ∫₀¹ precG d recG. We will show how this area can be related to an expected FG1 score when averaging over the operating points on the curve in a particular way. To this end we define Δ = recG/π − precG/(1 − π), which expresses the extent to which recall exceeds precision (reweighting by π and 1 − π guarantees that Δ is monotonically increasing when changing the threshold towards having more positive predictions, as shown in the proof of Theorem 3 in the Supplementary Material). Hence, −y0/(1 − π) ≤ Δ ≤ 1/π, where y0 denotes the precision gain at the operating point where recall gain is zero. The following theorem shows that if the operating points are chosen such that Δ is uniformly distributed in this range, then the expected FG1 can be calculated from the area under the Precision-Recall-Gain curve (the Supplementary Material proves a more general result for expected FGβ). This justifies the use of AUPRG as a performance metric without fixing the classifier's operating point in advance.
Theorem 3. Let the operating points of a model with area under the Precision-Recall-Gain curve AUPRG be chosen such that Δ is uniformly distributed within [−y0/(1 − π), 1/π]. Then the expected FG1 score is equal to

    E[FG1] = (AUPRG/2 + 1/4 − π(1 − y0²)/4) / (1 − π(1 − y0))    (5)

The expected reciprocal F1 score can be calculated from the relationship E[1/F1] = (1 − (1 − π)·E[FG1])/π, which follows from the definition of FGβ. In the special case where y0 = 1 the expected FG1 score is AUPRG/2 + 1/4.
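Because interpolation is linear in PRG space, AUPRG is computed exactly by the trapezoidal rule over the curve's vertices; combined with equation (5) this yields the expected FG1 score. A minimal sketch (the vertex lists are ours for illustration, assumed sorted by recall gain and clipped to the unit square):

    def auprg(recg, precg):
        """Exact area under a piecewise-linear PRG curve (trapezoidal rule)."""
        area = 0.0
        for i in range(1, len(recg)):
            area += (recg[i] - recg[i - 1]) * (precg[i] + precg[i - 1]) / 2.0
        return area

    def expected_fg1(a, pi, y0):
        """Expected FG1 under the operating-point distribution of Theorem 3."""
        return (a / 2 + 0.25 - pi * (1 - y0 ** 2) / 4) / (1 - pi * (1 - y0))

    recg, precg = [0.0, 0.5, 1.0], [0.8, 0.7, 0.3]
    a = auprg(recg, precg)                     # 0.625
    print(a, expected_fg1(a, pi=0.2, y0=0.8))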
3.4 Calibration
Figure 3 (left) shows an ROC curve with empirically calibrated posterior probabilities obtained by isotonic regression [19] or the ROC convex hull [4]. Segments of the convex hull are labelled with the value of c for which the two endpoints have the same skew-sensitive accuracy acc_c. Conversely, if a point connects two segments with c1 < c2 then that point is optimal for any c such that c1 < c < c2. The calibrated values c are derived from the ROC slope r by c = πr/(πr + (1 − π)) [6]. For example, the point on the convex hull two steps up from the origin optimises skew-sensitive accuracy acc_c for 0.29 < c < 0.75 and hence also standard accuracy (c = 1/2). We are now in a position to calculate similarly calibrated scores for F-score.
Theorem 4. Let two classifiers be such that prec1 > prec2 and rec1 < rec2; then these two classifiers have the same Fβ score if and only if

    β² = − (1/prec1 − 1/prec2) / (1/rec1 − 1/rec2)    (6)
In line with ROC calibration we convert these slopes into a calibrated score between 0 and 1:

    d = 1/(β² + 1) = (1/rec1 − 1/rec2) / ((1/rec1 − 1/rec2) − (1/prec1 − 1/prec2))    (7)
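A minimal sketch of this conversion for two adjacent operating points (naming ours):

    def f_calibrated_score(prec1, rec1, prec2, rec2):
        """Return (beta^2, d) for two classifiers with prec1 > prec2 and
        rec1 < rec2, following eqs. (6) and (7)."""
        dp = 1.0 / prec1 - 1.0 / prec2   # negative by assumption
        dr = 1.0 / rec1 - 1.0 / rec2     # positive by assumption
        beta2 = -dp / dr                 # eq. (6)
        d = dr / (dr - dp)               # eq. (7); equals 1 / (beta2 + 1)
        return beta2, d

    beta2, d = f_calibrated_score(0.9, 0.5, 0.7, 0.8)
    assert abs(d - 1.0 / (beta2 + 1.0)) < 1e-12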
It is important to note that there is no model-independent relationship between ROC-calibrated
scores and PRG-calibrated scores, so we cannot derive d from c. However, we can equip a model
with two calibration maps, one for accuracy and the other for F-score.
Figure 3: (left) ROC curve with scores empirically calibrated for accuracy. The green dots correspond to a regular grid in Precision-Recall-Gain space. (right) Precision-Recall-Gain curve with scores calibrated for Fβ. The green dots correspond to a regular grid in ROC space, clearly indicating that ROC analysis over-emphasises the high-recall region.
Figure 3 (right) shows the PRG curve for the running example with scores calibrated for Fβ. Score 0.76 corresponds to β² = (1 − 0.76)/0.76 = 0.32 and score 0.49 corresponds to β² = 1.04, so the point closest to the Precision-Recall breakeven line optimises Fβ for 0.32 < β² < 1.04 and hence also F1 (but note that the next point to the right on the convex hull is nearly as good for F1, on account of the connecting line segment having a calibrated score close to 1/2).
4 Practical examples
The key message of this paper is that precision, recall and F-score are expressed on a harmonic scale and hence any kind of arithmetic average of these quantities is methodologically wrong. We now demonstrate that this matters in practice. In particular, we show that in some sense, AUPR and AUPRG are as different from each other as AUPR and AUROC. Using the OpenML platform [17] we took all those binary classification tasks which have 10-fold cross-validated predictions using at least 30 models from different learning methods (these are called flows in OpenML). In each of the obtained 886 tasks (covering 426 different datasets) we applied the following procedure. First, we fetched the predicted scores of 30 randomly selected models from different flows and calculated areas under ROC, PRG and PR curves (with hyperbolic interpolation as recommended by [2]), with minority class as positives. We then ranked the 30 models with respect to these measures. Figure 4 plots AUPRG-rank against AUPR-rank across all 25,980 models.
Figure 4 (left) demonstrates that AUPR and AUPRG often disagree in ranking the models. In particular, they disagree on the best method in 24% of the tasks and on the top three methods in 58%
of the tasks (i.e., they agree on top, second and third method in 42% of the tasks). This amount of
disagreement is comparable to the disagreement between AUPR and AUROC (29% and 65% disagreement for top 1 and top 3, respectively) and between AUPRG and AUROC (22% and 57%).
Therefore, AUPR, AUPRG and AUROC are related quantities, but still all significantly different.
The same conclusion is supported by the pairwise correlations between the ranks across all tasks:
the correlation between AUPR-ranks and AUPRG-ranks is 0.95, between AUPR and AUROC it is
0.95, and between AUPRG and AUROC it is 0.96.
Figure 4 (right) shows AUPRG vs AUPR in two datasets with relatively low and high rank correlations (0.944 and 0.991, selected as lower and upper quartiles among all tasks). In both datasets
AUPR and AUPRG agree on the best model. However, in the white-clover dataset the second best is
AdaBoost according to AUPRG and Logistic Regression according to AUPR. As seen in Figure 5,
this disagreement is caused by AUPR taking into account the poor performance of AdaBoost in the
early part of the ranking; AUPRG ignores this part as it has negative recall gain.
Figure 4: (left) Comparison of AUPRG-ranks vs AUPR-ranks. Each cell shows how many models
across 886 OpenML tasks have these ranks among the 30 models in the same task. (right) Comparison of AUPRG vs AUPR in OpenML tasks with IDs 3872 (white-clover) and 3896 (ada-agnostic),
with 30 models in each task. Some models perform worse than random (AUPRG < 0) and are not
plotted. The models represented by the two encircled triangles are shown in detail in Figure 5.
Figure 5: (left) ROC curves for AdaBoost (solid line) and Logistic Regression (dashed line) on the
white-clover dataset (OpenML run IDs 145651 and 267741, respectively). (middle) Corresponding
PR curves. The solid curve is on average lower with AUPR = 0.724 whereas the dashed curve has
AUPR = 0.773. (right) Corresponding PRG curves, where the situation has reversed: the solid curve
has AUPRG = 0.714 while the dashed curve has a lower AUPRG of 0.687.
5 Concluding remarks
If a practitioner using PR analysis and the F-score should take one methodological recommendation from this paper, it is to use the F-Gain score instead, to make sure baselines are taken into account properly and averaging is done on the appropriate scale. If required the FGβ score can be converted back to an Fβ score at the end. The second recommendation is to use Precision-Recall-Gain curves instead of PR curves, and the third to use AUPRG, which is easier to calculate than AUPR due to linear interpolation, has a proper interpretation as an expected F-Gain score and allows performance assessment over a range of operating points. To assist practitioners we have made R, Matlab and Java code to calculate AUPRG and PRG curves available at http://www.cs.bris.ac.uk/~flach/PRGcurves/. We are also working on closer integration of AUPRG as an evaluation metric in OpenML and performance visualisation platforms such as ViperCharts [15].
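As an illustration of the first recommendation, the following sketch (the per-fold scores are invented for the example) averages F-Gain scores over cross-validation folds and converts the average back to an F1 score at the end, by inverting FGβ = (Fβ − π)/((1 − π)·Fβ):

    def f_to_fg(f, pi):
        """F score to F-Gain score."""
        return (f - pi) / ((1 - pi) * f)

    def fg_to_f(fg, pi):
        """Inverse map: F-Gain score back to an F score."""
        return pi / (1 - (1 - pi) * fg)

    fold_f1 = [0.60, 0.72, 0.66]   # hypothetical per-fold F1 scores
    pi = 0.25                      # proportion of positives
    mean_fg = sum(f_to_fg(f, pi) for f in fold_f1) / len(fold_f1)
    print(fg_to_f(mean_fg, pi))    # aggregate F1, averaged on the right scale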
As future work we mention the interpretation of AUPRG as a measure of ranking performance: we are working on an interpretation which gives non-uniform weights to the positives and as such is related to Discounted Cumulative Gain. A second line of research involves the use of cost curves for the FGβ score and associated threshold choice methods.
Acknowledgments This work was supported by the REFRAME project granted by the European Coordinated Research on Long-Term Challenges in Information and Communication Sciences & Technologies ERANet (CHIST-ERA), and funded by the Engineering and Physical Sciences Research Council in the UK under
grant EP/K018728/1. Discussions with Hendrik Blockeel helped to clarify the intuitions underlying this work.
References
[1] K. Boyd, V. S. Costa, J. Davis, and C. D. Page. Unachievable region in precision-recall space and its effect on empirical evaluation. In International Conference on Machine Learning, page 349, 2012.
[2] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, pages 233-240, 2006.
[3] T. Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861-874, 2006.
[4] T. Fawcett and A. Niculescu-Mizil. PAV and the ROC convex hull. Machine Learning, 68(1):97-106, July 2007.
[5] P. A. Flach. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), pages 194-201, 2003.
[6] P. A. Flach. ROC analysis. In C. Sammut and G. Webb, editors, Encyclopedia of Machine Learning, pages 869-875. Springer US, 2010.
[7] D. J. Hand and R. J. Till. A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2):171-186, 2001.
[8] J. Hernández-Orallo, P. Flach, and C. Ferri. A unified view of performance metrics: Translating threshold choice into expected classification loss. Journal of Machine Learning Research, 13:2813-2869, 2012.
[9] O. O. Koyejo, N. Natarajan, P. K. Ravikumar, and I. S. Dhillon. Consistent binary classification with generalized performance metrics. In Advances in Neural Information Processing Systems, pages 2744-2752, 2014.
[10] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1 measure. In Machine Learning and Knowledge Discovery in Databases, volume 8725 of Lecture Notes in Computer Science, pages 225-239. Springer Berlin Heidelberg, 2014.
[11] H. Narasimhan, R. Vaish, and S. Agarwal. On the statistical consistency of plug-in classifiers for non-decomposable performance measures. In Advances in Neural Information Processing Systems 27, pages 1493-1501. 2014.
[12] S. P. Parambath, N. Usunier, and Y. Grandvalet. Optimizing F-measures by cost-sensitive classification. In Advances in Neural Information Processing Systems, pages 2123-2131, 2014.
[13] J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61-74. MIT Press, Boston, 1999.
[14] F. Provost and T. Fawcett. Robust classification for imprecise environments. Machine Learning, 42(3):203-231, 2001.
[15] B. Sluban and N. Lavrač. ViperCharts: Visual performance evaluation platform. In H. Blockeel, K. Kersting, S. Nijssen, and F. Železný, editors, Machine Learning and Knowledge Discovery in Databases, volume 8190 of Lecture Notes in Computer Science, pages 650-653. Springer Berlin Heidelberg, 2013.
[16] C. J. Van Rijsbergen. Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition, 1979.
[17] J. Vanschoren, J. N. van Rijn, B. Bischl, and L. Torgo. OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013.
[18] N. Ye, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In Proceedings of the 29th International Conference on Machine Learning, pages 289-296, 2012.
[19] B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 609-616, 2001.
[20] M.-J. Zhao, N. Edakunni, A. Pocock, and G. Brown. Beyond Fano's inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. The Journal of Machine Learning Research, 14(1):1033-1090, 2013.
Evaluation, and Application of Active Learning
Kevin Jamieson
UC Berkeley
Lalit Jain, Chris Fernandez, Nick Glattard, Robert Nowak
University of Wisconsin - Madison
[email protected]
{ljain,crfernandez,glattard,rdnowak}@wisc.edu
Abstract
Active learning methods automatically adapt data collection by selecting the most
informative samples in order to accelerate machine learning. Because of this,
real-world testing and comparing active learning algorithms requires collecting
new datasets (adaptively), rather than simply applying algorithms to benchmark
datasets, as is the norm in (passive) machine learning research. To facilitate the
development, testing and deployment of active learning for real applications, we
have built an open-source software system for large-scale active learning research
and experimentation. The system, called NEXT, provides a unique platform for
real-world, reproducible active learning research. This paper details the challenges
of building the system and demonstrates its capabilities with several experiments.
The results show how experimentation can help expose strengths and weaknesses
of active learning algorithms, in sometimes unexpected and enlightening ways.
1 Introduction
We use the term "active learning" to refer to algorithms that employ adaptive data collection in
order to accelerate machine learning. By adaptive data collection we mean processes that automatically adjust, based on previously collected data, to collect the most useful data as quickly as
possible. This broad notion of active learning includes multi-armed bandits, adaptive data collection
in unsupervised learning (e.g. clustering, embedding, etc.), classification, regression, and sequential
experimental design. Perhaps the most familiar example of active learning arises in the context of
classification. There active learning algorithms select examples for labeling in a sequential, dataadaptive fashion, as opposed to passive learning algorithms based on preselected training data.
The key to active learning is adaptive data collection. Because of this, real-world testing and comparing active learning algorithms requires collecting new datasets (adaptively), rather than simply
applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research.
In this adaptive paradigm, algorithm and network response time, human fatigue, the differing label
quality of humans, and the lack of i.i.d. responses are all real-world concerns of implementing active
learning algorithms. Due to many of these conditions being impossible to faithfully simulate active
learning algorithms must be evaluated on real human participants.
Adaptively collecting large-scale datasets can be difficult and time-consuming. As a result, active
learning has remained a largely theoretical research area, and practical algorithms and experiments
are few and far between. Most experimental work in active learning with real-world data is simulated
by letting the algorithm adaptively select a small number of labeled examples from a large labeled
dataset. This requires a large, labeled data set to begin with, which limits the scope and scale of
such experimental work. Also, it does not address the practical issue of deploying active learning
algorithms and adaptive data collection for real applications.
To address these issues, we have built a software system called NEXT, which provides a unique
platform for real-world, large-scale, reproducible active learning research, enabling
1. machine learning researchers to easily deploy and test new active learning algorithms;
2. applied researchers to employ active learning methods for real-world applications.

Figure 1: NEXT Active Learning Data Flow. The numbered steps in the diagram are: (1) request query display from web API; (2) submit job to web-request queue; (3) asynchronous worker accepts job; (4) request routed to proper algorithm; (5) query generated, sent back to API; (6) query display returned to client; (7) answer reported to web API; (8) submit job to web-request queue; (9) asynchronous worker accepts job; (10) answer routed to proper algorithm; (11a) answer acknowledged to API; (12a) answer acknowledged to client; (11b) submit job to model-update queue; (12b) synchronous worker accepts job.

Figure 2: Example algorithm prototype that the active learning researcher implements. Each algorithm has access to the database and the ability to enqueue jobs that are executed in order, one at a time:

    def getQuery(context):
        # GET sufficient statistics stats from database
        # use stats and context to create query
        return query

    def processAnswer(query, answer):
        # SET (query, answer) in database
        # update model now, or ENQUEUE a job for later
        return 'Thanks!'

    def updateModel():
        # GET all (query, answer) pairs S
        # use S to update the model, creating stats
        # SET sufficient statistics stats in database
        pass
Many of today's state-of-the-art machine learning tools, such as kernel methods and deep learning,
have developed and witnessed wide-scale adoption in practice because of both theoretical work and
extensive experimentation, testing and evaluation on real-world datasets. Arguably, some of the
deepest insights and greatest innovations have come through experimentation. The goal of NEXT
is to enable the same sort of real-world experimentation for active learning. We anticipate this will
lead to new understandings and breakthroughs, just as it has for passive learning.
This paper details the challenges of building active learning systems and our open-source solution,
the NEXT system¹. We also demonstrate the system's capabilities for real active learning experimentation. The results show how experimentation can help expose strengths and weaknesses of
well-known active learning algorithms, in sometimes unexpected and enlightening ways.
2 What's NEXT?
At the heart of active learning is a process for sequentially and adaptively gathering data most informative to the learning task at hand as quickly as possible. At each step, an algorithm must decide
what data to collect next. The data collection itself is often from human helpers who are asked to
answer queries, label instances or inspect data. Crowdsourcing platforms such as Amazon's Mechanical Turk or Crowd Flower provide access to potentially thousands of users answering queries
on-demand. Parallel data collection at a large scale imposes design and engineering challenges
unique to active learning due to the continuous interaction between data collection and learning.
Here we describe the main features and contributions of the NEXT system.
System Functionality - A data flow diagram for NEXT is presented in Figure 1. Consider an
individual client among the crowd tasked with answering a series of classification questions. The
client interacts with NEXT through a website which requests a new query to be presented from the
NEXT web API. Tasked with potentially handling thousands of such requests simultaneously, the
API will enqueue the query request to be processed by a worker pool (workers can be thought of as
processes living on one or many machines pulling from the same queue). Once the job is accepted by
a worker, it is routed through the worker?s algorithm manager (described in the extensibility section
below) and the algorithm then chooses a query based on previously collected data and sufficient
statistics. The query is then sent back to the client through the API to be displayed.
After the client answers the query, the same initial process as above is repeated but this time the
answer is routed to the "processAnswer" endpoint of the algorithm. Since multiple users are getting
¹ The NEXT system is open source and available at https://github.com/nextml/NEXT.
queries and reporting answers at the same time, there is a potential for two different workers to
attempt to update the model, or the statistics used to generate new queries, at the same time, and potentially overwrite each other?s work. A simple way NEXT avoids this race condition is to provide
a locking queue to each algorithm so that when a worker accepts a job from this queue, the queue is
locked until that job is finished. Hence, when the answer is reported to the algorithm, the ?processAnswer? code block may either update the model asynchronously itself, or submit a ?modelUpdate?
job to this locking model update queue to process the answer later synchronously (see Section 4 for
details). After processing the answer, the worker returns an acknowledgement response to the client.
Of this data flow, NEXT handles the API, enqueueing and scheduling of jobs, and algorithm management. The researcher interested in deploying their algorithm is responsible for implementing
getQuery, processAnswer and updateModel. Figure 2 shows pseudo-code for the functions that
must be implemented for each algorithm in the NEXT system; see the supplementary materials for
an explicit example involving an active SVM classifier.
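To make the contract concrete, the following is a minimal, self-contained Python sketch of an algorithm written in this style. The method names mirror Figure 2, but the storage layer is stubbed out with a plain dictionary; the real NEXT database and job-queue APIs differ, so treat this purely as an illustration:

    import random

    class RandomDuelingAlgorithm:
        """Illustrative only: uniform-random dueling-bandit queries with
        synchronous model updates, following the getQuery / processAnswer /
        updateModel contract sketched in Figure 2."""

        def __init__(self, n_arms):
            # stand-in for the NEXT database
            self.db = {'answers': [], 'wins': [0] * n_arms, 'pulls': [0] * n_arms}
            self.n = n_arms

        def getQuery(self, context=None):
            # use sufficient statistics (none needed here) to create a query
            return tuple(random.sample(range(self.n), 2))

        def processAnswer(self, query, winner):
            # record (query, answer); a deployed algorithm could instead
            # enqueue a job on its locking model-update queue
            self.db['answers'].append((query, winner))
            self.updateModel()
            return 'Thanks!'

        def updateModel(self):
            # refresh sufficient statistics from the latest answer
            (i, j), winner = self.db['answers'][-1]
            for arm in (i, j):
                self.db['pulls'][arm] += 1
            self.db['wins'][winner] += 1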
A key challenge here is latency. A getQuery request uses the current learned model to decide what
queries to serve next. Humans will notice delays greater than roughly 400 ms. Therefore, it is
imperative that the system can receive and process a response, update the model, and select the
next query within 400 ms. Accounting for 100-200 ms in communication latency each way, the
system must perform all necessary processing within 50-100 ms. While in some applications one
can compute good queries offline and serve them as needed without further computation, other
applications, such as contextual bandits for personalized content recommendation, require that the
query depend on the context provided by the user (e.g. their cookies) and consequently, must be
computed in real time.
Realtime Computing - Research in active learning focuses on reducing the sample complexity of
the learning process (i.e., minimizing number of labeled and unlabeled examples needed to learn an
accurate model) and sometimes addresses the issue of computational complexity. In the latter case,
the focus is usually on polynomial-time algorithms, but not necessarily realtime algorithms. Practical active learning systems face a tradeoff between how frequently models are updated and how
carefully new queries are selected. If the model is updated less frequently, then time can be spent
on carefully selecting a batch of new queries. However, selecting in large batches may potentially
reduce some of the gains afforded by active learning, since later queries will be based on old, stale
information. Updating the model frequently may be possible, but then the time available for selecting queries may be very short, resulting in suboptimal selections and again potentially defeating the
aim of active learning. Managing this tradeoff is the chief responsibility of the algorithm designer,
but to make these design choices, the algorithm designer must be able to easily gauge the effects
of different algorithmic choices. In the NEXT system, the tradeoff is explicitly managed by modifying when and how often the updateModel command is run and what it does. The system helps
with making these decisions by providing extensive dashboards describing both the statistical and
computational performance of the algorithms.
Reproducible research - Publishing data and software needed to reproduce experimental results is
essential to scientific progress in all fields. Due to the adaptive nature of data collection in active
learning experiments, it is not enough to simply publish data gathered in a previous experiment. For
other researchers to recreate the experiment, they must also be able to reconstruct the exact adaptive
process that was used to collect the data. This means that the complete system, including any web
facing crowd sourcing tools, not just algorithm code and data, must be made publicly available and
easy to use. By leveraging cloud computing, NEXT abstracts away the difficulties of building a
data collection system and lets the researcher focus on active learning algorithm design. Any other
researcher can replicate an experiment with just a few keystrokes in under one hour by just using the
same experiment initialization parameters.
Expert data collection for the non expert - NEXT puts state-of-the-art active learning algorithms
in the hands of non-experts interested in collecting data in more efficient ways. This includes psychologists, social scientists, biologists, security analysts and researchers in any other field in which
large amounts of data is collected, sometimes at a large dollar cost and time expense. Choosing an
appropriate active learning algorithm is perhaps an easier step for non-experts compared to data collection. While there exist excellent tools to help researchers perform relatively simple experiments
on Mechanical Turk (e.g. PsiTurk [1] or AutoMan [2]), implementing active learning to collect data
requires building a sophisticated system like the one described in this paper. To determine the needs
of potential users, the NEXT system was built in close collaboration with cognitive scientists at our
home institution. They helped inform design decisions and provided us with participants to beta-test
the system in a real-world environment. Indeed, the examples used in this paper were motivated by
related studies developed by our collaborators in psychology.
NEXT is accessible through a REST Web API and can be easily deployed in the cloud with minimal
knowledge and expertise using automated scripts. NEXT provides researchers a set of example
templates and widgets that can be used as graphical user interfaces to collect data from participants
(see supplementary materials for examples).
Multiple Algorithms and Extensibility - NEXT provides a platform for applications and algorithms. Applications are general active learning tasks, such as linear classification, and algorithms
are particular implementations of that application (e.g., random sampling or uncertainty sampling
with a C-SVM). Experiments involve one application type but they may involve several different
algorithms, enabling the evaluation and comparison of different algorithms. The algorithm manager
in Figure 1 is responsible for routing each query and reported answer to the algorithms involved in
an experiment. For experiments involving multiple algorithms, this routing could be round-robin,
randomized, or optimized in a more sophisticated manner. For example, it is possible to implement a
multi-armed bandit algorithm inside the algorithm manager in order to select algorithms adaptively
to minimize some notion of regret.
Each application defines an algorithm management module and a contract for the three functions
of active learning: getQuery, processAnswer, and modelUpdate as described in Figure 2. Each
algorithm implemented in NEXT will gain access to a locking synchronous queue for model updates,
logging functionality, automated dashboards for performance statistics and timing, load balancing,
and graphical user interfaces for participants. To implement a new algorithm, a developer must
write the associated getQuery, processAnswer, and updateModel functions in Python (see examples
in supplementary materials); the rest is handled automatically by NEXT. We hope this ease of use
will encourage researchers to experiment with and compare new active learning algorithms. NEXT
is hosted on Github and we urge users to push their local application and algorithm implementations
to the repository.
3 Example Applications
NEXT is capable of hosting any active (or passive) learning application. To demonstrate the capabilities of the system, we look at two applications motivated by cognitive science studies. The
collected raw data along with instructions to easily reproduce these examples, which can be used as
templates to extend, are available on the NEXT project page.
3.1 Pure exploration for dueling bandits
The first experiment type we consider is a pure-exploration problem in the dueling bandits framework [3], based on the New Yorker Caption Contest². Each week New Yorker readers are invited to
submit captions for a cartoon, and a winner is picked from among these entries. We used a dataset
from the contest for our experiments. Participants in our experiment are shown a cartoon along with
two captions. Each participant's task is to pick the caption they think is the funnier of the two. This is
repeated with many caption pairs and different participants. The objective of the learning algorithm
is to determine which caption participants think is the funniest overall as quickly as possible (i.e.,
using as few comparative judgments as possible). In our experiments, we chose an arbitrary cartoon
and n = 25 arbitrary captions from a curated set from the New Yorker dataset (the cartoon and all
25 captions can be found in the supplementary materials). The number of captions was limited to
25 primarily to keep the experimental dollar cost reasonable, but the NEXT system is capable of
handling arbitrarily large numbers of captions (arms) and duels.
Dueling Bandit Algorithms - There are several notions of a ?best? arm in the dueling bandit framework, including the Condorcet, Copeland, and Borda criteria. We focus on the Borda criterion in
this experiment for two reasons. First, algorithms based on the Condorcet or Copeland criterion
² We thank Bob Mankoff, cartoon editor of The New Yorker, for sharing the cartoon and caption data used in our experiments. www.newyorker.com
Caption          Plurality vote   Thompson        UCB             Successive Elim.  Random          Beat the Mean
My last of...    0.215 ± 0.013    0.638 ± 0.013   0.645 ± 0.017   0.640 ± 0.033     0.638 ± 0.031   0.663 ± 0.030
The last g...    0.151 ± 0.013    0.619 ± 0.026   0.608 ± 0.023   0.532 ± 0.032     0.519 ± 0.030   0.492 ± 0.032
The women'...    0.171 ± 0.013    0.632 ± 0.017   0.653 ± 0.016   0.665 ± 0.033     0.678 ± 0.030   0.657 ± 0.031
Do you eve...    0.121 ± 0.013    0.587 ± 0.027   0.534 ± 0.036   0.600 ± 0.030     0.578 ± 0.032   0.653 ± 0.033
I'm drowni...    0.118 ± 0.013    0.617 ± 0.018   0.623 ± 0.020   0.588 ± 0.032     0.594 ± 0.032   0.667 ± 0.031
Think of i...    0.087 ± 0.013    0.564 ± 0.031   0.500 ± 0.044   0.595 ± 0.032     0.640 ± 0.034   0.618 ± 0.033
They promi...    0.075 ± 0.013    0.620 ± 0.021   0.623 ± 0.021   0.592 ± 0.032     0.613 ± 0.029   0.632 ± 0.033
Want to ge...    0.061 ± 0.013    0.418 ± 0.061   0.536 ± 0.037   0.566 ± 0.031     0.621 ± 0.031   0.482 ± 0.032

Table 1: Dueling bandit results for identifying the "funniest" caption for a New Yorker cartoon. Darker shading (in the original figure) corresponds to an algorithm's rank-order of its predictions for the winner.
generally require sampling all (25 choose 2) = 300 possible pairs of arms/captions multiple times [4, 3]. Algorithms based on the Borda criterion do not necessarily require such exhaustive sampling, making them more attractive for large-scale problems [5]. Second, one can reduce dueling bandits with the Borda criterion to the standard multi-armed bandit problem using a scheme known as the Borda Reduction (BR) [5], allowing one to use a number of well-known and tested bandit algorithms. The algorithms considered in our experiment are: random uniform sampling with BR, Successive Elimination with BR [6], UCB with BR [7], Thompson Sampling with BR [8], and Beat the Mean [9], which was originally designed for identifying the Condorcet winner (see the supplementary materials for more implementation details).
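As a concrete illustration of the Borda reduction, here is a minimal simulation sketch (the preference matrix and helper names are ours, not part of NEXT): each duel pits arm i against a uniformly random opponent, turning the duel into a Bernoulli pull whose mean is arm i's Borda score, so a standard bandit index such as UCB applies directly:

    import math
    import random

    def borda_reduction_ucb(prefs, budget):
        """UCB on empirical Borda scores via the Borda reduction.
        prefs[i][j] is the probability that caption i beats caption j."""
        n = len(prefs)
        wins, pulls = [0] * n, [0] * n
        for t in range(budget):
            if t < n:
                i = t                      # pull every arm once to initialise
            else:
                i = max(range(n), key=lambda a: wins[a] / pulls[a]
                        + math.sqrt(2 * math.log(t) / pulls[a]))
            j = random.choice([a for a in range(n) if a != i])
            wins[i] += random.random() < prefs[i][j]   # simulated duel outcome
            pulls[i] += 1
        return [w / p for w, p in zip(wins, pulls)]    # empirical Borda scores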
Experimental Setup and Results - We posted 1250 NEXT tasks to Mechanical Turk, each of which
asked a unique participant to make 25 comparison judgements for $0.15. For each comparative
judgment, one of the five algorithms was chosen uniformly at random to select the caption pair
and the participant's decision was used to update that algorithm only. Each algorithm ranked the
captions in order of the empirical Borda score estimates, except the Beat the Mean algorithm which
used its modified Borda score [9]. To compare the quality of these results, we collected data in two different ways. First, we took the union of the top-5 captions from each algorithm, resulting in 8 "top captions," and asked a different set of 1497 participants to vote for the funniest of these 8 (one vote
per participant); we denote this the plurality vote ranking. The number of captions shown to each
participant was limited to 8 for practical reasons (e.g., display, voting ease).
The results of the experiment are summarized in Table 1. Each row corresponds to one of the 8 top captions and the columns correspond to different algorithms. Each table entry is the Borda score estimated by the corresponding algorithm, followed by a bound on its standard deviation. The bound is based on a Bernoulli model for the responses and is simply √(1/(4k)), where k is the number of judgments collected for the corresponding caption (which depends on the algorithm). The relative ranking of the scores is what is relevant here, but the uncertainties give an indication of each algorithm's certainty of these scores. In the table, each algorithm's best guesses at the "funniest" captions are highlighted in decreasing order with darker to lighter shades.
Overall, the predicted captions of the algorithms, which generally optimize for the Borda criterion,
appear to be in agreement with the result of the plurality vote. One thing that should be emphasized
is that the uncertainty (standard deviation) of the top arm scores of Thompson Sampling and UCB is
about half the uncertainty observed for the top three arms of the other methods, which suggests that
these algorithms can provide confident answers with 1/4 of the samples needed by other algorithms.
This is the result of Thompson Sampling and UCB being more aggressive and adaptive early on,
compared to the other methods, and therefore we recommend them for applications of this sort. We
conclude that Thompson Sampling and UCB perform best for this application and require
significantly fewer samples than non-adaptive random sampling or other bandit algorithms.
The results of a replication of this study can be found in the supplementary materials, from which
the same conclusions can be made.
3.2 Active Non-metric Multidimensional Scaling
Finding low-dimensional representations is a fundamental problem in machine learning. Non-metric multidimensional scaling (NMDS) is a classic technique that embeds a set of items into a low-dimensional metric space so that distances in that space predict a given set of (non-metric) human judgments of the form "k is closer to i than j." This learning task is more formidable than the dueling bandits problem in a number of ways, providing an interesting contrast in terms of demands
dueling bandits problem in a number of ways, providing an interesting contrast in terms of demands
and tradeoffs in the system. First, NMDS involves triples of items, rather than pairs, posing greater
challenges to scalability. Second, updating the embedding as new data are collected is much more
computationally intensive, which makes managing the tradeoff between updating the embedding
and carefully selecting new queries highly non-trivial.
Formally, the NMDS problem is defined as follows. Given a set S of comparative judgments and an embedding dimension d ≥ 1, the ideal goal is to identify a set of points {x1, . . . , xn} ⊂ Rᵈ such that ||xi − xk||² < ||xj − xk||² if "k is closer to i than j" is one of the given comparative judgments.
In situations where no embedding exists that agrees with all of the judgments in S, the goal is to find
an embedding that agrees on as many judgements as possible.
Active learning can be used to accelerate this learning process as follows. Once an embedding is
found based on a subset of judgments, the relative locations of the objects, at least at a coarse level,
are constrained. Consequently, many other judgments (not yet collected) can be predicted from the
coarse embedding, while others are still highly uncertain. The goal of active learning algorithms in
this setting is to adaptively select as few triplet queries (e.g., "is k closer to i or j?") as possible in
order to identify the structure of the embedding.
Active Sampling Algorithms - NMDS is usually posed as an optimization problem. Note that

    ||xi − xk||² < ||xj − xk||²  ⟺  xiᵀxi − 2xiᵀxk − xjᵀxj + 2xjᵀxk < 0  ⟺  ⟨XXᵀ, Hi,j,k⟩ < 0

where X = (x1, x2, . . . , xn)ᵀ ∈ R^{n×d} and Hi,j,k is an all-zeros matrix except for the sub-matrix H̃ = [1, 0, −1; 0, −1, 1; −1, 1, 0] defined on the indices [i, j, k] × [i, j, k]. This suggests an optimization:

    min_{X ∈ R^{n×d}} (1/|S|) Σ_{(i,j,k) ∈ S} ℓ(⟨XXᵀ, Hi,j,k⟩),

where ℓ in the literature has taken the form of hinge loss, logistic loss, or a general negative log-likelihood of a probabilistic model [10, 11, 12]. One may also recognize the similarity of this optimization problem with that of learning a linear classifier; here XXᵀ plays the role of a hyperplane and the Hi,j,k matrices are labeled examples.
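For concreteness, a minimal sketch of the hinge-loss instance of this optimization, fit by stochastic gradient descent on the embedding points (our own simplification; the implementations actually deployed in NEXT live in its source code):

    import numpy as np

    def hinge_embedding(triplets, n, d=2, lr=0.05, epochs=100, seed=0):
        """SGD on the sum over triplets (i, j, k), read as 'k is closer to i
        than to j', of the hinge loss max(0, 1 + ||xi-xk||^2 - ||xj-xk||^2)."""
        rng = np.random.default_rng(seed)
        X = rng.normal(scale=0.1, size=(n, d))
        for _ in range(epochs):
            for i, j, k in triplets:
                margin = 1 + np.sum((X[i] - X[k]) ** 2) - np.sum((X[j] - X[k]) ** 2)
                if margin > 0:                  # hinge is active: gradient step
                    gi = 2 * (X[i] - X[k])      # d margin / d xi
                    gj = -2 * (X[j] - X[k])     # d margin / d xj
                    X[i] -= lr * gi
                    X[j] -= lr * gj
                    X[k] -= lr * (-gi - gj)     # d margin / d xk
        return X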
Indeed, we apply active learning approaches developed for linear classifiers, like uncertainty sampling [13], to NMDS. Two active learning algorithms have been proposed in the past for this specific
application [11, 14]. Here we consider four data collection methods, inspired by these past works:
1) (passive) uniform random sampling, 2) uncertainty sampling based off an embedding discovered
by minimizing a hinge-loss objective, 3) approximate maximum information gain sampling using
the Crowd Kernel approach in [11], and 4) approximate maximum information gain sampling using
the t-STE distribution in [12]. Care was taken in the implementations to make these algorithms
perform as well as possible in a realtime environment, and we point the interested reader to the
Supplementary Materials and source code for details.
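Uncertainty sampling in this setting can likewise be sketched in a few lines: given the current embedding, query the candidate triplet whose two distances are closest to a tie (a simplification of the deployed version, which also has to manage candidate pools and the timing constraints discussed in Section 2):

    import numpy as np

    def most_uncertain_triplet(X, candidates):
        """Return the candidate triplet (i, j, k) whose margin
        | ||xi-xk||^2 - ||xj-xk||^2 | is smallest under embedding X."""
        def margin(t):
            i, j, k = t
            return abs(np.sum((X[i] - X[k]) ** 2) - np.sum((X[j] - X[k]) ** 2))
        return min(candidates, key=margin)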
Experimental Setup and Results - We have used NEXT for NMDS in many applications, including
embedding faces, words, numbers and images. Here we focus on a particular set of synthetic 3d
shape images that can be found in the supplementary materials. Each shape can be represented
by a single parameter reflecting it?s smoothness, so an accurate embedding should recover a one
dimensional manifold. The dataset consists of n = 30 shapes selected uniformly from the 1d
manifold, and so in total there are 30 29
2 = 12180 unique triplet queries that could be asked. For
all algorithms we set d = 2 for a 2d embedding (although we hope to see the intrinsic 1d manifold
in the result). Each participant was asked to answer 50 triplet queries and 400 total participants
contributed to the experiment. For each query, an algorithm was chosen uniformly at random from
the union of the set of algorithms plus an additional random algorithm whose queries were used for
the hold-out set. Consequently, each algorithm makes approximately 4000 queries.
We consider three different algorithms for generating embeddings from triplets: 1) an embedding that minimizes hinge loss, which we call "Hinge" [10], 2) the Crowd Kernel embedding with µ = 0.05, which we call "CK" [11], and 3) the t-STE embedding with α = 1 [12]. We wish to evaluate the sampling strategies, not the embedding strategies, so we apply each embedding strategy to each sampling procedure described above.
To evaluate the algorithms, we sort the collected triplets for each algorithm by timestamp, and then
every 300 triplets we compute an embedding using that algorithm?s answers and each strategy for
6
Figure 3: (Top) Stimuli used for the experiment. (Left) Triplet prediction error. (Right) Nearest
neighbor prediction accuracy.
identifying an embedding. In Figure 10, the left panel evaluates the triplet prediction performance
of the embeddings on the entire collected hold-out set of triplets while the right panel evaluates
the nearest neighbor prediction of the algorithms. For each point in the embedding we look at its
true nearest neighbor on the manifold in which the data was generated and we say an embedding
accurately predicts the nearest neighbor if the true nearest neighbor is within the top three nearest
neighbors of the point in the embedding, we then average over all points. Because this is just a
single trial, the evaluation curves are quite rough. The results of a replication of this experiment can
be found in the supplementary materials from which the same conclusions are made.
We conclude across both metrics and all sampling algorithms, the embeddings produced by minimizing hinge loss do not perform as well as those from Crowd Kernel or t-STE. In terms of predicting
triplets, the experiment provides no evidence that the active selection of triplets provides any improvement over random selection. In terms of nearest neighbor prediction, UncertaintySampling
may have a slight edge, but it is very difficult to make any conclusions with certainty. As in all
deployed active learning studies, we can not rule out the possibility that it is our implementation
that is responsible for the disappointment and not the algorithms themselves. However, we note that
when simulating human responses with Bernoulli noise under similar load conditions, uncertainty
sampling outperformed all other algorithms by a measurable margin leading us to believe that these
active learning algorithms may not be robust to human feedback. To sum up, in this application
there is no evidence for gains from adaptive sampling, but Crowd Kernel and t-STE do appear
to provide slightly better embeddings than the hinge loss optimization. But we caution that this
is but a single, possibly unrepresentative datapoint.
4
Implementation details of NEXT
The entire NEXT system was designed with machine learning researchers and practitioners in mind
rather than engineers with deep systems background. NEXT is almost completely written in Python,
but algorithms can be implemented in any programming language and wrapped in a python wrapper.
We elected to use a variety of startup scripts and Dockerfor deployment to automate the provisioning
process and minimize configuration issues. Details on specific software packages used can be found
in the supplementary materials.
Many components of NEXT can be scaled to work in a distributed environment. For example,
serving many (near) simultaneous ?getQuery? requests is straightforward; one can simply enlarge
the pool of workers by launching additional slave machines and point them towards the web request
7
queue, just like typical web-apps are scaled. This approach to scaling active learning has been
studied rigorously [15]. Processing tasks such as data fitting and selection can also be accelerated
using standard distributed platforms (see next section). A more challenging scaling issue arises in
the learning process. Active learning algorithms update models sequentially as data are collected
and the models guide the selection of new data. Recall that this serial process is handled by a model
update queue. When a worker accepts a job from the queue, the queue is locked until that job is
finished. The processing times required for model fitting and data selection introduce latencies that
may reduce possible speedups afforded by active learning compared to passive learning (since the
rate of ?getQuery? requests could exceed the processing rate of the learning algorithm).
If the number of algorithms running in parallel outnumber the number of workers dedicated to serving the synchronous locking model update queues, performance can be improved by adding more
slave machines, and thus workers, to process the queues. Simulating a load with stress-tests and
inspecting the provided dashboards on NEXT of CPU, memory, queue size, model-staleness, etc.
makes deciding the number of machines for an expected load a straightforward task. An algorithm
in NEXT could also bypass the locking synchronous queue by employing asynchronous schemes
like [16] directly in processAnswer. This could speed up processing through parallelization, but
could reduce active learning speedups since workers may overwrite the previous work of others.
5
Related Work and Discussion
There have been some examples of deployed active learning with human feedback; for human perception [11, 17], interactive search and citation screening [18, 19], and in particular by research
groups from industry, for web content personalization and contextual advertising [20, 21]. However,
these remain special purpose implementations, while the proposed NEXT system provides a flexible and general-purpose active learning platform that is versatile enough to develop, test, and field
any of these specific applications. Moreover, previous real-world deployments have been difficult to
replicate. NEXT could have a profound effect on research reproducibility; it allows anyone to easily
replicate past (and future) algorithm implementations, experiments, and applications.
There exist many sophisticated libraries and systems for performing machine learning at scale. Vowpal Wabbit [22], MLlib [23], Oryx [24] and GraphLab [25] are all excellent examples of state-of-theart software systems designed to perform inference tasks like classification, regression, or clustering
at enormous scale. Many of these systems are optimized for operating on a fixed, static dataset, making them incomparable to NEXT. But some, like Vowpal Wabbit have some active learning support.
The difference between these systems and NEXT is that their goal was to design and implement the
best possible algorithms for very specific tasks that will take the fullest advantage of each system?s
own capabilities. These systems provide great libraries of machine learning tools, whereas NEXT
is an experimental platform to develop, test, and compare active learning algorithms and to allow
practitioners to easily use active learning methods for data collection.
In the crowd-sourcing space there exist excellent tools like PsiTurk [1], AutoMan [2], and CrowdFlower [26] that provide functionality to simplify various aspects of crowdsourcing, including automated task management and quality assurance controls. While successful in this aim, these crowd
programming libraries do not incorporate the necessary infrastructure for deploying active learning
across participants or adaptive data acquisition strategies. NEXT provides a unique platform for
developing active crowdsourcing capabilities and may play a role in optimizing the use of humancomputational resources like those discussed in [27].
Finally, while systems like Oryx [24] and Velox [28] that leverage Apache Spark are made for deployment on the web and model serving, they were designed for very specific types of models that
limit their versatility and applicability. They were also built for an audience with a greater familiarity with systems and understandably prioritize computational performance over, for example, the
human-time it might take a cognitive scientist or active learning theorist to figure out how to actively
crowdsource a large human-subject study using Amazon?s Mechanical Turk.
At the time of this submission, NEXT has been used to ask humans hundreds of thousands of actively
selected queries in ongoing cognitive science studies. Working closely with cognitive scientists who
relied on the system for their research helped us make NEXT predictable, reliable, easy to use and,
we believe, ready for everyone.
8
References
[1] T.M. Gureckis, J. Martin, J. McDonnell, et al. psiTurk: An open-source framework for conducting
replicable behavioral experiments online. (in press).
[2] Daniel W Barowy, Charlie Curtsinger, Emery D Berger, and Andrew McGregor. Automan: A platform
for integrating human-based and digital computation. ACM SIGPLAN Notices, 47(10):639?654, 2012.
[3] Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. The k-armed dueling bandits problem. Journal of Computer and System Sciences, 78(5), 2012.
[4] Tanguy Urvoy, Fabrice Clerot, Raphael F?eraud, and Sami Naamane. Generic exploration and k-armed
voting bandits. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[5] Kevin Jamieson, Sumeet Katariya, Atul Deshpande, and Robert Nowak. Sparse dueling bandits. AISTATS,
2015.
[6] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the
multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research,
7:1079?1105, 2006.
[7] Kevin Jamieson, Matthew Malloy, Robert Nowak, and S?ebastien Bubeck. lil?ucb: An optimal exploration
algorithm for multi-armed bandits. In Proceedings of The 27th Conference on Learning Theory, 2014.
[8] Olivier Chapelle and Lihong Li. An empirical evaluation of thompson sampling. In Advances in neural
information processing systems, pages 2249?2257, 2011.
[9] Yisong Yue and Thorsten Joachims. Beat the mean bandit. In Proceedings of the 28th International
Conference on Machine Learning (ICML-11), pages 241?248, 2011.
[10] Sameer Agarwal, Josh Wills, Lawrence Cayton, et al. Generalized non-metric multidimensional scaling.
In International Conference on Artificial Intelligence and Statistics, pages 11?18, 2007.
[11] Omer Tamuz, Ce Liu, Ohad Shamir, Adam Kalai, and Serge J Belongie. Adaptively learning the crowd
kernel. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011.
[12] Laurens Van Der Maaten and Kilian Weinberger. Stochastic triplet embedding. In Machine Learning for
Signal Processing (MLSP), 2012 IEEE International Workshop on, pages 1?6. IEEE, 2012.
[13] Burr Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11.
[14] Kevin G Jamieson and Robert D Nowak. Low-dimensional embedding using adaptively selected ordinal
data. In Allerton Conference on Communication, Control, and Computing, 2011.
[15] Alekh Agarwal, Leon Bottou, Miroslav Dudik, and John Langford. Para-active learning. arXiv preprint
arXiv:1310.8243, 2013.
[16] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to
parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, 2011.
[17] All our ideas. http://allourideas.org/. Accessed: 2015-09-15.
[18] Byron C Wallace, Kevin Small, Carla E Brodley, Joseph Lau, and Thomas A Trikalinos. Deploying an
interactive machine learning system in an evidence-based practice center: abstrackr. In Proceedings of
the 2nd ACM SIGHIT International Health Informatics Symposium, pages 819?824. ACM, 2012.
[19] Kyle H Ambert, Aaron M Cohen, Gully APC Burns, Eilis Boudreau, and Kemal Sonmez. Virk: an active
learning-based system for bootstrapping knowledge base development in the neurosciences. Frontiers in
neuroinformatics, 7, 2013.
[20] Lihong Li, Wei Chu, John Langford, and Robert E Schapire. A contextual-bandit approach to personalized
news article recommendation. In International conference on World wide web. ACM, 2010.
[21] Deepak Agarwal, Bee-Chung Chen, and Pradheep Elango. Explore/exploit schemes for web content
optimization. In Data Mining, 2009. ICDM. Ninth IEEE International Conference on. IEEE, 2009.
[22] Alekh Agarwal, Olivier Chapelle, Miroslav Dud??k, and John Langford. A reliable effective terascale
linear learning system. The Journal of Machine Learning Research, 15(1):1111?1133, 2014.
[23] Xiangrui Meng et al. MLlib: Machine learning in apache spark. arXiv:1505.06807, 2015.
[24] Oryx 2. https://github.com/OryxProject/oryx. Accessed: 2015-06-05.
[25] Yucheng Low, Danny Bickson, Joseph Gonzalez, et al. Distributed graphlab: a framework for machine
learning and data mining in the cloud. Proceedings of the VLDB Endowment, 5(8):716?727, 2012.
[26] CrowdFlower. http://www.crowdflower.com/. Accessed: 2015-06-05.
[27] Seungwhan Moon and Jaime G Carbonell. Proactive learning with multiple class-sensitive labelers. In
Data Science and Advanced Analytics (DSAA), 2014 International Conference on. IEEE, 2014.
[28] Daniel Crankshaw, Peter Bailis, Joseph E Gonzalez, et al. The missing piece in complex analytics: Low
latency, scalable model management and serving with velox. CIDR, 2015.
9
| 5868 |@word trial:1 repository:1 judgement:2 polynomial:1 norm:2 replicate:3 nd:1 open:4 instruction:1 vldb:1 atul:1 accounting:1 fabrice:1 pick:1 yorker:5 shading:1 versatile:1 reduction:1 wrapper:1 liu:1 initial:1 score:6 selecting:5 series:1 daniel:2 configuration:1 past:3 current:1 comparing:2 contextual:3 com:4 yet:1 chu:1 must:9 danny:1 written:1 john:3 informative:2 shape:3 reproducible:3 designed:4 update:13 bickson:1 intelligence:1 selected:4 fewer:1 item:2 website:1 guess:1 assurance:1 half:1 xk:6 short:1 infrastructure:1 provides:8 coarse:2 institution:1 mannor:1 successive:2 launching:1 org:1 allerton:1 accessed:3 five:1 location:1 elango:1 along:2 beta:1 symposium:1 profound:1 replication:2 consists:1 fitting:2 behavioral:1 inside:1 burr:1 manner:1 introduce:1 expected:1 indeed:2 roughly:1 themselves:1 frequently:3 wallace:1 multi:5 manager:5 inspired:1 decreasing:1 automatically:3 cpu:1 xti:2 armed:7 provided:3 xx:1 begin:1 project:1 formidable:1 panel:2 moreover:1 what:5 minimizes:1 developer:1 developed:3 caution:1 differing:1 finding:1 bootstrapping:1 pseudo:1 certainty:2 berkeley:2 multidimensional:3 voting:2 every:1 collecting:4 interactive:2 classifier:3 demonstrates:1 scaled:2 control:2 jamieson:4 appear:2 arguably:1 scientist:4 timing:1 local:1 engineering:1 limit:2 api:11 meng:1 niu:1 approximately:1 might:1 plus:1 burn:1 initialization:1 studied:1 chose:1 suggests:2 collect:5 challenging:1 deployment:4 ease:2 limited:2 analytics:2 locked:2 adoption:1 unique:6 responsible:3 practical:4 testing:4 practice:2 block:1 implement:4 union:2 regret:1 urge:1 procedure:1 area:1 empirical:2 significantly:1 thought:1 word:1 integrating:1 get:2 unlabeled:1 close:1 selection:6 scheduling:1 fullest:1 context:4 applying:2 impossible:1 put:1 optimize:1 measurable:1 jaime:1 www:2 missing:1 vowpal:2 center:1 straightforward:2 thompson:6 survey:1 amazon:2 identifying:3 spark:2 pure:2 stats:4 suff:2 rule:1 insight:1 system1:1 classic:1 handle:1 notion:3 embedding:25 overwrite:2 updated:2 shamir:1 play:2 yishay:1 user:7 deploy:1 lighter:1 programming:2 us:1 today:1 exact:1 caption:23 agreement:1 olivier:2 updating:3 curated:1 submission:1 predicts:1 database:4 labeled:5 observed:1 cloud:3 role:2 module:1 preprint:1 thousand:3 news:1 kilian:1 extensibility:2 benjamin:1 environment:3 predictable:1 locking:5 complexity:2 asked:5 rigorously:1 depend:1 serve:2 logging:1 completely:1 accelerate:3 easily:6 represented:1 various:1 jain:1 describe:1 effective:1 query:34 artificial:1 labeling:1 startup:1 kevin:5 choosing:1 crowd:10 neuroinformatics:1 exhaustive:1 whose:1 quite:1 posed:1 supplementary:10 say:1 reconstruct:1 ability:1 statistic:6 think:3 highlighted:1 itself:2 asynchronously:1 online:1 advantage:1 indication:1 wabbit:2 took:1 interaction:1 raphael:1 updatemodel:6 ste:4 relevant:1 omer:1 reproducibility:1 scalability:1 getting:1 race:1 comparative:4 generating:1 adam:1 emery:1 object:1 help:4 spent:1 andrew:1 develop:2 nearest:7 progress:1 job:15 implemented:3 predicted:2 involves:1 come:1 laurens:1 closely:1 functionality:3 modifying:1 stochastic:2 exploration:4 human:14 routing:2 enable:1 settle:1 material:10 elimination:2 implementing:3 require:4 collaborator:1 plurality:3 anticipate:1 inspecting:1 frontier:1 hold:2 considered:1 wright:1 deciding:1 great:1 lawrence:1 algorithmic:1 urvoy:1 naamane:1 predict:1 automate:1 matthew:1 scope:1 week:1 early:1 crowdflower:3 purpose:2 outperformed:1 label:2 expose:2 sensitive:1 agrees:2 faithfully:1 gauge:1 tool:5 create:2 defeating:1 
hope:2 rough:1 aim:2 modified:1 ck:1 kalai:1 rather:4 command:1 focus:5 joachim:2 improvement:1 rank:1 bernoulli:2 likelihood:1 contrast:1 dollar:2 inference:1 stopping:1 entire:2 bandit:20 reproduce:2 interested:3 josef:1 overall:2 classification:5 issue:5 among:2 flexible:1 glattard:2 development:3 platform:9 art:2 constrained:1 biologist:1 timestamp:1 field:3 breakthrough:1 once:2 special:1 uc:1 sampling:22 cartoon:7 enlarge:1 broad:1 look:2 icml:2 unsupervised:1 theart:1 future:1 others:2 recommend:1 simplify:1 stimulus:1 primarily:1 employ:2 few:4 simultaneously:1 recognize:1 individual:1 xtj:2 familiar:1 versatility:1 attempt:1 screening:1 possibility:1 mining:2 highly:2 evaluation:5 adjust:1 weakness:2 personalization:1 accurate:2 edge:1 nowak:4 worker:15 helper:1 encourage:1 closer:3 ohad:1 necessary:2 capable:2 old:1 cooky:1 re:1 theoretical:2 minimal:1 uncertain:1 miroslav:2 instance:1 column:1 witnessed:1 industry:1 applicability:1 cost:2 deviation:2 subset:1 imperative:1 entry:2 hundred:1 uniform:2 delay:1 successful:1 psiturk:3 reported:3 answer:18 para:1 synthetic:1 my:1 confident:1 adaptively:9 recht:1 international:9 randomized:1 broder:1 accessible:1 chooses:1 thanks:2 fundamental:1 contract:1 probabilistic:1 off:1 pool:2 informatics:1 quickly:3 again:1 management:4 yisong:2 opposed:1 possibly:1 woman:1 prioritize:1 nmds:6 cognitive:5 expert:4 chung:1 leading:1 return:3 actively:2 li:2 aggressive:1 potential:2 summarized:1 includes:2 mlsp:1 explicitly:1 ranking:2 fernandez:1 depends:1 piece:1 script:2 hogwild:1 picked:1 proactive:1 eyal:1 helped:2 later:3 tion:1 responsibility:1 relied:1 participant:16 parallel:2 capability:5 sort:3 recover:1 borda:9 contribution:1 minimize:2 publicly:1 accuracy:1 moon:1 largely:1 who:2 sumeet:1 judgment:9 correspond:1 serge:1 conducting:1 gathered:1 identify:2 apps:1 raw:1 lalit:1 produced:1 accurately:1 advertising:1 expertise:1 researcher:12 bob:1 simultaneous:1 datapoint:1 deploying:4 inform:1 sharing:1 duel:1 evaluates:2 acquisition:1 deshpande:1 involved:1 turk:4 copeland:2 associated:1 static:1 outnumber:1 gain:5 dataset:5 uncertaintysampling:1 ask:1 recall:1 knowledge:2 carefully:3 sophisticated:3 back:2 reflecting:1 originally:1 response:6 improved:1 wei:1 evaluated:1 cayton:1 just:6 until:2 langford:3 working:1 hand:2 web:17 christopher:1 eraud:1 lack:1 defines:1 logistic:1 quality:3 perhaps:2 pulling:1 believe:2 scientific:1 stale:1 facilitate:1 effect:2 building:4 true:2 managed:1 clerot:1 hence:1 dud:1 staleness:1 attractive:1 round:1 wrapped:1 m:4 criterion:6 generalized:1 fatigue:1 stress:1 apc:1 complete:1 demonstrate:2 dedicated:1 interface:2 passive:7 elected:1 image:2 kyle:1 apache:2 cohen:1 winner:3 endpoint:1 discussed:1 slight:1 extend:1 refer:1 theorist:1 smoothness:1 rd:1 provisioning:1 contest:1 language:1 lihong:2 chapelle:2 access:3 similarity:1 alekh:2 operating:1 etc:2 base:1 labelers:1 own:1 optimizing:1 arbitrarily:1 der:1 additional:2 care:1 greater:3 dudik:1 managing:2 determine:2 paradigm:1 living:1 stephen:1 signal:1 multiple:5 sameer:1 adapt:1 icdm:1 serial:1 prediction:6 scalable:1 regression:2 involving:2 metric:6 tasked:2 dashboard:3 arxiv:3 publish:1 kernel:6 sometimes:4 agarwal:4 audience:1 receive:1 whereas:1 want:1 background:1 diagram:1 source:5 invited:1 rest:2 parallelization:1 yue:2 subject:1 funniest:4 sent:2 shie:1 byron:1 thing:1 flow:3 leveraging:1 practitioner:2 call:2 eve:1 near:1 leverage:1 ideal:1 exceed:1 enough:2 easy:2 automated:3 embeddings:4 xj:3 sami:1 psychology:1 variety:1 
suboptimal:1 reduce:4 idea:1 prototype:1 incomparable:1 tradeoff:5 minx2rn:1 br:5 intensive:1 synchronous:4 recreate:1 motivated:2 handled:2 routed:4 peter:1 queue:20 returned:1 action:1 dar:1 deep:2 useful:1 latency:4 gureckis:1 generally:2 involve:2 hosting:1 amount:1 processed:1 generate:1 http:4 schapire:1 exist:3 notice:2 designer:2 estimated:1 neuroscience:1 per:1 serving:4 write:1 waiting:1 group:1 key:2 four:1 enormous:1 acknowledged:2 wisc:1 ce:1 sighit:1 sum:1 run:1 package:1 uncertainty:7 you:1 reporting:1 almost:1 reader:2 decide:2 reasonable:1 realtime:3 home:1 gonzalez:2 maaten:1 decision:3 scaling:5 def:3 hi:4 bound:2 followed:1 mllib:2 display:3 strength:2 afforded:2 software:5 x2:1 personalized:2 sigplan:1 katariya:1 kleinberg:1 aspect:1 simulate:1 speed:1 anyone:1 leon:1 performing:1 relatively:1 martin:1 speedup:2 developing:1 request:13 mcdonnell:1 remain:1 across:2 slightly:1 joseph:3 widget:1 making:3 psychologist:1 lau:1 handling:2 gathering:1 thorsten:2 heart:1 taken:2 computationally:1 resource:1 previously:2 describing:1 needed:4 ordinal:1 mind:1 ge:1 letting:1 available:4 malloy:1 experimentation:7 apply:2 away:1 appropriate:1 generic:1 simulating:2 batch:2 weinberger:1 thomas:1 top:7 running:1 clustering:2 charlie:1 publishing:1 graphical:2 lock:1 hinge:6 madison:2 exploit:1 feng:1 objective:2 question:1 strategy:5 interacts:1 gradient:1 distance:1 thank:1 simulated:1 hxx:2 condorcet:3 carbonell:1 chris:1 manifold:4 collected:11 trivial:1 reason:2 analyst:1 code:4 index:1 berger:1 providing:2 minimizing:3 kemal:1 innovation:1 difficult:3 setup:2 executed:1 robert:6 potentially:5 expense:1 negative:1 design:6 ebastien:1 proper:2 lil:1 implementation:8 perform:6 contributed:1 allowing:1 inspect:1 datasets:6 benchmark:2 enabling:2 descent:1 displayed:1 beat:4 situation:1 communication:2 rn:1 discovered:1 synchronously:1 mansour:1 arbitrary:2 ninth:1 parallelizing:1 pair:5 required:1 mechanical:4 extensive:2 disappointment:1 optimized:2 security:1 nick:1 learned:1 accepts:5 hour:1 address:3 yucheng:1 able:2 usually:2 flower:1 perception:1 below:1 challenge:5 pradheep:1 preselected:1 built:4 including:4 memory:1 reliable:2 everyone:1 dueling:9 greatest:1 enlightening:2 xiangrui:1 difficulty:1 ranked:1 client:7 predicting:1 advanced:1 arm:5 scheme:3 github:3 brodley:1 library:3 finished:2 tanguy:1 ready:1 health:1 understanding:1 deepest:1 literature:2 acknowledgement:1 python:3 bee:1 relative:2 wisconsin:2 loss:6 interesting:1 facing:1 triple:1 digital:1 sufficient:1 imposes:1 article:1 editor:1 terascale:1 bypass:1 collaboration:1 endowment:1 row:1 rdnowak:1 balancing:1 sourcing:2 last:2 asynchronous:3 free:1 offline:1 guide:1 allow:1 elim:1 neighbor:7 template:2 face:2 wide:2 deepak:1 sparse:1 distributed:3 van:1 feedback:2 curve:1 dimension:1 world:13 avoids:1 xn:2 replicable:1 made:4 reinforcement:1 collection:15 adaptive:12 far:1 employing:1 social:1 citation:1 approximate:2 crowdsource:1 keep:1 graphlab:2 active:67 sequentially:2 kjamieson:1 belongie:1 conclude:2 consuming:1 xi:3 continuous:1 search:1 triplet:12 chief:1 table:4 robin:1 learn:1 nature:1 robust:1 posing:1 excellent:3 complex:1 posted:1 bottou:1 necessarily:2 submit:5 understandably:1 enqueue:3 aistats:1 main:1 noise:1 repeated:2 x1:2 tamuz:1 hosted:1 fashion:1 deployed:3 darker:2 embeds:1 sub:1 explicit:1 wish:1 slave:2 answering:2 remained:1 keystroke:1 shade:1 specific:5 emphasized:1 load:4 familiarity:1 svm:2 evidence:3 concern:1 essential:1 exists:1 intrinsic:1 workshop:1 sequential:2 
adding:1 push:1 margin:1 demand:2 chen:1 easier:1 carla:1 simply:5 explore:1 bubeck:1 josh:1 unexpected:2 recommendation:2 corresponds:2 acm:4 goal:5 consequently:3 towards:1 unrepresentative:1 content:3 typical:1 except:2 reducing:1 uniformly:3 hyperplane:1 engineer:1 total:2 called:2 accepted:1 experimental:8 vote:5 ucb:6 aaron:1 select:6 formally:1 support:1 latter:1 arises:2 accelerated:1 ongoing:1 incorporate:1 evaluate:2 tested:1 mcgregor:1 crowdsourcing:3 |
5,378 | 5,869 | Structured Transforms for
Small-Footprint Deep Learning
Vikas Sindhwani
Tara N. Sainath
Sanjiv Kumar
Google, New York
{sindhwani, tsainath, sanjivk}@google.com
Abstract
We consider the task of building compact deep learning pipelines suitable for deployment on storage and power constrained mobile devices. We propose a unified framework to learn a broad family of structured parameter matrices that are
characterized by the notion of low displacement rank. Our structured transforms
admit fast function and gradient evaluation, and span a rich range of parameter
sharing configurations whose statistical modeling capacity can be explicitly tuned
along a continuum from structured to unstructured. Experimental results show
that these transforms can significantly accelerate inference and forward/backward
passes during training, and offer superior accuracy-compactness-speed tradeoffs
in comparison to a number of existing techniques. In keyword spotting applications in mobile speech recognition, our methods are much more effective than
standard linear low-rank bottleneck layers and nearly retain the performance of
state of the art models, while providing more than 3.5-fold compression.
1
Introduction
Non-linear vector-valued transforms of the form, f (x, M) = s(Mx), where s is an elementwise
nonlinearity, x is an input vector, and M is an m ? n matrix of parameters are building blocks
of complex deep learning pipelines and non-parametric function estimators arising in randomized
kernel methods [20]. When M is a large general dense matrix, the cost of storing mn parameters
and computing matrix-vector products in O(mn) time can make it prohibitive to deploy such models
on lightweight mobile devices and wearables where battery life is precious and storage is limited.
This is particularly relevant for ?always-on? mobile applications, such as continuously looking for
specific keywords spoken by the user or processing a live video stream onboard a mobile robot. In
such settings, the models may need to be hosted on specialized low-power digital signal processing
components which are even more resource constrained than the device CPU.
A parsimonious structure typically imposed on parameter matrices is that of low-rankness [22]. If
M is a rank r matrix, with r min(m, n), then it has a (non-unique) product representation of the
form M = GHT where G, H have only r columns. Clearly, this representation reduces the storage
requirements to (mr + nr) parameters, and accelerates the matrix-vector multiplication time to
O(mr+nr) via Mx = G(HT x). Another popular structure is that of sparsity [6] typically imposed
during optimization via zero-inducing l0 or l1 regularizers. Other techniques include freezing M
to be a random matrix as motivated via approximations to kernel functions [20], storing M in low
fixed-precision formats [7, 24], using specific parameter sharing mechanisms [3], or training smaller
models on outputs of larger models (?distillation?) [11].
Structured Matrices: An m ? n matrix which can be described in much fewer than mn parameters is referred to as a structured matrix. Typically, the structure should not only reduce memory
1
requirements, but also dramatically accelerate inference and training via fast matrix-vector products
and gradient computations. Below are classes of structured matrices arising pervasively in many
contexts [18] with different types of parameter sharing (indicated by the color).
(iii) Cauchy
(i) Toeplitz
(ii) Vandermonde
?
t0
t?1
...
?
?
?
?
?
?
t1
.
.
.
t0
.
.
.
...
...
.
.
.
t1
tn?1
t?(n?1)
.
.
.
t?1
t0
??
??
??
??
??
??
?
?
1
1
.
.
.
1
v0
v1
.
.
.
vn?1
...
...
.
.
.
...
n?1
v0
n?1
v1
.
.
.
n?1
vn?1
?
?
?
?
?
?
?
?
?
?
?
?
?
?
1
u0 ?v0
...
...
1
u1 ?v0
...
...
.
.
.
.
.
.
...
.
.
.
...
1
un?1 ?v0
1
u0 ?vn?1
?
.
.
.
?
?
?
?
?
?
?
?
.
.
.
1
un?1 ?vn?1
Toeplitz matrices have constant values along each of their diagonals. When the same property holds
for anti-diagonals, the resulting class of matrices are called Hankel matrices. Toeplitz and Hankel
matrices are intimately related to one-dimensional discrete convolutions [10], and arise naturally
in time series analysis and dynamical systems. A Vandermonde matrix is determined by taking
elementwise powers of its second column. A very important special case is the complex matrix
associated with the Discrete Fourier transform (DFT) which has Vandermonde structure with vj =
?nj , j = 1 . . . n where ?n = exp ?2?i
is the primitive nth root of unity. Similarly, the entries of
n
n ? n Cauchy matrices are completely defined by two length n vectors. Vandermonde and Cauchy
matrices arise naturally in polynomial and rational interpolation problems.
?Superfast? Numerical Linear Algebra: The structure in these matrices can be exploited for faster
linear algebraic operations such as matrix-vector multiplication, inversion and factorization. In particular, the matrix-vector product can be computed in time O(n log n) for Toeplitz and Hankel matrices, and in time O(n log2 n) for Vandermonde and Cauchy matrices.
Displacement Operators: At first glance, these matrices appear to have very different kinds of
parameter sharing and consequently very different algorithms to support fast linear algebra. It turns
out, however, that each structured matrix class described above, can be associated with a specific
displacement operator, L : Rm?n 7? Rm?n which transforms each matrix, say M, in that class into
an m ? n matrix L[M] that has very low-rank, i.e. rank(L[M]) min(m, n). This displacement
rank approach, which can be traced back to a seminal 1979 paper [13], greatly unifies algorithm
design and complexity analysis for structured matrices [13], [18], [14].
Generalizations of Structured Matrices: Consider deriving a matrix by taking arbitrary linear
?1
combinations of products of structured matrices and their inverses, e.g. ?1 T1 T?1
2 + ?2 T3 T4 T5
where each Ti is a Toeplitz matrix. The parameter sharing structure in such a derived matrix is by
no means apparent anymore. Yet, it turns out that the associated displacement operator remarkably
continues to expose the underlying parsimony structure, i.e. such derived matrices are still mapped to
relatively low-rank matrices! The displacement rank approach allows fast linear algebra algorithms
to be seamlessly extended to these broader classes of matrices. The displacement rank parameter
controls the degree of structure in these generalized matrices.
Technical Preview, Contributions and Outline: We propose building deep learning pipelines
where parameter matrices belong to the class of generalized structured matrices characterized by
low displacement rank. In Section 2, we attempt to give a self-contained overview of the displacement rank approach [13], [18] drawing key results from the relevant literature on structured matrix
computations (proved in our supplementary material [1] for completeness). In Section 3, we show
that the proposed structured transforms for deep learning admit fast matrix multiplication and gradient computations, and have rich statistical modeling capacity that can be explicitly controlled by
the displacement rank hyperparameter, covering, along a continuum, an entire spectrum of configurations from highly structured to unstructured matrices. While our focus in this paper is on
Toeplitz-related transforms, our proposal extends to other structured matrix generalizations. In Section 4, we study inference and training-time acceleration with structured transforms as a function of
displacement rank and dimensionality. We find that our approach compares highly favorably with
numerous other techniques for learning size-constrained models on several benchmark datasets. Finally, we demonstrate our approach on mobile speech recognition applications where we are able to
match the performance of much bigger state of the art models with a fraction of parameters.
Notation: Let e1 . . . en denote the canonical basis elements of Rn (viewed as column vectors).
In , 0n denote n ? n identity and zero matrices respectively. Jn = [en . . . e1 ] is the anti-identity
reflection matrix whose action on a vector is to reverse its entries. When the dimension is obvious
2
we may drop the subscript; for rectangular matrices, we may specify both the dimensions explicitly,
e.g. we use 01?n for a zero-valued row-vector, and 1n for all ones column vector of length n. u ? v
? will
denotes Hadamard (elementwise) product between two vectors v, u. For a complex vector u, u
denote the vector of complex conjugate of its entries. The Discrete Fourier Transform (DFT) matrix
will be denoted by ? (or ?n ); we will also use fft(x) to denote ?x, and ifft(x) to denote ??1 x.
For a vector v, diag(v) denotes a diagonal matrix given by diag(v)ii = vi .
2
Displacement Operators associated with Structured Matrices
We begin by providing a brisk background on the displacement rank approach. Unless otherwise
specified, for notational convenience we will henceforth assume squared transforms, i.e., m = n,
and discuss rectangular transforms later. Proofs of various assertions can be found in our selfcontained supplementary material [1] or in [18, 19].
The Sylvester displacement operator, denoted as L = ?A,B : Rn?n 7? Rn?n is defined by,
?A,B [M] = AM ? MB
(1)
n?n
n?n
where A ? R
,B ? R
are fixed matrices referred to as operator matrices. Closely related is
the Stein displacement operator, denoted as L = 4A,B : Rn?n 7? Rn?n , and defined by,
4A,B [M] = M ? AMB
(2)
By carefully choosing A and B one can instantiate Sylvester and Stein displacement operators with
desirable properties. In particular, for several important classes of displacement operators, A and/or
B are chosen to be an f -unit-circulant matrix defined as follows.
Definition 2.1 (f -unit-Circulant Matrix). For a real-valued scalar f , the (n ? n) f-circulant matrix,
denoted by Zf , is defined as follows,
?
?
0 0 ... f
? 1 0 ... 0 ?
01?(n?1)
f
?
Zf = [e2 , e3 . . . en , f e1 ] = ?
=
..
..
.. ?
? ...
In?1
0(n?1)?1
.
.
.
0 ... 1 0
The f -unit-circulant matrix is associated with a basic downward shift-and-scale transformation, i.e.,
the matrix-vector product Zf v shifts the elements of the column vector v ?downwards?, and scales
and brings the last element vn to the ?top?, resulting in [f vn , v1 , . . . vn?1 ]T . It has several basic
algebraic properties (see Proposition 1.1 [1]) that are crucial for the results stated in this section
Figure 1 lists the rank of the Sylvester displacement operator in Eqn 1 when applied to matrices
belonging to various structured matrix classes, where the operator matrices A, B in Eqn. 1 are
chosen to be diagonal and/or f -unit-circulant. It can be seen that despite the difference in their
structures, all these classes are characterized by very low displacement rank. Figure 2 shows how
this low-rank transformation happens in the case of a 4 ? 4 Toeplitz matrix (also see section 1,
Lemma 1.2 [1]). Embedded in the 4 ? 4 Toeplitz matrix T are two copies of a 3 ? 3 Toeplitz matrix
shown in black and red boxes. The shift and scale action of Z1 and Z?1 aligns these sub-matrices.
By taking the difference, the Sylvester displacement operator nullifies the aligned submatrix leaving
a rank 2 matrix with non-zero elements only along its first row and last column. Note that the
negative sign introduced by TZ?1 term prevents the complete zeroing out of the value of t (marked
by red star) and is hence critical for invertibility of the displacement action.
Figure 2: Displacement Action on Toeplitz Matrix
Figure 1: Below r is rank(?A,B [M])
T
r
?2
?2
?4
?1
?1
?1
?1
?1
?
i
sh
wn
do
ft
u v
? t
? x t u
?
? y x t
?
z y x
w
v
u
t
?
?
?
?
?
?
ift
B
Z?1
ZT
0
Z0 + ZT
0
Z0
diag(v)
diag(v)
diag(t)
diag(s)
tsh
A
Z1
Z1
Z0 + ZT
0
diag(v)
Z0
ZT
0
diag(s)
diag(t)
TZ?1
Z1 T
?
?
?
?
?
?
z y
t u
x t
y x
3
?
x t
?
v w ?
?
u v ?
?
t u
lef
Structured Matrix M
Toeplitz T, T?1
Hankel H, H?1
T+H
Vandermonde V (v)
V (v)?1
V (v)T
Cauchy C(s, t)
C(s, t)?1
?
?
?
?
?
?
?
u v
t u
x t
y x
?
w -t
?
v -x ?
?
u -y ?
?
t -z
?
=
?
?
?
?
?
Z1 T ? TZ?1
?
* * * *
?
0 0 0 * ?
?
0 0 0 * ?
?
0 0 0 *
Each class of structured matrices listed in Figure 1 can be naturally generalized by allowing
the rank of the displacement operator to be higher. Specifically, given a displacement operator L, and displacement rank parameter r, one may consider the class of matrices M that satisfies rank(L(M)) ? r. Clearly then, L[M] = GHT for rank r matrices G, H. We refer to
rank(L(M)) as the displacement rank of M under L, and to the low-rank factors G, H ? Rn?r as
the associated low-displacement generators. For the operators listed in Table 1, these broader classes
of structured matrices are correspondingly called Toeplitz-like, Vandermonde-like and Cauchy-like.
Fast numerical linear algebra algorithms extend to such matrices [18].
In order to express structured matrices with low-displacement rank directly as a function of its lowdisplacement generators, we need to invert L and obtain a learnable parameterization. For Stein type
displacement operator, the following elegant result is known (see proof in [1]):
Theorem 2.2 ( [19], Krylov Decomposition). If an n ? n matrix M is such that 4A,B [M] = GHT
where G = [g1 . . . gr ], H = [h1 . . . hr ] ? Rn?r and the operator matrices satisfy: An = aI,
B n = bI for some scalars a, b, then M can be expressed as:
r
1 X
M=
krylov(A, gj )krylov(BT , hj )T
1 ? ab j=1
(3)
where krylov(A, v) is defined by:
krylov(A, v) = [v Av A2 v . . . An?1 v]
(4)
Henceforth, our focus in this paper will be on Toeplitz-like matrices for which the displacement operator of interest (see Table 1) is of Sylvester type: ?Z1 ,Z?1 . In order to apply Theorem 2.2, one can
switch between Sylvester and Stein operators, setting A = Z1 and B = Z?1 which both satisfy the
conditions of Theorem 2.2 (see property 3, Proposition 1.1 [1]). The resulting expressions involve
Krylov matrices generated by f -unit-circulant matrices which are called f -circulant matrices in the
literature.
Definition 2.3 (f -circulant matrix). Given a vector v, the f-Circulant matrix, Zf (v), is defined as
follows:
?
?
v0
f vn?1 . . .
f v1
v0
...
f v2 ?
? v1
?
Zf (v) = krylov(Zf , v) = ?
..
..
? ...
?
.
. fv
n?1
vn?1
...
v1
v0
Two special cases are of interest: f = 1 corresponds to Circulant matrices, and f = ?1 corresponds to skew-Circulant matrices.
Finally, one can obtain an explicit parameterization for Toeplitz-like matrices which turns out to
involve taking sums of products of Circulant and skew-Circulant matrices.
Theorem 2.4 ([18]). If an n ? n matrix M satisfies ?Z1 ,Z?1 [M] = GHT where G =
[g1 . . . gr ], H = [h1 . . . hr ] ? Rn?r , then M can be written as:
r
M=
3
1X
Z1 (gj )Z?1 (Jhj )
2 j=1
(5)
Learning Toeplitz-like Structured Transforms
Motivated by Theorem 2.4, we propose learning parameter matrices of the form in Eqn. 5 by optimizing the displacement factors G, H. First, from the properties of displacement operators [18], it
follows that this class of matrices is very rich from a statistical modeling perspective.
Theorem 3.1 (Richness). The set of all n ? n matrices that can be written as,
M(G, H) =
r
X
Z1 (gi )Z?1 (hi )
i=1
for some G = [g1 . . . gr ], H = [h1 . . . hr ] ? Rn?r contains:
4
(6)
?
?
?
?
?
?
All n ? n Circulant and Skew-Circulant matrices for r ? 1.
All n ? n Toeplitz matrices for r ? 2.
Inverses of Toeplitz matrices for r ? 2.
All products of the form A1 . . . At for r ? 2t.
Pp
(i)
(i)
All linear combinations of the form i=1 ?i A1 . . . At where r ? 2tp.
All n ? n matrices for r = n.
where each Ai above is a Toeplitz matrix or the inverse of a Toeplitz matrix.
When we learn a parameter matrix structured as Eqn. 6 with displacement rank equal to 1 or 2,
we also search over convolutional transforms. In this sense, structured transforms with higher displacement rank generalize (one-dimensional) convolutional layers. The displacement rank provides
a knob on modeling capacity: low displacement matrices are highly structured and compact, while
high displacement matrices start to contain increasingly unstructured dense matrices.
Next, we show that associated structured transforms of the form f (x) = M(G, H)x admit fast
evaluation, and gradient computations with respect to G, H. First we recall the following wellknown result concerning the diagonalization of f -Circulant matrices.
Theorem 3.2 (Diagonalization of f -circulant matrices, Theorem 2.6.4 [18]). For any f 6= 0, let
n?1
1
2
f = [1, f n , f n , . . . f n ]T ? Cn , and Df = diag(f ). Then,
?1
Zf (v) = D?1
diag(?(f ? v))?Df
f ?
(7)
This result implies that for the special cases of f = 1 and f = ?1 corresponding to Circulant and Skew-circulant matrices respectively, the matrix-vector multiplication can be computed
in O(n log n) time via the Fast Fourier transform:
y = Z1 (v) x
= ifft (fft(v) ? fft(x))
= ifft fft(v) ? fft(x)
y = Z?1 (v)x
= ?? ? ifft (fft(? ? v) ? fft(? ? x))
(10)
= ?? ? ifft (fft(? ? v) ? fft(? ? x))
(11)
y = Z1 (v)x
T
T
y = Z?1 (v) x
where ? = [1, ?, ? 2 . . . ? n?1 ]T where ? = (?1)
1
n
(8)
(9)
= exp(i n? ), the root of negative unity.
In particular, a single matrix-vector product for Circulant and Skew-circulant matrices has the computational cost of 3 FFTs. Therefore, for matrices of the form in Eqn. 6 comprising of r products
of Circulant and Skew-Circulant matrices, naively computing a matrix-vector product for a batch
of b input vectors would take 6rb FFTs. However, this cost can be significantly lowered to that of
2(rb + r + b) FFTs by making the following observation:
!
r
r
X
X
?1
?1
?
? ? diag(?(? ? hi ))X
Y=
Z1 (gi )Z?1 (hi )X = ?
diag(?gi ) ? diag(?)
i=1
i=1
? = ? diag(?) X. Here, (1) The FFT of the parameters, ?gi and ?(? ? hi ) is computed
where X
once and shared across multiple input vectors in the minibatch, (2) The (scaled) FFT of the input,
(? diag(?) X) is computed once and shared across the sum in Eqn. 6, and (3) The final inverse FFT
is also shared. Thus, the following result is immediate.
Theorem
3.3 (Fast Multiplication). Given an n ? b matrix X, the matrix-matrix product, Y =
Pr
( i=1 Z1 (gi )Z?1 (hi )) X, can be computed at the cost of 2(rb + b + r) FFTs, using the following
algorithm.
1
Set ? = [1, ?, ? 2 . . . ? n?1 ]T where ? = (?1) n = exp(i n? )
Initialize Y = 0n?b
? = fft(diag(?)X)
Set X
? = fft(G) = [?
? = fft(diag(?)H) = [h
?1 . . . h
?r ]
Set G
g1 . . . g
?r ] and H
for i = 1 to r
? i )X
?
?
? U = Z?1 (hi )X = diag(?)ifft
diag(h
? V = diag(?
gi ) fft(U)
5
? Y =Y+V
Set Y = ifft (Y)
Return Y
We now show that when our structured transforms are embedded in a deep learning pipeline, the gradient computation can also be accelerated. First, we note that the Jacobian structure of f -Circulant
matrices has the following pleasing form.
Proposition 3.4 (Jacobian of f -circulant transforms). The Jacobian of the map f (x, v) = Zf (v)x
with respect to the parameters v is Zf (x).
This leads to the following expressions for the Jacobians of the structured transforms of interest.
Proposition 3.5 (Jacobians with respect to displacement generators G, H). Consider parameterized
vector-valued transforms of the form,
f (x, G, H) =
r
X
Z1 (gi )Z?1 (hi )x
(12)
i=1
The Jacobians of f with respect to the j th column of G, H, i.e. gj , hj , at x, are as follows:
Jgj f |x
Jhj f |x
= Z1 (Z?1 (hj )x)
= Z1 (gj )Z?1 (x)
(13)
(14)
Pb
Based on Eqns. 13, 14 the gradient over a minibatch of size b requires computing, i [Jgj f |xi ]T ?i
Pb
and i=1 [Jhj f |xi ]T ?i where, {xi }bi=1 and {?i }bi=1 are batches of forward and backward inputs
during backpropagation. These can be naively computed with 6rb FFTs. However, as before, by
sharing FFT of the forward and backward inputs, and the fft of the parameters, this can be lowered
to (4br + 4r + 2b) FFTs. Below we give matricized implementation.
Proposition 3.6 (Fast Gradients). Let X, Z be n ? b matrices whose columns are forward and
backward inputs respectively of minibatch size b during backpropagation. The gradient with respect
to gj , hj can be computed at the cost of (4br + 4r + 2b) FFTs as follows:
? = fft(Z), X
? = fft(diag(?)X), G
? = fft(G), H
? = fft(diag(?)H)
Compute Z
Gradient wrt
gj (2b + 1 FFTs)
?
?
?
? return ifft fft diag(?
? )ifft diag(hj )X ? Z 1b
Gradient wrt hj (2b +h
1 FFTs)
i
? ? fft diag(?)ifft diag(?
?
? ifft X
gi )Z
1b
? return diag (?)
Rectangular Transforms: Variants of Theorems 2.2, 2.4 exist for rectangular transforms, see [19].
Alternatively, for m < n we can subsample the outputs of square n ? n transforms at the cost of
extra computations, while for m > n, assuming m is a multiple of n, we can stack m
n output vectors
of square n ? n transforms.
4
Empirical Studies
Acceleration with Structured Transforms: In Figure 3, we analyze the speedup obtained in practice using n ? n Circulant and Toeplitz-like matrices relative to a dense unstructured n ? n matrix
(fully connected layer) as a function of displacement rank and dimension n. Three scenarios are
considered: inference speed per test instance, training speed as implicitly dictated by forward passes
on a minibatch, and gradient computations on a minibatch. Factors such as differences in cache
optimization, SIMD vectorization and multithreading between Level-2 BLAS (matrix-vector multiplication), Level-3 BLAS (matrix-matrix multiplication) and FFT implementations (we use FFTW:
http://www.fftw.org) influence the speedup observed in practice. Speedup gains start to
show for dimensions as small as 512 for Circulant matrices. The gains become dramatic with acceleration of the order of 10 to 100 times for several thousand dimensions, even for higher displacement
rank Toeplitz-like transforms.
6
Speedup (unstructured / structured)
Inference
Gradient (minibatch 100)
Forward Pass (minibatch 100)
2
10
1
10
n=512
n=1024
n=2048
n=4096
n=8192
n=16384
n=32768
2
10
1
10
2
10
1
10
0
10
0
0
10
10
?1
10
?1
?1
10
10
0
10
20
30
0
Displacement Rank
10
20
30
Displacement Rank
0
10
20
30
Displacement Rank
Figure 3: Acceleration with n ? n Structured Transforms (6-core 32-GB Intel(R) Xeon(R) machine; random
datasets). In the plot, displacement rank = 0 corresponds to a Circulant Transform.
Effectiveness for learning compact Neural Networks: Next, we compare the proposed structured
transforms with several existing techniques for learning compact feedforward neural networks. We
exactly replicate the experimental setting from the recent paper on H ASHED N ETS [3] which uses
several image classification datasets first prepared by [15]. MNIST is the original 10-class MNIST
digit classification dataset with 60000 training examples and 10000 test examples. BG - IMG - ROT
refers to a challenging version of MNIST where digits are randomly rotated and placed against a
random black and white background. RECT (1200 training images, 50000 test images) and CONVEX
(8000 training images, 50000 test images) are 2-class binary image datasets where the task is to
distinguish between tall and wide rectangles, and whether the ?on? pixels form a convex region or
not, respectively. In all datasets, input images are of size 28 ? 28. Several existing techniques are
benchmarked in [3] for compressing a reference single hidden layer model with 1000 hidden nodes.
?
?
?
?
Random Edge Removal (RER) [5] where a fraction of weights are randomly frozen to be zero-valued.
Low-rank Decomposition (LRD) [9]
Neural Network (NN) where the hidden layer size is reduced to satisfy a parameter budget.
Dark Knowledge (DK) [11]: A small neural network is trained with respect to both the original
labeled data, as well as soft targets generated by a full uncompressed neural network.
? HashedNets (HN) [3]: This approach uses a low-cost hash function to randomly group connection
weights which share the same value.
? HashedNets with Dark Knowledge (HNDK ): Trains a HashedNet with respect to both the original
labeled data, as well as soft targets generated by a full uncompressed neural network.
We consider learning models of comparable size with the weights in the hidden layer structured as
a Toeplitz-like matrix. We also compare with the FASTFOOD approach of [25, 16] where the weight
matrix is a product of diagonal parameter matrices and fixed permutation and Walsh-Hadamard
matrices, also admitting O(n log n) multiplication and gradient computation time. The C IRCULANT
Neural Network approach proposed in [4] is a special case of our framework (Theorem 3.1).
Results in Table 1 show that Toeplitz-like structured transforms outperform all competing approaches on all datasets, sometimes by a very significant margin, with similar or drastically lesser
number of parameters. It should also be noted that while random weight tying in H ASHED N ETS
reduces the number of parameters, the lack of structure in the resulting weight matrix cannot be
exploited for FFT-like O(n log n) multiplication time. We note in passing that for H ASHED N ETS
weight matrices whose entries assume only one of B distinct values, the Mailman algorithm [17]
can be used for faster matrix-vector multiplication, with complexity O(n2 log(B)/(log n)), which
still is much slower than matrix-vector multiplication time for Toeplitz-like matrices. Also note that
the distillation ideas of [11] are complementary to our approach and can further improve our results.
MNIST
BG - IMG - ROT
CONVEX
RECT
RER
15.03
12406
73.17
12406
37.22
12281
18.23
12281
LRD
28.99
12406
80.63
12406
39.93
12281
23.67
12281
NN
6.28
12406
79.03
12406
34.37
12281
5.68
12281
DK
6.32
12406
77.40
12406
31.85
12281
5.78
12281
HN
2.79
12406
59.20
12406
31.77
12281
3.67
12281
HNDK
2.65
12406
58.25
12406
30.43
12281
3.37
12281
Fastfood
6.61
10202
68.4
10202
33.92
3922
21.45
3922
C IRCULANT
3.12
8634
62.11
8634
24.76
2352
2.91
2352
T OEPLITZ (1)
2.79
9418
57.66
9418
17.43
3138
0.70
3138
T OEPLITZ (2)
2.54
10986
55.21
10986
16.18
4706
0.89
4706
Table 1: Error rate and number of parameters (italicized). Best results in blue.
7
T OEPLITZ (3)
2.09
12554
53.94
12554
20.23
6774
0.66
6774
Mobile Speech Recognition: We now demonstrate the techniques developed in this paper on a
speech recognition application meant for mobile deployment. Specifically, we consider a keyword
spotting (KWS) task, where a deep neural network is trained to detect a specific phrase, such as ?Ok
Google? [2]. The data used for these experiments consists of 10?15K utterances of selected phrases
(such as ?play-music?, ?decline-call?), and a larger set of 396K utterances to serve as negative
training examples. The utterances were randomly split into training, development and evaluation
sets in the ratio of 80 : 5 : 15. We created a noisy evaluation set by artificially adding babble-type
cafeteria noise at 0dB SNR to the ?play-music? clean data set. We will refer to this noisy data
set as CAFE 0. We refer the reader to [23] for more details about the datasets. We consider the
task of shrinking a large model for this task whose architecture is as follows [23]: the input layer
consists of 40 dimensional log-mel filterbanks, stacked with a temporal context of 32, to produce
an input of 32 ? 40 whose dimensions are in time and frequency respectively. This input is fed to
a convolutional layer with filter size 32 ? 8, frequency stride 4 and 186 filters. The output of the
convolutional layer is of size 9 ? 186 = 1674. The output of this layer is fed to a 1674 ? 1674 fully
connected layer, followed by a softmax layer for predicting 4 classes constituting the phrase ?playmusic?. The full training set contains about 90 million samples. We use asynchronous distributed
stochastic gradient descent (SGD) in a parameter server framework [8], with 25 worker nodes for
optimizing various models. The global learning rate is set to 0.002, while our structured transform
layers use a layer-specific learning rate of 0.0005; both are decayed by an exponential factor of 0.1.
play?music:accuracy
play?music:cafe0
fullyconnected (2.8M)
0.14
reference (122K)
lowrank4 (13.4K)
0.13
lowrank8 (26.8K)
98.4
98.2
lowrank16 (53.6K)
98
lowrank32 (107K)
0.12
97.8
fastfood (5022)
0.11
Accuracy (%)
False Rejects
circulant (1674)
toeplitz?disprank1 (3348)
toeplitz?disprank2 (6696)
0.1
toeplitz?disprank10 (33.5K)
0.09
97.6
fullyconnected (2.8M)
reference (122K)
97.4
lowrank4 (13.4K)
97.2
lowrank8 (26.8K)
lowrank16 (53.6K)
0.08
97
lowrank32 (107K)
0.07
circulant (1674)
fastfood (5022)
96.8
0.06
toeplitz?disprank1 (3348)
96.6
toeplitz?disprank2 (6696)
toeplitz?disprank10 (33.5K)
0.05
96.4
0.5
1
1.5
2
2.5
3
5
False Alarms per hour
10
15
20
25
Time (hours)
30
35
40
Figure 4: ?play-music? detection performance: (left) End-to-end keyword spotting performance in terms of
false reject (FR) rate per false alarm (FA) rate (lower is better) (right): Classification accuracy as a function of
training time. Displacement rank is in parenthesis for Toeplitz-like models.
Results with 11 different models are reported in Figure 4 (left) including the state of the art keyword
spotting model developed in [23]. At an operating point of 1 False Alarm per hour, the following observations can be made: With just 3348 parameters, a displacement rank=1 T OEPLITZ - LIKE
structured transform outperforms a standard low-rank bottleneck model with rank=16 containing 16
times more parameters; it also lowers false reject rates from 10.2% with C IRCULANT and 14.2%
with FASTFOOD transforms to about 8.2%. With displacement rank 10, the false reject rate is 6.2%,
in comparison to 6.8% with the 3 times larger rank=32 standard low-rank bottleneck model. Our best
Toeplitz-like model comes within 0.4% of the performance of the 80-times larger fully-connected
and 3.6 times larger reference [23] models. In terms of raw classification accuracy as a function of
training time, Figure 4 (right) shows that our models (with displacement ranks 1, 2 and 10) come
within 0.2% accuracy of the fully-connected and reference models, and easily provide much better accuracy-time tradeoffs in comparison to standard low-rank bottleneck models, Circulant and
Fastfood baselines. The conclusions are similar for other noise conditions (see supplementary material [1]).
5
Perspective
We have introduced and shown the effectiveness of new notions of parsimony rooted in the theory of
structured matrices. Our proposal can be extended to various other structured matrix classes, including Block and multi-level Toeplitz-like [12] matrices related to multidimensional convolution [21].
We hope that such ideas might lead to new generalizations of Convolutional Neural Networks.
Acknowledgements: We thank Yu-hsin Chen, Carolina Parada, Rohit Prabhavalkar, Alex Gruenstein, Rajat Monga, Baris Sumengen, Kilian Weinberger and Wenlin Chen for their contributions.
8
References
[1] Supplementary material: Structured transforms for small footprint deep learning. 2015.
http://vikas.sindhwani.org/st_supplementary.pdf.
[2] G. Chen, C. Parada, and G. Heigold. Small-footprint keyword spotting using deep neural
networks. In ICASSP, 2014.
[3] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks
with the hashing trick. In ICML, 2015.
[4] Y. Cheng, F. X. Xu, R. S. Feris, S. Kumar, A. Choudhary, and S.-F. Chang. Fast neural networks
with circulant projections. In arXiv:1502.03436, 2015.
[5] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and Schmidhuber. High-performance
neural networks for visual object classification. In arXiv:1102.0183, 2011.
[6] M. D. Collins and P. Kohli. Memory-bounded deep convolutional neural networks. In ICASSP,
2013.
[7] M. Courbariaux, J.-P. David, and Y. Bengio. Low-precision storage for deep learning. In ICLR,
2015.
[8] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato,
A. Senior, P. Tucker, K. Yang, , and A. Y. Ng. Large-scale distributed deep networks. In NIPS,
2012.
[9] M. Denil, B. Shakibi, L. Dinh, and N. de Freitas. Predicting parameters in deep learning. In
NIPS, 2013.
[10] R. M. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2005.
[11] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS
workshop, 2014.
[12] T. Kailath and J. Chun. Generalized displacement structure for block toeplitz, toeplitz block
and toeplitz-derived matrices. SIAM J. Matrix Anal. Appl., 15, 1994.
[13] T. Kailath, S. Y. Kung, and M. Morf. Displacement ranks of matrices and linear equations.
Journal of Mathematical Analysis and Applications, pages 395?407, 1979.
[14] T. Kailath and A. H. Sayed. Displacement structure: Theory and applications. SIAM Review,
37, 1995.
[15] H. Larochelle, D. Erhan, A. C. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation
of deep architectures on problems with many factors of variation. In ICML, 2007.
[16] Q. Le, T. Sarlos, and A. Smola. Fastfood - approximating kernel expansions in loglinear time.
In ICML, 2013.
[17] E. Liberty and S. W. Zucker. The mailman algorithm: a note on matrix vector multiplication.
In Information Processing Letters, 2009.
[18] V. Pan. Structured Matrices and Polynomials: Unified Superfast Algorithms. Springer, 2001.
[19] V. Pan. Inversion of displacement operators. SIAM Journal of Matrix Analysis and Applications, pages 660-677, 2003.
[20] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[21] M. V. Rakhuba and I. V. Oseledets. Fast multidimensional convolution in low-rank tensor
formats via cross approximation. SIAM J. Sci. Comput., 37, 2015.
[22] T. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In ICASSP,
2013.
[23] T. Sainath and C. Parada. Convolutional neural networks for small-footprint keyword spotting.
In Proc. Interspeech, 2015.
[24] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. In
NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[25] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep fried
convnets. In arXiv:1412.7149, 2015.
Using Prior Knowledge in a NNPDA to Learn
Context-Free Languages
Sreerupa Das
Dept. of Comp. Sc. &
Inst. of Cognitive Sc.
University of Colorado
Boulder, CO 80309

C. Lee Giles
NEC Research Inst.
4 Independence Way
Princeton, NJ 08540

Guo-Zheng Sun
Inst. for Adv. Comp. Studies
University of Maryland
College Park, MD 20742
Abstract
Although considerable interest has been shown in language inference and
automata induction using recurrent neural networks, success of these
models has mostly been limited to regular languages. We have previously demonstrated that Neural Network Pushdown Automaton (NNPDA)
model is capable of learning deterministic context-free languages (e.g.,
anb n and parenthesis languages) from examples. However, the learning
task is computationally intensive. In this paper we discus some ways in
which a priori knowledge about the task and data could be used for efficient
learning. We also observe that such knowledge is often an experimental
prerequisite for learning nontrivial languages (e.g., a^n b^n cb^m a^m).
1 INTRODUCTION
Language inference and automata induction using recurrent neural networks have
gained considerable interest in recent years. Nevertheless, success of these models has been mostly limited to regular languages. Additional information in the form of
a priori knowledge has proved important and at times necessary for learning complex languages (Abu-Mostafa, 1990; Al-Mashouq and Reed, 1991; Omlin and Giles,
1992; Towell, 1990). These works have demonstrated that partial information incorporated
in a connectionist model guides the learning process through constraints for efficient
learning and better generalization.
We have previously shown that the NNPDA model can learn Deterministic Context
[Figure 1 appears here: schematic of the NNPDA. Recoverable labels: State Neurons (State(t) to State(t+1)), Input Neurons (Input(t)), Read Neurons (Top-of-stack(t)), Action neuron (push / pop or no-op), higher-order weights, and an external stack holding stack alphabets.]
Figure 1: The figure shows the architecture of a third-order NNPDA. Each weight
relates the product of Input(t), State(t) and Top-of-Stack information to the
State(t+1). Depending on the activation of the Action Neuron, stack action
(namely, push, pop or no operation) is taken and the Top-of-Stack (i.e. value
of Read Neurons) is updated.
Free Languages (DCFLs) from a finite set of examples. However, the learning task
requires a considerable amount of time and computational resources. In this paper
we discuss methods in which a priori knowledge may be incorporated in a Neural
Network Pushdown Automaton (NNPDA) described in (Das, Giles and Sun, 1992;
Giles et al., 1990; Sun et al., 1990).
2 THE NEURAL NETWORK PUSHDOWN AUTOMATA
2.1 ARCHITECTURE
The description of the network architecture is necessarily brief; for further details
see the references above. The network consists of a set of recurrent units, called
state neurons and an external stack memory. One state neuron is designated as the
output neuron. The state neurons get input (at every time step) from three sources:
from their own recurrent connections, from the input neurons and from the read
neurons. The input neurons register external inputs which consist of strings of
characters presented one at a time. The read neurons keep track of the symbol(s)
on top of the stack. One non-recurrent state neuron, called the action neuron,
indicates the stack action (push, pop or no-op) at any instance. The architecture
is shown in Figure 1.
The stack used in this model is continuous. Unlike an usual discrete stack where an
element is either present or absent, elements in a continuous stack may be present
in varying degrees (values between [0, 1]). A continuous stack is essential in order
to permit the use of a continuous optimization method during learning. The stack
is manipulated by the continuous valued action neuron. A detailed discussion on
the operations may be found in (Das, Giles and Sun, 1992).
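As a rough illustration of such a continuous stack, here is a sketch under our own assumptions;
the exact push/pop semantics are given in the referenced paper, and all class and method names
are ours:

```python
import numpy as np

class ContinuousStack:
    """Stack whose elements carry continuous 'thicknesses' in [0, 1]."""

    def __init__(self):
        self.items = []  # list of (symbol_vector, thickness)

    def push(self, symbol, strength):
        self.items.append((np.asarray(symbol, dtype=float), float(strength)))

    def pop(self, strength):
        # Remove a total thickness of `strength` from the top downward.
        remaining = strength
        while remaining > 1e-9 and self.items:
            sym, t = self.items.pop()
            if t > remaining:
                self.items.append((sym, t - remaining))
                remaining = 0.0
            else:
                remaining -= t

    def read(self):
        # Top-of-stack: thickness-weighted blend of the top unit of depth.
        depth, out = 0.0, 0.0
        for sym, t in reversed(self.items):
            w = min(t, 1.0 - depth)
            out = out + w * sym
            depth += w
            if depth >= 1.0:
                break
        return out
```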
2.2 LEARNABLE CLASS OF LANGUAGES
The class of languages learnable by the NNPDA is a proper subset of the deterministic
context-free languages. A formal description of a Pushdown Automaton (PDA)
requires two distinct sets of symbols - one is the input symbol set and the other
is the stack symbol set¹.
the following ways: First, we use the same set of symbols for the input and the
stack. Second, when a push operation is performed the symbol pushed on the stack
is the one that is available as the current input. Third, no epsilon transitions are
allowed in the NNPDA. An epsilon transition is one that performs a state transition and
stack action without reading in a new input symbol. Unlike a deterministic finite
state automaton, a deterministic PDA can make epsilon transitions under certain
restrictions¹. Although these simplifications reduce the language class learnable by
the NNPDA, the languages in this class nevertheless retain essential properties of CFLs
and are therefore more complex than any regular language.
2.3 TRAINING
The activation of the state neurons s at time step t + 1 may be formulated as follows
(we will only consider third-order NNPDA in this paper):

    s_i(t+1) = g( Σ_{j,k,l} W_{ijkl} s_j(t) i_k(t) r_l(t) )        (1)

where g(x) = 1/(1 + exp(-x)), i is the activation of the input neurons, r is
the activation of the read neurons, and W is the weight matrix of the network. We
use a localized representation for the input and the read symbols. During training,
input sequences are presented one at a time and activations are allowed to propagate
until the end of the string is reached. Once the end is reached the activation of the
output neuron is matched with the target (which is 1.0 for a positive string and 0.0 for
a negative string). The learning rule used in the NNPDA is a significantly enhanced
extension of Real Time Recurrent Learning (Williams and Zipser, 1989).
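As a concrete illustration of the third-order update in Equation (1), here is a minimal NumPy
sketch (dimensions and names are ours, chosen purely for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nnpda_step(W, s, i, r):
    """One third-order state update: s_n(t+1) = g(sum_jkl W[n,j,k,l] s_j i_k r_l)."""
    return sigmoid(np.einsum('njkl,j,k,l->n', W, s, i, r))

n_state, n_input, n_read = 4, 3, 3
W = 0.1 * np.random.randn(n_state, n_state, n_input, n_read)
s = np.zeros(n_state)     # initial state
i = np.eye(n_input)[0]    # one-hot (localized) input symbol
r = np.eye(n_read)[0]     # one-hot top-of-stack symbol
s = nnpda_step(W, s, i, r)
```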
2.4 OBJECTIVE FUNCTION
The objective function used to train the network consists of two error terms: one for
positive strings and the other for negative strings. For positive strings we require
(a) the NNPDA must reach a final state and (b) the stack must be empty. This
criterion can be reached by minimizing the error function:

    Error = ½ ((1 − s_o(l)) + L(l))²        (2)

where s_o(l) is the activation of the output neuron and L(l) is the stack length, after
a string of length l has been presented as input a character at a time. For negative
¹ For details refer to (Hopcroft, 1979).
Avg. total presentations   parenthesis          postfix             a^n b^n
                           w IL     w/o IL      w IL     w/o IL     w IL      w/o IL
# of strings               2671     10628       8326     15912      108200    >200000
# of characters            5644     29552       31171    82002      358750    >700000
Table 1: Effect of Incremental Learning (IL) is displayed in this table. The number
of strings and characters required for learning the languages are provided here.
                  parenthesis         a^n b^n              a^n b^n cb^m a^m     a^{n+m} b^n c^m
                  w SSP    w/o SSP    w SSP     w/o SSP    w SSP     w/o SSP    w SSP     w/o SSP
epochs            50-80    150        150-250   150-250    50-80     ***        150-250   ***
generalization    100%     96.02%     100%      98.97%     100%      ***        100%      ***
number of units   1+1      1+1        1+1       2          2         ***        1+1       ***
Table 2: This table provides some statistics on epochs, generalization and number
of hidden units required for learning with and without selective string presentation
(SSP).
strings, the error function is modified as:

    Error = s_o(l) − L(l)   if (s_o(l) − L(l)) > 0.0
    Error = 0               otherwise                        (3)

Equation (3) reflects the criterion that, for a negative pattern, we require either the
final state s_o(l) = 0.0 or the stack length L(l) to be greater than 1.0 (only when
s_o(l) = 1.0 and the stack length L(l) is close to zero is the error high).
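A minimal sketch of the two error terms, assuming the forms in Equations (2) and (3) above
(s_o is the output activation after the full string, L the final stack length; names are ours):

```python
def positive_string_error(s_o, L):
    # Equation (2): drive the output toward 1 and the stack length toward 0.
    return 0.5 * ((1.0 - s_o) + L) ** 2

def negative_string_error(s_o, L):
    # Equation (3): penalize only when the net both accepts (s_o near 1)
    # and leaves an almost-empty stack (L near 0).
    return max(0.0, s_o - L)
```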
3 BUILDING IN PRIOR KNOWLEDGE
In practical inference tasks it may be possible to obtain prior knowledge about the
problem domain. In such cases it often helps to build in knowledge into the system
under study. There could be at least two different types of knowledge available
to a model: (a) knowledge that depends on the training data with absolutely no
knowledge about the automaton, and (b) partial knowledge about the automaton
being inferred. Some of the ways in which knowledge can be provided to the model are
discussed below.
3.1 KNOWLEDGE FROM THE DATA
3.1.1 Incremental Learning
Incremental Learning has been suggested by many (Elman, 1991; Giles et al., 1990;
Sun et al., 1990), where the training examples are presented in order of increasing
[Figure 2 appears here: training error versus epochs, with SSP and without SSP.]
Figure 2: Faster convergence using selective string presentation (SSP) for parenthesis language task.
length. This model of learning starts with a training set containing short simple
strings. Longer strings are added to the training set as learning proceeds.
We believe that incremental learning is very useful when (a) the data presented
contains structure, and (b) the strings learned earlier embody simpler versions of
the task being learned. Both these conditions are valid for context-free languages.
Table 1 provides some results obtained when incremental learning was used. The
figures are averages over several pairs of simulations, each of which was initialized
with the same initial random weights.
3.1.2 Selective Input Presentation
Our training data contained both positive and negative examples. One problem
with training on incorrect strings is that, once a symbol in the string is reached
that makes it negative, no further information is gained by processing the rest of
the string. For example, the fifth a in the string aaaaba... makes the string a
negative example of the language a^n b^n, irrespective of what follows it. In order to
incorporate this idea we have introduced the concept of a dead state.
During training, we assume that there is a teacher or an oracle who has knowledge
of the grammar and is able to identify the first (leftmost) occurrence of incorrect
sequence of symbols in a negative string. When such a point is reached in the input
string, further processing of the string is stopped and the network is trained so that
one designated state neuron called the dead state neuron is active. To accommodate
the idea of a dead state in the learning rule, the following change is made: if the
network is being trained on negative strings that end in a dead state then the
length L(l) in the error function in equation (2) is ignored and it simply becomes
[Figure 3 appears here: training error versus number of strings for runs with no initial weights (IW) set and with 1, 2, or 3 initial weights set.]
Figure 3: Learning curves when none, one or more initial weights (IW) were set for
the postfix language learning task.
Error = ½(1 − s_dead(l))². Since such strings have a negative subsequence, they
cannot be a prefix to any positive string. Therefore at this point we do not care
about the length of the stack. For strings that are either positive or negative but
do not go to a dead state (an example would be a prefix of a positive string), the
objective function remains the same as described earlier in Equations (2) and (3).
Such additional information provided during training resulted in efficient learning,
helped in learning of exact pushdown automata and led to better generalization for
the trained network. Information in this form was often a prerequisite for successfully learning certain languages. Figure 2 shows a typical plot of improvement in
learning when such knowledge is used. Table 2 shows improvements in the statistics
for generalization, number of units needed and number of epochs required for learning. The numbers in the tables were averages over several simulations; changing
the initial conditions resulted in values of similar orders of magnitude.
3.2 KNOWLEDGE ABOUT THE TASK
3.2.1 Knowledge About The Target PDA's Dynamics
One way in which knowledge about the target PDA can be built into a system is
by biasing the initial conditions of the network. This may be done by assigning
predetermined initial values to a selected set of weights (or biases). For example a
third order NNPDA has a dynamics that maps well onto the theoretical model of a
PDA. Both allow a three to two mapping of a similar kind. This is because in the
third order NNPDA, the product of the activations of the input neurons, the read
neurons and the state neurons determine the next state and the next action to be
[Figure 4 appears here: inferred PDA transition diagrams. Panels: (a) PDA for the parenthesis language; (b) PDA for a^n b^n.]
Figure 4: The figure shows some of the PDAs inferred by the NNPDA. In the figure
the nodes in the graph represent states inferred by the NNPDA and the numbers in
"[]" indicate the state representations. Every transition is indicated by an arrow
and is labeled as "x/y/z" where "x" corresponds to the current input symbol, "y"
corresponds to the symbol on top of the stack and "z" corresponds to the action
taken.
taken. It may be possible to determine some of the weights in a third-order network
if certain information about the automaton is known. Typical improvement in
learning is shown in Figure 3 for a postfix language learning task.
3.2.2 Using Structured Examples
Structured examples from a grammar are a set of strings where the order of letter
generation is indicated by brackets. An example would be the string ((ab)c) generated by the rules S → Xc; X → ab. Under the current dynamics and limitations
of the model, this information could be interpreted as providing the stack actions
(push and pop) to the NNPDA. Learning the palindrome language is a hard task
because it necessitates remembering a precise history over a long period of time.
The NNPDA was able to learn the palindrome language for two symbols when
structured examples were presented.
4 AUTOMATON EXTRACTION FROM NNPDA
Once the network performs well on the training set, the transition rules in the
inferred PDA can then be deduced. Since the languages learned by the NNPDA so
far corresponded to PDAs with few states, the state representations in the induced
PDA could be inferred by looking at the state neuron activations when presented
with all possible character sequences. For larger PDAs clustering techniques could
be used to infer the state representations. Various clustering techniques for similar
tasks have been discussed in (Das and Das, 1992; Giles et al., 1992). Figure 4 shows
some of the PDAs inferred by the NNPDA.
5 CONCLUSION
This paper has described some of the ways in which prior knowledge could be used
to learn DCFGs in an NNPDA. Such knowledge is valuable to the learning process
in two ways. It may reduce the solution space, and as a consequence may speed
up the learning process. Having the right restrictions on a given representation can
make learning simple: which reconfirms an old truism in Artificial Intelligence.
References
Y.S. Abu-Mostafa. (1990) Learning from hints in neural networks.
Journal of
Complexity, 6:192-198.
K.A. Al-Mashouq and I.S. Reed. (1991) Including hints in training neural networks.
Neural Computation, 3(3):418-427 .
S. Das and R. Das. (1992) Induction of discrete state-machine by stabilizing a continuous recurrent network using clustering. To appear in CSI Journal of Computer
Science and Informatics. Special Issue on Neural Computing.
S. Das, C.L. Giles, and G.Z. Sun. (1992) Learning context free grammars: capabilities and limitations of neural network with an external stack memory. Proc of
the Fourteenth Annual Conf of the Cognitive Science Society, pp. 791-795. Morgan
Kaufmann, San Mateo, Ca.
J.L. Elman. (1991) Incremental learning, or the importance of starting small. CRL
Tech Report 9101, Center for Research in Language, UCSD, La Jolla, CA.
C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee and D. Chen, (1990) Higher Order
Recurrent Networks & Grammatical Inference, Advances in Neural Information
Processing Systems 2, pp. 380-387, ed. D.S. Touretzky, Morgan Kaufmann, San
Mateo, CA.
C.L. Giles, C.B. Miller, H.H. Chen, G.Z. Sun, and Y.C. Lee. (1992) Learning
and extracting finite state automata with second-order recurrent neural networks.
Neural Computation, 4(3):393-405.
J.E. Hopcroft and J.D. Ullman. (1979) Introduction to Automata Theory, Languages and Computation. Addison-Wesley, Reading, MA.
C.W. Omlin and C.L. Giles. (1992) Training second-order recurrent neural networks
using hints. Proceedings of the Ninth Int Conf on Machine Learning, pp. 363-368.
D. Sleeman and P. Edwards (eds). Morgan Kaufmann, San Mateo, Ca.
G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee and D. Chen. (1991) Neural networks
with external memory stack that learn context-free grammars from examples. Proc
of the Conf on Information Science and Systems, Princeton U., Vol. II, pp. 649-653.
G.G. Towell, J.W. Shavlik and M.O Noordewier. (1990) Refinement of approximately correct domain theories by knowledge-based neural-networks. In Proc of the
Eighth National Conf on Artificial Intelligence, Boston, MA. pp. 861.
R.J. Williams and D. Zipser. (1989) A learning algorithm for continually running
fully recurrent neural networks. Neural Computation 1(2):270-280.
Equilibrated adaptive learning rates for non-convex
optimization
Harm de Vries¹
Université de Montréal
[email protected]

Yann N. Dauphin¹
Université de Montréal
[email protected]

Yoshua Bengio
Université de Montréal
[email protected]
Abstract
Parameter-specific adaptive learning rate methods are computationally efficient
ways to reduce the ill-conditioning problems encountered when training large
deep networks. Following recent work that strongly suggests that most of the
critical points encountered when training such networks are saddle points, we find
how considering the presence of negative eigenvalues of the Hessian could help
us design better suited adaptive learning rate schemes. We show that the popular
Jacobi preconditioner has undesirable behavior in the presence of both positive
and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex
problems. We introduce a novel adaptive learning rate scheme, called ESGD,
based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always
clearly improving over plain stochastic gradient descent.
1 Introduction
One of the challenging aspects of deep learning is the optimization of the training criterion over millions of parameters: the difficulty comes from both the size of these neural networks and because the
training objective is non-convex in the parameters. Stochastic gradient descent (SGD) has remained
the method of choice for most practitioners of neural networks since the 80's, in spite of a rich literature in numerical optimization. Although it is well-known that first-order methods considerably
slow down when the objective function is ill-conditioned, it remains unclear how to best exploit
second-order structure when training deep networks. Because of the large number of parameters,
storing the full Hessian or even a low-rank approximation is not practical, making parameter-specific
learning rates, i.e. diagonal preconditioners, one of the viable alternatives. One of the open questions
is how to set the learning rate for SGD adaptively, both over time and for different parameters, and
several methods have been proposed (see e.g. Schaul et al. (2013) and references therein).
On the other hand, recent work (Dauphin et al., 2014; Choromanska et al., 2014) has brought theoretical and empirical evidence suggesting that local minima are with high probability not the main
obstacle to optimizing large and deep neural networks, contrary to what was previously believed:
instead, saddle points are the most prevalent critical points on the optimization path (except when
we approach the value of the global minimum). These saddle points can considerably slow down
training, mostly because the objective function tends to be ill-conditioned in the neighborhood of
¹ Denotes first authors
Figure 1: Contour lines of a saddle point (black point) problem for (a) original function and (b) transformed function (by equilibration preconditioner). Gradient descent slowly escapes the saddle point
in (a) because it oscillates along the high positive curvature direction. For the better conditioned
function (b) these oscillations are reduced, and gradient descent makes faster progress.
these saddle points. This raises the question: can we take advantage of the saddle structure to design
good and computationally efficient preconditioners?
In this paper, we bring these threads together. We first study diagonal preconditioners for saddle
point problems, and find that the popular Jacobi preconditioner has unsuitable behavior in the presence of both positive and negative curvature. Instead, we propose to use the so-called equilibration
preconditioner and provide new theoretical justifications for its use in Section 4. We provide specific
arguments why equilibration is better suited to non-convex optimization problems than the Jacobi
preconditioner and empirically demonstrate this for small neural networks in Section 5. Using this
new insight, we propose a new adaptive learning rate schedule for SGD, called ESGD, that is based
on the equilibration preconditioner. In Section 7 we evaluate the proposed method on two deep autoencoder benchmarks. The results, presented in Section 8, confirm that ESGD performs as well or
better than RMSProp. In addition, we empirically find that the update direction of RMSProp is very
similar to equilibrated update directions, which might explain its success in training deep neural
networks.
2
Preconditioning
It is well-known that gradient descent makes slow progress when the curvature of the loss function
is very different in separate directions. The negative gradient will be mostly pointing in directions of
high curvature, and a small enough learning rate has to be chosen in order to avoid divergence in the
largest positive curvature direction. As a consequence, the gradient step makes very little progress in
small curvature directions, leading to the slow convergence often observed with first-order methods.
Preconditioning can be thought of as a geometric solution to the problem of pathological curvature.
It aims to locally transform the optimization landscape so that its curvature is equal in all directions.
This is illustrated in Figure 1 for a two-dimensional saddle point problem using the equilibration
preconditioner (Section 4). The gradient descent method slowly escapes the saddle point due to the
typical oscillations along the high positive curvature direction. By transforming the function to be
more equally curved, it is possible for gradient descent to move much faster.
More formally, we are interested in minimizing a function f with parameters θ ∈ R^N. We introduce
preconditioning by a linear change of variables θ̂ = D^{1/2} θ with a non-singular matrix D^{1/2}. We use
this change of variables to define a new function f̂, parameterized by θ̂, that is equivalent to the
original function f:

    f̂(θ̂) = f(D^{-1/2} θ̂) = f(θ)        (1)

The gradient and the Hessian of this new function f̂ are (by the chain rule):

    ∇f̂(θ̂) = D^{-1/2} ∇f(θ)        (2)

    ∇²f̂(θ̂) = (D^{-1/2})ᵀ H D^{-1/2}  with  H = ∇²f(θ)        (3)

A gradient descent iteration θ̂_t = θ̂_{t-1} − η ∇f̂(θ̂) for the transformed function corresponds to

    θ_t = θ_{t-1} − η D^{-1} ∇f(θ)        (4)

for the original parameter θ. In other words, by left-multiplying the original gradient with a positive
definite matrix D^{-1}, we effectively apply gradient descent to the problem after a change of variables
θ̂ = D^{1/2} θ. The curvature of this transformed function is given by the Hessian (D^{-1/2})ᵀ H D^{-1/2}, and
we aim to seek a preconditioning matrix D such that the new Hessian has equal curvature in all
directions. One way to assess the success of D in doing so is to compute the relative difference
between the biggest and smallest curvature direction, which is measured by the condition number of
the Hessian:

    κ(H) = σ_max(H) / σ_min(H)        (5)

where σ_max(H), σ_min(H) denote respectively the biggest and smallest singular values of H (which
are the absolute values of the eigenvalues). It is important to stress that the condition number is
defined for both definite and indefinite matrices.
The famous Newton step corresponds to a change of variables D^{1/2} = H^{1/2}, which makes the new
Hessian perfectly conditioned. However, such a change of variables only exists² when the Hessian H is
positive semi-definite. This is a problem for non-convex loss surfaces where the Hessian might be
indefinite. In fact, recent studies (Dauphin et al., 2014; Choromanska et al., 2014) have shown that
saddle points dominate the optimization landscape of deep neural networks, implying that the
Hessian is most likely indefinite. In such a setting, H^{-1} is not a valid preconditioner and applying
Newton's step without modification would move you towards the saddle point. Nevertheless,
it is important to realize that the concept of preconditioning extends to non-convex problems, and
reducing ill-conditioning around saddle points will often speed up gradient descent.
At this point, it is natural to ask whether there exists a valid preconditioning matrix that always
perfectly conditions the new Hessian? The answer is yes, and the corresponding preconditioning
matrix is the inverse of the absolute Hessian

    |H| = Σ_j |λ_j| q_j q_jᵀ ,        (6)
which is obtained by an eigendecomposition of H and taking the absolute values of the eigenvalues.
See Proposition 1 in Appendix A for a proof that |H|^{-1} is the only (up to a scalar³) symmetric
positive definite preconditioning matrix that perfectly reduces the condition number.
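A quick numerical check of this claim (a sketch we added, not from the paper's code): build an
indefinite H, form |H| from the absolute eigenvalues, and verify that preconditioning with
|H|^{-1/2} drives the condition number to 1.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2.0                            # symmetric, generally indefinite

lam, Q = np.linalg.eigh(H)
P = Q @ np.diag(np.abs(lam) ** -0.5) @ Q.T     # |H|^{-1/2}, via Equation (6)

print(np.linalg.cond(H))                       # usually much larger than 1
print(np.linalg.cond(P @ H @ P))               # 1 (up to round-off)
```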
Practically, there are several computational drawbacks to using |H|^{-1} as a preconditioner. Neural
networks typically have millions of parameters, rendering it infeasible to store the Hessian (O(N²)),
perform an eigendecomposition (O(N³)) and invert the matrix (O(N³)). Except for the eigendecomposition, other full-rank preconditioners face the same computational issues. We therefore
look for more computationally affordable preconditioners that maintain efficiency in reducing the condition number of indefinite matrices. In this paper, we focus on diagonal preconditioners
which can be stored, inverted and multiplied by a vector in linear time. When diagonal preconditioners are applied in an online optimization setting (i.e. in conjunction with SGD), they are often
referred to as adaptive learning rates in the neural network literature.
3 Related work
The Jacobi preconditioner is one of the most well-known preconditioners. It is given by the diagonal
of the Hessian, D^J = |diag(H)|, where | · | is the element-wise absolute value. LeCun et al. (1998)
propose an efficient approximation of the Jacobi preconditioner using the Gauss-Newton matrix.
The Gauss-Newton matrix has been shown to approximate the Hessian under certain conditions (Pascanu
& Bengio, 2014). The merit of this approach is that it is efficient, but it is not clear what is lost
by the Gauss-Newton approximation. What's more, the Jacobi preconditioner has not been found to
be competitive for indefinite matrices (Bradley & Murray, 2011). This will be further explored for
neural networks in Section 5.
² A real square root H^{1/2} only exists when H is positive semi-definite.
³ The scalar can be incorporated into the learning rate.
A recent revival of interest in adaptive learning rates was started by AdaGrad (Duchi et al.,
2011). AdaGrad collects information from the gradients across several parameter updates to tune the
learning rate. This gives us the diagonal preconditioning matrix

    D^A = ( Σ_t ∇f_(t)² )^{-1/2}

which relies on the sum of gradients ∇f_(t) at each timestep t. Duchi et al. (2011) rely strongly on convexity to
justify this method. This makes the application to neural networks difficult from a theoretical perspective. RMSProp (Tieleman & Hinton, 2012) and AdaDelta (Zeiler, 2012) were follow-up methods introduced as practical adaptive learning methods to train large neural networks. Although
RMSProp has been shown to work very well (Schaul et al., 2013), there is not much understanding
of its success in practice. Preconditioning might be a good framework to get a better understanding
of such adaptive learning rate methods.
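For concreteness, a compact sketch of the two accumulator rules just described (the
hyper-parameter values here are our own illustrative defaults):

```python
import numpy as np

def adagrad_update(theta, grad, accum, lr=0.01, eps=1e-8):
    # Accumulate all past squared gradients (Duchi et al., 2011).
    accum = accum + grad ** 2
    return theta - lr * grad / (np.sqrt(accum) + eps), accum

def rmsprop_update(theta, grad, accum, lr=0.001, rho=0.9, eps=1e-8):
    # Exponential moving average of squared gradients (Tieleman & Hinton, 2012).
    accum = rho * accum + (1.0 - rho) * grad ** 2
    return theta - lr * grad / (np.sqrt(accum) + eps), accum
```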
4 Equilibration
Equilibration is a preconditioning technique developed in the numerical mathematics community (Sluis, 1969). When solving a linear system Ax = b with Gaussian Elimination, significant
round-off errors can be introduced when small numbers are added to big numbers (Datta, 2010).
To circumvent this issue, it is advised to properly scale the rows of the matrix before starting the
elimination process. This step is often referred to as row equilibration, which formally scales the
rows of A to unit magnitude in some p-norm. Throughout the following we consider the 2-norm. Row
equilibration is equivalent to multiplying A from the left by the matrix D^{-1} with D_ii = ‖A_{i,·}‖₂. Instead of
solving the original system, we now solve the equivalent left preconditioned system Ãx = b̃ with
Ã = D^{-1}A and b̃ = D^{-1}b.
In this paper, we apply the equilibration preconditioner in the context of large scale non-convex
optimization. However, it is not straightforward how to apply the preconditioner. By choosing the
preconditioning matrix

    D^E_ii = ‖H_{i,·}‖₂ ,        (7)

the Hessian of the transformed function (D^E)^{-1/2 T} H (D^E)^{-1/2} (see Section 2) does not have equilibrated rows. Nevertheless, its spectrum (i.e. eigenvalues) is equal to the spectrum of the row
equilibrated Hessian (D^E)^{-1} H and the column equilibrated Hessian H(D^E)^{-1}. Consequently, if row
equilibration successfully reduces the condition number, then the condition number of the transformed Hessian (D^E)^{-1/2 T} H (D^E)^{-1/2} will be reduced by the same amount. The proof is given by
Proposition 2.
From the above observation, it seems more natural to seek a diagonal preconditioning matrix
D such that D^{-1/2} H D^{-1/2} is row and column equilibrated. In Bradley & Murray (2011) an iterative
stochastic procedure is proposed for finding such a matrix. However, we did not find it to work very
well in an online optimization setting, and therefore stick to the original equilibration matrix D^E.
Although the original motivation for row equilibration is to prevent round-off errors, our interest is
in how well it is able to reduce the condition number. Intuitively, ill-conditioning can be a result of
matrix elements that are of completely different order. Scaling the rows to have equal norm could
therefore significantly reduce the condition number. Although we are not aware of any proofs that
row equilibration improves the condition number, there are theoretical results that motivate its use.
In Sluis (1969) it is shown that the condition number of a row equilibrated matrix is at most a factor
√N worse than the diagonal preconditioning matrix that optimally reduces the condition number.
Note that the bound grows sublinear in the dimension of the matrix, and can be quite loose for the
extremely large matrices we consider. In this paper, we provide an alternative justification using the
following upper bound on the condition number from Guggenheimer et al. (1995):
    κ(H) < (2 / |det H|) ( ‖H‖_F / √N )^N        (8)
The proof in Guggenheimer et al. (1995) provides useful insight into when we can expect the upper bound
to be tight: if all singular values, except for the smallest, are roughly equal.
We prove by Proposition 4 that row equilibration improves this upper bound by a factor
( ‖H‖_F / √N )^N / det(D^E). It is easy to see that the bound is more reduced when the norms of the rows
Figure 2: Histogram of the condition number reduction (lower is better) for random Hessians in
(a) a convex and (b) a non-convex setting. Equilibration clearly outperforms the other methods in the
non-convex case.
are more varied. Note that the proof can be easily extended to column equilibration, and row and
column equilibration. In contrast, we can not prove that the Jacobi preconditioner improves the
upper bound, which provides another justification for using the equilibration preconditioner.
A deterministic implementation to calculate the 2-norm of all matrix rows needs to access all matrix
elements. This is prohibitive for very large Hessians that cannot even be stored. We therefore resort
to a matrix-free estimator of the equilibration matrix that only uses matrix-vector multiplications of
the form (Hv)², where the square is element-wise and v_i ∼ N(0, 1)⁴. As shown by Bradley &
Murray (2011), this estimator is unbiased, i.e.

    ‖H_{i,·}‖₂² = E[(Hv)²].        (9)

Since multiplying the Hessian by a vector can be done efficiently without ever computing the Hessian, this method can be used efficiently in the context of neural networks using the R-operator
(Schraudolph, 2002). The R-operator computation only uses gradient-like computations and costs
about the same as two backpropagations.
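A small Monte Carlo sketch of the estimator in Equation (9) (our own code; in a real network the
product Hv would come from the R-operator rather than an explicit Hessian):

```python
import numpy as np

def equilibration_estimate(hvp, dim, n_samples=100):
    """Monte Carlo estimate of ||H_{i,.}||_2 via E[(Hv)^2] with v ~ N(0, I)."""
    acc = np.zeros(dim)
    for _ in range(n_samples):
        v = np.random.randn(dim)
        acc += hvp(v) ** 2
    return np.sqrt(acc / n_samples)

H = np.array([[2.0, -1.0], [-1.0, 0.5]])
est = equilibration_estimate(lambda v: H @ v, dim=2, n_samples=5000)
print(est, np.linalg.norm(H, axis=1))  # estimate vs. exact row norms
```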
5 Equilibrated learning rates are well suited to non-convex problems
In this section, we demonstrate that equilibrated learning rates are well suited to non-convex optimization, particularly compared to the Jacobi preconditioner. First, the diagonal equilibration matrix
can be seen as an approximation to the diagonal of the absolute Hessian. Reformulating the equilibration
matrix as

    D^E_ii = ‖H_{i,·}‖₂ = √( diag(H²)_i )        (10)
reveals an interesting connection. Changing the order of the square root and diagonal would give us
the diagonal of |H|. In other words, the equilibration preconditioner can be thought of as the Jacobi
preconditioner of the absolute Hessian.
Recall that the inverse of the absolute Hessian |H|^{-1} is the only symmetric positive definite matrix that reduces the condition number to 1 (the proof of which can be found in Proposition 1
in the Appendix). It can be considered the gold standard, if we do not take computational costs
into account. For indefinite matrices, the diagonal of the Hessian H and the diagonal of the absolute Hessian |H| will be very different, and therefore the behavior of the Jacobi and equilibration
preconditioners will also be very different.
In fact, we argue that the Jacobi preconditioner can cause divergence because it underestimates
curvature. We can measure the amount of curvature in a given direction with the Rayleigh quotient

    R(H, v) = (vᵀ H v) / (vᵀ v).        (11)

⁴ Any random variable v_i with zero mean and unit variance can be used.
Algorithm 1 Equilibrated Gradient Descent
Require: function f(θ) to minimize, learning rate ε and damping factor λ
  D ← 0
  for i = 1 → K do
    v ∼ N(0, 1)
    D ← D + (Hv)²
    θ ← θ − ε ∇f(θ) / (√(D/i) + λ)
  end for
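A runnable NumPy sketch of Algorithm 1 on a toy quadratic (the test function and names are our
own; in a neural network the Hessian-vector product would come from the R-operator):

```python
import numpy as np

def esgd(grad_f, hvp_f, theta, lr=0.01, damping=1e-4, steps=500):
    """Equilibrated SGD (Algorithm 1): precondition by sqrt(E[(Hv)^2])."""
    D = np.zeros_like(theta)
    for i in range(1, steps + 1):
        v = np.random.randn(*theta.shape)
        D += hvp_f(theta, v) ** 2           # running sum of (Hv)^2
        precond = np.sqrt(D / i) + damping  # equilibration estimate
        theta = theta - lr * grad_f(theta) / precond
    return theta

# Toy ill-conditioned quadratic f(x) = 0.5 x^T H x with an exact HVP.
H = np.diag([100.0, 1.0, 0.01])
grad_f = lambda x: H @ x
hvp_f = lambda x, v: H @ v
print(esgd(grad_f, hvp_f, np.ones(3)))      # approaches the minimum at 0
```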
This quotient is large when there is a lot of curvature in the direction v. The Rayleigh quotient can be decomposed into R(H, v) = Σ_{j=1}^N λ_j vᵀ q_j q_jᵀ v, where λ_j and q_j are the eigenvalues and eigenvectors of H. It is easy to show that each element of the Jacobi matrix is given by
D^J_ii = |R(H, I_{·,i})|^{-1} = |Σ_{j=1}^N λ_j q²_{j,i}|^{-1}. An element D^J_ii is the inverse of the sum of the eigenvalues λ_j. Negative eigenvalues will reduce the total sum and make the step much larger than it
should. Specifically, imagine a diagonal element where there are large positive and negative curvature eigendirections. The contributions of these directions will cancel each other and a large step
will be taken in that direction. However, the function will probably also change fast in that direction
(because of the high curvature), and the step is too large for the local quadratic approximation we
have considered.
Equilibration methods never diverge this way because they will not underestimate curvature. In
equilibration, the curvature information is given by the Rayleigh quotient of the squared Hessian,
D^E_ii = (R(H², I_{·,i}))^{-1/2} = (Σ_j λ²_j q²_{j,i})^{-1/2}. Note that all the elements are positive and so will
not cancel. Jensen's inequality then gives us an upper bound

    D^E_ii ≤ |H|^{-1}_ii ,        (12)

which ensures that the equilibrated adaptive learning rate will in fact be more conservative than the
Jacobi preconditioner of the absolute Hessian (see Proposition 2 for a proof).
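A tiny numerical illustration of this cancellation effect (our own constructed example):

```python
import numpy as np

q1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
q2 = np.array([1.0, -1.0]) / np.sqrt(2.0)
# Strong positive and negative curvature, both loading on each coordinate.
H = 100.0 * np.outer(q1, q1) - 99.0 * np.outer(q2, q2)

jacobi = np.abs(np.diag(H))      # |diag(H)|: eigenvalues cancel -> [0.5, 0.5]
equil = np.sqrt(np.diag(H @ H))  # row norms: no cancellation -> ~[99.5, 99.5]
print(jacobi, equil)
```

The Jacobi diagonal sees almost no curvature here and would take a step roughly 200 times
larger than the equilibrated one in these directions.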
This strengthens the links between equilibration and the absolute Hessian and may explain why
equilibration has been found to work well for indefinite matrices (Bradley & Murray, 2011). We have
verified this claim experimentally for random neural networks. The neural networks have 1 hidden
layer of a 100 sigmoid units with zero mean unit-variance Gaussian distributed inputs, weights and
biases. The output layer is a softmax with the target generated randomly. We also give results for
similarly sampled logistic regressions. We compare reductions of the condition number between
the methods. Figure 2 gives the histograms of the condition number reductions. We obtained these
graphs by sampling a hundred networks and computing the ratio of the condition number before and
after preconditioning. On the left we have the convex case, and on the right the non-convex case. We
clearly observe that the Jacobi and equilibration method are closely matched for the convex case.
However, in the non-convex case equilibration significantly outperforms the other methods. Note
that the poor performance of the Gauss-Newton diagonal only means that its success in optimization
is not due to preconditioning. As we will see in Section 8 these results extend to practical highdimensional problems.
6 Implementation
We propose to build a scalable algorithm for preconditioning neural networks using equilibration.
This method will estimate the same curvature information, √(diag(H²)), with the unbiased estimator
described in Equation 9. It is prohibitive to compute the full expectation at each learning step.
Instead we will simply update our running average at each learning step much like RMSProp. The
pseudo-code is given in Algorithm 1. The additional costs are one product with the Hessian, which is
roughly the cost of two additional gradient calculations, and the sampling a random Gaussian vector.
In practice we greatly amortize the cost by only performing the update every 20 iterations. This
brings the cost of equilibration very close to that of regular SGD. The only added hyper-parameter
is the damping λ. We find that a good setting for that hyper-parameter is λ = 10⁻⁴ and it is robust
over the tasks we considered.
Figure 3: Learning curves for deep auto-encoders on (a) MNIST and (b) CURVES comparing the
different preconditioned SGD methods.
In the interest of comparison, we will evaluate SGD preconditioned with the Jacobi preconditioner.
This will allow us to verify the claims that the equilibration preconditioner is better suited for nonconvex problems. Bekas et al. (2007) show that the diagonal of a matrix can be recovered by the
expression

    diag(H) = E[v ⊙ Hv]        (13)

where v are random vectors with entries ±1 and ⊙ is the element-wise product. We use this estimator to precondition SGD in the same fashion as described in Algorithm 1. The variance of
this estimator for an element i is Σ_j H²_{ji} − H²_{ii}, while the method in Martens et al. (2012) has H²_{ii}.
Therefore, the optimal method depends on the situation. The computational complexity is the same
as for ESGD.
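And the matching sketch of the Rademacher estimator in Equation (13) (again our own
illustration, with hypothetical names):

```python
import numpy as np

def jacobi_estimate(hvp, dim, n_samples=100):
    """Monte Carlo estimate of diag(H) via E[v * Hv] with v in {-1, +1}^dim."""
    acc = np.zeros(dim)
    for _ in range(n_samples):
        v = np.random.choice([-1.0, 1.0], size=dim)
        acc += v * hvp(v)
    return acc / n_samples

H = np.array([[2.0, -1.0], [-1.0, 0.5]])
print(jacobi_estimate(lambda v: H @ v, dim=2, n_samples=5000))  # ~ [2, 0.5]
```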
7 Experimental setup
We consider the challenging optimization benchmark of training very deep neural networks. Following Martens (2010); Sutskever et al. (2013); Vinyals & Povey (2011), we train deep auto-encoders
which have to reconstruct their input under the constraint that one layer is very low-dimensional.
The networks have up to 11 layers of sigmoidal hidden units and have on the order of a million
parameters. We use the standard network architectures described in Martens (2010) for the MNIST
and CURVES dataset. Both of these datasets have 784 input dimensions and 60,000 and 20,000
examples respectively.
We tune the hyper-parameters of the optimization methods with random search. We have sampled
the learning rate from a logarithmic scale between [0.1, 0.01] for stochastic gradient descent (SGD)
and equilibrated SGD (ESGD). The learning rate for RMSProp and the Jacobi preconditioner are
sampled from [0.001, 0.0001]. The damping factor λ used before dividing the gradient is taken
from either {10⁻⁴, 10⁻⁵, 10⁻⁶} while the exponential decay rate of RMSProp is taken from either
{0.9, 0.95}. The networks are initialized using the sparse initialization described in Martens (2010).
The minibatch size for all methods in 200. We do not make use of momentum in these experiments
in order to evaluate the strength of each preconditioning method on its own. Similarly we do not
use any regularization because we are only concerned with optimization performance. For these
reasons, we report training error in our graphs. The networks and algorithms were implemented
using Theano Bastien et al. (2012), simplifying the use of the R-operator in Jacobi and equilibrated
SGD. All experiments were run on GPU?s.
8 Results
8.1 Comparison of preconditioned SGD methods
We compare the different adaptive learning rates for training deep auto-encoders in Figure 3. We
don?t use momentum to better isolate the performance of each method. We believe this is important
because RMSProp has been found not to mix well with momentum (Tieleman & Hinton, 2012).
Thus the results presented are not state-of-the-art, but they do reach state of the art when momentum
is used.
Figure 4: Cosine distance between the diagonals estimated by each method during the training of
a deep auto-encoder trained on (a) MNIST and (b) CURVES. We can see that RMSProp estimates a
quantity close to the equilibration matrix.
Our results on MNIST show that the proposed ESGD method significantly outperforms both RMSProp and Jacobi SGD. The difference in performance becomes especially notable after 250 epochs.
Sutskever et al. (2013) reported a training MSE of 2.1 for SGD without momentum
and we can see all adaptive learning rates improve on this result, with equilibration reaching 0.86.
We observe a convergence speed that is approximately three times faster then our baseline SGD.
ESGD also performs best for CURVES, although the difference with RMSProp and Jacobi SGD is
not as significant as for MNIST. We show in the next section that the smaller gap in performance is
due to the different preconditioners behaving the same way on this dataset.
8.2 Measuring the similarity of the methods
We train deep autoencoders with RMSProp and measure every 10 epochs the equilibration matrix
D^E = √(diag(H²)) and the Jacobi matrix D^J = √(diag(H)²), using 100 samples of the unbiased estimators described in Equations (9) and (13), respectively. We then measure the pairwise differences between
these quantities in terms of the cosine distance cosine(u, v) = 1 − (u·v)/(‖u‖‖v‖), which measures the angle
between two vectors and ignores their norms.
Figure 4 shows the resulting cosine distances over training on MNIST and CURVES. For the latter
dataset we observe that RMSProp remains remarkably close (around 0.05) to equilibration, while it
is significantly different from Jacobi (in the order of 0.2). The same order of difference is observed
when we compare equilibration and Jacobi, confirming the observations of Section 5 that both quantities are
p rather different in practice. For the MNIST dataset we see that RMSProp fairly well estimates diag(H)2 in the beginning of training, but then quickly diverges. After 1000 epochs this
difference has exceeded the difference between Jacobi and equilibration, and RMSProp no longer
matches equilibration. Interestingly, at the same time that RMSProp starts diverging, we observe in
Figure 3 that also the performance of the optimizer drops in comparison to ESGD. This may suggests
that the success of RMSProp as a optimizer is tied to its similarity to the equilibration matrix.
9
Conclusion
We have studied diagonal preconditioners for saddle point problems i.e. indefinite matrices. We have
shown by theoretical and empirical arguments that the equilibration preconditioner is comparatively
better suited to this kind of problems than the Jacobi preconditioner. Using this insight, we have proposed a novel adaptive learning rate schedule for non-convex optimization problems, called ESGD,
which empirically outperformed RMSProp on two competitive deep autoencoder benchmark. Interestingly, we have found that the update direction of RMSProp was in practice very similar to
the equilibrated update direction, which might provide more insight into why RMSProp has been
so successfull in training deep neural networks. More research is required to confirm these results.
However, we hope that our findings will contribute to a better understanding of SGD?s adaptive
learning rate schedule for large scale, non-convex optimization problems.
8
References
Bastien, Fr?ed?eric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron,
Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements.
Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Bekas, Costas, Kokiopoulou, Effrosyni, and Saad, Yousef. An estimator for the diagonal of a matrix.
Applied numerical mathematics, 57(11):1214?1229, 2007.
Bradley, Andrew M and Murray, Walter. Matrix-free approximate equilibration. arXiv preprint
arXiv:1110.2805, 2011.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Grard Ben, and LeCun, Yann. The
loss surface of multilayer networks, 2014.
Datta, Biswa Nath. Numerical Linear Algebra and Applications, Second Edition. SIAM, 2nd edition,
2010. ISBN 0898716853, 9780898716856.
Dauphin, Yann, Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, Ganguli, Surya, and Bengio,
Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex
optimization. In NIPS?2014, 2014.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning
and stochastic optimization. Journal of Machine Learning Research, 2011.
Guggenheimer, Heinrich W., Edelman, Alan S., and Johnson, Charles R. A simple estimate of the
condition number of a linear system. The College Mathematics Journal, 26(1):pp. 2?5, 1995.
ISSN 07468342. URL http://www.jstor.org/stable/2687283.
LeCun, Yann, Bottou, L?eon, Orr, Genevieve B., and M?uller, Klaus-Robert. Efficient backprop. In
Neural Networks, Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer
Verlag, 1998.
Martens, J. Deep learning via Hessian-free optimization. In ICML?2010, pp. 735?742, 2010.
Martens, James, Sutskever, Ilya, and Swersky, Kevin. Estimating the hessian by back-propagating
curvature. arXiv preprint arXiv:1206.6464, 2012.
Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. In International Conference on Learning Representations 2014(Conference Track), April 2014.
Schaul, Tom, Antonoglou, Ioannis, and Silver, David. Unit tests for stochastic optimization. arXiv
preprint arXiv:1312.6055, 2013.
Schraudolph, Nicol N. Fast curvature matrix-vector products for second-order gradient descent.
Neural Computation, 14(7):1723?1738, 2002.
Sluis, AVD. Condition numbers and equilibration of matrices. Numerische Mathematik, 14(1):
14?23, 1969.
Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running
average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
Vinyals, Oriol and Povey, Daniel. Krylov subspace descent for deep learning. arXiv preprint
arXiv:1111.4259, 2011.
Zeiler, Matthew D. ADADELTA: an adaptive learning rate method. Technical report, arXiv
1212.5701, 2012. URL http://arxiv.org/abs/1212.5701.
9
| 5870 |@word norm:6 seems:1 nd:1 open:1 seek:2 simplifying:1 sgd:17 arous:1 reduction:3 daniel:1 interestingly:2 outperforms:3 bradley:5 recovered:1 comparing:1 gpu:1 john:1 realize:1 numerical:4 confirming:1 drop:1 update:7 implying:1 prohibitive:2 beginning:1 provides:2 pascanu:4 contribute:1 sigmoidal:1 org:2 along:2 viable:1 edelman:1 prove:2 khk:1 introduce:2 pairwise:1 roughly:2 behavior:3 decomposed:1 little:1 considering:1 becomes:1 estimating:1 matched:1 what:3 kind:1 developed:1 finding:2 pseudo:1 every:2 oscillates:1 universit:3 k2:3 stick:1 unit:6 positive:12 before:3 local:2 tends:1 consequence:1 path:1 advised:1 approximately:1 black:1 might:4 therein:1 initialization:2 studied:1 suggests:2 challenging:2 collect:1 succesfully:1 practical:3 lecun:3 lost:1 practice:4 definite:6 razvan:3 procedure:1 lncs:1 empirical:3 thought:2 significantly:4 word:2 bergeron:1 regular:1 spite:1 get:1 undesirable:1 close:3 operator:3 context:2 applying:1 www:1 equivalent:3 deterministic:1 marten:7 straightforward:1 starting:1 convex:21 numerische:1 equilibration:46 identifying:1 insight:4 rule:1 estimator:7 lamblin:1 hd:3 justification:3 imagine:1 target:1 us:2 goodfellow:1 trick:1 element:10 adadelta:2 strengthens:1 particularly:1 observed:2 preprint:4 hv:5 precondition:1 calculate:1 revisiting:1 ensures:1 revival:1 coursera:1 trade:1 transforming:1 convexity:1 rmsprop:22 complexity:1 heinrich:1 trained:1 raise:1 solving:2 tight:2 algebra:1 efficiency:1 eric:1 completely:1 preconditioning:18 easily:1 train:3 walter:1 fast:2 klaus:1 hyper:3 neighborhood:1 choosing:1 kevin:1 quite:1 kai:1 dominating:1 solve:1 larger:1 elad:1 reconstruct:1 encoder:1 transform:1 online:3 advantage:1 eigenvalue:7 isbn:1 propose:3 product:3 fr:1 gold:1 schaul:3 sutskever:4 convergence:3 diverges:1 silver:1 ben:1 help:1 andrew:1 propagating:1 measured:1 progress:3 equilibrated:14 dividing:1 implemented:1 quotient:3 come:1 direction:18 drawback:1 closely:1 stochastic:6 elimination:2 backprop:1 require:1 proposition:5 practically:1 around:2 considered:3 claim:2 pointing:1 matthew:1 optimizer:2 smallest:3 outperformed:1 largest:1 hope:1 uller:1 brought:1 clearly:3 always:2 gaussian:3 aim:2 reaching:1 rather:1 avoid:1 pn:1 conjunction:1 ax:2 focus:1 properly:1 improvement:1 rank:2 prevalent:1 greatly:1 contrast:1 baseline:1 ganguli:1 typically:1 hidden:2 transformed:4 choromanska:3 interested:1 issue:2 ill:5 dauphin:3 pascal:1 socalled:1 proposes:1 art:2 softmax:1 fairly:1 equal:5 aware:1 never:1 sampling:2 look:1 cancel:2 unsupervised:1 icml:2 yoshua:5 report:2 escape:2 pathological:1 randomly:1 divergence:2 ab:1 montr:3 interest:3 dauphiya:1 genevieve:1 kukkvk:1 chain:1 damping:3 divide:1 initialized:1 theoretical:6 eal:3 column:4 obstacle:1 measuring:1 cost:6 entry:1 hundred:1 johnson:1 too:1 optimally:1 stored:2 reported:1 encoders:3 answer:1 considerably:2 cho:1 adaptively:1 international:1 siam:1 off:2 diverge:1 michael:1 together:1 quickly:1 ilya:2 squared:1 slowly:2 worse:1 resort:1 leading:1 suggesting:1 account:1 de:13 orr:1 bergstra:1 ioannis:1 notable:1 vi:2 depends:1 root:2 lot:1 doing:1 hazan:1 competitive:2 start:1 bouchard:1 contribution:1 ass:1 square:3 formed:1 minimize:1 variance:3 efficiently:2 landscape:2 yes:1 famous:1 multiplying:3 explain:2 reach:1 devries:1 ed:1 bekas:2 underestimate:2 pp:2 james:3 proof:7 jacobi:24 sampled:3 costa:1 dataset:4 popular:2 ask:1 recall:1 improves:3 schedule:3 back:1 exceeded:1 follow:1 tom:1 april:1 done:1 strongly:2 kokiopoulou:1 preconditioner:29 autoencoders:1 
hand:1 avd:1 minibatch:1 logistic:1 brings:1 grows:1 believe:1 concept:1 unbiased:3 verify:1 regularization:1 kyunghyun:1 reformulating:1 arnaud:1 symmetric:2 illustrated:1 round:2 during:1 cosine:4 criterion:1 stress:1 demonstrate:2 performs:3 duchi:3 bring:1 wise:3 novel:2 umontreal:3 charles:1 sigmoid:1 empirically:3 conditioning:3 million:3 extend:1 significant:2 mathematics:3 similarly:2 dj:2 access:1 stable:1 similarity:2 surface:2 behaving:1 longer:1 curvature:23 own:1 recent:5 perspective:1 optimizing:1 henaff:1 store:1 certain:1 nonconvex:1 verlag:1 inequality:1 success:5 vt:2 inverted:1 seen:1 minimum:2 additional:2 george:1 attacking:1 semi:2 ii:2 full:3 mix:1 reduces:4 alan:1 technical:1 faster:3 match:1 calculation:1 believed:1 schraudolph:2 equally:1 scalable:1 regression:1 multilayer:1 expectation:1 affordable:1 iteration:2 histogram:2 arxiv:10 invert:1 addition:1 remarkably:1 singular:3 saad:1 probably:1 isolate:1 contrary:1 nath:1 practitioner:1 presence:3 backpropagations:1 bengio:6 enough:1 easy:2 rendering:1 concerned:1 architecture:1 perfectly:3 reduce:4 det:2 qj:4 thread:1 whether:1 expression:1 url:2 hessian:34 cause:1 deep:19 useful:1 clear:1 eigenvectors:1 tune:2 amount:2 locally:1 reduced:3 http:2 estimated:1 track:1 hji:1 indefinite:8 nevertheless:2 changing:1 prevent:1 verified:1 povey:2 dahl:1 timestep:1 graph:2 subgradient:1 sum:3 run:1 inverse:3 parameterized:1 you:1 angle:1 eigendirections:1 swersky:1 extends:1 throughout:1 yann:4 oscillation:2 appendix:2 scaling:1 bound:7 layer:4 quadratic:1 jstor:1 encountered:2 strength:1 constraint:1 aspect:1 speed:4 argument:2 min:2 preconditioners:10 extremely:1 performing:1 poor:1 across:1 smaller:1 making:1 modification:1 intuitively:1 theano:2 taken:3 computationally:3 equation:2 remains:2 previously:1 mathematik:1 loose:1 khkf:1 singer:1 merit:1 antonoglou:1 end:1 gulcehre:1 multiplied:1 apply:3 observe:4 alternative:2 original:8 denotes:1 running:2 zeiler:2 maintaining:1 newton:6 unsuitable:1 mikael:1 exploit:1 yoram:1 eon:1 murray:5 build:1 especially:1 comparatively:2 objective:3 move:2 question:2 added:2 quantity:3 diagonal:20 unclear:1 gradient:24 subspace:1 distance:3 separate:1 link:1 argue:1 iro:2 reason:1 preconditioned:5 code:1 issn:1 tijmen:1 ratio:1 minimizing:1 difficult:1 mostly:2 q2j:2 setup:1 robert:1 negative:6 design:2 implementation:2 motivates:1 yousef:1 perform:1 upper:5 observation:2 datasets:1 benchmark:3 caglar:1 descent:14 curved:1 situation:1 hinton:4 incorporated:1 extended:1 ever:1 rn:1 varied:1 community:1 datta:2 introduced:2 david:1 required:1 connection:1 nip:2 able:1 krylov:1 max:2 critical:2 difficulty:1 natural:3 circumvent:1 scheme:2 improve:1 mathieu:1 started:1 autoencoder:2 auto:4 epoch:3 literature:2 geometric:1 understanding:3 multiplication:1 adagrad:2 relative:1 nicol:1 loss:3 expect:1 lecture:2 sublinear:1 interesting:1 facing:1 geoffrey:2 eigendecomposition:3 h2:4 storing:1 row:16 free:3 infeasible:1 bias:1 raleigh:3 allow:1 taking:1 absolute:10 sparse:1 distributed:1 curve:8 plain:1 dimension:2 valid:2 rich:1 contour:1 ignores:1 author:1 adaptive:16 approximate:2 confirm:2 global:1 reveals:1 harm:1 spectrum:2 don:1 search:1 iterative:1 surya:1 why:3 robust:1 ca:3 nicolas:1 improving:1 mse:1 bottou:1 diag:7 da:1 did:1 anna:1 main:1 big:1 motivation:1 edition:2 biggest:2 referred:2 fashion:1 amortize:1 slow:4 momentum:6 khi:3 exponential:1 tied:1 ian:1 down:2 remained:1 specific:3 bastien:2 jensen:1 explored:1 decay:1 evidence:2 exists:2 workshop:1 mnist:9 effectively:1 
importance:1 magnitude:2 conditioned:4 gap:1 suited:7 logarithmic:1 simply:1 saddle:15 likely:1 vinyals:2 springer:1 corresponds:2 tieleman:3 relies:2 consequently:1 towards:1 change:6 experimentally:1 typical:1 except:3 reducing:2 specifically:1 justify:1 conservative:1 called:4 total:1 gauss:4 experimental:1 diverging:1 formally:2 highdimensional:1 college:1 latter:1 oriol:1 evaluate:3 |
5,381 | 5,871 | Bayesian Active Model Selection
with an Application to Automated Audiometry
Jacob R. Gardner
CS, Cornell University
Ithaca, NY 14850
[email protected]
Kilian Q. Weinberger
CS, Cornell University
Ithaca, NY 14850
[email protected]
Gustavo Malkomes
CSE, WUSTL
St. Louis, MO 63130
[email protected]
Dennis Barbour
BME, WUSTL
St. Louis, MO 63130
[email protected]
Roman Garnett
CSE, WUSTL
St. Louis, MO 63130
[email protected]
John P. Cunningham
Statistics, Columbia University
New York, NY 10027
[email protected]
Abstract
We introduce a novel information-theoretic approach for active model selection
and demonstrate its effectiveness in a real-world application. Although our method
can work with arbitrary models, we focus on actively learning the appropriate
structure for Gaussian process (GP) models with arbitrary observation likelihoods.
We then apply this framework to rapid screening for noise-induced hearing loss
(NIHL), a widespread and preventible disability, if diagnosed early. We construct a
GP model for pure-tone audiometric responses of patients with NIHL . Using this
and a previously published model for healthy responses, the proposed method is
shown to be capable of diagnosing the presence or absence of NIHL with drastically
fewer samples than existing approaches. Further, the method is extremely fast and
enables the diagnosis to be performed in real time.
1
Introduction
Personalized medicine has long been a critical application area for machine learning [1?3], in which
automated decision making and diagnosis are key components. Beyond improving quality of life,
machine learning in diagnostic settings is particularly important because collecting additional medical
data often incurs significant financial burden, time cost, and patient discomfort. In machine learning
one often considers this problem to be one of active feature selection: acquiring each new feature
(e.g., a blood test) incurs some cost, but will, with hope, better inform diagnosis, treatment, and
prognosis. By careful analysis, we may optimize this trade off.
However, many diagnostic settings in medicine do not involve feature selection, but rather involve
querying a sample space to discriminate different models describing patient attributes. A particular,
clarifying example that motivates this work is noise-induced hearing loss (NIHL), a prevalent disorder
affecting 26 million working-age adults in the United States alone [4] and affecting over half of
workers in particular occupations such as mining and construction. Most tragically, NIHL is entirely
preventable with simple, low-cost solutions (e.g., earplugs). The critical requirement for prevention
is effective early diagnosis.
To be tested for NIHL, patients must complete a time-consuming audiometric exam that presents a
series of tones at various frequencies and intensities; at each tone the patient indicates whether he/she
hears the tone [5?7]. From the responses, the clinician infers the patient?s audible threshold on a
set of discrete frequencies (the audiogram); this process requires the delivery of up to hundreds of
tones. Audiologists scan the audiogram for a hearing deficit with a characteristic notch shape?a
1
narrow band that can be anywhere in the frequency domain that is indicative of NIHL. Unfortunately,
at early stages of the disorder, notches can be small enough that they are undetectable in a standard
audiogram, leaving many cases undiagnosed until the condition has become severe. Increasing
audiogram resolution would require higher sample counts (more presented tones) and thus only
lengthen an already burdensome procedure. We present here a better approach.
Note that the NIHL diagnostic challenge is not one of feature selection (choosing the next test to run
and classifying the result), but rather of model selection: is this patient?s hearing better described by
a normal hearing model, or a notched NIHL model? Here we propose a novel active model selection
algorithm to make the NIHL diagnosis in as few tones as possible, which directly reflects the time
and personnel resources required to make accurate diagnoses in large populations. We note that this
is a model-selection problem in the truest sense: a diagnosis corresponds to selecting between two
or more sets of indexed probability distributions (models), rather than the more-common misnomer
of choosing an index from within a model (i.e., hyperparameter optimization). In the NIHL case
this distinction is critical. We are choosing between two models, the set of possible NIHL hearing
functions and the set of normal hearing functions. This approach suggests a very different and more
direct algorithm than first learning the most likely NIHL function and then accepting or rejecting it as
different from normal, the standard approach.
We make the following contributions: first, we design a completely general active-model-selection
method based on maximizing the mutual information between the response to a tone and the posterior
on the model class. Critically, we develop an analytical approximation of this criterion for Gaussian
process (GP) models with arbitrary observation likelihoods, enabling active structure learning for GPs.
Second, we extend the work of Gardner et al. [8] (which uses active learning to speed up audiogram
inference) to the broader question of identifying which model?normal or NIHL?best fits a given
patient. Finally, we develop a novel GP prior mean that parameterizes notched hearing loss for NIHL
patients. To our knowledge, this is the first publication with an active model-selection approach that
does not require updating each model for every candidate point, allowing audiometric diagnosis of
NIHL to be performed in real time. Finally, using patient data from a clinical trial, we show empirically
that our method typically automatically detects simulated noise-induced hearing loss with fewer than
15 query tones. This is vastly fewer than the number required to infer a conventional audiogram
or even an actively learned audiogram [8], highlighting the importance of both the active-learning
approach and our focus on model selection.
2
Bayesian model selection
We consider supervised learning problems defined on an input space X and an output space Y.
Suppose we are given a set of observed data D = (X, y), where X represents the design matrix of
independent variables xi ? X and y the associated vector of dependent variables yi = y(xi ) ? Y.
Let M be a probabilistic model, and let ? be an element of the parameter space indexing M. Given a
set of observations D, we wish to compute the probability of M being the correct model to explain
D, compared to other models. The key quantity of interest to model selection is the model evidence:
Z
p(y | X, M) = p(y | X, ?, M)p(? | M) d?,
(1)
which represents the probability of having generating the observed data under the model, marginalized
over ? to account for all possible members of that model under a prior p(? | M) [9]. Given a set of
M candidate models {Mi }M
i=1 , and the computed evidence for each, we can apply Bayes? rule to
compute the posterior probability of each model given the data:
p(M | D) =
p(y | X, M)p(M)
p(y | X, M)p(M)
=P
,
p(y | X)
i p(y | X, Mi )p(Mi )
(2)
where p(M) represents the prior probability distribution over the models.
2.1
Active Bayesian model selection
Suppose that we have a mechanism for actively selecting new data?choosing x? ? X and observing
y ? = y(x? )?to add to our dataset D = (X, y), in order to better distinguish the candidate models
2
{Mi }. After making this observation, we will form an augmented dataset D0 = D ? (x? , y ? ) ,
from which we can recompute a new model posterior p(M | D0 ).
An approach motivated by information theory is to select the location maximizing the mutual
information between the observation value y ? and the unknown model:
I(y ? ; M | x? , D) = H[M | D] ? Ey? H[M | D0 ]
(3)
?
?
?
?
= H[y | x , D] ? EM H[y | x , D, M] ,
(4)
where H indicates (differential) entropy. Whereas Equation (3) is computationally problematic
(involving costly model retraining), the equivalent expression (4) is typically more tractable, has been
applied fruitfully in various active-learning settings [10, 11, 8, 12, 13], and requires only computing
the differential entropy of the model-marginal predictive distribution:
p(y ? | x? , D) =
M
X
i=1
p(y ? | x? , D, Mi )p(Mi | D)
(5)
and the model-conditional predictive distributions p(y ? | x? , D, Mi ) with all models trained with
the currently available data. In contrast to (3), this does not involve any retraining cost. Although
computing the entropy in (5) might be problematic, we note that this is a one-dimensional integral
that can easily be resolved with quadrature. Our proposed approach, which we call Bayesian active
model selection (BAMS) is then to compute, for each candidate location x? , the mutual information
between y ? and the unknown model, and query where this is maximized:
arg max I(y ? ; M | x? , D).
x?
2.2
(6)
Related work
Although active learning and model selection have been widely investigated, active model selection
has received comparatively less attention. Ali et al. [14] proposed an active learning model selection
method that requires
leave-two-out cross validation when evaluating each candidate x? , requiring
?
2
O B M |X | model updates per iteration, where B is the total budget. Kulick et al. [15] also
considered an information-theoretic approach to active model selection, suggesting maximizing the
expected cross entropy between the current model posterior p(M | D) and the updated
distribution
p(M | D0 ). This approach also requires extensive model retraining, with O M |X? | model updates
per iteration, to estimate this expectation for each candidate. These approaches become prohibitively
expensive for real-time applications with large number of candidates. In our audiometric experiments,
for example, we consider 10 000 candidate points, expending 1?2 seconds per iteration, whereas
these mentioned techniques would take several hours to selected the next point to query.
3
Active model selection for Gaussian processes
In the previous section, we proposed a general framework for performing sequential active Bayesian
model selection, without making any assumptions about the forms of the models {Mi }. Here we
will discuss specific details of our proposal when these models represent alternative structures for
Gaussian process priors on a latent function.
We assume that our observations are generated via a latent function f : X ? R with a known
observation model p(y | f ), where fi = f (xi ). A standard nonparametric Bayesian approach
with such models is to place a Gaussian process (GP) prior distribution on f , p(f ) = GP(f ; ?, K),
where ? : X ? R is a mean function and K : X 2 ? R is a positive-definite covariance function or
kernel [16]. We condition on the observed data to form a posterior distribution p(f | D), which is
typically an updated Gaussian process (making approximations if necessary).
We make predictions
R
at a new input x? via the predictive distribution p(y ? | x? , D) = p(y ? | f ? , D)p(f ? | x? , D) df ? ,
where f ? = f (x? ). The mean and kernel functions are parameterized by hyperparameters that we
concatenate into a vector ?, and different choices of these hyperparameters imply that the functions
drawn from the GP will have particular frequency, amplitude, and other properties. Together, ? and
K define a model parametrized by the hyperparameters ?. Much attention is paid to learning these
hyperparameters in a fixed model class, sometimes under the unfortunate term ?model selection.?
3
Note, however, that the structural (not hyperparameter) choices made in the mean function ? and
covariance function K themselves are typically done by selecting (often blindly!) from several
off-the-shelf solutions (see, for example, [17, 16]; though also see [18, 19]), and this choice has
substantial bearing on the resulting functions f we can model. Indeed, in many settings, choosing
the nature of plausible functions is precisely the problem of model selection; for example, to decide
whether the function has periodic structure, exhibits nonstationarity, etc. Our goal is to automatically
and actively decide these structural choices during GP modeling through intelligent sampling.
To connect to our active learning formulation, let {Mi } be a set of Gaussian process models for the
latent function f . Each model comprises a mean function ?i , covariance function Ki , and associated
hyperparameters ?i . Our approach outlined in Section 2.1 requires the computation of three quantities
that are not typically encountered in GP modeling and inference: the hyperparameter posterior
p(? | D, M), the model evidence p(y | X, M), and the predictive distribution p(y ? | x? , D, M),
where we have marginalized over ? in the latter two quantities. The most-common approaches to GP
inference are maximum likelihood?II (MLE) or maximum a posteriori?II (MAP) estimation, where
we maximize the hyperparameter posterior [20, 16]:1
?? = arg max log p(? | D, M) = arg max log p(? | M) + log(y | X, ?, M).
?
(7)
?
Typically, predictive distributions and other desired quantities are then reported at the MLE / MAP
? Although a compuhyperparameters, implicitly making the assumption that p(? | D, M) ? ?(?).
tationally convenient choice, this does not account for uncertainty in the hyperparameters, which
can be nontrivial with small datasets [9]. Furthermore, accounting correctly for model parameter
uncertainty is crucial to model selection, where it naturally introduces a model-complexity penalty.
We discuss less-drastic approximations to these quantities below.
3.1
Approximating the model evidence and hyperparameter posterior
The model evidence p(y | X, M) and hyperparameter posterior distribution p(? | D, M) are in
general intractable for GPs, as there is no conjugate prior distribution p(? | M) available. Instead, we
will use a Laplace approximation, where we make a second-order Taylor expansion of log p(? | D, M)
around its mode ?? (7). The result is a multivariate Gaussian approximation:
? ?);
p(? | D, M) ? N (?; ?,
??1 = ??2 log p(? | D, M)?=??.
(8)
The Laplace approximation also results in an approximation to the model evidence:
? M) + log p(?? | M) ?
log p(y | X, M) ? log p(y | X, ?,
1
2
log det ??1 +
d
2
log 2?,
(9)
where d is the dimension of ? [21, 22]. The Laplace approximation to the model evidence can be
interpreted as rewarding explaining the data well while penalizing model complexity. Note that
the Bayesian information criterion (BIC), commonly used for model selection, can be seen as an
approximation to the Laplace approximation [23, 24].
3.2
Approximating the predictive distribution
We next consider the predictive distribution:
Z
Z
?
?
?
?
p(y | x , D, M) = p(y | f ) p(f ? | x? , D, ?, M)p(? | D, M) d? df ? .
|
{z
}
(10)
p(f ? |x? ,D,M)
The posterior p(f ? | x? , D, ?, M) in (10) is typically a known Gaussian distribution, derived
analytically for Gaussian observation likelihoods or approximately using standard approximate GP
inference techniques [25, 26]. However, the integral over ? in (10) is intractible, even with a Gaussian
approximation to the hyperparameter posterior as in (8).
Garnett et al. [11] introduced a mechanism for approximately marginalizing GP hyperparameters
(called the MGP), which we will adopt here due to its strong empirical performance. The MGP assumes
1
Using a noninformative prior p(? | M) ? 1 in the case of maximum likelihood.
4
? ?).2
that we have a Gaussian approximation to the hyperparameter posterior, p(? | D, M) ? N (?; ?,
We define the posterior predictive mean and variance functions as
?? (?) = E[f ? | x? , D, ?, M];
? ? (?) = Var[f ? | x? , D, ?, M].
The MGP works by making an expansion of the predictive distribution around the posterior mean
? The nature of this expansion is chosen so as to match various derivatives of the
hyperparameters ?.
true predictive distribution; see [11] for details. The posterior distribution of f ? is approximated by
? ?2 ,
p(f ? | x? , D, M) ? N f ? ; ?? (?),
(11)
MGP
where
2
? +
? + ??? (?)
? > ? ??? (?)
?MGP
= 34 ? ? (?)
1
?
3? ? (?)
? > ?
? .
? ? ?? (?)
?? (?)
(12)
The MGP thus inflates the predictive variance from the the posterior mean hyperparameters ?? by a
term that is commensurate with the uncertainty in ?, measured by the posterior covariance ?, and
the dependence of the latent predictive mean and variance on ?, measured by the gradients ??? and
?? ? . With the Gaussian approximation in (11), the integral in (10) now reduces to integrating the
observation likelihood against a univariate Gaussian. This integral is often analytic [16] and at worse
requires one-dimensional quadrature.
3.3
Implementation
Given the development above, we may now efficiently compute an approximation to the BAMS
criterion for active GP model selection. Given currently observed data D, for each of our candidate
models Mi , we first find the Laplace approximation to the hyperparameter posterior (8) and model
evidence (9). Given the approximations to the model evidence, we may compute an approximation
to the model posterior (2). Suppose we have a set of candidate points X? from which we may
select our next
For each of
our models, we compute the MGP approximation (11) to the latent
point.
?
?
|
X
,
D,
M
)
, from
posteriors p(f
which we use standard techniques to compute the predictive
i
distributions p(y? | X? , D, Mi ) . Finally, with the ability to compute the differential entropies of
these model-conditional predictive distributions, as well as the marginal predictive distribution (5),
we may compute the mutual information of each candidate in parallel. See the Appendix for explicit
formulas for common likelihoods and a description of general-purpose, reusable code we will release
in conjunction with this manuscript to ease implementation.
4
Audiometric threshold testing
Standard audiometric tests [5?7] are calibrated such that the average human subject has a 50% chance
of hearing a tone at any frequency; this empirical unit of intensity is defined as 0 dB HL. Humans give
binary reports (whether or not a tone was heard) in response to stimuli, and these observations are
inherently noisy. Typical audiometric tests present tones in a predefined order on a grid, in increments
of 5?10 dB HL at each of six octaves. Recently, Gardner et al. [8] demonstrated that Bayesian active
learning of a patient?s audiometric function significantly improves the state-of-the-art in terms of
accuracy and number of stimuli required.
However, learning a patient?s entire audiometric function may not always be necessary. Audiometric
testing is frequently performed on otherwise young and healthy patients to detect noise-induced
hearing loss (NIHL). Noise-induced hearing loss occurs when an otherwise healthy individual is
habitually subjected to high-intensity sound [27]. This can result in sharp, notch-shaped hearing loss
in a narrow (sometimes less than one octave) frequency range. Early detection of NIHL is critical
to desirable long-term clinical outcomes, so large-scale screenings of susceptible populations (for
example, factory workers), is commonplace [28]. Noise-induced hearing loss is difficult to diagnose
with standard audiometry, because a frequency?intensity grid must be very fine to ensure that a notch
is detected. The full audiometric test of Gardner et al. [8] may also be inefficient if the only goal of
testing is to determine whether a notch is present, as would be the case for large-scale screening.
We cast the detection of noise-induced hearing loss as an active model selection problem. We will
describe two Gaussian process models of audiometric functions: a baseline model of normal human
2
This is arbitrary and need not be the Laplace approximation in (8), so this is a slight abuse of notation.
5
hearing, and a model reflecting NIHL. We then use the BAMS framework introduced above to, as
rapidly as possible for a given patient, determine which model best describes his or her hearing.
Normal-patient model. To model a healthy patient?s audiometric function, we use the model
described in [8]. The GP prior proposed in that work combines a constant prior mean ?healthy = c
(modeling a frequency-independent natural threshold) with a kernel taken to be the sum of two
components: a linear covariance in intensity and a squared-exponential covariance in frequency. Let
[i, ?] represent a tone stimulus, with i representing its intensity and ? its frequency. We define:
K [i, ?], [i0 , ?0 ] = ?ii0 + ? exp ? 2`12 |? ? ?0 |2 ,
(13)
where ?, ? > 0 weight each component and ` > 0 is a length scale of frequency-dependent random
deviations from a constant hearing threshold. This kernel encodes two fundamental properties of
human audiologic response. First, hearing is monotonic in intensity. The linear contribution ?ii0
ensures that the posterior probability of detecting a fixed frequency will be monotonically increasing
after conditioning on a few tones. Second, human hearing ability is locally smooth in frequency,
because nearby locations in the cochlea are mechanically coupled. The combination of ?healthy with
K specifies our healthy model Mhealthy , with parameters ?healthy = [c, ?, ?, `]> .
Noise-induced hearing loss model. We extend the model above to create a second GP model
reflecting a localized, notch-shaped hearing deficit characteristic of NIHL. We create a novel, flexible
prior mean function for this purpose, the parameters of which specify the exact nature of the hearing
loss. Our proposed notch mean is:
?NIHL (i, ?) = c ? d N 0 (?; ?, w2 ),
(14)
0
where N (?; ?, w) denotes the unnormalized normal probability density function with mean ? and
standard deviation w, which we scale by a depth parameter d > 0 to reflect the prominence of the
hearing loss. This contribution results in a localized subtractive notch feature with tunable center,
width, and height. We retain a constant offset c to revert to the normal-hearing model outside the
vicinity of the localized hearing deficit. Note that we completely model the effect of NIHL on patient
responses with this mean notch function; the kernel K above remains appropriate. The combination
of ?NIHL with K specifies our NIHL model MNIHL with, in addition to the parameters of our healthy
model, the additional parameters ?NIHL = [?, w, d]> .
5
Results
To test BAMS on our NIHL detection task, we evaluate our algorithm using audiometric data, comparing
to several baselines. From the results of a small-scale clinical trial, we have examples of high-fidelity
audiometric functions inferred for several human patients using the method of Gardner et al. [8]. We
may use these to simulate audiometric examinations of healthy patients using different methods to
select tone presentations. We simulate patients with NIHL by adjusting ground truth inferred from
nine healthy patients with in-model samples from our notch mean prior. Recall that high-resolution
audiogram data is extremely scarce.
We first took a thorough pure-tone audiometric test of each of nine patients from our trial with normal
hearing using 100 samples selected using the algorithm in [8] on the domain X = [250, 8000] Hz ?
[?10, 80] dB HL,3 typical ranges for audiometric testing [6]. We inferred the audiometric function
over the entire domain from the measured responses, using the healthy-patient GP model Mhealthy
with parameters learned via MLE ? II inference. The observation model was p(y = 1 | f ) = ?(f ),
where ? is the standard normal CDF, and approximate GP inference was performed via a Laplace
? Mhealthy ) for this patient as
approximation. We then used the approximate GP posterior p(f | D, ?,
ground-truth for simulating a healthy patient?s responses. The posterior probability of tone detection
learned from one patient is shown in the background of Figure 1(a). We simulated a healthy patient?s
response to a given query tone x? = [i? , ?? ] by sampling a conditionally independent Bernoulli
? Mhealthy ).
random variable with parameter p(y ? = 1 | x? , D, ?,
We simulated a patient with NIHL by then drawing notch parameters (the parameters of (14)) from
an expert-informed prior, adding the corresponding notch to the learned healthy ground-truth latent
mean, recomputing the detection probabilities, and proceeding as above. Example NIHL ground-truth
detection probabilities generated in this manner are depicted in the background of Figure 1(b).
3
Inference was done in log-frequency domain.
6
intensity (dB HL)
80
80
1
60
60
0.8
40
40
20
20
0
0
0.6
0.4
0.2
0
8
9
10
11
12
13
8
frequency (log2 Hz)
9
10
11
12
13
frequency (log2 Hz)
(a) Normal hearing model ground truth.
(b) Notch model ground truth.
Figure 1: Samples selected by BAMS (red) and the method of Gardner et al. [8] (white) when run
on (a) the normal-hearing ground truth, and (b), the NIHL model ground truth. Contours denote
probability of detection at 10% intervals. Circles indicate presentations that were heard by the
simulated patient; exes indicate presentations that were not heard by the simulated patient.
5.1
Diagnosing NIHL
To test our active model-selection approach to diagnosing NIHL, we simulated a series of audiometric
tests, selecting tones using three alternatives: BAMS, the algorithm of [8], and random sampling.4
Each algorithm shared a candidate set of 10 000 quasirandom tones X? generated using a scrambled
Halton set so as to densely cover the two-dimensional search space. We simulated nine healthy
patients and a total of 27 patients exhibiting a range of NIHL presentations, using independent draws
from our notch mean prior in the latter case. For each audiometric test simulation, we initialized with
five random tones, then allowed each algorithm to actively select a maximum of 25 additional tones,
a very small fraction of the hundreds typically used in a regular audiometric test. We repeated this
procedure for each of our nine healthy patients using the normal-patient ground-truth model. We
further simulated, for each patient, three separate presentations of NIHL as described above. We plot
the posterior probability of the correct model after each iteration for each method in Figure 2.
In all runs with both ground-truth models, BAMS was able to rapidly achieve greater than 99%
confidence in the correct model without expending the entire budget. Although all methods correctly
inferred high healthy posterior probability for the healthy patient, BAMS wass more confident. For
the NIHL patients, neither baseline inferred the correct model, whereas BAMS rarely required more
than 15 actively chosen samples to confidently make the correct diagnosis. Note that, when BAMS
was used on NIHL patients, there was often an initial period during which the healthy model was
favored, followed by a rapid shift towards the correct model. This is because our method penalizes
the increased complexity of the notch model until sufficient evidence for a notch is acquired.
Figure 1 shows the samples selected by BAMS for typical healthy and NIHL patients. The fundamental
strategy employed by BAMS in this application is logical: it samples in a row of relatively highintensity tones. The intuition for this design is that failure to recognize a normally heard, high-intensity
sound is strong evidence of a notch deficit. Once the notch has been found (Figure 1(b)), BAMS
continues to sample within the notch to confirm its existence and rule out the possibility of the miss
(tone not heard) being due to the stochasticity of the process. Once satisfied, the BAMS approach then
samples on the periphery of the notch to further solidify its belief.
The BAMS algorithm sequentially makes observations where the healthy and NIHL model disagree
the most, typically in the top-center of the MAP notch location. The exact intensity at which BAMS
samples is determined by the prior over the notch-depth parameter d. When we changed the notch
depth prior to support shallower or deeper notches (data not shown), BAMS sampled at lower or
4
We also compared with uncertainty sampling and query by committee (QBC); the performance was comparable to random sampling and is omitted for clarity.
7
Pr(correct model | D)
BAMS
random
I(y ? ; f | D, Mhealthy )
1
1
0.8
0.8
0.6
0.6
0.4
0.4
0.2
0.2
0
0
10
20
30
iteration
10
20
30
iteration
(a) Notch model ground truth.
(b) Normal hearing model ground truth.
Figure 2: Posterior probability of the correct model as a function of iteration number.
higher intensities, respectively, to continue to maximize model disagreement. Similarly, the spacing
between samples is controlled by the prior over the notch-width parameter w.
Finally, it is worth emphasizing the stark difference between the sampling pattern of BAMS and the
audiometric tests of [8]; see Figure 1. Indeed, when the goal is learning the patient?s audiometric
function, the audiometric testing algorithm proposed in that work typically has a very good estimate
after 20 samples. However, when using BAMS, the primary goal is to detect or rule out NIHL. As
a result, the samples selected by BAMS reveal little about the nuances of the patient?s audiometric
function, while being highly informative about the correct model to explain the data. This is precisely
the tradeoff one seeks in a large-scale diagnostic setting, highlighting the critical importance of
focusing on the model-selection problem directly.
6
Conclusion
We introduced a novel information-theoretic approach for active model selection, Bayesian active
model selection, and successfully applied it to rapid screening for noise-induced hearing loss. Our
method for active model selection does not require model retraining to evaluate candidate points,
making it more feasible than previous approaches. Further, we provided an effective and efficient
analytic approximation to our criterion that can be used for automatically learning the model class
of Gaussian processes with arbitrary observation likelihoods, a rich and commonly used class of
potential models.
Acknowledgments
This material is based upon work supported by the National Science Foundation (NSF) under award
number IIA-1355406. Additionally, JRG and KQW are supported by NSF grants IIS-1525919,
IIS-1550179, and EFMA-1137211; GM is supported by CAPES/BR; DB acknowledges NIH grant
R01-DC009215 as well as the CIMIT; JPC acknowledges the Sloan Foundation.
References
[1] I. Kononenko. Machine Learning for Medical Diagnosis: History, State of the Art and Perspective.
Artificial Intelligence in Medicine, 23(1):89?109, 2001.
[2] S. Saria, A. K. Rajani, J. Gould, D. L. Koller, and A. A. Penn. Integration of Early Physiological Responses
Predicts Later Illness Severity in Preterm Infants. Science Translational Medicine, 2(48):48ra65, 2010.
8
[3] T. C. Bailey, Y. Chen, Y. Mao, C. Lu, G. Hackmann, S. T. Micek, K. M. Heard, K. M. Faulkner, and M. H.
Kollef. A Trial of a Real-Time Alert for Clinical Deterioration in Patients Hospitalized on General Medical
Wards. Journal of Hospital Medicine, 8(5):236?242, 2013.
[4] J. Shargorodsky, S. G. Curhan, G. C. Curhan, and R. Eavey. Change in Prevalence of Hearing Loss in US
Adolescents. Journal of the American Medical Association, 304(7):772?778, 2010.
[5] R. Carhart and J. Jerger. Preferred Method for Clinical Determination of Pure-Tone Thresholds. Journal of
Speech and Hearing Disorders, 24(4):330?345, 1959.
[6] W. Hughson and H. Westlake. Manual for program outline for rehabilitation of aural casualties both
military and civilian. Transactions of the American Academy of Ophthalmology and Otolaryngology, 48
(Supplement):1?15, 1944.
[7] M. Don, J. J. Eggermont, and D. E. Brackmann. Reconstruction of the audiogram using brain stem
responses and high-pass noise masking. Annals of Otology, Rhinology and Laryngology., 88(3 Part 2,
Supplement 57):1?20, 1979.
[8] J. R. Gardner, X. Song, K. Q. Weinberger, D. Barbour, and J. P. Cunningham. Psychophysical Detection
Testing with Bayesian Active Learning. In UAI, 2015.
[9] D. J. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press,
2003.
[10] N. Houlsby, F. Huszar, Z. Ghahramani, and J. M. Hern?andez-Lobato. Collaborative Gaussian Processes for
Preference Learning. In NIPS, pages 2096?2104, 2012.
[11] R. Garnett, M. A. Osborne, and P. Hennig. Active Learning of Linear Embeddings for Gaussian Processes.
In UAI, pages 230?239, 2014.
[12] J. M. Hern?andez-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive Entropy Search for Efficient
Global Optimization of Black-box Functions. In NIPS, pages 918?926, 2014.
[13] N. Houlsby, J. M. Hern?andez-Lobato, and Z. Ghahramani. Cold-start Active Learning with Robust Ordinal
Matrix Factorization. In ICML, pages 766?774, 2014.
[14] A. Ali, R. Caruana, and A. Kapoor. Active Learning with Model Selection. In AAAI, 2014.
[15] J. Kulick, R. Lieck, and M. Toussaint. Active Learning of Hyperparameters: An Expected Cross Entropy
Criterion for Active Model Selection. CoRR, abs/1409.7552, 2014.
[16] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[17] D. Duvenaud. Automatic Model Construction with Gaussian Processes. PhD thesis, Computational and
Biological Learning Laboratory, University of Cambridge, 2014.
[18] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure Discovery in
Nonparametric Regression through Compositional Kernel Search. In ICML, pages 1166?1174, 2013.
[19] A. G. Wilson, E. Gilboa, A. Nehorai, and J. P. Cunningham. Fast Kernel Learning for Multidimensional
Pattern Extrapolation. In NIPS, pages 3626?3634, 2014.
[20] C. Williams and C. Rasmussen. Gaussian processes for regression. In NIPS, 1996.
[21] A. E. Raftery. Approximate Bayes Factors and Accounting for Model Uncertainty in Generalised Linear
Models. Biometrika, 83(2):251?266, 1996.
[22] J. Kuha. AIC and BIC: Comparisons of Assumptions and Performance. Sociological Methods and
Research, 33(2):188?229, 2004.
[23] G. Schwarz. Estimating the Dimension of a Model. Annals of Statistics, 6(2):461?464, 1978.
[24] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[25] M. Kuss and C. E. Rasmussen. Assessing Approximate Inference for Binary Gaussian Process Classification. Journal of Machine Learning Research, 6:1679?1704, 2005.
[26] T. P. Minka. Expectation Propagation for Approximate Bayesian Inference. In UAI, pages 362?369, 2001.
[27] D. McBride and S. Williams. Audiometric notch as a sign of noise induced hearing loss. Occupational
Environmental Medicine, 58(1):46?51, 2001.
[28] D. I. Nelson, R. Y. Nelson, M. Concha-Barrientos, and M. Fingerhut. The global burden of occupational
noise-induced hearing loss. American Journal of Industrial Medicine, 48(6):446?458, 2005.
9
| 5871 |@word trial:4 retraining:4 laryngology:1 seek:1 simulation:1 jacob:1 covariance:6 accounting:2 prominence:1 paid:1 incurs:2 initial:1 series:2 occupational:2 united:1 selecting:4 existing:1 current:1 comparing:1 must:2 john:1 concatenate:1 informative:1 shape:1 enables:1 lengthen:1 noninformative:1 analytic:2 plot:1 update:2 alone:1 half:1 fewer:3 selected:5 intelligence:1 tone:25 indicative:1 infant:1 accepting:1 recompute:1 detecting:1 cse:2 location:4 preference:1 hospitalized:1 diagnosing:3 height:1 five:1 alert:1 undetectable:1 become:2 direct:1 differential:3 combine:1 manner:1 introduce:1 acquired:1 indeed:2 expected:2 rapid:3 themselves:1 frequently:1 brain:1 detects:1 automatically:3 little:1 increasing:2 provided:1 estimating:1 notation:1 interpreted:1 informed:1 thorough:1 every:1 collecting:1 multidimensional:1 prohibitively:1 biometrika:1 normally:1 grant:2 penn:1 unit:1 medical:4 louis:3 positive:1 generalised:1 approximately:2 abuse:1 might:1 black:1 suggests:1 ease:1 factorization:1 range:3 acknowledgment:1 testing:6 definite:1 prevalence:1 procedure:2 cold:1 otology:1 area:1 empirical:2 significantly:1 convenient:1 confidence:1 wustl:6 integrating:1 regular:1 selection:34 optimize:1 conventional:1 equivalent:1 map:3 demonstrated:1 maximizing:3 center:2 lobato:3 attention:2 williams:3 adolescent:1 resolution:2 disorder:3 identifying:1 pure:3 ii0:2 rule:3 financial:1 his:1 population:2 increment:1 laplace:7 updated:2 annals:2 construction:2 suppose:3 gm:1 exact:2 gps:2 us:1 element:1 expensive:1 particularly:1 updating:1 approximated:1 continues:1 predicts:1 observed:4 commonplace:1 ensures:1 kilian:1 trade:1 mentioned:1 substantial:1 intuition:1 solidify:1 complexity:3 preventable:1 trained:1 nehorai:1 ali:2 predictive:16 upon:1 completely:2 easily:1 resolved:1 various:3 revert:1 fast:2 effective:2 describe:1 query:5 detected:1 artificial:1 choosing:5 outcome:1 outside:1 widely:1 plausible:1 earplug:1 otherwise:2 drawing:1 ability:2 statistic:2 ward:1 gp:18 noisy:1 analytical:1 took:1 propose:1 reconstruction:1 rapidly:2 kapoor:1 achieve:1 academy:1 description:1 luizgustavo:1 requirement:1 assessing:1 generating:1 leave:1 exam:1 develop:2 bme:1 measured:3 received:1 strong:2 c:2 indicate:2 exhibiting:1 correct:9 attribute:1 human:6 material:1 require:3 notched:2 andez:3 biological:1 malkomes:1 around:2 considered:1 ground:12 normal:14 exp:1 duvenaud:2 mo:3 early:5 adopt:1 omitted:1 purpose:2 estimation:1 currently:2 healthy:22 schwarz:1 create:2 successfully:1 reflects:1 hoffman:1 hope:1 mit:2 gaussian:22 always:1 rather:3 shelf:1 cornell:4 broader:1 publication:1 conjunction:1 wilson:1 derived:1 focus:2 release:1 she:1 prevalent:1 indicates:2 likelihood:8 bernoulli:1 contrast:1 industrial:1 baseline:3 sense:1 burdensome:1 posteriori:1 inference:10 detect:2 dependent:2 i0:1 typically:10 entire:3 cunningham:3 her:1 koller:1 arg:3 fidelity:1 flexible:1 translational:1 classification:1 favored:1 prevention:1 development:1 art:2 integration:1 mackay:1 mutual:4 marginal:2 construct:1 once:2 having:1 shaped:2 sampling:6 represents:3 icml:2 report:1 stimulus:3 intelligent:1 roman:1 few:2 inflates:1 densely:1 recognize:1 individual:1 national:1 murphy:1 rhinology:1 ab:1 detection:8 screening:4 interest:1 mining:1 possibility:1 highly:1 severe:1 introduces:1 predefined:1 accurate:1 integral:4 capable:1 worker:2 necessary:2 indexed:1 taylor:1 initialized:1 desired:1 circle:1 penalizes:1 recomputing:1 increased:1 modeling:3 kqw4:1 military:1 civilian:1 cover:1 caruana:1 
cost:4 hearing:35 deviation:2 hundred:2 fruitfully:1 reported:1 connect:1 periodic:1 casualty:1 calibrated:1 confident:1 st:3 density:1 fundamental:2 retain:1 probabilistic:2 off:2 audible:1 rewarding:1 mechanically:1 together:1 vastly:1 squared:1 reflect:1 satisfied:1 aaai:1 thesis:1 otolaryngology:1 worse:1 expert:1 derivative:1 inefficient:1 american:3 stark:1 actively:6 account:2 suggesting:1 potential:1 lloyd:1 sloan:1 performed:4 later:1 extrapolation:1 diagnose:1 observing:1 red:1 houlsby:2 bayes:2 start:1 parallel:1 masking:1 contribution:3 collaborative:1 intractible:1 accuracy:1 variance:3 characteristic:2 efficiently:1 maximized:1 bayesian:11 rejecting:1 critically:1 lu:1 worth:1 published:1 kuss:1 history:1 explain:2 inform:1 nonstationarity:1 manual:1 against:1 failure:1 frequency:16 minka:1 brackmann:1 naturally:1 associated:2 mi:11 sampled:1 dataset:2 treatment:1 tunable:1 adjusting:1 logical:1 recall:1 knowledge:1 infers:1 improves:1 amplitude:1 reflecting:2 manuscript:1 focusing:1 higher:2 supervised:1 response:12 specify:1 formulation:1 done:2 though:1 diagnosed:1 box:1 misnomer:1 anywhere:1 stage:1 furthermore:1 until:2 working:1 dennis:1 propagation:1 widespread:1 mode:1 quality:1 reveal:1 effect:1 requiring:1 true:1 analytically:1 vicinity:1 laboratory:1 white:1 conditionally:1 during:2 width:2 unnormalized:1 criterion:5 octave:2 outline:1 theoretic:3 demonstrate:1 complete:1 discomfort:1 novel:5 fi:1 recently:1 tationally:1 common:3 nih:1 empirically:1 conditioning:1 million:1 extend:2 he:1 slight:1 illness:1 association:1 significant:1 cambridge:2 automatic:1 outlined:1 grid:2 similarly:1 iia:1 stochasticity:1 aural:1 etc:1 add:1 posterior:26 multivariate:1 perspective:2 periphery:1 binary:2 continue:1 life:1 yi:1 seen:1 additional:3 greater:1 ey:1 employed:1 determine:2 maximize:2 period:1 monotonically:1 ii:5 full:1 sound:2 desirable:1 infer:1 stem:1 d0:4 expending:2 reduces:1 match:1 smooth:1 determination:1 clinical:5 long:2 cross:3 jpc:1 mle:3 award:1 controlled:1 prediction:1 involving:1 regression:2 patient:42 expectation:2 df:2 blindly:1 iteration:7 represent:2 kernel:7 sometimes:2 cochlea:1 deterioration:1 proposal:1 affecting:2 whereas:3 fine:1 addition:1 background:2 interval:1 spacing:1 leaving:1 crucial:1 ithaca:2 w2:1 induced:11 subject:1 hz:3 db:5 member:1 effectiveness:1 call:1 structural:2 presence:1 enough:1 faulkner:1 automated:2 embeddings:1 fit:1 bic:2 mgp:7 prognosis:1 parameterizes:1 tradeoff:1 br:1 det:1 shift:1 whether:4 motivated:1 expression:1 six:1 notch:27 jerger:1 penalty:1 song:1 speech:1 york:1 nine:4 compositional:1 heard:6 involve:3 nonparametric:2 band:1 locally:1 tenenbaum:1 specifies:2 problematic:2 nsf:2 sign:1 diagnostic:4 per:3 correctly:2 diagnosis:10 discrete:1 hyperparameter:9 audiometry:2 hennig:1 key:2 reusable:1 threshold:5 blood:1 drawn:1 clarity:1 penalizing:1 neither:1 jpc2181:1 fraction:1 sum:1 run:3 parameterized:1 uncertainty:5 place:1 preterm:1 decide:2 draw:1 delivery:1 decision:1 appendix:1 comparable:1 huszar:1 entirely:1 ki:1 barbour:2 followed:1 distinguish:1 aic:1 encountered:1 nontrivial:1 precisely:2 personalized:1 encodes:1 nearby:1 speed:1 simulate:2 extremely:2 performing:1 relatively:1 gould:1 combination:2 conjugate:1 describes:1 ophthalmology:1 em:1 making:7 rehabilitation:1 hl:4 indexing:1 pr:1 taken:1 computationally:1 resource:1 equation:1 previously:1 remains:1 describing:1 count:1 mechanism:2 discus:2 committee:1 jrg:1 hern:3 ordinal:1 tractable:1 drastic:1 subjected:1 available:2 apply:2 
appropriate:2 disagreement:1 simulating:1 bailey:1 alternative:2 weinberger:2 existence:1 assumes:1 denotes:1 ensure:1 top:1 unfortunate:1 log2:2 marginalized:2 cape:1 medicine:7 ghahramani:4 eggermont:1 approximating:2 comparatively:1 r01:1 psychophysical:1 already:1 question:1 quantity:5 occurs:1 strategy:1 costly:1 dependence:1 primary:1 disability:1 exhibit:1 gradient:1 deficit:4 separate:1 simulated:8 clarifying:1 parametrized:1 nelson:2 considers:1 nuance:1 code:1 length:1 index:1 tragically:1 difficult:1 unfortunately:1 susceptible:1 design:3 implementation:2 motivates:1 unknown:2 allowing:1 disagree:1 shallower:1 observation:13 datasets:1 commensurate:1 enabling:1 severity:1 arbitrary:5 sharp:1 intensity:11 inferred:5 introduced:3 cast:1 required:4 extensive:1 distinction:1 narrow:2 learned:4 hour:1 nip:4 adult:1 beyond:1 able:1 below:1 bam:21 pattern:2 challenge:1 confidently:1 program:1 max:3 belief:1 critical:5 natural:1 examination:1 scarce:1 representing:1 imply:1 gardner:7 raftery:1 acknowledges:2 coupled:1 columbia:2 hears:1 prior:16 discovery:1 marginalizing:1 occupation:1 loss:16 sociological:1 undiagnosed:1 querying:1 var:1 localized:3 age:1 validation:1 foundation:2 toussaint:1 sufficient:1 subtractive:1 classifying:1 row:1 changed:1 supported:3 rasmussen:3 gilboa:1 drastically:1 deeper:1 explaining:1 dimension:2 depth:3 world:1 evaluating:1 contour:1 rich:1 made:1 commonly:2 transaction:1 approximate:6 implicitly:1 preferred:1 confirm:1 global:2 active:32 sequentially:1 uai:3 quasirandom:1 kononenko:1 consuming:1 xi:3 don:1 scrambled:1 search:3 latent:6 additionally:1 nature:3 robust:1 inherently:1 improving:1 expansion:3 bearing:1 investigated:1 garnett:4 domain:4 noise:12 hyperparameters:10 osborne:1 allowed:1 repeated:1 quadrature:2 kqw:1 augmented:1 grosse:1 ny:3 mao:1 comprises:1 wish:1 explicit:1 exponential:1 factory:1 candidate:13 young:1 formula:1 emphasizing:1 specific:1 offset:1 physiological:1 evidence:11 burden:2 intractable:1 gustavo:1 sequential:1 adding:1 importance:2 corr:1 supplement:2 phd:1 budget:2 chen:1 entropy:7 depicted:1 likely:1 univariate:1 highlighting:2 personnel:1 monotonic:1 acquiring:1 corresponds:1 truth:12 chance:1 qbc:1 environmental:1 cdf:1 conditional:2 goal:4 presentation:5 careful:1 towards:1 shared:1 absence:1 feasible:1 saria:1 change:1 typical:3 clinician:1 determined:1 miss:1 total:2 called:1 discriminate:1 hospital:1 pas:1 rarely:1 select:4 support:1 latter:2 scan:1 evaluate:2 tested:1 ex:1 |
5,382 | 5,872 | Efficient and Robust Automated Machine Learning
Matthias Feurer
Aaron Klein
Katharina Eggensperger
Jost Tobias Springenberg
Manuel Blum
Frank Hutter
Department of Computer Science
University of Freiburg, Germany
{feurerm,kleinaa,eggenspk,springj,mblum,fh}@cs.uni-freiburg.de
Abstract
The success of machine learning in a broad range of applications has led to an
ever-growing demand for machine learning systems that can be used off the shelf
by non-experts. To be effective in practice, such systems need to automatically
choose a good algorithm and feature preprocessing steps for a new dataset at hand,
and also set their respective hyperparameters. Recent work has started to tackle this
automated machine learning (AutoML) problem with the help of efficient Bayesian
optimization methods. Building on this, we introduce a robust new AutoML system
based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and
4 data preprocessing methods, giving rise to a structured hypothesis space with
110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on
existing AutoML methods by automatically taking into account past performance
on similar datasets, and by constructing ensembles from the models evaluated
during the optimization. Our system won the first phase of the ongoing ChaLearn
AutoML challenge, and our comprehensive analysis on over 100 diverse datasets
shows that it substantially outperforms the previous state of the art in AutoML. We
also demonstrate the performance gains due to each of our contributions and derive
insights into the effectiveness of the individual components of AUTO-SKLEARN.
1 Introduction
Machine learning has recently made great strides in many application areas, fueling a growing
demand for machine learning systems that can be used effectively by novices in machine learning.
Correspondingly, a growing number of commercial enterprises aim to satisfy this demand (e.g.,
BigML.com, Wise.io, SkyTree.com, RapidMiner.com, Dato.com, Prediction.io, DataRobot.com, Microsoft's Azure Machine
Learning, Google's Prediction API, and Amazon Machine Learning). At its core, every effective machine learning
service needs to solve the fundamental problems of deciding which machine learning algorithm to
use on a given dataset, whether and how to preprocess its features, and how to set all hyperparameters.
This is the problem we address in this work.
More specifically, we investigate automated machine learning (AutoML), the problem of automatically
(without human input) producing test set predictions for a new dataset within a fixed computational
budget. Formally, this AutoML problem can be stated as follows:
Definition 1 (AutoML problem). For i = 1, . . . , n + m, let x_i ∈ R^d denote a feature vector and y_i ∈ Y
the corresponding target value. Given a training dataset D_train = {(x_1, y_1), . . . , (x_n, y_n)} and
the feature vectors x_{n+1}, . . . , x_{n+m} of a test dataset D_test = {(x_{n+1}, y_{n+1}), . . . , (x_{n+m}, y_{n+m})}
drawn from the same underlying data distribution, as well as a resource budget b and a loss metric
L(·, ·), the AutoML problem is to (automatically) produce test set predictions ŷ_{n+1}, . . . , ŷ_{n+m}. The
loss of a solution ŷ_{n+1}, . . . , ŷ_{n+m} to the AutoML problem is given by

    (1/m) Σ_{j=1}^{m} L(ŷ_{n+j}, y_{n+j}).
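As a concrete illustration of Definition 1, the following is a minimal sketch (not from the paper) of how the AutoML loss could be computed once predictions have been produced; the 0/1 loss shown is just one example choice for L.

```python
# Minimal sketch of the AutoML loss from Definition 1.
# `loss` is any pointwise loss L(y_hat, y); 0/1 error is used as an example default.
def automl_loss(y_hat, y_true, loss=lambda yh, y: float(yh != y)):
    """Average loss (1/m) * sum_j L(y_hat_j, y_j) over the m test points."""
    assert len(y_hat) == len(y_true)
    m = len(y_true)
    return sum(loss(yh, y) for yh, y in zip(y_hat, y_true)) / m
```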
In practice, the budget b would comprise computational resources, such as CPU and/or wallclock time
and memory usage. This problem definition reflects the setting of the ongoing ChaLearn AutoML
challenge [1]. The AutoML system we describe here won the first phase of that challenge.
Here, we follow and extend the AutoML approach first introduced by AUTO-WEKA [2] (see
http://automl.org). At its core, this approach combines a highly parametric machine learning
framework F with a Bayesian optimization [3] method for instantiating F well for a given dataset.
The contribution of this paper is to extend this AutoML approach in various ways that considerably
improve its efficiency and robustness, based on principles that apply to a wide range of machine
learning frameworks (such as those used by the machine learning service providers mentioned above).
First, following successful previous work for low dimensional optimization problems [4, 5, 6],
we reason across datasets to identify instantiations of machine learning frameworks that perform
well on a new dataset and warmstart Bayesian optimization with them (Section 3.1). Second, we
automatically construct ensembles of the models considered by Bayesian optimization (Section 3.2).
Third, we carefully design a highly parameterized machine learning framework from high-performing
classifiers and preprocessors implemented in the popular machine learning framework scikit-learn [7]
(Section 4). Finally, we perform an extensive empirical analysis using a diverse collection of datasets
to demonstrate that the resulting AUTO - SKLEARN system outperforms previous state-of-the-art
AutoML methods (Section 5), to show that each of our contributions leads to substantial performance
improvements (Section 6), and to gain insights into the performance of the individual classifiers and
preprocessors used in AUTO - SKLEARN (Section 7).
2 AutoML as a CASH problem
We first review the formalization of AutoML as a Combined Algorithm Selection and Hyperparameter
optimization (CASH) problem used by AUTO-WEKA's AutoML approach. Two important problems
in AutoML are that (1) no single machine learning method performs best on all datasets and (2) some
machine learning methods (e.g., non-linear SVMs) crucially rely on hyperparameter optimization.
The latter problem has been successfully attacked using Bayesian optimization [3], which nowadays
forms a core component of an AutoML system. The former problem is intertwined with the latter since
the rankings of algorithms depend on whether their hyperparameters are tuned properly. Fortunately,
the two problems can efficiently be tackled as a single, structured, joint optimization problem:
Definition 2 (CASH). Let A = {A^(1), . . . , A^(R)} be a set of algorithms, and let the hyperparameters
of each algorithm A^(j) have domain Λ^(j). Further, let D_train = {(x_1, y_1), . . . , (x_n, y_n)} be a training
set which is split into K cross-validation folds {D^(1)_valid, . . . , D^(K)_valid} and {D^(1)_train, . . . , D^(K)_train}
such that D^(i)_train = D_train \ D^(i)_valid for i = 1, . . . , K. Finally, let L(A^(j)_λ, D^(i)_train, D^(i)_valid) denote the
loss that algorithm A^(j) achieves on D^(i)_valid when trained on D^(i)_train with hyperparameters λ. Then,
the Combined Algorithm Selection and Hyperparameter optimization (CASH) problem is to find the
joint algorithm and hyperparameter setting that minimizes this loss:

    A*, λ* ∈ argmin_{A^(j) ∈ A, λ ∈ Λ^(j)} (1/K) Σ_{i=1}^{K} L(A^(j)_λ, D^(i)_train, D^(i)_valid).    (1)
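For illustration, here is a minimal sketch of evaluating the inner objective of Equation (1) for one candidate pair (A^(j), λ) via K-fold cross-validation; the `algo_cls`/`params` calling convention and the use of 0/1 loss are assumptions for the example, not prescribed by the paper.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import zero_one_loss

def cash_objective(algo_cls, params, X, y, K=5, seed=0):
    """Mean K-fold validation loss of algorithm `algo_cls` with hyperparameters `params`."""
    losses = []
    for train_idx, valid_idx in KFold(n_splits=K, shuffle=True, random_state=seed).split(X):
        model = algo_cls(**params).fit(X[train_idx], y[train_idx])
        losses.append(zero_one_loss(y[valid_idx], model.predict(X[valid_idx])))
    return float(np.mean(losses))
```

A CASH solver would then minimize `cash_objective` jointly over the algorithm choice and its hyperparameter domain.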
This CASH problem was first tackled by Thornton et al. [2] in the AUTO -WEKA system using the
machine learning framework WEKA [8] and tree-based Bayesian optimization methods [9, 10]. In
a nutshell, Bayesian optimization [3] fits a probabilistic model to capture the relationship between
hyperparameter settings and their measured performance; it then uses this model to select the most
promising hyperparameter setting (trading off exploration of new parts of the space vs. exploitation
in known good regions), evaluates that hyperparameter setting, updates the model with the result,
and iterates. While Bayesian optimization based on Gaussian process models (e.g., Snoek et al. [11])
performs best in low-dimensional problems with numerical hyperparameters, tree-based models have
been shown to be more successful in high-dimensional, structured, and partly discrete problems [12],
such as the CASH problem, and are also used in the AutoML system HYPEROPT-SKLEARN [13].
Among the tree-based Bayesian optimization methods, Thornton et al. [2] found the random-forest-based SMAC [9] to outperform the tree Parzen estimator TPE [10], and we therefore use SMAC
to solve the CASH problem in this paper. Next to its use of random forests [14], SMAC's main
distinguishing feature is that it allows fast cross-validation by evaluating one fold at a time and
discarding poorly-performing hyperparameter settings early.
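The loop below shows the generic shape of such a sequential model-based (Bayesian) optimization procedure; it is a simplified stand-in (random candidate proposals scored by an acquisition function), not SMAC's actual algorithm, and `surrogate`/`acquisition` are placeholder interfaces.

```python
def smbo(objective, sample_config, surrogate, acquisition, n_init=10, n_iter=50):
    """Generic sequential model-based (Bayesian) optimization loop.

    objective:   maps a configuration to a validation loss.
    surrogate:   model with fit(configs, losses) capturing loss as a function of config.
    acquisition: scores a config given the surrogate and the best loss so far,
                 trading off exploration of new regions vs. exploitation of good ones.
    """
    history = [(c, objective(c)) for c in (sample_config() for _ in range(n_init))]
    for _ in range(n_iter):
        surrogate.fit([c for c, _ in history], [l for _, l in history])
        best_loss = min(l for _, l in history)
        # Choose the most promising candidate among random proposals.
        candidates = [sample_config() for _ in range(1000)]
        nxt = max(candidates, key=lambda c: acquisition(surrogate, c, best_loss))
        history.append((nxt, objective(nxt)))
    return min(history, key=lambda t: t[1])
```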
[Figure 1 sketch: input {X_train, Y_train, X_test, b, L} feeds a meta-learning step, then a Bayesian optimizer over the ML framework (data preprocessor, feature preprocessor, classifier); the models evaluated during optimization are combined by an ensemble-building step that outputs Ŷ_test.]
Figure 1: Our improved AutoML approach. We add two components to Bayesian hyperparameter optimization
of an ML framework: meta-learning for initializing the Bayesian optimizer and automated ensemble construction
from configurations evaluated during optimization.
3 New methods for increasing efficiency and robustness of AutoML
We now discuss our two improvements of the AutoML approach. First, we include a meta-learning
step to warmstart the Bayesian optimization procedure, which results in a considerable boost in
efficiency. Second, we include an automated ensemble construction step, allowing us to use all
classifiers that were found by Bayesian optimization.
Figure 1 summarizes the overall AutoML workflow, including both of our improvements. We note
that we expect their effectiveness to be greater for flexible ML frameworks that offer many degrees of
freedom (e.g., many algorithms, hyperparameters, and preprocessing methods).
3.1 Meta-learning for finding good instantiations of machine learning frameworks
Domain experts derive knowledge from previous tasks: They learn about the performance of machine
learning algorithms. The area of meta-learning [15] mimics this strategy by reasoning about the
performance of learning algorithms across datasets. In this work, we apply meta-learning to select
instantiations of our given machine learning framework that are likely to perform well on a new
dataset. More specifically, for a large number of datasets, we collect both performance data and a set
of meta-features, i.e., characteristics of the dataset that can be computed efficiently and that help to
determine which algorithm to use on a new dataset.
This meta-learning approach is complementary to Bayesian optimization for optimizing an ML
framework. Meta-learning can quickly suggest some instantiations of the ML framework that are
likely to perform quite well, but it is unable to provide fine-grained information on performance.
In contrast, Bayesian optimization is slow to start for hyperparameter spaces as large as those of
entire ML frameworks, but can fine-tune performance over time. We exploit this complementarity by
selecting k configurations based on meta-learning and use their result to seed Bayesian optimization.
This approach of warmstarting optimization by meta-learning has already been successfully applied
before [4, 5, 6], but never to an optimization problem as complex as that of searching the space
of instantiations of a full-fledged ML framework. Likewise, learning across datasets has also
been applied in collaborative Bayesian optimization methods [16, 17]; while these approaches are
promising, they are so far limited to very few meta-features and cannot yet cope with the high-dimensional partially discrete configuration spaces faced in AutoML.
More precisely, our meta-learning approach works as follows. In an offline phase, for each machine
learning dataset in a dataset repository (in our case 140 datasets from the OpenML [18] repository),
we evaluated a set of meta-features (described below) and used Bayesian optimization to determine
and store an instantiation of the given ML framework with strong empirical performance for that
dataset. (In detail, we ran SMAC [9] for 24 hours with 10-fold cross-validation on two thirds of the
data and stored the resulting ML framework instantiation which exhibited best performance on the
remaining third). Then, given a new dataset D, we compute its meta-features, rank all datasets by
their L1 distance to D in meta-feature space and select the stored ML framework instantiations for
the k = 25 nearest datasets for evaluation before starting Bayesian optimization with their results.
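A minimal sketch of this selection step follows, assuming the offline repository is given as a matrix of meta-features plus a list of stored configurations (the data layout is illustrative):

```python
import numpy as np

def warmstart_configs(new_meta, repo_meta, repo_configs, k=25):
    """Return the stored configurations of the k datasets nearest in meta-feature space.

    new_meta:     meta-feature vector of the new dataset, shape (q,)
    repo_meta:    meta-feature matrix of stored datasets, shape (n, q)
    repo_configs: list of n best-found configurations, one per stored dataset
    """
    dists = np.abs(repo_meta - new_meta).sum(axis=1)  # L1 distance to each stored dataset
    nearest = np.argsort(dists)[:k]
    return [repo_configs[i] for i in nearest]
```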
To characterize datasets, we implemented a total of 38 meta-features from the literature, including
simple, information-theoretic and statistical meta-features [19, 20], such as statistics about the number
of data points, features, and classes, as well as data skewness, and the entropy of the targets. All
meta-features are listed in Table 1 of the supplementary material. Notably, we had to exclude the
prominent and effective category of landmarking meta-features [21] (which measure the performance
of simple base learners), because they were computationally too expensive to be helpful in the online
evaluation phase. We note that this meta-learning approach draws its power from the availability of
a repository of datasets; due to recent initiatives, such as OpenML [18], we expect the number of
available datasets to grow ever larger over time, increasing the importance of meta-learning.
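For concreteness, a few of the simple, statistical, and information-theoretic meta-features mentioned above can be computed as sketched below; this is a small illustrative subset, not the full list of 38.

```python
import numpy as np
from scipy.stats import skew

def simple_meta_features(X, y):
    """A handful of simple/statistical/information-theoretic meta-features."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return {
        "n_instances": n,
        "n_features": d,
        "n_classes": len(counts),
        "mean_skewness": float(np.mean(skew(X, axis=0))),  # data skewness
        "class_entropy": float(-(p * np.log2(p)).sum()),   # entropy of the targets
    }
```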
3.2 Automated ensemble construction of models evaluated during optimization
While Bayesian hyperparameter optimization is data-efficient in finding the best-performing hyperparameter setting, we note that it is a very wasteful procedure when the goal is simply to make good
predictions: all the models it trains during the course of the search are lost, usually including some
that perform almost as well as the best. Rather than discarding these models, we propose to store them
and to use an efficient post-processing method (which can be run in a second process on-the-fly) to
construct an ensemble out of them. This automatic ensemble construction avoids to commit itself to a
single hyperparameter setting and is thus more robust (and less prone to overfitting) than using the
point estimate that standard hyperparameter optimization yields. To our best knowledge, we are the
first to make this simple observation, which can be applied to improve any Bayesian hyperparameter
optimization method.
It is well known that ensembles often outperform individual models [22, 23], and that effective
ensembles can be created from a library of models [24, 25]. Ensembles perform particularly well if
the models they are based on (1) are individually strong and (2) make uncorrelated errors [14]. Since
this is much more likely when the individual models are different in nature, ensemble building is
particularly well suited for combining strong instantiations of a flexible ML framework.
However, simply building a uniformly weighted ensemble of the models found by Bayesian optimization does not work well. Rather, we found it crucial to adjust these weights using the predictions of
all individual models on a hold-out set. We experimented with different approaches to optimize these
weights: stacking [26], gradient-free numerical optimization, and the method ensemble selection [24].
While we found both numerical optimization and stacking to overfit to the validation set and to be
computationally costly, ensemble selection was fast and robust. In a nutshell, ensemble selection
(introduced by Caruana et al. [24]) is a greedy procedure that starts from an empty ensemble and then
iteratively adds the model that maximizes ensemble validation performance (with uniform weight,
but allowing for repetitions). Procedure 1 in the supplementary material describes it in detail. We
used this technique in all our experiments, building an ensemble of size 50.
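The following is a compact sketch of ensemble selection as described above (greedy forward selection with uniform weights and replacement); `val_preds` holding per-model hold-out class probabilities is an assumed data layout, and the default plain classification error is a stand-in for whatever validation loss is used.

```python
import numpy as np

def ensemble_selection(val_preds, y_valid, size=50, loss=None):
    """Greedily build an ensemble (with repetitions) minimizing hold-out loss.

    val_preds: array of shape (n_models, n_valid, n_classes) with class probabilities.
    Returns the list of chosen model indices (duplicates encode integer weights).
    """
    if loss is None:  # default: plain classification error on the hold-out set
        loss = lambda p: float(np.mean(np.argmax(p, axis=1) != y_valid))
    chosen, running_sum = [], np.zeros_like(val_preds[0])
    for _ in range(size):
        # Score each candidate by the loss of the ensemble average after adding it.
        scores = [loss((running_sum + p) / (len(chosen) + 1)) for p in val_preds]
        best = int(np.argmin(scores))
        chosen.append(best)
        running_sum += val_preds[best]
    return chosen
```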
4 A practical automated machine learning system
To design a robust AutoML system, as our underlying ML framework we chose scikit-learn [7], one of the best known and most widely used machine learning libraries. It offers a wide range of well-established and efficiently-implemented ML algorithms and is easy to use for both experts and beginners. Since our AutoML system closely resembles AUTO-WEKA, but, like HYPEROPT-SKLEARN, is based on scikit-learn, we dub it AUTO-SKLEARN. Figure 2 depicts AUTO-SKLEARN's overall components. It comprises 15 classification algorithms, 14 preprocessing methods, and 4 data preprocessing methods. We parameterized each of them, which resulted in a space of 110 hyperparameters. Most of these are conditional hyperparameters that are only active if their respective component is selected. We note that SMAC [9] can handle this conditionality natively.
[Figure 2 sketch: a pipeline of data preprocessor (imputation: mean/median/...; balancing: None/weighting; rescaling: min/max, standard, ...; one-hot encoding), feature preprocessor (PCA, fast ICA, None, ...), and estimator/classifier (AdaBoost, RF, kNN, ...), with leaf hyperparameters such as learning rate, # estimators, and max. depth.]
Figure 2: Structured configuration space. Squared boxes denote parent hyperparameters whereas boxes with rounded edges are leaf hyperparameters. Grey colored boxes mark active hyperparameters which form an example configuration and machine learning pipeline. Each pipeline comprises one feature preprocessor, classifier and up to three data preprocessor methods plus respective hyperparameters.
All 15 classification algorithms in AUTO-SKLEARN are listed in Table 1a (and described in detail in
Section A.1 of the supplementary material). They fall into different categories, such as general linear
models (2 algorithms), support vector machines (2), discriminant analysis (2), nearest neighbors
(1), naïve Bayes (3), decision trees (1) and ensembles (4).
(a) classification algorithms

name                    | #λ | cat (cond) | cont (cond)
AdaBoost (AB)           |  4 | 1 (-)      | 3 (-)
Bernoulli naïve Bayes   |  2 | 1 (-)      | 1 (-)
decision tree (DT)      |  4 | 1 (-)      | 3 (-)
extreml. rand. trees    |  5 | 2 (-)      | 3 (-)
Gaussian naïve Bayes    |  - | -          | -
gradient boosting (GB)  |  6 | -          | 6 (-)
kNN                     |  3 | 2 (-)      | 1 (-)
LDA                     |  4 | 1 (-)      | 3 (1)
linear SVM              |  4 | 2 (-)      | 2 (-)
kernel SVM              |  7 | 2 (-)      | 5 (2)
multinomial naïve Bayes |  2 | 1 (-)      | 1 (-)
passive aggressive      |  3 | 1 (-)      | 2 (-)
QDA                     |  2 | -          | 2 (-)
random forest (RF)      |  5 | 2 (-)      | 3 (-)
Linear Class. (SGD)     | 10 | 4 (-)      | 6 (3)

(b) preprocessing methods

name                        | #λ | cat (cond) | cont (cond)
extreml. rand. trees prepr. |  5 | 2 (-)      | 3 (-)
fast ICA                    |  4 | 3 (-)      | 1 (1)
feature agglomeration       |  4 | 3 (-)      | 1 (-)
kernel PCA                  |  5 | 1 (-)      | 4 (3)
rand. kitchen sinks         |  2 | -          | 2 (-)
linear SVM prepr.           |  3 | 1 (-)      | 2 (-)
no preprocessing            |  - | -          | -
nystroem sampler            |  5 | 1 (-)      | 4 (3)
PCA                         |  2 | 1 (-)      | 1 (-)
polynomial                  |  3 | 2 (-)      | 1 (-)
random trees embed.         |  4 | -          | 4 (-)
select percentile           |  2 | 1 (-)      | 1 (-)
select rates                |  3 | 2 (-)      | 1 (-)
one-hot encoding            |  2 | 1 (-)      | 1 (1)
imputation                  |  1 | 1 (-)      | -
balancing                   |  1 | 1 (-)      | -
rescaling                   |  1 | 1 (-)      | -

Table 1: Number of hyperparameters for each possible classifier (left) and feature preprocessing method (right) for a binary classification dataset in dense representation. Tables for sparse binary classification and sparse/dense multiclass classification datasets can be found in Section E of the supplementary material, Tables 2a, 3a, 4a, 2b, 3b and 4b. We distinguish between categorical (cat) hyperparameters with discrete values and continuous (cont) numerical hyperparameters. Numbers in brackets are conditional hyperparameters, which are only relevant when another parameter has a certain value.
In contrast to AUTO-WEKA [2], we focused our configuration space on base classifiers and excluded meta-models and ensembles that
are themselves parameterized by one or more base classifiers. While such ensembles increased
AUTO-WEKA's number of hyperparameters by almost a factor of five (to 786), AUTO-SKLEARN
"only" features 110 hyperparameters. We instead construct complex ensembles using our post-hoc
method from Section 3.2. Compared to AUTO-WEKA, this is much more data-efficient: in AUTO-WEKA, evaluating the performance of an ensemble with 5 components requires the construction and
evaluation of 5 models; in contrast, in AUTO-SKLEARN, ensembles come largely for free, and it is
possible to mix and match models evaluated at arbitrary times during the optimization.
The preprocessing methods for datasets in dense representation in AUTO-SKLEARN are listed in
Table 1b (and described in detail in Section A.2 of the supplementary material). They comprise data
preprocessors (which change the feature values and are always used when they apply) and feature
preprocessors (which change the actual set of features, and only one of which [or none] is used). Data
preprocessing includes rescaling of the inputs, imputation of missing values, one-hot encoding and
balancing of the target classes. The 14 possible feature preprocessing methods can be categorized into
feature selection (2), kernel approximation (2), matrix decomposition (3), embeddings (1), feature
clustering (1), polynomial feature expansion (1) and methods that use a classifier for feature selection
(2). For example, L1 -regularized linear SVMs fitted to the data can be used for feature selection by
eliminating features corresponding to zero-valued model coefficients.
As with every robust real-world system, we had to handle many more important details in AUTO SKLEARN ; we describe these in Section B of the supplementary material.
5 Comparing AUTO-SKLEARN to AUTO-WEKA and HYPEROPT-SKLEARN
As a baseline experiment, we compared the performance of vanilla AUTO-SKLEARN (without our
improvements) to AUTO-WEKA and HYPEROPT-SKLEARN, reproducing the experimental setup
with 21 datasets of the paper introducing AUTO-WEKA [2]. We describe this setup in detail in
Section G in the supplementary material.
Table 2 shows that AUTO-SKLEARN performed statistically significantly better than AUTO-WEKA
in 6/21 cases, tied it in 12 cases, and lost against it in 3. For the three datasets where AUTO-WEKA performed best, we found that in more than 50% of its runs the best classifier it chose is not
implemented in scikit-learn (trees with a pruning component). So far, HYPEROPT-SKLEARN is more
of a proof-of-concept (inviting the user to adapt the configuration space to her own needs) than
a full AutoML system. The current version crashes when presented with sparse data and missing
values. It also crashes on Cifar-10 due to a memory limit which we set for all optimizers to enable a
fair comparison.
dataset         |    AS |    AW |    HS
Abalone         | 73.50 | 73.50 | 76.21
Amazon          | 16.00 | 30.00 | 16.22
Car             |  0.39 |  0.00 |  0.39
Cifar-10        | 51.70 | 56.95 |     -
Cifar-10 Small  | 54.81 | 56.20 | 57.95
Convex          | 17.53 | 21.80 | 19.18
Dexter          |  5.56 |  8.33 |     -
Dorothea        |  5.51 |  6.38 |     -
German Credit   | 27.00 | 28.33 | 27.67
Gisette         |  1.62 |  2.29 |  2.29
KDD09 Appetency |  1.74 |  1.74 |     -
KR-vs-KP        |  0.42 |  0.31 |  0.42
Madelon         | 12.44 | 18.21 | 14.74
MNIST Basic     |  2.84 |  2.84 |  2.82
MRBI            | 46.92 | 60.34 | 55.79
Secom           |  7.87 |  8.09 |     -
Semeion         |  5.24 |  5.24 |  5.87
Shuttle         |  0.01 |  0.01 |  0.05
Waveform        | 14.93 | 14.13 | 14.07
Wine Quality    | 33.76 | 33.36 | 34.72
Yeast           | 40.67 | 37.75 | 38.45

Table 2: Test set classification error of AUTO-WEKA (AW), vanilla AUTO-SKLEARN (AS) and HYPEROPT-SKLEARN (HS), as in the original evaluation of AUTO-WEKA [2]. We show median percent error across 100 000 bootstrap samples (based on 10 runs), simulating 4 parallel runs. Bold numbers indicate the best result. Underlined results are not statistically significantly different from the best according to a bootstrap test with p = 0.05.
[Figure 3 sketch: average rank (y-axis, 1.8 to 3.0) versus time [sec] (x-axis, 500 to 3500) for vanilla auto-sklearn, auto-sklearn + ensemble, auto-sklearn + meta-learning, and auto-sklearn + meta-learning + ensemble.]
Figure 3: Average rank of all four AUTO-SKLEARN variants (ranked by balanced test error rate (BER)) across
140 datasets. Note that ranks are a relative measure of performance (here, the rank of all methods has to add up
to 10), and hence an improvement in BER of one method can worsen the rank of another. The supplementary
material shows the same plot on a log-scale to show the time overhead of meta-feature and ensemble computation.
On the 16 datasets on which it ran, it statistically tied the best optimizer in 9 cases
and lost against it in 7.
6 Evaluation of the proposed AutoML improvements
In order to evaluate the robustness and general applicability of our proposed AutoML system on
a broad range of datasets, we gathered 140 binary and multiclass classification datasets from the
OpenML repository [18], only selecting datasets with at least 1000 data points to allow robust
performance evaluations. These datasets cover a diverse range of applications, such as text classification, digit and letter recognition, gene sequence and RNA classification, advertisement, particle
classification for telescope data, and cancer detection in tissue samples. We list all datasets in Table 7
and 8 in the supplementary material and provide their unique OpenML identifiers for reproducibility.
Since the class distribution in many of these datasets is quite imbalanced we evaluated all AutoML
methods using a measure called balanced classification error rate (BER). We define balanced error
rate as the average of the proportion of wrong classifications in each class. In comparison to standard
classification error (the average overall error), this measure (the average of the class-wise error)
assigns equal weight to all classes. We note that balanced error or accuracy measures are often used
in machine learning competitions (e.g., the AutoML challenge [1] uses balanced accuracy).
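A minimal sketch of the balanced error rate used here, written out for clarity (inputs are assumed to be NumPy arrays):

```python
import numpy as np

def balanced_error_rate(y_true, y_pred):
    """Average over classes of the per-class misclassification rate."""
    classes = np.unique(y_true)
    per_class_err = [
        np.mean(y_pred[y_true == c] != c)  # fraction of class-c points misclassified
        for c in classes
    ]
    return float(np.mean(per_class_err))
```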
We performed 10 runs of AUTO - SKLEARN both with and without meta-learning and with and without
ensemble prediction on each of the datasets. To study their performance under rigid time constraints,
and also due to computational resource constraints, we limited the CPU time for each run to 1 hour; we
also limited the runtime for a single model to a tenth of this (6 minutes). To not evaluate performance
on data sets already used for meta-learning, we performed a leave-one-dataset-out validation: when
evaluating on dataset D, we only used meta-information from the 139 other datasets.
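The leave-one-dataset-out protocol can be sketched as follows; `run_automl` and the dictionary layout are illustrative stand-ins for the actual experiment harness.

```python
def leave_one_dataset_out(datasets, run_automl):
    """Evaluate meta-learning without leakage: when testing on dataset `name`,
    only the remaining datasets may contribute meta-information."""
    results = {}
    for name, data in datasets.items():
        meta_repo = {k: v for k, v in datasets.items() if k != name}
        results[name] = run_automl(data, meta_repository=meta_repo)
    return results
```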
Figure 3 shows the average ranks over time of the four AUTO-SKLEARN versions we tested. We
observe that both of our new methods yielded substantial improvements over vanilla AUTO - SKLEARN.
The most striking result is that meta-learning yielded drastic improvements starting with the first
method                  |    38 |    46 |   179 |   184 |    554 |   772 |   917 |  1049 |  1111 |  1120 |  1128 |    293 |   389
AUTO-SKLEARN            |  2.15 |  3.76 | 16.99 | 10.32 |   1.55 | 46.85 | 10.22 | 12.93 | 23.70 | 13.81 |  4.21 |   2.86 | 19.65
AdaBoost                |  2.68 |  4.65 | 17.03 | 10.52 |   2.42 | 49.68 |  9.11 | 12.53 | 23.16 | 13.54 |  4.89 |   4.07 | 22.98
Bernoulli naïve Bayes   | 50.22 |     - | 19.27 |     - |      - | 47.90 | 25.83 | 15.50 | 28.40 | 18.81 |  4.71 |  24.30 |     -
decision tree           |  2.15 |  5.62 | 18.31 | 17.46 |  12.00 | 47.75 | 11.00 | 19.31 | 24.40 | 17.45 |  9.30 |   5.03 | 33.14
extreml. rand. trees    | 18.06 |  4.74 | 17.09 | 11.10 |   2.91 | 45.62 | 10.22 | 17.18 | 24.47 | 13.86 |  3.89 |   3.59 | 19.38
Gaussian naïve Bayes    | 11.22 |  7.88 | 21.77 | 64.74 |  10.52 | 48.83 | 33.94 | 26.23 | 29.59 | 21.50 |  4.77 |  32.44 | 29.18
gradient boosting       |  1.77 |  3.49 | 17.00 | 10.42 |   3.86 | 48.15 | 10.11 | 13.38 | 22.93 | 13.61 |  4.58 |  24.48 | 19.20
kNN                     | 50.00 |  7.57 | 22.23 | 31.10 |   2.68 | 48.00 | 11.11 | 23.80 | 50.30 | 17.23 |  4.59 |   4.86 | 30.87
LDA                     |  8.55 |  8.67 | 18.93 | 35.44 |   3.34 | 46.74 | 34.22 | 25.12 | 24.11 | 15.48 |  4.58 |  24.40 | 19.68
linear SVM              | 16.29 |  8.31 | 17.30 | 15.76 |   2.23 | 48.38 | 18.67 | 17.28 | 23.99 | 14.94 |  4.83 |  14.16 | 17.95
kernel SVM              | 17.89 |  5.36 | 17.57 | 12.52 |   1.50 | 48.66 |  6.78 | 21.44 | 23.56 | 14.17 |  4.59 | 100.00 | 22.04
multinomial naïve Bayes | 46.99 |  7.55 | 18.97 | 27.13 |  10.37 | 47.21 | 25.50 | 26.40 | 27.67 | 18.33 |  4.46 |  24.20 | 20.04
passive aggressive      | 50.00 |  9.23 | 22.29 | 20.01 | 100.00 | 48.75 | 20.67 | 29.25 | 43.79 | 16.37 |  5.65 |  21.34 | 20.14
QDA                     |  8.78 |  7.57 | 19.06 | 47.18 |   2.75 | 47.67 | 30.44 | 21.38 | 25.86 | 15.62 |  5.59 |  28.68 | 39.57
random forest           |  2.34 |  4.20 | 17.24 | 10.98 |   3.08 | 47.71 | 10.83 | 13.75 | 28.06 | 13.70 |  3.83 |   2.57 | 20.66
Linear Class. (SGD)     | 15.82 |  7.31 | 17.01 | 12.76 |   2.50 | 47.93 | 18.33 | 19.92 | 23.36 | 14.66 |  4.33 |  15.54 | 17.99

Table 3: Median balanced test error rate (BER) of optimizing AUTO-SKLEARN subspaces for each classification
method (and all preprocessors), as well as the whole configuration space of AUTO-SKLEARN, on 13 datasets
(columns are OpenML dataset IDs). All optimization runs were allowed to run for 24 hours except for
AUTO-SKLEARN which ran for 48 hours. Bold numbers indicate the best result; underlined results are not
statistically significantly different from the best according to a bootstrap test using the same setup as for Table 2.

method                      |    38 |    46 |    179 |   184 |    554 |   772 |   917 |  1049 |   1111 |  1120 |  1128 |    293 |   389
AUTO-SKLEARN                |  2.15 |  3.76 |  16.99 | 10.32 |   1.55 | 46.85 | 10.22 | 12.93 |  23.70 | 13.81 |  4.21 |   2.86 | 19.65
densifier                   |     - |     - |      - |     - |      - |     - |     - |     - |      - |     - |     - |  24.40 | 20.63
extreml. rand. trees prepr. |  4.03 |  4.98 |  17.83 | 55.78 |   1.56 | 47.90 |  8.33 | 20.36 |  23.36 | 16.29 |  4.90 |   3.41 | 21.40
fast ICA                    |  7.27 |  7.95 |  17.24 | 19.96 |   2.52 | 48.65 | 16.06 | 19.92 |  24.69 | 14.22 |  4.96 |      - |     -
feature agglomeration       |  2.24 |  4.40 |  16.92 | 11.31 |   1.65 | 48.62 | 10.33 | 13.14 |  23.73 | 13.73 |  4.76 |      - |     -
kernel PCA                  |  5.84 |  8.74 | 100.00 | 36.52 | 100.00 | 47.59 | 20.94 | 19.57 | 100.00 | 14.57 |  4.21 | 100.00 | 17.50
rand. kitchen sinks         |  8.57 |  8.41 |  17.34 | 28.05 | 100.00 | 47.68 | 35.44 | 20.06 |  25.25 | 14.82 |  5.08 |  19.30 | 19.66
linear SVM prepr.           |  2.28 |  4.25 |  16.84 |  9.92 |   2.21 | 47.72 |  8.67 | 13.28 |  23.43 | 14.02 |  4.52 |   3.01 | 19.89
no preprocessing            |  2.28 |  4.52 |  16.97 | 11.43 |   1.60 | 48.34 |  9.44 | 15.84 |  22.27 | 13.85 |  4.59 |   2.66 | 20.87
nystroem sampler            |  7.70 |  8.48 |  17.30 | 25.53 |   2.21 | 48.06 | 37.83 | 18.96 |  23.95 | 14.66 |  4.08 |  20.94 | 18.46
PCA                         |  7.23 |  8.40 |  17.64 | 21.15 |   1.65 | 47.30 | 22.33 | 17.22 |  23.25 | 14.23 |  4.59 |      - |     -
polynomial                  |  2.90 |  4.21 |  16.94 | 10.54 | 100.00 | 48.00 |  9.11 | 12.95 |  26.94 | 13.22 | 50.00 |      - | 18.50
random trees embed.         |     - |  7.51 |  17.05 | 12.68 |   3.48 | 47.84 | 17.67 | 18.52 |  26.68 | 15.03 |  9.23 |   8.05 | 44.83
select percentile           |  2.20 |  4.17 |  17.09 | 45.03 |   1.46 | 47.56 | 10.00 | 11.94 |  23.53 | 13.65 |  4.33 |   2.86 | 20.17
select rates                |  2.28 |  4.68 |  16.86 | 10.47 |   1.70 | 48.43 | 10.44 | 14.38 |  23.33 | 13.67 |  4.08 |   2.74 | 19.18
truncatedSVD                |     - |     - |      - |     - |      - |     - |     - |     - |      - |     - |     - |   4.05 | 21.58

Table 4: Like Table 3, but instead optimizing subspaces for each preprocessing method (and all classifiers).
configuration it selected and lasting until the end of the experiment. We note that the improvement
was most pronounced in the beginning and that over time, vanilla AUTO-SKLEARN also found good
solutions without meta-learning, letting it catch up on some datasets (thus improving its overall rank).
Moreover, both of our methods complement each other: our automated ensemble construction
improved both vanilla AUTO-SKLEARN and AUTO-SKLEARN with meta-learning. Interestingly, the
ensemble's influence on the performance started earlier for the meta-learning version. We believe
that this is because meta-learning produces better machine learning models earlier, which can be
directly combined into a strong ensemble; but when run longer, vanilla AUTO-SKLEARN without
meta-learning also benefits from automated ensemble construction.
7 Detailed analysis of AUTO-SKLEARN components
We now study AUTO-SKLEARN's individual classifiers and preprocessors, compared to jointly
optimizing all methods, in order to obtain insights into their peak performance and robustness. Ideally,
we would have liked to study all combinations of a single classifier and a single preprocessor in
isolation, but with 15 classifiers and 14 preprocessors this was infeasible; rather, when studying the
performance of a single classifier, we still optimized over all preprocessors, and vice versa. To obtain
a more detailed analysis, we focused on a subset of datasets but extended the configuration budget for
optimizing all methods from one hour to one day and to two days for AUTO-SKLEARN. Specifically,
we clustered our 140 datasets with g-means [27] based on the dataset meta-features and used one
dataset from each of the resulting 13 clusters (see Table 6 in the supplementary material for the list of
datasets). We note that, in total, these extensive experiments required 10.7 CPU years.
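The dataset-selection step can be sketched as below; note that the paper uses g-means [27], which chooses the number of clusters automatically, whereas this sketch substitutes plain k-means with a fixed k as a simple stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_datasets(meta_features, n_clusters=13, seed=0):
    """Pick one dataset per cluster of the meta-feature space."""
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(meta_features)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        center = km.cluster_centers_[c]
        # Choose the member closest to the cluster center as the representative.
        dists = np.linalg.norm(meta_features[members] - center, axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return reps
```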
Table 3 compares the results of the various classification methods against AUTO-SKLEARN. Overall,
as expected, random forests, extremely randomized trees, AdaBoost, and gradient boosting, showed
[Figure 4 sketch: two panels of balanced error rate versus time [sec] (log scale), comparing auto-sklearn, gradient boosting, kernel SVM, and random forest. (a) MNIST (OpenML dataset ID 554); (b) Promise pc4 (OpenML dataset ID 1049).]
Figure 4: Performance of a subset of classifiers compared to AUTO-SKLEARN over time. We show median test
error rate and the fifth and 95th percentile over time for optimizing three classifiers separately with optimizing
the joint space. A plot with all classifiers can be found in Figure 4 in the supplementary material. While
AUTO-SKLEARN is inferior in the beginning, in the end its performance is close to the best method.
the most robust performance, and SVMs showed strong peak performance for some datasets. Besides
a variety of strong classifiers, there are also several models which could not compete: The decision
tree, passive aggressive, kNN, Gaussian NB, LDA and QDA were statistically significantly inferior
to the best classifier on most datasets. Finally, the table indicates that no single method was the best
choice for all datasets. As shown in the table and also visualized for two example datasets in Figure
4, optimizing the joint configuration space of AUTO-SKLEARN led to the most robust performance.
A plot of ranks over time (Figure 2 and 3 in the supplementary material) quantifies this across all
13 datasets, showing that AUTO-SKLEARN starts with reasonable but not optimal performance and
effectively searches its more general configuration space to converge to the best overall performance
over time.
Table 4 compares the results of the various preprocessors against AUTO-SKLEARN. As for the
comparison of classifiers above, AUTO-SKLEARN showed the most robust performance: It performed
best on three of the datasets and was not statistically significantly worse than the best preprocessor on
another 8 of 13.
8 Discussion and Conclusion
We demonstrated that our new AutoML system AUTO-SKLEARN performs favorably against the
previous state of the art in AutoML, and that our meta-learning and ensemble improvements for
AutoML yield further efficiency and robustness. This finding is backed by the fact that AUTO-SKLEARN won the auto-track in the first phase of ChaLearn's ongoing AutoML challenge. In this
paper, we did not evaluate the use of AUTO-SKLEARN for interactive machine learning with an expert
in the loop and weeks of CPU power, but we note that that mode has also led to a third place in
the human track of the same challenge. As such, we believe that AUTO-SKLEARN is a promising
system for use by both machine learning novices and experts. The source code of AUTO-SKLEARN is
available under an open source license at https://github.com/automl/auto-sklearn.
Our system also has some shortcomings, which we would like to remove in future work. As one
example, we have not yet tackled regression or semi-supervised problems. Most importantly, though,
the focus on scikit-learn implied a focus on small to medium-sized datasets, and an obvious direction
for future work will be to apply our methods to modern deep learning systems that yield state-of-the-art performance on large datasets; we expect that in that domain especially automated ensemble
construction will lead to tangible performance improvements over Bayesian optimization.
Acknowledgments
This work was supported by the German Research Foundation (DFG), under Priority Programme Autonomous
Learning (SPP 1527, grant HU 1900/3-1), under Emmy Noether grant HU 1900/2-1, and under the BrainLinks-BrainTools Cluster of Excellence (grant number EXC 1086).
References
[1] I. Guyon, K. Bennett, G. Cawley, H. Escalante, S. Escalera, T. Ho, N. Macià, B. Ray, M. Saeed, A. Statnikov, and E. Viegas. Design of the 2015 ChaLearn AutoML Challenge. In Proc. of IJCNN'15, 2015.
[2] C. Thornton, F. Hutter, H. Hoos, and K. Leyton-Brown. Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. In Proc. of KDD'13, pages 847-855, 2013.
[3] E. Brochu, V. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. CoRR, abs/1012.2599, 2010.
[4] M. Feurer, J. Springenberg, and F. Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In Proc. of AAAI'15, pages 1128-1135, 2015.
[5] M. Reif, F. Shafait, and A. Dengel. Meta-learning for evolutionary parameter optimization of classifiers. Machine Learning, 87:357-380, 2012.
[6] T. Gomes, R. Prudêncio, C. Soares, A. Rossi, and A. Carvalho. Combining meta-learning and search techniques to select parameters for support vector machines. Neurocomputing, 75(1):3-13, 2012.
[7] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. JMLR, 12:2825-2830, 2011.
[8] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. Witten. The WEKA data mining software: An update. SIGKDD, 11(1):10-18, 2009.
[9] F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proc. of LION'11, pages 507-523, 2011.
[10] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Proc. of NIPS'11, pages 2546-2554, 2011.
[11] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proc. of NIPS'12, pages 2960-2968, 2012.
[12] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown. Towards an empirical foundation for assessing Bayesian optimization of hyperparameters. In NIPS Workshop on Bayesian Optimization in Theory and Practice, 2013.
[13] B. Komer, J. Bergstra, and C. Eliasmith. Hyperopt-sklearn: Automatic hyperparameter configuration for scikit-learn. In ICML workshop on AutoML, 2014.
[14] L. Breiman. Random forests. MLJ, 45:5-32, 2001.
[15] P. Brazdil, C. Giraud-Carrier, C. Soares, and R. Vilalta. Metalearning: Applications to Data Mining. Springer, 2009.
[16] R. Bardenet, M. Brendel, B. Kégl, and M. Sebag. Collaborative hyperparameter tuning. In Proc. of ICML'13 [28], pages 199-207.
[17] D. Yogatama and G. Mann. Efficient transfer learning method for automatic hyperparameter tuning. In Proc. of AISTATS'14, pages 1077-1085, 2014.
[18] J. Vanschoren, J. van Rijn, B. Bischl, and L. Torgo. OpenML: Networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013.
[19] D. Michie, D. Spiegelhalter, C. Taylor, and J. Campbell. Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
[20] A. Kalousis. Algorithm Selection via Meta-Learning. PhD thesis, University of Geneve, 2002.
[21] B. Pfahringer, H. Bensusan, and C. Giraud-Carrier. Meta-learning by landmarking various learning algorithms. In Proc. of ICML'00, pages 743-750, 2000.
[22] I. Guyon, A. Saffari, G. Dror, and G. Cawley. Model selection: Beyond the Bayesian/Frequentist divide. JMLR, 11:61-87, 2010.
[23] A. Lacoste, M. Marchand, F. Laviolette, and H. Larochelle. Agnostic Bayesian learning of ensembles. In Proc. of ICML'14, pages 611-619, 2014.
[24] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. Ensemble selection from libraries of models. In Proc. of ICML'04, page 18, 2004.
[25] R. Caruana, A. Munson, and A. Niculescu-Mizil. Getting the most out of ensemble selection. In Proc. of ICDM'06, pages 828-833, 2006.
[26] D. Wolpert. Stacked generalization. Neural Networks, 5:241-259, 1992.
[27] G. Hamerly and C. Elkan. Learning the k in k-means. In Proc. of NIPS'04, pages 281-288, 2004.
[28] Proc. of ICML'13, 2014.
5,383 | 5,873 | A Framework for Individualizing Predictions of Disease
Trajectories by Exploiting Multi-Resolution Structure
Suchi Saria
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21218
[email protected]
Peter Schulam
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21218
[email protected]
Abstract
For many complex diseases, there is a wide variety of ways in which an individual can manifest the disease. The challenge of personalized medicine is to
develop tools that can accurately predict the trajectory of an individual's disease,
which can in turn enable clinicians to optimize treatments. We represent an individual's disease trajectory as a continuous-valued continuous-time function describing the severity of the disease over time. We propose a hierarchical latent
variable model that individualizes predictions of disease trajectories. This model
shares statistical strength across observations at different resolutions: the population, subpopulation and the individual level. We describe an algorithm for learning
population and subpopulation parameters offline, and an online procedure for dynamically learning individual-specific parameters. Finally, we validate our model
on the task of predicting the course of interstitial lung disease, a leading cause
of death among patients with the autoimmune disease scleroderma. We compare
our approach against state-of-the-art and demonstrate significant improvements in
predictive accuracy.
1 Introduction
In complex, chronic diseases such as autism, lupus, and Parkinson's, the way the disease manifests
may vary greatly across individuals [1]. For example, in scleroderma, the disease we use as a running
example in this work, individuals may be affected across six organ systems?the lungs, heart, skin,
gastrointestinal tract, kidneys, and vasculature?to varying extents [2]. For any single organ system,
some individuals may show rapid decline throughout the course of their disease, while others may
show early decline but stabilize later on. Often in such diseases, the most effective drugs have
strong side-effects. With tools that can accurately predict an individual?s disease activity trajectory,
clinicians can more aggressively treat those at greatest risk early, rather than waiting until the disease
progresses to a high level of severity. To monitor the disease, physicians use clinical markers to
quantify severity. In scleroderma, for example, PFVC is a clinical marker used to measure lung
severity. The task of individualized prediction of disease activity trajectories is that of using an
individual?s clinical history to predict the future course of a clinical marker; in other words, the goal
is to predict a function representing a trajectory that is updated dynamically using an individual?s
previous markers and individual characteristics.
Predicting disease activity trajectories presents a number of challenges. First, there are multiple latent factors that cause heterogeneity across individuals. One such factor is the underlying biological
mechanism driving the disease. For example, two different genetic mutations may trigger distinct
disease trajectories (e.g. as in Figures 1a and 1b). If we could divide individuals into groups according to their mechanisms?or disease subtypes (see e.g. [3, 4, 5, 6])?it would be straightforward
to fit separate models to each subpopulation. In most complex diseases, however, the mechanisms
are poorly understood and clear definitions of subtypes do not exist. If subtype alone determined
trajectory, then we could cluster individuals. However, other unobserved individual-specific factors
1
such as behavior and prior exposures affect health and can cause different trajectories across individuals of the same subtype. For instance, a chronic smoker will typically have unhealthy lungs and
so may have a trajectory that is consistently lower than a non-smoker?s, which we must account for
using individual-specific parameters. An individual?s trajectory may also be influenced by transient
factors?e.g. an infection unrelated to the disease that makes it difficult to breath (similar to the
?dips? in Figure 1c or the third row in Figure 1d). This can cause marker values to temporarily drop,
and may be hard to distinguish from disease activity. We show that these factors can be arranged in
a hierarchy (population, subpopulation, and individual), but that not all levels of the hierarchy are
observed. Finally, the functional outcome is a rich target, and therefore more challenging to model
than scalar outcomes. In addition, the marker data is observed in continuous-time and is irregularly
sampled, making commonly used discrete-time approaches to time series modeling (or approaches
that rely on imputation) not well suited to this domain.
Related work. The majority of predictive models in medicine explain variability in the target outcome by conditioning on observed risk factors alone. However, these do not account for latent
sources of variability such as those discussed above. Further, these models are typically cross-sectional: they use features from data measured up until the current time to predict a clinical marker
or outcome at a fixed point in the future. As an example, consider the mortality prediction model by
Lee et al. [7], where logistic regression is used to integrate features into a prediction about the probability of death within 30 days for a given patient. To predict the outcome at multiple time points,
typically separate models are trained. Moreover, these models use data from a fixed-size window,
rather than a growing history.
Researchers in the statistics and machine learning communities have proposed solutions that address a number of these limitations. Most related to our work is that by Rizopoulos [8], where the
focus is on making dynamical predictions about a time-to-event outcome (e.g. time until death).
Their model updates predictions over time using all previously observed values of a longitudinally
recorded marker. Besides conditioning on observed factors, they account for latent heterogeneity
across individuals by allowing for individual-specific adjustments to the population-level model?
e.g. for a longitudinal marker, deviations from the population baseline are modeled using random
effects by sampling individual-specific intercepts from a common distribution. Other closely related
work by Proust-Lima et al. [9] tackle a similar problem as Rizopoulos, but address heterogeneity
using a mixture model.
Another common approach to dynamical predictions is to use Markov models such as order-p
autoregressive models (AR-p), HMMs, state space models, and dynamic Bayesian networks
(see e.g. in [10]). Although such models naturally make dynamic predictions using the full history
by forward-filtering, they typically assume discrete, regularly-spaced observation times. Gaussian
processes (GPs) are a commonly used alternative for handling continuous-time observations; see
Roberts et al. [11] for a recent review of GP time series models. Since Gaussian processes are nonparametric generative models of functions, they naturally produce functional predictions dynamically by using the posterior predictive conditioned on the observed data. Mixtures of GPs have been
applied to model heterogeneity in the covariance structure across time series (e.g. [12]), however as
noted in Roberts et al., appropriate mean functions are critical for accurate forecasts using GPs. In
our work, an individual's trajectory is expressed as a GP with a highly structured mean comprising
population, subpopulation and individual-level components where some components are observed
and others require inference.
More broadly, multi-level models have been applied in many fields to model heterogeneous collections of units that are organized within a hierarchy [13]. For example, in predicting student grades
over time, individuals within a school may have parameters sampled from the school-level model,
and the school-level model parameters in turn may be sampled from a county-specific model. In our
setting, the hierarchical structure (which individuals belong to the same subgroup) is not known a
priori. Similar ideas are studied in multi-task learning, where relationships between distinct prediction tasks are used to encourage similar parameters. This has been applied to modeling trajectories
by treating predictions at each time point as a separate task and enforcing similarity between submodels close in time [14]. This approach is limited, however, in that it models a finite number
of times. Others, more recently, have developed models for disease trajectories (see [15, 16] and
references within) but these focus on retrospective analysis to discover disease etiology rather than
dynamical prediction. Schulam et al. [16] incorporate differences in trajectories due to subtypes and
individual-specific factors. We build upon this work here. Finally, recommender systems also share
information across individuals with the aim of tailoring predictions (see e.g. [17, 18, 19]), but the
task is otherwise distinct from ours.
[Figure 1 appears here. The graphical model in panel (e) annotates its nodes as: subtype B-spline coefficients in R^{d_z}; subtype marginal model coefficients in R^{q_z}; subtype indicator z_i in {1, ..., G}; subtype marginal model features in R^{q_z}; structured noise GP hyper-parameters; structured noise function f_i; individual model coefficients in R^{d_l}; individual model covariance matrix in R^{d_l x d_l}; population model features-to-coefficient map in R^{d_p x q_p}; population model coefficients in R^{d_p}; and population model features in R^{q_p}.]
Figure 1: Plots (a-c) show example marker trajectories. Plot (d) shows adjustments to a population and subpopulation fit (row 1). Row 2 makes an individual-specific long-term adjustment. Row 3 makes short-term structured noise adjustments. Plot (e) shows the proposed graphical model. Levels in the hierarchy are color-coded. Model parameters are enclosed in dashed circles. Observed random variables are shaded.
Contributions. We propose a hierarchical model of disease activity trajectories that directly addresses common (latent and observed) sources of heterogeneity in complex, chronic diseases using three levels: the population level, subpopulation level, and individual level. The model discovers
the subpopulation structure automatically, and infers individual-level structure over time when making predictions. In addition, we include a Gaussian process as a model of structured noise, which
is designed to explain away temporary sources of variability that are unrelated to disease activity.
Together, these four components allow individual trajectories to be highly heterogeneous while simultaneously sharing statistical strength across observations at different "resolutions" of the data.
When making predictions for a given individual, we use Bayesian inference to dynamically update
our posterior belief over individual-specific parameters given the clinical history and use the posterior predictive to produce a trajectory estimate. Finally, we evaluate our approach by developing a
state-of-the-art trajectory prediction tool for lung disease in scleroderma. We train our model using
a large, national dataset containing individuals with scleroderma tracked over 20 years and compare
our predictions against alternative approaches. We find that our approach yields significant gains in
predictive accuracy of disease activity trajectories.
2 Disease Trajectory Model
We describe a hierarchical model of an individual's clinical marker values. The graphical model is shown in Figure 1e. For each individual $i$, we use $N_i$ to denote the number of observed markers. We denote each individual observation using $y_{ij}$ and its measurement time using $t_{ij}$, where $j \in \{1, \ldots, N_i\}$. We use $\vec{y}_i \in \mathbb{R}^{N_i}$ and $\vec{t}_i \in \mathbb{R}^{N_i}$ to denote all of individual $i$'s marker values and measurement times respectively. In the following discussion, $\Phi(t_{ij})$ denotes a column vector containing a basis expansion of the time $t_{ij}$, and we use $\Phi(\vec{t}_i) = [\Phi(t_{i1}), \ldots, \Phi(t_{iN_i})]^\top$ to denote the matrix containing the basis expansion of points in $\vec{t}_i$ in each of its rows. We model the $j$th marker value for individual $i$ as a normally distributed random variable with a mean assumed to be the sum of four terms: a population component, a subpopulation component, an individual component, and a structured noise component:
$$y_{ij} \sim \mathcal{N}\Big(\underbrace{\Phi_p(t_{ij})^\top \Theta\, \vec{x}_{ip}}_{\text{(A) population}} + \underbrace{\Phi_z(t_{ij})^\top \vec{\beta}_{z_i}}_{\text{(B) subpopulation}} + \underbrace{\Phi_\ell(t_{ij})^\top \vec{b}_i}_{\text{(C) individual}} + \underbrace{f_i(t_{ij})}_{\text{(D) structured noise}},\ \sigma^2\Big). \quad (1)$$
The four terms in the sum serve two purposes. First, they allow for a number of different sources
of variation to influence the observed marker value, which allows for heterogeneity both across and
within individuals. Second, they share statistical strength across different subsets of observations.
The population component shares strength across all observations. The subpopulation component
shares strength across observations belonging to subgroups of individuals. The individual component shares strength across all observations belonging to the same individual. Finally, the structured
noise shares information across observations belonging to the same individual that are measured at
similar times. Predicting an individual's trajectory involves estimating her subtype and individual-specific parameters as new clinical data becomes available.^1 We describe each of the components in detail below.
Population level. The population model predicts aspects of an individual's disease activity trajectory using observed baseline characteristics (e.g. gender and race), which are represented using the feature vector $\vec{x}_{ip}$. This sub-model is shown within the orange box in Figure 1e. Here we assume that this component is a linear model where the coefficients are a function of the features $\vec{x}_{ip} \in \mathbb{R}^{q_p}$. The predicted value of the $j$th marker of individual $i$ measured at time $t_{ij}$ is shown in Eq. 1 (A), where $\Phi_p(t) \in \mathbb{R}^{d_p}$ is a basis expansion of the observation time and $\Theta \in \mathbb{R}^{d_p \times q_p}$ is a matrix used as a linear map from an individual's covariates $\vec{x}_{ip}$ to coefficients $\vec{\beta}_i \in \mathbb{R}^{d_p}$. At this level, individuals with similar covariates will have similar coefficients. The matrix $\Theta$ is learned offline.
Subpopulation level. We model an individual's subtype using a discrete-valued latent variable $z_i \in \{1, \ldots, G\}$, where $G$ is the number of subtypes. We associate each subtype with a unique disease activity trajectory represented using B-splines, where the number and location of the knots and the degree of the polynomial pieces are fixed prior to learning. These hyper-parameters determine a basis expansion $\Phi_z(t) \in \mathbb{R}^{d_z}$ mapping a time $t$ to the B-spline basis function values at that time. Trajectories for each subtype are parameterized by a vector of coefficients $\vec{\beta}_g \in \mathbb{R}^{d_z}$ for $g \in \{1, \ldots, G\}$, which are learned offline. Under subtype $z_i$, the predicted value of marker $y_{ij}$ measured at time $t_{ij}$ is shown in Eq. 1 (B). This component explains differences such as those observed between the trajectories in Figures 1a and 1b. In many cases, features at baseline may be predictive of subtype. For example, in scleroderma, the types of antibody an individual produces (i.e. the presence of certain proteins in the blood) are correlated with certain trajectories. We can improve predictive performance by conditioning on baseline covariates to infer the subtype. To do
this, we use a multinomial logistic regression to define feature-dependent marginal probabilities: $z_i \mid \vec{x}_{iz} \sim \mathrm{Mult}(\pi_{1:G}(\vec{x}_{iz}))$, where $\pi_g(\vec{x}_{iz}) \propto e^{\vec{w}_g^\top \vec{x}_{iz}}$. We denote the weights of the multinomial regression using $\vec{w}_{1:G}$, where the weights of the first class are constrained to be $\vec{0}$ to ensure model identifiability. The remaining weights are learned offline.
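As a concrete illustration of these feature-dependent marginals (a minimal sketch with our own names, not code from the paper), the probabilities amount to a softmax whose first row of weights is pinned to zero:

```python
import numpy as np

def subtype_marginals(x_iz, W):
    """pi_{1:G}(x_iz): softmax over subtype scores w_g^T x_iz.

    W has shape (G, q_z); its first row is all zeros for identifiability.
    """
    logits = W @ x_iz
    logits -= logits.max()          # numerical stabilization
    p = np.exp(logits)
    return p / p.sum()

# Hypothetical example with G = 3 subtypes and q_z = 4 binary covariates:
W = np.vstack([np.zeros(4), np.random.randn(2, 4)])
print(subtype_marginals(np.array([1.0, 0.0, 1.0, 0.0]), W))
```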
Individual level. This level models deviations from the population and subpopulation models using parameters that are learned dynamically as the individual's clinical history grows. Here, we parameterize the individual component using a linear model with basis expansion $\Phi_\ell(t) \in \mathbb{R}^{d_\ell}$ and individual-specific coefficients $\vec{b}_i \in \mathbb{R}^{d_\ell}$. An individual's coefficients are modeled as latent variables with marginal distribution $\vec{b}_i \sim \mathcal{N}(\vec{0}, \Sigma_b)$. For individual $i$, the predicted value of marker $y_{ij}$ measured at time $t_{ij}$ is shown in Eq. 1 (C). This component can explain, for example, differences in overall health due to an unobserved characteristic such as chronic smoking, which may cause atypically lower lung function than what is predicted by the population and subpopulation components. Such an adjustment is illustrated across the first and second rows of Figure 1d.
Structured noise. Finally, the structured noise component $f_i$ captures transient trends. For example, an infection may cause an individual's lung function to temporarily appear more restricted than it actually is, which may cause short-term trends like those shown in Figure 1c and the third row of Figure 1d. We treat $f_i$ as a function-valued latent variable and model it using a Gaussian process with zero-valued mean function and Ornstein-Uhlenbeck (OU) covariance function: $K_{OU}(t_1, t_2) = a^2 \exp\big(-\ell^{-1} |t_1 - t_2|\big)$. The amplitude $a$ controls the magnitude of the structured noise that we expect to see and the length-scale $\ell$ controls the length of time over which we expect these temporary trends to occur. The OU kernel is ideal for modeling such deviations as it is both mean-reverting and draws from the corresponding stochastic process are only first-order continuous, which eliminates long-range dependencies between deviations [20]. Applications in other domains may require different kernel structures motivated by properties of the noise in the trajectories.
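For concreteness, a minimal sketch of this covariance function (variable names are ours; the values a = 6 and ell = 2 match the hyper-parameters reported in the experiments below):

```python
import numpy as np

def k_ou(t1, t2, a=6.0, ell=2.0):
    """OU covariance K_OU(t1, t2) = a^2 exp(-|t1 - t2| / ell).

    t1, t2: 1-D arrays of times; returns the len(t1) x len(t2) Gram matrix.
    """
    return a**2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / ell)

t = np.array([0.0, 0.5, 3.0])
print(k_ou(t, t))   # correlation decays quickly beyond ~2 time units
```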
^1 The model focuses on predicting the long-term trajectory of an individual when left untreated. In many chronic conditions, as is the case for scleroderma, drugs only provide short-term relief (accounted for in our model by the individual-specific adjustments). If treatments that alter the long-term course are available and commonly prescribed, then these should be included within the model as an additional component that influences the trajectory.
2.1 Learning
Objective function. To learn the parameters of our model $\Lambda = \{\Theta, \vec{w}_{1:G}, \vec{\beta}_{1:G}, \Sigma_b, a, \ell, \sigma^2\}$, we maximize the observed-data log-likelihood (i.e. the probability of all individuals' marker values $\vec{y}_i$ given measurement times $\vec{t}_i$ and features $\{\vec{x}_{ip}, \vec{x}_{iz}\}$). This requires marginalizing over the latent variables $\{z_i, \vec{b}_i, f_i\}$ for each individual. This yields a mixture of multivariate normals:
$$P(\vec{y}_i \mid X_i, \Lambda) = \sum_{z_i=1}^{G} \pi_{z_i}(\vec{x}_{iz})\, \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} + \Phi_z(\vec{t}_i)\, \vec{\beta}_{z_i},\; K(\vec{t}_i, \vec{t}_i)\big), \quad (2)$$
where $K(t_1, t_2) = \Phi_\ell(t_1)^\top \Sigma_b\, \Phi_\ell(t_2) + K_{OU}(t_1, t_2) + \sigma^2 \mathbb{I}(t_1 = t_2)$. The observed-data log-likelihood for all individuals is therefore $\mathcal{L}(\Lambda) = \sum_{i=1}^{M} \log P(\vec{y}_i \mid X_i, \Lambda)$. A more detailed derivation is provided in the supplement.
Optimizing the objective. To maximize the observed-data log-likelihood with respect to $\Lambda$, we partition the parameters into two subsets. The first subset, $\Lambda_1 = \{\Sigma_b, a, \ell, \sigma^2\}$, contains values that parameterize the covariance function $K(t_1, t_2)$ above. As is often done when designing the kernel of a Gaussian process, we use a combination of domain knowledge to choose candidate values and model selection using observed-data log-likelihood as a criterion for choosing among candidates [20]. The second subset, $\Lambda_2 = \{\Theta, \vec{w}_{1:G}, \vec{\beta}_{1:G}\}$, contains values that parameterize the mean of the multivariate normal distribution in Equation 2. We learn these parameters using expectation maximization (EM) to find a local maximum of the observed-data log-likelihood.
Expectation step. All parameters related to $\vec{b}_i$ and $f_i$ are limited to the covariance kernel and are not optimized using EM. We therefore only need to consider the subtype indicators $z_i$ as unobserved in the expectation step. Because $z_i$ is discrete, its posterior is computed by normalizing the joint probability of $z_i$ and $\vec{y}_i$. Let $\pi^*_{ig}$ denote the posterior probability that individual $i$ has subtype $g \in \{1, \ldots, G\}$; then we have
$$\pi^*_{ig} \propto \pi_g(\vec{x}_{iz})\, \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} + \Phi_z(\vec{t}_i)\, \vec{\beta}_g,\; K(\vec{t}_i, \vec{t}_i)\big). \quad (3)$$
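A compact sketch of Eqs. 2 and 3 for a single individual, assuming the population mean vector and the basis matrices have been precomputed (all names are ours):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def component_logliks(y, K, mean_pop, Phi_z, betas):
    """log N(y | mean_pop + Phi_z beta_g, K) for each subtype g."""
    return np.array([
        multivariate_normal.logpdf(y, mean=mean_pop + Phi_z @ beta_g, cov=K)
        for beta_g in betas
    ])

def e_step_and_loglik(y, K, mean_pop, Phi_z, betas, pi):
    """Responsibilities pi*_{ig} (Eq. 3) and log P(y | X, Lambda) (Eq. 2)."""
    log_joint = np.log(pi) + component_logliks(y, K, mean_pop, Phi_z, betas)
    log_marginal = logsumexp(log_joint)       # this individual's log-likelihood
    resp = np.exp(log_joint - log_marginal)   # normalizes the joint over subtypes
    return resp, log_marginal
```

Here `K` is the marginal covariance $\Phi_\ell \Sigma_b \Phi_\ell^\top + K_{OU} + \sigma^2 I$ from Eq. 2.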
Maximization step. In the maximization step, we optimize the marginal probability of the soft assignments under the multinomial logistic regression model with respect to $\vec{w}_{1:G}$ using gradient-based methods. To optimize the expected complete-data log-likelihood with respect to $\Theta$ and $\vec{\beta}_{1:G}$, we note that the mean of the multivariate normal for each individual is a linear function of these parameters. Holding $\Theta$ fixed, we can therefore solve for $\vec{\beta}_{1:G}$ in closed form and vice versa. We use a block coordinate ascent approach, alternating between solving for $\Theta$ and $\vec{\beta}_{1:G}$ until convergence. Because the expected complete-data log-likelihood is concave with respect to all parameters in $\Lambda_2$, each maximization step is guaranteed to converge. We provide additional details in the supplement.
2.2 Prediction
Our prediction $\hat{y}(t'_i)$ for the value of the trajectory at time $t'_i$ is the expectation of the marker $y'_i$ under the posterior predictive conditioned on observed markers $\vec{y}_i$ measured at times $\vec{t}_i$ thus far. This requires evaluating the following expression:
$$\hat{y}(t'_i) = \sum_{z_i=1}^{G} \int_{\mathbb{R}^{d_\ell}} \int_{\mathbb{R}^{N_i}} \underbrace{\mathbb{E}\big[y'_i \mid z_i, \vec{b}_i, f_i, t'_i\big]}_{\text{prediction given latent vars.}}\; \underbrace{P\big(z_i, \vec{b}_i, f_i \mid \vec{y}_i, X_i, \Lambda\big)}_{\text{posterior over latent vars.}}\, df_i\, d\vec{b}_i \quad (4)$$
$$= \mathbb{E}^*_{z_i, \vec{b}_i, f_i}\Big[\Phi_p(t'_i)^\top \Theta\, \vec{x}_{ip} + \Phi_z(t'_i)^\top \vec{\beta}_{z_i} + \Phi_\ell(t'_i)^\top \vec{b}_i + f_i(t'_i)\Big] \quad (5)$$
$$= \underbrace{\Phi_p(t'_i)^\top \Theta\, \vec{x}_{ip}}_{\text{population prediction}} + \underbrace{\Phi_z(t'_i)^\top \overbrace{\mathbb{E}^*_{z_i}\big[\vec{\beta}_{z_i}\big]}^{\vec{\beta}^*_i\ (\text{Eq. 7})}}_{\text{subpopulation prediction}} + \underbrace{\Phi_\ell(t'_i)^\top \overbrace{\mathbb{E}^*_{\vec{b}_i}\big[\vec{b}_i\big]}^{\vec{b}^*_i\ (\text{Eq. 10})}}_{\text{individual prediction}} + \underbrace{\overbrace{\mathbb{E}^*_{f_i}\big[f_i(t'_i)\big]}^{f^*_i(t'_i)\ (\text{Eq. 12})}}_{\text{structured noise prediction}}, \quad (6)$$
where $\mathbb{E}^*$ denotes an expectation conditioned on $\vec{y}_i, X_i, \Lambda$. In moving from Eq. 4 to 5, we have written the integral as an expectation and substituted the inner expectation with the mean of the normal distribution in Eq. 1. From Eq. 5 to 6, we use linearity of expectation. Eqs. 7, 10, and 12
below show how the expectations in Eq. 6 are computed. An expanded version of these steps is provided in the supplement.
[Figure 2 appears here: two rows of four panels each showing PFVC trajectories against years since first seen, annotated with the posterior probabilities of the top two subtypes.]
Figure 2: Plots (a) and (c) show dynamic predictions using the proposed model for two individuals. Red markers are unobserved. Blue shows the trajectory predicted using the most likely subtype, and green shows the second most likely. Plot (b) shows dynamic predictions using the B-spline GP baseline. Plot (d) shows predictions made using the proposed model without individual-specific adjustments.
Computing the population prediction is straightforward as all quantities are observed. To compute the subpopulation prediction, we need to compute the marginal posterior over $z_i$, which we used in the expectation step above (Eq. 3). The expected subtype coefficients are therefore
$$\vec{\beta}^*_i \triangleq \sum_{z_i=1}^{G} \pi^*_{iz_i}\, \vec{\beta}_{z_i}. \quad (7)$$
To compute the individual prediction, note that by conditioning on $z_i$, the integral over the likelihood with respect to $f_i$ and the prior over $\vec{b}_i$ form the likelihood and prior of a Bayesian linear regression. Let $K_f = K_{OU}(\vec{t}_i, \vec{t}_i) + \sigma^2 I$; then the posterior over $\vec{b}_i$ conditioned on $z_i$ is:
$$P\big(\vec{b}_i \mid z_i, \vec{y}_i, X_i, \Lambda\big) \propto \mathcal{N}\big(\vec{b}_i \mid \vec{0}, \Sigma_b\big)\, \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} + \Phi_z(\vec{t}_i)\, \vec{\beta}_{z_i} + \Phi_\ell(\vec{t}_i)\, \vec{b}_i,\; K_f\big). \quad (8)$$
Just as in Eq. 2, we have integrated over $f_i$, moving its effect from the mean of the normal distribution to the covariance. Because the prior over $\vec{b}_i$ is conjugate to the likelihood on the right side of Eq. 8, the posterior can be written in closed form as a normal distribution (see e.g. [10]). The mean of the left side of Eq. 8 is therefore
left side of Eq. 8 is therefore
h
i?1 h
i
> ?1
> ?1
~z
~
~
~
~
~
??1
+
?
(
t
)
K
?
(
t
)
?
(
t
)
K
~
y
?
?
(
t
)?
~
x
?
?
(
t
)
?
,
(9)
`
i
`
i
`
i
i
p
i
ip
z
i
i
b
f
f
To compute the unconditional posterior mean of $\vec{b}_i$ we take the expectation of Eq. 9 with respect to the posterior over $z_i$. Eq. 9 is linear in $\vec{\beta}_{z_i}$, so we can directly replace $\vec{\beta}_{z_i}$ with its mean (Eq. 7):
$$\vec{b}^*_i \triangleq \Big[\Sigma_b^{-1} + \Phi_\ell(\vec{t}_i)^\top K_f^{-1} \Phi_\ell(\vec{t}_i)\Big]^{-1} \Big[\Phi_\ell(\vec{t}_i)^\top K_f^{-1}\big(\vec{y}_i - \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} - \Phi_z(\vec{t}_i)\, \vec{\beta}^*_i\big)\Big]. \quad (10)$$
Finally, to compute the structured noise prediction, note that conditioned on $z_i$ and $\vec{b}_i$, the GP prior and marker likelihood (Eq. 1) form a standard GP regression (see e.g. [20]). The conditional posterior of $f_i(t'_i)$ is therefore a GP with mean
$$K_{OU}(t'_i, \vec{t}_i)\big(K_{OU}(\vec{t}_i, \vec{t}_i) + \sigma^2 I\big)^{-1}\big(\vec{y}_i - \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} - \Phi_z(\vec{t}_i)\, \vec{\beta}_{z_i} - \Phi_\ell(\vec{t}_i)\, \vec{b}_i\big). \quad (11)$$
To compute the unconditional posterior expectation of $f_i(t'_i)$, we note that the expression above is linear in $z_i$ and $\vec{b}_i$ and so their expectations can be plugged in to obtain
$$f^*(t'_i) \triangleq K_{OU}(t'_i, \vec{t}_i)\big(K_{OU}(\vec{t}_i, \vec{t}_i) + \sigma^2 I\big)^{-1}\big(\vec{y}_i - \Phi_p(\vec{t}_i)\, \Theta\, \vec{x}_{ip} - \Phi_z(\vec{t}_i)\, \vec{\beta}^*_i - \Phi_\ell(\vec{t}_i)\, \vec{b}^*_i\big). \quad (12)$$
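Putting Eqs. 7, 10, and 12 together, a sketch of the full prediction step for one individual (array names are ours; `k_ou` is the OU kernel sketched earlier and `resp` holds the posterior subtype probabilities from Eq. 3):

```python
import numpy as np

def predict(t_new, t_obs, y, Phi_p_new, Phi_z_new, Phi_l_new,
            Phi_p, Phi_z, Phi_l, Theta, x_ip, betas, resp,
            Sigma_b, k_ou, sigma2):
    """Posterior predictive mean y_hat(t_new) via Eqs. 6, 7, 10, and 12."""
    beta_star = resp @ betas                                # Eq. 7
    resid = y - Phi_p @ Theta @ x_ip - Phi_z @ beta_star
    Kf = k_ou(t_obs, t_obs) + sigma2 * np.eye(len(t_obs))   # K_f
    A = np.linalg.inv(Sigma_b) + Phi_l.T @ np.linalg.solve(Kf, Phi_l)
    b_star = np.linalg.solve(A, Phi_l.T @ np.linalg.solve(Kf, resid))          # Eq. 10
    f_star = k_ou(t_new, t_obs) @ np.linalg.solve(Kf, resid - Phi_l @ b_star)  # Eq. 12
    return (Phi_p_new @ Theta @ x_ip + Phi_z_new @ beta_star
            + Phi_l_new @ b_star + f_star)                  # Eq. 6
```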
3 Experiments
We demonstrate our approach by building a tool to predict the lung disease trajectories of individuals with scleroderma. Lung disease is currently the leading cause of death among scleroderma
patients, and is notoriously difficult to treat because there are few predictors of decline and there is
tremendous variability across individual trajectories [21]. Clinicians track lung severity using percent of predicted forced vital capacity (PFVC), which is expected to drop as the disease progresses.
In addition, demographic variables and molecular test results are often available at baseline to aid
prognoses. We train and validate our model using data from the Johns Hopkins Scleroderma Center
patient registry, which is one of the largest in the world. To select individuals from the registry, we
used the following criteria. First, we include individuals who were seen at the clinic within two
years of their earliest scleroderma-related symptom. Second, we exclude all individuals with fewer
than two PFVC measurements after their first visit. Finally, we exclude individuals who received a
lung transplant. The dataset contains 672 individuals and a total of 4,992 PFVC measurements.
For the population model, we use constant functions (i.e. observed covariates adjust an individual's intercept). The population covariates ($\vec{x}_{ip}$) are gender, African American race, and indicators of ACA and Scl-70 antibodies, two proteins believed to be connected to scleroderma-related lung disease. Note that all features are binary. For the subpopulation B-splines, we set boundary knots at 0 and 25 years (the maximum observation time in our data set is 23 years), use two interior knots that divide the time period from 0-25 years into three equally spaced chunks, and use quadratics as the piecewise components. These B-spline hyperparameters (knots and polynomial degree) are also used for all baseline models. We select G = 9 subtypes using BIC. The covariates in the subtype marginal model ($\vec{x}_{iz}$) are the same used in the population model. For the individual model, we use linear functions. For the hyper-parameters $\Lambda_1 = \{\Sigma_b, a, \ell, \sigma^2\}$ we set $\Sigma_b$ to be a diagonal covariance matrix with entries $[16, 10^{-2}]$ along the diagonal, which correspond to intercept and slope variances respectively. Finally, we set $a = 6$, $\ell = 2$, and $\sigma^2 = 1$ using domain knowledge; we expect transient deviations to last around 2 years and to change PFVC by around $\pm 6$ units.
Baselines. First, to compare against typical approaches used in clinical medicine that condition on baseline covariates only (e.g. [22]), we fit a regression model conditioned on all covariates included in $\vec{x}_{iz}$ above. The mean is parameterized using B-spline bases ($\Phi(t)$) as:
$$\hat{y} \mid \vec{x}_{iz} = \Phi(t)^\top \Big(\vec{\beta}_0 + \sum_{x_i \text{ in } \vec{x}_{iz}} x_i\, \vec{\beta}_i + \sum_{x_i, x_j \text{ in pairs of } \vec{x}_{iz}} x_i x_j\, \vec{\beta}_{ij}\Big). \quad (13)$$
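A sketch of how the covariate part of such a design (main effects plus pairwise interactions over the binary covariates) can be assembled; the helper is hypothetical, not from the paper:

```python
import numpy as np
from itertools import combinations

def interaction_features(x):
    """[1, x_1, ..., x_q, x_1 x_2, x_1 x_3, ...]: intercept, main effects, pairs."""
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([[1.0], x, pairs])

x_iz = np.array([1.0, 0.0, 1.0, 1.0])   # e.g. gender, race, ACA, Scl-70
print(interaction_features(x_iz))
```

Each entry of this feature vector selects its own B-spline coefficient vector in Eq. 13.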
The second baseline is similar to [8] and [23] and extends the first baseline by accounting for
individual-specific heterogeneity. The model has a mean function identical to the first baseline and
individualizes predictions using a GP with the same kernel as in Equation 2 (using hyper-parameters
as above). Another natural approach is to explain heterogeneity by using a mixture model similar to
[9]. However, a mixture model cannot adequately explain away individual-specific sources of variability that are unrelated to subtype and therefore fails to recover subtypes that capture canonical
trajectories (we discuss this in detail in the supplemental section). The recovered subtypes from the
full model do not suffer from this issue. To make the comparison fair and to understand the extent
to which the individual-specific component contributes towards personalizing predictions, we create
a mixture model (Proposed w/ no personalization) where the subtypes are fixed to be the same as
those in the full model and the remaining parameters are learned. Note that this version does not
contain the individual-specific component.
Evaluation. We make predictions after one, two, and four years of follow-up. Errors are summarized within four disjoint time periods: (1, 2], (2, 4], (4, 8], and (8, 25] years.^2 To measure error, we use the absolute difference between the prediction and a smoothed version of the individual's observed trajectory. We estimate mean absolute error (MAE) using 10-fold CV at the level of individuals (i.e. all of an individual's data is held-out), and test for statistically significant reductions in error using a one-sided, paired t-test. For all models, we use the MAP estimate of the individual's trajectory. In the models that include subtypes, this means that we choose the trajectory predicted by the most likely subtype under the posterior. Although this discards information from the posterior, in our experience clinicians find this choice to be more interpretable.
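A sketch of this protocol (assuming scikit-learn for grouped CV and SciPy >= 1.6 for the one-sided test; `fit_fn` and `predict_fn` are placeholders for any of the models above):

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from scipy.stats import ttest_rel

def cv_errors(X, y, groups, fit_fn, predict_fn, n_splits=10):
    """10-fold CV at the level of individuals: all of a person's data is held out."""
    errors = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups):
        model = fit_fn(X[train_idx], y[train_idx])
        errors.append(np.abs(predict_fn(model, X[test_idx]) - y[test_idx]))
    return np.concatenate(errors)

# One-sided paired test that model A's errors are smaller than model B's:
# t, p = ttest_rel(errors_a, errors_b, alternative='less')
```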
Qualitative results. In Figure 2 we present dynamically updated predictions for two patients (one
per row, dynamic updates move left to right). Blue lines indicate the prediction under the most likely
subtype and green lines indicate the prediction under the second most likely. The first individual
^2 After the eighth year, data becomes too sparse to further divide this time span.
Predictions using 1 year of data
| Model                          | (1, 2] | % Im. | (2, 4] | % Im. | (4, 8] | % Im. | (8, 25] | % Im. |
| B-spline with Baseline Feats.  | 12.78  |       | 12.73  |       | 12.40  |       | 12.14   |       |
| B-spline + GP                  | 5.49   |       | 7.70   |       | 9.67   |       | 10.71   |       |
| Proposed                       | 5.26*  |       | 7.04   | 8.6   | 10.17  |       | 12.12   |       |
| Proposed w/ no personalization | 6.17   |       | 7.12   |       | 9.38   |       | 12.85   |       |

Predictions using 2 years of data
| Model                          | (2, 4] | % Im. | (4, 8] | % Im. | (8, 25] | % Im. |
| B-spline with Baseline Feats.  | 12.73  |       | 12.40  |       | 12.14   |       |
| B-spline + GP                  | 5.88   |       | 8.65   |       | 10.02   |       |
| Proposed                       | 5.48*  | 6.8   | 7.95*  | 8.1   | 9.53    |       |
| Proposed w/ no personalization | 6.00   |       | 8.12   |       | 11.39   |       |

Predictions using 4 years of data
| Model                          | (4, 8] | % Im. | (8, 25] | % Im. |
| B-spline with Baseline Feats.  | 12.40  |       | 12.14   |       |
| B-spline + GP                  | 6.00   |       | 8.88    |       |
| Proposed                       | 5.14*  | 14.3  | 7.58*   | 14.3  |
| Proposed w/ no personalization | 5.75   |       | 9.16    |       |

Table 1: MAE of PFVC predictions for the two baselines and the proposed model (* marks statistically significant results). "% Im." reports percent improvement over next best.
(Figure 2a) is a 50-year-old, white woman with Scl-70 antibodies, which are thought to be associated
with active lung disease. Within the first year, her disease seems stable, and the model predicts this
course with 57% confidence. After another year of data, the model shifts 21% of its belief to a
rapidly declining trajectory; likely in part due to the sudden dip in year 2. We contrast this with the
behavior of the B-spline GP shown in Figure 2b, which has limited capacity to express individualized
long-term behavior. We see that the model does not adequately adjust in light of the downward trend
between years one and two. To illustrate the value of including individual-specific adjustments, we
now turn to Figures 2c and 2d (which plot predictions made by the proposed model with and without
personalization respectively). This individual is a 60-year-old, white man who is Scl-70 negative,
which makes declining lung function less likely. Both models use the same set of subtypes, but
whereas the model without individual-specific adjustment does not consider the recovering subtype
to be likely until after year two, the full model shifts the recovering subtype trajectory downward
towards the man's initial PFVC value and identifies the correct trajectory using a single year of data.
Quantitative results. Table 1 reports MAE for the baselines and the proposed model. We note that
after observing two or more years of data, our model's errors are smaller than the two baselines (and
statistically significantly so in all but one comparison). Although the B-spline GP improves over the
first baseline, these results suggest that both subpopulation and individual-specific components enable more accurate predictions of an individual's future course as more data are observed. Moreover,
by comparing the proposed model with and without personalization, we see that subtypes alone are
not sufficient and that individual-specific adjustments are critical. These improvements also have
clinical significance. For example, individuals who drop by more than 10 PFVC are candidates for
aggressive immunosuppressive therapy. Out of the 7.5% of individuals in our data who decline by
more than 10 PFVC, our model predicts such a decline at twice the true-positive rate of the B-spline
GP (31% vs. 17%) and with a lower false-positive rate (81% vs. 90%).
4 Conclusion
We have described a hierarchical model for making individualized predictions of disease activity
trajectories that accounts for both latent and observed sources of heterogeneity. We empirically
demonstrated that using all elements of the proposed hierarchy allows our model to dynamically
personalize predictions and reduce error as more data about an individual is collected. Although
our analysis focused on scleroderma, our approach is more broadly applicable to other complex,
heterogeneous diseases [1]. Examples of such diseases include asthma [3], autism [4], and COPD
[5]. There are several promising directions for further developing the ideas presented here. First, we
observed that predictions are less accurate early in the disease course when little data is available to
learn the individual-specific adjustments. To address this shortcoming, it may be possible to leverage
time-dependent covariates in addition to the baseline covariates used here. Second, the quality of our
predictions depends upon the allowed types of individual-specific adjustments encoded in the model.
More sophisticated models of individual variation may further improve performance. Moreover,
approaches for automatically learning the class of possible adjustments would make it possible to
apply our approach to new diseases more quickly.
References
[1] J. Craig. Complex diseases: Research and applications. Nature Education, 1(1):184, 2008.
[2] J. Varga, C.P. Denton, and F.M. Wigley. Scleroderma: From Pathogenesis to Comprehensive Management. Springer Science & Business Media, 2012.
[3] J. Lötvall et al. Asthma endotypes: a new approach to classification of disease entities within the asthma syndrome. Journal of Allergy and Clinical Immunology, 127(2):355-360, 2011.
[4] L.D. Wiggins, D.L. Robins, L.B. Adamson, R. Bakeman, and C.C. Henrich. Support for a dimensional view of autism spectrum disorders in toddlers. Journal of Autism and Developmental Disorders, 42(2):191-200, 2012.
[5] P.J. Castaldi et al. Cluster analysis in the COPDGene study identifies subtypes of smokers with distinct patterns of airway disease and emphysema. Thorax, 2014.
[6] S. Saria and A. Goldenberg. Subtyping: What It Is and Its Role in Precision Medicine. IEEE Intelligent Systems, 30, 2015.
[7] D.S. Lee, P.C. Austin, J.L. Rouleau, P.P. Liu, D. Naimark, and J.V. Tu. Predicting mortality among patients hospitalized for heart failure: derivation and validation of a clinical model. JAMA, 290(19):2581-2587, 2003.
[8] D. Rizopoulos. Dynamic predictions and prospective accuracy in joint models for longitudinal and time-to-event data. Biometrics, 67(3):819-829, 2011.
[9] C. Proust-Lima et al. Joint latent class models for longitudinal and time-to-event data: A review. Statistical Methods in Medical Research, 23(1):74-90, 2014.
[10] K.P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[11] S. Roberts, M. Osborne, M. Ebden, S. Reece, N. Gibson, and S. Aigrain. Gaussian processes for time-series modelling. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1984):20110550, 2013.
[12] J.Q. Shi, R. Murray-Smith, and D.M. Titterington. Hierarchical Gaussian process mixtures for regression. Statistics and Computing, 15(1):31-41, 2005.
[13] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006.
[14] H. Wang et al. High-order multi-task feature learning to identify longitudinal phenotypic markers for Alzheimer's disease progression prediction. In Advances in Neural Information Processing Systems, pages 1277-1285, 2012.
[15] J. Ross and J. Dy. Nonparametric mixture of Gaussian processes with constraints. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1346-1354, 2013.
[16] P.F. Schulam, F.M. Wigley, and S. Saria. Clustering longitudinal clinical marker trajectories from electronic health data: Applications to phenotyping and endotype discovery. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[17] B.M. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, 2003.
[18] G. Adomavicius and A. Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. Knowledge and Data Engineering, IEEE Transactions on, 17(6):734-749, 2005.
[19] D. Sontag, K. Collins-Thompson, P.N. Bennett, R.W. White, S. Dumais, and B. Billerbeck. Probabilistic models for personalizing web search. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pages 433-442. ACM, 2012.
[20] C.E. Rasmussen and C.K. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[21] Y. Allanore et al. Systemic sclerosis. Nature Reviews Disease Primers, 2015.
[22] D. Khanna et al. Clinical course of lung physiology in patients with scleroderma and interstitial lung disease: analysis of the Scleroderma Lung Study placebo group. Arthritis & Rheumatism, 63(10):3078-3085, 2011.
[23] J.Q. Shi, B. Wang, E.J. Will, and R.M. West. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction. Stat. Med., 31(26):3165-3177, 2012.
Gaussian Process Random Fields
David A. Moore and Stuart J. Russell
Computer Science Division
University of California, Berkeley
Berkeley, CA 94709
{dmoore, russell}@cs.berkeley.edu
Abstract
Gaussian processes have been successful in both supervised and unsupervised
machine learning tasks, but their computational complexity has constrained practical applications. We introduce a new approximation for large-scale Gaussian
processes, the Gaussian Process Random Field (GPRF), in which local GPs are
coupled via pairwise potentials. The GPRF likelihood is a simple, tractable, and
parallelizable approximation to the full GP marginal likelihood, enabling latent
variable modeling and hyperparameter selection on large datasets. We demonstrate
its effectiveness on synthetic spatial data as well as a real-world application to
seismic event location.
1 Introduction
Many machine learning tasks can be framed as learning a function given noisy information about
its inputs and outputs. In regression and classification, we are given inputs and asked to predict the
outputs; by contrast, in latent variable modeling we are given a set of outputs and asked to reconstruct
the inputs that could have produced them. Gaussian processes (GPs) are a flexible class of probability
distributions on functions that allow us to approach function-learning problems from an appealingly
principled and clean Bayesian perspective. Unfortunately, the time complexity of exact GP inference
is $O(n^3)$, where n is the number of data points. This makes exact GP calculations infeasible for
real-world data sets with n > 10000.
Many approximations have been proposed to escape this limitation. One particularly simple approximation is to partition the input space into smaller blocks, replacing a single large GP with a multitude
of local ones. This gains tractability at the price of a potentially severe independence assumption.
In this paper we relax the strong independence assumptions of independent local GPs, proposing
instead a Markov random field (MRF) of local GPs, which we call a Gaussian Process Random
Field (GPRF). A GPRF couples local models via pairwise potentials that incorporate covariance
information. This yields a surrogate for the full GP marginal likelihood that is simple to implement
and can be tractably evaluated and optimized on large datasets, while still enforcing a smooth
covariance structure. The task of approximating the marginal likelihood is motivated by unsupervised
applications such as the GP latent variable model [1], but examining the predictions made by our
model also yields a novel interpretation of the Bayesian Committee Machine [2].
We begin by reviewing GPs and MRFs, and some existing approximation methods for large-scale
GPs. In Section 3 we present the GPRF objective and examine its properties as an approximation to
the full GP marginal likelihood. We then evaluate it on synthetic data as well as an application to
seismic event location.
1
Figure 1: Predictive distributions on a toy regression problem. (a) Full GP. (b) Local GPs. (c) Bayesian committee machine.
2 Background
2.1 Gaussian processes
Gaussian processes [3] are distributions on real-valued functions. GPs are parameterized by a mean function $\mu_\theta(x)$, typically assumed without loss of generality to be $\mu(x) = 0$, and a covariance function (sometimes called a kernel) $k_\theta(x, x')$, with hyperparameters $\theta$. A common choice is the squared exponential covariance, $k_{SE}(x, x') = \sigma_f^2 \exp\left(-\frac{1}{2}\|x - x'\|^2/\ell^2\right)$, with hyperparameters $\sigma_f^2$ and $\ell$ specifying respectively a prior variance and correlation lengthscale.
We say that a random function $f(x)$ is Gaussian process distributed if, for any $n$ input points $X$, the vector of function values $\mathbf{f} = f(X)$ is multivariate Gaussian, $\mathbf{f} \sim \mathcal{N}(0, k_\theta(X, X))$. In many applications we have access only to noisy observations $\mathbf{y} = \mathbf{f} + \varepsilon$ for some noise process $\varepsilon$. If the noise is iid Gaussian, i.e., $\varepsilon \sim \mathcal{N}(0, \sigma_n^2 I)$, then the observations are themselves Gaussian, $\mathbf{y} \sim \mathcal{N}(0, K_y)$, where $K_y = k_\theta(X, X) + \sigma_n^2 I$.
The most common application of GPs is to Bayesian regression [3], in which we attempt to predict the function values $\mathbf{f}^*$ at test points $X^*$ via the conditional distribution given the training data, $p(\mathbf{f}^* \mid \mathbf{y}; X, X^*, \theta)$. Sometimes, however, we do not observe the training inputs $X$, or we observe them only partially or noisily. This setting is known as the Gaussian Process Latent Variable Model (GP-LVM) [1]; it uses GPs as a model for unsupervised learning and nonlinear dimensionality reduction. The GP-LVM setting typically involves multi-dimensional observations, $Y = (\mathbf{y}^{(1)}, \ldots, \mathbf{y}^{(D)})$, with each output dimension $\mathbf{y}^{(d)}$ modeled as an independent Gaussian process. The input locations and/or hyperparameters are typically sought via maximization of the marginal likelihood
$$\mathcal{L}(X, \theta) = \log p(Y; X, \theta) = \sum_{i=1}^{D} \left(-\frac{1}{2} \log |K_y| - \frac{1}{2} \mathbf{y}_i^\top K_y^{-1} \mathbf{y}_i\right) + C = -\frac{D}{2} \log |K_y| - \frac{1}{2} \mathrm{tr}\left(K_y^{-1} Y Y^\top\right) + C, \quad (1)$$
though some recent work [4, 5] attempts to recover an approximate posterior on X by maximizing
a variational bound. Given a differentiable covariance function, this maximization is typically
performed by gradient-based methods, although local maxima can be a significant concern as the
marginal likelihood is generally non-convex.
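A sketch of Eq. 1 using a Cholesky factorization for numerical stability (helper names are ours):

```python
import numpy as np

def k_se(X1, X2, sf2=1.0, ell=1.0):
    """Squared exponential kernel sf2 * exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gp_lvm_marginal_loglik(X, Y, kernel=k_se, sigma_n2=0.01):
    """Eq. 1: -(D/2) log|K_y| - (1/2) tr(K_y^{-1} Y Y^T), up to a constant."""
    n, D = Y.shape
    Ky = kernel(X, X) + sigma_n2 * np.eye(n)
    L = np.linalg.cholesky(Ky)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))    # K_y^{-1} Y
    return -0.5 * D * logdet - 0.5 * np.sum(Y * alpha)
```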
2.2 Scalability and approximate inference
The main computational difficulty in GP methods is the need to invert or factor the kernel matrix Ky ,
which requires time cubic in n. In GP-LVM inference this must be done at every optimization step to
evaluate (1) and its derivatives.
This complexity has inspired a number of approximations. The most commonly studied are inducingpoint methods, in which the unknown function is represented by its values at a set of m inducing
points, where m ≪ n. These points can be chosen by maximizing the marginal likelihood in a
surrogate model [6, 7] or by minimizing the KL divergence between the approximate and exact GP
posteriors [8]. Inference in such models can typically be done in O(nm2 ) time, but this comes at
the price of reduced representational capacity: while smooth functions with long lengthscales may
be compactly represented by a small number of inducing points, for quickly-varying functions with
2
significant local structure it may be difficult to find any faithful representation more compact than the
complete set of training observations.
A separate class of approximations, so-called "local" GP methods [3, 9, 10], involves partitioning the
inputs into blocks of m points each, then modeling each block with an independent Gaussian process.
If the partition is spatially local, this corresponds to a covariance function that imposes independence
between function values in different regions of the input space. Computationally, each block requires
only $O(m^3)$ time; the total time is linear in the number of blocks. Local approximations preserve
short-lengthscale structure within each block, but their harsh independence assumptions can lead to
predictive discontinuities and inaccurate uncertainties (Figure 1b). These assumptions are problematic
for GP-LVM inference because the marginal likelihood becomes discontinuous at block boundaries.
Nonetheless, local GPs sometimes work very well in practice, achieving results comparable to more
sophisticated methods in a fraction of the time [11].
The Bayesian Committee Machine (BCM) [2] attempts to improve on independent local GPs by
averaging the predictions of multiple GP experts. The model is formally equivalent to an inducingpoint model in which the test points are the inducing points, i.e., it assumes that the training blocks
are conditionally independent given the test data. The BCM can yield high-quality predictions that
avoid the pitfalls of local GPs (Figure 1c), while maintaining scalability to very large datasets [12].
However, as a purely predictive approximation, it is unhelpful in the GP-LVM setting, where we are
interested in the likelihood of our training set irrespective of any particular test data. The desire for a
BCM-style approximation to the marginal likelihood was part of the motivation for this current work;
in Section 3.2 we show that the GPRF proposed in this paper can be viewed as such a model.
Mixture-of-experts models [13, 14] extend the local GP concept in a different direction: instead
of deterministically assigning points to GP models based on their spatial locations, they treat the
assignments as unobserved random variables and do inference over them. This allows the model to
adapt to different functional characteristics in different regions of the space, at the price of a more
difficult inference task. We are not aware of mixture-of-experts models being applied in the GP-LVM
setting, though this should in principle be possible.
Simple building blocks are often combined to create more complex approximations. The PIC
approximation [15] blends a global inducing-point model with local block-diagonal covariances, thus
capturing a mix of global and local structure, though with the same boundary discontinuities as in
"vanilla" local GPs. A related approach is the use of covariance functions with compact support [16]
to capture local variation in concert with global inducing points. [11] surveys and compares several
approximate GP regression methods on synthetic and real-world datasets.
Finally, we note here the similar title of [17], which is in fact orthogonal to the present work: they use
a random field as a prior on input locations, whereas this paper defines a random field decomposition
of the GP model itself, which may be combined with any prior on X.
2.3 Markov Random Fields
We recall some basic theory regarding Markov random fields (MRFs), also known as undirected graphical models [18]. A pairwise MRF consists of an undirected graph $(V, E)$, along with node potentials $\psi_i$ and edge potentials $\psi_{ij}$, which define an energy function on a random vector $\mathbf{y}$,
$$E(\mathbf{y}) = \sum_{i \in V} \psi_i(\mathbf{y}_i) + \sum_{(i,j) \in E} \psi_{ij}(\mathbf{y}_i, \mathbf{y}_j), \quad (2)$$
where $\mathbf{y}$ is partitioned into components $\mathbf{y}_i$ identified with nodes in the graph. This energy in turn defines a probability density, the "Gibbs distribution", given by $p(\mathbf{y}) = \frac{1}{Z} \exp(-E(\mathbf{y}))$, where $Z = \int \exp(-E(\mathbf{z}))\, d\mathbf{z}$ is a normalizing constant.
Gaussian random fields are the special case of pairwise MRFs in which the Gibbs distribution is multivariate Gaussian. Given a partition of $\mathbf{y}$ into sub-vectors $\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_M$, a zero-mean Gaussian distribution with covariance $K$ and precision matrix $J = K^{-1}$ can be expressed by potentials
$$\psi_i(\mathbf{y}_i) = -\frac{1}{2} \mathbf{y}_i^\top J_{ii} \mathbf{y}_i, \qquad \psi_{ij}(\mathbf{y}_i, \mathbf{y}_j) = -\mathbf{y}_i^\top J_{ij} \mathbf{y}_j, \quad (3)$$
where $J_{ij}$ is the submatrix of $J$ corresponding to the sub-vectors $\mathbf{y}_i, \mathbf{y}_j$. The normalizing constant $Z = (2\pi)^{n/2} |K|^{1/2}$ involves the determinant of the covariance matrix. Since edges whose potentials
are zero can be dropped without effect, the nonzero entries of the precision matrix can be seen as
specifying the edges present in the graph.
3 Gaussian Process Random Fields
We consider a vector of $n$ real-valued^1 observations $\mathbf{y} \sim \mathcal{N}(0, K_y)$ modeled by a GP, where $K_y$ is implicitly a function of input locations $X$ and hyperparameters $\theta$. Unless otherwise specified, all probabilities $p(\mathbf{y}_i)$, $p(\mathbf{y}_i, \mathbf{y}_j)$, etc., refer to marginals of this full GP. We would like to perform gradient-based optimization on the marginal likelihood (1) with respect to $X$ and/or $\theta$, but suppose that the cost of doing so directly is prohibitive.
In order to proceed, we assume a partition y = (y1 , y2 , . . . , yM ) of the observations into M
blocks of size at most m, with an implied corresponding partition X = (X1 , X2 , . . . , XM ) of the
(perhaps unobserved) inputs. The source of this partition is not a focus of the current work; we
might imagine that the blocks correspond to spatially local clusters of input points, assuming that
we have noisy observations of the X values or at least a reasonable guess at an initialization. We
let $K_{ij} = \mathrm{cov}_\theta(\mathbf{y}_i, \mathbf{y}_j)$ denote the appropriate submatrix of $K_y$, and $J_{ij}$ denote the corresponding submatrix of the precision matrix $J_y = K_y^{-1}$; note that $J_{ij} \neq (K_{ij})^{-1}$ in general.
3.1 The GPRF Objective
Given the precision matrix Jy , we could use (3) to represent the full GP distribution in factored
form as an MRF. This is not directly useful, since computing Jy requires cubic time. Instead we
propose approximating the marginal likelihood via a random field in which local GPs are connected
by pairwise potentials. Given an edge set which we will initially take to be the complete graph,
$E = \{(i, j) \mid 1 \le i < j \le M\}$, our approximate objective is
$$q_{GPRF}(\mathbf{y}; X, \theta) = \prod_{i=1}^{M} p(\mathbf{y}_i) \prod_{(i,j) \in E} \frac{p(\mathbf{y}_i, \mathbf{y}_j)}{p(\mathbf{y}_i)\, p(\mathbf{y}_j)} = \prod_{i=1}^{M} p(\mathbf{y}_i)^{1 - |E_i|} \prod_{(i,j) \in E} p(\mathbf{y}_i, \mathbf{y}_j), \quad (4)$$
where $E_i$ denotes the neighbors of $i$ in the graph, and $p(\mathbf{y}_i)$ and $p(\mathbf{y}_i, \mathbf{y}_j)$ are marginal probabilities under the full GP; equivalently they are the likelihoods of local GPs defined on the points $X_i$ and $X_i \cup X_j$ respectively. Note that these local likelihoods depend implicitly on $X$ and $\theta$. Taking the log, we obtain the energy function of an unnormalized MRF
$$\log q_{GPRF}(\mathbf{y}; X, \theta) = \sum_{i=1}^{M} (1 - |E_i|) \log p(\mathbf{y}_i) + \sum_{(i,j) \in E} \log p(\mathbf{y}_i, \mathbf{y}_j) \quad (5)$$
with potentials
$$\psi_i^{GPRF}(\mathbf{y}_i) = (1 - |E_i|) \log p(\mathbf{y}_i), \qquad \psi_{ij}^{GPRF}(\mathbf{y}_i, \mathbf{y}_j) = \log p(\mathbf{y}_i, \mathbf{y}_j). \quad (6)$$
We refer to the approximate objective (5) as $q_{GPRF}$ rather than $p_{GPRF}$ to emphasize that it is not in general a normalized probability density. It can be interpreted as a "Bethe-type" approximation [19], in which a joint density is approximated via overlapping pairwise marginals. In the special case that the full precision matrix $J_y$ induces a tree structure on the blocks of our partition, $q_{GPRF}$ recovers the exact marginal likelihood. (This is shown in the supplementary material.) In general this will not be the case, but in the spirit of loopy belief propagation [20], we consider the tree-structured case as an approximation for the general setting.
Before further analyzing the nature of the approximation, we first observe that as a sum of local
Gaussian log-densities, the objective (5) is straightforward to implement and fast to evaluate. Each
of the $O(M^2)$ pairwise densities requires $O((2m)^3) = O(m^3)$ time, for an overall complexity of $O(M^2 m^3) = O(n^2 m)$ when $M = n/m$. The quadratic dependence on $n$ cannot be avoided by any algorithm that computes similarities between all pairs of training points; however, in practice we will consider "local" modifications in which $E$ is something smaller than all pairs of blocks. For example, if each block is connected only to a fixed number of spatial neighbors, the complexity reduces to $O(nm^2)$, i.e., linear in $n$. In the special case where $E$ is the empty set, we recover the exact likelihood of independent local GPs.
^1 The extension to multiple independent outputs is straightforward.
It is also straightforward to obtain the gradient of (5) with respect to hyperparameters $\theta$ and inputs $X$, by summing the gradients of the local densities. The likelihood and gradient for each term in the
sum can be evaluated independently using only local subsets of the training data, enabling a simple
parallel implementation.
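A sketch of Eq. 5 built from local GP marginal likelihoods (our own helper names; `blocks` is a list of (X_i, y_i) pairs and `edges` a list of block-index pairs):

```python
import numpy as np

def gp_logpdf(X, y, kernel, sigma_n2):
    """log N(y | 0, k(X, X) + sigma_n2 I): a local GP marginal likelihood."""
    K = kernel(X, X) + sigma_n2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2 * np.pi))

def gprf_objective(blocks, edges, kernel, sigma_n2):
    """Eq. 5: sum_i (1 - |E_i|) log p(y_i) + sum_{(i,j) in E} log p(y_i, y_j)."""
    degree = np.zeros(len(blocks), dtype=int)
    pair_term = 0.0
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
        Xij = np.vstack([blocks[i][0], blocks[j][0]])
        yij = np.concatenate([blocks[i][1], blocks[j][1]])
        pair_term += gp_logpdf(Xij, yij, kernel, sigma_n2)
    unary_term = sum((1 - d) * gp_logpdf(X, y, kernel, sigma_n2)
                     for d, (X, y) in zip(degree, blocks))
    return unary_term + pair_term
```

Each term touches only its own blocks' data, so the loop parallelizes trivially.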
Having seen that $q_{GPRF}$ can be optimized efficiently, it remains for us to argue its validity as a proxy for the full GP marginal likelihood. Due to space constraints we defer proofs to the supplementary material, though our results are not difficult. We first show that, like the full marginal likelihood (1), $q_{GPRF}$ has the form of a Gaussian distribution, but with a different precision matrix.
Theorem 1. The objective $q_{GPRF}$ has the form of an unnormalized Gaussian density with precision matrix $\tilde{J}$, with blocks $\tilde{J}_{ij}$ given by
$$\tilde{J}_{ii} = K_{ii}^{-1} + \sum_{j \in E_i} \left(Q_{11}^{(ij)} - K_{ii}^{-1}\right), \qquad \tilde{J}_{ij} = \begin{cases} Q_{12}^{(ij)} & \text{if } (i, j) \in E \\ 0 & \text{otherwise,} \end{cases} \quad (7)$$
where $Q^{(ij)}$ is the local precision matrix defined as the inverse of the marginal covariance,
$$Q^{(ij)} = \begin{pmatrix} Q_{11}^{(ij)} & Q_{12}^{(ij)} \\ Q_{21}^{(ij)} & Q_{22}^{(ij)} \end{pmatrix} = \begin{pmatrix} K_{ii} & K_{ij} \\ K_{ji} & K_{jj} \end{pmatrix}^{-1}.$$
Although the Gaussian density represented by $q_{GPRF}$ is not in general normalized, we show that it is approximately normalized in a certain sense.
Theorem 2. The objective $q_{GPRF}$ is approximately normalized in the sense that the optimal value of the Bethe free energy [19],
$$F_B(b) = \sum_{i \in V} \int_{\mathbf{y}_i} b_i(\mathbf{y}_i) \ln \frac{b_i(\mathbf{y}_i)^{1 - |E_i|}}{\psi_i(\mathbf{y}_i)} + \sum_{(i,j) \in E} \int_{\mathbf{y}_i, \mathbf{y}_j} b_{ij}(\mathbf{y}_i, \mathbf{y}_j) \ln \frac{b_{ij}(\mathbf{y}_i, \mathbf{y}_j)}{\psi_{ij}(\mathbf{y}_i, \mathbf{y}_j)} \approx -\log Z, \quad (8)$$
the approximation to the normalizing constant found by loopy belief propagation, is precisely zero. Furthermore, this optimum is obtained when the pseudomarginals $b_i, b_{ij}$ are taken to be the true GP marginals $p_i, p_{ij}$.
This implies that loopy belief propagation run on a GPRF would recover the marginals of the true GP.
3.2 Predictive equivalence to the BCM
We have introduced $q_{GPRF}$ as a surrogate model for the training set $(X, \mathbf{y})$; however, it is natural to extend the GPRF to make predictions at a set of test points $X^*$, by including the function values $\mathbf{f}^* = f(X^*)$ as an $(M+1)$st block, with an edge to each of the training blocks. The resulting predictive distribution,
$$p_{GPRF}(\mathbf{f}^* \mid \mathbf{y}) \propto q_{GPRF}(\mathbf{f}^*, \mathbf{y}) = p(\mathbf{f}^*) \prod_{i=1}^{M} p(\mathbf{y}_i) \prod_{i=1}^{M} \frac{p(\mathbf{y}_i, \mathbf{f}^*)}{p(\mathbf{y}_i)\, p(\mathbf{f}^*)} \prod_{(i,j) \in E} \frac{p(\mathbf{y}_i, \mathbf{y}_j)}{p(\mathbf{y}_i)\, p(\mathbf{y}_j)}$$
$$\propto p(\mathbf{f}^*)^{1-M} \prod_{i=1}^{M} p(\mathbf{f}^* \mid \mathbf{y}_i), \quad (9)$$
corresponds exactly to the prediction of the Bayesian Committee Machine (BCM) [2]. This motivates
the GPRF as a natural extension of the BCM as a model for the training set, providing an alternative to
5
(a) Noisy observed locations: mean error 2.48.
(b) Full GP: 0.21.
(c) GPRF-100: 0.36. (d) FITC-500: 4.86.
(showing grid cells)
(with inducing points,
note contraction)
Figure 2: Inferred locations on synthetic data (n = 10000), colored by the first output dimension y1 .
the standard transductive interpretation of the BCM.2 A similar derivation shows that the conditional
distribution of any block yi given all other blocks yj6=i also takes the form of a BCM prediction,
suggesting the possibility of pseudolikelihood training [21], i.e., directly optimizing the quality of
BCM predictions on held-out blocks (not explored in this paper).
4
4.1
Experiments
Uniform Input Distribution
We first consider a 2D synthetic dataset intended to simulate spatial location tasks such as WiFiSLAM [22] or seismic event location (below), in which we observe high-dimensional measurements
but have only noisy information regarding the locations at which
? those measurements were taken.
We sample n points uniformly from the square of side length n to generate the true inputs X, then
sample 50-dimensional output Y from independent GPs with SE kernel k(r) = exp(?(r/`)2 ) for
2
` = 6.0 and noise standard deviation ?n = 0.1. The observed points X obs ? N (X, ?obs
I) arise by
corrupting X with isotropic Gaussian noise of standard deviation ?obs = 2. The parameters `, ?n ,
and ?obs were chosen to generate problems with interesting short-lengthscale structure for which
GP-LVM optimization could nontrivially improve the initial noisy locations. Figure 2a shows a
typical sample from this model.
For local GPs and GPRFs, we take the spatial partition to be a grid with n/m cells, where m is the
desired number of points per cell. The GPRF edge set E connects each cell to its eight neighbors
(Figure 2c), yielding linear time complexity O(nm2 ). During optimization, a practical choice is
necessary: do we use a fixed partition of the points, or re-assign points to cells as they cross spatial
boundaries? The latter corresponds to a coherent (block-diagonal) spatial covariance function, but
introduces discontinuities to the marginal likelihood. In our experiments the GPRF was not sensitive
to this choice, but local GPs performed more reliably with fixed spatial boundaries (in spite of the
discontinuities), so we used this approach for all experiments.
For comparison, we also evaluate the Sparse GP-LVM, implemented in GPy [23], which uses the
FITC approximation to the marginal likelihood [7]. (We also considered the Bayesian GP-LVM [4],
but found it to be more resource-intensive with no meaningful difference in results on this problem.)
Here the approximation parameter m is the number of inducing points.
We ran L-BFGS optimization to recover maximum a posteriori (MAP) locations, or local optima
thereof. Figure 3a shows mean location error (Euclidean distance) for n = 10000 points; at this size
it is tractable to compare directly to the full GP-LVM. The GPRF with a large block size (m=1111,
corresponding to a 3x3 grid) nearly matches the solution quality of the full GP while requiring less
time, while the local methods are quite fast to converge but become stuck at inferior optima. The
FITC optimization exhibits an interesting pathology: it initially moves towards a good solution but
then diverges towards what turns out to correspond to a contraction of the space (Figure 2d); we
conjecture this is because there are not enough inducing points to faithfully represent the full GP
2
The GPRF is still transductive, in the sense that adding a test block f ? will change the marginal distribution
on the training observations y, as can be seen explicitly in the precision matrix (7). The contribution of the
GPRF is that it provides a reasonable model for the training-set likelihood even in the absence of test points.
6
(a) Mean location error over time for (b) Mean error at convergence as a (c) Mean location error over time
n = 10000, including comparison function of n, with learned length- for n = 80000.
to full GP.
scale.
Figure 3: Results on synthetic data.
distribution over the entire space. A partial fix is to allow FITC to jointly optimize over locations and
the correlation lengthscale `; this yielded a biased lengthscale estimate `? ? 7.6 but more accurate
locations (FITC-500-` in Figure 3a).
To evaluate scaling behavior, we next considered problems of increasing size up to n = 80000.3 Out
of generosity to FITC we allowed each method to learn its own preferred lengthscale. Figure 3b
reports the solution quality at convergence, showing that even with an adaptive lengthscale, FITC
requires increasingly many inducing points to compete in large spatial domains. This is intractable
for larger problems due to O(m3 ) scaling; indeed, attempts to run at n > 55000 with 2000 inducing
points exceeded 32GB of available memory. Recently, more sophisticated inducing-point methods
have claimed scalability to very large datasets [24, 25], but they do so with m ? 1000; we expect
that they would hit the same fundamental scaling constraints for problems that inherently require
many inducing points.
On our largest synthetic problem, n = 80000, inducing-point approximations are intractable, as is
the full GP-LVM. Local GPs converge more quickly than GPRFs of equal block size, but the GPRFs
find higher-quality solutions (Figure 3c). After a short initial period, the best performance always
belongs to a GPRF, and at the conclusion of 24 hours the best GPRF solution achieves mean error
42% lower than the best local solution (0.18 vs 0.31).
4.2 Seismic event location
We next consider an application to seismic event location, which formed the motivation for this
work. Seismic waves can be viewed as high-dimensional vectors generated from an underlying threedimension manifold, namely the Earth?s crust. Nearby events tend to generate similar waveforms;
we can model this spatial correlation as a Gaussian process. Prior information regarding the event
locations is available from traditional travel-time-based location systems [26], which produce an
independent Gaussian uncertainty ellipse for each event.
A full probability model of seismic waveforms, accounting for background noise and performing
joint alignment of arrival times, is beyond the scope of this paper. To focus specifically on the ability
to approximate GP-LVM inference, we used real event locations but generated synthetic waveforms
by sampling from a 50-output GP using a Mat?ern kernel [3] with ? = 3/2 and a lengthscale of
40km. We also generated observed location estimates X obs , by corrupting the true locations with
3
The astute reader will wonder how we generated synthetic data on problems that are clearly too large
for an exact GP. For these synthetic problems as well as the seismic example below, the covariance matrix is
relatively sparse, with only ~2% of entries corresponding to points within six kernel lengthscales of each other.
By considering only these entries, we were able to draw samples using a sparse Cholesky factorization, although
this required approximately 30GB of RAM. Unfortunately, this approach does not straightforwardly extend to
GP-LVM inference under the exact GP, as the standard expression for the marginal likelihood derivatives
?K
1
?
y
log p(y) = tr
(Ky?1 y)(Ky?1 y)T ? Ky?1
?xi
2
?xi
involves the full precision matrix Ky?1 which is not sparse in general. Bypassing this expression via automatic
differentiation through the sparse Cholesky decomposition could perhaps allow exact GP-LVM inference to
scale to somewhat larger problems.
7
(a) Event map for seismic dataset.
(b) Mean location error over time.
Figure 4: Seismic event location task.
Gaussian noise of standard deviation 20km in each dimension. Given the observed waveforms and
noisy locations, we are interested in recovering the latitude, longitude, and depth of each event.
Our dataset consists of 107556 events detected at the Mankachi array station in Kazakstan between
2004 and 2012. Figure 4a shows the event locations, colored to reflect a principle axis tree partition
[27] into blocks of 400 points (tree construction time was negligible). The GPRF edge set contains
all pairs of blocks for which any two points had initial locations within one kernel lengthscale (40km)
of each other. We also evaluated longer-distance connections, but found that this relatively local edge
set had the best performance/time tradeoffs: eliminating edges not only speeds up each optimization
step, but in some cases actually yielded faster per-step convergence (perhaps because denser edge
sets tended to create large cliques for which the pairwise GPRF objective is a poor approximation).
Figure 4b shows the quality of recovered locations as a function of computation time; we jointly
optimized over event locations as well as two lengthscale parameters (surface distance and depth)
and the noise variance ?n2 . Local GPs perform quite well on this task, but the best GPRF achieves 7%
lower mean error than the best local GP model (12.8km vs 13.7km, respectively) given equal time. An
even better result can be obtained by using the results of a local GP optimization to initialize a GPRF.
Using the same partition (m = 800) for both local GPs and the GPRF, this ?hybrid? method gives the
lowest final error (12.2km), and is dominant across a wide range of wall clock times, suggesting it as
a promising practical approach for large GP-LVM optimizations.
5
Conclusions and Future Work
The Gaussian process random field is a tractable and effective surrogate for the GP marginal likelihood.
It has the flavor of approximate inference methods such as loopy belief propagation, but can be
analyzed precisely in terms of a deterministic approximation to the inverse covariance, and provides
a new training-time interpretation of the Bayesian Committee Machine. It is easy to implement and
can be straightforwardly parallelized.
One direction for future work involves finding partitions for which a GPRF performs well, e.g.,
partitions that induce a block near-tree structure. A perhaps related question is identifying when the
GPRF objective defines a normalizable probability distribution (beyond the case of an exact tree
structure) and under what circumstances it is a good approximation to the exact GP likelihood.
This evaluation in this paper focuses on spatial data; however, both local GPs and the BCM have been
successfully applied to high-dimensional regression problems [11, 12], so exploring the effectiveness
of the GPRF for dimensionality reduction tasks would also be interesting. Another useful avenue is
to integrate the GPRF framework with other approximations: since the GPRF and inducing-point
methods have complementary strengths ? the GPRF is useful for modeling a function over a large
space, while inducing points are useful when the density of available data in some region of the
space exceeds what is necessary to represent the function ? an integrated method might enable new
applications for which neither approach alone would be sufficient.
Acknowledgements
We thank the anonymous reviewers for their helpful suggestions. This work was supported by DTRA
grant #HDTRA-11110026, and by computing resources donated by Microsoft Research under an
Azure for Research grant.
8
References
[1] Neil D Lawrence. Gaussian process latent variable models for visualisation of high dimensional data.
Advances in Neural Information Processing Systems (NIPS), 2004.
[2] Volker Tresp. A Bayesian committee machine. Neural Computation, 12(11):2719?2741, 2000.
[3] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[4] Michalis K Titsias and Neil D Lawrence. Bayesian Gaussian process latent variable model. In International
Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[5] Andreas C. Damianou, Michalis K. Titsias, and Neil D. Lawrence. Variational Inference for Latent
Variables and Uncertain Inputs in Gaussian Processes. Journal of Machine Learning Research (JMLR),
2015.
[6] Joaquin Qui?nonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian
process regression. Journal of Machine Learning Research (JMLR), 6:1939?1959, 2005.
[7] Neil D Lawrence. Learning for larger datasets with the Gaussian process latent variable model. In
International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.
[8] Michalis K Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International
Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[9] Duy Nguyen-Tuong, Matthias Seeger, and Jan Peters. Model learning with local Gaussian process
regression. Advanced Robotics, 23(15):2015?2034, 2009.
[10] Chiwoo Park, Jianhua Z Huang, and Yu Ding. Domain decomposition approach for fast Gaussian process
regression of large spatial data sets. Journal of Machine Learning Research (JMLR), 12:1697?1728, 2011.
[11] Krzysztof Chalupka, Christopher KI Williams, and Iain Murray. A framework for evaluating approximation
methods for Gaussian process regression. Journal of Machine Learning Research (JMLR), 14:333?350,
2013.
[12] Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian Processes. In International Conference on
Machine Learning (ICML), 2015.
[13] Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. Advances
in Neural Information Processing Systems (NIPS), pages 881?888, 2002.
[14] Trung Nguyen and Edwin Bonilla. Fast allocation of Gaussian process experts. In International Conference
on Machine Learning (ICML), pages 145?153, 2014.
[15] Edward Snelson and Zoubin Ghahramani. Local and global sparse Gaussian process approximations. In
Artificial Intelligence and Statistics (AISTATS), 2007.
[16] Jarno Vanhatalo and Aki Vehtari. Modelling local and global phenomena with sparse Gaussian processes.
In Uncertainty in Artificial Intelligence (UAI), 2008.
[17] Guoqiang Zhong, Wu-Jun Li, Dit-Yan Yeung, Xinwen Hou, and Cheng-Lin Liu. Gaussian process latent
random field. In AAAI Conference on Artificial Intelligence, 2010.
[18] Daphne Koller and Nir Friedman. Probabilistic graphical models: Principles and techniques. MIT Press,
2009.
[19] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Bethe free energy, Kikuchi approximations, and
belief propagation algorithms. Advances in Neural Information Processing Systems (NIPS), 13, 2001.
[20] Kevin P Murphy, Yair Weiss, and Michael I Jordan. Loopy belief propagation for approximate inference:
An empirical study. In Uncertainty in Artificial Intelligence (UAI), pages 467?475, 1999.
[21] Julian Besag. Statistical analysis of non-lattice data. The Statistician, pages 179?195, 1975.
[22] Brian Ferris, Dieter Fox, and Neil D Lawrence. WiFi-SLAM using Gaussian process latent variable models.
In International Joint Conference on Artificial Intelligence (IJCAI), pages 2480?2485, 2007.
[23] The GPy authors. GPy: A Gaussian process framework in Python.
SheffieldML/GPy, 2012?2015.
http://github.com/
[24] James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. In Uncertainty in
Artificial Intelligence (UAI), page 282, 2013.
[25] Yarin Gal, Mark van der Wilk, and Carl Rasmussen. Distributed variational inference in sparse Gaussian
process regression and latent variable models. In Advances in Neural Information Processing Systems
(NIPS), 2014.
[26] International Seismological Centre. On-line Bulletin. Int. Seis. Cent., Thatcham, United Kingdom, 2015.
http://www.isc.ac.uk.
[27] James McNames. A fast nearest-neighbor algorithm based on a principal axis search tree. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 23(9):964?976, 2001.
9
| 5874 |@word determinant:1 eliminating:1 km:6 vanhatalo:1 covariance:14 decomposition:3 contraction:2 accounting:1 tr:2 igp:1 reduction:2 initial:3 liu:1 contains:1 united:1 existing:1 current:2 recovered:1 com:1 assigning:1 must:1 hou:1 partition:13 pseudomarginals:1 concert:1 v:2 alone:1 intelligence:10 prohibitive:1 guess:1 trung:1 isotropic:1 short:3 colored:2 provides:2 node:2 location:32 daphne:1 along:1 become:1 consists:2 introduce:1 x0:3 pairwise:8 indeed:1 behavior:1 themselves:1 examine:1 multi:1 inspired:1 freeman:1 pitfall:1 considering:1 increasing:1 becomes:1 begin:1 q21:1 underlying:1 lowest:1 what:3 appealingly:1 interpreted:1 proposing:1 unobserved:2 finding:1 differentiation:1 gal:1 berkeley:3 every:1 donated:1 exactly:1 k2:1 hit:1 uk:1 partitioning:1 crust:1 grant:2 before:1 negligible:1 lvm:14 local:44 treat:1 dropped:1 analyzing:1 approximately:3 pami:1 might:2 initialization:1 studied:1 equivalence:1 specifying:2 sheffieldml:1 factorization:1 bi:3 range:1 practical:3 faithful:1 yj:18 practice:2 block:28 implement:3 x3:1 jan:1 empirical:1 yan:1 seismological:1 induce:1 spite:1 zoubin:2 cannot:1 tuong:1 selection:1 optimize:1 equivalent:1 map:2 deterministic:1 dz:1 maximizing:2 reviewer:1 straightforward:3 williams:2 www:1 independently:1 convex:1 survey:1 identifying:1 factored:1 iain:1 array:1 variation:1 imagine:1 suppose:1 construction:1 exact:10 gps:24 us:2 carl:4 approximated:1 particularly:1 observed:4 ding:1 capture:1 region:3 connected:2 russell:2 ran:1 principled:1 vehtari:1 complexity:6 asked:2 depend:1 reviewing:1 predictive:5 purely:1 titsias:3 division:1 f2:2 duy:1 edwin:1 compactly:1 joint:3 represented:3 seis:1 derivation:1 fast:5 effective:1 lengthscale:10 detected:1 artificial:9 kevin:1 lengthscales:2 whose:1 quite:2 supplementary:2 valued:1 larger:3 say:1 relax:1 reconstruct:1 otherwise:2 denser:1 ability:1 cov:1 neil:6 statistic:4 gp:49 transductive:2 noisy:7 itself:1 jointly:2 final:1 differentiable:1 matthias:1 propose:1 jij:5 nonero:1 representational:1 inducing:16 ky:15 scalability:3 convergence:3 cluster:1 empty:1 optimum:3 diverges:1 produce:1 ijcai:1 kikuchi:1 ac:1 nearest:1 ij:15 strong:1 longitude:1 implemented:1 c:1 involves:5 come:1 q12:1 implies:1 recovering:1 direction:2 waveform:4 discontinuous:1 enable:1 material:2 kii:3 require:1 assign:1 fix:1 wall:1 anonymous:1 brian:1 extension:2 exploring:1 bypassing:1 considered:2 exp:4 lawrence:6 scope:1 predict:2 sought:1 achieves:2 earth:1 travel:1 title:1 sensitive:1 largest:1 create:2 faithfully:1 successfully:1 mit:2 clearly:1 gaussian:48 always:1 rather:1 avoid:1 zhong:1 varying:1 volker:1 focus:3 modelling:1 likelihood:26 contrast:1 seeger:1 generosity:1 besag:1 sense:3 posteriori:1 inference:14 helpful:1 mrfs:3 inaccurate:1 typically:5 entire:1 integrated:1 initially:2 visualisation:1 koller:1 interested:2 overall:1 classification:1 flexible:1 constrained:1 spatial:12 special:3 initialize:1 marginal:21 field:13 aware:1 jarno:1 having:1 ng:1 sampling:1 equal:2 stuart:1 park:1 unsupervised:3 nearly:1 wifi:1 yu:1 icml:2 future:2 report:1 escape:1 preserve:1 divergence:1 murphy:1 azure:1 intended:1 connects:1 statistician:1 microsoft:1 william:1 attempt:4 friedman:1 possibility:1 evaluation:1 severe:1 alignment:1 introduces:1 mixture:3 analyzed:1 yielding:1 held:1 slam:1 isc:1 accurate:1 edge:10 partial:1 necessary:2 orthogonal:1 unless:1 tree:7 fox:1 euclidean:1 desired:1 re:1 uncertain:1 kij:3 modeling:4 assignment:1 maximization:2 loopy:5 tractability:1 cost:1 deviation:3 entry:3 
subset:1 lattice:1 uniform:1 wonder:1 successful:1 examining:1 too:1 straightforwardly:2 synthetic:10 combined:2 st:1 density:9 fundamental:1 international:7 probabilistic:1 gpy:4 michael:1 ym:2 quickly:2 squared:1 reflect:1 q11:2 aaai:1 huang:1 expert:5 derivative:2 style:1 toy:1 li:1 jii:2 potential:8 suggesting:2 bfgs:1 int:1 explicitly:1 bonilla:1 performed:2 view:1 candela:1 doing:1 wave:1 recover:4 parallel:1 defer:1 contribution:1 square:1 formed:1 variance:2 characteristic:1 efficiently:1 yield:3 correspond:2 bayesian:10 produced:1 iid:1 damianou:1 tended:1 nonetheless:1 energy:5 james:2 thereof:1 proof:1 recovers:1 couple:1 gain:1 dataset:3 recall:1 dimensionality:2 sophisticated:2 actually:1 exceeded:1 higher:1 supervised:1 wei:3 done:2 evaluated:3 though:4 generality:1 furthermore:1 correlation:3 clock:1 joaquin:1 christopher:1 replacing:1 ei:6 nonlinear:1 overlapping:1 propagation:6 defines:3 quality:6 perhaps:4 building:1 effect:1 validity:1 requiring:1 concept:1 y2:2 normalized:4 true:4 spatially:2 moore:1 nonzero:1 conditionally:1 during:1 inferior:1 aki:1 unnormalized:2 complete:2 demonstrate:1 performs:1 variational:4 snelson:1 novel:1 recently:1 common:2 functional:1 extend:3 interpretation:3 marginals:4 significant:2 refer:2 measurement:2 gibbs:2 framed:1 automatic:1 vanilla:1 grid:3 centre:1 pathology:1 had:2 access:1 similarity:1 longer:1 surface:1 etc:1 nicolo:1 something:1 dominant:1 multivariate:2 posterior:2 recent:1 own:1 perspective:1 noisily:1 optimizing:1 belongs:1 chalupka:1 claimed:1 certain:1 yi:36 der:1 seen:3 somewhat:1 dtra:1 edward:3 parallelized:1 converge:2 period:1 full:18 multiple:2 mix:1 reduces:1 smooth:2 exceeds:1 match:1 adapt:1 calculation:1 cross:1 long:1 faster:1 lin:1 jy:4 prediction:7 mrf:4 regression:10 basic:1 circumstance:1 yeung:1 sometimes:3 kernel:6 represent:3 invert:1 cell:5 robotics:1 background:2 whereas:1 source:1 biased:1 tend:1 undirected:2 mcnames:1 spirit:1 effectiveness:2 nontrivially:1 call:1 jordan:1 near:1 enough:1 easy:1 independence:4 xj:1 identified:1 andreas:1 regarding:3 avenue:1 tradeoff:1 intensive:1 motivated:1 six:1 expression:2 gb:2 peter:2 proceed:1 generally:1 useful:4 se:1 induces:1 dit:1 reduced:1 generate:3 http:2 problematic:1 per:2 hyperparameter:1 mat:1 achieving:1 yit:3 neither:1 clean:1 krzysztof:1 ram:1 graph:5 astute:1 fraction:1 sum:2 run:2 inverse:2 parameterized:1 uncertainty:5 compete:1 reasonable:2 reader:1 wu:1 draw:1 fusi:1 ob:5 kji:1 scaling:3 comparable:1 submatrix:3 capturing:1 bound:1 qui:1 ki:1 jianhua:1 cheng:1 quadratic:1 yielded:2 strength:1 constraint:2 precisely:2 normalizable:1 n3:1 x2:1 nearby:1 simulate:1 speed:1 performing:1 relatively:2 conjecture:1 ern:1 structured:1 poor:1 smaller:2 across:1 increasingly:1 partitioned:1 modification:1 dieter:1 taken:2 computationally:1 ln:3 resource:2 remains:1 turn:2 committee:6 tractable:3 available:3 ferris:1 yedidia:1 eight:1 observe:4 appropriate:1 alternative:1 yair:2 cent:1 assumes:1 denotes:1 michalis:3 graphical:2 maintaining:1 unifying:1 ghahramani:2 murray:1 ellipse:1 approximating:2 implied:1 objective:9 move:1 question:1 blend:1 valued1:1 dependence:1 diagonal:2 surrogate:4 traditional:1 exhibit:1 gradient:5 distance:3 separate:1 thank:1 capacity:1 chris:1 manifold:1 argue:1 enforcing:1 assuming:1 length:2 modeled:2 providing:1 minimizing:1 julian:1 equivalently:1 difficult:3 unfortunately:2 kingdom:1 potentially:1 implementation:1 reliably:1 motivates:1 unknown:1 seismic:10 perform:2 observation:8 datasets:6 markov:3 wilk:1 
enabling:2 gplvm:1 kse:1 y1:3 station:1 pic:1 inferred:1 david:1 introduced:1 pair:3 namely:1 kl:1 specified:1 optimized:3 required:1 connection:1 california:1 bcm:10 coherent:1 learned:1 nm2:3 hour:1 tractably:1 discontinuity:4 nip:4 beyond:2 able:1 unhelpful:1 below:2 pattern:1 xm:1 latitude:1 rf:15 including:2 memory:1 belief:6 event:15 difficulty:1 natural:2 hybrid:1 advanced:1 fitc:7 improve:2 github:1 axis:2 irrespective:1 harsh:1 jun:2 coupled:1 tresp:1 nir:1 prior:4 acknowledgement:1 python:1 loss:1 expect:1 interesting:3 limitation:1 suggestion:1 allocation:1 integrate:1 pij:1 proxy:1 imposes:1 sufficient:1 principle:3 corrupting:2 pi:1 supported:1 free:2 rasmussen:4 infeasible:1 side:1 allow:3 pseudolikelihood:1 neighbor:4 wide:1 taking:1 bulletin:1 sparse:10 distributed:3 q22:1 boundary:4 dimension:3 depth:2 world:3 evaluating:1 hensman:1 computes:1 fb:1 stuck:1 made:1 commonly:1 adaptive:1 avoided:1 author:1 nguyen:2 transaction:1 approximate:10 compact:2 emphasize:1 implicitly:2 preferred:1 clique:1 global:5 uai:3 summing:1 assumed:1 xi:4 search:1 latent:11 promising:1 bethe:3 nature:1 learn:1 ca:1 inherently:1 kjj:1 complex:1 domain:2 marc:1 aistats:4 main:1 motivation:2 noise:7 hyperparameters:5 arise:1 n2:4 arrival:1 big:1 allowed:1 complementary:1 yarin:1 x1:1 cubic:2 precision:10 sub:2 deterministically:1 exponential:1 jmlr:4 pgp:2 bij:3 theorem:2 showing:2 explored:1 multitude:1 concern:1 normalizing:3 intractable:2 adding:1 kx:1 flavor:1 desire:1 expressed:1 partially:1 van:1 corresponds:3 conditional:2 viewed:2 towards:2 price:3 absence:1 change:1 typical:1 specifically:1 uniformly:1 infinite:1 averaging:1 principal:1 called:2 total:1 m3:4 meaningful:1 formally:1 deisenroth:1 support:1 cholesky:2 latter:1 mark:1 jonathan:1 incorporate:1 evaluate:5 phenomenon:1 |
5,385 | 5,875 | MCMC for Variationally Sparse Gaussian Processes
James Hensman
CHICAS, Lancaster University
[email protected]
Maurizio Filippone
EURECOM
[email protected]
Alexander G. de G. Matthews
University of Cambridge
[email protected]
Zoubin Ghahramani
University of Cambridge
[email protected]
Abstract
Gaussian process (GP) models form a core part of probabilistic machine learning.
Considerable research effort has been made into attacking three issues with GP
models: how to compute efficiently when the number of data is large; how to approximate the posterior when the likelihood is not Gaussian and how to estimate
covariance function parameter posteriors. This paper simultaneously addresses
these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte-Carlo
sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations
based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at github.com/sparseMCMC.
1
Introduction
Gaussian process models are attractive for machine learning because of their flexible nonparametric
nature. By combining a GP prior with different likelihoods, a multitude of machine learning tasks
can be tackled in a probabilistic fashion [1]. There are three things to consider when using a GP
model: approximation of the posterior function (especially if the likelihood is non-Gaussian), computation, storage and inversion of the covariance matrix, which scales poorly in the number of data;
and estimation (or marginalization) of the covariance function parameters. A multitude of approximation schemes have been proposed for efficient computation when the number of data is large.
Early strategies were based on retaining a sub-set of the data [2]. Snelson and Ghahramani [3] introduced an inducing point approach, where the model is augmented with additional variables, and
Titsias [4] used these ideas in a variational approach. Other authors have introduced approximations
based on the spectrum of the GP [5, 6], or which exploit specific structures within the covariance
matrix [7, 8], or by making unbiased stochastic estimates of key computations [9]. In this work, we
extend the variational inducing point framework, which we prefer for general applicability (no specific requirements are made of the data or covariance function), and because the variational inducing
point approach can be shown to minimize the KL divergence to the posterior process [10].
To approximate the posterior function and covariance parameters, Markov chain Monte-Carlo
(MCMC) approaches provide asymptotically exact approximations. Murray and Adams [11] and
Filippone et al. [12] examine schemes which iteratively sample the function values and covariance
parameters. Such sampling schemes require computation and inversion of the full covariance matrix at each iteration, making them unsuitable for large problems. Computation may be reduced
somewhat by considering variational methods, approximating the posterior using some fixed family of distributions [13, 14, 15, 16, 1, 17], though many covariance matrix inversions are generally
required. Recent works [18, 19, 20] have proposed inducing point schemes which can reduce the
1
Table 1: Existing variational approaches
Reference
Williams & Barber[21] [also 14, 17]
Titsias [4]
Chai [18]
Nguyen and Bonilla [1]
Hensman et al. [20]
This work
p(y | f )
probit/logit
Gaussian
softmax
any factorized
probit
any factorized
Sparse
Posterior
Hyperparam.
7
3
3
7
3
3
Gaussian (assumed)
Gaussian (optimal)
Gaussian (assumed)
Mixture of Gaussians
Gaussian (assumed)
free-form
point estimate
point estimate
point estimate
point estimate
point estimate
free-form
computation required substantially, though the posterior is assumed Gaussian and the covariance
parameters are estimated by (approximate) maximum likelihood. Table 1 places our work in the
context of existing variational methods for GPs.
This paper presents a general inference scheme, with the only concession to approximation being
the variational inducing point assumption. Non-Gaussian posteriors are permitted through MCMC,
with the computational benefits of the inducing point framework. The scheme jointly samples the
inducing-point representation of the function with the covariance function parameters; with sufficient inducing points our method approaches full Bayesian inference over GP values and the covariance parameters. We show empirically that the number of required inducing points is substantially
smaller than the dataset size for several real problems.
2
Stochastic process posteriors
The model is set up as follows. We are presented with some data inputs X = {xn }N
n=1 and responses
y = {yn }N
n=1 . A latent function is assumed drawn from a GP with zero mean and covariance
function k(x, x0 ) with (hyper-) parameters ?. Consistency of the GP means that only those points
with data are considered: the latent vector f represents the values of the function at the observed
points f = {f (xn )}N
n=1 , and has conditional distribution p(f | X, ?) = N (f | 0, Kf f ), where Kf f
is a matrix composed of evaluating the covariance function at all pairs of points in X. The data
likelihood depends on the latent function values: p(y | f ). To make a prediction for latent function
value test points f ? = {f (x? )}x? ?X? , the posterior function values and parameters are integrated:
ZZ
p(f ? | y) =
p(f ? | f , ?)p(f , ? | y) d? df .
(1)
In order to make use of the computational savings offered by the variational inducing point framework [4], we introduce additional input points to the function Z and collect the responses of the
function at that point into the vector u = {um = f (zm )}M
m=1 . With some variational posterior
q(u, ?), new points are predicted similarly to the exact solution
ZZ
q(f ? ) =
p(f ? | u, ?)q(u, ?) d? du .
(2)
This makes clear that the approximation is a stochastic process in the same fashion as the true posterior: the length of the predictions vector f ? is potentially unbounded, covering the whole domain.
To obtain a variational objective, first consider the support of u under the true posterior, and for
f under the approximation. In the above, these points are subsumed into the prediction vector f ? :
from here we shall be more explicit, letting f be the points of the process at X, u be the points of
the process at Z and f ? be a large vector containing all other points of interest1 . All of the free
parameters of the model are then f ? , f , u, ?, and using a variational framework, we aim to minimize
the Kullback-Leibler divergence between the approximate and true posteriors:
p(f ? | u, f , ?)p(u | f , ?)p(f , ? | y)
(3)
K , KL[q(f ? , f , u, ?)||p(f ? , f , u, ? | y)] = ?E
log
p(f ? | u, f , ?)p(f | u, ?)q(u, ?)
q(f ? ,f ,u,?)
1
The vector f ? here is considered finite but large enough to contain any point of interest for prediction. The
infinite case follows Matthews et al. [10], is omitted here for brevity, and results in the same solution.
2
where the conditional distributions for f ? have been expanded to make clear that they are the same
under the true and approximate posteriors, and X, Z and X? have been omitted for clarity. Straightforward identities simplify the expression,
p(u | f , ?)p(f | ?)p(?)p(y | f )/p(y)
K = ?Eq(f ,u,?) log
p(f | u, ?)q(u, ?)
(4)
p(u | ?)p(?)p(y | f )
= ?Eq(f ,u,?) log
+ log p(y) ,
q(u, ?)
resulting in the variational inducing-point objective investigated by Titsias [4], aside from the inclusion of ?. This can be rearranged to give the following informative expression
p(u | ?)p(?) exp{Ep(f | u,?) [log p(y | f )]}
? log C + log p(y).
(5)
K = KL q(u, ?)||
C
Here C is an intractable constant which normalizes the distribution and is independent of q. Minimizing the KL divergence on the right hand side reveals that the optimal variational distribution is
log q?(u, ?) = Ep(f | u,?) [log p(y | f )] + log p(u | ?) + log p(?) ? log C.
(6)
For general likelihoods, since the optimal distribution does not take any particular form, we intend to
sample from it using MCMC, thus combining the benefits of variationally-sparse Gaussian processes
with a free-form posterior. Sampling is feasible using standard methods since log q? is computable
up to a constant, using O(N M 2 ) computations. After completing this work, it was brought to our
attention that a similar suggestion had been made in [22], though the idea was dismissed because
?prediction in sparse GP models typically involves some additional approximations?. Our presentation of the approximation consisting of the entire stochastic process makes clear that no additional
approximations are required. To sample effectively, the following are proposed.
Whitening the prior Noting that the problem (6) appears similar to a standard GP for u, albeit
with an interesting ?likelihood?, we make use of an ancillary augmentation u = Rv, with RR> =
Kuu , v ? N (0, I). This results in the optimal variational distribution
log q?(v, ?) = Ep(f | u=Rv) [log p(y | f )] + log p(v) + log p(?) ? log C
(7)
Previously [11, 12] this parameterization has been used with schemes which alternate between sampling the latent function values (represented by v or u) and the parameters ?. Our scheme uses
HMC across v and ? jointly, whose effectiveness is examined throughout the experiment section.
Quadrature The first term in (6) is the expected log-likelihood. In the case of factorization across
the data-function pairs, this results in N one-dimensional integrals. For Gaussian or Poisson likelihood these integrals are tractable, otherwise they can be approximated by Gauss-Hermite quadrature.
Given the current sample v, the expectations are computed w.r.t. p(fn | v, ?) = N (?n , ?n ), with:
? = A> v; ? = diag(Kf f ? A> A); A = R?1 Kuf ; RR> = Kuu ,
(8)
where the kernel matrices Kuf , Kuu are computed similarly to Kf f , but over the pairs in
(X, Z), (Z, Z) respectively. From here, one can compute the expected likelihood and it is subsequently straightforward to compute derivatives in terms of Kuf , diag(Kf f ) and R.
Reverse mode differentiation of Cholesky To compute derivatives with respect to ? and Z we use
reverse-mode differentiation (backpropagation) of the derivative through the Cholesky matrix decomposition, transforming ? log q?(v, ?)/?R into ? log q?(v, ?)/?Kuu , and then ? log q?(v, ?)/??.
This is discussed by Smith [23], and results in a O(M 3 ) operation; an efficient Cython implementation is provided in the supplement.
3
Treatment of inducing point positions & inference strategy
A natural question is, what strategy should be used to select the inducing points Z? In the original inducing point formulation [3], the positions Z were treated as parameters to be optimized. One could
interpret them as parameters of the approximate prior covariance [24]. The variational formulation
3
[4] treats them as parameters of the variational approximation, thus protecting from over-fitting as
they form part of the variational posterior. In this work, since we propose a Bayesian treatment of
the model, we question whether it is feasible to treat Z in a Bayesian fashion.
Since u and Z are auxiliary parameters, the form of their distribution does not affect the marginals of
the model. The term p(u | Z) has been defined by the consistency with the GP in order to preserve
the posterior-process interpretation above (i.e. u should be points on the GP), but we are free to
choose p(Z). Omitting dependence on ? for clarity, and choosing w.l.o.g. q(u, Z) = q(u | Z)q(Z),
the bound on the marginal likelihood, similarly to (4) is given by
p(y | f )p(u | Z)p(Z)
L = Ep(f | u,Z)q(u | Z)q(Z) log
.
(9)
q(u | Z)q(Z)
The bound can be maximized w.r.t p(Z) by noting that the term only appears inside a (negative) KL
divergence: ?Eq(Z) [log q(Z)/p(Z)]. Substituting the optimal p(Z) = q(Z) reduces (9) to
p(y | f )p(u | Z)
L = Eq(Z) Ep(f | u,Z)q(u | Z) log
,
(10)
q(u | Z)
which can now be optimized w.r.t. q(Z). Since no entropy term appears for q(Z), the bound is
maximized when the distribution becomes a Dirac?s delta. In summary, since we are free to choose
a prior for Z which maximizes the amount of information captured by u, the optimal distribution
? This formally motivates optimizing the inducing points Z.
becomes p(Z) = q(Z) = ?(Z ? Z).
Derivatives for Z For completeness we also include the derivative of the free form objective with
respect to the inducing point positions. Substituting the optimal distribution q?(u, ?) into (4) to give
? and then differentiating we obtain
K
?
?
?K
? log C
=?
= ?Eq?(v,?)
Ep(f | u=Rv) [log p(y | f )] .
(11)
?Z
?Z
?Z
Since we aim to draw samples from q?(v, ?), evaluating this free form inducing point gradient using
samples seems plausible but challenging. Instead we use the following strategy.
1. Fit a Gaussian approximation to the posterior. We follow [20] in fitting a Gaussian approximation to the posterior. The positions of the inducing points are initialized using k-means clustering
of the data. The values of the latent function are represented by a mean vector (initialized randomly)
and a lower-triangular matrix L forms the approximate posterior covariance as LL> . For large problems (such as the MNIST experiment), stochastic optimization using AdaDelta is used. Otherwise,
LBFGS is used. After a few hundred iterations with the inducing points positions fixed, they are
optimized in free-form alongside the variational parameters and covariance function parameters.
2. Initialize the model using the approximation. Having found a satisfactory approximation, the
HMC strategy takes the optimized inducing point positions from the Gaussian approximation. The
initial value of v is drawn from the Gaussian approximation, and the covariance parameters are initialized at the (approximate) MAP value.
3. Tuning HMC. The HMC algorithm has two free parameters to tune, the number of leapfrog
steps and the step-length. We follow a strategy inspired by Wang et al. [25], where the number of
leapfrog steps is drawn randomly from 1 to Lmax , and Bayesian
optimization is used to maximize
?
the expected square jump distance (ESJD), penalized by Lmax . Rather than allow an adaptive (but
convergent) scheme as [25], we run the optimization for 30 iterations of 30 samples each, and use
the best parameters for a long run of HMC.
4. Run tuned HMC to obtain predictions. Having tuned the HMC, it is run for several thousand
iterations to obtain a good approximation to q?(v, ?). The samples are used to estimate the integral in
equation (2). The following section investigates the effectiveness of the proposed sampling scheme.
4
4.1
Experiments
Efficient sampling using Hamiltonian Monte Carlo
This section illustrates the effectiveness of Hamiltonian Monte Carlo in sampling from q?(v, ?). As
already pointed out, the form assumed by the optimal variational distribution q?(v, ?) in equation (6)
resembles the joint distribution in a GP model with a non-Gaussian likelihood.
4
For a fixed ?, sampling v is relatively straightforward, and this can be done efficiently using HMC
[12, 26, 27] or Elliptical Slice Sampling [28]. A well tuned HMC has been reported to be extremely
efficient in sampling the latent variables, and this motivates our effort into trying to extend this
efficiency to the sampling of hyper-parameters as well. This is also particularly appealing due to the
convenience offered by the proposed representation of the model.
The problem of drawing samples from the posterior distribution over v, ? has been investigated in
detail in [11, 12]. In these works, it has been advocated to alternate between the sampling of v and
? in a Gibbs sampling fashion and condition the sampling of ? on a suitably chosen transformation
of the latent variables. For each likelihood model, we compare efficiency and convergence speed of
the proposed HMC sampler with a Gibbs sampler where v is sampled using HMC and ? is sampled
using the Metropolis-Hastings algorithm. To make the comparison fair, we imposed the mass matrix
in HMC and the covariance in MH to be isotropic, and any parameters of the proposal were tuned
using Bayesian optimization. Unlike in the proposed HMC sampler, for the Gibbs sampler we did
not penalize the objective function of the Bayesian optimization for large numbers of leapfrog steps,
as in this case HMC proposals on the latent variables are computationally cheaper than those on
the hyper-parameters. We report efficiency in sampling from q?(v, ?) using Effective Sample Size
(ESS) and Time Normalized (TN)-ESS. In the supplement we include convergence plots based on
the Potential Scale Reduction Factor (PSRF) computed based on ten parallel chains; in these each
chain is initialized from the VB solution and individually tuned using Bayesian optimization.
4.2
Binary Classification
We first use the image dataset [29] to investigate the benefits of the approach over a Gaussian approximation, and to investigate the effect of changing the number of inducing points, as well as
optimizing the inducing points under the Gaussian approximation. The data are 18 dimensional: we
investigated the effect of our approximation using both ARD (one lengthscale per dimension) and an
isotropic RBF kernel. The data were split randomly into 1000/1019 train/test sets; the log predictive
density over ten random splits is shown in Figure 1.
Following the strategy outlined above, we fitted a Gaussian approximation to the posterior, with Z
initialized with k-means. Figure 1 investigates the difference in performance when Z is optimized
using the Gaussian approximation, compared to just using k-means for Z. Whilst our strategy is not
guaranteed to find the global optimum, it is clear that it improves the performance.
The second part of Figure 1 shows the performance improvement of our sampling approach over the
Gaussian approximation. We drew 10,000 samples, discarding the first 1000: we see a consistent
improvement in performance once M is large enough. For small M , The Gaussian approximation
appears to work very well. The supplement contains a similar Figure for the case where a single
lengthscale is shared: there, the improvement of the MCMC method over the Gaussian approximation is smaller but consistent. We speculate that the larger gains for ARD are due to posterior
uncertainty in the lengthscales, which is poorly represented by a point in the Gaussian/MAP approximation.
The ESS and TN-ESS are comparable between HMC and the Gibbs sampler. In particular, for 100
inducing points and the RBF covariance, ESS and TN-ESS for HMC are 11 and 1.0 ? 10?3 and for
the Gibbs sampler are 53 and 5.1 ? 10?3 . For the ARD covariance, ESS and TN-ESS for HMC are
14 and 5.1 ? 10?3 and for the Gibbs sampler are 1.6 and 1.5 ? 10?4 . Convergence, however, seems
to be faster for HMC, especially for the ARD covariance (see the supplement).
4.3
Log Gaussian Cox Processes
We apply our methods to Log Gaussian Cox processes [30]: doubly stochastic models where the
rate of an inhomogeneous Poisson process is given by a Gaussian process. The main difficulty for
inference lies in that the likelihood of the GP requires an integral over the domain, which is typically
intractable. For low dimensional problems, this integral can be approximated on a grid; assuming
that the GP is constant over the width of the grid leads to a factorizing Poisson likelihood for each
of the grid points. Whilst some recent approaches allow for a grid-free approach [19], these usually
require concessions in the model, such as an alternative link function, and do not approach full
Bayesian inference over the covariance function parameters.
5
log p(y? )[MCMC] ? log p(y? )[Gauss.]
log p(y? )[MCMC]
?0.2
?0.4
Zoptimized
Zk-means
5 10 20 50 100 5 10 20 50 100
number of inducing points
4
?10?2
2
0
Zoptimized
Zk-means
5 10 20 50 100 5 10 20 50 100
number of inducing points
VB+Gaussian
VB+MCMC
MCMC
2
rate
MCMC VB+MCMC
Figure 1: Performance of the method on the image dataset, with one lengthscale per dimension.
Left, box-plots show performance for varying numbers of inducing points and Z strategies. Optimizing Z using the Gaussian approximation offers significant improvement over the k-means strategy.
Right: improvement of the performance of the Gaussian approximation method, with the same inducing points. The method offers consistent performance gains when the number of inducing points
is larger. The supplement contains a similar figure with only a single lengthscale.
1
0
1860
1880
1900
1920
time (years)
1940
1960
0 20 40 60
lengthscale
0
2 4
variance
Figure 2: The posterior of the rates for the coal mining disaster data. Left: posterior rates using our
variational MCMC method and a Gaussian approximation. Data are shown as vertical bars. Right:
posterior samples for the covariance function parameters using MCMC. The Gaussian approximation estimated the parameters as (12.06, 0.55).
Coal mining disasters On the one-dimensional coal-mining disaster data. We held out 50% of
the data at random, and using a grid of 100 points with 30 evenly spaced inducing points Z, fitted
both a Gaussian approximation to the posterior process with an (approximate) MAP estimate for the
covariance function parameters (variance and lengthscale of an RBF kernel). With Gamma priors
on the covariance parameters we ran our sampling scheme using HMC, drawing 3000 samples.
The resulting posterior approximations are shown in Figure 2, alongside the true posterior using
a sampling scheme similar to ours (but without the inducing point approximation). The free-form
variational approximation matches the true posterior closely, whilst the Gaussian approximation
misses important detail. The approximate and true posteriors over covariance function parameters
are shown in the right hand part of Figure 2, there is minimal discrepancy in the distributions.
Over 10 random splits of the data, the average held-out log-likelihood was ?1.229 for the Gaussian
approximation and ?1.225 for the free-form MCMC variant; the average difference was 0.003, and
the MCMC variant was always better than the Gaussian approximation. We attribute this improved
performance to marginalization of the covariance function parameters.
Efficiency of HMC is greater than for the Gibbs sampler; ESS and TN-ESS for HMC are 6.7 and
3.1 ? 10?2 and for the Gibbs sampler are 9.7 and 1.9 ? 10?2 . Also, chains converge within few
thousand iterations for both methods, although convergence for HMC is faster (see the supplement).
6
Figure 3: Pine sapling data. From left to right: reported locations of pine saplings; posterior mean
intensity on a 32x32 grid using full MCMC; posterior mean intensity on a 32x32 grid (with sparsity
using 225 inducing points), posterior mean intensity on a 64x64 grid (using 225 inducing points).
Pine saplings The advantages of the proposed approximation are prominent as the number of grid
points become higher, an effect emphasized with increasing dimension of the domain. We fitted a
similar model to the above to the pine sapling data [30].
We compared the sampling solution obtained using 225 inducing points on a 32 x 32 grid to the
gold standard full MCMC run with the same prior and grid size. Figure 3 shows that the agreement
between the variational sampling and full sampling is very close. However the variational method
was considerably faster. Using a single core on a desktop computer required 3.4 seconds to obtain
1 effective sample for a well tuned variational method whereas it took 554 seconds for well tuned
full MCMC. This effect becomes even larger as we increase the resolution of the grid to 64 x 64,
which gives a better approximation to the underlying smooth function as can be seen in figure 3. It
took 4.7 seconds to obtain one effective sample for the variational method, but now gold standard
MCMC comparison was computationally extremely challenging to run for even a single HMC step.
This is because it requires linear algebra operations using O(N 3 ) flops with N = 4096.
4.4
Multi-class Classification
To do multi-class classification with Gaussian processes, one latent function is defined for each of
the classes. The functions are defined a-priori independent, but covary a posteriori because of the
likelihood. Chai [18] studies a sparse variational approximation to the softmax multi-class likelihood
restricted to a Gaussian approximation. Here, following [31, 32, 33], we use a robust-max likelihood.
Given a vector fn containing K latent functions evaluated at the point xn , the probability that the
label takes the integer value yn is 1 ? if yn = argmax fn and /K ? 1 otherwise. As Girolami
and Rogers [31] discuss, the ?soft? probit-like behaviour is recovered by adding a diagonal ?nugget?
to the covariance function. In this work, was fixed to 0.001, though it would also be possible to
treat this as a parameter for inference. The expected log-likelihood is Ep(fn | v,?) [log p(yn | fn )] =
p log()+(1?p) log(/(K?1)), where p is the probability that the labelled function is largest, which
is computable using one-dimensional quadrature. An efficient Cython implementation is contained
in the supplement.
Toy example To investigate the proposed posterior approximation for the multivariate classification case, we turn to the toy data shown in Figure 4. We drew 750 data points from three Gaussian
distributions. The synthetic data was chosen to include non-linear decision boundaries and ambiguous decision areas. Figure 4 shows that there are differences between the variational and sampling
solutions, with the sampling solution being more conservative in general (the contours of 95% confidence are smaller). As one would expect at the decision boundary there are strong correlations
between the functions which could not be captured by the Gaussian approximation we are using.
Note the movement of inducing points away from k-means and towards the decision boundaries.
Efficiency of HMC and the Gibbs sampler is comparable. In the RBF case, ESS and TN-ESS for
HMC are 1.9 and 3.8 ? 10?4 and for the Gibbs sampler are 2.5 and 3.6 ? 10?4 . In the ARD case, ESS
and TN-ESS for HMC are 1.2 and 2.8 ? 10?3 and for the Gibbs sampler are 5.1 and 6.8 ? 10?4 . For
both cases, the Gibbs sampler struggles to reach convergence even though the average acceptance
rates are similar to those recommended for the two samplers individually.
MNIST The MNIST dataset is a well studied benchmark with a defined training/test split. We used
500 inducing points, initialized from the training data using k-means. A Gaussian approximation
7
Figure 4: A toy multiclass problem. Left: the Gaussian approximation, colored points show the
simulated data, lines show posterior probability contours at 0.3, 0.95, 0.99. Inducing points positions
shows as black points. Middle: the free form solution with 10,000 posterior samples. The free-form
solution is more conservative (the contours are smaller). Right: posterior samples for v at the same
position but across different latent functions. The posterior exhibits strong correlations and edges.
Figure 5: Left: three k-means centers used to initialize the inducing point positions. Center: the
positions of the same inducing points after optimization. Right: difference.
was optimized using minibatch-based optimization over the means and variances of q(u), as well
as the inducing points and covariance function parameters. The accuracy on the held-out data was
98.04%, significantly improving on previous approaches to classify these digits using GP models.
For binary classification, Hensman et al. [20] reported that their Gaussian approximation resulted in
movement of the inducing point positions toward the decision boundary. The same effect appears
in the multivariate case, as shown in Figure 5, which shows three of the 500 inducing points used
in the MNIST problem. The three examples were initialized close to the many six digits, and after
optimization have moved close to other digits (five and four). The last example still appears to be
a six, but has moved to a more ?unusual? six shape, supporting the function at another extremity.
Similar effects are observed for all inducing-point digits. Having optimized the inducing point
positions with the approximate q(v), and estimate for ?, we used these optimal inducing points to
draw samples from v and ?. This did not result in an increase in accuracy, but did improve the
log-density on the test set from -0.068 to -0.064. Evaluating the gradients for the sampler took
approximately 0.4 seconds on a desktop machine, and we were easily able to draw 1000 samples.
This dataset size has generally be viewed as challenging in the GP community and consequently
there are not many published results to compare with. One recent work [34] reports a 94.05%
accuracy using variational inference and a GP latent variable model.
5
Discussion
We have presented an inference scheme for general GP models. The scheme significantly reduces
the computational cost whilst approaching exact Bayesian inference, making minimal assumptions
about the form of the posterior. The improvements in accuracy in comparison with the Gaussian
approximation of previous works has been demonstrated, as has the quality of the approximation
to the hyper-parameter distribution. Our MCMC scheme was shown to be effective for several
likelihoods, and we note that the automatic tuning of the sampling parameters worked well over
hundreds of experiments. This paper shows that MCMC methods are feasible for inference in large
GP problems, addressing the unfair sterotype of ?slow? MCMC.
Acknowledgments JH was funded by an MRC fellowship, AM and ZG by EPSRC grant
EP/I036575/1 and a Google Focussed Research award.
8
References
[1] T. V. Nguyen and E. V. Bonilla. Automated variational inference for Gaussian process models. In NIPS,
pages 1404?1412, 2014.
[2] L. Csat?o and M. Opper. Sparse on-line Gaussian processes. Neural comp., 14(3):641?668, 2002.
[3] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257?
1264, 2005.
[4] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, pages
567?574, 2009.
[5] M. L?azaro-Gredilla, J. Qui?nonero-Candela, C. E. Rasmussen, and A. Figueiras-Vidal. Sparse spectrum
Gaussian process regression. JMLR, 11:1865?1881, 2010.
[6] A. Solin and S. S?arkk?a. Hilbert space methods for reduced-rank Gaussian process regression. arXiv
preprint 1401.5508, 2014.
[7] A. G. Wilson, E. Gilboa, A. Nehorai, and J. P. Cunningham. Fast kernel learning for multidimensional
pattern extrapolation. In NIPS, pages 3626?3634. 2014.
[8] S. S?arkk?a. Bayesian filtering and smoothing, volume 3. Cambridge University Press, 2013.
[9] M. Filippone and R. Engler. Enabling scalable stochastic gradient-based inference for Gaussian processes
by employing the Unbiased LInear System SolvEr (ULISSE). ICML 2015, 2015.
[10] A. G. D. G. Matthews, J. Hensman, R. E. Turner, and Z. Ghahramani. On sparse variational methods and
the KL divergence between stochastic processes. arXiv preprint 1504.07027, 2015.
[11] I. Murray and R. P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. In
NIPS, pages 1732?1740, 2010.
[12] M. Filippone, M. Zhong, and M. Girolami. A comparative evaluation of stochastic-based inference methods for Gaussian process models. Mach. Learn., 93(1):93?114, 2013.
[13] M. N. Gibbs and D. J. C. MacKay. Variational Gaussian process classifiers. IEEE Trans. Neural Netw.,
11(6):1458?1464, 2000.
[14] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural comp., 21(3):
786?792, 2009.
[15] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. JMLR, 6:1679?1704, 2005.
[16] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. JMLR, 9:
2035?2078, 2008.
[17] E. Khan, S. Mohamed, and K. P. Murphy. Fast Bayesian inference for non-conjugate Gaussian process
regression. In NIPS, pages 3140?3148, 2012.
[18] K. M. A. Chai. Variational multinomial logit Gaussian process. JMLR, 13(1):1745?1808, June 2012.
[19] C. Lloyd, T. Gunter, M. A. Osborne, and S. J. Roberts. Variational inference for Gaussian process modulated poisson processes. ICML 2015, 2015.
[20] J. Hensman, A. Matthews, and Z. Ghahramani. Scalable variational Gaussian process classification. In
AISTATS, pages 351?360, 2014.
[21] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Trans. Pattern
Anal. Mach. Intell., 20(12):1342?1351, 1998.
[22] Michalis K Titsias, Neil Lawrence, and Magnus Rattray. Markov chain monte carlo algorithms for gaussian processes. In D. Barber, A. T. Chiappa, and S. Cemgil, editors, Bayesian time series models. 2011.
[23] S. P. Smith. Differentiation of the cholesky algorithm. J. Comp. Graph. Stat., 4(2):134?147, 1995.
[24] J. Qui?nonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process
regression. JMLR, 6:1939?1959, 2005.
[25] Z. Wang, S. Mohamed, and N. De Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo. In
ICML, volume 28, pages 1462?1470, 2013.
[26] J. Vanhatalo and A. Vehtari. Sparse Log Gaussian Processes via MCMC for Spatial Epidemiology. In
Gaussian processes in practice, volume 1, pages 73?89, 2007.
[27] O. F. Christensen, G. O. Roberts, and J. S. Rosenthal. Scaling limits for the transient phase of local
MetropolisHastings algorithms. JRSS:B, 67(2):253?268, 2005.
[28] I. Murray, R. P. Adams, and D. J. C. MacKay. Elliptical slice sampling. In AISTATS, volume 9, 2010.
[29] G. R?atsch, T. Onoda, and K-R M?uller. Soft margins for adaboost. Mach. Learn., 42(3):287?320, 2001.
[30] J. M?ller, A. R. Syversveen, and R. P. Waagepetersen. Log Gaussian Cox processes. Scand. stat., 25(3):
451?482, 1998.
[31] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian process
priors. Neural Comp., 18:2006, 2005.
[32] H. Kim and Z. Ghahramani. Bayesian Gaussian Process Classification with the EM-EP Algorithm. IEEE
TPAMI, 28(12):1948?1959, 2006.
[33] D. Hern?andez-Lobato, J. M. Hern?andez-Lobato, and P. Dupont. Robust multi-class Gaussian process
classification. In NIPS, pages 280?288, 2011.
[34] Y. Gal, M. Van der Wilk, and Rasmussen C. E. Distributed variational inference in sparse Gaussian
process regression and latent variable models. In NIPS. 2014.
9
| 5875 |@word cox:3 middle:1 inversion:3 seems:2 replicate:1 logit:2 suitably:1 vanhatalo:1 covariance:32 decomposition:1 reduction:1 initial:1 contains:2 series:1 kuf:3 tuned:7 ours:1 existing:2 freitas:1 current:1 com:1 elliptical:2 recovered:1 arkk:2 fn:5 informative:1 shape:1 dupont:1 plot:2 aside:1 parameterization:1 desktop:2 isotropic:2 es:14 smith:2 core:2 hamiltonian:3 colored:1 completeness:1 revisited:1 location:1 hermite:1 unbounded:1 five:1 become:1 doubly:1 fitting:2 inside:1 introduce:1 coal:3 x0:1 expected:4 examine:1 multi:4 inspired:1 riemann:1 considering:1 increasing:1 becomes:3 provided:1 solver:1 underlying:1 maximizes:1 factorized:2 mass:1 what:1 substantially:2 whilst:4 transformation:1 differentiation:3 gal:1 pseudo:1 multidimensional:1 um:1 classifier:1 uk:3 grant:1 yn:4 local:1 treat:3 struggle:1 cemgil:1 limit:1 mach:3 extremity:1 approximately:1 black:1 resembles:1 examined:1 studied:1 collect:1 challenging:3 archambeau:1 factorization:1 acknowledgment:1 practice:1 backpropagation:1 digit:4 area:1 significantly:2 confidence:1 zoubin:2 convenience:1 close:3 storage:1 context:1 map:3 imposed:1 center:2 demonstrated:1 lobato:2 williams:2 straightforward:3 attention:1 resolution:1 x32:2 x64:1 exact:3 gps:2 us:1 agreement:1 adadelta:1 approximated:2 particularly:1 observed:2 ep:9 epsrc:1 preprint:2 wang:2 thousand:2 movement:2 ran:1 vehtari:1 transforming:1 cam:2 nehorai:1 algebra:1 predictive:1 titsias:5 efficiency:5 easily:1 joint:1 mh:1 represented:3 train:1 fast:2 effective:4 lengthscale:6 monte:6 hyper:4 lancaster:2 choosing:1 interest1:1 lengthscales:1 whose:1 larger:3 plausible:1 drawing:2 otherwise:4 triangular:1 neil:1 gp:20 jointly:2 advantage:1 rr:2 tpami:1 took:3 propose:1 fr:1 zm:1 i036575:1 combining:2 nonero:2 poorly:2 gold:2 inducing:46 moved:2 dirac:1 figueiras:1 chai:3 convergence:5 requirement:1 optimum:1 assessing:1 comparative:1 adam:3 ac:3 stat:2 chiappa:1 ard:5 advocated:1 eq:5 strong:2 auxiliary:1 predicted:1 involves:1 girolami:3 inhomogeneous:1 closely:1 attribute:1 stochastic:9 subsequently:1 ancillary:1 transient:1 rogers:2 require:2 behaviour:1 andez:2 considered:2 magnus:1 exp:1 lawrence:1 matthew:4 substituting:2 pine:4 early:1 omitted:2 estimation:1 label:1 individually:2 largest:1 gunter:1 uller:1 brought:1 gaussian:75 always:1 aim:2 rather:1 zhong:1 varying:1 wilson:1 june:1 leapfrog:3 improvement:6 rank:1 likelihood:21 kim:1 am:1 posteriori:1 inference:17 integrated:1 typically:2 entire:1 cunningham:1 issue:1 classification:11 flexible:1 priori:1 retaining:1 smoothing:1 softmax:2 initialize:2 mackay:2 marginal:1 spatial:1 once:1 saving:1 having:3 sampling:26 zz:2 represents:1 icml:3 discrepancy:1 report:2 simplify:1 few:2 randomly:3 composed:1 simultaneously:2 divergence:5 preserve:1 gamma:1 cheaper:1 resulted:1 murphy:1 intell:1 argmax:1 consisting:1 phase:1 subsumed:1 interest:1 acceptance:1 investigate:3 mining:3 evaluation:1 cython:2 mixture:1 held:3 chain:5 integral:5 edge:1 initialized:7 minimal:2 fitted:3 classify:1 soft:2 applicability:1 cost:1 addressing:1 hundred:2 reported:3 considerably:1 synthetic:1 nickisch:1 density:2 epidemiology:1 probabilistic:2 augmentation:1 containing:2 choose:2 derivative:5 toy:3 potential:1 de:2 speculate:1 lloyd:1 bonilla:2 depends:1 view:1 extrapolation:1 candela:2 parallel:1 nugget:1 minimize:2 square:1 accuracy:4 variance:3 efficiently:2 maximized:2 spaced:1 bayesian:15 carlo:6 mrc:1 comp:4 published:1 kuss:1 reach:1 mohamed:2 james:2 sampled:2 gain:2 dataset:5 treatment:2 improves:1 
hilbert:1 variationally:2 appears:6 higher:1 follow:2 permitted:1 response:2 improved:1 adaboost:1 formulation:2 done:1 though:5 box:1 evaluated:1 syversveen:1 just:1 correlation:2 hand:2 hastings:1 google:1 minibatch:1 mode:2 quality:1 omitting:1 effect:6 contain:1 unbiased:2 true:7 normalized:1 iteratively:1 leibler:1 satisfactory:1 covary:1 attractive:1 ll:1 width:1 covering:1 ambiguous:1 trying:1 prominent:1 tn:7 image:2 variational:39 snelson:2 multinomial:2 empirically:1 volume:4 extend:2 discussed:1 interpretation:1 interpret:1 marginals:1 significant:1 cambridge:3 gibbs:13 tuning:2 automatic:1 consistency:2 outlined:1 similarly:3 inclusion:1 pointed:1 grid:12 had:1 funded:1 whitening:1 posterior:43 multivariate:2 recent:3 optimizing:3 reverse:2 binary:4 der:1 captured:2 seen:1 additional:4 somewhat:1 greater:1 attacking:1 maximize:1 converge:1 recommended:1 ller:1 rv:3 full:7 reduces:2 smooth:1 faster:3 match:1 offer:2 long:1 award:1 prediction:6 variant:2 regression:6 scalable:2 expectation:1 df:1 poisson:4 arxiv:2 iteration:5 kernel:4 disaster:3 filippone:5 penalize:1 proposal:2 whereas:1 fellowship:1 unlike:1 thing:1 effectiveness:3 integer:1 noting:2 split:4 enough:2 automated:1 marginalization:2 affect:1 fit:1 approaching:1 reduce:1 idea:2 computable:2 multiclass:1 whether:1 expression:2 six:3 effort:2 kuu:4 generally:2 clear:4 tune:1 amount:1 nonparametric:1 ten:2 rearranged:1 reduced:2 estimated:2 delta:1 per:2 csat:1 rattray:1 rosenthal:1 shall:1 key:1 four:1 drawn:3 clarity:2 changing:1 asymptotically:1 graph:1 year:1 run:6 uncertainty:1 place:1 family:1 throughout:1 draw:3 decision:5 prefer:1 scaling:1 investigates:2 vb:4 comparable:2 qui:2 bound:3 completing:1 guaranteed:1 tackled:1 convergent:1 worked:1 speed:1 extremely:2 expanded:1 relatively:1 maurizio:2 gredilla:1 alternate:2 conjugate:1 jr:1 smaller:4 across:3 em:1 appealing:1 metropolis:1 making:3 christensen:1 restricted:1 computationally:2 equation:2 previously:1 hern:2 discus:1 turn:1 letting:1 tractable:1 unusual:1 available:1 gaussians:1 operation:2 vidal:1 apply:1 away:1 alternative:1 original:1 clustering:1 include:3 michalis:1 unifying:1 unsuitable:1 exploit:1 ghahramani:6 especially:2 murray:3 approximating:1 objective:4 intend:1 question:2 already:1 strategy:10 dependence:1 diagonal:1 exhibit:1 gradient:3 distance:1 link:1 simulated:1 evenly:1 manifold:1 barber:3 toward:1 assuming:1 code:1 length:2 scand:1 minimizing:1 hmc:26 robert:2 potentially:1 negative:1 implementation:2 anal:1 motivates:2 vertical:1 markov:2 wilk:1 benchmark:1 finite:1 protecting:1 solin:1 enabling:1 supporting:1 flop:1 community:1 intensity:3 introduced:2 pair:3 required:5 kl:6 khan:1 optimized:7 nip:7 trans:2 address:1 able:1 bar:1 alongside:2 usually:1 pattern:2 sparsity:1 dismissed:1 max:1 metropolishastings:1 natural:1 hybrid:1 treated:1 difficulty:1 turner:1 scheme:16 improve:1 github:1 prior:7 kf:5 probit:4 expect:1 suggestion:1 interesting:1 filtering:1 offered:2 sufficient:1 consistent:3 editor:1 normalizes:1 summary:1 lmax:2 penalized:1 last:1 free:16 rasmussen:5 gilboa:1 side:1 allow:2 jh:1 focussed:1 waagepetersen:1 differentiating:1 sparse:15 distributed:1 benefit:3 slice:3 boundary:4 hensman:6 xn:3 evaluating:3 dimension:3 contour:3 opper:2 author:1 made:3 jump:1 adaptive:2 nguyen:2 employing:1 approximate:13 netw:1 kullback:1 psrf:1 global:1 reveals:1 assumed:6 spectrum:2 factorizing:1 latent:15 table:2 nature:1 zk:2 robust:2 hyperparam:1 learn:2 onoda:1 improving:1 du:1 investigated:3 domain:3 diag:2 did:3 aistats:3 
main:1 whole:1 hyperparameters:1 osborne:1 fair:1 quadrature:3 augmented:1 fashion:4 slow:1 sub:1 position:12 explicit:1 lie:1 unfair:1 jmlr:5 specific:2 discarding:1 emphasized:1 multitude:2 intractable:2 mnist:4 albeit:1 adding:1 effectively:1 drew:2 supplement:7 illustrates:1 margin:1 entropy:1 azaro:1 lbfgs:1 contained:1 van:1 conditional:2 identity:1 presentation:1 viewed:1 consequently:1 rbf:4 towards:1 labelled:1 shared:1 considerable:1 feasible:3 infinite:1 sampler:15 miss:1 conservative:2 gauss:2 atsch:1 zg:1 select:1 formally:1 support:2 cholesky:3 modulated:1 alexander:1 brevity:1 mcmc:23 |
5,386 | 5,876 | Streaming, Distributed Variational Inference for
Bayesian Nonparametrics
Trevor Campbell1
Julian Straub2 John W. Fisher III2 Jonathan P. How1
1
LIDS, 2 CSAIL, MIT
{tdjc@ , jstraub@csail. , fisher@csail. , jhow@}mit.edu
Abstract
This paper presents a methodology for creating streaming, distributed inference algorithms for Bayesian nonparametric (BNP) models. In the proposed framework,
processing nodes receive a sequence of data minibatches, compute a variational
posterior for each, and make asynchronous streaming updates to a central model.
In contrast to previous algorithms, the proposed framework is truly streaming, distributed, asynchronous, learning-rate-free, and truncation-free. The key challenge
in developing the framework, arising from the fact that BNP models do not impose
an inherent ordering on their components, is finding the correspondence between
minibatch and central BNP posterior components before performing each update.
To address this, the paper develops a combinatorial optimization problem over
component correspondences, and provides an efficient solution technique. The
paper concludes with an application of the methodology to the DP mixture model,
with experimental results demonstrating its practical scalability and performance.
1
Introduction
Bayesian nonparametric (BNP) stochastic processes are streaming priors ? their unique feature is
that they specify, in a probabilistic sense, that the complexity of a latent model should grow as the
amount of observed data increases. This property captures common sense in many data analysis
problems ? for example, one would expect to encounter far more topics in a document corpus after
reading 106 documents than after reading 10 ? and becomes crucial in settings with unbounded, persistent streams of data. While their fixed, parametric cousins can be used to infer model complexity
for datasets with known magnitude a priori [1, 2], such priors are silent with respect to notions of
model complexity growth in streaming data settings.
Bayesian nonparametrics are also naturally suited to parallelization of data processing, due to the
exchangeability, and thus conditional independence, they often exhibit via de Finetti?s theorem. For
example, labels from the Chinese Restaurant process [3] are rendered i.i.d. by conditioning on the
underlying Dirichlet process (DP) random measure, and feature assignments from the Indian Buffet
process [4] are rendered i.i.d. by conditioning on the underlying beta process (BP) random measure.
Given these properties, one might expect there to be a wealth of inference algorithms for BNPs that
address the challenges associated with parallelization and streaming. However, previous work has
only addressed these two settings in concert for parametric models [5, 6], and only recently has each
been addressed individually for BNPs. In the streaming setting, [7] and [8] developed streaming
inference for DP mixture models using sequential variational approximation. Stochastic variational
inference [9] and related methods [10?13] are often considered streaming algorithms, but their performance depends on the choice of a learning rate and on the dataset having known, fixed size a
priori [5]. Outside of variational approaches, which are the focus of the present paper, there exist
exact parallelized MCMC methods for BNPs [14, 15]; the tradeoff in using such methods is that they
provide samples from the posterior rather than the distribution itself, and results regarding assessing
1
?m3
Minibatch
Posterior
?i2
Retrieve Original
?o1
Central Posterior
Central Node
Intermediate
Posterior
?o1
?i1
Node
Central Node
?3
?
?m3
?3 (= ?m2 )
?m2
?i2
?2
?m2
?2
(= ?i2 + ?m3 ? ?0 )
?m1
?i1
?1
?m1
?1
(= ?i1 + ?m1 ? ?o1 )
Node
Central Node
Node
Central Node
Node
Retrieve Data
...
yt+1
yt
...
Data Stream
(a) Retrieve the data/prior
...
yt+1
yt
...
...
Data Stream
yt+1
yt
...
...
yt+2
(c) Perform component ID
...
Data Stream
Data Stream
(b) Perform inference
yt+1
(d) Update the model
Figure 1: The four main steps of the algorithm that is run asynchronously on each processing node.
convergence remain limited. Sequential particle filters for inference have also been developed [16],
but these suffer issues with particle degeneracy and exponential forgetting.
The main challenge posed by the streaming, distributed setting for BNPs is the combinatorial problem of component identification. Most BNP models contain some notion of a countably infinite set
of latent ?components? (e.g. clusters in a DP mixture model), and do not impose an inherent ordering on the components. Thus, in order to combine information about the components from multiple
processors, the correspondence between components must first be found. Brute force search is in2
tractable even for moderately sized models ? there are K1K+K
possible correspondences for two
1
sets of components of sizes K1 and K2 . Furthermore, there does not yet exist a method to evaluate
the quality of a component correspondence for BNP models. This issue has been studied before in
the MCMC literature, where it is known as the ?label switching problem?, but past solution techniques are generally model-specific and restricted to use on very simple mixture models [17, 18].
This paper presents a methodology for creating streaming, distributed inference algorithms for
Bayesian nonparametric models. In the proposed framework (shown for a single node A in Figure 1), processing nodes receive a sequence of data minibatches, compute a variational posterior
for each, and make asynchronous streaming updates to a central model using a mapping obtained
from a component identification optimization. The key contributions of this work are as follows.
First, we develop a minibatch posterior decomposition that motivates a learning-rate-free streaming,
distributed framework suitable for Bayesian nonparametrics. Then, we derive the component identification optimization problem by maximizing the probability of a component matching. We show
that the BNP prior regularizes model complexity in the optimization; an interesting side effect of this
is that regardless of whether the minibatch variational inference scheme is truncated, the proposed
algorithm is truncation-free. Finally, we provide an efficiently computable regularization bound for
the Dirichlet process prior based on Jensen?s inequality1 . The paper concludes with applications of
the methodology to the DP mixture model, with experimental results demonstrating the scalability
and performance of the method in practice.
2
Streaming, distributed Bayesian nonparametric inference
The proposed framework, motivated by a posterior decomposition that will be discussed in Section
2.1, involves a collection of processing nodes with asynchronous access to a central variational posterior approximation (shown for a single node in Figure 1). Data is provided to each processing
node as a sequence of minibatches. When a processing node receives a minibatch of data, it obtains
the central posterior (Figure 1a), and using it as a prior, computes a minibatch variational posterior
approximation (Figure 1b). When minibatch inference is complete, the node then performs component identification between the minibatch posterior and the current central posterior, accounting for
possible modifications made by other processing nodes (Figure 1c). Finally, it merges the minibatch
posterior into the central variational posterior (Figure 1d).
In the following sections, we use the DP mixture [3] as a guiding example for the technical development of the inference framework. However, it is emphasized that the material in this paper
generalizes to many other BNP models, such as the hierarchical DP (HDP) topic model [19], BP latent feature model [20], and Pitman-Yor (PY) mixture [21] (see the supplement for further details).
1
Regularization bounds for other popular BNP priors may be found in the supplement.
2
2.1
Posterior decomposition
Consider a DP mixture model [3], with cluster parameters ?, assignments z, and observed data y.
For each asynchronous update made by each processing node, the dataset is split into three subsets
y = yo ? yi ? ym for analysis. When the processing node receives a minibatch of data ym , it
queries the central processing node for the original posterior p(?, zo |yo ), which will be used as the
prior for minibatch inference. Once inference is complete, it again queries the central processing
node for the intermediate posterior p(?, zo , zi |yo , yi ) which accounts for asynchronous updates from
other processing nodes since minibatch inference began. Each subset yr , r ? {o, i, m}, has Nr
r
observations {yrj }N
j=1 , and each variable zrj ? N assigns yrj to cluster parameter ?zrj . Given the
independence of ? and z in the prior, and the conditional independence of the data given the latent
parameters, Bayes? rule yields the following decomposition of the posterior of ? and z given y,
Updated Central Posterior
z }| {
p(?, z|y) ?
Original Posterior
Minibatch Posterior
Intermediate Posterior
z
}|
{ z
}|
{ z
}|
{
p(zi , zm |zo )
? p(?, zo |yo )?1 ? p(?, zm , zo |ym , yo ) ? p(?, zi , zo |yi , yo ).
p(zi |zo )p(zm |zo )
(1)
This decomposition suggests a simple streaming, distributed, asynchronous update rule for a processing node: first, obtain the current central posterior density p(?, zo |yo ), and using it as a prior,
compute the minibatch posterior p(?, zm , zo |yo , ym ); and then update the central posterior density
by using (1) with the current central posterior density p(?, zi , zo |yi , yo ). However, there are two
issues preventing the direct application of the decomposition rule (1):
Unknown component correspondence: Since it is generally intractable to find the minibatch posteriors p(?, zm , zo |yo , ym ) exactly, approximate methods are required. Further, as (1) requires the
multiplication of densities, sampling-based methods are difficult to use, suggesting a variational approach. Typical mean-field variational techniques introduce an artificial ordering of the parameters
in the posterior, thereby breaking symmetry that is crucial to combining posteriors correctly using
density multiplication [6]. The use of (1) with mean-field variational approximations thus requires
first solving a component identification problem.
Unknown model size: While previous posterior merging procedures required a 1-to-1 matching
between the components of the minibatch posterior and central posterior [5, 6], Bayesian nonparametric posteriors break this assumption. Indeed, the datasets yo , yi , and ym from the same nonparametric mixture model can be generated by the same, disjoint, or an overlapping set of cluster
parameters. In other words, the global number of unique posterior components cannot be determined
until the component identification problem is solved and the minibatch posterior is merged.
2.2
Variational component identification
Suppose we have the following mean-field exponential family prior and approximate variational
posterior densities in the minibatch decomposition (1),
T
p(?k ) = h(?k )e?0 T (?k )?A(?0 ) ?k ? N
p(?, zo |yo ) ' qo (?, zo ) = ?o (zo )
Ko
Y
T
h(?k )e?ok T (?k )?A(?ok )
k=1
p(?, zm , zo |ym , yo ) ' qm (?, zm , zo ) = ?m (zm )?o (zo )
K
m
Y
T
h(?k )e?mk T (?k )?A(?mk )
(2)
k=1
p(?, zi , zo |yi , yo ) ' qi (?, zi , zo ) = ?i (zi )?o (zo )
Ki
Y
T
h(?k )e?ik T (?k )?A(?ik ) ,
k=1
where ?r (?), r ? {o, i, m} are products of categorical distributions for the cluster labels zr , and the
goal is to use the posterior decomposition (1) to find the updated posterior approximation
K
Y
p(?, z|y) ' q(?, z) = ?(z)
k=1
3
T
h(?k )e?k T (?k )?A(?k ) .
(3)
As mentioned in the previous section, the artificial ordering of components causes the na??ve application of (1) with variational approximations to fail, as disparate components from the approximate
posteriors may be merged erroneously. This is demonstrated in Figure 3a, which shows results from
a synthetic experiment (described in Section 4) ignoring component identification. As the number of
parallel threads increases, more matching mistakes are made, leading to decreasing model quality.
To address this, first note that there is no issue with the first Ko components of qm and qi ; these can
be merged directly since they each correspond to the Ko components of qo . Thus, the component
0
identification problem reduces to finding the correspondence between the last Km
= Km ? Ko
0
components of the minibatch posterior and the last Ki = Ki ? Ko components of the intermediate
posterior. For notational simplicity (and without loss of generality), fix the component ordering
0
of the intermediate posterior qi , and define ? : [Km ] ? [Ki + Km
] to be the 1-to-1 mapping
from minibatch posterior component k to updated central posterior component ?(k), where [K] :=
{1, . . . , K}. The fact that the first Ko components have no ordering ambiguity can be expressed as
0
?(k) = k ?k ? [Ko ]. Note that the maximum number of components after merging is Ki + Km
,
0
since each of the last Km
components in the minibatch posterior may correspond to new components
in the intermediate posterior. After substituting the three variational approximations (2) into (1), the
goal of the component identification optimization is to find the 1-to-1 mapping ? ? that yields the
largest updated posterior normalizing constant, i.e. matches components with similar densities,
XZ
p(zi , zm |zo )
?
?
? ? argmax
qo (?, zo )?1 qm
(?, zm , zo )qi (?, zi , zo )
p(z
|z
)p(z
|z
)
?
i
o
m
o
?
z
s.t.
?
?
qm
(?, zm ) = ?m
(zm )
K
m
Y
T
h(??(k) )e?mk T (??(k) )?A(?mk )
(4)
k=1
?(k) = k, ?k ? [Ko ] , ? 1-to-1
?
? (zmj = ?(k)) = P? (zmj = k).
(zm ) is the distribution such that P?m
Taking the
where ?m
m
logarithm of the objective and exploiting the mean-field decoupling allows the separation of the
objective into a sum of two terms: one expressing the quality of the matching between components
(the integral over ?), and one that regularizes the final model size (the sum over z). While the first
term is available in closed form, the second is in general not. Therefore, using the concavity of
the logarithm function, Jensen?s inequality yields a lower bound that can be used in place of the
intractable original objective, resulting in the final component identification optimization:
0
Ki +Km
?
? ? argmax
?
X
A (?
?k? ) + E?? [log p(zi , zm , zo )]
(5)
k=1
s.t.
?
??k? = ??ik + ??mk
? ??ok
?(k) = k ?k ? [Ko ] , ? 1-to-1.
A more detailed derivation of the optimization may be found in the supplement. E?? denotes expec?
tation under the distribution ?o (zo )?i (zi )?m
(zm ), and
?m??1 (k) k ? ? ([Km ])
?rk k ? Kr
?
??rk =
?r ? {o, i, m}, ??mk =
,
(6)
?0 k > Kr
?0
k?
/ ? ([Km ])
where ? ([Km ]) denotes the range of the mapping ?. The definitions in (6) ensure that the prior ?0
is used whenever a posterior r ? {i, m, o} does not contain a particular component k. The intuition
for the optimization (5) is that it combines finding component correspondences with high similarity
(via the log-partition function) with a regularization term2 on the final updated posterior model size.
Despite its motivation from the Dirichlet process mixture, the component identification optimization
(5) is not specific to this model. Indeed, the derivation did not rely on any properties specific to the
Dirichlet process mixture; the optimization applies to any Bayesian nonparametric model with a set
of ?components? ?, and a set of combinatorial ?indicators? z. For example, the optimization applies
to the hierarchical Dirichlet process topic model [10] with topic word distributions ? and local-toglobal topic correspondences z, and to the beta process latent feature model [4] with features ? and
2
h
i
?
This is equivalent to the KL-divergence regularization ?KL ?o (zo )?i (zi )?m
(zm ) p(zi , zm , zo ) .
4
binary assignment vectors z. The form of the objective in the component identification optimization
(5) reflects this generality. In order to apply the proposed streaming, distributed method to a particular model, one simply needs a black-box variational inference algorithm that computes posteriors
of the form (2), and a way to compute or bound the expectation in the objective of (5).
2.3
Updating the central posterior
To update the central posterior, the node first locks it and solves for ? ? via (5). Locking prevents
other nodes from solving (5) or modifying the central posterior, but does not prevent other nodes
from reading the central posterior, obtaining minibatches, or performing inference; the synthetic
experiment in Section 4 shows that this does not incur a significant time penalty in practice. Then
the processing node transmits ? ? and its minibatch variational posterior to the central processing
node where the product decomposition (1) is used to find the updated central variational posterior q
in (3), with parameters
??
??
K = max Ki , max ? ? (k) , ?(z) = ?i (zi )?o (zo )?m
(zm ), ?k = ??ik + ??mk
? ??ok . (7)
k?[Km ]
Finally, the node unlocks the central posterior, and the next processing node to receive a new minibatch will use the above K, ?(z), and ?k from the central node as their Ko , ?o (zo ), and ?ok .
3
Application to the Dirichlet process mixture model
The expectation in the objective of (5) is typically intractable to compute in closed-form; therefore,
a suitable lower bound may be used in its place. This section presents such a bound for the Dirichlet
process, and discusses the application of the proposed inference framework to the Dirichlet process
mixture model using the developed bound. Crucially, the lower bound decomposes such that the optimization (5) becomes a maximum-weight bipartite matching problem. Such problems are solvable
in polynomial time [22] by the Hungarian algorithm, leading to a tractable component identification
step in the proposed streaming, distributed framework.
3.1
Regularization lower bound
For the Dirichlet process with concentration parameter ? > 0, p(zi , zm , zo ) is the Exchangeable
Partition Probability Function (EPPF) [23]
Y
p(zi , zm , zo ) ? ?|K|?1
(nk ? 1)!,
(8)
k?K
where nk is the amount of data assigned to cluster k, and K is the set of labels of nonempty clusters.
Given that the variational distribution ?r (zr ), r ? {i, m, o} is a product of independent categorQNr QKr 1[zrj =k]
ical distributions ?r (zr ) = j=1
, Jensen?s inequality may be used to bound the
k=1 ?rjk
regularization in (5) below (see the supplement for further details) by
0
Ki +Km
E?? [log p(zi , zm , zo )] ?
X
?
1 ? es?k log ? + log ? max 2, t??k + C
k=1
s??k = s?ik + s??mk + s?ok ,
(9)
t??k = t?ik + t??mk + t?ok ,
where C is a constant with respect to the component mapping ?, and
PN
PN
r
r
k?Kr
k?Kr
j=1 log(1??rjk )
j=1 ?rjk
s?rk =
?r?{o,i,m} t?rk
=
?r?{o,i,m}
0
k>Kr
0
k>Kr
PN
PN
(10)
m
m
k??([Km ])
k??([Km ])
j=1 log(1??mj? ?1 (k) )
j=1 ?mj? ?1 (k)
s??mk =
t??mk =
.
0
k??([K
/
m ])
k??([K
/
m ])
0
Note that the bound (9) allows incremental updates: after finding the optimal mapping ? ? , the central
update (7) can be augmented by updating the values of sk and tk on the central node to
?
?
sk ? s?ik + s??mk + s?ok ,
tk ? t?ik + t??mk + t?ok .
5
(11)
600
140
Truth
Lower Bound
Truth
Lower Bound
120
200
Regularization
Regularization
400
Increasing ?
0
-200
Increasing ?
80
60
-400
-600
100
0
20
40
60
80
40
0.001
Certain
100
Number of Clusters
0.01
(a)
0.1
Clustering Uncertainty ?
1.0
1000
Uncertain
(b)
Figure 2: The Dirichlet process regularization and lower bound, with (2a) fully uncertain labelling and varying
number of clusters, and (2b) the number of clusters fixed with varying labelling uncertainty.
As with K, ?k , and ? from (7), after performing the regularization statistics update (11), a processing
node that receives a new minibatch will use the above sk and tk as their sok and tok , respectively.
Figure 2 demonstrates the behavior of the lower bound in a synthetic
experiment
with N = 100
datapoints for various DP concentration parameter values ? ? 10?3 , 103 . The true regularization
log E? [p(z)] was computed by sample approximation with 104 samples. In Figure 2a, the number of
1
. This figure demonstrates
clusters K was varied, with symmetric categorical label weights set to K
two important phenomena. First, the bound increases as K ? 0; in other words, it gives preference
to fewer, larger clusters, which is the typical BNP ?rich get richer? property. Second, the behavior of
the bound as K ? N depends on the concentration parameter ? ? as ? increases, more clusters are
preferred. In Figure 2b, the number of clusters K was fixed to 10, and the categorical
label weights
were sampled from a symmetric Dirichlet distribution with parameter ? ? 10?3 , 103 . This figure
demonstrates that the bound does not degrade significantly with high labelling uncertainty, and is
nearly exact for low labelling uncertainty. Overall, Figure 2a demonstrates that the proposed lower
bound exhibits similar behaviors to the true regularization, supporting its use in the optimization (5).
3.2
Solving the component identification optimization
Given that both the regularization (9) and component matching score in the objective (5) decompose
0
], the objective can be rewritten using a matrix of matching
as a sum of terms for each k ? [Ki + Km
0
0
0
0
Ki +Km
?(Ki +Km
(
)
)
scores R ? R
and selector variables X ? {0, 1}(Ki +Km )?(Ki +Km ) . Setting
Xkj = 1 indicates that component k in the minibatch posterior is matched to component j in the
intermediate posterior (i.e. ?(k) = j), providing a score Rkj defined using (6) and (10) as
Rkj = A (?
?ij + ??mk ? ??oj )+ 1 ? es?ij +?smk +?soj log ?+log ? max 2, t?ij + t?mk + t?oj . (12)
The optimization (5) can be rewritten in terms of X and R as
X? ? argmax tr XT R
X
XT 1 = 1, Xkk = 1, ?k ? [Ko ]
0
0
T
X ? {0, 1}(Ki +Km )?(Ki +Km ) , 1 = [1, . . . , 1] .
s.t. X1 = 1,
(13)
The first two constraints express the 1-to-1 property of ?(?). The constraint Xkk = 1?k ? [Ko ] fixes
the upper Ko ?Ko block of X to I (due to the fact that the first Ko components are matched directly),
0
0
and the off-diagonal blocks to 0. Denoting X0 , R0 to be the lower right (Ki0 + Km
) ? (Ki0 + Km
)
0
blocks of X, R, the remaining optimization problem is a linear assignment problem on X with
cost matrix ?R0 , which can be solved using the Hungarian algorithm3 . Note that if Km = Ko
or Ki = Ko , this implies that no matching problem needs to be solved ? the first Ko components
0
of the minibatch posterior are matched directly, and the last Km
are set as new components. In
practical implementation of the framework, new clusters are typically discovered at a diminishing
rate as more data are observed, so the number of matching problems that are solved likewise tapers
off. The final optimal component mapping ? ? is found by finding the nonzero elements of X? :
? ? (k) ? argmax X?kj ?k ? [Km ] .
j
3
For the experiments in this work, we used the implementation at github.com/hrldcpr/hungarian.
6
(14)
CPU Time (s)
-5.0
12.0
-6.0
10.0
-7.0
6.0
-8.0
4.0
-9.0
-10.0
2.0
-10.0
-11.0
0.0
-7.0
6.0
-8.0
4.0
-9.0
2.0
0.0
1
2
4
8
16
24
32
40
48
1
2
4
8
16
24
32
40
48
SDA-DP
Batch
SVA
SVI
moVB
SC
-7
-8
-9
-10
-11
-11.0
-5
-4
-3
-2
-1
(a) SDA-DP without component ID
(b) SDA-DP with component ID
45
# Clusters
# Matchings
True # Clusters
120
20
3
4
100
100
80
80
60
# Clusters
# Matchings
True # Clusters
120
Count
Count
25
2
140
35
30
1
(c) Test log-likelihood traces
140
40
0
Time (Log10 s)
# Threads
# Threads
Count
-6.0
8.0
8.0
-6
-5.0
CPU Time
Test Log Likelihood
Test Log Likelihood
Test Log Likelihood
CPU Time
Test Log Likelihood
10.0
Test Log Likelihood
CPU Time (s)
12.0
60
15
40
40
10
20
5
0
0
10
20
30
40
50
60
Merge Time (microseconds)
(d) CPU time for component ID
70
0
20
0
20
40
60
80
# Minibatches Merged
(e) Cluster/component ID counts
100
0
1
2
4
8
16
24
32
40
48
# Threads
(f) Final cluster/component ID counts
Figure 3: Synthetic results over 30 trials. (3a-3b) Computation time and test log likelihood for SDA-DP with
varying numbers of parallel threads, with component identification disabled (3a) and enabled (3b). (3c) Test log
likelihood traces for SDA-DP (40 threads) and the comparison algorithms. (3d) Histogram of computation time
(in microseconds) to solve the component identification optimization. (3e) Number of clusters and number of
component identification problems solved as a function of the number of minibatch updates (40 threads). (3f)
Final number of clusters and matchings solved with varying numbers of parallel threads.
4
Experiments
In this section, the proposed inference framework is evaluated on the DP Gaussian mixture with a
normal-inverse-Wishart (NIW) prior. We compare the streaming, distributed procedure coupled with
standard variational inference [24] (SDA-DP) to five state-of-the-art inference algorithms: memoized online variational inference (moVB) [13], stochastic online variational inference (SVI) [9] with
1
learning rate (t+10)? 2 , sequential variational approximation (SVA) [7] with cluster creation thresh?1
old 10 and prune/merge threshold 10?3 , subcluster splits MCMC (SC) [14], and batch variational
inference (Batch) [24]. Priors were set by hand and all methods were initialized randomly. Methods that use multiple passes through the data (e.g. moVB, SVI) were allowed to do so. moVB was
allowed to make birth/death moves, while SVI/Batch had fixed truncations. All experiments were
performed on a computer with 24 CPU cores and 12GiB of RAM.
Synthetic: This dataset consisted of 100,000 2-dimensional vectors generated from a Gaussian mixture model with 100 clusters and a NIW(?0 , ?0 , ?0 , ?0 ) prior with ?0 = 0, ?0 = 10?3 , ?0 = I,
and ?0 = 4. The algorithms were given the true NIW prior, DP concentration ? = 5, and minibatches of size 50. SDA-DP minibatch inference was truncated to K = 50 components, and all other
algorithms were truncated to K = 200 components. Figure 3 shows the results from the experiment
over 30 trials, which illustrate a number of important properties of SDA-DP. First and foremost,
ignoring the component identification problem leads to decreasing model quality with increasing
number of parallel threads, since more matching mistakes are made (Figure 3a). Second, if component identification is properly accounted for using the proposed optimization, increasing the number
of parallel threads reduces execution time, but does not affect the final model quality (Figure 3b).
Third, SDA-DP (with 40 threads) converges to the same final test log likelihood as the comparison
algorithms in significantly reduced time (Figure 3c). Fourth, each component identification optimization typically takes ? 10?5 seconds, and thus matching accounts for less than a millisecond of
total computation and does not affect the overall computation time significantly (Figure 3d). Fifth,
the majority of the component matching problems are solved within the first 80 minibatch updates
(out of a total of 2,000) ? afterwards, the true clusters have all been discovered and the processing
nodes contribute to those clusters rather than creating new ones, as per the discussion at the end of
Section 3.2 (Figure 3e). Finally, increased parallelization can be advantageous in discovering the
correct number of clusters; with only one thread, mistakes made early on are built upon and persist,
whereas with more threads there are more component identification problems solved, and thus more
chances to discover the correct clusters (Figure 3f).
7
3500
3000
Count
2500
2000
1500
1000
500
0
0
1
2
3
4
5
6
7
8
9
Cluster
(a) Airplane trajectory clusters
Algorithm
SDA-DP
SVI
SVA
moVB
SC
Batch
(b) Airplane cluster weights
(c) MNIST clusters
(d) Numerical results on Airplane, MNIST, and SUN
Airplane
MNIST
SUN
Time (s) TestLL Time (s) TestLL Time (s) TestLL
0.66
-0.55
3.0
-145.3
9.4
-150.3
1.50
-0.59
117.4
-147.1
568.9
-149.9
3.00
-4.71
57.0
-145.0
10.4
-152.8
0.69
-0.72
645.9
-149.2
1258.1
-149.7
393.6
-1.06
1639.1
-146.8
1618.4
-150.6
1.07
-0.72
829.6
-149.5
1881.5
-149.7
Figure 4: (4a-4b) Highest-probability instances and counts for 10 trajectory clusters generated by SDA-DP.
(4c) Highest-probability instances for 20 clusters discovered by SDA-DP on MNIST. (4d) Numerical results.
Airplane Trajectories: This dataset consisted of ?3,000,000 automatic dependent surveillance
broadcast (ADS-B) messages collected from planes across the United States during the period 201303-22 01:30:00UTC to 2013-03-28 12:00:00UTC. The messages were connected based on plane call
sign and time stamp, and erroneous trajectories were filtered based on reasonable spatial/temporal
bounds, yielding 15,022 trajectories with 1,000 held out for testing. The latitude/longitude points
in each trajectory were fit via linear regression, and the 3-dimensional parameter vectors were clustered. Data was split into minibatches of size 100, and SDA-DP used 16 parallel threads.
MNIST Digits [25]: This dataset consisted of 70,000 28 ? 28 images of hand-written digits, with
10,000 held out for testing. The images were reduced to 20 dimensions with PCA prior to clustering.
Data was split into minibatches of size 500, and SDA-DP used 48 parallel threads.
SUN Images [26]: This dataset consisted of 108,755 images from 397 scene categories, with 8,755
held out for testing. The images were reduced to 20 dimensions with PCA prior to clustering. Data
was split into minibatches of size 500, and SDA-DP used 48 parallel threads.
Figure 4 shows the results from the experiments on the three real datasets. From a qualitative standpoint, SDA-DP discovers sensible clusters in the data, as demonstrated in Figures 4a?4c. However,
an important quantitative result is highlighted by Table 4d: the larger a dataset is, the more the
benefits of parallelism provided by SDA-DP become apparent. SDA-DP consistently provides a
model quality that is competitive with the other algorithms, but requires orders of magnitude less
computation time, corroborating similar findings on the synthetic dataset.
5
Conclusions
This paper presented a streaming, distributed, asynchronous inference algorithm for Bayesian nonparametric models, with a focus on the combinatorial problem of matching minibatch posterior components to central posterior components during asynchronous updates. The main contributions are
a component identification optimization based on a minibatch posterior decomposition, a tractable
bound on the objective for the Dirichlet process mixture, and experiments demonstrating the performance of the methodology on large-scale datasets. While the present work focused on the DP
mixture as a guiding example, it is not limited to this model ? exploring the application of the
proposed methodology to other BNP models is a potential area for future research.
Acknowledgments
This work was supported by the Office of Naval Research under ONR MURI grant N000141110688.
8
References
[1] Agostino Nobile. Bayesian Analysis of Finite Mixture Distributions. PhD thesis, Carnegie Mellon University, 1994.
[2] Jeffrey W. Miller and Matthew T. Harrison. A simple example of Dirichlet process mixture inconsistency
for the number of components. In Advances in Neural Information Processing Systems 26, 2013.
[3] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, New York, 2010.
[4] Thomas L. Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process.
In Advances in Neural Information Processing Systems 22, 2005.
[5] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming
variational Bayes. In Advances in Neural Information Procesing Systems 26, 2013.
[6] Trevor Campbell and Jonathan P. How. Approximate decentralized Bayesian inference. In Proceedings
of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[7] Dahua Lin. Online learning of nonparametric mixture models via sequential variational approximation.
In Advances in Neural Information Processing Systems 26, 2013.
[8] Xiaole Zhang, David J. Nott, Christopher Yau, and Ajay Jasra. A sequential algorithm for fast fitting of
Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 23(4):1143?1162,
2014.
[9] Matt Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of
Machine Learning Research, 14:1303?1347, 2013.
[10] Chong Wang, John Paisley, and David M. Blei. Online variational inference for the hierarchical Dirichlet
process. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics,
2011.
[11] Michael Bryant and Erik Sudderth. Truly nonparametric online variational inference for hierarchical
Dirichlet processes. In Advances in Neural Information Proecssing Systems 23, 2009.
[12] Chong Wang and David Blei. Truncation-free stochastic variational inference for Bayesian nonparametric
models. In Advances in Neural Information Processing Systems 25, 2012.
[13] Michael Hughes and Erik Sudderth. Memoized online variational inference for Dirichlet process mixture
models. In Advances in Neural Information Processing Systems 26, 2013.
[14] Jason Chang and John Fisher III. Parallel sampling of DP mixture models using sub-clusters splits. In
Advances in Neural Information Procesing Systems 26, 2013.
[15] Willie Neiswanger, Chong Wang, and Eric P. Xing. Asymptotically exact, embarassingly parallel MCMC.
In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[16] Carlos M. Carvalho, Hedibert F. Lopes, Nicholas G. Polson, and Matt A. Taddy. Particle learning for
general mixtures. Bayesian Analysis, 5(4):709?740, 2010.
[17] Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical
Society: Series B, 62(4):795?809, 2000.
[18] Ajay Jasra, Chris Holmes, and David Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50?67, 2005.
[19] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes.
Journal of the American Statistical Association, 101(476):1566?1581, 2006.
[20] Finale Doshi-Velez and Zoubin Ghahramani. Accelerated sampling for the Indian buffet process. In
Proceedings of the International Conference on Machine Learning, 2009.
[21] Avinava Dubey, Sinead Williamson, and Eric Xing. Parallel Markov chain Monte Carlo for Pitman-Yor
mixture models. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[22] Jack Edmonds and Richard Karp. Theoretical improvements in algorithmic efficiency for network flow
problems. Journal of the Association for Computing Machinery, 19:248?264, 1972.
[23] Jim Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related
Fields, 102(2):145?158, 1995.
[24] David M. Blei and Michael I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian
Analysis, 1(1):121?144, 2006.
[25] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. MNIST database of handwritten digits. Online: yann.lecun.com/exdb/mnist.
[26] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN 397 image
database. Online: vision.cs.princeton.edu/projects/2010/SUN.
9
| 5876 |@word trial:2 polynomial:1 advantageous:1 km:26 crucially:1 decomposition:10 accounting:1 thereby:1 tr:1 series:1 score:3 united:1 denoting:1 document:2 past:1 current:3 com:2 yet:1 must:1 written:1 john:4 numerical:2 partition:3 concert:1 update:15 intelligence:4 fewer:1 yr:1 discovering:1 plane:2 core:1 filtered:1 blei:5 provides:2 node:35 contribute:1 preference:1 zhang:1 five:1 unbounded:1 direct:1 beta:2 become:1 ik:8 persistent:1 qualitative:1 combine:2 fitting:1 introduce:1 x0:1 forgetting:1 indeed:2 behavior:3 xz:1 bnp:11 decreasing:2 utc:2 cpu:6 increasing:4 becomes:2 provided:2 discover:1 underlying:2 matched:3 project:1 developed:3 finding:6 temporal:1 quantitative:1 growth:1 bryant:1 exactly:1 k2:1 qm:4 demonstrates:4 brute:1 exchangeable:3 grant:1 before:2 local:1 mistake:3 tation:1 switching:3 despite:1 id:6 merge:2 might:1 black:1 studied:1 suggests:1 limited:2 range:1 term2:1 practical:2 unique:2 acknowledgment:1 testing:3 lecun:2 practice:2 block:3 hughes:1 svi:5 digit:3 procedure:2 area:1 significantly:3 matching:13 boyd:1 word:3 embarassingly:1 griffith:1 zoubin:2 get:1 cannot:1 yee:2 py:1 equivalent:1 demonstrated:2 yt:8 maximizing:1 regardless:1 focused:1 simplicity:1 assigns:1 m2:3 rule:3 holmes:1 datapoints:1 retrieve:3 enabled:1 eppf:1 notion:2 updated:6 suppose:1 taddy:1 exact:3 element:1 updating:2 persist:1 muri:1 database:2 observed:3 solved:8 capture:1 wang:4 connected:1 sun:5 ordering:6 highest:2 mentioned:1 intuition:1 complexity:4 moderately:1 locking:1 broderick:1 solving:3 incur:1 creation:1 upon:1 bipartite:1 eric:2 efficiency:1 xkk:2 matchings:3 various:1 derivation:2 zo:34 fast:1 monte:2 query:2 artificial:6 sc:3 outside:1 birth:1 bnps:4 richer:1 posed:1 larger:2 solve:1 apparent:1 statistic:3 highlighted:1 itself:1 asynchronously:1 final:8 online:8 beal:1 sequence:3 product:3 zm:21 combining:1 scalability:2 exploiting:1 convergence:1 cluster:37 assessing:1 incremental:1 converges:1 tk:3 derive:1 develop:1 illustrate:1 soj:1 ij:3 solves:1 longitude:1 hungarian:3 involves:1 implies:1 gib:1 c:1 merged:4 correct:2 filter:1 stochastic:5 modifying:1 material:1 fix:2 clustered:1 decompose:1 sok:1 exploring:1 considered:1 normal:1 mapping:7 algorithmic:1 matthew:3 substituting:1 early:1 torralba:1 nobile:1 combinatorial:4 label:8 individually:1 largest:1 reflects:1 hoffman:1 mit:2 gaussian:2 rather:2 nott:1 pn:4 exchangeability:1 varying:4 surveillance:1 wilson:1 office:1 karp:1 focus:2 yo:14 naval:1 notational:1 consistently:1 properly:1 indicates:1 likelihood:9 improvement:1 contrast:1 sense:2 inference:33 dependent:1 streaming:21 typically:3 diminishing:1 ical:1 i1:3 issue:4 overall:2 priori:2 development:1 art:1 spatial:1 field:5 once:1 having:1 sampling:3 nearly:1 future:1 develops:1 inherent:2 richard:1 randomly:1 ve:1 divergence:1 argmax:4 jeffrey:1 message:2 chong:4 truly:2 mixture:28 yielding:1 held:3 chain:2 integral:1 machinery:1 old:1 logarithm:2 initialized:1 theoretical:1 mk:15 uncertain:2 increased:1 instance:2 modeling:1 assignment:4 cost:1 subset:2 synthetic:6 sda:18 density:7 international:2 csail:3 probabilistic:1 off:2 michael:5 ym:7 avinava:1 na:1 again:1 central:31 ambiguity:1 thesis:1 broadcast:1 wishart:1 yau:1 creating:3 american:1 leading:2 account:2 suggesting:1 potential:1 de:1 depends:2 stream:5 ad:1 performed:1 break:1 jason:1 closed:2 competitive:1 bayes:2 xing:2 parallel:11 carlos:1 contribution:2 efficiently:1 likewise:1 yield:3 correspond:2 miller:1 bayesian:16 identification:23 handwritten:1 carlo:2 trajectory:6 
processor:1 whenever:1 trevor:2 andre:1 definition:1 tamara:1 james:1 doshi:1 naturally:1 associated:1 transmits:1 degeneracy:1 sampled:1 dataset:8 popular:1 sinead:1 campbell:1 ok:9 methodology:6 specify:1 nonparametrics:3 evaluated:1 box:1 generality:2 furthermore:1 until:1 hand:2 receives:3 qo:3 christopher:2 overlapping:1 minibatch:31 quality:6 tdjc:1 disabled:1 aude:1 effect:1 matt:2 contain:2 true:6 consisted:4 willie:1 regularization:13 assigned:1 symmetric:2 nonzero:1 death:1 i2:3 during:2 whye:2 exdb:1 complete:2 performs:1 image:6 variational:34 jack:1 discovers:1 recently:1 began:1 common:1 xkj:1 conditioning:2 discussed:1 association:2 m1:3 dahua:1 velez:1 expressing:1 significant:1 mellon:1 paisley:2 automatic:1 particle:3 testll:3 had:1 access:1 similarity:1 posterior:65 thresh:1 certain:1 hay:1 inequality:2 binary:1 onr:1 inconsistency:1 yi:6 niw:3 impose:2 parallelized:1 r0:2 prune:1 period:1 stephen:2 multiple:2 afterwards:1 infer:1 reduces:2 technical:1 match:1 ki0:2 lin:1 qi:4 ko:18 regression:1 oliva:1 ajay:2 expectation:2 foremost:1 vision:1 histogram:1 receive:3 whereas:1 addressed:2 wealth:1 grow:1 harrison:1 sudderth:2 crucial:2 standpoint:1 parallelization:3 pass:1 flow:1 jordan:3 call:1 finale:1 intermediate:7 split:6 iii:1 independence:3 restaurant:1 zi:18 affect:2 fit:1 silent:1 regarding:1 tradeoff:1 computable:1 airplane:5 cousin:1 sva:3 whether:1 motivated:1 thread:16 pca:2 penalty:1 suffer:1 york:1 cause:1 antonio:1 generally:2 detailed:1 dubey:1 amount:2 nonparametric:11 encyclopedia:1 category:1 reduced:3 exist:2 millisecond:1 sign:1 arising:1 correctly:1 disjoint:1 per:1 edmonds:1 carnegie:1 finetti:1 express:1 key:2 four:1 demonstrating:3 threshold:1 prevent:1 ram:1 asymptotically:1 sum:3 run:1 inverse:1 uncertainty:7 fourth:1 lope:1 place:2 family:1 reasonable:1 yann:2 separation:1 unlocks:1 bound:21 ki:16 correspondence:9 constraint:2 bp:2 scene:1 erroneously:1 performing:3 rendered:2 jasra:2 developing:1 remain:1 across:1 lid:1 modification:1 restricted:1 discus:1 count:7 fail:1 nonempty:1 neiswanger:1 tractable:3 end:1 generalizes:1 available:1 rewritten:2 jhow:1 decentralized:1 apply:1 hierarchical:5 nicholas:2 batch:5 encounter:1 buffet:3 corinna:1 original:4 in2:1 denotes:2 dirichlet:20 ensure:1 clustering:3 remaining:1 thomas:1 lock:1 graphical:1 log10:1 rkj:2 k1:1 chinese:1 ghahramani:2 society:1 objective:9 move:1 parametric:2 concentration:4 nr:1 diagonal:1 exhibit:2 dp:31 algorithm3:1 majority:1 sensible:1 procesing:2 degrade:1 topic:5 chris:1 collected:1 rjk:3 hdp:1 erik:2 o1:3 julian:1 providing:1 difficult:1 ashia:1 trace:2 disparate:1 polson:1 implementation:2 motivates:1 unknown:2 perform:2 teh:2 upper:1 observation:1 datasets:4 markov:2 finite:1 tok:1 truncated:3 supporting:1 regularizes:2 jim:1 discovered:3 varied:1 david:7 required:2 kl:2 merges:1 address:3 below:1 memoized:2 parallelism:1 latitude:1 reading:3 challenge:3 built:1 max:4 oj:2 royal:1 subcluster:1 suitable:2 force:1 rely:1 indicator:1 zr:3 solvable:1 scheme:1 github:1 concludes:2 categorical:3 coupled:1 kj:1 prior:18 literature:1 multiplication:2 smk:1 loss:1 expect:2 fully:1 interesting:1 carvalho:1 xiao:1 accounted:1 supported:1 last:4 asynchronous:9 free:5 truncation:4 side:1 burges:1 taper:1 taking:1 hedibert:1 fifth:1 pitman:3 yor:2 distributed:12 benefit:1 dimension:2 rich:1 computes:2 preventing:1 concavity:1 collection:1 made:5 far:1 approximate:4 obtains:1 selector:1 countably:1 preferred:1 dealing:1 global:1 zmj:2 n000141110688:1 corpus:1 corroborating:1 
search:1 latent:6 decomposes:1 sk:3 table:1 mj:2 decoupling:1 ignoring:2 symmetry:1 obtaining:1 williamson:1 did:1 main:3 motivation:1 allowed:2 x1:1 augmented:1 ehinger:1 sub:1 guiding:2 exponential:2 stamp:1 breaking:1 third:1 theorem:1 rk:4 erroneous:1 specific:3 emphasized:1 xt:2 jensen:3 krista:1 cortes:1 expec:1 normalizing:1 intractable:3 mnist:7 sequential:5 merging:2 kr:6 supplement:4 phd:1 magnitude:2 execution:1 labelling:4 nk:2 suited:1 simply:1 prevents:1 expressed:1 partially:1 chang:1 applies:2 springer:1 truth:2 chance:1 minibatches:9 conditional:2 sized:1 goal:2 microsecond:2 fisher:3 infinite:2 typical:2 determined:1 total:2 experimental:2 e:2 m3:3 jonathan:2 jianxiong:1 indian:3 wibisono:1 accelerated:1 yrj:2 evaluate:1 mcmc:4 princeton:1 phenomenon:1 |
5,387 | 5,877 | Fixed-Length Poisson MRF:
Adding Dependencies to the Multinomial
David I. Inouye
Pradeep Ravikumar
Inderjit S. Dhillon
Department of Computer Science
University of Texas at Austin
{dinouye,pradeepr,inderjit}@cs.utexas.edu
Abstract
We propose a novel distribution that generalizes the Multinomial distribution to
enable dependencies between dimensions. Our novel distribution is based on the
parametric form of the Poisson MRF model [1] but is fundamentally different because of the domain restriction to a fixed-length vector like in a Multinomial where
the number of trials is fixed or known. Thus, we propose the Fixed-Length Poisson MRF (LPMRF) distribution. We develop AIS sampling methods to estimate
the likelihood and log partition function (i.e. the log normalizing constant), which
was not developed for the Poisson MRF model. In addition, we propose novel
mixture and topic models that use LPMRF as a base distribution and discuss the
similarities and differences with previous topic models such as the recently proposed Admixture of Poisson MRFs [2]. We show the effectiveness of our LPMRF
distribution over Multinomial models by evaluating the test set perplexity on a
dataset of abstracts and Wikipedia. Qualitatively, we show that the positive dependencies discovered by LPMRF are interesting and intuitive. Finally, we show
that our algorithms are fast and have good scaling (code available online).
1
Introduction & Related Work
The Multinomial distribution seems to be a natural distribution for modeling count-valued data
such as text documents. Indeed, most topic models such as PLSA [3], LDA [4] and numerous
extensions?see [5] for a survey of probabilistic topic models?use the Multinomial as the fundamental base distribution while adding complexity using other latent variables. This is most likely
due to the extreme simplicity of Multinomial parameter estimation?simple frequency counts?that
is usually smoothed by the simple Dirichlet conjugate prior. In addition, because the Multinomial
requires the length of a document to be fixed or pre-specified, usually a Poisson distribution on document length is assumed. This yields a Poisson-Multinomial distribution?which by well-known
results is merely an independent Poisson model.1 However, the Multinomial assumes independence
between the words because the Multinomial is merely the sum of independent categorical variables.
This restriction does not seem to fit with real-world text. For example, words like ?neural? and
?network? will tend to co-occur quite frequently together in NIPS papers. Thus, we seek to relax
the word independence assumption of the Multinomial.
The Poisson MRF distribution (PMRF) [1] seems to be a potential replacement for the PoissonMultinomial because it allows some dependencies between words. The Poisson MRF is developed
by assuming that every conditional distribution is 1D Poisson. However, the original formulation
in [1] only allowed for negative dependencies. Thus, several modifications were proposed in [6]
to allow for positive dependencies. One proposal, the Truncated Poisson MRF (TPMRF), simply
truncated the PMRF by setting a max count for every word. While this formulation may provide
1. The assumption of Poisson document length is not important for most topic models [4].
interesting parameter estimates, a TPMRF with positive dependencies may be almost entirely concentrated at the corners of the joint distribution because of the quadratic term in the log probability
(see the bottom left of Fig. 1). In addition, the log partition function of the TPMRF is intractable to
estimate even for a small number of dimensions because the sum is over an exponential number of
terms.
Thus, we seek a different distribution than a TPMRF that allows positive dependencies but is more
appropriately normalized. We observe that the Multinomial is proportional to an independent Poisson model with the domain restricted to a fixed length L. Thus, in a similar way, we propose a
Fixed-Length Poisson MRF (LPMRF) that is proportional to a PMRF but is restricted to a domain
with a fixed vector length, i.e. where $\|x\|_1 = L$. This distribution is quite different from previous
PMRF variants because the normalization is very different as will be described in later sections. For
a motivating example, in Fig. 1, we show the marginal distributions of the empirical distribution
and fitted models using only three words from the Classic3 dataset that contains documents regarding library sciences and aerospace engineering (See Sec. 4). Clearly, real-world text has positive
dependencies as evidenced by the empirical marginals of "boundary" and "layer" (i.e. referring to the boundary layer in fluid dynamics), and LPMRF does the best at fitting this empirical distribution. In addition, the log partition function, and hence the likelihood, for LPMRF can be approximated using sampling as described in later sections. Under the PMRF or TPMRF models, both the log partition function and likelihood were computationally intractable to compute exactly.2 Thus, approximating the log partition function of an LPMRF opens up the door for likelihood-based hyperparameter estimation and model evaluation that was not possible with PMRF.
Figure 1: Marginal Distributions from Classic3 Dataset (Top Left) Empirical Distribution, (Top Right) Estimated Multinomial-Poisson joint distribution, i.e. independent Poissons, (Bottom Left) Truncated Poisson MRF, (Bottom Right) Fixed-Length PMRF-Poisson joint distribution. The simple empirical distribution clearly shows a strong dependency between "boundary" and "layer" but a strong negative dependency of "boundary" with "library". Clearly, the word-independent Multinomial-Poisson distribution underfits the data. While the Truncated PMRF can model dependencies, it obviously has normalization problems because the normalization is dominated by the edge case. The LPMRF-Poisson distribution much more appropriately fits the empirical data.
In the topic modeling literature, many researchers have realized the issue with using the Multinomial distribution as the base distribution. For example, the interpretability of a Multinomial can be
difficult since it only gives an ordering of words. Thus, multiple metrics have been proposed to evaluate topic models based on the perceived dependencies between words within a topic [7, 8, 9, 10].
In particular, [11] showed that the Multinomial assumption was often violated in real world data.
In another paper [12], the LDA topic assignments for each word are used to train a separate Ising
model, i.e. a Bernoulli MRF, for each topic in a heuristic two-stage procedure. Instead of modeling dependencies a posteriori, we formulate a generalization of topic models that allows the LPMRF
distribution to directly replace the Multinomial. This allows us to compute a topic model and word
dependencies jointly under a unified model as opposed to the two-stage heuristic procedure in [12].
2. The example in Fig. 1 was computed by exhaustively computing the log partition function.
This model has some connection to the Admixture of Poisson MRFs model (APM) [2], which was
the first topic model to consider word dependencies. However, the LPMRF topic model directly
relaxes the LDA word-independence assumption (i.e. the independent case is the same as LDA)
whereas APM is only an indirect relaxation of LDA because APM mixes in the exponential family
canonical parameter space while LDA mixes in the standard Multinomial parameter space. Another
difference with APM is that our proposed LPMRF topic model can actually produce topic assignments for each word similar to LDA with Gibbs sampling [13]. Finally, the LPMRF topic model
does not fall into the same generalization of topic models as APM because the instance-specific
distribution is not an LPMRF, as described more fully in later sections. The follow-up APM paper
[14] gives a fast algorithm for estimating the PMRF parameters. We use this algorithm as the basis
for estimating the topic LPMRF parameters. For estimating the topic vectors for each document, we
give a simple coordinate descent algorithm for estimation of the LPMRF topic model. This estimation of topic vectors can be seen as a direct relaxation of LDA and could even provide a different
estimation algorithm for LDA.
2
Fixed-Length Poisson MRF
Notation Let p, n and k denote the number of words, documents and topics respectively. We will
generally use uppercase letters for matrices (e.g. $\Phi$, X), boldface lowercase letters or indices of matrices for column vectors (i.e. $x_i$, $\theta$, $\Phi_s$) and lowercase letters for scalar values (i.e. $x_i$, $\theta_s$).
Poisson MRF Definition First, we will briefly describe the Poisson MRF distribution and refer
the reader to [1, 6] for more details. A PMRF can be parameterized by a node vector $\theta$ and an edge matrix $\Phi$ whose non-zeros encode the direct dependencies between words:
$$\Pr_{\mathrm{PMRF}}(x \mid \theta, \Phi) = \exp\Big(\theta^T x + x^T \Phi x - \sum_{s=1}^{p} \log(x_s!) - A(\theta, \Phi)\Big),$$
where $A(\theta, \Phi)$ is the log partition function needed for normalization. Note that without loss of generality, we can assume $\Phi$ is symmetric because it only shows up in the symmetric quadratic term. The conditional distribution of one word given all the others, i.e. $\Pr(x_s \mid x_{-s})$, is a 1D Poisson distribution with natural parameter $\eta_s = \theta_s + x_{-s}^T \Phi_s$ by construction. One primary issue with the PMRF is that the log partition function (i.e. $A(\theta, \Phi)$) is a log-sum over all vectors in $\mathbb{Z}_+^p$ and thus with even one positive dependency, the log partition
function is infinite because of the quadratic term in the formulation. Yang et al. [6] tried to address
this issue but, as illustrated in the introduction, their proposed modifications to the PMRF can yield
unusual models for real-world data.
Figure 2: LPMRF distribution for L = 10 (left) and L = 20 (right) with negative, zero and positive dependencies. The distribution of LPMRF can be quite different than a Multinomial (zero
dependency) and thus provides a much more flexible parametric distribution for count data.
LPMRF Definition The Fixed-Length Poisson MRF (LPMRF) distribution is a simple yet fundamentally different distribution than the PMRF. Letting $L \equiv \|x\|_1$ be the length of a document, we define the LPMRF distribution as follows:
$$\Pr_{\mathrm{LPMRF}}(x \mid \theta, \Phi, L) = \exp\Big(\theta^T x + x^T \Phi x - \sum_s \log(x_s!) - A_L(\theta, \Phi)\Big) \quad (1)$$
$$A_L(\theta, \Phi) = \log \sum_{x \in X_L} \exp\Big(\theta^T x + x^T \Phi x - \sum_s \log(x_s!)\Big) \quad (2)$$
$$X_L = \{x : x \in \mathbb{Z}_+^p,\, \|x\|_1 = L\}. \quad (3)$$
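For intuition, the normalizing constant in (2) can be checked by brute force when $p$ and $L$ are tiny, since $X_L$ is then a small finite set. The following is a minimal sketch (function and variable names are ours, not from the paper's code release; `theta` is a length-$p$ numpy array and `Phi` a $p \times p$ array):

```python
import itertools
import math
import numpy as np

def log_partition_exact(theta, Phi, L):
    """Brute-force A_L(theta, Phi) by enumerating X_L = {x in Z_+^p : ||x||_1 = L}.
    Only feasible for tiny p and L; an illustrative sketch, not the paper's code."""
    p = len(theta)
    terms = []
    for cuts in itertools.combinations(range(L + p - 1), p - 1):
        # stars-and-bars decoding of one composition of L into p nonnegative parts
        x = np.diff([-1, *cuts, L + p - 1]) - 1
        val = theta @ x + x @ Phi @ x - sum(math.lgamma(c + 1) for c in x)
        terms.append(val)
    return float(np.logaddexp.reduce(terms))
```

With $\Phi = 0$ this recovers the Multinomial log partition $L\log(\sum_s e^{\theta_s}) - \log(L!)$, which serves as a quick sanity check.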
The only difference from the PMRF parametric form is the log partition function $A_L(\theta, \Phi)$, which is conditioned on the set $X_L$ (unlike the unbounded set for PMRF). This domain restriction is critical to formulating a tractable and reasonable distribution. Combined with a Poisson distribution on vector length $L = \|x\|_1$, the LPMRF distribution can be a much more suitable distribution for documents
than a Multinomial. The LPMRF distribution reduces to the standard Multinomial if there are no
dependencies. However, if there are dependencies, then the distribution can be quite different than
a Multinomial as illustrated in Fig. 2 for an LPMRF with p = 2 and L fixed at either 10 or 20
words. After the original submission, we realized that for p = 2 the LPMRF model is the same
as the multiplicative binomial generalization in [15]. Thus, the LPMRF model can be seen as a
multinomial generalization ($p \geq 2$) of the multiplicative binomial in [15].
LPMRF Parameter Estimation Because the parametric form of the LPMRF model is the same
as the form of the PMRF model and we primarily care about finding the correct dependencies, we
decide to use the PMRF estimation algorithm described in [14] to estimate $\theta$ and $\Phi$. The algorithm
in [14] uses an approximation to the likelihood by using the pseudo-likelihood and performing $\ell_1$ regularized nodewise Poisson regressions. The $\ell_1$ regularization is important both for the sparsity of
the dependencies and the computational efficiency of the algorithm. While the PMRF and LPMRF
are different distributions, the pseudo-likelihood approximation for estimation provides good results
as shown in the results section. We present timing results to show the scalability of this algorithm in
Sec. 5. Other parameter estimation methods would be an interesting area of future work.
2.1
Likelihood and Log Partition Estimation
Unlike previous work on the PMRF or TPMRF distributions, we develop a tractable approximation
to the LPMRF log partition function (Eq. 2) so that we can compute approximate likelihood values.
The likelihood of a model can be fundamentally important for hyperparameter optimization and
model evaluation.
LPMRF Annealed Importance Sampling First, we develop an LPMRF Gibbs sampler by considering the most common form of Multinomial sampling, namely by taking the sum of a sequence
of L Categorical variables. From this intuition, we sample one word at a time while holding all other
words fixed. The probability of one word in the sequence $w_\ell$ given all the other words is proportional to $\exp(\theta_s + 2\Phi_s^T x_{-\ell})$ where $x_{-\ell}$ is the sum of all other words. See the Appendix for the details of Gibbs sampling. Then, we derive an annealed importance sampler [16] using the Gibbs sampling by scaling the $\Phi$ matrix for each successive distribution by the linear sequence starting with 0 and ending with 1 (i.e. $\gamma = 0, \ldots, 1$). Thus, we start with a simple Multinomial sample from $\Pr(x \mid \theta, 0 \cdot \Phi, L) = \Pr_{\mathrm{Mult}}(x \mid \theta, L)$ and then Gibbs sample from each successive distribution $\Pr_{\mathrm{LPMRF}}(x \mid \theta, \gamma\Phi, L)$, updating the sample weight as defined in [16] until we reach the final distribution when $\gamma = 1$. From these weighted samples, we can compute an estimate of the log partition function [16].
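As a concrete illustration of the Gibbs step just described, here is a minimal sketch of one sweep over the $L$ word slots (hypothetical helper names, not the authors' implementation; `theta` and `Phi` are numpy arrays and `Phi` is assumed symmetric):

```python
import numpy as np

def gibbs_sweep(words, theta, Phi, rng):
    """One sweep of the LPMRF Gibbs sampler sketched above. `words` is a
    length-L integer array of word indices; each slot is resampled with
    Pr(w_l = s | rest) proportional to exp(theta_s + 2 * Phi_s . x_{-l})."""
    p = len(theta)
    counts = np.bincount(words, minlength=p)
    for l in range(len(words)):
        counts[words[l]] -= 1                   # remove slot l from the counts
        logits = theta + 2.0 * (Phi @ counts)   # natural parameters given the rest
        probs = np.exp(logits - logits.max())
        words[l] = rng.choice(p, p=probs / probs.sum())
        counts[words[l]] += 1
    return words
```

Scaling `Phi` by the annealing sequence $\gamma = 0, \ldots, 1$ and accumulating the AIS weights of [16] on top of this kernel yields the log partition estimate.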
Upper Bound Using Hölder's inequality, a simple convex relaxation, and the partition function of a Multinomial, an upper bound for the log partition function can be computed: $A_L(\theta, \Phi) \leq L^2 \lambda_{\Phi,1} + L \log(\sum_s \exp \theta_s) - \log(L!)$, where $\lambda_{\Phi,1}$ is the maximum eigenvalue of $\Phi$. See the Appendix for the full derivation. We simplify this upper bound by subtracting $\log(\sum_s \exp \theta_s)$ from $\theta$ (which does not change the distribution) so that the second term becomes 0. Then, neglecting the constant term $-\log(L!)$ that does not interact with the parameters $(\theta, \Phi)$, the log partition function is upper bounded by a simple quadratic function w.r.t. L.
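Under these stated simplifications, the bound is a one-liner to evaluate; a sketch (our naming, assuming `Phi` is a numpy array):

```python
import numpy as np

def log_partition_upper_bound(Phi, L):
    """Quadratic upper bound from above, after shifting theta so the linear term
    vanishes and dropping the constant -log(L!): A_L(theta, Phi) <= L^2 * lambda_max."""
    lam_max = np.linalg.eigvalsh((Phi + Phi.T) / 2.0).max()
    return L ** 2 * lam_max
```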
Weighting $\Phi$ for Different L For datasets in which L is observed for every sample but is not uniform, such as document collections, the log partition function will grow quadratically in L if there are any positive dependencies, as suggested by the upper bound. This causes long documents to have extremely small likelihood. Thus, we must modify $\Phi$ as L gets larger to counteract this effect. We propose a simple modification that scales the $\Phi$ for each L: $\tilde{\Phi}_L = w(L)\Phi$. In particular, we propose to use the sigmoidal function given by the Log Logistic cumulative distribution function (CDF): $w(L) = 1 - \mathrm{LogLogisticCDF}(L \mid \alpha_{LL}, \beta_{LL})$. We set the $\beta_{LL}$ parameter to 2 so that the tail is $O(1/L^2)$, which will eventually cause the upper bound to approach a constant. Letting $\bar{L} = \frac{1}{n}\sum_i L_i$ be the mean instance length, we choose $\alpha_{LL} = c\bar{L}$ for some small constant c. This choice of $\alpha_{LL}$ helps the weighting function to appropriately scale for corpora of different average lengths.
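Since the log-logistic CDF has the closed form $1/(1 + (L/\alpha)^{-\beta})$, the proposed weight reduces to a one-line function; a sketch under that assumption:

```python
def weight(L, alpha_LL, beta_LL=2.0):
    """w(L) = 1 - LogLogisticCDF(L | alpha_LL, beta_LL) = 1 / (1 + (L/alpha_LL)^beta_LL).
    With beta_LL = 2 the tail decays as O(1/L^2), as described above."""
    return 1.0 / (1.0 + (L / alpha_LL) ** beta_LL)
```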
Final Approximation Method for All L For our experiments, we approximate the log partition function value for all L in the range of the corpus. We use 100 AIS samples for 50 different test values of L linearly spaced between $0.5\bar{L}$ and $3\bar{L}$ so that we cover both small and large values of L. This gives a total of 5,000 annealed importance samples. We use the quadratic form of the upper bound $U_a(L) = w(L)L^2 a$ (ignoring constants with respect to $\Phi$) and find a constant $a$ that upper bounds all 50 estimates: $a_{\max} = \max_L [w(L)L^2]^{-1} \big(\hat{A}_L(\theta, \Phi) - L\log(\sum_s \exp \theta_s) + \log(L!)\big)$, where $\hat{A}_L$ is an AIS estimate of the log partition function for the 50 test values of L. This gives a smooth approximation for all L that is greater than or equal to all individual estimates (figure of example approximation in Appendix).
Mixtures of LPMRF With an approximation to the likelihood, we can easily formulate an estimation algorithm for a mixture of LPMRFs using a simple alternating, EM-like procedure. First,
given the cluster assignments, the LPMRF parameters can be estimated as explained above. Then,
the best cluster assignments can be computed by assigning each instance to the highest likelihood
cluster. Extending the LPMRF to topic models requires more careful analysis as described next.
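A minimal sketch of this alternation follows; `fit_lpmrf` and `loglik` are hypothetical helpers standing in for the estimation algorithm of [14] and the AIS-based likelihood of Sec. 2.1:

```python
import numpy as np

def fit_lpmrf_mixture(docs, K, n_iters, fit_lpmrf, loglik, seed=0):
    """Alternating, EM-like fitting of a mixture of LPMRFs (hard assignments)."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(K, size=len(docs))        # random initial clusters
    for _ in range(n_iters):
        # M-step: fit one LPMRF per cluster on its assigned documents
        params = [fit_lpmrf([d for d, z in zip(docs, assign) if z == k])
                  for k in range(K)]
        # E-step: reassign each document to its highest-likelihood cluster
        assign = np.array([max(range(K), key=lambda k: loglik(d, params[k]))
                           for d in docs])
    return params, assign
```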
3
Generalizing Topic Models using Fixed-Length Distributions
In standard topic models like LDA, the distribution contains a unique topic variable for every word in
the corpus. Essentially, this means that every word is actually drawn from a categorical distribution.
However, this does not allow us to capture dependencies between words because there is only one
word being drawn at a time. Therefore, we need to reformulate LDA in a way that the words from a
topic are sampled jointly from a Multinomial. From this reformulation, we can then simply replace
the Multinomial with an LPMRF to obtain a topic model with LPMRF as the base distribution. Our
reformulation of LDA groups the topic indicator variables for each word into k vectors corresponding to the k different topics. These k "topic indicator" vectors $z^j$ are then assumed to be drawn from a Multinomial with fixed length $L = \|z^j\|$. This grouping of topic vectors yields an equivalent
distribution because the topic indicators are exchangeable and independent of one another given the
observed word and the document-topic distribution. This leads to the following generalization of
topic models in which an observation $x_i$ is the summation of k hidden variables $z_i^j$:
Generic Topic Model:
$w_i \sim \mathrm{SimplexPrior}(\alpha)$
$L_i \sim \mathrm{LengthDistribution}(\bar{L})$
$m_i \sim \mathrm{PartitionDistribution}(w_i, L_i)$
$z_i^j \sim \mathrm{FixedLengthDist}(\theta^j;\, \|z_i^j\| = m_i^j)$
$x_i = \sum_{j=1}^{k} z_i^j$

Novel LPMRF Topic Model:
$w_i \sim \mathrm{Dirichlet}(\alpha)$
$L_i \sim \mathrm{Poisson}(\lambda = \bar{L})$
$m_i \sim \mathrm{Multinomial}(p = w_i;\, N = L_i)$
$z_i^j \sim \mathrm{LPMRF}(\theta^j, \Phi^j;\, L = m_i^j)$
$x_i = \sum_{j=1}^{k} z_i^j.$
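To make the generative story concrete, the following sketch samples one document from the LPMRF topic model; `sample_lpmrf` is a hypothetical sampler for a single LPMRF of fixed length (e.g. built from the Gibbs kernel of Sec. 2.1), not part of the paper:

```python
import numpy as np

def generate_document(alpha, Lbar, thetas, Phis, sample_lpmrf, rng):
    """Draw one document x_i from the LPMRF topic model described above."""
    k = len(thetas)
    w = rng.dirichlet(alpha)      # document-topic weights
    L = rng.poisson(Lbar)         # document length
    m = rng.multinomial(L, w)     # partition the length across the k topics
    # each topic contributes a count vector of its fixed length m[j]
    return sum(sample_lpmrf(thetas[j], Phis[j], m[j], rng) for j in range(k))
```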
Note that this generalization of topic models does not require the partition distribution and the fixed-length distribution to be the same. In addition, other distributions could be substituted for the Dirichlet prior distribution on document-topic distributions like the logistic normal prior. Finally, this generalization allows for real-valued topic models for other types of data although exploration of this is
outside the scope of this paper.
This generalization is distinctive from the topic model generalization termed "admixtures" in [2].
Admixtures assume that each observation is drawn from an instance-specific base distribution whose
parameters are a convex combination of previous parameters. Thus an admixture of LPMRFs could be formulated by assuming that each document, given the document-topic weights $w_i$, is drawn from an $\mathrm{LPMRF}(\bar{\theta}_i = \sum_j w_{ij} \theta^j,\, \bar{\Phi}_i = \sum_j w_{ij} \Phi^j;\, L = \|x_i\|_1)$. Though this may be an interesting
model in its own right and useful for further exploration in future work, this is not the same as
the above proposed model because the distribution of xi is not an LPMRF but rather a sum of
independent LPMRFs. One case, possibly the only case, where these two generalizations of topic models intersect is when the distribution is a Multinomial (i.e. an LPMRF with $\Phi = 0$). As another distinction from APM, the LPMRF topic model directly generalizes LDA because the LPMRF in the above model reduces to a Multinomial if $\Phi = 0$. Fully exploring the differences between this topic
model generalization and the admixture generalization are quite interesting but outside the scope of
this paper.
With this formulation of LPMRF topic models, we can create a joint optimization problem to solve
for the topic matrix $Z_i = [z_i^1, z_i^2, \ldots, z_i^k]$ for each document and to solve for the shared LPMRF parameters $\theta^{1...k}, \Phi^{1...k}$. The optimization is based on minimizing the negative log posterior:
$$\arg\min_{Z_{1...n},\, \theta^{1...k},\, \Phi^{1...k}} \; -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{k} \log \Pr_{\mathrm{LPMRF}}(z_i^j \mid \theta^j, \Phi^j, m_i^j) - \sum_{i=1}^{n} \log \Pr_{\mathrm{prior}}(m_i^{1...k}) - \sum_{j=1}^{k} \log \Pr_{\mathrm{prior}}(\theta^j, \Phi^j)$$
$$\text{s.t.} \quad Z_i e = x_i, \quad Z_i \in \mathbb{Z}_+^{k \times p},$$
where e is the all-ones vector. Notice that the observations $x_i$ only show up in the constraints.
The prior distribution on $m_i^{1...k}$ can be related to the Dirichlet distribution as in LDA by taking $\Pr_{\mathrm{prior}}(m_i^{1...k}) = \Pr_{\mathrm{Dir}}(m_i^j / \sum_\ell m_i^\ell \mid \alpha)$. Also, notice that the documents are all independent if the LPMRF parameters are known, so this optimization can be trivially parallelized.
Connection to Collapsed Gibbs Sampling This optimization is very similar to the collapsed
Gibbs sampling for LDA [13]. Essentially, the key part to estimating the topic models is estimating
the topic indicators for each word in the corpus. The model parameters can then be estimated directly
from these topic indicators. In the case of LDA, the Multinomial parameters are trivial to estimate
by merely keeping track of counts and thus the parameters can be updated in constant time for every
topic resampled. This also suggests that an interesting area of future work would be to understand the
connections between collapsed Gibbs sampling and this optimization problem. It may be possible
to use this optimization problem to speed up Gibbs sampling convergence or provide a MAP phase
after Gibbs sampling to get non-random estimates.
Estimating Topic Matrices $Z_{1...n}$ For LPMRF topic models, the estimation of the LPMRF parameters given the topic assignments requires solving another complex optimization problem. Thus, we pursue an alternating EM-like scheme as in LPMRF mixtures. First, we estimate LPMRF parameters with the PMRF algorithm from [14], and then we optimize the topic matrix $Z_i \in \mathbb{R}^{p \times k}$ for each document. Because of the constraints on $Z_i$, we pursue a simple dual coordinate descent procedure.
We select two coordinates in row r of $Z_i$ and determine if the optimization problem can be improved by moving $a$ words from topic $\ell$ to topic $q$. Thus, we only need to solve a series of simple univariate problems. Each univariate problem only has $x_{is}$ possible solutions and thus if the max count of words in a document is bounded by a constant, the univariate subproblems can be solved
efficiently. More formally, we are seeking a step size $a$ such that $\hat{Z}_i = Z_i + a e_r e_\ell^T - a e_r e_q^T$ gives a better optimization value than $Z_i$. If we remove constant terms w.r.t. $a$, we arrive at the following univariate optimization problem (suppressing dependence on $i$ because each of the $n$ subproblems is independent):
$$\arg\min_{-z_{r\ell} \leq a \leq z_{rq}} \; -a\big[\theta_{r\ell} - \theta_{rq} + 2 z_\ell^T \Phi_{\ell r} - 2 z_q^T \Phi_{qr}\big] + \big[\log((z_{r\ell}+a)!) + \log((z_{rq}-a)!)\big] + A_{m_\ell + a}(\theta^\ell, \Phi^\ell) + A_{m_q - a}(\theta^q, \Phi^q) - \log(\Pr_{\mathrm{prior}}(\tilde{m}^{1...k})),$$
where $\tilde{m}$ is the new distribution of length based on the step size $a$. The first term is the linear and
quadratic term from the sufficient statistics. The second term is the change in base measure if a word is moved. The third term is the difference in log partition function if the length of the topic
vectors changes. Note that the log partition function can be precomputed so it merely costs a table
lookup. The prior also only requires a simple calculation to update. Thus the main computation
comes in the inner product $z_\ell^T \Phi_{\ell r}$. However, this inner product can be maintained very efficiently
and updated efficiently so that it does not significantly affect the running time.
4
Perplexity Experiments
We evaluated our novel LPMRF model using perplexity on a held-out test set of documents from
a corpus composed of research paper abstracts3 denoted Classic3 and a collection of Wikipedia
documents. The Classic3 dataset has three distinct topic areas: medical (Medline, 1033), library
information sciences (CISI, 1460) and aerospace engineering (CRAN, 1400).
3. http://ir.dcs.gla.ac.uk/resources/test_collections/
Experimental Setup We train all the models using a 90% training split of the documents
and compute the held-out perplexity on the remaining 10% where perplexity is equal to
$\exp(-\mathcal{L}(X^{\mathrm{test}} \mid \theta^{1...k}, \Phi^{1...k})/N_{\mathrm{test}})$, where $\mathcal{L}$ is the log likelihood and $N_{\mathrm{test}}$ is the total number of
words in the test set. We evaluate single, mixture and topic models with both the Multinomial as
the base distribution and LPMRF as the base distribution at k = {1, 3, 10, 20}. The topic indicator
matrices $Z_i$ for the test set are estimated by fitting a MAP-based estimate while holding the topic parameters $\theta^{1...k}, \Phi^{1...k}$ fixed.4 For a single Multinomial or LPMRF, we set the smoothing parameter $\alpha$ to $10^{-4}$.5 We select the LPMRF models using all combinations of 20 log-spaced $\lambda$ between 1 and $10^{-3}$, and 5 linearly spaced weighting function constants c between 1 and 2 for the weighting
function described in Sec. 2.1. In order to compare our algorithms with LDA, we also provide perplexity results using an LDA Gibbs sampler [13] for MATLAB 6 to estimate the model parameters.
For LDA, we used 2000 iterations and optimized the hyperparameters $\alpha$ and $\beta$ using the likelihood
of a tuning set. We do not seek to compare with many topic models because many of them use the
Multinomial as a base distribution which could be replaced by a LPMRF but rather we simply focus
on simple representative models.7
Results The perplexity results for all models can be seen in Fig. 3. Clearly, a single LPMRF significantly outperforms a single Multinomial on the test dataset both for the Classic3 and Wikipedia
datasets. The LPMRF model outperforms the simple Multinomial mixtures and topic models in all
cases. This suggests that the LPMRF model could be an interesting replacement for the Multinomial in more complex models. For a small number of topics, LPMRF topic models also outperform Gibbs sampling LDA but do not perform as well for larger numbers of topics. This is likely due to
the well-developed sampling methods for learning LDA. Exploring the possibility of incorporating
sampling into the fitting of the LPMRF topic model is an excellent area of future work. We believe
LPMRF shows significant promise for replacing the Multinomial in various probabilistic models.
Figure 3: (Left) The LPMRF models quite significantly outperform the Multinomial for both datasets. (Right) The LPMRF model outperforms the simple Multinomial model in all cases. For a small number of topics, LPMRF topic models also outperform Gibbs sampling LDA but do not perform as well for larger numbers of topics.
Qualitative Analysis of LPMRF Parameters In addition to perplexity analysis, we present the
top words, top positive dependencies and the top negative dependencies for the LPMRF topic
model in Table 1. Notice that in LDA, only the top words are available for analysis but an
LPMRF topic model can produce intuitive dependencies. For example, the positive dependency
"language+natural" is composed of two words that often co-occur in the library sciences but each word independently does not occur very often in comparison to "information" and "library". The positive dependency "stress+reaction" suggests that some of the documents in the Medline dataset likely refer to inducing stress on a subject and measuring the reaction. Or in the aerospace topic, the positive dependency "non+linear" suggests that non-linear equations are important in aerospace.
Notice that these concepts could not be discovered with a standard Multinomial-based topic model.
4. For topic models, the likelihood computation is intractable if averaging over all possible $Z_i$. Thus, we use a MAP simplification primarily for computational reasons to compare models without computationally expensive likelihood estimation.
5. For the LPMRF, this merely means adding $10^{-4}$ to the y-values of the nodewise Poisson regressions.
6. http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
7. We could not compare to APM [2, 14] because it is not computationally tractable to calculate the likelihood of a test instance in APM, and thus we cannot compute perplexity.
Table 1: Top Words and Dependencies for LPMRF Topic Model
Topic 1
Topic 2
Topic 3
Top words
Top Pos. Edges
Top Neg. Edges
Top words
Top Pos. Edges
Top Neg. Edges
Top words
Top Pos. Edges
information
states+united
paper-book
patients
term+long
cells-patient
flow
supported+simply
flow-shells
library
point+view
libraries-retrieval
cases
positive+negative
patients-animals
pressure
account+taken
number-numbers
test+tests
library-chemical
normal
cooling+hypothermi
patients-rats
boundary
agreement+good
flow-shell
cells
system+central
hormone-protein
results
moment+pitching
wing-hypersonic
research
system
libraries
book
systems
data
primary+secondary libraries-language
recall+precision
system-published
treatment
Top Neg. Edges
atmosphere+height growth-parathyroid
theory
non+linear
solutions-turbulent
children
function+functions
patients-lens
method
lower+upper
mach-reynolds
information-citation
found
methods+suitable
patients-mice
layer
tunnel+wind
flow-stresses
language+natural chemical-document
results
stress+reaction
patients-dogs
given
time+dependent
theoretical-drag
dissemination+sdi information-citations
direct+access
use
years+five
library-scientists
blood
low+rates
hormone-tumor
number
level+noise
general-buckling
scientific
term+long
library-scientific
disease
case+report
patients-child
presented
purpose+note
made-conducted
5
Timing and Scalability
Finally, we explore the practical performance of our algorithms. In C++, we implemented the three
core algorithms: fitting p Poisson regressions, fitting the n topic matrices for each document, and
sampling 5,000 AIS samples. The timing for each of these components respectively can be seen in
Fig. 4 for the Wikipedia dataset. We set $\lambda = 1$ in the first two experiments, which yields roughly 20,000 non-zeros, and varied $\lambda$ for the third experiment. Each of the components is trivially parallelized using OpenMP (http://openmp.org/). All timing experiments were conducted on
the TACC Maverick system with Intel Xeon E5-2680 v2 Ivy Bridge CPUs (2.80 GHz), 20 CPUs
per node, and 12.8 GB memory per CPU (https://www.tacc.utexas.edu/). The scaling
is generally linear in the parameters except for fitting topic matrices, which is $O(k^2)$. For the AIS sampling, the scaling is linear in the number of non-zeros in $\Phi$ irrespective of p. Overall, we believe
our implementations provide both good scaling and practical performance (code available online).
Figure 4: (Left) The timing for fitting p Poisson regressions shows an empirical scaling of O(np).
(Middle) The timing for fitting topic matrices empirically shows scaling that is $O(npk^2)$. (Right) The timing for AIS sampling shows that the sampling is approximately linearly scaled with the number of non-zeros in $\Phi$ irrespective of p.
6
Conclusion
We motivated the need for a more flexible distribution than the Multinomial such as the Poisson
MRF. However, the PMRF distribution has several complications due to its normalization that hinder it from being a general-purpose model for count data. We overcome these difficulties by restricting the domain to a fixed length as in a Multinomial while retaining the parametric form of
the Poisson MRF. By parameterizing by the length of the document, we can then efficiently compute sampling-based estimates of the log partition function, and hence the likelihood, which were
not tractable to compute under the PMRF model. We extend the LPMRF distribution to both mixtures and topic models by generalizing topic models using fixed-length distributions and develop
parameter estimation methods using dual coordinate descent. We evaluate the perplexity of the proposed LPMRF models on datasets and show that they offer good performance when compared to
Multinomial-based models. Finally, we show that our algorithms are fast and have good scaling.
Potential new areas could be explored such as the relation between the topic matrix optimization
method and Gibbs sampling. It may be possible to develop sampling-based methods for the LPMRF
topic model similar to Gibbs sampling for LDA. In general, we suggest that the LPMRF model could
open up new avenues of research where the Multinomial distribution is currently used.
Acknowledgments
This work was supported by NSF (DGE-1110007, IIS-1149803, IIS-1447574, DMS-1264033, CCF-1320746) and ARO (W911NF-12-1-0390).
References
[1] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu, "Graphical models via generalized linear models," in NIPS, pp. 1367-1375, 2012.
[2] D. I. Inouye, P. Ravikumar, and I. S. Dhillon, "Admixture of Poisson MRFs: A topic model with word dependencies," in International Conference on Machine Learning (ICML), pp. 683-691, 2014.
[3] T. Hofmann, "Probabilistic latent semantic analysis," in Uncertainty in Artificial Intelligence (UAI), pp. 289-296, Morgan Kaufmann Publishers Inc., 1999.
[4] D. Blei, A. Ng, and M. Jordan, "Latent Dirichlet allocation," JMLR, vol. 3, pp. 993-1022, 2003.
[5] D. Blei, "Probabilistic topic models," Communications of the ACM, vol. 55, pp. 77-84, Nov. 2012.
[6] E. Yang, P. Ravikumar, G. Allen, and Z. Liu, "On Poisson graphical models," in NIPS, pp. 1718-1726, 2013.
[7] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei, "Reading tea leaves: How humans interpret topic models," in NIPS, 2009.
[8] D. Mimno, H. M. Wallach, E. Talley, M. Leenders, and A. McCallum, "Optimizing semantic coherence in topic models," in EMNLP, pp. 262-272, 2011.
[9] D. Newman, Y. Noh, E. Talley, S. Karimi, and T. Baldwin, "Evaluating topic models for digital libraries," in ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 215-224, 2010.
[10] N. Aletras and R. Court, "Evaluating topic coherence using distributional semantics," in International Conference on Computational Semantics (IWCS 2013) - Long Papers, pp. 13-22, 2013.
[11] D. Mimno and D. Blei, "Bayesian checking for topic models," in EMNLP, pp. 227-237, 2011.
[12] R. Nallapati, A. Ahmed, W. Cohen, and E. Xing, "Sparse word graphs: A scalable algorithm for capturing word correlations in topic models," in ICDM, pp. 343-348, 2007.
[13] M. Steyvers and T. Griffiths, "Probabilistic topic models," in Latent Semantic Analysis: A Road to Meaning, pp. 424-440, 2007.
[14] D. I. Inouye, P. K. Ravikumar, and I. S. Dhillon, "Capturing semantically meaningful word dependencies with an admixture of Poisson MRFs," in NIPS, pp. 3158-3166, 2014.
[15] P. M. E. Altham, "Two generalizations of the binomial distribution," Journal of the Royal Statistical Society, Series C (Applied Statistics), vol. 27, no. 2, pp. 162-167, 1978.
[16] R. M. Neal, "Annealed importance sampling," Statistics and Computing, vol. 11, no. 2, pp. 125-139, 2001.
5,388 | 5,878 | Human Memory Search as Initial-Visit Emitting
Random Walk
Kwang-Sung Jun†, Xiaojin Zhu‡, Timothy Rogers§
†Wisconsin Institute for Discovery, ‡Department of Computer Sciences, §Department of Psychology
University of Wisconsin-Madison
[email protected], [email protected], [email protected]
Ming Yuan
Department of Statistics
University of Wisconsin-Madison
[email protected]
Zhuoran Yang
Department of Mathematical Sciences
Tsinghua University
[email protected]
Abstract
Imagine a random walk that outputs a state only when visiting it for the first time.
The observed output is therefore a repeat-censored version of the underlying walk,
and consists of a permutation of the states or a prefix of it. We call this model
initial-visit emitting random walk (INVITE). Prior work has shown that the random walks with such a repeat-censoring mechanism explain well human behavior
in memory search tasks, which is of great interest in both the study of human
cognition and various clinical applications. However, parameter estimation in INVITE is challenging, because naive likelihood computation by marginalizing over
infinitely many hidden random walk trajectories is intractable. In this paper, we
propose the first efficient maximum likelihood estimate (MLE) for INVITE by decomposing the censored output into a series of absorbing random walks. We also
prove theoretical properties of the MLE including identifiability and consistency.
We show that INVITE outperforms several existing methods on real-world human
response data from memory search tasks.
1
Human Memory Search as a Random Walk
A key goal for cognitive science has been to understand the mental structures and processes that
underlie human semantic memory search. Semantic fluency has provided the central paradigm for
this work: given a category label as a cue (e.g. animals, vehicles, etc.) participants must generate as
many example words as possible in 60 seconds without repetition. The task is useful because, while
exceedingly easy to administer, it yields rich information about human semantic memory. Participants do not generate responses in random order but produce "bursts" of related items, beginning
with the highly frequent and prototypical, then moving to subclusters of related items. This ordinal
structure sheds light on associative structures in memory: retrieval of a given item promotes retrieval
of a related item, and so on, so that the temporal proximity of items in generated lists reflects the
degree to which the two items are related in memory [14, 5]. The task also places demands on other
important cognitive contributors to memory search: for instance, participants must retain a mental trace of previously-generated items and use it to refrain from repetition, so that the task draws
upon working memory and cognitive control in addition to semantic processes. For these reasons
the task is a central tool in all commonly-used metrics for diagnosing cognitive dysfunction (see
e.g. [6]). Performance is generally sensitive to a variety of neurological disorders [19], but different
syndromes also give rise to different patterns of impairment, making it useful for diagnosis [17]. For
these reasons the task has been widely employed both in basic science and applied health research.
Nevertheless, the representations and processes that support category fluency remain poorly understood. Beyond the general observation that responses tend to be clustered by semantic relatedness,
it is not clear what ordinal structure in produced responses reveals about the structure of human
semantic memory, in either healthy or disordered populations. In the past few years researchers in
cognitive science have begun to fill this gap by considering how search models from other domains
of science might explain patterns of responses observed in fluency tasks [12, 13, 15]. We review
related works in Section 4.
In the current work we build on these advances by considering, not how search might operate on a
pre-specified semantic representation, but rather how the representation itself can be learned from
data (i.e., human-produced semantic fluency lists) given a specified model of the list-generation
process. Specifically, we model search as a random walk on a set of states (e.g. words) where the
transition probability indicates the strength of association in memory, and with the further constraint
that node labels are only generated when the node is first visited. Thus, repeated visits are censored
in the output. We refer to this generative process as the initial-visit emitting (INVITE) random walk.
The repeat-censoring mechanism of INVITE was first employed in Abbott et al. [1]. However, their
work did not provide a tractable method to compute the likelihood nor to estimate the transition
matrix from the fluency responses. The problem of estimating the underlying Markov chain from
the lists so produced is nontrivial because once the first two items in a list have been produced there
may exist infinitely many pathways that lead to production of the next item. For instance, consider
the produced sequence "dog" → "cat" → "goat" where the underlying graph is fully connected. Suppose a random walk visits "dog" then "cat". The walk can then visit "dog" and "cat" arbitrarily many times before visiting "goat"; there exist infinitely many walks that output the given sequence.
How can the transition probabilities of the underlying random walk be learned?
A solution to this problem would represent a significant advance from prior works that estimate
parameters from a separate source such as a standard text corpus [13]. First, one reason for verbal fluency's enduring appeal has been that the task appears to reveal important semantic structure
that may not be discoverable by other means. It is not clear that methods for estimating semantic
structure based on another corpus do a very good job at modelling the structure of human semantic
representations generally [10], or that they would reveal the same structures that govern behavior
specifically in this widely-used fluency task. Second, the representational structures employed can
vary depending upon the fluency category. For instance, the probability of producing "chicken" after "goat" will differ depending on whether the task involves listing "animals", "mammals", or "farm animals". Simply estimating a single structure from the same corpus will not capture these
task-based effects. Third, special populations, including neurological patients and developing children, may generate lists from quite different underlying mental representations, which cannot be
independently estimated from a standard corpus.
In this work, we make two important contributions on the INVITE random walk. First, we propose
a tractable way to compute the INVITE likelihood. Our key insight in computing the likelihood is
to turn INVITE into a series of absorbing random walks. This formulation allows us to leverage the
fundamental matrix [7] and compute the likelihood in polynomial time. Second, we show that the
MLE of INVITE is consistent, which is non-trivial given that the convergence of the log likelihood
function is not uniform. We formally define INVITE and present the two main contributions as
well as an efficient optimization method to estimate the parameters in Section 2. In Section 3, we
apply INVITE to both toy data and real-world fluency data. On toy data our experiments empirically
confirm the consistency result. On actual human responses from verbal fluency INVITE outperforms
off-the-shelf baselines. The results suggest that INVITE may provide a useful tool for investigating
human cognitive functions.
2
The INVITE Random Walk
INVITE is a probabilistic model with the following generative story. Consider a random walk on
a set of n states S with an initial distribution $\pi > 0$ (entry-wise) and an arbitrary transition matrix P where $P_{ij}$ is the probability of jumping from state i to j. A surfer starts from a random initial state drawn from $\pi$. She outputs a state if it is the first time she visits that state. Upon arriving
at an already visited state, however, she does not output the state. The random walk continues
indefinitely. Therefore, the output consists of states in the order of their first-visit; the underlying
entire walk trajectory is hidden. We further assume that the time step of each output is unobserved.
For example, consider the random walk over four states in Figure 1(a). If the underlying random
walk takes the trajectory (1, 2, 1, 3, 1, 2, 1, 4, 1, . . .), the observation is (1, 2, 3, 4).
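The generative story is easy to simulate directly; a minimal sketch (our naming, truncating the infinite walk at a step budget, which suffices once all reachable first visits have occurred):

```python
import numpy as np

def simulate_invite(pi, P, rng, max_steps=10_000):
    """Run one INVITE walk and return the censored list of first visits."""
    n = len(pi)
    s = rng.choice(n, p=pi)
    out, seen = [s], {s}
    for _ in range(max_steps):
        s = rng.choice(n, p=P[s])   # one step of the underlying (uncensored) walk
        if s not in seen:           # emit a state only on its first visit
            seen.add(s)
            out.append(s)
        if len(seen) == n:
            break
    return out
```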
Figure 1: (a-c) Example Markov chains (d) Example nonconvexity of the INVITE log likelihood
We say that the observation produced by INVITE is a censored list since non-initial visits are censored. It is easy to see that a censored list is a permutation of the n states or a prefix thereof (more
on this later). We denote a censored list by $a = (a_1, a_2, \ldots, a_M)$ where $M \leq n$. A censored list is not Markovian since the probability of a transition in a censored list depends on the whole history rather than just the current state. It is worth noting that INVITE is distinct from Broder's algorithm
for generating random spanning trees [4], or the self-avoiding random walk [9], or cascade models
of infection. We discuss the technical difference to related works in Section 4.
We characterize the type of output INVITE is capable of producing, given that the underlying uncensored random walk continues indefinitely. A state s is said to be transient if a random walk
starting from s has nonzero probability of not returning to itself in finite time and recurrent if such
probability is zero. A set of states A is closed if a walk cannot exit A; i.e., if $i \in A$ and $j \notin A$, then a random walk from i cannot reach j. A set of states B is irreducible if there exists a path between every pair of states in B; i.e., if $i, j \in B$, then a random walk from i can reach j. Define $[M] = \{1, 2, \ldots, M\}$. We use $a_{1:M}$ as a shorthand for $a_1, \ldots, a_M$. Theorem 1 states that a finite
state Markov chain can be uniquely decomposed into disjoint sets, and Theorem 2 states what a
censored list should look like. All proofs are in the supplementary material.
Theorem 1. [8] If the state space S is finite, then S can be written as a disjoint union $T \cup W_1 \cup \ldots \cup W_K$, where T is a set of transient states that is possibly empty and each $W_k$, $k \in [K]$, is a nonempty closed irreducible set of recurrent states.
Theorem 2. Consider a Markov chain P with the decomposition $S = T \cup W_1 \cup \ldots \cup W_K$ as in Theorem 1. A censored list $a = (a_{1:M})$ generated by INVITE on P has zero or more transient states, followed by all states in one and only one closed irreducible set. That is, $\exists \ell \in [M]$ s.t. $\{a_{1:\ell-1}\} \subseteq T$ and $\{a_{\ell:M}\} = W_k$ for some $k \in [K]$.
As an example, when the graph is fully connected INVITE is capable of producing all n! permutations of the n states as the censored lists. As another example, in Figure 1 (b) and (c), both chains
have two transient states T = {1, 2} and two recurrent states W1 = {3, 4}. (b) has no path that
visits both 1 and 2, and thus every censored list must be a prefix of a permutation. However, (c) has
a path that visits both 1 and 2, thus can generate (1,2,3,4), a full permutation.
In general, each INVITE run generates a permutation of n states, or a prefix of a permutation. Let
Sym(n) be the symmetric group on [n]. Then, the data space D of censored lists is $D \subseteq \{(a_{1:k}) \mid a \in \mathrm{Sym}(n), k \in [n]\}$.
2.1
Computing the INVITE likelihood
Learning and inference under the INVITE model is challenging due to its likelihood function. A
naive method to compute the probability of a censored list a given $\pi$ and P is to sum over all uncensored random walk trajectories x which produce a: $P(a; \pi, P) = \sum_{x \text{ produces } a} P(x; \pi, P)$.
This naive computation is intractable since the summation can be over an infinite number of trajectories x's that might have produced the censored list a. For example, consider the censored list
a = (1, 2, 3, 4) generated from Figure 1(a). There are infinite uncensored trajectories to produce a
by visiting states 1 and 2 arbitrarily many times before visiting state 3, and later state 4.
The likelihood of $\pi$ and P on a censored list a is
$$P(a; \pi, P) = \begin{cases} \pi_{a_1} \prod_{k=1}^{M-1} P(a_{k+1} \mid a_{1:k}; P) & \text{if } a \text{ cannot be extended} \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
Note we assign zero probability to a censored list that is not completed yet, since the underlying random walk must run forever. We say a censored list a is valid (invalid) under $\pi$ and P if $P(a; \pi, P) > 0$ ($= 0$).
We first review the fundamental matrix in the absorbing random walk. A state that transits to itself
with probability 1 is called an absorbing state. Given a Markov chain P with absorbing states, we can rearrange the states into
$$P' = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix},$$
where Q is the transition between the nonabsorbing
states, R is the transition from the nonabsorbing states to absorbing states, and the rest trivially
represent the absorbing states. Theorem 3 presents the fundamental matrix, the essential tool for the
tractable computation of the INVITE likelihood.
Theorem 3. [7] The fundamental matrix of the Markov chain $P'$ is $N = (I - Q)^{-1}$. $N_{ij}$ is the expected number of times that a chain visits state j before absorption when starting from i. Furthermore, define $B = (I - Q)^{-1} R$. Then, $B_{ik}$ is the probability of a chain starting from i being absorbed by k. In other words, $B_{i\cdot}$ is the absorption distribution of a chain starting from i.
As a tractable way to compute the likelihood, we propose a novel formulation that turns an INVITE
random walk into a series of absorbing random walks. Although INVITE itself is not an absorbing
random walk, each segment that produces the next item in the censored list can be modeled as one.
That is, for each $k = 1, \ldots, M-1$ consider the segment of the uncensored random walk starting from the previous output $a_k$ until the next output $a_{k+1}$. For this segment, we construct an absorbing random walk by keeping $a_{1:k}$ nonabsorbing and turning the rest into the absorbing states. A random walk starting from $a_k$ is eventually absorbed by a state in $S \setminus \{a_{1:k}\}$. The probability of being
Formally, we construct an absorbing random walk $P^{(k)}$:
$$P^{(k)} = \begin{pmatrix} Q^{(k)} & R^{(k)} \\ 0 & I \end{pmatrix}, \quad (2)$$
where the states are ordered as $a_{1:M}$. Corollary 1 summarizes our computation of the INVITE
likelihood.
Corollary 1. The k-th step INVITE likelihood for $k \in [M-1]$ is
$$P(a_{k+1} \mid a_{1:k}, P) = \begin{cases} [(I - Q^{(k)})^{-1} R^{(k)}]_{k1} & \text{if } (I - Q^{(k)})^{-1} \text{ exists} \\ 0 & \text{otherwise} \end{cases} \quad (3)$$
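Putting (3) together segment by segment gives the log likelihood of a censored list with one linear solve per step; a sketch with 0-indexed states (our naming, not the authors' code; it computes the product of per-step probabilities without the validity check in (1)):

```python
import numpy as np

def invite_prefix_log_likelihood(a, pi, P):
    """log pi_{a_1} + sum_k log P(a_{k+1} | a_{1:k}; P) via the fundamental matrix."""
    a = list(a)
    ll = np.log(pi[a[0]])
    for k in range(1, len(a)):
        emitted, rest = a[:k], a[k:]
        Q = P[np.ix_(emitted, emitted)]        # walk among already-emitted states
        R = P[np.ix_(emitted, rest)]           # one-step absorption into unseen states
        B = np.linalg.solve(np.eye(k) - Q, R)  # B = (I - Q)^{-1} R
        ll += np.log(B[k - 1, 0])              # start at a_k (row k-1), absorb at a_{k+1}
    return ll
```

Each step costs one $k \times k$ solve, so the whole list is polynomial in $M$, which is the point of the decomposition.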
Suppose we observe $m$ independent realizations of INVITE: $D_m = \{(a_1^{(1)}, \ldots, a_{M_1}^{(1)}), \ldots, (a_1^{(m)}, \ldots, a_{M_m}^{(m)})\}$, where $M_i$ is the length of the $i$-th censored list. Then, the INVITE log likelihood is $\ell(\pi, P; D_m) = \sum_{i=1}^{m} \log P(a^{(i)}; \pi, P)$.
2.2
Consistency of the MLE
Identifiability is an essential property for a model to be consistent. Theorem 4 shows that allowing self-transitions in P causes INVITE to be unidentifiable. Then, Theorem 5 presents a remedy. The proofs for both theorems are presented in our supplementary material. Let $\mathrm{diag}(q)$ be a diagonal matrix whose i-th diagonal entry is $q_i$.
Theorem 4. Let P be an $n \times n$ transition matrix without any self-transition ($P_{ii} = 0, \forall i$), and $q \in [0, 1)^n$. Define $P' = \mathrm{diag}(q) + (I - \mathrm{diag}(q))P$, a scaled transition matrix with self-transition probabilities q. Then, $P(a; \pi, P) = P(a; \pi, P')$ for every censored list a.
For example, consider a censored list $a = (1, j)$ where $j \neq 1$. Using the fundamental matrix, $P(a_2 \mid a_1; P) = (1 - P_{11})^{-1} P_{1j} = (\sum_{j' \neq 1} P_{1j'})^{-1} P_{1j} = (\sum_{j' \neq 1} cP_{1j'})^{-1} cP_{1j}, \forall c$. This implies that multiplying a constant c to $P_{1j}$ for all $j \neq 1$ and renormalizing the first row $P_{1\cdot}$ to sum to 1 does not change the likelihood.
Theorem 5. Assume the initial distribution ? > 0 elementwise. In the space of transition matrices
P without self-transitions, INVITE is identifiable.
P
Let ?n?1 = {p ? Rn | pi ? 0, ?i, i pi = 1} be the probability simplex. For brevity, we pack
the parameters of INVITE into one vector ? as follows: ? ? ? = {(? > , P1? , . . . , Pn? )> | ?, Pi? ?
?n?1 , Pii = 0, ?i}. Let ? ? = (? ?> , P?1? , . . . , P?n? )> ? ? be the true model. Given a set of m
censored lists Dm generated from ? ? , the average log likelihood function and its pointwise limit are
4
m
X
bm (?) = 1
Q
log P(a(i) ); ?)
m i=1
Q? (?) =
and
X
P(a; ? ? ) log P(a; ?).
(4)
a?D
For brevity, we assume that the true model ? ? is strongly connected; the analysis can be easily
extended to remove it. Under the Assumption A1, Theorem 6 states the consistency result.
Assumption A1. Let ? ? = (? ?> , P?1? , . . . , P?n? )> ? ? be the true model. ? ? has no zero entries.
Furthermore, P? is strongly connected.
bm (?) is consistent.
Theorem 6. Assume A1. The MLE of INVITE ?bm ? max??? Q
We provide a sketch here. The proof relies on Lemma 6 and Lemma 2 that are presented in our
supplementary material. Since ? is compact, the sequence {?bm } has a convergent subsequence
bm (? ? ) ? Q
bm (?bm ),
{?bmj }. Let ? 0 = limj?? ?bmj . Since Q
j
j
j
bm (? ? ) ? lim Q
bm (?bm ) = Q? (? 0 ),
Q? (? ? ) = lim Q
j??
j
j??
j
j
where the last equality is due to Lemma 6. By Lemma 2, ? ? is the unique maximizer of Q? ,
which implies ? 0 = ? ? . Note that the subsequence was chosen arbitrarily. Since every convergent
subsequence converges to ? ? , ?bm converges to ? ? .
2.3
Parameter Estimation via Regularized Maximum Likelihood
We present a regularized MLE (RegMLE) of INVITE. We first extend the censored lists that we
consider. Now we allow the underlying walk to terminate after finite steps because in real-world
applications the observed censored lists are often truncated. That is, the underlying random walk
can be stopped before exhausting every state the walk could visit. For example, in verbal fluency,
participants have limited time to produce a list. Consequently, we use the prefix likelihood
M
?1
Y
P(ak+1 | a1:k ; P).
(5)
L(a; ?, P) = ?a1
k=1
We find the RegMLE by maximizing the prefix log likelihood plus a regularization term on ?, P.
Note that, ? and P can be separately optimized. P
For ?, we place a Dirichlet prior and find the
m
b by ?
maximum a posteriori (MAP) estimator ?
bj ? i=1 1a(i) =j + C? , ?j.
1
Directly computing the RegMLE of P requires solving a constrained optimization problem, because
the transition matrix P must be row stochastic. We re-parametrize P which leads to a more convenient unconstrained optimization
problem. Let ? ? Rn?n . We exponentiate ? and row-normalize it
Pn
?ij
to derive P: Pij = e / j 0 =1 e?ij0 , ?i, j. We fix the diagonal entries of ? to ?? to disallow selftransitions. We place squared `2 norm regularizer on ? to prevent overfitting. The unconstrained
optimization problem is:
P
Pm PMi ?1
(i)
(i)
2
min
,
? i=1 k=1
log P(ak+1 | a1:k ; ?) + 21 C? i6=j ?ij
(6)
?
where C? > 0 is a regularization parameter. We provide the derivative of the prefix log likelihood
w.r.t. ? in our supplementary material. We point out that the objective function of (6) is not convex
in ? in general. Let n = 5 and suppose we observe two censored lists (5, 4, 3, 1, 2) and (3, 4, 5, 1, 2).
We found with random starts two different local optima ? (1) and ? (2) of (6). We plot the prefix log
likelihood of (1 ? ?)? (1) + ?? (2) , where ? ? [0, 1] in Figure 1(d). Nonconvexity of this 1D slice
implies nonconvexity of the prefix log likelihood surface in general.
Efficient Optimization using Averaged Stochastic Gradient Descent Given a censored list a of
length M , computing the derivative of P(ak+1 | a1:k ) w.r.t. ? takes O(k 3 ) time for matrix inversion.
There are n2 entries in ?, so the time complexity per item is O(k 3 + n2 ). This computation needs to
be done for k = 1, ..., (M ? 1) in a list and for m censored lists, which makes the overall time complexity O(mM (M 3 +n2 )). In the worst case, M is as large as n, which makes it O(mn4 ). Even the
state-of-the-art batch optimization method such as LBFGS takes a very long time to find the solution
for a moderate problem size such as n ? 500. For a faster computation of the RegMLE (6), we turn
to averaged stochastic gradient descent (ASGD) [20, 18]. ASGD processes the lists sequentially by
updating the parameters after every list. The per-round objective function for ? on the i-th list is
M
i ?1
X
C? X 2
(i)
(i)
(i)
?ij .
f (a ; ?) ? ?
log P(ak+1 | a1:k ; ?) +
2m
i6=j
k=1
5
Ring, n=25
2
1
102
The number of lists (m)
INVITE
RW
FE
4
3
2
1
0
Grid, n=25
3
b
error(P)
b
error(P)
3
Star, n=25
5
b
error(P)
INVITE
RW
FE
INVITE
RW
FE
102
The number of lists (m)
2
1
0
102
The number of lists (m)
(a)
(b)
(c)
Figure 2: Toy experiment results where the error is measured with the Frobenius norm.
We randomly initialize ?0 . At round t, we update the solution ?t with ?t ? ?t?1 ? ?t ?f (a(i) ; ?)
1
?c
and the average estimate ? t with ? t ? t?1
t ? t?1 + t ?t . Let ?t = ?0 (1 + ?0 at) . We use
a = C? /m and c = 3/4 following [3] and pick ?0 by running the algorithm on a small subsample
of the train set. We run ASGD for a fixed number of epochs and take the final ? t as the solution.
3
Experiments
We compare INVITE against two popular estimators of P: naive random walk (RW) and First-Edge
(FE). RW is the regularized MLE of the naive random
pretending the censored lists
are the
Pwalk,P
(RW )
m
Mi ?1
b
?
underlying uncensored walk trajectory: Prc
i=1
j=1 1(a(i) =r)?(a(i) =c) + CRW .
j
j+1
Though simple and popular, RW is a biased estimator due to the model mismatch. FE was proposed
in [2] for graph structure
FE uses only the first two items in each
Precovery in cascade model.
(F E)
m
b
+CF E . Because the first transition in a censored
?
censored list: Prc
(i)
i=1 1(a(i)
1 =r)?(a2 =c)
list is always the same as the first transition in its underlying trajectory, FE is a consistent estimator
of P (assuming ? has no zero entries). In fact, FE is equivalent to the RegMLE of the length
two prefix likelihood of the INVITE model. However, we expect FE to waste information since it
discards the rest of the censored lists. Furthermore, FE cannot estimate the transition probabilities
from an item that does not appear as the first item in the lists, which is common in real-world data.
3.1
Toy Experiments
Here we compare the three estimators INVITE, RW, and FE on toy datasets, where the observations
are indeed generated by an initial-visit emitting random walk. We construct three undirected, unweighted graphs of n = 25 nodes each: (i) Ring,?
a ring ?
graph, (ii) Star, n ? 1 nodes each connected
to a ?hub? node, and (iii) Grid, a 2-dimensional n ? n lattice.
The initial distribution ? ? is uniform, and the transition matrix P? at each node has an
equal transition probability to its neighbors. For each graph, we generate datasets with m ?
{10, 20, 40, 80, 160, 320, 640} censored lists. Each censored list has length n. We note that, in
the star graph a censored list contains many apparent transitions between leaf nodes, although such
transitions are not allowed in its underlying uncensored random walk. This will mislead RW. This
effect is less severe in the grid graph and the ring graph.
For each estimator, we perform 5-fold cross validation (CV) for finding the best smoothing parameters C? , CRW , CF E on the grid 10?2 , 10?1.5 , . . . , 101 , respectively, with which we compute each
b and the true
estimator. Then, we evaluate the three
estimators using the Frobenius norm between P
qP
?
b =
transition matrix P : error(P)
(Pbij ? P ? )2 . Note the error must approach 0 as m inij
i,j
creases for consistent estimators. We repeat the same experiment 20 times where each time we draw
a new set of censored lists.
b changes as the number of censored lists m increases. The error bars
Figure 2 shows how error(P)
are 95% confidence bounds. We make three observations: (1) INVITE tends towards 0 error. This
is expected given the consistency of INVITE in Theorem 6. (2) RW is biased. In all three plots, RW
tends towards some positive number, unlike INVITE and FE. This is because RW has the wrong
model on the censored lists. (3) INVITE outperforms FE. On the ring and grid graphs INVITE
dominates FE for every training set size. On the star graph FE is better than INVITE with a small
m, but INVITE eventually achieves lower error. This reflects the fact that, although FE is unbiased,
it discards most of the censored lists and therefore has higher variance compared to INVITE.
6
n
m
Length
Min.
Max.
Mean
Median
Animal
274
4710
2
36
18.72
19
Food
452
4622
1
47
20.73
21
Animal
Food
Test set mean neg. loglik.
60.18 (?1.75)
69.16 (?2.00)
72.12 (?2.17)
83.62 (?2.32)
94.54 (?2.75)
100.27 (?2.96)
Table 2: Verbal fluency test set log likelihood.
Table 1: Statistics of the verbal fluency data.
3.2
Model
INVITE
RW
FE
INVITE
RW
FE
Verbal Fluency
We now turn to the real-world fluency data where we compare INVITE with the baseline models.
Since we do not have the ground truth parameter ? and P, we compare test set log likelihood
of various models. Confirming the empirical performance of INVITE sheds light on using it for
practical applications such as the dignosis and classification of the brain-damaged patient.
Data The data used to assess human memory search consists of two verbal fluency datasets from
the Wisconsin Longitudinal Survey (WLS). The WLS is a longitudinal assessment of many sociodemographic and health factors that has been administered to a large cohort of Wisconsin residents
every five years since the 1950s. Verbal fluency for two semantic categories, animals and foods,
was administered in the last two testing rounds (2005 and 2010), yielding a total of 4714 lists for
animals and 4624 lists for foods collected from a total of 5674 participants ranging in age from their
early-60?s to mid-70?s. The raw lists included in the WLS were preprocessed by expanding abbreviations (?lab? ? ?labrador?), removing inflections (?cats? ? ?cat?), correcting spelling errors, and
removing response errors like unintelligible items. Though instructed to not repeat, some human
participants did occasionally produce repeated words. We removed the repetitions from the data,
which consist of 4% of the word token responses. Finally, the data exhibits a Zipfian behavior with
many idiosyncratic, low count words. We removed words appearing in less than 10 lists. In total,
the process resulted in removing 5% of the total number of word token responses. The statistics of
the data after preprocessing is summarized in Table 1.
Procedure We randomly subsample 10% of the lists as the test set, and use the rest as the training
set. We perform 5-fold CV on the training set for each estimator to find the best smoothing parameter
C? , CRW , CF E ? {101 , 10.5 , 100 , 10?.5 , 10?1 , 10?1.5 , 10?2 } respectively, where the validation
measure is the prefix log likelihood for INVITE and the standard random walk likelihood for RW.
For the validation measure of FE we use the INVITE prefix log likelihood since FE is equivalent to
the length two prefix likelihood of INVITE. Then, we train the final estimator on the whole training
set using the fitted regularization parameter.
Result The experiment result is summarized in Table 2. For each estimator, we measure the average per-list negative prefix log likelihood on the test set for INVITE and FE, and the standard random
walk per-list negative log likelihood for RW. The number in the parenthesis is the 95% confidence
interval. Boldfaced numbers mean that the corresponding estimator is the best and the difference
from the others is statistically significant under a two-tailed paired t-test at 95% significance level.
In both animal and food verbal fluency tasks, the result indicates that human-generated fluency lists
are better explained by INVITE than by either RW or FE. Furthermore, RW outperforms FE. We
believe that FE performs poorly despite being consistent because the number of lists is too small
(compared to the number of states) for FE to reach a good estimate.
4
Related Work
Though behavior in semantic fluency tasks has been studied for many years, few computationally explicit models of the task have been advanced. Influential models in the psychological literature, such
as the widely-known ?clustering and switching? model of Troyer et al. [21], have been articulated
only verbally. Efforts to estimate the structure of semantic memory from fluency lists have mainly
focused on decomposing the structure apparent in distance matrices that reflect the mean inter-item
ordinal distances across many fluency lists [5]?but without an account of the processes that generate list structure it is not clear how the results of such studies are best interpreted. More recently,
researchers in cognitive science have begun to focus on explicit model of the processes by which
fluency lists are generated. In these works, the structure of semantic memory is first modelled either
as a graph or as a continuous multidimensional space estimated from word co-occurrence statistics
in large corpora of natural language. Researchers then assess whether structure in fluency data can
be understood as resulting from a particular search process operating over the specified semantic
7
structure. Models explored in this vein include simple random walk over a semantic network, with
repeated nodes omitted from the sequence produced [12], the PageRank algorithm employed for network search by Google [13], and foraging algorithms designed to explain the behavior of animals
searching for food [15]. Each example reports aspects of human behavior that are well-explained by
the respective search process, given accompanying assumptions about the nature of the underlying
semantic structure. However, these works do not learn their model directly from the fluency lists,
which is the key difference from our study.
Broder?s algorithm Generate [4] for generating random spanning tree is similar to INVITE?s generative process. Given an undirected graph, the algorithm runs a random walk and outputs each
transition to an unvisited node. Upon transiting to an already visited node, however, it does not
output the transition. The random walk stops after visiting every node in the graph. In the end, we
observe an ordered list of transitions. For example, in Figure 1(a) if the random walk trajectory is
(2,1,2,1,3,1,4), then the output is (2?1, 1?3, 1?4). Note that if we take the starting node of the
first transition and the arriving nodes of each transition, then the output list reduces to a censored list
generated from INVITE with the same underlying random walk. Despite the similarity, to the best
of our knowledge, the censored list derived from the output of the algorithm Generate has not been
studied, and there has been no parameter estimation task discussed in prior works.
Self-avoiding random walk, or non-self-intersecting random walk, performs random walk while
avoiding already visited node [9]. For example, in Figure 1(a), if a self-avoiding random walk starts
from state 2 then visits 1, then it can only visit states 3 or 4 since 2 is already visited. In not visiting
the same node twice, self-avoiding walk is similar to INVITE. However, a key difference is that
self-avoiding walk cannot produce a transition i ? j if Pij = 0. In contrast, INVITE can appear to
have such ?transitions? in the censored list. Such behavior is a core property that allows INVITE to
switch clusters in modeling human memory search.
INVITE resembles cascade models in many aspects [16, 11]. In a cascade model, the information
or disease spreads out from a seed node to the whole graph by infections that occur from an infected
node to its neighbors. [11] formulates a graph learning problem where an observation is a list, or
so-called trace, that contains infected nodes along with their infection time. Although not discussed
in the present paper, it is trivial for INVITE to produce time stamps for each item in its censored
list, too. However, there is a fundamental difference in how the infection occurs. A cascade model
typically allows multiple infected nodes to infect their neighbors in parallel, so that infection can
happen simultaneously in many parts of the graph. On the other hand, INVITE contains a single
surfer that is responsible for all the infection via a random walk. Therefore, infection in INVITE is
necessarily sequential. This results in INVITE exhibiting clustering behaviors in the censored lists,
which is well-known in human memory search tasks [21].
5
Discussion
There are numerous directions to extend INVITE. First, more theoretical investigation is needed. For
example, although we know the MLE of INVITE is consistent, the convergence rate is unknown.
Second, one can improve the INVITE estimate when data is sparse by assuming certain cluster
structures in the transition matrix P, thereby reducing the degrees of freedom. For instance, it is
known that verbal fluency tends to exhibit ?runs? of semantically related words. One can assume
a stochastic block model P with parameter sharing at the block level, where the blocks represent
semantic clusters of words. One then estimates the block structure and the shared parameters at the
same time. Third, INVITE can be extended to allow repetitions in a list. The basic idea is as follows.
In the k-th segment we previously used an absorbing random walk to compute P(ak+1 | a1:k ), where
a1:k were the nonabsorbing states. For each nonabsorbing state ai , add a ?dongle twin? absorbing
state a0i attached only to ai . Allow a small transition probability from ai to a0i . If the walk is absorbed
by a0i , we output ai in the censored list, which becomes a repeated item in the censored list. Note
that the likelihood computation in this augmented model is still polynomial. Such a model with
?reluctant repetitions? will be an interesting interpolation between ?no repetitions? and ?repetitions
as in a standard random walk.?
Acknowledgments
The authors are thankful to the anonymous reviewers for their comments. This work is supported in
part by NSF grants IIS-0953219 and DGE-1545481, NIH Big Data to Knowledge 1U54AI11792401, NSF Grant DMS-1265202, and NIH Grant 1U54AI117924-01.
8
References
[1] J. T. Abbott, J. L. Austerweil, and T. L. Griffiths, ?Human memory search as a random walk in
a semantic network,? in NIPS, 2012, pp. 3050?3058.
[2] B. D. Abrahao, F. Chierichetti, R. Kleinberg, and A. Panconesi, ?Trace complexity of network
inference.? CoRR, vol. abs/1308.2954, 2013.
[3] L. Bottou, ?Stochastic gradient tricks,? in Neural Networks, Tricks of the Trade, Reloaded, ser.
Lecture Notes in Computer Science (LNCS 7700), G. Montavon, G. B. Orr, and K.-R. M?uller,
Eds. Springer, 2012, pp. 430?445.
[4] A. Z. Broder, ?Generating random spanning trees,? in FOCS. IEEE Computer Society, 1989,
pp. 442?447.
[5] A. S. Chan, N. Butters, J. S. Paulsen, D. P. Salmon, M. R. Swenson, and L. T. Maloney, ?An
assessment of the semantic network in patients with alzheimer?s disease.? Journal of Cognitive
Neuroscience, vol. 5, no. 2, pp. 254?261, 1993.
[6] J. R. Cockrell and M. F. Folstein, ?Mini-mental state examination.? Principles and practice of
geriatric psychiatry, pp. 140?141, 2002.
[7] P. G. Doyle and J. L. Snell, Random Walks and Electric Networks.
matical Association of America, 1984.
Washington, DC: Mathe-
[8] R. Durrett, Essentials of stochastic processes, 2nd ed., ser. Springer texts in statistics.
York: Springer, 2012.
[9] P. Flory, Principles of polymer chemistry.
New
Cornell University Press, 1953.
[10] A. M. Glenberg and S. Mehta, ?Optimal foraging in semantic memory,? Italian Journal of
Linguistics, 2009.
[11] M. Gomez Rodriguez, J. Leskovec, and A. Krause, ?Inferring networks of diffusion and influence,? Max-Planck-Gesellschaft. New York, NY, USA: ACM Press, July 2010, pp. 1019?
1028.
[12] J. Goi, G. Arrondo, J. Sepulcre, I. Martincorena, N. V. de Mendizbal, B. Corominas-Murtra,
B. Bejarano, S. Ardanza-Trevijano, H. Peraita, D. P. Wall, and P. Villoslada, ?The semantic organization of the animal category: evidence from semantic verbal fluency and network theory.?
Cognitive Processing, vol. 12, no. 2, pp. 183?196, 2011.
[13] T. L. Griffiths, M. Steyvers, and A. Firl, ?Google and the mind: Predicting fluency with pagerank,? Psychological Science, vol. 18, no. 12, pp. 1069?1076, 2007.
[14] N. M. Henley, ?A psychological study of the semantics of animal terms.? Journal of Verbal
Learning and Verbal Behavior, vol. 8, no. 2, pp. 176?184, Apr. 1969.
[15] T. T. Hills, P. M. Todd, and M. N. Jones, ?Optimal foraging in semantic memory,? Psychological Review, pp. 431?440, 2012.
[16] D. Kempe, J. Kleinberg, and E. Tardos, ?Maximizing the spread of influence through a social
network,? in Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, ser. KDD ?03. New York, NY, USA: ACM, 2003, pp. 137?146.
[17] F. Pasquier, F. Lebert, L. Grymonprez, and H. Petit, ?Verbal fluency in dementia of frontal lobe
type and dementia of Alzheimer type.? Journal of Neurology, vol. 58, no. 1, pp. 81?84, 1995.
[18] B. T. Polyak and A. B. Juditsky, ?Acceleration of stochastic approximation by averaging,?
SIAM J. Control Optim., vol. 30, no. 4, pp. 838?855, July 1992.
[19] T. T. Rogers, A. Ivanoiu, K. Patterson, and J. R. Hodges, ?Semantic memory in Alzheimer?s
disease and the frontotemporal dementias: a longitudinal study of 236 patients.? Neuropsychology, vol. 20, no. 3, pp. 319?335, 2006.
[20] D. Ruppert, ?Efficient estimations from a slowly convergent robbins-monro process,? Cornell
University Operations Research and Industrial Engineering, Tech. Rep., 1988.
[21] A. Troyer, M. Moscovitch, G. Winocur, M. Alexander, and D. Stuss, ?Clustering and switching
on verbal fluency: The effects of focal fronal- and temporal-lobe lesions,? Neuropsychologia,
vol. 36, no. 6, 1998.
9
| 5878 |@word version:1 inversion:1 polynomial:2 norm:3 nd:1 mehta:1 lobe:2 decomposition:1 p0:3 paulsen:1 pick:1 mammal:1 thereby:1 initial:9 series:3 contains:3 prefix:14 longitudinal:3 outperforms:4 existing:1 past:1 current:2 optim:1 loglik:1 yet:1 must:6 written:1 happen:1 confirming:1 kdd:1 remove:1 plot:2 unintelligible:1 update:1 designed:1 juditsky:1 generative:3 cue:1 leaf:1 item:18 beginning:1 core:1 indefinitely:2 mental:4 node:19 diagnosing:1 five:1 mathematical:1 burst:1 along:1 yuan:1 consists:3 prove:1 shorthand:1 pathway:1 boldfaced:1 focs:1 crw:3 inter:1 indeed:1 expected:2 behavior:9 p1:2 nor:1 brain:1 ming:1 decomposed:1 food:6 actual:1 considering:2 becomes:1 provided:1 estimating:3 underlying:16 what:2 interpreted:1 unobserved:1 finding:1 sung:1 temporal:2 every:9 multidimensional:1 shed:2 exactly:1 returning:1 qm:1 scaled:1 wrong:1 control:2 ser:3 underlie:1 grant:3 appear:2 producing:3 planck:1 before:4 positive:1 understood:2 local:1 todd:1 tsinghua:2 limit:1 tends:3 switching:2 despite:2 engineering:1 ak:12 path:3 interpolation:1 might:3 plus:1 twice:1 studied:2 resembles:1 challenging:2 co:1 limited:1 bi:1 statistically:1 averaged:2 unique:1 practical:1 responsible:1 mn4:1 testing:1 union:1 block:4 acknowledgment:1 swenson:1 practice:1 procedure:1 lncs:1 stuss:1 empirical:1 cascade:5 convenient:1 word:11 pre:1 confidence:2 griffith:2 suggest:1 prc:2 cannot:6 influence:2 equivalent:2 map:1 reviewer:1 petit:1 maximizing:2 starting:7 independently:1 convex:1 survey:1 focused:1 mislead:1 disorder:1 correcting:1 insight:1 estimator:13 fill:1 steyvers:1 population:2 searching:1 tardos:1 imagine:1 suppose:3 damaged:1 us:1 trick:2 updating:1 continues:2 vein:1 observed:3 capture:1 worst:1 connected:5 trade:1 removed:2 neuropsychology:1 disease:3 govern:1 complexity:3 solving:1 segment:4 upon:4 patterson:1 exit:1 easily:1 various:2 cat:5 america:1 regularizer:1 train:2 articulated:1 distinct:1 quite:1 whose:1 widely:3 supplementary:4 apparent:2 say:2 otherwise:2 statistic:5 austerweil:1 farm:1 itself:4 p1j:4 final:2 associative:1 sequence:4 propose:3 outputting:2 frequent:1 realization:1 poorly:2 representational:1 frobenius:2 normalize:1 convergence:2 empty:1 optimum:1 cluster:3 produce:9 generating:3 renormalizing:1 converges:2 ring:5 thankful:1 zipfian:1 depending:2 recurrent:3 derive:1 stat:1 measured:1 fluency:30 ij:3 job:1 c:1 involves:1 implies:3 pii:2 differ:1 exhibiting:1 direction:1 stochastic:7 human:19 disordered:1 transient:4 rogers:2 material:4 assign:1 fix:1 clustered:1 wall:1 investigation:1 anonymous:1 snell:1 polymer:1 absorption:2 summation:1 mm:1 proximity:1 accompanying:1 ground:1 great:1 seed:1 cognition:1 surfer:2 bj:1 vary:1 achieves:1 a2:3 early:1 omitted:1 winocur:1 estimation:4 label:2 visited:5 healthy:1 goat:3 sensitive:1 contributor:1 robbins:1 repetition:7 sociodemographic:1 tool:3 reflects:2 uller:1 always:1 rather:2 pn:2 shelf:1 cornell:2 corollary:2 derived:1 focus:1 abrahao:1 she:3 modelling:1 indicates:2 likelihood:35 mainly:1 tech:1 contrast:1 sigkdd:1 inflection:1 baseline:2 am:2 psychiatry:1 posteriori:1 inference:2 industrial:1 entire:1 typically:1 hidden:2 italian:1 semantics:1 overall:1 classification:1 animal:11 constrained:1 special:1 kempe:1 art:1 initialize:1 smoothing:2 equal:1 once:1 construct:3 washington:1 look:1 jones:1 simplex:1 others:1 report:1 few:2 irreducible:3 randomly:2 simultaneously:1 resulted:1 doyle:1 ab:1 freedom:1 organization:1 interest:1 highly:1 mining:1 severe:1 yielding:1 light:2 rearrange:1 a0i:3 
chain:10 edge:1 capable:2 censored:48 jumping:1 respective:1 tree:3 walk:64 re:1 amm:1 theoretical:2 leskovec:1 nij:1 stopped:1 instance:4 fitted:1 psychological:4 modeling:1 markovian:1 infected:3 formulates:1 lattice:1 jerryzhu:1 entry:6 uniform:2 too:2 characterize:1 foraging:3 fundamental:6 broder:3 international:1 siam:1 retain:1 probabilistic:1 off:1 intersecting:1 w1:5 squared:1 central:2 reflect:1 hodges:1 possibly:1 slowly:1 cognitive:9 derivative:2 toy:5 unvisited:1 account:1 bmj:2 de:1 orr:1 chemistry:1 star:4 summarized:2 wk:4 waste:1 twin:1 depends:1 vehicle:1 later:2 asgd:3 closed:3 lab:1 start:3 participant:6 parallel:1 identifiability:2 monro:1 contribution:2 ass:2 variance:1 listing:1 yield:1 modelled:1 raw:1 produced:8 trajectory:9 worth:1 researcher:3 multiplying:1 history:1 explain:3 reach:3 sharing:1 infection:7 ed:2 maloney:1 against:1 pp:14 thereof:1 dm:4 proof:3 mi:2 stop:1 begun:2 popular:2 reluctant:1 lim:2 knowledge:3 appears:1 higher:1 subclusters:1 response:10 formulation:2 unidentifiable:1 done:1 strongly:2 though:3 furthermore:4 just:1 until:1 am1:1 working:1 sketch:1 hand:1 invite:75 maximizer:1 assessment:2 resident:1 google:2 rodriguez:1 wls:3 reveal:2 believe:1 dge:1 usa:2 effect:3 true:4 remedy:1 unbiased:1 equality:1 regularization:3 symmetric:1 nonzero:1 limj:1 gesellschaft:1 semantic:26 round:3 dysfunction:1 self:10 uniquely:1 hill:1 performs:2 exponentiate:1 ranging:1 wise:1 novel:1 recently:1 salmon:1 nih:2 common:1 absorbing:14 empirically:1 qp:1 attached:1 association:2 extend:2 discussed:2 elementwise:1 refer:1 significant:2 cv:2 ai:4 unconstrained:2 focal:1 consistency:5 trivially:1 pm:2 pmi:1 i6:2 grid:5 language:1 moving:1 similarity:1 surface:1 operating:1 etc:1 add:1 chan:1 moderate:1 discard:2 occasionally:1 certain:1 rep:1 refrain:1 arbitrarily:3 neg:1 geriatric:1 syndrome:1 employed:4 paradigm:1 july:2 ii:2 full:1 multiple:1 reduces:1 technical:1 faster:1 clinical:1 long:1 retrieval:2 cross:1 crease:1 mle:8 visit:16 promotes:1 a1:26 parenthesis:1 qi:1 paired:1 basic:2 patient:4 metric:1 represent:3 chicken:1 addition:1 separately:1 krause:1 interval:1 median:1 source:1 biased:2 operate:1 rest:4 unlike:1 comment:1 tend:1 undirected:2 neuropsychologia:1 bik:1 call:1 alzheimer:3 yang:1 leverage:1 noting:1 iii:1 easy:2 cohort:1 variety:1 switch:1 psychology:1 polyak:1 idea:1 cn:1 administered:2 panconesi:1 whether:2 effort:1 york:3 cause:1 impairment:1 useful:3 generally:2 clear:3 mid:1 category:5 rw:18 generate:8 exist:2 nsf:2 estimated:2 disjoint:2 per:4 neuroscience:1 diagnosis:1 vol:9 group:1 key:4 four:1 nevertheless:1 p11:1 drawn:1 wisc:4 prevent:1 preprocessed:1 abbott:2 diffusion:1 nonconvexity:3 graph:17 year:3 sum:2 run:5 place:3 draw:2 summarizes:1 bound:1 followed:1 gomez:1 convergent:3 fold:2 identifiable:1 nontrivial:1 strength:1 occur:1 constraint:1 generates:1 aspect:2 kleinberg:2 min:2 discoverable:1 department:4 developing:1 influential:1 transiting:1 remain:1 across:1 making:1 explained:2 computationally:1 previously:2 turn:4 discus:1 mechanism:2 nonempty:1 eventually:2 ordinal:3 count:1 needed:1 tractable:4 know:1 end:1 mind:1 parametrize:1 decomposing:2 operation:1 apply:1 observe:3 appearing:1 occurrence:1 batch:1 dirichlet:1 running:1 cf:3 completed:1 clustering:3 include:1 linguistics:1 madison:2 k1:1 build:1 society:1 objective:2 already:4 occurs:1 spelling:1 diagonal:3 visiting:6 said:1 gradient:3 exhibit:2 distance:2 separate:1 uncensored:6 transit:1 mail:1 collected:1 trivial:2 reason:3 spanning:3 assuming:2 
length:6 modeled:1 pointwise:1 mini:1 idiosyncratic:1 fe:25 trace:3 negative:2 rise:1 pretending:1 perform:2 allowing:1 unknown:1 observation:6 markov:6 datasets:3 finite:4 descent:2 mathe:1 truncated:1 extended:3 dc:1 rn:2 ninth:1 arbitrary:1 dog:3 pair:1 specified:3 optimized:1 learned:2 nip:1 beyond:1 bar:1 pattern:2 mismatch:1 pagerank:2 including:2 memory:21 max:3 natural:1 examination:1 regularized:3 predicting:1 turning:1 advanced:1 zhu:1 improve:1 numerous:1 jun:1 naive:5 xiaojin:1 health:2 text:2 prior:4 review:3 discovery:3 epoch:1 literature:1 marginalizing:1 wisconsin:5 fully:2 expect:1 permutation:7 lecture:1 prototypical:1 generation:1 interesting:1 age:1 validation:3 degree:2 pij:3 consistent:7 principle:2 story:1 pi:3 production:1 row:3 censoring:2 token:2 repeat:5 last:2 keeping:1 arriving:2 sym:2 ij0:1 verbal:15 disallow:1 allow:3 understand:1 institute:1 neighbor:3 kwang:1 sparse:1 slice:1 world:5 transition:32 rich:1 exceedingly:1 valid:1 unweighted:1 commonly:1 instructed:1 preprocessing:1 author:1 bm:11 durrett:1 emitting:4 social:1 compact:1 relatedness:1 forever:1 confirm:1 overfitting:1 reveals:1 investigating:1 sequentially:1 corpus:5 neurology:1 subsequence:3 search:16 continuous:1 tailed:1 table:4 pack:1 terminate:1 nature:1 expanding:1 learn:1 bottou:1 necessarily:1 electric:1 domain:1 diag:3 troyer:2 did:2 significance:1 main:1 spread:2 apr:1 whole:3 subsample:2 big:1 n2:3 firl:1 repeated:4 child:1 allowed:1 lesion:1 augmented:1 chierichetti:1 ny:2 inferring:1 explicit:2 stamp:1 third:2 montavon:1 infect:1 reloaded:1 theorem:15 removing:3 folstein:1 hub:1 dementia:3 list:76 appeal:1 explored:1 evidence:1 dominates:1 intractable:2 exists:2 enduring:1 essential:3 consist:1 sequential:1 supported:1 corr:1 demand:1 gap:1 timothy:1 simply:1 infinitely:3 lbfgs:1 absorbed:4 ordered:2 verbally:1 neurological:2 springer:3 zhuoran:1 truth:1 relies:1 acm:3 abbreviation:1 goal:1 consequently:1 invalid:1 towards:2 acceleration:1 shared:1 ruppert:1 change:2 butter:1 included:1 ttrogers:1 specifically:2 infinite:2 reducing:1 semantically:1 exhausting:1 averaging:1 lemma:4 called:2 total:4 formally:2 support:1 moscovitch:1 brevity:2 alexander:1 frontal:1 evaluate:1 avoiding:6 |
5,389 | 5,879 | Spectral Learning of Large Structured HMMs for
Comparative Epigenomics
Chicheng Zhang
UC San Diego
[email protected]
Jimin Song
Rutgers University
[email protected]
Kevin C Chen
Rutgers University
[email protected]
Kamalika Chaudhuri
UC San Diego
[email protected]
Abstract
We develop a latent variable model and an ef?cient spectral algorithm motivated
by the recent emergence of very large data sets of chromatin marks from multiple
human cell types. A natural model for chromatin data in one cell type is a Hidden
Markov Model (HMM); we model the relationship between multiple cell types by
connecting their hidden states by a ?xed tree of known structure.
The main challenge with learning parameters of such models is that iterative methods such as EM are very slow, while naive spectral methods result in time and
space complexity exponential in the number of cell types. We exploit properties
of the tree structure of the hidden states to provide spectral algorithms that are
more computationally ef?cient for current biological datasets. We provide sample
complexity bounds for our algorithm and evaluate it experimentally on biological
data from nine human cell types. Finally, we show that beyond our speci?c model,
some of our algorithmic ideas can be applied to other graphical models.
1
Introduction
In this paper, we develop a latent variable model and ef?cient spectral algorithm motivated by the
recent emergence of very large data sets of chromatin marks from multiple human cell types [7, 9].
Chromatin marks are chemical modi?cations on the genome which are important in many basic
biological processes. After standard preprocessing steps, the data consists of a binary vector (one
bit for each chromatin mark) for each position in the genome and for each cell type.
A natural model for chromatin data in one cell type is a Hidden Markov Model (HMM) [8, 13],
for which ef?cient spectral algorithms are known. On biological data sets, spectral algorithms have
been shown to have several practical advantages over maximum likelihood-based methods, including
speed, prediction accuracy and biological interpretability [24]. Here we extend the approach by
modeling multiple cell types together. We model the relationships between cell types by connecting
their hidden states by a ?xed tree, the standard model in biology for relationships between cell
types. This comparative approach leverages the information shared between the different data sets
in a statistically uni?ed and biologically motivated manner.
Formally, our model is an HMM where the hidden state zt at time t has a structure represented by
a tree graphical model of known structure. For each tree node u we can associate an individual
u
hidden state ztu that depends not only on the previous hidden state zt?1
for the same tree node u but
also on the individual hidden state of its parent node. Additionally, there is an observation variable
xut for each node u, and the observation xut is independent of other state and observation variables
1
conditioned on the hidden state variable ztu . In the bioinformatics literature, [5] studied this model
with the additional constraint that all tree nodes share the same emission parameters. In biological
applications, the main outputs of interest are the learned observation matrices of the HMM and a
segmentation of the genome into regions which can be used for further studies.
A standard approach to unsupervised learning of HMMs is the Expectation-Maximization (EM) algorithm. When applied to HMMs with very large state spaces, EM is very slow. A recent line of
work on spectral learning [18, 1, 23, 6] has produced much more computationally ef?cient algorithms for learning many graphical models under certain mild conditions, including HMMs. However, a naive application of these algorithms to HMMs with large state spaces results in computational complexity exponential in the size of the underlying tree.
Here we exploit properties of the tree structure of the hidden states to provide spectral algorithms
that are more computationally ef?cient for current biological datasets. This is achieved by three
novel key ideas. Our ?rst key idea is to show that we can treat each root-to-leaf path in the tree
separately and learn its parameters using tensor decomposition methods. This step improves the
running time because our trees typically have very low depth. Our second key idea is a novel tensor
symmetrization technique that we call Skeletensor construction where we avoid constructing the full
tensor over the entire root-to-leaf path. Instead we use carefully designed symmetrization matrices
to reveal its range in a Skeletensor which has dimension equal to that of a single tree node. The third
and ?nal key idea is called Product Projections, where we exploit the independence of the emission
matrices along the root-to-leaf path conditioned on the hidden states to avoid constructing the full
tensors and instead construct compressed versions of the tensors of dimension equal to the number
of hidden states, not the number of observations. Beyond our speci?c model, we also show that
Product Projections can be applied to other graphical models and thus we contribute a general tool
for developing ef?cient spectral algorithms.
Finally we implement our algorithm and evaluate it on biological data from nine human cell types
[7]. We compare our results with the results of [5] who used a variational EM approach. We also
compare with spectral algorithms for learning HMMs for each cell type individually to assess the
value of the tree model.
1.1
Related Work
The ?rst ef?cient spectral algorithm for learning HMM parameters was due to [18]. There has been
an explosion of follow-up work on spectral algorithms for learning the parameters and structure of
latent variable models [23, 6, 4]. [18] gives a spectral algorithm for learning an observable operator
representation of an HMM under certain rank conditions. [23] and [3] extend this algorithm to
the case when the transition matrix and the observation matrix respectively are rank-de?cient. [19]
extends [18] to Hidden Semi-Markov Models.
[2] gives a general spectral algorithm for learning parameters of latent variable models that have a
multi-view structure ? there is a hidden node and three or more observable nodes that are not connected to any other nodes and are independent conditioned on the hidden node. Many latent variable
models have this structure, including HMMs, tree graphical models, topic models and mixture models. [1] provides a simpler, more robust algorithm that involves decomposing a third order tensor.
[21, 22, 25] provide algorithms for learning latent trees and of latent junction trees.
Several algorithms have been designed for learning HMM parameters for chromatin modeling, including stochastic variational inference [16] and contrastive learning of two HMMs [26]. However,
none of these methods extend directly to modeling multiple chromatin sequences simultaneously.
2
The Model
Probabilistic Model. The natural probabilistic model for a single epigenomic sequence is a hidden
Markov model (HMM), where time corresponds to position in the sequence. The observation at
time t is the sequence value at position t, and the hidden state at t is the regulatory function in this
position.
2
Figure 1: Left: A tree T with 3 nodes V = {r, u, v}. Right: A HMM whose hidden state has
structure T .
In comparative epigenomics, the goal is to jointly model epigenomic sequences from multiple
species or cell-types. This is done by an HMM with a tree-structured hidden state [5](THS-HMM),1
where each node in the tree representing the hidden state has a corresponding observation node.
Formally, we represent the model by a tuple H = (G, O, T , W); Figure 1 shows a pictorial representation.
G = (V, E) is a directed tree with known structure whose nodes represent individual cell-types
or species. The hidden state zt and the observation xt are represented by vectors {ztu } and {xut }
indexed by nodes u ? V . If (v, u) ? E, then v is the parent of u, denoted by ?(u); if v is a parent
of u, then for all t, ztv is a parent of ztu . In addition, the observations have the following product
?
?
structure: if u? ?= u, then conditioned on ztu , the observation xut is independent of ztu and xut as
?
?
well as any ztu? and xut? for t ?= t? .
O is a set of observation matrices Ou = P (xut |ztu ) for each u ? V and T is a set of transition
?(u)
u
tensors T u = P (zt+1
|ztu , zt+1 ) for each u ? V . Finally, W is the set of initial distributions where
?(u)
W u = P (z1u |z1 ) for each z1u .
Given a tree structure and a number of iid observation sequences corresponding to each node of
the tree, our goal is to determine the parameters of the underlying THS-HMM and then use these
parameters to infer the most likely regulatory function at each position in the sequences.
Below we use the notation D to denote the number of nodes in the tree and d to denote its depth.
For typical epigenomic datasets, D is small to moderate (5-50) while d is very small (2 or 3) as
it is dif?cult to obtain data with large d experimentally. Typically m, the number of possible values assumed by the hidden state at a single node, is about 6-25, while n, the number of possible
observation values assumed by a single node is much larger (e.g. 256 in our dataset).
Tensors. An order-3 tensor M ? Rn1 ? Rn2 ? Rn3 is a 3-dimensional array with n1 n2 n3 entries,
with its (i1 , i2 , i3 )-th entry denoted as Mi1 ,i2 ,i3 .
Given ni ? 1 vectors vi , i = 1, 2, 3, their tensor product, denoted by v1 ? v2 ? v3 is the n1 ? n2 ? n3
tensor whose (i1 , i2 , i3 )-th entry is (v1 )i1 (v2 )i2 (v3 )i3 . A tensor that can be expressed as the tensor
product of a set of vectors is called a rank 1 tensor. A tensor M is symmetric if and only if for any
permutation ? : [3] ? [3], Mi1 ,i2 ,i3 = M?(i1 ),?(i2 ),?(i3 ) .
Let M ? Rn1 ? Rn2 ? Rn3 . If Vi ? Rni ?mi , then M (V
?1 , V2 , V3 ) is a tensor of size m1 ? m2 ? m3 ,
whose (i1 , i2 , i3 )-th entry is: M (V1 , V2 , V3 )i1 ,i2 ,i3 = j1 ,j2 ,j3 Mj1 ,j2 ,j3 (V1 )j1 ,i1 (V2 )j2 ,i2 (V3 )j3 ,i3 .
Since a matrix is a order-2 tensor, we also use the following shorthand to denote matrix multiplii ?ni
, then M (V1 , V2 ) is a matrix of size m1 ? m2 ,
cation. Let M ? Rn1 ? Rn2 . If Vi ? Rm?
whose (i1 , i2 )-th entry is: M (V1 , V2 )i1 ,i2 = j1 ,j2 Mj1 ,j2 (V1 )j1 ,i1 (V2 )j2 ,i2 . This is equivalent to
V1? M V2 .
1
In the bioinformatics literature, this model is also known as a tree HMM.
3
Meta-States and Observations, Co-occurrence Matrices and Tensors. Given observations xut
u
and xut? at a single node u, we use the notation Pt,t
? to denote their expected co-occurence frequenu,u
u,u
u
u
?
cies: Pt,t? = E[xt ? xt? ], and Pt,t? to denote their corresponding empirical version. The tensor
u,u,u
u
u
u
? u,u,u
Pt,t
? ,t?? = E[xt ? xt? ? xt?? ] and its empirical version Pt,t? ,t?? are de?ned similarly.
Occasionally, we will consider the states or observations corresponding to a subset of nodes in G
coalesced into a single meta-state or meta-observation. Given a connected subset S ? V of nodes in
the tree G that includes the root, we use the notation ztS and xSt to denote the meta-state represented
by (ztu , u ? S) and the meta-observation represented by (xut , u ? S) respectively. We de?ne the
|S|
|S|
observation matrix for S as OS = P (xSt |ztS ) ? Rn ?m and the transition matrix for S as
|S|
|S|
S
T S = P (zt+1
|ztS ) ? Rm ?m , respectively.
V1 ,V2
For sets of nodes V1 and V2 , we use the notation Pt,t
to denote the expected co-occurrence
?
V1 ,V2
V1
V2
frequencies of the meta-observations xt and xt? . Its empirical version is denoted by P?t,t
.
?
V1 ,V2 ,V3
V
,V
,V
1
2
3
?
Similarly, we can de?ne the notation Pt,t? ,t?? and its empirical version Pt,t? ,t?? .
Background on Spectral Learning for Latent Variable Models. Recent work by [1] has provided
a novel elegant tensor decomposition method for learning latent variable models. Applied to HMMs,
the main idea is to decompose a transformed version of the third order co-occurrence tensor of the
?rst three observations to recover the parameters; [1] shows that given enough samples and under
fairly mild conditions on the model, this provides an approximation to the globally optimal solution.
The algorithm has three main steps. First, the third order tensor of the co-occurrences is symmetrized
using the second order co-occurrence matrices to yield a symmetric tensor; this symmetric tensor
is then orthogonalized by a whitening transformation. Finally, the resultant symmetric orthogonal
tensor is decomposed via the tensor power method.
In biological applications, instead of multiple independent sequences, we have a single long sequence in the steady state. In this case, following ideas from [23], we use the average over t of the
third order co-occurence tensors of three consecutive observations starting at time t. The second
order co-occurence tensor is also modi?ed similarly.
3
Algorithm
A naive approach for learning parameters of HMMs with tree-structured hidden states is to directly
apply the spectral method of [1]. Since this method ignores the structure of the hidden state, its
running time is very high, ?(nD mD ), even with optimized implementations. This motivates the
design of more computationally ef?cient approaches.
A plausible approach is to observe that at t = 1, the observations are generated by a tree graphical
model; thus in principle one could learn the parameters of the underlying tree using existing algorithms [22, 21, 25]. However, this approach does not directly produce the HMM parameters; it also
does not work for biological sequences because we do not have multiple independent samples at
t = 1; instead we have a single long sequence at the steady state, and the steady state distribution
of observations is not generated by a latent tree. Another plausible approach is to use the spectral
junction tree algorithm of [25]; however, this algorithm does not provide the actual transition and
observation matrix parameters which hold important biological information, and instead provides
an observable operator representation.
Our main contribution is to show that we can achieve a much better running time by exploiting the
structure of the hidden state. Our algorithm is based on three key ideas ? Partitioning, Skeletensor
Construction and Product Projections. We explain these ideas next.
Partitioning. Our ?rst observation is that to learn the parameters at a node u, we can focus only on
the unique path from the root to u. Thus we partition the learning problem on the tree into separate
learning problems on these paths. This maintains correctness as proved in the Appendix.
The Partitioning step reduces the computational complexity since we now need to learn an HMM
with md states and nd observations, instead of the naive method where we learn an HMM with mD
states and nD observations. As d ? D in biological data, this gives us signi?cant savings.
4
Constructing the Skeletensor. A naive way to learn the parameters of the HMM corresponding to
each root-to-node path is to work directly on the O(nd ? nd ? nd ) co-occurrence tensor. Instead,
we show that for each node u on a root-to-node path, a novel symmetrization method can be used
to construct a much smaller skeleton tensor T u of size n ? n ? n, which nevertheless captures the
effect of the entire root-to-node path and projects it into the skeleton tensor, thus revealing the range
of Ou . We call this the skeletensor.
Hu ,u,Hu
be the empirical n|Hu | ? n ? n|Hu |
Let Hu be the path from the root to a node u, and let P?1,2,3
tensor of co-occurrences of the meta-observations Hu , u and Hu at times 1, 2 and 3 respectively.
Based on the data we construct the following symmetrization matrices:
u,Hu ? Hu ,Hu ?
(P1,3
) ,
S1 ? P?2,3
u,Hu ? Hu ,Hu ?
S3 ? P?2,1
(P3,1
)
Hu ,u,Hu
Note that S1 and S3 are n ? n|Hu | matrices. Symmetrizing P?1,2,3
with S1 and S3 gives us an
n ? n ? n skeletensor, which can in turn be decomposed to give an estimate of Ou (see Lemma 3 in
the Appendix).
Even though naively constructing the symmetrization matrices and skeletensor takes O(N n2d+1 +
n3d ) time, this procedure improves computational ef?ciency because tensor construction is a onetime operation, while the power method which takes many iterations is carried out on a much smaller
tensor.
Product Projections. We further reduce the computational complexity by using a novel algorithmic technique that we call Product Projections. The key observation is as follows. Let
Hu = {u0 , u1 , . . . , ud?1 } be any root-to-node path in the tree and consider the HMM that generates
u
the observations (xut 0 , xut 1 , . . . , xt d?1 ) for t = 1, 2, . . .. Even though the individual observations
uj
xt , j = 0, 1, . . . , d ? 1 are highly dependent, the range of OHu , the emission matrix of the HMM
describing the path Hu , is contained in the product of the ranges of Ouj , where Ouj is the emission
matrix at node uj (Lemma 4 in the Appendix). Furthermore, even though the Ouj matrices are dif?cult to ?nd, their ranges can be determined by computing the SVDs of the observation co-occurrence
matrices at uj .
Thus we can implicitly construct and store (an estimate of) the range of OHu . This also gives us
Hu ,Hu
u,Hu
u,Hu
, the column spaces of P?2,1
and P?2,3
, and the range of the
estimates of the range of P?1,3
Hu ,u,Hu
. Therefore during skeletensor construction we can
?rst and third modes of the tensor P?1,2,3
Hu ,u,Hu
, and instead construct their projections onto their
avoid explicitly constructing S1 , S3 and P?1,2,3
ranges. This reduces the time complexity of the skeletensor construction step to O(N m2d+1 +
m3d + dmn2 ) (recall that the range has dimension m.) While the number of hidden states m could
be as high as n, this is a signi?cant gain in practice, as n ? m in biological datasets (e.g. 256
observations vs. 6 hidden states).
Product projections are more ef?cient than random projections [17] on the co-occurrence matrix of
meta-observations: the co-occurrence matrices are nd ? nd matrices, and random projections would
take ?(nd ) time. Also, product projections differ from the suggestion of [15] since we exploit
properties of the model to ef?ciently ?nd good projections.
The Product Projections technique is a general technique with applications beyond our model. Some
examples are provided in Appendix C.3.
3.1
The Full Algorithm
Our ?nal algorithm follows from combining the three key ideas above. Algorithm 1 shows how
to recover the observation matrices Ou at each node u. Once the Ou s are recovered, one can use
standard techniques to recover T and W ; details are described in Algorithm 2 in the Appendix.
3.2
Performance Guarantees
We now provide performance guarantees on our algorithm. Since learning parameters of HMMs
and many other graphical models is NP-Hard, spectral algorithms make simplifying assumptions on
the properties of the model generating the data. Typically these assumptions take the form of some
5
Algorithm 1 Algorithm for Observation Matrix Recovery
N
1: Input: N samples of the three consecutive observations (x1 , x2 , x3 )i=1 generated by an HMM
with tree structured hidden state with known tree structure.
2: for u ? V do
u,u
? u.
3:
Perform SVD on P?1,2
to get the ?rst m left singular vectors U
4: end for
5: for u ? V do
? Hu = ?v?H U
? v.
6:
Let Hu denote the set of nodes on the unique path from root r to u. Let U
u
7:
Construct Projected Skeletensor. First, compute symmetrization matrices:
? u )? P? u,Hu U
? Hu )? P? Hu ,Hu U
? Hu )((U
? Hu )?1
S?1u = ((U
2,3
1,3
? Hu )((U
? Hu )?1
? u )? P? u,Hu U
? Hu )? P? Hu ,Hu U
S?3u = ((U
2,1
3,1
8:
Compute symmetrized second and third co-occurrences for u:
? 2u
M
=
Hu ,u ? Hu ?u ? ? u
Hu ,u ? Hu ?u ? ? u ?
(P?1,2
(U (S1 ) , U ) + P?1,2
(U (S1 ) , U ) )/2
?u
M
3
=
Hu ,u,Hu ? Hu ?u ? ? u ? Hu ?u ?
P?1,2,3
( U ( S1 ) , U , U ( S3 ) )
? u using M
? u and decomOrthogonalization and Tensor Decomposition. Orthogonalize M
3
2
u
u
pose to recover (??1 , . . . , ??m ) as in [1] (See Algorithm 3 in the Appendix for details).
u
? u , where ?
?u = U
? u?
? u = (??u , . . . , ??m
10:
Undo Projection onto Range. Estimate Ou as: O
).
1
11: end for
9:
conditions on the rank of certain parameter matrices. We state below the conditions needed for our
algorithm to successfully learn parameters of a HMM with tree structured hidden states. Observe
that we need two kinds of rank conditions ? node-wise and path-wise ? to ensure that we can recover
the full set of parameters on a root-to-node path.
Assumption 1 (Node-wise Rank Condition). For all u ? V , the matrix Ou has rank m, and the
u,u
has rank m.
joint probability matrix P2,1
Assumption 2 (Path-wise Rank Condition). For any u ? V , let Hu denote the path from root to u.
Hu ,Hu
has rank m|Hu | .
Then, the joint probability matrix P1,2
? u indeed
Assumption 1 is required to ensure that the skeletensor can be decomposed, and that U
u
captures the range of O . Assumption 2 ensures that the symmetrization operation succeeds. This
kind of assumption is very standard in spectral learning [18, 1].
[3] has provided a spectral algorithm for learning HMMs involving fourth and higher order moments
when Assumption 1 does not hold. We believe similar approaches will apply to our problem as well,
and we leave this as an avenue for future work.
If Assumptions 1 and 2 hold, we can show that Algorithm 1 is consistent ? provided enough samples
are available, the model parameters learnt by the algorithms are close to the true model parameters.
A ?nite sample guarantee is provided in the Appendix.
Theorem 1 (Consistency). Suppose we run Algorithm 1 on the ?rst three observation vectors
{xi,1 , xi,2 , xi,3 } from N iid sequences generated by an HMM with tree-structured hidden states.
? u satisfy the following property: with high
Then, for all nodes u ? V , the recovered estimates O
? u such that as
probability over the iid samples, there exists a permutation ?u of the columns of O
u
u ?u
?O ? ? O ? ? ?(N ) where ?(N ) ? 0 as N ? ?.
Observe that the observation matrices (as well as the transition and initial probabilities) are recovered
upto permutations of hidden states in a globally consistent manner.
6
4
Experiments
Data and experimental settings. We ran our algorithm, which we call ?Spectacle-Tree?, on a
chromatin dataset on human chromosome 1 from nine cell types (H1-hESC, GM12878, HepG2,
HMEC, HSMM, HUVEC, K562, NHEK, NHLF) from the ENCODE project [7]. Following [5], we
used a biologically motivated tree structure of a star tree with H1-hESC, the embryonic stem cell
type, as the root. There are data for eight chromatin marks for each cell type which we preprocessed
into binary vectors using a standard Poisson background assumption [11]. The chromosome is
divided into 1,246,253 segments of length 200, following [11]. The observed data consists of a
binary vector of length eight for each segment, so the number of possible observations is the number
of all combinations of presence or absence of the chromatin marks (i.e. n = 28 = 256). We set
the number of hidden states, which we interpret as chromatin states, to m = 6, similar to the choice
of ENCODE. Our goals are to discover chromatin states corresponding to biologically important
functional elements such as promoters and enhancers, and to label each chromosome segment with
the most probable chromatin state.
Observe that instead of the ?rst few observations from N iid sequences, we have a single long
sequence in the steady state per cell type; thus, similar to [23], we calculate the empirical cooccurrence matrices and tensors used in the algorithm based on two and three successive observations respectively (so, more formally, instead of P?1,2 , we use the average over t of P?t,t+1 and
so on). Additionally, we use a projection procedure similar to [4] for rounding negative entries in
the recovered observation matrices. Our experiments reveal that the rank conditions appear to be
satis?ed for our dataset.
Run time and memory usage comparisons. First, we ?attened the HMM with tree-structured
hidden states into an ordinary HMM with an exponentially larger state space. Our Python implementation of the spectral algorithm for HMMs of [18] ran out of memory while performing singular
value decomposition on the co-occurence matrix, even using sparse matrix libraries. This suggests
that naive application of spectral HMM is not practical for biological data.
Next we compared the performance of Spectacle-Tree to a similar model which additionally constrained all transition and observation parameters to be the same on each branch [5]. That work used
several variational approximations to the EM algorithm and reported that SMF (structured mean
?eld) performed the best in their tests. Although we implemented Spectacle-Tree in Matlab and did
not optimize it for run-time ef?ciency, Spectacle-Tree took ?2 hr, whereas the SMF algorithm took
?13 hr for 13 iterations to convergence. This suggests that spectral algorithms may be much faster
than variational EM for our model.
Biological interpretation of the observation matrices. Having examined the ef?ciency of
Spectacle-Tree, we next studied the accuracy of the learned parameters. We focused on the observation matrices which hold most of the interesting biological information. Since the full observation
matrix is very large (28 ? 6 where each row is a combination of chromatin marks), Figure 2 shows
the 8 ? 6 marginal distribution of each chromatin mark conditioned on each hidden state. SpectacleTree identi?ed most of the major types of functional elements typically discovered from chromatin
data: repressive, strong enhancer, weak enhancer, promoter, transcribed region and background state
(states 1-6, respectively, in Figure 2b). In contrast, the SMF algorithm used three out of the six states
to model the large background state (i.e. the state with no chromatin marks). It identi?ed repressive,
transcribed and promoter states (states 2, 4, 5, respectively, in Figure 2a) but did not identify any
enhancer states, which are one of the most interesting classes for further biological studies.
We believe these results are due to that fact that the background state in the data set is large: ?62%
of the segments do not have chromatin marks for any cell type. The background state has lower
biological interest but is modeled well by the maximum likelihood approach. In contast, biologically interesting states such as promoters and enhancers comprise a relatively small fraction of the
genome. We cannot simply remove background segments to make the classes balanced because it
would change the length distribution of the hidden states. Finally, we observed that our model estimated signi?cantly different parameters for each cell type which captures different chromatin states
(Appendix Figure 3). For example, we found enhancer states with strong H3K27ac in all cell types
except for H1-hESC, where both enhancer states (3 and 6) had low signal for this mark. This mark
is known to be biologically important in these cells for distinguishing active from poised enhancers
7
(a) SMF
(b) Spectacle-Tree
Figure 2: The compressed observation matrices for the GM12878 cell type estimated by the SMF
and Spectacle-Tree algorithms. The hidden states are on the X axis.
[10]. This suggests that modeling the additional branch-speci?c parameters can yield interesting
biological insights.
Comparison of the chromosome segments labels. We computed the most probable state for each
chromosome segment using a posterior decoding algorithm. We tested the accuracy of the predictions using an experimentally de?ned data set and compared it to SMF and the spectral algorithm
for HMMs run for individual cell types without the tree (Spectral-HMM). Speci?cally we assessed
promoter prediction accuracy (state 5 for SMF and state 4 for Spectacle-Tree in Figure 2) using
CAGE data from [14] which was available for six of the nine cell types. We used the F1 score
(harmonic mean of precision and recall) for comparison and found that Spectacle-Tree was much
more accurate than SMF for all six cell types (Table 1). This was because the promoter predictions
of SMF were biased towards the background state so those predictions had slightly higher recall but
much lower speci?city.
Finally, we compared our predictions to Spectral-HMM to assess the value of the tree model. H1hESC is the root node so Spectral-HMM and Spectacle-Tree have the same model and obtain the
same accuracy (Table 1). Spectacle-Tree predicts promoters more accurately than Spectral-HMM
for all other cell types except HepG2. However, HepG2 is the most diverged from the root among
the cell types based on the Hamming distance between the chromatin marks. We hypothesize that
for HepG2, the tree is not a good model which slightly reduces the prediction accuracy.
Cell type
H1-hESC
GM12878
HepG2
HUVEC
K562
NHEK
SMF
.0273
.0220
.0274
.0275
.0255
.0287
Spectral-HMM
.1930
.1230
.1022
.1221
.0964
.1528
Spectacle-Tree
.1930
.1703
.0993
.1621
.1966
.1719
Table 1: F1 score for predicting promoters for six cell types. The highest F1 score for each cell type
is emphasized in bold. Ground-truth labels for the other 3 cell-types are currently unavailable.
Our experiments show that Spectacle-Tree has improved computational ef?ciency, biological interpretability and prediction accuracy on an experimentally-de?ned feature compared to variational
EM for a similar tree HMM model and a spectral algorithm for single HMMs. A previous study
showed improvements for spectral learning of single HMMs over the EM algorithm [24]. Thus our
algorithms may be useful to the bioinformatics community in analyzing the large-scale chromatin
data sets currently being produced.
Acknowledgements. KC and CZ thank NSF under IIS 1162581 for research support.
8
References
[1] Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor
decompositions for learning latent variable models. CoRR, abs/1210.7559, 2012.
[2] Animashree Anandkumar, Daniel Hsu, and Sham M. Kakade. A method of moments for
mixture models and hidden Markov models. CoRR, abs/1203.0683, 2012.
[3] B. Balle, X. Carreras, F. Luque, and A. Quattoni. Spectral learning of weighted automata - A
forward-backward perspective. Machine Learning, 96(1-2), 2014.
[4] B. Balle, W. L. Hamilton, and J. Pineau. Methods of moments for learning stochastic languages: Uni?ed presentation and empirical comparison. In ICML, pages 1386?1394, 2014.
[5] Jacob Biesinger, Yuanfeng Wang, and Xiaohui Xie. Discovering and mapping chromatin states
using a tree hidden Markov model. BMC Bioinformatics, 14(Suppl 5):S4, 2013.
[6] A. Chaganty and P. Liang. Estimating latent-variable graphical models using moments and
likelihoods. In ICML, 2014.
[7] ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human
genome. Nature, 489:57?74, 2012.
[8] Jason Ernst and Manolis Kellis. Discovery and characterization of chromatin states for systematic annotation of the human genome. Nature Biotechnology, 28(8):817?825, 2010.
[9] Bernstein et. al. The NIH Roadmap Epigenomics Mapping Consortium. Nature Biotechnology,
28:1045?1048, 2010.
[10] Creyghton et. al. Histone H3K27ac separates active from poised enhancers and predicts developmental state. Proc Natl Acad Sci, 107(50):21931?21936, 2010.
[11] Ernst et. al. Mapping and analysis of chromatin state dynamics in nine human cell types.
Nature, 473:43?49, 2011.
[12] Jun Zhu et al. Characterizing dynamic changes in the human blood transcriptional network.
PLoS Comput Biol, 6:e1000671, 2010.
[13] M. Hoffman et al. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nature Methods, 9(5):473?476, 2012.
[14] S. Djebali et al. Landscape of transcription in human cells. Nature, 2012.
[15] D. Foster, J. Rodu, and L. Ungar. Spectral dimensionality reduction for HMMs. In CoRR,
2012.
[16] N. Foti, J. Xu, D. Laird, and E. Fox. Stochastic variational inference for hidden markov models.
In NIPS, 2014.
[17] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53, 2011.
[18] D. Hsu, S. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models.
In COLT, 2009.
[19] I. Melnyk and A. Banerjee. A spectral algorithm for inference in hidden semi-Markov models.
In AISTATS, 2015.
[20] E. Mossel and S Roch. Learning non-singular phylogenies and hidden Markov models. Ann.
Appl. Probab., 16(2), 05 2006.
[21] A. Parikh, L. Song, and E. P. Xing. A spectral algorithm for latent tree graphical models. In
ICML, pages 1065?1072, 2011.
[22] A. P. Parikh, L. Song, M. Ishteva, G. Teodoru, and E. P. Xing. A spectral algorithm for latent
junction trees. In UAI, 2012.
[23] S. Siddiqi, B. Boots, and G. Gordon. Reduced-rank hidden Markov models. In AISTATS, 2010.
[24] J. Song and K. C. Chen. Spectacle: fast chromatin state annotation using spectral learning.
Genome Biology, 16:33, 2015.
[25] L. Song, M. Ishteva, A. P. Parikh, E. P. Xing, and H. Park. Hierarchical tensor decomposition
of latent tree graphical models. In ICML, 2013.
[26] J. Zou, D. Hsu, D. Parkes, and R. Adams. Contrastive learning using spectral methods. In
NIPS, 2013.
9
| 5879 |@word mild:2 version:6 nd:11 hu:52 eng:2 decomposition:7 contrastive:2 simplifying:1 jacob:1 eld:1 reduction:1 moment:4 initial:2 score:3 daniel:2 existing:1 current:2 recovered:4 k562:2 partition:1 j1:4 cant:2 remove:1 designed:2 hypothesize:1 v:1 leaf:3 discovering:1 histone:1 cult:2 yuanfeng:1 parkes:1 provides:3 characterization:1 node:38 contribute:1 successive:1 simpler:1 zhang:2 along:1 m2d:1 consists:2 shorthand:1 poised:2 manner:2 indeed:1 expected:2 p1:2 multi:1 globally:2 decomposed:3 manolis:1 actual:1 provided:5 project:3 underlying:3 notation:5 discover:1 estimating:1 xed:2 kind:2 finding:1 transformation:1 guarantee:3 rm:2 partitioning:3 m3d:1 appear:1 hamilton:1 treat:1 acad:1 analyzing:1 path:16 studied:2 examined:1 suggests:3 appl:1 co:15 dif:2 hmms:17 ishteva:2 range:12 statistically:1 directed:1 practical:2 unique:2 practice:1 implement:1 x3:1 procedure:2 nite:1 empirical:7 revealing:1 projection:14 consortium:2 get:1 onto:2 close:1 cannot:1 operator:2 optimize:1 equivalent:1 xiaohui:1 starting:1 automaton:1 focused:1 recovery:1 m2:2 insight:1 array:1 diego:2 construction:5 pt:8 suppose:1 distinguishing:1 associate:1 element:3 predicts:2 observed:2 wang:1 capture:3 svds:1 calculate:1 region:2 ensures:1 connected:2 plo:1 highest:1 ran:2 balanced:1 developmental:1 complexity:6 skeleton:2 cooccurrence:1 dynamic:2 segment:7 joint:2 represented:4 fast:1 kevin:1 whose:5 larger:2 plausible:2 compressed:2 emergence:2 jointly:1 laird:1 advantage:1 sequence:14 took:2 product:12 j2:6 combining:1 chaudhuri:1 achieve:1 ernst:2 rst:8 parent:4 exploiting:1 zts:3 convergence:1 produce:1 comparative:3 generating:1 leave:1 telgarsky:1 adam:1 develop:2 pose:1 strong:2 p2:1 implemented:1 involves:1 signi:3 differ:1 epigenomic:3 stochastic:3 human:11 ungar:1 f1:3 decompose:1 biological:20 probable:2 rong:1 hold:4 ground:1 algorithmic:2 diverged:1 matus:1 mapping:3 major:1 consecutive:2 coalesced:1 proc:1 label:3 currently:2 symmetrization:7 individually:1 correctness:1 successfully:1 tool:1 city:1 weighted:1 hoffman:1 genomic:1 i3:9 avoid:3 encode:3 emission:4 focus:1 improvement:1 rank:12 likelihood:3 contrast:1 inference:3 dependent:1 spectacle:14 typically:4 entire:2 integrated:1 hidden:44 kc:1 transformed:1 i1:10 among:1 colt:1 denoted:4 constrained:1 fairly:1 uc:2 marginal:1 equal:2 construct:6 saving:1 once:1 having:1 comprise:1 biology:2 bmc:1 park:1 unsupervised:2 icml:4 foti:1 future:1 np:1 gordon:1 few:1 modi:2 simultaneously:1 individual:5 pictorial:1 gm12878:3 n1:2 ab:2 interest:2 satis:1 highly:1 mixture:2 natl:1 n2d:1 accurate:1 tuple:1 explosion:1 orthogonal:1 fox:1 tree:60 indexed:1 mj1:2 rn3:2 orthogonalized:1 column:2 modeling:4 maximization:1 ordinary:1 entry:6 subset:2 hesc:4 rounding:1 reported:1 learnt:1 siam:1 cantly:1 probabilistic:3 systematic:1 decoding:1 connecting:2 together:1 ohu:2 rn1:3 transcribed:2 de:6 rn2:3 star:1 bold:1 includes:1 satisfy:1 explicitly:1 depends:1 vi:3 performed:1 h1:4 root:16 view:1 jason:1 xing:3 recover:5 maintains:1 annotation:2 chicheng:1 contribution:1 ass:2 ni:2 accuracy:7 who:1 yield:2 identify:1 landscape:1 weak:1 accurately:1 produced:2 iid:4 none:1 anima:1 cation:2 randomness:1 ztu:10 explain:1 quattoni:1 ed:6 frequency:1 resultant:1 mi:1 hamming:1 gain:1 hsu:4 xut:12 dataset:3 proved:1 enhancer:9 animashree:1 recall:3 improves:2 dimensionality:1 segmentation:2 ou:7 carefully:1 higher:2 xie:1 follow:1 improved:1 done:1 though:3 furthermore:1 tropp:1 o:1 cage:1 banerjee:1 mode:1 pineau:1 reveal:2 believe:2 usage:1 effect:1 
true:1 chemical:1 symmetric:4 i2:12 during:1 steady:4 variational:6 wise:4 ef:15 novel:5 harmonic:1 nih:1 parikh:3 functional:2 melnyk:1 exponentially:1 extend:3 interpretation:1 m1:2 martinsson:1 interpret:1 chaganty:1 consistency:1 similarly:3 language:1 had:2 whitening:1 carreras:1 posterior:1 recent:4 showed:1 perspective:1 moderate:1 occasionally:1 certain:3 store:1 meta:8 binary:3 additional:2 speci:5 determine:1 v3:6 ud:1 signal:1 semi:2 branch:2 multiple:8 full:5 u0:1 infer:1 reduces:3 stem:1 sham:2 ii:1 faster:1 long:3 divided:1 prediction:8 j3:3 basic:1 involving:1 expectation:1 rutgers:4 poisson:1 iteration:2 represent:2 cz:1 suppl:1 achieved:1 cell:35 addition:1 background:8 separately:1 whereas:1 xst:2 singular:3 biased:1 undo:1 repressive:2 elegant:1 call:4 ciently:1 anandkumar:2 leverage:1 presence:1 bernstein:1 enough:2 independence:1 reduce:1 idea:10 avenue:1 motivated:4 six:4 song:6 biotechnology:2 nine:5 matlab:1 useful:1 s4:1 encyclopedia:1 siddiqi:1 dna:1 reduced:1 nsf:1 s3:5 estimated:2 per:1 key:7 nevertheless:1 blood:1 preprocessed:1 nal:2 backward:1 v1:13 fraction:1 run:4 fourth:1 extends:1 p3:1 appendix:8 bit:1 bound:1 constraint:1 n3:2 x2:1 rodu:1 generates:1 u1:1 speed:1 performing:1 relatively:1 cies:1 ned:3 structured:8 developing:1 combination:2 smaller:2 slightly:2 em:8 kakade:3 biologically:5 s1:7 computationally:4 turn:1 describing:1 needed:1 symmetrizing:1 ge:1 end:2 junction:3 decomposing:1 operation:2 available:2 apply:2 observe:4 eight:2 v2:14 spectral:40 upto:1 hierarchical:1 occurrence:11 symmetrized:2 running:3 ensure:2 graphical:10 cally:1 exploit:4 uj:3 kellis:1 tensor:39 md:3 transcriptional:1 distance:1 separate:2 thank:1 sci:1 hmm:31 topic:1 roadmap:1 length:3 modeled:1 relationship:3 liang:1 negative:1 implementation:2 design:1 zt:6 motivates:1 perform:1 boot:1 observation:50 markov:11 datasets:4 smf:10 rn:1 ucsd:2 discovered:1 community:1 required:1 z1:1 optimized:1 identi:2 learned:2 nip:2 beyond:3 roch:1 below:2 pattern:1 challenge:1 including:4 interpretability:2 memory:2 power:2 natural:3 predicting:1 hr:2 zhu:1 representing:1 mi1:2 library:1 ne:2 mossel:1 axis:1 carried:1 jun:1 naive:6 occurence:4 review:1 literature:2 acknowledgement:1 python:1 balle:2 discovery:2 probab:1 permutation:3 suggestion:1 interesting:4 rni:1 consistent:2 principle:1 foster:1 share:1 embryonic:1 row:1 characterizing:1 sparse:1 epigenomics:3 depth:2 dimension:3 transition:6 genome:7 ignores:1 forward:1 san:2 preprocessing:1 projected:1 approximate:1 observable:3 uni:2 implicitly:1 transcription:1 active:2 uai:1 assumed:2 xi:3 latent:15 iterative:1 regulatory:2 table:3 additionally:3 learn:7 chromosome:5 robust:1 nature:6 unavailable:1 zou:1 constructing:6 onetime:1 did:2 aistats:2 main:5 promoter:8 n2:2 x1:1 xu:1 cient:11 hsmm:1 slow:2 precision:1 position:5 ciency:4 exponential:2 comput:1 third:7 theorem:1 xt:10 emphasized:1 dl:2 naively:1 exists:1 kamalika:2 corr:3 conditioned:5 chen:2 halko:1 simply:1 likely:1 luque:1 expressed:1 contained:1 corresponds:1 truth:1 goal:3 presentation:1 ann:1 towards:1 shared:1 absence:1 experimentally:4 hard:1 change:2 typical:1 determined:1 except:2 lemma:2 called:2 specie:2 svd:1 orthogonalize:1 m3:1 succeeds:1 experimental:1 formally:3 h3k27ac:2 phylogeny:1 mark:13 support:1 assessed:1 bioinformatics:4 evaluate:2 tested:1 biol:1 chromatin:27 |
5,390 | 588 | An Analog VLSI Chip for Radial Basis Functions
.lohn C. Platt
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134
J aneen Anderson
David B. Kirk'"
Abstract
We have designed, fabricated, and tested an analog VLSI chip
which computes radial basis functions in parallel. We have developed a synapse circuit that approximates a quadratic function.
We aggregate these circuits to form radial basis functions. These
radial basis functions are then averaged together using a follower
aggregator.
1
INTRODUCTION
Radial basis functions (RBFs) are a mel hod for approximating a function from
scattered training points [Powell, H)87]. RBFs have been used to solve recognition
and prediction problems with a fair amonnt of success [Lee, 1991] [Moody, 1989]
[Platt, 1991]. The first layer of an RBF network computes t.he distance of the input
to the network to a set of stored memories. Each basis function is a non-linear
function of a corresponding distance. Tht> basis functions are then added together
with second-layer weights to produce the output of the network. The general form
of an RBF is
Yi
=
L hii<l>i (Iii - Cj Ii) ,
( 1)
i
where Yi is the output of the network, h ij is the second-layer weight, <l>j is the
non-linearity, C; is the jth memory stored in the network and f is the input to
"'Current address: Caltech Computer Graphics Group, Caltech 350-74, Pasadena, CA
92115
765
766
Anderson, Platt, and Kirk
output of network
-++++-++-++++++----1 exp
.-..-+-i-+-i-+-t-+-t-+-
p,xp
~-+-H-H-H-+-+-
-++++-++-++++++----1 exp
.-..-+-t-+-t-+-t-+-t-+-
-+-+-+-+-++++++-t-t---i
I
-++++++-+++++-+-~
quadratic synapse
exp
~-+-H-H-H-+-+-
input to network
\
linear synapse
Figure 1: The architecture of a Gaussian RBF network.
the network. Many researchers use Gaussians to create basis functions that have a
localized effect in input space [Poggio, 1990][Moody, 1989]:
(2)
The architecture of a Gaussian RBF network is shown in figure 1.
RBFs can be implemented either via software or hardware. If high speed is not necessary, then computing all of the basis functions in software is adequate. However,
if an application requires many inputs or high speed, then hardware is required.
RBFs use a lot of operations more complex than simply multiplication and addition.
For example, a Gaussian RBF requires an exponential for every basis function.
Using a partition of unity requires a divide for every basis function. Analog VLSI
is an attractive way of computing these complex operations very quickly: we can
compute all of the basis functions in parallel, using a few transistors per synapse.
This paper discusses an analog VLSI chip that computes radial basis functions.
We discuss how we map the mathematica.l model of an RBF into compact analog
hardware. We then present results from a test. chip that was fabricated. We discuss
possible applications for the hardware architecture and future theoretical work.
2
MAPPING RADIAL BASIS FUNCTIONS INTO
HARDWARE
In order to create an analog VLSI chip. we must map the idea of radial basis
functions into transistors. In order to create a high-density chip, the mathematics
of RBFs must be modified to be computed more naturally by transistor physics.
This section discusses the mapping from Gaussian RBFs into CMOS circuitry.
An Analog VLSI Chip for Radial Basis Functions
r
">-------"--
input to network
output to layer 2
Vref
Figure 2: Circuit diagram for first-layer neuron, showing t.hree Gaussian synapses
and the sense amplifier.
2.1
Computing Quadratic Distance
Ideally, the first-layer synapses in figure 1 would compute a quadratic dist.ance of
the input. to a stored value. Quadratics go to infinity for large values of their input.
hence are hard to build in analog hardware and are not robust against outliers in tlw
input data. Therefore, it is much more desirable to use a saturating non-linearity:
we will use a Gaussian for a first-layer synapse, which approxima.tes a quadratic
near its peak.
We implement the first-layer Gaussian synapse llsing an inverter (see figure 2). The
current running through each inverter from the voltage rail to ground is a Gaussian
function of the inverter's input, with the peak of the Gaussian occurring halfway
between the voltage rail and ground [Mead, 1980][Mead, 1992].
To adjust the center of the Gaussian, we place a capacitor between the input to thf'
synapse and the input of the inverter. The inverter thus has a floating gat.e input..
We adjust the charge on the floating gate by using a combination of tunneling and
non-avalanche hot electron injection [Anderson, 1990] [Anderson, 1992].
All of the Gaussian synapses for one neuron share a voltage rail. The sense amplifier
holds t.hat voltage rail at a particular volt.agf>. l/ref. The output of the sense amplifier is a voltage which is linear in t.he total current being drawn by the Gaussian
synapses. We use a floating gate in the sense amplifier to ensure that the output.
of the sense amplifier is known when the input to the network is at a known state.
Again, we adjust the floating gate via tunneling and injection.
Figure 3 shows the output of the sense amplifier for four different. neurons. The
data was taken from a real chip, described in section 3. The figure shows that the
top of a Gaussian approximates a quadratic reasonably well. Also. thf' width and
heights of the outputs of each first-layer neuron ma.tch very well. because the circuit
is operated above threshold .
767
768
Anderson, Platt, and Kirk
0)("'\1
bl)
?
->
t\:S
......
.....
0
......
;:::l
.....
0...
~
0
0...00
So
~
0)
<'->
1::\0
0)
?
cn O
4
2
3
5
Input Voltage
Figure 3: Measured output of set of four first-layer neurons. All of the synapses of
each neuron are programmed to peak at the same voltage. The x-axis is the input
voltage, and the y-axis is the voltage output of the sense amplifier
0
2.2
1
Computing the Basis Function
To comput.e a Gaussian basis function, t.he distance produced by the first layer
needs to be exponentiated. Since the output of the sense amplifier is a voltage
negatively proportional to the distance, a subthreshold transistor can perform this
exponentiation.
However, subthreshold circuits can be slow. Also, the choice of a Gaussian basis
function is somewhat arbitrary [Poggio, 1990]. Therefore, we choose to adjust the
sense amplifier to produce a voltage that is both above and below threshold. The
basis function that the chip comput.es can be expressed as
S?J
= Lk Gaussian(Ik - Cjk);
(3)
_ { (5j - Of~,
(4)
-
0,
if Sj > 0;
otherwise.
where 0 is a threshold that is set by how much current is required by the sense
amplifier to produce an output equal to thf~ threshold voltage of a N-type transistor.
Equations 3 and 4 have an intuitive explanation. Each first-layer synapse votes on
whether its input matched its stored value. The sum of these votes is Sj. If the
sum Sj is less than a threshold 0, then the basis function cPj is zero. However,
if the number of votes exceeds the threshold, then the basis function turns on.
Therefore, one can adjust the dimensionality of the basis function by adjusting 0:
the dimensionality is N - 0 -11, where N is the numher of inputs to the network.
r
Figure 4 shows how varying 0 changes the basis function, for N = 2. The input to
the network is a two-dimensional space, represented by loca.tion on the page. The
value of the basis function is represented by t.he darkness of the ink. Setting 0 = 1
yields the basis function on the left, which is a fuzzy O-dimnnsional point. Setting
o 0 yields the basis function on the right., which is a union of fuzzy I-dimensional
lines.
=
An Analog VLSI Chip for Radial Basis Functions
Figure 4: Examples of two simulated basis functions with differing dimensionality.
Having an adjustable dimension for basis functions is useful, because it increases the
robustness of the basis function. A Gaussian radial basis function is non-zero only
when all elements of the input vector roughly match the center of the Gaussian. By
using a hardware basis function, we can allow certain inputs not t.o match, while
still turning on the basis function.
2.3
Blending the Basis Functions
To make the blending of the basis functions easier to implement in analog VLSI,
we decided to use an alternative method for basis function combination, called the
partition of unity [Moody, 1989]:
(5)
The partition of unity suggests that the second layer should compute a weighted
average of first-layer outputs, not just a weighted sum. We can t:ompute a weighted
average reasonably well with a follower aggregator used in the linear region [Mead,
1989].
Equations 4 and 5 can both be implemented by using a wide-range amplifier as a
synapse (see figure 5). The bias of the amplifier is the outpu t. of the semje amplifier.
That way, the above-threshold non-linearity of the bias transist.or is applied to the
output of the first layer and implements (~quation 4. The amplifier then attempts
to drag the output of the second-layer neuron towards a stored value hij and implements equation 5. We store the value on a floating gate, using tunneling and
injection .
The follower aggregator does not implement equation 5 perfectly: the amplifiers
saturate, hence introduce a non-linearity. A follower aggregator implements
L
j
tanh(a(h ij
-
yd)</Jj = O.
(6)
769
770
Anderson, Platt, and Kirk
output of network
Figure 5: Circuit diagram for second-layer synapses.
We use a capacitive divider to increase the linear range (decrease a) of the amplifiers.
However. the non-linearity of the amplifiers may be beneficial, because it reduces
the effect of outliers in the stored h i j values.
3
RESULTS
We fabricated the chip in 2 micron CMOS . The current version of the chip has 8
inputs, 159 basis functions and 4 outputs. The chip size is 2.2 millimeters by 9.6
millimeters
The core radial basis function circuitry works end-to-end. By measuring the output
of the sense amplifier, we can measure the response of the first layer, which is shown
in figure 3. Experiments show that the average width of the first-layer Gaussians
is 0.350 volts, with a standard deviation of 23 millivolts. The centers of the firstlayer Gaussians can be programmed more accurately than 15 millivolts, which is
the resolution of the test setup for this chip. Further experiments show that the
second-layer followers are linear to within 4% over 5 volts . Due to one mis-sized
transistor, programming the second layer accurately is difficult .
We have successfully tested the chip at 90 kHz, which is the speed limit of the
current test setup. We have not yet tested the chip at its full speed. The static
power dissipation of the chip is 2 milliwatts.
Figure 6 shows an example of real end-to-end output of the chip . All synapses for
each first-layer neuron are programmed to the same value. The first-Ia.yer neurons
are programmed to a ramp: each neuron is programmed to respond t.o a voltage
32 millivolts higher than the previous neuron. The second layer neurons are programnled t.o values shown by y-values of the dots in figure 6. The output of the
chip is shown as the solid line in figure 6. The output is measured as all of the
inputs to the chip are swept simultaneously. The chip splines and smooths out the
noisy stored second-layer values. Notice that the stored second-layer values are low
for inputs near 2.5 V: the output of a chip is correspondingly lower.
An Analog VLSI Chip for Radial Basis Functions
'.
Q)
~
......
'0
>......
a
......
<5
.'
234
Input Voltage
Figure 6: Example of end-to-end output measured from the chip .
4
FUTURE WORK
The mathematical model of the hardware network suggests interesting theoretical
future work. There are t.wo novel features of this model: the variable dimensionality of the basis functions, and the non-linearity in the partition of unity. More
simulation work needs to be done to see how much of an benefit these features yield .
The chip architecture discussed in this paper is suitable for many mediumdimensional function mapping problems where radial basis functions are appropriate. For example, the chip is useful for high speed control, optical character
recognition, and robotics.
One application of the chip we have studied further is the antialiasing of printed
characters, with proportional spacing, multiple fonts, and a.rbitrary scaling. Each
antialiased pixel has an intensity which is the integral of the character's partial
coverage of that pixel convolved with some filter. The chip could perform a function
int.erpolation for each pixel of each character . The function being interpolated is the
intensity integral, based on the subpixel coverage a.'i convolv('d with the antialiasing
filter kernel. Figure 7 shows the results of the anti-aliasing of the character using a
simulation of the chip.
5
CONCLUSIONS
We have described a multi-layer analog \'LSI neural network chip that computes
radial basis functions in parallel. We use inverters as first-layer synapses, to compute
Gaussians that approximate quadratics. \Ve use follower aggregators a.'55econd-Iayer
neurons, to compute the basis functions and to blend the ba.'5is functions using a
partition of unity. Preliminary experiments with a test chip shows that the core
radial basis function circuitry works. In j he future, we will explore the new basis
function model suggested by the hardware and further investigate applications of
the chip.
771
772
Anderson, Platt, and Kirk
Figure 7: Three images of the letter "a". The image on the left is the high resolution
anti-aliased version of the character. The middle image is a smaller version of the
left image. The right image is the chip simulation, trained to be close to the middle
image, by using the left image as the training data.
Acknowledgements
We would like to thank Federico Faggin and Carver Mead for their good advice.
Thanks to John Lazzaro who gave us a new version of Until, a graphics editor.
We would also like to thank Steven Rosenberg and Bo Curry of Hewlett-Packard
Laboratories for their suggestions and support.
References
Anderson, J., Mead, C., 1990, MOS Device for Long-Term Learning, U. S. Patent
4,935,702.
Anderson, J., Mead, C., Allen, T., Wall, M., 1992, Adaptable MOS Current Mirror,
U. S. Patent 5,160,899.
Lee, Y., HJ91, Handwritten Digit Recognition Using k Nearest-Neighbor, Radial
Basis Function, and Backpropagation Neural Networks, Neural Computation, vol. 3,
no. 3, 440-449.
Mead, C., Conway, L., 1980, Introduction to VLSI Systems, Addison-Wesley, Reading, MA.
Mead, C., 1989, Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA.
Mead, C., Allen, T., Faggin, F., Anderson, J., 1992, Synaptic Element and Array,
U. S. Patent 5,083,044.
Moody, J., Darken, C., 1989, Fast Learning in Networks of Locally-Tuned Processing Units, Neural Computation, vol. 1, no. 2,281-294.
Platt, J., 1991, Learning by Combining Memorization and Gradient Descent, In:
Advances in Neural Information Processing 3, Lippman, R., Moody, J .. Touretzky,
D., eds., Morgan-Kaufmann, San Mateo, CA, 714-720.
Poggio, T ., Girosi, F., 1990, Regularization Algorithms for Learning Tha.t Are
Equivalent to Multilayer Networks, Scienre, vol. 247, 978-982.
Powell, M. J. D., 1987, Radial Basis Fundions for Multivariable Interpolation: A
Review, In: Algorithms for Approximation, J. C. Mason, M. G. Cox, eds., Clarendon Press, Oxford.
| 588 |@word cox:1 version:4 agf:1 middle:2 simulation:3 solid:1 tuned:1 current:7 yet:1 follower:6 must:2 john:1 partition:5 girosi:1 designed:1 device:1 core:2 height:1 mathematical:1 ik:1 introduce:1 roughly:1 dist:1 aliasing:1 multi:1 linearity:6 matched:1 circuit:6 aliased:1 vref:1 fuzzy:2 developed:1 differing:1 fabricated:3 every:2 charge:1 platt:7 control:1 unit:1 limit:1 oxford:1 mead:9 interpolation:1 yd:1 drag:1 studied:1 mateo:1 suggests:2 programmed:5 range:2 averaged:1 decided:1 union:1 implement:6 ance:1 backpropagation:1 lippman:1 digit:1 powell:2 printed:1 radial:18 close:1 memorization:1 darkness:1 equivalent:1 map:2 center:3 go:1 resolution:2 array:1 tht:1 programming:1 element:2 recognition:3 steven:1 region:1 decrease:1 ideally:1 milliwatt:1 trained:1 negatively:1 basis:48 chip:33 represented:2 fast:1 aggregate:1 solve:1 ramp:1 otherwise:1 federico:1 tlw:1 noisy:1 transistor:6 combining:1 intuitive:1 produce:3 cmos:2 measured:3 nearest:1 ij:2 approxima:1 implemented:2 coverage:2 filter:2 wall:1 preliminary:1 blending:2 hold:1 ground:2 exp:3 mapping:3 mo:2 electron:1 circuitry:3 inverter:6 tanh:1 create:3 successfully:1 weighted:3 gaussian:18 modified:1 varying:1 voltage:14 rosenberg:1 subpixel:1 sense:11 pasadena:1 vlsi:11 pixel:3 loca:1 equal:1 having:1 future:4 spline:1 few:1 simultaneously:1 ve:1 floating:5 attempt:1 amplifier:18 investigate:1 adjust:5 operated:1 hewlett:1 integral:2 partial:1 necessary:1 poggio:3 carver:1 divide:1 theoretical:2 measuring:1 deviation:1 graphic:2 stored:8 faggin:2 thanks:1 density:1 peak:3 lee:2 physic:1 conway:1 together:2 quickly:1 moody:5 again:1 choose:1 int:1 inc:1 tion:1 lot:1 parallel:3 avalanche:1 rbfs:6 kaufmann:1 who:1 subthreshold:2 yield:3 millimeter:2 handwritten:1 accurately:2 produced:1 researcher:1 antialiasing:2 synapsis:8 touretzky:1 aggregator:5 synaptic:1 ed:2 against:1 mathematica:1 naturally:1 mi:1 static:1 adjusting:1 dimensionality:4 cj:1 adaptable:1 wesley:2 clarendon:1 higher:1 response:1 synapse:9 done:1 anderson:10 just:1 until:1 curry:1 effect:2 divider:1 hence:2 firstlayer:1 regularization:1 volt:3 laboratory:1 attractive:1 width:2 mel:1 multivariable:1 dissipation:1 allen:2 image:7 novel:1 patent:3 khz:1 analog:13 he:5 approximates:2 discussed:1 mathematics:1 dot:1 synaptics:1 store:1 certain:1 success:1 yi:2 swept:1 caltech:2 morgan:1 somewhat:1 llsing:1 ii:1 full:1 desirable:1 multiple:1 reduces:1 exceeds:1 smooth:1 match:2 long:1 prediction:1 multilayer:1 kernel:1 robotics:1 hj91:1 addition:1 ompute:1 spacing:1 diagram:2 capacitor:1 transist:1 near:2 iii:1 gave:1 architecture:4 perfectly:1 idea:1 cn:1 whether:1 wo:1 jj:1 lazzaro:1 adequate:1 useful:2 locally:1 hardware:9 lsi:1 notice:1 per:1 vol:3 group:1 four:2 threshold:7 drawn:1 millivolt:3 halfway:1 sum:3 jose:1 exponentiation:1 micron:1 respond:1 letter:1 place:1 tunneling:3 scaling:1 layer:27 quadratic:8 infinity:1 software:2 interpolated:1 speed:5 injection:3 optical:1 orchard:1 combination:2 beneficial:1 smaller:1 character:6 unity:5 outlier:2 taken:1 equation:4 discus:4 turn:1 addison:2 end:6 gaussians:4 operation:2 appropriate:1 hii:1 alternative:1 robustness:1 gate:4 hat:1 convolved:1 hree:1 capacitive:1 top:1 running:1 ensure:1 build:1 approximating:1 bl:1 ink:1 added:1 font:1 blend:1 gradient:1 distance:5 thank:2 simulated:1 numher:1 setup:2 difficult:1 hij:1 ba:1 adjustable:1 perform:2 neuron:13 darken:1 descent:1 anti:2 arbitrary:1 intensity:2 david:1 required:2 address:1 suggested:1 below:1 reading:2 packard:1 memory:2 explanation:1 
hot:1 power:1 ia:1 suitable:1 turning:1 axis:2 lk:1 thf:3 review:1 acknowledgement:1 multiplication:1 interesting:1 suggestion:1 proportional:2 localized:1 xp:1 editor:1 share:1 jth:1 bias:2 exponentiated:1 allow:1 wide:1 neighbor:1 correspondingly:1 benefit:1 dimension:1 computes:4 san:2 sj:3 approximate:1 compact:1 parkway:1 iayer:1 reasonably:2 robust:1 ca:3 complex:2 fair:1 ref:1 advice:1 scattered:1 slow:1 exponential:1 comput:2 rail:4 kirk:5 saturate:1 showing:1 outpu:1 mason:1 cjk:1 cpj:1 gat:1 mirror:1 te:1 yer:1 hod:1 occurring:1 easier:1 simply:1 explore:1 tch:1 expressed:1 saturating:1 bo:1 tha:1 ma:3 sized:1 rbf:6 towards:1 hard:1 change:1 total:1 called:1 e:1 vote:3 support:1 tested:3 |
5,391 | 5,880 | A Structural Smoothing Framework For Robust
Graph-Comparison
S.V.N. Vishwanathan
Department of Computer Science
University of California
Santa Cruz, CA, 95064, USA
[email protected]
Pinar Yanardag
Department of Computer Science
Purdue University
West Lafayette, IN, 47906, USA
[email protected]
Abstract
In this paper, we propose a general smoothing framework for graph kernels by
taking structural similarity into account, and apply it to derive smoothed variants
of popular graph kernels. Our framework is inspired by state-of-the-art smoothing
techniques used in natural language processing (NLP). However, unlike NLP applications that primarily deal with strings, we show how one can apply smoothing
to a richer class of inter-dependent sub-structures that naturally arise in graphs.
Moreover, we discuss extensions of the Pitman-Yor process that can be adapted
to smooth structured objects, thereby leading to novel graph kernels. Our kernels
are able to tackle the diagonal dominance problem while respecting the structural
similarity between features. Experimental evaluation shows that not only our kernels achieve statistically significant improvements over the unsmoothed variants,
but also outperform several other graph kernels in the literature. Our kernels are
competitive in terms of runtime, and offer a viable option for practitioners.
1
Introduction
In many applications we are interested in computing similarities between structured objects such
as graphs. For instance, one might aim to classify chemical compounds by predicting whether a
compound is active in an anti-cancer screen or not. A kernel function which corresponds to a dot
product in a reproducing kernel Hilbert space offers a flexible way to solve this problem [19]. Rconvolution [10] is a framework for computing kernels between discrete objects where the key idea
is to recursively decompose structured objects into sub-structures. Let h?, ?iH denote a dot product in
a reproducing kernel Hilbert space, G represent a graph and ? (G) represent a vector of sub-structure
frequencies. The kernel between two graphs G and G 0 is computed by K (G, G 0 ) = h? (G) , ? (G 0 )iH .
Many existing graph kernels can be viewed as instances of R-convolution kernels. For instance, the
graphlet kernel [22] decomposes a graph into graphlets, Weisfeiler-Lehman Subtree kernel (referred
as Weisfeiler-Lehman for the rest of the paper) [23] decomposes a graph into subtrees, and the
shortest-path kernel [1] decomposes a graph into shortest-paths. However, R-convolution based
graph kernels suffer from a few drawbacks. First, the size of the feature space often grows exponentially. As size of the space grows, the probability that two graphs will contain similar sub-structures
becomes very small. Therefore, a graph becomes similar to itself but not to any other graph in the
training data. This is well known as the diagonal dominance problem [11] where the resulting kernel
matrix is close to the identity matrix. Second, lower order sub-structures tend to be more numerous
while a vast majority of the sub-structures occurs rarely. In other words, a few sub-structures dominate the distribution. This exhibits a strong power-law behavior and results in underestimation of
the true distribution. Third, the sub-structures used to define a graph kernel are often related to each
other. However, an R-convolution kernel only respects exact matchings. This problem is particu1
Figure 1: Graphlets of size k ? 5.
larly important when noise is present in the training data and considering partial similarity between
sub-structures might alleviate the noise problem.
Our solution: In this paper, we propose to tackle the above problems by using a general framework
to smooth graph kernels that are defined using a frequency vector of decomposed structures. We
use structure information by encoding relationships between lower and higher order sub-structures
in order to derive our method. The remainder of this paper is structured as follows. In Section 2,
we review three families of graph kernels for which our smoothing is applicable. In Section 3, we
review smoothing methods for multinomial distributions. In Section 4, we introduce a framework
for smoothing structured objects. In Section 5, we propose a Bayesian variant of our model that
is extended from the Hierarchical Pitman-Yor process [25]. In Section 6, we discuss related work.
In Section 7, we compare smoothed graph kernels to their unsmoothed variants as well as to other
state-of-the-art graph kernels. We report results on classification accuracy on several benchmark
datasets as well as their noisy-variants. Section 8 concludes the paper.
2
Graph kernels
Existing graphs kernels based on R-convolution can be categorized into three major families: graph
kernels based on limited-sized subgraphs [e.g. 22], graph kernels based on subtree patterns [e.g.
18, 21], and graph kernels based on walks [e.g. 27] or paths [e.g. 1].
Graph kernels based on subgraphs: A graphlet G [17] is non-isomorphic sub-graph Dof size-k,
E
0
(see Figure 1). Given two graphs G and G 0 , the kernel [22] is defined as KGK (G, G 0 ) = f G , f G
0
where f G and f G are vectors of normalized counts of graphlets, that is, the i-th component of f G
0
(resp. f G ) denotes the frequency of graphlet Gi occurring as a sub-graph of G (resp. G 0 ).
Graph kernels based on subtree patterns: Weisfeiler-Lehman [21] is a popular instance of graph
kernels that decompose a graph into its subtree patterns. It simply iterates over each vertex in a
graph, and compresses the label of the vertex and labels of its neighbors into a multiset label. The
vertex is then relabeled with the compressed label to be used for the next iteration. Algorithm concludes after running for h iterations, and the compressed labels are used for constructing a frequency
D
E
0
vector for each graph. Formally, given G and G 0 , this kernel is defined as KW L (G, G 0 ) = lG , lG
where lG contains the frequency of each compressed label occurring in h iterations.
Graph kernels based on walks or paths: Shortest-path graph kernel [1] is a popular instance of
this family. This kernel simply compares the sorted endpoints and the length of shortest-paths that
are common between two graphs. Formally, let PG represent the set of all shortest-paths in graph
G, and pi ? PG denote a triplet (ls , le , nk ) where nk is the length of the path and ls and le are the
labels of the sourceDand sinkEvertices, respectively. The kernel between graphs G and G 0 is defined
0
as KSP (G, G 0 ) = pG , pG where i-th component of pG contains the frequency of i-th triplet
0
occurring in graph G (resp. pG ).
3
Smoothing multinomial distributions
In this section, we briefly review smoothing techniques for multinomial distributions. Let
e1 , e2 , . . . , em represent a sequence of n discrete events drawn from a ground set A = {1, 2, . . . , V }.
2
Figure 2: Topologically sorted graphlet DAG for k ? 5 where nodes are colored based on degree.
Suppose, we would like to estimate the probability P (ei = a) for some a ? A. It is well known
ca
that the Maximum Likelihood Estimate (MLE) can be computed as PM LE (ei = a) =
Pm where ca
denotes the number of times the event a appears in the observed sequence and m = j cj denotes
the total number of observed events. However, MLE of the multinomial distribution is spiky since it
assigns zero probability to the events that did not occur in the observed sequence. In other words, an
event with low probability is often estimated to have zero probability mass. The general idea behind
smoothing is to adjust the MLE of the probabilities by pushing the high probabilities downwards
and pushing low or zero probabilities upwards in order to produce a more accurate distribution on
the events [30]. Interpolated smoothing methods offer a flexible solution between the higher-order
maximum likelihood model and lower-order smoothed model (or so-called, fallback model). The
way the fallback model is designed is the key to define a new smoothing method1 . Absolute discounting [15] and Interpolated Kneser-Ney [12] are two popular instances of interpolated smoothing
methods:
max {ca ? d, 0} md ? d 0
PA (ei = a) =
+
PA (ei = a) .
(1)
m
m
Here, d > 0 is a discount factor, md := |{a : ca > d}| is the number of events whose counts
are larger than d, while PA0 is the fallback distribution. Absolute discounting defines the fallback
distribution as the smoothed version of the lower-order MLE while Kneser-Ney uses an unusual
estimate of the fallback distribution by using number of different contexts that the event follows in
the lower order model.
4
Smoothing structured objects
In this section, we first propose a new interpolated smoothing framework that is applicable to a
richer set of objects such as graphs by using a Directed Acyclic Graph (DAG). We then discuss how
to design such DAGs for various graph kernels.
4.1
Structural smoothing
The key to designing a new smoothing method is to define a fallback distribution, which not only
incorporates domain knowledge but is also easy to estimate recursively. Suppose, we have access
to a weighted DAG where every node at the k-th level represents an event from the ground set A.
Moreover let wij denote the weight of the edge connecting event i to event j, and Pa (resp. Ca )
denote the parents (resp. children) of event a ? A in the DAG. We define our structural smoothing
for events at level k as follows:
max {ca ? d, 0} md ? d X k?1
wja
k
PSS
(ei = a) =
+
PSS (j) P
.
(2)
0
m
m
0
a ?Cj wja
j?Pa
The way to understand the above equation is as follows: we subtract a fixed discounting factor d
from every observed event which accumulates to a total mass of md ? d. Each event a receives
some portion of this accumulated probability mass from its parents. The proportion of the mass that
a parent j at level k ? 1 transmits to a given child a depends on the weight wja between the parent
and the child (normalized by the sum of the weights of the edges from j to all its children), and the
k?1
probability mass PSS
(j) that is assigned to node j. In other words, the portion a child event a is
able to obtain from the total discounted mass depends on how authoritative its parents are, and how
strong the relationship between the child and its parents.
1
See Table 2 in [3] for summarization of various smoothing algorithms using this general framework.
3
4.2
Designing the DAG
In order to construct a DAG for smoothing structured objects, we first construct a vocabulary V
that denotes the set of all unique sub-structures that are going to be smoothed. Each item in the
vocabulary V corresponds to a node in the DAG. V can be generated statically or dynamically
based on the type of sub-structure the graph kernel exploits. For instance, it requires a one-time
O(2k ) effort to generate the vocabulary of size ? k graphlets for graphlet kernel. However, one
needs to build the vocabulary dynamically in Weisfeiler-Lehman and Shortest-Path kernels since
the sub-structures depend on the node labels obtained from the datasets. After constructing the
vocabulary V , the parent/child relationship between sub-structures needs to be obtained. Given a
sub-structure s of size k, we apply a transformation to find all possible sub-structures of size k ? 1
that s can be reduced into. Each sub-structure s0 that is obtained by this transformation is assigned
as a parent of s. After obtaining the parent/child relationship between sub-structures, the DAG is
constructed by drawing a directed edge from each parent to its children nodes. Since all descendants
of a given sub-structure at depth k ? 1 are at depth k, this results in a topological ordering of the
vertices, and hence the resulting graph is indeed a DAG. Next, we discuss how to construct such
DAGs for different graph kernels.
Graphlet Kernel: We construct the vocabulary V for graphlet kernel by enumerating all canonical
graphlets of size up to k 2 . Each canonically-labeled graphlet is a node in the DAG. We then apply
a transformation to infer the parent/child relationship between graphlets as follows: we place a
directed edge from graphlet G to G0 if, and only if, G can be obtained from G0 by deleting a node.
In other words, all edges from a graphlet G of size k ? 1 point to a graphlet G0 of size k. In order to
assign weights to the edges, given a graphlet pair G and G0 , we count the number of times G can be
obtained from G0 by deleting a node (call this number nGG0 ). Recall that G is of size k ? 1 and G0
is of size k, and therefore
nGG0 can at most be k. Let CG denote the set of children of node G in the
P
0
DAG, and nG := G?C
? . Then we define the weight wGG0 of the edge connecting G and G
? G nGG
as nGG0 /nG . The idea here is that the weight encodes the proportion of different ways of extending
G which results in the graphlet G0 . For instance, let us consider G15 and its parents G5 , G6 , G7 (see
Figure 2 for the DAG of graphlets with size k ? 5). Even if graphlet G15 is not observed in the
training data, it still gets a probability mass proportional to the edge weight from its parents in order
to overcome the sparsity problem of unseen data.
Weisfeiler-Lehman Kernel: The Weisfeiler-Lehman kernel performs an exact matching between
the compressed multiset labels. For instance, given two labels ABCDE and ABCDF, it simply assigns zero value for their similarity even though two labels have a partial similarity. In order to
smooth Weisfeiler-Lehman kernel, we first run the original algorithm and obtain the multiset representation of each graph in the dataset. We then apply a transformation to infer the parent/child relationship between compressed labels as follows: in each iteration of Weisfeiler-Lehman algorithm,
and for each multiset label of size k in the vocabulary, we generate its power set by computing all
subsets of size k ? 1 while keeping the root node fixed. For instance, the parents of a multiset label
ABCDE are {ABCD, ABCE, ABDE, ACDE}. Then, we simply construct the DAG by drawing a
directed edge from parent labels to children. Notice that considering only the set of labels generated from the Weisfeiler-Lehman kernel is not sufficient enough for constructing a valid DAG. For
instance, it might be the case that none of the possible parents of a given label exists in the vocabulary simply due to the sparsity problem (e.g.out of all possible parents of ABCDE, we might only
observe ABCE in the training data). Thus, restricting ourselves to the original vocabulary leaves
such labels orphaned in the DAG. Therefore, we consider so-called pseudo parents as a part of the
vocabulary when constructing the DAG. Since the sub-structures in this kernel are data-dependent,
we use a uniform weight between a parent and its children.
Shortest-Path Kernel: Similar to other graph kernels discussed above, shortest-path graph kernel
does not take partial similarities into account. For instance, given two shortest-paths ABCDE and
ABCDF (compressed as AE5 and AF5, respectively), it assigns zero for their similarity since their
sink labels are different. However, one can notice that shortest-path sub-structures exhibit a strong
dependency relationship. For instance, given a shortest-path pij = {ABCDE} of size k, one can
derive the shortest-paths {ABCD, ABC, AB} of size < k as a result of the optimal sub-structure
property, that is, one can show that all sub-paths of a shortest-path are also shortest-paths with
2
We used Nauty [13] to obtain canonically-labeled isomorphic representations of graphlets.
4
Figure 3: An illustration of table assignment, adapted from [9]. In this example, labels at the tables
are given by (l1 , . . . , l4 ) = (G44 , G30 , G32 , G44 ). Black dots indicate the number of occurrences of
each label in 10 draws from the Pitman-Yor process.
the same source node [6]. In order to smooth shortest-path kernel, we first build the vocabulary by
computing all shortest-paths for each graph. Let pij be a shortest-path of size k and pij 0 be a shortestpath of size k ? 1 that is obtained by removing the sink node of pij . Let lij be the compressed form
of pij that represents the sorted labels of its endpoints i and j concatenated to its length (resp. lij 0 ).
Then, in order to build the DAG, we draw a directed edge from lij 0 of depth k ? 1 to lij of depth k if
and only if pij 0 is a sub-path of pij . In other words, all ascendants of lij consist of the compressed
labels obtained from sub-paths of pij of size < k. Similar to Weisfeiler-Lehman kernel, we assign a
uniform weight between parents and children.
5
Pitman-Yor Smoothing
Pitman-Yor processes are known to produce power-law distributions [8]. A novel interpretation of
interpolated Kneser-Ney is proposed by [25] as approximate inference in a hierarchical Bayesian
model consisting of Pitman-Yor process [16]. By following a similar spirit, we extend our model
to adapt Pitman-Yor process as an alternate smoothing framework. A Pitman-Yor process P on a
ground set Gk+1 of size-(k + 1) graphlets is defined via Pk+1 ? P Y (dk+1 , ?k+1 , Pk ) where dk+1
is a discount parameter, 0 ? dk+1 < 1, ? > ?dk+1 is a strength parameter, and Pk is a base
distribution. The most intuitive way to understand draws from the Pitman-Yor process is via the
Chinese restaurant process (see Figure 3). Consider a restaurant with an infinite number of tables
Algorithm 1 Insert a Customer
Input: dk+1 , ?k+1 , Pk
t ? 0 // Occupied tables
c ? () // Counts of customers
l ? () // Labels of tables
if t = 0 then
t?1
append 1 to c
draw graphlet Gi ? Pk // Insert customer in parent
draw Gj ? wij
append Gj to l
return Gj
else
with probability ? max(0, cj ? d)
cj ? cj + 1
return lj
with probability proportional to ? + dt
t?t+1
append 1 to c
draw graphlet Gi ? Pk // Insert customer in parent
draw Gj ? wij
append Gj to l
return Gj
end if
where customers enter the restaurant one by one. The first customer sits at the first table, and a
graphlet is assigned to it by drawing a sample from the base distribution since this table is occupied
for the first time. The label of the first table is the first graphlet drawn from the Pitman-Yor process.
5
Subsequent customers when they enter the restaurant decide to sit at an already occupied table with
probability proportional to ci ? dk+1 , where ci represents the number of customers already sitting at
table i. If they sit at an already occupied table, then the label of that table denotes the next graphlet
drawn from the Pitman-Yor process. On the other hand, with probability ?k+1 + dk+1 t, where t is
the current number of occupied tables, a new customer might decide to occupy a new table. In this
case, the base distribution is invoked to label this table with a graphlet. Intuitively the reason this
process generates power-law behavior is because popular graphlets which are served on tables with
a large number of customers have a higher probability of attracting new customers and hence being
generated again, similar to a rich gets richer phenomenon. In a hierarchical Pitman-Yor process, the
base distribution Pk is recursively defined via a Pitman-Yor process Pk ? P Y (dk , ?k , Pk?1 ). In
order to label a table, we need a draw from Pk , which is obtained by inserting a customer into the
corresponding restaurant. However, adopting the traditional hierarchical Pitman-Yor process is not
straightforward in our case since the size of the context differs between levels of hierarchy, that is, a
child restaurant in the hierarchy can have more than one parent restaurant to request a label from. In
other words, Pk+1 is defined over Gk+1 of size nk+1 while Pk is defined over Gk of size nk ? nk+1 .
Therefore, one needs a transformation function to transform base distributions of different sizes. We
incorporate edge weights between parent and child restaurants by using the same weighting scheme
in Section 4.2. This changes the Chinese Restaurant process as follows: When we need to label a
table, we will first draw a size-k graphlet Gi ? Pk by inserting a customer into the corresponding
restaurant. Given Gi , we will draw a size-(k + 1) graphlet Gj proportional to wij , where wij is
obtained from the DAG. See Algorithm 1 for pseudo code of inserting a customer. Deletion of a
customer is handled similarly (see Algorithm 2).
Algorithm 2 Delete a Customer
Input: d, ?, P0 , C, L, t
with probability ? cl
cl ? cl ? 1
Gj ? lj
if cl = 0 then
Pk ? 1/wij
delete cl from c
delete lj from l
t?t?1
end if
return G
6
Related work
A survey of most popular graph kernel methods is already given in previous sections. Several methods proposed in smoothing structured objects [4], [20]. Our framework is similar to dependency
tree kernels [4] since both methods are using the notion of smoothing for structured objects. However, our method is interested in the problem of smoothing the count of structured objects. Thus,
while smoothing is achieved by using a DAG, we discard the DAG once the counts are smoothed.
Another related work to ours is propagation kernels [14] that define graph features as counts of similar node-label distributions on the respective graphs by using Locality Sensitive Hashing (LSH).
Our framework not only considers node label distributions, but also explicitly incorporates structural similarity via the DAG. Another similar work to ours is recently proposed framework by [29]
which learns the co-occurrence relationship between sub-structures by using neural language models. However, their framework do not respect the structural similarity between sub-structures, which
is an important property to consider especially in the presence of noise in edges or labels.
7
Experiments
The aim of our experiments is threefold. First, we want to show that smoothing graph kernels
significantly improves the classification accuracy. Second, we want to show that the smoothed
kernels are comparable to or outperform state-of-the-art graph kernels in terms of classification
6
Table 1: Comparison of classification accuracy (± standard deviation) of shortest-path (SP),
Weisfeiler-Lehman (WL), graphlet (GK) kernels with their smoothed variants. Smoothed variants
with statistically significant improvements over the base kernels are shown in bold as measured by
a t-test with a p value of ≤ 0.05. Ramon & Gärtner (Ram & Gär), p-random walk and random
walk kernels are included for additional comparison, where > 72H indicates the computation did not
finish in 72 hours. Runtimes for constructing the DAG and smoothing (SMTH) the counts are also
reported, where ″ indicates seconds and ′ indicates minutes.

DATASET         MUTAG        PTC          ENZYMES      PROTEINS     NCI1         NCI109
SP              85.22 ±2.43  58.24 ±2.44  40.10 ±1.50  75.07 ±0.54  73.00 ±0.24  73.00 ±0.21
SMOOTHED SP     87.94 ±2.58  60.82 ±1.84  42.27 ±1.07  75.85 ±0.28  73.26 ±0.24  73.01 ±0.31
WL              82.22 ±1.87  60.41 ±1.93  53.88 ±0.95  74.49 ±0.49  84.13 ±0.22  83.83 ±0.31
SMOOTHED WL     87.44 ±1.95  60.47 ±2.39  55.30 ±0.65  75.53 ±0.50  84.66 ±0.18  84.72 ±0.21
GK              81.33 ±1.02  55.56 ±1.46  27.32 ±0.96  69.69 ±0.46  62.46 ±0.19  62.33 ±0.14
SMOOTHED GK     83.17 ±0.64  58.44 ±1.00  30.90 ±1.51  69.83 ±0.46  62.48 ±0.15  62.48 ±0.11
PYP GK          83.11 ±1.23  57.44 ±1.44  29.63 ±1.30  70.00 ±0.80  62.50 ±0.20  62.68 ±0.18
RAM & GÄR       84.88 ±1.86  58.47 ±0.90  16.96 ±1.46  70.73 ±0.35  56.61 ±0.53  54.62 ±0.23
P-RANDOMWALK    80.05 ±1.64  59.38 ±1.66  30.01 ±1.00  71.16 ±0.35  > 72H        > 72H
RANDOM WALK     83.72 ±1.50  57.85 ±1.30  24.16 ±1.64  74.22 ±0.42  > 72H        > 72H
DAG/SMTH (GK)   6″ 1″        6″ 1″        6″ 1″        6″ 1″        6″ 3′        6″ 3′
DAG/SMTH (SP)   3″ 1″        19″ 1′       45″ 1′       9″ 1′        9″ 17′       10″ 16′
DAG/SMTH (WL)   1″ 2″        1″ 17′       10″ 12′      7″ 70′       2″ 21′       2″ 21′
DAG/SMTH (PYP)  6″ 5″        6″ 12′       6″ 21′       6″ 1′        6″ 8′        6″ 8′
accuracy, while remaining competitive in terms of computational requirements. Third, we want to
show that our methods outperform the base kernels when edge or label noise is present.
Datasets We used the following benchmark datasets used in graph kernels: MUTAG, PTC, ENZYMES, PROTEINS, NCI1 and NCI109. MUTAG is a dataset of 188 mutagenic aromatic and
heteroaromatic nitro compounds [5] with 7 discrete labels. PTC [26] is a dataset of 344 chemical
compounds with 19 discrete labels. ENZYMES is a dataset of 600 protein tertiary structures obtained
from [2], and has 3 discrete labels. PROTEINS is a dataset of 1113 graphs obtained from [2] having
3 discrete labels. NCI1 and NCI109 [28] are two balanced datasets of chemical compounds having
sizes 4110 and 4127 with 37 and 38 labels, respectively.
Experimental setup We compare our framework against representative instances of major families
of graph kernels in the literature. In addition to the base kernels, we also compare our smoothed
kernels with the random walk kernel [7], the Ramon-Gärtner subtree kernel [18], and the p-step random walk
kernel [24]. The Random Walk, p-step Random Walk and Ramon-Gärtner kernels are written in Matlab and
obtained from [22]. All other kernels were coded in Python, except Pitman-Yor smoothing, which is
coded in C++.3 We used a parallel implementation for smoothing the counts of the Weisfeiler-Lehman
kernel for efficiency. All kernels are normalized to have unit length in the feature space. Moreover,
we use 10-fold cross validation with a binary C-Support Vector Machine (SVM), where the C value
for each fold is independently tuned using training data from that fold. In order to exclude random
effects of the fold assignments, this experiment is repeated 10 times and the average prediction accuracy
of the 10 experiments with their standard deviations is reported.4
7.1
Results
In our first experiment, we compare the base kernels with their smoothed variants. As can be seen
from Table 1, smoothing improves the classification accuracy of every base kernel on every dataset
with the majority of the improvements being statistically significant with p ≤ 0.05. We observe that
even though smoothing improves the accuracy of graphlet kernels on PROTEINS and NCI1, the improvements are not statistically significant. We believe this is due to the fact that these datasets are
not sensitive to structural noise as much as the other datasets, thus considering the partial similarities
3
We modified the open source implementation of PYP: https://github.com/redpony/cpyp.
4
Implementations of original and smoothed versions of the kernels, datasets and detailed discussion of
the parameter selection procedure with the list of parameters used in our experiments can be accessed from
http://web.ics.purdue.edu/~ypinar/nips.
Figure 4: Classification accuracy vs. noise for base graph kernels (dashed lines) and their smoothed
variants (non-dashed lines).
do not improve the results significantly. Moreover, PYP smoothed graphlet kernels achieve statistically significant improvements on most of the datasets; however, they are outperformed by the smoothed
graphlet kernels introduced in Section 3.
In our second experiment, we picked the best smoothed kernel in terms of classification accuracy for
each dataset, and compared it against the performance of state-of-the-art graph kernels (see Table
1). Smoothed kernels outperform other methods on all datasets, and the results are statistically
significant on every dataset except PTC.
In our third experiment, we investigated the runtime behavior of our framework with two major
costs. First, one has to compute a DAG by using the original feature vectors. Next, the constructed
DAG needs to be used to compute smoothed representations of the feature vectors. Table 1 shows
the total wallclock runtime taken by all graphs for constructing the DAG, and smoothing the counts
for each dataset. As can be seen from the runtimes, our framework adds a constant factor to the
original runtime for most of the datasets. While the DAG creation in Weisfeiler-Lehman kernel also
adds a negligible overhead, the cost of smoothing becomes significant if the vocabulary size gets
prohibitively large due to the exponentially growing nature of the kernel w.r.t. the subtree parameter h.
Finally, in our fourth experiment, we test the performance of graph kernels when edge or label
noise is present. For edge noise, we randomly removed and added {10%, 20%, 30%} of the edges
in each graph. For label noise, we randomly flipped {25%, 50%, 75%} of the node labels in each
graph where random labels are selected proportionally to the original label-distribution of the graph.
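The noise protocol above can be sketched as follows; this is our reading of the setup (function names and the graph representation are illustrative), not the authors' code.

```python
import random
from collections import Counter

def add_edge_noise(edges, nodes, frac):
    """Randomly remove a fraction of edges, then add the same number back."""
    edges = list(edges)
    k = int(frac * len(edges))
    random.shuffle(edges)
    kept = edges[k:]                       # remove k random edges
    existing = set(map(frozenset, kept))
    while len(kept) < len(edges):          # add k random new edges
        u, v = random.sample(nodes, 2)
        if frozenset((u, v)) not in existing:
            kept.append((u, v))
            existing.add(frozenset((u, v)))
    return kept

def add_label_noise(labels, frac):
    """Flip a fraction of node labels, drawing replacements in proportion
    to the graph's original label distribution."""
    labels = dict(labels)
    dist = Counter(labels.values())
    pool, weights = zip(*dist.items())
    flip = random.sample(list(labels), int(frac * len(labels)))
    for v in flip:
        labels[v] = random.choices(pool, weights=weights, k=1)[0]
    return labels
```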
Figure 4 shows the performance of smoothed graph kernels under noise. As can be seen from
the figure, smoothed kernels are able to outperform their base variants when noise is present. An
interesting observation is that even though a significant amount of edge noise is added to the PROTEINS
and NCI datasets, the performance of the base kernels does not change drastically. This further supports
our observation that these datasets are not as sensitive to structural noise as the other datasets.
8
Conclusion and Future Work
We presented a novel framework for smoothing graph kernels inspired by smoothing techniques
from natural language processing and applied our method to state-of-the-art graph kernels. Our
framework is rather general, and lends itself to many extensions. For instance, by defining domainspecific parent-child relationships, one can construct different DAGs with different weighting
schemes. Another interesting extension of our smoothing framework would be to apply it to graphs
with continuous labels. Moreover, even though we restricted ourselves to graph kernels in this paper, our framework is applicable to any R-convolution kernel that uses a frequency-vector based
representation, such as string kernels.
9
Acknowledgments
We thank Hyokun Yun for his tremendous help in implementing Pitman-Yor Processes. We also
thank the anonymous NIPS reviewers for their constructive comments, and Jiasen Yang, Joon Hee
Choi, Amani Abu Jabal and Parameswaran Raman for reviewing early drafts of the paper. This work
is supported by the National Science Foundation under grant No. #1219015.
References
[1] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In ICML, pages 74–81, 2005.
[2] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein
function prediction via graph kernels. In ISMB, Detroit, USA, 2005.
[3] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. In ACL,
pages 310–318, 1996.
[4] D. Croce, A. Moschitti, and R. Basili. Structured lexical similarity via convolution kernels on dependency
trees. In Proceedings of EMNLP, pages 1034–1046. Association for Computational Linguistics, 2011.
[5] A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J. Med. Chem., 34:786–797, 1991.
[6] A. Feragen, N. Kasenburg, J. Petersen, M. de Bruijne, and K. Borgwardt. Scalable kernels for graphs with
continuous attributes. In NIPS, pages 216–224, 2013.
[7] T. Gärtner, P. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In
COLT, pages 129–143, 2003.
[8] S. Goldwater, T. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. NIPS, 2006.
[9] S. Goldwater, T. L. Griffiths, and M. Johnson. Producing power-law distributions and damping word
frequencies with two-stage language models. JMLR, 12:2335–2382, 2011.
[10] D. Haussler. Convolution kernels on discrete structures. Technical Report UCS-CRL-99-10, UC Santa
Cruz, 1999.
[11] J. Kandola, T. Graepel, and J. Shawe-Taylor. Reducing kernel matrix diagonal dominance using semidefinite programming. In COLT, volume 2777 of Lecture Notes in Computer Science, pages 288–302,
Washington, DC, 2003.
[12] R. Kneser and H. Ney. Improved backing-off for M-gram language modeling. In ICASSP, 1995.
[13] B. D. McKay. Nauty user's guide (version 2.4). Australian National University, 2007.
[14] M. Neumann, R. Garnett, P. Moreno, N. Patricia, and K. Kersting. Propagation kernels for partially
labeled graphs. In ICML 2012 Workshop on Mining and Learning with Graphs, Edinburgh, UK, 2012.
[15] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependences in stochastic language modeling. In Computer Speech and Language, pages 1–38, 1994.
[16] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator.
Annals of Probability, 25(2):855–900, 1997.
[17] N. Przulj. Biological network comparison using graphlet degree distribution. In ECCB, 2006.
[18] J. Ramon and T. Gärtner. Expressivity versus efficiency of graph kernels. Technical report, First International Workshop on Mining Graphs, Trees and Sequences (held with ECML/PKDD'03), 2003.
[19] B. Schölkopf and A. J. Smola. Learning with Kernels. 2002.
[20] A. Severyn and A. Moschitti. Fast support vector machines for convolution tree kernels. Data Mining
and Knowledge Discovery, 25(2):325–357, 2012.
[21] N. Shervashidze and K. Borgwardt. Fast subtree kernels on graphs. In NIPS, 2010.
[22] N. Shervashidze, S. V. N. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt. Efficient graphlet
kernels for large graph comparison. In AISTATS, 2009.
[23] N. Shervashidze, P. Schweitzer, E. J. Van Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman graph kernels. JMLR, 12:2539–2561, 2011.
[24] A. J. Smola and R. Kondor. Kernels and regularization on graphs. In COLT, pages 144–158, 2003.
[25] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In ACL, 2006.
[26] H. Toivonen, A. Srinivasan, R. D. King, S. Kramer, and C. Helma. Statistical evaluation of the predictive
toxicology challenge 2000-2001. Bioinformatics, 19(10):1183–1193, July 2003.
[27] S. V. N. Vishwanathan, N. N. Schraudolph, I. R. Kondor, and K. M. Borgwardt. Graph kernels. JMLR,
2010.
[28] N. Wale, I. A. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval
and classification. Knowledge and Information Systems, 14(3):347–375, 2008.
[29] P. Yanardag and S. Vishwanathan. Deep graph kernels. In KDD, pages 1365–1374. ACM, 2015.
[30] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to information
retrieval. ACM Trans. Inf. Syst., 22(2):179–214, 2004.
Optimization Monte Carlo: Efficient and
Embarrassingly Parallel Likelihood-Free Inference
Max Welling∗
Informatics Institute
University of Amsterdam
[email protected]
Edward Meeds
Informatics Institute
University of Amsterdam
[email protected]
Abstract
We describe an embarrassingly parallel, anytime Monte Carlo method for
likelihood-free models. The algorithm starts with the view that the stochasticity of the pseudo-samples generated by the simulator can be controlled externally
by a vector of random numbers u, in such a way that the outcome, knowing u,
is deterministic. For each instantiation of u we run an optimization procedure to
minimize the distance between summary statistics of the simulator and the data.
After reweighing these samples using the prior and the Jacobian (accounting for
the change of volume in transforming from the space of summary statistics to the
space of parameters) we show that this weighted ensemble represents a Monte
Carlo estimate of the posterior distribution. The procedure can be run embarrassingly parallel (each node handling one sample) and anytime (by allocating
resources to the worst performing sample). The procedure is validated on six experiments.
1
Introduction
Computationally demanding simulators are used across the full spectrum of scientific and industrial
applications, whether one studies embryonic morphogenesis in biology, tumor growth in cancer
research, colliding galaxies in astronomy, weather forecasting in meteorology, climate changes in
the environmental science, earthquakes in seismology, market movement in economics, turbulence
in physics, brain functioning in neuroscience, or fabrication processes in industry. Approximate
Bayesian computation (ABC) forms a large class algorithms that aims to sample from the posterior
distribution over parameters for these likelihood-free (a.k.a. simulator based) models. Likelihood-free inference, however, is notoriously inefficient in terms of the number of simulation calls per
independent sample. Further, like regular Bayesian inference algorithms, care must be taken so that
posterior sampling targets the correct distribution.
The simplest ABC algorithm, ABC rejection sampling, can be fully parallelized by running independent processes with no communication or synchronization requirements. I.e. it is an embarrassingly
parallel algorithm. Unfortunately, as the most inefficient ABC algorithm, the benefits of this title are limited. There has been considerable progress in distributed MCMC algorithms aimed at
large-scale data problems [2, 1]. Recently, a sequential Monte Carlo (SMC) algorithm called "the
particle cascade" was introduced that emits streams of samples asynchronously with minimal memory management and communication [17]. In this paper we present an alternative embarrassingly
parallel sampling approach: each processor works independently, at full capacity, and will indefinitely emit independent samples. The main trick is to pull random number generation outside of the
simulator and treat the simulator as a deterministic piece of code. We then minimize the difference
∗ Donald Bren School of Information and Computer Sciences, University of California, Irvine, and Canadian
Institute for Advanced Research.
between observations and the simulator output over its input parameters and weight the final (optimized) parameter value with the prior and the (inverse of the) Jacobian. We show that the resulting
weighted ensemble represents a Monte Carlo estimate of the posterior. Moreover, we argue that the
error of this procedure is O(ε) if the optimization gets ε-close to the optimal value. This "Optimization Monte Carlo" (OMC) has several advantages: 1) it can be run embarrassingly parallel, 2)
the procedure generates independent samples and 3) the core procedure is now optimization rather
than MCMC. Indeed, optimization as part of a likelihood-free inference procedure has recently been
proposed [12]; using a probabilistic model of the mapping from parameters to differences between
observations and simulator outputs, they apply "Bayesian Optimization" (e.g. [13, 21]) to efficiently
perform posterior inference. Note also that since random numbers have been separated out from the
simulator, powerful tools such as "automatic differentiation" (e.g. [14]) are within reach to assist
with the optimization. In practice we find that OMC uses far fewer simulations per sample than
alternative ABC algorithms.
The approach of controlling randomness as part of an inference procedure is also found in a related
class of parameter estimation algorithms called indirect inference [11]. Connections between ABC
and indirect inference have been made previously by [7] as a novel way of creating summary statistics. An indirect inference perspective led to an independently developed version of OMC called the
?reverse sampler? [9, 10].
In Section 2 we briefly introduce ABC and present it from a novel viewpoint in terms of random
numbers. In Section 3 we derive ABC through optimization from a geometric point of view, then
proceed to generalize it to higher dimensions. We show in Section 4 extensive evidence of the
correctness and efficiency of our approach. In Section 5 we describe the outlook for optimization-based ABC.
2
ABC Sampling Algorithms
The primary interest in ABC is the posterior of simulator parameters θ given a vector of (statistics
of) observations y, p(θ|y). The likelihood p(y|θ) is generally not available in ABC. Instead we can
use the simulator as a generator of pseudo-samples x that reside in the same space as y. By treating
x as auxiliary variables, we can continue with the Bayesian treatment:

p_ε(θ|y) = p(θ) p_ε(y|θ) / p(y) = p(θ) ∫ p_ε(y|x) p(x|θ) dx / ∫ p(θ) ∫ p_ε(y|x) p(x|θ) dx dθ    (1)
Of particular importance is the choice of kernel measuring the discrepancy between observations y
and pseudo-data x. Popular choices for kernels are the Gaussian kernel and the uniform ε-tube/ball.
The bandwidth parameter ε (which may be a vector accounting for relative importance of each
statistic) plays a critical role: a small ε produces more accurate posteriors, but is more computationally
demanding, whereas a large ε induces larger error but is cheaper.
We focus our attention on population-based ABC samplers, which include rejection sampling, importance sampling (IS), sequential Monte Carlo (SMC) [6, 20] and population Monte Carlo [3]. In
rejection sampling, we draw parameters from the prior θ ∼ p(θ), then run a simulation at those
parameters x ∼ p(x|θ); if the discrepancy ρ(x, y) < ε, then the particle is accepted, otherwise it
is rejected. This is repeated until n particles are accepted. Importance sampling generalizes rejection sampling using a proposal distribution q(θ) instead of the prior, and produces samples with
weights w_i ∝ p(θ)/q(θ). SMC extends IS to multiple rounds with decreasing ε, adapting their particles after each round, such that each new population improves the approximation to the posterior.
Our algorithm has similar qualities to SMC since we generate a population of n weighted particles,
but differs significantly since our particles are produced by independent optimization procedures,
making it completely parallel.
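For reference, the rejection-sampling baseline just described fits in a few lines of Python; `prior_sample`, `simulate` and `discrepancy` are placeholders for the problem-specific pieces.

```python
import numpy as np

def abc_rejection(prior_sample, simulate, discrepancy, y, eps, n):
    """Draw n ABC posterior samples by simple rejection."""
    accepted = []
    while len(accepted) < n:
        theta = prior_sample()          # theta ~ p(theta)
        x = simulate(theta)             # x ~ p(x | theta)
        if discrepancy(x, y) < eps:     # keep if within the eps-ball
            accepted.append(theta)
    return np.array(accepted)
```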
3
A Parallel and Efficient ABC Sampling Algorithm
Inherent in our assumptions about the simulator is that internally there are calls to a random number
generator which produces the stochasticity of the pseudo-samples. We will assume for the moment
that this can be represented by a vector of uniform random numbers u which, if known, would
make the simulator deterministic. More concretely, we assume that any simulation output x can
be represented as a deterministic function of parameters θ and a vector of random numbers u,
(a) D_θ = D_y
(b) D_θ < D_y
Figure 1: Illustration of OMC geometry. (a) Dashed lines indicate contours f(θ, u) over θ for several u. For
three values of u, their initial and optimal θ positions are shown (solid blue/white circles). Within the grey
acceptance region, the Jacobian, indicated by the blue diagonal line, describes the relative change in volume
induced in f(θ, u) from a small change in θ. Corresponding weights ∝ 1/|J| are shown as vertical stems. (b)
When D_θ < D_y, here 1 < 2, the change in volume is proportional to the length of the line segment inside the
ellipsoid (|JᵀJ|^{1/2}). The orange line indicates the projection of the observation onto the contour of f(θ, u) (in
this case, identical to the optimal).
i.e. x = f(θ, u). This assumption has been used previously in ABC, first in "coupled ABC"
[16] and also in an application of Hamiltonian dynamics to ABC [15]. We do not make any further
assumptions regarding u or p(u), though for some problems their dimension and distribution may be
known a priori. In these cases it may be worth employing Sobol or other low-discrepancy sequences
to further improve the accuracy of any Monte Carlo estimates.
We will first derive a dual representation for the ABC likelihood function p_ε(y|θ) (see also [16]),

p_ε(y|θ) = ∫ p_ε(y|x) p(x|θ) dx = ∫∫ p_ε(y|x) I[x = f(θ, u)] p(u) dx du    (2)

         = ∫ p_ε(y|f(θ, u)) p(u) du    (3)
leading to the following Monte Carlo approximation of the ABC posterior,

p_ε(θ|y) ∝ p(θ) ∫ p(u) p_ε(y|f(u, θ)) du ≈ (1/n) Σ_i p_ε(y|f(u_i, θ)) p(θ),    u_i ∼ p(u)    (4)
Since p_ε is a kernel that only accepts arguments y and f(u_i, θ) that are close to each other (for
values of ε that are as small as possible), Equation 4 tells us that we should first sample values for
u from p(u) and then for each such sample find the value for θ_i° that results in y = f(θ_i°, u). In
practice we want to drive these values as close to each other as possible through optimization and
accept an O(ε) error if the remaining distance is still O(ε). Note that apart from sampling the values
for u this procedure is deterministic and can be executed completely in parallel, i.e. without any
communication. In the following we will assume a single observation vector y, but the approach is
equally applicable to a dataset of N cases.
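A minimal sketch of the resulting procedure follows. It is illustrative only: the optimizer, the finite-difference Jacobian and the weight formula (derived in Section 3.1 below) are assumptions about one reasonable implementation, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def numerical_jacobian(g, th, h=1e-5):
    """One-sided finite-difference Jacobian of g at th."""
    g0 = np.atleast_1d(g(th))
    J = np.zeros((g0.size, th.size))
    for d in range(th.size):
        e = np.zeros_like(th)
        e[d] = h
        J[:, d] = (np.atleast_1d(g(th + e)) - g0) / h
    return J

def omc_sample(f, prior_logpdf, y, u_draws, theta0, eps):
    """One OMC particle per random-number draw u.

    f(theta, u) -> simulated statistics (deterministic given u).
    Returns lists of optimized thetas and unnormalized weights.
    """
    thetas, weights = [], []
    for u in u_draws:
        # Optimize theta so the deterministic simulation matches y.
        obj = lambda th: np.sum((y - f(th, u)) ** 2)
        res = minimize(obj, theta0, method="Nelder-Mead")
        th = res.x
        if np.sqrt(res.fun) > eps:       # optimum not inside the eps-ball
            continue                      # reject this u (see Section 3.1)
        J = numerical_jacobian(lambda th_: f(th_, u), th)
        w = np.exp(prior_logpdf(th)) / np.sqrt(np.linalg.det(J.T @ J))
        thetas.append(th)
        weights.append(w)
    return thetas, weights
```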
3.1 The case D_θ = D_y
We will first study the case when the number of parameters θ is equal to the number of summary
statistics y. To understand the derivation it helps to look at Figure 1a which illustrates the derivation
for the one dimensional case. In the following we use the following abbreviation: f_i(θ) stands for
f(θ, u_i). The general idea is that we want to write the approximation to the posterior as a mixture
of small uniform balls (or delta peaks in the limit):

p_ε(θ|y) ∝ (1/n) Σ_i p_ε(y|f(u_i, θ)) p(θ) ≈ (1/n) Σ_i w_i U_ε(θ|θ_i*) p(θ)    (5)
with w_i some weights that we will derive shortly. Then, if we make ε small enough we can replace
any average of a sufficiently smooth function h(θ) w.r.t. this approximate posterior simply by evaluating h(θ) at some arbitrarily chosen points inside these balls (for instance we can take the center
of the ball θ_i*),

∫ h(θ) p_ε(θ|y) dθ ≈ (1/n) Σ_i h(θ_i*) w_i p(θ_i*)    (6)
To derive this expression we first assume that:

p_ε(y|f_i(θ)) = C(ε) I[‖y − f_i(θ)‖² ≤ ε²]    (7)

i.e. a ball of radius ε. C(ε) is the normalizer, which is immaterial because it cancels in the posterior.
For ε small enough we claim that we can linearize f_i(θ) around θ_i°:

f̃_i(θ) = f_i(θ_i°) + J_i°(θ − θ_i°) + R_i,    R_i = O(‖θ − θ_i°‖²)    (8)

where J_i° is the Jacobian matrix with columns ∂f_i(θ_i°)/∂θ_d. We take θ_i° to be the end result of our
optimization procedure for sample u_i. Using this we thus get,

‖y − f_i(θ)‖² ≈ ‖(y − f_i(θ_i°)) − J_i°(θ − θ_i°) − R_i‖²    (9)

We first note that since we assume that our optimization has ended up somewhere inside the ball
defined by ‖y − f_i(θ)‖² ≤ ε², we can assume that ‖y − f_i(θ_i°)‖ = O(ε). Also, since we only
consider values for θ that satisfy ‖y − f_i(θ)‖² ≤ ε², and furthermore assume that the function
f_i(θ) is Lipschitz continuous in θ, it follows that ‖θ − θ_i°‖ = O(ε) as well. All of this implies
that we can safely ignore the remaining term R_i (which is of order O(‖θ − θ_i°‖²) = O(ε²)) if we
restrict ourselves to the volume inside the ball.
The next step is to view the term I[‖y − f_i(θ)‖² ≤ ε²] as a distribution in θ. With the Taylor
expansion this results in,

I[(θ − θ_i° − J_i°⁻¹(y − f_i(θ_i°)))ᵀ J_i°ᵀ J_i° (θ − θ_i° − J_i°⁻¹(y − f_i(θ_i°))) ≤ ε²]

This represents an ellipse in θ-space with a centroid θ_i* and volume V_i given by

θ_i* = θ_i° + J_i°⁻¹(y − f_i(θ_i°))    (10)

V_i = γ / √det(J_i°ᵀ J_i°)    (11)

with γ a constant independent of i. We can approximate the posterior now as,

p(θ|y) ≈ (1/Γ) Σ_i U_ε(θ|θ_i*) p(θ) / √det(J_i°ᵀ J_i°) → (1/Γ) Σ_i δ(θ − θ_i*) p(θ_i*) / √det(J_i°ᵀ J_i°)    (12)

where in the last step we have sent ε → 0. Finally, we can compute the constant Γ through normalization, Γ = Σ_i p(θ_i*) det(J_i°ᵀ J_i°)^{−1/2}. The whole procedure is accurate up to errors of the
order O(ε²), and it is assumed that the optimization procedure delivers a solution that is located
within the epsilon ball. If one of the optimizations for a certain sample u_i did not end up within
the epsilon ball there can be two reasons: 1) the optimization did not converge to the optimal value
for θ, or 2) for this value of u there is no solution for which f(θ, u) can get within a distance ε
from the observation y. If we interpret ε as our uncertainty in the observation y, and we assume that
our optimization succeeded in finding the best possible value for θ, then we should simply reject
this sample θ_i. However, it is hard to detect if our optimization succeeded and we may therefore
sometimes reject samples that should not have been rejected. Thus, one should be careful not to
create a bias against samples u_i for which the optimization is difficult. This situation is similar to a
sampler that will not mix to remote local optima in the posterior distribution.
3.2 The case D_θ < D_y
This is the overdetermined case and here the situation as depicted in Figure 1b is typical: the manifold that f(θ, u_i) traces out as we vary θ forms a lower dimensional surface in the D_y dimensional
enveloping space. This manifold may or may not intersect with the sphere centered at the observation y (or ellipsoid, for the general case instead of the ε-ball). Assume that the manifold does intersect the
epsilon ball but not y. Since we trust our observation up to distance ε, we may simply choose to
pick the closest point θ_i* to y on the manifold, which is given by,

θ_i* = θ_i° + J_i°⁺(y − f_i(θ_i°)),    J_i°⁺ = (J_i°ᵀ J_i°)⁻¹ J_i°ᵀ    (13)

where J_i°⁺ is the pseudo-inverse. We can now define our ellipse around this point, shifting the
center of the ball from y to f_i(θ_i*) (which do not coincide in this case). The uniform distribution
on the ellipse in θ-space is now defined in the D_θ dimensional manifold and has volume V_i =
γ det(J_i°ᵀ J_i°)^{−1/2}. So once again we arrive at almost the same equation as before (Eq. 12) but with
the slightly different definition of the point θ_i* given by Eq. 13. Crucially, since ‖y − f_i(θ_i*)‖ ≤ ε
and if we assume that our optimization succeeded, we will only make mistakes of order O(ε²).
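In code, Eq. 13 is a one-line projection; as a sketch with illustrative names, `np.linalg.pinv` computes the same Moore-Penrose pseudo-inverse, which equals (JᵀJ)⁻¹Jᵀ when J has full column rank.

```python
import numpy as np

def project_theta(theta_opt, J, y, f_opt):
    """Eq. 13: move theta to the point whose (locally linearized)
    simulated statistics are closest to the observation y."""
    J_pinv = np.linalg.pinv(J)           # (J^T J)^{-1} J^T for full-rank J
    return theta_opt + J_pinv @ (y - f_opt)
```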
3.3 The case D_θ > D_y
This is the underdetermined case in which it is typical that entire manifolds (e.g. hyperplanes) may
be a solution to ‖y − f_i(θ_i*)‖ = 0. In this case we cannot approximate the posterior with a
mixture of point masses and thus the procedure does not apply. However, the case D_θ > D_y is less
interesting than the other ones above as we expect to have more summary statistics than parameters
for most problems.
4
Experiments
The goal of these experiments is to demonstrate 1) the correctness of OMC and 2) the relative
efficiency of OMC in relation to two sequential MC algorithms, SMC (aka population MC [3]) and
adaptive weighted SMC [5]. To demonstrate correctness, we show histograms of weighted samples
along with the true posterior (when known) and, for three experiments, the exact OMC weighted
samples (when the exact Jacobian and optimal θ is known). To demonstrate efficiency, we compute
the mean simulations per sample (SS), the number of simulations required to reach an ε threshold,
and the effective sample size (ESS), defined as 1/wᵀw. Additionally, we may measure ESS/n, the
fraction of effective samples in the population. ESS is a good way of detecting whether the posterior
is dominated by a few particles and/or how many particles achieve discrepancy less than epsilon.
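With an unnormalized weight vector, both diagnostics are one-liners (a small illustrative helper; `n_sims` is the total simulation count, an assumption about how SS is tallied):

```python
import numpy as np

def diagnostics(raw_weights, n_sims):
    w = np.asarray(raw_weights, dtype=float)
    w /= w.sum()                      # normalize the weights
    ess = 1.0 / np.sum(w ** 2)        # effective sample size, 1 / w^T w
    ss = n_sims / len(w)              # mean simulations per sample
    return ess, ss
```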
There are several algorithmic options for OMC. The most obvious is to spawn independent processes, draw u for each, and optimize until ε is reached (or a maximum number of simulations is run), then
compute Jacobians and particle weights. Variations could include keeping a sorted list of discrepancies and allocating computational resources to the worst particle. However, to compare OMC with
SMC, in this paper we use a sequential version of OMC that mimics the epsilon rounds of SMC.
Each simulator uses different optimization procedures, including Newton's method for smooth simulators, and random walk optimization for others; Jacobians were computed using one-sided finite
differences. To limit computational expense we placed a max of 1000 simulations per sample per
round for all algorithms. Unless otherwise noted, we used n = 5000 and repeated runs 5 times; lack
of error bars indicate very low deviations across runs. We also break some of the notational convention used thus far so that we can specify exactly how the random numbers translate into pseudo-data
and the pseudo-data into statistics. This is clarified for each example. Results are explained in
Figures 2 to 4.
4.1
Normal with Unknown Mean and Known Variance
The simplest example is the inference of the mean θ of a univariate normal distribution with known
variance σ². The prior distribution π(θ) is normal with mean θ₀ and variance kσ², where k > 0 is
a factor relating the dispersions of θ and the data y_n. The simulator can generate data according to
the normal distribution, or deterministically if the random effects r_{u_m} are known:

π(x_m|θ) = N(x_m|θ, σ²)  ⟹  x_m = θ + r_{u_m}    (14)

where r_{u_m} = σ √2 erf⁻¹(2u_m − 1) (using the inverse CDF). A sufficient statistic for this problem is
the average s(x) = (1/M) Σ_m x_m. Therefore we have f(θ, u) = θ + R(u) where R(u) = Σ_m r_{u_m}/M
(the average of the random effects). In our experiment we set M = 2 and y = 0. The exact
Jacobian and θ_i° can be computed for this problem: for a draw u_i, J_i = 1; if s(y) is the mean of the
observations y, then by setting f(θ_i°, u_i) = s(y) we find θ_i° = s(y) − R(u_i). Therefore the exact
weights are w_i ∝ π(θ_i°), allowing us to compare directly with an exact posterior based on our dual
representation by u (shown by orange circles in Figure 2 top-left). We used Newton's method to
optimize each particle.
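Because J_i = 1 here, the exact OMC ensemble has a closed form; the sketch below reproduces it under assumed placeholder values for σ, θ₀ and k.

```python
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

def exact_omc_normal_mean(s_y, n, M=2, sigma=1.0, theta0=0.0, k=1.0):
    """Exact OMC particles for the unknown-mean example (J_i = 1)."""
    u = np.random.rand(n, M)
    r = sigma * np.sqrt(2.0) * erfinv(2.0 * u - 1.0)   # random effects
    R = r.mean(axis=1)                                  # R(u)
    theta = s_y - R                                     # theta with f = s(y)
    w = norm.pdf(theta, loc=theta0, scale=np.sqrt(k) * sigma)  # w ∝ prior
    return theta, w / w.sum()
```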
Figure 2: Left: Inference of unknown mean. For ε = 0.1, OMC uses 3.7 SS; AW/SMC use 20/20 SS; at ε = 0.01,
OMC uses 4 SS (only 0.3 SS more), and SMC jumps to 110 SS. For all algorithms and ε values, ESS/n = 1.
Right: Inference for mixture of normals. Similar results for OMC; at ε = 0.025 AW/SMC had 40/50 SS and at
ε = 0.01 have 105/120 SS. The ESS/n remained at 1 for OMC, but decreased to 0.06/0.47 (AW/SMC) at ε = 0.025,
and 0.35 for both at ε = 0.01. Not only does the ESS remain high for OMC, but it also represents the tails of the
distribution well, even at low ε.
4.2 Normal Mixture
A standard illustrative ABC problem is the inference of the mean θ of a mixture of two normals
[19, 3, 5]: p(x|θ) = π N(θ, σ₁²) + (1 − π) N(θ, σ₂²), with π(θ) = U(θ_a, θ_b) where the hyperparameters
are π = 1/2, σ₁² = 1, σ₂² = 1/100, θ_a = −10, θ_b = 10, and a single observation scalar y = 0.
For this problem M = 1 so we drop the subscript m. The true posterior is simply p(θ|y = 0) ∝
π N(θ, σ₁²) + (1 − π) N(θ, σ₂²), θ ∈ [−10, 10]. In this problem there are two random numbers u₁
and u₂, one for selecting the mixture component and the other for the random innovation; further,
the statistic is the identity, i.e. s(x) = x:

x = I[u₁ < π](θ + σ₁ √2 erf⁻¹(2u₂ − 1)) + I[u₁ ≥ π](θ + σ₂ √2 erf⁻¹(2u₂ − 1))    (15)

  = θ + √2 erf⁻¹(2u₂ − 1) σ₁^{I[u₁<π]} σ₂^{I[u₁≥π]} = θ + R(u)    (16)

where R(u) = √2 erf⁻¹(2u₂ − 1) σ₁^{I[u₁<π]} σ₂^{I[u₁≥π]}. As with the previous example, the Jacobian is 1
and θ_i° = y − R(u_i) is known exactly. This problem is notable for causing performance issues in
ABC-MCMC [19] and its difficulty in targeting the tails of the posterior [3]; this is not the case for
OMC.
4.3 Exponential with Unknown Rate
In this example, the goal is to infer the rate θ of an exponential distribution, with a gamma prior
p(θ) = Gamma(θ|α, β), based on M draws from Exp(θ):

p(x_m|θ) = Exp(x_m|θ) = θ exp(−θ x_m)  ⟹  x_m = −(1/θ) ln(1 − u_m) = (1/θ) r_{u_m}    (17)

where r_{u_m} = −ln(1 − u_m) (the inverse CDF of the exponential). A sufficient statistic for this
problem is the average s(x) = Σ_m x_m / M. Again, we have exact expressions for the Jacobian
and θ_i°: using f(θ, u_i) = R(u_i)/θ, we get J_i = −R(u_i)/θ² and θ_i° = R(u_i)/s(y). We used M = 2,
s(y) = 10 in our experiments.
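The exact construction can again be written down directly; the following sketch assumes placeholder values for the gamma hyperparameters α and β.

```python
import numpy as np
from scipy.stats import gamma

def exact_omc_exponential(s_y, n, M=2, alpha=1.0, beta=1.0):
    """Exact OMC for the exponential-rate example.

    theta_i = R(u_i)/s(y) and J_i = -R(u_i)/theta_i^2 (scalar here),
    so the weight is w_i ∝ p(theta_i) / |J_i|.
    """
    u = np.random.rand(n, M)
    R = (-np.log(1.0 - u)).mean(axis=1)       # R(u) = mean of r_{u_m}
    theta = R / s_y                            # optimal theta per draw
    J = -R / theta ** 2                        # exact scalar Jacobian
    w = gamma.pdf(theta, a=alpha, scale=1.0 / beta) / np.abs(J)
    return theta, w / w.sum()
```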
4.4 Linked Mean and Variance of Normal
In this example we link together the mean and variance of the data generating function as follows:

p(x_m|θ) = N(x_m|θ, θ²)  ⟹  x_m = θ + θ √2 erf⁻¹(2u_m − 1) = θ r_{u_m}    (18)
Figure 3: Left: Inference of rate of exponential. A similar result w.r.t. SS occurs for this experiment: at ε = 1,
OMC had 15 v 45/50 for AW/SMC; at ε = 0.01, SS was 28 for OMC v 220 for AW/SMC. ESS/n drops with ε below
1: OMC drops at ε = 1 to 0.71 v 0.97 for SMC; at ε = 0.1 ESS/n remains the same. Right: Inference of linked
normal. ESS/n drops significantly for OMC: at ε = 0.25 to 0.32 and at ε = 0.1 to 0.13, while it remains high
for SMC (0.91 to 0.83). This is the result of the inability of every u_i to achieve ρ < ε, whereas for SMC, the
algorithm allows them to "drop" their random numbers and effectively switch to another. This was verified
by running an expensive fine-grained optimization, resulting in 32.9% and 13.6% of optimized particles having
ρ under ε = 0.25/0.1. Taking this inefficiency into account, OMC still requires 130 simulations per effective
sample v 165 for SMC (i.e. 17/0.13 and 136/0.83).
where r_{u_m} = 1 + √2 erf⁻¹(2u_m − 1). We put a positive constraint on θ: p(θ) = U(0, 10). We used
2 statistics, the mean and variance of M draws from the simulator:

s₁(x) = (1/M) Σ_m x_m  ⟹  f₁(θ, u) = θ R(u),    ∂f₁(θ, u)/∂θ = R(u)    (19)

s₂(x) = (1/M) Σ_m (x_m − s₁(x))²  ⟹  f₂(θ, u) = θ² V(u),    ∂f₂(θ, u)/∂θ = 2θ V(u)    (20)

where V(u) = Σ_m r²_{u_m}/M − R(u)² and R(u) = Σ_m r_{u_m}/M; the exact Jacobian is therefore
[R(u), 2θ V(u)]ᵀ. In our experiments M = 10, s(y) = [2.7, 12.8].
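For completeness, the statistics and exact Jacobian of this example are easy to code (an illustrative sketch):

```python
import numpy as np
from scipy.special import erfinv

def f_and_jacobian(theta, u):
    """Statistics f(theta, u) = [theta R(u), theta^2 V(u)] and the exact
    2x1 Jacobian [R(u), 2 theta V(u)]^T for the linked-normal example."""
    r = 1.0 + np.sqrt(2.0) * erfinv(2.0 * u - 1.0)   # r_{u_m}
    R = r.mean()
    V = (r ** 2).mean() - R ** 2
    f = np.array([theta * R, theta ** 2 * V])
    J = np.array([[R], [2.0 * theta * V]])           # D_y = 2, D_theta = 1
    return f, J
```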
4.5 Lotka-Volterra
The simplest Lotka-Volterra model explains predator-prey populations over time, controlled by a set
of stochastic differential equations:

dx₁/dt = θ₁ x₁ − θ₂ x₁ x₂ + r₁,    dx₂/dt = −θ₂ x₂ − θ₃ x₁ x₂ + r₂    (21)

where x₁ and x₂ are the prey and predator population sizes, respectively. Gaussian noise r ∼
N(0, 10²) is added at each full time-step. Lognormal priors are placed over θ. The simulator
runs for T = 50 time steps, with constant initial populations of 100 for both prey and predator.
There are therefore P = 2T outputs (prey and predator populations concatenated), which we use
as the statistics. To run a deterministic simulation, we draw u_i ∼ π(u) where the dimension of
u is P. Half of the random variables are used for r₁ and the other half for r₂. In other words,
r_{u_{st}} = 10 √2 erf⁻¹(2u_{st} − 1), where s ∈ {1, 2} for the appropriate population. The Jacobian is a
100 × 3 matrix that can be computed using one-sided finite-differences using 3 forward simulations.
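A deterministic-given-u simulator for Eq. 21 can be sketched as below; the unit Euler step and the ordering of the u-vector are our assumptions about unstated details.

```python
import numpy as np
from scipy.special import erfinv

def lotka_volterra(theta, u, T=50):
    """Deterministic-given-u simulator for Eq. 21 (unit Euler steps)."""
    t1, t2, t3 = theta
    r = 10.0 * np.sqrt(2.0) * erfinv(2.0 * u - 1.0)   # N(0, 10^2) noise
    r1, r2 = r[:T], r[T:]                              # split the 2T draws
    x1, x2 = 100.0, 100.0                              # initial populations
    out = np.empty(2 * T)
    for t in range(T):
        dx1 = t1 * x1 - t2 * x1 * x2 + r1[t]
        dx2 = -t2 * x2 - t3 * x1 * x2 + r2[t]
        x1, x2 = x1 + dx1, x2 + dx2
        out[t], out[T + t] = x1, x2                    # concatenated stats
    return out
```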
4.6 Bayesian Inference of the M/G/1 Queue Model
Bayesian inference of the M/G/1 queuing model is challenging, requiring ABC algorithms [4, 8] or
sophisticated MCMC-based procedures [18]. Though simple to simulate, the output can be quite
Figure 4: Top: Lotka-Volterra. Bottom: M/G/1 Queue. The left plots show the posterior mean ±1 std errors
of the posterior predictive distribution (sorted for M/G/1). Simulations per sample and the posterior of θ₁ are
shown in the other plots. For L-V, at ε = 3, the SS for OMC were 15 v 116/159 for AW/SMC, and increased
at ε = 2 to 23 v 279/371. However, the ESS/n was lower for OMC: at ε = 3 it was 0.25 and down to 0.1 at ε = 2,
whereas ESS/n stayed around 0.9 for AW/SMC. This is again due to the optimal discrepancy for some u being
greater than ε; however, the samples that remain are independent samples. For M/G/1, the results are similar,
but the ESS/n is lower than the fraction of discrepancies satisfying ε = 1 (9% v 12%), indicating that the volume
of the Jacobians is having a large effect on the variance of the weights. Future work will explore this further.
noisy. In the M/G/1 queuing model, a single server processes arriving customers, which are then
served within a random time. Customer m arrives at time w_m ∼ Exp(θ₃) after customer m − 1, and
is served in s_m ∼ U(θ₁, θ₂) service time. Both w_m and s_m are unobserved; only the inter-departure
times x_m are observed. Following [18], we write the simulation algorithm in terms of arrival times
v_m. To simplify the updates, we keep track of the departure times d_m. Initially, d₀ = 0 and v₀ = 0,
followed by updates for m ≥ 1:

v_m = v_{m−1} + w_m,    x_m = s_m + max(0, v_m − d_{m−1}),    d_m = d_{m−1} + x_m    (22)

After trying several optimization procedures, we found the most reliable optimizer was simply a
random walk. The random sources in the problem are used for w_m (there are M) and for s_m (there are
M), therefore u has dimension 2M. Typical statistics for this problem are quantiles of x and/or the
minimum and maximum values; in other words, the vector x is sorted, then evenly spaced values are taken for
the statistics functions f (we used 3 quantiles). The Jacobian is an M × 3 matrix. In our experiments
θ* = [1.0, 5.0, 0.2].
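Equation 22 translates directly into code; the sketch below is our reading (illustrative names), with the quantile statistics shown as a comment.

```python
import numpy as np

def mg1_simulate(theta, u_w, u_s):
    """Inter-departure times of the M/G/1 queue, Eq. 22, driven by
    uniform draws u_w (arrivals) and u_s (service times)."""
    t1, t2, t3 = theta
    w = -np.log(1.0 - u_w) / t3          # w_m ~ Exp(theta_3), inverse CDF
    s = t1 + (t2 - t1) * u_s             # s_m ~ U(theta_1, theta_2)
    v = d = 0.0
    x = np.empty(len(w))
    for m in range(len(w)):
        v += w[m]                        # arrival time of customer m
        x[m] = s[m] + max(0.0, v - d)    # inter-departure time
        d += x[m]                        # departure time
    return x

# Statistics: sort x, then take evenly spaced quantiles, e.g.
# stats = np.sort(x)[np.linspace(0, len(x) - 1, 3).astype(int)]
```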
5
Conclusion
We have presented Optimization Monte Carlo, a likelihood-free algorithm that, by controlling the
simulator randomness, transforms traditional ABC inference into a set of optimization procedures.
By using OMC, scientists can focus attention on finding a useful optimization procedure for their
simulator, and then use OMC in parallel to generate samples independently. We have shown that
OMC can also be very efficient, though this will depend on the quality of the optimization procedure applied to each problem. In our experiments, the simulators were cheap to run, allowing
Jacobian computations using finite differences. We note that for high-dimensional input spaces and
expensive simulators, this may be infeasible, solutions include randomized gradient estimates [22]
or automatic differentiation (AD) libraries (e.g. [14]). Future work will include incorporating AD,
improving efficiency using Sobol numbers (when the size u is known), incorporating Bayesian optimization, adding partial communication between processes, and inference for expensive simulators
using gradient-based optimization.
Acknowledgments
We thank the anonymous reviewers for the many useful comments that improved this manuscript.
MW acknowledges support from Facebook, Google, and Yahoo.
References
[1] Ahn, S., Korattikara, A., Liu, N., Rajan, S., and Welling, M. (2015). Large scale distributed Bayesian
matrix factorization using stochastic gradient MCMC. In KDD.
[2] Ahn, S., Shahbaba, B., and Welling, M. (2014). Distributed stochastic gradient MCMC. In Proceedings of
the 31st International Conference on Machine Learning (ICML-14), pages 1044–1052.
[3] Beaumont, M. A., Cornuet, J.-M., Marin, J.-M., and Robert, C. P. (2009). Adaptive approximate Bayesian
computation. Biometrika, 96(4):983–990.
[4] Blum, M. G. and François, O. (2010). Non-linear regression models for approximate Bayesian computation. Statistics and Computing, 20(1):63–73.
[5] Bonassi, F. V. and West, M. (2015). Sequential Monte Carlo with adaptive weights for approximate
Bayesian computation. Bayesian Analysis, 10(1).
[6] Del Moral, P., Doucet, A., and Jasra, A. (2006). Sequential Monte Carlo samplers. Journal of the Royal
Statistical Society: Series B (Statistical Methodology), 68(3):411–436.
[7] Drovandi, C. C., Pettitt, A. N., and Faddy, M. J. (2011). Approximate Bayesian computation using indirect
inference. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3):317–337.
[8] Fearnhead, P. and Prangle, D. (2012). Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society: Series
B (Statistical Methodology), 74(3):419–474.
[9] Forneron, J.-J. and Ng, S. (2015a). The ABC of simulation estimation with auxiliary statistics. arXiv
preprint arXiv:1501.01265v2.
[10] Forneron, J.-J. and Ng, S. (2015b). A likelihood-free reverse sampler of the posterior distribution. arXiv
preprint arXiv:1506.04017v1.
[11] Gourieroux, C., Monfort, A., and Renault, E. (1993). Indirect inference. Journal of Applied Econometrics,
8(S1):S85–S118.
[12] Gutmann, M. U. and Corander, J. (2015). Bayesian optimization for likelihood-free inference of
simulator-based statistical models. Journal of Machine Learning Research, preprint arXiv:1501.03291.
In press.
[13] Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box
functions. Journal of Global Optimization, 13(4):455–492.
[14] Maclaurin, D. and Duvenaud, D. (2015). Autograd. github.com/HIPS/autograd.
[15] Meeds, E., Leenders, R., and Welling, M. (2015). Hamiltonian ABC. Uncertainty in AI, 31.
[16] Neal, P. (2012). Efficient likelihood-free Bayesian computation for household epidemics. Statistical
Computing, 22:1239–1256.
[17] Paige, B., Wood, F., Doucet, A., and Teh, Y. W. (2014). Asynchronous anytime Sequential Monte Carlo.
In Advances in Neural Information Processing Systems, pages 3410–3418.
[18] Shestopaloff, A. Y. and Neal, R. M. (2013). On Bayesian inference for the M/G/1 queue with efficient
MCMC sampling. Technical Report, Dept. of Statistics, University of Toronto.
[19] Sisson, S., Fan, Y., and Tanaka, M. M. (2007). Sequential Monte Carlo without likelihoods. Proceedings
of the National Academy of Sciences, 104(6).
[20] Sisson, S., Fan, Y., and Tanaka, M. M. (2009). Sequential Monte Carlo without likelihoods: Errata.
Proceedings of the National Academy of Sciences, 106(16).
[21] Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical Bayesian optimization of machine learning
algorithms. Advances in Neural Information Processing Systems 25.
[22] Spall, J. C. (1992). Multivariate stochastic approximation using a simultaneous perturbation gradient
approximation. Automatic Control, IEEE Transactions on, 37(3):332–341.
Inverse Reinforcement Learning with Locally
Consistent Reward Functions
Quoc Phong Nguyen†, Kian Hsiang Low†, and Patrick Jaillet§
Dept. of Computer Science, National University of Singapore, Republic of Singapore†
Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA§
{qphong,lowkh}@comp.nus.edu.sg†, [email protected]§
Abstract
Existing inverse reinforcement learning (IRL) algorithms have assumed each expert's demonstrated trajectory to be produced by only a single reward function.
This paper presents a novel generalization of the IRL problem that allows each
trajectory to be generated by multiple locally consistent reward functions, hence
catering to more realistic and complex experts' behaviors. Solving our generalized IRL problem thus involves not only learning these reward functions but
also the stochastic transitions between them at any state (including unvisited
states). By representing our IRL problem with a probabilistic graphical model,
an expectation-maximization (EM) algorithm can be devised to iteratively learn
the different reward functions and the stochastic transitions between them in order
to jointly improve the likelihood of the expert's demonstrated trajectories. As a
result, the most likely partition of a trajectory into segments that are generated
from different locally consistent reward functions selected by EM can be derived.
Empirical evaluation on synthetic and real-world datasets shows that our IRL algorithm outperforms the state-of-the-art EM clustering with maximum likelihood
IRL, which is, interestingly, a reduced variant of our approach.
1 Introduction
The reinforcement learning problem in Markov decision processes (MDPs) involves an agent using
its observed rewards to learn an optimal policy that maximizes its expected total reward for a given
task. However, such observed rewards or the reward function defining them are often not available
nor known in many real-world tasks. The agent can therefore learn its reward function from an
expert associated with the given task by observing the expert's behavior or demonstration, and this
approach constitutes the inverse reinforcement learning (IRL) problem.
Unfortunately, the IRL problem is ill-posed because infinitely many reward functions are consistent
with the expert's observed behavior. To resolve this issue, existing IRL algorithms have proposed
alternative choices of the agent's reward function that minimize different dissimilarity measures defined using various forms of abstractions of the agent's generated optimal behavior vs. the expert's
observed behavior, as briefly discussed below (see [17] for a detailed review): (a) The projection
algorithm [1] selects a reward function that minimizes the squared Euclidean distance between the
feature expectations obtained by following the agent's generated optimal policy and the empirical
feature expectations observed from the expert's demonstrated state-action trajectories; (b) the multiplicative weights algorithm for apprenticeship learning [24] adopts a robust minimax approach to deriving the agent's behavior, which is guaranteed to perform no worse than the expert and is equivalent
to choosing a reward function that minimizes the difference between the expected average reward
under the agent's generated optimal policy and the expert's empirical average reward approximated
using the agent's reward weights; (c) the linear programming apprenticeship learning algorithm [23]
picks its reward function by minimizing the same dissimilarity measure but incurs much less time
empirically; (d) the policy matching algorithm [16] aims to match the agent's generated optimal
behavior to the expert's observed behavior by choosing a reward function that minimizes the sum of
squared Euclidean distances between the agent's generated optimal policy and the expert's estimated
policy (i.e., from its demonstrated trajectories) over every possible state weighted by its empirical
state visitation frequency; (e) the maximum entropy IRL [27] and maximum likelihood IRL (MLIRL)
[2] algorithms select reward functions that minimize an empirical approximation of the Kullback-Leibler divergence between the distributions of the agent's and expert's generated state-action trajectories, which is equivalent to maximizing the average log-likelihood of the expert's demonstrated
trajectories. The log-likelihood formulations of the maximum entropy IRL and MLIRL algorithms
differ in the use of smoothing at the trajectory and action levels, respectively. As a result, the former's log-likelihood or dissimilarity measure does not utilize the agent's generated optimal policy,
which is consequently questioned by [17] as to whether it is considered an IRL algorithm. Bayesian
IRL [21] extends IRL to the Bayesian setting by maintaining a distribution over all possible reward
functions and updating it using Bayes rule given the expert's demonstrated trajectories. The work
of [5] extends the projection algorithm [1] to handle partially observable environments given the
expert's policy (i.e., represented as a finite state controller) or observation-action trajectories.
All the IRL algorithms described above have assumed that the expert's demonstrated trajectories
are only generated by a single reward function. To relax this restrictive assumption, the recent
works of [2, 6] have, respectively, generalized MLIRL (combining it with expectation-maximization
(EM) clustering) and Bayesian IRL (integrating it with a Dirichlet process mixture model) to handle
trajectories generated by multiple reward functions (e.g., due to many intentions) in observable
environments. But, each trajectory is assumed to be produced by a single reward function.
In this paper, we propose a new generalization of the IRL problem in observable environments,
which is inspired by an open question posed in the seminal works of IRL [19, 22]: If behavior
is strongly inconsistent with optimality, can we identify "locally consistent" reward functions for
specific regions in state space? Such a question implies that no single reward function is globally
consistent with the expert's behavior, hence invalidating the use of all the above-mentioned IRL
algorithms. More importantly, multiple reward functions may be locally consistent with the expert's
behavior in different segments along its state-action trajectory and the expert has to switch/transition
between these locally consistent reward functions during its demonstration. This can be observed
in the following real-world example [26] where every possible intention of the expert is uniquely
represented by a different reward function: A driver intends to take the highway to a food center for
lunch. An electronic toll coming into effect on the highway may change his intention to switch to
another route. Learning of the driver's intentions to use different routes and his transitions between
them allows the transport authority to analyze, understand, and predict the traffic route patterns and
behavior for regulating the toll collection. This example, among others (e.g., commuters' intentions
to use different transport modes, tourists' intentions to visit different attractions, Section 4), motivates
the practical need to formalize and solve our proposed generalized IRL problem.
This paper presents a novel generalization of the IRL problem that, in particular, allows each expert's state-action trajectory to be generated by multiple locally consistent reward functions, hence
catering to more realistic and complex experts' behaviors than that afforded by existing variants of
the IRL problem (which all assume that each trajectory is produced by a single reward function)
discussed earlier. At first glance, one may straightaway perceive our generalization as an IRL problem in a partially observable environment by representing the choice of locally consistent reward
function in a segment as a latent state component. However, the observation model cannot be easily
specified nor learned from the expert's state-action trajectories, which invalidates the use of IRL
for POMDP [5]. Instead, we develop a probabilistic graphical model for representing our generalized IRL problem (Section 2), from which an EM algorithm can be devised to iteratively select
the locally consistent reward functions as well as learn the stochastic transitions between them in
order to jointly improve the likelihood of the expert's demonstrated trajectories (Section 3). As a
result, the most likely partition of an expert's demonstrated trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived (Section 3),
thus enabling practitioners to identify states in which the expert transitions between locally consistent reward functions and investigate the resulting causes. To extend such a partitioning to work
for trajectories traversing through any (possibly unvisited) region of the state space, we propose
using a generalized linear model to represent and predict the stochastic transitions between reward
functions at any state (i.e., including states not visited in the expert's demonstrated trajectories) by
exploiting features that influence these transitions (Section 2). Finally, our proposed IRL algorithm
is empirically evaluated using both synthetic and real-world datasets (Section 4).
2 Problem Formulation
A Markov decision process (MDP) for an agent is defined as a tuple (S, A, t, r_θ, γ) consisting of a
finite set S of its possible states such that each state s ∈ S is associated with a column vector x_s
of realized feature measurements, a finite set A of its possible actions, a state transition function
t : S × A × S → [0, 1] denoting the probability t(s, a, s′) ≜ P(s′ | s, a) of moving to state s′
by performing action a in state s, a reward function r_θ : S → ℝ mapping each state s ∈ S
to its reward r_θ(s) ≜ θ^⊤ x_s where θ is a column vector of reward weights, and a constant factor
γ ∈ (0, 1) discounting its future rewards. When θ is known, the agent can compute its policy
π_θ : S × A → [0, 1] specifying the probability π_θ(s, a) ≜ P(a | s, r_θ) of performing action a in state
s. However, θ is not known in IRL and is to be learned from an expert (Section 3).
Let R denote a finite set of locally consistent reward functions of the agent and r_θ̃ be a reward function chosen arbitrarily from R prior to learning. Define a transition function σ_ω : R × S × R → [0, 1]
for switching between these reward functions as the probability σ_ω(r_θ, s, r_θ′) ≜ P(r_θ′ | s, r_θ, ω)
of switching from reward function r_θ to reward function r_θ′ in state s, where the set ω ≜
{ω_{r_θ r_θ′}}_{r_θ ∈ R, r_θ′ ∈ R\{r_θ̃}} contains column vectors of transition weights ω_{r_θ r_θ′} for all r_θ ∈ R and
r_θ′ ∈ R \ {r_θ̃} if the features influencing the stochastic transitions between reward functions can
be additionally observed by the agent during the expert's demonstration, and ω ≜ ∅ otherwise.
In our generalized IRL problem, σ_ω is not known and is to be learned from the expert (Section 3).
Specifically, in the former case, we propose using a generalized linear model to represent σ_ω:

σ_ω(r_θ, s, r_θ′) ≜ exp(ω_{r_θ r_θ′}^⊤ φ_s) / (1 + Σ_{r_θ̄ ∈ R\{r_θ̃}} exp(ω_{r_θ r_θ̄}^⊤ φ_s))   if r_θ′ ≠ r_θ̃,
σ_ω(r_θ, s, r_θ′) ≜ 1 / (1 + Σ_{r_θ̄ ∈ R\{r_θ̃}} exp(ω_{r_θ r_θ̄}^⊤ φ_s))   otherwise;   (1)

where φ_s is a column vector of random feature measurements influencing the stochastic transitions
between reward functions (i.e., σ_ω) in state s.
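As a concrete illustration of (1), the following minimal Python sketch (our own naming, not from the paper) evaluates σ_ω(r_θ, s, ·) for a fixed source reward function, treating the distinguished r_θ̃ as a reference class whose logit is fixed to zero:

import numpy as np

def sigma_omega(omega_r, phi_s, ref_idx):
    """Switching distribution over |R| reward functions from one source r_theta.

    omega_r : (|R|, L) array of transition weight vectors omega_{r_theta r_theta'},
              one row per target; the row for the reference r_theta~ is ignored.
    phi_s   : (L,) feature vector observed in state s.
    ref_idx : index of the reference reward function r_theta~ (logit fixed to 0).
    (Hypothetical array layout; the paper does not prescribe one.)
    """
    logits = omega_r @ phi_s            # one logit per target reward function
    logits[ref_idx] = 0.0               # reference class contributes exp(0) = 1
    expd = np.exp(logits - logits.max())  # max-shift for numerical stability
    return expd / expd.sum()            # probabilities sum to 1 over targets

Calling sigma_omega once per source reward function yields one row of the |R| × |R| switching matrix at state s; the max-shift changes nothing but numerical stability.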
Remark 1. Different from x_s whose feature measurements are typically assumed in IRL algorithms
to be realized/known to the agent for all s ∈ S and remain static over time, the feature measurements
of φ_s are, in practice, often not known to the agent a priori and can only be observed when the expert (agent) visits the corresponding state s ∈ S during its demonstration (execution), and may vary
over time according to some unknown distribution, as motivated by the real-world examples given
in Section 1. Without prior observation of the feature measurements of φ_s for all s ∈ S (or knowledge of their distributions) necessary for computing σ_ω (1), the agent cannot consider exploiting σ_ω
for switching between reward functions within MDP or POMDP planning, even after learning its
weights ω; this eliminates the possibility of reducing our generalized IRL problem to an equivalent
conventional IRL problem (Section 1) with only a single reward function (i.e., comprising a mixture
of locally consistent reward functions). Furthermore, the observation model cannot be easily specified nor learned from the expert's trajectories of states, actions, and φ_s, which invalidates the use of
IRL for POMDP [5]. Instead of exploiting σ_ω within planning, during the agent's execution, when
it visits some state s and observes the feature measurements of φ_s, it can then use and compute σ_ω
for state s to switch between reward functions, each of which has generated a separate MDP policy
prior to execution, as illustrated in a simple example in Fig. 1 below.
[Figure 1: Transition function σ_ω of an agent in state s for switching between two reward functions r_θ and r_θ′ (with self-transition probabilities σ_ω(r_θ, s, r_θ) and σ_ω(r_θ′, s, r_θ′) and switching probabilities σ_ω(r_θ, s, r_θ′) and σ_ω(r_θ′, s, r_θ)), with their respective policies π_θ and π_θ′ generated prior to execution.]
Remark 2. Using a generalized linear model to represent σ_ω (1) allows learning of the stochastic
transitions between reward functions (specifically, by learning ω (Section 3)) to be generalized
across different states. After learning, (1) can then be exploited for predicting the stochastic
transitions between reward functions at any state (i.e., including states not visited in the expert's
demonstrated state-action trajectories). Consequently, the agent can choose to traverse a trajectory
through any region (i.e., possibly not visited by the expert) of the state space during its execution
and the most likely partition of its trajectory into segments that are generated from different locally
consistent reward functions selected by EM can still be derived (Section 3). In contrast, if the
feature measurements of φ_s cannot be observed by the agent during the expert's demonstration
(i.e., ω = ∅, as defined above), then such a generalization is not possible; only the transition
probabilities of switching between reward functions at states visited in the expert's demonstrated
trajectories can be estimated (Section 3). In practice, since the number |S| of visited states is
expected to be much larger than the length L of any feature vector φ_s, the number O(|S||R|^2) of
transition probabilities to be estimated is bigger than |ω| = O(L|R|^2) in (1). So, observing φ_s
offers a further advantage of reducing the number of parameters to be learned.
length (i.e., number of time steps) of its n-th trajectory for
n = 1, . . . , N . Let r?tn 2 R, ant 2 A, and snt 2 S denote its reward function, action, and state at time step t
in its n-th trajectory, respectively. Let R?tn , Ant , and Stn
be random variables corresponding to their respective realizations r?tn , ant , and snt where R?tn is a latent variable,
and Ant and Stn are observable variables. Define r?n ,
n
n
n
(r?tn )Tt=0
, an , (ant )Tt=1
, and sn , (snt )Tt=1
as sequences
of all its reward functions, actions, and states in its n-th
trajectory, respectively. Finally, define r?1:N , (r?n )N
n=1 ,
1:N
n N
a1:N , (an )N
,
and
s
,
(s
)
as
tuples
of
all
n=1
n=1
its reward function sequences, action sequences, and state
sequences in its N trajectories, respectively.
R?0n
R?1n
R?2n
???
R?Tnn
An1
An2
???
AnTn
S1n
S2n
???
STnn
Figure 2: Probabilistic graphical model
of the expert?s n-th demonstrated trajectory encoding its stochastic transitions between reward functions with
solid edges (i.e., ?! (r?tn 1 , snt , r?tn ) =
P (r?tn |snt , r?tn 1 , !) for t = 1, . . . , Tn ),
state transitions with dashed edges
(i.e., t(snt , ant , snt+1 ) = P (snt+1 |snt , ant )
for t = 1, . . . , Tn
1), and policy
with dotted edges (i.e., ??tn (snt , ant ) =
It can be observed from Fig. 2 that our probabilistic graph- P (an |sn , r n ) for t = 1, . . . , T ).
?t
n
t t
ical model of the expert?s n-th demonstrated trajectory encodes its stochastic transitions between reward functions, state transitions, and policy. Through our
model, the Viterbi algorithm [20] can be applied to derive the most likely partition of the expert?s
trajectory into segments that are generated from different locally consistent reward functions selected by EM, as shown in Section 3. Given the state transition function t(?, ?, ?) and the number
|R| of reward functions, our model allows tractable learning of the unknown parameters using EM
(Section 3), which include the reward weights vector ? for all reward functions r? 2 R, transition
function ?! for switching between reward functions, initial state probabilities ?(s) , P (S1n = s)
for all s 2 S, and initial reward function probabilities (r? ) , P (R?0n = r? ) for all r? 2 R.
3
EM Algorithm for Parameter Learning
A straightforward approach to learning the unknown parameters ? , (?, , {?|r? 2 R}, ?! ) is to
select the value of ? that directly maximizes the log-likelihood of the expert?s demonstrated trajectories. Computationally, such an approach is prohibitively expensive due to a large joint parameter
space to be searched for the optimal value of ?. To ease this computational burden, our key idea is
to devise an EM algorithm that iteratively refines the estimate for ? to improve the expected loglikelihood instead, which is guaranteed to improve the original log-likelihood by at least as much:
P
Expectation (E) step. Q(?, ?i ) , r 1:N P (r?1:N |s1:N , a1:N , ?i ) log P (r?1:N , s1:N , a1:N |?).
?
Maximization (M) step. ?i+1 = argmax? Q(?, ?i )
where ?i denotes an estimate for ? at iteration i. The Q function of EM can be reduced to the
following sum of five terms, as shown in Appendix A:
PN
PN P
(2)
Q(?, ?i ) = n=1 log ?(sn1 ) + n=1 r? 2R P (R?0n = r? |sn , an , ?i ) log (r? )
PN PTn P
(3)
+ n=1 t=1 r? 2R P (R?tn = r? |sn , an , ?i ) log ?? (snt , ant )
PN PTn P
n
n
n
i
n
+ n=1 t=1 r? ,r?0 2R P (R?tn 1 = r? , st , R?tn = r?0 |s , a , ? ) ? log ?! (r? , st , r?0 )
(4)
PN PTn 1
(5)
+ n=1 t=1 log t(snt , ant , snt+1 ) .
Interestingly, each of the first four terms in (2), (3), and (4) contains a unique unknown parameter
type (respectively, ?, , {?|r? 2 R}, and ?! ) and can therefore be maximized separately in the
M step to be discussed below. As a result, the parameter space to be searched can be greatly reduced. Note that the third term (3) generalizes the log-likelihood in MLIRL [2] (i.e., assuming all
trajectories to be produced by a single reward function) to that allowing each expert?s trajectory to
be generated by multiple locally consistent reward functions. The last term (5), which contains the
known state transition function t, is independent of unknown parameters ?.1
1
If the state transition function is unknown, then it can be learned by optimizing the last term (5).
4
Learning initial state probabilities. To maximize the first P
term in the Q function (2) of EM, we
use the method of Lagrange multipliers with the constraint s2S ?(s) = 1 to obtain the estimate
PN
?b(s) = (1/N ) n=1 I1n for all s 2 S where I1n is an indicator variable of value 1 if sn1 = s, and
0 otherwise. Since ?b can be computed directly from the expert?s demonstrated trajectories in O(N )
time, it does not have to be refined.
Learning initial reward function probabilities. To maximize the second
P term in Q function (2) of
EM, we utilize the method of Lagrange multipliers with the constraint r? 2R (r? ) = 1 to derive
PN
i+1
(6)
(r?i ) = (1/N ) n=1 P (R?0n = r? |sn , an , ?i )
for all r?i 2 R where i+1 denotes an estimate for at iteration i+1, ?i denotes an estimate for ? at
PN
iteration i, and P (R?tn = r? |sn , an , ?i ) (in this case, t = 0) can be computed in O( n=1 |R|2 Tn )
time using a procedure inspired by Baum-Welch algorithm [3], as shown in Appendix B.
Learning reward functions. The third term in the Q function (3) of EM is maximized using
gradient ascent and its gradient g1 (?) with respect to ? is derived to be
Tn
N X
X
P (R?tn = r? |sn , an , ?i ) d?? (snt , ant )
g1 (?) ,
(7)
?? (snt , ant )
d?
n=1 t=1
for all ? 2 {?0 |r?0 2 R}. For ?? (snt , ant ) to be differentiable in ?, we define the Q? function
of MDP using an operator that blends the Q? values via Boltzmann exploration [2]: Q? (s, a) ,
P
P
0
0 0
0
?> s +
? (s , a ) where ?a Q? (s, a) ,
s0 2S t(s, a, s ) ?a QP
a2A Q? (s, a) ? ?? (s, a) such
that ?? (s, a) , exp( Q? (s, a))/ a0 2A exp( Q? (s, a0 )) is defined as a Boltzmann exploration
policy, and > 0 is a temperature parameter. Then, we update ?i+1
?i + g1 (?i ) where is the
learning step size. We use backtracking line search method to improve the performance of gradient
ascent. Similar to MLIRL, the time incurred in each iteration of gradient ascent depends mostly on
that of value iteration, which increases with the size of the MDP?s state and action space.
Learning transition function for switching between reward functions. To maximize the fourth
term in the Q function (4) of EM, if the feature measurements of 's cannot be observed by the agent
during the expert?s demonstration
(i.e., ! = ;), then we utilize the method of Lagrange multipliers
P
with the constraints r?0 2R ?! (r? , s, r?0 ) = 1 for all r? 2 R and s 2 S to obtain
PN PTn
P
PN P Tn
?!i+1 (r?i , s, r?0i ) = ( n=1 t=1
(8)
n,t,r?i ,s,r?0i )/(
r ?i 2R
n=1
t=1 n,t,r?i ,s,r??i )
?
for r?i , r?0i 2 R and s 2 S where S is the set of states visited by the expert, ?!i+1 is an estimate
for ?! at iteration i + 1, and n,t,r?i ,s,r??i , P (R?tn 1 = r? , Stn = s, R?tn = r??|sn , an , ?i ) can be
computed efficiently by exploiting the intermediate results from evaluating P (R?tn = r? |sn , an , ?i )
described previously, as detailed in Appendix B.
On the other hand, if the feature measurements of 's can be observed by the agent during the expert?s
demonstration, then recall that we use a generalized linear model to represent ?! (1) (Section 2) and
! is the unknown parameter to be estimated. Similar to learning the reward weights vector ? for
reward function r? , we maximize the fourth term (4) in the Q function of EM by using gradient
ascent and its gradient g2 (!r? r?0 ) with respect to !r? r?0 is derived to be
Tn X
N X
n
X
?)
n,t,r? ,sn
? d?! (r? , st , r?
t ,r?
(9)
g2 (!r? r?0 ) ,
n
?
(r
,
s
,
r
)
d!
?
! ? t
r? r? 0
?
n=1 t=1
r??2R
for all !r? r?0 2 !. Let !ri ? r?0 denote an estimate for !r? r?0 at iteration i. Then, it is updated
using !ri+1
!ri ? r?0 + g2 (!ri ? r?0 ) where is the learning step size. Backtracking line search
? r? 0
method is also used to improve the performance of gradient ascent here. In both cases, the time
incurred in each iteration i is proportional to the number of n,t,r?i ,s,r??i to be computed, which is
PN
O( n=1 |R|2 |S|Tn ) time.
Viterbi algorithm for partitioning a trajectory into segments with different locally consistent
b b 2 R}, ?!b ) for the unknown pab = (b
reward functions. Given the final estimate ?
? , b, {?|r
?
rameters ? produced by EM, the most likely partition of the expert?s n-th demonstrated trajectory
n
into segments generated by different locally consistent reward functions is r??n = (r??tn )Tt=0
,
n
n
n
n
b = argmaxr n P (r?n , s , a |?),
b which can be derived using the
argmaxr?n P (r?n |s , a , ?)
?
Viterbi algorithm [20]. Specifically, define vr?b,T for T = 1, . . . , Tn as the probability of the most
5
likely reward function sequence (r?tn )Tt=01 from time steps 0 to T 1 ending with reward function
r?b at time step T that produce state and action sequences (snt )Tt=1 and (ant )Tt=1 :
b
vr?b,T , max(r?n )T 1 P ((r?tn )Tt=01 , R?Tn = r? , (snt )Tt=1 , (ant )Tt=1 |?)
t t=0
= t(snT 1 , anT 1 , snT ) ??b(snT , anT ) maxr?b0 vr?b0 ,T 1 ?!b (r?b0 , snT , r?b) ,
b = ?b(sn ) ? b(sn , an ) maxr b(r b0 ) ?!b (r b0 , sn , r b) .
vr ,1 , maxr n P (r?n , R?n = r? , sn , an |?)
b
?
?0
0
1
1
1
1
?
1
1
b0
?
?
?
1
?
Then, r??n = argmaxr?b0 b(r?b0 ) ?!b (r?b0 , sn1 , r??n ), r??n = argmaxr?b0 vr?b0 ,T ?!b (r?b0 , snT +1 , r??n ) for
0
1
T
T +1
T = 1, . . . , Tn 1, and r??n = argmaxr?b vr?b,Tn . The above Viterbi algorithm can be applied in the
Tn
same way to partition an agent?s trajectory traversing through any region (i.e., possibly not visited
by the expert) of the state space during its execution in O(|R|2 T ) time.
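A minimal numpy rendering of this decoding under the array layout used earlier (ρ̂(s_1^n) and the t(·, ·, ·) factors are omitted since they do not affect the argmax over reward-function sequences); this is our sketch, not the paper's code:

import numpy as np

def viterbi_partition(lam, trans, emit):
    """Most likely reward-function sequence (r_0, ..., r_T) for one trajectory.

    lam   : (R,) initial reward-function probabilities.
    trans : (T, R, R) with trans[t-1][r, r'] = sigma_omega(r, s_t, r').
    emit  : (T, R) with emit[t-1][r'] = pi_{r'}(s_t, a_t).
    """
    T, R = emit.shape
    logv = np.zeros((T, R))
    back = np.zeros((T, R), dtype=int)          # back[t] points one step earlier
    with np.errstate(divide="ignore"):
        llam, ltr, lem = np.log(lam), np.log(trans), np.log(emit)
    scores = llam[:, None] + ltr[0]             # joint over (r_0, r_1)
    back[0] = scores.argmax(axis=0)
    logv[0] = scores.max(axis=0) + lem[0]       # log v_{., 1}
    for t in range(1, T):
        scores = logv[t - 1][:, None] + ltr[t]
        back[t] = scores.argmax(axis=0)
        logv[t] = scores.max(axis=0) + lem[t]
    path = [int(logv[T - 1].argmax())]          # r_T
    for t in range(T - 1, -1, -1):
        path.append(int(back[t][path[-1]]))     # prepend r_t
    return path[::-1]                           # (r_0, r_1, ..., r_T), length T + 1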
4 Experiments and Discussion
This section evaluates the empirical performance of our IRL algorithm using 3 datasets featuring
experts' demonstrated trajectories in two simulated grid worlds and real-world taxi trajectories. The
average log-likelihood of the expert's demonstrated trajectories is used as the performance metric
because it inherently accounts for the fidelity of our IRL algorithm in learning the locally consistent
reward functions (i.e., R) and the stochastic transitions between them (i.e., σ_ω):

L(Θ) ≜ (1/N_tot) Σ_{n=1}^{N_tot} log P(s^n, a^n | Θ)   (10)

where N_tot is the total number of the expert's demonstrated trajectories available in the dataset.
As proven in [17], maximizing L(Θ) with respect to Θ is equivalent to minimizing an empirical approximation of the Kullback-Leibler divergence between the distributions of the agent's and expert's
generated state-action trajectories. Note that when the final estimate Θ̂ produced by EM (Section 3)
is plugged into (10), the resulting P(s^n, a^n | Θ̂) in (10) can be computed efficiently using a procedure
similar to that in Section 3, as detailed in Appendix C. To avoid local maxima in gradient ascent,
we initialize our EM algorithm with 20 random Θ^0 values and report the best result based on the Q
value of EM (Section 3).
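The reward-function-dependent part of log P(s^n, a^n | Θ) in (10) can be computed with a rescaled forward pass; a short sketch under our layout (the additive log ρ̂(s_1^n) and log t(·, ·, ·) terms are omitted here and are straightforward to add back):

import numpy as np

def traj_loglik(lam, trans, emit):
    """log of the lambda/sigma/pi part of P(s^n, a^n | Theta) for one trajectory,
    via a forward recursion with running rescaling."""
    alpha, ll = lam.copy(), 0.0
    for t in range(emit.shape[0]):
        alpha = (alpha @ trans[t]) * emit[t]
        z = alpha.sum()
        ll += np.log(z)
        alpha /= z
    return ll

# L(Theta) in (10) is then the mean of these per-trajectory log-likelihoods
# (plus the omitted additive terms) over the N_tot trajectories.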
To demonstrate the importance of modeling and learning stochastic transitions between locally
consistent reward functions, the performance of our IRL algorithm is compared with that of its
reduced variant assuming no change/switching of reward function within each trajectory, which is
implemented by initializing σ_ω(r_θ, s, r_θ) = 1 for all r_θ ∈ R and s ∈ S and deactivating the
learning of σ_ω. In fact, it can be shown (Appendix D) that such a reduction, interestingly, is
equivalent to EM clustering with MLIRL [2]. So, our IRL algorithm generalizes EM clustering
with MLIRL, the latter of which has been empirically demonstrated in [2] to outperform many
existing IRL algorithms, as discussed in Section 1.
[Figure 3: Grid worlds A (states (0, 0), (1, 1), and (2, 2) are, respectively, examples of water, land, and obstacle), and B (state (2, 2) is an example of barrier). 'O' and 'D' denote origin and destination.]
Simulated grid world A. The environment (Fig. 3) is modeled as a 5 × 5 grid of states, each of
which is either land, water, water and destination, or obstacle, associated with the respective feature
vectors (i.e., x_s) (0, 1, 0)^⊤, (1, 0, 0)^⊤, (1, 0, 1)^⊤, and (0, 0, 0)^⊤. The expert starts at origin (0, 2)
and any of its actions can achieve the desired state with 0.85 probability. It has two possible reward
functions, one of which prefers land to water and going to the destination (i.e., θ = (0, 20, 30)^⊤), and
the other of which prefers water to land and going to the destination (i.e., θ′ = (20, 0, 30)^⊤). The expert
will only consider switching its reward function at states (2, 0) and (2, 4), from r_θ′ to r_θ with 0.5
probability and from r_θ to r_θ′ with 0.7 probability; its reward function remains unchanged at all
other states. The feature measurements of φ_s cannot be observed by the agent during the expert's
demonstration. So, ω = ∅ and σ_ω is estimated using (8). We set γ to 0.95 and the number |R| of
reward functions of the agent to 2.
Fig. 4a shows results of the average log-likelihood L (10) achieved by our IRL algorithm, EM
clustering with MLIRL, and the expert averaged over 4 random instances with varying number
N of expert's demonstrated trajectories. It can be observed that our IRL algorithm significantly
outperforms EM clustering with MLIRL and achieves an L performance close to that of the expert,
especially when N increases. This can be explained by its modeling of σ_ω and its high fidelity in
learning and predicting σ_ω: While our IRL algorithm allows switching of reward function within
each trajectory, EM clustering with MLIRL does not.
We also observe that the accuracy of estimating the transition probabilities σ_ω(r_θ, s, ·) (σ_ω(r_θ′, s, ·))
using (8) depends on the frequency and distribution of trajectories demonstrated by the expert with
its reward function R_{θ_{t−1}^n} = r_θ (R_{θ_{t−1}^n} = r_θ′) at time step t − 1 and its state s_t^n = s at time
step t, which is expected. Those transition probabilities that are poorly estimated due to few relevant
expert's demonstrated trajectories, however, do not hurt the L performance of our IRL algorithm by
much because such trajectories tend to have very low probability of being demonstrated by the expert.
In any case, this issue can be mitigated by using the generalized linear model (1) to represent σ_ω and
observing the feature measurements of φ_s necessary for learning and computing σ_ω, as shown next.
[Figure 4: Graphs of average log-likelihood L achieved by our IRL algorithm, EM clustering with MLIRL, and the expert vs. number N of expert's demonstrated trajectories in simulated grid worlds (a) A (N_tot = 1500) and (b) B (N_tot = 500).]
Simulated grid world B. The environment (Fig. 3) is also modeled as a 5 × 5 grid of states, each of
which is either the origin, destination, or land, associated with the respective feature vectors (i.e.,
x_s) (0, 1)^⊤, (1, 0)^⊤, and (0, 0)^⊤. The expert starts at origin (4, 0) and any of its actions can achieve
the desired state with 0.85 probability. It has two possible reward functions, one of which prefers
going to the destination (i.e., θ = (30, 0)^⊤), and the other of which prefers returning to the origin (i.e.,
θ′ = (0, 30)^⊤). While moving to the destination, the expert will encounter barriers at some states with
corresponding feature vectors φ_s = (1, 1)^⊤ and no barriers at all other states with φ_s = (0, 1)^⊤; the
second component of φ_s is used as an offset value in the generalized linear model (1). The expert's
behavior of switching between reward functions is governed by a generalized linear model σ_ω (1)
with r_θ̃ = r_θ′ and transition weights ω_{r_θ r_θ} = (−11, 12)^⊤ and ω_{r_θ′ r_θ} = (13, 12)^⊤. As a result,
it will, for example, consider switching its reward function at states with barriers from r_θ to r_θ′
with 0.269 probability. We estimate σ_ω using (9) and set γ to 0.95 and the number |R| of reward
functions of the agent to 2. To assess the fidelity of learning and predicting the stochastic transitions
between reward functions at unvisited states, we intentionally remove all demonstrated trajectories
that visit state (2, 0) with a barrier.
Fig. 4b shows results of the L (10) performance achieved by our IRL algorithm, EM clustering with
MLIRL, and the expert averaged over 4 random instances with varying N. It can again be observed
that our IRL algorithm outperforms EM clustering with MLIRL and achieves an L performance
comparable to that of the expert due to its modeling of σ_ω and its high fidelity in learning and
predicting σ_ω: While our IRL algorithm allows switching of reward function within each trajectory,
EM clustering with MLIRL does not. Besides, the estimated transition function σ_ω̂ using (9) is very
close to that of the expert, even at the unvisited state (2, 0). So, unlike using (8), the learning of σ_ω with
(9) can be generalized well across different states, thus allowing σ_ω to be predicted accurately at any
state. Hence, we will model σ_ω with (1) and learn it using (9) in the next experiment.
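As a sanity check, plugging the barrier features into (1) reproduces the 0.269 switching probability stated above (a small worked example; variable names are ours):

import numpy as np

omega_stay = np.array([-11.0, 12.0])   # omega_{r_theta r_theta}
phi_barrier = np.array([1.0, 1.0])     # phi_s at a state with a barrier
# With r_theta~ = r_theta', the switch target is the reference class of (1):
p_switch = 1.0 / (1.0 + np.exp(omega_stay @ phi_barrier))
print(p_switch)                        # ~0.269, as stated in the text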
Real-world taxi trajectories. The Comfort taxi company in Singapore has provided GPS traces of
59 taxis with the same origin and destination that are map-matched [18] onto a network (i.e., comprising a highway, arterials, slip roads, etc.) of 193 road segments (i.e., states). Each road segment/state
is specified by a 7-dimensional feature vector x_s: Each of the first six components of x_s is an indicator describing whether it belongs to Alexandra Road (AR), Ayer Rajah Expressway (AYE), Depot
Road (DR), Henderson Road (HR), Jalan Bukit Merah (JBM), or Lower Delta Road (LDR), while
the last component of x_s is the normalized shortest path distance from the road segment to the destination. We assume that the 59 map-matched trajectories are demonstrated by taxi drivers with a
common set R of 2 reward functions and the same transition function σ_ω (1) for switching between
reward functions, the latter of which is influenced by the normalized taxi speed constituting the first
component of the 2-dimensional feature vector φ_s; the second component of φ_s is used as an offset of
value 1 in the generalized linear model (1). The number |R| of reward functions is set to 2 because
when we experiment with |R| = 3, two of the learned reward functions are similar. Every driver can
deterministically move its taxi from its current road segment to the desired adjacent road segment.
Fig. 5a shows results of the L (10) performance achieved by our IRL algorithm and EM clustering
with MLIRL averaged over 3 random instances with varying N. Our IRL algorithm outperforms
EM clustering with MLIRL due to its modeling of σ_ω and its high fidelity in learning and
predicting σ_ω.
[Figure 5: Graphs of (a) average log-likelihood L achieved by our IRL algorithm and EM clustering with MLIRL vs. no. N of taxi trajectories (N_tot = 59) and (b) transition probabilities σ_ω̂(r_θ̂, s, r_θ̂′) and σ_ω̂(r_θ̂′, s, r_θ̂′) of switching between reward functions vs. normalized taxi speed.]
To see this, our IRL algorithm is able to learn that a taxi driver is likely to switch between reward
functions representing different intentions within its demonstrated trajectory: Reward function r_θ̂
denotes his intention of driving directly to the destination (Fig. 6a) due to a huge penalty (i.e.,
reward weight −49) on being far from the destination and a large reward (i.e., reward weight 35.7)
for taking the shortest path from origin to destination, which is via JBM, while r_θ̂′ denotes his
intention of detouring to DR or JBM (Fig. 6b) due to large rewards for traveling on them
(respectively, reward weights 30.5 and 23.7). As an example, Fig. 6c shows the most likely partition
of a demonstrated trajectory into segments generated from locally consistent reward functions r_θ̂
and r_θ̂′, which is derived using our Viterbi algorithm (Section 3). It can be observed that the driver
is initially in r_θ̂′ on the slip road exiting AYE, switches from r_θ̂′ to r_θ̂ upon turning into AR to
detour to DR, and remains in r_θ̂ while driving along DR, HR, and JBM to the destination. On the
other hand, the reward functions learned by EM clustering with MLIRL are both associated with his
intention of driving directly to the destination (i.e., similar to r_θ̂); it is not able to learn his intention
of detouring to DR or JBM.
Fig. 5b shows the influence of normalized taxi speed (i.e., the first component of φ_s) on the
estimated transition function σ_ω̂ using (9). It can be observed that when the driver is in r_θ̂ (i.e.,
driving directly to the destination), he is very unlikely to change his intention regardless of taxi
speed. But, when he is in r_θ̂′ (i.e., detouring to DR or JBM), he is likely (unlikely) to remain in this
intention if the taxi speed is low (high). The demonstrated trajectory in Fig. 6c in fact supports this
observation: The driver initially remains in r_θ̂′ on the upslope slip road exiting AYE, which causes
the low taxi speed. Upon turning into AR to detour to DR, he switches from r_θ̂′ to r_θ̂ because he
can drive at relatively high speed on flat terrain.
5 Conclusion
[Figure 6: Reward (a) r_θ̂(s) and (b) r_θ̂′(s) for each road segment s with θ̂ = (7.4, 3.9, 16.3, 20.3, 35.7, 21.5, −49.0)^⊤ and θ̂′ = (5.2, 9.2, 30.5, 15.0, 23.7, 21.5, 9.2)^⊤ such that more red road segments give higher rewards. (c) Most likely partition of a demonstrated trajectory from origin 'O' to destination 'D' into red and green segments generated by r_θ̂ and r_θ̂′, respectively.]
This paper describes an EM-based IRL algorithm that can learn the multiple reward functions being
locally consistent in different segments along a trajectory as well as the stochastic transitions
between them. It generalizes EM clustering with MLIRL and has been empirically demonstrated to
outperform it on both synthetic and real-world datasets.
For our future work, we plan to extend our IRL algorithm to cater to an unknown number of reward
functions [6], nonlinear reward functions [12] modeled by Gaussian processes [4, 8, 13, 14, 15, 25],
other dissimilarity measures described in Section 1, linearly-solvable MDPs [7], active learning with
Gaussian processes [11], and interactions with self-interested agents [9, 10].
Acknowledgments. This work was partially supported by Singapore-MIT Alliance for Research
and Technology Subaward Agreement No. 52 R-252-000-550-592.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[2] M. Babeş-Vroman, V. Marivate, K. Subramanian, and M. Littman. Apprenticeship learning about multiple intentions. In Proc. ICML, pages 897-904, 2011.
[3] J. Bilmes. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Technical Report ICSI-TR-97-02, University of California, Berkeley, 1998.
[4] J. Chen, N. Cao, K. H. Low, R. Ouyang, C. K.-Y. Tan, and P. Jaillet. Parallel Gaussian process regression with low-rank covariance matrix approximations. In Proc. UAI, pages 152-161, 2013.
[5] J. Choi and K. Kim. Inverse reinforcement learning in partially observable environments. JMLR, 12:691-730, 2011.
[6] J. Choi and K. Kim. Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. In Proc. NIPS, pages 314-322, 2012.
[7] K. Dvijotham and E. Todorov. Inverse optimal control with linearly-solvable MDPs. In Proc. ICML, pages 335-342, 2010.
[8] T. N. Hoang, Q. M. Hoang, and K. H. Low. A unifying framework of anytime sparse Gaussian process regression models with stochastic variational inference for big data. In Proc. ICML, pages 569-578, 2015.
[9] T. N. Hoang and K. H. Low. A general framework for interacting Bayes-optimally with self-interested agents using arbitrary parametric model and model prior. In Proc. IJCAI, pages 1394-1400, 2013.
[10] T. N. Hoang and K. H. Low. Interactive POMDP Lite: Towards practical planning to predict and exploit intentions for interacting with self-interested agents. In Proc. IJCAI, pages 2298-2305, 2013.
[11] T. N. Hoang, K. H. Low, P. Jaillet, and M. Kankanhalli. Nonmyopic ε-Bayes-optimal active learning of Gaussian processes. In Proc. ICML, pages 739-747, 2014.
[12] S. Levine, Z. Popović, and V. Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Proc. NIPS, pages 19-27, 2011.
[13] K. H. Low, J. Chen, T. N. Hoang, N. Xu, and P. Jaillet. Recent advances in scaling up Gaussian process predictive models for large spatiotemporal data. In S. Ravela and A. Sandu, editors, Proc. Dynamic Data-driven Environmental Systems Science Conference (DyDESS'14). LNCS 8964, Springer, 2015.
[14] K. H. Low, N. Xu, J. Chen, K. K. Lim, and E. B. Özgül. Generalized online sparse Gaussian processes with application to persistent mobile robot localization. In Proc. ECML/PKDD Nectar Track, 2014.
[15] K. H. Low, J. Yu, J. Chen, and P. Jaillet. Parallel Gaussian process regression for big data: Low-rank representation meets Markov approximation. In Proc. AAAI, pages 2821-2827, 2015.
[16] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proc. UAI, pages 295-302, 2007.
[17] G. Neu and C. Szepesvári. Training parsers by inverse reinforcement learning. Machine Learning, 77(2-3):303-337, 2009.
[18] P. Newson and J. Krumm. Hidden Markov map matching through noise and sparseness. In Proc. 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 336-343, 2009.
[19] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. ICML, 2000.
[20] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77(2):257-286, 1989.
[21] D. Ramachandran and E. Amir. Bayesian inverse reinforcement learning. In Proc. IJCAI, pages 2586-2591, 2007.
[22] S. Russell. Learning agents for uncertain environments. In Proc. COLT, pages 101-103, 1998.
[23] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In Proc. ICML, pages 1032-1039, 2008.
[24] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Proc. NIPS, pages 1449-1456, 2007.
[25] N. Xu, K. H. Low, J. Chen, K. K. Lim, and E. B. Özgül. GP-Localize: Persistent mobile robot localization using online sparse Gaussian process observation model. In Proc. AAAI, pages 2585-2592, 2014.
[26] J. Yu, K. H. Low, A. Oran, and P. Jaillet. Hierarchical Bayesian nonparametric approach to modeling and learning the wisdom of crowds of urban traffic route planning agents. In Proc. IAT, pages 478-485, 2012.
[27] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. AAAI, pages 1433-1438, 2008.
Consistent Multilabel Classification
Oluwasanmi Koyejo∗
Department of Psychology,
Stanford University
[email protected]
Nagarajan Natarajan∗
Department of Computer Science,
University of Texas at Austin
[email protected]
Pradeep Ravikumar
Department of Computer Science,
University of Texas at Austin
[email protected]
Inderjit S. Dhillon
Department of Computer Science,
University of Texas at Austin
[email protected]
(∗ Equal contribution.)
Abstract
Multilabel classification is rapidly developing as an important aspect of modern
predictive modeling, motivating study of its theoretical aspects. To this end, we
propose a framework for constructing and analyzing multilabel classification metrics which reveals novel results on a parametric form for population optimal classifiers, and additional insight into the role of label correlations. In particular,
we show that for multilabel metrics constructed as instance-, micro- and macro-averages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a
weak association between labels via the threshold. Thus, our analysis extends the
state of the art from a few known multilabel classification metrics such as Hamming loss, to a general framework applicable to many of the classification metrics
in common use. Based on the population-optimal classifier, we propose a computationally efficient and general-purpose plug-in classification algorithm, and prove
its consistency with respect to the metric of interest. Empirical results on synthetic
and benchmark datasets are supportive of our theoretical findings.
1 Introduction
Modern classification problems often involve the prediction of multiple labels simultaneously associated with a single instance e.g. image tagging by predicting multiple objects in an image. The
growing importance of multilabel classification has motivated the development of several scalable
algorithms [8, 12, 18] and has led to the recent surge in theoretical analysis [1, 3, 7, 16] which helps
guide and understand practical advances. While recent results have advanced our knowledge of
optimal population classifiers and consistent learning algorithms for particular metrics such as the
Hamming loss and multilabel F -measure [3, 4, 5], a general understanding of learning with respect
to multilabel classification metrics has remained an open problem. This is in contrast to the more
traditional settings of binary and multiclass classification where several recently established results
have led to a rich understanding of optimal and consistent classification [9, 10, 11]. This manuscript
constitutes a step towards establishing results for multilabel classification at the level of generality
currently enjoyed only in these traditional settings.
Towards a generalized analysis, we propose a framework for multilabel sample performance metrics
and their corresponding population extensions. A classification metric is constructed to measure the
utility¹ of a classifier, as defined by the practitioner or end-user (¹ equivalently, we may define the loss as the negative utility). The utility may be measured using
the sample metric given a finite dataset, and further generalized to the population metric with respect
to a given data distribution (i.e. with respect to infinite samples). Two distinct approaches have been
proposed for studying the population performance of a classifier in the classical settings of binary
and multiclass classification, described by Ye et al. [17] as decision theoretic analysis (DTA) and
empirical utility maximization (EUM). DTA population utilities measure the expected performance
of a classifier on a fixed-size test set, while EUM population utilities are directly defined as a function
of the population confusion matrix. However, state-of-the-art analysis of multilabel classification
has so far lacked such a distinction. The proposed framework defines both EUM and DTA multilabel
population utility as generalizations of the aforementioned classic definitions. Using this framework,
we observe that existing work on multilabel classification [1, 3, 7, 16] has exclusively focused on
optimizing the DTA utility of (specific) multilabel metrics.
Averaging of binary classification metrics remains one of the most widely used approaches for defining multilabel metrics. Given a binary label representation, such metrics are constructed via averaging with respect to labels (instance-averaging), with respect to examples separately for each label
(macro-averaging), or with respect to both labels and examples (micro-averaging). We consider a
large sub-family of such metrics where the underlying binary metric can be constructed as a fraction of linear combinations of true positives, false positives, false negatives and true negatives [9].
Examples in this family include the ubiquitous Hamming loss, the averaged precision, the multilabel averaged F -measure, and the averaged Jaccard measure, among others. Our key result is that a
Bayes optimal multilabel classifier for such metrics can be explicitly characterized in a simple form:
the optimal classifier thresholds the label-wise conditional probability marginals, and the label dependence in the underlying distribution is relevant to the optimal classifier only through the threshold
parameter. Further, the threshold is shared by all the labels when the metric is instance-averaged
or micro-averaged. This result is surprising and, to our knowledge, a first result to be shown at this
level of generality for multilabel classification. The result also sheds additional insight into the role
of label correlations in multilabel classification, answering prior conjectures by Dembczyński et al.
[3] and others.
We provide a plug-in estimation based algorithm that is efficient as well as theoretically consistent,
i.e. the true utility of the empirical estimator approaches the optimal (EUM) utility of the Bayes
classifier (Section 4). We also present experimental evaluation on synthetic and real-world benchmark multilabel datasets comparing different estimation algorithms (Section 5) for representative
multilabel performance metrics selected from the studied family. The results observed in practice
are supportive of what the theory predicts.
1.1 Related Work
We briefly highlight closely related theoretical results in the multilabel learning literature. Gao and
Zhou [7] consider the consistency of multilabel learning with respect to DTA utility, with a focus
on two specific losses ? Hamming and rank loss (the corresponding measures are defined in Section
2). Surrogate losses are devised which result in consistent learning with respect to these metrics.
In contrast, we propose a plug-in estimation based algorithm which directly estimates the Bayes
optimal, without going through surrogate losses. Dembczynski et al. [2] analyze the DTA population
optimal classifier for the multilabel rank loss, showing that the Bayes optimal is independent of label
correlations in the unweighted case, and construct certain weighted univariate losses which are DTA
consistent surrogates in the more general weighted case. Perhaps the work most closely related
to ours is by Dembczynski et al. [4] who propose a novel DTA consistent plug-in rule estimation
based algorithm for multilabel F -measure. Cheng et al. [1] consider optimizing popular losses in
multilabel learning such as Hamming, rank and subset 0/1 loss (which is the multilabel analog of
the classical 0-1 loss). They propose a probabilistic version of classifier chains (first introduced by
Read et al. [13]) for estimating the Bayes optimal with respect to subset 0/1 loss, though without
rigorous theoretical justification.
2
A Framework for Multilabel Classification Metrics
Consider multilabel classification with M labels, where each instance is denoted by x 2 X . For
convenience, we will focus on the common binary encoding, where the labels are represented by
a vector y 2 Y = {0, 1}M , so ym = 1 iff the mth label is associated with the instance, and
2
ym = 0 otherwise. The goal is to learn a multilabel classifier f : X 7! Y that optimizes a certain
performance metric with respect to P ? a fixed data generating distribution over the domain X ? Y,
using a training set of instance-label pairs (x(n) , y(n) ), n = 1, 2, . . . , N drawn (typically assumed
iid.) from P. Let X and Y denote the random variables for instances and labels respectively, and let
denote the performance (utility) metric of interest.
Most classification metrics can be represented as functions of the entries of the confusion matrix. In
case of binary classification, the confusion matrix is specified by four numbers, i.e., true positives,
true negatives, false positives and false negatives. Similarly, we construct the following primitives
for multilabel classification:
(n)
(n)
c )m,n = Jfm (x(n) ) = 1, ym
c )m,n = Jfm (x(n) ) = 0, ym
TP(f
= 1K
TN(f
= 0K
(1)
(n)
(n)
(n)
(n)
c
c
FP(f )m,n = Jfm (x ) = 1, ym = 0K
FN(f )m,n = Jfm (x ) = 0, ym = 1K
where JZK denotes the indicator function that is 1 if the predicate Z is true or 0 otherwise. It is clear
that most multilabel classification metrics considered in the literature can be written as a function of
the M N primitives defined in (1).
In the following, we consider a construction which is of sufficient generality to capture all multilabel
c )m,n , FP(f
c )m,n , TN(f
c )m,n , FN(f
c )m,n }M,N
metrics in common use. Let Ak (f ) : {TP(f
m=1,n=1 7! R,
k = 1, 2, . . . , K represent a set of K functions. Consider sample multilabel metrics constructed as
functions: : {Ak (f )}K
k=1 7! [0, 1). We note that the metric need not decompose over individual
instances. Equipped with this definition of a sample performance metric , consider the population
utility of a multilabel classifier f defined as:
U (f ; , P) = ({E [ Ak (f ) ]}K
(2)
k=1 ),
where the expectation is over iid draws from the joint distribution P. Note that this can be seen as
a multilabel generalization of the so-called Empirical Utility Maximization (EUM) style classifiers
studied in binary [9, 10] and multiclass [11] settings.
Our goal is to learn a multilabel classifier that maximizes U (f ; , P) for general performance metrics
. Define the (Bayes) optimal multilabel classifier as:
f ? = argmax U (f ; , P).
(3)
f :X ! {0,1}M
p
Let U (f ; , P) = U . We say that ?f is a consistent estimator of f ? if U (?f ; , P) ! U ? .
?
?
Examples. The averaged accuracy (1 - Hamming loss) used in multilabel classification correPM PN
c )m,n + FN(f
c )m,n and Ham (f ) =
sponds to simply choosing: A1 (f ) = M1N m=1 n=1 FP(f
2
1 A1 (f ). The measure
corresponding
to
rank
loss
can
be
obtained
by choosing Ak (f ) =
?
??
?
PM
PM
1
c
c )m2 ,k , for k = 1, 2, . . . , N and Rank = 1
FN(f
m1 =1
m2 =1 FP(f )m1 ,k
M2
P
N
1
k=1 Ak (f ). Note that the choice of {Ak }, and therefore , is not unique.
N
Remark 1. Existing results on multilabel classification have focused on decision-theoretic analysis
(DTA) style classifiers, where the utility is defined as:
?
?
UDTA (f ; , P) = E
({Ak (f )}K
(4)
k=1 ) ,
and the expectation is over iid samples from P. Furthermore, there are no theoretical results for
consistency with respect to general performance metrics in this setting (See Appendix B.2).
For the remainder of this manuscript, we refer to U (f ; P) as the utility defined in (2). We will also
c ) as TP)
c when it is clear from the context.
drop the argument f (e.g. write TP(f
2.1
A Framework for Averaged Binary Multilabel Classification Metrics
The most popular class of multilabel performance metrics consists of averaged binary performance
metrics, that correspond to particular settings of {Ak (f )} using certain averages as described in the
following. For the remainder of this subsection, the metric : [0, 1]4 ! [0, 1) will refer to a binary
classification metric as is typically applied to a binary confusion matrix.
2
A subtle but important aspect of the definition of rank loss in the existing literature, including [2] and [7],
is that the Bayes optimal is allowed to be a real-valued function and may not correspond to a label decision.
3
Micro-averaging: Micro-averaged multilabel performance metrics micro are defined by averaging over both labels and examples. Let:
N X
M
N X
M
X
X
c )= 1
c )m,n ,
c )= 1
c )m,n ,
TP(f
TP(f
FP(f
FP(f
(5)
M N n=1 m=1
M N n=1 m=1
c ) and FN(f
c ) are defined similarly, then the micro-averaged multilabel performance metrics are
TN(f
given by:
K
c FP,
c TN,
c FN).
c
(TP,
(6)
micro ({Ak (f )}k=1 ) :=
Thus, for micro-averaging, one applies a binary performance metric to the confusion matrix defined
by the averaged quantities described in (5).
Macro-averaging: The metric macro measures average classification performance across labels.
Define the averaged measures:
N
N
X
X
c )m,n ,
c m (f ) = 1
c )m,n ,
c m (f ) = 1
TP
TP(f
FP
FP(f
N n=1
N n=1
c m (f ) and FN
c m (f ) are defined similarly. The macro-averaged performance metric is given by:
TN
K
macro ({Ak (f )}k=1 )
:=
M
1 X
c m , FP
c m , TN
c m , FN
c m ).
(TP
M m=1
(7)
Instance-averaging: The metric instance measures the average classification performance across
examples. Define the averaged measures:
M
M
X
X
c n (f ) = 1
c )m,n ,
c n (f ) = 1
c )m,n ,
TP
TP(f
FP
FP(f
M m=1
M m=1
c n (f ) and FN
c n (f ) are defined similarly. The instance-averaged performance metric is given by:
TN
K
instance ({Ak (f )}k=1 ) :=
3
N
1 X c c c c
(TPn , FPn , TNn , FNn ).
N n=1
(8)
Characterizing the Bayes Optimal Classifier for Multilabel Metrics
We now characterize the optimal multilabel classifier for the large family of metrics outlined in
Section 2.1 ( micro , macro and instance ) with respect to the EUM utility. We begin by observing
that while micro-averaging and instance-averaging seem quite different when viewed as sample
averages, they are in fact equivalent at the population level. Thus, we need only focus on micro to
characterize instance as well.
Proposition 1. For a given binary classification metric , consider the averaged multilabel metrics
micro defined in (6) and
instance defined in (8). For any f , U (f ; micro , P) ? U (f ; instance , P). In
particular, f ? ? ? f ? ? .
micro
instance
We further restrict our study to metrics selected from the linear-fractional metric family, recently
studied in the context of binary classification [9]. Any in this family can be written as:
c
c
c
c
c FP,
c FN,
c TN)
c = a0 + a11 TP + a10 FP + a01 FN + a00 TN ,
(TP,
c + b10 FP
c + b01 FN
c + b00 TN
c
b0 + b11 TP
c FP,
c FN,
c TN
c are defined as in Section 2.1. Many
where a0 , b0 , aij , bij , i, j 2 {0, 1} are fixed, and TP,
popular multilabel metrics can be derived using linear-fractional . Some examples include3 :
c
TP
c
(1 + 2 )TP
Jaccard :
Jacc =
F :
c + FP
c + FN
c
F =
TP
c + 2 FN
c + FP
c
(9)
(1 + 2 )TP
c
TP
c
c
Precision :
Hamming :
Prec =
Ham = TP + TN
c + FP
c
TP
3
Note that Hamming is typically defined as the loss, given by 1
4
Ham .
PM
PM
Define the population
h
i quantities: ? = m=1 P(Ym = 1) and (f ) = m=1 P(fm (x) = 1). Let
c ) , where the expectation is over iid draws from P. From (5), it follows that,
TP(f ) = E TP(f
h
i
c ) = (f ) TP(f ), TN(f ) = 1 ?
FP(f ) := E FP(f
(f ) + TP(f ) and FN(f ) = (f ) TP(f ).
Now, the population utility (2) corresponding to
U (f ;
micro , P)
=
can be written succinctly as:
c0 + c1 TP(f ) + c2 (f )
(TP(f ), FP(f ), FN(f ), TN(f )) =
d0 + d1 TP(f ) + d2 (f )
micro
(10)
with the constants:
c0 = a01 ? + a00
d0 = b01 ? + b00
a00 ? + a0 ,
b00 ? + b0 ,
c1 = a11 a10
d1 = b11 b10
a01 + a00 ,
b01 + b00 ,
c2 = a10
d2 = b10
a00 and
b00 .
We assume that the joint P has a density ? that satisfies dP = ?dx, and define ?m (x) = P(Ym =
1|X = x). Our first main result characterizes the Bayes optimal multilabel classifier f ? micro .
Theorem 2. Given the constants {c1 , c2 , c0 } and {d1 , d2 , d0 }, define:
d2 U ? micro c2
?
=
.
c1 d1 U ? micro
(11)
The optimal Bayes classifier f ? := f ? micro defined in (3) is given by:
?
1. When c1 > d1 U ? micro , f ? takes the form fm
(x) = J?m (x) >
?
2. When c1 < d1 U ? micro , f ? takes the form fm
(x) = J?m (x) <
?
?
K, for m 2 [M ].
K, for m 2 [M ].
The proof is provided in Appendix A.2, and applies equivalently to instance-averaging. Theorem 2
recovers existing results in binary [9] settings (See Appendix B.1 for details), and is sufficiently
general to capture many of the multilabel metrics used in practice. Our proof is closely related to
the binary classification case analyzed in Theorem 2 of [9], but differs in the additional averaging
across labels. A key observation from Theorem 2 is that the optimal multilabel classifier can be
obtained by thresholding the marginal instance-conditional probability for each label P(Ym = 1|x)
and, importantly, that the optimal classifiers for all the labels share the same threshold ? . Thus,
the effect of the joint distribution is only in the threshold parameter. We emphasize that while the
presented results characterize the optimal population classifier, incorporating label correlations into
the prediction algorithm may have other benefits with finite samples, such as statistical efficiency
when there are known structural similarities between the marginal distributions [3]. Further analysis
is left for future work.
The Bayes optimal for the macro-averaged population metric is straightforward to establish. We
observe that the threshold is not shared in this case.
Proposition 3. For a given linear-fractional metric , consider the macro-averaged multilabel
metric macro defined in (7). Let c1 > d1 U ? macro and f ? = f ? ?macro (x). We have, for m = 1, 2, . . . , M :
?
fm
= J?m (x) >
?
m K,
?
where m
2 [0, 1] is a constant that depends on the metric and the label-wise instance-conditional
marginals of P. Analogous results hold for c1 < d1 U ? macro .
Remark 2. It is clear that micro-, macro- and instance- averaging are equivalent at the population
level when the metric is linear. This is a straightforward consequence of the observation that
the corresponding sample utilities are the same. More generally, micro-, macro- and instanceaveraging are equivalent whenever the optimal threshold is a constant independent of P, such as for
linear metrics, where d1 = d2 = 0 so ? = cc12 (cf. Corollary 4 of Koyejo et al. [9]). Thus, our
analysis recovers known results for Hamming loss [3, 7].
4
Consistent Plug-in Estimation Algorithm
Importantly, the Bayes optimal characterization points to a simple plug-in estimation algorithm
that enjoys consistency as follows. First, one obtains an estimate ??m (x) of the marginal instanceconditional probability ?m (x) = P(Ym = 1|x) for each label m (see Reid and Williamson [14])
5
using a training sample. Then, the given metric micro (f ) is maximized on a validation sample. For
the remainder of this manuscript, we assume wlog. that c1 > d1 U ? . Note that in order to maximize
over {f : fm (x) = J?m (x) > K 8m = 1, 2, . . . , M, 2 (0, 1)}, it suffices to optimize:
? = argmax
? ),
(12)
micro (f
2(0,1)
where micro is the micro-averaged sample metric defined in (6) (similarly for instance ). Though the
threshold search is over a continuous space 2 (0, 1) the number of distinct micro (?f ) values given
a training sample of size N is at most N M . Thus (12) can be solved efficiently on a finite sample.
Algorithm 1: Plugin-Estimator for
micro
and
instance
Input: Training examples S = {x
and metric micro (or
for m = 1, 2, . . . , M do
(n)
1. Select the training data for label m: Sm = {x(n) , ym }N
n=1 .
2. Split the training data Sm into two sets Sm1 and Sm2 .
3. Estimate ??m (x) using Sm1 , define f?m (x) = J?
?m (x) > K.
end for
Obtain ? by solving (12) on S2 = [M
m=1 Sm2 .
Return: ?f ?.
(n)
, y(n) }N
n=1
instance ).
Consistency of the proposed algorithm. The following theorem shows that the plug-in procedure
of Algorithm 1 results in a consistent classifier.
p
Theorem 4. Let micro be a linear-fractional metric. If the estimates ??m (x) satisfy ??m ! ?m , 8m,
then the output multilabel classifier ?f ? of Algorithm 1 is consistent.
The proof is provided in Appendix A.4. From Proposition 1, it follows that consistency holds for
instance as well. Additionally, in light of Proposition 3, we may apply the learning algorithms
proposed by [9] for binary classification independently for each label to obtain a consistent estimator
for macro .
5
Experiments
We present two sets of results. The first is an experimental validation on synthetic data with known
ground truth probabilities. The results serve to verify our main result (Theorem 2) characterizing
the Bayes optimal for averaged multilabel metrics. The second is an experimental evaluation of
the plugin estimator algorithms for micro-, instance-, and macro-averaged multilabel metrics on
benchmark datasets.
5.1
Synthetic data: Verification of Bayes optimal
We consider the micro-averaged F1 metric in (9) for multilabel classification with 4 labels. We
sample a set of five 2-dimensional vectors x = {x(1) , x(2) , . . . , x(5) } from the standard Gaussian.
The conditional probability ?m for label m is modeled using a sigmoid function: ?m (x) = P(Ym =
1|x) = 1+exp1 wT x , using a vector wm sampled from the standard Gaussian. The Bayes optimal
m
f ? (x) 2 {0, 1}4 that maximizes the micro-averaged F1 population utility is then obtained by exhaustive search over all possible label vectors for each instance. In Figure 1 (a)-(d), we plot the
?
conditional probabilities (wrt. the sample index) for each label, the corresponding fm
for each x,
?
and the optimal threshold
using (11). We observe that the optimal multilabel classifier indeed
thresholds P(Ym |x) for each label m, and furthermore, that the threshold is same for all the labels,
as stated in Theorem 2.
6
(a)
(b)
(c)
(d)
Figure 1: Bayes optimal classifier for multilabel F1 measure on synthetic data with 4 labels, and
distribution supported on 5 instances. Plots from left to right show the Bayes optimal classifier
prediction for instances, and for labels 1 through 4. Note that the optimal ? at which the label-wise
marginal ?m (x) is thresholded is shared, conforming to Theorem 2 (larger plots are included in
Appendix C).
5.2
Benchmark data: Evaluation of plug-in estimators
We now evaluate the proposed plugin-estimation (Algorithm 1) that is consistent for micro- and
instance-averaged multilabel metrics. We focus on two metrics, F1 and Jaccard, listed in (9). We
compare Algorithm 1, designed to optimize micro-averaged (or instance-averaged) multilabel met?
rics to two related plugin-estimation methods: (i) a separate threshold m
tuned for each label m
individually ? this optimizes the utility corresponding to the macro-averaged metric, but is not consistent for micro-averaged or instance-averaged metrics, and is the most common approach in practice. We refer to this as Macro-Thres, (ii) a constant threshold 1/2 for all the labels ? this is known
to be optimal for averaged accuracy (equiv. Hamming loss), but not for non-decomposable F1 or
Jaccard metrics. We refer to this as Binary Relevance (BR) [15].
We use four benchmark multilabel datasets4 in our experiments: (i) S CENE, an image dataset consisting of 6 labels, with 1211 training and 1196 test instances, (ii) B IRDS, an audio dataset consisting
of 19 labels, with 323 training and 322 test instances, (iii) E MOTIONS, a music dataset consisting
of 6 labels, with 393 training and 202 test instances, and (iv) C AL 500, a music dataset consisting
of 174 labels, with 400 training and 100 test instances5 . We perform logistic regression (with L2
regularization) on a separate subsample to obtain estimates of ??m (x) of P(Ym = 1|x), for each label
m (as described in Section 4). All the methods we evaluate rely on obtaining a good estimator for
the conditional probability. So we exclude labels that are associated with very few instances ? in
particular, we train and evaluate using labels associated with at least 20 instances, in each dataset,
for all the methods.
In Table 1, we report the micro-averaged F1 and Jaccard metrics on the test set for Algorithm 1,
Macro-Thres and Binary Relevance. We observe that estimating a fixed threshold for all the labels
(Algorithm 1) consistently performs better than estimating thresholds for each label (Macro-Thres)
and than using threshold 1/2 for all labels (BR); this conforms to our main result in Theorem 2 and
the consistency analysis of Algorithm 1 in Theorem 4. A similar trend is observed for the instanceaveraged metrics computed on the test set, shown in Table 2. Proposition 1 shows that maximizing
the population utilities of micro-averaged and instance-averaged metrics are equivalent; the result
holds in practice as presented in Table 2. Finally, we report macro-averaged metrics computed on
test set in Table 3. We observe that Macro-Thres is competitive in 3 out of 4 datasets; this conforms
to Proposition 3 which shows that in the case of macro-averaged metrics, it is optimal to tune a
threshold specific to each label independently. Beyond consistency, we note that by using more
samples, joint threshold estimation enjoys additional statistical efficiency, while separate threshold
estimation enjoys greater flexibility. This trade-off may explain why Algorithm 1 achieves the best
performance in three out of four datasets in Table 3, though it is not consistent for macro-averaged
metrics.
4
5
The datasets were obtained from http://mulan.sourceforge.net/datasets-mlc.html.
Original C AL 500 dataset does not provide splits; we split the data randomly into train and test sets.
7
DATASET
S CENE
B IRDS
E MOTIONS
C AL 500
BR
0.6559
0.4040
0.5815
0.3647
Algorithm 1
F1
0.6847 ? 0.0072
0.4088 ? 0.0130
0.6554 ? 0.0069
0.4891 ? 0.0035
Macro-Thres
0.6631 ? 0.0125
0.2871 ? 0.0734
0.6419 ? 0.0174
0.4160 ? 0.0078
BR
0.4878
0.2495
0.3982
0.2229
Algorithm 1
Jaccard
0.5151 ? 0.0084
0.2648 ? 0.0095
0.4908 ? 0.0074
0.3225 ? 0.0024
Macro-Thres
0.5010 ? 0.0122
0.1942 ? 0.0401
0.4790 ? 0.0077
0.2608 ? 0.0056
Table 1: Comparison of plugin-estimator methods on multilabel F1 and Jaccard metrics. Reported
values correspond to micro-averaged metric (F1 and Jaccard) computed on test data (with standard
deviation, over 10 random validation sets for tuning thresholds). Algorithm 1 is consistent for microaveraged metrics, and performs the best consistently across datasets.
DATASET
BR
S CENE
B IRDS
E MOTIONS
C AL 500
0.5695
0.1209
0.4787
0.3632
Algorithm 1
F1
0.6422 ? 0.0206
0.1390 ? 0.0110
0.6241 ? 0.0204
0.4855 ? 0.0035
Macro-Thres
0.6303 ? 0.0167
0.1390 ? 0.0259
0.6156 ? 0.0170
0.4135 ? 0.0079
BR
0.5466
0.1058
0.4078
0.2268
Algorithm 1
Jaccard
0.5976 ? 0.0177
0.1239 ? 0.0077
0.5340 ? 0.0072
0.3252 ? 0.0024
Macro-Thres
0.5902 ? 0.0176
0.1195 ? 0.0096
0.5173 ? 0.0086
0.2623 ? 0.0055
Table 2: Comparison of plugin-estimator methods on multilabel F1 and Jaccard metrics. Reported
values correspond to instance-averaged metric (F1 and Jaccard) computed on test data (with standard deviation, over 10 random validation sets for tuning thresholds). Algorithm 1 is consistent for
instance-averaged metrics, and performs the best consistently across datasets.
DATASET
BR
S CENE
B IRDS
E MOTIONS
C AL 500
0.6601
0.3366
0.5440
0.1293
Algorithm 1
F1
0.6941 ? 0.0205
0.3448 ? 0.0110
0.6450 ? 0.0204
0.2687 ? 0.0035
Macro-Thres
0.6737 ? 0.0137
0.2971 ? 0.0267
0.6440 ? 0.0164
0.3226 ? 0.0068
BR
0.5046
0.2178
0.3982
0.0880
Algorithm 1
Jaccard
0.5373 ? 0.0177
0.2341 ? 0.0077
0.4912 ? 0.0072
0.1834 ? 0.0024
Macro-Thres
0.5260 ? 0.0176
0.2051 ? 0.0215
0.4900 ? 0.0133
0.2146 ? 0.0036
Table 3: Comparison of plugin-estimator methods on multilabel F1 and Jaccard metrics. Reported
values correspond to the macro-averaged metric computed on test data (with standard deviation,
over 10 random validation sets for tuning thresholds). Macro-Thres is consistent for macro-averaged
metrics, and is competitive in three out of four datasets. Though not consistent for macro-averaged
metrics, Algorithm 1 achieves the best performance in three out of four datasets.
6
Conclusions and Future Work
We have proposed a framework for the construction and analysis of multilabel classification metrics
and corresponding population optimal classifiers. Our main result is that for a large family of averaged performance metrics, the EUM optimal multilabel classifier can be explicitly characterized by
thresholding of label-wise marginal instance-conditional probabilities, with weak label dependence
via a shared threshold. We have also proposed efficient and consistent estimators for maximizing
such multilabel performance metrics in practice. Our results are a step forward in the direction of
extending the state-of-the-art understanding of learning with respect to general metrics in binary and
multiclass settings. Our work opens up many interesting research directions, including the potential
for further generalization of our results beyond averaged metrics, and generalized results for DTA
population optimal classification, which is currently only well-understood for the F -measure.
Acknowledgments: We acknowledge the support of NSF via CCF-1117055, CCF-1320746 and IIS1320894, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS Initiative to Support
Research at the Interface of the Biological and Mathematical Sciences.
8
References
[1] Weiwei Cheng, Eyke H?ullermeier, and Krzysztof J Dembczynski. Bayes optimal multilabel
classification via probabilistic classifier chains. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 279?286, 2010.
[2] Krzysztof Dembczynski, Wojciech Kotlowski, and Eyke H?ullermeier. Consistent multilabel
ranking through univariate losses. In Proceedings of the 29th International Conference on
Machine Learning (ICML-12), pages 1319?1326, 2012.
[3] Krzysztof Dembczy?nski, Willem Waegeman, Weiwei Cheng, and Eyke H?ullermeier. On label
dependence and loss minimization in multi-label classification. Machine Learning, 88(1-2):
5?45, 2012.
[4] Krzysztof Dembczynski, Arkadiusz Jachnik, Wojciech Kotlowski, Willem Waegeman, and
Eyke H?ullermeier. Optimizing the F-measure in multi-label classification: Plug-in rule approach versus structured loss minimization. In Proceedings of the 30th International Conference on Machine Learning, pages 1130?1138, 2013.
[5] Krzysztof J Dembczynski, Willem Waegeman, Weiwei Cheng, and Eyke H?ullermeier. An
exact algorithm for F-measure maximization. In Advances in Neural Information Processing
Systems, pages 1404?1412, 2011.
[6] Luc Devroye. A probabilistic theory of pattern recognition, volume 31. Springer, 1996.
[7] Wei Gao and Zhi-Hua Zhou. On the consistency of multi-label learning. Artificial Intelligence,
199:22?44, 2013.
[8] Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. Multilabel classification using bayesian
compressed sensing. In Advances in Neural Information Processing Systems, pages 2645?
2653, 2012.
[9] Oluwasanmi O Koyejo, Nagarajan Natarajan, Pradeep K Ravikumar, and Inderjit S Dhillon.
Consistent binary classification with generalized performance metrics. In Advances in Neural
Information Processing Systems, pages 2744?2752, 2014.
[10] Harikrishna Narasimhan, Rohit Vaish, and Shivani Agarwal. On the statistical consistency
of plug-in classifiers for non-decomposable performance measures. In Advances in Neural
Information Processing Systems, pages 1493?1501, 2014.
[11] Harikrishna Narasimhan, Harish Ramaswamy, Aadirupa Saha, and Shivani Agarwal. Consistent multiclass algorithms for complex performance measures. In Proceedings of the 32nd
International Conference on Machine Learning (ICML-15), pages 2398?2407, 2015.
[12] James Petterson and Tib?erio S Caetano. Submodular multi-label learning. In Advances in
Neural Information Processing Systems, pages 1512?1520, 2011.
[13] Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multilabel classification. Machine learning, 85(3):333?359, 2011.
[14] Mark D Reid and Robert C Williamson. Composite binary losses. The Journal of Machine
Learning Research, 9999:2387?2422, 2010.
[15] Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Mining multi-label data. In
Data mining and knowledge discovery handbook, pages 667?685. Springer, 2010.
[16] Willem Waegeman, Krzysztof Dembczynski, Arkadiusz Jachnik, Weiwei Cheng, and Eyke
H?ullermeier. On the bayes-optimality of f-measure maximizers. Journal of Machine Learning
Research, 15:3333?3388, 2014.
[17] Nan Ye, Kian Ming A Chai, Wee Sun Lee, and Hai Leong Chieu. Optimizing F-measures: a
tale of two approaches. In Proceedings of the International Conference on Machine Learning,
2012.
[18] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit Dhillon. Large-scale multi-label
learning with missing labels. In Proceedings of the 31st International Conference on Machine
Learning, pages 593?601, 2014.
9
| 5883 |@word version:1 briefly:1 nd:1 c0:3 open:2 d2:5 thres:11 raajay:1 exclusively:1 tuned:1 ours:1 existing:4 comparing:1 surprising:1 b01:3 written:3 dx:1 conforming:1 fn:17 drop:1 plot:3 designed:1 intelligence:1 selected:2 characterization:1 five:1 mathematical:1 constructed:5 c2:4 initiative:1 prove:1 consists:1 theoretically:1 tagging:1 indeed:1 expected:1 surge:1 growing:1 multi:6 ming:1 decomposed:1 zhi:1 equipped:1 begin:1 estimating:3 underlying:2 provided:2 maximizes:2 mulan:1 katakis:1 what:1 prateek:2 narasimhan:2 finding:1 gm117594:1 shed:1 classifier:37 reid:2 positive:4 understood:1 consequence:1 plugin:7 encoding:1 ak:11 analyzing:1 establishing:1 studied:3 averaged:44 practical:1 unique:1 acknowledgment:1 practice:5 differs:1 procedure:1 empirical:4 b00:5 composite:1 convenience:1 context:2 optimize:2 equivalent:4 missing:1 maximizing:2 jesse:1 oluwasanmi:2 primitive:2 straightforward:2 independently:2 focused:2 decomposable:2 m2:3 insight:2 estimator:11 rule:2 importantly:2 holmes:1 population:23 classic:1 justification:1 analogous:1 construction:2 user:1 exact:1 trend:1 recognition:1 natarajan:2 predicts:1 observed:2 role:2 tib:1 solved:1 capture:2 pradeepr:1 caetano:1 sun:1 trade:1 ham:3 multilabel:71 solving:1 predictive:1 serve:1 efficiency:2 joint:5 geoff:1 represented:2 train:2 distinct:2 jain:2 artificial:1 grigorios:1 choosing:2 exhaustive:1 quite:1 stanford:2 widely:1 valued:1 say:1 larger:1 otherwise:2 compressed:1 net:1 propose:6 remainder:3 macro:34 relevant:1 rapidly:1 kapoor:1 iff:1 flexibility:1 sourceforge:1 chai:1 extending:1 generating:1 a11:2 object:1 help:1 tale:1 measured:1 b0:3 c:3 met:1 direction:2 closely:3 tsoumakas:1 nagarajan:2 suffices:1 generalization:3 f1:14 decompose:1 proposition:6 biological:1 equiv:1 extension:1 hold:3 sufficiently:1 considered:1 ground:1 datasets4:1 achieves:2 purpose:1 estimation:10 applicable:1 label:61 currently:2 utexas:3 individually:1 weighted:2 minimization:2 gaussian:2 zhou:2 pn:1 corollary:1 derived:1 focus:4 consistently:3 rank:6 contrast:2 rigorous:1 a01:3 typically:3 pfahringer:1 a0:3 mth:1 going:1 jachnik:2 classification:44 aforementioned:1 among:1 denoted:1 html:1 development:1 art:3 marginal:6 equal:1 construct:2 yu:1 icml:3 constitutes:1 future:2 others:2 report:2 ullermeier:6 micro:42 few:2 saha:1 modern:2 randomly:1 wee:1 simultaneously:1 petterson:1 individual:1 argmax:2 consisting:4 interest:2 mining:2 evaluation:3 analyzed:1 pradeep:2 light:1 chain:3 fu:1 conforms:2 iv:1 theoretical:6 instance:44 modeling:1 tp:31 tpn:1 maximization:3 deviation:3 subset:2 entry:1 predicate:1 dembczy:2 motivating:1 characterize:3 reported:3 synthetic:5 nski:2 st:1 density:1 international:6 probabilistic:3 off:1 lee:1 ym:14 ashish:1 style:2 return:1 iis1320894:1 wojciech:2 exclude:1 potential:1 ioannis:2 satisfy:1 explicitly:2 ranking:1 depends:1 ramaswamy:1 analyze:1 observing:1 characterizes:1 wm:1 bayes:19 competitive:2 dembczynski:7 contribution:1 accuracy:2 who:1 efficiently:1 maximized:1 jzk:1 correspond:5 weak:2 bayesian:1 iid:4 explain:1 whenever:1 definition:3 a10:3 james:1 dm:1 associated:4 proof:3 recovers:2 hamming:10 sampled:1 dataset:10 popular:3 knowledge:3 subsection:1 fractional:4 ubiquitous:1 subtle:1 harikrishna:2 manuscript:3 aadirupa:1 wei:1 though:4 generality:3 furthermore:2 correlation:4 defines:1 logistic:1 perhaps:1 effect:1 ye:2 verify:1 true:6 ccf:2 vaish:1 regularization:1 read:2 dhillon:3 eyke:6 eum:7 generalized:4 theoretic:2 confusion:5 tn:14 performs:3 motion:4 interface:1 image:3 
wise:4 novel:2 recently:2 nih:1 common:4 sigmoid:1 volume:1 association:1 analog:1 m1:2 marginals:2 refer:4 a00:5 enjoyed:1 tuning:3 consistency:10 pm:4 similarly:5 outlined:1 submodular:1 similarity:1 recent:2 purushottam:1 optimizing:4 optimizes:2 certain:3 cene:4 kar:1 binary:24 supportive:2 arkadiusz:2 seen:1 additional:4 greater:1 maximize:1 ii:2 multiple:2 d0:3 characterized:2 plug:10 devised:1 naga86:1 ravikumar:2 a1:2 prediction:3 scalable:1 irds:4 regression:1 metric:92 expectation:3 represent:1 agarwal:2 c1:9 separately:1 koyejo:3 kotlowski:2 seem:1 practitioner:1 structural:1 leong:1 split:3 iii:1 weiwei:4 psychology:1 restrict:1 fm:6 multiclass:5 br:8 texas:3 motivated:1 utility:21 remark:2 generally:1 clear:3 involve:1 listed:1 tune:1 shivani:2 http:1 kian:1 nsf:1 write:1 key:2 four:5 waegeman:4 threshold:24 drawn:1 thresholded:1 krzysztof:6 fraction:1 extends:1 family:7 draw:2 decision:3 appendix:5 jaccard:13 nan:1 cheng:5 aspect:3 argument:1 optimality:1 sponds:1 conjecture:1 department:4 developing:1 structured:1 viswanathan:1 combination:1 dta:10 across:5 computationally:1 remains:1 wrt:1 end:3 studying:1 willem:4 lacked:1 apply:1 observe:5 sm1:2 prec:1 vlahavas:1 original:1 denotes:1 harish:1 include:1 cf:1 sm2:2 music:2 establish:1 classical:2 r01:1 quantity:2 parametric:1 dependence:3 traditional:2 surrogate:3 hai:1 dp:1 separate:3 devroye:1 modeled:1 index:1 equivalently:2 robert:1 frank:1 m1n:1 negative:5 stated:1 perform:1 observation:2 datasets:11 sm:2 benchmark:5 finite:3 acknowledge:1 defining:1 erio:1 mlc:1 introduced:1 pair:1 specified:1 distinction:1 established:1 beyond:2 pattern:1 fp:22 including:2 rely:1 predicting:1 indicator:1 advanced:1 b10:3 prior:1 understanding:3 literature:3 l2:1 discovery:1 rohit:1 loss:24 highlight:1 interesting:1 versus:1 validation:5 sufficient:1 consistent:23 verification:1 thresholding:2 share:1 austin:3 succinctly:1 supported:1 enjoys:3 aij:1 guide:1 understand:1 characterizing:2 benefit:1 world:1 rich:1 unweighted:1 forward:1 far:1 emphasize:1 obtains:1 bernhard:1 reveals:1 handbook:1 assumed:1 search:2 continuous:1 why:1 table:8 additionally:1 learn:2 b11:2 obtaining:1 williamson:2 complex:1 constructing:1 domain:1 main:4 s2:1 subsample:1 allowed:1 representative:1 hsiang:1 wlog:1 precision:2 sub:1 exp1:1 answering:1 bij:1 theorem:11 remained:1 specific:3 showing:1 tnn:1 sensing:1 eibe:1 maximizers:1 incorporating:1 false:4 importance:1 nigms:1 led:2 simply:1 univariate:2 gao:2 microaveraged:1 inderjit:4 chieu:1 applies:2 springer:2 hua:1 truth:1 satisfies:1 sanmi:1 conditional:8 fnn:1 goal:2 viewed:1 towards:2 shared:4 luc:1 included:1 infinite:1 averaging:15 wt:1 called:1 experimental:3 select:1 support:2 mark:1 relevance:2 evaluate:3 audio:1 d1:10 |
5,395 | 5,884 | Is Approval Voting Optimal Given Approval Votes?
Nisarg Shah
Computer Science Department
Carnegie Mellon University
[email protected]
Ariel D. Procaccia
Computer Science Department
Carnegie Mellon University
[email protected]
Abstract
Some crowdsourcing platforms ask workers to express their opinions by approving a set of k good alternatives. It seems that the only reasonable way to aggregate
these k-approval votes is the approval voting rule, which simply counts the number of times each alternative was approved. We challenge this assertion by proposing a probabilistic framework of noisy voting, and asking whether approval voting
yields an alternative that is most likely to be the best alternative, given k-approval
votes. While the answer is generally positive, our theoretical and empirical results
call attention to situations where approval voting is suboptimal.
1
Introduction
It is surely no surprise to the reader that modern machine learning algorithms thrive on large
amounts of data ? preferably labeled. Online labor markets, such as Amazon Mechanical Turk
(www.mturk.com), have become a popular way to obtain labeled data, as they harness the power
of a large number of human workers, and offer significantly lower costs compared to expert opinions. But this low-cost, large-scale data may require compromising quality: the workers are often
unqualified or unwilling to make an effort, leading to a high level of noise in their submitted labels.
To overcome this issue, it is common to hire multiple workers for the same task, and aggregate their
noisy opinions to find more accurate labels. For example, TurKit [17] is a toolkit for creating and
managing crowdsourcing tasks on Mechanical Turk. For our purposes its most important aspect
is that it implements plurality voting: among available alternatives (e.g., possible labels), workers
report the best alternative in their opinion, and the alternative that receives the most votes is selected.
More generally, workers may be asked to report the k best alternatives in their opinion; such a vote
is known as a k-approval vote. This has an advantage over plurality (1-approval) in noisy situations
where a worker may not be able to pinpoint the best alternative accurately, but can recognize that
it is among the top k alternatives [23].1 At the same time, k-approval votes, even for k > 1, are
much easier to elicit than, say, rankings of the alternatives, not to mention full utility functions. For
example, EteRNA [16] ? a citizen science game whose goal is to design RNA molecules that fold
into stable structures ? uses 8-approval voting on submitted designs, that is, each player approves
up to 8 favorite designs; the designs that received the largest number of approval votes are selected
for synthesis in the lab.
So, the elicitation of k-approval votes is common practice and has significant advantages. And it
may seem that the only reasonable way to aggregate these votes, once collected, is via the approval
voting rule, that is, tally the number of approvals for each alternative, and select the most approved
one.2 But is it? In other words, do the k-approval votes contain useful information that can lead to
1
k-approval is also used for picking k winners, e.g., various cities in the US such as San Francisco, Chicago,
and New York use it in their so-called ?participatory budgeting? process [15].
2
There is a subtle distinction, which we will not belabor, between k-approval voting, which is the focus of
this paper, and approval voting [8], which allows voters to approve as many alternatives as they wish. The latter
1
significantly better outcomes, and is ignored by approval voting? Or is approval voting an (almost)
optimal method for aggregating k-approval votes?
Our Approach. We study the foregoing questions within the maximum likelihood estimation (MLE)
framework of social choice theory, which posits the existence of an underlying ground truth that provides an objective comparison of the alternatives. From this viewpoint, the votes are noisy estimates
of the ground truth. The optimal rule then selects the alternative that is most likely to be the best
alternative given the votes. This framework has recently received attention from the machine learning community [18, 3, 2, 4, 21], in part due to its applications to crowdsourcing domains [20, 21, 9],
where, indeed, there is a ground truth, and individual votes are objective.
In more detail, in our model there exists a ground truth ranking over the alternatives, and each voter
holds an opinion, which is another ranking that is a noisy estimate of the ground truth ranking. The
opinions are drawn i.i.d. from the popular Mallows model [19], which is parametrized by the ground
truth ranking, a noise parameter ? ? [0, 1], and a distance metric d over the space of rankings.
We use five well-studied distance metrics: the Kendall tau (KT) distance, the (Spearman) footrule
distance, the maximum displacement distance, the Cayley distance, and the Hamming distance.
When required to submit a k-approval vote, a voter simply approves the top k alternatives in his
opinion. Given the votes, an alternative a is the maximum likelihood estimate (MLE) for the best
alternative if the votes are most likely generated by a ranking that puts a first.
We can now reformulate our question in slightly more technical terms:
Is approval voting (almost) a maximum likelihood estimator for the best alternative, given votes drawn from the Mallows model? How does the answer depend
on the noise parameter ? and the distance metric d?
Our results. Our first result (Theorem 1) shows that under the Mallows model, the set of winners
according to approval voting coincides with the set of MLE best alternatives under the Kendall tau
distance, but under the other four distances there may exist approval winners that are not MLE best
alternatives. Our next result (Theorem 2) confirms the intuition that the suboptimality of approval
voting stems from the information that is being discarded: when only a single alternative is approved
or disapproved in each vote, approval voting ? which now utilizes all the information that can be
gleaned from the anonymous votes ? is optimal under mild conditions.
Going back to the general case of k-approval votes, we show (Theorem 3) that even under the four
distances for which approval voting is suboptimal, a weaker statement holds: in cases with very high
or very low noise, every MLE best alternative is an approval winner (but some approval winners
may not be MLE best alternatives). And our experiments, using real data, show that the accuracy of
approval voting is usually quite close to that of the MLE in pinpointing the best alternative.
We conclude that approval voting is a good way of aggregating k-approval votes in most situations.
But our work demonstrates that, perhaps surprisingly, approval voting may be suboptimal, and, in
situations where a high degree of accuracy is required, exact computation of the MLE best alternative
is an option worth considering. We discuss our conclusions in more detail in Section 6.
2
Model
Let [t] , {1, . . . , t}. Denote the set of alternatives by A, and let |A| = m. We use L(A) to denote
the set of rankings (total orders) of the alternatives in A. For a ranking ? ? L(A), let ?(i) denote
the alternative occupying position i in ?, and let ? ?1 (a) denote the rank (position) of alternative a
in ?. With a slight abuse of notation, let ?([t]) , {a ? A|? ?1 (a) ? [t]}. We use ?a?b to denote
the ranking obtained by swapping the positions of alternatives a and b in ?. We assume that there
exists an unknown true ranking of the alternatives (the ground truth), denoted ? ? ? L(A). We also
make the standard assumption of a uniform prior over the true ranking.
framework of approval voting has been studied extensively, both from the axiomatic point of view [7, 8, 13,
22, 1], and the game-theoretic point of view [14, 12, 6]. However, even under this framework it is a standard
assumption that votes are tallied by counting the number of times each alternative is approved, which is why
we simply refer to the aggregation rule under consideration as approval voting.
2
Let N = {1, . . . , n} denote the set of voters. Each voter i has an opinion, denoted ?i ? L(A),
which is a noisy estimate of the true ranking ? ? ; the collection of opinions ? the (opinion) profile
? is denoted ?. Fix k ? [m]. A k-approval vote is a collection of k alternatives approved by a
voter. When asked to submit a k-approval vote, voter i simply submits the vote Vi = ?i ([k]), which
is the set of alternatives at the top k positions in his opinion. The collection of all votes is called
the vote profile, and denoted V = {Vi }i?[n] . For a ranking ? and a k-approval vote v, we say that
v is generated from ?, denoted ? ?k v (or ? ? v when the value of k is clear from the context),
if v = ?([k]). More generally, for an opinion profile ? and a vote profile V , we say ? ?k V (or
? ? V ) if ?i ?k Vi for every i ? [n].
Let Ak = {Ak ? A||Ak | = k} denote the set of all subsets of A of size k. A voting rule operating
on k-approval votes is a function (Ak )n ? A that returns a winning alternative given the votes.3
In particular, let us define the approval score of an alternative a, denoted SC A PP (a), as the number
of voters that approve a. Then, approval voting simply chooses an alternative with the greatest
approval score. Note that we do not break ties. Instead, we talk about the set of approval winners.
Following the standard social choice literature, we model the opinion of each voter as being drawn
i.i.d. from an underlying noise model. A noise model describes the probability of drawing an opinion
? given the true ranking ? ? , denoted Pr[?|? ? ]. We say that a noise model is neutral if the labels
of the alternatives do not matter, i.e., renaming alternatives in the true ranking ? and in the opinion
? ? , in the same fashion, keeps Pr[?|? ? ] intact. A popular noise model is the Mallows model [19],
?
under which Pr[?|? ? ] = ?d(?,? ) /Z?m . Here, d is a distance metric over the space of rankings.
Parameter ? ? [0, 1] governs the noise level; ? = 0 implies that the true ranking is generated with
probability 1, and ? = 1 implies the uniform distribution. Z?m is the normalization constant, which
is independent of the true ranking ? ? given that distance d is neutral, i.e., renaming alternatives in
the same fashion in two rankings does not change the distance between them. Below, we review five
popular distances used in the social choice literature; they are all neutral.
? The Kendall tau (KT) distance, denoted dKT , measures the number of pairs of alternatives
over which two rankings disagree. Equivalently, it is the number of swaps required by
bubble sort to convert one ranking into another.
? The (Spearman) footrule (FR) distance, denoted dFR , measures the total displacement (absolute difference between positions) of all alternatives in two rankings.
? The Maximum Displacement (MD) distance, denoted dMD , measures the maximum of the
displacements of all alternatives between two rankings.
? The Cayley (CY) distance, denoted dCY , measures the minimum number of swaps (not
necessarily of adjacent alternatives) required to convert one ranking into another.
? The Hamming (HM) distance, denoted dHM , measures the number of positions in which
two rankings place different alternatives.
Since opinions
the probability of aPprofile ? given the true ranking ? ? is
Qnare drawn independently,
n
?
?
d(?,? ? )
Pr[?|? ] = i=1 Pr[?i |? ] ? ?
, where d(?, ? ? ) = i=1 d(?i , ? ? ). Once we fix the noise
model,
for a fixed k we can derive the probability of observing a given k-approval vote v: Pr[v|? ? ] =
P
Pr[?|? ? ]. Then, the probability of drawing a given vote profile V is Pr[V |? ? ] =
Qn??L(A):??v?
P
?
?
i=1 Pr[Vi |? ]. Alternatively, this can also be expressed as Pr[V |? ] =
??L(A)n :??V Pr[?|? ].
Hereinafter, we omit the domains L(A)n for ? and L(A) for ? ? when they are clear from the context.
Finally, given the vote profile V the likelihood of an alternative a beingPthe best alternative in the true
ranking ? ? is proportional to (via Bayes? rule) Pr[V |? ? (1) = a] = ?? :?? (1)=a Pr[V |? ? ]. Using
the two expressions derived earlier for Pr[V |? ? ], and ignoring the normalization constant Z?m from
the probabilities, we define the likelihood function of a given votes V as
"
#
n
X
X
X
Y
X
d(?,? ? )
d(?i ,? ? )
L(V, a) ,
?
=
?
.
(1)
? ? :? ? (1)=a ?:??V
? ? :? ? (1)=a i=1
?i :?i ?Vi
The maximum likelihood estimate (MLE) for the best alternative is given by arg maxa?A L(V, a).
Again, we do not break ties; we study the set of MLE best alternatives.
3
Technically, this is a social choice function; a social welfare function returns a ranking of the alternatives.
3
3
Optimal Voting Rules
At first glance, it seems natural to use approval voting (that is, returning the alternative that is
approved by the largest number of voters) given k-approval votes. However, consider the following
example with 4 alternatives (A = {a, b, c, d}) and 5 voters providing 2-approval votes:
V1 = {b, c}, V2 = {b, c}, V3 = {a, d}, V4 = {a, b}, V5 = {a, c}.
(2)
Notice that alternatives a, b, and c receive 3 approvals each, while alternative d receives only a
single approval. Approval voting may return any alternative other than alternative d. But is that
always optimal? In particular, while alternatives b and c are symmetric, alternative a is qualitatively
different due to different alternatives being approved along with a. This indicates that under certain
conditions, it is possible that not all three alternatives are MLE for the best alternative. Our first
result shows that this is indeed the case under three of the distance functions listed above, and a
similar example works for a fourth. However, surprisingly, under the Kendall tau distance the MLE
best alternatives are exactly the approval winners, and hence are polynomial-time computable, which
stands in sharp contrast to the NP-hardness of computing them given rankings [5].
Theorem 1. The following statements hold for aggregating k-approval votes using approval voting.
1. Under the Mallows model with a fixed distance d ? {dMD , dCY , dHM , dFR }, there exist a
vote profile V with at most six 2-approval votes over at most five alternatives, and a choice
for the Mallows parameter ?, such that not all approval winners are MLE best alternatives.
2. Under the Mallows model with the distance d = dKT , the set of MLE best alternatives
coincides with the set of approval winners, for all vote profiles V and all values of the
Mallows parameter ? ? (0, 1).
Proof. For the Mallows model with d ? {dMD , dCY , dHM } and any ? ? (0, 1), the profile from
Equation (2) is a counterexample: alternatives b and c are MLE best alternatives, but a is not.
For the Mallows model with d = dFR , we could not find a counter example with 4 alternatives;
computer-based simulations generated the following counterexample with 5 alternatives that works
for any ? ? (0, 1): V1 = V2 = {a, b}, V3 = V4 = {c, d}, V5 = {a, e}, and V6 = {b, c}.
Here, alternatives a, b, and c have the highest approval score of 3. However, alternative b has a
strictly lower likelihood of being the best alternative than alternative a, and hence is not an MLE
best alternative. The calculation verifying these counterexamples is presented in the online appendix
(specifically, Appendix A).
In contrast, for the Kendall tau distance, we show that all approval winners are MLE best alternatives,
and vice-versa. We begin by simplifying the likelihood function L(V, a) from Equation (1) for the
special case of the Mallows model with the Kendall tau distance. In this case, it is well known that
Qm
Pj?1
the normalization constant satisfies Z?m = j=1 T?j , where T?j = i=0 ?i . Consider a ranking
?i such that ?i ? Vi . We can decompose dKT (?i , ? ? ) into three types of pairwise mismatches:
i) d1 (?i , ? ? ): The mismatches over pairs (b, c) where b ? Vi and c ? A \ Vi , or vice-versa; ii)
d2 (?i , ? ? ): The mismatches over pairs (b, c) where b, c ? Vi ; and iii) d3 (?i , ? ? ): The mismatches
over pairs (b, c) where b, c ? A \ Vi .
Note that every ranking ?i that satisfies ?i ? Vi has identical mismatches of type 1. Let us denote
the number of such mismatches by dKT (Vi , ? ? ). Also, notice that d2 (?i , ? ? ) = dKT (?i |Vi , ? ? |Vi ),
where ?|S denotes the ranking of alternatives in S ? A dictated by ?. Similarly, d3 (?i , ? ? ) =
dKT (?i |A\Vi , ? ? |A\Vi ). Now, in the expression for the likelihood function L(V, a),
L(V, a) =
X
n
Y
X
?dKT (Vi ,?
?
)+dKT (?i |Vi ,? ? |Vi )+dKT (?i |A\V ,? ? |A\V )
i
i
? ? :? ? (1)=a i=1 ?i :?i ?V
=
X
n
Y
?
dKT (Vi ,? ? )
?
? ? :? ? (1)=a i=1
=
X
n
Y
?
dKT (?i1 ,? ? |Vi ) ?
?
?i1 ?L(Vi )
?dKT (Vi ,?
?
)
?
? ?
X
??
dKT (?i2 ,? ? |A\V ) ?
X
?
?i2 ?L(A\Vi )
? Z?k ? Z?m?k ?
? ? :? ? (1)=a i=1
X
? ? :? ? (1)=a
4
?dKT (V,?
?
)
b
, L(V,
a).
i
The second equality follows because every ranking ?i that satisfies ?i ? V can be generated by
picking rankings ?i1 ? L(Vi ) and ?i2 ? L(A \ Vi ), and concatenating them. The third equality
follows from theP
definition of the normalization constant in the Mallows model. Finally, we denote
n
?
dKT (V, ? ? ) ,
i=1 dKT (Vi , ? ). It follows that maximizing L(V, a) amounts to maximizing
b
L(V,
a). Note that dKT (V, ? ? ) counts the number of times alternative a is approved while alternative
b is not for all a, b ? P
A with b ?? a. That is, let nV (a, ?b) , |{i ? [n]|a ? Vi ? b ?
/ Vi }|.
Then, dKT (V, ? ? ) = a,b?A:b?? a nV (a, ?b). Also, note that for alternatives c, d ? A, we have
SC A PP (c) ? SC A PP (d) = nV (c, ?d) ? nV (d, ?c).
b
Next, we show that L(V,
a) is a monotonically increasing function of SC A PP (a). Equivalently,
b
b
L(V, a) ? L(V, b) if and only if SC A PP (a) ? SC A PP (b). Fix a, b ? A. Consider the bijection
between the sets of rankings placing a and b first, which simply swaps a and b (? ? ?a?b ). Then,
X
?
?
b
b
L(V,
a) ? L(V,
b) =
?dKT (V,? ) ? ?dKT (V,?a?b ) .
(3)
? ? :? ? (1)=a
?
?
?
(1) = a. Note that ?a?b
(1) = b. Let C
?
? ? (equivalently, in ?a?b
). Now, ? ? and
Fix ? such that ?
denote the set of alternatives positioned
?
between a and b in
?a?b
have identical disagreements with
V on a pair of alternatives (x, y) unless i) one of x and y belongs to {a, b}, and ii) the other belongs
?
to C ? {a, b}. Thus, the difference of disagreements of ? ? and ?a?b
with V on such pairs is
?
dKT (V, ? ? ) ? dKT (V, ?a?b
)
h
i X
V
V
= n (b, ?a) ? n (a, ?b) +
[nV (c, ?a) + nV (b, ?c) ? nV (c, ?b) ? nV (a, ?c)]
c?C
= (|C| + 1) ? SC
A PP
(b) ? SC
A PP
(a) .
?
b
b
Thus, SC A PP (a) = SC A PP (b) implies dKT (V, ? ? ) = dKT (V, ?a?b
) (and thus, L(V,
a) = L(V,
b)),
A PP
A PP
?
?
b
b
and SC (a) > SC (b) implies dKT (V, ? ) < dKT (V, ?a?b ) (and thus, L(V,
a) > L(V,
b)).
Suboptimality of approval voting for distances other than the KT distance stems from the fact that
in counting the number of approvals for a given alternative, one discards information regarding
other alternatives approved along with the given alternative in various votes. However, no such
information is discarded when only one alternative is approved (or not approved) in each vote. That
is, given plurality (k = 1) or veto (k = m ? 1) votes, approval voting should be optimal, not only
for the Mallows model but for any reasonable noise model. The next result formalizes this intuition.
Theorem 2. Under a neutral noise model, the set of MLE best alternatives coincides with the set of
approval winners
1. given plurality votes, if p1 > pi > 0, ?i ? {2, . . . , m}, where pi is the probability of the
alternative in position i in the true ranking appearing in the first position in a sample, or
2. given veto votes, if 0 < q1 < qi , ?i ? {2, . . . , m}, where qi is the probability of the
alternative in position i in the true ranking appearing in the last position in a sample.
Proof. We show the proof for plurality votes. The case of veto votes is symmetric: in every vote,
instead of a single approved alternative, we have a single alternative that is not approved. Note that
the probability pi is independent of the true ranking ? ? due to the neutrality of the noise model.
?
?
Consider a plurality vote profile V and an alternative
Pa. Let T =? {? ? L(A)|? ?(1) = a}.
The likelihood function for a is given by L(V, a) = ?? ?T Pr[V |? ]. Under every ? ? T , the
Qn
contribution of the SC A PP (a) plurality votes for a to the product Pr[V |? ? ] = i=1 Pr[Vi |? ? ] is
A PP
(p1 )SC (a) . Note that the alternatives in A \ {a} are distributed among positions in {2, . . . , m} in
all possible ways by the rankings in T . Let ib denote the position of alternative b ? A \ {a}. Then,
SC A PP (a)
L(V, a) = p1
?
X
SC A PP (b)
Y
pi b
{ib }b?A\{a} ={2,...,m} b?A\{a}
= (p1 )n?k ?
X
Y
{ib }b?A\{a} ={2,...,m} b?A\{a}
5
pi b
p1
SC APP (b)
.
P
The second transition holds because SC A PP (a) = n ? k ? b?A\{a} SC A PP (b). Our assumption in
the theorem statement implies 0 < pib /p1 < 1 for ib ? {2, . . . , m}. Now, it can be checked that
P
A PP
A PP
b
b
for a, b ? A, we have L(V,
a)/L(V,
b) = i?{2,...,m} (pi /p1 )SC (b)?SC (a) . Thus, SC A PP (a) ?
b
b
SC A PP (b) if and only if L(V,
a) ? L(V,
b), as required.
Note that the conditions of Theorem 2 are very mild. In particular, the condition for plurality votes
is satisfied under the Mallows model with all five distances we consider, and the condition for veto
votes is satisfied under the Mallows model with the Kendall tau, the footrule, and the maximum
displacement distances. This is presented as Theorem 4 in the online appendix (Appendix B).
4
High Noise and Low Noise
While Theorem 1 shows that there are situations where at least some of the approval winners may not
be MLE best alternatives, it does not paint the complete picture. In particular, in both profiles used as
counterexamples in the proof of Theorem 1, it holds that every MLE best alternative is an approval
winner. That is, the optimal rule choosing an MLE best alternative works as if a tie-breaking scheme
is imposed on top of approval voting. Does this hold true for all profiles? Part 2 of Theorem 1 gives
a positive answer for the Kendall tau distance. In this section, we answer the foregoing question
(largely) in the positive under the other four distance functions, with respect to the two ends of the
Mallows spectrum: the case of low noise (? ? 0), and the case of high noise (? ? 1). The case
of high noise is especially compelling (because that is when it becomes hard to pinpoint the ground
truth), but both extreme cases have received special attention in the literature [24, 21, 11]. In contrast
to previous results, which have almost always yielded different answers in the two cases, we show
that every MLE best alternative is an approval winner in both cases, in almost every situation.
P
P
?
We begin with the likelihood function for alternative a: L(V, a) = ?? :?? (1)=a ?:??V ?d(?,? ) .
When ? ? 0, maximizing L(V, a) requires minimizing the minimum exponent. Ties, if any, are
broken using the number of terms achieving the minimum exponent, then the second smallest exponent, and so on. At the other extreme, let ? = 1 ? with ? 0. Using the first-order approximation
?
(1 ? )d(?,? ) ? 1 ? ? d(?, ? ? ), maximizing L(V, a) requires minimizing the sum of d(?, ? ? ) over
?
all ? , ? with ? ? (1) = a and ? ? V . Ties are broken using higher-order approximations. Let
X
X
L0 (V, a) = ? min
min d(?, ? ? )
L1 (V, a) =
d(?, ? ? ).
?
? :? (1)=a ?:??V
? ? :? ? (1)=a ?:??V
We are interested in minimizing L0 (V, a) and L1 (V, a); this leads to novel combinatorial problems
that require detailed analysis. We are now ready for the main result of this section.
Theorem 3. The following statements hold for using approval voting to aggregate k-approval votes
drawn from the Mallows model.
1. Under the Mallows model with d ? {dFR , dCY , dHM } and ? ? 0, and under the Mallows
model with d ? {dFR , dCY , dHM , dMD } and ? ? 1, it holds that for every k ? [m ? 1],
and every profile with k-approval votes, every MLE best alternative is an approval winner.
2. Under the Mallows model with d = dMD and ? ? 0, there exists a profile with seven 2approval votes over 5 alternatives such that no MLE best alternative is an approval winner.
Before we proceed to the proof, we remark that in part 1 of the theorem, by ? ? 0 and ? ? 1,
we mean that there exist 0 < ??0 , ??1 < 1 such that the result holds for all ? ? ??0 and ? ? ??1 ,
respectively. In part 2 of the theorem, we mean that for every ?? > 0, there exists a ? < ?? for
which the negative result holds. Due to space constraints, we only present the proof for the Mallows
model with d = dFR and ? ? 0; the full proof appears in the online appendix (Appendix C).
Proof of Theorem 3 (only for d = dFR , ? ? 0). Let ? ? 0 in the Mallows model with the footrule
distance. To analyze L0 (V, ?), we first analyze min?:??V dFR (? ? , ?) for a fixed ? ? ? L(A). Then,
we minimize it over ? ? , and show that the set of alternatives that appear first in the minimizers (i.e.,
the set of alternatives minimizing L0 (V, a)) is exactly the set of approval winners. Since every MLE
best alternative in the ? ? 0 case must minimize L0 (V, ?), the result follows.
6
Fix ? ? ? L(A). Imagine a boundary between positions k and k + 1 in all rankings, i.e., between the
approved and the non-approved alternatives. Now, given a profile ? such that ? ? V , we first apply
the following operation repeatedly. For i ? [n], let an alternative a ? A be in positions t and t0 in ? ?
and ?i , respectively. If t and t0 are on the same side of the boundary (i.e., either both are at most k or
both are greater than k) and t 6= t0 , then swap alternatives ?i (t) and ?i (t0 ) = a in ?i . Note that this
decreases the displacement of a in ?i with respect to ? ? by |t ? t0 |, and increases the displacement
of ?i (t) by at most |t ? t0 |. Hence, the operation cannot increase dFR (?, ? ? ). Let ? ? denote the
profile that we converge to. Note that ? ? satisfies ? ? ? V (because we only swap alternatives on
the same side of the boundary), dFR (? ? , ? ? ) ? dFR (?, ? ? ), and the following condition:
Condition X: for i ? [n], every alternative that is on the same side of the boundary in ? ? and ?i? is
in the same position in both rankings.
Because we started from an arbitrary profile ? (subject to ? ? V ), it follows that it is sufficient to
minimize dFR (? ? , ? ? ) over all ? ? with ? ? ? V satisfying condition X. However, we show that
subject to ? ? ? V and condition X, dFR (? ? , ? ? ) is actually a constant.
Note that for i ? [n], every alternative that is in different positions in ?i? and ? ? must be on different
sides of the boundary in the two rankings. It is easy to see that in every ?i? , there is an equal number
of alternatives on both sides of the boundary that are not in the same position as they are in ? ? . Now,
we can divide the total footrule distance dFR (? ? , ? ? ) into four parts:
1. Let i ? [n] and t ? [k] such that ? ? (t) 6= ?i? (t). Let a = ? ? (t) and (?i? )?1 (a) = t0 > k.
Then, the displacement t0 ? t of a is broken into two parts: (i) t0 ? k, and (ii) k ? t.
2. Let i ? [n] and t ? [m] \ [k] such that ? ? (t) 6= ?i? (t). Let a = ? ? (t) and (?i? )?1 (a) =
t0 ? k. Then, the displacement t ? t0 of a is broken into two parts: (i) k ? t0 , and (ii) t ? k.
Because the number of alternatives of type 1 and 2 is equal for every ?i? , we can see that the total
displacements of types 1(i) and 2(ii) are equal, and so are the total displacements of types 1(ii) and
2(i). By observing that there are exactly n ? SC A PP (? ? (t)) instances of type 1 for a given value of
t ? k, and SC A PP (? ? (t)) instances of type 2 for a given value of t > k, we conclude that
#
" k
m
X
X
A PP
A PP
?
?
?
?
SC (? (t)) ? (t ? k) .
dFR (? , ? ) = 2 ?
(n ? SC (? (t))) ? (k ? t) +
t=1
t=k+1
Pm
Minimizing this over ? ? reduces to minimizing t=1 SC A PP (? ? (t)) ? (t ? k). By the rearrangement
inequality, this is minimized when alternatives are ordered in a non-increasing order of their approval
scores. Note that exactly the set of approval winners appear first in such rankings.
Theorem 3 shows that under the Mallows model with d ? {dFR , dCY , dHM }, every MLE best
alternative is an approval winner for both ? ? 0 and ? ? 1. We believe that the same statement
holds for all values of ?, as we were unable to find a counterexample despite extensive simulations.
Conjecture 1. Under the Mallows model with distance d ? {dFR , dCY , dHM }, every MLE best
alternative is an approval winner for every ? ? (0, 1).
5
Experiments
We perform experiments with two real-world datasets ? Dots and Puzzle [20] ? to compare the
performance of approval voting against that of the rule that is MLE for the empirically observed
distribution of k-approval votes (and not for the Mallows model). Mao et al. [20] collected these
datasets by asking workers on Amazon Mechanical Turk to rank either four images by the number
of dots they contain (Dots), or four states of an 8-puzzle by their distance to the goal state (Puzzle).
Hence, these datasets contain ranked votes over 4 alternatives in a setting where a true ranking of the
alternatives indeed exists. Each dataset has four different noise levels; higher noise was created by
increasing the task difficulty [20]. For Dots, ranking images with a smaller difference in the number
of dots leads to high noise, and for Puzzle, ranking states farther away from the goal state leads to
high noise. Each noise level of each dataset contains 40 profiles with approximately 20 votes each.
7
In our experiments, we extract 2-approval votes from the ranked votes by taking the top 2 alternatives
in each vote. Given these 2-approval votes, approval voting returns an alternative with the largest
number of approvals. To apply the MLE rule, however, we need to learn the underlying distribution
of 2-approval votes. To that end, we partition the set of profiles in each noise level of each dataset
into training (90%) and test (10%) sets. We use a high fraction of the profiles for training in order
to examine the maximum advantage that the MLE rule may have over approval voting.
Given the training profiles (which approval voting simply ignores), the MLE rule learns the probabilities of observing each of the 6 possible 2-subsets of the alternatives given a fixed true ranking.4
On the test data, the MLE rule first computes the likelihood of each ranking given the votes. Then, it
computes the likelihood of each alternative being the best by adding the likelihoods of all rankings
that put the alternative first. It finally returns an alternative with the highest likelihood.
0.9
0.9
0.8
0.8
Accuracy
Accuracy
We measure the accuracy of both methods by their frequency of being able to pinpoint the correct
best alternative. For each noise level in each dataset, the accuracy is averaged over 1000 simulations
with random partitioning of the profiles into training and test sets.
0.7
0.6
0.5
0.7
MLE
Approval
0.6
1
2
3
4
0.5
Noise Level
1
2
3
4
Noise Level
(a) Dots
(b) Puzzle
Fig. 1: The MLE rule (trained on 90% of the profiles) and approval voting for 2-approval votes.
Figures 1(a) and 1(b) show that in general the MLE rule does achieve greater accuracy than approval
voting. However, the increase is at most 4.5%, which may not be significant in some contexts.
6
Discussion
Our main conclusion from the theoretical and empirical results is that approval voting is typically
close to optimal for aggregating k-approval votes. However, the situation is much subtler than it
appears at first glance. Moreover, our theoretical analysis is restricted by the assumption that the
votes are drawn from the Mallows model. A recent line of work in social choice theory [9, 10] has
focused on designing voting rules that perform well ? simultaneously ? under a wide variety of
noise models. It seems intuitive that approval voting would work well for aggregating k-approval
votes under any reasonable noise model; an analysis extending to a wide family of realistic noise
models would provide a stronger theoretical justification for using approval voting.
On the practical front, it should be emphasized that approval voting is not always optimal. When
maximum accuracy matters, one may wish to switch to the MLE rule. However, learning and applying the MLE rule is much more demanding. In our experiments we learn the entire distribution over
k-approval votes given the true ranking. While for 2-approval or 3-approval votes over 4 alternatives we only need to learn 6 probability values, in general for k-approval votes over m alternatives
one would need to learn m
k probability values, and the training data may not be sufficient for this
purpose. This calls for the design of estimators for the best alternative that achieve greater statistical
efficiency by avoiding the need to learn the entire underlying distribution over votes.
4
Technically, we learn a neutral noise model where the probability of a subset of alternatives being observed
only depends on the positions of the alternatives in the true ranking.
8
References
[1] C. Al?os-Ferrer. A simple characterization of approval voting. Social Choice and Welfare,
27(3):621?625, 2006.
[2] H. Azari Soufiani, W. Z. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for
rank aggregation. In Proc. of 27th NIPS, pages 2706?2714, 2013.
[3] H. Azari Soufiani, D. C. Parkes, and L. Xia. Random utility theory for social choice. In Proc. of
26th NIPS, pages 126?134, 2012.
[4] H. Azari Soufiani, D. C. Parkes, and L. Xia. Computing parametric ranking models via rankbreaking. In Proc. of 31st ICML, pages 360?368, 2014.
[5] J. Bartholdi, C. A. Tovey, and M. A. Trick. Voting schemes for which it can be difficult to tell
who won the election. Social Choice and Welfare, 6:157?165, 1989.
[6] D. Baumeister, G. Erd?elyi, E. Hemaspaandra, L. A. Hemaspaandra, and J. Rothe. Computational aspects of approval voting. In Handbook on Approval Voting, pages 199?251. Springer,
2010.
[7] S. J. Brams. Mathematics and democracy: Designing better voting and fair-division procedures. Princeton University Press, 2007.
[8] S. J. Brams and P. C. Fishburn. Approval Voting. Springer, 2nd edition, 2007.
[9] I. Caragiannis, A. D. Procaccia, and N. Shah. When do noisy votes reveal the truth? In Proc. of
14th EC, pages 143?160, 2013.
[10] I. Caragiannis, A. D. Procaccia, and N. Shah. Modal ranking: A uniquely robust voting rule.
In Proc. of 28th AAAI, pages 616?622, 2014.
[11] E. Elkind and N. Shah. Electing the most probable without eliminating the irrational: Voting
over intransitive domains. In Proc. of 30th UAI, pages 182?191, 2014.
[12] G. Erd?elyi, M. Nowak, and J. Rothe. Sincere-strategy preference-based approval voting fully
resists constructive control and broadly resists destructive control. Math. Log. Q., 55(4):425?
443, 2009.
[13] P. C. Fishburn. Axioms for approval voting: Direct proof. Journal of Economic Theory,
19(1):180?185, 1978.
[14] P. C. Fishburn and S. J. Brams. Approval voting, condorcet?s principle, and runoff elections.
Public Choice, 36(1):89?114, 1981.
[15] A. Goel, A. K. Krishnaswamy, S. Sakshuwong, and T. Aitamurto. Knapsack voting. In Proc.
of Collective Intelligence, 2015.
[16] J. Lee, W. Kladwang, M. Lee, D. Cantu, M. Azizyan, H. Kim, A. Limpaecher, S. Yoon,
A. Treuille, and R. Das. RNA design rules from a massive open laboratory. Proceedings
of the National Academy of Sciences, 111(6):2122?2127, 2014.
[17] G. Little, L. B. Chilton, M. Goldman, and R. C. Miller. TurKit: Human computation algorithms
on Mechanical Turk. In Proc. of 23rd UIST, pages 57?66, 2010.
[18] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proc. of 28th
ICML, pages 145?152, 2011.
[19] C. L. Mallows. Non-null ranking models. Biometrika, 44:114?130, 1957.
[20] A. Mao, A. D. Procaccia, and Y. Chen. Better human computation through principled voting.
In Proc. of 27th AAAI, pages 1142?1148, 2013.
[21] A. D. Procaccia, S. J. Reddi, and N. Shah. A maximum likelihood approach for selecting sets
of alternatives. In Proc. of 28th UAI, pages 695?704, 2012.
[22] M. R. Sertel. Characterizing approval voting. Journal of Economic Theory, 45(1):207?211,
1988.
[23] N. Shah, D. Zhou, and Y. Peres. Approval voting and incentives in crowdsourcing. In Proc. of
32nd ICML, pages 10?19, 2015.
[24] H. P. Young. Condorcet?s theory of voting.
82(4):1231?1244, 1988.
9
The American Political Science Review,
| 5884 |@word mild:2 eliminating:1 polynomial:1 seems:3 approved:15 stronger:1 nd:2 open:1 d2:2 confirms:1 simulation:3 simplifying:1 q1:1 mention:1 moment:1 contains:1 score:4 selecting:1 com:1 must:2 chicago:1 partition:1 nisarg:1 realistic:1 intelligence:1 selected:2 farther:1 parkes:3 provides:1 characterization:1 bijection:1 math:1 preference:2 five:4 along:2 direct:1 become:1 pairwise:2 uist:1 indeed:3 hardness:1 market:1 p1:7 examine:1 approval:112 goldman:1 election:2 little:1 bartholdi:1 considering:1 increasing:3 becomes:1 begin:2 underlying:4 notation:1 moreover:1 null:1 tallied:1 maxa:1 proposing:1 formalizes:1 preferably:1 every:21 voting:59 tie:5 exactly:4 returning:1 demonstrates:1 qm:1 biometrika:1 partitioning:1 control:2 omit:1 appear:2 positive:3 before:1 aggregating:5 despite:1 ak:4 abuse:1 approximately:1 voter:11 studied:2 averaged:1 practical:1 mallow:28 practice:1 implement:1 procedure:1 displacement:11 empirical:2 elicit:1 axiom:1 significantly:2 word:1 renaming:2 submits:1 cannot:1 close:2 put:2 context:3 applying:1 www:1 imposed:1 maximizing:4 attention:3 independently:1 focused:1 amazon:2 unwilling:1 rule:20 estimator:2 his:2 justification:1 imagine:1 massive:1 exact:1 us:1 designing:2 pa:1 trick:1 satisfying:1 democracy:1 cayley:2 labeled:2 observed:2 yoon:1 verifying:1 cy:1 soufiani:3 azari:3 counter:1 highest:2 decrease:1 principled:1 intuition:2 broken:4 asked:2 irrational:1 trained:1 depend:1 technically:2 division:1 efficiency:1 swap:5 various:2 talk:1 dkt:26 sc:28 tell:1 aggregate:4 outcome:1 choosing:1 whose:1 quite:1 foregoing:2 say:4 drawing:2 noisy:7 online:4 advantage:3 hire:1 product:1 fr:1 achieve:2 academy:1 intuitive:1 extending:1 derive:1 received:3 c:2 implies:5 posit:1 correct:1 compromising:1 human:3 opinion:17 public:1 require:2 budgeting:1 fix:5 plurality:8 arielpro:1 anonymous:1 decompose:1 probable:1 strictly:1 hold:11 ground:8 welfare:3 puzzle:5 smallest:1 purpose:2 estimation:1 proc:12 axiomatic:1 label:4 combinatorial:1 largest:3 vice:2 city:1 occupying:1 rna:2 always:3 zhou:1 derived:1 focus:1 l0:5 rank:3 likelihood:16 indicates:1 contrast:3 political:1 kim:1 minimizers:1 typically:1 entire:2 going:1 selects:1 i1:3 interested:1 issue:1 among:3 arg:1 denoted:12 exponent:3 caragiannis:2 platform:1 special:2 equal:3 once:2 identical:2 placing:1 icml:3 minimized:1 report:2 np:1 sincere:1 modern:1 simultaneously:1 recognize:1 national:1 individual:1 neutrality:1 hemaspaandra:2 rearrangement:1 intransitive:1 extreme:2 swapping:1 accurate:1 citizen:1 kt:3 nowak:1 worker:8 unless:1 divide:1 theoretical:4 instance:2 earlier:1 compelling:1 asking:2 assertion:1 cost:2 subset:3 neutral:5 uniform:2 front:1 answer:5 chooses:1 st:1 probabilistic:1 v4:2 rankbreaking:1 lee:2 picking:2 synthesis:1 again:1 aaai:2 satisfied:2 fishburn:3 creating:1 expert:1 american:1 leading:1 return:5 matter:2 ranking:55 vi:30 depends:1 view:2 break:2 lab:1 kendall:8 observing:3 analyze:2 aggregation:2 option:1 sort:1 bayes:1 contribution:1 minimize:3 accuracy:8 largely:1 who:1 miller:1 yield:1 accurately:1 elkind:1 lu:1 worth:1 app:1 submitted:2 checked:1 definition:1 against:1 pp:27 turk:4 frequency:1 destructive:1 proof:9 hamming:2 dataset:4 popular:4 ask:1 subtle:1 positioned:1 actually:1 back:1 appears:2 higher:2 harness:1 modal:1 erd:2 receives:2 o:1 glance:2 quality:1 reveal:1 perhaps:1 believe:1 contain:3 true:17 hence:4 equality:2 symmetric:2 laboratory:1 i2:3 adjacent:1 game:2 uniquely:1 coincides:3 suboptimality:2 won:1 generalized:1 theoretic:1 complete:1 
gleaned:1 l1:2 image:2 consideration:1 novel:1 recently:1 common:2 empirically:1 winner:20 slight:1 ferrer:1 mellon:2 significant:2 refer:1 versa:2 counterexample:5 rd:1 pm:1 similarly:1 mathematics:1 dot:6 toolkit:1 stable:1 operating:1 krishnaswamy:1 recent:1 dictated:1 belongs:2 discard:1 certain:1 inequality:1 minimum:3 greater:3 goel:1 managing:1 surely:1 converge:1 v3:2 monotonically:1 ii:6 multiple:1 full:2 reduces:1 stem:2 technical:1 calculation:1 offer:1 mle:37 qi:2 mturk:1 cmu:2 metric:4 normalization:4 pinpointing:1 receive:1 nv:8 subject:2 veto:4 seem:1 call:2 reddi:1 counting:2 iii:1 easy:1 variety:1 switch:1 suboptimal:3 economic:2 regarding:1 computable:1 t0:12 whether:1 expression:2 six:1 utility:2 effort:1 york:1 proceed:1 remark:1 repeatedly:1 ignored:1 generally:3 useful:1 clear:2 governs:1 listed:1 detailed:1 boutilier:1 amount:2 extensively:1 exist:3 notice:2 broadly:1 carnegie:2 incentive:1 express:1 four:7 achieving:1 drawn:6 d3:2 pj:1 approving:1 v1:2 fraction:1 convert:2 sum:1 fourth:1 place:1 almost:4 reasonable:4 reader:1 family:1 utilizes:1 appendix:6 dhm:7 fold:1 yielded:1 constraint:1 aspect:2 min:3 conjecture:1 department:2 according:1 spearman:2 describes:1 slightly:1 smaller:1 subtler:1 restricted:1 pr:17 ariel:1 equation:2 discus:1 count:2 end:2 available:1 operation:2 apply:2 v2:2 away:1 disagreement:2 appearing:2 alternative:133 shah:6 existence:1 knapsack:1 top:5 denotes:1 especially:1 approve:2 objective:2 question:3 v5:2 paint:1 parametric:1 strategy:1 md:1 distance:36 unable:1 parametrized:1 condorcet:2 seven:1 collected:2 reformulate:1 providing:1 minimizing:6 equivalently:3 difficult:1 statement:5 negative:1 design:6 collective:1 unknown:1 perform:2 disagree:1 datasets:3 discarded:2 hereinafter:1 situation:7 peres:1 sharp:1 arbitrary:1 community:1 pair:6 mechanical:4 required:5 extensive:1 distinction:1 nip:2 able:2 elicitation:1 usually:1 below:1 mismatch:6 challenge:1 pib:1 unqualified:1 tau:8 power:1 greatest:1 demanding:1 natural:1 ranked:2 difficulty:1 resists:2 scheme:2 footrule:5 picture:1 started:1 ready:1 bubble:1 hm:1 created:1 extract:1 prior:1 literature:3 review:2 fully:1 proportional:1 rothe:2 degree:1 sufficient:2 principle:1 viewpoint:1 azizyan:1 pi:6 surprisingly:2 last:1 dfr:17 side:5 weaker:1 wide:2 taking:1 characterizing:1 absolute:1 distributed:1 overcome:1 boundary:6 xia:3 stand:1 transition:1 world:1 qn:2 ignores:1 computes:2 collection:3 qualitatively:1 san:1 ec:1 social:9 keep:1 uai:2 handbook:1 conclude:2 francisco:1 thep:1 alternatively:1 spectrum:1 why:1 favorite:1 learn:6 molecule:1 robust:1 ignoring:1 necessarily:1 domain:3 submit:2 da:1 main:2 noise:31 profile:23 edition:1 fair:1 fig:1 fashion:2 position:18 mao:2 tally:1 wish:2 pinpoint:3 winning:1 concatenating:1 dmd:5 ib:4 third:1 breaking:1 learns:1 young:1 theorem:16 emphasized:1 exists:5 adding:1 chen:2 easier:1 surprise:1 simply:7 likely:3 labor:1 expressed:1 v6:1 ordered:1 springer:2 truth:9 satisfies:4 goal:3 change:1 hard:1 specifically:1 called:2 total:5 player:1 vote:76 intact:1 select:1 procaccia:5 latter:1 constructive:1 princeton:1 d1:1 avoiding:1 crowdsourcing:4 |
5,396 | 5,885 | A Normative Theory of Adaptive Dimensionality
Reduction in Neural Networks
Cengiz Pehlevan
Simons Center for Data Analysis
Simons Foundation
New York, NY 10010
[email protected]
Dmitri B. Chklovskii
Simons Center for Data Analysis
Simons Foundation
New York, NY 10010
[email protected]
Abstract
To make sense of the world our brains must analyze high-dimensional datasets
streamed by our sensory organs. Because such analysis begins with dimensionality reduction, modeling early sensory processing requires biologically plausible
online dimensionality reduction algorithms. Recently, we derived such an algorithm, termed similarity matching, from a Multidimensional Scaling (MDS) objective function. However, in the existing algorithm, the number of output dimensions is set a priori by the number of output neurons and cannot be changed. Because the number of informative dimensions in sensory inputs is variable there is a
need for adaptive dimensionality reduction. Here, we derive biologically plausible
dimensionality reduction algorithms which adapt the number of output dimensions
to the eigenspectrum of the input covariance matrix. We formulate three objective
functions which, in the offline setting, are optimized by the projections of the input
dataset onto its principal subspace scaled by the eigenvalues of the output covariance matrix. In turn, the output eigenvalues are computed as i) soft-thresholded,
ii) hard-thresholded, iii) equalized thresholded eigenvalues of the input covariance matrix. In the online setting, we derive the three corresponding adaptive
algorithms and map them onto the dynamics of neuronal activity in networks with
biologically plausible local learning rules. Remarkably, in the last two networks,
neurons are divided into two classes which we identify with principal neurons and
interneurons in biological circuits.
1
Introduction
Our brains analyze high-dimensional datasets streamed by our sensory organs with efficiency and
speed rivaling modern computers. At the early stage of such analysis, the dimensionality of sensory
inputs is drastically reduced as evidenced by anatomical measurements. Human retina, for example,
conveys signals from ?125 million photoreceptors to the rest of the brain via ?1 million ganglion
cells [1] suggesting a hundred-fold dimensionality reduction. Therefore, biologically plausible dimensionality reduction algorithms may offer a model of early sensory processing.
In a seminal work [2] Oja proposed that a single neuron may compute the first principal component
of activity in upstream neurons. At each time point, Oja?s neuron projects a vector composed of firing rates of upstream neurons onto the vector of synaptic weights by summing up currents generated
by its synapses. In turn, synaptic weights are adjusted according to a Hebbian rule depending on the
activities of only the postsynaptic and corresponding presynaptic neurons [2].
Following Oja?s work, many multineuron circuits were proposed to extract multiple principal components of the input, for a review see [3]. However, most multineuron algorithms did not meet the
same level of rigor and biological plausibility as the single-neuron algorithm [2, 4] which can be
derived using a normative approach, from a principled objective function [5], and contains only lo1
cal Hebbian learning rules. Algorithms derived from principled objective functions either did not
posess local learning rules [6, 4, 7, 8] or had other biologically implausible features [9]. In other
algorithms, local rules were chosen heuristically rather than derived from a principled objective
function [10, 11, 12, 9, 3, 13, 14, 15, 16].
There is a notable exception to the above observation but it has other shortcomings. The twolayer circuit with reciprocal synapses [17, 18, 19] can be derived from the minimization of the
representation error. However, the activity of principal neurons in the circuit is a dummy variable
without its own dynamics. Therefore, such principal neurons do not integrate their input in time,
contradicting existing experimental observations.
Other normative approaches use an information theoretical objective to compare theoretical limits with experimentally measured information in single neurons or populations [20, 21, 22] or to
calculate optimal synaptic weights in a postulated neural network [23, 22].
Recently, a novel approach to the problem has been proposed [24]. Starting with the Multidimensional Scaling (MDS) strain cost function [25, 26] we derived an algorithm which maps onto a
neuronal circuit with local learning rules. However, [24] had major limitations, which are shared by
vairous other multineuron algorithms:
1. The number of output dimensions was determined by the fixed number of output neurons precluding adaptation to the varying number of informative components. A better solution would be
to let the network decide, depending on the input statistics, how many dimensions to represent
[14, 15]. The dimensionality of neural activity in such a network would be usually less than the
maximum set by the number of neurons.
2. Because output neurons were coupled by anti-Hebbian synapses which are most naturally implemented by inhibitory synapses, if these neurons were to have excitatory outputs, as suggested by
cortical anatomy, they would violate Dale?s law (i.e. each neuron uses only one fast neurotransmitter). Here, following [10], by anti-Hebbian we mean synaptic weights that get more negative
with correlated activity of pre- and postsynaptic neurons.
3. The output had a wide dynamic range which is difficult to implement using biological neurons
with a limited range. A better solution [27, 13] is to equalize the output variance across neurons.
In this paper, we advance the normative approach of [24] by proposing three new objective functions which allow us to overcome the above limitations. We optimize these objective functions by
proceeding as follows. In Section 2, we formulate and solve three optimization problems of the
form:
Offline setting : Y? = arg min L (X, Y) .
(1)
Y
Here, the input to the network, X = [x1 , . . . , xT ] is an n ? T matrix with T centered input data
samples in Rn as its columns and the output of the network, Y = [y1 , . . . , yT ] is a k?T matrix with
corresponding outputs in Rk as its columns. We assume T >> k and T >> n. Such optimization
problems are posed in the so-called offline setting where outputs are computed after seeing all data.
Whereas the optimization problems in the offline setting admit closed-form solution, such setting
is ill-suited for modeling neural computation on the mechanistic level and must be replaced by the
online setting. Indeed, neurons compute an output, yT , for each data sample presentation, xT ,
before the next data sample is presented and past outputs cannot be altered. In such online setting,
optimization is performed at every time step, T , on the objective which is a function of all inputs
and outputs up to time T . Moreover, an online algorithm (also known as streaming) is not capable
of storing all previous inputs and outputs and must rely on a smaller number of state variables.
In Section 3, we formulate three corresponding online optimization problems with respect to yT ,
while keeping all the previous outputs fixed:
Online setting : yT ? arg min L (X, Y) .
(2)
yT
Then we derive algorithms solving these problems online and map their steps onto the dynamics of
neuronal activity and local learning rules for synaptic weights in three neural networks.
We show that the solutions of the optimization problems and the corresponding online algorithms
remove the limitations outlined above by performing the following computational tasks:
2
C
output eig.
?
B
output eig.
output eig.
A
?
?
input eig.
E
D
yk
y1
x1
z1
x2
Principal
x1
...
z1
zl
y1
xn
x1
yk
x2
...
xn
anti-Hebbian synapses
Hebbian
Inter-neurons
input eig.
F
zl
yk
x2
?
input eig.
y1
. . . xn
?
Figure 1: Input-output
functions of the three
offline solutions and
neural network implementations of the
corresponding online
algorithms. A-C. Inputoutput functions of
covariance eigenvalues.
A. Soft-thresholding.
B. Hard-thresholding.
C. Equalization after
thresholding.
D-F.
Corresponding network
architectures.
1. Soft-thresholding the eigenvalues of the input covariance matrix, Figure 1A: eigenvalues below
the threshold are set to zero and the rest are shrunk by the threshold magnitude. Thus, the number of output dimensions is chosen adaptively. This algorithm maps onto a single-layer neural
network with the same architecture as in [24], Figure 1D, but with modified learning rules.
2. Hard-thresholding of input eigenvalues, Figure 1B: eigenvalues below the threshold vanish as
before, but eigenvalues above the threshold remain unchanged. The steps of such algorithm map
onto the dynamics of neuronal activity in a network which, in addition to principal neurons, has a
layer of interneurons reciprocally connected with principal neurons and each other, Figure 1E.
3. Equalization of non-zero eigenvalues, Figure 1C. The corresponding network?s architecture, Figure 1F, lacks reciprocal connections among interneurons. As before, the number of abovethreshold eigenvalues is chosen adaptively and cannot exceed the number of principal neurons. If
the two are equal, this network whitens the output.
In Section 4, we demonstrate that the online algorithms perform well on a synthetic dataset and, in
Discussion, we compare our neural circuits with biological observations.
2
Dimensionality reduction in the offline setting
In this Section, we introduce and solve, in the offline setting, three novel optimization problems
whose solutions reduce the dimensionality of the input. We state our results in three Theorems
which are proved in the Supplementary Material.
2.1
Soft-thresholding of covariance eigenvalues
We consider the following optimization problem in the offline setting:
2
min
X> X ? Y> Y ? ?T IT
F ,
Y
(3)
where ? ? 0 and IT is the T ?T identity matrix. To gain intuition behind this choice of the objective
function let us expand the squared norm and keep only the Y-dependent terms:
2
2
arg min
X> X ? Y> Y ? ?T IT
F = arg min
X> X ? Y> Y
F + 2?T Tr Y> Y , (4)
Y
Y
where the first term matches the similarity of input and output[24] and the second term is a nuclear
norm of Y> Y known to be a convex relaxation of the matrix rank used for low-rank matrix modeling
[28]. Thus, objective function (3) enforces low-rank similarity matching.
We show that the optimal output Y is a projection of the input data, X, onto its principal subspace.
The subspace dimensionality is set by m, the number of eigenvalues of the data covariance matrix,
PT
C = T1 XX> = T1 t=1 xt x>
t , that are greater than or equal to the parameter ?.
3
>
Theorem 1. Suppose
an eigen-decomposition of X> X = VX ?X VX , where ?X =
X
X
X
diag ?X
,
.
.
.
,
?
with
?X
1
1 ? . . . ? ?T . Note that ? has at most n nonzero eigenvalues coinT
ciding with those of T C. Then,
>
Y? = Uk STk (?X , ?T )1/2 VkX ,
(5)
X
X
X
are optima of (3), where STk (? , ?T ) = diag ST ?1 , ?T , . . . , ST ?k , ?T , ST is the softX
X
thresholding function, ST(a, b) = max(a
k consists of the columns of V corresponding
X? b, 0), V
X
X
to the top k eigenvalues, i.e. Vk = v1 , . . . , vk and Uk is any k ? k orthogonal matrix, i.e.
Uk ? O(k). The form (5) uniquely defines all optima of (3), except when k < m, ?X
k > ?T and
X
?X
k = ?k+1 .
2.2
Hard-thresholding of covariance eigenvalues
Consider the following minimax problem in the offline setting:
2
2
min max
X> X ? Y> Y
F ?
Y> Y ? Z> Z ? ?T IT
F ,
Y
(6)
Z
where ? ? 0 and we introduced an internal variable Z, which is an l ? T matrix Z = [z1 , . . . , zT ]
with zt ? Rl . The intuition behind this objective function is again based on similarity matching but
rank regularization is applied indirectly via the internal variable, Z.
>
Theorem 2. Suppose
an eigen-decomposition of X> X = VX ?X VX , where ?X
X
X
X
diag ?1 , . . . , ?T with ?X
1 ? . . . ? ?T ? 0. Assume l ? min(k, m). Then,
>
=
>
Y? = Uk HTk (?X , ?T )1/2 VkX ,
Z? = Ul STl,min(k,m) (?X , ?T )1/2 VlX ,
(7)
X
X
X
are optima of (6), where HTk (? , ?T ) = diag HT ?1 , ?T , . . . , HT ?k , ?T , HT(a, b) =
a?(a ? b) with ?() being the step function: ?(a ? b) = 1if a ? b and ?(a
? b) = 0 if a <
X
X
X
b, STl,min(k,m) (? , ?T ) = diag ST ?1 , ?T , . . . , ST ?min(k,m) , ?T , 0, . . . , 0 ,VpX =
| {z }
l?min(k,m)
X
v1 , . . . , vpX and Up ? O(p). The form (7) uniquely defines all optima (6) except when either
X
1) ? is an eigenvalue of C or 2) k < m and ?X
k = ?k+1 .
2.3
Equalizing thresholded covariance eigenvalues
Consider the following minimax problem in the offline setting:
min max Tr ?X> XY> Y + Y> YZ> Z + ?T Y> Y ? ?T Z> Z ,
Y
Z
(8)
where ? ? 0 and ? > 0. This objective function follows from (6) after dropping the quartic Z term.
>
Theorem 3. Suppose
of X> X is X> X = VX ?X VX , where ?X =
an eigen-decomposition
X
X
X
diag ?X
,
.
.
.
,
?
with
?
?
.
.
.
?
?
?
0.
Assume
l ? min(k, m). Then,
1
1
T
T
p
>
>
Y? = Uk ?T ?k (?X , ?T )1/2 VkX ,
Z? = Ul ?l?T O?Y ? VX ,
(9)
X
X
X
are optima of (8), where ?k (? , ?T ) = diag ? ?1 ? ?T , . . . , ? ?k ? ?T , ?l?T is an
l ? T rectangular diagonal matrix with top min(k, m) diagonals are set to arbitrary nonnegative
constants and the rest are zero, O?Y ? is a block-diagonal orthogonal matrix that has two blocks:
the
min(k, m) dimensional and the bottom block is T ? min(k, m) dimensional, Vp =
Xtop blockX is
v1 , . . . , vp , and Up ? O(p). The form (9) uniquely defines all optima of (8) except when either
X
1) ? is an eigenvalue of C or 2) k < m and ?X
k = ?k+1 .
Remark 1. If k = m, then Y is full-rank and
equalizing variance across all channels.
3
1
T
YY> = ?Ik , implying that the output is whitened,
Online dimensionality reduction using Hebbian/anti-Hebbian neural nets
In this Section, we formulate online versions of the dimensionality reduction optimization problems
presented in the previous Section, derive corresponding online algorithms and map them onto the dynamics of neural networks with biologically plausible local learning rules. The order of subsections
corresponds to that in the previous Section.
4
3.1
Online soft-thresholding of eigenvalues
Consider the following optimization problem in the online setting:
2
yT ? arg min
X> X ? Y> Y ? ?T IT
F .
(10)
yT
By keeping only the terms that depend on yT we get the following objective for (2):
!
!
T
?1
T
?1
X
X
2
2
4
>
>
>
>
L = ?4xT
xt yt yT + 2yT
yt yt + ?T Im yT ? 2kxT k kyT k + kyT k . (11)
t=1
t=1
In the large-T limit, the last two terms can be dropped since the first two terms grow linearly with T
and dominate. The remaining cost is a positive definite quadratic form in yT and the optimization
problem is convex. At its minimum, the following equality holds:
!
!
T
?1
T
?1
X
X
>
>
yt yt + ?T Im yT =
yt xt xT .
(12)
t=1
t=1
While a closed-form analytical solution via matrix inversion exists for yT , we are interested in
biologically plausible algorithms. Instead, we use a weighted Jacobi iteration where yT is updated
according to:
yT ? (1 ? ?) yT + ? WTY X xT ? WTY Y yT ,
(13)
where ? is the weight parameter, and WTY X and WTY Y are normalized input-output and outputoutput covariances,
TP
?1
TP
?1
yt,i xt,k
yt,i yt,j
t=1
YY
YY
YX
WT,ik
= t=1 T ?1
,
WT,i,j6
=
,
WT,ii
= 0.
(14)
=i
TP
?1
P 2
2
?T +
?T +
yt,i
yt,i
t=1
t=1
Iteration (13) can be implemented by the dynamics of neuronal activity in a single-layer network,
Figure 1D. Then, WTY X and WTY Y represent the weights of feedforward (xt ? yt ) and lateral
(yt ? yt ) synaptic connections, respectively. Remarkably, synaptic weights appear in the online
solution despite their absence in the optimization problem formulation (3). Previously, nonnormalized covariances have been used as state variables in an online dictionary learning algorithm [29].
To formulate a fully online algorithm, we rewrite (14) in a recursive form. This requires introducing
Y
Y
=
representing cumulative activity of a neuron i up to time T ? 1, DT,i
a scalar variable DT,i
TP
?1
2
?T +
yt,i
. Then, at each data sample presentation, T , after the output yT converges to a steady
t=1
state, the following updates are performed:
Y
2
DTY +1,i ? DT,i
+ ? + yT,i
,
Y X
X
YX
2
WTY+1,ij
? WT,ij
+ yT,i xT,j ? ? + yT,i
WT,ij /DTY +1,i ,
Y
YY
2
YY
Y
WTY+1,i,j6
=i ? WT,ij + yT,i yT,j ? ? + yT,i WT,ij /DT +1,i .
(15)
Hence, we arrive at a neural network algorithm that solves the optimization problem (10) for streaming data by alternating between two phases. After a data sample is presented at time T , in the first
phase of the algorithm (13), neuron activities are updated until convergence to a fixed point. In the
second phase of the algorithm, synaptic weights are updated for feedforward connections according
to a local Hebbian rule (15) and for lateral connections according to a local anti-Hebbian rule (due
to the (?) sign in equation (13)). Interestingly, in the ? = 0 limit, these updates have the same
form as the single-neuron Oja rule [24, 2], except that the learning rate is not a free parameter but is
determined by the cumulative neuronal activity 1/DTY +1,i [4, 5].
3.2
Online hard-thresholding of eigenvalues
Consider the following minimax problem in the online setting, where we assume ? > 0:
2
2
{yT , zT } ? arg min arg max
X> X ? Y> Y
?
Y> Y ? Z> Z ? ?T IT
.
yT
F
zT
F
(16)
By keeping only those terms that depend on yT or zT and considering the large-T limit, we get the
5
following objective:
2
L = 2?T kyT k ? 4x>
T
T
?1
X
!
xt yt>
t=1
yT ? 2z>
T
T
?1
X
!
zt z>
t + ?T Ik
zT + 4yT>
t=1
T
?1
X
!
yt z>
t
zT .
t=1
(17)
Note that this objective is strongly convex in yT and strongly concave in zT . The solution of this
minimax problem is the saddle-point of the objective function, which is found by setting the gradient
of the objective with respect to {yT , zT } to zero [30]:
!
!
!
!
T
?1
T
?1
T
?1
T
?1
X
X
X
X
>
>
>
>
yt xt xT ?
y t zt zT ,
zt zt + ?T Ik zT =
zt y t y T .
?T yT =
t=1
t=1
t=1
t=1
(18)
To obtain a neurally plausible algorithm, we solve these equations by a weighted Jacobi iteration:
yT ? (1 ? ?) yT + ? WTY X xT ? WTY Z zT , zT ? (1 ? ?) zT + ? WTZY yT ? WTZZ zT .
(19)
Here, similarly to (14), WT are normalized covariances that can be updated recursively:
Y
DTY +1,i ? DT,i
+ ?,
Z
2
DTZ+1,i ? DT,i
+ ? + zT,i
X
YX
YX
WTY+1,ij
? WT,ij
+ yT,i xT,j ? ?WT,ij
/DTY +1,i
Z
YZ
YZ
WTY+1,ij
? WT,ij
+ yT,i zT,j ? ?WT,ij
/DTY +1,i
ZY
ZY
2
Z
WTZY
+1,i,j ? WT,ij + zT,i yT,j ? ? + zT,i WT,ij /DT +1,i
ZZ
2
ZZ
Z
ZZ
WTZZ
WT,ii
= 0.
(20)
+1,i,j6=i ? WT,ij + zT,i zT,j ? ? + zT,i WT,ij /DT +1,i ,
Equations (19) and (20) define an online algorithm that can be naturally implemented by a neural
network with two populations of neurons: principal and interneurons, Figure 1E. Again, after each
data sample presentation, T , the algorithm proceeds in two phases. First, (19) is iterated until
convergence by the dynamics of neuronal activities. Second, synaptic weights are updated according
to local, anti-Hebbian (for synapses from interneurons) and Hebbian (for all other synapses) rules.
3.3
Online thresholding and equalization of eigenvalues
Consider the following minimax problem in the online setting, where we assume ? > 0 and ? > 0:
{yT , zT } ? arg min arg max Tr ?X> XY> Y + Y> YZ> Z + ?TY> Y ? ?TZ> Z . (21)
yT
zT
By keeping only those terms that depend on yT or zT and considering the large-T limit, we get the
following objective:
!
!
T
?1
T
?1
X
X
2
2
>
>
>
>
L = ?T kyT k ? 2xT
xt yt yT ? ?T kzT k + 2yT
y t zt zT .
(22)
t=1
t=1
This objective is strongly convex in yT and strongly concave in zT and its saddle point is given by:
!
!
!
T
?1
T
?1
T
?1
X
X
X
?T yT =
yt x>
xT ?
yt z>
zT ,
?T zT =
zt yt> yT .
(23)
t
t
t=1
t=1
t=1
To obtain a neurally plausible algorithm, we solve these equations by a weighted Jacobi iteration:
yT ? (1 ? ?) yT + ? WTY X xT ? WTY Z zT ,
zT ? (1 ? ?) zT + ?WTZY yT ,
(24)
As before, WT are normalized covariances which can be updated recursively:
Y
DTY +1,i ? DT,i
+ ?,
Z
DTZ+1,i ? DT,i
+?
YX
YX
YX
WT +1,ij ? WT,ij + yT,i xT,j ? ?WT,ij /DTY +1,i
Z
YZ
YZ
WTY+1,ij
? WT,ij
+ yT,i zT,j ? ?WT,ij
/DTY +1,i
ZY
ZY
Z
WTZY
(25)
+1,i,j ? WT,ij + zT,i yT,j ? ?WT,ij /DT +1,i .
Equations (24) and (25) define an online algorithm that can be naturally implemented by a neural
network with principal neurons and interneurons. As beofre, after each data sample presentation at
6
10
10
1
10
102 103 104
T
? T -1.56
-1
10-2
10
10-4
-3
1
10
102 103 104
T
C
?T
? T -1.80
10
102 103 104
T
10
? T -1.53
1
10-1
?T
10-2
-1.33
-1.43
10-3
1
10
Eigenvalue Error
?T
-1.50
103
102
10
1
10-1
10-2
10-3
1
102 103 104
T
Subspace Error
B
Eigenvalue Error
103
102
10
1
10-1
10-2
10-3
1
Subspace Error
Subspace Error
Eigenvalue Error
A
103
102
10
1
10-1
10-2
? T -1.48
1
10
102 103 104
T
10
1
10-1
10-2
10-3
1
? T -1.41
? T -1.38
10
102 103 104
T
Figure 2: Performance of the three neural networks: soft-thresholding (A), hard-thresholding (B),
equalization after thresholding (C). Top: eigenvalue error, bottom: subspace error as a function
of data presentations. Solid lines - means and shades - stds over 10 runs. Red - principal, blue inter-neurons. Dashed lines - best-fit power laws. For metric definitions see text.
time T , the algorithm, first, iterates (24) by the dynamics of neuronal activities until convergence
and, second, updates synaptic weights according to local anti-Hebbian (for synapses from interneurons) and Hebbian (25) (for all other synapses) rules.
While an algorithm similar to (24), (25), but with predetermined learning rates, was previously given
in [15, 14], it has not been derived from an optimization problem. Plumbley?s convergence analysis
of his algorithm [14] suggests that at the fixed point of synaptic updates, the interneuron activity is
also a projection onto the principal subspace. This result is a special case of our offline solution, (9),
supported by the online numerical simulations (next Section).
4
Numerical simulations
Here, we evaluate the performance of the three online algorithms on a synthetic dataset, which is
generated by an n = 64 dimensional colored Gaussian process with a specified covariance matrix.
In this covariance matrix, the eigenvalues, ?1..4 = {5, 4, 3, 2} and the remaining ?5..60 are chosen
uniformly from the interval [0, 0.5]. Correlations are introduced in the covariance matrix by generating random orthonormal eigenvectors. For all three algorithms, we choose ? = 1 and, for the
equalizing algorithm, we choose ? = 1. In all simulated networks, the number of principal neurons,
k = 20, and, for the hard-thresholding and the equalizing algorithms, the number of interneurons,
l = 5. Synaptic weight matrices were initialized randomly, and synaptic update learning rates,
Y
Z
1/D0,i
and 1/D0,i
were initialized to 0.1. Network dynamics is run with a weight ? = 0.1 until the
relative change in yT and zT in one cycle is < 10?5 .
To quantify the performance of these algorithms, we use two different metrics. The first metric,
eigenvalue error, measures the deviation of output covariance eigenvalues from their optimal offline
values given in Theorems 1, 2 and 3. The eigenvalue error at time T is calculated by summing
squared differences between the eigenvalues of T1 YY> or T1 ZZ> , and their optimal offline values
at time T . The second metric, subspace error, quantifies the deviation of the learned subspace from
the true principal subspace. To form such metric, at each T , we calculate the linear transformation that maps inputs, xT , to outputs, yT = FYT X xT and zT = FZX
T xT , at the fixed points of
the neural dynamics stages ((13), (19), (24)) of the three algorithms. Exact expressions for these
matrices for all algorithms are given in the Supplementary Material. Then, at each T , the deviation
X
X >
2
is
Fm,T F>
m,T ? Um,T Um,T F , where Fm,T is an n ? m matrix whose columns are the top m
right singular vectors of FT , Fm,T F>
m,T is the projection matrix to the subspace spanned by these
X
singular vectors, Um,T is an n?m matrix whose columns are the principal eigenvectors of the input
X>
covariance matrix C at time T , UX
m,T Um,T is the projection matrix to the principal subspace.
7
Further numerical simulations comparing the performance of the soft-thresholding algorithm with
? = 0 with other neural principal subspace algorithms can be found in [24].
5
Discussion and conclusions
We developed a normative approach for dimensionality reduction by formulating three novel optimization problems, the solutions of which project the input onto its principal subspace, and rescale
the data by i) soft-thresholding, ii) hard-thresholding, iii) equalization after thresholding of the input
eigenvalues. Remarkably we found that these optimization problems can be solved online using
biologically plausible neural circuits. The dimensionality of neural activity is the number of either
input covariance eigenvalues above the threshold, m, (if m < k) or output neurons, k (if k ? m).
The former case is ubiquitous in the analysis of experimental recordings, for a review see [31].
Interestingly, the division of neurons into two populations, principal and interneurons, in the last
two models has natural parallels in biological neural networks. In biology, principal neurons and
interneurons usually are excitatory and inhibitory respectively. However, we cannot make such an
assignment in our theory, because the signs of neural activities, xT and yT , and, hence, the signs of
synaptic weights, W, are unconstrained. Previously, interneurons were included into neural circuits
[32], [33] outside of the normative approach.
Similarity matching in the offline setting has been used to analyze experimentally recorded neuron activity lending support to our proposal. Semantically similar stimuli result in similar neural
activity patterns in human (fMRI) and monkey (electrophysiology) IT cortices [34, 35]. In addition, [36] computed similarities among visual stimuli by matching them with the similarity among
corresponding retinal activity patterns (using an information theoretic metric).
We see several possible extensions to the algorithms presented here: 1) Our online objective functions may be optimized by alternative algorithms, such as gradient descent, which map onto different
circuit architectures and learning rules. Interestingly, gradient descent-ascent on convex-concave objectives has been previously related to the dynamics of principal and interneurons [37]. 2) Inputs
coming from a non-stationary distribution (with time-varying covariance matrix) can be processed
by algorithms derived from the objective functions where contributions from older data points are
?forgotten?, or ?discounted?. Such discounting results in higher learning rates in the corresponding
online algorithms, even at large T , giving them the ability to respond to variations in data statistics
[24, 4]. Hence, the output dimensionality can track the number of input dimensions whose eigenvalues exceed the threshold. 3) In general, the output of our algorithms is not decorrelated. Such
decorrelation can be achieved by including a correlation-penalizing term in our objective functions
[38]. 4) Choosing the threshold parameter ? requires an a priori knowledge of input statistics. A
better solution, to be presented elsewhere, would be to let the network adjust such threshold adaptively, e.g. by filtering out all the eigenmodes with power below the mean eigenmode power. 5)
Here, we focused on dimensionality reduction using only spatial, as opposed to the spatio-temporal,
correlation structure.
We thank L. Greengard, A. Sengupta, A. Grinshpan, S. Wright, A. Barnett and E. Pnevmatikakis.
References
[1] David H Hubel. Eye, brain, and vision. Scientific American Library/Scientific American Books, 1995.
[2] E Oja. Simplified neuron model as a principal component analyzer. J Math Biol, 15(3):267?273, 1982.
[3] KI Diamantaras and SY Kung. Principal component neural networks: theory and applications. John
Wiley & Sons, Inc., 1996.
[4] B Yang. Projection approximation subspace tracking. IEEE Trans. Signal Process., 43(1):95?107, 1995.
[5] T Hu, ZJ Towfic, C Pehlevan, A Genkin, and DB Chklovskii. A neuron as a signal processing device. In
Asilomar Conference on Signals, Systems and Computers, pages 362?366. IEEE, 2013.
[6] E Oja. Principal components, minor components, and linear neural networks. Neural Networks, 5(6):927?
935, 1992.
[7] R Arora, A Cotter, K Livescu, and N Srebro. Stochastic optimization for pca and pls. In Allerton Conf.
on Communication, Control, and Computing, pages 861?868. IEEE, 2012.
[8] J Goes, T Zhang, R Arora, and G Lerman. Robust stochastic principal component analysis. In Proc. 17th
Int. Conf. on Artificial Intelligence and Statistics, pages 266?274, 2014.
8
[9] Todd K Leen. Dynamics of learning in recurrent feature-discovery networks. NIPS, 3, 1990.
[10] P F?oldiak. Adaptive network for optimal linear feature extraction. In Int. Joint Conf. on Neural Networks,
pages 401?405. IEEE, 1989.
[11] TD Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural
networks, 2(6):459?473, 1989.
[12] J Rubner and P Tavan. A self-organizing network for principal-component analysis. EPL, 10:693, 1989.
[13] MD Plumbley. A hebbian/anti-hebbian network which optimizes information capacity by orthonormalizing the principal subspace. In Proc. 3rd Int. Conf. on Artificial Neural Networks, pages 86?90, 1993.
[14] MD Plumbley. A subspace network that determines its own output dimension. Tech. Rep., 1994.
[15] MD Plumbley. Information processing in negative feedback neural networks. Network-Comp Neural,
7(2):301?305, 1996.
[16] P Vertechi, W Brendel, and CK Machens. Unsupervised learning of an efficient short-term memory
network. In NIPS, pages 3653?3661, 2014.
[17] BA Olshausen and DJ Field. Sparse coding with an overcomplete basis set: A strategy employed by v1?
Vision Res, 37(23):3311?3325, 1997.
[18] AA Koulakov and D Rinberg. Sparse incomplete representations: a potential role of olfactory granule
cells. Neuron, 72(1):124?136, 2011.
[19] S Druckmann, T Hu, and DB Chklovskii. A mechanistic model of early sensory processing based on
subtracting sparse representations. In NIPS, pages 1979?1987, 2012.
[20] AL Fairhall, GD Lewen, W Bialek, and RRR van Steveninck. Efficiency and ambiguity in an adaptive
neural code. Nature, 412(6849):787?792, 2001.
[21] SE Palmer, O Marre, MJ Berry, and W Bialek. Predictive information in a sensory population. PNAS,
112(22):6908?6913, 2015.
[22] E Doi, JL Gauthier, GD Field, J Shlens, et al. Efficient coding of spatial information in the primate retina.
J Neurosci, 32(46):16256?16264, 2012.
[23] R Linsker. Self-organization in a perceptual network. Computer, 21(3):105?117, 1988.
[24] C Pehlevan, T Hu, and DB Chklovskii. A hebbian/anti-hebbian neural network for linear subspace learning: A derivation from multidimensional scaling of streaming data. Neural Comput, 27:1461?1495, 2015.
[25] G Young and AS Householder. Discussion of a set of points in terms of their mutual distances. Psychometrika, 3(1):19?22, 1938.
[26] WS Torgerson. Multidimensional scaling: I. theory and method. Psychometrika, 17(4):401?419, 1952.
[27] HG Barrow and JML Budd. Automatic gain control by a basic neural circuit. Artificial Neural Networks,
2:433?436, 1992.
[28] EJ Cand`es and B Recht. Exact matrix completion via convex optimization. Found Comput Math,
9(6):717?772, 2009.
[29] J Mairal, F Bach, J Ponce, and G Sapiro. Online learning for matrix factorization and sparse coding.
JMLR, 11:19?60, 2010.
[30] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
[31] P Gao and S Ganguli. On simplicity and complexity in the brave new world of large-scale neuroscience.
Curr Opin Neurobiol, 32:148?155, 2015.
[32] M Zhu and CJ Rozell. Modeling inhibitory interneurons in efficient sensory coding models. PLoS Comput
Biol, 11(7):e1004353, 2015.
[33] PD King, J Zylberberg, and MR DeWeese. Inhibitory interneurons decorrelate excitatory cells to drive
sparse code formation in a spiking model of v1. J Neurosci, 33(13):5475?5485, 2013.
[34] N Kriegeskorte, M Mur, DA Ruff, R Kiani, et al. Matching categorical object representations in inferior
temporal cortex of man and monkey. Neuron, 60(6):1126?1141, 2008.
[35] R Kiani, H Esteky, K Mirpour, and K Tanaka. Object category structure in response patterns of neuronal
population in monkey inferior temporal cortex. J Neurophysiol, 97(6):4296?4309, 2007.
[36] G Tka?cik, E Granot-Atedgi, R Segev, and E Schneidman. Retinal metric: a stimulus distance measure
derived from population neural responses. PRL, 110(5):058104, 2013.
[37] HS Seung, TJ Richardson, JC Lagarias, and JJ Hopfield.
excitatory-inhibitory networks. NIPS, 10:329?335, 1998.
Minimax and hamiltonian dynamics of
[38] C Pehlevan and DB Chklovskii. Optimization theory of hebbian/anti-hebbian networks for pca and
whitening. In Allerton Conf. on Communication, Control, and Computing, 2015.
9
Efficient Non-greedy Optimization of Decision Trees
Mohammad Norouzi 1,*    Maxwell D. Collins 2,*    Matthew Johnson 3    David J. Fleet 4    Pushmeet Kohli 5
1,4 Department of Computer Science, University of Toronto
2 Department of Computer Science, University of Wisconsin-Madison
3,5 Microsoft Research
Abstract
Decision trees and randomized forests are widely used in computer vision and machine learning. Standard algorithms for decision tree induction optimize the split
functions one node at a time according to some splitting criteria. This greedy procedure often leads to suboptimal trees. In this paper, we present an algorithm for
optimizing the split functions at all levels of the tree jointly with the leaf parameters, based on a global objective. We show that the problem of finding optimal
linear-combination (oblique) splits for decision trees is related to structured prediction with latent variables, and we formulate a convex-concave upper bound on
the tree's empirical loss. Computing the gradient of the proposed surrogate objective with respect to each training exemplar is O(d^2), where d is the tree depth,
and thus training deep trees is feasible. The use of stochastic gradient descent for
optimization enables effective training with large datasets. Experiments on several classification benchmarks demonstrate that the resulting non-greedy decision
trees outperform greedy decision tree baselines.
1 Introduction
Decision trees and forests [5, 21, 4] have a long and rich history in machine learning [10, 7]. Recent
years have seen an increase in their popularity, owing to their computational efficiency and applicability to large-scale classification and regression tasks. A case in point is Microsoft Kinect where
decision trees are trained on millions of exemplars to enable real-time human pose estimation from
depth images [22].
Conventional algorithms for decision tree induction are greedy. They grow a tree one node at a
time following procedures laid out decades ago by frameworks such as ID3 [21] and CART [5].
While recent work has proposed new objective functions to guide greedy algorithms [20, 12], it
continues to be the case that decision tree applications (e.g., [9, 14]) utilize the same dated methods
of tree induction. Greedy decision tree induction builds a binary tree via a recursive procedure as
follows: beginning with a single node, indexed by i, a split function si is optimized based on a
corresponding subset of the training data Di such that Di is split into two subsets, which in turn
define the training data for the two children of the node i. The intrinsic limitation of this procedure
is that the optimization of si is solely conditioned on Di , i.e., there is no ability to fine-tune the split
function si based on the results of training at lower levels of the tree. This paper proposes a general
framework for non-greedy learning of the split parameters for tree-based methods that addresses this
limitation. We focus on binary trees, while extension to N -ary trees is possible. We show that our
joint optimization of the split functions at different levels of the tree under a global objective not
only promotes cooperation between the split nodes to create more compact trees, but also leads to
better generalization performance.
* Part of this work was done while M. Norouzi and M. D. Collins were at Microsoft Research, Cambridge.
One of the key contributions of this work is establishing a link between the decision tree optimization problem and the problem of structured prediction with latent variables [25]. We present a novel
formulation of the decision tree learning that associates a binary latent decision variable with each
split node in the tree and uses such latent variables to formulate the tree's empirical loss. Inspired
by advances in structured prediction [23, 24, 25], we propose a convex-concave upper bound on the
empirical loss. This bound acts as a surrogate objective that is optimized using stochastic gradient descent (SGD) to find a locally optimal configuration of the split functions. One complication
introduced by this particular formulation is that the number of latent decision variables grows exponentially with the tree depth d. As a consequence, each gradient update will have a complexity of
O(2^d p) for p-dimensional inputs. One of our technical contributions is showing how this complexity
can be reduced to O(d^2 p) by modifying the surrogate objective, thereby enabling efficient learning
of deep trees.
2 Related work
Finding optimal split functions at different levels of a decision tree according to some global objective, such as a regularized empirical risk, is NP-complete [11] due to the discrete and sequential
nature of the decisions in a tree. Thus, finding an efficient alternative to the greedy approach has
remained a difficult objective despite many prior attempts.
Bennett [1] proposes a non-greedy multi-linear programming based approach for global tree optimization and shows that the method produces trees that have higher classification accuracy than
standard greedy trees. However, their method is limited to binary classification with 0-1 loss and
has a high computation complexity, making it only applicable to trees with few nodes.
The work in [15] proposes a means for training decision forests in an online setting by incrementally
extending the trees as new data points are added. As opposed to a naive incremental growing of the
trees, this work models the decision trees with Mondrian Processes.
The Hierarchical Mixture of Experts model [13] uses soft splits rather than hard binary decisions to
capture situations where the transition from low to high response is gradual. The use of soft splits at
internal nodes of the tree yields a probabilistic model in which the log-likelihood is a smooth function of the unknown parameters. Hence, training based on log-likelihood is amenable to numerical
optimization via methods such as expectation maximization (EM). That said, the soft splits necessitate the evaluation of all or most of the experts for each data point, so much of the computational
advantage of the decision tree are lost.
Murthy and Salzburg [17] argue that non-greedy tree learning methods that work by looking ahead
are unnecessary and sometimes harmful. This is understandable since their methods work by minimizing the empirical loss without any regularization, which is prone to overfitting. To avoid this
problem, it is a common practice (see Breiman [4] or Criminisi and Shotton [7] for an overview)
to limit the tree depth and introduce limits on the number of training instances below which a tree
branch is not extended, or to force a diverse ensemble of trees (i.e., a decision forest) through the
use of bagging. Bennett and Blue [2] describe a different way to overcome overfitting by using
max-margin framework and the Support Vector Machines (SVM) at the split nodes of the tree. Subsequently, Bennett et al. [3] show how enlarging the margin of decision tree classifiers results in
better generalization performance.
Our formulation for decision tree induction improves on prior art in a number of ways. Not only
does our latent variable formulation of decision trees enable efficient learning, it can handle any
general loss function while not sacrificing the O(dp) complexity of inference imparted by the tree
structure. Further, our surrogate objective provides a natural way to regularize the joint optimization
of tree parameters to discourage overfitting.
3 Problem formulation
For ease of exposition, this paper focuses on binary classification trees, with m internal (split) nodes,
and m + 1 leaf (terminal) nodes. Note that in a binary tree the number of leaves is always one more
than the number of internal (non-leaf) nodes. An input, x ∈ R^p, is directed from the root of the
tree down through internal nodes to a leaf node. Each leaf node specifies a distribution over k class
labels. Each internal node, indexed by i ∈ {1, ..., m}, performs a binary test by evaluating a node-specific split function s_i(x) : R^p → {−1, +1}. If s_i(x) evaluates to −1, then x is directed to the
left child of node i. Otherwise, x is directed to the right child. And so on down the tree. Each split
function s_i(·), parameterized by a weight vector w_i, is assumed to be a linear threshold function,
i.e., s_i(x) = sgn(w_i^T x). We incorporate an offset parameter to obtain split functions of the form
sgn(w_i^T x − b_i) by appending a constant "−1" to the input feature vector.

[Figure 1: The binary split decisions in a decision tree with m = 3 internal nodes can be thought of as a binary vector h = [h1, h2, h3]^T. Tree navigation to reach a leaf can be expressed in terms of a function f(h), e.g., f([+1, −1, +1]^T) = [0, 0, 0, 1]^T and f([−1, +1, +1]^T) = [0, 1, 0, 0]^T. The selected leaf parameters can be expressed by θ = Θ^T f(h), here θ4 and θ2 respectively.]

Each leaf node, indexed by j ∈ {1, ..., m + 1}, specifies a conditional probability distribution over
class labels, l ∈ {1, ..., k}, denoted p(y = l | j). Leaf distributions are parametrized with a vector
of unnormalized predictive log-probabilities, denoted θ_j ∈ R^k, and a softmax function; i.e.,

    p(y = l | j) = exp(θ_j[l]) / Σ_{α=1}^{k} exp(θ_j[α]),    (1)

where θ_j[α] denotes the α-th element of vector θ_j.
The parameters of the tree comprise the m internal weight vectors, {w_i}_{i=1}^m, and the m + 1 vectors
of unnormalized log-probabilities, one for each leaf node, {θ_j}_{j=1}^{m+1}. We pack these parameters
into two matrices W ∈ R^{m×p} and Θ ∈ R^{(m+1)×k} whose rows comprise weight vectors and leaf
parameters, i.e., W ≡ [w_1, ..., w_m]^T and Θ ≡ [θ_1, ..., θ_{m+1}]^T. Given a dataset of input-output
pairs, D ≡ {x_z, y_z}_{z=1}^n, where y_z ∈ {1, ..., k} is the ground truth class label associated with
input x_z ∈ R^p, we wish to find a joint configuration of oblique splits W and leaf parameters Θ
that minimize some measure of misclassification loss on the training dataset. Joint optimization of
the split functions and leaf parameters according to a global objective is known to be extremely
challenging [11] due to the discrete and sequential nature of the splitting decisions within the tree.
One can evaluate all of the split functions, for every internal node of the tree, on input x by computing sgn(W x), where sgn(·) is the element-wise sign function. One key idea that helps linking
decision tree learning to latent structured prediction is to think of an m-bit vector of potential split
decisions, e.g., h = sgn(W x) ∈ {−1, +1}^m, as a latent variable. Such a latent variable determines
the leaf to which a data point is directed, and then classified using the leaf parameters. To formulate
the loss for (x, y), we introduce a tree navigation function f : H^m → I^{m+1} that maps an m-bit
sequence of split decisions (H^m ≡ {−1, +1}^m) to an indicator vector that specifies a 1-of-(m + 1)
encoding. Such an indicator vector is only non-zero at the index of the selected leaf. Fig. 1 illustrates
the tree navigation function for a tree with 3 internal nodes.
Using the notation developed above, θ = Θ^T f(sgn(W x)) represents the parameters corresponding
to the leaf to which x is directed by the split functions in W. A generic loss function of the form
ℓ(θ, y) measures the discrepancy between the model prediction based on θ and an output y. For
the softmax model given by (1), a natural loss is the negative log probability of the correct label,
referred to as log loss,

    ℓ(θ, y) = ℓ_log(θ, y) = −θ[y] + log Σ_{α=1}^{k} exp(θ[α]).    (2)
For regression tasks, when y ∈ R^q, and the value of θ ∈ R^q is directly emitted as the model
prediction, a natural choice of ℓ is squared loss,

    ℓ(θ, y) = ℓ_sqr(θ, y) = ‖θ − y‖².    (3)

One can adopt other forms of loss within our decision tree learning framework as well. The goal of
learning is to find W and Θ that minimize empirical loss, for a given training set D, that is,

    L(W, Θ; D) = Σ_{(x,y)∈D} ℓ(Θ^T f(sgn(W x)), y).    (4)
Direct global optimization of empirical loss L(W, Θ; D) with respect to W is challenging. It is a
discontinuous and piecewise-constant function of W. Furthermore, given an input x, the navigation
function f(·) yields a leaf parameter vector based on a sequence of binary tests, where the results of
the initial tests determine which subsequent tests are performed. It is not clear how this dependence
of binary tests should be formulated.
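To make the notation concrete, the following NumPy sketch (our illustration, not part of the paper) evaluates the tree navigation f(sgn(W x)) and the empirical loss of Eq. (4). The heap-style node layout, with the children of internal node i stored at positions 2i+1 and 2i+2, is an assumption made for this example; the formulation above only requires some fixed indexing of internal nodes and leaves.

```python
import numpy as np

def predict_leaf(W, x):
    """Walk a full binary tree with m internal nodes stored in heap order
    (children of node i at 2*i+1 and 2*i+2, 0-based; hypothetical layout)
    and return the index of the leaf that x reaches."""
    m = W.shape[0]
    i = 0
    while i < m:
        # sgn(w_i^T x): go left on -1, right on +1
        i = 2 * i + 1 if W[i] @ x <= 0 else 2 * i + 2
    return i - m  # leaf index in {0, ..., m}

def log_loss(theta, y):
    """Eq. (2): negative log-probability of label y under the softmax of Eq. (1)."""
    return -theta[y] + np.log(np.sum(np.exp(theta - theta.max()))) + theta.max()

def empirical_loss(W, Theta, X, Y):
    """Eq. (4): empirical loss of the tree's predictions over a dataset."""
    return sum(log_loss(Theta[predict_leaf(W, x)], y) for x, y in zip(X, Y))
```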
4 Decision trees and structured prediction
To overcome the intractability in the optimization of L, we develop a piecewise smooth upper bound
on empirical loss. Our upper bound is inspired by the formulation of structured prediction with latent
variables [25]. A key observation that links decision tree learning to structured prediction is that
one can re-express sgn(W x) in terms of a latent variable h. That is,

    sgn(W x) = argmax_{h∈H^m} (h^T W x).    (5)

In this form, the decision tree's split functions implicitly map an input x to a binary vector h by maximizing a score function h^T W x, the inner product of h and W x. One can re-express the score
function in terms of a more familiar form of a joint feature space on h and x, as w^T φ(h, x), where
φ(h, x) = vec(h x^T), and w = vec(W). Previously, Norouzi and Fleet [19] used the same reformulation (5) of linear threshold functions to learn binary similarity preserving hash functions.

Given (5), we re-express empirical loss as,

    L(W, Θ; D) = Σ_{(x,y)∈D} ℓ(Θ^T f(ĥ(x)), y),  where  ĥ(x) = argmax_{h∈H^m} (h^T W x).    (6)

This objective resembles the objective functions used in structured prediction, and since we do not
have a priori access to the ground truth split decisions, ĥ(x), this problem is a form of structured
prediction with latent variables.
5 Upper bound on empirical loss
We develop an upper bound on loss for an input-output pair (x, y), which takes the form,

    ℓ(Θ^T f(sgn(W x)), y) ≤ max_{g∈H^m} [g^T W x + ℓ(Θ^T f(g), y)] − max_{h∈H^m} (h^T W x).    (7)

To validate the bound, first note that the second term on the RHS is maximized by h = ĥ(x) =
sgn(W x). Second, when g = ĥ(x), it is clear that the LHS equals the RHS. Finally, for all other
values of g, the RHS can only get larger than when g = ĥ(x) because of the max operator. Hence,
the inequality holds. An algebraic proof of (7) is presented in the supplementary material.

In the context of structured prediction, the first term of the upper bound, i.e., the maximization over
g, is called loss-augmented inference, as it augments the inference problem, i.e., the maximization
over h, with a loss term. Fortunately, the loss-augmented inference for our decision tree learning
formulation can be solved exactly, as discussed below.

It is also notable that the loss term on the LHS of (7) is invariant to the scale of W, but the upper
bound on the right side of (7) is not. As a consequence, as with binary SVM and margin-rescaling
formulations of structural SVM [24], we introduce a regularizer on the norm of W when optimizing
the bound. To justify the regularizer, we discuss the effect of the scale of W on the bound.
Proposition 1. The upper bound on the loss becomes tighter as a constant multiple of W increases,
i.e., for a > b > 0:

    max_{g∈H^m} [a g^T W x + ℓ(Θ^T f(g), y)] − max_{h∈H^m} (a h^T W x) ≤
    max_{g∈H^m} [b g^T W x + ℓ(Θ^T f(g), y)] − max_{h∈H^m} (b h^T W x).    (8)

Proof. Please refer to the supplementary material for the proof.

In the limit, as the scale of W approaches +∞, the loss term ℓ(Θ^T f(g), y) becomes negligible compared to the score term g^T W x. Thus, the solutions to the loss-augmented inference and inference
problems become almost identical, except when an element of W x is very close to 0. Thus, even
though a larger ‖W‖ yields a tighter bound, it makes the bound approach the loss itself, and therefore becomes nearly piecewise-constant, which is hard to optimize. Based on Proposition 1, one
easy way to decrease the upper bound is to increase the norm of W, which does not affect the loss.
Our experiments indicate that a lower value of the loss can be achieved when the norm of W is
regularized. We therefore constrain the norm of W to obtain an objective with better generalization.
Since each row of W acts independently in a decision tree in the split functions, it is reasonable to
constrain the norm of each row independently. Summing over the bounds for different training pairs
and constraining the norm of rows of W, we obtain the following optimization problem, called the
surrogate objective:

    minimize  L′(W, Θ; D) = Σ_{(x,y)∈D} [ max_{g∈H^m} (g^T W x + ℓ(Θ^T f(g), y)) − max_{h∈H^m} (h^T W x) ]    (9)
    s.t.  ‖w_i‖² ≤ ν    for all i ∈ {1, ..., m},

where ν ∈ R+ is a regularization parameter and w_i is the i-th row of W. For all values of ν, we
have L(W, Θ; D) ≤ L′(W, Θ; D). Instead of using the typical Lagrange form for regularization, we
employ hard constraints to enable sparse gradient updates of the rows of W, since the gradients for
most rows of W are zero at each step in training.
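For reference, the per-example term of Eq. (9) can be evaluated as in the sketch below (our illustration; it relies on the loss_augmented_inference routine sketched in Section 6 further down, and on the hypothetical heap layout of the earlier snippet).

```python
def surrogate_loss(W, Theta, x, y):
    """Per-example term of the surrogate objective, Eq. (9): the
    loss-augmented maximum minus the plain maximum of the score.
    loss_augmented_inference is defined in the Section 6 sketch below."""
    wx = W @ x
    h = np.sign(wx); h[h == 0] = 1.0
    g, j = loss_augmented_inference(W, Theta, x, y)
    return g @ wx + log_loss(Theta[j], y) - h @ wx
```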
6 Optimizing the surrogate objective
Even though minimizing the surrogate objective of (9) entails a non-convex optimization,
L′(W, Θ; D) is much better behaved than empirical loss in (4). L′(W, Θ; D) is piecewise linear
and convex-concave in W, and the constraints on W define a convex set.

Loss-augmented inference. To evaluate and use the surrogate objective in (9) for optimization, we
must solve a loss-augmented inference problem to find the binary code that maximizes the sum of
the score and loss terms:

    ĝ(x) = argmax_{g∈H^m} [g^T W x + ℓ(Θ^T f(g), y)].    (10)

An observation that makes this optimization tractable is that f(g) can only take on m + 1 distinct
values, which correspond to terminating at one of the m + 1 leaves of the tree and selecting a leaf
parameter from {θ_j}_{j=1}^{m+1}. Fortunately, for any leaf index j ∈ {1, ..., m+1}, we can solve

    argmax_{g∈H^m} [g^T W x + ℓ(θ_j, y)]    s.t.  f(g) = 1_j,    (11)

efficiently. Note that if f(g) = 1_j, then Θ^T f(g) equals the j-th row of Θ, i.e., θ_j. To solve (11)
we need to set all of the binary bits in g corresponding to the path from the root to the leaf j to be
consistent with the path direction toward the leaf j. However, bits of g that do not appear on this path
have no effect on the output of f(g), and all such bits should be set based on g[i] = sgn(w_i^T x) to
obtain maximum g^T W x. Accordingly, we can essentially ignore the off-the-path bits by subtracting
sgn(W x)^T W x from (11) to obtain,

    argmax_{g∈H^m} [g^T W x + ℓ(θ_j, y)] = argmax_{g∈H^m} [(g − sgn(W x))^T W x + ℓ(θ_j, y)].    (12)
Algorithm 1 Stochastic gradient descent (SGD) algorithm for non-greedy decision tree learning.
1: Initialize W(0) and Θ(0) using greedy procedure
2: for t = 0 to τ do
3:   Sample a pair (x, y) uniformly at random from D
4:   ĥ ← sgn(W(t) x)
5:   ĝ ← argmax_{g∈H^m} g^T W(t) x + ℓ(Θ^T f(g), y)
6:   W(tmp) ← W(t) − η ĝ x^T + η ĥ x^T
7:   for i = 1 to m do
8:     W(t+1)_{i,·} ← min{1, √ν / ‖W(tmp)_{i,·}‖₂} W(tmp)_{i,·}
9:   end for
10:  Θ(t+1) ← Θ(t) − η (∂/∂Θ) ℓ(Θ^T f(ĝ), y) |_{Θ=Θ(t)}
11: end for
Note that sgn(W x)^T W x is constant in g, and this subtraction zeros out all bits in g that are not on
the path to the leaf j. So, to solve (12), we only need to consider the bits on the path to the leaf j for
which sgn(w_i^T x) is not consistent with the path direction. Using a single depth-first search on the
decision tree, we can solve (11) for every j, and among those, we pick the one that maximizes (11).

The algorithm described above is O(mp) ≡ O(2^d p), where d is the tree depth, and we require
a multiple of p for computing the inner product w_i^T x at each internal node i. This algorithm is
not efficient for deep trees, especially as we need to perform loss-augmented inference once for
every stochastic gradient computation. In what follows, we develop an alternative, more efficient
formulation and algorithm with time complexity of O(d^2 p).
Fast loss-augmented inference. To motivate the fast loss-augmented inference algorithm, we formulate a slightly different upper bound on the loss, i.e.,

    ℓ(Θ^T f(sgn(W x)), y) ≤ max_{g∈B1(sgn(W x))} [g^T W x + ℓ(Θ^T f(g), y)] − max_{h∈H^m} h^T W x,    (13)

where B1(sgn(W x)) denotes the Hamming ball of radius 1 around sgn(W x), i.e., B1(sgn(W x)) ≡
{g ∈ H^m | ‖g − sgn(W x)‖_H ≤ 1}, hence g ∈ B1(sgn(W x)) implies that g and sgn(W x) differ
in at most one bit. The proof of (13) is identical to the proof of (7). The key benefit of this new
formulation is that loss-augmented inference with the new bound is computationally efficient. Since
ĝ and sgn(W x) differ in at most one bit, then f(ĝ) can only take d + 1 distinct values. Thus we
need to evaluate (12) for at most d + 1 values of j, requiring a running time of O(d^2 p).
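A direct way to implement the exact loss-augmented inference of Eqs. (10)-(12) is to enumerate all m + 1 leaves, force the split bits along the root-to-leaf path, and keep every off-path bit at sgn(w_i^T x). The sketch below (our illustration, with the hypothetical heap layout used earlier) does this in O(mp + md) time; the single depth-first search described above shares work across leaves, and restricting the enumeration to the d + 1 leaves reachable within B1(sgn(W x)) gives the O(d^2 p) fast variant.

```python
def loss_augmented_inference(W, Theta, x, y, loss=log_loss):
    """Exact solver for Eq. (10): for each leaf j, solve Eq. (11) by
    forcing the bits on the root-to-j path and leaving all off-path
    bits at sgn(w_i^T x); return the maximizing code and its leaf."""
    m = W.shape[0]
    wx = W @ x
    s = np.sign(wx)
    s[s == 0] = 1.0                       # arbitrary tie-breaking
    best_score, best_g, best_leaf = -np.inf, None, None
    for j in range(m + 1):                # enumerate all leaves
        g = s.copy()
        node = j + m                      # heap position of leaf j
        while node > 0:                   # force bits along the path
            parent = (node - 1) // 2
            g[parent] = -1.0 if node == 2 * parent + 1 else 1.0
            node = parent
        score = g @ wx + loss(Theta[j], y)
        if score > best_score:
            best_score, best_g, best_leaf = score, g, j
    return best_g, best_leaf              # f(best_g) = 1_{best_leaf}
```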
Stochastic gradient descent (SGD). One reasonable approach to minimizing (9) uses stochastic
gradient descent (SGD), the steps of which are outlined in Alg 1. Here, η denotes the learning rate,
and τ is the number of optimization steps. Line 6 corresponds to a gradient update in W, which is
supported by the fact that (∂/∂W) h^T W x = h x^T. Line 8 performs projection back to the feasible region
of W, and Line 10 updates Θ based on the gradient of loss. Our implementation modifies Alg 1 by
adopting common SGD tricks, including the use of momentum and mini-batches.
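Concretely, one update of Alg 1 might read as follows (a sketch, assuming log loss at the leaves; loss_augmented_inference refers to the sketch above).

```python
def sgd_step(W, Theta, x, y, eta=0.1, nu=10.0):
    """One update of Alg 1 (Lines 4-10) on a single example (x, y);
    eta is the learning rate, nu the squared-norm budget per row of W."""
    h = np.sign(W @ x); h[h == 0] = 1.0               # Line 4
    g, j = loss_augmented_inference(W, Theta, x, y)   # Line 5
    W = W - eta * np.outer(g - h, x)                  # Line 6
    norms = np.linalg.norm(W, axis=1, keepdims=True)  # Lines 7-9: project rows
    W = W * np.minimum(1.0, np.sqrt(nu) / np.maximum(norms, 1e-12))
    p = np.exp(Theta[j] - Theta[j].max())             # Line 10: softmax - e_y
    p /= p.sum()
    p[y] -= 1.0
    Theta = Theta.copy()
    Theta[j] -= eta * p
    return W, Theta
```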
Stable SGD (SSGD). Even though Alg 1 achieves good training and test accuracy relatively quickly,
we observe that after several gradient updates some of the leaves may end up not being assigned to
any data points and hence the full tree capacity is not exploited. We call such leaves inactive as
opposed to active leaves that are assigned to at least one training data point. An inactive leaf may
become active again, but this rarely happens given the form of gradient updates. To discourage
abrupt changes in the number of inactive leaves, we introduce a variant of SGD, in which the assignments of data points to leaves are fixed for a number of gradient updates. Thus, the bound is
optimized with respect to a set of data point leaf assignment constraints. When the improvement in
the bound becomes negligible the leaf assignment variables are updated, followed by another round
of optimization of the bound. We call this algorithm Stable SGD (SSGD) because it changes the assignment of data points to leaves more conservatively than SGD. Let a(x) denote the 1-of-(m + 1)
encoding of the leaf to which a data point x should be assigned.

[Figure 2: Test and training accuracy of a single tree as a function of tree depth (6 to 18) for different methods (Axis-aligned, OC1, Random, CO2, Non-greedy) on SensIT, Connect4, Protein, and MNIST. Non-greedy trees achieve better test accuracy throughout different depths and exhibit less vulnerability to overfitting.]

Then, each iteration of SSGD
with fast loss-augmented inference relies on the following upper bound on loss,

    ℓ(Θ^T f(sgn(W x)), y) ≤ max_{g∈B1(sgn(W x))} [g^T W x + ℓ(Θ^T f(g), y)] − max_{h∈H^m | f(h)=a(x)} h^T W x.    (14)

One can easily verify that the RHS of (14) is larger than the RHS of (13), hence the inequality.
Computational complexity. To analyze the computational complexity of each SGD step, we note
that the Hamming distance between ĝ (defined in (10)) and ĥ = sgn(W x) is bounded above by the
depth of the tree d. This is because only those elements of ĝ corresponding to the path to a selected
leaf can differ from sgn(W x). Thus, for SGD the expression (ĝ − ĥ) x^T needed for Line 6 of Alg 1
can be computed in O(dp), if we know which bits of ĥ and ĝ differ. Accordingly, Lines 6 and 7
can be performed in O(dp). The computational bottleneck is the loss-augmented inference in Line
5. When fast loss-augmented inference is performed in O(d^2 p) time, the total time complexity of a
gradient update for both SGD and SSGD becomes O(d^2 p + k), where k is the number of labels.
7 Experiments
Experiments are conducted on several benchmark datasets from LibSVM [6] for multi-class classification, namely SensIT, Connect4, Protein, and MNIST. We use the provided train/validation/test
sets when available. If such splits are not provided, we use a random 80%/20% split of the training
data for train/validation and a random 64%/16%/20% split for train/validation/test sets.
We compare our method for non-greedy learning of oblique trees with several greedy baselines,
including conventional axis-aligned trees based on information gain, OC1 oblique trees [17] that
use coordinate descent for optimization of the splits, and random oblique trees that select the best
split function from a set of randomly generated hyperplanes based on information gain. We also
compare with the results of CO2 [18], which is a special case of our upper bound approach applied
greedily to trees of depth 1, one node at a time. Any base algorithm for learning decision trees can
be augmented by post-training pruning [16], or building ensembles with bagging [4] or boosting [8].
However, the key differences between non-greedy trees and baseline greedy trees become most
apparent when analyzing individual trees. For a single tree the major determinant of accuracy is the
size of the tree, which we control by changing the maximum tree depth.
Fig. 2 depicts test and training accuracy for non-greedy trees and four other baselines as function of
tree depth. We evaluate trees of depth 6 up to 18 at depth intervals of 2. The hyper-parameters for
each method are tuned for each depth independently. While the absolute accuracy of our non-greedy
trees varies between datasets, a few key observations hold for all cases.

[Figure 3: The effect of ν on the structure of trees trained on MNIST (tree depths 10, 13, 16); the number of active leaves is plotted against the regularization parameter ν on a log scale. A small value of ν prunes the tree to use far fewer leaves than the axis-aligned baseline used for initialization (dotted line).]

First, we observe that non-
greedy trees achieve the best test performance across tree depths across multiple datasets. Secondly,
trees trained using our non-greedy approach seem to be less susceptible to overfitting and achieve
better generalization performance at various tree depths. As described below, we think that the norm
regularization provides a principled way to tune the tightness of the tree's fit to the training data.
Finally, the comparison between non-greedy and CO2 [18] trees concentrates on the non-greediness
of the algorithm, as it compares our method with its simpler variant, which is applied greedily one
node at a time. We find that in most cases, the non-greedy optimization helps by improving upon
the results of CO2.
A key hyper-parameter of our method is the regularization constant ν in (9), which controls the tightness of the upper bound. With a small ν, the norm constraints force the
method to choose a W with a large margin at each internal node. The choice of ν is therefore closely related to the
generalization of the learned trees. As shown in Fig. 3, ν also implicitly controls the degree of pruning of the leaves
of the tree during training. We train multiple trees for different values of ν ∈ {0.1, 1, 4, 10, 43, 100}, and we pick
the value of ν that produces the tree with minimum validation error. We also tune the choice of the SGD learning rate,
η, in this step. This ν and η are used to build a tree using the union of both the training and validation sets, which is
evaluated on the test set.

[Figure 4: Total time (sec) to execute 1000 epochs of SGD on the Connect4 dataset using loss-augmented inference and its fast variant, as a function of tree depth (6 to 18).]
To build non-greedy trees, we initially build an axis-aligned tree with split functions that threshold
a single feature optimized using conventional procedures that maximize information gain. The axis-aligned split is used to initialize a greedy variant of the tree training procedure called CO2 [18]. This
provides initial values for W and Θ for the non-greedy procedure.

Fig. 4 shows an empirical comparison of training time for SGD with loss-augmented inference
and fast loss-augmented inference. As expected, run-time of loss-augmented inference exhibits
exponential growth with deep trees whereas its fast variant is much more scalable. We expect to see
much larger speedup factors for larger datasets. Connect4 only has 55,000 training points.
8 Conclusion
We present a non-greedy method for learning decision trees, using stochastic gradient descent to
optimize an upper bound on the empirical loss of the tree's predictions on the training set. Our model
poses the global training of decision trees in a well-characterized optimization framework. This
makes it simpler to pose extensions that could be considered in future work. Efficiency gains could
be achieved by learning sparse split functions via sparsity-inducing regularization on W . Further,
the core optimization problem permits applying the kernel trick to the linear split parameters W ,
making our overall model applicable to learning higher-order split functions or training decision
trees on examples in arbitrary Reproducing Kernel Hilbert Spaces.
Acknowledgment. MN was financially supported in part by a Google fellowship. DF was financially supported in part by NSERC Canada and the NCAP program of the CIFAR.
References
[1] K. P. Bennett. Global tree optimization: A non-greedy decision tree algorithm. Computing Science and Statistics, pages 156-156, 1994.
[2] K. P. Bennett and J. A. Blue. A support vector machine approach to decision trees. In Department of Mathematical Sciences Math Report No. 97-100, Rensselaer Polytechnic Institute, pages 2396-2401, 1997.
[3] K. P. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu. Enlarging the margins in perceptron decision trees. Machine Learning, 41(3):295-313, 2000.
[4] L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[5] L. Breiman, J. Friedman, R. A. Olshen, and C. J. Stone. Classification and regression trees. Chapman & Hall/CRC, 1984.
[6] C. C. Chang and C. J. Lin. LIBSVM: a library for support vector machines, 2001.
[7] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer, 2013.
[8] Jerome H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189-1232, 2001.
[9] J. Gall, A. Yao, N. Razavi, L. Van Gool, and V. Lempitsky. Hough forests for object detection, tracking, and action recognition. IEEE Trans. PAMI, 33(11):2188-2202, 2011.
[10] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning (Ed. 2). Springer, 2009.
[11] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15-17, 1976.
[12] J. Jancsary, S. Nowozin, and C. Rother. Loss-specific training of non-parametric image restoration models: A new state of the art. ECCV, 2012.
[13] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Comput., 6(2):181-214, 1994.
[14] E. Konukoglu, B. Glocker, D. Zikic, and A. Criminisi. Neighbourhood approximation forests. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2012, pages 75-82. Springer, 2012.
[15] B. Lakshminarayanan, D. M. Roy, and Y. W. Teh. Mondrian forests: Efficient online random forests. In Advances in Neural Information Processing Systems, pages 3140-3148, 2014.
[16] J. Mingers. An empirical comparison of pruning methods for decision tree induction. Machine Learning, 4(2):227-243, 1989.
[17] S. K. Murthy and S. L. Salzberg. On growing better decision trees from data. PhD thesis, Johns Hopkins University, 1995.
[18] M. Norouzi, M. D. Collins, D. J. Fleet, and P. Kohli. CO2 forest: Improved random forest by continuous optimization of oblique splits. arXiv:1506.06155, 2015.
[19] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[20] S. Nowozin. Improved information gain estimates for decision tree induction. ICML, 2012.
[21] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[22] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, P. Kohli, A. Criminisi, A. Kipman, et al. Efficient human pose estimation from single depth images. IEEE Trans. PAMI, 2013.
[23] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. NIPS, 2003.
[24] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. ICML, 2004.
[25] C. N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. ICML, 2009.
Statistical Topological Data Analysis - A Kernel Perspective
Stefan Huber
IST Austria
[email protected]
Roland Kwitt
Department of Computer Science
University of Salzburg
[email protected]
Marc Niethammer
Department of Computer Science and BRIC
UNC Chapel Hill
[email protected]
Weili Lin
Department of Radiology and BRIC
UNC Chapel Hill
[email protected]
Ulrich Bauer
Department of Mathematics
Technische Universität München (TUM)
[email protected]
Abstract
We consider the problem of statistical computations with persistence diagrams, a
summary representation of topological features in data. These diagrams encode
persistent homology, a widely used invariant in topological data analysis. While
several avenues towards a statistical treatment of the diagrams have been explored
recently, we follow an alternative route that is motivated by the success of methods
based on the embedding of probability measures into reproducing kernel Hilbert
spaces. In fact, a positive definite kernel on persistence diagrams has recently
been proposed, connecting persistent homology to popular kernel-based learning
techniques such as support vector machines. However, important properties of that
kernel enabling a principled use in the context of probability measure embeddings
remain to be explored. Our contribution is to close this gap by proving universality
of a variant of the original kernel, and to demonstrate its effective use in twosample hypothesis testing on synthetic as well as real-world data.
1 Introduction
Over the past years, advances in adopting methods from algebraic topology to study the "shape" of
data (e.g., point clouds, images, shapes) have given birth to the field of topological data analysis
(TDA) [5]. In particular, persistent homology has been widely established as a tool for capturing
"relevant" topological features at multiple scales. The output is a summary representation in the
form of so called barcodes or persistence diagrams, which, roughly speaking, encode the life span
of the features. These "topological summaries" have been successfully used in a variety of different
fields, including, but not limited to, computer vision and medical imaging. Applications range from
the analysis of cortical surface thickness [8] to the structure of brain networks [15], brain artery trees
[2] or histology images for breast cancer analysis [22].
Despite the success of TDA in these areas, a statistical treatment of persistence diagrams (e.g.,
computing means or variances) turns out to be difficult, not least because of the unusual structure
of the barcodes as intervals, rather than numerical quantities [1]. While substantial advancements in
the direction of statistical TDA have been made by studying the structure of the space of persistence
diagrams endowed with p-Wasserstein metrics (or variants thereof) [18, 19, 28, 11], it is technically
and computationally challenging to work in this space. In a machine learning context, we would
rather work with Hilbert spaces, primarily due to the highly regular structure and the abundance of
readily available and well-studied methods for statistics and learning.
One way to circumvent issues such as non-uniqueness of the Fréchet mean [18] or computationally intensive algorithmic strategies [28] is to consider mappings of persistence barcodes into linear
function spaces. Statistical computations can then be performed based on probability theory on Banach spaces [14]. However, the methods proposed in [4] cannot guarantee that different probability
distributions can always be distinguished by a statistical test.
Contribution. In this work, we consider the task of statistical computations with persistence diagrams. Our contribution is to approach this problem by leveraging the theory of embedding probability measures into reproducing kernel Hilbert spaces [23], in our case, probability measures on
the space of persistence diagrams. In particular, we start with a recently introduced kernel on persistence diagrams by Reininghaus et al. [20] and identify missing properties that are essential for a
well-founded use in the aforementioned framework. By enforcing mild restrictions on the underlying space, we can in fact close the remaining gaps and prove that a minor modification of the kernel
is universal in the sense of Steinwart [25] (see Section 3). Our experiments demonstrate, on a couple
of synthetic and real-world data samples, how this universal kernel enables a principled solution to
the selected problem of (kernel-based) two-sample hypothesis testing.
Related work. In the following, we focus our attention on work related to a statistical treatment of
persistent homology. Since this is a rather new field, several avenues are pursued in parallel. Mileyko
et al. [18] study properties of the set of persistence diagrams when endowed with the p-Wasserstein
metric. They show, for instance, that under this metric, the space is Polish and the Fréchet mean
exists. However, it is not unique and no algorithmic solution is provided. Turner et al. [28] later show
that the L²-Wasserstein metric on the set of persistence diagrams yields a geodesic space, and that
the additional structure can be leveraged to construct an algorithm for computing the Fréchet mean
and to prove a law of large numbers. In [19], Munch et al. take a different approach and introduce
a probabilistic variant of the Fréchet mean as a probability measure on persistence diagrams. While
this yields a unique mean, the solution itself is not a persistence diagram anymore. Techniques for
computing confidence sets for persistence diagrams are investigated by Fasy et al. [11]. The authors
focus on the Bottleneck metric (i.e., a special case of the p-Wasserstein metric when p = ∞),
remarking that similar results could potentially be obtained for the case of the p-Wasserstein metric
under stronger assumptions on the underlying topological space.
While the aforementioned results concern properties of the set of persistence diagrams equipped
with p-Wasserstein metrics, a different strategy is advocated by Bubenik in [4]. The key idea is to
circumvent the peculiarities of the metric by mapping persistence diagrams into function spaces.
One such representation is the persistence landscape, i.e., a sequence of 1-Lipschitz functions in
a Banach space. While it is in general not possible to go back and forth between landscapes and
persistence diagrams, the Banach space structure enables a well-founded theoretical treatment of statistical concepts, such as averages or confidence intervals [14]. Chazal et al. [6] establish additional
convergence results and propose a bootstrap procedure for obtaining confidence sets.
Another, less statistically oriented, approach towards a convenient summary of persistence barcodes
is followed by Adcock et al. [1]. The idea is to attach numerical quantities to persistence barcodes,
which can then be used as input to any machine learning algorithm in the form of feature vectors.
This strategy is rooted in a study of algebraic functions on barcodes. However, it does not necessarily
guarantee stability of the persistence summary representation, which is typically a desired property
of a feature map [20].
Our proposed approach to statistical TDA is also closely related to work in the field of kernel-based
learning techniques [21] or, to be more specific, to the embedding of probability measures into a
RKHS [23] and the study of suitable kernel functions in that context [7, 24]. In fact, the idea of
mapping probability measures into a RKHS has led to many developments generalizing statistical
concepts, such as two-sample testing [13], testing for conditional independence, or statistical inference [12], from Euclidean spaces to other domains equipped with a kernel. In the context of
supervised learning with TDA, Reininghaus et al. [20] recently established a first connection to
kernel-based learning techniques via the definition of a positive definite kernel on persistence diagrams. While positive definiteness is sufficient for many techniques, such as support vector machines
or kernel PCA, additional properties are required in the context of embedding probability measures.
Organization. Section 2 briefly reviews some background material and introduces some notation.
In Section 3, we show how a slight modification of the kernel in [20] fits into the framework of
embedding probability measures into a RKHS. Section 4 presents a set of experiments on synthetic
and real data, highlighting the advantages of the kernel. Finally, Section 5 summarizes the main
contributions and discusses future directions.
2 Background
Since our discussion of statistical TDA from a kernel perspective is largely decoupled from how
the topological summaries are obtained, we only review two important notions for the theory of
persistent homology: filtrations and persistence diagrams. For a thorough treatment of the topic, we
refer the reader to [10]. We also briefly review the concept of embedding probability measures into
a RKHS, following [23].
Filtrations. A standard approach to TDA assigns to some metric space (M, d_M) a growing sequence
of simplicial complexes (indexed by a parameter t ∈ R), typically referred to as a filtration. Recall that an abstract simplicial complex is a collection of nonempty sets that is closed under taking
nonempty subsets. Persistent homology then studies the evolution of the homology of these complexes for a growing parameter t. Some widely used constructions, particularly for point cloud data,
are the Vietoris-Rips and the Čech complex. The Vietoris-Rips complex is a simplicial complex
with vertex set M such that [x_0, ..., x_m] is an m-simplex iff max_{i,j≤m} d_M(x_i, x_j) ≤ t. For a point
set M ⊂ R^d in Euclidean space, the Čech complex is a simplicial complex with vertex set M ⊂ R^d
such that [x_0, ..., x_m] is an m-simplex iff the closed balls of radius t centered at the x_i have a
non-empty common intersection.
A more general way of obtaining a filtration is to consider the sublevel sets f^{−1}((−∞, t]), for t ∈ R,
of a function f : X → R on a topological space X. For instance, in the case of surface meshes,
a commonly used function is the heat kernel signature (HKS) [27]. The Čech and Vietoris-Rips
filtrations appear as special cases, both being sublevel set filtrations of an appropriate function on
the subsets (abstract simplices) of the vertex set M: for the Čech filtration, the function assigns to
each subset the radius of its smallest enclosing sphere, while for the Vietoris-Rips filtration, the
function assigns to each subset its diameter (equivalently, the length of its longest edge).
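As a minimal illustration of the Vietoris-Rips construction as a sublevel set filtration, the sketch below (ours, for exposition only) assigns to every abstract simplex its diameter, i.e., the value of t at which it enters the complex. Dedicated TDA libraries, e.g., GUDHI or Dionysus, compute such filtrations and their persistent homology far more efficiently.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

def rips_filtration(points, max_dim=2):
    """Enumerate simplices of the Vietoris-Rips filtration up to
    dimension max_dim; each simplex is paired with its diameter,
    the filtration value at which it appears."""
    d = squareform(pdist(points))
    n = len(points)
    simplices = [((i,), 0.0) for i in range(n)]   # vertices enter at t = 0
    for k in range(2, max_dim + 2):               # k vertices -> (k-1)-simplex
        for sigma in combinations(range(n), k):
            diam = max(d[i, j] for i, j in combinations(sigma, 2))
            simplices.append((sigma, diam))
    return sorted(simplices, key=lambda s: (s[1], len(s[0])))
```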
Persistence diagrams. Studying the evolution of the topology of a filtration allows us to capture interesting properties of the metric or function used to generate the filtration. Persistence diagrams provide a concise description of the changes in homology that occur during this process.
Existing connected components may merge, cycles may appear, etc. This leads to the
appearance and disappearance of homological features of different dimension. Persistent homology tracks the birth b and death d of such
topological features. The multiset of points p, where each point p = (b, d) corresponds to a
birth/death time pair, is called the persistence diagram of the filtration. An example of a persistence diagram for 0-dimensional features (i.e., connected components) of a function f : X → R
with X = R is shown in Fig. 1. We use the identifiers F, G to denote persistence diagrams in the
remainder of the paper. Since d > b, all points lie in the half-plane above the diagonal.

[Fig. 1: A function f : R → R and its 0-th persistence diagram (axes: birth, death).]
RKHS embedding of probability measures. An important concept for our work is the embedding
of probability measures into reproducing kernel Hilbert spaces [23]. Consider a Borel probability
measure P defined on a compact metric space (X, d), which we observe through the i.i.d. sample
X = {x_i}_{i=1}^m with x_i ∼ P. Furthermore, let k : X × X → R be a positive definite kernel, i.e., a
function which realizes an inner product k(x, y) = ⟨Φ(x), Φ(y)⟩_G with x, y ∈ X in some Hilbert
space G for some (possibly unknown) map Φ : X → G (see [26, Definition 4.1.]). Also, let H be
the associated RKHS, generated by functions k_x = k(x, ·) : X → R induced by the kernel, i.e.,
H = span{k_x : x ∈ X} = span{⟨Φ(x), Φ(·)⟩_G : x ∈ X}, with the scalar product ⟨k_x, k_y⟩_H = k(x, y).
The linear structure on the RKHS H admits the construction of means. The embedding of a probability measure P on X is now accomplished via the mean map µ : P ↦ µ_P = E_{x∼P}[k_x]. If this map
is injective, the kernel k is called characteristic. This is true, in particular, if H is dense in the space
of continuous functions X → R (with the supremum norm), in which case we refer to the kernel as
universal [25]. While a universal kernel is always characteristic, the converse is not true.
Since it has been shown [13] that the empirical estimate of the mean, µ_X = (1/m) Σ_i k_{x_i}, is a good
proxy for µ_P, the injectivity of µ can be used to define distances between distributions P and Q,
observed via samples X = {x_i}_{i=1}^m and Y = {y_i}_{i=1}^n. Specifically, this can be done via the maximum
mean discrepancy

    MMD[F, P, Q] = sup_{f∈F} (E_{x∼P}[f(x)] − E_{y∼Q}[f(y)]),    (1)

where F denotes a suitable class of functions X → R, and E_{x∼P}[f(x)] denotes the expectation of
f(x) w.r.t. P (which can be written as ⟨µ_P, f⟩ by virtue of the reproducing property of k). Gretton
et al. [13] restrict F to functions on a unit ball in H, i.e., F = {f ∈ H : ‖f‖_H ≤ 1}, and show
that Eq. (1) can be expressed as the RKHS distance between the means µ_P and µ_Q of the measures
P and Q as MMD²[F, P, Q] = ‖µ_P − µ_Q‖²_H. Empirical estimates of this quantity are given in [13].
This connection is of particular importance to us, since it allows for two-sample hypothesis testing
in a principled manner given a suitable (characteristic/universal) kernel. Prominent examples of
universal kernels for X = R^d are the Gaussian RBF kernel k(x, y) = exp(−γ‖x−y‖²) and the kernel exp(⟨x, y⟩).
However, without a characteristic/universal kernel, MMD[F, P, Q] = 0 does not imply P = Q. A
well-known example of a non-characteristic kernel is the scalar product kernel k(x, y) = ⟨x, y⟩ with
x, y ∈ R^d. Even if P ≠ Q, e.g., if the variances of the distributions differ, the MMD will still be zero
if the means are equal.
In the context of a statistical treatment of persistent homology, the ability to embed probability
measures on the space of persistence diagrams into a RKHS is appealing. Specifically, the problem
of testing whether two different samples exhibit significantly different homological features (as
captured in the persistence diagram) boils down to a two-sample test with null hypothesis H_0 :
µ_P = µ_Q vs. a general alternative H_A : µ_P ≠ µ_Q, where P and Q are probability measures on the
set of persistence diagrams. The computation of this test only involves evaluations of the kernel.
Enabling this procedure via a suitable universal kernel will be discussed next.
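In practice, the test statistic is the unbiased empirical estimate of MMD² from [13], and the null distribution under H_0 can be approximated by permuting the pooled sample. A sketch (our illustration; it takes precomputed Gram matrices of kernel evaluations between persistence diagrams) is given below.

```python
import numpy as np

def mmd2_unbiased(K_XX, K_YY, K_XY):
    """Unbiased empirical estimate of MMD^2 [13] from Gram matrices:
    K_XX on sample X, K_YY on sample Y, K_XY the cross Gram matrix."""
    m, n = K_XX.shape[0], K_YY.shape[0]
    t_x = (K_XX.sum() - np.trace(K_XX)) / (m * (m - 1))
    t_y = (K_YY.sum() - np.trace(K_YY)) / (n * (n - 1))
    return t_x + t_y - 2.0 * K_XY.mean()

def two_sample_test(K, m, n_perm=1000, seed=0):
    """Permutation test of H0: mu_P = mu_Q given the (m+n) x (m+n) Gram
    matrix K of the pooled samples; returns an estimated p-value."""
    rng = np.random.default_rng(seed)
    def stat(idx):
        a, b = idx[:m], idx[m:]
        return mmd2_unbiased(K[np.ix_(a, a)], K[np.ix_(b, b)], K[np.ix_(a, b)])
    obs = stat(np.arange(K.shape[0]))
    null = [stat(rng.permutation(K.shape[0])) for _ in range(n_perm)]
    return float(np.mean([s >= obs for s in null]))
```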
3 The universal persistence scale space kernel
In the following, for 1 ≤ q ≤ ∞ we let D_q = {F | d_{W,q}(F, ∅) < ∞} denote the metric space of
persistence diagrams with the q-Wasserstein metric d_{W,q}¹, where ∅ is the empty diagram. In [18,
Theorem 1], Mileyko et al. show that (D_q, d_{W,q}) is a complete metric space. When the subscript q
is omitted, we do not refer to any specific instance of the q-Wasserstein metric.
Let us fix numbers N ∈ ℕ and R ∈ ℝ. We denote by S the subset of D consisting of those
persistence diagrams that are birth–death bounded by R (i.e., for every D ∈ S the birth/death time
of its points is less than or equal to R; see [18, Definition 5]) and whose total multiplicities (i.e., the sum
of multiplicities of all points in a diagram) are bounded by N. While this might appear restrictive at
first sight, it does not really pose a limitation in practice. In fact, for data generated by some finite
process (e.g., meshes have a finite number of vertices/faces, images have limited resolution, etc.),
establishing N and R is typically not a problem. We remark that the aforementioned restriction is
similar to enforcing boundedness of the support of persistence landscapes in [4, Section 3.6].
In [20], Reininghaus et al. introduce the persistence scale space (PSS) kernel as a stable, multi-scale
kernel on the set D of persistence diagrams of finite total multiplicity, i.e., each diagram contains
only finitely many points. Let p = (b, d) denote a point in a diagram F ∈ D, and let p̄ = (d, b)
denote its mirror image across the diagonal. Further, let Ω = {x = (x₁, x₂) ∈ ℝ² : x₂ ≥ x₁}. The
feature map Φ_σ : D → L²(Ω) is given as the solution of a heat diffusion problem with a Dirichlet
boundary condition on the diagonal by
    Φ_σ(F) : Ω → ℝ,   x ↦ (1/(4πσ)) Σ_{p∈F} [ exp(−‖x − p‖²/(4σ)) − exp(−‖x − p̄‖²/(4σ)) ].        (2)

The kernel k_σ : D × D → ℝ is then given in closed form as

    k_σ(F, G) = ⟨Φ_σ(F), Φ_σ(G)⟩_{L²(Ω)} = (1/(8πσ)) Σ_{p∈F} Σ_{q∈G} [ exp(−‖p − q‖²/(8σ)) − exp(−‖p − q̄‖²/(8σ)) ],        (3)

for σ > 0 and F, G ∈ D. By construction, positive definiteness of k_σ is guaranteed. The kernel is
stable in the sense that the distance d_σ(F, G) = √( k_σ(F, F) + k_σ(G, G) − 2 k_σ(F, G) ) is bounded up to a
constant by d_{W,1}(F, G) [20, Theorem 2].

¹The q-Wasserstein metric is defined as d_{W,q}(F, G) = inf_γ ( Σ_{x∈F} ‖x − γ(x)‖_∞^q )^{1/q}, where γ ranges over
all bijections from F ∪ Δ to G ∪ Δ, with Δ denoting the multiset of diagonal points (t, t), each with countably
infinite multiplicity.
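As a concrete reference, Eq. (3) can be evaluated with a few lines of code. The sketch below is our own illustration (not the authors' implementation); diagrams are assumed to be given as arrays of (birth, death) pairs, and the exponentiated variant anticipates Eq. (5) below:

```python
import numpy as np

def pss_kernel(F, G, sigma):
    """Persistence scale space kernel k_sigma(F, G) of Eq. (3).

    F, G: arrays of shape (|F|, 2) and (|G|, 2) holding (birth, death) points.
    """
    F, G = np.atleast_2d(F), np.atleast_2d(G)
    G_bar = G[:, ::-1]  # mirror each q = (b, d) to q_bar = (d, b)
    # squared distances ||p - q||^2 and ||p - q_bar||^2 for all pairs (p, q)
    d2 = ((F[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    d2_bar = ((F[:, None, :] - G_bar[None, :, :]) ** 2).sum(-1)
    terms = np.exp(-d2 / (8 * sigma)) - np.exp(-d2_bar / (8 * sigma))
    return terms.sum() / (8 * np.pi * sigma)

def upss_kernel(F, G, sigma):
    """Universal persistence scale-space kernel of Eq. (5): exp(k_sigma(F, G))."""
    return np.exp(pss_kernel(F, G, sigma))
```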
We have the following property:
Proposition 1. Restricting the kernel in Eq. (3) to S × S, the mean map μ sends a probability
measure P on S to an element μ_P ∈ H.
Proof. The claim immediately follows from [13, Lemma 3] and [24, Proposition 2], since k_σ is
measurable and bounded on S, and hence μ_P ∈ H.
While positive definiteness enables the use of k_σ in many kernel-based learning techniques [21], we
are interested in assessing whether it is universal, or whether we can construct a universal kernel from k_σ
(see Section 2). The following theorem of Christmann and Steinwart [7] is particularly relevant to
this question.
Theorem 1. (cf. Theorem 2.2 of [7]) Let X be a compact metric space and G a separable Hilbert
space such that there exists a continuous and injective map ρ : X → G. Furthermore, let K : ℝ → ℝ
be a function that is analytic on some neighborhood of 0, i.e., it can locally be expressed by its Taylor
series

    K(t) = Σ_{n=0}^∞ a_n t^n,   t ∈ [−r, r].

If a_n > 0 for all n ∈ ℕ₀, then k : X × X → ℝ,

    k(x, y) = K(⟨ρ(x), ρ(y)⟩_G) = Σ_{n=0}^∞ a_n ⟨ρ(x), ρ(y)⟩_G^n        (4)

is a universal kernel.
Kernels of the form Eq. (4) are typically referred to as Taylor kernels.
Note that universality of a kernel on X refers to a specific choice of metric on X. By using the
same argument as for the linear dot-product kernel in ℝ^d (see above), the PSS kernel k_σ cannot be
universal with respect to the metric d_{k_σ} which is induced by the scalar product defining k_σ. On the
other hand, it is unclear whether k_σ is universal with respect to the metric d_{W,q}. However, we do
have the following result:
Proposition 2. The kernel k_σ^U : S × S → ℝ,

    k_σ^U(F, G) = exp(k_σ(F, G)),        (5)

is universal with respect to the metric d_{W,1}.
Proof. We prove this proposition by means of Theorem 1. We set G = L²(Ω), which is a separable
Hilbert space. As shown in Reininghaus et al. [20], the feature map Φ_σ : D → L²(Ω) is injective.
Furthermore, it is continuous by construction, as the metric on D is induced by the norm on L²(Ω),
and so is Φ_σ restricted to S. The function K : ℝ → ℝ is defined as x ↦ exp(x), and hence is
analytic on ℝ. Its Taylor coefficients a_n are 1/n!, and thus are positive for any n.
It remains to show that (S, d_{W,1}) is a compact metric space. First, define R = ([−R, R]²)^N,
which is a bounded, closed, and therefore compact subspace of (ℝ²)^N. Now consider the function
Fig. 2: Visualization of the mean PSS function (right) taken over 30 samples from a double-annulus (cf. [19]). The left panels show the individual PSS functions Φ_σ(F₁), Φ_σ(F₂), Φ_σ(F₃), . . . , Φ_σ(F_N); their average exhibits two bumps corresponding to the two holes.
f : R → S that maps (p₁, . . . , p_N) ∈ R to the persistence diagram {p_i : 1 ≤ i ≤ N, p_i ∉ Δ} ∈ S. We note that for all D = {p₁, . . . , p_n} ∈ S, with n ≤ N, there exists an X ∈ R, e.g.,
X = (p₁, . . . , p_n, 0, . . . , 0), such that f(X) = D; this implies S = f(R). Next, we show that f is
1-Lipschitz continuous w.r.t. the 1-Wasserstein distance on persistence diagrams, i.e.,

    ∀X = (p₁, . . . , p_N), Y = (q₁, . . . , q_N) ∈ R : d_{W,1}(f(X), f(Y)) ≤ d(X, Y),

where we defined d as inf_π Σ_{1≤i≤N} ‖p_i − π(p_i)‖_∞ with π ranging over all bijections between
{p₁, . . . , p_N} and {q₁, . . . , q_N}. In other words, d corresponds to the 1-Wasserstein distance without allowing matches to the diagonal. Now, by definition, d_{W,1}(f(X), f(Y)) ≤ d(X, Y), because all
bijections considered by d are also admissible for d_{W,1}. Since R is thus compact and f is continuous, we have that S = f(R) is compact as well.
We refer to the kernel of Eq. (5) as the universal persistence scale-space (u-PSS) kernel.
Remark. While we prove Prop. 1 for the PSS kernel in Eq. (3), it obviously also holds for k_σ^U, since
exponentiation invalidates neither measurability nor boundedness.
Relation to persistence landscapes. As the feature map Φ_σ of Eq. (2) defines a function-valued
summary of persistent homology in the Hilbert space L²(Ω), the results on probability in Banach
spaces [14], used in [4] for persistence landscapes, naturally apply to Φ_σ as well. This includes,
for instance, the law of large numbers or the central limit theorem [4, Theorems 9, 10]. Conversely,
considering a persistence landscape λ(D) as a function in L²(ℕ × ℝ) or L²(ℝ²) yields a positive
definite kernel ⟨λ(·), λ(·)⟩_{L²} on persistence diagrams. However, it is unclear whether a universal
kernel can be constructed from persistence landscapes in a way similar to the definition of k_σ^U. In
particular, we are not aware of a proof that the construction of persistence landscapes, considered
as functions in L², is continuous with respect to d_{W,q} for some 1 ≤ q ≤ ∞. For a more detailed
treatment of the differences between Φ_σ and persistence landscapes, we refer the reader to [20].
4 Experiments
We first describe a set of experiments on synthetic data appearing in previous work to illustrate the
use of the PSS feature map Φ_σ and the universal persistence scale-space kernel on two different
tasks. We then present two applications on real-world data, where we assess differences in the
persistent homology of functions on 3D surfaces of lateral ventricles and corpora callosa with respect
to different group assignments (i.e., age, demented/non-demented). In all experiments, filtrations
and the persistence diagrams are obtained using Dipha², which can directly handle our types of
input data. Source code to reproduce the experiments is available at https://goo.gl/KouBPT.
4.1 Synthetic data
Computation of the mean PSS function. We repeat the experiment from [19, 4] of sampling from
the union of two overlapping annuli. In particular, we repeatedly (N = 30 times) draw samples of
size 100 (out of 10000), and then compute persistence diagrams F₁, . . . , F_N for 1-dim. features by
considering sublevel sets of the distance function from the points. Finally, we compute the mean
of the PSS functions Φ_σ(F_i) defined by the feature map from Eq. (2). This simply amounts to
computing (1/N) · Φ_σ(F₁ ⊎ · · · ⊎ F_N), where ⊎ denotes the multiset union. A visualization of the
pointwise average, for a fixed choice of σ, is shown in Fig. 2. We remind the reader that the
convergence results used in [4] equally hold for this feature map, as explained in Section 3. In
particular, the above process of taking means converges to the expected value of the PSS function.
As can be seen in Fig. 2, the two 1-dim. holes manifest themselves as two "bumps" at different
positions in the mean PSS function.
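Since Φ_σ(F) of Eq. (2) is an explicit function on Ω, the pointwise average can be evaluated directly on a grid. The sketch below is our own illustration (function names are ours); diagrams are given as (birth, death) arrays and xs is a grid of evaluation points:

```python
import numpy as np

def pss_feature(F, xs, sigma):
    """Evaluate Phi_sigma(F) of Eq. (2) at grid points xs of shape (n, 2)."""
    F = np.atleast_2d(F)
    F_bar = F[:, ::-1]  # mirrored points (d, b)
    d2 = ((xs[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    d2_bar = ((xs[:, None, :] - F_bar[None, :, :]) ** 2).sum(-1)
    vals = np.exp(-d2 / (4 * sigma)) - np.exp(-d2_bar / (4 * sigma))
    return vals.sum(1) / (4 * np.pi * sigma)

def mean_pss(diagrams, xs, sigma):
    """Pointwise average (1/N) * sum_i Phi_sigma(F_i) on the grid xs."""
    return np.mean([pss_feature(F, xs, sigma) for F in diagrams], axis=0)
```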
²Available online: https://code.google.com/p/dipha/
Fig. 3: Left: Illustration of one random sample (of size 200) on a sphere (r² = 2π) and a torus ((r − 2)² + z² = 1) in ℝ³ with equal surface
area. To generate a noisy sample, we add Gaussian noise N(0, 0.1) to each point in a sample (indicated by
the vectors). Right: Two-sample hypothesis testing results (H₀ : P = Q vs. H_A : P ≠ Q) for 0- and 1-dim.
features, shown in four panels (0-dim./no-noise, 0-dim./with-noise, 1-dim./no-noise, 1-dim./with-noise). The box plots show the variation in p-values (y-axis) over a selection of values for
σ as a function of increasing sample size (x-axis, 10 to 100). Sample sizes for which the median p-value is less than the chosen significance
level (here: 0.05) are marked green, and red otherwise.
Torus vs. sphere. In this slightly more involved example, we repeat an experiment from [4, Section
4.3] on the problem of discriminating between a sphere and a torus in ℝ³, based on random samples
drawn from both objects. In particular, we repeatedly (N times) draw samples from the torus and the
sphere (corresponding to measures P and Q) and then compute persistence diagrams. Eventually, we
test the null hypothesis H₀ : P = Q, i.e., that samples were drawn from the same object; cf. [4] for a
thorough description of the full setup. We remark that our setup uses the Delaunay triangulation of
the point samples instead of the Coxeter–Freudenthal–Kuhn triangulation of a regular grid as in [4].
Conceptually, the important difference is in the two-sample testing strategy. In [4], two factors
influence the test: (1) the choice of a functional to map the persistence landscape to a scalar and
(2) the choice of test statistic. Bubenik chooses a z-test to test for equality between the mean
persistence landscapes. In contrast, we can test for true equality in distribution. This is possible since
universality of the kernel ensures that the MMD of Eq. (1) is a metric for the space of probability
measures on persistence diagrams. All p-values are obtained by bootstrapping the test statistic under
H₀ over 10⁴ random permutations. We further vary the number of samples/object used to compute
the MMD statistic from N = 10 to N = 100 and add Gaussian noise N(0, 0.1) in one experiment.
Results are shown in Fig. 3 over a selection of u-PSS scales σ ∈ {100, 10, 1, 0.1, 0.01, 0.001}. For
0-dimensional features and no noise, we can always reject H₀ at α = 0.05 significance. For 1-dim.
features and no noise, we need at least 60 samples to reliably reject H₀ at the same level of α.
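The permutation bootstrap itself is generic; the sketch below is our own illustration (function name is ours) and assumes a precomputed u-PSS Gram matrix K over the pooled diagrams, the first m of which come from the first sample:

```python
import numpy as np

def mmd_permutation_test(K, m, n_perm=10_000, rng=None):
    """Permutation p-value for H0: P = Q from a pooled (m+n, m+n) Gram matrix K."""
    rng = np.random.default_rng(rng)
    N = K.shape[0]

    def stat(idx):
        # biased MMD^2 between the first m and remaining indices of the permutation
        x, y = idx[:m], idx[m:]
        return (K[np.ix_(x, x)].mean() + K[np.ix_(y, y)].mean()
                - 2.0 * K[np.ix_(x, y)].mean())

    observed = stat(np.arange(N))
    null = [stat(rng.permutation(N)) for _ in range(n_perm)]
    return float(np.mean([s >= observed for s in null]))
```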
4.2 Real-world data
We use two real-world datasets in our experiments: (1) 3D surfaces of the corpus callosum and (2)
3D surfaces of the lateral ventricles from neonates. The corpus callosum surfaces were obtained from
the longitudinal dataset of the OASIS brain database³. We use all subject data from the first visit,
and the grouping criterion is disease state: dementia vs. non-dementia. Note that the demented group
is comprised of individuals with very mild to mild AD. This discrimination is based on the clinical
dementia rating (CDR) score; Marcus et al. [17] explain this dataset in detail. The lateral ventricle
dataset is an extended version of [3]. It contains data from 43 neonates. All subjects were repeatedly
imaged approximately every 3 months (starting from 2 weeks) in the first year and every 6 months
in the second year. According to Bompard et al. [3], the ventricle growth is the dominant effect
and occurs in a non-uniform manner most significantly during the first 6 months. This raises the
question whether age also has an impact on the shape of these brain structures that can be detected
by persistent homology of the HKS (see Setup below, or Section 2) function. Hence, we set our
grouping criterion to be developmental age: ≤ 6 months vs. > 6 months. It is important to note that
the heat kernel signature is not scale-invariant. For that reason, we normalize the (mean-subtracted)
configuration matrices (containing the vertex coordinates of each mesh) by their Euclidean norm,
cf. [9]. This ensures that our analysis is not biased by growth (scaling) effects.
³Available online: http://www.oasis-brains.org

Fig. 4: Left: Effect of increasing HKS time t_i (t₁, t₅, t₁₀, t₁₅, t₂₀), illustrated on one exemplary surface mesh of both datasets.
Right: Contour plots of p-values estimated via random permutations, shown as a function of the u-PSS kernel
scale σ in k_σ^U and the HKS time. (a) (Right) lateral ventricles; grouping: subjects ≤ 6 months vs. > 6 months.
(b) Corpora callosa; grouping: demented vs. non-demented subjects.
Setup. We follow an experimental setup similar to [16] and [20], and compute the heat kernel
signature [27] for various times t_i as a function defined on the 3D surface meshes. In all experiments,
we use the proposed u-PSS kernel k_σ^U of Eq. (5) and vary the HKS time t_i in 1 = t₁ < t₂ <
· · · < t₂₀ = 10.5. Regarding the u-PSS kernel scale σ_i, we sweep from 10⁻⁹ = σ₁ < · · · < σ₁₀ = 10¹.
Null (H₀) and alternative (H_A) hypotheses are defined as in Section 2 with two samples of
persistence diagrams {F_i}_{i=1}^m, {G_i}_{i=1}^n. The test statistic under H₀ is bootstrapped using B = 5 × 10⁴
random permutations. This is also the setup recommended in [13] for low sample sizes.
Results. Figure 4 shows the estimated p-values for both datasets as a function of the u-PSS kernel
scale and the HKS time for 1-dim. features. The false discovery rate is controlled by the Benjamini–Hochberg procedure. On the lateral ventricle data, we observe p-values < 0.01 (for the right ventricles), especially around HKS times t₁₀ to t₁₅, cf. Fig. 4(a). Since the results for left and right lateral
ventricles are similar, only the p-value plots for the right lateral ventricle are shown. In general,
the results indicate that, at specific settings of t_i, the HKS function captures salient shape features
of the surface, which lead to statistically significant differences in the persistent homology. We do,
however, point out that there is no clear guideline on how to choose the HKS time. In fact, setting
t_i too low might emphasize noise, while setting t_i too high tends to smooth out details, as can be
seen in the illustration of the HKS time on the left-hand side of Fig. 4. On the corpus callosum
data, cf. Fig. 4(b), no significant differences in the persistent homology of the two groups (again for
1-dim. features) can be identified, with p-values ranging from 0.1 to 0.9. This does not allow us to
reject H₀ at any reasonable level.
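For reference, the Benjamini–Hochberg step applied to the grid of p-values is standard; a minimal sketch (our own, not tied to the authors' code) is:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * (np.arange(1, m + 1) / m)  # step-up thresholds
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting the threshold
        reject[order[: k + 1]] = True
    return reject
```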
5 Discussion
With the introduction of a universal kernel for persistence diagrams in Section 3, we enable the use
of this topological summary representation in the framework of embedding probability measures
into reproducing kernel Hilbert spaces. While our experiments are mainly limited to two-sample
hypothesis testing, our kernel allows the use of a wide variety of statistical techniques and learning
methods which are situated in that framework. It is important to note that our construction, via Theorem 1, essentially depends on a restriction of the set D to a compact metric space. We remark that
similar conditions are required in [4] in order to enable statistical computations, e.g., constraining
the support of the persistence landscapes. However, it will be interesting to investigate which properties of the kernel remain valid when lifting these restrictions. From an application point of view,
we have shown that we can test for a statistical difference in the distribution of persistence diagrams.
This is in contrast to previous work, where hypothesis testing is typically limited to tests for specific
properties of the distributions, such as equality in mean.
Acknowledgements. This work has been partially supported by the Austrian Science Fund, project
no. KLI 00012. We also thank the anonymous reviewers for their valuable comments/suggestions.
References
[1] A. Adcock, E. Carlsson, and G. Carlsson. The ring of algebraic functions on persistence bar codes. arXiv, available at http://arxiv.org/abs/1304.0530, 2013.
[2] P. Bendich, J.S. Marron, E. Miller, A. Pieloch, and S. Skwerer. Persistent homology analysis of brain artery trees. arXiv, available at http://arxiv.org/abs/1411.6652, 2014.
[3] L. Bompard, S. Xu, M. Styner, B. Paniagua, M. Ahn, Y. Yuan, V. Jewells, W. Gao, D. Shen, H. Zhu, and W. Lin. Multivariate longitudinal shape analysis of human lateral ventricles during the first twenty-four months of life. PLoS One, 2014.
[4] P. Bubenik. Statistical topological data analysis using persistence landscapes. JMLR, 16:77–102, 2015.
[5] G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009.
[6] F. Chazal, B.T. Fasy, F. Lecci, A. Rinaldo, and L. Wasserman. Stochastic convergence of persistence landscapes and silhouettes. In SoCG, 2014.
[7] A. Christmann and I. Steinwart. Universal kernels on non-standard input spaces. In NIPS, 2010.
[8] M.K. Chung, P. Bubenik, and P.T. Kim. Persistence diagrams of cortical surface data. In IPMI, 2009.
[9] I.L. Dryden and K.V. Mardia. Statistical shape analysis. Wiley series in probability and statistics. Wiley, 1998.
[10] H. Edelsbrunner and J. Harer. Computational Topology. An Introduction. AMS, 2010.
[11] B. Fasy, F. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, and A. Singh. Confidence sets for persistence diagrams. Ann. Statist., 42(6):2301–2339, 2014.
[12] K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. JMLR, 14:3753–3783, 2013.
[13] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 13:723–773, 2012.
[14] M. Ledoux and M. Talagrand. Probability in Banach spaces. Classics in Mathematics. Springer, 1991.
[15] H. Lee, M.K. Chung, H. Kang, and D.S. Lee. Hole detection in metabolic connectivity of Alzheimer's disease using k-Laplacian. In MICCAI, 2014.
[16] C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In CVPR, 2014.
[17] D.S. Marcus, A.F. Fotenos, J.G. Csernansky, J.C. Morris, and R.L. Buckner. Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. Cognitive Neurosci., 22(12):2677–2684, 2010.
[18] Y. Mileyko, S. Mukherjee, and J. Harer. Probability measures on the space of persistence diagrams. Inverse Probl., 27(12), 2011.
[19] E. Munch, P. Bendich, S. Mukherjee, J. Mattingly, and J. Harer. Probabilistic Fréchet means and statistics on vineyards. CoRR, 2013. http://arxiv.org/abs/1307.6530.
[20] J. Reininghaus, U. Bauer, S. Huber, and R. Kwitt. A stable multi-scale kernel for topological machine learning. In CVPR, 2015.
[21] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[22] N. Singh, H. D. Couture, J. S. Marron, C. Perou, and M. Niethammer. Topological descriptors of histology images. In MLMI, 2014.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. Hilbert space embedding for distributions. In ALT, 2007.
[24] B. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[25] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. JMLR, 2:67–93, 2001.
[26] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[27] J. Sun, M. Ovsjanikov, and L. Guibas. A concise and probably informative multi-scale signature based on heat diffusion. In SGP, 2009.
[28] K. Turner, Y. Mileyko, S. Mukherjee, and J. Harer. Fréchet means for distributions of persistence diagrams. Discrete Comput. Geom., 52(1):44–70, 2014.
Maxim Rabinovich, Elaine Angelino, and Michael I. Jordan
Computer Science Division
University of California, Berkeley
{rabinovich, elaine, jordan}@eecs.berkeley.edu
Abstract
Practitioners of Bayesian statistics have long depended on Markov chain Monte
Carlo (MCMC) to obtain samples from intractable posterior distributions. Unfortunately, MCMC algorithms are typically serial, and do not scale to the large
datasets typical of modern machine learning. The recently proposed consensus
Monte Carlo algorithm removes this limitation by partitioning the data and drawing samples conditional on each partition in parallel [22]. A fixed aggregation
function then combines these samples, yielding approximate posterior samples.
We introduce variational consensus Monte Carlo (VCMC), a variational Bayes
algorithm that optimizes over aggregation functions to obtain samples from a distribution that better approximates the target. The resulting objective contains an
intractable entropy term; we therefore derive a relaxation of the objective and
show that the relaxed problem is blockwise concave under mild conditions. We
illustrate the advantages of our algorithm on three inference tasks from the literature, demonstrating both the superior quality of the posterior approximation
and the moderate overhead of the optimization step. Our algorithm achieves a
relative error reduction (measured against serial MCMC) of up to 39% compared
to consensus Monte Carlo on the task of estimating 300-dimensional probit regression parameter expectations; similarly, it achieves an error reduction of 92%
on the task of estimating cluster comembership probabilities in a Gaussian mixture model with 8 components in 8 dimensions. Furthermore, these gains come
at moderate cost compared to the runtime of serial MCMC, achieving near-ideal
speedup in some instances.
1 Introduction
Modern statistical inference demands scalability to massive datasets and high-dimensional models.
Innovation in distributed and stochastic optimization has enabled parameter estimation in this setting, e.g. via stochastic [3] and asynchronous [20] variants of gradient descent. Achieving similar
success in Bayesian inference, where the target is a posterior distribution over parameter values
rather than a point estimate, remains computationally challenging.
Two dominant approaches to Bayesian computation are variational Bayes and Markov chain Monte
Carlo (MCMC). Within the former, scalable algorithms like stochastic variational inference [11]
and streaming variational Bayes [4] have successfully imported ideas from optimization. Within
MCMC, adaptive subsampling procedures [2, 14], stochastic gradient Langevin dynamics [25], and
Firefly Monte Carlo [16] have applied similar ideas, achieving computational gains by operating
only on data subsets. These algorithms are serial, however, and thus cannot take advantage of
multicore and multi-machine architectures. This motivates data-parallel MCMC algorithms such as
asynchronous variants of Gibbs sampling [1, 8, 12].
Our work belongs to a class of communication-avoiding data-parallel MCMC algorithms. These
algorithms partition the full dataset X_{1:N} into K disjoint subsets X_{I_1}, . . . , X_{I_K}, where X_{I_k} denotes the data
associated with core k. Each core samples from a subposterior distribution,
    p_k(θ_k) ∝ p(X_{I_k} | θ_k) p(θ_k)^{1/K},        (1)
and then a centralized procedure combines the samples into an approximation of the full posterior.
Due to their efficiency, such procedures have recently received substantial attention [18, 22, 24].
One of these algorithms, consensus Monte Carlo (CMC), requires communication only at the start
and end of sampling [22]. CMC proceeds from the intuition that subposterior samples, when aggregated correctly, can approximate full posterior samples. This is formally backed by the factorization
p (? | x1:N ) / p (?)
K
Y
k=1
p (XIk | ?) =
K
Y
pk (?) .
(2)
k=1
If one can approximate the subposterior densities pk , using kernel density estimates for instance [18],
it is therefore possible to recombine them into an estimate of the full posterior.
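To make the partitioned setup concrete, the following sketch (our own; the function names are ours, and the log-likelihood and log-prior are placeholders supplied by the model) shows how the subposterior log-density of Eq. (1) is formed on each core:

```python
import numpy as np

def make_subposterior_logpdf(X_k, log_prior, log_lik, K):
    """Log-density of the k-th subposterior: log p(X_k | theta) + (1/K) log p(theta)."""
    def logpdf(theta):
        return log_lik(X_k, theta) + log_prior(theta) / K
    return logpdf

def partition(X, K):
    """Split the data into K shards, one per core; each shard defines its own target."""
    return np.array_split(X, K)
```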
Unfortunately, the factorization does not make it immediately clear how to aggregate on the level of
samples without first having to obtain an estimate of the densities p_k themselves. CMC alters (2) to
untie the parameters across partitions and plug in a deterministic link F from the θ_k to θ:

    p(θ | x_{1:N}) ≈ ∏_{k=1}^K p_k(θ_k) δ_{θ = F(θ_1, . . . , θ_K)}.        (3)
This approximation and an aggregation function motivated by a Gaussian approximation lie at the
core of the CMC algorithm [22].
The introduction of CMC raises numerous interesting questions whose answers are essential to its
wider application. Two among these stand out as particularly vital. First, how should the aggregation
function be chosen to achieve the closest possible approximation to the target posterior? Second,
when model parameters exhibit structure or must conform to constraints (if they are, for example,
positive semidefinite covariance matrices or labeled centers of clusters), how can the weighted
averaging strategy of Scott et al. [22] be modified to account for this structure?
In this paper, we propose variational consensus Monte Carlo (VCMC), a novel class of data-parallel
MCMC algorithms that allow both questions to be addressed. By formulating the choice of aggregation function as a variational Bayes problem, VCMC makes it possible to adaptively choose the aggregation function to achieve a closer approximation to the true posterior. The flexibility of VCMC
likewise supports nonlinear aggregation functions, including structured aggregation functions applicable to inference problems that are not purely vectorial.
An appealing benefit of the VCMC point of view is a clarification of the untying step leading
to (3). In VCMC, the approximate factorization corresponds to a variational approximation to
the true posterior. This approximation can be viewed as the joint distribution of (θ_1, . . . , θ_K)
and θ in an augmented model that assumes conditional independence between the data partitions
and posits a deterministic mapping from partition-level parameters to the single global parameter.
The added flexibility of this point of view makes it possible to move beyond subposteriors and include alternative forms of (3) within the CMC framework. In particular, it is possible to define
p_k(θ_k) = p(θ_k) p(X_{I_k} | θ_k), using partial posteriors in place of subposteriors (cf. [23]). Although
extensive investigation of this issue is beyond the scope of this paper, we provide some evidence
in Section 6 that partial posteriors are a better choice in some circumstances and demonstrate that
VCMC can provide substantial gains in both the partial posterior and subposterior settings.
Before proceeding, we outline the remainder of this paper. Below, in §2, we review CMC and
related data-parallel MCMC algorithms. Next, we cast CMC as a variational Bayes problem in §3.
We define the variational optimization objective in §4, addressing the challenging entropy term
by relaxing it to a concave lower bound, and give conditions under which this leads to a blockwise
concave maximization problem. In §5, we define several aggregation functions, including novel
ones that enable aggregation of structured samples, e.g. positive semidefinite matrices and mixture
model parameters. In §6, we evaluate the performance of VCMC and CMC relative to serial MCMC.
We replicate experiments carried out by Scott et al. [22] and execute more challenging experiments
in higher dimensions and with more data. Finally, in §7, we summarize our approach and discuss
several open problems generated by this work.
2 Related work
We focus on data-parallel MCMC algorithms for large-scale Bayesian posterior sampling. Several recent research threads propose schemes in the setting where the posterior factors as in (2).
In general, these parallel strategies are approximate relative to serial procedures, and the specific
algorithms differ in terms of the approximations employed and amount of communication required.
At one end of the communication spectrum are algorithms that fit into the MapReduce model [7].
First, K parallel cores sample from K subposteriors, defined in (1), via any Monte Carlo sampling
procedure. The subposterior samples are then aggregated to obtain approximate samples from the
full posterior. This leads to the challenge of designing proper and efficient aggregation procedures.
Scott et al. [22] propose consensus Monte Carlo (CMC), which constructs approximate posterior
samples via weighted averages of subposterior samples; our algorithms are motivated by this work.
Let θ_{k,t} denote the t-th subposterior sample from core k. In CMC, the aggregation function averages
across each set of K samples {θ_{k,t}}_{k=1}^K to produce one approximate posterior sample θ̄_t. Uniform
averaging is a natural but naïve heuristic that can in fact be improved upon via a weighted average,

    θ̄ = F(θ_{1:K}) = Σ_{k=1}^K W_k θ_k,        (4)

where in general, θ_k is a vector and W_k can be a matrix. The authors derive weights motivated by the
special case of a Gaussian posterior, where each subposterior is consequently also Gaussian. Let Σ_k
be the covariance of the k-th subposterior. This suggests weights W_k = Σ_k^{−1} equal to the subposteriors' inverse covariances. CMC treats arbitrary subposteriors as Gaussians, aggregating with
weights given by empirical estimates Σ̂_k^{−1} computed from the observed subposterior samples.
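A minimal sketch of this baseline aggregation (our own illustration; samples[k] is assumed to be a (T, d) array of draws from subposterior k, with draws aligned across cores) follows:

```python
import numpy as np

def cmc_gaussian_average(samples):
    """Consensus Monte Carlo aggregation with inverse-covariance weights.

    samples: list of K arrays, each of shape (T, d); row t on every core
    contributes to the t-th aggregated draw.
    """
    precisions = [np.linalg.inv(np.cov(s, rowvar=False)) for s in samples]
    total = np.linalg.inv(sum(precisions))  # normalizer so the weights sum to I
    # theta_bar_t = (sum_k Lambda_k)^{-1} sum_k Lambda_k theta_{k,t}
    return sum(s @ P.T for s, P in zip(samples, precisions)) @ total.T
```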
k
Neiswanger et al. [18] propose aggregation at the level of distributions rather than samples. Here, the
idea is to form an approximate posterior via a product of density estimates fit to each subposterior,
and then sample from this approximate posterior. The accuracy and computational requirements
of this approach depend on the complexity of these density estimates. Wang and Dunson [24]
develop alternate data-parallel MCMC methods based on applying a Weierstrass transform to each
subposterior. These Weierstrass sampling procedures introduce auxiliary variables and additional
communication between computational cores.
3 Consensus Monte Carlo as variational inference
Given the distributional form of the CMC framework (3), we would like to choose F so that the
induced distribution on θ is as close as possible to the true posterior. This is precisely the problem
addressed by variational Bayes, which approximates an intractable posterior p(θ | X) by the solution
q* to the constrained optimization problem

    min D_KL(q || p(θ | X)) subject to q ∈ Q,

where Q is the family of variational approximations to the distribution, usually chosen to make both
optimization and evaluation of target expectations tractable. We thus view the aggregation problem
in CMC as a variational inference problem, with the variational family given by all distributions
Q = Q_F = {q_F : F ∈ F}, where each F is in some function class F and defines a density

    q_F(θ) = ∫_{Θ^K} ∏_{k=1}^K p_k(θ_k) δ_{θ = F(θ_1, . . . , θ_K)} dθ_{1:K}.
In practice, we optimize over finite-dimensional F using projected stochastic gradient descent
(SGD).
4 The variational optimization problem
Standard optimization of the variational Bayes objective uses the evidence lower bound (ELBO)

    log p(X) = log E_q[ p(θ, X) / q(θ) ] ≥ E_q[ log ( p(θ, X) / q(θ) ) ]
             = log p(X) − D_KL(q || p(θ | X)) =: L_VB(q).        (5)
We can therefore recast the variational optimization problem in the equivalent form

    max L_VB(q) subject to q ∈ Q.
Unfortunately, the variational Bayes objective L_VB remains difficult to optimize. Indeed, by writing

    L_VB(q) = E_q[log p(θ, X)] + H[q],

we see that optimizing L_VB requires computing an entropy H[q] and its gradients. We can deal with
this issue by deriving a lower bound on the entropy that relaxes the objective further.
Concretely, suppose that every F ∈ F can be decomposed as F(θ_{1:K}) = Σ_{k=1}^K F_k(θ_k), with
each F_k a differentiable bijection. Since the θ_k come from subposteriors conditioning on different
segments of the data, they are independent. The entropy power inequality [6] therefore implies

    H[q] ≥ max_{1≤k≤K} H[F_k(θ_k)] = max_{1≤k≤K} ( H[p_k] + E_{p_k}[ log det[ J(F_k)(θ_k) ] ] )        (6)
         ≥ min_{1≤k≤K} H[p_k] + max_{1≤k≤K} E_{p_k}[ log det[ J(F_k)(θ_k) ] ]
         ≥ min_{1≤k≤K} H[p_k] + (1/K) Σ_{k=1}^K E_{p_k}[ log det[ J(F_k)(θ_k) ] ] =: H̃[q],        (7)

where J(f) denotes the Jacobian of the function f. The proof can be found in the supplement.
This approach gives an explicit, easily computed approximation to the entropy; moreover, this approximation is a lower bound, allowing us to interpret it simply as a further relaxation of the original
inference problem. Furthermore, and crucially, it decouples p_k and F_k, thereby making it possible
to optimize over F_k without estimating the entropy of any p_k. We note additionally that if we are
willing to sacrifice concavity, we can use the tighter lower bound on the entropy given by (6).
Putting everything together, we can define our relaxed variational objective as

    L(q) = E_q[log p(θ, X)] + H̃[q].        (8)
Maximizing this function is the variational Bayes problem we consider in the remainder of the paper.
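For linear aggregation functions F_k(θ_k) = W_k θ_k, the Jacobian term in H̃[q] is simply log det W_k, so the relaxed objective can be estimated by Monte Carlo over subposterior draws. A minimal sketch of this estimator (our own; log_joint is a placeholder for log p(θ, X)) is:

```python
import numpy as np

def relaxed_objective(samples, weights, log_joint):
    """Monte Carlo estimate of Eq. (8), up to the constant min_k H[p_k] term.

    samples: list of K arrays of shape (T, d) of subposterior draws
    weights: list of K (d, d) weight matrices W_k
    """
    T = samples[0].shape[0]
    # aggregated draws theta_t = sum_k W_k theta_{k,t}
    theta = sum(s @ W.T for s, W in zip(samples, weights))
    cross_entropy = np.mean([log_joint(theta[t]) for t in range(T)])
    # (1/K) sum_k log det W_k, since J(F_k) = W_k for linear maps
    entropy_term = np.mean([np.linalg.slogdet(W)[1] for W in weights])
    return cross_entropy + entropy_term
```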
Conditions for concavity. Under certain conditions, the problem posed above is blockwise concave. To see when this holds, we use the language of graphical models and exponential families. To
derive the result in the greatest possible generality, we decompose the variational objective as

    L_VB = E_q[log p(θ, X)] + H[q] ≥ L̃ + H̃[q],

and prove concavity directly for the cross-entropy term L̃ = E_q[log p(θ, X)], then treat our choice of relaxed entropy (7). We emphasize that
while the entropy relaxation is only defined for decomposed aggregation functions, concavity of the
partial objective holds for arbitrary aggregation functions. All proofs are in the supplement.
Suppose the model distribution is specified via a graphical model G, so that θ = (θ_u)_{u∈V(G)}, such
that each conditional distribution is defined by an exponential family

    log p(θ_u | θ_{par(u)}) = log h_u(θ_u) + Σ_{u′∈par(u)} (θ_{u′})^T T_{u′→u}(θ_u) − log A_u(θ_{par(u)}).

If each of these log conditional density functions is log-concave in θ_u, we can guarantee that the log
likelihood is concave in each θ_u individually.
Theorem 4.1 (Blockwise concavity of the variational cross-entropy). Suppose that the model distribution is specified by a graphical model G in which each conditional probability density is a
log-concave exponential family. Suppose further that the variational aggregation function family
satisfies F = ∏_{u∈V(G)} F^u such that we can decompose each aggregation function across nodes via

    F(θ) = (F^u(θ_u))_{u∈V(G)},   F ∈ F and F^u ∈ F^u.

If each F^u is a convex subset of some vector space H_u, then the variational cross-entropy L̃ is
concave in each F^u individually.
Assuming that the aggregation function can be decomposed into a sum over functions of individual
subposterior terms, we can also prove concavity of our entropy relaxation (7).
Theorem 4.2 (Concavity of the relaxed entropy). Suppose F = ∏_{k=1}^K F_k, with each function
F ∈ F decomposing as F(θ_1, . . . , θ_K) = Σ_{k=1}^K F_k(θ_k) for unique bijective F_k ∈ F_k. Then the
relaxed entropy (7) is concave in F.
As a result, we derive concavity of the variational objective in a broad range of settings.
Corollary 4.1 (Concavity of the variational objective). Under the hypotheses of Theorems 4.1 and
4.2, the variational Bayes objective L = L̃ + H̃ is concave in each F^u individually.
5 Variational aggregation function families
The performance of our algorithm depends critically on the choice of aggregation function family F.
The family must be sufficiently simple to support efficient optimization, expressive to capture the
complex transformation from the set of subposteriors to the full posterior, and structured to preserve
structure in the parameters. We now illustrate some aggregation functions that meet these criteria.
Vector aggregation. In the simplest case, θ ∈ ℝ^d is an unconstrained vector. Then, a linear aggregation function F_W = Σ_{k=1}^K W_k θ_k makes sense, and it is natural to impose constraints to make this
sum behave like a weighted average, i.e., each W_k ∈ S_+^d is a positive semidefinite (PSD) matrix
and Σ_{k=1}^K W_k = I_d. For computational reasons, it is often desirable to restrict to diagonal W_k.
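One convenient way to maintain this constraint for diagonal weights during gradient steps, shown purely for illustration (the paper uses projected SGD; the softmax parameterization below is our own simplification), is:

```python
import numpy as np

def diagonal_weights(logits):
    """Map unconstrained logits of shape (K, d) to nonnegative diagonal weights
    that sum to the identity across cores (softmax per coordinate)."""
    z = np.exp(logits - logits.max(axis=0, keepdims=True))
    return z / z.sum(axis=0, keepdims=True)  # columns sum to 1

def aggregate(samples, logits):
    """F_W(theta_{1:K}) = sum_k W_k theta_k with diagonal W_k."""
    W = diagonal_weights(logits)
    return sum(W[k] * samples[k] for k in range(len(samples)))
```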
Spectral aggregation. Cases involving structure exhibit more interesting behavior. Indeed, if our
parameter is a PSD matrix Λ ∈ S_+^d, applying the vector aggregation function above to the flattened
vector form vec(Λ) of the parameter does not suffice. Denoting the elementwise matrix product as ⊙,
we note that this strategy would in general lead to F_W(Λ_{1:K}) = Σ_{k=1}^K W_k ⊙ Λ_k ∉ S_+^d.

We therefore introduce a more sophisticated aggregation function that preserves PSD structure. For
this, given symmetric A ∈ ℝ^{d×d}, define R(A) and D(A) to be orthogonal and diagonal matrices,
respectively, such that A = R(A)^T D(A) R(A). Impose further (and crucially) the canonical
ordering D(A)₁₁ ≥ · · · ≥ D(A)_{dd}. We can then define our spectral aggregation function by

    F_W^spec(Λ_{1:K}) = Σ_{k=1}^K R(Λ_k)^T [W_k D(Λ_k)] R(Λ_k).

Assuming W_k ∈ S_+^d, the output of this function is guaranteed to be PSD, as required. As above, we
restrict the set of W_k to the matrix simplex {(W_k)_{k=1}^K : W_k ∈ S_+^d, Σ_{k=1}^K W_k = I}.
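A sketch of this construction (our own illustration; the weights are taken diagonal, as in our experiments) could look as follows:

```python
import numpy as np

def eig_sorted(A):
    """Eigendecomposition A = R^T D R with eigenvalues in decreasing order."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order].T  # D(A) as a vector, R(A) as rows

def spectral_aggregate(matrices, weights):
    """F_W^spec(Lambda_{1:K}) = sum_k R(Lambda_k)^T [W_k D(Lambda_k)] R(Lambda_k).

    matrices: list of K symmetric PSD (d, d) arrays
    weights: (K, d) array of nonnegative diagonal weights summing to 1 per column
    """
    out = 0.0
    for k, A in enumerate(matrices):
        d_vals, R = eig_sorted(A)
        out = out + R.T @ np.diag(weights[k] * d_vals) @ R
    return out
```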
Combinatorial aggregation. Additional complexity arises with unidentifiable latent variables
and, more generally, models with multimodal posteriors. Since this class encompasses many popular
algorithms in machine learning, including factor analysis, mixtures of Gaussians and multinomials,
and latent Dirichlet allocation (LDA), we now show how our framework can accommodate them.

For concreteness, suppose now that our model parameters are given by θ ∈ ℝ^{L×d}, where L denotes
the number of global latent variables (e.g. cluster centers). We introduce discrete alignment parameters a_k that indicate how latent variables associated with partitions map to global latent variables.
Each a_k is thus a one-to-one correspondence [L] → [L], with a_{kℓ} denoting the index on worker
core k of cluster center ℓ. For fixed a, we then obtain the variational aggregation function

    F_a(θ_{1:K}) = ( Σ_{k=1}^K W_{kℓ} θ_{k a_{kℓ}} )_{ℓ=1}^L.

Optimization can then proceed in an alternating manner, switching between the alignments a_k
and the weights W_k, or in a greedy manner, fixing the alignments at the start and optimizing
the weight matrices. In practice, we do the latter, aligning using the simple heuristic objective
O(a) = Σ_{k=2}^K Σ_{ℓ=1}^L ‖θ̄_{k a_{kℓ}} − θ̄_{1ℓ}‖²₂, where θ̄_{kℓ} denotes the mean value of cluster center ℓ on
partition k. As O suggests, we set a_{1ℓ} = ℓ. Minimizing O via the Hungarian algorithm [15] leads
to good alignments.
Figure 1: High-dimensional probit regression (d = 300). Moment approximation error for the
uniform and Gaussian averaging baselines and VCMC, relative to serial MCMC, for subposteriors (left) and partial posteriors (right); note the different vertical axis scales. We assessed three
groups of functions: first moments, with f(β) = β_j for 1 ≤ j ≤ d; pure second moments, with
f(β) = β_j² for 1 ≤ j ≤ d; and mixed second moments, with f(β) = β_i β_j for 1 ≤ i < j ≤ d. For
brevity, results for pure second moments are relegated to Figure 5 in the supplement.
6 Empirical evaluation
We now evaluate VCMC on three inference problems, in a range of data and dimensionality conditions. In the vector parameter case, we compare directly to the simple weighting baselines corresponding to previous work on CMC [22]; in the other cases, we compare to structured analogues of
these weighting schemes. Our experiments demonstrate the advantages of VCMC across the whole
range of model dimensionality, data quantity, and availability of parallel resources.
Baseline weight settings. Scott et al. [22] studied linear aggregation functions with fixed weights,

    W_k^unif = (1/K) · I_d   and   W_k^gauss ∝ diag(Σ̂_k)^{−1},        (9)

corresponding to uniform averaging and Gaussian averaging, respectively, where Σ̂_k denotes the
standard empirical estimate of the covariance. These are our baselines for comparison.
Evaluation metrics. Since the goal of MCMC is usually to estimate event probabilities and function expectations, we evaluate algorithm accuracy for such estimates, relative to serial MCMC output. For each model, we consider a suite of test functions f ∈ F (e.g. low-degree polynomials,
cluster comembership indicators), and we assess the error of each algorithm A using the metric

    ε_A(f) = |E_A[f] − E_MCMC[f]| / |E_MCMC[f]|.

In the body of the paper, we report median values of ε_A, computed within each test function class.
The supplement expands on this further, showing quartiles for the differences in ε_VCMC and ε_CMC.
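Computed over samples, this metric reduces to a one-liner; for completeness, a sketch of our own:

```python
import numpy as np

def relative_error(samples_A, samples_mcmc, f):
    """epsilon_A(f): relative error of E_A[f] against E_MCMC[f], via sample means."""
    e_A = np.mean([f(t) for t in samples_A])
    e_ref = np.mean([f(t) for t in samples_mcmc])
    return abs(e_A - e_ref) / abs(e_ref)
```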
Bayesian probit regression. We consider the nonconjugate probit regression model. In this case,
we use linear aggregation functions as our function class. For computational efficiency, we also
limit ourselves to diagonal W_k. We use Gibbs sampling on the following augmented model:

    β ∼ N(0, σ² I_d),   Z_n | β, x_n ∼ N(β^T x_n, 1),   Y_n | Z_n, β, x_n = 1 if Z_n > 0, and 0 otherwise.

This augmentation allows us to implement an efficient and rapidly mixing Gibbs sampler, where

    β | x_{1:N} = X, z_{1:N} = z ∼ N(Σ X^T z, Σ),   Σ = (σ^{−2} I_d + X^T X)^{−1}.
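A compact sketch of this Gibbs sweep (our own illustration of the two conditional updates; truncated-normal draws use scipy, and each iteration yields one draw of β) is:

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, sigma2, n_iter, rng=None):
    """Gibbs sampler for the augmented probit model (Z and beta updates)."""
    rng = np.random.default_rng(rng)
    N, d = X.shape
    Sigma = np.linalg.inv(np.eye(d) / sigma2 + X.T @ X)  # posterior covariance
    L = np.linalg.cholesky(Sigma)
    beta = np.zeros(d)
    for _ in range(n_iter):
        # Z_n | beta: N(x_n^T beta, 1) truncated to (0, inf) if y_n = 1, else (-inf, 0)
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # beta | z: N(Sigma X^T z, Sigma)
        beta = Sigma @ (X.T @ z) + L @ rng.standard_normal(d)
        yield beta.copy()
```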
We run two experiments: the first using a data-generating distribution from Scott et al. [22],
with N = 8500 data points and d = 5 dimensions, and the second using N = 10⁵ data points and
d = 300 dimensions. As shown in Figure 1 and, in the supplement,¹ Figures 4 and 5, VCMC decreases the error of moment estimation compared to the baselines, with substantial gains starting
at K = 25 partitions (and increasing with K). We also run the high-dimensional experiment using
partial posteriors [23] in place of subposteriors, and observe substantially lower errors in this case.
Figure 2: High-dimensional normal-inverse Wishart model (d = 100). (Far left, left, right) Moment
approximation error for the uniform and Gaussian averaging baselines and VCMC, relative to serial
MCMC. Letting λ_j denote the j-th largest eigenvalue of Λ^{−1}, we assessed three groups of functions:
first moments, with f(Λ) = λ_j for 1 ≤ j ≤ d; pure second moments, with f(Λ) = λ_j² for 1 ≤ j ≤ d;
and mixed second moments, with f(Λ) = λ_i λ_j for 1 ≤ i < j ≤ d. (Far right) Graph of error in
estimating E[λ_j] as a function of j (where λ₁ ≥ λ₂ ≥ · · · ≥ λ_d).
Normal-inverse Wishart model. To compare directly to prior work [22], we consider the normal-inverse Wishart model

    Λ ∼ Wishart(ν, V),   X_n | μ, Λ ∼ N(μ, Λ^{−1}).
Here, we use spectral aggregation rules as our function class, restricting to diagonal W_k for computational efficiency. We run two sets of experiments: one using the covariance matrix from Scott
et al. [22], with N = 5000 data points and d = 5 dimensions, and one using a higher-dimensional
covariance matrix designed to have a small spectral gap and a range of eigenvalues, with N = 10⁵
data points and d = 100 dimensions. In both cases, we use a form of projected SGD, using 40
samples per iteration to estimate the variational gradients and running 25 iterations of optimization.
We note that because the mean μ is treated as a point-estimated parameter, one could sample Λ
exactly using normal-inverse Wishart conjugacy [10]. As Figure 2 shows,² VCMC improves both
first and second posterior moment estimation as compared to the baselines. Here, the greatest gains
from VCMC appear at large numbers of partitions (K = 50, 100). We also note that uniform and
Gaussian averaging perform similarly because the variances do not differ much across partitions.
Mixture of Gaussians. A substantial portion of Bayesian inference focuses on latent variable
models and, in particular, mixture models. We therefore evaluate VCMC on a mixture of Gaussians,

    μ_{1:L} ∼ N(0, σ² I_d),   Z_n ∼ Cat(π),   X_n | Z_n = z ∼ N(μ_z, δ² I_d),

where the mixture weights π and the prior and likelihood variances σ² and δ² are assumed known.
We use the combinatorial aggregation functions defined in Section 5; we set L = 8, σ = 2, δ = 1,
and π uniform, and generate N = 5 × 10⁴ data points in d = 8 dimensions, using the model
from Nishihara et al. [19]. The resulting inference problem is therefore L × d = 64-dimensional.
All samples were drawn using the PyStan implementation of Hamiltonian Monte Carlo (HMC).
As Figure 3a shows, VCMC drastically improves moment estimation compared to the baseline
Gaussian averaging (9). To assess how VCMC influences estimates of cluster membership probabilities, we generated 100 new test points from the model and analyzed cluster comembership
probabilities for all pairs in the test set. Concretely, for each x_i and x_j in the test data, we estimated P[x_i and x_j belong to the same cluster]. Figure 3a shows the resulting boost in accuracy:
when δ = 1, VCMC delivers estimates close to those of serial MCMC, across all numbers of partitions; the errors are larger for δ = 2. Unlike the previous models, uniform averaging here outperforms
Gaussian averaging, and indeed is competitive with VCMC.
Assessing computational efficiency. The efficiency of VCMC depends on that of the optimization
step, which depends on factors including the step size schedule, the number of samples used per iteration
to estimate gradients, and the size of data minibatches used per iteration. Extensively assessing the
influence of all these factors is beyond the scope of this paper, and is an active area of research both
in general and specifically in the context of variational inference [13, 17, 21].

¹Due to space constraints, we relegate results for d = 5 to the supplement.
²Due to space constraints, we compare to the d = 5 experiment of Scott et al. [22] in the supplement.
(a) Mixture of Gaussians (d = 8, L = 8). (b) Error versus timing and speedup measurements.
Figure 3: (a) Expectation approximation error for the uniform and Gaussian baselines and VCMC.
We report the median error, relative to serial MCMC, for cluster comembership probabilities of
pairs of test data points, for (left) δ = 1 and (right) δ = 2, where we run the VCMC optimization
procedure for 50 and 200 iterations, respectively. When δ = 2, some comembership probabilities
are estimated poorly by all methods; we therefore only use the 70% of comembership probabilities
with the smallest errors across all the methods. (b) (Left) VCMC error as a function of the number of
seconds of optimization. The cost of optimization is nonnegligible, but still moderate compared to
serial MCMC, particularly since our optimization scheme only needs small batches of samples and
can therefore operate concurrently with the sampler. (Right) Error versus speedup relative to serial
MCMC, for both CMC with Gaussian averaging (small markers) and VCMC (large markers).
Here, we provide an initial assessment of the computational efficiency of VCMC, taking the probit regression and
Gaussian mixture models as our examples, using step sizes and sample numbers from above, and
eschewing minibatching on data points.
Figure 3b shows timing results for both models. For the probit regression, while the optimization
cost is not negligible, it is significantly smaller than that of serial sampling, which takes over 6000
seconds to produce 1000 effective samples.³ Across most numbers of partitions, approximately 25
iterations, corresponding to less than 1500 seconds of wall clock time, suffice to give errors close
to those at convergence. For the mixture, on the other hand, the computational cost of optimization
is minimal compared to serial sampling. We can see this in the overall speedup of VCMC relative
to serial MCMC: for sampling and optimization combined, low numbers of partitions (K ≤ 25)
achieve speedups close to the ideal value of K, and large numbers (K = 50, 100) still achieve good
speedups of about K/2. The cost of the VCMC optimization step is thus moderate and, when the
MCMC step is expensive, small enough to preserve the linear speedup of embarrassingly parallel
sampling. Moreover, since the serial bottleneck is an optimization, we are optimistic that performance, both in terms of the number of iterations and wall clock time, can be significantly increased by
using techniques like data minibatching [9], adaptive step sizes [21], or asynchronous updates [20].
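To make the combined accounting of sampling and optimization time concrete, the following back-of-the-envelope sketch (with hypothetical timings; this is our simplification, not the authors' measurement protocol) shows how a one-off optimization cost erodes the ideal K-fold speedup:

```python
def overall_speedup(t_serial: float, k: int, t_optimize: float) -> float:
    """Overall VCMC speedup over serial MCMC with K partitions.

    Assumes per-partition sampling parallelizes perfectly (t_serial / k)
    and that the aggregation-function optimization adds a one-off cost of
    t_optimize seconds; both assumptions are ours, for illustration only.
    """
    return t_serial / (t_serial / k + t_optimize)

# Hypothetical numbers in the spirit of the probit experiment: 6000 s of
# serial sampling and 1500 s of optimization at K = 25 give roughly 3.4x,
# well below the ideal 25x; as t_optimize -> 0 the speedup approaches K.
print(overall_speedup(6000.0, 25, 1500.0))
```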
7
Conclusion and future work
The flexibility of variational consensus Monte Carlo (VCMC) opens several avenues for further
research. Following previous work on data-parallel MCMC, we used the subposterior factorization. Our variational framework can accommodate more general factorizations that might be more
statistically or computationally efficient, e.g. the factorization used by Broderick et al. [4]. We
also introduced structured sample aggregation, and analyzed some concrete instantiations. Complex
latent variable models would require more sophisticated aggregation functions, e.g. ones that account for symmetries in the model [5] or lift the parameter to a higher dimensional space before
aggregating. Finally, recall that our algorithm, again following previous work, aggregates in a
sample-by-sample manner, cf. (4). Other aggregation paradigms may be useful in building approximations to multimodal posteriors or in boosting the statistical efficiency of the overall sampler.
Acknowledgments. We thank R.P. Adams, N. Altieri, T. Broderick, R. Giordano, M.J. Johnson,
and S.L. Scott for helpful discussions. E.A. is supported by the Miller Institute for Basic Research
in Science, University of California, Berkeley. M.R. is supported by a Hertz Foundation Fellowship,
generously endowed by Google, and an NSF Graduate Research Fellowship. Support for this project
was provided by Amazon and by ONR under the MURI program (N00014-11-1-0688).
³We ran the sampler for 5100 iterations, including 100 burn-in steps, and kept every fifth sample.
References
[1] A. U. Asuncion, P. Smyth, and M. Welling. Asynchronous distributed learning of topic models. In Advances in Neural Information Processing Systems 21, pages 81–88, 2008.
[2] R. Bardenet, A. Doucet, and C. Holmes. Towards scaling up Markov chain Monte Carlo: An adaptive subsampling approach. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 2nd edition, 1990.
[4] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems 26, pages 1727–1735, 2013.
[5] T. Campbell and J. P. How. Approximate decentralized Bayesian inference. In 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.
[7] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, Jan. 2008.
[8] F. Doshi-Velez, D. A. Knowles, S. Mohamed, and Z. Ghahramani. Large scale nonparametric Bayesian inference: Data parallelisation in the Indian buffet process. In Advances in Neural Information Processing Systems 22, pages 1294–1302, 2009.
[9] J. C. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[10] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC, 2013.
[11] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, May 2013.
[12] M. Johnson, J. Saunderson, and A. Willsky. Analyzing Hogwild parallel Gaussian Gibbs sampling. In Advances in Neural Information Processing Systems 26, pages 2715–2723, 2013.
[13] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, pages 315–323, 2013.
[14] A. Korattikara, Y. Chen, and M. Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[15] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
[16] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with subsets of data. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[17] S. Mandt and D. M. Blei. Smoothed gradients for stochastic variational inference. In Advances in Neural Information Processing Systems 27, pages 2438–2446, 2014.
[18] W. Neiswanger, C. Wang, and E. Xing. Asymptotically exact, embarrassingly parallel MCMC. In 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[19] R. Nishihara, I. Murray, and R. P. Adams. Parallel MCMC with generalized elliptical slice sampling. Journal of Machine Learning Research, 15:2087–2112, 2014.
[20] F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, pages 693–701, 2011.
[21] R. Ranganath, C. Wang, D. M. Blei, and E. P. Xing. An adaptive learning rate for stochastic variational inference. In Proceedings of the 30th International Conference on Machine Learning, pages 298–306, 2013.
[22] S. L. Scott, A. W. Blocker, and F. V. Bonassi. Bayes and big data: The consensus Monte Carlo algorithm. In Bayes 250, 2013.
[23] H. Strathmann, D. Sejdinovic, and M. Girolami. Unbiased Bayes for big data: Paths of partial posteriors. arXiv:1501.03326, 2015.
[24] X. Wang and D. B. Dunson. Parallel MCMC via Weierstrass sampler. arXiv:1312.4605, 2013.
[25] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, 2011.