On Fairness and Calibration
Geoff Pleiss*, Manish Raghavan*, Felix Wu, Jon Kleinberg, Kilian Q. Weinberger
Cornell University, Department of Computer Science
{geoff,manish,kleinber}@cs.cornell.edu, {fw245,kwq4}@cornell.edu
Abstract
The machine learning community has become increasingly concerned with the
potential for bias and discrimination in predictive models. This has motivated a
growing line of work on what it means for a classification procedure to be "fair."
In this paper, we investigate the tension between minimizing error disparity across
different population groups while maintaining calibrated probability estimates. We
show that calibration is compatible only with a single error constraint (i.e. equal
false-negative rates across groups), and show that any algorithm that satisfies this
relaxation is no better than randomizing a percentage of predictions for an existing
classifier. These unsettling findings, which extend and generalize existing results,
are empirically confirmed on several datasets.
1 Introduction
Recently, there has been growing concern about errors of machine learning algorithms in sensitive domains, including criminal justice, online advertising, and medical testing [33], which may systematically discriminate against particular groups of people [2, 4, 8]. A recent high-profile example of these concerns was raised by the news organization ProPublica, which studied a risk-assessment tool that is widely used in the criminal justice system. This tool assigns to each criminal
defendant an estimated probability that they will commit a future crime. ProPublica found that the
risk estimates assigned to defendants who did not commit future crimes were on average higher
among African-American defendants than Caucasian defendants [1]. This is a form of false-positive
error, and in this case it disproportionately affected African-American defendants. To mitigate issues
such as these, the machine learning community has proposed different frameworks that attempt
to quantify fairness in classification [2, 4, 8, 19, 26, 34, 37]. A recent and particularly noteworthy
framework is Equalized Odds [19] (also referred to as Disparate Mistreatment [37]),1 which constrains
classification algorithms such that no error type (false-positive or false-negative) disproportionately
affects any population subgroup. This notion of non-discrimination is feasible in many settings, and
researchers have developed tractable algorithms for achieving it [17, 19, 34, 37].
When risk tools are used in practice, a key goal is that they are calibrated: if we look at the set of
people who receive a predicted probability of p, we would like a p fraction of the members of this
set to be positive instances of the classification problem [11]. Moreover, if we are concerned about
fairness between two groups G1 and G2 (e.g. African-American defendants and white defendants)
then we would like this calibration condition to hold simultaneously for the set of people within each
of these groups as well [16]. Calibration is a crucial condition for risk tools in many settings. If a
risk tool for evaluating defendants were not calibrated with respect to groups defined by race, for
example, then a probability estimate of p could carry different meaning for African-American and
white defendants, and hence the tool would have the unintended and highly undesirable consequence
of incentivizing judges to take race into account when interpreting its predictions. Despite the
importance of calibration as a property, our understanding of how it interacts with other fairness
properties is limited. We know from recent work that, except in the most constrained cases, it is
impossible to achieve calibration while also satisfying Equalized Odds [8, 26]. However, we do not
know how best to achieve relaxations of these guarantees that are feasible in practice.
* Equal contribution, alphabetical order.
1 For the remainder of the paper, we will use Equalized Odds to refer to this notion of non-discrimination.
Our goal is to further investigate the relationship between calibration and error rates. We show
that even if the Equalized Odds conditions are relaxed substantially (requiring only that weighted
sums of the group error rates match), it is still problematic to also enforce calibration. We provide
necessary and sufficient conditions under which this calibrated relaxation is feasible. When feasible,
it has a unique optimal solution that can be achieved through post-processing of existing classifiers.
Moreover, we provide a simple post-processing algorithm to find this solution: withholding predictive information for randomly chosen inputs to achieve parity and preserve calibration. However,
this simple post-processing method is fundamentally unsatisfactory: although the post-processed
predictions of our information-withholding algorithm are "fair" in expectation, most practitioners
would object to the fact that a non-trivial portion of the individual predictions are withheld as a result
of coin tosses, especially in sensitive settings such as health care or criminal justice. The optimality
of this algorithm thus has troubling implications and shows that calibration and error-rate fairness are
inherently at odds (even beyond the initial results of [8] and [26]).
Finally, we evaluate these theoretical findings empirically, comparing calibrated notions of non-discrimination against the (uncalibrated) Equalized Odds framework on several datasets. These
experiments further support our conclusion that calibration and error-rate constraints are in most
cases mutually incompatible goals. In practical settings, it may be advisable to choose only one of
these goals rather than attempting to achieve some relaxed notion of both.
2 Related Work
Calibrated probability estimates are considered necessary for empirical risk analysis tools [4, 10,
12, 16]. In practical applications, uncalibrated probability estimates can be misleading in the sense
that the end user of these estimates has an incentive to mistrust (and therefore potentially misuse)
them. We note however that calibration does not remove all potential for misuse, as the end user's
biases might cause her or him to treat estimates differently based on group membership. There
are several post-processing methods for producing calibrated outputs from classification algorithms.
For example, Platt Scaling [31] passes outputs through a learned sigmoid function, transforming
them into calibrated probabilities. Histogram Binning and Isotonic Regression [35] learn a general
monotonic function from outputs to probabilities. See [30] and [18] for empirical comparisons of
these methods.
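To make the mechanics concrete, here is a minimal sketch of Platt Scaling in Python. This is our illustration of the idea in [31], not a reference implementation; the use of scipy.optimize.minimize and the argument names are our choices.

```python
# A minimal Platt Scaling sketch (our illustration of [31], not its reference
# implementation). We fit a sigmoid over raw scores s by minimizing the
# logistic loss on a held-out set; a and b are the learned parameters.
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, float)
    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        eps = 1e-12  # guard against log(0)
        return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    a, b = minimize(nll, x0=np.array([1.0, 0.0])).x
    return lambda s: 1.0 / (1.0 + np.exp(-(a * s + b)))
```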
Equalized Odds [19], also referred to as Disparate Mistreatment [37], ensures that no error type
disproportionately affects any particular group. Hardt et al. [19] provide a post-processing technique
to achieve this framework, while Zafar et al. [37] introduce optimization constraints to achieve
non-discrimination at training time. Recently, this framework has received significant attention
from the algorithmic fairness community. Researchers have found that it is incompatible with other
notions of fairness [8, 9, 26]. Additionally, Woodworth et al. [34] demonstrate that, under certain
assumptions, post-processing methods for achieving non-discrimination may be suboptimal.
Alternative fairness frameworks exist and are continuously proposed. We highlight several of these
works, though by no means offer a comprehensive list. (More thorough reviews can be found in
[2, 4, 32]). It has been shown that, under most frameworks of fairness, there is a trade-off between
algorithmic performance and non-discrimination [4, 9, 19, 39]. Several works approach fairness
through the lens of Statistical Parity [6, 7, 14, 20, 22, 23, 29, 38]. Under this definition, group
membership should not affect the prediction of a classifier, i.e. members of different groups should
have the same probability of receiving a positive-class prediction. However, it has been argued that
Statistical Parity may not be applicable in many scenarios [8, 13, 19, 26], as it attempts to guarantee
equal representation. For example, it is inappropriate in criminal justice, where base rates differ across
different groups. A related notion is Disparate Impact [15, 36], which states that the prediction rates
for any two groups should not differ by more than 80% (a number motivated by legal requirements).
Dwork et al. [13] introduce a notion of fairness based on the idea that similar individuals should
receive similar outcomes, though it is challenging to achieve this notion in practice. Fairness has also
been considered in online learning [21, 24], unsupervised learning [5], and causal inference [25, 27].
3 Problem Setup
The setup of our framework mostly follows the Equalized Odds framework [19, 37]; however, we extend their framework for use with probabilistic classifiers. Let $P \subseteq \mathbb{R}^k \times \{0, 1\}$ be the input space of a binary classification task. In our criminal justice example, $(x, y) \in P$ represents a person, with $x$ representing the individual's history and $y$ representing whether or not the person will commit another crime. Additionally, we assume the presence of two groups $G_1, G_2 \subseteq P$, which represent disjoint population subsets, such as different races. We assume that the groups have different base rates $\mu_t$, or probabilities of belonging to the positive class: $\mu_1 = \Pr_{(x,y) \sim G_1}[y = 1] \neq \Pr_{(x,y) \sim G_2}[y = 1] = \mu_2$. Finally, let $h_1, h_2 : \mathbb{R}^k \to [0, 1]$ be binary classifiers, where $h_1$ classifies samples from $G_1$ and $h_2$ classifies samples from $G_2$.2 Each classifier outputs the probability that a given sample $x$ belongs to the positive class. The notion of Equalized Odds non-discrimination is based on the false-positive and false-negative rates for each group, which we generalize here for use with probabilistic classifiers:
Definition 1. The generalized false-positive rate of classifier $h_t$ for group $G_t$ is $c_{fp}(h_t) = \mathbb{E}_{(x,y) \sim G_t}[h_t(x) \mid y = 0]$. Similarly, the generalized false-negative rate of classifier $h_t$ is $c_{fn}(h_t) = \mathbb{E}_{(x,y) \sim G_t}[(1 - h_t(x)) \mid y = 1]$.
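As a concrete reading of Definition 1, the following NumPy sketch (ours, with hypothetical argument names) computes both generalized rates from a vector of probabilistic scores and binary labels for one group:

```python
import numpy as np

def generalized_fp_rate(scores, labels):
    # E[h(x) | y = 0]: mean predicted positive probability over true negatives.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    return scores[labels == 0].mean()

def generalized_fn_rate(scores, labels):
    # E[1 - h(x) | y = 1]: mean predicted negative probability over true positives.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    return (1.0 - scores[labels == 1]).mean()
```

With hard 0/1 scores these reduce to the usual false-positive and false-negative rates, matching the remark below.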
If the classifier were to output either 0 or 1, these quantities reduce to the standard notions of false-positive and false-negative rates. We now define the Equalized Odds framework (generalized for probabilistic classifiers), which aims to ensure that errors of a given type are not biased against any group.
Definition 2 (Probabilistic Equalized Odds). Classifiers $h_1$ and $h_2$ exhibit Equalized Odds for groups $G_1$ and $G_2$ if $c_{fp}(h_1) = c_{fp}(h_2)$ and $c_{fn}(h_1) = c_{fn}(h_2)$.
Calibration Constraints. As stated in the introduction, these two conditions do not necessarily
prevent discrimination if the classifier predictions do not represent well-calibrated probabilities.
Recall that calibration intuitively says that probabilities should carry semantic meaning: if there are 100 people in $G_1$ for whom $h_1(x) = 0.6$, then we expect 60 of them to belong to the positive class.
Definition 3. A classifier $h_t$ is perfectly calibrated if $\forall p \in [0, 1]$, $\Pr_{(x,y) \sim G_t}[y = 1 \mid h_t(x) = p] = p$.
It is commonly accepted amongst practitioners that both classifiers $h_1$ and $h_2$ should be calibrated with respect to groups $G_1$ and $G_2$ to prevent discrimination [4, 10, 12, 16]. Intuitively, this prevents the probability scores from carrying group-specific information. Unfortunately, Kleinberg et al. [26] (as well as [8], in a binary setting) prove that a classifier cannot achieve both calibration and Equalized Odds, even in an approximate sense, except in the most trivial of cases.
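For intuition, Definition 3 can be checked empirically only approximately, since conditioning on an exact score is impossible with finite data. A simple binned check, a sketch of ours rather than the paper's procedure:

```python
import numpy as np

def max_calibration_gap(scores, labels, n_bins=10):
    # Bin samples by predicted probability; under perfect calibration the mean
    # score in each bin matches the bin's empirical positive rate.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    bin_ids = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    gaps = [abs(scores[bin_ids == b].mean() - labels[bin_ids == b].mean())
            for b in range(n_bins) if np.any(bin_ids == b)]
    return max(gaps)  # worst-bin deviation; near 0 for a calibrated classifier
```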
3.1 Geometric Characterization of Constraints
We will now characterize the calibration and error-rate constraints with simple geometric intuitions. Throughout the rest of this paper, all of our results can be easily derived from this interpretation. We begin by defining the region of classifiers which are trivial, or those that output a constant value for all inputs (i.e. $h_c(x) = c$, where $0 \le c \le 1$ is a constant). We can visualize these classifiers on a graph with generalized false-positive rates on one axis and generalized false-negative rates on the other. It follows from the definitions of generalized false-positive/false-negative rates and calibration that all trivial classifiers $h$ lie on the diagonal defined by $c_{fp}(h) + c_{fn}(h) = 1$ (Figure 1a). Therefore, all classifiers that are "better than random" must lie below this diagonal in false-positive/false-negative space (the gray triangle in the figure). Any classifier that lies above the diagonal performs "worse than random," as we can find a point on the trivial classifier diagonal with lower false-positive and false-negative rates.
Now we will characterize the set of calibrated classifiers for groups $G_1$ and $G_2$, which we denote as $H_1$ and $H_2$. Kleinberg et al. show that the generalized false-positive and false-negative rates of a calibrated classifier are linearly related by the base rate of the group:3

$$c_{fn}(h_t) = \frac{1 - \mu_t}{\mu_t}\, c_{fp}(h_t). \qquad (1)$$

2 In practice, $h_1$ and $h_2$ can be trained jointly (i.e. they are the same classifier).
3 Throughout this work we will treat the calibration constraint as holding exactly; however, our results generalize to approximate settings as well. See the Supplementary Materials for more details.
[Figure 1: Calibration, trivial classifiers, and equal-cost constraints, plotted in the false-positive/false-negative plane. $H_1, H_2$ are the sets of calibrated classifiers for the two groups, and $h^{\mu_1}, h^{\mu_2}$ are the trivial classifiers. Panels: (a) possible calibrated classifiers $H_1, H_2$ (blue/red); (b) satisfying calibration and equal F.P. rates; (c) satisfying calibration and equal F.N. rates; (d) satisfying calibration and a general constraint.]
[Figure 2: Calibration-Preserving Parity through interpolation. Panels: (a) level-order curves of cost, where low cost implies low error rates; (b) usually, there is a calibrated classifier $\hat h_2$ with the same cost as $h_1$; (c) calibration and equal cost are incompatible if $h_1$ has high error; (d) possible calibrated classifiers for $G_2$ (bold red) obtained by mixing $h_2$ and $h^{\mu_2}$.]
In other words, $h_1$ lies on a line with slope $(1 - \mu_1)/\mu_1$ and $h_2$ lies on a line with slope $(1 - \mu_2)/\mu_2$ (Figure 1a). The lower endpoint of each line is the perfect classifier, which assigns the correct prediction with complete certainty to every input. The upper endpoint is a trivial classifier, as no calibrated classifier can perform "worse than random" (see Lemma 3 in Section S2). The only trivial classifier that satisfies the calibration condition for a group $G_t$ is the one that outputs the base rate $\mu_t$. We will refer to $h^{\mu_1}$ and $h^{\mu_2}$ as the trivial classifiers, calibrated for groups $G_1$ and $G_2$ respectively. It follows from the definitions that $c_{fp}(h^{\mu_1}) = \mu_1$ and $c_{fn}(h^{\mu_1}) = 1 - \mu_1$, and likewise for $h^{\mu_2}$.
Finally, it is worth noting that for calibrated classifiers, a lower false-positive rate necessarily corresponds to a lower false-negative rate and vice versa. In other words, for a given base rate, a "better" calibrated classifier lies closer to the origin on the line of calibrated classifiers.
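Equation (1) is easy to verify numerically; the sketch below (ours) builds a classifier that is calibrated by construction, drawing labels $y \sim \mathrm{Bernoulli}(p)$ from the scores themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.beta(2.0, 5.0, size=200_000)   # predicted probabilities h(x)
y = rng.binomial(1, p)                 # labels drawn so h is calibrated

mu = y.mean()                          # base rate of the group
c_fp = p[y == 0].mean()                # generalized false-positive rate
c_fn = (1.0 - p[y == 1]).mean()        # generalized false-negative rate

# The two sides of Eq. (1) agree up to sampling noise:
print(c_fn, (1.0 - mu) / mu * c_fp)
```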
Impossibility of Equalized Odds with Calibration. With this geometric intuition, we can provide
a simplified proof of the main impossibility result from [26]:
Theorem (Impossibility Result [26]). Let $h_1$ and $h_2$ be classifiers for groups $G_1$ and $G_2$ with $\mu_1 \neq \mu_2$. $h_1$ and $h_2$ satisfy the Equalized Odds and calibration conditions if and only if $h_1$ and $h_2$ are perfect predictors.
Intuitively, the three conditions define a set of classifiers which is overconstrained. Equalized Odds stipulates that the classifiers $h_1$ and $h_2$ must lie on the same coordinate in the false-positive/false-negative plane. As $h_1$ must lie on the blue line of calibrated classifiers $H_1$ and $h_2$ on the red line $H_2$, they can only satisfy Equalized Odds at the unique intersection point: the origin (and location of the perfect classifier). This implies that unless the two classifiers achieve perfect accuracy, we must relax the Equalized Odds conditions if we want to maintain calibration.
4 Relaxing Equalized Odds to Preserve Calibration
In this section, we show that a substantially simplified notion of Equalized Odds is compatible with
calibration. We introduce a general relaxation that seeks to satisfy a single equal-cost constraint
while maintaining calibration for each group $G_t$. We begin with the observation that Equalized Odds sets constraints to equalize false-positives $c_{fp}(h_t)$ and false-negatives $c_{fn}(h_t)$. To capture and generalize this, we define a cost function $g_t$ to be a linear function in $c_{fp}(h_t)$ and $c_{fn}(h_t)$ with arbitrary dependence on the group's base rate $\mu_t$. More formally, a cost function for group $G_t$ is

$$g_t(h_t) = a_t\, c_{fp}(h_t) + b_t\, c_{fn}(h_t) \qquad (2)$$

where $a_t$ and $b_t$ are non-negative constants that are specific to each group (and thus may depend on $\mu_t$); see Figure 1d. We also make the assumption that for any $\mu_t$, at least one of $a_t$ and $b_t$ is nonzero, meaning $g_t(h_t) = 0$ if and only if $c_{fp}(h_t) = c_{fn}(h_t) = 0$.4 This class of cost functions
encompasses a variety of scenarios. As an example, imagine an application in which the equal false-positive condition is essential but not the false-negative condition. Such a scenario may arise in our recidivism-prediction example, if we require that non-repeat offenders of any race are not disproportionately labeled as high risk. If we plot the sets of calibrated classifiers $H_1$ and $H_2$ on the false-positive/false-negative plane, we can see that ensuring the false-positive condition requires finding classifiers $h_1 \in H_1$ and $h_2 \in H_2$ that fall on the same vertical line (Figure 1b). Conversely, if we instead choose to satisfy only the false-negative condition, we would find classifiers $h_1$ and $h_2$ that fall on the same horizontal line (Figure 1c). Finally, if both false-positive and false-negative errors incur a negative cost on the individual, we may choose to equalize a weighted combination of the error rates [3, 4, 8], which can be graphically described by the classifiers lying on a convex and negatively-sloped level set (Figure 1d). With these definitions, we can formally define our relaxation:
Definition 4 (Relaxed Equalized Odds with Calibration). Given a cost function $g_t$ of the form in (2), classifiers $h_1$ and $h_2$ achieve Relaxed Equalized Odds with Calibration for groups $G_1$ and $G_2$ if both classifiers are calibrated and satisfy the constraint $g_1(h_1) = g_2(h_2)$.
It is worth noting that, for calibrated classifiers, an increase in cost strictly corresponds to an increase
in both the false-negative and false-positive rate. This can be interpreted graphically, as the level-order
cost curves lie further away from the origin as cost increases (Figure 2a). In other words, the cost
function can always be used as a proxy for either error rate.5
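In code, the cost family of Eq. (2) is a two-parameter wrapper around the generalized rates sketched earlier; the default coefficients below are placeholders to be chosen per application (a sketch of ours):

```python
def cost(scores, labels, a=1.0, b=1.0):
    # g(h) = a * c_fp(h) + b * c_fn(h); a and b may depend on the group's base rate.
    return (a * generalized_fp_rate(scores, labels)
            + b * generalized_fn_rate(scores, labels))
```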
Feasibility. It is easy to see that Definition 4 is always satisfiable: in Figures 1b, 1c, and 1d we see that there are many solutions that would lie on a given level-order cost curve while maintaining calibration, including the case in which both classifiers are perfect. In practice, however, not all classifiers are achievable. For the rest of the paper, we will assume that we have access to "optimal" (but possibly discriminatory) calibrated classifiers $h_1$ and $h_2$ such that, due to whatever limitations there are on the predictability of the task, we are unable to find other classifiers that have lower cost with respect to $g_t$. We allow $h_1$ and $h_2$ to be learned in any way, as long as they are calibrated. Without loss of generality, for the remainder of the paper, we will assume that $g_1(h_1) \ge g_2(h_2)$.
Since by assumption we have no way to find a classifier for $G_1$ with lower cost than $h_1$, our goal is therefore to find a classifier $\hat h_2$ with cost equal to $h_1$. This pair of classifiers would represent the lowest-cost (and therefore optimal) set of classifiers that satisfies calibration and the equal-cost constraint. For a given base rate $\mu_t$ and value of the cost function $g_t$, a calibrated classifier's position in the generalized false-positive/false-negative plane is uniquely determined (Figure 2a). This is because each level-order curve of the cost function $g_t$ has negative slope in this plane, and each level-order curve only intersects a group's calibrated classifier line once. In other words, there is a unique solution in the false-positive/false-negative plane for classifier $\hat h_2$ (Figure 2b).
Consider the range of values that $g_t$ can take. As noted above, $g_t(h_t) \ge 0$, with equality if and only if $h_t$ is the perfect classifier. On the other hand, the trivial classifier (again, which outputs the constant $\mu_t$ for all inputs) is the calibrated classifier that achieves maximum cost for any $g_t$ (see Lemma 3 in Section S2). As a result, the cost of a calibrated classifier for group $G_t$ is between 0 and $g_t(h^{\mu_t})$. This naturally leads to a characterization of feasibility: Definition 4 can be achieved if and only if $h_1$ incurs less cost than group $G_2$'s trivial classifier $h^{\mu_2}$; i.e. if $g_1(h_1) \le g_2(h^{\mu_2})$. This can be seen graphically in Figure 2c, in which the level-order curve for $g_1(h_1)$ does not intersect the set of calibrated classifiers for $G_2$. Since, by assumption, we cannot find a calibrated classifier for $G_1$ with strictly smaller cost than $h_1$, there is no feasible solution. On the other hand, if $h_1$ incurs less cost than $h^{\mu_2}$, then we will show feasibility by construction with a simple algorithm.
An Algorithm. While it may be possible to encode the constraints of Definition 4 into the training
procedure of h1 and h2 , it is not immediately obvious how to do so. Even naturally probabilistic
algorithms, such as logistic regression, can become uncalibrated in the presence of optimization constraints (as is the case in [37]). It is not straightforward to encode the calibration constraint if the probabilities are assumed to be continuous, and post-processing calibration methods [31, 35] would break equal-cost constraints by modifying classifier scores. Therefore, we look to achieve the calibrated Equalized Odds relaxation by post-processing existing calibrated classifiers.
4 By calibration, we cannot have one of $c_{fp}(h_t) = 0$ or $c_{fn}(h_t) = 0$ without the other; see Figure 1a.
5 This holds even for approximately calibrated classifiers; see Section S3.
Again, given $h_1$ and $h_2$ with $g_1(h_1) \ge g_2(h_2)$, we want to arrive at a calibrated classifier $\hat h_2$ for group $G_2$ such that $g_1(h_1) = g_2(\hat h_2)$. Recall that, under our assumptions, this would be the best possible solution with respect to classifier cost. We show that this cost constraint can be achieved by withholding predictive information for a randomly chosen subset of group $G_2$. In other words, rather than always returning $h_2(x)$ for all samples, we will occasionally return the group's mean probability (i.e. the output of the trivial classifier $h^{\mu_2}$). In Lemma 4 in Section S2, we show that if

$$\hat h_2(x) = \begin{cases} h^{\mu_2}(x) = \mu_2 & \text{with probability } \alpha \\ h_2(x) & \text{with probability } 1 - \alpha \end{cases} \qquad (3)$$

then the cost of $\hat h_2$ is a linear interpolation between the costs of $h_2$ and $h^{\mu_2}$ (Figure 2d). More formally, we have that $g_2(\hat h_2) = (1 - \alpha)\, g_2(h_2) + \alpha\, g_2(h^{\mu_2})$, and thus setting $\alpha = \frac{g_1(h_1) - g_2(h_2)}{g_2(h^{\mu_2}) - g_2(h_2)}$ ensures that $g_2(\hat h_2) = g_1(h_1)$ as desired (Figure 2b). Moreover, this randomization preserves calibration (see Section S4). Algorithm 1 summarizes this method.
Algorithm 1 Achieving Calibration and an Equal-Cost Constraint via Information Withholding
Input: classifiers $h_1$ and $h_2$ s.t. $g_2(h_2) \le g_1(h_1) \le g_2(h^{\mu_2})$, holdout set $P_{valid}$.
1. Determine the base rate $\mu_2$ of $G_2$ (using $P_{valid}$) to produce the trivial classifier $h^{\mu_2}$.
2. Construct $\hat h_2$ as in (3) with interpolation parameter $\alpha = \frac{g_1(h_1) - g_2(h_2)}{g_2(h^{\mu_2}) - g_2(h_2)}$.
return $h_1$, $\hat h_2$, which are calibrated and satisfy $g_1(h_1) = g_2(\hat h_2)$.
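A compact rendering of Algorithm 1 in NumPy, reusing the `cost` helper sketched above (this is our sketch, not the authors' released code; the coefficient arguments and seed handling are our assumptions):

```python
import numpy as np

def equalize_cost(scores1, labels1, scores2, labels2,
                  a1=1.0, b1=1.0, a2=1.0, b2=1.0, seed=0):
    scores2 = np.asarray(scores2, float)
    rng = np.random.default_rng(seed)
    mu2 = np.asarray(labels2).mean()          # base rate from the holdout set
    g1 = cost(scores1, labels1, a1, b1)       # cost of the higher-cost group
    g2 = cost(scores2, labels2, a2, b2)
    g2_triv = cost(np.full_like(scores2, mu2), labels2, a2, b2)
    assert g2 <= g1 <= g2_triv, "infeasible: g1(h1) exceeds G2's trivial cost"
    alpha = (g1 - g2) / (g2_triv - g2)        # interpolation parameter of Eq. (3)
    # Withhold information for a random alpha-fraction of G2's predictions.
    withhold = rng.random(scores2.shape[0]) < alpha
    return np.where(withhold, mu2, scores2)   # calibrated; g2 equals g1 in expectation
```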
Implications. In a certain sense, Algorithm 1 is an "optimal" method because it arrives at the unique false-negative/false-positive solution for $\hat h_2$, where $\hat h_2$ is calibrated and has cost equal to $h_1$. Therefore (by our assumptions) we can find no better classifiers that satisfy Definition 4. This simple result has strong consequences, as the tradeoffs required to satisfy both calibration and the equal-cost constraint are often unsatisfactory, both intuitively and experimentally (as we will show in Section 5).
We find two primary objections to this solution. First, it equalizes costs simply by making a classifier
strictly worse for one of the groups. Second, it achieves this cost increase by withholding information
on a randomly chosen population subset, making the outcome inequitable within the group (as
measured by a standard measure of inequality like the Gini coefficient). Due to the optimality of
the algorithm, the former of these issues is unavoidable in any solution that satisfies Definition 4.
The latter, however, is slightly more subtle, and brings up the question of individual fairness (what
guarantees we would like an algorithm to make with respect to each individual) and how it interacts
with group fairness (population-level guarantees). While this certainly is an important issue for future
work, in this particular setting, even if one could find another algorithm that distributes the burden of
additional cost more equitably, any algorithm will make at least as many false-positive/false-negative
errors as Algorithm 1, and these misclassifications will always be tragic to the individuals whom
they affect. The performance loss across the entire group is often significant enough to make this
combination of constraints somewhat worrying to use in practice, regardless of the algorithm.
Impossibility of Satisfying Multiple Equal-Cost Constraints. It is natural to argue there might be
multiple cost functions that we would like to equalize across groups. However, satisfying more than
one distinct equal-cost constraint (i.e. different curves in the F.P./F.N. plane) is infeasible.
Theorem 1 (Generalized impossibility result). Let $h_1$ and $h_2$ be calibrated classifiers for $G_1$ and $G_2$ with equal cost with respect to $g_t$. If $\mu_1 \neq \mu_2$, and if $h_1$ and $h_2$ also have equal cost with respect to a different cost function $g'_t$, then $h_1$ and $h_2$ must be perfect classifiers.
(Proof in Section S5). Note that this is a generalization of the impossibility result of [26]. Furthermore,
we show in Theorem 9 (in Section S5) that this holds in an approximate sense: if calibration and
multiple distinct equal-cost constraints are approximately achieved by some classifier, then that
classifier must have approximately zero generalized false-positive and false-negative rates.
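The geometric reading makes the result plausible at a glance; the following is our informal sketch, not the formal proof of Section S5:

```latex
% Informal sketch (ours); the formal argument is in Section S5.
\begin{align*}
\text{calibration:}\quad & c_{fn}(h_t) = \tfrac{1-\mu_t}{\mu_t}\, c_{fp}(h_t),
    \qquad t \in \{1, 2\},\\
\text{constraint 1:}\quad & a_1\, c_{fp}(h_1) + b_1\, c_{fn}(h_1)
    = a_2\, c_{fp}(h_2) + b_2\, c_{fn}(h_2),\\
\text{constraint 2:}\quad & a'_1\, c_{fp}(h_1) + b'_1\, c_{fn}(h_1)
    = a'_2\, c_{fp}(h_2) + b'_2\, c_{fn}(h_2).
\end{align*}
% Substituting the calibration lines turns the two constraints into two
% homogeneous linear equations in (c_{fp}(h_1), c_{fp}(h_2)). When the
% constraints are genuinely distinct these equations are independent (this is
% where \mu_1 \neq \mu_2 enters), so the only common solution is
% c_{fp}(h_1) = c_{fp}(h_2) = 0, i.e. both classifiers are perfect.
```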
[Figure 3: Generalized F.P. and F.N. rates for two groups under Equalized Odds and the calibrated relaxation, plotted against the calibrated classifier sets $H_1, H_2$. Panels: (a) Income Prediction, comparing Equal Odds (Derived) with Calibration + Equal F.N.; (b) Health Prediction, comparing Equal Odds (Derived) with Calibration + Equal Cost; (c) Recidivism Prediction, comparing Equal Odds (Trained) and Equal Odds (Derived) with Calibration + Equal F.P. Diamonds represent post-processed classifiers. Points on the Equalized Odds (trained) graph represent classifiers achieved by modifying constraint hyperparameters.]
5 Experiments
In light of these findings, our goal is to understand the impact of imposing calibration and an equal-cost constraint on real-world datasets. We will empirically show that, in many cases, this will result in performance degradation, while simultaneously increasing other notions of disparity. We perform
experiments on three datasets: an income-prediction, a health-prediction, and a criminal recidivism
dataset. For each task, we choose a cost function within our framework that is appropriate for the
given scenario. We begin with two calibrated classifiers h1 and h2 for groups G1 and G2 . We
assume that these classifiers cannot be significantly improved without more training data or features.
We then derive $\hat h_2$ to equalize the costs while maintaining calibration. The original classifiers are
trained on a portion of the data, and then the new classifiers are derived using a separate holdout
set. To compare against the (uncalibrated) Equalized Odds framework, we derive F.P./F.N. matching
classifiers using the post-processing method of [19] (EO-Derived). On the criminal recidivism
dataset, we additionally learn classifiers that directly encode the Equalized Odds constraints, using the
methods of [37] (EO-Trained). (See Section S6 for detailed training and post-processing procedures.)
We visualize model error rates on the generalized F.P. and F.N. plane. Additionally, we plot the
calibrated classifier lines for G1 and G2 to visualize model calibration.
Income Prediction. The Adult dataset from the UCI Machine Learning Repository [28] contains 14 demographic and occupational features for various people, with the goal of predicting whether a person's income is above $50,000. In this scenario, we seek to achieve predictions with equalized cost across genders ($G_1$ represents women and $G_2$ represents men). We model a scenario where the primary concern is ensuring equal generalized F.N. rates across genders, which would, for example, help job recruiters prevent gender discrimination in the form of underestimated salaries. Thus, we choose our cost constraint to require equal generalized F.N. rates across groups. In Figure 3a, we see that the original classifiers $h_1$ and $h_2$ approximately lie on the lines of calibrated classifiers. In the left plot (EO-Derived), we see that it is possible to (approximately) match both error rates of the classifiers at the cost of $h^{eo}_1$ deviating from the set of calibrated classifiers. In the right plot, we see that it is feasible to equalize the generalized F.N. rates while maintaining calibration. $h_1$ and $\hat h_2$ lie on the same level-order curve of $g_t$ (represented by the dashed gray line), and simultaneously remain on the "line" of calibrated classifiers. It is worth noting that achieving either notion of non-discrimination requires some cost to at least one of the groups. However, maintaining calibration further increases the difference in F.P. rates between groups. In some sense, the calibrated framework trades off one notion of disparity for another while simultaneously increasing the overall error rates.
Health Prediction. The Heart dataset from the UCI Machine Learning Repository contains 14 processed features from 906 adults in 4 geographical locations. The goal of this dataset is to accurately predict whether or not an individual has a heart condition. In this scenario, we would like to reduce disparity between middle-aged adults ($G_1$) and seniors ($G_2$), and we consider false positives and false negatives to both be undesirable: a false prediction of a heart condition could result in unnecessary medical attention, while false negatives incur cost from delayed treatment. We therefore utilize the cost function $g_t(h_t) = \mathbb{E}_{(x,y) \sim G_t}[\, r_{fp}\, h_t(x)(1 - y) + r_{fn}(1 - h_t(x))\, y \,]$, which essentially assigns a weight to both F.N. and F.P. predictions. In our experiments, we set $r_{fp} = 1$ and $r_{fn} = 3$. In the right plot of Figure 3b, we can see that the level-order curves of the cost function form a curved line in the generalized F.P./F.N. plane. Because our original classifiers lie approximately on the same level-order curve, little change is required to equalize the costs of $h_1$ and $\hat h_2$ while maintaining calibration. This is the only experiment in which the calibrated framework incurs little additional cost, and therefore could be considered a viable option. However, it is worth noting that, in this example, the equal-cost constraint does not explicitly match either of the error types, and therefore the two groups will in expectation experience different types of errors. In the left plot of Figure 3b (EO-Derived), we see that it is alternatively feasible to explicitly match both the F.P. and F.N. rates while sacrificing calibration.
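This weighted cost fits the Eq. (2) family (with $a_t = r_{fp}(1 - \mu_t)$ and $b_t = r_{fn}\,\mu_t$); a direct per-sample sketch of ours, with the stated weights as defaults:

```python
import numpy as np

def health_cost(scores, labels, r_fp=1.0, r_fn=3.0):
    # E[r_fp * h(x)(1 - y) + r_fn * (1 - h(x)) * y] over the group.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    return np.mean(r_fp * scores * (1 - labels) + r_fn * (1 - scores) * labels)
```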
Criminal Recidivism Prediction. Finally, we examine the frameworks in the context of our motivating example: criminal recidivism. As mentioned in the introduction, African Americans ($G_1$) receive a disproportionate number of F.P. predictions as compared with Caucasians ($G_2$) when automated risk tools are used in practice. Therefore, we aim to equalize the generalized F.P. rate. In this experiment, we modify the predictions made by the COMPAS tool [12], a risk-assessment tool used in practice by the American legal system. Additionally, we also see if it is possible to improve the classifiers with training-time Equalized Odds constraints using the methods of Zafar et al. [37] (EO-Trained). In Figure 3c, we first observe that the original classifiers $h_1$ and $h_2$ have large generalized F.P. and F.N. rates. Both methods of achieving Equalized Odds, training constraints (left plot) and post-processing (middle plot), match the error rates while sacrificing calibration. However, we observe that, assuming $h_1$ and $h_2$ cannot be improved, it is infeasible to achieve the calibrated relaxation (Figure 3c, right). This is an example where matching the F.P. rate of $h_1$ would require a classifier worse than the trivial classifier $h^{\mu_2}$. This example therefore represents an instance in which calibration is completely incompatible with any error-rate constraints. If the primary concern of criminal justice practitioners is calibration [12, 16], then there will inherently be discrimination in the form of F.P. and F.N. rates. However, if the Equalized Odds framework is adopted, the miscalibrated risk scores inherently cause discrimination to one group, as argued in the introduction. Therefore, the most meaningful change in such a setting would be an improvement to $h_1$ (the classifier for African Americans) either through the collection of more data or the use of more salient features. A reduction in overall error for the group with higher cost will naturally lead to less error-rate disparity.
6 Discussion and Conclusion
We have observed cases in which calibration and relaxed Equalized Odds are compatible and cases
where they are not. When it is feasible, the penalty of equalizing cost is amplified if the base rates
between groups differ significantly. This is expected, as base rate differences are what give rise
to cost-disparity in the calibrated setting. Seeking equality with respect to a single error rate (e.g.
false-negatives, as in the income prediction experiment) will necessarily increase disparity with
respect to the other error. This may be tolerable (in the income prediction case, some employees will
end up over-paid) but could also be highly problematic (e.g. in criminal justice settings). Finally, we
have observed that the calibrated relaxation is infeasible when the best (discriminatory) classifiers are
not far from the trivial classifiers (leaving little room for interpolation). In such settings, we see that
calibration is completely incompatible with an equalized error constraint.
In summary, we conclude that maintaining cost parity and calibration is desirable yet often difficult
in practice. Although we provide an algorithm to effectively find the unique feasible solution to both
constraints, it is inherently based on randomly exchanging the predictions of the better classifier with
the trivial base rate. Even if fairness is reached in expectation, for an individual case, it may be hard
to accept that occasionally consequential decisions are made by randomly withholding predictive
information, irrespective of a particular person?s feature representation. In this paper we argue that,
as long as calibration is required, no lower-error solution can be achieved.
Acknowledgements
GP, FW, and KQW are supported in part by grants from the National Science Foundation (III-1149882, III-1525919, III-1550179, III-1618134, and III-1740822), the Office of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation. MR is supported by an NSF
Graduate Research Fellowship (DGE-1650441). JK is supported in part by a Simons Investigator
Award, an ARO MURI grant, a Google Research Grant, and a Facebook Faculty Research Grant.
References
[1] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[2] S. Barocas and A. D. Selbst. Big data's disparate impact. California Law Review, 104, 2016.
[3] R. Berk. A primer on fairness in criminal justice risk assessments. Criminology, 41(6):6–9, 2016.
[4] R. Berk, H. Heidari, S. Jabbari, M. Kearns, and A. Roth. Fairness in criminal justice risk assessments: The
state of the art. arXiv preprint arXiv:1703.09207, 2017.
[5] T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In NIPS, pages 4349–4357, 2016.
[6] T. Calders and S. Verwer. Three naive bayes approaches for discrimination-free classification. KDD, 2012.
[7] T. Calders, F. Kamiran, and M. Pechenizkiy. Building classifiers with independency constraints. In ICDM
Workshops, 2009.
[8] A. Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments.
arXiv preprint arXiv:1703.00056, 2017.
[9] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of
fairness. In KDD, pages 797?806, 2017.
[10] C. S. Crowson, E. J. Atkinson, and T. M. Therneau. Assessing calibration of prognostic risk scores. Statistical Methods in Medical Research, 25(4):1692–1706, 2016.
[11] A. P. Dawid. The well-calibrated Bayesian. Journal of the American Statistical Association, 77(379):605–610, 1982.
[12] W. Dieterich, C. Mendoza, and T. Brennan. COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Technical report, Northpointe, July 2016. http://www.northpointeinc.com/northpointe-analysis.
[13] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Innovations in
Theoretical Computer Science, 2012.
[14] H. Edwards and A. Storkey. Censoring representations with an adversary. In ICLR, 2016.
[15] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and
removing disparate impact. In KDD, pages 259?268, 2015.
[16] A. Flores, C. Lowenkamp, and K. Bechtel. False positives, false negatives, and false analyses: A rejoinder to "Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks." Technical report, Crime & Justice Institute, September 2016. http://www.crj.org/cji/entry/false-positives-false-negatives-and-false-analyses-a-rejoinder.
[17] G. Goh, A. Cotter, M. Gupta, and M. P. Friedlander. Satisfying real-world goals with dataset constraints. In NIPS, pages 2415–2423, 2016.
[18] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In ICML,
2017.
[19] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 2016.
[20] J. E. Johndrow and K. Lum. An algorithm for removing sensitive information: application to race-independent recidivism prediction. arXiv preprint arXiv:1703.04957, 2017.
[21] M. Joseph, M. Kearns, J. H. Morgenstern, and A. Roth. Fairness in learning: Classic and contextual bandits.
In NIPS, 2016.
[22] F. Kamiran and T. Calders. Classifying without discriminating. In International Conference on Computer
Control and Communication, 2009.
[23] T. Kamishima, S. Akaho, and J. Sakuma. Fairness-aware learning through regularization approach. In
ICDM Workshops, 2011.
[24] M. Kearns, A. Roth, and Z. S. Wu. Meritocratic fairness for cross-population selection. In International Conference on Machine Learning, pages 1828–1836, 2017.
[25] N. Kilbertus, M. Rojas-Carulla, G. Parascandolo, M. Hardt, D. Janzing, and B. Schölkopf. Avoiding discrimination through causal reasoning. In NIPS, 2017.
[26] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores.
In Innovations in Theoretical Computer Science. ACM, 2017.
[27] M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva. Counterfactual fairness. arXiv preprint arXiv:1703.06856, 2017.
[28] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[29] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel. The variational fair auto encoder. In ICLR,
2016.
[30] A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In ICML,
2005.
[31] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999.
[32] A. Romei and S. Ruggieri. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(05):582–638, 2014.
[33] The White House. Big data: A report on algorithmic systems, opportunity, and civil rights. Technical report, May 2016.
[34] B. Woodworth, S. Gunasekar, M. I. Ohannessian, and N. Srebro. Learning non-discriminatory predictors. In Proceedings of the 2017 Conference on Learning Theory, volume 65, pages 1920–1953, Amsterdam, Netherlands, 07–10 Jul 2017. PMLR.
[35] B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In ICML, pages 609–616, 2001.
[36] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Learning fair classifiers. arXiv preprint
arXiv:1507.05259, 2015.
[37] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. Fairness beyond disparate treatment &
disparate impact: Learning classification without disparate mistreatment. In World Wide Web Conference,
2017.
[38] R. S. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In ICML, 2013.
[39] I. Zliobaite. On the relation between accuracy and fairness in binary classification. In ICML Workshop on
Fairness, Accountability, and Transparency in Machine Learning, 2015.
Imagination-Augmented Agents
for Deep Reinforcement Learning
Sébastien Racanière* Théophane Weber* David P. Reichert* Lars Buesing
Arthur Guez Danilo Rezende Adrià Puigdomènech Badia Oriol Vinyals
Nicolas Heess Yujia Li Razvan Pascanu
Peter Battaglia
Demis Hassabis David Silver Daan Wierstra
DeepMind
Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep
reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods,
which prescribe how a model should be used to arrive at a policy, I2As learn to
interpret predictions from a learned environment model to construct implicit plans
in arbitrary ways, by using the predictions as additional context in deep policy
networks. I2As show improved data efficiency, performance, and robustness to
model misspecification compared to several baselines.
1 Introduction
A hallmark of an intelligent agent is its ability to rapidly adapt to new circumstances and "achieve
goals in a wide range of environments" [1]. Progress has been made in developing capable agents for
numerous domains using deep neural networks in conjunction with model-free reinforcement learning
(RL) [2–4], where raw observations directly map to values or actions. However, this approach usually
requires large amounts of training data and the resulting policies do not readily generalize to novel
tasks in the same environment, as it lacks the behavioral flexibility constitutive of general intelligence.
Model-based RL aims to address these shortcomings by endowing agents with a model of the
world, synthesized from past experience. By using an internal model to reason about the future,
here also referred to as imagining, the agent can seek positive outcomes while avoiding the adverse consequences of trial-and-error in the real environment, including making irreversible, poor decisions.
Even if the model needs to be learned first, it can enable better generalization across states, remain
valid across tasks in the same environment, and exploit additional unsupervised learning signals, thus
ultimately leading to greater data efficiency. Another appeal of model-based methods is their ability
to scale performance with more computation by increasing the amount of internal simulation.
The neural basis for imagination, model-based reasoning and decision making has generated a
lot of interest in neuroscience [5–7]; at the cognitive level, model learning and mental simulation
have been hypothesized and demonstrated in animal and human learning [8–11]. Its successful
deployment in artificial model-based agents however has hitherto been limited to settings where an
exact transition model is available [12] or in domains where models are easy to learn, e.g. symbolic environments or low-dimensional systems [13–16]. In complex domains for which a simulator is
not available to the agent, recent successes are dominated by model-free methods [2, 17]. In such
domains, the performance of model-based agents employing standard planning methods usually
suffers from model errors resulting from function approximation [18, 19]. These errors compound
during planning, causing over-optimism and poor agent performance. There are currently no planning
or model-based methods that are robust against model imperfections, which are inevitable in complex domains, thereby preventing them from matching the success of their model-free counterparts.
* Equal contribution, corresponding authors: {sracaniere, theophane, reichert}@google.com.
We seek to address this shortcoming by proposing Imagination-Augmented Agents, which use
approximate environment models by "learning to interpret" their imperfect predictions. Our algorithm
can be trained directly on low-level observations with little domain knowledge, similarly to recent
model-free successes. Without making any assumptions about the structure of the environment
model and its possible imperfections, our approach learns in an end-to-end way to extract useful
knowledge gathered from model simulations, in particular not relying exclusively on simulated
returns. This allows the agent to benefit from model-based imagination without the pitfalls of
conventional model-based planning. We demonstrate that our approach performs better than modelfree baselines in various domains including Sokoban. It achieves better performance with less data,
even with imperfect models, a significant step towards delivering the promises of model-based RL.
2 The I2A architecture
Figure 1: I2A architecture. Hat notation ($\hat\cdot$) indicates imagined quantities. (a) The imagination core (IC)
predicts the next time step conditioned on an action sampled from the rollout policy $\hat\pi$. (b) The IC
imagines trajectories of features $\hat f = (\hat o, \hat r)$, encoded by the rollout encoder. (c) In the full I2A,
aggregated rollout encodings and input from a model-free path determine the output policy $\pi$.
In order to augment model-free agents with imagination, we rely on environment models: models
that, given information from the present, can be queried to make predictions about the future. We
use these environment models to simulate imagined trajectories, which are interpreted by a neural
network and provided as additional context to a policy network.
In general, an environment model is any recurrent architecture which can be trained in an unsupervised
fashion from agent trajectories: given a past state and current action, the environment model predicts
the next state and any number of signals from the environment. In this work, we consider
in particular environment models that build on recent successes of action-conditional next-step
predictors [20-22], which receive as input the current observation (or history of observations) and
current action, and predict the next observation and, potentially, the next reward. We roll out the
environment model over multiple time steps into the future by initializing the imagined trajectory
with the present-time real observation, and subsequently feeding simulated observations into the
model.
The actions chosen in each rollout result from a rollout policy $\hat\pi$ (explained in Section 3.1). The
environment model together with $\hat\pi$ constitute the imagination core module, which predicts next time
steps (Fig. 1a). The imagination core is used to produce $n$ trajectories $\hat T_1, \ldots, \hat T_n$. Each imagined
trajectory $\hat T$ is a sequence of features $(\hat f_{t+1}, \ldots, \hat f_{t+\tau})$, where $t$ is the current time, $\tau$ the length
of the rollout, and $\hat f_{t+i}$ the output of the environment model (i.e. the predicted observation and/or
reward).
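To make this concrete, the following Python sketch illustrates how the imagination core could generate the $n$ imagined trajectories. All names and the exact interfaces of `env_model` and `rollout_policy` are illustrative assumptions of this sketch, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the imagination core: for each
# trajectory, roll the environment model forward for tau steps, starting from
# the real observation and feeding predictions back into the model.

def imagination_core(env_model, rollout_policy, obs, n_trajectories, tau):
    trajectories = []
    for _ in range(n_trajectories):
        features = []          # will collect f_hat_{t+1}, ..., f_hat_{t+tau}
        current_obs = obs      # initialize with the present-time real observation
        for _ in range(tau):
            action = rollout_policy(current_obs)
            obs_hat, reward_hat = env_model(current_obs, action)
            features.append((obs_hat, reward_hat))
            current_obs = obs_hat      # simulated observation is fed back in
        trajectories.append(features)  # one imagined trajectory T_hat_i
    return trajectories
```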
Despite recent progress in training better environment models, a key issue addressed by I2As is that a
learned model cannot be assumed to be perfect; it might sometimes make erroneous or nonsensical
predictions. We therefore do not want to rely solely on predicted rewards (or values predicted
from predicted states), as is often done in classical planning. Additionally, trajectories may contain
information beyond the reward sequence (a trajectory could contain an informative subsequence, for
instance one solving a subproblem, which did not result in higher reward). For these reasons, we use
a rollout encoder $\mathcal E$ that processes the imagined rollout as a whole and learns to interpret it, i.e. by
extracting any information useful for the agent's decision, or even ignoring it when necessary (Fig. 1b).
Each trajectory is encoded separately as a rollout embedding $e_i = \mathcal E(\hat T_i)$. Finally, an aggregator $\mathcal A$
converts the different rollout embeddings into a single imagination code $c_{ia} = \mathcal A(e_1, \ldots, e_n)$.
Figure 2: Environment model. The input action is broadcast and concatenated to the observation. A convolutional network transforms this into a pixel-wise probability distribution for the output image, and a distribution for the reward.
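A minimal sketch of an environment model in the spirit of Fig. 2 is shown below, using PyTorch. The layer sizes, the scalar reward head, and the pooling choices are assumptions made for illustration; the paper does not specify them here. The observation logits would be trained with the negative log-likelihood loss described in Section 3.2.

```python
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    """Illustrative action-conditional next-step predictor (cf. Fig. 2).
    All sizes are assumptions of this sketch, not the paper's configuration."""

    def __init__(self, obs_channels=3, n_actions=5, hidden=32):
        super().__init__()
        self.n_actions = n_actions
        self.conv = nn.Sequential(
            nn.Conv2d(obs_channels + n_actions, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.obs_head = nn.Conv2d(hidden, obs_channels, 1)  # pixel-wise logits
        self.reward_head = nn.Linear(hidden, 1)

    def forward(self, obs, action):
        b, _, h, w = obs.shape
        # broadcast ("tile") the one-hot action over the image and concatenate
        one_hot = torch.zeros(b, self.n_actions, h, w, device=obs.device)
        one_hot[torch.arange(b), action] = 1.0
        x = self.conv(torch.cat([obs, one_hot], dim=1))
        next_obs_logits = self.obs_head(x)             # distribution over the next frame
        reward = self.reward_head(x.mean(dim=(2, 3)))  # predicted reward
        return next_obs_logits, reward
```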
The final component of the I2A is the policy module, which is a network that takes the information
$c_{ia}$ from model-based predictions, as well as the output $c_{mf}$ of a model-free path (a network which
only takes the real observation as input; see Fig. 1c, right), and outputs the imagination-augmented
policy vector $\pi$ and estimated value $V$. The I2A therefore learns to combine information from its
model-free and imagination-augmented paths; note that without the model-based path, I2As reduce to
a standard model-free network [3]. I2As can thus be thought of as augmenting model-free agents by
providing additional information from model-based planning, and as having strictly more expressive
power than the underlying model-free agent.
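The sketch below illustrates the aggregation step and the policy module. Concatenation for the aggregator follows the description later in Section 3.2, while hidden sizes are assumptions of this sketch.

```python
import torch
import torch.nn as nn

def aggregate(rollout_embeddings):
    # the aggregator A: simple concatenation of the embeddings e_1..e_n
    # (as later specified in Section 3.2)
    return torch.cat(rollout_embeddings, dim=-1)

class PolicyModule(nn.Module):
    """Illustrative I2A head combining the imagination code c_ia with the
    model-free code c_mf; the hidden size is an assumption of this sketch."""

    def __init__(self, ia_dim, mf_dim, n_actions, hidden=256):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ia_dim + mf_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)  # logits for pi
        self.value_head = nn.Linear(hidden, 1)           # estimated value V

    def forward(self, c_ia, c_mf):
        h = self.fc(torch.cat([c_ia, c_mf], dim=-1))
        return self.policy_head(h), self.value_head(h)
```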
3 Architectural choices and experimental setup
3.1 Rollout strategy
For our experiments, we perform one rollout for each possible action in the environment. The first
action in the $i$th rollout is the $i$th action of the action set $\mathcal A$, and subsequent actions for all rollouts are
produced by a shared rollout policy $\hat\pi$. We investigated several types of rollout policies (random, pretrained) and found that a particularly efficient strategy was to distill the imagination-augmented policy
into a model-free policy. This distillation strategy consists of creating a small model-free network
$\hat\pi(o_t)$, and adding to the total loss a cross-entropy auxiliary loss between the imagination-augmented
policy $\pi(o_t)$ as computed on the current observation, and the policy $\hat\pi(o_t)$ as computed on the same
observation. By imitating the imagination-augmented policy, the internal rollouts will be similar to
the trajectories of the agent in the real environment; this also ensures that the rollouts correspond
to trajectories with high reward. At the same time, the imperfect approximation results in a rollout
policy with higher entropy, potentially striking a balance between exploration and exploitation.
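In loss form, this auxiliary term could look like the following sketch. Treating $\pi$ as a fixed target (the detach) and the scalar weighting are assumptions of the sketch, not details given in the paper.

```python
import torch.nn.functional as F

def distillation_loss(pi_logits, pi_hat_logits, weight=1.0):
    """Cross entropy between the imagination-augmented policy pi(o_t) and the
    small rollout policy pi_hat(o_t), both given as logits."""
    target = F.softmax(pi_logits.detach(), dim=-1)  # assumed fixed target
    log_q = F.log_softmax(pi_hat_logits, dim=-1)
    return -weight * (target * log_q).sum(dim=-1).mean()
```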
3.2 I2A components and environment models
In our experiments, the encoder is an LSTM with a convolutional encoder which sequentially processes
a trajectory $\hat T$. The features $\hat f_t$ are fed to the LSTM in reverse order, from $\hat f_{t+\tau}$ to $\hat f_{t+1}$, to mimic
Bellman-type backup operations.² The aggregator simply concatenates the summaries. For the
model-free path of the I2A, we chose a standard network of convolutional layers plus one fully
connected one [e.g. 3]. We also use this architecture on its own as a baseline agent.
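To make the encoder concrete, here is a minimal sketch. The convolutional feature extractor and all sizes are assumptions, but the reversed-order LSTM follows the description above.

```python
import torch
import torch.nn as nn

class RolloutEncoder(nn.Module):
    """Illustrative rollout encoder: per-frame ConvNet features plus the
    imagined reward, consumed by an LSTM in reverse time order."""

    def __init__(self, obs_channels=3, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(obs_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(16 + 1, hidden, batch_first=True)  # +1 for the reward

    def forward(self, imagined_obs, imagined_rewards):
        # imagined_obs: (batch, tau, C, H, W); imagined_rewards: (batch, tau, 1)
        b, tau = imagined_obs.shape[:2]
        feats = self.conv(imagined_obs.flatten(0, 1)).view(b, tau, -1)
        seq = torch.cat([feats, imagined_rewards], dim=-1)
        seq = torch.flip(seq, dims=[1])  # reverse: f_hat_{t+tau} ... f_hat_{t+1}
        _, (h_n, _) = self.lstm(seq)
        return h_n[-1]                   # rollout embedding e_i
```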
Our environment model (Fig. 2) defines a distribution which is optimized by using a negative log-likelihood loss $l_{model}$. We can either pretrain the environment model before embedding it (with frozen
weights) within the I2A architecture, or jointly train it with the agent by adding $l_{model}$ to the total
loss as an auxiliary loss. In practice we found that pre-training the environment model led to faster
runtime of the I2A architecture, so we adopted this strategy.
² The choice of forward, backward or bi-directional processing seems to have relatively little impact on the
performance of the I2A, however, and should not preclude investigating different strategies.
For all environments, training data for our environment model was generated from trajectories of
a partially trained standard model-free agent (defined below). We use partially pre-trained agents
because random agents see few rewards in some of our domains. However, this means we have to
account for the budget (in terms of real environment steps) required to pretrain the data-generating
agent, as well as to then generate the data. In the experiments, we address this concern in two
ways: by explicitly accounting for the number of steps used in pretraining (for Sokoban), or by
demonstrating how the same pretrained model can be reused for many tasks (for MiniPacman).
3.3 Agent training and baseline agents
Using a fixed pretrained environment model, we trained the remaining I2A parameters with asynchronous advantage actor-critic (A3C) [3]. We added an entropy regularizer on the policy $\pi$ to
encourage exploration, and the auxiliary loss to distill $\pi$ into the rollout policy $\hat\pi$ as explained above.
We distributed asynchronous training over 32 to 64 workers; we used the RMSprop optimizer [23]. We
report results after an initial round of hyperparameter exploration (details in Appendix A). Learning
curves are averaged over the top three agents unless noted otherwise.
A separate hyperparameter search was carried out for each agent architecture in order to ensure
optimal performance. In addition to the I2A, we ran the following baseline agents (see Appendix B
for architecture details for all agents).
Standard model-free agent. For our main baseline agent, we chose a model-free standard architecture similar to [3], consisting of convolutional layers (2 for MiniPacman, and 3 for Sokoban) followed
by a fully connected layer. The final layer, again fully connected, outputs the policy logits and the
value function. For Sokoban, we also tested a "large" standard architecture, where we double the
number of all feature maps (for convolutional layers) and hidden units (for fully connected layers).
The resulting architecture has a slightly larger number of parameters than I2A.
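As a sketch, the Sokoban variant with three convolutional layers might look as follows; filter sizes and widths are assumptions of this sketch, not the paper's configuration.

```python
import torch.nn as nn

class StandardAgent(nn.Module):
    """Illustrative model-free baseline: convolutional layers, one fully
    connected layer, then policy logits and value."""

    def __init__(self, obs_channels=3, n_actions=5, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(obs_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden), nn.ReLU(),
        )
        self.policy_logits = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)

    def forward(self, obs):
        h = self.body(obs)
        return self.policy_logits(h), self.value(h)
```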
Copy-model agent. Aside from having an internal environment model, the I2A architecture is
very different from that of the standard agent. To verify that the information contained in the
environment model rollouts contributed to an increase in performance, we implemented a baseline
where we replaced the environment model in the I2A with a "copy" model that simply returns the input
observation. Lacking a model, this agent does not use imagination, but it uses the same architecture,
has the same number of learnable parameters (the environment model is kept constant in the I2A),
and benefits from the same amount of computation (which in both cases increases linearly with the
length of the rollouts). This model effectively corresponds to an architecture where policy logits and
value are the final output of an LSTM network with skip connections.
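For illustration, the copy model can be expressed as a trivial environment model; the zero-reward placeholder is an assumption of this sketch.

```python
class CopyModel:
    """Control 'environment model' with no dynamics: it simply returns the
    input observation unchanged (and a zero reward placeholder)."""

    def __call__(self, obs, action):
        return obs, 0.0
```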
4 Sokoban experiments
We now demonstrate the performance of I2A over baselines in a puzzle environment, Sokoban. We
address the issue of dealing with imperfect models, highlighting the strengths of our approach over
planning baselines. We also analyze the importance of the various components of the I2A.
Sokoban is a classic planning problem, where the agent has to push a number of boxes onto given target
locations. Because boxes can only be pushed (as opposed to pulled), many moves are irreversible, and
mistakes can render the puzzle unsolvable. A human player is thus forced to plan moves ahead of time.
We expect that artificial agents will similarly benefit from internal simulation. Our implementation
of Sokoban procedurally generates a new level each episode (see Appendix D.4 for details, Fig. 3
for examples). This means an agent cannot memorize specific puzzles.³ Together with the planning
aspect, this makes for a very challenging environment for our model-free baseline agents, which
solve less than 60% of the levels after a billion steps of training (details below). We provide videos of
agents playing our version of Sokoban online [24].
While the underlying game logic operates in a 10 × 10 grid world, our agents were trained directly
on RGB sprite graphics as shown in Fig. 4 (image size 80 × 80 pixels). There are no aspects of I2As
that make them specific to grid-world games.
³ Out of 40 million levels generated, less than 0.7% were repeated. Training an agent on 1 billion frames
requires less than 20 million episodes.
Figure 3: Random examples of procedurally generated Sokoban levels. The player (green sprite)
needs to push all 4 boxes onto the red target squares to solve a level, while avoiding irreversible
mistakes. Our agents receive sprite graphics (shown above) as observations.
4.1 I2A performance vs. baselines on Sokoban
Figure 4 (left) shows the learning curves of the I2A architecture and various baselines explained
throughout this section. First, we compare I2A (with rollouts of length 5) against the standard
model-free agent. I2A clearly outperforms the latter, reaching a performance of 85% of levels solved
vs. a maximum of under 60% for the baseline. The baseline with increased capacity reaches 70%,
still significantly below I2A. Similarly, for Sokoban, I2A far outperforms the copy-model.
Figure 4: Sokoban learning curves (fraction of levels solved vs. environment steps, up to 1e9). Left: training curves of I2A and baselines (I2A, standard (large), standard, no-reward I2A, copy-model I2A). Note that I2A uses additional environment observations to pretrain the environment model; see main text for discussion. Right: I2A training curves for various values of imagination depth (unroll depths 1, 3, 5, 15).
Since using imagined rollouts is helpful for this task, we investigate how the length of individual
rollouts affects performance. The latter was one of the hyperparameters we searched over. A
breakdown by number of unrolling/imagination steps in Fig. 4 (right) shows that using longer rollouts,
while not increasing the number of parameters, increases performance: 3 unrolling steps improves
speed of learning and top performance significantly over 1 unrolling step, 5 outperforms 3, and, as a
test for significantly longer rollouts, 15 outperforms 5, reaching above 90% of levels solved. In general,
however, we found diminishing returns when using I2A with longer rollouts. It is noteworthy that
5 steps is relatively small compared to the number of steps taken to solve a level, for which our
best agents need about 50 steps on average. This implies that even such short rollouts can be highly
informative. For example, they allow the agent to learn about moves it cannot recover from (such
as pushing boxes against walls, in certain contexts). Because I2As with rollouts of length 15 are
significantly slower, in the rest of this section we choose rollouts of length 5 as our canonical I2A
architecture.
In terms of data efficiency, it should be noted that the environment model in the I2A was pretrained
(see Section 3.2). We conservatively measured the total number of frames needed for pretraining to
be lower than 1e8. Thus, even taking pretraining into account, I2A outperforms the baselines after
seeing about 3e8 frames in total (compare again Fig. 4, left). Of course, data efficiency is even better
if the environment model can be reused to solve multiple tasks in the same environment (Section 5).
4.2 Learning with imperfect models
One of the key strengths of I2As is being able to handle learned, and thus potentially imperfect,
environment models. However, for the Sokoban task, our learned environment models actually
perform quite well when rolling out imagined trajectories. To demonstrate that I2As can deal with
less reliable predictions, we ran another experiment where the I2A used an environment model that
had shown much worse performance (due to a smaller number of parameters), with strong artifacts
accumulating over iterated rollout predictions (Fig. 5, left). As Fig. 5 (right) shows, even with such a
clearly flawed environment model, I2A performs similarly well. This implies that I2As can learn to
ignore the latter parts of the rollout as errors accumulate, but still use initial predictions when errors
are less severe. Finally, note that in our experiments, surprisingly, the I2A agent with the poor model
ended up outperforming the I2A agent with the good model. We posit this was due to random initialization,
though we cannot exclude the noisy model providing some form of regularization; more work will
be required to investigate this effect.
Figure 5: Experiments with a noisy environment model. Left: each row shows an example 5-step rollout after conditioning on an environment observation; errors accumulate and lead to various artefacts, including missing or duplicate sprites. Right: comparison of Monte-Carlo (MC) search and I2A when using either the accurate or the noisy model for rollouts (fraction of levels solved vs. environment steps, up to 1e9).
Learning a rollout encoder is what enables I2As to deal with imperfect model predictions. We can
further demonstrate this point by comparing them to a setup without a rollout encoder: as in the
classic Monte-Carlo search algorithm of Tesauro and Galperin [25], we now explicitly estimate the
value of each action from rollouts, rather than learning an arbitrary encoding of the rollouts, as in
I2A. We then select actions according to those values. Specifically, we learn a value function $V$ from
states, and, using a rollout policy $\hat\pi$, sample a rollout trajectory for each initial action, and compute
the corresponding estimated Monte Carlo return $\sum_{t \le T} \gamma^t r^a_t + V(x^a_T)$, where $((x^a_t, r^a_t))_{t=0..T}$ comes
from a trajectory initialized with action $a$. Action $a$ is chosen with probability proportional to
$\exp\big(\big(\sum_{t=0..T} \gamma^t r^a_t + V(x^a_T)\big)/\tau\big)$, where $\tau$ is a learned temperature. This can be thought of as a
form of I2A with a fixed summarizer (which computes returns), no model-free path, and a very simple
policy head. In this architecture, only $V$, $\hat\pi$ and $\tau$ are learned.⁴
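In code, the return estimate and the softmax action selection could be sketched as follows; the function names and the NumPy formulation are ours, not the authors'.

```python
import numpy as np

def mc_return(rewards, terminal_value, gamma):
    # sum_{t<=T} gamma^t * r_t^a + V(x_T^a), estimated from one rollout
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards) + terminal_value)

def mc_action_probabilities(returns_per_action, temperature):
    # p(a) proportional to exp(R_a / temperature)
    r = np.asarray(returns_per_action, dtype=float) / temperature
    r -= r.max()  # subtract the max for numerical stability
    p = np.exp(r)
    return p / p.sum()
```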
We ran this rollout encoder-free agent on Sokoban with both the accurate and the noisy environment
model. We chose the length of the rollout to be optimal for each environment model (from the same
range as for I2A, i.e. from 1 to 5). As can be seen in Fig. 5 (right),⁵ when using the high-accuracy
environment model, the performance of the encoder-free agent is similar to that of the baseline
standard agent. However, unlike I2A, its performance degrades catastrophically when using the poor
model, showcasing its susceptibility to model misspecification.
4.3 Further insights into the workings of the I2A architecture
So far, we have studied the role of the rollout encoder. To show the importance of various other
components of the I2A, we performed additional control experiments. Results are plotted in Fig. 4
(left) for comparison. First, I2A with the copy model (Section 3.3) performs far worse, demonstrating
that the environment model is indeed crucial. Second, we trained an I2A where the environment
model was predicting no rewards, only observations. This also performed worse. However, after
much longer training (3e9 steps), these agents did recover performance close to that of the original
I2A (see Appendix D.2), which was never the case for the baseline agent even with that many
steps. Hence, reward prediction is helpful but not absolutely necessary in this task, and imagined
observations alone are informative enough to obtain high performance on Sokoban. Note this is in
contrast to many classical planning and model-based reinforcement learning methods, which often
rely on reward prediction.
⁴ The rollout policy is still learned by distillation from the output policy.
⁵ Note: the MC curves in Fig. 5 only used a single agent rather than averages.
4.4 Imagination efficiency and comparison with perfect-model planning methods
Table 1: Imagination efficiency of various architectures (approximate number of environment-model simulation steps needed to solve a level, at the given fraction of levels solved).

    I2A @ 87%             ~ 1,400
    I2A MC search @ 95%   ~ 4,000
    MCTS @ 87%            ~ 25,000
    MCTS @ 95%            ~ 100,000
    Random search         ~ millions

Table 2: Generalization of I2A to environments with different numbers of boxes (% of levels solved).

    Boxes           1     2    3    4    5    6    7
    I2A (%)        99.5  97   92   87   77   66   53
    Standard (%)   97    87   72   60   47   32   23
In previous sections, we illustrated that I2As can be used to efficiently solve planning problems and
can be robust in the face of model misspecification. Here, we ask a different question: if we do
assume a nearly perfect model, how does I2A compare to competitive planning methods? Beyond
raw performance, we focus particularly on the efficiency of planning, i.e. the number of imagination
steps required to solve a fixed ratio of levels. We compare our regular I2A agent to a variant of
Monte Carlo Tree Search (MCTS), which is a modern guided tree search algorithm [12, 26]. For
our MCTS implementation, we aimed to have a strong baseline by using recent ideas: we include
transposition tables [27], and evaluate the returns of leaf nodes by using a value network (in this case,
a deep residual value network trained with the same total amount of data as I2A; see Appendix D.3
for further details).
Running MCTS on Sokoban, we find that it can achieve high performance, but at a cost of a much
higher number of necessary environment model simulation steps: MCTS reaches the I2A performance
of 87% of levels solved when using 25k model simulation steps on average to solve a level, compared
to 1.4k environment model calls for I2A. Using even more simulation steps, MCTS performance
increases further, e.g. reaching 95% with 100k steps.
If we assume access to a high-accuracy environment model (including the reward prediction), we
can also push I2A performance further by performing basic Monte-Carlo search with a trained I2A
as the rollout policy: we let the agent play whole episodes in simulation (where I2A itself uses the
environment model for short-term rollouts, hence corresponding to using a model-within-a-model),
and execute a successful action sequence if one is found, up to a maximum number of retries; this is
reminiscent of nested rollouts [28]. With a fixed maximum of 10 retries, we obtain a score of 95%
(up from 87% for the I2A itself). The total average number of model simulation steps needed to
solve a level, including running the model in the outer loop, is now 4k, again much lower than the
corresponding MCTS run with 100k steps. Note again that this approach requires a nearly perfect model;
we don't expect I2A with MC search to perform well with approximate models. See Table 1 for a
summary of the imagination efficiency of the different methods.
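The outer loop can be sketched as below; `simulate_episode` is an assumed helper that plays one full episode inside the environment model with the trained I2A as the policy and reports whether the level was solved.

```python
def i2a_mc_search(simulate_episode, max_retries=10):
    """Monte-Carlo search with a trained I2A as rollout policy: simulate whole
    episodes in the (assumed near-perfect) environment model and execute the
    first successful action sequence found, up to max_retries attempts."""
    for _ in range(max_retries):
        solved, actions = simulate_episode()
        if solved:
            return actions  # to be executed in the real environment
    return None             # no successful plan found within the retry budget
```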
4.5 Generalization experiments
Lastly, we probe the generalization capabilities of I2As beyond handling random level layouts in
Sokoban. Our agents were trained on levels with 4 boxes. Table 2 shows the performance of I2A
when such an agent was tested on levels with different numbers of boxes, and that of the standard
model-free agent for comparison. We found that I2A generalizes well; at 7 boxes, the I2A agent is
still able to solve more than half of the levels, nearly as many as the standard agent on 4 boxes.
5 Learning one model for many tasks in MiniPacman
In our final set of experiments, we demonstrate how a single model, which provides the I2A with a
general understanding of the dynamics governing an environment, can be used to solve a collection
of different tasks. We designed a simple, light-weight domain called MiniPacman, which allows us to
easily define multiple tasks in an environment with shared state transitions and which enables us to
do rapid experimentation.
In MiniPacman (Fig. 6, left), the player explores a maze that contains food while being chased by
ghosts. The maze also contains power pills; when eaten, for a fixed number of steps, the player moves
faster, and the ghosts run away and can be eaten. These dynamics are common to all tasks. Each task
is defined by a vector $w_{rew} \in \mathbb R^5$, associating a reward to each of the following five events: moving,
eating food, eating a power pill, eating a ghost, and being eaten by a ghost. We consider five different
reward vectors inducing five different tasks. Empirically, we found that the reward schemes were
sufficiently different to lead to very different high-performing policies⁶ (for more details on the game
and tasks, see Appendix C).
To illustrate the benefits of model-based methods in this multi-task setting, we train a single environment model to predict both observations (frames) and events (as defined above, e.g. "eating a ghost").
Note that the environment model is effectively shared across all tasks, so that the marginal cost of
learning the model is nil. During training and testing, the I2As have access to the frame and reward
predictions generated by the model; the latter was computed from the model's event predictions and the
task reward vector $w_{rew}$. As such, the reward vector $w_{rew}$ can be interpreted as an "instruction" about
which task to solve in the same environment [cf. the Frostbite challenge of 11]. For a fair comparison,
we also provide all baseline agents with the event variable as input.⁷
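Concretely, the per-step reward is the dot product of the (predicted) binary event vector with $w_{rew}$. The numbers below are made-up examples, not the paper's task vectors.

```python
import numpy as np

# Events: (moving, eating food, eating a power pill, eating a ghost,
#          being eaten by a ghost). Hypothetical task vector and event vector:
w_rew = np.array([0.0, 1.0, 2.0, 5.0, -10.0])
events = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # moved and ate food this step
reward = float(np.dot(w_rew, events))          # -> 1.0 for this step
```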
We trained baseline agents and I2As separately on each task. Results in Fig. 6 (right) indicate the
benefit of the I2A architecture, outperforming the standard agent in all tasks, and the copy-model
baseline in all but one task. Moreover, we found that the performance gap between I2As and baselines
is particularly high for tasks 4 & 5, where rewards are particularly sparse, and where the anticipation
of ghost dynamics is especially important. We posit that the I2A agent can leverage its environment
and reward model to explore the environment much more effectively.
Figure 6: MiniPacman environment. Left: two frames from a MiniPacman game. Frames are 15 × 19 RGB images. The player is green, dangerous ghosts red, food dark blue, empty corridors black, power pills cyan. After eating a power pill (right frame), the player can eat the 4 weak ghosts (yellow). Right: performance after 300 million environment steps for different agents and all tasks. Note that I2A clearly outperforms the other two agents on all tasks with sparse rewards.

    Task Name            Regular  Avoid  Hunt  Ambush  Rush
    Standard model-free    192     -16   -35    -40     1.3
    Copy-model             919       3    33    -30   178
    I2A                    859      23   334    294   214
6 Related work
Some recent work has focused on applying deep learning to model-based RL. A common approach is
to learn a neural model of the environment, including from raw observations, and use it in classical
planning algorithms such as trajectory optimization [29-31]. These studies, however, do not address a
possible mismatch between the learned model and the true environment.
Model imperfection has attracted particular attention in robotics, when transferring policies from
simulation to real environments [32-34]. There, the environment model is given, not learned, and
used for pretraining, not planning at test time. Liu et al. [35] also learn to extract information from
trajectories, but in the context of imitation learning. Bansal et al. [36] take a Bayesian approach to
model imperfection, by selecting environment models on the basis of their actual control performance.
The problem of making use of imperfect models was also approached in simplified environments by
Talvitie [18, 19], using techniques similar to scheduled sampling [37]; however, these techniques
break down in stochastic environments. They mostly address the compounding-error issue but do not
address fundamental model imperfections.
A principled way to deal with imperfect models is to capture model uncertainty, e.g. by using Gaussian
Process models of the environment; see Deisenroth and Rasmussen [15]. The disadvantage of this
method is its high computational cost; it also assumes that the model uncertainty is well calibrated,
and it lacks a mechanism that can learn to compensate for possible miscalibration of uncertainty. Cutler
et al. [38] consider RL with a hierarchy of models of increasing (known) fidelity. A recent multi-task
GP extension of this study can further help to mitigate the impact of model misspecification, but
again suffers from a high computational burden in large domains; see Marco et al. [39].
⁶ For example, in the "avoid" game, any event is negatively rewarded, and the optimal strategy is for the agent
to clear a small space from food and use it to continuously escape the ghosts.
⁷ It is not necessary to provide the reward vector $w_{rew}$ to the baseline agents, as it is equivalent to a constant bias.
A number of approaches use models to create additional synthetic training data, ranging from Dyna
[40] to more recent work, e.g. Gu et al. [41] and Venkatraman et al. [42]; these models increase data
efficiency, but are not used by the agent at test time.
Tamar et al. [43], Silver et al. [44], and Oh et al. [45] all present neural networks whose architectures
mimic classical iterative planning algorithms, and which are trained by reinforcement learning or
to predict user-defined, high-level features; in these, there is no explicit environment model. In our
case, we use explicit environment models that are trained to predict low-level observations, which
allows us to exploit additional unsupervised learning signals for training. This procedure is expected
to be beneficial in environments with sparse rewards, where unsupervised modelling losses can
complement return maximization as a learning target, as recently explored in Jaderberg et al. [46] and
Mirowski et al. [47].
Internal models can also be used to improve the credit assignment problem in reinforcement learning:
Henaff et al. [48] learn models of discrete-action environments, and exploit the effective differentiability of the model with respect to the actions by applying continuous-control planning algorithms to
derive a plan; Schmidhuber [49] uses an environment model to turn environment cost minimization
into a network activity minimization.
Kansky et al. [50] learn symbolic networks models of the environment and use them for planning,
but are given the relevant abstractions from a hand-crafted vision system.
Close to our work is a study by Hamrick et al. [51]: they present a neural architecture that queries
learned expert models, but focus on meta-control for continuous contextual bandit problems. Pascanu
et al. [52] extend this work by focusing on explicit planning in sequential environments, and learn
how to construct a plan iteratively.
The general idea of learning to leverage an internal model in arbitrary ways was also discussed by
Schmidhuber [53].
7 Discussion
We presented I2A, an approach combining model-free and model-based ideas to implement
imagination-augmented RL: learning to interpret environment models to augment model-free decisions. I2A outperforms model-free baselines on MiniPacman and on the challenging, combinatorial
domain of Sokoban. We demonstrated that, unlike classical model-based RL and planning methods,
I2A is able to successfully use imperfect models (including models without reward predictions),
hence significantly broadening the applicability of model-based RL concepts and ideas.
Like all model-based RL methods, I2As trade off environment interactions for computation by pondering before acting. This is essential in irreversible domains, where actions can have catastrophic
outcomes, such as in Sokoban. In our experiments, the I2A was always less than an order of magnitude slower per interaction than the model-free baselines. The amount of computation can be varied
(it grows linearly with the number and depth of rollouts); we therefore expect I2As to greatly benefit
from advances in dynamic compute resource allocation (e.g. Graves [54]). Another avenue for
future research is abstract environment models: learning predictive models at the "right" level of
complexity that can be evaluated efficiently at test time will help to scale I2As to richer domains.
Remarkably, on Sokoban I2As compare favourably to a strong planning baseline (MCTS) with a
perfect environment model: at comparable performance, I2As require far fewer function calls to the
model than MCTS, because their model rollouts are guided towards relevant parts of the state space
by a learned rollout policy. This points to further potential improvement by training rollout policies
that "learn to query" imperfect models in a task-relevant way.
Acknowledgements
We thank Victor Valdes for designing and implementing the Sokoban environment, Joseph Modayil
for reviewing an early version of this paper, and Ali Eslami, Hado Van Hasselt, Neil Rabinowitz,
Tom Schaul, Yori Zwols for various help and feedback.
References
[1] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4):391-444, 2007.
[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[3] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.
[4] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889-1897, 2015.
[5] Demis Hassabis, Dharshan Kumaran, and Eleanor A Maguire. Using imagination to understand the neural basis of episodic memory. Journal of Neuroscience, 27(52):14365-14374, 2007.
[6] Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677-694, 2012.
[7] Demis Hassabis, Dharshan Kumaran, Seralynne D Vann, and Eleanor A Maguire. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5):1726-1731, 2007.
[8] Edward C Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189, 1948.
[9] Anthony Dickinson and Bernard Balleine. The Role of Learning in the Operation of Motivational Systems. John Wiley & Sons, Inc., 2002.
[10] Brad E Pfeiffer and David J Foster. Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497(7447):74-79, 2013.
[11] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
[12] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
[13] Jing Peng and Ronald J Williams. Efficient learning and planning within the Dyna framework. Adaptive Behavior, 1(4):437-454, 1993.
[14] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 1-8. ACM, 2005.
[15] Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465-472, 2011.
[16] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071-1079, 2014.
[17] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. ICLR, 2016.
[18] Erik Talvitie. Model regularization for stable sample rollouts. In UAI, pages 780-789, 2014.
[19] Erik Talvitie. Agnostic system identification for Monte Carlo planning. In AAAI, pages 2986-2992, 2015.
[20] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2863-2871, 2015.
[21] Silvia Chiappa, Sébastien Racanière, Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. In 5th International Conference on Learning Representations, 2017.
[22] Felix Leibfried, Nate Kushman, and Katja Hofmann. A deep learning approach for joint video frame and reward prediction in Atari games. CoRR, abs/1611.07078, 2016. URL http://arxiv.org/abs/1611.07078.
[23] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[24] https://drive.google.com/open?id=0B4tKsKnCCZtQY2tTOThucHVxUTQ, 2017.
[25] Gerald Tesauro and Gregory R Galperin. On-line policy improvement using Monte-Carlo search. In NIPS, volume 96, pages 1068-1074, 1996.
[26] Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pages 72-83. Springer, 2006.
[27] Benjamin E Childs, James H Brodeur, and Levente Kocsis. Transpositions and move groups in Monte Carlo tree search. In Computational Intelligence and Games, 2008. CIG'08. IEEE Symposium On, pages 389-395. IEEE, 2008.
[28] Christopher D Rosin. Nested rollout policy adaptation for Monte Carlo tree search. In IJCAI, pages 649-654, 2011.
[29] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2746-2754, 2015.
[30] Ian Lenz, Ross A Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
[31] Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
[32] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009.
[33] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
[34] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016.
[35] YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. arXiv preprint arXiv:1707.03374, 2017.
[36] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven dynamics learning via Bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
[37] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171-1179, 2015.
[38] Mark Cutler, Thomas J Walsh, and Jonathan P How. Real-world reinforcement learning via multifidelity simulators. IEEE Transactions on Robotics, 31(3):655-671, 2015.
[39] Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P Schoellig, Andreas Krause, Stefan Schaal, and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization. arXiv preprint arXiv:1703.01250, 2017.
[40] Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216-224, 1990.
[41] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829-2838, 2016.
[42] Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, and J Andrew Bagnell. Improved learning of dynamics models for control. In International Symposium on Experimental Robotics, pages 703-713. Springer, 2016.
[43] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2154-2162, 2016.
[44] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810, 2016.
[45] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. arXiv preprint arXiv:1707.03497, 2017.
[46] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.
[47] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
[48] Mikael Henaff, William F Whitney, and Yann LeCun. Model-based planning in discrete action spaces. arXiv preprint arXiv:1705.07177, 2017.
[49] Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In 1990 IJCNN International Joint Conference on Neural Networks, pages 253-258. IEEE, 1990.
[50] Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. Accepted at International Conference on Machine Learning, 2017.
[51] Jessica B. Hamrick, Andy J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W. Battaglia. Metacontrol for adaptive imagination-based optimization. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 2017.
[52] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, David Reichert, Theophane Weber, Sebastien Racanière, Lars Buesing, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. arXiv preprint, 2017.
[53] Jürgen Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015.
[54] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[55] Leemon C Baird III. Advantage updating. Technical Report WL-TR-93-1146, Wright Laboratory, 1993.
[56] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528-3536, 2015.
[57] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282-293. Springer, 2006.
[58] Sylvain Gelly and David Silver. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, pages 273-280. ACM, 2007.
[59] Joshua Taylor and Ian Parberry. Procedural generation of Sokoban levels. In Proceedings of the International North American Conference on Intelligent Games and Simulation, pages 5-12, 2011.
[60] Yoshio Murase, Hitoshi Matsubara, and Yuzuru Hiraga. Automatic making of Sokoban problems. PRICAI'96: Topics in Artificial Intelligence, pages 592-600, 1996.
6,802 | 7,153 | Extracting low-dimensional dynamics from
multiple large-scale neural population recordings
by learning to predict correlations
Marcel Nonnenmacher¹, Srinivas C. Turaga² and Jakob H. Macke¹*
¹ research center caesar, an associate of the Max Planck Society, Bonn, Germany
² HHMI Janelia Research Campus, Ashburn, VA
[email protected], [email protected], [email protected]
Abstract
A powerful approach for understanding neural population dynamics is to extract
low-dimensional trajectories from population recordings using dimensionality
reduction methods. Current approaches for dimensionality reduction on neural
data are limited to single population recordings, and can not identify dynamics
embedded across multiple measurements. We propose an approach for extracting
low-dimensional dynamics from multiple, sequential recordings. Our algorithm
scales to data comprising millions of observed dimensions, making it possible
to access dynamics distributed across large populations or multiple brain areas.
Building on subspace-identification approaches for dynamical systems, we perform
parameter estimation by minimizing a moment-matching objective using a scalable
stochastic gradient descent algorithm: The model is optimized to predict temporal
covariations across neurons and across time. We show how this approach naturally
handles missing data and multiple partial recordings, and can identify dynamics
and predict correlations even in the presence of severe subsampling and small
overlap between recordings. We demonstrate the effectiveness of the approach
both on simulated data and a whole-brain larval zebrafish imaging dataset.
1 Introduction
Dimensionality reduction methods based on state-space models [1, 2, 3, 4, 5] are useful for uncovering low-dimensional dynamics hidden in high-dimensional data. These models exploit structured
correlations in neural activity, both across neurons and over time [6]. This approach has been used to
identify neural activity trajectories that are informative about stimuli and behaviour and yield insights
into neural computations [7, 8, 9, 10, 11, 12, 13]. However, these methods are designed for analyzing
one population measurement at a time and are typically applied to population recordings of a few
dozens of neurons, yielding a statistical description of the dynamics of a small sample of neurons
within a brain area. How can we, from sparse recordings, gain insights into dynamics distributed
across entire circuits or multiple brain areas? One promising approach to scaling up the empirical
study of neural dynamics is to sequentially record from multiple neural populations, for instance by
moving the field-of-view of a microscope [14]. Similarly, chronic multi-electrode recordings make it
possible to record neural activity within a brain area over multiple days, but with neurons dropping
in and out of the measurement over time [15]. While different neurons will be recorded in different
sessions, we expect the underlying dynamics to be preserved across measurements.
The goal of this paper is to provide methods for extracting low-dimensional dynamics shared across
multiple, potentially overlapping recordings of neural population activity. Inferring dynamics from
* Current primary affiliation: Centre for Cognitive Science, Technical University Darmstadt
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
such data can be interpreted as a missing-data problem in which data is missing in a structured manner
(referred to as 'serial subset observations' [16], SSOs). Our methods allow us to capture the relevant
subspace and predict instantaneous and time-lagged correlations between all neurons, even when
substantial blocks of data are missing. Our methods are highly scalable, and applicable to data sets
with millions of observed units. On both simulated and empirical data, we show that our methods
extract low-dimensional dynamics and accurately predict temporal and cross-neuronal correlations.
Statistical approach: The standard approach for dimensionality reduction of neural dynamics is
based on search for a maximum of the log-likelihood via expectation-maximization (EM) [17, 18].
EM can be extended to missing data in a straightforward fashion, and SSOs allow for efficient
implementations, as we will show below. However, we will also show that subsampled data can lead
to slow convergence and high sensitivity to initial conditions. An alternative approach is given by
subspace identification (SSID) [19, 20]. SSID algorithms are based on matching the moments of the
model with those of the empirical data: The idea is to calculate the time-lagged covariances of the
model as a function of the parameters. Then, spectral methods (e.g. singular value decompositions)
are used to reconstruct parameters from empirically measured covariances. However, these methods
scale poorly to high-dimensional datasets where it impossible to even construct the time-lagged
covariance matrix. Our approach is also based on moment-matching; however, rather than using spectral approaches, we use numerical optimization to directly minimize the squared error between empirical and reconstructed time-lagged covariances without ever explicitly constructing the full
covariance matrix, yielding a subspace that captures both spatial and temporal correlations in activity.
This approach readily generalizes to settings in which many data points are missing, as the corresponding entries of the covariance can simply be dropped from the cost function. In addition,
it can also generalize to models in which the latent dynamics are nonlinear. Stochastic gradient
methods make it possible to scale our approach to high-dimensional (p = 107 ) and long (T = 105 )
recordings. We will show that use of temporal information (through time-lagged covariances) allows
this approach to work in scenarios (low overlap between recordings) in which alternative approaches
based on instantaneous correlations are not applicable [2, 21].
Related work:
Several studies have addressed estimation of linear dynamical systems from
subsampled data: Turaga et al. [22] used EM to learn high-dimensional linear dynamical models from multiple observations, an approach which they called 'stitching'. However, their model assumed high-dimensional dynamics, and is therefore limited to small population sizes (N ≈ 100). Bishop & Yu
[23] studied the conditions under which a covariance matrix can be reconstructed from multiple partial
measurements. However, their method and analysis were restricted to modelling time-instantaneous
covariances, and did not include temporal activity correlations. In addition, their approach is not based
on learning parameters jointly, but estimates the covariance in each observation-subset separately,
and then aligns these estimates post-hoc. Thus, while this approach can be very effective and is
important for theoretical analysis, it can perform sub-optimally when data is noisy. In the context
of SSID methods, Markovsky [24, 25] derived conditions for the reconstruction of missing data
from deterministic univariate linear time-invariant signals, and Liu et al. [26] use a nuclear norm-regularized SSID to reconstruct partially missing data vectors. Balzano et al. [21, 27] presented a
scalable dimensionality reduction approach (GROUSE) for data with missing entries. This approach
does not aim to capture temporal corrrelations, and is designed for data which is missing at random.
Soudry et al. [28] considered population subsampling from the perspective of inferring functional
connectivity, but focused on observation schemes in which there are at least some simultaneous
observations for each pair of variables.
2 Methods
2.1 Low-dimensional state-space models with linear observations
Model class: Our goal is to identify low-dimensional dynamics from multiple, partially overlapping
recordings of a high-dimensional neural population, and to use them to predict neural correlations.
We denote neural activity by $Y = \{y_t\}_{t=1}^T$, a length-T discrete-time sequence of p-dimensional vectors. We assume that the underlying n-dimensional dynamics x linearly modulate y,
$$y_t = C x_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, R) \qquad (1)$$
$$x_{t+1} = f(x_t, \eta_t), \qquad \eta_t \sim p(\eta), \qquad (2)$$
with diagonal observation noise covariance matrix $R \in \mathbb{R}^{p \times p}$. Thus, each observed variable $y_t^{(i)}$, $i = 1, \ldots, p$ is a noisy linear combination of the shared time-evolving latent modes $x_t$.
2
[Figure 1 graphics omitted: panels a–f; legend: separate model estimate, stitched model estimate, ground truth (unknown); axes include # neuron, # latent dim., latent dim. #1/#2, time relative to switch-point, and time-lag s. See caption below.]
Figure 1: Identifying low-dimensional dynamics shared across neural recordings a) Different
subsets of a large neural population are recorded sequentially (here: neurons 1 to 11, cyan, are recorded
first, then neurons 10 to 20, green). b) Low-dimensional (n = 3) trajectories extracted from data
in a: Our approach (orange) can extract the dynamics underlying the entire population, whereas an
estimation on each of the two observed subsets separately will not be able to align dynamics across
subsets. c) Subspace-maps (linear projection matrices C) inferred from each of the two observed
subsets separately (and hence not aligned), and for the entire recording. d) Same information as in
b, but as phase plots. e) Pairwise covariances; in this observation scheme, many covariances (red)
are unobserved, but can be reconstructed using our approach. f) Recovery of unobserved pairwise
covariances (red). Our approach is able to recover the unobserved covariance across subsets.
We consider stable latent zero-mean dynamics on x with time-lagged covariances $\Pi_s := \mathrm{Cov}[x_{t+s}, x_t] \in \mathbb{R}^{n \times n}$ for time-lags $s \in \{0, \ldots, S\}$. Time-lagged observed covariances $\Lambda(s) \in \mathbb{R}^{p \times p}$ can be computed from $\Pi_s$ as
$$\Lambda(s) := C \Pi_s C^\top + \delta_{s=0} R. \qquad (3)$$
An important special case is the classical linear dynamical system (LDS) with $f(x_t, \eta_t) = A x_t + \eta_t$, with $\eta_t \sim \mathcal{N}(0, Q)$ and $\Pi_s = A^s \Pi_0$. As we will see below, our SSID algorithm works directly on these time-lagged covariances, so it is also applicable to generative models with non-Markovian Gaussian latent dynamics, e.g. Gaussian Process Factor Analysis [2].
Partial observations and missing data: We treat multiple partial recordings as a missing-data problem: we use $y_t$ to model all activity measurements across multiple experiments, and assume that at any time t, only some of them will be observed. As a consequence, the data-dimensionality p could now easily be comprised of thousands of neurons, even if only small subsets are observed at any given time. We use index sets $\Omega_t \subseteq \{1, \ldots, p\}$, where $i \in \Omega_t$ indicates that variable i is observed at time point t. We obtain empirical estimates of time-lagged pairwise covariances for each variable pair (i, j) over all of those time points where the pair of variables is jointly observed with time-lag s. We define co-occurrence counts $T_{ij}^s = |\{t \,|\, i \in \Omega_{t+s} \wedge j \in \Omega_t\}|$.
In total there could be up to $S p^2$ many co-occurrence counts; however, for SSOs the number of unique counts is dramatically lower. To capitalize on this, we define co-occurrence groups $F \subseteq \{1, \ldots, p\}$, subsets of variables with identical observation patterns: $\forall i, j \in F, \forall t \leq T: i \in \Omega_t$ iff $j \in \Omega_t$. All element pairs $(i, j) \in F^2$ share the same co-occurrence count $T_{ij}^s$ per time-lag s. Co-occurrence groups are non-overlapping and together cover the whole range $\{1, \ldots, p\}$. There might be pairs (i, j) which are never observed, i.e. for which $T_{ij}^s = 0$ for each s. We collect variable pairs co-observed at least twice at time-lag s, $\Omega^s = \{(i, j) \,|\, T_{ij}^s > 1\}$. For these pairs we can calculate an unbiased estimate of the s-lagged covariance,
$$\mathrm{Cov}[y_{t+s}^{(i)}, y_t^{(j)}] \approx \frac{1}{T_{ij}^s - 1} \sum_t y_{t+s}^{(i)} y_t^{(j)} =: \hat{\Lambda}(s)^{(ij)}. \qquad (4)$$
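To make the estimator in eq. (4) concrete, the following minimal sketch (ours, not the authors' code; it assumes zero-mean data with NaNs marking unobserved entries, so that $\Omega_t$ is the set of non-NaN indices at time t) computes the co-occurrence counts $T_{ij}^s$ and the empirical lagged covariances $\hat\Lambda(s)$:

```python
import numpy as np

def lagged_covariances(Y, max_lag):
    """Y: (T, p) array with NaN for unobserved entries; returns dicts over lags s."""
    T, p = Y.shape
    observed = ~np.isnan(Y)             # (T, p) mask encoding the index sets Omega_t
    Y0 = np.nan_to_num(Y)               # zeros where unobserved
    cov, counts = {}, {}
    obs_f = observed.astype(float)
    for s in range(max_lag + 1):
        # co-occurrence counts T_ij^s = |{t : i in Omega_{t+s} and j in Omega_t}|
        C = obs_f[s:].T @ obs_f[:T - s]
        S = Y0[s:].T @ Y0[:T - s]       # sum over t of y_{t+s}^(i) y_t^(j)
        with np.errstate(invalid="ignore", divide="ignore"):
            est = S / (C - 1)           # unbiased normalization from eq. (4)
        est[C <= 1] = np.nan            # pairs co-observed at most once: undefined
        cov[s], counts[s] = est, C
    return cov, counts
```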
2.2 Expectation maximization for stitching linear dynamical systems
EM can readily be extended to missing data by removing likelihood-terms corresponding to missing data [29]. In the E-step of our stitching-version of EM (sEM), we use the default Kalman filter and smoother equations with subindexed $C_t = C_{(\Omega_t,:)}$ and $R_t = R_{(\Omega_t,\Omega_t)}$ parameters for each time point t. We speed up the E-step by tracking convergence of latent posterior covariances, and stop updating these when they have converged [30]; for long T, this can result in considerably faster smoothing.
For the M-step, we adapt maximum likelihood estimates of parameters $\theta = \{A, Q, C, R\}$. Dynamics parameters (A, Q) are unaffected by SSOs. The update for C is given by
$$C_{(i,:)} = \left( \sum_t y_t^{(i)} \mathbb{E}[x_t]^\top - \frac{1}{|O_i|} \sum_t y_t^{(i)} \sum_t \mathbb{E}[x_t]^\top \right) \left( \sum_t \mathbb{E}[x_t x_t^\top] - \frac{1}{|O_i|} \sum_t \mathbb{E}[x_t] \sum_t \mathbb{E}[x_t]^\top \right)^{-1}, \qquad (5)$$
where $O_i = \{t \,|\, i \in \Omega_t\}$ is the set of time points for which $y^{(i)}$ is observed, and all sums are over $t \in O_i$. For SSOs, we use temporal structure in the observation patterns $\Omega_t$ to avoid unnecessary calculations of the inverse in (5): all elements i of a co-occurrence group share the same $O_i$.
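As an illustration of the row-wise update (5), here is a short sketch (our own; it assumes the smoothed posterior moments over $O_i$ have already been computed by the E-step):

```python
import numpy as np

def m_step_row(y_i, Ex, Exx_sum):
    """y_i: (Ti,) observations of neuron i over its observed time points O_i;
    Ex: (Ti, n) posterior means E[x_t]; Exx_sum: (n, n) sum of E[x_t x_t^T] over O_i."""
    Ti = len(y_i)
    a = y_i @ Ex - (y_i.sum() / Ti) * Ex.sum(axis=0)             # left factor in (5)
    B = Exx_sum - np.outer(Ex.sum(axis=0), Ex.sum(axis=0)) / Ti  # right factor in (5)
    return np.linalg.solve(B.T, a)                               # row C_(i,:) = a B^{-1}
```

Since all neurons in a co-occurrence group share the same $O_i$, the factor B (and its inverse) can be computed once per group and reused for every row in that group.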
2.3 Scalable subspace-identification with missing data via moment-matching
Subspace identification: Our algorithm (Stitching-SSID, S3ID) is based on moment-matching
approaches for linear systems [31]. We will show that it provides robust initialisation for EM,
and that it performs more robustly (in the sense of yielding samples which more closely capture
empirically measured correlations, and predict missing ones) on non-Gaussian and nonlinear data.
For fully observed linear dynamics, statistically consistent estimators for ? = {C, A, ?0 , R} can
?
be obtained from {?(s)}
s [20] by applying an SVD to the pK ? pL block Hankel matrix H with
? + l ? 1). For our situation with large p and massively missing entries in ?(s),
?
blocks Hk,l = ?(k
we
define an explicit loss function which penalizes the squared difference between empirically observed
covariances and those predicted by the parametrised model (3),
1X
2
?
rs ||?(s) ? ?(s)||
(6)
L(C, {?s }, R) =
?s ,
2 s
where || ? ||? denotes the Froebenius norm applied to all elements in index set ?. For linear dynamics,
we constrain ?s by setting ?s = As ?0 and optimize over A instead of over ?s . We refer to this
algorithm as ?linear S3ID?, and to the general one as ?nonlinear S3ID?. However, we emphasize that
only the latent dynamics are (potentially) nonlinear, dimensionality reduction is linear in both cases.
Optimization via stochastic gradients: For large-scale applications, explicit computation and
?
storage of the observed ?(s)
is prohibitive since they can scale as |?s | ? p2 , which renders
computation of the full loss L impractical. We note, however, that the gradients of L are linear in
? (i,j) ? P y(i) yt(j) . This allows us to obtain unbiased stochastic estimates of the gradients by
?(s)
t t+s
uniformly subsampling time points t and corresponding pairs of data vectors yt+s , yt with time-lag
s, without explicit calculation of the loss L. The batch-wise gradients are given by
?Lt,s
(i)
(i) >
>
i,t+s
= ?(s)(i,:) ? yt+s yt> Nsi,t C?>
+
[?(s)
]
?
y
y
C?s
(7)
(i,:)
t
s
t+s Ns
?C(i,:)
X
?Lt,s
(i)
>
=
C(i,:)
?(s)(i,:) ? yt+s yt> Nsi,t C
(8)
??s
i??t+s
2
?Lt,s
?s0
(i)
= 0 ?(0)(i,i) ? yt
,
(9)
?Rii
Tii
where Nsi,t ? Np?p is a diagonal matrix with [Nsi,t ]jj =
1
s
Tij
if j ? ?t , and 0 otherwise.
Gradients scale linearly in p both in memory and computation and allow us to minimize L without
explicit computation of the empirical time-lagged covariances, or L itself. To monitor performance
and convergence for large systems, we compute the loss over a random subset of covariances. The
computation of gradients for C and R can be fully vectorized over all elements i of a co-occurence
group, as these share the same matrices Nsi,t . We use ADAM [32] for stochastic gradient descent,
4
which combines momentum over subsequent gradients with individual self-adjusting step sizes for
each parameter. By using momentum on the stochastic gradients, we effectively obtain a gradient
that aggregates information from empirical time-lagged covariances across multiple gradient steps.
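For completeness, one ADAM update (the optimizer cited as [32]; a standard textbook sketch, not specific to this paper) applied to a stochastic gradient g of any of the parameters:

```python
import numpy as np

def adam_step(param, g, m, v, k, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update; m, v are running first/second moment estimates, step k >= 1."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** k)                       # bias correction
    v_hat = v / (1 - b2 ** k)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```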
2.4 How temporal information helps for stitching
The key challenge in stitching is that the latent space inferred by an LDS is defined only up to
choice of coordinate system (i.e. a linear transformation of C). Thus, stitching is successful if one
can align the Cs corresponding to different subpopulations into a shared coordinate system for the
latent space of all p neurons [23] (Fig. 1). In the noise-free regime and if one ignores temporal
information, this can work only if the overlap between two sub-populations is at least as large as
the latent dimensionality, as shown by [23]. However, dynamics (i.e. temporal correlations) provide
additional constraints for the alignment which can allow stitching even without overlap:
Assume two subpopulations $I_1, I_2$ with parameters $\theta^1, \theta^2$, latent spaces $x^1, x^2$ and with overlap set $J = I_1 \cap I_2$ and overlap $o = |J|$. The overlapping neurons $y_t^{(J)}$ are represented by both the matrix rows $C^1_{(J,:)}$ and $C^2_{(J,:)}$, each in their respective latent coordinate systems. To stitch, one needs to identify the base change matrix M aligning latent coordinate systems consistently across the two populations, i.e. such that $M x^1 = x^2$ satisfies the constraints $C^1_{(J,:)} = C^2_{(J,:)} M^{-1}$. When only considering time-instantaneous covariances, this yields o linear constraints, and thus the necessary condition that $o \geq n$, i.e. the overlap has to be at least as large as the latent dimensionality [23].
Including temporal correlations yields additional constraints, as the time-lagged activities also have
to be aligned, and these constraints can be combined in the observability matrices:
$$O_J^1 = \begin{pmatrix} C^1_{(J,:)} \\ C^1_{(J,:)} A^1 \\ \vdots \\ C^1_{(J,:)} (A^1)^{n-1} \end{pmatrix} = \begin{pmatrix} C^2_{(J,:)} \\ C^2_{(J,:)} A^2 \\ \vdots \\ C^2_{(J,:)} (A^2)^{n-1} \end{pmatrix} M^{-1} = O_J^2 M^{-1}.$$
If both observability matrices $O_J^1$ and $O_J^2$ have full rank (i.e. rank n), then M is uniquely constrained, and this identifies the base change required to align the latent coordinate systems.
To get consistent latent dynamics, the matrices $A^1$ and $A^2$ have to be similar, i.e. $M A^1 M^{-1} = A^2$, and correspondingly the time-lagged latent covariance matrices $\Pi^1_s, \Pi^2_s$ satisfy $\Pi^1_s = M \Pi^2_s M^\top$. These dynamics might yield additional constraints: For example, if both $A^1$ and $A^2$ have unique (and the same) eigenvalues (and we know that we have identified all latent dimensions), then one could align the latent dimensions of x which share the same eigenvalues, even in the absence of overlap.
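The alignment argument above suggests a simple recipe, sketched below (our own illustration, not the paper's implementation): build both observability matrices on the overlap J and solve $O_J^1 = O_J^2 M^{-1}$, i.e. $O_J^1 M = O_J^2$, for M by least squares. The solution is unique exactly when both matrices have rank n.

```python
import numpy as np

def observability(C_J, A, n):
    blocks, Ak = [], np.eye(A.shape[0])
    for _ in range(n):
        blocks.append(C_J @ Ak)   # rows C_J A^k for k = 0, ..., n-1
        Ak = A @ Ak
    return np.vstack(blocks)

def align_latents(C1_J, A1, C2_J, A2, n):
    O1 = observability(C1_J, A1, n)
    O2 = observability(C2_J, A2, n)
    M, *_ = np.linalg.lstsq(O1, O2, rcond=None)  # solves O1 M = O2 in least squares
    return M
```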
2.5 Details of simulated and empirical data
Linear dynamical system: We simulate LDSs to test algorithms S3ID and sEM. For dynamics matrices A, we generate eigenvalues with absolute values linearly spanning the interval [0.9, 0.99] and complex angles independently von Mises-distributed with zero mean and concentration $\kappa = 1000$, resulting in smooth latent trajectories. To investigate stitching-performance on SSOs, we divided the entire population of size p = 1000 into two subsets $I_1 = [1, \ldots, p_1]$, $I_2 = [p_2, \ldots, p]$, $p_2 \leq p_1$, with overlap $o = p_1 - p_2$. We simulate for $T_m = 50$k time points, m = 1, 2, for a total of $T = 10^5$ time points. We set the $R_{ii}$ such that 50% of the variance of each variable is private noise. Results are aggregated over 20 data sets for each simulation. For the scaling analysis in section 3.2, we simulate population sizes $p = 10^3, 10^4, 10^5$, at overlap o = 10%, for $T_m = 15$k and 10 data sets (different random initialisation for LDS parameters and noise) for each population size. We compute subspace projection errors between C and $\hat{C}$ as $e(C, \hat{C}) = \|(I - \hat{C} \hat{C}^\top) C\|_F / \|C\|_F$.
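A small sketch of this error metric (ours; we assume the columns of the estimate are first orthonormalized, e.g. by QR, so that $\hat{C}\hat{C}^\top$ is an orthogonal projector onto the estimated subspace):

```python
import numpy as np

def subspace_projection_error(C, C_hat):
    Q, _ = np.linalg.qr(C_hat)          # orthonormal basis of span(C_hat)
    resid = C - Q @ (Q.T @ C)           # part of C outside the estimated subspace
    return np.linalg.norm(resid) / np.linalg.norm(C)
```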
Simulated neural networks: We simulate a recurrent network of 1250 exponential integrate-and-fire neurons [33] (250 inhibitory and p = 1000 excitatory neurons) with clustered connectivity for
T = 60k time points. The inhibitory neurons exhibit unspecific connectivity towards the excitatory
units. Excitatory neurons are grouped into 10 clusters with high connectivity (30%) within cluster
and low connectivity (10%) between clusters, resulting in low-dimensional dynamics with smooth,
oscillating modes corresponding to the 10 clusters.
Larval-zebrafish imaging: We applied S3ID to a dataset obtained by light-sheet fluorescence imaging of the whole brain of the larval zebrafish [34]. For this data, every data vector $y_t$ represents a 2048 × 1024 × 41 three-dimensional image stack of fluorescence activity recorded sequentially across 41 z-planes, over a total of T = 1200 time points of recording at 1.15 Hz scanning speed across all z-planes. We separate foreground from background voxels by thresholding per-voxel fluorescence activity variance, select p = 7,828,017 voxels of interest (≈ 9.55% of total) across all z-planes, and z-score variances.
3 Results
3.1 Stitching on simulated data
[Figure 2 graphics omitted: panels a–c; legend entries include FA (naive), GROUSE, sEM, S3ID and overlaps o = 100%, 30%, 10%, 2.5%, 1%; axes include subspace projection error, correlation of covariances, overlap o, time-lag s, and EM iterations. See caption below.]
Figure 2: Dimensionality reduction for multiple partial recordings. Simulated LDS with p = 1K neurons and n = 10 latent variables, two subpopulations, varying degrees of overlap o. a) Subspace estimation performance for S3ID, sEM and reference algorithms (GROUSE and naive FA). Subspace projection errors averaged over 20 generated data sets, ±1 SEM. S3ID returns good subspace estimates across a wide range of overlaps. b) Estimation of dynamics. Correlations between ground-truth and estimated time-lagged covariances for unobserved pair-wise covariances. c) Subspace projection error for sEM as a function of iterations, for different overlaps. Errors per data set, and means (bold lines). Convergence of sEM slows down with decreasing overlap.
To test how well parameters of LDS models can be reconstructed from high-dimensional partial
observations, we simulated an LDS and observed it through two overlapping subsets, parametrically
varying the size of overlap between them from o = 1% to o = 100%.
As a simple baseline, we apply a 'naive' Factor Analysis, for which we impute missing data as 0.
GROUSE [21], an algorithm designed for randomly missing data, recovers a consistent subspace
for overlap o = 30% and greater, but fails for smaller overlaps. As sEM (maximum number of 200
iterations) is prone to get stuck in local optima, we randomly initialise it with 4 seeds per fit and report
results with highest log-likelihood. sEM worked well even for small overlaps, but with increasingly
variable results (see Fig. 2c). Finally, we applied our SSID algorithm S3ID which exhibited good
performance, even for small overlaps.
[Figure 3 graphics omitted: panels a–b; axes include normalized variance, dynamics eigenvalue, and # latent dim., for n = 10, 20, 50. See caption below.]
Figure 3: Choice of latent dimensionality. Eigenvalue spectra of system matrices estimated from simulated LDS data with o = 5% overlap and different latent dimensionalities n. a) Eigenvalues of instantaneous covariance matrix $\Pi_0$. b) Eigenvalues of linear dynamics matrix A. Both spectra indicate an elbow at real data dimensionality n = 10 when S3ID is run with n ≥ 10.
To quantify recovery of dynamics, we compare predictions for pairwise time-lagged covariances
between variables not co-observed simultaneously (Fig. 2b). Because GROUSE itself does not capture
temporal correlations, we obtain estimated time-lagged correlations by projecting data yt onto the
obtained subspace and extract linear dynamics from estimated time-lagged latent covariances. S3ID
is optimized to capture time-lagged covariances, and therefore outperforms alternative algorithms.
Figure 4: Comparison with post-hoc alignment of subspaces. a) Multiple partial recordings with 20 sequentially recorded subpopulations. b) We apply S3ID to the full population, as well as factor analysis to each of these subpopulations. The latter gives 20 subspace estimates, which we sequentially align using subpopulation overlaps. [Figure graphics omitted: axes include variable i, time t, subspace projection error, and number of dimensions; legend: post-hoc alignment vs. S3ID.]
When we use a latent dimensionality (n = 20, 50) larger than the true one (n = 10), we observe
'elbows' in the eigen-spectra of instantaneous covariance estimate $\Pi_0$ and dynamics matrix A located
at the true dimensionality (Fig. 3). This observation suggests we can use standard techniques for
choosing latent dimensionalities in applications where the real n is unknown. Choosing n too large
or too small led to some decrease in prediction quality of unobserved (time-lagged) correlations.
Importantly though, performance degraded gracefully when the dimensionality was chosen too big:
For instance, at 5% overlap, correlation between predicted and ground-truth unobserved instantaneous
covariances was 0.99 for true latent dimensionality n = 10 (Fig. 2b). At smaller n = 5 and n = 8,
correlations were 0.69 and 0.89, respectively, and for larger n = 20 and n = 50, they were 0.97 and
0.96. In practice, we recommend using n larger than the hypothesized latent dimensionality.
S3ID and sEM jointly estimate the subspace C across the entire population. An alternative approach
would be to identify the subspaces for the different subpopulations via separate matrices $C_{(I,:)}$ and subsequently align these estimates via their pairwise overlap [23]. This works very well on this example (as for each subset there is sufficient data to estimate each $C_{(I,:)}$ individually). However, in
Fig. 4 we show that this approach performs suboptimally in scenarios in which data is more noisy or
comprised of many (here 20) subpopulations. In summary, S3ID can reliably stitch simulated data
across a range of overlaps, even for very small overlaps.
3.2 Stitching for different population sizes: Combining S3ID with sEM works best
[Figure 5 graphics omitted: panels a–c; legend entries include sEM, S3ID, and S3ID+sEM for p = 10^3, 10^4, 10^5; axes include largest principal angle and computation time [s]. See caption below.]
Figure 5: Initializing EM with SSID for fast and robust convergence. LDS with $p = 10^3, 10^4, 10^5$ neurons and n = 10 latent variables, 10% overlap. a) Largest principal angles as a function of computation time. We compare randomly initialised sEM with sEM initialised from S3ID after a single pass over the data. b) Comparison of final subspace estimate. We can combine the high reliability of S3ID with the low final subspace angle of EM by initialising sEM with S3ID. c) Comparison of total run-times. Initialization by S3ID does not change overall runtime.
The above results were obtained for fixed population size p = 1000. To investigate how performance
and computation time scale with population size, we simulate data from an LDS with fixed overlap
o = 10% for different population sizes. We run S3ID with a single pass, and subsequently use its
final parameter estimates to initialize sEM. We set the maximum number of iterations for sEM to 50,
corresponding to approximately 1.5h of training time for $p = 10^5$ observed variables. We quantify
the subspace estimates by the largest principal angle between ground-truth and estimated subspaces.
We find that the best performance is achieved by the combined algorithm (S3ID + sEM, Fig. 5a,b). In
particular, S3ID reliably and quickly leads to a reduction in error (Fig. 5a), but (at least when capped
at one pass over the data), further improvements can be achieved by letting sEM do further 'fine-tuning' of parameters from the initial estimate [35]. When starting sEM from random initializations,
we find that it often gets stuck in local minima (potentially, shallow regions of the log-likelihood).
While convergence issues for EM have been reported before, we remark that these issues seem to be much more severe for stitching. We hypothesize that the presence of two potential solutions (one for each observation subset) makes parameter inference more difficult.
Computation times for both stitching algorithms scale approximately linearly with observed population size p (Fig. 5c). When initializing sEM by S3ID, we found that the cost of S3ID is amortized by
faster convergence of sEM. In summary, S3ID performs robustly across different population sizes,
but can be further improved when used as an initializer for sEM.
3.3 Spiking neural networks
How well can our approach capture and predict correlations in spiking neural networks, from partial
observations? To answer this question, we applied S3ID to a network simulation of inhibitory and
excitatory neurons (Fig. 6a), divided into 10 clusters with strong intra-cluster connectivity. We
apply S3ID-initialised sEM with n = 20 latent dimensions to this data and find good recovery of
time-instantaneous covariances (Fig. 6b), but poor recovery of long-range temporal interactions.
Since sEM assumes linear latent dynamics, we test whether this is due to a violation of the linearity
assumption by applying S3ID with nonlinear latent dynamics, i.e. by learning the latent covariances
$\Pi_s$, s = 0, ..., 39. This comes at the cost of learning 40 rather than 2 $n \times n$ matrices to characterise the latent space, but we note that this still amounts to only 76.2% of the parameters learned for
C and R. We find that the nonlinear latent dynamics approach allows for markedly better predictions
of time-lagged covariances (Fig. 6b).
We attempt to recover cluster membership for each of the neurons from the estimated emission
matrices C using K-means clustering on the rows of C. Because the 10 clusters are distributed over
both subpopulations, this will only be successful if the latent representations for the two subpoplations
are sufficiently aligned. While we find that both approaches can assign most neurons correctly, only
the nonlinear version of S3ID allows correct recovery for every neuron. Thus, the flexibility of
S3ID allows more accurate reconstruction and prediction of correlations in data which violates the
assumptions of linear Gaussian dynamics.
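The cluster-recovery analysis can be sketched as follows (our own illustration; it assumes scikit-learn is available and that ground-truth labels exist for comparison):

```python
import numpy as np
from sklearn.cluster import KMeans

def recover_clusters(C_hat, n_clusters=10, seed=0):
    """Cluster neurons by the rows of the estimated emission matrix C."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(C_hat)   # one cluster label per neuron (row)
```

Because the rows of C live in the shared latent coordinate system, this assignment is only meaningful if the two subpopulations have been aligned correctly, which is why it serves as a stitching diagnostic here.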
We also applied dynamics-agnostic S3ID when undersampling two out of the ten clusters. Prediction
of unobserved covariances for the undersampled clusters was robust down to sampling only 50% of
neurons from those clusters. For 50/40/30% sampling, we obtained correlations of instantaneous
covariances of 0.97/0.80/0.32 for neurons in the undersampled clusters. Correlation across all clusters
remained above 0.97 throughout. K-means on the rows of learned emission matrix C still perfectly
identified the ten clusters at 40% sampling, whereas below that it fused the undersampled clusters.
[Figure 6 graphics omitted: panels a–c; axes include # neuron, time t, # cluster, time-lag s, and correlation of covariances; legend: partially obs. linear, partially obs. nonlinear, fully observed nonlinear. See caption below.]
Figure 6: Spiking network simulation. a) Spiking data for 100 example neurons from 10 clusters,
and two observations with 10% overlap (clusters shuffled across observations-subsets). b) Correlations between ground-truth and estimated time-lagged covariances for non-observed pairwise
covariances, for S3ID with or without linearity assumption, as well as for sEM initialised with linear
S3ID. c) Recovery of cluster membership, using K-means clustering on estimated C.
3.4 Zebrafish imaging data
Finally, we want to determine how well the approach works on real population imaging data, and test whether it can scale to millions of dimensions.
[Figure 7 graphics omitted: panels a–b; legend entries include 'partially observed, nonlinear', 'fully observed, nonlinear', 'fully observed, linear'; axes include imaging plane z, time t, estimated vs. ground-truth covariance, correlation of covariances, and time-lag s. See caption below.]
Figure 7: Zebrafish imaging data. Multiple partial recordings for p = 7,828,017-dimensional data from light-sheet fluorescence imaging of larval zebrafish. Data vectors represent volumetric frames from 41 planes. a) Simulated observation scheme: we assume the imaging data was recorded over two sessions with a single imaging plane in overlap. We apply S3ID with latent dimensionality n = 10 with linear and nonlinear latent dynamics. b) Quantification of covariance recovery. Comparison of held-out ground-truth and estimated instantaneous covariances, for $10^6$ randomly selected voxel pairs not co-observed under the observation scheme in a. We estimate covariances from two models learned from partially observed data (green: dynamics-agnostic; magenta: linear dynamics) and from a control fit to fully-observed data (orange, dynamics-agnostic). left: Instantaneous covariances. right: Prediction of time-lagged covariances. Correlation of covariances as a function of time-lag.
To this end, we apply (both linear and nonlinear) S3ID to volume scans of larval zebrafish brain activity obtained with light-sheet fluorescence microscopy, comprising p = 7,828,017 voxels. We assume an observation scheme in which the first 21 (out of 41) imaging planes are imaged in the first session, and the remaining 21 planes in the second, i.e. with only z-plane 21 (234,572 voxels) in overlap (Fig. 7a,b). We evaluate the performance by
predicting (time-lagged) pairwise covariances for voxel pairs not co-observed under the assumed
multiple partial recording, using eq. 3. We find that nonlinear S3ID is able to reconstruct correlations
with high accuracy (Fig. 7c), and even outperforms linear S3ID applied to full observations. FA
applied to each imaging session and aligned post-hoc (as by [23]) obtained a correlation of 0.71 for
instantaneous covariances, and applying GROUSE to the observation scheme gave correlation 0.72.
4 Discussion
In order to understand how neural dynamics and computations are distributed across large neural circuits, we need methods for interpreting neural population recordings with many neurons and in sufficiently rich, complex tasks [12]. Here, we provide methods for dimensionality reduction which
dramatically expand the range of possible analyses. This makes it possible to identify dynamics
in data with millions of dimensions, even if many observations are missing in a highly structured
manner, e.g. because measurements have been obtained in multiple overlapping recordings. Our
approach identifies parameters by matching model-predicted covariances with empirical ones; thus,
it yields models which are optimized to be realistic generative models of neural activity. While
maximum-likelihood approaches (i.e. EM) are also popular for fitting dynamical system models
to data, they are not guaranteed to provide realistic samples when used as generative models, and
empirically often yield worse fits to measured correlations, or even diverging firing rates.
Our approach readily permits several possible generalizations: First, using methods similar to [35], it
could be generalized to nonlinear observation models, e.g. generalized linear models with Poisson
observations. In this case, one could still use gradient descent to minimize the mismatch between
model-predicted covariance and empirical covariances. Second, one could impose non-negativity
constraints on the entries of C to obtain more interpretable network models [36]. Third, one could
generalize the latent dynamics to nonlinear or non-Markovian parametric models, and optimize the
parameters of these nonlinear dynamics using stochastic gradient descent. For example, one could
optimize the kernel-function of GPFA directly by matching the GP-kernel to the latent covariances.
Acknowledgements We thank M. Ahrens for the larval zebrafish data. Our work was supported by
the caesar foundation.
References
[1] J. P. Cunningham and M. Y. Byron, "Dimensionality reduction for large-scale neural recordings," Nature Neuroscience, vol. 17, no. 11, pp. 1500–1509, 2014.
[2] M. Y. Byron, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity," in Advances in Neural Information Processing Systems, pp. 1881–1888, 2009.
[3] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in Advances in Neural Information Processing Systems, pp. 1350–1358, 2011.
[4] D. Pfau, E. A. Pnevmatikakis, and L. Paninski, "Robust learning of low-dimensional dynamics from large neural ensembles," in Advances in Neural Information Processing Systems, pp. 2391–2399, 2013.
[5] Y. Gao, L. Busing, K. V. Shenoy, and J. P. Cunningham, "High-dimensional neural spike train analysis with generalized count linear dynamical systems," in Advances in Neural Information Processing Systems, pp. 2044–2052, 2015.
[6] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, "Neural population dynamics during reaching," Nature, vol. 487, no. 7405, p. 51, 2012.
[7] O. Mazor and G. Laurent, "Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons," Neuron, vol. 48, no. 4, pp. 661–73, 2005.
[8] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan, Jr, "Optical imaging of neuronal populations during decision-making," Science, vol. 307, no. 5711, pp. 896–901, 2005.
[9] D. V. Buonomano and W. Maass, "State-dependent computations: spatiotemporal processing in cortical networks," Nat Rev Neurosci, vol. 10, no. 2, pp. 113–125, 2009.
[10] K. V. Shenoy, M. Sahani, and M. M. Churchland, "Cortical control of arm movements: a dynamical systems perspective," Annu Rev Neurosci, vol. 36, pp. 337–59, 2013.
[11] V. Mante, D. Sussillo, K. V. Shenoy, and W. T. Newsome, "Context-dependent computation by recurrent dynamics in prefrontal cortex," Nature, vol. 503, no. 7474, pp. 78–84, 2013.
[12] P. Gao and S. Ganguli, "On simplicity and complexity in the brave new world of large-scale neuroscience," Curr Opin Neurobiol, vol. 32, pp. 148–55, 2015.
[13] N. Li, K. Daie, K. Svoboda, and S. Druckmann, "Robust neuronal dynamics in premotor cortex during motor planning," Nature, vol. 532, no. 7600, pp. 459–64, 2016.
[14] N. J. Sofroniew, D. Flickinger, J. King, and K. Svoboda, "A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging," eLife, vol. 5, 2016.
[15] A. K. Dhawale, R. Poddar, S. B. Wolff, V. A. Normand, E. Kopelowitz, and B. P. Ölveczky, "Automated long-term recording and analysis of neural activity in behaving animals," eLife, vol. 6, 2017.
[16] Q. J. Huys and L. Paninski, "Smoothing of, and parameter estimation from, noisy biophysical recordings," PLoS Comput Biol, vol. 5, no. 5, p. e1000379, 2009.
[17] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B (Methodological), pp. 1–38, 1977.
[18] Z. Ghahramani and G. E. Hinton, "Parameter estimation for linear dynamical systems," Technical Report CRG-TR-96-2, University of Toronto, Dept. of Computer Science, 1996.
[19] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory, Implementation, Applications. Springer Science & Business Media, 2012.
[20] T. Katayama, Subspace Methods for System Identification. Springer Science & Business Media, 2006.
[21] L. Balzano, R. Nowak, and B. Recht, "Online identification and tracking of subspaces from highly incomplete information," in Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pp. 704–711, IEEE, 2010.
[22] S. Turaga, L. Buesing, A. M. Packer, H. Dalgleish, N. Pettit, M. Hausser, and J. Macke, "Inferring neural population dynamics from multiple partial recordings of the same neural circuit," in Advances in Neural Information Processing Systems, pp. 539–547, 2013.
[23] W. E. Bishop and B. M. Yu, "Deterministic symmetric positive semidefinite matrix completion," in Advances in Neural Information Processing Systems, pp. 2762–2770, 2014.
[24] I. Markovsky, "The most powerful unfalsified model for data with missing values," Systems & Control Letters, 2016.
[25] I. Markovsky, "A missing data approach to data-driven filtering and control," IEEE Transactions on Automatic Control, 2016.
[26] Z. Liu, A. Hansson, and L. Vandenberghe, "Nuclear norm system identification with missing inputs and outputs," Systems & Control Letters, vol. 62, no. 8, pp. 605–612, 2013.
[27] J. He, L. Balzano, and J. Lui, "Online robust subspace tracking from partial information," arXiv preprint arXiv:1109.3827, 2011.
[28] D. Soudry, S. Keshri, P. Stinson, M.-h. Oh, G. Iyengar, and L. Paninski, "Efficient 'shotgun' inference of neural connectivity from highly sub-sampled activity data," PLoS Comput Biol, vol. 11, no. 10, p. e1004464, 2015.
[29] S. C. Turaga, L. Buesing, A. Packer, H. Dalgleish, N. Pettit, M. Hausser, and J. H. Macke, "Inferring neural population dynamics from multiple partial recordings of the same neural circuit," in Advances in Neural Information Processing Systems, pp. 539–547, 2013.
[30] E. A. Pnevmatikakis, K. R. Rad, J. Huggins, and L. Paninski, "Fast Kalman filtering and forward-backward smoothing via a low-rank perturbative approach," Journal of Computational and Graphical Statistics, vol. 23, no. 2, pp. 316–339, 2014.
[31] M. Aoki, State Space Modeling of Time Series. Springer Science & Business Media, 1990.
[32] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[33] R. Brette and W. Gerstner, "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity," Journal of Neurophysiology, vol. 94, no. 5, pp. 3637–3642, 2005.
[34] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, "Whole-brain functional imaging at cellular resolution using light-sheet microscopy," Nature Methods, vol. 10, no. 5, pp. 413–420, 2013.
[35] L. Buesing, J. H. Macke, and M. Sahani, "Spectral learning of linear dynamics from generalised-linear observations with application to neural population data," in Advances in Neural Information Processing Systems, pp. 1682–1690, 2012.
[36] L. Buesing, T. A. Machado, J. P. Cunningham, and L. Paninski, "Clustered factor analysis of multineuronal spike data," in Advances in Neural Information Processing Systems, pp. 3500–3508, 2014.
Unifying PAC and Regret: Uniform PAC Bounds for
Episodic Reinforcement Learning
Tor Lattimore*
[email protected]
Christoph Dann
Machine Learning Department
Carnegie-Mellon University
[email protected]
Emma Brunskill
Computer Science Department
Stanford University
[email protected]
Abstract
Statistical performance bounds for reinforcement learning (RL) algorithms can be
critical for high-stakes applications like healthcare. This paper introduces a new
framework for theoretically measuring the performance of such algorithms called
Uniform-PAC, which is a strengthening of the classical Probably Approximately
Correct (PAC) framework. In contrast to the PAC framework, the uniform version
may be used to derive high probability regret guarantees and so forms a bridge
between the two setups that has been missing in the literature. We demonstrate
the benefits of the new framework for finite-state episodic MDPs with a new
algorithm that is Uniform-PAC and simultaneously achieves optimal regret and
PAC guarantees except for a factor of the horizon.
1 Introduction
The recent empirical successes of deep reinforcement learning (RL) are tremendously exciting, but the
performance of these approaches still varies significantly across domains, each of which requires the
user to solve a new tuning problem [1]. Ultimately we would like reinforcement learning algorithms
that simultaneously perform well empirically and have strong theoretical guarantees. Such algorithms
are especially important for high stakes domains like health care, education and customer service,
where non-expert users demand excellent outcomes.
We propose a new framework for measuring the performance of reinforcement learning algorithms called Uniform-PAC. Briefly, an algorithm is Uniform-PAC if with high probability it simultaneously for all $\epsilon > 0$ selects an $\epsilon$-optimal policy on all episodes except for a number that scales polynomially with $1/\epsilon$. Algorithms that are Uniform-PAC converge to an optimal policy with high probability and immediately yield both PAC and high probability regret bounds, which makes them superior to algorithms that come with only PAC or regret guarantees. Indeed,
(a) Neither PAC nor regret guarantees imply convergence to optimal policies with high probability;
(b) $(\epsilon, \delta)$-PAC algorithms may be $\epsilon/2$-suboptimal in every episode;
(c) Algorithms with small regret may be maximally suboptimal infinitely often.
* Tor Lattimore is now at DeepMind, London
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Uniform-PAC algorithms suffer none of these drawbacks. One could hope that existing algorithms
with PAC or regret guarantees might be Uniform-PAC already, with only the analysis missing.
Unfortunately this is not the case and modification is required to adapt these approaches to satisfy
the new performance metric. The key insight for obtaining Uniform-PAC guarantees is to leverage
time-uniform concentration bounds such as the finite-time versions of the law of iterated logarithm,
which obviates the need for horizon-dependent confidence levels.
We provide a new optimistic algorithm for episodic RL called UBEV that is Uniform PAC. Unlike its
predecessors, UBEV uses confidence intervals based on the law of iterated logarithm (LIL) which
hold uniformly over time. They allow us to more tightly control the probability of failure events
in which the algorithm behaves poorly. Our analysis is nearly optimal according to the traditional
metrics, with a linear dependence on the state space for the PAC setting and square root dependence
for the regret. Therefore UBEV is a Uniform PAC algorithm with PAC bounds and high probability
regret bounds that are near optimal in the dependence on the length of the episodes (horizon) and
optimal in the state and action spaces cardinality as well as the number of episodes. To our knowledge
UBEV is the first algorithm with both near-optimal PAC and regret guarantees.
Notation and setup. We consider episodic fixed-horizon MDPs with time-dependent dynamics,
which can be formalized as a tuple M = (S, A, pR , P, p0 , H). The statespace S and the actionspace
A are finite sets with cardinality S and A. The agent interacts with the MDP in episodes of H time
steps each. At the beginning of each time-step t ? [H] the agent observes a state st and chooses an
action at based on a policy ? that may depend on the within-episode time step (at = ?(st , t)). The
next state is sampled from the tth transition kernel st+1 ? P (?|st , at , t) and the initial state from
s1 ? p0 . The agent then receives a reward drawn from a distribution pR (st , at , t) which can depend
on st , at and t with mean r(st , at , t) determined by the reward function. The reward distribution pR
is supported on [0, 1].² The value function from time step t for policy π is defined as
V_t^π(s) := E[ Σ_{i=t}^H r(s_i, a_i, i) | s_t = s ] = Σ_{s'∈S} P(s'|s, π(s,t), t) V_{t+1}^π(s') + r(s, π(s,t), t),
and the optimal value function is denoted by V_t^⋆. In any fixed episode, the quality of a policy π is
evaluated by the total expected reward or return
ρ^π := E[ Σ_{i=1}^H r(s_i, a_i, i) ] = p_0^⊤ V_1^π,
which is compared to the optimal return ρ^⋆ = p_0^⊤ V_1^⋆. For this notation p_0 and the value functions V_t^π,
V_1^⋆ are interpreted as vectors of length S. If an algorithm follows policy π_k in episode k, then the
optimality gap in episode k is Δ_k := ρ^⋆ − ρ^{π_k}, which is bounded by Δ_max = max_π (ρ^⋆ − ρ^π) ≤ H.
We let N_ε := Σ_{k=1}^∞ 1{Δ_k > ε} be the number of ε-errors and R(T) be the regret after T episodes:
R(T) := Σ_{k=1}^T Δ_k. Note that T is the number of episodes and not the total number of time steps (which is HT
after T episodes), and k is an episode index while t usually denotes a time index within an episode.
The Õ notation is similar to the usual O-notation but suppresses additional polylog-factors; that is,
g(x) = Õ(f(x)) iff there is a polynomial p such that g(x) = O(f(x)p(log(x))).
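For concreteness, the backward induction implicit in the value-function definition can be carried out directly; the following minimal Python sketch (array shapes and names are our own, not from the paper) evaluates V_t^π and the return ρ^π = p_0^⊤ V_1^π for a tabular, time-dependent MDP:

import numpy as np

def policy_return(P, r, pi, p0):
    # P:  (H, S, A, S) array with P[t, s, a, s2] = P(s2 | s, a, t)
    # r:  (H, S, A) array of mean rewards r(s, a, t)
    # pi: (H, S) integer array, pi[t, s] = action chosen by the policy at step t
    # p0: (S,) initial state distribution
    H, S, A, _ = P.shape
    V = np.zeros(S)                          # V_{H+1} = 0
    for t in reversed(range(H)):             # backward induction, t = H, ..., 1
        a = pi[t]
        P_t = P[t, np.arange(S), a]          # (S, S): row s is P(. | s, pi(s,t), t)
        V = r[t, np.arange(S), a] + P_t @ V  # Bellman backup for V_t^pi
    return float(p0 @ V)                     # rho^pi = p0^T V_1^pi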
2  Uniform PAC and Existing Learning Frameworks
We briefly summarize the most common performance measures used in the literature.
• (ε, δ)-PAC: There exists a polynomial function F_PAC(S, A, H, 1/ε, log(1/δ)) such that
  P(N_ε > F_PAC(S, A, H, 1/ε, log(1/δ))) ≤ δ.
• Expected Regret: There exists a function F_ER(S, A, H, T) such that E[R(T)] ≤ F_ER(S, A, H, T).
• High Probability Regret: There exists a function F_HPR(S, A, H, T, log(1/δ)) such that
  P(R(T) > F_HPR(S, A, H, T, log(1/δ))) ≤ δ.
• Uniform High Probability Regret: There exists a function F_UHPR(S, A, H, T, log(1/δ)) such that
  P(∃T : R(T) > F_UHPR(S, A, H, T, log(1/δ))) ≤ δ.
²The reward may be allowed to depend on the next state with no further effort in the proofs. The boundedness
assumption could be replaced by the assumption of subgaussian noise with known subgaussian parameter.
In all definitions the function F should be polynomial in all arguments. For notational conciseness
we often omit some of the parameters of F where the context is clear. The different performance
guarantees are widely used (e.g. PAC: [2, 3, 4, 5]; (uniform) high-probability regret: [6, 7, 8];
expected regret: [9, 10, 11, 12]). Due to space constraints, we will not discuss Bayesian-style
performance guarantees that only hold in expectation with respect to a distribution over problem
instances. We will shortly discuss the limitations of the frameworks listed above, but first formally
define the Uniform-PAC criterion.
Definition 1 (Uniform-PAC). An algorithm is Uniform-PAC for δ > 0 if
P(∃ε > 0 : N_ε > F_UPAC(S, A, H, 1/ε, log(1/δ))) ≤ δ,
where F_UPAC is polynomial in all arguments.
All the performance metrics are functions of the distribution of the sequence of errors over the
episodes, (Δ_k)_{k∈N}. Regret bounds are the integral of this sequence up to time T, which is a random
variable. The expected regret is just the expectation of the integral, while the high-probability
regret is a quantile. PAC bounds are the quantile of the size of the superlevel set for a fixed level ε.
Uniform-PAC bounds are like PAC bounds, but hold for all ε simultaneously.
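Since all of these criteria are functionals of the gap sequence, they are easy to make concrete in code; a small sketch (function names are illustrative):

import numpy as np

def n_eps(gaps, eps):
    # N_eps = #{k : Delta_k > eps} for a (finite prefix of a) run
    return int((np.asarray(gaps) > eps).sum())

def regret(gaps, T):
    # R(T) = sum of the first T optimality gaps
    return float(np.asarray(gaps)[:T].sum())

# A Uniform-PAC guarantee controls n_eps(gaps, eps) for every eps at once.
# Sorting the gaps in decreasing order exposes the whole superlevel-set
# profile in one pass: with g = np.sort(gaps)[::-1], N_eps = #{j : g[j] > eps}.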
Limitations of regret. Since regret guarantees only bound the integral of Δ_k over k, they do not
distinguish between making a few severe mistakes and many small mistakes. In fact, since regret
bounds provably grow with the number of episodes T, an algorithm that achieves optimal regret may
still make infinitely many mistakes (of arbitrary quality; see the proof of Theorem 2 below). This is
highly undesirable in high-stakes scenarios. For example, in drug treatment optimization in healthcare,
we would like to distinguish between infrequent severe complications (few large Δ_k) and frequent
minor side effects (many small Δ_k). In fact, even with an optimal regret bound, we could still serve
infinitely many patients with the worst possible treatment.
Limitations of PAC. PAC bounds limit the number of mistakes for a given accuracy level ε, but
are otherwise non-restrictive. That means an algorithm with Δ_k > ε/2 for all k almost surely might
still be (ε, δ)-PAC. Worse, many algorithms designed to be (ε, δ)-PAC actually exhibit this behavior
because they explicitly halt learning once an ε-optimal policy has been found. The less widely used
TCE (total cost of exploration) bounds [13] and KWIK guarantees [14] suffer from the same issue and
for conciseness are not discussed in detail.
Advantages of Uniform-PAC. The new criterion overcomes the limitations of PAC and regret
guarantees by measuring the number of ε-errors at every level simultaneously. By definition, algorithms
that are Uniform-PAC for a δ are (ε, δ)-PAC for all ε > 0. We will soon see that an algorithm
with a non-trivial Uniform-PAC guarantee also has small regret with high probability. Furthermore,
there is no loss in the reduction, so that an algorithm with optimal Uniform-PAC guarantees also
has optimal regret, at least in the episodic RL setting. In this sense Uniform-PAC is the missing
bridge between regret and PAC. Finally, for algorithms based on confidence bounds, Uniform-PAC
guarantees are usually obtained without much additional work by replacing standard concentration
bounds with versions that hold uniformly over episodes (e.g. using the law of the iterated logarithm).
In this sense we think Uniform-PAC is the new "gold standard" of theoretical guarantees for RL
algorithms.
2.1  Relationships between Performance Guarantees
Existing theoretical analyses usually focus exclusively on either the regret or the PAC framework. Besides
occasional heuristic translations, Proposition 4 in [15] and Corollary 3 in [6] are the only results
relating a notion of PAC and regret that we are aware of. Yet the guarantees there are not widely used,³
³The average per-step regret in [6] is superficially a PAC bound, but does not hold over infinitely many
time-steps and exhibits the limitations of a conventional regret bound. The translation to average loss in [15]
comes at additional cost due to the discounted infinite-horizon setting.
[Figure 1 diagram: implication and non-implication arrows among Expected Regret, High-Prob. Regret,
Uniform High-Prob. Regret, PAC, and Uniform PAC.]
Figure 1: Visual summary of the relationships among the different learning frameworks: expected regret
(ER) and PAC preclude each other, while the other crossed arrows represent only a does-not-imply
relationship. Blue arrows represent imply relationships. For details see the theorem statements.
unlike the definitions given above, which we now formally relate to each other. A simplified overview
of the relations discussed below is shown in Figure 1.
Theorem 1. No algorithm can achieve
• a sub-linear expected regret bound for all T, and
• a finite (ε, δ)-PAC bound for a small enough ε
simultaneously for all two-armed bandits with Bernoulli reward distributions. This
implies that such guarantees also cannot be satisfied simultaneously for all episodic MDPs.
A full proof is in Appendix A.1, but the intuition is simple. Suppose a two-armed Bernoulli bandit has
mean rewards 1/2 + ε and 1/2, respectively, and the second arm is chosen at most F < ∞ times with
probability at least 1 − δ. Then one can easily show that in an alternative bandit with mean rewards
1/2 + ε and 1/2 + 2ε there is a non-zero probability that the second arm is played finitely often, and in
this bandit the expected regret will be linear. Therefore, sub-linear expected regret is only possible if
each arm is pulled infinitely often almost surely.
Theorem 2. The following statements hold for performance guarantees in episodic MDPs:
(a) If an algorithm satisfies an (ε, δ)-PAC bound with F_PAC = Θ(1/ε²), then it satisfies for a
specific T = Θ(ε⁻³) an F_HPR = Θ(T^{2/3}) bound. Further, there is an MDP and algorithm that
satisfies the (ε, δ)-PAC bound F_PAC = Θ(1/ε²) on that MDP and has regret R(T) = Ω(T^{2/3})
on that MDP for any T. That means an (ε, δ)-PAC bound with F_PAC = Θ(1/ε²) can only be
converted to a high-probability regret bound with F_HPR = Ω(T^{2/3}).
(b) For any chosen ε, δ > 0 and F_PAC, there is an MDP and algorithm that satisfies the (ε, δ)-PAC
bound F_PAC on that MDP and has regret R(T) = Ω(T) on that MDP. That means an (ε, δ)-PAC
bound cannot be converted to a sub-linear uniform high-probability regret bound.
(c) For any F_UHPR(T, δ) with F_UHPR(T, δ) → ∞ as T → ∞, there is an algorithm that satisfies
that uniform high-probability regret bound on some MDP but makes infinitely many mistakes
for any sufficiently small accuracy level ε > 0 on that MDP. Therefore, a high-probability
regret bound (uniform or not) cannot be converted to a finite (ε, δ)-PAC bound.
(d) For any F_UHPR(T, δ) there is an algorithm that satisfies that uniform high-probability regret
bound on some MDP but suffers expected regret E[R(T)] = Ω(T) on that MDP.
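To see where the T^{2/3} rate in part (a) comes from, the standard conversion argument can be sketched as follows (our own summary with constants suppressed, not the appendix proof):

% On the PAC event (probability at least 1 - delta), at most F_PAC(eps) episodes
% have Delta_k > eps, and each such episode contributes at most Delta_max <= H:
R(T) = \sum_{k=1}^{T} \Delta_k
     \le \varepsilon T + H\, F_{PAC}(\varepsilon)
     = \varepsilon T + \Theta\bigl(H/\varepsilon^{2}\bigr).
% Balancing the two terms with eps = Theta(T^{-1/3}) gives R(T) = O(T^{2/3}),
% but only at the single T for which eps was tuned.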
For most interesting RL problems, including episodic MDPs, the worst-case expected regret grows
with √T. The theorem shows that establishing an optimal high-probability regret bound does not
imply any finite PAC bound. While PAC bounds may be converted to regret bounds, the resulting
bounds are necessarily severely suboptimal, with a rate of T^{2/3}. The next theorem formalises the
claim that Uniform-PAC is stronger than both the PAC and high-probability regret criteria.
Theorem 3. Suppose an algorithm is Uniform-PAC for some δ with F_UPAC = Õ(C_1/ε + C_2/ε²),
where C_1, C_2 > 0 are constant in ε but may depend on other quantities such as S, A, H, log(1/δ).
Then the algorithm
(a) converges to optimal policies with high probability: P(lim_{k→∞} Δ_k = 0) ≥ 1 − δ;
(b) is (ε, δ)-PAC with bound F_PAC = F_UPAC for all ε;
(c) enjoys a high-probability regret at level δ with F_UHPR = Õ( √(C_2 T) + max{C_1, C_2} ).
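The regret conversion in part (c) can be seen via a layer-cake decomposition of the regret over error levels; the following sketch suppresses constants and failure-probability bookkeeping:

% On the Uniform-PAC event, |{k : Delta_k > eps}| <= F_UPAC(eps) for all eps
% simultaneously, so
R(T) = \sum_{k=1}^{T} \Delta_k
     = \int_{0}^{\Delta_{\max}} \bigl|\{k \le T : \Delta_k > \varepsilon\}\bigr|\, d\varepsilon
     \le \int_{0}^{\Delta_{\max}} \min\{T,\, F_{UPAC}(\varepsilon)\}\, d\varepsilon .
% With F_UPAC(eps) = O~(C_1/eps + C_2/eps^2), the C_2 part integrates to
% O~(sqrt(C_2 T)) (the minimum switches at eps ~ sqrt(C_2 / T)) and the C_1
% part to O~(C_1), which matches statement (c).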
Observe that stronger Uniform-PAC bounds lead to stronger regret bounds, and for RL in episodic
MDPs an optimal Uniform-PAC bound implies a uniform regret bound. To our knowledge, there
are no existing approaches with PAC or regret guarantees that are Uniform-PAC. PAC methods such
as MBIE, MoRMax, UCRL-γ, UCFH, Delayed Q-Learning or Median-PAC all depend on advance
knowledge of ε and eventually stop improving their policies. Even when disabling the stopping
condition, these methods are not Uniform-PAC, as their confidence bounds only hold for finitely many
episodes and are eventually violated according to the law of iterated logarithms. Existing algorithms
with uniform high-probability regret bounds such as UCRL2 or UCBVI [16] also do not satisfy
Uniform-PAC bounds, since they use upper confidence bounds of width √(log(T)/n), where T is the
number of observed episodes and n is the number of observations for a specific state and action. The
presence of log(T) causes the algorithm to try each action in each state infinitely often. One might
begin to wonder if Uniform-PAC is too good to be true. Can any algorithm meet the requirements? We
demonstrate in Section 4 that the answer is yes by showing that UBEV has meaningful Uniform-PAC
bounds. A key technique that allows us to prove these bounds is the use of finite-time law-of-iterated-logarithm
confidence bounds, which decrease at rate √((log log n)/n).
3  The UBEV Algorithm
The pseudo-code for the proposed UBEV algorithm is given in Algorithm 1. In each episode it
follows an optimistic policy π_k that is computed by backwards induction using a carefully chosen
confidence interval on the transition probabilities in each state. In line 8 an optimistic estimate of the
Q-function for the current state-action-time triple is computed using the empirical estimates of the
expected next-state value Ṽ_next ∈ R (given that the values at the next time are Ṽ_{t+1}) and the expected
immediate reward r̂, plus confidence bounds (H − t)φ and φ. We show in Lemma D.1 in the appendix
that the policy update in Lines 3-9 finds an optimal solution to max_{P', r', V', π'} E_{s∼p_0}[V'_1(s)] subject
to the constraints that for all s ∈ S, a ∈ A, t ∈ [H],
V'_t(s) = r'(s, π'(s,t), t) + P'(s, π'(s,t), t)^⊤ V'_{t+1}    (Bellman Equation)    (1)
V'_{H+1} = 0,   P'(s, a, t) ∈ Δ_S,   r'(s, a, t) ∈ [0, 1]
|[(P' − P̂_k)(s, a, t)]^⊤ V'_{t+1}| ≤ φ(s, a, t)(H − t)
|r'(s, a, t) − r̂_k(s, a, t)| ≤ φ(s, a, t)    (2)
where (P' − P̂_k)(s, a, t) is short for P'(s, a, t) − P̂_k(s, a, t) = P'(·|s, a, t) − P̂_k(·|s, a, t) and
φ(s, a, t) = √( (2 ln ln max{e, n(s, a, t)} + ln(18SAH/δ)) / n(s, a, t) ) = O( √( ln(SAH ln(n(s, a, t))/δ) / n(s, a, t) ) )
is the width of a confidence bound with e = exp(1); P̂_k(s'|s, a, t) = m(s', s, a, t)/n(s, a, t) are the empirical
transition probabilities and r̂_k(s, a, t) = l(s, a, t)/n(s, a, t) the empirical immediate rewards (both
at the beginning of the kth episode).
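In code this width is a one-liner; the following sketch (the function name is ours) mirrors the display above:

import math

def ubev_width(n, S, A, H, delta):
    # phi(s,a,t) = sqrt((2 ln ln max{e, n} + ln(18*S*A*H/delta)) / n)
    if n == 0:
        return float('inf')   # no observations yet: vacuous width
    lil = 2.0 * math.log(math.log(max(math.e, n)))
    return math.sqrt((lil + math.log(18 * S * A * H / delta)) / n)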
Our algorithm is conceptually similar to other algorithms based
on the optimism principle, such as MBIE [5], UCFH [3], UCRL2 [6] or UCRL-γ [2], but there are
several key differences:
• Instead of using confidence intervals over the transition kernel by itself, we incorporate the
value function directly into the concentration analysis. Ultimately this saves a factor of S in
the sample complexity, but the price is a more difficult analysis. Previously MoRMax [17]
also used the idea of directly bounding the transition and value function, but in a very different
algorithm that required discarding data and had a less tight bound. A similar technique has
been used by Azar et al. [16].
Algorithm 1: UBEV (Upper Bounding the Expected Next-State Value)
Input: failure tolerance δ ∈ (0, 1]
1:  n(s, a, t) = l(s, a, t) = m(s', s, a, t) = 0;  Ṽ_{H+1}(s') := 0  for all s, s' ∈ S, a ∈ A, t ∈ [H]
2:  for k = 1, 2, 3, ... do
      /* Optimistic planning */
3:    for t = H to 1 do
4:      for s ∈ S do
5:        for a ∈ A do
6:          φ := √( (2 ln ln(max{e, n(s,a,t)}) + ln(18SAH/δ)) / n(s,a,t) )    // confidence bound
7:          r̂ := l(s,a,t) / n(s,a,t);   Ṽ_next := m(·,s,a,t)^⊤ Ṽ_{t+1} / n(s,a,t)    // empirical estimates
8:          Q(a) := min{1, r̂ + φ} + min{ max_{s'} Ṽ_{t+1}(s'), Ṽ_next + (H − t)φ }
9:        π_k(s, t) := argmax_a Q(a);   Ṽ_t(s) := Q(π_k(s, t))
      /* Execute policy for one episode */
10:   s_1 ∼ p_0
11:   for t = 1 to H do
12:     a_t := π_k(s_t, t);   r_t ∼ p_R(s_t, a_t, t);   s_{t+1} ∼ P(s_t, a_t, t)
13:     n(s_t, a_t, t)++;   m(s_{t+1}, s_t, a_t, t)++;   l(s_t, a_t, t) += r_t    // update statistics

[Figure 2 shows three panels of expected return (y-axis) against the number of episodes from 10³ to 10⁷
(x-axis, log scale) for S = 5, 50 and 200; the compared methods are MoRMax, UBEV, UCRL2, MBIE,
MedianPAC, DelayedQL, OIM, UCFH, UCBVI_1 and UCBVI_2, with the optimal return marked.]
Figure 2: Empirical comparison of optimism-based algorithms with frequentist regret or PAC bounds
on a randomly generated MDP with 3 actions, time horizon 10 and S = 5, 50, 200 states. All
algorithms are run with parameters that satisfy their bound requirements. A detailed description of
the experimental setup including a link to the source code can be found in Appendix B.
• Many algorithms update their policy less and less frequently (usually when the number of
samples doubles), and only finitely often in total. Instead, we update the policy after every
episode, which means that UBEV immediately leverages new observations.
• Confidence bounds in existing algorithms that keep improving the policy (e.g. Jaksch et al.
[6], Azar et al. [16]) scale at a rate √(log(k)/n), where k is the number of episodes played so far
and n is the number of times the specific (s, a, t) has been observed. As the results of a brief
empirical comparison in Figure 2 indicate, this leads to slow learning (compare UCBVI_1
and UBEV's performance, which differ essentially only by their use of different rate bounds).
Instead, the width of UBEV's confidence bounds scales at rate √(ln ln(max{e, n})/n) ∝
√((log log n)/n), which is the best achievable rate and results in significantly faster learning.
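To make the planning loop in Lines 3-9 of Algorithm 1 concrete, the following Python sketch mirrors it for count arrays indexed by (s, a, t); the handling of unvisited triples (treated as fully optimistic) is our convention rather than a detail fixed by the pseudo-code:

import numpy as np

def ubev_plan(n, m, l, S, A, H, delta):
    # n[s, a, t], l[s, a, t]: visit counts and accumulated rewards, t = 1..H
    # m[s2, s, a, t]: transition counts into successor state s2
    V = np.zeros((H + 2, S))                      # V[H+1] = 0
    pi = np.zeros((H + 1, S), dtype=int)
    for t in range(H, 0, -1):
        for s in range(S):
            Q = np.empty(A)
            for a in range(A):
                cnt = n[s, a, t]
                if cnt == 0:                      # unvisited: fully optimistic
                    Q[a] = 1.0 + V[t + 1].max()
                    continue
                lil = 2.0 * np.log(np.log(max(np.e, cnt)))
                phi = np.sqrt((lil + np.log(18 * S * A * H / delta)) / cnt)  # line 6
                r_hat = l[s, a, t] / cnt                                     # line 7
                v_next = m[:, s, a, t] @ V[t + 1] / cnt    # empirical E[V_{t+1}]
                Q[a] = (min(1.0, r_hat + phi)                                # line 8
                        + min(V[t + 1].max(), v_next + (H - t) * phi))
            pi[t, s] = int(Q.argmax())                                       # line 9
            V[t, s] = Q[pi[t, s]]
    return pi, V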
4  Uniform PAC Analysis
We now discuss the Uniform-PAC analysis of UBEV, which results in the following Uniform-PAC
and regret guarantee.
Theorem 4. Let π_k be the policy of UBEV in the kth episode. Then with probability at least 1 − δ,
for all ε > 0 jointly, the number of episodes k where the expected return from the start state is not
ε-optimal (that is, Δ_k > ε) is at most
O( (SAH⁴/ε²) min{ 1 + εS²A, S } polylog(A, S, H, 1/ε, 1/δ) ).
Therefore, with probability at least 1 − δ, UBEV converges to optimal policies and for all episodes T
has regret
R(T) = O( H²(√(SAT) + S³A²) polylog(S, A, H, T) ).
Here polylog(x, ...) is a function that can be bounded by a polynomial of logarithms, that is,
∃k, C : polylog(x, ...) ≤ ln(x, ...)^k + C. In Appendix C we provide a lower bound on the sample complexity
which shows that if ε < 1/(S²A), the Uniform-PAC bound is tight up to log-factors and a factor of H.
To our knowledge, UBEV is the first algorithm with both near-tight (up to H factors) high-probability
regret and (ε, δ)-PAC bounds, as well as the first algorithm with any nontrivial Uniform-PAC bound.
Using Theorem 3, the convergence and regret bounds follow immediately from the Uniform-PAC
bound. After a discussion of the different confidence bounds allowing us to prove Uniform-PAC
bounds, we provide a short proof sketch of the Uniform-PAC bound.
4.1  Enabling Uniform PAC with Law-of-Iterated-Logarithm Confidence Bounds
To have a PAC bound for all ε jointly, it is critical that UBEV continually make use of new experience.
If UBEV stopped leveraging new observations after some fixed number, it would not be able to
distinguish with high probability which of the remaining possible MDPs do or do not have
policies that are sufficiently optimal in the other MDPs. The algorithm therefore could
potentially follow a policy that is not at least ε-optimal for infinitely many episodes, for a sufficiently
small ε. To enable UBEV to incorporate all new observations, the confidence bounds in UBEV must
hold for an infinite number of updates. We therefore require a proof that the total probability of all
possible failure events (of the high-confidence bounds not holding) is bounded by δ, in order to obtain
high-probability guarantees. In contrast to prior (ε, δ)-PAC proofs that only consider a finite number
of failure events (which is enabled by requiring an RL algorithm to stop using additional data), we
must bound the probability of an infinite set of possible failure events.
Some choices of confidence bounds will hold uniformly across all sample sizes but are not sufficiently
tight for Uniform-PAC results. For example, the recent work by Azar et al. [16] uses confidence
intervals that shrink at a rate of √((ln T)/n), where T is the number of episodes, and n is the number of
samples of an (s, a) pair at a particular time step. This confidence interval will hold for all episodes,
but these intervals do not shrink sufficiently quickly and can even increase. One simple approach for
constructing confidence intervals that is sufficient for Uniform-PAC guarantees is to combine bounds
for a fixed number of samples with a union bound allocating failure probability δ/n² to the failure case
with n samples. This results in confidence intervals that shrink at rate √((ln n)/n). Interestingly, we
know of no algorithms that do so in our setting.
We follow a similarly simple but much stronger approach of using law-of-iterated-logarithm (LIL)
bounds that shrink at the better rate of √((ln ln n)/n). Such bounds have sparked recent interest in
sequential decision making [18, 19, 20, 21, 22], but to the best of our knowledge we are the first to
leverage them for RL. We prove several general LIL bounds in Appendix F and explain how we use
these results in our analysis in Appendix E.2. These LIL bounds are both sufficient to ensure uniform
PAC bounds, and much tighter (and therefore will lead to much better performance) than √((ln T)/n)
bounds. Indeed, LIL bounds have the tightest possible rate dependence on the number of samples n for a
bound that holds for all timesteps (though they are not tight with respect to constants).
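The practical gap between these rates is easy to see numerically; a small sketch with constants dropped (function names are ours):

import math

def width_fixed_T(n, T): return math.sqrt(math.log(T) / n)            # ln(T)/n style
def width_union(n):      return math.sqrt(math.log(max(n, 2)) / n)    # delta/n^2 union bound
def width_lil(n):        return math.sqrt(math.log(math.log(max(n, 3))) / n)  # LIL

for n in (10, 10**3, 10**6):
    print(n, width_fixed_T(n, T=10**6), width_union(n), width_lil(n))
# The ln(T)/n width stays large for small n (and keeps growing as T increases),
# whereas the LIL numerator ln ln n grows so slowly that the width is nearly 1/sqrt(n).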
4.2  Proof Sketch
We now provide a short overview of our Uniform-PAC bound in Theorem 4. It follows the typical
scheme for optimism-based algorithms: we show that in each episode UBEV follows a policy that is
optimal with respect to the MDP M̃_k that yields the highest return in a set of MDPs M_k given by the
constraints in Eqs. (1)-(2) (Lemma D.1 in the appendix). We then define a failure event F (more
details below) such that on the complement F^C, the true MDP is in M_k for all k.
Under the event that the true MDP is in the desired set, we have V_1^⋆ ≤ Ṽ_1^{π_k}, i.e., the value Ṽ_1^{π_k} of π_k
in the MDP M̃_k is higher than the optimal value function of the true MDP M (Lemma E.16). Therefore,
the optimality gap is bounded by Δ_k ≤ p_0^⊤(Ṽ_1^{π_k} − V_1^{π_k}). The right-hand side of this expression is then
decomposed via a standard identity (Lemma E.15) as
Σ_{t=1}^H Σ_{(s,a)∈S×A} w_{tk}(s, a) ((P̃_k − P)(s, a, t))^⊤ Ṽ_{t+1}^{π_k} + Σ_{t=1}^H Σ_{(s,a)∈S×A} w_{tk}(s, a) (r̃_k(s, a, t) − r(s, a, t)),
where w_{tk}(s, a) is the probability that, when following policy π_k in the true MDP, we encounter
s_t = s and a_t = a. The quantities P̃_k, r̃_k are the model parameters of the optimistic MDP M̃_k. For
the sake of conciseness, we ignore the second term above in the following; it can be bounded by
ε/3 in the same way as the first. We further decompose the first term as
Σ_{t∈[H]} Σ_{(s,a)∈L^c_{tk}} w_{tk}(s, a) ((P̃_k − P)(s, a, t))^⊤ Ṽ_{t+1}^{π_k}    (3)
+ Σ_{t∈[H]} Σ_{(s,a)∈L_{tk}} w_{tk}(s, a) ((P̃_k − P̂_k)(s, a, t))^⊤ Ṽ_{t+1}^{π_k} + Σ_{t∈[H]} Σ_{(s,a)∈L_{tk}} w_{tk}(s, a) ((P̂_k − P)(s, a, t))^⊤ Ṽ_{t+1}^{π_k},    (4)
where L_{tk} = { (s, a) ∈ S×A : w_{tk}(s, a) ≥ w_min = ε/(3HS²) } is the set of state-action pairs with
non-negligible visitation probability. The value of w_min is chosen so that (3) is bounded by ε/3.
Since Ṽ^{π_k} is the optimal solution of the optimization problem in Eq. (1), we can bound
|((P̃_k − P̂_k)(s, a, t))^⊤ Ṽ_{t+1}^{π_k}| ≤ φ_k(s, a, t)(H − t) = O( H √( ln(ln(n_{tk}(s, a))/δ) / n_{tk}(s, a) ) ),    (5)
where φ_k(s, a, t) is the value of φ(s, a, t) and n_{tk}(s, a) the value of n(s, a, t) right before episode k.
Further we decompose
|((P̂_k − P)(s, a, t))^⊤ Ṽ_{t+1}^{π_k}| ≤ ‖(P̂_k − P)(s, a, t)‖_1 ‖Ṽ_{t+1}^{π_k}‖_∞ ≤ Õ( √( SH² ln ln(n_{tk}(s, a)) / n_{tk}(s, a) ) ),    (6)
where the second inequality follows from a standard concentration bound used in the definition of the
failure event F (see below). Substituting this and (5) into (4) leads to
(4) ≤ Õ( Σ_{t=1}^H Σ_{(s,a)∈L_{tk}} w_{tk}(s, a) √( SH² ln(ln(n_{tk}(s, a))/δ) / n_{tk}(s, a) ) ).    (7)
On F^C it also holds that n_{tk}(s, a) ≥ ½ Σ_{i<k} w_{ti}(s, a) − ln(9SAH/δ), and so on nice episodes
(episodes where each (s, a) ∈ L_{tk} with significant probability w_{tk}(s, a) also had significant probability in the past,
i.e., Σ_{i<k} w_{ti}(s, a) ≥ 4 ln(9SAH/δ)) it holds that n_{tk}(s, a) ≥ ¼ Σ_{i<k} w_{ti}(s, a). Substituting this into
(7), we can use a careful pigeon-hole argument, laid out in Lemma E.3 in the appendix, to show
that this term is bounded by ε/3 on all but O(AS²H⁴/ε² polylog(A, S, H, 1/ε, 1/δ)) nice episodes.
Again using a pigeon-hole argument, one can show that all but at most O(S²AH³/ε ln(SAH/δ))
episodes are nice. Combining both bounds, we get that on F^C the optimality gap Δ_k is at most ε
except for at most O(AS²H⁴/ε² polylog(A, S, H, 1/ε, 1/δ)) episodes.
We decompose the failure event into multiple components. In addition to the events F_k^N that an
(s, a, t) triple has been observed few times compared to its visitation probabilities in the past, i.e.,
n_{tk}(s, a) < ½ Σ_{i<k} w_{ti}(s, a) − ln(9SAH/δ), as well as a conditional version of this statement, the
failure event F contains events where empirical estimates of the immediate rewards, the expected
optimal value of the successor states and the individual transition probabilities are far from their true
expectations. For the full definition of F see Appendix E.2. F also contains the event F^{L1} used in
Eq. (6), defined as
{ ∃k, s, a, t : ‖P̂_k(s, a, t) − P(s, a, t)‖_1 ≥ √( (4/n_{tk}(s, a)) ( 2 llnp(n_{tk}(s, a)) + ln(18SAH(2^S − 2)/δ) ) ) },
where llnp(x) := ln ln max{e, x}. It states that the L1-distance of the empirical transition probabilities
to the true probabilities for some (s, a, t) in some episode k is too large, and we show that P(F^{L1}) ≤ δ/9 using a uniform
version of the popular bound by Weissman et al. [23], which we prove in Appendix F. We show in a
similar manner that the other events in F have small probability uniformly for all episodes k, so that
P(F) ≤ δ. Together this yields the Uniform-PAC bound in Thm. 4 using the second term in the min.
With a more refined analysis that avoids the use of Hölder's inequality in (6) and a stronger notion of
nice episodes called friendly episodes, we obtain the bound with the first term in the min. However,
since a similar analysis has been recently released [16], we defer this discussion to the appendix.
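For reference, the radius of this all-episodes L1 ball is straightforward to compute; a sketch following the display above (constants as shown there, function names ours):

import math

def llnp(x):
    # ln ln max{e, x}, which is zero for x <= e
    return math.log(math.log(max(math.e, x)))

def l1_radius(n, S, A, H, delta):
    # radius of the uniform Weissman-style L1 confidence set in F^L1
    if n == 0:
        return float('inf')
    log_term = 2 * llnp(n) + math.log(18 * S * A * H * (2 ** S - 2) / delta)
    return math.sqrt(4 * log_term / n)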
4.3  Discussion of UBEV Bound
The (Uniform-)PAC bound for UBEV in Theorem 4 is never worse than Õ(S²AH⁴/ε²), which
improves on the similar MBIE algorithm by a factor of H² (after adapting the discounted setting for
which MBIE was analysed to our setting). For ε < 1/(S²A) our bound has a linear dependence on
the size of the state-space and depends on H⁴, which is a tighter dependence on the horizon than
MoRMax's Õ(SAH⁶/ε²), the best sample-complexity bound with linear dependency on S so far.
Comparing UBEV's regret bound to the ones of UCRL2 [6] and REGAL [24] requires care because
(a) we measure the regret over entire episodes and (b) our transition dynamics are time-dependent
within each episode, which effectively increases the state-space by a factor of H. Converting the
bounds for UCRL2/REGAL to our setting yields a regret bound of order SH²√(AHT). Here, the
diameter is H, the state space increases by H due to the time-dependent transition dynamics, and an
additional √H is gained by stating the regret in terms of episodes T instead of time steps. Hence,
UBEV's bounds are better by a factor of √(SH). Our bound matches the recent regret bound for
episodic RL by Azar et al. [16] in the S, A and T terms but not in H. Azar et al. [16] have regret
bounds that are optimal in H, but their algorithm is not Uniform-PAC, due to the characteristics we
outlined in Section 2.
5  Conclusion
The Uniform-PAC framework strengthens and unifies the PAC and high-probability regret performance
criteria for reinforcement learning in episodic MDPs. The newly proposed algorithm, UBEV, is
Uniform-PAC, which as a side effect means it is the first algorithm that is both PAC and has sublinear
(and nearly optimal) regret. Besides this, the use of law-of-the-iterated-logarithm confidence
bounds in RL algorithms for MDPs provides a practical and theoretical boost at no cost in terms of
computational or implementation complexity.
This work opens up several immediate research questions for future work. The definition of
Uniform-PAC and the relations to other PAC and regret notions directly apply to multi-armed bandits
and contextual bandits as special cases of episodic RL, but not to infinite horizon reinforcement
learning. An extension to these non-episodic RL settings is highly desirable. Similarly, a version
of the UBEV algorithm for infinite-horizon RL with linear state-space sample complexity would
be of interest. More broadly, if theory is ever to say something useful about practical algorithms
for large-scale reinforcement learning, then it will have to deal with the unrealizable function
approximation setup (unlike the tabular function representation setting considered here), which is a
major long-standing open challenge.
Acknowledgements. We appreciate the support of an NSF CAREER award and a gift from Yahoo.
References
[1] Vincent François-Lavet, Raphaël Fonteneau, and Damien Ernst. How to discount deep reinforcement learning: Towards new dynamic strategies. In NIPS 2015 Workshop on Deep Reinforcement Learning, 2015.
[2] Tor Lattimore and Marcus Hutter. Near-optimal PAC bounds for discounted MDPs. Theoretical Computer Science, volume 558, 2014.
[3] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Neural Information Processing Systems, 2015.
[4] Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E. Schapire. Contextual decision processes with low Bellman rank are PAC-learnable. In International Conference on Machine Learning, 2017.
[5] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement learning in finite MDPs: PAC analysis. Journal of Machine Learning Research, 10:2413-2444, 2009.
[6] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.
[7] Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In Journal of Machine Learning Research, volume 32, 2014.
[8] Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, volume 58, 2012.
[9] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902, 2009.
[10] Peter Auer. Using upper confidence bounds for online learning. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, pages 270-293, 2000.
[11] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv, cs.LG, 2012.
[12] Peter Auer and Ronald Ortner. Online regret bounds for a new reinforcement learning algorithm. In Proceedings of the 1st Austrian Cognitive Vision Workshop, 2005.
[13] Jason Pazis and Ronald Parr. Efficient PAC-optimal exploration in concurrent, continuous state MDPs with delayed updates. In AAAI Conference on Artificial Intelligence, 2016.
[14] Lihong Li, Michael L. Littman, Thomas J. Walsh, and Alexander L. Strehl. Knows what it knows: A framework for self-aware learning. Machine Learning, 82(3):399-443, 2011.
[15] Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
[16] Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, 2017.
[17] István Szita and Csaba Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In International Conference on Machine Learning, 2010.
[18] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil' UCB: An optimal exploration algorithm for multi-armed bandits. 2013.
[19] Akshay Balsubramani and Aaditya Ramdas. Sequential nonparametric testing with the law of the iterated logarithm. In Uncertainty in Artificial Intelligence, 2016.
[20] Aurélien Garivier, Emilie Kaufmann, and Tor Lattimore. On explore-then-commit strategies. In Advances in Neural Information Processing Systems, 2016.
[21] Pascal Massart. Concentration inequalities and model selection. Lecture Notes in Mathematics, 1896, 2007.
[22] Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Conference on Learning Theory, 2011.
[23] Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdú, and Marcelo J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Technical report, 2003. URL http://www.hpl.hp.com/techreports/2003/HPL-2003-97R1.pdf?origin=publicationDetail.
[24] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35-42, 2009.
[25] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. ISBN 978-0-19-953525-5.
[26] Rick Durrett. Probability: Theory and Examples. Cambridge University Press, 4th edition, 2010. ISBN 978-0-521-76539-8.
6,804 | 7,155 | Gradients of Generative Models for Improved
Discriminative Analysis of Tandem Mass Spectra
John T. Halloran
Department of Public Health Sciences
University of California, Davis
[email protected]
David M. Rocke
Department of Public Health Sciences
University of California, Davis
[email protected]
Abstract
Tandem mass spectrometry (MS/MS) is a high-throughput technology used to
identify the proteins in a complex biological sample, such as a drop of blood. A
collection of spectra is generated at the output of the process, each spectrum of
which is representative of a peptide (protein subsequence) present in the original
complex sample. In this work, we leverage the log-likelihood gradients of generative models to improve the identification of such spectra. In particular, we show
that the gradient of a recently proposed dynamic Bayesian network (DBN) [7] may
be naturally employed by a kernel-based discriminative classifier. The resulting
Fisher kernel substantially improves upon recent attempts to combine generative
and discriminative models for post-processing analysis, outperforming all other
methods on the evaluated datasets. We extend the improved accuracy offered by
the Fisher kernel framework to other search algorithms by introducing Theseus,
a DBN representing a large number of widely used MS/MS scoring functions.
Furthermore, with gradient ascent and max-product inference at hand, we use
Theseus to learn model parameters without any supervision.
1  Introduction
In the past two decades, tandem mass spectrometry (MS/MS) has become an indispensable tool
for identifying the proteins present in a complex biological sample. At the output of a typical
MS/MS experiment, a collection of spectra is produced on the order of tens-to-hundreds of thousands,
each of which is representative of a protein subsequence, called a peptide, present in the original
complex sample. The main challenge in MS/MS is accurately identifying the peptides responsible for
generating each output spectrum.
The most accurate identification methods search a database of peptides to first score peptides, then
rank and return the top-ranking such peptide. The pair consisting of a scored candidate peptide and
observed spectrum is typically referred to as a peptide-spectrum match (PSM). However, PSM scores
returned by such database-search methods are often difficult to compare across different spectra (i.e.,
they are poorly calibrated), limiting the number of spectra identified per search [15]. To combat such
poor calibration, post-processors are typically used to recalibrate PSM scores [13, 19, 20].
Recent work has attempted to exploit generative scoring functions for use with discriminative
classifiers to better recalibrate PSM scores; by parsing a DBN's Viterbi path (i.e., the most probable
sequence of random variables), heuristically derived features were shown to improve discriminative
recalibration using support vector machines (SVMs). Rather than relying on heuristics, we look
towards the more principled approach of a Fisher kernel [11]. Fisher kernels allow one to exploit the
sequential-modeling strengths of generative models such as DBNs, which offer vast design flexibility
for representing data sequences of varying length, for use with discriminative classifiers such as
SVMs, which offer superior accuracy but often require feature vectors of fixed length. Although
[Figure 1 plot: normalized intensity versus m/z (roughly 200-1500 Th); b-ion peaks in red, y-ion peaks
in blue, and insertions in gray.]
Figure 1: Example tandem mass spectrum with precursor charge c(s) = 2 and generating peptide x =
LWEPLLDVLVQTK. Plotted in red and blue are, respectively, b- and y-ion peaks (discussed in Section 2.1.1),
while spurious observed peaks (called insertions) are colored gray. Note y1, b1, b4, and b12 are absent fragment
ions (called deletions).
the number of variables in a DBN may vary given different observed sequences, a Fisher kernel
utilizes the fixed-length gradient of the log-likelihood (i.e., the Fisher score) in the feature-space of
a kernel-based classifier. Deriving the Fisher scores of a DBN for Rapid Identification of Peptides
(DRIP) [7], we show that the DRIP Fisher kernel greatly improves upon the previous heuristic
approach; at a strict FDR of 1% for the presented datasets, the heuristically derived DRIP features
improve accuracy over the base feature set by an average 6.1%, while the DRIP Fisher kernel raises
this average improvement to 11.7% (Table 2 in [9]), thus nearly doubling the total accuracy of DRIP
post-processing.
Motivated by improvements offered by the DRIP Fisher kernel, we look to extend this to other models
by defining a generative model representative of the large class of existing scoring functions [2,
5, 6, 16, 10, 22, 17]. In particular, we define a DBN (called Theseus¹) which, given an observed
spectrum, evaluates the universe of all possible PSM scores. In this work, we use Theseus to model
PSM score distributions with respect to the widely used XCorr scoring function [5]. The resulting
Fisher kernel once again improves discriminative post-processing accuracy. Furthermore, with
the generative model in place, we explore inferring parameters of the modeled scoring function
using max-product inference and gradient-based learning. The resulting coordinate ascent learning
algorithm outperforms standard maximum-likelihood learning. Most importantly, this overall learning
algorithm is unsupervised and, to the authors' knowledge, is the first MS/MS scoring function
parameter estimation procedure not to rely on any supervision. We note that this overall training
procedure may be adapted by the many MS/MS search algorithms whose scoring functions lie in the
class modeled by Theseus.
The paper is organized as follows. We discuss background information in Section 2, including the
process by which MS/MS spectra are produced, the means by which spectra are identified, and related
previous work. In Section 3, we extensively discuss the log-likelihood of the DRIP model and derive
its Fisher scores. In Section 4, we introduce Theseus and derive gradients of its log-likelihood. We
then discuss gradient-based unsupervised learning of Theseus parameters and present an efficient,
monotonically convergent coordinate ascent algorithm. Finally, in Section 5, we show that DRIP
and Theseus Fisher kernels substantially improve spectrum identification accuracy and that Theseus'
coordinate ascent learning algorithm provides accurate unsupervised parameter estimation. For
convenience, a table of the notation used in this paper may be found in [9].
2
Background
A typical tandem mass spectrometry experiment begins by cleaving proteins into peptides using a
digesting enzyme. The resulting peptides are then separated via liquid chromatography and subjected
to two rounds of mass spectrometry. The first round measures the mass and charge of the intact
peptide, called the precursor mass and precursor charge, respectively. Peptides are then fragmented
and the fragments undergo a second round of mass spectrometry, the output of which is an observed
spectrum indicative of the fragmented peptide. The x-axis of this observed spectrum denotes mass-to-charge (m/z), measured in thomsons (Th), and the y-axis is a unitless intensity measure, roughly
proportional to the abundance of a single fragment ion with a given m/z value. A sample such
observed spectrum is illustrated in Figure 1.
¹ In honor of Shannon's magnetic mouse, which could learn to traverse a small maze.
2.1  MS/MS Database Search
Let s be an observed spectrum with precursor mass m(s) and precursor charge c(s). In order to
identify s, we search a database of peptides, as follows. Let P be the set of all possible peptide
sequences. Each peptide x ∈ P is a string x = x₁x₂...x_n comprised of characters, called amino
acids. Given a peptide database D ⊆ P, we wish to find the peptide x ∈ D responsible for generating
s. Using the precursor mass and charge, the set of peptides to be scored is constrained by setting
a mass tolerance threshold, w, such that we score the set of candidate peptides
D(s, D, w) = { x : x ∈ D, | m(x) − m(s)/c(s) | ≤ w },
where m(x) denotes the mass of peptide x. Note that we've
overloaded m(·) to return either a peptide's or an observed spectrum's precursor mass; we similarly
overload c(·). Given s, and denoting an arbitrary scoring function as ψ(x, s), the output of a search
algorithm is thus x* = argmax_{x∈D(s,D,w)} ψ(x, s), the top-scoring PSM.
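A candidate-filtering step of this kind is only a few lines of code; a minimal sketch with illustrative names:

def candidates(D, m_s, c_s, w, mass):
    # D(s, D, w): peptides x in D with |m(x) - m(s)/c(s)| <= w;
    # `mass` maps a peptide string to its mass m(x)
    return [x for x in D if abs(mass(x) - m_s / c_s) <= w]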
2.1.1  Theoretical Spectra
In order to score a candidate peptide x, fragment ions corresponding to prefix masses (called b-ions)
and suffix masses (called y-ions) are collected into a theoretical spectrum. The annotated b- and
y-ions of the generating peptide for an observed spectrum are illustrated in Figure 1. Varying based
on the value of c(s), the kth respective b- and y-ion pair of x are
b(x, c_b, k) = ( Σ_{i=1}^k m(x_i) + c_b ) / c_b,   y(x, c_y, k) = ( Σ_{i=n−k}^n m(x_i) + 18 + c_y ) / c_y,
where c_b is the charge of the b-ion and c_y is the charge of the y-ion. For c(s) = 1, we have
c_b = c_y = 1, since these are the only possible, detectable fragment ions. For higher observed charge
states c(s) ≥ 2, it is unlikely for a single fragment ion to consume the entire charge, so that we have
c_b + c_y = c(s), where c_b, c_y ∈ [1, c(s) − 1]. The b-ion offset corresponds to the mass of a c_b-charged
hydrogen atom, while the y-ion offset corresponds to the mass of a water molecule plus a c_y-charged
hydrogen atom.
Further fragment ions may occur, each corresponding to the loss of a molecular group off a b- or
y-ion. Called neutral losses, these correspond to a loss of either water, ammonia, or carbon monoxide.
These fragment ions are commonly collected into a vector v, whose elements are weighted based on
their corresponding fragment ion. For instance, XCorr [5] assigns all b- and y-ions a weight of 50
and all neutral losses a weight of 10.
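The fragment-ion formulas translate directly to code; in the sketch below the residue masses are approximate monoisotopic values for a small subset of amino acids (illustrative only, and the y-ion summation convention may differ by one residue from the display above):

# Approximate monoisotopic residue masses (Da); illustrative subset only.
RESIDUE_MASS = {'G': 57.02146, 'A': 71.03711, 'V': 99.06841, 'P': 97.05276,
                'T': 101.04768, 'L': 113.08406, 'D': 115.02694, 'Q': 128.05858,
                'K': 128.09496, 'E': 129.04259, 'W': 186.07931}

def b_ion(x, c_b, k):
    # b(x, c_b, k) = (sum of the first k residue masses + c_b) / c_b
    return (sum(RESIDUE_MASS[aa] for aa in x[:k]) + c_b) / c_b

def y_ion(x, c_y, k):
    # y(x, c_y, k) = (sum of the last k residue masses + 18 + c_y) / c_y
    return (sum(RESIDUE_MASS[aa] for aa in x[-k:]) + 18 + c_y) / c_y

# e.g. the singly-charged b2 and y2 ions of the peptide in Figure 1:
x = 'LWEPLLDVLVQTK'
b2, y2 = b_ion(x, 1, 2), y_ion(x, 1, 2)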
2.2  Previous Work
Many scoring functions have been proposed for use in search algorithms. They range from simple
dot-product scoring functions (X!Tandem [2], Morpheus [22]), to cross-correlation based scoring
functions (XCorr [5]), to exact p-values over linear scoring functions calculated using dynamic
programming (MS-GF+ [16] and XCorr p-values [10]). The recently introduced DRIP [7] scores
candidate peptides without quantization of m/z measurements and allows learning the expected
locations of theoretical peaks given high quality, labeled training data. In order to avoid quantization of
the m/z axis, DRIP employs a dynamic alignment strategy wherein two types of prevalent phenomena
are explicitly modeled: spurious observed peaks, called insertions, and absent theoretical peaks,
called deletions (examples of both are displayed in Figure 1). DRIP then uses max-product inference
to calculate the most probable sequences of insertions and deletions to score candidate peptides, and
was shown to achieve state-of-the-art performance on a variety of datasets.
In practice, scoring functions are often poorly calibrated (i.e., PSM scores from different spectra are
difficult to compare to one another), leading to potentially identified spectra left on the table during
statistical analysis. In order to properly recalibrate such PSM scores, several semi-supervised postprocessing methods have been proposed [13, 19, 20]. The most popular such method is Percolator [13],
which, given the output target and decoy PSMs (discussed in Section 5) of a scoring algorithm and
features detailing each PSM, utilizes an SVM to learn a discriminative classifier between target PSMs
and decoy PSMs. PSM scores are then recalibrated using the learned decision boundary.
Recent work has attempted to leverage the generative nature of the DRIP model for discriminative
use by Percolator [8]. As earlier mentioned, the output of DRIP is the most probable sequence of
insertions and deletions, i.e., the Viterbi path. However, DRIP's observations are the sequences of
observed spectrum m/z and intensity values, so that the lengths of PSMs' Viterbi paths vary depending
on the number of observed spectrum peaks. In order to exploit DRIP's output in the feature-space of
a discriminative classifier, PSM Viterbi paths were heuristically mapped to a fixed-length vector of
features. The resulting heuristic features were shown to dramatically improve Percolator's ability to
discriminate between PSMs.
2.3  Fisher Kernels
Using generative models to extract features for discriminative classifiers has been used to great
effect in many problem domains by using Fisher kernels [11, 12, 4]. Assuming a generative model
with a set of parameters θ and likelihood p(O|θ) = Σ_H p(O, H|θ), where O is a sequence of
observations and H is the set of hidden variables, the Fisher score is then U_O = ∇_θ log p(O|θ).
Given observations O_i and O_j of differing length (and, thus, different underlying graphs in the case
of dynamic graphical models), a kernel-based classifier over these instances is trained using U_{O_i}
and U_{O_j} in the feature-space. Thus, a similarity measure is learned in the gradient space, under the
intuition that objects which induce similar likelihoods will induce similar gradients.
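Concretely, once the Fisher scores are computed, the kernel itself reduces to an ordinary vector kernel between fixed-length gradients; a minimal sketch:

import numpy as np

def fisher_kernel(U_i, U_j):
    # linear Fisher kernel between Fisher scores U_O = grad_theta log p(O | theta);
    # observations of different lengths map to gradients of the same dimension
    return float(np.dot(U_i, U_j))

# In practice the scores are typically standardized feature-wise over the
# training set before being passed to an SVM.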
3  DRIP Fisher Scores
[Figure 2 diagram: the DRIP graphical model, with theoretical-spectrum variables (traversal) and
observed-spectrum variables (scoring) grouped in each frame.]
Figure 2: Graph of DRIP, the frames (i.e., time instances) of which correspond to observed spectrum peaks.
Shaded nodes represent observed variables and unshaded nodes represent hidden variables. Given an observed
spectrum, the middle frame (the chunk) dynamically expands to represent the second observed peak to the
penultimate observed peak.
We first define, in detail, DRIP's log-likelihood, followed by the Fisher score derivation for DRIP's
learned parameters. For discussion of the DRIP model outside the scope of this work, readers are
directed to [7, 8]. Denoting an observed peak as a pair (O^{mz}, O^{in}) consisting of an m/z measurement
and an intensity measurement, respectively, let s = ((O_1^{mz}, O_1^{in}), (O_2^{mz}, O_2^{in}), ..., (O_T^{mz}, O_T^{in})) be an
MS/MS spectrum of T peaks and x be a candidate (which, given s, we'd like to score). We denote
the theoretical spectrum of x, consisting of its unique b- and y-ions sorted in ascending order, as the
length-d vector v. The graph of DRIP is displayed in Figure 2, where variables which control the
traversal of the theoretical spectrum are highlighted in blue and variables which control the scoring
of observed peak measurements are highlighted in red. Groups of variables are collected into time
instances called frames. The frames of DRIP correspond to the observed peak m/z and intensity
observations, so that there are T frames in the model.
Unless otherwise specified, let t be an arbitrary frame 1 ≤ t ≤ T. δ_t is a multinomial random variable
which dictates the number of theoretical peaks traversed in a frame. The random variable K_t, which
denotes the index of the current theoretical peak, is a deterministic function of its parents, such
that p(K_t = K_{t−1} + δ_t | K_{t−1}, δ_t) = 1. Thus, δ_t > 1 corresponds to the deletion of δ_t − 1 theoretical
peaks. The parents of δ_t ensure that DRIP does not attempt to increment past the last theoretical
peak, i.e., p(δ_t > d − K_{t−1} | d, K_{t−1}, i_{t−1}) = 0. Subsequently, the theoretical peak value v(K_t) is
used to access a Gaussian from a collection (the mean of each Gaussian corresponds to a position
along the m/z axis, learned using the EM algorithm [3]) with which to score observations. Hence,
the state-space of the model is all possible traversals, from left to right, of the theoretical spectrum,
accounting for all possible deletions.
peak is scored using learned Gaussians (when it = 0) or considered an insertion and scored using an
4
insertion penalty (when it = 1). When scoring m/z observations, we thus have p(Otmz |v(Kt ), it =
0) = f (Otmz |?mz (v(Kt )), ? 2 ) and p(Otmz |v(Kt ), it = 1) = amz , where ?mz is a vector of
Gaussian means and ? 2 the m/z Gaussian variance. Similarly, when scoring intensity observations,
we have p(Otin |it = 0) = f (Otin |?in , ?
? 2 ) and p(Otin |it = 1) = ain , where ?in and ?
? 2 are the
intensity Gaussian mean and variance, respectively. Let i0 = K0 = ? and 1{?} denote the indicator
function. Denoting DRIP?s Gaussian parameters as ?, the likelihood is thus
p(s|x, θ) = Π_{t=1}^{T} p(δ_t | K_{t−1}, d, i_{t−1}) 1{K_t = K_{t−1}+δ_t} p(O_t^mz | K_t) p(O_t^in)
          = Π_{t=1}^{T} p(δ_t | K_{t−1}, d, i_{t−1}) 1{K_t = K_{t−1}+δ_t} ( Σ_{i_t=0}^{1} p(i_t) p(O_t^mz | K_t, i_t) ) ( Σ_{i_t=0}^{1} p(i_t) p(O_t^in | i_t) )
          = Π_{t=1}^{T} φ(δ_t, K_{t−1}, i_t, i_{t−1}).
The only stochastic variables in the model are i_t and δ_t; all other random variables are either observed or deterministic given the sequences i_{1:T} and δ_{1:T}. Thus, we may equivalently write p(s|x, θ) = p(i_{1:T}, δ_{1:T} | θ). The Fisher score of the kth m/z mean is thus ∂/∂μ^mz(k) log p(s|x, θ) = (1/p(s|x, θ)) ∂/∂μ^mz(k) p(s|x, θ), and we have (please see [9] for the full derivation)

∂/∂μ^mz(k) p(s|x, θ) = Σ_{i_{1:T}, δ_{1:T}} ∂/∂μ^mz(k) p(i_{1:T}, δ_{1:T} | θ)
   = Σ_{i_{1:T}, δ_{1:T} : K_t = k, 1 ≤ t ≤ T} p(i_{1:T}, δ_{1:T} | θ) ( Π_{t : K_t = k} 1/p(O_t^mz | K_t) ) ( Π_{t : K_t = k} ∂/∂μ^mz(k) p(O_t^mz | K_t) ),

so that

∂/∂μ^mz(k) log p(s|x, θ) = Σ_{t=1}^{T} p(i_t, K_t = k | s, θ) p(i_t = 0 | K_t, O_t^mz) (O_t^mz − μ^mz(k)) / σ².    (1)
Note that the posterior in Equation 1, and thus the Fisher score, may be efficiently computed using
sum-product inference. Through similar steps, we have
∂/∂σ²(k) log p(s|x, θ) = Σ_t p(i_t, K_t = k | s, θ) p(i_t = 0 | K_t, O_t^mz) ( (O_t^mz − μ^mz(k))² / (2σ⁴) − 1/(2σ²) )    (2)

∂/∂μ^in log p(s|x, θ) = Σ_t p(i_t, K_t | s, θ) p(i_t = 0 | O_t^in) (O_t^in − μ^in) / σ̄²    (3)

∂/∂σ̄² log p(s|x, θ) = Σ_t p(i_t, K_t | s, θ) p(i_t = 0 | O_t^in) ( (O_t^in − μ^in)² / (2σ̄⁴) − 1/(2σ̄²) )    (4)
where σ²(k) denotes the variance of the kth m/z Gaussian with mean μ^mz(k).
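Given the posteriors produced by sum-product inference, each of these gradients is a posterior-weighted sum of Gaussian sufficient statistics. The sketch below computes Equation (1) for all k at once; the layout of the posterior arrays is an assumption for illustration, since inference itself is performed by the graphical-model toolkit.

```python
import numpy as np

def mz_mean_fisher_scores(o_mz, post_k, post_i0, mu_mz, sigma2):
    """Eq. (1): d/d mu_mz(k) log p(s|x, theta) for every theoretical peak k.
    post_k[t, k]  approximates p(i_t, K_t = k | s, theta);
    post_i0[t, k] approximates p(i_t = 0 | K_t = k, O_t^mz)."""
    U = np.zeros_like(mu_mz)
    for t in range(post_k.shape[0]):
        # posterior-weighted Gaussian mean gradient, accumulated over frames
        U += post_k[t] * post_i0[t] * (o_mz[t] - mu_mz) / sigma2
    return U
```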
Let U_μ = ∇_{μ^mz} log p(s, x|θ) and U_{σ²} = ∇_{σ²} log p(s, x|θ). U_μ and U_{σ²} are length-d vectors corresponding to the mapping of a peptide's sequence of b- and y-ions into r-dimensional space (i.e., dimension equal to an m/z-discretized observed spectrum). Let 1 be the length-r vector of ones. Defining z^mz, z^i ∈ R^r, the elements of which are the quantized observed spectrum m/z and intensity values, respectively, we use the following DRIP gradient-based features for SVM training in Section 5: |U_μ|₁, |U_{σ²}|₁, U_μᵀ z^mz, U_{σ²}ᵀ z^i, U_μᵀ 1, U_{σ²}ᵀ 1, ∂/∂μ^in log p(s, x|θ), and ∂/∂σ̄² log p(s, x|θ).
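A hedged sketch of assembling this feature vector from precomputed gradients follows; the argument names are hypothetical, and the scalars `g_mu_in` and `g_sigma2_in` stand in for the two intensity-parameter Fisher scores above.

```python
import numpy as np

def drip_gradient_features(U_mu, U_sigma2, z_mz, z_in, g_mu_in, g_sigma2_in):
    """The eight DRIP gradient-based SVM features used in Section 5."""
    ones = np.ones_like(U_mu)
    return np.array([
        np.abs(U_mu).sum(),      # |U_mu|_1
        np.abs(U_sigma2).sum(),  # |U_sigma2|_1
        U_mu @ z_mz,             # U_mu^T z^mz
        U_sigma2 @ z_in,         # U_sigma2^T z^i
        U_mu @ ones,             # U_mu^T 1
        U_sigma2 @ ones,         # U_sigma2^T 1
        g_mu_in,                 # gradient w.r.t. the intensity mean
        g_sigma2_in,             # gradient w.r.t. the intensity variance
    ])
```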
4 Theseus

Given an observed spectrum s, we focus on representing the universe of linear PSM scores using a DBN. Let z denote the vector resulting from preprocessing the observed spectrum, s. As a modeling example, we look to represent the popular XCorr scoring function. Using subscript τ to denote a vector whose elements are shifted τ units, XCorr's scoring function is defined as

XCorr(s, x) = vᵀ z − Σ_{τ=−75}^{75} vᵀ z_τ = vᵀ ( z − Σ_{τ=−75}^{75} z_τ ) = vᵀ z′,

where z′ = z − Σ_{τ=−75}^{75} z_τ. Let θ ∈ R^l be a vector of XCorr weights for the various types of possible fragment ions (described in Section 2.1.1). As described in [10], given c(s), we reparameterize z′ into a vector z̄ such that XCorr(x, s) is rendered as a dot-product between z̄ and a boolean vector u in the reparameterized space. This reparameterization readily applies to any linear MS/MS scoring function. The ith element of z̄ is z̄(i) = Σ_{j=1}^{l} θ(j) z_j(i), where z_j is a vector whose element z_j(i) is the sum of all higher charged fragment ions added into the singly-charged fragment ions for the jth fragment ion type. The nonzero elements of u correspond to the singly-charged b-ions of x, and we have uᵀ z̄ = Σ_{i=1}^{n} z̄(m(x_i) + 1) = Σ_{i=1}^{n} Σ_{j=1}^{l} θ(j) z_j(m(x_i) + 1) = vᵀ z′ = XCorr(s, x).
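For concreteness, a direct (unreparameterized) evaluation of the XCorr score might look as follows; this is a sketch assuming zero-padded shifts, not the Crux implementation:

```python
import numpy as np

def xcorr(v, z, max_shift=75):
    """XCorr(s, x) = v^T (z - sum_{tau=-75..75} z_tau) = v^T z'."""
    r = len(z)
    background = np.zeros(r)
    for tau in range(-max_shift, max_shift + 1):
        shifted = np.zeros(r)
        if tau >= 0:
            shifted[tau:] = z[:r - tau]   # shift right by tau bins
        else:
            shifted[:tau] = z[-tau:]      # shift left by |tau| bins
        background += shifted
    return v @ (z - background)
```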
Figure 3: Graph of Theseus. Shaded nodes are observed random variables and unshaded nodes are hidden (i.e.,
stochastic). The model is unrolled for n + 1 frames, including B0 in frame zero. Plate notation denotes M
repetitions of the model, where M is the number of discrete precursor masses allowed by the precursor-mass
tolerance threshold, w.
Our generative model is illustrated in Figure 3. n is the maximum possible peptide length and m is
one of M discrete precursor masses (dictated by the precursor-mass tolerance threshold, w, and m(s)).
A hypothesis is an instantiation of random variables across all frames in the model, i.e., for the set of all possible sequences of X_i random variables, X_{1:n} = X_1, X_2, . . . , X_n, a hypothesis is x_{1:n} ∈ X_{1:n}. In our case, each hypothesis corresponds to a peptide and the corresponding log-likelihood its XCorr score. Each frame after the first contains an amino acid random variable so that we accumulate b-ions in successive frames and access the score contribution for each such ion.

For frame i, X_i is a random amino acid and B_i the accumulated mass up to the current frame. B_0 and B_n are observed to zero and m, respectively, enforcing the boundary conditions that all length-n PSMs considered begin with mass zero and end at a particular precursor mass. For i > 0, B_i is a deterministic function of its parents, p(B_i | B_{i−1}, X_i) = p(B_i = B_{i−1} + m(X_i)) = 1. Thus, hypotheses which do not respect these mass constraints receive probability zero, i.e., p(B_n ≠ m | B_{n−1}, X_n) = 0. m is observed to the value of the current precursor mass being considered.

Let A be the set of amino acids, where |A| = 20. Given B_i and m, the conditional distribution of X_i changes such that p(X_i ∈ A | B_{i−1} < m) = αU{A} and p(X_i = ζ | B_{i−1} ≥ m) = 1, where U{·} is the uniform distribution over the input set and ζ ∉ A, m(ζ) = 0. Thus, when the accumulated mass is less than m, X_i is a random amino acid and, otherwise, X_i deterministically takes on a value with zero mass. To recreate XCorr scores, α = 1/|A|, though, in general, any desired mass function may be used for p(X_i ∈ A | B_{i−1} < m).
S_i is a virtual evidence child [18], i.e., a leaf node whose conditional distribution need not be normalized to compute probabilistic quantities of interest in the DBN. For our model, we have p(S_i | B_i < m, θ) = exp(z̄(B_i)) = exp( Σ_{i=1}^{|θ|} θ_i z_i(B_i) ) and p(S_i | B_i ≥ m, θ) = 1. Let t_0 denote the first frame in which m(X_{1:n}) ≥ m. The log-likelihood is then

log p(s, X_{1:n} | θ) = log p(X_{1:n}, B_{0:n}, S_{1:n−1})
   = log( 1{B_0=0} ( Π_{i=1}^{n−1} p(X_i | m, B_{i−1}) p(B_i = B_{i−1} + m(X_i)) p(S_i | m, B_i, θ) ) 1{B_{n−1}+m(X_n)=m} )
   = log 1{B_0=0 ∧ m(X_{1:n})=m} + log( Π_{i=t_0+1}^{n} p(X_i | m, B_{i−1}) p(B_i = B_{i−1} + m(X_i)) p(S_i | m, B_i, θ) )
     + log( Π_{i=1}^{t_0} p(X_i | m, B_{i−1}) p(B_i = B_{i−1} + m(X_i)) p(S_i | m, B_i, θ) )
   = log 1{m(X_{1:n})=m} + log 1 + log( Π_{i=1}^{t_0} exp(z̄(B_i)) )
   = log 1{m(X_{1:n})=m} + Σ_{i=1}^{t_0} z̄(B_i) = log 1{B_0=0 ∧ m(X_{1:n})=m} + XCorr(X_{1:t_0}, s).
The ith element of Theseus' Fisher score is thus

∂/∂θ(i) log p(s | θ) = (1/p(s|θ)) Σ_{x_{1:n}} ∂/∂θ(i) p(s, x_{1:n} | θ)
   = (1/p(s|θ)) Σ_{x_{1:n}} 1{b_0=0 ∧ m(x_{1:n})=m} ( Σ_{j=1}^{t_0} z_i(b_j) ) Π_{j=1}^{t_0} exp(z̄(b_j)).    (5)
While Equation 5 is generally difficult to compute, we calculate it efficiently using sum-product inference. Note that when the peptide sequence is observed, i.e., X_{1:n} = x̄, we have ∂/∂θ(i) log p(s, x̄ | θ) = Σ_j z_i(m(x̄_{1:j})).
Using the model's Fisher scores, Theseus' parameters θ may be learned via maximum likelihood estimation. Given a dataset of spectra s_1, s_2, . . . , s_n, we present an alternate learning algorithm (Algorithm 1) which converges monotonically to a local optimum (proven in [9]). Within each iteration, Algorithm 1 uses max-product inference to efficiently infer the most probable PSMs, mitigating the need for training labels. The objective over θ is then maximized in each iteration using gradient ascent.
Algorithm 1 Theseus Unsupervised Learning Algorithm
1: while not converged do
2:    for i = 1, . . . , n do
3:       x̄_i ← argmax_{x_i ∈ P} log p(s_i, x_i | θ)
4:    end for
5:    θ ← argmax_θ Σ_{i=1}^{n} log p(s_i, x̄_i | θ)
6: end while
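In code, the outer loop of Algorithm 1 simply alternates inference and maximization. The sketch below assumes placeholder callables `most_probable_psm` (max-product/Viterbi inference) and `maximize_theta` (gradient ascent on the complete-data log-likelihood); neither name is part of any released toolkit.

```python
def theseus_learn(spectra, theta, most_probable_psm, maximize_theta,
                  max_rounds=100, tol=1e-4):
    """Sketch of Algorithm 1: unsupervised learning of Theseus parameters."""
    prev_obj = float("-inf")
    for _ in range(max_rounds):
        # infer the most probable PSM for each spectrum under the current theta
        psms = [most_probable_psm(s, theta) for s in spectra]
        # maximize sum_i log p(s_i, x_i | theta) via gradient ascent
        theta, obj = maximize_theta(spectra, psms, theta)
        if obj - prev_obj < tol:  # the objective increases monotonically
            break
        prev_obj = obj
    return theta
```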
5 Results
Measuring peptide identification performance is complicated by the fact that ground-truth is unavailable for real-world data. Thus, in practice, it is most common to estimate the false discovery rate
(FDR) [1] by searching a decoy database of peptides which are unlikely to occur in nature, typically
generated by shuffling entries in the target database [14]. For a particular score threshold, t, FDR
is then calculated as the proportion of decoys scoring better than t to the number of targets scoring
better than t. Once the target and decoy PSMs are calculated, a curve displaying the FDR threshold
vs. the number of correctly identified targets at each given threshold may be calculated. In place of
FDR along the x-axis, we use the q-value [14], defined to be the minimum FDR threshold at which a
given score is deemed to be significant. As many applications require that a search algorithm perform well at low thresholds, we only plot q ∈ [0, 0.1].
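The target-decoy estimates described above reduce to a short computation. The following sketch is one plausible implementation (ignoring ties and the pseudocounts some estimators use); it returns a q-value for each target score:

```python
import numpy as np

def qvalues(target_scores, decoy_scores):
    """q-value per target: the minimum FDR over all thresholds at least as
    permissive, with FDR(t) = #{decoys >= t} / #{targets >= t}."""
    t = np.sort(np.asarray(target_scores))[::-1]   # descending target scores
    d = np.sort(np.asarray(decoy_scores))          # ascending decoy scores
    decoys_ge = len(d) - np.searchsorted(d, t, side="left")  # decoys >= each t
    fdr = decoys_ge / np.arange(1, len(t) + 1)               # targets >= t[i] is i+1
    return np.minimum.accumulate(fdr[::-1])[::-1]  # running min from permissive end
```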
The same datasets and search settings used to evaluate DRIP?s heuristically derived features in [8]
are adapted in this work. MS-GF+ (one of the most accurate search algorithms in wide use, plotted for reference) was run using version 9980, with PSMs ranked by E-value and Percolator features calculated using msgf2pin. All database searches were run using a ±3.0 Th mass tolerance, XCorr flanking peaks not allowed in Crux searches, and all search algorithm settings otherwise left to their defaults. Peptides were derived from the protein databases using trypsin cleavage rules without suppression of proline, and a single fixed carbamidomethyl modification was included.

Gradient-based feature representations derived from DRIP and XCorr were used to train an SVM classifier [13] and recalibrate PSM scores. Theseus training and computation of XCorr Fisher scores were performed using a customized version of Crux v2.1.17060 [17]. For an XCorr PSM, a feature representation is derived directly using both ∇_θ log p(s|θ) and ∇_θ log p(s, x|θ), as defined in Section 4, representing gradient information for both the distribution of PSM scores and the individual PSM score, respectively. DRIP gradient-based features, as defined in Section 3, were derived using a customized version of the DRIP Toolkit [8]. Figure 4 displays the resulting search accuracy for four worm and yeast datasets. For the uncalibrated search results in Figure 5, we show that XCorr parameters may be learned without supervision using Theseus, and that the presented coordinate descent algorithm (which estimates the most probable PSMs to take a step in the objective space) converges to a much better local optimum than maximum likelihood estimation.

Figure 4: Search accuracy plots measured by q-value versus number of spectra identified (in thousands) for worm (C. elegans) and yeast (Saccharomyces cerevisiae) datasets; panels cover Worm-1 through Worm-4 and Yeast-1 through Yeast-4. All methods are post-processed using the Percolator SVM classifier [13]. "DRIP" augments the standard set of DRIP features with DRIP-Viterbi-path parsed PSM features (described in [8]) and "DRIP Fisher" augments the heuristic set with gradient-based DRIP features. "XCorr," "XCorr p-value," and "MS-GF+" use their standard sets of Percolator features (described in [8]), while "XCorr p-value Fisher" and "XCorr Fisher" augment the standard XCorr feature sets with gradient-based Theseus features.
Figure 5: Search accuracy of Theseus' learned scoring function parameters (panels include the Yeast-1 and Yeast-2 datasets), measured by q-value versus number of spectra identified. Coordinate ascent parameters are learned using Algorithm 1 and MLE parameters are learned using gradient ascent.
5.1 Discussion
DRIP gradient-based post-processing improves upon the heuristically derived features in all cases,
and does so substantially on a majority of datasets. In the case of the yeast datasets, this distinguishes
DRIP post-processing performance from all competitors and leads to state-of-the-art identification
accuracy. Furthermore, we note that both XCorr and XCorr p-value post-processing performance are
greatly improved using the gradient-based features derived using Theseus, raising performance above
the highly similar MS-GF+ in several cases. Particularly noteworthy is the substantial improvement
in XCorr accuracy which, using gradient-based information, is nearly competitive with its p-value
counterpart. Considering the respective runtimes of the underlying search algorithms, this presents a tradeoff between search time and accuracy for the practitioner. In practice, the DRIP and
XCorr p-value computations are at least an order of magnitude slower than XCorr computation in
Crux [21]. Thus, the presented work not only improves state-of-the-art accuracy, but also improves
the accuracy of simpler, yet significantly faster, search algorithms.
Owing to max-product inference in graphical models, we also show that Theseus may be used to
effectively learn XCorr model parameters (Figure 5) without supervision. Furthermore, we show that
XCorr p-values are also made more accurate by training the underlying scoring function for which
p-values are computed. This marks a novel step towards unsupervised training of uncalibrated scoring
functions, as unsupervised learning has been extensively explored for post-processor recalibration,
but has remained an open problem for MS/MS database-search scoring functions. The presented
learning framework, as well as the presented XCorr gradient-based feature representation, may be adapted to many of the widely used scoring functions represented by Theseus [2, 5, 6, 16, 10, 22, 17].
Many exciting avenues are open for future work. Leveraging the large breadth of graphical models
research, we plan to explore other learning paradigms using Theseus (for instance, estimating other
PSMs using k-best Viterbi in order to discriminatively learn parameters using algorithms such as
max-margin learning). Perhaps most exciting, we plan to further investigate the peptide-to-observed-spectrum mapping derived from DRIP Fisher scores. Under this mapping, we plan to explore learning
distance metrics between PSMs in order to identify proteins from peptides.
Acknowledgments: This work was supported by the National Center for Advancing Translational
Sciences (NCATS), National Institutes of Health, through grant UL1 TR001860.
References
[1] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society B, 57:289–300, 1995.
[2] R. Craig and R. C. Beavis. Tandem: matching proteins with tandem mass spectra. Bioinformatics, 20:1466–1467, 2004.
[3] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39:1–22, 1977.
[4] Charles Elkan. Deriving TF-IDF as a Fisher kernel. In International Symposium on String Processing and Information Retrieval, pages 295–300. Springer, 2005.
[5] J. K. Eng, A. L. McCormack, and J. R. Yates, III. An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database. Journal of the American Society for Mass Spectrometry, 5:976–989, 1994.
[6] Jimmy K. Eng, Tahmina A. Jahan, and Michael R. Hoopmann. Comet: an open-source MS/MS sequence database search tool. Proteomics, 13(1):22–24, 2013.
[7] John T. Halloran, Jeff A. Bilmes, and William S. Noble. Learning peptide-spectrum alignment models for tandem mass spectrometry. In Uncertainty in Artificial Intelligence (UAI), Quebec City, Quebec, Canada, July 2014. AUAI.
[8] John T. Halloran, Jeff A. Bilmes, and William S. Noble. Dynamic Bayesian network for accurate detection of peptides from tandem mass spectra. Journal of Proteome Research, 15(8):2749–2759, 2016.
[9] John T. Halloran and David M. Rocke. Gradients of Generative Models for Improved Discriminative Analysis of Tandem Mass Spectra: Supplementary Materials, 2017.
[10] J. Jeffry Howbert and William S. Noble. Computing exact p-values for a cross-correlation shotgun proteomics score function. Molecular & Cellular Proteomics, pages mcp–O113, 2014.
[11] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, Cambridge, MA, 1998. MIT Press.
[12] Tommi S. Jaakkola, Mark Diekhans, and David Haussler. Using the Fisher kernel method to detect remote protein homologies. In ISMB, volume 99, pages 149–158, 1999.
[13] L. Käll, J. Canterbury, J. Weston, W. S. Noble, and M. J. MacCoss. A semi-supervised machine learning technique for peptide identification from shotgun proteomics datasets. Nature Methods, 4:923–925, 2007.
[14] Uri Keich, Attila Kertesz-Farkas, and William Stafford Noble. Improved false discovery rate estimation procedure for shotgun proteomics. Journal of Proteome Research, 14(8):3148–3161, 2015.
[15] Uri Keich and William Stafford Noble. On the importance of well-calibrated scores for identifying shotgun proteomics spectra. Journal of Proteome Research, 14(2):1147–1160, 2014.
[16] Sangtae Kim and Pavel A. Pevzner. MS-GF+ makes progress towards a universal database search tool for proteomics. Nature Communications, 5, 2014.
[17] Sean McIlwain, Kaipo Tamura, Attila Kertesz-Farkas, Charles E. Grant, Benjamin Diament, Barbara Frewen, J. Jeffry Howbert, Michael R. Hoopmann, Lukas Käll, Jimmy K. Eng, et al. Crux: rapid open source protein tandem mass spectrometry analysis. Journal of Proteome Research, 2014.
[18] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[19] M. Spivak, J. Weston, L. Bottou, L. Käll, and W. S. Noble. Improvements to the Percolator algorithm for peptide identification from shotgun proteomics data sets. Journal of Proteome Research, 8(7):3737–3745, 2009. PMC2710313.
[20] M. Spivak, J. Weston, D. Tomazela, M. J. MacCoss, and W. S. Noble. Direct maximization of protein identifications from tandem mass spectra. Molecular and Cellular Proteomics, 11(2):M111.012161, 2012. PMC3277760.
[21] Shengjie Wang, John T. Halloran, Jeff A. Bilmes, and William S. Noble. Faster and more accurate graphical model identification of tandem mass spectra using trellises. Bioinformatics, 32(12):i322–i331, 2016.
[22] C. D. Wenger and J. J. Coon. A proteomics search algorithm specifically designed for high-resolution tandem mass spectra. Journal of Proteome Research, 2013.
Asynchronous Parallel Coordinate Minimization
for MAP Inference
Ofer Meshi
Google
[email protected]
Alexander G. Schwing
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
[email protected]
Abstract
Finding the maximum a-posteriori (MAP) assignment is a central task for structured
prediction. Since modern applications give rise to very large structured problem
instances, there is increasing need for efficient solvers. In this work we propose
to improve the efficiency of coordinate-minimization-based dual-decomposition
solvers by running their updates asynchronously in parallel. In this case message-passing inference is performed by multiple processing units simultaneously without
coordination, all reading and writing to shared memory. We analyze the convergence properties of the resulting algorithms and identify settings where speedup
gains can be expected. Our numerical evaluations show that this approach indeed
achieves significant speedups in common computer vision tasks.
1 Introduction
Finding the most probable configuration of a structured distribution is an important task in machine
learning and related applications. It is also known as the maximum a-posteriori (MAP) inference
problem in graphical models [Wainwright and Jordan, 2008, Koller and Friedman, 2009], and has
found use in a wide range of applications, from disparity map estimation in computer vision, to
part-of-speech tagging in natural language processing, protein-folding in computational biology and
others. Generally, MAP inference is intractable, and efficient algorithms only exist in some special
cases, such as tree-structured graphs. It is therefore common to use approximations.
In recent years, many approximate MAP inference methods have been proposed [see Kappes et al.,
2015, for a recent survey]. One of the major challenges in applying approximate inference techniques
is that modern applications give rise to very large instances. For example, in semantic image
segmentation the task is to assign labels to all pixels in an image [e.g., Zhou et al., 2016]. This can
translate into a MAP inference problem with hundreds of thousands of variables (one for each pixel).
For this reason, efficiency of approximate inference algorithms is becoming increasingly important.
One approach to dealing with the growth in problem complexity is to use cheap (but often inaccurate)
algorithms. For example, variants of the mean field algorithm have witnessed a surge in popularity
due to their impressive success in several computer vision tasks [Krähenbühl and Koltun, 2011]. A
shortcoming of this approach is that it is limited to a specific type of model (fully connected graphs
with Gaussian pairwise potentials). Moreover, the mean field approximation is often less accurate
than other approximations, e.g., those based on convex relaxations [Desmaison et al., 2016].
In this work we study an alternative approach to making approximate MAP inference algorithms
more efficient ? parallel computation. Our study is motivated by two developments. First, current
hardware trends increase the availability of parallel processing hardware in the form of multi-core
CPUs as well as GPUs. Second, recent theoretical results improve our understanding of various
asynchronous parallel algorithms, and demonstrate their potential usefulness, especially for objective
functions that are typical in machine learning problems [e.g., Recht et al., 2011, Liu et al., 2015].
Focusing on a smoothed objective function originating from a dual-decomposition approximation,
we present a fully asynchronous parallel algorithm for MAP inference based on block-coordinate
updates. Our approach gives rise to a message-passing procedure, where messages are computed
and updated in shared memory asynchronously in parallel by multiple processing units, with no
attempt to coordinate their actions. The reason we focus on asynchronous algorithms is because
the runtime of synchronous algorithms is dominated by the slowest worker, which may cause the
overhead from synchronization to outweigh the gain from parallelization. The asynchronous parallel
setting is particularly suitable for message-passing algorithms, like the ones we study here.
Our analysis is conducted under the bounded delay assumption, which is standard in the literature on
asynchronous optimization and matches well modern multicore architectures. It reveals the precise
relation between the delay and the expected change in objective value following an update. This
result suggests a natural criterion for adaptively choosing the number of parallel workers to guarantee
convergence to the optimal value. Additional analysis shows that speedups which are linear in the
number of processors can be expected under some conditions. We illustrate the performance of our
algorithm both on synthetic models and on a disparity estimation task from computer vision. We
demonstrate 45-fold improvements or more when compared to other asynchronous optimization
techniques.
2 Related Work
Our work is inspired by recent advances in the study of asynchronous parallel algorithms and their
successful application to various machine learning tasks. In particular, parallel versions of various
sequential algorithms have been recently analyzed, adding to past work in asynchronous parallel
optimization [Bertsekas and Tsitsiklis, 1989, Tseng, 1991]. Those include, for example, stochastic
gradient descent [Recht et al., 2011], conditional gradient [Wang et al., 2016], ADMM [Zhang and
Kwok, 2014], proximal gradient methods [Davis et al., 2016], and coordinate descent [Liu et al.,
2015, Liu and Wright, 2015, Avron et al., 2015, Hsieh et al., 2015, Peng et al., 2016, You et al., 2016].
The algorithms we study here are based on block coordinate minimization, a coordinate descent
method in which an optimal update is computed in closed form.1 To the best of our knowledge, this
algorithm has yet to be analyzed in the asynchronous parallel setting. The analysis of this algorithm
is significantly more challenging compared to other coordinate descent methods, since there is no
notion of a step-size, which is carefully chosen in previous analyses to guarantee convergence [e.g.,
Liu et al., 2015, Avron et al., 2015, Peng et al., 2016]. Furthermore, in most previous papers, the
function which is being optimized is assumed to be strongly convex, or to satisfy a slightly weaker
condition [Liu et al., 2015, Hsieh et al., 2015]. In contrast, we analyze a smooth and convex MAP
objective, which does not satisfy any of these strong-convexity conditions. We focus on this particular
objective function since optimal block coordinate updates are known in this case, which is not true
for its strongly convex counterparts [Meshi et al., 2015].
We are not the first to study parallel inference methods in graphical models. Parallel variants of
Belief Propagation (BP) are proposed and analyzed by Gonzalez et al. [2011]. They present bounds
on achievable gains from parallel inference on chain graphs, as well as an optimal parallelization
scheme. However, the algorithms they propose include global synchronization steps, which often hurt
efficiency. In contrast, we focus on the fully asynchronous setting, so our algorithms and analysis
are substantially different. Piatkowski and Morik [2011] and Ma et al. [2011] also describe parallel
implementations of BP, but those again involve synchronization. We are particularly interested
in the MAP inference problem and use convergent coordinate minimization methods with a dualdecomposition objective. This is quite different from marginal inference with BP, used in the
aforementioned works; for example, BP is not guaranteed to converge even with sequential execution.
Dual-decomposition based parallel inference for graphical models has been investigated by Choi and
Rutenbar [2012] and extended by Hurkat et al. [2015]. They study hardware implementations of
the TRW-S algorithm (a coordinate-minimization algorithm very similar to the ones we study here),
where some message computations can be parallelized. However, their parallelization scheme is quite
different from ours as it is synchronous, i.e., the messages computed in parallel have to be carefully
chosen, and it is specific to grid-structured graphs. In addition, they provide no theoretical analysis
1. For a single coordinate this is equivalent to exact line search, but for larger blocks the updates can differ.
of convergence (which is not directly implied by TRW-S convergence due to different message
scheduling).
Schwing et al. [2011] and Zhang et al. [2014] also study dual-decomposition based parallel inference. They demonstrate gains when parallelizing the computation across multiple machines in a
cluster. However, their approach requires the employed processing units to run in synchrony. Parallel
MAP solvers based on subdifferential techniques [Schwing et al., 2012], have also been considered by Schwing et al. [2014] using a Frank-Wolfe algorithm. Albeit individual computations are
performed in parallel, their approach also requires a synchronous gradient step.
An alternative parallel inference approach is based on sampling algorithms [Singh et al., 2010, Wick
et al., 2010, Asuncion et al., 2011]. However, the gains in runtime observed in this case are usually
much smaller than those observed for algorithms which do not use sampling.
Our work is thus the first to propose and analyze a fully asynchronous parallel coordinate minimization
algorithm for MAP inference in graphical models.
3 Approach
In this section we formalize the MAP inference problem and present our algorithmic framework.
Consider a set of discrete variables X_1, . . . , X_N, and denote by x_i ∈ X_i a particular assignment to variable X_i from a discrete set X_i. Let r ⊆ {1, . . . , N} denote a subset of the variables, also known as a region, and let R be the set of all regions that are used in a problem. Each region r ∈ R is associated with a local score function θ_r(x_r), referred to as a factor. The MAP inference problem is
to find a joint assignment x that maximizes the sum of all factor scores,
max_x Σ_{r∈R} θ_r(x_r).    (1)
Consider semantic image segmentation as an example. Factors depending on a single variable denote univariate preferences, often obtained from neural networks [Chen et al., 2015]. Factors depending on two or more variables encode local preference relationships.
The problem in Eq. (1) is a combinatorial optimization problem which is generally NP-hard [Shimony,
1994]. Notable tractable special cases include tree-structured graphs and super-modular pairwise
factors. In this work we are interested in solving the general form of the problem, therefore we resort
to approximate inference.
Multiple ways to compute an approximate MAP solution have been proposed. Here we employ
approximations based on the dual-decomposition method [Komodakis et al., 2007, Werner, 2010,
Sontag et al., 2011], which often deliver competitive performance compared to other approaches,
and are also amenable to asynchronous parallel execution. The key idea in dual-decomposition is
to break the global optimization problem of Eq. (1) into multiple (easy) subproblems, one for each
factor. Agreement constraints between overlapping subproblem maximizers are then defined, and the
resulting program takes the following form:²

min_δ Σ_{r∈R} max_{x_r} ( θ_r(x_r) + Σ_{p:r∈p} δ_{pr}(x_r) − Σ_{c:c∈r} δ_{rc}(x_c) ) ≡ min_δ Σ_{r∈R} max_{x_r} θ̄_r^δ(x_r).    (2)
Here, r ∈ p (similarly, c ∈ r) represents parent-child containment relationships, often represented as a region graph [Wainwright and Jordan, 2008], and δ are Lagrange multipliers for the agreement constraints, defined for every region r, assignment x_r, and parent p : r ∈ p. In particular, these constraints enforce that the maximizing assignment in a parent region p agrees with the maximizing assignment in the child region r on the values of the variables in r (which are also in p due to containment). For a full derivation see Werner [2010] (Eq. (11)). The modification of the model factors θ_r by the multipliers δ is known as a reparameterization, and is denoted here by θ̄_r^δ for brevity.
The program in Eq. (2) is an unconstrained convex problem with a (piecewise-linear) non-smooth
objective function. Standard algorithms, such as subgradient descent, can be applied in this case
[Komodakis et al., 2007, Sontag et al., 2011], however, often, faster algorithms can be derived for a
smoothed variant of this objective function [Johnson, 2008, Hazan and Shashua, 2010, Werner, 2009,
2. The problem in Eq. (2) can also be derived as the dual of a linear programming relaxation of Eq. (1).
Algorithm 1 Block Coordinate Minimization
1: Initialize: δ⁰ = 0
2: while not converged do
3:    Choose a block s at random
4:    Update: δ_s^{t+1} = argmin_{δ′_s} f(δ′_s, δ_{−s}^t), and keep: δ_{−s}^{t+1} = δ_{−s}^t
5: end while
Savchynskyy et al., 2011]. In this approach the max operator is replaced with soft-max, giving rise to the following problem:

min_δ f(δ) := γ Σ_{r∈R} log Σ_{x_r} exp( θ̄_r^δ(x_r) / γ ),    (3)

where γ is a parameter controlling the amount of smoothing (a larger γ is smoother).
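As a concrete reference point, the smoothed objective is a sum of per-region soft-max terms and can be evaluated directly. A minimal sketch, assuming each region's reparameterized scores are stored as a flat array:

```python
import numpy as np
from scipy.special import logsumexp

def smoothed_dual(theta_bar, gamma=1.0):
    """f(delta) = gamma * sum_r log sum_{x_r} exp(theta_bar_r(x_r) / gamma),
    where theta_bar is a list of per-region reparameterized score tables
    (model scores plus the delta terms)."""
    return gamma * sum(logsumexp(t.ravel() / gamma) for t in theta_bar)
```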
Algorithms: Several algorithms for optimizing either the smooth (Eq. (3)) or non-smooth (Eq. (2))
problem have been studied. Block coordinate minimization algorithms, which are the focus of our
work, are among the most competitive methods. In particular, in this approach a block of variables s
is updated at each iteration using the values in the other blocks, i.e., δ_{−s}, which are held fixed. Below we
will assume a randomized schedule, where the next block to update is chosen uniformly at random.
Other schedules are possible [e.g., Meshi et al., 2014, You et al., 2016], but this one will help to
avoid unwanted coordination between workers in an asynchronous implementation. The resulting
meta-algorithm is given in Algorithm 1.
Various choices of blocks give rise to different algorithms in this family. A key consideration is to
make sure that the update in line 4 of Algorithm 1 can be computed efficiently. Indeed, for several
types of blocks, efficient, oftentimes analytically computable, updates are known [Werner, 2007,
Globerson and Jaakkola, 2008, Kolmogorov, 2006, Sontag et al., 2011, Meshi et al., 2014]. To make
the discussion concrete, we next instantiate the block coordinate minimization update (line 4 in
Algorithm 1) using the smooth objective in Eq. (3) for two types of blocks.3 Specifically, we use the
Pencil block, consisting of the variables δ_{pr}(·), and the Star block, which consists of the set δ_{·r}(·).
Intuitively, for the Pencil block, we choose a parent p and one of its children r. For the Star block we
choose a region r and consider all of its parents.
To simplify notation, it is useful to define per-factor probability distributions, referred to as beliefs:

μ_r(x_r) ∝ exp( θ̄_r^δ(x_r) / γ ).

Using this definition, the Pencil update is performed by picking a pair of adjacent regions p, r, and setting

δ_{pr}^{t+1}(x_r) = δ_{pr}^t(x_r) + (γ/2) ( log μ_p^t(x_r) − log μ_r^t(x_r) )    (4)

for all x_r, where we denote the marginal belief μ_p(x_r) = Σ_{x′_{p\r}} μ_p(x_r, x′_{p\r}). Similarly, for the Star update we pick a region r, and set

δ_{pr}^{t+1}(x_r) = δ_{pr}^t(x_r) + γ log μ_p^t(x_r) − (γ / (P_r + 1)) log ( μ_r^t(x_r) · Π_{p′:r∈p′} μ_{p′}^t(x_r) )

for all p : r ∈ p and all x_r, where P_r = |{p : r ∈ p}| is the number of parents of r in the region
graph. Full derivation of the above updates is outside the scope of this paper and can be found in
previous work [e.g., Meshi et al., 2014]. The variables δ are sometimes called messages. Hence the algorithms considered here belong to the family of message-passing procedures.

In terms of convergence rate, it is known that coordinate minimization converges to the optimum of the smooth problem in Eq. (3) with rate O(1/(γt)) [Meshi et al., 2014].
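A minimal sketch of the Pencil update (Eq. (4)) for a unary child r contained in a pairwise parent p follows; the tensor layout (with x_r on a given axis of the parent table) is an assumption for illustration:

```python
import numpy as np
from scipy.special import logsumexp

def pencil_update(delta_pr, theta_bar_r, theta_bar_p, r_axis, gamma=1.0):
    """Closed-form Pencil update: delta_pr += (gamma/2)(log mu_p - log mu_r)."""
    log_mu_r = theta_bar_r / gamma
    log_mu_r -= logsumexp(log_mu_r)                       # normalize child belief
    log_mu_p = theta_bar_p / gamma
    log_mu_p -= logsumexp(log_mu_p)                       # normalize parent belief
    log_mu_p_marg = logsumexp(log_mu_p, axis=1 - r_axis)  # marginalize onto x_r
    return delta_pr + 0.5 * gamma * (log_mu_p_marg - log_mu_r)
```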
In this work our goal is to study asynchronous parallel coordinate minimization for approximate
MAP inference. This means that each processing unit repeatedly performs the operations in lines 3-4
3. Similar updates for the non-smooth case (Eq. (2)) are also known. Those are easily obtained by switching from soft-max to max.
of Algorithm 1 independently, with minimal coordination between units. We refer to this algorithm as
APCM, for Asynchronous Parallel Coordinate Minimization. We use APCM-Pencil and APCM-Star
to refer to the instantiations of APCM with Pencil and Star blocks, respectively.
4 Analysis
We now proceed to analyze the convergence properties of the asynchronous variants of Algorithm 1.
In this setting, the iteration counter t corresponds to write operations, which are assumed to be atomic.
Note, however, that in our experiments in Section 5 we use a lock-free implementation, which may
result in inconsistent writes and reads.
If there is no delay, then the algorithm is performing exact coordinate minimization. However, since
updates happen asynchronously, there will generally be a difference between the current beliefs μ^t and the ones used to compute the update. We denote by k(t) the iteration counter corresponding to the time at which values were read. The bounded delay assumption implies that t − k(t) ≤ τ for some constant τ. We present results for the Pencil block next, and defer results for the Star block to
Appendix B.
Our first result precisely characterizes the expected change in objective value following an update as
a function of the old and new beliefs. All proofs appear in the supplementary material.
Proposition 1. The APCM-Pencil algorithm satisfies:

E_s[f(δ^{t+1})] − f(δ^t) = (γ/n) Σ_r Σ_{p:r∈p} [ log Σ_{x_r} ( μ_r^t(x_r) / μ_r^{k(t)}(x_r) ) √( μ_p^{k(t)}(x_r) · μ_r^{k(t)}(x_r) )
   + log Σ_{x_r} ( μ_p^t(x_r) / μ_p^{k(t)}(x_r) ) √( μ_p^{k(t)}(x_r) · μ_r^{k(t)}(x_r) ) ],    (5)

where n = Σ_r Σ_{p:r∈p} 1 is the number of Pencil blocks, and the expectation is over the choice of
At a high-level, our derivation carefully tracks the effect of stale beliefs on convergence by separating old and new beliefs after applying the update (see Appendix A.1). We next highlight a few
consequences of Proposition 1. First, it provides an exact characterization of the expected change in
objective value, not an upper bound. Second, as a sanity check, when there is no delay (k(t) = t),
the belief ratio terms (μ^t/μ^{k(t)}) drop, and we recover the sequential decrease in objective, which corresponds to the (negative) Bhattacharyya divergence measure between the pair of distributions μ_r^t(x_r) and μ_p^t(x_r) [Meshi et al., 2014]. Finally, Proposition 1 can be used to dynamically set the
degree of parallelization as follows. We estimate Eq. (5) (per block) and if the result is strictly positive
then it suggests that the delay is too large and we should reduce the number of concurrent processors.
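A sketch of this adaptive criterion for a single Pencil block follows; `mu_*_old` denote the beliefs at read time k(t) and `mu_*_new` the current ones, both assumed to be given:

```python
import numpy as np

def pencil_block_criterion(mu_r_new, mu_r_old, mu_p_new, mu_p_old):
    """One block's contribution to Eq. (5); a positive value indicates that
    staleness outweighs the expected improvement, suggesting fewer workers."""
    root = np.sqrt(mu_p_old * mu_r_old)
    return (np.log(np.sum(mu_r_new / mu_r_old * root))
            + np.log(np.sum(mu_p_new / mu_p_old * root)))
```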
Next, we obtain an upper bound on the expected change in objective value that takes into account the
sparsity of the update.
Proposition 2. The APCM-Pencil algorithm satisfies:

E_s[f(δ^{t+1})] − f(δ^t) ≤ (γ/n) Σ_{d=k(t)}^{t−1} [ max_{x_r} log ( μ_{r(d)}^{d+1}(x_r) / μ_{r(d)}^{d}(x_r) ) + max_{x_r} log ( μ_{p(d)}^{d+1}(x_r) / μ_{p(d)}^{d}(x_r) ) ]    (6)

   + (γ/n) Σ_r Σ_{p:r∈p} log ( Σ_{x_r} √( μ_p^{k(t)}(x_r) · μ_r^{k(t)}(x_r) ) )².    (7)
This bound separates the expected change in objective into two terms: the delay term (Eq. (6)) and
the (stale) improvement term (Eq. (7)). The improvement term is always non-positive, it is equal to
the negative Bhattacharyya divergence, and it is exactly the same as the expected improvement in
the sequential setting. The delay term is always non-negative, and as before, when there is no delay
(k(t) = t), the sum in Eq. (6) is empty, and we recover the sequential improvement. Note that the
delay term depends only on the beliefs in regions that were actually updated between the read and
current write. This result is obtained by exploiting the sparsity of the updates: each message affects
only the neighboring nodes in the graph (see Appendix A.2). Similar structural properties are also
used in related analyses [e.g., Recht et al., 2011], however in other settings this involves making
non-trivial assumptions (such as how training examples interact), whereas in our case the sparsity pattern is readily available through the structure of the graphical model.

Figure 1: Simulation of APCM-Pencil on toy models. (Left) objective vs. iteration (equiv., update) on a 3-node chain graph; the dashed lines show the same objective when iterations are divided by the number of workers, which approximates runtime. (Middle) objective vs. iteration and vs. number of active workers on a 3-node chain graph when adapting the number of workers. (Right) objective vs. iteration (equiv., update) on a 6-node fully connected graph.
To demonstrate the hardness of our setting, we present in Appendix A.3 a case where the RHS of
Eq. (6) - (7) may be a large positive number. This happens when some beliefs are very close to 0. In
contrast, the next theorem uses the results above to show speedups under additional assumptions.
Theorem 1. Let |θ̄_r^{δ^t}(x_r)| ≤ M for all t, r, x_r, and let ‖δ^t − δ*‖² < B for all t. Assume that the gradient is bounded from below as ‖∇f‖² ≥ c, and that the delay is bounded as τ ≤ c/(32M). Then E_s[f(δ^t)] − f(δ*) ≤ 8nB/t.
This upper bound is only 2 times slower than the corresponding sequential bound (see Theorem 3 in
Meshi et al. [2014]), however, in this parallel setting we execute updates roughly τ times faster, so
we obtain a linear speedup in this case. Notice that this rate applies only when the gradient is not
too small, so we expect to get large gains from parallelization initially, and smaller gains as we get
closer to optimality. This is due to the hardness of our setting (see Appendix A.3), and gives another
theoretical justification to adaptively reduce the number of processing units as the iterations progress.
At first glance, the assumptions in Theorem 1 (specifically, the bounds M and B) seem strong.
However, it turns out that they are easily satisfied whenever f(δ^t) ≤ f(δ^0) (see Lemma 9 in Meshi et al. [2014]), which is a mild assumption that is satisfied in all of our experiments except some
adversarially constructed toy problems (see Section 5.1).
5 Experiments
In this section we present numerical experiments to study the performance of APCM in practical MAP
estimation problems. We first simulate APCM on toy problems in Section 5.1, then, in Section 5.2,
we demonstrate our approach on a disparity estimation task from computer vision.
5.1 Synthetic Problems
To better understand the behavior of APCM, we simulate the APCM-Pencil algorithm sequentially as
follows. We keep a set of "workers", each of which can be in one of two states: "read" or "update". In every step, we choose one of the workers at random using a skewed distribution to encourage large delays: the probability of sampling worker w is p_w = e^{βs_w} / Σ_{w′} e^{βs_{w′}}, where s_w is sampled uniformly in [0, 1] and β = 5. If the worker is in the "read" state, then it picks a message uniformly at random, makes a local copy of the beliefs, and moves to state "update". Else, if the worker wakes up in state "update", then it computes the update from its local beliefs, writes the update to the global beliefs, and goes back to state "read". This procedure creates delays between the read and write steps.
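For reference, this protocol can be written in a few lines; `compute_update` and `apply_update` are placeholders for the Pencil computation and the shared-memory write:

```python
import numpy as np

def simulate(num_workers, messages, compute_update, apply_update,
             steps, beta=5.0):
    """Sequential simulation of asynchronous workers with skewed scheduling."""
    s = np.random.uniform(0, 1, size=num_workers)
    p = np.exp(beta * s) / np.exp(beta * s).sum()   # p_w from the text
    pending = [None] * num_workers                  # None means state 'read'
    for _ in range(steps):
        w = np.random.choice(num_workers, p=p)
        if pending[w] is None:                      # 'read': copy shared beliefs
            m = np.random.randint(len(messages))
            pending[w] = (m, [b.copy() for b in messages])
        else:                                       # 'update': write stale update
            m, stale = pending[w]
            apply_update(messages, m, compute_update(stale, m))
            pending[w] = None
```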
Our first toy model consists of 3 binary variables and 2 pairwise factors, forming a chain graph. This model has a total of 4 messages. Factor values are sampled uniformly in the range [−5, 5]. In Fig. 1 (left) we observe that as the number of workers grows, the updates become less effective due to stale beliefs. Importantly, it takes 40 workers operating on 4 messages to observe divergence. We don't expect a setting with more workers than messages to be observed in practice. We also adaptively change the number of workers as suggested by our theory, which indeed helps to regain convergence. Fig. 1 (middle) shows how the number of workers decreases as the objective approaches the optimum.

Figure 2: For γ = 1 and an 8-state model, we illustrate the convergence behavior of our approach compared to HOGWILD!, for a variety of MRF configurations (2, 4, 8) and different numbers of iterations (200, 400). A different number of threads is used for each configuration.

Algorithm 2 HOGWILD! A single update
1: Choose a region r ∈ R at random
2: Update: δ_{pr}(x_r) −= η_t μ_r(x_r) for all x_r, p : r ∈ p
           δ_{rc}(x_c) += η_t μ̄_r(x_c) for all x_c, c : c ∈ r
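A sketch of one such HOGWILD! step on shared multipliers follows; the dictionary layout of `delta` and `mu` (including precomputed marginals stored at `mu[(r, c)]`) is an assumption for exposition, and no locks are taken:

```python
def hogwild_step(delta, mu, r, parents, children, eta):
    """One lock-free stochastic gradient step of Algorithm 2 for region r."""
    for p in parents[r]:
        delta[(p, r)] -= eta * mu[r]        # delta_pr(x_r) -= eta_t * mu_r(x_r)
    for c in children[r]:
        delta[(r, c)] += eta * mu[(r, c)]   # delta_rc(x_c) += eta_t * mu_bar_r(x_c)
```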
Our second toy model consists of 6 binary variables forming a fully connected graph. This model has
30 messages. In this setting, despite stale beliefs due to a skewed distribution, Fig. 1 (right) shows
that APCM is convergent even with 40 active workers. Hypothetically assuming 40 workers to run in
parallel yields a significant speedup when compared to a single thread, as is illustrated by the dashed
lines in Fig. 1.
5.2 Disparity Estimation
We now proceed to test our approach on a disparity estimation task, a more realistic setup. In our
case, the employed pairwise graphical model, often also referred to as a pairwise Markov random
field (MRF), is grid structured. It has 144 × 185 = 26,640 unary regions with 8 states and is a downsampled version from Schwing et al. [2011]. We use the temperature parameter γ = 1 for the smooth objective (Eq. (3)). We compare our APCM-Star algorithm to the HOGWILD! approach [Recht et al., 2011], which employs an asynchronous parallel stochastic gradient descent method, summarized in Algorithm 2, where we use the shorthand μ̄_r(x_c) = Σ_{x′_{r\c}} μ_r(x_c, x′_{r\c}). We refer the reader to Appendix C in the supplementary material for additional results on graphical models with larger state space size and for results regarding the non-smooth update obtained for γ = 0. In short, those results are similar to the ones reported here.
No synchronization is used for both HOGWILD! and our approach, i.e., we allow inconsistent
reads and writes. Hence our optimization is lock-free and each of the threads is entirely devoted to
computing and updating messages. We use one additional thread that constantly monitors progress
by computing the objective in Eq. (3). We perform this function evaluation a fixed number of times,
either 200 or 400 times. Running for more iterations lets us compare performance in the high-accuracy
regime. During function evaluation, other threads randomly and independently choose a region r and
update the variables δ_{·r}(·), i.e., we evaluate the Star block updates of Eq. (5). Our choice is motivated
by the fact that Star block updates are more overlapping compared to Pencil updates, as they depend
on more variables. Therefore, Star blocks are harder to parallelize (see Theorem 2 in Appendix B).
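Structurally, the lock-free setup amounts to the following sketch; Python threads are shown only to illustrate control flow, since speedups like those reported here require an implementation without a global interpreter lock (e.g., in C++):

```python
import random
import threading

def run_lockfree(blocks, update_block, objective, num_workers, num_evals):
    """Workers update random blocks in shared memory; a monitor thread
    evaluates the objective a fixed number of times, then stops the run."""
    stop = threading.Event()

    def worker():
        while not stop.is_set():
            update_block(random.choice(blocks))  # inconsistent reads/writes allowed

    def monitor():
        for _ in range(num_evals):
            print(objective())
        stop.set()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    threads.append(threading.Thread(target=monitor))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```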
To assess the performance of our technique we use pairwise graphical models of different densities.
In particular, we use a "connection width" of 2, 4, or 8. This means we connect variables in the grid by pairwise factors if their ℓ1-norm distance is less than 2, 4, or 8.

Figure 3: Speedup w.r.t. a single thread obtained for a specific number of threads for our approach (a) and HOGWILD! (b), using a variety of MRF neighborhoods (2, 4, 8) and different numbers of iterations (200, 400). Speedups are shown for γ = 1 and 8 states. (c) shows the speedup of our method compared to HOGWILD!.

A "connection width" of 2 is often also referred to as an 8-neighborhood, because a random variable is connected to its eight immediate neighbors. A "connection width" of 4 or 8 connects a random variable to 48 or 224 neighboring variables, respectively. Hence, the connectivity of the employed graphical model is reasonably dense
to observe inconsistent reads and writes. At the same time our experiments cover connection densities
well above many typical graphical models used in practice.
Convergence: In a first experiment we investigate the convergence behavior of our approach and
the HOGWILD! implementation for different graphical model configurations. We examine the
behavior when using one to 46 threads, where the number of threads is not adapted, but remains
fixed throughout the run. The stepsize parameter, necessary in the case of HOGWILD!, is chosen
to be as large as possible while still ensuring convergence (following Recht et al. [2011]). Note
that our approach is hyper-parameter free. Hence no tuning is required, which we consider an
important practical advantage. We also evaluated HOGWILD! using a diminishing stepsize, but
found those results to be weaker than the ones reported here. Also note that a diminishing stepsize
introduces yet another hyper-parameter. Our results are provided in Fig. 2 for γ = 1 and 8 states
per random variable. We assess different MRF configurations (2, 4, 8 connectivity), and iterations
(200, 400). Irrespective of the chosen setup, we observe monotone convergence even with 46 threads
at play for both approaches. In neither of our configurations do we observe any instability during
optimization. As expected, we also observe the exact minimization employed in our approach to
result in significantly faster descent than use of the gradient (i.e., HOGWILD!). This is consistent
with the comparison of these methods in the sequential setting.
Thread speedup: In our second experiment we investigate the speedup obtained when using an
increasing number of threads. To this end we use the smallest dual value obtained with a single thread
and illustrate how much faster we are able to obtain an identical or better value when using more than
one thread during computation. The results for all the investigated graphical model configurations
are illustrated in Fig. 3 (a) for our approach and in Fig. 3 (b) for HOGWILD!. In these figures, we
observe very similar speedups across different graphical model configurations. We also observe that
our approach scales just as well as the gradient based technique does.
HOGWILD! speedup: In our third experiment we directly compare HOGWILD! to our approach.
More specifically, we use the smallest dual value found with the gradient based technique using a
fixed number of threads, and assess how much faster the proposed approach is able to find an identical
or better value when using the same number of threads. We show speedups of our approach compared
to HOGWILD! in Fig. 3 (c). Considering the results presented in the previous paragraphs, speedups
are to be expected. In all cases, we observe the speedups to be larger when using more threads.
Depending on the model setup, we observe speedups to stabilize at values around 45 or higher.
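For concreteness, the speedups reported here and in Fig. 3 can be computed from logged (wall-clock time, dual value) traces roughly as follows; the trace format and helper names are our own illustration, not the paper's code.

def time_to_reach(trace, target):
    # First wall-clock time at which the dual value drops to `target`;
    # `trace` is a chronological list of (time, dual_value) pairs.
    for t, val in trace:
        if val <= target:
            return t
    return float("inf")

def speedup(reference_trace, test_trace):
    # How much faster `test_trace` reaches the best (smallest) dual value
    # found by `reference_trace`, e.g., a single-thread or HOGWILD! run.
    target = min(val for _, val in reference_trace)
    return time_to_reach(reference_trace, target) / time_to_reach(test_trace, target)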
In summary, we found our asynchronous optimization technique to be a compelling practical approach
to infer approximate MAP configurations for graphical models.
6 Conclusion
We believe that parallel algorithms are essential for dealing with the scale of modern problem instances
in graphical models. This has led us to present an asynchronous parallel coordinate minimization
algorithm for MAP inference. Our theoretical analysis provides insights into the effect of stale
updates on the convergence and speedups of this scheme. Our empirical results show the great
potential of this approach, achieving linear speedups with up to 46 concurrent threads.
Future work may include improving the analysis (possibly under additional assumptions), particularly
the restriction on the gradients in Theorems 1 and 2. An interesting extension of our work is to derive
asynchronous parallel coordinate minimization algorithms for other objective functions, including
those arising in other inference tasks, such as marginal inference. Another natural extension is to
try our algorithms on MAP problems from other domains, such as natural language processing and
computational biology, adding to our experiments on disparity estimation in computer vision.
Acknowledgments
This material is based upon work supported in part by the National Science Foundation under Grant
No. 1718221. This work utilized computing resources provided by the Innovative Systems Lab (ISL)
at NCSA.
References
A. Asuncion, P. Smyth, M. Welling, D. Newman, I. Porteous, and S. Triglia. Distributed Gibbs sampling for
latent variable models. 2011.
H. Avron, A. Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate through
randomization. J. ACM, 62(6):51:1–51:27, 2015.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall,
Inc., Upper Saddle River, NJ, USA, 1989. ISBN 0-13-648700-9.
L.-C. Chen*, A. G. Schwing*, A. L. Yuille, and R. Urtasun. Learning Deep Structured Models. In Proc. ICML,
2015. *equal contribution.
J. Choi and R. A. Rutenbar. Hardware implementation of MRF MAP inference on an FPGA platform. In Field
Programmable Logic, 2012.
D. Davis, B. Edmunds, and M. Udell. The sound of APALM clapping: Faster nonsmooth nonconvex optimization
with stochastic asynchronous PALM. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett,
editors, Advances in Neural Information Processing Systems 29, pages 226–234. 2016.
A. Desmaison, R. Bunel, P. Kohli, P. H. Torr, and M. P. Kumar. Efficient continuous relaxations for dense CRF. In
European Conference on Computer Vision, pages 818–833, 2016.
A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS. MIT Press, 2008.
J. Gonzalez, Y. Low, and C. Guestrin. Parallel Inference on Large Factor Graphs. Cambridge University Press,
2011.
T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate
inference. IEEE Transactions on Information Theory, 56(12):6294–6316, 2010.
C.-J. Hsieh, H.-F. Yu, and I. S. Dhillon. PASSCoDe: Parallel asynchronous stochastic dual co-ordinate descent. In
ICML, volume 15, pages 2370–2379, 2015.
S. Hurkat, J. Choi, E. Nurvitadhi, J. F. Martínez, and R. A. Rutenbar. Fast hierarchical implementation of
sequential tree-reweighted belief propagation for probabilistic inference. In Field Programmable Logic, pages
1–8, 2015.
J. Johnson. Convex Relaxation Methods for Graphical Models: Lagrangian and Maximum Entropy Approaches.
PhD thesis, EECS, MIT, 2008.
J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger,
J. Lellmann, N. Komodakis, B. Savchynskyy, and C. Rother. A comparative study of modern inference
techniques for structured discrete energy minimization problems. International Journal of Computer Vision,
115(2):155–184, 2015.
D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 28(10):1568–1583, 2006.
N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: Message-passing
revisited, 2007.
P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In
Advances in Neural Information Processing Systems 24, pages 109–117. 2011.
J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties.
SIAM Journal on Optimization, 25(1):351–376, 2015.
J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent
algorithm. Journal of Machine Learning Research, 16:285–322, 2015.
N. Ma, Y. Xia, and V. K. Prasanna. Data parallelism for belief propagation in factor graphs. In 2011 23rd
International Symposium on Computer Architecture and High Performance Computing, pages 56–63, 2011.
O. Meshi, T. Jaakkola, and A. Globerson. Smoothed coordinate descent for MAP inference. In S. Nowozin, P. V.
Gehler, J. Jancsary, and C. Lampert, editors, Advanced Structured Prediction. MIT Press, 2014.
O. Meshi, M. Mahdavi, and A. G. Schwing. Smooth and strong: MAP inference with linear convergence. In
Neural Information Processing Systems, 2015.
Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on
Optimization, 22(2):341–362, 2012.
Z. Peng, Y. Xu, M. Yan, and W. Yin. ARock: An algorithmic framework for asynchronous parallel coordinate
updates. SIAM Journal on Scientific Computing, 38(5):A2851–A2879, 2016.
N. Piatkowski and K. Morik. Parallel inference on structured data with CRFs on GPUs. In International Workshop
at ECML PKDD on Collective Learning and Inference on Structured Data (COLISD2011), 2011.
B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient
descent. In Advances in Neural Information Processing Systems 24. 2011.
B. Savchynskyy, S. Schmidt, J. Kappes, and C. Schnörr. A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling. CVPR, 2011.
A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed Message Passing for Large Scale Graphical
Models. In Proc. CVPR, 2011.
A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Globally Convergent Dual MAP LP Relaxation Solvers
using Fenchel-Young Margins. In Proc. NIPS, 2012.
A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Globally Convergent Parallel MAP LP Relaxation
Solver using the Frank-Wolfe Algorithm. In Proc. ICML, 2014.
Y. Shimony. Finding the MAPs for belief networks is NP-hard. Artificial Intelligence, 68(2):399–410, 1994.
S. Singh, A. Subramanya, F. Pereira, and A. McCallum. Distributed MAP inference for undirected graphical
models. In Neural Information Processing Systems (NIPS) Workshop on Learning on Cores, Clusters, and
Clouds (LCCC), 2010.
D. Sontag, A. Globerson, and T. Jaakkola. Introduction to dual decomposition for inference. In Optimization for
Machine Learning, pages 219–254. MIT Press, 2011.
P. Tseng. On the rate of convergence of a partially asynchronous gradient projection algorithm. SIAM Journal
on Optimization, 1(4):603–619, 1991.
M. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now
Publishers Inc., Hanover, MA, USA, 2008.
Y.-X. Wang, V. Sadhanala, W. Dai, W. Neiswanger, S. Sra, and E. Xing. Parallel and distributed block-coordinate
Frank-Wolfe algorithms. In Proceedings of The 33rd International Conference on Machine Learning, pages
1548–1557, 2016.
T. Werner. A linear programming approach to max-sum problem: A review. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 29(7):1165–1179, 2007.
T. Werner. Revisiting the decomposition approach to inference in exponential families and graphical models.
Technical Report CTU-CMP-2009-06, Czech Technical University, 2009.
T. Werner. Revisiting the linear programming relaxation approach to Gibbs energy minimization and weighted
constraint satisfaction. IEEE PAMI, 32(8):1474–1488, 2010.
M. Wick, A. McCallum, and G. Miklau. Scalable probabilistic databases with factor graphs and MCMC. Proc.
VLDB Endow., 3(1-2):794–804, 2010.
Y. You, X. Lian, J. Liu, H.-F. Yu, I. S. Dhillon, J. Demmel, and C.-J. Hsieh. Asynchronous parallel greedy
coordinate descent. In Advances in Neural Information Processing Systems 29, pages 4682–4690. 2016.
J. Zhang, A. G. Schwing, and R. Urtasun. Message Passing Inference for Large Scale Graphical Models with
High Order Potentials. In Proc. NIPS, 2014.
R. Zhang and J. T. Kwok. Asynchronous distributed ADMM for consensus optimization. In ICML, pages
1701–1709, 2014.
B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Semantic understanding of scenes through
the ADE20K dataset. arXiv preprint arXiv:1608.05442, 2016.
Multiscale Quantization for Fast Similarity Search
Xiang Wu Ruiqi Guo Ananda Theertha Suresh Sanjiv Kumar
Dan Holtmann-Rice David Simcha Felix X. Yu
Google Research, New York
{wuxiang, guorq, theertha, sanjivk, dhr, dsimcha, felixyu}@google.com
Abstract
We propose a multiscale quantization approach for fast similarity search on large,
high-dimensional datasets. The key insight of the approach is that quantization
methods, in particular product quantization, perform poorly when there is large
variance in the norms of the data points. This is a common scenario for realworld datasets, especially when doing product quantization of residuals obtained
from coarse vector quantization. To address this issue, we propose a multiscale
formulation where we learn a separate scalar quantizer of the residual norm scales.
All parameters are learned jointly in a stochastic gradient descent framework to
minimize the overall quantization error. We provide theoretical motivation for the
proposed technique and conduct comprehensive experiments on two large-scale
public datasets, demonstrating substantial improvements in recall over existing
state-of-the-art methods.
1 Introduction
Large-scale similarity search is central to information retrieval and recommendation systems for
images, audio, video, and textual information. For high-dimensional data, several hashing based
methods have been proposed, including randomized [19, 1, 32] and learning-based techniques
[34, 35, 15]. Another set of techniques, based on quantization, have become popular recently due to
their strong performance on real-world data. In particular, product quantization (PQ) [12, 20] and its
variants have regularly claimed top spots on public benchmarks such as GIST1M, SIFT1B [20] and
DEEP10M [3].
In product quantization, the original vector space is decomposed into a Cartesian product of lower
dimensional subspaces, and vector quantization is performed in each subspace independently. Vector
quantization (VQ) approximates a vector $x \in \mathbb{R}^{\dim(x)}$ by finding the closest quantizer in a codebook
C:
$$\phi_{VQ}(x; C) = \operatorname*{argmin}_{c \in \{C_j\}} \|x - c\|_2$$
where $C \in \mathbb{R}^{\dim(x) \times m}$ is a vector quantization codebook with m codewords, and the j-th column $C_j$
represents the j-th quantizer. Similarly, product quantization (PQ) with K subspaces can be defined
as the following concatenation:
$$\phi_{PQ}(x; S = \{S^{(k)}\}) = [\phi_{VQ}(x^{(1)}; S^{(1)});\; \cdots;\; \phi_{VQ}(x^{(K)}; S^{(K)})] \tag{1}$$
where $x^{(k)}$ denotes the subvector of x in the k-th subspace, and $S = \{S^{(k)}\}$, with $S^{(k)} \in \mathbb{R}^{\dim(x^{(k)}) \times l}$, is a collection of
K product quantization codebooks, each with l sub-quantizers.
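To make these definitions concrete, here is an illustrative NumPy sketch of the VQ encoder and the PQ encoder of Eq. (1). The codebook shapes follow the text (C is dim(x) x m, each S^(k) is dim(x^(k)) x l); the function names are ours, not the paper's.

import numpy as np

def vq_encode(x, C):
    # Index of the codeword (column of C) closest to x in Euclidean distance.
    return int(np.argmin(np.linalg.norm(C - x[:, None], axis=0)))

def pq_encode(x, subcodebooks):
    # Split x into K subvectors and vector-quantize each one independently;
    # `subcodebooks` is a list of K arrays, each of shape (d_k, l).
    codes, start = [], 0
    for S_k in subcodebooks:
        d_k = S_k.shape[0]
        codes.append(vq_encode(x[start:start + d_k], S_k))
        start += d_k
    return codes  # K indices, i.e., K * log2(l) bits in total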
Product quantization works well in large part due to the fact that it permits asymmetric distance
computation [20], in which only dataset vectors are quantized while the query remains unquantized.
This is more precise than techniques based on Hamming distances (which generally require hashing
the query), while still being efficient to compute using lookup table operations. We will give a more
detailed background on product quantization variants in Section 1.2.

Figure 1: Variance in data point norms poses a challenge to product quantization. (a) PQ quantization
error on a synthetic dataset $X \in \mathbb{R}^{d \times N}$ grows as the standard deviation of the data point norms $\|x\|_2$
increases. The mean of the dataset is zero, $\mu(x) = 0$, and the average squared norm is fixed,
$\mu(\|x\|_2^2) = 1$. In both settings, m = 16 codes are generated per data point, one with l = 16
sub-quantizers per subspace and the other with l = 256. (b) Ratio between the standard deviation
$\sigma(\|r_x\|_2)$ and the normalization factor $\sqrt{\mu(\|r_x\|_2^2)}$, where $r_x$ represents the residual after vector (coarse)
quantization on the real-world SIFT1M dataset.
1.1 Motivation of Multiscale
Empirically, product quantization works the best when the variance in each subspace is roughly
balanced [20]. To ensure this, a rotation matrix is often applied to the data prior to performing
quantization. This rotation can be either random [20] or learned [11, 30, 39].
In this work, however, we argue that the quality of the product quantization codebook also degenerates
when there is variance in the norms of the data points being encoded?even when the variance is
relatively moderate. We illustrate this point by generating synthetic datasets such that: (1) the dataset
mean is zero; (2) data point direction is chosen uniformly at random; (3) the average squared norm
of the data points is fixed. In Figure 1a, we plot quantization error (MSE) of product quantization
against the standard deviation of the norms of the data points. Clearly, quantization error increases
with the variance of the data point norms. In real-world settings (Figure 1b), the residuals of a coarse
vector quantization of the data also tend to have highly varying norms.
To compensate for the case when there is large variance in norms, we modify the formulation of
product quantization by separately scalar quantizing data point norms, and then unit-normalizing the
data points before applying product quantization. When computing asymmetric distances, this simply
requires a scalar multiplication of the PQ codebook once per scalar quantizer, which has negligible
computational cost in practice.
To scale quantization based search techniques to massive datasets, a popular strategy is to first
vector quantize the input vectors in the original space (coarse quantization), and then apply product
quantization on the vector quantization residuals [20]. However, in such a ?VQ-PQ? style approach,
the norms of the residuals exhibit significant variance. Therefore, the proposed multiscale approach
provides significant gains for massive search even when the original data is fully normalized.
1.2 Related Works
The original idea of product quantization traces back to early works of signal processing [14, 12].
Jégou et al. [20] first introduced efficient asymmetric distance computation (ADC) and applied it to
the approximate nearest neighbor (ANN) search problem. Since then, there have been multiple lines
of work focused on improving PQ.
Coarse Quantizer. Also termed the inverted file (IVF) indexing structure in Jégou et al. [20], this
approach learns a vector quantization of the data points via clustering, using the cluster indices to
form an inverted index storing all data points corresponding to a given cluster index consecutively.
A data point is encoded via PQ codes associated with the residual (offset) of the data point from its
closest cluster center. This design enables non-exhaustive search by searching only a subset of the m
clusters/partitions in the inverted index. However, previous works have learned coarse quantizers as a
separate preprocessing step, without training the coarse quantizers jointly with the PQ codebooks.
Rotation Matrix. Since PQ quantizes each subspace independently, a rotation matrix can be applied
to reduce the intra-subspace statistical dependence. Researchers have proposed multiple ways to
estimate such a rotation matrix: Norouzi and Fleet [30] use ITQ [13] style alternating quantization;
Optimized PQ [11] also applied a simple strategy to minimize the quantization error; Locally
Optimized PQ [22] learns a separate R for each coarse partition (and incurs the extra overhead
of multiplying each local rotation with the query vector to compute lookup tables specific to each
partition). In high-dimensional setup, Zhang et al. [39] address the scalability issue in learning the
d ? d rotation matrix by imposing a Kronecker product structure. While learning such orthogonal
transformations is a good strategy in general, it does not change the norm of data points. Thus it still
suffers from norm variance as discussed in Section 1.1.
Additive Codebooks. Another line of research is focused on learning additive codebooks instead of
subspace codebooks. This includes additive quantization [5, 6, 26], composite quantization [37, 38]
and stacked quantization [27]. Since they do not work in subspaces, additive codebooks don't require
rotation, although they are harder to learn and more expensive to encode. Empirically, such additive
codebooks are more expressive, and outperform OPQ at lower bitrates. However, OPQ achieves
similar performance at higher bitrates. Since additive codebooks don't address the variance of data
point norms, the proposed multiscale approach can also be applied to additive codebooks as well.
Implementation Improvements. Much effort has been put into optimizing the implementation of
ADC, as it is computationally critical. Douze et al. [10] propose using Hamming distance for fast
pruning. Johnson et al. [21] come up with an efficient GPU implementation for ADC lookup. André
et al. [2] propose to use SIMD-based computation to compute lower bounds for ADC. Our method is
compatible with all of these improvements. We also discuss our ADC implementation in Section 4.4.
Non-quantization Techniques. There is a large body of similarity search literature on nonquantization based methods in both inner product search and nearest neighbor search. Tree based
methods [7, 29, 9], graph based methods [16] and locality sensitive hashing style algorithms [19, 1, 32]
focus on non-exhaustive search by partitioning the search space. In practice, these often lead to
random memory accesses, and are often combined with exhaustive methods in ways similar to
IVFADC [20, 4, 31, 28]. Binary embedding based approaches [36, 24, 18, 13, 17, 25] focus on
learning short binary codes, and can be searched efficiently in Hamming space. However, there
is typically a large gap between the precision of distance computations in Hamming vs. product
codes under the same bitrate, and ADC can be computed with similar speed ([2, 21], Section 4.4).
Therefore, we focus on comparison to ADC based techniques in this paper.
1.3 Contributions
We propose a complete end-to-end training algorithm to learn coarse quantizers, a rotation matrix,
and product quantization codebooks, together with scalar quantizers to capture coarse quantization
residual norms. This differs from prior work in that it (a) identifies and addresses the problem of
variance in data point norms; (b) includes coarse quantizers as a part of the optimization; and (c) is end-to-end trainable using stochastic gradient descent (SGD), which leads to a significant improvement
in quantization error compared to previous methods using alternating optimization [30, 11]. We
also present ablation tests demonstrating the importance of each component of the algorithm in
Section 4.2. In addition, we present theoretical motivation for our approach in Section 3.
2 Methodology
We focus on minimizing the quantization error $\|x - \tilde{x}\|_2$, where x is a data point and $\tilde{x}$ is its quantized
approximation, as a proxy for minimizing the query-database distance approximation error
$\big| \|q - x\|_2 - \|q - \tilde{x}\|_2 \big|$. State-of-the-art quantization techniques take a hierarchical approach [11, 27].
For instance, one or more "coarse" quantization stages (VQ) can be followed by product quantization
(PQ) of the vector quantization residuals. A learned rotation is often applied to the residuals prior to
product quantization to further reduce quantization error [11].
This style of approach provides two key benefits:
1. Real world data is often clusterable, with the diameter of clusters substantially lower than the
diameter of the dataset as a whole. The vector quantization can thus be used to obtain a ?residual
dataset? with much smaller diameter, yielding significant reductions in quantization error when
quantized with only a product code [15].
2. By additionally learning a rotation of the VQ residuals, the variance within each PQ subspace can
be significantly reduced for many real world datasets, yielding substantially lower quantization
error and correspondingly higher recall.
As noted in Section 1.1, an additional source of quantization error when performing product quantization is the variance of data point norms. We extend the above strategy by explicitly representing the
norm of VQ residuals, learning a PQ codebook only on the unit-normalized rotated VQ residuals,
while separately scalar quantizing the residual norm scales. Specifically, multiscale quantization
employs the following steps: (1) vector quantization of the dataset; (2) learned rotation of the vector
quantization residuals; (3) reparameterization of the rotated residuals into direction and scale components; (4) product quantization of the direction component; (5) scalar quantization of the scale
component.
Formally, in multiscale quantization, the rotated residual $r_x$ and its $\ell_2$-normalized version $\hat{r}_x$ are
defined as:
$$r_x = R(x - \phi_{VQ}(x)), \qquad \hat{r}_x = r_x / \|r_x\|_2$$
A data point $x \in \mathbb{R}^d$ is then approximated by
$$x \approx \tilde{x} = \phi_{VQ}(x) + \tilde{r}_x, \quad \text{where} \quad \tilde{r}_x = \phi_{SQ}(\lambda_x)\, R^T \phi_{PQ}(\hat{r}_x) \quad \text{and} \quad \lambda_x = \|r_x\|_2 / \|\phi_{PQ}(\hat{r}_x)\|_2 \tag{2}$$
From above, $\phi_{VQ}(x) = \operatorname{argmin}_{c \in \{C_j\}} \|x - c\|_2$ returns the closest vector quantization codeword for
x; $C \in \mathbb{R}^{d \times m}$ is a vector quantization codebook with m codewords; $C_j$ is its j-th codeword (i.e., the
j-th column of C); the matrix $R \in \mathbb{R}^{d \times d}$ is a learned rotation, applied to the residuals of vector
quantization; the residual norm scale $\lambda_x$ is a scalar multiplier to the product quantized $\phi_{PQ}(\hat{r}_x)$ that
helps preserve the $\ell_2$ norm of the rotated residual $r_x$; and $\phi_{SQ}$ returns the nearest scalar quantizer
from a scalar quantization codebook $W \in \mathbb{R}^p$ with p codewords (equivalent to one-dimensional
vector quantization). The product quantizer $\phi_{PQ}(\hat{r}_x)$ is given by
$$\phi_{PQ}(\hat{r}_x) = \begin{pmatrix} \phi^{(1)}_{PQ}(\hat{r}^{(1)}_x) \\ \phi^{(2)}_{PQ}(\hat{r}^{(2)}_x) \\ \vdots \\ \phi^{(K)}_{PQ}(\hat{r}^{(K)}_x) \end{pmatrix}, \qquad \hat{r}_x = \begin{pmatrix} \hat{r}^{(1)}_x \\ \hat{r}^{(2)}_x \\ \vdots \\ \hat{r}^{(K)}_x \end{pmatrix}$$
as the concatenation of codewords obtained by dividing the rotated and normalized residual $\hat{r}_x$
into K subvectors $\hat{r}^{(k)}_x$, $k = 1, 2, \cdots, K$, and quantizing the subvectors independently by vector
quantizers $\phi^{(k)}_{PQ}(\cdot)$ to minimize quantization error:
$$\phi^{(k)}_{PQ}(\hat{r}^{(k)}_x) = \operatorname*{argmin}_{s \in \{S^{(k)}_j\}} \|\hat{r}^{(k)}_x - s\|_2.$$
Hence, $S^{(k)} \in \mathbb{R}^{d^{(k)} \times l}$ is the vector quantization codebook for the k-th subspace (with l codewords).
Frequently, $d^{(k)}$, the dimension of the k-th subvector, is simply $\frac{d}{K}$, although subvectors of varying
size are also possible.
The quantized, normalized residuals are represented by the K indices $\operatorname{index}(\phi^{(k)}_{PQ}(\hat{r}^{(k)}_x))$, $k =
1, \cdots, K$. This representation has an overall bitrate of $K \log_2 l$, where K is the number of subspaces,
and l is the number of product quantizers in each subspace. The residual norm scales are maintained by
organizing the residuals associated with a VQ partition into groups, where within a group all residuals
have the same quantized norm scale. The groups are ordered by quantized norm scale, and thus
only the indices of group boundaries need to be maintained. The total storage cost including group
boundaries and scalar quantization levels is thus O(mp), where m is the number of vector quantizers and
p is the number of scalar quantizers. In our experiments, we set p to 8, which we find has a negligible
effect on recall compared with using unquantized norm scales.
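The encoding just described can be summarized in a short sketch. This illustrates Eq. (2) rather than the actual training code: it reuses the hypothetical vq_encode/pq_encode helpers from the sketch in Section 1, and it approximates the learned scalar quantizer by snapping to the nearest of a given set of levels.

import numpy as np

def pq_decode(codes, subcodebooks):
    # Concatenate the selected codewords from each subspace.
    return np.concatenate([S_k[:, c] for S_k, c in zip(subcodebooks, codes)])

def msq_encode(x, C, R, subcodebooks, scale_levels):
    j = vq_encode(x, C)                      # (1) coarse vector quantization
    r = R @ (x - C[:, j])                    # (2) learned rotation of the residual
    r_hat = r / np.linalg.norm(r)            # (3) unit-normalize the direction
    codes = pq_encode(r_hat, subcodebooks)   # (4) product-quantize the direction
    lam = np.linalg.norm(r) / np.linalg.norm(pq_decode(codes, subcodebooks))
    s = int(np.argmin(np.abs(np.asarray(scale_levels) - lam)))  # (5) quantize the scale
    return j, codes, s                       # VQ index, K PQ indices, scale index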
2.1 Efficient Search under Multiscale Quantization
The multiscale quantization model enables nearest neighbor search to be carried out efficiently. For
a query q, we compute the squared $\ell_2$ distance of q with each codeword in the vector quantization
codebook C, and search further within the nearest VQ partition. Suppose the corresponding quantizer
is $c^*_q = \operatorname{argmin}_{c \in \{C_j\}} \|q - c\|_2$, and the corresponding quantization partition is
$P^*_q = \{x \in \{X_j\}_{[N]} \mid \phi_{VQ}(x) = c^*_q\}$. Then, the approximate squared $\ell_2$ distances between the query and database
points in $P^*_q$ are computed using a lookup table. The final prediction is made by taking the database
point with the smallest approximate distance, i.e.
$$x^{pred}_q = \operatorname*{argmin}_{x \in P^*_q} \Big( \|q - c^*_q\|_2^2 - 2 R(q - c^*_q) \cdot \big[\phi_{SQ}(\lambda_x)\, \phi_{PQ}(\hat{r}_x)\big] + \|\phi_{SQ}(\lambda_x)\, \phi_{PQ}(\hat{r}_x)\|_2^2 \Big).$$
We use a lookup table to compute the quantized inner product between subvectors of the query's
rotated VQ residual $\hat{q} = R(q - c^*_q)$ and the scaled product quantized data point residuals
$\phi_{SQ}(\lambda_x)\, \phi_{PQ}(\hat{r}_x)$. Letting $\hat{q}^{(k)}$ be the k-th subvector of $\hat{q}$ and $w_x = \phi_{SQ}(\lambda_x)$ the quantized norm
scale, we first precompute the inner products and the squared quantized $\ell_2$ norm with the PQ codebook S
as $v^{(k)}_j = -2\hat{q}^{(k)} \cdot w_x S^{(k)}_j + w_x^2 \|S^{(k)}_j\|_2^2$ for all j and k, giving K lookup tables $v^{(1)}, \ldots, v^{(K)}$, each
of dimension l. We can then compute
$$-2\hat{q} \cdot w_x \phi_{PQ}(\hat{r}_x) + w_x^2 \|\phi_{PQ}(\hat{r}_x)\|_2^2 = \sum_{k=1}^{K} v^{(k)}_{\operatorname{index}(\phi^{(k)}_{PQ}(\hat{r}^{(k)}_x))}.$$
In practice, instead of searching only one vector quantization partition, one can use soft vector
quantization and search the t partitions with the lowest $\|q - C_j\|_2$. The final complexity of the search
is $O(\frac{NtK}{m})$.
In our implementation, since all the data points with the same quantized norm scale are stored in
consecutive groups, we need only create a new lookup table at the beginning of a new group, by
combining the scale-independent lookup tables of $-2\hat{q}^{(k)} \cdot S^{(k)}_j$ and $\|S^{(k)}_j\|_2^2$ (multiplied by $w_x$ and $w_x^2$,
respectively) using hardware-optimized fused multiply-add instructions. We incur this computation
cost only p times for a VQ partition, where p is the number of scalar quantizers. In our experiments, we
set p = 8 and the number of VQ partitions to search to t = 8, maintaining relatively low performance
overhead. We discuss the lookup table implementation further in Section 4.4.
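The table construction and per-point distance evaluation described above can be sketched as follows. Variable names are ours; R is orthogonal, so the rotated query residual q_hat can be matched directly against the PQ codebook.

import numpy as np

def build_tables(q_hat, subcodebooks, w_x):
    # v^(k)_j = -2 q_hat^(k) . (w_x S^(k)_j) + w_x^2 ||S^(k)_j||^2, per subspace.
    tables, start = [], 0
    for S_k in subcodebooks:
        d_k = S_k.shape[0]
        q_k = q_hat[start:start + d_k]
        tables.append(-2.0 * w_x * (q_k @ S_k)
                      + (w_x ** 2) * np.sum(S_k ** 2, axis=0))
        start += d_k
    return tables  # K tables, each with l entries

def adc_distance(q, c_star, tables, codes):
    # ||q - c*||^2 plus the table-summed cross and norm terms for one point.
    base = np.sum((q - c_star) ** 2)
    return base + sum(table[c] for table, c in zip(tables, codes))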
2.2 Optimization Procedure
We can explicitly formulate the mean squared loss as a function of our parameter vector $\theta =
(C; \{S^{(k)}\}_{[K]}; R; \{W_i\}_{[m]})$ per our approximation formulation (2). Here, $W_i$ represents the parameter vector for the scalar quantizer of norm scales in partition i. To jointly train the parameters
of the model, we use stochastic gradient descent. To optimize the orthogonal transformation of
vector quantization residuals while maintaining orthogonality, we parameterize it via the Cayley
characterization of orthogonal matrices [8]:
$$R = (I - A)(I + A)^{-1}, \tag{3}$$
where A is a skew-symmetric matrix, i.e., $A = -A^T$. Note that (3) is differentiable w.r.t. the
$d(d-1)/2$ parameters of A. Computing the gradient requires an inversion of a $d \times d$ matrix at each
iteration. However, we found this tradeoff to be acceptable for datasets with dimensionalities in the
hundreds to thousands. When applying this method to higher-dimensional datasets, one can restrict the
number of parameters of A to trade off capacity and computational cost.
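A small sketch of the Cayley map of Eq. (3): any skew-symmetric A yields an orthogonal R, so gradient steps on the d(d-1)/2 free entries of A preserve orthogonality by construction (the helper name is ours).

import numpy as np

def cayley_rotation(params, d):
    # Build R = (I - A)(I + A)^{-1} from the d(d-1)/2 entries above A's diagonal.
    A = np.zeros((d, d))
    A[np.triu_indices(d, k=1)] = params
    A = A - A.T                            # enforce skew-symmetry: A = -A^T
    I = np.eye(d)
    return (I - A) @ np.linalg.inv(I + A)  # orthogonal by construction

# Sanity check: R^T R is (numerically) the identity.
R = cayley_rotation(np.random.randn(3), d=3)
assert np.allclose(R.T @ R, np.eye(3), atol=1e-8)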
The codebook for vector quantization is initialized using random samples from the dataset, while
the codebook for product quantization is initialized using the residuals (after vector quantization,
normalization and rotation) of a set of independent samples. To allow the vector quantization a
chance to partition the space, we optimize only the vector quantization error for several epochs before
initializing the product codes and doing full joint training. The parameters of the skew-symmetric
matrix A were initialized by sampling from N(0, 1e−3).
All optimization parameters were fixed for all datasets (although we note it would be possible to
improve results slightly with more extensive per-dataset tuning). We used the Adam optimization
algorithm [23] with the parameters suggested by the authors, minibatch sizes of 2000, and a learning
rate of 1e−4 during joint training (and 1e−3 when training only the vector quantizers).
To learn the scalar quantizer for residual norm scales and capture their local distribution within a
VQ partition, we jointly optimize the assignment of PQ codes and the scalar quantizer for all data
points within the same partition. Leaving the PQ codebook and rotation fixed, we alternate between
following two steps until convergence:
1. Fix all assigned PQ codes and scalar quantize the norm scales $\lambda_x = \|r_x\|_2 / \|\phi_{PQ}(\hat{r}_x)\|_2$
only within the partition.
2. Fix all quantized norm scales within the partition and reassign PQ codes for $r_x / \phi_{SQ}(\lambda_x)$.
In practice, it only takes a few iterations to converge to a local minimum for every VQ partition.
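A sketch of this alternation for a single partition is given below; pq_assign, pq_decode, and scalar_quantize are assumed helpers for PQ code assignment, codeword concatenation, and nearest-level lookup, respectively.

import numpy as np

def optimize_partition(residuals, codes, scales, subcodebooks, scale_levels,
                       max_iters=10):
    for _ in range(max_iters):
        # Step 1: fix the PQ codes and scalar quantize the norm scales lambda_x.
        new_scales = [scalar_quantize(np.linalg.norm(r)
                                      / np.linalg.norm(pq_decode(c, subcodebooks)),
                                      scale_levels)
                      for r, c in zip(residuals, codes)]
        # Step 2: fix the quantized scales and reassign PQ codes for r_x / phi_SQ(lambda_x).
        new_codes = [pq_assign(r / s, subcodebooks)
                     for r, s in zip(residuals, new_scales)]
        if new_codes == codes and new_scales == scales:
            break  # converged to a local minimum for this partition
        codes, scales = new_codes, new_scales
    return codes, scales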
3 Analysis
Below we provide theoretical motivation and analysis for the components of the proposed quantization
approach, including for multiscale, learned rotation, and coarse quantization stages.
3.1 Multiscale
We first show that adding a scalar quantizer further increases the recall when the norms of the residuals
exhibit large variance. For a query q and a given partition with center $C_j$, if we define $q_j = q - C_j$,
then the $\ell_2$ error caused by residual quantization is
$$\big|\|q_j - r_x\|_2^2 - \|q_j - \tilde{r}_x\|_2^2\big| = \big|{-2} q_j \cdot (r_x - \tilde{r}_x) + \|r_x\|_2^2 - \|\tilde{r}_x\|_2^2\big| \le |2 q_j \cdot (r_x - \tilde{r}_x)| + \big|\|r_x\|_2^2 - \|\tilde{r}_x\|_2^2\big|.$$
The first, query-dependent term can be further transformed as
$$|2 q_j \cdot (r_x - \tilde{r}_x)| = 2\sqrt{[(r_x - \tilde{r}_x)^T q_j][q_j^T (r_x - \tilde{r}_x)]} = 2\sqrt{(r_x - \tilde{r}_x)^T (q_j q_j^T)(r_x - \tilde{r}_x)}.$$
Taking the expectation w.r.t. q yields
$$\mathbb{E}_q |2 q_j \cdot (r_x - \tilde{r}_x)| \le 2\sqrt{\mathbb{E}_q\big[(r_x - \tilde{r}_x)^T (q_j q_j^T)(r_x - \tilde{r}_x)\big]} = 2\sqrt{(r_x - \tilde{r}_x)^T\, \mathbb{E}_q(q_j q_j^T)\, (r_x - \tilde{r}_x)},$$
where the inequality follows from Jensen's inequality. If $\sigma_q$ is the largest eigenvalue of the covariance
matrix $\mathbb{E}_q(q_j q_j^T)$, then
$$\mathbb{E}_q \big|\|q_j - r_x\|_2^2 - \|q_j - \tilde{r}_x\|_2^2\big| \le 2\sqrt{\sigma_q}\, \|r_x - \tilde{r}_x\|_2 + \big|\|r_x\|_2^2 - \|\tilde{r}_x\|_2^2\big|.$$
Existing quantization methods have focused on the first term in the error of the $\ell_2$ distance. However, for
VQ residuals with large variance in $\|r_x\|_2$, the second, quadratic term becomes dominant. By scalar
quantizing the residual norm scales, especially locally within each VQ partition, we can reduce the
second term substantially and thus improve recall on real datasets.
3.2 Rotation Matrix
Performing quantization after a learned rotation has been found to work well in practice [13, 30].
Here we show rotation is required in some scenarios. Let $x_i = R y_i$, $1 \le i \le n$. We show that
there exist simple examples where the $y_i$'s have a product code with small codebook size and MSE
0, whereas to get any small MSE on the $x_i$'s one may need to use exponentially many codewords. On
real-world datasets, this difference might not be quite so pronounced, but it is still significant, and
hence undoing the rotation can yield significantly better MSE. We provide the following lemma (see
the supplementary material for a proof).
Lemma 1. Let $X = RY$, i.e., for $1 \le i \le n$, $x_i = R y_i$. There exists a dataset Y and a rotation
matrix R such that a canonical basis product code of size 2 is sufficient to achieve an MSE of 0 for Y,
whereas any product code on X requires $2^{c \cdot \min(d/K, K)}$ codewords to achieve MSE $\epsilon \|x\|_{\max}$, where
c is some universal constant and $\|x\|_{\max}$ is the maximum $\ell_2$ norm of any data point.
Figure 2: (a) Breakdown of the contribution to MSE reduction from each component in our model
on the SIFT1M and DEEP10M datasets at different bitrates. The baseline is the original IVFADC
setup with no rotation or norm scale quantization. (b) Time spent per query by different distance
computation methods on linear search of a database of size $|X| = 2^7, 2^8, 2^9, \cdots, 2^{16}$ under 128 bits.
Lower curves indicate faster search time.
3.3 Coarse Quantization
We analyze the proposed vector and product quantization when the data is generated by a K-subspace
mixture model that captures two properties observed in many real-world data sets: samples belong to
one of several underlying categories, also referred to as components, and within each component the
residuals are generated independently in K subspaces. The precise model is defined in Appendix B.
For a query q, let $x^*_q$ be the sample that minimizes $\|q - x\|_2$. Let $x^{VQ}_q$ be the output of the hierarchical
nearest neighbor algorithm that first finds the nearest cluster center and then searches within that
cluster. We show that if q is generated independently of x, then with high probability it returns an
$x^{VQ}_q$ that is near-optimal.
Theorem 1. Given n samples from an underlying K-subspace mixture model that has been clustered
correctly and an independently generated query q, with probability $1 - \delta$,
$$\|q - x^*_q\|_2^2 - \|q - x^{VQ}_q\|_2^2 \le 8b\sqrt{\frac{dr^2}{2K}\log\frac{4n}{\delta}} + 4r^2\sqrt{\frac{d^2}{2K}\log\frac{2n}{\delta}}.$$
See Appendix B for a proof. Note that $r = \max_{x \in X} \|r_x\|_\infty$ is the maximum value of the residual
in any coordinate and offers a natural scaling for our problem, and $b = \max_{x \in X} \|q - x\|_2$ is the
maximum distance between q and any data point.
4 Experiments
4.1 Evaluation Datasets
We evaluate the performance of end-to-end trained multiscale quantization (MSQ) on the SIFT1M [20]
and DEEP10M [3] datasets, which are often used in benchmarking the performance of nearest
neighbor search. SIFT1M [20] contains 1 million 128-dimensional SIFT descriptors extracted from
Flickr images. DEEP10M is introduced in [3], by extracting 96 PCA components from the final
hidden layer activations of GoogLeNet [33].
At training time, each dataset is indexed with 1024 VQ coarse quantizers. At query time, quantized
residuals from the 8 partitions closest to the query are further searched using ADC to generate the
final nearest neighbors. We report results both in terms of quantization error (MSE, Section 4.2) and
retrieval recall (Recall1@N, Section 4.3). Often, the two metrics are strongly correlated.
4.2 Ablation Tests
Compared to IVFADC [20], which uses plain PQ with coarse quantizers, our end-to-end trained MSQ
reduces quantization error by 15-20% on SIFT1M, and 20-25% on DEEP10M, which is a substantial
reduction. Multiple components contribute to this reduction: (1) learned rotation of the VQ residuals;
(2) separate quantization of the residual norms into multiple scales; and (3) end-to-end training of all
parameters.
Figure 3: Recall curves when retrieving Top-1 neighbors (Recall1@N) on the SIFT1M dataset with
varying numbers of codebooks and centers. We search t = 8 out of m = 1024 VQ partitions.
In order to understand the effect of each component, we plot the MSE reduction relative to IVFADC [20] for several ablation tests (Figure 2a). On DEEP10M, the proposed multiscale approach
and the end-to-end learning contribute an additional 5-10% MSE reduction on top of learned rotation, while they contribute 10-15% on SIFT1M. It is important to note that on SIFT1M, multiscale
quantization and end-to-end training have a bigger impact than learned rotation, which is itself often
considered to yield a significant improvement.
4.3 Recall Experiments
We compare the proposed end-to-end trained multiscale quantization method against three baselines
methods: product quantization (PQ) [20], optimized product quantization (OPQ) [11] and stacked
quantizers (SQ) [27]. We generate ground-truth results using brute force search, and compare the
results of each method against ground-truth in fixed-bitrate settings.
For fixed-bitrate experiments, we show recall curves for varying numbers of PQ codebooks from the
range {8, 16, 32} for the SIFT1M dataset and {6, 12, 24} for the DEEP10M dataset. For each number
of codebooks, we experimented with both 16 centers for in-register table lookup and 256 centers
for in-memory table lookup in Figure 3 and 4. From the recall curves, it is clear that multiscale
quantization performs better than all baselines across both datasets in all settings.
4.4 Speed Benchmarks
We use the same indexing structure (IVF), and the same ADC computation implementation for all
baselines (PQ [20], OPQ [11], SQ [27]). Thus the speed of all baselines is essentially identical at the
same bitrate, meaning Figures 3 and 4 are both fixed-memory and fixed-time, and thus directly comparable. For codebooks with 256 centers, we implemented an in-memory lookup table (LUT256) [20]; for
codebooks with 16 centers, we implemented an in-register lookup table (LUT16) using the VPSHUFB
instruction from AVX2, which performs 32 lookups in parallel.
Also, we notice that there have been different implementations of ADC. The original algorithm
proposed in [20] uses in-memory lookup tables. We place tables in SIMD registers and leverage
SIMD instructions for fast lookup. Similar ideas are also reported in recent literature [10, 17, 2].
Here we put them on equal footing and provide a comparison of different approaches. In Figure 2b,
we plot the time for distance computation at the same bitrate. Clearly, VPSHUFB-based LUT16
achieves almost the same speed as POPCNT-based Hamming distance computation, and both are 5x faster
than in-memory ADC. As a practical observation, when the number of neighbors to be retrieved
is large, Recall1@N of LUT256 and LUT16 is often comparable at the same bitrate, and LUT16
with 5x speed up is almost always preferred.
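To illustrate what LUT16 computes, the sketch below emulates its semantics in NumPy: with 16 centers per subspace each table fits in a 128-bit SIMD register, and one VPSHUFB performs 32 such byte lookups at once. The uint8 quantization of table entries is an assumption for illustration.

import numpy as np

def lut16_scan(tables, codes):
    # tables: (K, 16) uint8 quantized distance tables; codes: (N, K) 4-bit codes.
    # Returns approximate distances for all N database points.
    N, _ = codes.shape
    dists = np.zeros(N, dtype=np.uint16)
    for k in range(len(tables)):
        dists += tables[k][codes[:, k]]  # the gather that VPSHUFB vectorizes
    return dists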
Figure 4: Recall curves when retrieving Top-1 neighbors (Recall1@N) on the DEEP10M dataset with
varying numbers of codebooks and centers. We search t = 8 out of m = 1024 VQ partitions.
5 Conclusions
We have proposed an end-to-end trainable multiscale quantization method that minimizes overall
quantization loss. We introduce a novel scalar quantization approach to account for the variances in
data point norms, which is both empirically and theoretically motivated. Together with the end-to-end
training, this contributes to large reduction in quantization error over existing competing methods that
already employ optimized rotation and coarse quantization. Finally, we conducted comprehensive
nearest neighbor search retrieval experiments on two large-scale, publicly available benchmark
datasets, and achieve considerable improvement over state-of-the-art.
6 Acknowledgements
We thank Jeffrey Pennington and Chen Wang for their helpful comments and discussions.
References
[1] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. Practical and
optimal LSH for angular distance. In Advances in Neural Information Processing Systems, pages 1225–1233,
2015.
[2] Fabien André, Anne-Marie Kermarrec, and Nicolas Le Scouarnec. Cache locality is not enough: high-performance nearest neighbor search with product quantization fast scan. Proceedings of the VLDB
Endowment, 9(4):288–299, 2015.
[3] A. Babenko and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2055–2063, June 2016.
[4] Artem Babenko and Victor Lempitsky. The inverted multi-index. In Computer Vision and Pattern
Recognition (CVPR), 2012 IEEE Conference on, pages 3069–3076. IEEE, 2012.
[5] Artem Babenko and Victor Lempitsky. Additive quantization for extreme vector compression. In Computer
Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 931–938. IEEE, 2014.
[6] Artem Babenko and Victor Lempitsky. Tree quantization for large-scale similarity search and classification.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4240–4248,
2015.
[7] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Communications
of the ACM, 18(9):509–517, 1975.
[8] Arthur Cayley. Sur quelques propriétés des déterminants gauches. Journal für die reine und angewandte
Mathematik, 32:119–123, 1846.
6,807 | 7,158 | Diverse and Accurate Image Description Using a
Variational Auto-Encoder with an Additive Gaussian
Encoding Space
Liwei Wang
Alexander G. Schwing
Svetlana Lazebnik
{lwang97, aschwing, slazebni}@illinois.edu
University of Illinois at Urbana-Champaign
Abstract
This paper explores image caption generation using conditional variational autoencoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions
with too little variability. Instead, we propose two models that explicitly structure
the latent space around K components corresponding to different types of image
content, and combine components to create priors for images that contain multiple
types of content simultaneously (e.g., several kinds of objects). Our first model
uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel
Additive Gaussian (AG) prior that linearly combines component means. We show
that both models produce captions that are more diverse and more accurate than
a strong LSTM baseline or a "vanilla" CVAE with a fixed Gaussian prior, with
AG-CVAE showing particular promise.
1
Introduction
Automatic image captioning [9, 11, 18-20, 24] is a challenging open-ended conditional generation
task. State-of-the-art captioning techniques [23, 32, 36, 1] are based on recurrent neural nets with
long-short term memory (LSTM) units [13], which take as input a feature representation of a provided
image, and are trained to maximize the likelihood of reference human descriptions. Such methods are
good at producing relatively short, generic captions that roughly fit the image content, but they are
unsuited for sampling multiple diverse candidate captions given the image. The ability to generate
such candidates is valuable because captioning is profoundly ambiguous: not only can the same image
be described in many different ways, but also, images can be hard to interpret even for humans, let
alone machines relying on imperfect visual features. In short, we would like the posterior distribution
of captions given the image, as estimated by our model, to accurately capture both the open-ended
nature of language and any uncertainty about what is depicted in the image.
Achieving more diverse image description is a major theme in several recent works [6, 14, 27, 31, 35].
Deep generative models are a natural fit for this goal, and to date, Generative Adversarial Models
(GANs) have attracted the most attention. Dai et al. [6] proposed jointly learning a generator to
produce descriptions and an evaluator to assess how well a description fits the image. Shetty et
al. [27] changed the training objective of the generator from reproducing ground-truth captions to
generating captions that are indistinguishable from those produced by humans.
In this paper, we also explore a generative model for image description, but unlike the GAN-style
training of [6, 27], we adopt the conditional variational auto-encoder (CVAE) formalism [17, 29].
Our starting point is the work of Jain et al. [14], who trained a "vanilla" CVAE to generate questions
given images. At training time, given an image and a sentence, the CVAE encoder samples a latent z
vector from a Gaussian distribution in the encoding space whose parameters (mean and variance)
come from a Gaussian prior with zero mean and unit variance. This z vector is then fed into a decoder
that uses it, together with the features of the input image, to generate a question. The encoder and the
decoder are jointly trained to maximize (an upper bound on) the likelihood of the reference questions
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Predicted Object Labels:
"person" "cup" "donut" "dining table"
Predicted Object Labels:
"cup" "fork" "knife" "sandwich" "dining table" "mouse"
AG-CVAE:
a woman sitting at a table with a cup of coffee
a person sitting at a table with a cup of coffee
a table with two plates of donuts and a cup of coffee
a woman sitting at a table with a plate of coffee
a man sitting at a table with a plate of food
AG-CVAE:
a close up of a plate of food on a table
a table with a plate of food on it
a plate of food with a sandwich on it
a white plate topped with a plate of food
a plate of food on a table next to a cup of coffee
LSTM Baseline:
a close up of a table with two plates of coffee
a close up of a table with a plate of food
a close up of a plate of food on a table
a close up of a table with two plates of food
a close up of a table with plates of food
LSTM Baseline:
a close up of a plate of food on a table
a close up of a plate of food with a sandwich
a close up of a plate of food
a close up of a plate of food on a white plate
a close up of a plate of food with a sandwich on it
Figure 1: Example output of our proposed AG-CVAE approach compared to an LSTM baseline
(see Section 4 for details). For each method, we show top five sentences following consensus
re-ranking [10]. The captions produced by our method are both more diverse and more accurate.
Object Labels: "person"
AG-CVAE sentences:
a man and a woman standing in a room
a man and a woman are playing a game
a man standing next to a woman in a room
a man standing next to a woman in a field
a man standing next to a woman in a suit
Object Labels: "person", "bus"
AG-CVAE sentences:
a man and a woman sitting on a bus
a man and a woman sitting on a train
a man and woman sitting on a bus
a man and a woman sitting on a bench
a man and a woman are sitting on a bus
Object Labels: "person", "remote"
AG-CVAE sentences:
a man and a woman playing a video game
a man and a woman are playing a video game
a man and woman are playing a video game
a man and a woman playing a game with a remote
a woman holding a nintendo wii game controller
Object Labels: "person", "train"
AG-CVAE sentences:
a man and a woman sitting on a train
a woman and a woman sitting on a train
a woman sitting on a train next to a train
a woman sitting on a bench in a train
a man and a woman sitting on a bench
Figure 2: Illustration of how our additive latent space structure controls the image description process.
Modifying the object labels changes the weight vectors associated with semantic components in
the latent space. In turn, this shifts the mean from which the z vectors are drawn and modifies the
resulting descriptions in an intuitive way.
given the images. At test time, the decoder is seeded with an image feature and different z samples,
so that multiple z's result in multiple questions.
While Jain et al. [14] obtained promising question generation performance with the above CVAE
model equipped with a fixed Gaussian prior, for the task of image captioning, we observed a tendency
for the learned conditional posteriors to collapse to a single mode, yielding little diversity in candidate
captions sampled given an image. To improve the behavior of the CVAE, we propose using a set of K
Gaussian priors in the latent z space with different means and standard deviations, corresponding to
different "modes" or types of image content. For concreteness, we identify these modes with specific
object categories, such as "dog" or "cat." If "dog" and "cat" are detected in an image, we would like to
encourage the generated captions to capture both of them.
Starting with the idea of multiple Gaussian priors, we propose two different ways of structuring
the latent z space. The first is to represent the distribution of z vectors using a Gaussian Mixture
model (GMM). Due to the intractability of Gaussian mixtures in the VAE framework, we also
introduce a novel Additive Gaussian (AG) prior that directly adds multiple semantic aspects in the
z space. If an image contains several objects or aspects, each corresponding to means \mu_k in the
latent space, then we require the mean of the encoder distribution to be close to a weighted linear
combination of the respective means. Our CVAE formulation with this additive Gaussian prior
(AG-CVAE) is able to model a richer, more flexible encoding space, resulting in more diverse and
accurate captions, as illustrated in Figure 1. As an additional advantage, the additive prior gives us an
interpretable mechanism for controlling the captions based on the image content, as shown in Figure
2. Experiments of Section 4 will show that both GMM-CVAE and AG-CVAE outperform LSTMs
and "vanilla" CVAE baselines on the challenging MSCOCO dataset [5], with AG-CVAE showing
marginally higher accuracy and by far the best diversity and controllability.
2
Background
Our proposed framework for image captioning extends the standard variational auto-encoder [17]
and its conditional variant [29]. We briefly set up the necessary background here.
Variational auto-encoder (VAE): Given samples x from a dataset, VAEs aim at modeling the data
likelihood p(x). To this end, VAEs assume that the data points x cluster around a low-dimensional
manifold parameterized by embeddings or encodings z. To obtain the sample x corresponding to an
embedding z, we employ the decoder p(x|z), which is often based on deep nets. Since the decoder's
posterior p(z|x) is not tractably computable, we approximate it with a distribution q(z|x), which is
referred to as the encoder. Taking together all those ingredients, VAEs are based on the identity
\log p(x) - D_{\mathrm{KL}}[q(z|x), p(z|x)] = \mathbb{E}_{q(z|x)}[\log p(x|z)] - D_{\mathrm{KL}}[q(z|x), p(z)],
(1)
which relates the likelihood p(x) and the conditional p(z|x). It is hard to compute the KL-divergence
DKL [q(z|x), p(z|x)] because the posterior p(z|x) is not readily available from the decoder distribution p(x|z) if we use deep nets. However, by choosing an encoder distribution q(z|x) with sufficient
capacity, we can assume that the non-negative KL-divergence DKL [q(z|x), p(z|x)] is small. Thus,
we know that the right-hand-side is a lower bound on the log-likelihood log p(x), which can be
maximized w.r.t. both encoder and decoder parameters.
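As a concrete illustration (not from the paper), the right-hand side of Eq. (1) can be estimated with a single reparameterized sample. The following is a minimal PyTorch sketch, assuming a Gaussian encoder that outputs mu and logvar and a decoder that returns Bernoulli parameters in (0, 1); all names are ours:

import torch
import torch.nn.functional as F

def vae_lower_bound(x, decoder, mu, logvar):
    # Reparameterized sample z ~ q(z|x) = N(mu, diag(exp(logvar))).
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # E_{q(z|x)}[log p(x|z)], estimated with one sample (Bernoulli decoder).
    log_px_z = -F.binary_cross_entropy(decoder(z), x, reduction='sum')
    # Closed-form KL between N(mu, sigma^2 I) and the N(0, I) prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return log_px_z - kl  # lower bound on log p(x)

In practice, the negative of this bound is minimized by stochastic gradient descent over the encoder and decoder parameters jointly.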
Conditional variational auto-encoders (CVAE): In tasks like image captioning, we are interested
in modeling the conditional distribution p(x|c), where x are the desired descriptions and c is some
representation of content of the input image. The VAE identity can be straightforwardly extended by
conditioning both the encoder and decoder distributions on c. Training of the encoder and decoder
proceeds by maximizing the lower bound on the conditional data-log-likelihood p(x|c), i.e.,
\log p_\theta(x|c) \geq \mathbb{E}_{q_\phi(z|x,c)}[\log p_\theta(x|z,c)] - D_{\mathrm{KL}}[q_\phi(z|x,c), p(z|c)],
(2)
where \theta and \phi are the parameters of the decoder distribution p_\theta(x|z,c) and the encoder distribution
q_\phi(z|x,c), respectively. In practice, the following stochastic objective is typically used:
\max_{\theta,\phi} \; \frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x^i \mid z^i, c^i) - D_{\mathrm{KL}}[q_\phi(z|x,c), p(z|c)], \quad \text{s.t. } \forall i \;\; z^i \sim q_\phi(z|x,c).
It approximates the expectation \mathbb{E}_{q_\phi(z|x,c)}[\log p_\theta(x|z,c)] using N samples z^i drawn from the approximate posterior q_\phi(z|x,c) (typically, just a single sample is used). Backpropagation through
the encoder that produces the samples z^i is achieved via the reparameterization trick [17], which is
applicable if we restrict the encoder distribution q_\phi(z|x,c) to be, e.g., a Gaussian with mean and
standard deviation output by a deep net.
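To make the sampling step concrete, here is a minimal sketch of the reparameterization trick (our naming; the squared-norm loss is only a stand-in for the decoder log-loss):

import torch

mu = torch.zeros(150, requires_grad=True)         # encoder mean output
log_sigma = torch.zeros(150, requires_grad=True)  # encoder log-std output
eps = torch.randn(150)              # noise drawn outside the graph
z = mu + log_sigma.exp() * eps      # z ~ N(mu, sigma^2 I), differentiable
loss = (z ** 2).sum()               # stand-in for the decoder log-loss
loss.backward()                     # gradients flow back to mu and log_sigma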
3
Gaussian Mixture Prior and Additive Gaussian Prior
Our key observation is that the behavior of the trained CVAE crucially depends on the choice of
the prior p(z|c). The prior determines how the learned latent space is structured, because the KL-divergence term in Eq. (2) encourages q_\phi(z|x,c), the encoder distribution over z given a particular
description x and image content c, to be close to this prior distribution. In the vanilla CVAE
formulation, such as the one adopted in [14], the prior is not dependent on c and is fixed to a
zero-mean unit-variance Gaussian. While this choice is the most computationally convenient, our
experiments in Sec. 4 will demonstrate that for the task of image captioning, the resulting model has
poor diversity and worse accuracy than the standard maximum-likelihood-trained LSTM. Clearly, the
prior has to change based on the content of the image. However, because of the need to efficiently
compute the KL-divergence in closed form, it still needs to have a simple structure, ideally a Gaussian
or a mixture of Gaussians.
Motivated by the above considerations, we encourage the latent z space to have a multi-modal
structure composed of K modes or clusters, each corresponding to different types of image content.
Given an image I, we assume that we can obtain a distribution c(I) = (c1 (I), . . . , cK (I)), where the
entries ck are nonnegative and sum to one. In our current work, for concreteness, we identify these
with a set of object categories that can be reliably detected automatically, such as "car," "person," or
"cat." The MSCOCO dataset, on which we conduct our experiments, has direct supervision for 80
such categories. Note, however, our formulation is general and can be applied to other definitions of
modes or clusters, including latent topics automatically obtained in an unsupervised fashion.
GMM-CVAE: We can model p(z|c) as a Gaussian mixture with weights c_k and components with
means \mu_k and standard deviations \sigma_k:
p(z|c) = \sum_{k=1}^{K} c_k \, \mathcal{N}(z \mid \mu_k, \sigma_k^2 I),
(3)
where the c_k are the weights defined above and \mu_k is the mean vector of the k-th component.
In practice, we use the same standard deviation \sigma for all components.
[Figure 3 diagram: encoder/decoder schematics with cluster-vector inputs; panels (a) GMM-CVAE and (b) AG-CVAE.]
Figure 3: Overview of GMM-CVAE and AG-CVAE models. To sample z vectors given an image,
GMM-CVAE (a) switches from one cluster center to another, while AG-CVAE (b) encourages the
embedding z for an image to be close to the average of its objects' means.
It is not directly tractable to optimize Eq. (2) with the above GMM prior. We therefore approximate
the KL divergence stochastically [12]. In each step during training, we first draw a discrete component
k according to the cluster probability c(I), and then sample z from the resulting Gaussian component.
Then we have
D_{\mathrm{KL}}[q_\phi(z|x, c_k), p(z|c_k)] = \log\frac{\sigma_k}{\sigma_\phi} + \frac{1}{2\sigma_k^2}\,\mathbb{E}_{q_\phi(z|x,c_k)}\big[\|z - \mu_k\|_2^2\big] - \frac{1}{2}
= \log\frac{\sigma_k}{\sigma_\phi} + \frac{\sigma_\phi^2 + \|\mu_\phi - \mu_k\|_2^2}{2\sigma_k^2} - \frac{1}{2}, \quad k \sim c(I).
(4)
encoder and decoder parameters ? and ? using stochastic gradient descent (SGD). In principle, the
prior parameters ?k and ?k can also be trained, but we obtained good results by keeping them fixed
(the means are drawn randomly and all standard deviations are set to the same constant, as will be
further explained in Section 4).
At test time, in order to generate a description given an image I, we first sample a component index k
from c(I), and then sample z from the corresponding component distribution. One limitation of this
procedure is that, if an image contains multiple objects, each individual description is still conditioned
on just a single object.
AG-CVAE: We would like to structure the z space in a way that can directly reflect object cooccurrence. To this end, we propose a simple novel conditioning mechanism with an additive
Gaussian prior. If an image contains several objects with weights ck , each corresponding to means ?k
in the latent space, we want the mean of the encoder distribution to be close to the linear combination
of the respective means with the same weights:
K
!
X
2
p(z|c) = N z
ck ?k , ? I ,
(5)
k=1
PK
where ? 2 I is a spherical covariance matrix with ? 2 = k=1 c2k ?k2 . Figure 3 illustrates the difference
between this AG-CVAE model and the GMM-CVAE model introduced above.
In order to train the AG-CVAE model using the objective of Eq. (2), we need to compute the
KL-divergence DKL [q? (z|x, c), p(z|c)] where q? (z|x, c) = N (z | ?? (x, c), ??2 (x, c)I) and the prior
p(z|c) is given by Eq. (5). Its analytic expression can be derived to be
?
2 ?
K
X
1
1
DKL [q? (z|x, c), p(z|c)] = log
+ 2 Eq? ?
z ?
ck ?k
? ?
2?
2
k=1
P
K
??2 + k?? ? k=1 ck ?k k2
1
?
+
? .
= log
2
??
2?
2
?
??
We plug the above KL-divergence term into Eq. (2) to obtain the stochastic objective function for
training the encoder and decoder parameters. We initialize the mean and variance parameters ?k and
?k in the same way as for GMM-CVAE and keep them fixed throughout training.
4
2
? 1 , log
2
? 2 , log(
)
Wc1
)
Wc2
2
2
Reconstruction
Loss
??
? , log(
1
?
K
, log(
2
K
)
WcK
hT
LSTM
LSTM
LSTM
LSTM
WI
Wc
We
We
??
Image
Feature
Cluster
Vector
w1
w2
??
LSTM
p0
p1
h0
h1
P(Real/Fake)
??
hT
LSTM
LSTM
LSTM
LSTM
LSTM
We
WI
Wc
Wz
We
We
??
We
wT
Image
Feature
Cluster
Vector
z
ws
w1
??
wT
LSTM
Figure 4: Illustration of our encoder (left) and decoder (right). See text for details.
Next, we need to specify our architectures for the encoder and decoder, which are shown in Fig. 4.
The encoder uses an LSTM to map an image I, its vector c(I), and a caption into a point in the
latent space. More specifically, the LSTM receives the image feature in the first step, the cluster
vector in the second step, and then the caption word by word. The hidden state hT after the last step
is transformed into K mean vectors, \mu_{\phi k}, and K log variances, \log \sigma_{\phi k}^2, using a linear layer for each.
For AG-CVAE, the \mu_{\phi k} and \sigma_{\phi k}^2 are then summed with weights c_k and c_k^2 respectively to generate
the desired \mu_\phi and \sigma_\phi^2 encoder outputs. Note that the encoder is used at training time only, and the
input cluster vectors are produced from ground truth object annotations.
The decoder uses a different LSTM that receives as input first the image feature, then the cluster
vector, then a z vector sampled from the conditional distribution of Eq. (5). Next, it receives a "start"
symbol and proceeds to output a sentence word by word until it produces an "end" symbol. During
training, its c(I) inputs are derived from the ground truth, same as for the encoder, and the log-loss is
used to encourage reconstruction of the provided ground-truth caption. At test time, ground truth
object vectors are not available, so we rely on automatic object detection, as explained in Section 4.
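Test-time generation then reduces to drawing z vectors from the prior of Eq. (5) and decoding each one. A schematic sketch, where decode_greedy stands in for the trained decoder LSTM and is our own placeholder, not part of any released code:

import numpy as np

def generate_candidates(decode_greedy, image_feat, c, means, sigma_k, n, rng):
    # AG prior parameters for this image (Eq. (5)).
    mu_p = c @ means
    sigma_p = np.sqrt(np.sum((c * sigma_k) ** 2))
    captions = []
    for _ in range(n):
        z = rng.normal(mu_p, sigma_p)            # one z sample per candidate
        captions.append(decode_greedy(image_feat, c, z))
    return captions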
4
Experiments
4.1 Implementation Details
We test our methods on the MSCOCO dataset [5], which is the largest "clean" image captioning
dataset available to date. The current (2014) release contains 82,783 training and 40,504 validation
images with five reference captions each, but many captioning works re-partition this data to enlarge
the training set. We follow the train/val/test split released by [23]. It allocates 118,287 images for
training, 4,000 for validation, and 1,000 for testing.
Features. As image features, we use 4,096-dimensional activations from the VGG-16 network [28].
The cluster or object vectors c(I) are 80-dimensional, corresponding to the 80 MSCOCO object
categories. At training time, c(I) consist of binary indicators corresponding to ground truth object
labels, rescaled to sum to one. For example, an image with labels "person," "car," and "dog" results in a
cluster vector with weights of 1/3 for the corresponding objects and zeros elsewhere. For test images
I, c(I) are obtained automatically through object detection. We train a Faster R-CNN detector [26]
for the MSCOCO categories using our train/val split by fine-tuning the VGG-16 net [28]. At test
time, we use a threshold of 0.5 on the per-class confidence scores output by this detector to determine
whether the image contains a given object (i.e., all the weights are once again equal).
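A small sketch of how such a cluster vector could be assembled from detector outputs (the helper is ours; the 0.5 threshold and equal-weight rescaling mirror the description above):

import numpy as np

def cluster_vector(scores, categories, threshold=0.5):
    # scores: dict mapping category name -> detector confidence for one image.
    c = np.array([1.0 if scores.get(cat, 0.0) > threshold else 0.0
                  for cat in categories])
    return c / c.sum() if c.sum() > 0 else c  # rescale weights to sum to one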
Baselines. Our LSTM baseline is obtained by deleting the z vector input from the decoder architecture shown in Fig. 4. This gives a strong baseline comparable to NeuralTalk2 [1] or Google Show
and Tell [33]. To generate different candidate sentences using the LSTM, we use beam search with
a width of 10. Our second baseline is given by the "vanilla" CVAE with a fixed Gaussian prior
following [14]. For completeness, we report the performance of our method as well as all baselines
both with and without the cluster vector input c(I).
Parameter settings and training. For all the LSTMs, we use a one-hot encoding with vocabulary
size of 11,488, which is the number of words in the training set. This input gets projected into a word
embedding layer of dimension 256, and the LSTM hidden space dimension is 512. We found that
the same LSTM settings worked well for all models. For our three models (CVAE, GMM-CVAE,
and AG-CVAE), we use a dimension of 150 for the z space. We wanted it to be at least equal to the
number of categories to make sure that each z vector corresponds to a unique set of cluster weights.
The means \mu_k of clusters for GMM-CVAE and AG-CVAE are randomly initialized on the unit ball
Method     obj  #z   std  beam |  B4    B3    B2    B1    C     R     M     S
LSTM                       10     0.413 0.515 0.643 0.790 1.157 0.597 0.285 0.218
LSTM        X              10     0.428 0.529 0.654 0.797 1.202 0.607 0.290 0.223
CVAE             20   0.1         0.261 0.381 0.538 0.742 0.860 0.531 0.246 0.184
CVAE        X    20   2           0.312 0.421 0.565 0.733 0.910 0.541 0.244 0.176
GMM-CVAE         20   0.1         0.371 0.481 0.619 0.778 1.080 0.582 0.274 0.209
GMM-CVAE    X    20   2           0.423 0.533 0.666 0.813 1.216 0.617 0.298 0.233
GMM-CVAE    X    20   2    2      0.449 0.553 0.680 0.821 1.251 0.624 0.299 0.232
GMM-CVAE    X    100  2           0.494 0.597 0.719 0.856 1.378 0.659 0.325 0.261
GMM-CVAE    X    100  2    2      0.527 0.625 0.740 0.865 1.430 0.670 0.329 0.263
AG-CVAE          20   0.1         0.431 0.537 0.668 0.814 1.230 0.622 0.300 0.235
AG-CVAE     X    20   2           0.451 0.557 0.686 0.829 1.259 0.630 0.305 0.243
AG-CVAE     X    20   2    2      0.471 0.573 0.698 0.834 1.308 0.638 0.309 0.244
AG-CVAE     X    100  2           0.532 0.631 0.749 0.876 1.478 0.682 0.342 0.278
AG-CVAE     X    100  2    2      0.557 0.654 0.767 0.883 1.517 0.690 0.345 0.277
Table 1: Oracle (upper bound) performance according to each metric. Obj indicates whether the
object (cluster) vector is used; #z is the number of z samples; std is the test-time standard deviation;
beam is the beam width if beam search is used. For the caption quality metrics, C is short for Cider,
R for ROUGE, M for METEOR, S for SPICE.
Method     obj  #z   std  beam |  B4    B3    B2    B1    C     R     M     S
LSTM                       10     0.286 0.388 0.529 0.702 0.915 0.510 0.235 0.165
LSTM        X              10     0.292 0.395 0.536 0.711 0.947 0.516 0.238 0.170
CVAE             20   0.1         0.245 0.347 0.495 0.674 0.775 0.491 0.217 0.147
CVAE        X    20   2           0.265 0.372 0.521 0.698 0.834 0.506 0.225 0.158
GMM-CVAE         20   0.1         0.271 0.376 0.522 0.702 0.890 0.507 0.231 0.166
GMM-CVAE    X    20   2           0.278 0.388 0.538 0.718 0.932 0.516 0.238 0.170
GMM-CVAE    X    20   2    2      0.289 0.394 0.538 0.715 0.941 0.513 0.235 0.169
GMM-CVAE    X    100  2           0.292 0.402 0.552 0.728 0.972 0.520 0.241 0.174
GMM-CVAE    X    100  2    2      0.307 0.413 0.557 0.729 0.986 0.525 0.242 0.177
AG-CVAE          20   0.1         0.287 0.394 0.540 0.715 0.942 0.518 0.238 0.168
AG-CVAE     X    20   2           0.286 0.391 0.537 0.716 0.953 0.517 0.239 0.172
AG-CVAE     X    20   2    2      0.299 0.402 0.544 0.716 0.963 0.518 0.237 0.173
AG-CVAE     X    100  2           0.301 0.410 0.557 0.732 0.991 0.527 0.243 0.177
AG-CVAE     X    100  2    2      0.311 0.417 0.559 0.732 1.001 0.528 0.245 0.179
Table 2: Consensus re-ranking using CIDEr. See caption of Table 1 for legend.
and are not changed throughout training. The standard deviations \sigma_k are set to 0.1 at training time and
tuned on the validation set at test time (the values used for our results are reported in the tables). All
networks are trained with SGD with a learning rate that is 0.01 for the first 5 epochs, and is reduced
by half every 5 epochs. On average all models converge within 50 epochs.
4.2 Results
A big part of the motivation for generating diverse candidate captions is the prospect of being able to
re-rank them using some discriminative method. Because the performance of any re-ranking method
is upper-bounded by the quality of the best candidate caption in the set, we will first evaluate different
methods assuming an oracle that can choose the best sentence among all the candidates. Next, for a
more realistic evaluation, we will use a consensus re-ranking approach [10] to automatically select a
single top candidate per image. Finally, we will assess the diversity of the generated captions using
uniqueness and novelty metrics.
Oracle evaluation. Table 1 reports caption evaluation metrics in the oracle setting, i.e., taking the
maximum of each relevant metric over all the candidates. We compare caption quality using five
metrics: BLEU [25], METEOR [7], CIDEr [30], SPICE [2], and ROUGE [21]. These are calculated
using the MSCOCO caption evaluation tool [5] augmented by the author of SPICE [2]. For the
LSTM baseline, we report the scores attained among 10 candidates generated using beam search (as
suggested in [23]). For CVAE, GMM-CVAE and AG-CVAE, we sample a fixed number of z vectors
from the corresponding prior distributions (the numbers of samples are given in the table).
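The oracle numbers then reduce to a per-image maximum over candidates; a short sketch with a hypothetical scorer signature:

def oracle_score(candidates, references, metric):
    # Upper bound on re-ranking: the best candidate under the given metric.
    return max(metric(cand, references) for cand in candidates)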
The high-level trend is that "vanilla" CVAE falls short even of the LSTM baseline, while the upper-bound performance for GMM-CVAE and AG-CVAE considerably exceeds that of the LSTM given
Method     obj  #z   std  beam | % unique per image | % novel sentences
LSTM        X              10          --                 0.656
CVAE        X    20   2              0.118                0.820
GMM-CVAE    X    20   2              0.594                0.809
GMM-CVAE    X    20   2    2         0.539                0.716
GMM-CVAE    X    100  2              0.376                0.767
GMM-CVAE    X    100  2    2         0.326                0.688
AG-CVAE     X    20   2              0.764                0.795
AG-CVAE     X    20   2    2         0.698                0.707
AG-CVAE     X    100  2              0.550                0.745
AG-CVAE     X    100  2    2         0.474                0.667
Table 3: Diversity evaluation. For each method, we report the percentage of unique candidates
generated per image by sampling different numbers of z vectors. We also report the percentage of
novel sentences (i.e., sentences not seen in the training set) out of (at most) top 10 sentences following
consensus re-ranking. It should be noted that for CVAE, there are 2,466 novel sentences out of 3,006.
For GMM-CVAE and AG-CVAE, we get roughly 6,200-7,800 novel sentences.
Predicted Object Labels:
'bottle' 'refrigerator'
Predicted Object Labels:
'person' 'horse' 'bear'
AG-CVAE:
an open refrigerator filled with lots of food
a refrigerator filled with lots of food and drinks
a refrigerator filled with lots of food
a large open refrigerator filled with lots of food
a refrigerator filled with lots of food and other items
AG-CVAE:
a man standing next to a brown horse
a man is standing next to a horse
a person standing next to a brown and white horse
a man standing next to a horse and a man
a man holding a brown and white horse
LSTM Baseline:
a refrigerator filled with lots of food
a refrigerator filled with lots of food on top
a refrigerator filled with lots of food inside
a refrigerator filled with lots of food inside of it
a refrigerator filled with lots of food and other items
LSTM Baseline:
a close up of a person with a horse
a close up of a horse with a horse
a black and white photo of a man wearing a hat
a black and white photo of a person wearing a hat
a black and white photo of a man in a hat
(a)
(b)
Predicted Object Labels:
'person' 'backpack' 'umbrella'
Predicted Object Labels:
'person' ?bed?
AG-CVAE:
a person holding an umbrella in front of a building
a woman holding a red umbrella in front of a building
a person holding an umbrella in the rain
a man and woman holding an umbrella in the rain
a man holding a red umbrella in front of a building
AG-CVAE:
a baby laying on a bed with a blanket
a woman laying on a bed with a baby
a man laying on a bed with a baby
a baby laying in a bed with a blanket
a baby is laying in bed with a cat
LSTM Baseline:
a man holding an umbrella on a city street
a man holding an umbrella in the rain
a man is holding an umbrella in the rain
a person holding an umbrella in the rain
a man holding an umbrella in the rain with an umbrella
LSTM Baseline:
a baby is laying on a bed with a blanket
a baby is laying on a bed with a stuffed animal
a little girl laying in a bed with a blanket
a little girl laying on a bed with a blanket
a man laying in a bed with a blanket
(c)
(d)
Figure 5: Comparison of captions produced by our AG-CVAE method and the LSTM baseline. For
each method, top five captions following consensus re-ranking are shown.
the right choice of standard deviation and a large enough number of z samples. AG-CVAE obtains the
highest upper bound. A big advantage of the CVAE variants over the LSTM is that they can be easily
used to generate more candidate sentences simply by increasing the number of z samples, while the
only way to do so for the LSTM is to increase the beam width, which is computationally prohibitive.
In more detail, the top two lines of Table 1 compare performance of the LSTM with and without the
additional object (cluster) vector input, and show that it does not make a dramatic difference. That is,
improving over the LSTM baseline is not just a matter of adding stronger conditioning information
as input. Similarly, for CVAE, GMM-CVAE, and AG-CVAE, using the object vector as additional
conditioning information in the encoder and decoder can increase accuracy somewhat, but does not
account for all the improvements that we see. One thing we noticed about the models without the
object vector is that they are more sensitive to the standard deviation parameter and require more
careful tuning (to demonstrate this, the table includes results for several values of ? for the CVAE
models).
Consensus re-ranking evaluation. For a more realistic evaluation we next compare the same models
after consensus re-ranking [10, 23]. Specifically, for a given test image, we first find its nearest
neighbors in the training set in the cross-modal embedding space learned by a two-branch network
proposed in [34]. Then we take all the ground-truth reference captions of those neighbors and
calculate the consensus re-ranking scores between them and the candidate captions. For this, we
use the CIDEr metric, based on the observation of [22, 30] that it can give more human-consistent
evaluations than BLEU.
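A schematic NumPy sketch of this consensus re-ranking; the neighborhood size k and the scorer signature are our assumptions, not values taken from [10]:

import numpy as np

def consensus_rerank(candidates, img_emb, train_embs, train_caps, scorer, k=60):
    # Nearest training images in the joint embedding space of [34].
    dists = np.sum((train_embs - img_emb) ** 2, axis=1)
    refs = [cap for i in np.argsort(dists)[:k] for cap in train_caps[i]]
    # Rank candidates by consensus score (e.g. CIDEr) against pooled references.
    return sorted(candidates, key=lambda c: scorer(c, refs), reverse=True)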
Object Labels: "cat" "suitcase"
GMM-CVAE:
a black and white cat is sitting in a suitcase
a cat that is sitting in a suitcase
a cat that is sitting on a suitcase
a cat sitting on top of a suitcase
a black and white cat sitting on a suitcase
a cat sitting in a suitcase on a table
AG-CVAE:
a small white and black cat sitting on top of a suitcase
a cat sitting on a piece of luggage
a small gray and white cat sitting in a suitcase
a white cat sitting on top of a suitcase
a black and white cat sitting in a suitcase
a black and white cat sitting on top of a suitcase
a cat that is sitting on a table
a black and white cat sitting next to a suitcase
a cat sitting in front of a suitcase
a cat sitting on a wooden bench in the sun
a close up of a cat sitting on a suitcase
a cat sitting on top of a blue suitcase
a large brown and white cat sitting on top of a suitcase
a cat sitting on top of a suitcase
a white cat with a suitcase
Object Labels: "cup" "dining table"
"teddy bear"
GMM-CVAE:
a teddy bear sitting next to a teddy bear
a teddy bear sitting on a table next to a table
a teddy bear sitting on top of a table
a teddy bear sitting on a table next to a cup of coffee
a stuffed teddy bear sitting next to a table
a stuffed teddy bear sitting on a table
a teddy bear sitting next to a table filled with stuffed animals
a teddy bear is sitting on a table
a teddy bear sitting on a table next to a teddy bear
AG-CVAE:
a white teddy bear sitting next to a table
a couple of stuffed animals sitting on a table
a teddy bear sitting next to a bunch of flowers
a couple of teddy bears sitting on a table
a large teddy bear sitting on a table
a bunch of stuffed animals sitting on a table
a group of teddy bears sitting on a table
a large teddy bear sitting on a table next to a table
a teddy bear sitting next to a pile of books
a group of teddy bears sitting next to each other
a white teddy bear sitting on a wooden table
two teddy bears sitting next to each other
a couple of teddy bears sitting next to each other
a white teddy bear sitting next to a table
a teddy bear sitting next to a wooden table
a large stuffed animal sitting on top of a table
Object Labels: "cat" "suitcase" "chair"
GMM-CVAE:
a cat that is sitting in a suitcase
a cat sitting on top of a suitcase
a cat sitting in a suitcase on the floor
a black and white cat is sitting in a suitcase
a close up of a cat on a suitcase
AG-CVAE:
a white and black cat sitting in a suitcase
a cat that is sitting on a chair
a white and black cat sitting on top of a suitcase
a black and white cat sitting on a chair
a cat sitting on a chair in a room
a large brown and white cat sitting on top of a desk
a cat sitting on a wooden bench in the sun
a close up of a cat sitting on a suitcase
a black and white cat sitting next to a piece of luggage
a small white and black cat sitting in a chair
a black and white cat sitting on top of a suitcase
a cat sitting on top of a blue chair
a cat sitting on top of a suitcase
Object Labels: "cup" "dining table"
"teddy bear" "sandwich" "cake"
GMM-CVAE:
a teddy bear sitting next to a teddy bear
a teddy bear sitting on a table next to a cup of coffee
a teddy bear sitting on a table with a teddy bear
a teddy bear with a teddy bear sitting on top of it
a teddy bear sitting on top of a table
a teddy bear sitting next to a cup of coffee
a table with a teddy bear and a teddy bear
a teddy bear sitting on a table next to a glass of coffee
two teddy bears sitting on a table next to each other
AG-CVAE:
a table topped with a cake
a couple of cake sitting on top of a table
a table with a cake and a bunch of stuffed animals
a cake with a bunch of coffee on it
a white teddy bear sitting next to a glass of coffee
a table with a cake and a bear on it
a table with a bunch of teddy bears
a table with two plates of food on it
a table topped with a variety of food
a table with two teddy bears
a table with a cake and a plate of food
a couple of sandwiches sitting on top of a table
a table topped with a cake and two plates of food
a table with a bunch of cakes on it
a table with a cake and a cup of coffee
a white plate of food next to a table
a white table topped with lots of food
Figure 6: Comparison of captions produced by GMM-CVAE and AG-CVAE for two different versions
of input object vectors for the same images. For both models, we draw 20 z samples and show the
resulting unique captions.
Table 2 shows the evaluation based on the single top-ranked sentence for each test image. While the
re-ranked performance cannot get near the upper bounds of Table 1, the numbers follow a similar
trend, with GMM-CVAE and AG-CVAE achieving better performance than the baselines in almost
all metrics. It should also be noted that, while it is not our goal to outperform the state of the art in
absolute terms, our performance is actually better than some of the best methods to date [23, 37],
although [37] was trained on a different split. AG-CVAE tends to get slightly higher numbers than
GMM-CVAE, although the advantage is smaller than for the upper-bound results in Table 1. One
of the most important take-aways for us is that there is still a big gap between upper-bound and
re-ranking performance and that improving re-ranking of candidate sentences is an important future
direction.
Diversity evaluation. To compare the generative capabilities of our different methods we report
two indicative numbers in Table 3. One is the average percentage of unique captions in the set of
candidates generated for each image. This number is only meaningful for the CVAE models, where
we sample candidates by drawing different z samples, and multiple z's can result in the same caption.
For LSTM, the candidates are obtained using beam search and are by definition distinct. From Table
3, we observe that CVAE has very little diversity, GMM-CVAE is much better, but AG-CVAE has the
decisive advantage.
Similarly to [27], we also report the percentage of all generated sentences for the test set that have
not been seen in the training set. It only really makes sense to assess novelty for sentences that
are plausible, so we compute this percentage based on (at most) top 10 sentences per image after
consensus re-ranking. Based on the novelty ratio, CVAE does well. However, since it generates
fewer distinct candidates per image, the absolute numbers of novel sentences are much lower than for
GMM-CVAE and AG-CVAE (see table caption for details).
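Both statistics are straightforward to compute; a sketch with our own naming:

def diversity_stats(candidates_per_image, train_sentences):
    train = set(train_sentences)
    # Fraction of distinct captions among the candidates for each image.
    unique = [len(set(c)) / len(c) for c in candidates_per_image]
    # Fraction of top-10 re-ranked captions never seen during training.
    top10 = [s for c in candidates_per_image for s in c[:10]]
    novel = sum(s not in train for s in top10) / len(top10)
    return sum(unique) / len(unique), novel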
Qualitative results. Figure 5 compares captions generated by AG-CVAE and the LSTM baseline on
four example images. The AG-CVAE captions tend to exhibit a more diverse sentence structure with
a wider variety of nouns and verbs used to describe the same image. Often this yields captions that
are more accurate ("open refrigerator" vs. "refrigerator" in (a)) and better reflective of the cardinality
and types of entities in the image (in (b), our captions mention both the person and the horse while the
LSTM tends to mention only one). Even when AG-CVAE does not manage to generate any correct
candidates, as in (d), it still gets the right number of people in some candidates. A shortcoming of
AG-CVAE is that detected objects frequently end up omitted from the candidate sentences if the
LSTM language model cannot accommodate them ("bear" in (b) and "backpack" in (c)). On the
one hand, this shows that the capacity of the LSTM decoder to generate combinatorially complex
sentences is still limited, but on the other hand, it provides robustness against false positive detections.
Controllable sentence generation. Figure 6 illustrates how the output of our GMM-CVAE and
AG-CVAE models changes when we change the input object vectors in an attempt to control the
generation process. Consistent with Table 3, we observe that for the same number of z samples,
AG-CVAE produces more unique candidates than GMM-CVAE. Further, AG-CVAE is more flexible
than GMM-CVAE and more responsive to the content of the object vectors. For the first image
showing a cat, when we add the additional object label "chair," AG-CVAE is able to generate some
captions mentioning a chair, but GMM-CVAE is not. Similarly, in the second example, when we add
the concepts of "sandwich" and "cake," only AG-CVAE can generate some sentences that capture
them. Still, the controllability of AG-CVAE leaves something to be desired, since, as observed above,
it has trouble mentioning more than two or three objects in the same sentence, especially in unusual
combinations.
5
Discussion
Our experiments have shown that both our proposed GMM-CVAE and AG-CVAE approaches
generate image captions that are more diverse and more accurate than standard LSTM baselines.
While GMM-CVAE and AG-CVAE have very similar bottom-line accuracies according to Table 2,
AG-CVAE has a clear edge in terms of diversity (unique captions per image) and controllability, both
quantitatively (Table 3) and qualitatively (Figure 6).
Related work. To date, CVAEs have been used for image question generation [14], but as far as we
know, our work is the first to apply them to captioning. In [8], a mixture of Gaussian prior is used in
CVAEs for colorization. Their approach is essentially similar to our GMM-CVAE, though it is based
on mixture density networks [4] and uses a different approximation scheme during training.
Our CVAE formulation has some advantages over the CGAN approach adopted by other recent
works aimed at the same general goals [6, 27]. GANs do not expose control over the structure of
the latent space, while our additive prior results in an interpretable way to control the sampling
process. GANs are also notoriously tricky to train, in particular for discrete sampling problems like
sentence generation (Dai et al. [6] have to resort to reinforcement learning and Shetty et al. [27] to an
approximate Gumbel sampler [15]). Our CVAE training is much more straightforward.
While we represent the z space as a simple vector space with multiple modes, it is possible to impose
on it a more general graphical model structure [16], though this incurs a much greater level of
complexity. Finally, from the viewpoint of inference, our work is also related to general approaches
to diverse structured prediction, which focus on extracting multiple modes from a single energy
function [3]. This is a hard problem necessitating sophisticated approximations, and we prefer to
circumvent it by cheaply generating a large number of diverse and plausible candidates, so that ?good
enough? ones can be identified using simple re-ranking mechanisms.
Future work. We would like to investigate more general formulations for the conditioning information c(I), not necessarily relying on object labels whose supervisory information must be provided
separately from the sentences. These can be obtained, for example, by automatically clustering nouns
or noun phrases extracted from reference sentences, or even clustering vector representations of entire
sentences. We are also interested in other tasks, such as question generation, where the cluster vectors
can represent the question type ("what is," "where is," "how many," etc.) as well as the image content.
Control of the output by modifying the c vector would in this case be particularly natural.
Acknowledgments: This material is based upon work supported in part by the National Science
Foundation under Grants No. 1563727 and 1718221, and by the Sloan Foundation. We would like to
thank Jian Peng and Yang Liu for helpful discussions.
References
[1] Neuraltalk2. https://github.com/karpathy/neuraltalk2.
[2] P. Anderson, B. Fernando, M. Johnson, and S. Gould. Spice: Semantic propositional image caption
evaluation. In ECCV, 2016.
[3] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-Best Solutions in Markov
Random Fields. In ECCV, 2012.
[4] C. M. Bishop. Mixture density networks. 1994.
[5] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick. Microsoft coco captions:
Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
[6] B. Dai, D. Lin, R. Urtasun, and S. Fidler. Towards diverse and natural image descriptions via a conditional
gan. ICCV, 2017.
[7] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target
language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014.
[8] A. Deshpande, J. Lu, M.-C. Yeh, and D. Forsyth. Learning diverse image colorization. CVPR, 2017.
[9] J. Devlin, H. Cheng, H. Fang, S. Gupta, L. Deng, X. He, G. Zweig, and M. Mitchell. Language models for
image captioning: The quirks and what works. arXiv preprint arXiv:1505.01809, 2015.
[10] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches
for image captioning. arXiv preprint arXiv:1505.04467, 2015.
[11] A. Farhadi, M. Hejrati, M. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every
picture tells a story: Generating sentences from images. In ECCV, 2010.
[12] J. R. Hershey and P. A. Olsen. Approximating the kullback leibler divergence between gaussian mixture
models. In ICASSP, 2007.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[14] U. Jain, Z. Zhang, and A. Schwing. Creativity: Generating diverse questions using variational autoencoders.
CVPR, 2017.
[15] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. ICLR, 2017.
[16] M. J. Johnson, D. Duvenaud, A. Wiltschko, S. Datta, and R. Adams. Structured vaes: Composing
probabilistic graphical models and variational autoencoders. NIPS, 2016.
[17] D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
[18] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal neural language models. In ICML, 2014.
[19] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Babytalk:
Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 35(12):2891-2903, 2013.
[20] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi. Generalizing image captions for image-text
parallel corpus. In ACL, 2013.
[21] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out:
Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
[22] S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy. Improved image captioning via policy gradient
optimization of spider. ICCV, 2017.
[23] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent
neural networks (m-rnn). ICLR, 2015.
[24] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and
H. Daum? III. Midge: Generating image descriptions from computer vision detections. In Proceedings
of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages
747-756. Association for Computational Linguistics, 2012.
[25] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine
translation. In ACL. Association for Computational Linguistics, 2002.
[26] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region
proposal networks. In NIPS, 2015.
[27] R. Shetty, M. Rohrbach, L. A. Hendricks, M. Fritz, and B. Schiele. Speaking the same language: Matching
machine to human captions by adversarial training. ICCV, 2017.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[29] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative
models. In NIPS, 2015.
[30] R. Vedantam, C. Lawrence Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation.
In CVPR, 2015.
[31] A. K. Vijayakumar, M. Cogswell, R. R. Selvaraju, Q. Sun, S. Lee, D. Crandall, and D. Batra. Diverse beam
search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424, 2016.
[32] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In
CVPR, 2015.
[33] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: Lessons learned from the 2015 mscoco
image captioning challenge. IEEE transactions on pattern analysis and machine intelligence, 2016.
[34] L. Wang, Y. Li, and S. Lazebnik. Learning deep structure-preserving image-text embeddings. In CVPR,
2016.
[35] Z. Wang, F. Wu, W. Lu, J. Xiao, X. Li, Z. Zhang, and Y. Zhuang. Diverse image captioning via grouptalk.
In IJCAI, 2016.
[36] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and
tell: Neural image caption generation with visual attention. In ICML, 2015.
[37] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. In CVPR, 2016.
| 7158 |@word [bag-of-words token counts omitted] |
6,808 | 7,159 | Improved Training of Wasserstein GANs
Ishaan Gulrajani1*, Faruk Ahmed1, Martin Arjovsky2, Vincent Dumoulin1, Aaron Courville1,3
1
Montreal Institute for Learning Algorithms
2
Courant Institute of Mathematical Sciences
3
CIFAR Fellow
[email protected]
{faruk.ahmed,vincent.dumoulin,aaron.courville}@umontreal.ca
[email protected]
Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but
suffer from training instability. The recently proposed Wasserstein GAN (WGAN)
makes progress toward stable training of GANs, but sometimes can still generate
only poor samples or fail to converge. We find that these problems are often due
to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the
critic, which can lead to undesired behavior. We propose an alternative to clipping
weights: penalize the norm of the gradient of the critic with respect to its input. Our
proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning,
including 101-layer ResNets and language models with continuous generators.
We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.†
1 Introduction
Generative Adversarial Networks (GANs) [9] are a powerful class of generative models that cast
generative modeling as a game between two networks: a generator network produces synthetic data
given some noise source and a discriminator network discriminates between the generator?s output
and true data. GANs can produce very visually appealing samples, but are often hard to train, and
much of the recent work on the subject [22, 18, 2, 20] has been devoted to finding ways of stabilizing
training. Despite this, consistently stable training of GANs remains an open problem.
In particular, [1] provides an analysis of the convergence properties of the value function being
optimized by GANs. Their proposed alternative, named Wasserstein GAN (WGAN) [2], leverages
the Wasserstein distance to produce a value function which has better theoretical properties than the
original. WGAN requires that the discriminator (called the critic in that work) must lie within the
space of 1-Lipschitz functions, which the authors enforce through weight clipping.
Our contributions are as follows:
1. On toy datasets, we demonstrate how critic weight clipping can lead to undesired behavior.
2. We propose gradient penalty (WGAN-GP), which does not suffer from the same problems.
3. We demonstrate stable training of varied GAN architectures, performance improvements
over weight clipping, high-quality image generation, and a character-level GAN language
model without any discrete sampling.
* Now at Google Brain
† Code for our models is available at https://github.com/igul222/improved_wgan_training.
2 Background
2.1 Generative adversarial networks
The GAN training strategy is to define a game between two competing networks. The generator
network maps a source of noise to the input space. The discriminator network receives either a
generated sample or a true data sample and must distinguish between the two. The generator is
trained to fool the discriminator.
Formally, the game between the generator G and the discriminator D is the minimax objective:

$\min_G \max_D \; \mathbb{E}_{x \sim \mathbb{P}_r}[\log D(x)] + \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}[\log(1 - D(\tilde{x}))] \qquad (1)$

where $\mathbb{P}_r$ is the data distribution and $\mathbb{P}_g$ is the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$ (the input z to the generator is sampled from some simple noise distribution, such as the uniform distribution or a spherical Gaussian distribution).
If the discriminator is trained to optimality before each generator parameter update, then minimizing the value function amounts to minimizing the Jensen-Shannon divergence between $\mathbb{P}_r$ and $\mathbb{P}_g$ [9], but doing so often leads to vanishing gradients as the discriminator saturates. In practice, [9] advocates that the generator be instead trained to maximize $\mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}[\log D(\tilde{x})]$, which goes some way to circumvent this difficulty. However, even this modified loss function can misbehave in the presence of a good discriminator [1].
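To make the two objectives concrete, here is a minimal PyTorch sketch of one discriminator/generator loss computation (our illustration, not code from any cited paper; D is assumed to end in a sigmoid, and D, G, x_real, z are assumed to exist):

import torch

def gan_losses(D, G, x_real, z):
    x_fake = G(z)
    # Discriminator ascends log D(x) + log(1 - D(G(z))), so we minimize the negative.
    d_loss = -(torch.log(D(x_real)) + torch.log(1 - D(x_fake.detach()))).mean()
    # Non-saturating generator loss from [9]: ascend log D(G(z)).
    g_loss = -torch.log(D(x_fake)).mean()
    return d_loss, g_loss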
2.2 Wasserstein GANs
[2] argues that the divergences which GANs typically minimize are potentially not continuous with
respect to the generator's parameters, leading to training difficulty. They propose instead using
the Earth-Mover (also called Wasserstein-1) distance W (q, p), which is informally defined as the
minimum cost of transporting mass in order to transform the distribution q into the distribution p
(where the cost is mass times transport distance). Under mild assumptions, W (q, p) is continuous
everywhere and differentiable almost everywhere.
The WGAN value function is constructed using the Kantorovich-Rubinstein duality [24] to obtain

$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim \mathbb{P}_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}[D(\tilde{x})] \qquad (2)$

where $\mathcal{D}$ is the set of 1-Lipschitz functions and $\mathbb{P}_g$ is once again the model distribution implicitly defined by $\tilde{x} = G(z)$, $z \sim p(z)$. In that case, under an optimal discriminator (called a critic in the paper, since it's not trained to classify), minimizing the value function with respect to the generator parameters minimizes $W(\mathbb{P}_r, \mathbb{P}_g)$.
The WGAN value function results in a critic function whose gradient with respect to its input is
better behaved than its GAN counterpart, making optimization of the generator easier. Additionally,
WGAN has the desirable property that its value function correlates with sample quality, which is not
the case for GANs.
To enforce the Lipschitz constraint on the critic, [2] propose to clip the weights of the critic to lie within a compact space [−c, c]. The set of functions satisfying this constraint is a subset of the k-Lipschitz functions for some k which depends on c and the critic architecture. In the following sections, we demonstrate some of the issues with this approach and propose an alternative.
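For reference, this clipping step is a single projection applied after each critic update; a minimal PyTorch sketch (our illustration; `critic` is an assumed nn.Module and the threshold value is an assumption):

import torch

# Project every critic weight back into the box [-c, c] after each update.
c = 0.01  # clipping threshold (assumed value)
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-c, c)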
2.3 Properties of the optimal WGAN critic
In order to understand why weight clipping is problematic in a WGAN critic, as well as to motivate
our approach, we highlight some properties of the optimal critic in the WGAN framework. We prove
these in the Appendix.
Proposition 1. Let $\mathbb{P}_r$ and $\mathbb{P}_g$ be two distributions in $\mathcal{X}$, a compact metric space. Then, there is a 1-Lipschitz function $f^*$ which is the optimal solution of $\max_{\|f\|_L \leq 1} \mathbb{E}_{y \sim \mathbb{P}_r}[f(y)] - \mathbb{E}_{x \sim \mathbb{P}_g}[f(x)]$. Let $\pi$ be the optimal coupling between $\mathbb{P}_r$ and $\mathbb{P}_g$, defined as the minimizer of $W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\pi \in \Pi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(x,y) \sim \pi}[\|x - y\|]$, where $\Pi(\mathbb{P}_r, \mathbb{P}_g)$ is the set of joint distributions $\pi(x, y)$ whose marginals are $\mathbb{P}_r$ and $\mathbb{P}_g$, respectively. Then, if $f^*$ is differentiable*, $\pi(x = y) = 0$, and $x_t = tx + (1 - t)y$ with $0 \leq t \leq 1$, it holds that $P_{(x,y) \sim \pi}\big[\nabla f^*(x_t) = \frac{y - x_t}{\|y - x_t\|}\big] = 1$.

Corollary 1. $f^*$ has gradient norm 1 almost everywhere under $\mathbb{P}_r$ and $\mathbb{P}_g$.
3 Difficulties with weight constraints
We find that weight clipping in WGAN leads to optimization difficulties, and that even when optimization succeeds the resulting critic can have a pathological value surface. We explain these
problems below and demonstrate their effects; however we do not claim that each one always occurs
in practice, nor that they are the only such mechanisms.
Our experiments use the specific form of weight constraint from [2] (hard clipping of the magnitude
of each weight), but we also tried other weight constraints (L2 norm clipping, weight normalization),
as well as soft constraints (L1 and L2 weight decay) and found that they exhibit similar problems.
To some extent these problems can be mitigated with batch normalization in the critic, which [2] use in all of their experiments. However even with batch normalization, we observe that very deep WGAN critics often fail to converge.

[Figure 1: Gradient penalty in WGANs does not exhibit undesired behavior like weight clipping. (a) Value surfaces of WGAN critics trained to optimality on toy datasets (8 Gaussians, 25 Gaussians, Swiss Roll) using (top) weight clipping and (bottom) gradient penalty. Critics trained with weight clipping fail to capture higher moments of the data distribution. The "generator" is held fixed at the real data plus Gaussian noise. (b) (left) Gradient norms (log scale) of deep WGAN critics during training on toy datasets either explode or vanish when using weight clipping (c = 0.001, 0.01, 0.1), but not when using a gradient penalty. (right) Weight clipping (top) pushes weights towards two values (the extremes of the clipping range), unlike gradient penalty (bottom).]
3.1 Capacity underuse
Implementing a k-Lipschitz constraint via weight clipping biases the critic towards much simpler
functions. As stated previously in Corollary 1, the optimal WGAN critic has unit gradient norm
almost everywhere under Pr and Pg ; under a weight-clipping constraint, we observe that our neural
network architectures which try to attain their maximum gradient norm k end up learning extremely
simple functions.
To demonstrate this, we train WGAN critics with weight clipping to optimality on several toy distributions, holding the generator distribution $\mathbb{P}_g$ fixed at the real distribution plus unit-variance Gaussian noise. We plot value surfaces of the critics in Figure 1a. We omit batch normalization in the critic.

* We can actually assume much less, and talk only about directional derivatives in the direction of the line, which we show in the proof always exist. This would imply that in every point where $f^*$ is differentiable (and thus we can take gradients in a neural network setting) the statement holds.
† This assumption is in order to exclude the case when the matching point of sample x is x itself. It is satisfied in the case that $\mathbb{P}_r$ and $\mathbb{P}_g$ have supports that intersect in a set of measure 0, such as when they are supported by two low dimensional manifolds that don't perfectly align [1].
Algorithm 1 WGAN with gradient penalty. We use default values of λ = 10, n_critic = 5, α = 0.0001, β₁ = 0, β₂ = 0.9.
Require: The gradient penalty coefficient λ, the number of critic iterations per generator iteration n_critic, the batch size m, Adam hyperparameters α, β₁, β₂.
Require: initial critic parameters w₀, initial generator parameters θ₀.
1: while θ has not converged do
2:   for t = 1, ..., n_critic do
3:     for i = 1, ..., m do
4:       Sample real data x ∼ P_r, latent variable z ∼ p(z), a random number ε ∼ U[0, 1].
5:       x̃ ← G_θ(z)
6:       x̂ ← εx + (1 − ε)x̃
7:       L⁽ⁱ⁾ ← D_w(x̃) − D_w(x) + λ(‖∇_x̂ D_w(x̂)‖₂ − 1)²
8:     end for
9:     w ← Adam(∇_w (1/m) Σᵢ₌₁ᵐ L⁽ⁱ⁾, w, α, β₁, β₂)
10:   end for
11:   Sample a batch of latent variables {z⁽ⁱ⁾}ᵢ₌₁ᵐ ∼ p(z).
12:   θ ← Adam(∇_θ (1/m) Σᵢ₌₁ᵐ −D_w(G_θ(z⁽ⁱ⁾)), θ, α, β₁, β₂)
13: end while
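As a rough PyTorch sketch of the critic update in lines 4-9 (our reconstruction under stated assumptions, not the authors' released code; x_real and x_fake are assumed batches of image tensors):

import torch

def critic_loss_with_gp(D, x_real, x_fake, lam=10.0):
    # Line 6: interpolate between real and generated samples.
    eps = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    # Gradient of the critic's output with respect to the interpolates;
    # create_graph=True lets the penalty itself be backpropagated.
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    # Line 7: critic loss plus the two-sided gradient penalty.
    return D(x_fake).mean() - D(x_real).mean() + lam * ((grad_norm - 1) ** 2).mean()

Minimizing this loss with Adam, n_critic times per generator step, recovers lines 2-10; the generator step then minimizes -D(G(z)).mean().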
In each case, the critic trained with weight clipping ignores higher moments of the data distribution and instead models very simple approximations to the optimal functions. In contrast, our approach does not suffer from this behavior.
3.2 Exploding and vanishing gradients
We observe that the WGAN optimization process is difficult because of interactions between the
weight constraint and the cost function, which result in either vanishing or exploding gradients
without careful tuning of the clipping threshold c.
To demonstrate this, we train WGAN on the Swiss Roll toy dataset, varying the clipping threshold c
in [10⁻¹, 10⁻², 10⁻³], and plot the norm of the gradient of the critic loss with respect to successive
layers of activations. Both generator and critic are 12-layer ReLU MLPs without batch normalization. Figure 1b shows that for each of these values, the gradient either grows or decays exponentially
as we move farther back in the network. We find our method results in more stable gradients that
neither vanish nor explode, allowing training of more complicated networks.
4 Gradient penalty
We now propose an alternative way to enforce the Lipschitz constraint. A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere, so we consider directly constraining the gradient norm of the critic's output with respect to its input. To circumvent tractability issues, we enforce a soft version of the constraint with a penalty on the gradient norm for random samples $\hat{x} \sim \mathbb{P}_{\hat{x}}$. Our new objective is

$L = \underbrace{\mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}[D(\tilde{x})] - \mathbb{E}_{x \sim \mathbb{P}_r}[D(x)]}_{\text{Original critic loss}} + \underbrace{\lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]}_{\text{Our gradient penalty}} \qquad (3)$
Sampling distribution. We implicitly define $\mathbb{P}_{\hat{x}}$ by sampling uniformly along straight lines between pairs of points sampled from the data distribution $\mathbb{P}_r$ and the generator distribution $\mathbb{P}_g$. This is motivated by the fact that the optimal critic contains straight lines with gradient norm 1 connecting coupled points from $\mathbb{P}_r$ and $\mathbb{P}_g$ (see Proposition 1). Given that enforcing the unit gradient norm constraint everywhere is intractable, enforcing it only along these straight lines seems sufficient and experimentally results in good performance.
Penalty coefficient. All experiments in this paper use λ = 10, which we found to work well across a variety of architectures and datasets ranging from toy tasks to large ImageNet CNNs.
No critic batch normalization. Most prior GAN implementations [21, 22, 2] use batch normalization in both the generator and the discriminator to help stabilize training, but batch normalization changes the form of the discriminator's problem from mapping a single input to a single output to mapping from an entire batch of inputs to a batch of outputs [22]. Our penalized training objective is no longer valid in this setting, since we penalize the norm of the critic's gradient with respect to each input independently, and not the entire batch. To resolve this, we simply omit batch normalization in the critic in our models, finding that they perform well without it. Our method works with normalization schemes which don't introduce correlations between examples. In particular, we recommend layer normalization [3] as a drop-in replacement for batch normalization.
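A sketch of what that swap looks like in one critic block (our illustration; the layer shapes are assumptions):

import torch.nn as nn

# Layer normalization in place of batch normalization: statistics are
# computed per sample, so no correlations are introduced across the batch.
critic_block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),
    nn.LayerNorm([128, 16, 16]),  # assumes 16x16 feature maps at this depth
    nn.LeakyReLU(0.2),
)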
Two-sided penalty. We encourage the norm of the gradient to go towards 1 (two-sided penalty)
instead of just staying below 1 (one-sided penalty). Empirically this seems not to constrain the
critic too much, likely because the optimal WGAN critic anyway has gradients with norm 1 almost
everywhere under Pr and Pg and in large portions of the region in between (see subsection 2.3). In
our early observations we found this to perform slightly better, but we don't investigate this fully.
We describe experiments on the one-sided penalty in the appendix.
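In the notation of the training-step sketch above, the two variants differ only in how the per-sample gradient norms enter the penalty (our illustration; grad_norm is the assumed tensor of per-sample gradient norms at the interpolates):

import torch

two_sided = ((grad_norm - 1) ** 2).mean()                    # push norms towards 1
one_sided = (torch.clamp(grad_norm - 1, min=0) ** 2).mean()  # penalize only norms above 1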
5 Experiments
5.1 Training random architectures within a set
We experimentally demonstrate our model's ability to train a large number of architectures which
we think are useful to be able to train. Starting from the DCGAN architecture, we define a set of
architecture variants by changing model settings to random corresponding values in Table 1. We
believe that reliable training of many of the architectures in this set is a useful goal, but we do not
claim that our set is an unbiased or representative sample of the whole space of useful architectures:
it is designed to demonstrate a successful regime of our method, and readers should evaluate whether
it contains architectures similar to their intended application.
Table 1: We evaluate WGAN-GP's ability to train the architectures in this set.

Nonlinearity (G):  [ReLU, LeakyReLU, softplus(2x+2)/2 − 1, tanh]
Nonlinearity (D):  [ReLU, LeakyReLU, softplus(2x+2)/2 − 1, tanh]
Depth (G):         [4, 8, 12, 20]
Depth (D):         [4, 8, 12, 20]
Batch norm (G):    [True, False]
Batch norm (D; layer norm for WGAN-GP): [True, False]
Base filter count (G): [32, 64, 128]
Base filter count (D): [32, 64, 128]
From this set, we sample 200 architectures and train each on 32×32 ImageNet with both WGAN-GP
and the standard GAN objectives. Table 2 lists the number of instances where either: only the standard GAN succeeded, only WGAN-GP succeeded, both succeeded, or both failed, where success
is defined as inception score > min score. For most choices of score threshold, WGAN-GP
successfully trains many architectures from this set which we were unable to train with the standard
GAN objective.
Table 2: Outcomes of training 200 random architectures, for different success thresholds. For comparison, our standard DCGAN achieved a score of 7.24. A longer version of this table can be found in the appendix.

Min. score        1.0   3.0   5.0   7.0   9.0
Only GAN            0     1     0     1     0
Only WGAN-GP        8    88   147   104     0
Both succeeded    192   110    42     5     0
Both failed         0     1    11    90   200
[Figure 2: Different GAN architectures trained with four methods (columns: DCGAN, LSGAN, WGAN with weight clipping, and WGAN-GP (ours)); rows: baseline (G: DCGAN, D: DCGAN); G: no BN and a constant number of filters, D: DCGAN; G: 4-layer 512-dim ReLU MLP, D: DCGAN; no normalization in either G or D; gated multiplicative nonlinearities everywhere in G and D; tanh nonlinearities everywhere in G and D; 101-layer ResNet G and D. We only succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP.]
5.2 Training varied architectures on LSUN bedrooms
To demonstrate our model's ability to train many architectures with its default settings, we train six
different GAN architectures on the LSUN bedrooms dataset [30]. In addition to the baseline DCGAN architecture from [21], we choose six architectures whose successful training we demonstrate:
(1) no BN and a constant number of filters in the generator, as in [2], (2) 4-layer 512-dim ReLU
MLP generator, as in [2], (3) no normalization in either the discriminator or generator (4) gated
multiplicative nonlinearities, as in [23], (5) tanh nonlinearities, and (6) 101-layer ResNet generator
and discriminator.
Although we do not claim it is impossible without our method, to the best of our knowledge this
is the first time very deep residual networks were successfully trained in a GAN setting. For each
architecture, we train models using four different GAN methods: WGAN-GP, WGAN with weight
clipping, DCGAN [21], and Least-Squares GAN [17]. For each objective, we used the default set
of optimizer hyperparameters recommended in that work (except LSGAN, where we searched over
learning rates).
For WGAN-GP, we replace any batch normalization in the discriminator with layer normalization
(see section 4). We train each model for 200K iterations and present samples in Figure 2. We only
succeeded in training every architecture with a shared set of hyperparameters using WGAN-GP.
For every other training method, some of these architectures were unstable or suffered from mode
collapse.
5.3 Improved performance over weight clipping
One advantage of our method over weight clipping is improved training speed and sample quality.
To demonstrate this, we train WGANs with weight clipping and our gradient penalty on CIFAR-10 [13] and plot Inception scores [22] over the course of training in Figure 3.

[Figure 3: CIFAR-10 Inception score over generator iterations (left) or wall-clock time (right) for four models: WGAN with weight clipping, WGAN-GP with RMSProp and Adam (to control for the optimizer), and DCGAN. WGAN-GP significantly outperforms weight clipping and performs comparably to DCGAN.]

For WGAN-GP,
we train one model with the same optimizer (RMSProp) and learning rate as WGAN with weight
clipping, and another model with Adam and a higher learning rate. Even with the same optimizer,
our method converges faster and to a better score than weight clipping. Using Adam further improves
performance. We also plot the performance of DCGAN [21] and find that our method converges
more slowly (in wall-clock time) than DCGAN, but its score is more stable at convergence.
5.4 Sample quality on CIFAR-10 and LSUN bedrooms
For equivalent architectures, our method achieves comparable sample quality to the standard GAN
objective. However the increased stability allows us to improve sample quality by exploring a wider
range of architectures. To demonstrate this, we find an architecture which establishes a new state of
the art Inception score on unsupervised CIFAR-10 (Table 3). When we add label information (using
the method in [19]), the same architecture outperforms all other published models except for SGAN.
Table 3: Inception scores on CIFAR-10. Our unsupervised model achieves state-of-the-art performance, and our conditional model outperforms all others except SGAN.

Unsupervised
Method                         Score
ALI [8] (in [26])              5.34 ± .05
BEGAN [4]                      5.62
DCGAN [21] (in [11])           6.16 ± .07
Improved GAN (-L+HA) [22]      6.86 ± .06
EGAN-Ent-VI [7]                7.07 ± .10
DFM [26]                       7.72 ± .13
WGAN-GP ResNet (ours)          7.86 ± .07

Supervised
Method                         Score
SteinGAN [25]                  6.35
DCGAN (with labels, in [25])   6.58
Improved GAN [22]              8.09 ± .07
AC-GAN [19]                    8.25 ± .07
SGAN-no-joint [11]             8.37 ± .08
WGAN-GP ResNet (ours)          8.42 ± .10
SGAN [11]                      8.59 ± .12
We also train a deep ResNet on 128 × 128 LSUN bedrooms and show samples in Figure 4. We
believe these samples are at least competitive with the best reported so far on any resolution for this
dataset.
5.5 Modeling discrete data with a continuous generator
To demonstrate our method's ability to model degenerate distributions, we consider the problem of
modeling a complex discrete distribution with a GAN whose generator is defined over a continuous
space. As an instance of this problem, we train a character-level GAN language model on the Google
Billion Word dataset [6]. Our generator is a simple 1D CNN which deterministically transforms a
latent vector into a sequence of 32 one-hot character vectors through 1D convolutions. We apply a
softmax nonlinearity at the output, but use no sampling step: during training, the softmax output is passed directly into the critic (which, likewise, is a simple 1D CNN). When decoding samples, we just take the argmax of each output vector.

[Figure 4: Samples of 128 × 128 LSUN bedrooms. We believe these samples are at least comparable to the best published results so far.]
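A sketch of the character-level generator described above (our reconstruction under stated assumptions, not the authors' released code; the layer sizes and vocabulary size are illustrative):

import torch
import torch.nn as nn

class CharGenerator(nn.Module):
    # Maps a latent vector to a sequence of 32 softmax vectors over a
    # vocabulary of size V; all dimensions here are assumed values.
    def __init__(self, z_dim=128, hidden=512, seq_len=32, vocab=96):
        super().__init__()
        self.fc = nn.Linear(z_dim, hidden * seq_len)
        self.convs = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, vocab, kernel_size=5, padding=2),
        )
        self.hidden, self.seq_len = hidden, seq_len

    def forward(self, z):
        h = self.fc(z).view(-1, self.hidden, self.seq_len)
        # Softmax over the vocabulary dimension; no discrete sampling step.
        return torch.softmax(self.convs(h), dim=1)

def decode(probs, alphabet):
    # At sampling time, take the argmax character at each position.
    idx = probs.argmax(dim=1)  # shape: (batch, seq_len)
    return [''.join(alphabet[i] for i in row.tolist()) for row in idx]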
We present samples from the model in Table 4. Our model makes frequent spelling errors (likely
because it has to output each character independently) but nonetheless manages to learn quite a lot
about the statistics of language. We were unable to produce comparable results with the standard
GAN objective, though we do not claim that doing so is impossible.
Table 4: Samples from a WGAN character-level language model trained with our method on sentences from the Billion Word dataset, truncated to 32 characters. The model learns to directly output
one-hot character embeddings from a latent vector without any discrete sampling step. We were
unable to achieve comparable results with the standard GAN objective and a continuous generator.
WGAN with gradient penalty (1D CNN)
Busino game camperate spent odea
In the bankaway of smarling the
SingersMay , who kill that imvic
Keray Pents of the same Reagun D
Manging include a tudancs shat "
His Zuith Dudget , the Denmbern
In during the Uitational questio
Divos from The ? noth ronkies of
She like Monday , of macunsuer S
Solice Norkedin pring in since
ThiS record ( 31. ) UBS ) and Ch
It was not the annuas were plogr
This will be us , the ect of DAN
These leaded as most-worsd p2 a0
The time I paidOa South Cubry i
Dour Fraps higs it was these del
This year out howneed allowed lo
Kaulna Seto consficutes to repor
The difference in performance between WGANs and other GANs can be explained as follows. Consider the simplex Δ_n = {p ∈ Rⁿ : pᵢ ≥ 0, Σᵢ pᵢ = 1}, and the set of vertices on the simplex (or one-hot vectors) V_n = {p ∈ Rⁿ : pᵢ ∈ {0, 1}, Σᵢ pᵢ = 1} ⊆ Δ_n. If we have a vocabulary of size n and we have a distribution P_r over sequences of size T, we have that P_r is a distribution on V_nᵀ = V_n × ··· × V_n. Since V_nᵀ is a subset of Δ_nᵀ, we can also treat P_r as a distribution on Δ_nᵀ (by assigning zero probability mass to all points not in V_nᵀ).
[Figure 5: (a) The negative critic loss of our model on LSUN bedrooms converges toward a minimum as the network trains. (b) WGAN training and validation losses on a random 1000-digit subset of MNIST show overfitting when using either our method (left) or weight clipping (right). In particular, with our method, the critic overfits faster than the generator, causing the training loss to increase gradually over time even as the validation loss drops.]
P_r is discrete (or supported on a finite number of elements, namely V_nᵀ) on Δ_nᵀ, but P_g can easily be a continuous distribution over Δ_nᵀ. The KL divergences between two such distributions are infinite, and so the JS divergence is saturated. In practice, this means a discriminator might quickly learn to reject all samples that don't lie on V_nᵀ (sequences of one-hot vectors) and give meaningless gradients to the generator. However, it is easily seen that the conditions of Theorem 1 and Corollary 1 of [2] are satisfied even on this non-standard learning scenario with X = Δ_nᵀ. This means that W(P_r, P_g) is still well defined, continuous everywhere and differentiable almost everywhere, and we can optimize it just like in any other continuous variable setting. The way this manifests is that in WGANs, the Lipschitz constraint forces the critic to provide a linear gradient from all of Δ_nᵀ towards the real points in V_nᵀ.
Other attempts at language modeling with GANs [31, 14, 29, 5, 15, 10] typically use discrete models
and gradient estimators [27, 12, 16]. Our approach is simpler to implement, though whether it scales
beyond a toy language model is unclear.
5.6 Meaningful loss curves and detecting overfitting
An important benefit of weight-clipped WGANs is that their loss correlates with sample quality
and converges toward a minimum. To show that our method preserves this property, we train a
WGAN-GP on the LSUN bedrooms dataset [30] and plot the negative of the critic's loss in Figure 5a. We see that the loss converges as the generator minimizes W(P_r, P_g).
GANs, like all models trained on limited data, will eventually overfit. To explore the loss curve's
behavior when the network overfits, we train large unregularized WGANs on a random 1000-image
subset of MNIST and plot the negative critic loss on both the training and validation sets in Figure 5b. In both WGAN and WGAN-GP, the two losses diverge, suggesting that the critic overfits
and provides an inaccurate estimate of W (Pr , Pg ), at which point all bets are off regarding correlation with sample quality. However in WGAN-GP, the training loss gradually increases even while
the validation loss drops.
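A minimal sketch of this diagnostic (our illustration; D, the generated batch x_gen, and the data batches are assumed to exist):

def negative_critic_loss(D, x, x_gen):
    # Estimates W(Pr, Pg) up to scale: E[D(real)] - E[D(generated)].
    return (D(x).mean() - D(x_gen).mean()).item()

# If the training estimate keeps rising while the validation estimate
# falls, the critic is overfitting the training examples.
train_est = negative_critic_loss(D, x_train, x_gen)
valid_est = negative_critic_loss(D, x_valid, x_gen)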
[28] also measure overfitting in GANs by estimating the generator's log-likelihood. Compared
to that work, our method detects overfitting in the critic (rather than the generator) and measures
overfitting against the same loss that the network minimizes.
6 Conclusion
In this work, we demonstrated problems with weight clipping in WGAN and introduced an alternative in the form of a penalty term in the critic loss which does not exhibit the same problems. Using
our method, we demonstrated strong modeling performance and stability across a variety of architectures. Now that we have a more stable algorithm for training GANs, we hope our work opens
the path for stronger modeling performance on large-scale image datasets and language. Another
interesting direction is adapting our penalty term to the standard GAN objective function, where it
might stabilize training by encouraging the discriminator to learn smoother decision boundaries.
Acknowledgements
We would like to thank Mohamed Ishmael Belghazi, Léon Bottou, Zihang Dai, Stefan Doerr,
Ian Goodfellow, Kyle Kastner, Kundan Kumar, Luke Metz, Alec Radford, Sai Rajeshwar, Aditya
Ramesh, Tom Sercu, Zain Shah and Jake Zhao for insightful comments.
References
[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial
networks. 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875,
2017.
[3] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450,
2016.
[4] D. Berthelot, T. Schumm, and L. Metz. Began: Boundary equilibrium generative adversarial
networks. arXiv preprint arXiv:1703.10717, 2017.
[5] T. Che, Y. Li, R. Zhang, R. D. Hjelm, W. Li, Y. Song, and Y. Bengio. Maximum-likelihood
augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983, 2017.
[6] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint
arXiv:1312.3005, 2013.
[7] Z. Dai, A. Almahairi, P. Bachman, E. Hovy, and A. Courville. Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[8] V. Dumoulin, M. I. D. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and
A. Courville. Adversarially learned inference. 2017.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Y. Bengio. Generative adversarial nets. In Advances in neural information processing
systems, pages 2672–2680, 2014.
[10] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431, 2017.
[11] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial
networks. arXiv preprint arXiv:1612.04357, 2016.
[12] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. arXiv
preprint arXiv:1611.01144, 2016.
[13] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[14] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adversarial learning for neural dialogue
generation. arXiv preprint arXiv:1701.06547, 2017.
[15] X. Liang, Z. Hu, H. Zhang, C. Gan, and E. P. Xing. Recurrent topic-transition gan for visual
paragraph generation. arXiv preprint arXiv:1703.07022, 2017.
[16] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation
of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[17] X. Mao, Q. Li, H. Xie, R. Y. Lau, and Z. Wang. Least squares generative adversarial networks.
arXiv preprint arXiv:1611.04076, 2016.
[18] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks.
arXiv preprint arXiv:1611.02163, 2016.
[19] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans.
arXiv preprint arXiv:1610.09585, 2016.
[20] B. Poole, A. A. Alemi, J. Sohl-Dickstein, and A. Angelova. Improved generator objectives for
gans. arXiv preprint arXiv:1612.02780, 2016.
[21] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[22] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved
techniques for training gans. In Advances in Neural Information Processing Systems, pages
2226–2234, 2016.
[23] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. Conditional image
generation with pixelcnn decoders. In Advances in Neural Information Processing Systems,
pages 4790–4798, 2016.
[24] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media,
2008.
[25] D. Wang and Q. Liu. Learning to draw samples: With application to amortized mle for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
[26] D. Warde-Farley and Y. Bengio. Improving generative adversarial networks with denoising
feature matching. 2017.
[27] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[28] Y. Wu, Y. Burda, R. Salakhutdinov, and R. Grosse. On the quantitative analysis of decoderbased generative models. arXiv preprint arXiv:1611.04273, 2016.
[29] Z. Yang, W. Chen, F. Wang, and B. Xu. Improving neural machine translation with conditional
sequence generative adversarial nets. arXiv preprint arXiv:1703.04887, 2017.
[30] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. Lsun: Construction of
a large-scale image dataset using deep learning with humans in the loop. arXiv preprint
arXiv:1506.03365, 2015.
[31] L. Yu, W. Zhang, J. Wang, and Y. Yu. Seqgan: sequence generative adversarial nets with policy
gradient. arXiv preprint arXiv:1609.05473, 2016.
6,809 | 716 | On the Use of Evidence in Neural Networks
David H. Wolpert
The Santa Fe Institute
1660 Old Pecos Trail
Santa Fe, NM 87501
Abstract
The Bayesian "evidence" approximation has recently been employed to
determine the noise and weight-penalty terms used in back-propagation.
This paper shows that for neural nets it is far easier to use the exact result
than it is to use the evidence approximation. Moreover, unlike the evidence approximation, the exact result neither has to be re-calculated for
every new data set, nor requires the running of computer code (the exact
result is closed form). In addition, it turns out that the evidence procedure's MAP estimate for neural nets is, in toto, approximation error. Another advantage of the exact analysis is that it does not lead one to incorrect intuition, like the claim that using evidence one can "evaluate different priors in light of the data". This paper also discusses sufficiency
conditions for the evidence approximation to hold, why it can sometimes
give "reasonable" results, etc.
1 THE EVIDENCE APPROXIMATION
It has recently become popular to consider the problem of training neural nets from a Bayesian viewpoint (Buntine and Weigend 1991, MacKay 1992). The usual way of doing this
starts by assuming that there is some underlying target function f from Rⁿ to R, parameterized by an N-dimensional weight vector w. We are provided with a training set L of noise-corrupted samples of f. Our goal is to make a guess for w, basing that guess only on L. Now assume we have i.i.d. additive Gaussian noise resulting in P(L | w, β) ∝ exp(−β χ²(w, L)), where χ²(w, L) is the usual sum-squared training set error, and β reflects the noise level. Assume further that P(w | α) ∝ exp(−α W(w)), where W(w) is the sum of the squares of the weights. If the values of α and β are known and fixed, to the values αt and βt respectively, then P(w)
= P(w | αt) and P(L | w) = P(L | w, βt). Bayes' theorem then tells us that the posterior is proportional to the product of the likelihood and the prior, i.e., P(w | L) ∝ P(L | w) P(w). Consequently, finding the maximum a posteriori (MAP) w - the w which maximizes P(w | L) - is equivalent to finding the w minimizing χ²(w, L) + (αt / βt) W(w). This can be viewed as a justification for performing gradient descent with weight-decay.
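A minimal sketch of this equivalence (our illustration; net(X, w) stands for an assumed network forward pass):

import numpy as np

def map_objective(w, X, y, net, alpha_t, beta_t):
    chi2 = np.sum((net(X, w) - y) ** 2)   # sum-squared training error
    W = np.sum(w ** 2)                    # weight-decay ("weight penalty") term
    # Minimizing this is exactly MAP estimation under the stated prior and noise model.
    return chi2 + (alpha_t / beta_t) * W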
One of the difficulties with the foregoing is that we almost never know αt and βt in real-world problems. One way to deal with this is to estimate αt and βt, for example via a technique like cross-validation. In contrast, a Bayesian approach to this problem would be to set priors over α and β, and then examine the consequences for the posterior of w.
This Bayesian approach is the starting point for the "evidence" approximation created by Gull (Gull 1989). One makes three assumptions, for P(w | γ), P(L | w, γ), and P(γ). (For simplicity of the exposition, from now on the two hyperparameters α and β will be expressed as the two components of the single vector γ.) The quantity of interest is the posterior:

P(w | L) = ∫ dγ P(w, γ | L) = ∫ dγ [{P(w, γ | L) / P(γ | L)} × P(γ | L)]   (1)

The evidence approximation suggests that if P(γ | L) is sharply peaked about γ = γ̂, while the term in curly brackets is smooth about γ = γ̂, then one can approximate the w-dependence of P(w | L) as P(w, γ̂ | L) / P(γ̂ | L) ∝ P(L | w, γ̂) P(w | γ̂). In other words, with the evidence approximation, one sets the posterior by taking P(w) = P(w | γ̂) and P(L | w) = P(L | w, γ̂), where γ̂ is the MAP γ. P(L | γ) = ∫ dw [P(L | w, γ) P(w | γ)] is known as the "evidence" for L given γ. For relatively smooth P(γ), the peak of P(γ | L) is the same as the peak of the evidence (hence the name "evidence approximation"). Although the current discussion will only explicitly consider using evidence to set hyperparameters like α and β, most of what will be said also applies to the use of evidence to set other characteristics of the learner, like its architecture.
MacKay has applied the evidence approximation to finding the posterior for the neural net P(w | α) and P(L | w, β) recounted above combined with a P(γ) = P(α, β) which is uniform over all α and β from 0 to +∞ (MacKay 1992). In addition to the error introduced by the evidence approximation, additional error is introduced by his need to approximate γ̂. MacKay states that although he expects his approximation for γ̂ to be valid, "it is a matter of further research to establish [conditions for] this approximation to be reliable".
2 THE EXACT CALCULATION
It is always true that the exact posterior is given by

P(w) = ∫ dγ P(w | γ) P(γ),   P(L | w) = ∫ dγ {P(L | w, γ) × P(w | γ) × P(γ)} / P(w);
P(w | L) ∝ ∫ dγ {P(L | w, γ) × P(w | γ) × P(γ)}   (2)

where the proportionality constant, being independent of w, is irrelevant.
Using the neural net P(w | α) and P(L | w, β) recounted above, and MacKay's P(γ), it is trivial to use equation 2 to calculate that P(w) ∝ [W(w)]^−(N/2 + 1), where N is the number of weights. Similarly, with m the number of pairs in L, P(L | w) ∝ [χ²(w, L)]^−(m/2 + 1). (See (Wolpert 1992) and (Buntine and Weigend 1991), and allow the output values in L to range from −∞ to +∞.) These two results give us the exact expression for the posterior P(w | L). In contrast, the evidence-approximated posterior ∝ exp[−α′(L) W(w) − β′(L) χ²(w, L)].
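A sketch comparing the two log-posteriors (our illustration; alpha_e and beta_e stand for whatever data-dependent values the evidence procedure returns):

import numpy as np

def exact_log_posterior(w, chi2, m):
    # Exact result: -(N/2 + 1) ln W(w) - (m/2 + 1) ln chi^2(w, L) + const,
    # with N the number of weights and m the number of training pairs.
    N, W = w.size, np.sum(w ** 2)
    return -(N / 2 + 1) * np.log(W) - (m / 2 + 1) * np.log(chi2)

def evidence_log_posterior(w, chi2, alpha_e, beta_e):
    # Evidence approximation: -alpha_e W(w) - beta_e chi^2(w, L) + const.
    return -alpha_e * np.sum(w ** 2) - beta_e * chi2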
It is illuminating to compare this exact calculation to the calculation based on the evidence
approximation. A good deal of relatively complicated mathematics followed by some computer-based numerical estimation is necessary to arrive at the answer given by the evidence approximation. (This is due to the need to approximate γ̂.) In contrast, to perform the exact calculation one only need evaluate a simple Gaussian integral, which can be done in closed form, and in particular one doesn't need to perform any computer-based numerical estimation. In addition, with the evidence procedure γ̂ must be re-evaluated for each new data set,
which means that the formula giving the posterior must be re-derived every time one uses
a new data set. In contrast, the exact calculation's formula for the posterior holds for any
data set; no re-calculations are required. So as a practical tool, the exact calculation is both
far simpler and quicker to use than the calculation based on the evidence approximation.
Another advantage of the exact calculation, of course, is that it is exact. Indeed, consider the simple case where the noise is fixed, i.e., P(γ) = P(γ₁) δ(γ₂ − βt), so that the only term we must "deal with" is γ₁ = α. Set all other distributions as in (MacKay 1992). For this case, the w-dependence of the exact posterior can be quite different from that of the evidence-approximated posterior. In particular, note that the MAP estimate based on the exact calculation is w = 0. This is, of course, a silly answer, and reflects the poor choice of distributions made in (MacKay 1992). In particular, it directly reflects the un-normalizability of MacKay's P(α). However the important point is that this is the exactly correct answer for those distributions. On the other hand, the evidence procedure will result in an MAP estimate of argmin_w [χ²(w, L) + (α′ / β′)W(w)], where α′ and β′ are derived from L. This answer will often be far from the correct answer of w = 0. Note also that the evidence approximation's answer will vary, perhaps greatly, with L, whereas the correct answer is L-independent. Finally, since the correct answer is w = 0, the difference between the evidence procedure's answer and the correct answer is equal to the evidence procedure's answer. In other words, although there exist scenarios for which the evidence approximation is valid, neural nets with flat P(γ₁) is not one of them; for this scenario, the evidence procedure's answer is in toto approximation error. (A possible reason for this is presented in section 4.)
If one were to use a more reasonable P(α), uniform only from 0 to an upper cut-off αmax, the results would be essentially the same, for large enough αmax. The effect on the exact posterior, to first order, is to introduce a small region around w = 0 in which P(w) behaves like a decaying exponential in W(w) (the exponent being set by αmax) rather than like [W(w)]^−(N/2 + 1) (T. Wallstrom, private communication). For large enough αmax, the region is small enough so that the exact posterior still has a peak very close to 0. On the other hand, for large enough αmax, there is no change in the evidence procedure's answer. (Generically, the major effect on the evidence procedure of modifying P(γ) is not to change its guess for P(w | L), but rather to change the associated error, i.e., change whether sufficiency conditions for the validity of the approximation are met. See below.) Even with a normalizable prior, the evidence procedure's answer is still essentially all approximation error.
Consider again the case where the prior over both α and β is uniform. With the evidence approximation, the log of the posterior is −{χ²(w, L) + (α′ / β′)W(w)}, where α′ and β′ are set by the data. On the other hand, the exact calculation shows that the log of the posterior is really given by −{ln[χ²(w, L)] + ((N+2)/(m+2)) ln[W(w)]}. What's interesting about
this is not simply the logarithms, absent from the evidence approximation's answer, but also the factor multiplying the term involving the "weight penalty" quantity W(w). In the evidence approximation, this factor is data-dependent, whereas in the exact calculation it only depends on the number of data. Moreover, the value of this factor in the exact calculation tells us that if the number of weights increases, or alternatively the number of training examples decreases, the "weight penalty" term becomes more important, and fitting the training examples becomes less important. (It is not at all clear that this trade-off between N and m is reflected in (α′ / β′), the corresponding factor from the evidence approximation.) As before, if we have upper cut-offs on P(γ), so that the MAP estimate may be reasonable, things don't change much. For such a scenario, the N vs. m trade-off governing the relative importance of W(w) and χ²(w, L) still holds, but only to lowest order, and only in the region sufficiently far from the α-singularities (like w = 0) so that P(w | L) behaves like [W(w)]^−(N/2 + 1) × [χ²(w, L)]^−(m/2 + 1).
All of this notwithstanding, the evidence approximation has been reported to give good results in practice. This should not be all that surprising. There are many procedures which are formally illegal but which still give reasonable advice. (Some might classify all of non-Bayesian statistics that way.) The evidence procedure fixes γ to a single value, essentially by maximum likelihood. That's not unreasonable, just usually illegal (as well as far more laborious than the correct Bayesian procedure).
In addition, the tests of the evidence approximation reported in (MacKay 1992) are not all that convincing. For paper 1, the evidence approximation gives α′ = 2.5. For any other α in an interval extending three orders of magnitude about this α′, test set error is essentially unchanged (see figure 5 of (MacKay 1992)). Since such error is what we're ultimately interested in, this is hardly a difficult test of the evidence approximation. In paper 2 of (MacKay 1992) the initial use of the evidence approximation is "a failure of Bayesian prediction"; P(γ | L) doesn't correlate with test set error (see figure 7). MacKay addresses this by arguing that poor Bayesian results are never wrong, but only "an opportunity to learn" (in contrast to poor non-Bayesian results?). Accordingly, he modifies the system while looking at the test set, to get his desired correlation on the test set. To do this legally, he should have instead modified his system while looking at a validation set, separate from the test set. However if he had done that, it would have raised the question of why one should use evidence at all; since one is already assuming that behavior on a validation set corresponds to behavior on a test set, why not just set α and β via cross-validation?
3 EVIDENCE AND THE PRIOR
Consider again equation 1. Since γ̂ depends on the data L, it would appear that when the evidence approximation is valid, the data determines the prior, or as MacKay puts it, "the modern Bayesian ... does not assign the priors - many different priors can be ... compared in the light of the data by evaluating the evidence" (MacKay 1992). If this were true, it would remove perhaps the most major objection which has been raised concerning Bayesian analysis - the need to choose priors in a subjective manner, independent of the data. However the exact P(w) given by equation 2 is data-independent. So one has chosen the prior, in a subjective way. The evidence procedure is simply providing a data-dependent approximation to a data-independent quantity. In no sense does the evidence procedure allow one to side-step the need to make subjective assumptions which fix P(w).
Since the true P(w) doesn't vary with L whereas the evidence approximation's P(w) does, one might suspect that that approximation to P(w) can be quite poor, even when the evidence approximation to the posterior is good. Indeed, if P(w | γ₁) is exponential, there is no non-pathological scenario for which the evidence approximation to P(w) is correct:

Theorem 1: Assume that P(w | γ₁) ∝ e^(−γ₁ U(w)). Then the only way that one can have P(w) ∝ e^(−a U(w)) for some constant a is if P(γ₁) = 0 for all γ₁ ≠ a.
Proof: Our proposed equality is exp(−a × U) = ∫dγ₁ {P(γ₁) × exp(−γ₁ × U)} (the normalization factors having all been absorbed into P(γ₁)). We must find an a and a normalizable P(γ₁) such that this equality holds for all allowed U. Let u be such an allowed value of U. Take the derivative with respect to U of both sides of the proposed equality t times, and evaluate for U = u. The result is aᵗ = ∫dγ₁ (γ₁ᵗ × R(γ₁)) for any integer t ≥ 0, where R(γ₁) ≡ P(γ₁) exp(u(a − γ₁)). Using this, we see that ∫dγ₁ (γ₁ − a)² × R(γ₁) = 0. Since both R(γ₁) and (γ₁ − a)² are nowhere negative, this means that for all γ₁ for which (γ₁ − a)² ≠ 0, R(γ₁) must equal zero. Therefore R(γ₁) must equal zero for all γ₁ ≠ a. QED.
Since the evidence approximation for the prior is always wrong, how can its approximation for the posterior ever be good? To answer this, write P(w | L) = P(L | w) × [P′(w) + E(w)] / P(L), where P′(w) is the evidence approximation to P(w). (It is assumed that we know the likelihood exactly.) This means that P(w | L) − {P(L | w) × P′(w) / P(L)}, the error in the evidence procedure's estimate for the posterior, equals P(L | w) × E(w) / P(L). So we can have arbitrarily large E(w) and not introduce sizable error into the posterior of w, but only for those w for which P(L | w) is small. As L varies, the w with non-negligible likelihood vary, and the γ such that for those w P(w | γ) is a good approximation to P(w) varies. When it works, the γ̂ given by the evidence approximation reflects this changing of γ with L.
4 SUFFICIENCY CONDITIONS FOR EVIDENCE TO WORK
Note that regardless of how peaked the evidence is, −{χ²(w, L) + (α′/β′) W(w)} ≠ −{ln[χ²(w, L)] + ((N+2)/(m+2)) ln[W(w)]}; the evidence approximation always has non-negligible error for neural nets used with flat P(γ). To understand this, one must carefully elucidate a set of sufficiency conditions necessary for the evidence approximation to be valid. (Unfortunately, this has never been done before. A direct consequence is that no one has ever checked, formally, that a particular use of the evidence approximation is justified.)
One such set of sufficiency conditions, the one implicit in all attempts to date to justify the evidence approximation (i.e., the one implicit in the logic of equation 1), is the following:

(i) P(γ | L) is sharply peaked about a particular γ, γ̂.
(ii) P(w, γ | L) / P(γ | L) varies slowly around γ = γ̂.
(iii) P(w, γ | L) is infinitesimal for all γ sufficiently far from γ̂.
Formally, condition (iii) can be taken to mean that there exists a not too large positive constant k, and a small positive constant δ, such that |P(w | L) − k ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L)| is bounded by a small constant ε for all w. (As stated, (iii) has k = 1. This will almost always
be the case in practice and will usually be assumed, but it is not needed to prove theorem 2.) Condition (ii) can be taken to mean that across [γ̂ − δ, γ̂ + δ], |P(w | γ, L) − P(w | γ̂, L)| < τ, for some small positive constant τ, for all w. (Here and throughout this paper, when γ is multi-dimensional, "δ" is taken to be a small positive vector.)
Theorem 2: When conditions (i), (ii), and (iii) hold, P(w | L) ≈ P(L | w, γ̂) × P(w | γ̂), up to an (irrelevant) overall proportionality constant.

Proof: Condition (iii) gives |P(w | L) − k ∫_{γ̂−δ}^{γ̂+δ} dγ [P(w | γ, L) × P(γ | L)]| < ε for all w. However |k ∫_{γ̂−δ}^{γ̂+δ} dγ [P(w | γ, L) × P(γ | L)] − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < τk × ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L), by condition (ii). If we now combine these two results, we see that |P(w | L) − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < ε + τk × ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L). Since the integral is bounded by 1, |P(w | L) − k P(w | γ̂, L) ∫_{γ̂−δ}^{γ̂+δ} dγ P(γ | L)| < ε + τk. Since the integral is independent of w, up to an overall proportionality constant (that integral times k) the w-dependence of P(w | L) can be approximated by that of P(w | γ̂, L) ∝ P(L | w, γ̂) × P(w | γ̂), incurring error less than ε + τk. Take k not too large and both ε and τ small. QED.
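The mechanics of Theorem 2 are easy to probe numerically. The sketch below is ours; the Gaussian choices for P(w | γ, L) and P(γ | L) are purely illustrative. It compares the exact γ-marginalized posterior with the evidence approximation at the peak γ̂, and the error shrinks as the evidence peak narrows, as conditions (i)-(iii) predict:

```python
import numpy as np

# Toy 1-d check of the logic behind Theorem 2 (all densities are illustrative
# choices, not from the paper): exact posterior
# P(w|L) = integral dgamma P(w|gamma,L) P(gamma|L) versus the evidence
# approximation P(w|gamma_hat, L), with gamma_hat the peak of P(gamma|L).
w = np.linspace(-3, 3, 400)
gamma = np.linspace(0.1, 5.0, 300)
dgamma = gamma[1] - gamma[0]

# Joint P(w, gamma | L) on a grid: Gaussian in w with precision gamma,
# times a sharply peaked (here Gaussian) evidence P(gamma | L).
cond_w = np.sqrt(gamma[:, None] / (2 * np.pi)) * np.exp(-0.5 * gamma[:, None] * w[None, :] ** 2)
evidence = np.exp(-0.5 * ((gamma - 2.0) / 0.05) ** 2)   # peaked about gamma_hat = 2
evidence /= evidence.sum() * dgamma                      # normalize P(gamma | L)
joint = cond_w * evidence[:, None]

exact = joint.sum(axis=0) * dgamma                       # marginalize out gamma
gamma_hat = gamma[np.argmax(evidence)]
approx = np.sqrt(gamma_hat / (2 * np.pi)) * np.exp(-0.5 * gamma_hat * w ** 2)

print("max |exact - evidence approx|:", np.abs(exact - approx).max())
# Shrinking the 0.05 width above shrinks the error, as conditions (i)-(iii) predict.
```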
Note that the proof would go through even if P(γ | L) were not peaked about γ̂, or if P(γ | L) were peaked about some point far from the γ̂ for which (ii) and (iii) hold; nowhere in the proof is the definition of γ̂ from condition (i) used. However in practice, when condition (iii) is met, k = 1, P(γ | L) falls to 0 outside of [γ̂ − δ, γ̂ + δ], and P(w | γ, L) stays reasonably bounded for all such γ. (If this weren't the case, then P(w | γ, L) would have to fall to 0 outside of [γ̂ − δ, γ̂ + δ], something which is rarely true.) So we see that we could either just give conditions (ii) and (iii), or we could give (i), (ii), and the extra condition that P(w | γ, L) is bounded small enough so that condition (iii) is met. (In addition, one can prove that if the evidence approximation is valid, then conditions (i) and (ii) give condition (iii).)
In any case, it should be noted that conditions (i) and (ii) by themselves are not sufficient for the evidence approximation to be valid. To see this, have w be one-dimensional, and let P(w, γ | L) = 0 both for {|γ − γ̂| < δ, |w − w*| < ν} and for {|γ − γ̂| > δ, |w − w*| > ν}. Let it be constant everywhere else (within certain bounds of allowed γ and w). Then for both δ and ν small, conditions (i) and (ii) hold: the evidence is peaked about γ̂, and τ = 0. Yet for the true MAP w, w*, the evidence approximation fails badly. (Generically, this scenario will also result in a big error if rather than using the evidence-approximated posterior to guess the MAP w, one instead uses it to evaluate the posterior-averaged f, ∫ df f P(f | L).)
Gull mentions only condition (i). MacKay also mentions condition (ii), but not condition (iii). Neither author plugs in for ε and τ, or in any other way uses their distributions to infer bounds on the error accompanying their use of the evidence approximation.
Since by (i) P(γ | L) is sharply peaked about γ̂, one would expect that for (ii) to hold P(w, γ | L) must also be sharply peaked about γ̂. Although this line of reasoning can be formalized, it turns out to be easier to prove the result using sufficiency condition (iii):

Theorem 3: If condition (iii) holds, then for all w such that P(w | L) > c > ε, for each component i of γ, P(w, γi | L) must have a γi-peak somewhere within δi[1 + 2ε / (c − ε)] of (γ̂)i.

Proof: Condition (iii) with k = 1 tells us that P(w | L) − ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L) < ε. Extending
the integrals over the γ_{j≠i} gives P(w | L) − ∫_{(γ̂)i−δi}^{(γ̂)i+δi} dγi P(w, γi | L) < ε. From now on the i subscript on γ̂ and δ will be implicit. We have ε > ∫_{γ̂+δ}^{γ̂+δ+r} dγ P(w, γ | L) for any scalar r > 0. Assume that P(w, γ | L) doesn't have a peak anywhere in [γ̂ − δ, γ̂ + δ + r]. Without loss of generality, assume also that P(w, γ̂ + δ | L) ≥ P(w, γ̂ − δ | L). These two assumptions mean that for any γ ∈ [γ̂ + δ, γ̂ + δ + r], the value of P(w, γ | L) exceeds the maximal value it takes on in the interval [γ̂ − δ, γ̂ + δ]. Therefore ∫_{γ̂+δ}^{γ̂+δ+r} dγ P(w, γ | L) ≥ (r / 2δ) × ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L). This means that ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L) < 2δε / r. But since P(w | L) < ε + ∫_{γ̂−δ}^{γ̂+δ} dγ P(w, γ | L), this means that P(w | L) < ε(1 + 2δ / r). So if P(w | L) > c > ε, then r < 2δε / (c − ε), and there must be a peak of P(w, γ | L) within δ(1 + 2ε/(c − ε)) of γ̂. QED.
So for those w with non-negligible posterior, for ε small, the γ-peak of P(w, γ | L) ∝ P(L | w, γ) × P(w | γ) × P(γ) must lie essentially within the peak of P(γ | L). Therefore:
Theorem 4: Assume that P(w | γ1) = exp(−γ1 U(w)) / Z1(γ1) for some function U(·), P(L | w, γ2) = exp(−γ2 V(w, L)) / Z2(γ2, w) for some function V(·, ·), and P(γ) = P(γ1)P(γ2). (The Zj act as normalization constants.) Then if condition (iii) holds, for all w with non-negligible posterior the γ-solution to the equations

−U(w) + ∂_{γ1} [ln(P(γ1)) − ln(Z1(γ1))] = 0
−V(w, L) + ∂_{γ2} [ln(P(γ2)) − ln(Z2(γ2, w))] = 0

must lie within the γ-peak of P(γ | L).

Proof: P(w, γ | L) ∝ {P(γ1) × P(γ2) × exp[−γ1 U(w) − γ2 V(w, L)]} / {Z1(γ1) × Z2(γ2, w)}. For both i = 1 and i = 2, evaluate ∂_{γi} {∫ dγ_{j≠i} P(w, γ | L)}, and set it equal to zero. This gives the two equations. Now define "the γ-peak of P(γ | L)" to mean a cube with i-component width δi[1 + 2ε / (c − ε)], centered on γ̂, where having a "non-negligible posterior" means P(w | L) > c. Applying theorem 3, we get the result claimed. QED.
In particular, in MacKay's scenario, P(γ) is uniform, W(w) = Σ_{i=1}^N (wi)², and V(w, L) = χ²(w, L). Therefore Z1 and Z2 are proportional to (γ1)^{−N/2} and (γ2)^{−m/2} respectively. This means that if the vector {γ1, γ2} = {N / [2W(w)], m / [2χ²(w, L)]} does not lie within the peak of the evidence for the MAP w, condition (iii) does not hold. That γ1 / γ2 must approximately equal [N χ²(w, L)] / [m W(w)] should not be too surprising. If we set the w-gradient of both the evidence-approximated and exact P(w | L) to zero, and demand that the same w, w′, solves both equations, we get γ1/γ2 ≈ [(N + 2) χ²(w′, L)] / [(m + 2) W(w′)]. (Unfortunately, if one continues and evaluates ∂_{wi}∂_{wj} P(w | L) at w′, often one finds that it has opposite signs for the two posteriors, a graphic failure of the evidence approximation.)
It is not clear from the provided neural net data whether this condition is met in (MacKay 1992). However it appears that the corresponding condition is not met, for γ1 at least, for the scenario in (Gull 1989) in which the evidence approximation is used with U(·) being the entropy. (See (Strauss et al. 1993, Wolpert et al. 1993).) Since conditions (i) through (iii)
are sufficient conditions, not necessary ones, this does not prove that Gull's use of evidence
is invalid. (It is still an open problem to delineate the full iff for when the evidence approximation is valid, though it appears that matching of peaks as in theorem 3 is necessary. See
(Wolpert et al. 1993).) However this does mean that the justification offered by Gull for his
use of evidence is apparently invalid. It might also help explain why Gull's results were "visually disappointing and ... clearly ... 'over-fitted''', to use his terms.
The first equation in theorem 4 can be used to set restrictions on the set of w which both have non-negligible posterior and for which condition (iii) holds. Consider for example MacKay's scenario, where that equation says that N / [2W(w)] must lie within the width of the evidence peak. If the evidence peak is sharp, this means that unless all w with non-negligible posterior have essentially the same W(w), condition (iii) can not hold for all of them.

Finally, if for some reason one wishes to know γ, theorem 4 can sometimes be used to circumvent the common difficulty of evaluating P(γ | L). To do this, one assumes that conditions (i) through (iii) hold. Then one finds any w with a non-negligible posterior (say by use of the evidence approximation coupled with approximations to P(γ | L)) and uses it in theorem 4 to find a γ which must lie within the peak of P(γ | L), and therefore must lie close to the correct value of γ̂.
To summarize, there might be scenarios in which the exact calculation of the quantity of interest is intractable, so that some approximation like evidence is necessary. Alternatively, if one's choice of P(w | γ), P(γ), and P(L | w, γ) is poor, the evidence approximation would be useful if the error in that approximation somehow "cancels" error in the choice of distributions. However if one believes one's choice of distributions, and if the quantity of interest is P(w | L), then at a minimum one should check conditions (i) through (iii) before using the evidence approximation. When one is dealing with neural nets, one needn't even do that; the exact calculation is quicker and simpler than using the evidence approximation.
Acknowledgments
This work was done at the SFI and was supported in part by NLM grant F37 LM00011. I would like to thank Charlie Strauss and Tim Wallstrom for stimulating discussion.
References
Buntine, W., Weigend, A. (1991). Bayesian back-propagation. Complex Systems, 5, 603.
Gull, S.F. (1989). Developments in maximum entropy data analysis. In "Maximum-entropy and Bayesian methods", J. Skilling (Ed.). Kluwer Academic Publishers.
MacKay, D.J.C. (1992). Bayesian Interpolation; A Practical Framework for Backpropagation Networks. Neural Computation, 4, 415 and 448.
Strauss, C.E.M., Wolpert, D.H., Wolf, D.R. (1993). Alpha, Evidence, and the Entropic Prior. In "Maximum-entropy and Bayesian methods", A. Mohammad-Djafari (Ed.). Kluwer Academic Publishers. In press.
Wolpert, D.H. (1992). A Rigorous Investigation of "Evidence" and "Occam Factors" in Bayesian Reasoning. SFI TR 92-03-13. Submitted.
Wolpert, D.H., Strauss, C.E.M., Wolf, D.R. (1993). On evidence and the marginalization of alpha in the entropic prior. In preparation.
PART VI: NETWORK DYNAMICS AND CHAOS
Learning Populations of Parameters
Kevin Tian, Weihao Kong, and Gregory Valiant
Department of Computer Science
Stanford University
Stanford, CA, 94305
(kjtian, whkong, valiant)@stanford.edu
Abstract
Consider the following estimation problem: there are n entities, each with an unknown parameter pi ∈ [0, 1], and we observe n independent random variables, X1, . . . , Xn, with Xi ∼ Binomial(t, pi). How accurately can one recover the "histogram" (i.e. cumulative density function) of the pi's? While the empirical estimates would recover the histogram to earth mover distance Θ(1/√t) (equivalently, ℓ1 distance between the CDFs), we show that, provided n is sufficiently large, we can achieve error O(1/t), which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring.
1 Introduction
In many domains, from medical records, to the outcomes of political elections, performance in sports,
and a number of biological studies, we have enormous datasets that reflect properties of a large
number of entities/individuals. Nevertheless, for many of these datasets, the amount of information
that we have about each entity is relatively modest, often too little to accurately infer properties about
that entity. In this work, we consider the extent to which we can accurately recover an estimate of the
population or set of property values of the entities, even in the regime in which there is insufficient
data to resolve properties of each specific entity.
To give a concrete example, suppose we have a large dataset representing 1M people, that records
whether each person had the flu in each of the past 5 years. Suppose each person has some underlying
probability of contracting the flu in a given year, with pi representing the probability that the ith
person contracts the flu each year (and assuming independence between years). With 5 years of data,
the empirical estimates p̂i for each person are quite noisy (and the estimates will all be multiples of 1/5). Despite this, to what extent can we hope to accurately recover the population or set of pi's? An accurate recovery of this population of parameters might be very useful: is it the case that most people have similar underlying probabilities of contracting the flu, or is there significant variation between people? Additionally, such an estimate of this population could be fruitfully leveraged as a prior in making concrete predictions about individuals' pi's, as a type of empirical Bayes method.
The following example motivates the hope for significantly improving upon the empirical estimates:
Example 1. Consider a set of n biased coins, with the ith coin having an unknown bias pi. Suppose we flip each coin twice (independently), and observe that the number of coins where both flips landed heads is roughly n/4, and similarly for the number of coins that landed HT, TH, and TT. We can safely conclude that almost all of the pi's are almost exactly 1/2. The reasoning proceeds in two steps: first, since the average outcome is balanced between heads and tails, the average pi must be very close to 1/2. Given this, if there was any significant amount of variation in the pi's, one would expect to see significantly more HHs and TTs than the HT and TH outcomes, simply because Pr[Binomial(2, p) = 1] = 2p(1 − p) attains a maximum for p = 1/2.

Furthermore, suppose we now consider the ith coin, and see that it landed heads twice. The empirical estimate of pi would be 1, but if we observe close to n/4 coins with each pair of outcomes, using the above reasoning that argues that almost all of the p's are likely close to 1/2, we could safely conclude that pi is likely close to 1/2.
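A quick simulation makes the two-flip argument of Example 1 concrete. This sketch is ours and the populations compared are illustrative: two populations with the same mean bias but different spreads produce visibly different counts of the one-head outcome, since E[#heads = 1] = E[2p(1 − p)] is maximized when every p is 1/2:

```python
import numpy as np

# Simulation of Example 1 (illustrative; numbers are not from the paper).
rng = np.random.default_rng(0)
n = 1_000_000

for p in (np.full(n, 0.5), rng.choice([0.1, 0.9], size=n)):  # same mean, different spread
    heads = rng.binomial(2, p)                               # two flips per coin
    frac = np.bincount(heads, minlength=3) / n
    print("P(0 heads), P(1 head), P(2 heads):", np.round(frac, 3))
# All p_i = 1/2 gives roughly (0.25, 0.50, 0.25); the spread-out population
# gives roughly (0.41, 0.18, 0.41). So the four two-flip outcome counts being
# ~n/4 each really does force almost all p_i to be near 1/2.
```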
This ability to "denoise" the empirical estimate of a parameter based on the observations of a number of independent random variables (in this case, the outcomes of the tosses of the other coins) was first pointed out by Charles Stein in the setting of estimating the means of a set of Gaussians and is known as "Stein's phenomenon" [14]. We discuss this further in Section 1.1. Example 1 was chosen
to be an extreme illustration of the ability to leverage the large number of entities being studied, n, to
partially compensate for the small amount of data reflecting each entity (the 2 tosses of each coin, in
the above example).
Our main result, stated below, demonstrates that even for worst-case sets of p's, significant "denoising" is possible. While we cannot hope to always accurately recover each pi, we show that we can accurately recover the set or histogram of the p's, as measured in the ℓ1 distance between the cumulative distribution functions, or equivalently, the "earth mover's distance" (also known as 1-Wasserstein distance) between the set of p's regarded as a distribution P that places mass 1/n at each pi, and the distribution Q returned by our estimator. Equivalently, our returned distribution Q can also be represented as a set of n values q1, . . . , qn, in which case this earth mover's distance is precisely 1/n times the ℓ1 distance between the vector of sorted pi's and the vector of sorted qi's.
Theorem 1. Consider a set of n probabilities, p1, . . . , pn with pi ∈ [0, 1], and suppose we observe the outcome of t independent flips of each coin, namely X1, . . . , Xn, with Xi ∼ Binomial(t, pi). There is an algorithm that produces a distribution Q supported on [0, 1], such that with probability at least 1 − δ over the randomness of X1, . . . , Xn,

‖P − Q‖_W ≤ π/t + 3^t Σ_{i=1}^t √((3/n) ln(2t/δ)) = π/t + Õ_δ(3^t t / √n),

where P denotes the distribution that places mass 1/n at value pi, and ‖·‖_W denotes the Wasserstein distance.
The above theorem applies to the setting where we hope to recover a set of arbitrary pi's. In some practical settings, we might think of each pi as being sampled independently from some underlying distribution Ppop over probabilities, and the goal is to recover this population distribution Ppop. Since the empirical distribution of n draws from a distribution Ppop over [0, 1] converges to Ppop in Wasserstein distance at a rate of O(1/√n), the above theorem immediately yields the analogous result in this setting:
Corollary 1. Consider a distribution Ppop over [0, 1], and suppose we observe X1, . . . , Xn where Xi is obtained by first drawing pi independently from Ppop, and then drawing Xi from Binomial(t, pi). There is an algorithm that will output a distribution Q such that with probability at least 1 − δ, ‖Ppop − Q‖_W ≤ π/t + Õ_δ(3^t √((t/n) ln t)).
The inverse linear dependence on t of Theorem 1 and Corollary 1 is information theoretically optimal, and is attained asymptotically for sufficiently large n:

Proposition 1. Let Ppop denote a distribution over [0, 1], and for positive integers t and n, let X1, . . . , Xn denote random variables with Xi distributed as Binomial(t, pi) where pi is drawn independently according to Ppop. An estimator f maps X1, . . . , Xn to a distribution f(X1, . . . , Xn). Then, for every fixed t, the following lower bound on the accuracy of any estimator holds for all n:

inf_f sup_{Ppop} E[‖f(X1, . . . , Xn) − Ppop‖_W] > 1/(4t).
Our estimation algorithm, whose performance is characterized by Theorem 1, proceeds via the method of moments. Given X1, . . . , Xn with Xi ∼ Binomial(t, pi), and sufficiently large n, we can obtain accurate estimates of the first t moments of the distribution/histogram P defined by the pi's. Accurate estimates of the first t moments can then be leveraged to recover an estimate of P that is accurate to error 1/t plus a factor that depends (exponentially on t) on the error in the recovered moments.

The intuition for the lower bound, Proposition 1, is that the realizations of Binomial(t, pi) give no information beyond the first t moments. Additionally, there exist distributions P and Q whose first t moments agree exactly, but which differ in their (t + 1)st moment, and have ‖P − Q‖_W ≥ 1/(2t). Putting these two pieces together establishes the lower bound.
We also extend our results to the practically relevant multi-parameter analog of the setting described above, where the ith datapoint corresponds to a pair, or d-tuple, of hidden parameters p(i,1), . . . , p(i,d), and we observe independent random variables X(i,1), . . . , X(i,d) with X(i,j) ∼ Binomial(t(i,j), p(i,j)). In this setting, the goal is to recover the multivariate set of d-tuples {p(i,1), . . . , p(i,d)}, again in an earth mover's sense. This setting corresponds to recovering an approximation of an underlying joint distribution over these d-tuples of parameters.

To give one concrete motivation for this problem, consider a hypothetical setting where we have n genotypes (sets of genetic features), with ti people of the ith genotype. Let X(i,1) denote the number of people with the ith genotype who exhibit disease 1, and X(i,2) denote the number of people with genotype i who exhibit disease 2. The interpretation of the hidden parameters p(i,1) and p(i,2) are the respective probabilities of people with the ith genotype of developing each of the two diseases. Our results imply that provided n is large, one can accurately recover an approximation to the underlying set or two-dimensional joint distribution of {(p(i,1), p(i,2))} pairs, even in settings where there are too few people of each genotype to accurately determine which of the genotypes are responsible for elevated disease risk. Recovering this set of pairs would allow one to infer whether there are common genetic drivers of the two diseases, even in the regime where there is insufficient data to resolve which genotypes are the common drivers.
Our multivariate analog of Theorem 1 is also formulated in terms of a multivariate analog of earth mover's distance (see Definition 1 for a formal definition):

Theorem 2. Let {p(i,j)} denote a set of n d-tuples of hidden parameters in [0, 1]^d, with i ∈ {1, . . . , n} and j ∈ {1, . . . , d}, and suppose we observe random variables X(i,j), with X(i,j) ∼ Binomial(t, p(i,j)). There is an algorithm that produces a distribution Q supported on [0, 1]^d, such that with probability at least 1 − δ over the randomness of the X(i,j)'s,

‖P − Q‖_W ≤ C1/t + C2 Σ_{|α|=1}^t (d(2t)^{d+1} / 3^{|α|}) √((1/n) ln(2t/δ)) + Õ_{δ,t,d}(1/√n),

for absolute constants C1, C2, where α is a d-dimensional multi-index consisting of all d-tuples of nonnegative integers summing to at most t, P denotes the distribution that places mass 1/n at value pi = (p(i,1), . . . , p(i,d)) ∈ [0, 1]^d, and ‖·‖_W denotes the d-dimensional Wasserstein distance between P and Q.
1.1 Related Work
The seminal paper of Charles Stein [14] was one of the earliest papers to identify the surprising possibility of leveraging the availability of independent data reflecting a large number of parameters of interest, to partially compensate for having little information about each parameter. The specific setting examined was the problem of estimating a list of unknown means, μ1, . . . , μn, given access to n independent Gaussian random variables, X1, . . . , Xn, with Xi ∼ N(μi, 1). Stein showed, perhaps surprisingly, that there is an estimator for the list of parameters μ1, . . . , μn that has smaller expected squared error than the naive unbiased empirical estimates μ̂i = Xi. This improved estimator "shrinks" the empirical estimates towards the average of the Xi's. In our setting, the process of recovering the set/histogram of unknown pi's and then leveraging this recovered set as a prior to correct the empirical estimates of each pi can be viewed as an analog of Stein's "shrinkage", and will have the property that the empirical estimates are shifted (in a non-linear fashion) towards the average of the pi's.
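For concreteness, a minimal sketch of the shrinkage idea in Stein's Gaussian setting is given below. The positive-part, shrink-toward-the-grand-mean variant is a standard choice on our part, not a construction from this paper:

```python
import numpy as np

# Classic James-Stein shrinkage for the Gaussian setting described above
# (positive-part estimator, shrinking toward the grand mean).
def james_stein(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    resid = ((x - xbar) ** 2).sum()
    shrink = max(0.0, 1.0 - (n - 3) / resid)   # positive-part shrinkage factor
    return xbar + shrink * (x - xbar)

rng = np.random.default_rng(0)
mu = rng.normal(0, 0.3, size=1000)             # true means, clustered near 0
x = rng.normal(mu, 1.0)                        # one unit-variance observation per mean
print(((x - mu) ** 2).mean(), ((james_stein(x) - mu) ** 2).mean())
# The shrunk estimates have visibly smaller mean squared error than the raw x.
```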
More closely related to the problem considered in this paper is the work on recovering an approximation to the unlabeled set of probabilities of domain elements, given independent draws from a distribution of large discrete support (see e.g. [11, 2, 15, 16, 1]). Instead of learning the distribution, these works considered the alternative goal of simply returning an approximation to the multiset of probabilities with which the domain elements arise, but without specifying which element occurs with which probability. Such a multiset can be used to estimate useful properties of the distribution that do not depend on the labels of the domain of the distribution, such as the entropy or support size of the distribution, or the number of elements likely to be observed in a new, larger sample [12, 17]. The benefit of pursuing this weaker goal of returning the unlabeled multiset is that it can be learned to significantly higher accuracy for a given sample size; essentially as accurately as the empirical distribution of a sample that is a logarithmic factor larger [15, 17].
Building on the above work, the recent work [18] considered the problem of recovering the "frequency spectrum" of rare genetic variants. This problem is similar to the problem we consider, but focuses on a rather different regime. Specifically, the model considered posits that each location i = 1, . . . , n in the genome has some probability pi of being mutated in a given individual. Given the sequences of t individuals, the goal is to recover the set of pi's. The work [18] focused on the regime in which many of the pi's are significantly less than 1/(nt), and hence correspond to mutations that have never been observed; one conclusion of that work was that one can accurately estimate the number of such rare mutations that would be discovered in larger sequencing cohorts. Our work, in contrast, focuses on the regime where the pi's are constant, and do not scale as a function of n, and the results are incomparable.
Also related to the current work are the works [9, 10] on testing whether certain properties of
collections of distributions hold. The results of these works show that specific properties, such as
whether most of the distributions are identical versus have significant variation, can be decided based
on a sample size that is significantly sublinear in the number of distributions.
Finally, the papers [5, 6] consider the related but more difficult setting of learning "Poisson Binomials," namely a sum of independent non-identical Bernoulli random variables, given access to samples. In contrast to our work, in the setting they consider, each "sample" consists of only the sum of these n random variables, rather than observing the outcome of each random variable.
1.2 Organization of paper
In Section 2 we describe the two components of our algorithm for recovering the population of
Bernoulli parameters: obtaining accurate estimates of the low-order moments (Section 2.1), and
leveraging those moments to recover the set of parameters (Section 2.3). The complete algorithm is
presented in Section 2.2, and a discussion of the multi-dimensional extension to which Theorem 2
applies is described in Section 2.4. In Section 3 we validate the empirical performance of our
approach on synthetic data, as well as illustrate its potential applications to several real-world settings.
2 Learning a population of binomial parameters
Our approach to recovering the underlying distribution or set of pi's proceeds via the method of moments. In the following section we show that, given the t samples from each Bernoulli distribution, we can accurately estimate each of the first t moments. In Section 2.3 we explain how these first t moments can then be leveraged to recover the set of pi's, to earth mover's distance O(1/t).
2.1 Moment estimation
Our method-of-moments approach proceeds by estimating the first t moments of P, namely αk = (1/n) Σ_{i=1}^n pi^k, for each integer k between 1 and t. The estimator we describe is unbiased, and also applies in the setting of Corollary 1 where each pi is drawn i.i.d. from a distribution Ppop. In this case, we will obtain an unbiased estimator for E_{p∼Ppop}[p^k]. We limit ourselves to estimating the first t moments because, as shown in the proof of the lower bound, Proposition 1, the distribution of the Xi's is determined by the first t moments, and hence no additional information can be gleaned regarding the higher moments.
For 1 ≤ k ≤ t, our estimate for the kth moment is βk = (1/n) Σ_{i=1}^n (Xi choose k) / (t choose k). The motivation for this unbiased estimator is the following: note that given any k i.i.d. samples of a variable distributed according to Bernoulli(pi), an unbiased estimator for pi^k is their product, namely the estimator which is 1 if all k tosses come up heads, and otherwise is 0. Thus, if we average over all (t choose k) subsets of size k, and then average over the population, we still derive an unbiased estimator.
Lemma 1. Given {p1, . . . , pn}, let Xi denote the random variable distributed according to Binomial(t, pi). For k ∈ {1, . . . , t}, let αk = (1/n) Σ_{i=1}^n pi^k denote the kth true moment, and βk = (1/n) Σ_{i=1}^n (Xi choose k)/(t choose k) denote our estimate of the kth moment. Then E[βk] = αk, and Pr(|βk − αk| ≥ ε) ≤ 2e^{−ε²n/3}.

Given the above lemma, we obtain the fact that, with probability at least 1 − δ, the events |βk − αk| ≤ √((3/n) ln(2t/δ)) simultaneously occur for all k ∈ {1, . . . , t}.
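The estimator of Lemma 1 is a few lines of code. The following sketch is ours, with illustrative synthetic parameters; it computes βk = (1/n) Σi (Xi choose k)/(t choose k) and confirms on simulated data that it tracks the true moments:

```python
import numpy as np
from scipy.special import comb

def estimate_moments(X, t):
    """Unbiased estimates beta_k = (1/n) * sum_i C(X_i, k) / C(t, k)
    of the first t moments alpha_k = (1/n) * sum_i p_i^k (Lemma 1)."""
    X = np.asarray(X)
    return np.array([comb(X, k).mean() / comb(t, k) for k in range(1, t + 1)])

# Quick sanity check on synthetic data (parameters chosen for illustration):
rng = np.random.default_rng(0)
n, t = 100_000, 10
p = rng.uniform(0, 1, size=n)                 # true hidden parameters
X = rng.binomial(t, p)                        # observed head counts
beta = estimate_moments(X, t)
alpha = np.array([(p ** k).mean() for k in range(1, t + 1)])
print(np.round(beta - alpha, 4))              # entries near 0, as Lemma 1 predicts
```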
2.2 Distribution recovery from moment estimates
Given the estimates of the moments of the distribution P, as described above, our algorithm will recover a distribution, Q, whose moments are close to the estimated moments. We propose two algorithms, whose distribution recoveries are via standard linear programming or quadratic programming approaches, and which recover a distribution Q supported on some (sufficiently fine) ε-net of [0, 1]: the variables of the linear (or quadratic) program correspond to the amount of probability mass that Q assigns to each element of the ε-net, the constraints correspond to ensuring that the amount of mass at each element is nonnegative and that the total amount of mass is 1, and the objective function will correspond to the (possibly weighted) sum of the discrepancies between the estimated moments and the moments of the distribution represented by Q.

To see why it suffices to solve this program over an ε-net of the unit interval, note that any distribution over [0, 1] can be rounded so as to be supported on an ε-net, while changing the distribution by at most ε/2 in Wasserstein distance. Additionally, such a rounding alters each moment by at most O(ε) for any fixed t, because the rounding alters the individual contribution of a point mass to the kth moment by only O(kε). As our goal is to recover a distribution with distance O(1/t), it suffices to choose an ε-net with ε ≪ 1/t so that the additional error due to this discretization is negligible. As this distribution recovery program has O(1/ε) variables and O(t) constraints, both of which are independent of n, this program can be solved extremely efficiently both in theory and in practice.
We formally describe this algorithm below, which takes as input X1, . . . , Xn, binomial parameter t, an integer m corresponding to the size of the ε-net, and a weight vector w.

Algorithms 1 and 2: Distribution Recovery with Linear / Quadratic Objectives
Input: Integers X1, . . . , Xn, integers t and m, and weight vector w ∈ R^t.
Output: Vector q = (q0, . . . , qm) of length m + 1, representing a distribution with probability mass qi at value i/m.
• For each k ∈ {1, . . . , t}, compute βk = (1/n) Σ_i (Xi choose k) / (t choose k).
• (Algorithm 1) Solve the linear program over variables q0, . . . , qm:
    minimize: Σ_{k=1}^t |β̂k − βk| wk, where β̂k = Σ_{i=0}^m qi (i/m)^k,
    subject to: Σ_i qi = 1, and for all i, qi ≥ 0.
• (Algorithm 2) Solve the quadratic program over variables q0, . . . , qm:
    minimize: Σ_{k=1}^t (β̂k − βk)² wk², where β̂k = Σ_{i=0}^m qi (i/m)^k,
    subject to: Σ_i qi = 1, and for all i, qi ≥ 0.
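A minimal implementation sketch of Algorithm 1 follows. This is our code, not the authors'; the paper does not prescribe a solver, and we use scipy's linprog with the standard slack-variable encoding of the absolute values:

```python
import numpy as np
from scipy.optimize import linprog

def recover_distribution(beta, t, m, w=None):
    """Sketch of Algorithm 1: find q on the grid {0, 1/m, ..., 1} whose moments
    match the estimates beta, minimizing sum_k w_k |bhat_k - beta_k|.
    Each absolute value is handled with one slack variable s_k."""
    w = np.ones(t) if w is None else np.asarray(w)
    grid = np.arange(m + 1) / m
    V = np.vstack([grid ** k for k in range(1, t + 1)])   # V[k-1, i] = (i/m)^k

    # Variables: [q_0..q_m, s_1..s_t]; minimize sum_k w_k * s_k
    c = np.concatenate([np.zeros(m + 1), w])
    # |V q - beta| <= s  becomes  V q - s <= beta  and  -V q - s <= -beta
    A_ub = np.block([[V, -np.eye(t)], [-V, -np.eye(t)]])
    b_ub = np.concatenate([beta, -beta])
    A_eq = np.concatenate([np.ones(m + 1), np.zeros(t)])[None, :]   # sum_i q_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + 1 + t))
    return res.x[: m + 1]

# Usage: q = recover_distribution(beta, t=10, m=100) gives a distribution with
# mass q[i] at i/m whose first 10 moments are close to the estimates beta.
```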
2.2.1 Practical considerations
Our theoretical results, Theorem 1 and Corollary 1, apply to the setting where the weight vector w in the above linear program objective has wk = 1 for all k. It makes intuitive sense to penalize the discrepancy in the kth moment inversely proportionally to the empirically estimated standard deviation of the kth moment estimate, and our empirical results are based on such a weighted objective.
Additionally, in some settings we observed an empirical improvement in the robustness and quality of
the recovered distribution if one averages the results of running Algorithm 1 or 2 on several random
subsamples of the data. In our empirical section, Section 3, we refer to this as a bootstrapped version
of our algorithm.
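A sketch of this bootstrapped variant, under our reading of the text (average the recovered distributions over random subsamples), reusing the estimate_moments and recover_distribution sketches above:

```python
import numpy as np

def bootstrapped_recovery(X, t, m, n_boot=20, frac=0.5, seed=None):
    """Average the distributions recovered from several random subsamples
    (our reading of the bootstrapped variant described in the text)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X)
    qs = []
    for _ in range(n_boot):
        sub = rng.choice(X, size=int(frac * len(X)), replace=False)
        beta = estimate_moments(sub, t)          # from the earlier sketch
        qs.append(recover_distribution(beta, t, m))
    return np.mean(qs, axis=0)                   # an average of distributions still sums to 1
```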
2.3 Close moments imply close distributions
In this section we complete the high-level proof that Algorithm 1 accurately recovers P, the distribution corresponding to the set of pi's, establishing Theorem 1 and Corollary 1. The guarantees of Lemma 1 ensure that, with high probability, the estimated moments will be close to the true moments. Together with the observation that discretizing P to be supported on an ε-net of [0, 1] alters the moments by O(ε), it follows that there is a solution to the linear program in the second step of Algorithm 1 corresponding to a distribution whose moments are close to the true moments of P, and hence with high probability Algorithm 1 will return such a distribution.

To conclude the proof, all that remains is to show that, provided the distribution Q returned by Algorithm 1 has similar first t moments to the true distribution P, then P and Q will be close in Wasserstein (earth mover's) distance. We begin by formally defining the Wasserstein (earth mover's) distance between two distributions P and Q:
Definition 1. The Wasserstein, or earth mover's, distance between distributions P and Q is

‖P − Q‖_W := inf_{γ∈Γ(P,Q)} ∫_{[0,1]^{2d}} d(x, y) dγ(x, y),

where Γ(P, Q) is the set of all couplings of P and Q, namely distributions whose marginals agree with P and Q. The equivalent dual definition is

‖P − Q‖_W := sup_{g∈Lip(1)} ∫ g(x) d(P − Q)(x),

where the supremum is taken over 1-Lipschitz functions g.
As its name implies, this distance metric can be thought of as the cost of the optimal scheme of "moving" the probability mass from P to create Q, where the cost per unit mass of moving from probability x to y is |x − y|. For distributions over R, it is not hard to see that this distance is exactly the ℓ1 distance between the associated cumulative distribution functions.
The following slightly stronger version of Proposition 1 in [7] bounds the Wasserstein distance between any pair of distributions in terms of the discrepancies in their low-order moments:

Theorem 3. For two distributions P and Q supported on [0, 1] whose first t moments are α and β respectively, the Wasserstein distance ‖P − Q‖_W is bounded by π/t + 3^t Σ_{k=1}^t |αk − βk|.
The formal proof of this theorem is provided in Appendix A, and we conclude this section with an intuitive sketch of this proof. For simplicity, first consider the setting where the two distributions P, Q have the exact same first t moments. This immediately implies that for any polynomial f of degree at most t, the expectation of f with respect to P is equal to the expectation of f with respect to Q. Namely, ∫ f(x)(P(x) − Q(x))dx = 0. Leveraging the dual definition of Wasserstein distance, ‖P − Q‖_W = sup_{g∈Lip(1)} ∫ g(x)(P(x) − Q(x))dx, the theorem now follows from the standard fact that, for any Lipschitz function g, there exists a degree t polynomial fg that approximates it to within ℓ∞ distance O(1/t) on the interval [0, 1].

If there is nonzero discrepancy between the first t moments of P and Q, the above proof continues to hold, with an additional error term of Σ_{k=1}^t ck(αk − βk), where ck is the coefficient of the degree k term in the polynomial approximation fg. Leveraging the fact that any Lipschitz function g can be approximated to ℓ∞ distance O(1/t) on the unit interval using a polynomial with coefficients bounded by 3^t, we obtain Theorem 3.
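Since the guarantees are stated in Wasserstein distance, it is worth noting how cheaply the one-dimensional distance is computed in practice via the CDF identity mentioned after Definition 1. A short sketch of ours, with illustrative example distributions:

```python
import numpy as np

def wasserstein_1d(p_mass, q_mass, grid):
    """1-d earth mover's distance between two distributions supported on a
    common increasing grid, via ||P - Q||_W = integral of |CDF_P - CDF_Q|."""
    cdf_gap = np.abs(np.cumsum(p_mass) - np.cumsum(q_mass))
    return float(np.sum(cdf_gap[:-1] * np.diff(grid)))

# Example: distance between a point mass at 1/2 and a uniform grid distribution.
grid = np.linspace(0, 1, 101)
point = np.zeros(101); point[50] = 1.0        # all mass at 0.5
unif = np.full(101, 1 / 101)
print(wasserstein_1d(point, unif, grid))      # ~0.25, the mean distance to 1/2
```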
2.4 Extension: multivariate distribution estimation
We also consider the natural multivariate extension of the problem of recovering a population of Bernoulli parameters. Suppose, for example, that every member i of a population of size n has two associated binomial parameters p(i,1), p(i,2), as in Theorem 2. One could estimate the marginal distributions of the p(i,1) and p(i,2) separately using Algorithm 1, but it is natural to also want to estimate the joint distribution up to small Wasserstein distance in the 2-d sense. Similarly, one can consider the analogous d-dimensional distribution recovery question.

The natural idea underlying our extension to this setting is to include estimates of the multivariate moments represented by multi-indices α with |α| ≤ t. For example, in a 2-d setting, the moments for members i of the population would look like E_{pi∼P}[p(i,1)^a p(i,2)^b]. Again, it remains to bound how close an interpolating polynomial can get to any d-dimensional Lipschitz function, and bound the size of the coefficients of such a polynomial. To this end, we use the following theorem from [3]:
Lemma 2. Given any Lipschitz function f supported on [0, 1]^d, there is a degree t polynomial p(x) such that

sup_{x∈[0,1]^d} |p(x) − f(x)| ≤ Cd / t,

where Cd is a constant that depends on d.
In Appendix D, we prove the following bound on the magnitude of the coefficients of the interpolating polynomial: |cα| ≤ 2(2t)^{dt} / 3^{|α|}, where cα is the coefficient of the α multinomial term. Together with the concentration bound for the αth moment of the distribution, we obtain Theorem 2, the multivariate analog of Theorem 1.
3 Empirical performance
3.1 Recovering distributions with known ground truth
We begin by demonstrating the effectiveness of our algorithm on several synthetic datasets. We considered three different choices for an underlying distribution Ppop over [0, 1], then drew n independent samples p1, . . . , pn ∼ Ppop. For a parameter t, for each i ∈ {1, . . . , n}, we then drew Xi ∼ Binomial(t, pi), ran our population estimation algorithm on the set X1, . . . , Xn, and measured the extent to which we recovered the distribution Ppop. In all settings, n was sufficiently large that there was little difference between the histogram corresponding to the set {p1, . . . , pn} and the distribution Ppop. Figure 1 depicts the error of the recovered distribution as t takes on all even values from 2 to 14, for three choices of Ppop: the "3-spike" distribution with equal mass at the values 1/4, 1/2, and 3/4, a normal distribution truncated to be supported on [0, 1], and the uniform distribution over [0, 1].
(a) 3-spike distribution  (b) truncated normal  (c) Uniform on [0, 1]
Figure 1: Earth mover's distance (EMD) between the true underlying distribution Ppop and the distribution recovered by Algorithm 2 for three choices of Ppop: (a) the distribution consisting of equally weighted point masses at locations 1/4, 1/2, 3/4; (b) the normal distribution with mean 0.5 and standard deviation 0.15, truncated to be supported on [0, 1]; and (c) the uniform distribution over [0, 1]. For each underlying distribution, we plot the EMD (median over 20 trials) between Ppop and the distribution recovered with Algorithm 2 as t, the number of samples from each of the n Bernoulli random variables, takes on all even values from 2 to 14. These results are given for n = 10,000 (green) and n = 100,000 (blue). For comparison, the distance between Ppop and the histogram of the empirical probabilities for n = 100,000 is also shown (red).
Figure 2 shows representative plots of the CDFs of the recovered histograms and empirical histograms for each of the three choices of Ppop considered above.

(a) 3-spike distribution  (b) truncated normal  (c) Uniform on [0, 1]
Figure 2: CDFs of the true distribution P (green), the histogram recovered by Algorithm 2 (blue) for P, and the empirical histogram (red) corresponding to t = 10 samples and n = 100,000. Note that the empirical distribution is only supported on multiples of 1/10.
We also considered recovering the distribution of probabilities that different flights are delayed (each flight, for example Delta Airlines 123, corresponds to a parameter p ∈ [0, 1] representing the probability that flight is delayed on a given day). Our algorithm was able to recover this non-parametric distribution of flight delay parameters extremely well based on few (≈ 10) data points per flight. In this setting, we had access to a dataset with > 50 datapoints per flight, and hence could compare the recovered distribution to a close approximation of the ground truth distribution. These results are included in the appendix.
3.2 Distribution of offspring sex ratios
One of the motivating questions for this work was the following naive-sounding question: do all members of a given species have the same propensity of giving birth to a male vs. female child, or is there significant variation in this probability across individuals? For a population of n individuals, letting pi represent the probability that a future child of the ith individual is male, this question is precisely the question of characterizing the histogram or set of the pi's. This question of the uniformity of the pi's has been debated both by the popular science community (e.g. the recent BBC article "Why Billionaires Have More Sons"), and more seriously by the biology community.

Meiosis ensures that each male produces the same number of spermatozoa carrying the X chromosome as carrying the Y chromosome. Nevertheless, some studies have suggested that the difference in the amounts of genetic material in these chromosomes results in (slight) morphological differences between the corresponding spermatozoa, which in turn result in differences in their motility (speed of movement), etc. (see e.g. [4, 13]). Such studies have led to a chorus of speculation that the relative timing of ovulation and intercourse correlates with the sex of offspring.
While it is problematic to tackle this problem in humans (for a number of reasons, including sex-selective abortions), we instead consider this question for dogs. Letting pi denote the probability that each puppy in the ith litter is male, we could hope to recover the distribution of the pi's. If this sex ratio varies significantly according to the specific parents involved, or according to the relative timing of ovulation and intercourse, then such variation would be evident in the pi's. Conveniently, a typical dog litter consists of 4-8 puppies, allowing our approach to recover this distribution based on accurate estimates of these first moments.

Based on a dataset of n ≈ 8,000 litters, compiled by the Norwegian Kennel Club, we produced estimates of the first 10 moments of the distribution of pi's by considering only litters consisting of at least 10 puppies. Our algorithm suggests that the distribution of the pi's is indistinguishable from a spike at 1/2, given the size of the dataset. Indeed, this conclusion is evident based even on the estimates of the first two moments: (1/n) Σ_i pi ≈ 0.497 and (1/n) Σ_i pi² ≈ 0.249, since among distributions over [0, 1] with expectation 1/2, the distribution consisting of a point mass at 1/2 has minimal second moment, equal to 0.25, and these two moments robustly characterize this distribution. (For example, any distribution supported on [0, 1] with mean 1/2 and for which > 10% of the mass lies outside the range (0.45, 0.55) must have second moment at least 0.25025, though reliably resolving such small variation would require a slightly larger dataset.)
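The extremal computation behind the parenthetical claim above is small enough to verify directly (this sketch and the extremal construction are ours):

```python
import numpy as np

# Numeric check of the two-moment argument above: among distributions on [0,1]
# with mean 1/2, pushing 10% of the mass to the edge of (0.45, 0.55) raises the
# second moment above the point-mass minimum of 0.25.
mass_out = 0.10
# Extremal case: 5% at 0.45, 5% at 0.55, and the remaining 90% at the mean 1/2.
support = np.array([0.45, 0.5, 0.55])
weights = np.array([mass_out / 2, 1 - mass_out, mass_out / 2])
mean = weights @ support
second_moment = weights @ support ** 2
print(mean, second_moment)   # 0.5 and 0.25025 > 0.25
# The observed estimates (~0.497 and ~0.249) sit at the point-mass minimum,
# so the data are consistent with essentially no spread in the p_i's.
```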
3.3 Political tendencies on a county level
We performed a case study on the political leanings of counties. We assumed the following model: each of the n = 3,116 counties in the US has an intrinsic "political-leaning" parameter pi denoting its likelihood of voting Republican in a given election. We observe t = 8 independent samples of each parameter, corresponding to whether each county went Democratic or Republican during the 8 presidential elections from 1976 to 2004.
(a) CDF recovered from 6 moments (blue), empirical CDF (red)  (b) CDF recovered from 8 moments (blue), empirical CDF (red)
Figure 3: Output of the bootstrapped Algorithm 2 on political data for n = 3,116 counties over t = 8 elections.
3.4 Game-to-game shooting of NBA players
We performed a case study on the scoring probabilities of two NBA players. One can think of this experiment as asking whether NBA players, game to game, have differences in their intrinsic ability to score field goals (in the sports analytics world, this is the idea of "hot/cold" shooting nights). The model for each player is as follows: for the ith basketball game there is some parameter pi representing the player's latent shooting percentage for that game, perhaps varying according to the opposing team's defensive strategy. The empirical shooting percentage of a player varies significantly from game to game; recovering the underlying distribution or histogram of the pi's allows one to directly estimate the consistency of a player. Additionally, such a distribution could be used as a prior for making decisions during games. For example, conditioned on the performance during the first half of a game, one could update the expected fraction of subsequent shots that are successful.

The dataset used was the per-game 3-point shooting percentage of players, with sufficient statistics of "3-pointers made" and "3-pointers attempted" for each game. To generate estimates of the kth moment, we considered games where at least k 3-pointers were attempted. The players chosen were Stephen Curry of the Golden State Warriors (who is considered a very consistent shooter) and Danny Green of the San Antonio Spurs (whose nickname "Icy Hot" gives a good idea of his suspected consistency).
(a) Estimated CDF of Curry's game-to-game shooting percentage (blue), empirical CDF (red), n = 457 games.  (b) Estimated CDF of Green's game-to-game shooting percentage (blue), empirical CDF (red), n = 524 games.
Figure 4: Estimates produced by the bootstrapped version of Algorithm 2 on the NBA dataset, with 8 moments included.
Acknowledgments
We thank Kaja Borge and Ane Nødtvedt for sharing an anonymized dataset on sex composition of
dog litters, based on data collected by the Norwegian Kennel Club. This research was supported by
NSF CAREER Award CCF-1351108, ONR Award N00014-17-1-2562, NSF Graduate Fellowship
DGE-1656518, and a Google Faculty Fellowship.
References
[1] Jayadev Acharya, Hirakendu Das, Alon Orlitsky, and Ananda Theertha Suresh. A unified maximum likelihood approach for optimal distribution property estimation. arXiv preprint arXiv:1611.02960, 2016.
[2] Jayadev Acharya, Alon Orlitsky, and Shengjun Pan. Recent results on pattern maximum likelihood. In Networking and Information Theory, 2009. ITW 2009. IEEE Information Theory Workshop on, pages 251-255. IEEE, 2009.
[3] Thomas Bagby, Len Bos, and Norman Levenberg. Multivariate simultaneous approximation. Constructive Approximation, 18(4), 2002.
[4] P. Barlow and C.G. Vosa. The Y chromosome in human spermatozoa. Nature, 226:961-962, 1970.
[5] Constantinos Daskalakis, Ilias Diakonikolas, and Rocco A. Servedio. Learning Poisson binomial distributions. Algorithmica, 72(1):316-357, 2015.
[6] Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart. Properly learning Poisson binomial distributions in almost polynomial time. In Conference on Learning Theory, pages 850-878, 2016.
[7] Weihao Kong and Gregory Valiant. Spectrum estimation from samples. arXiv preprint arXiv:1602.00061, 2016.
[8] Nicolai Korneichuk. Exact constants in approximation theory, volume 38. Cambridge University Press, 1991.
[9] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Testing properties of collections of distributions. Theory of Computing, 9(8):295-347, 2013.
[10] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Testing similar means. SIAM J. Discrete Math, 28(4):1699-1724, 2014.
[11] Alon Orlitsky, Narayana P. Santhanam, Krishnamurthy Viswanathan, and Junan Zhang. On modeling profiles instead of values. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 426-435. AUAI Press, 2004.
[12] Alon Orlitsky, Ananda Theertha Suresh, and Yihong Wu. Optimal prediction of the number of unseen species. Proceedings of the National Academy of Sciences, page 201607774, 2016.
[13] L.M. Penfold, C. Holt, W.V. Holt, G.R. Welch, D.G. Cran, and L.A. Johnson. Comparative motility of X and Y chromosome-bearing bovine sperm separated on the basis of DNA content by flow sorting. Molecular Reproduction and Development, 50(3):323-327, 1998.
[14] Charles Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pages 197-206, Berkeley, Calif., 1956. University of California Press.
[15] Gregory Valiant and Paul Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 685-694. ACM, 2011.
[16] Gregory Valiant and Paul Valiant. Estimating the unseen: improved estimators for entropy and other properties. In Advances in Neural Information Processing Systems, pages 2157-2165, 2013.
[17] Gregory Valiant and Paul Valiant. Instance optimal learning of discrete distributions. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, pages 142-155. ACM, 2016.
[18] James Zou, Gregory Valiant, Paul Valiant, Konrad Karczewski, Siu On Chan, Kaitlin Samocha, Monkol Lek, Shamil Sunyaev, Mark Daly, and Daniel G. MacArthur. Quantifying unobserved protein-coding variants in human populations provides a roadmap for large-scale sequencing projects. Nature Communications, 7, 2016.
6,811 | 7,161 | Clustering with Noisy Queries
Arya Mazumdar and Barna Saha
College of Information and Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
{arya,barna}@cs.umass.edu
Abstract
In this paper, we provide a rigorous theoretical study of clustering with noisy
queries. Given a set of n elements, our goal is to recover the true clustering by
asking a minimum number of pairwise queries to an oracle. The oracle can answer
queries of the form "do elements u and v belong to the same cluster?"; the queries
can be asked interactively (adaptive queries), or non-adaptively up-front, but the
answer can be erroneous with probability p. In this paper, we provide the first
information theoretic lower bound on the number of queries for clustering with
noisy oracle in both situations. We design novel algorithms that closely match
this query complexity lower bound, even when the number of clusters is unknown.
Moreover, we design computationally efficient algorithms both for the adaptive
and non-adaptive settings. The problem captures/generalizes multiple application
scenarios. It is directly motivated by the growing body of work that uses crowdsourcing for entity resolution, a fundamental and challenging data mining task
aimed to identify all records in a database referring to the same entity. Here crowd
represents the noisy oracle, and the number of queries directly relates to the cost
of crowdsourcing. Another application comes from the problem of sign edge
prediction in social network, where social interactions can be both positive and
negative, and one must identify the sign of all pair-wise interactions by querying
a few pairs. Furthermore, clustering with noisy oracle is intimately connected
to correlation clustering, leading to improvement therein. Finally, it introduces
a new direction of study in the popular stochastic block model where one has an
incomplete stochastic block model matrix to recover the clusters.
1 Introduction
Clustering is one of the most fundamental and popular methods for data classification. In this paper
we initiate a rigorous theoretical study of clustering with the help of a noisy oracle, a model that
captures many application scenarios and has drawn significant attention in recent years.
Suppose we are given a set of n points, that need to be clustered into k clusters where k is unknown
to us. Suppose there is an oracle that can answer pair-wise queries of the form "do u and v belong to
the same cluster?". Repeating the same question to the oracle always returns the same answer, but the
answer could be wrong with probability p = 1/2 − ε, ε > 0 (i.e., only slightly better than a random answer).
We are interested in finding the minimum number of queries needed to recover the true clusters with high
probability. Understanding query complexity of the noisy oracle model is a fundamental theoretical
question [25] with many existing works on sorting and selection [7, 8] where queries are erroneous
with probability p, and repeating the same question does not change the answer. Here we study the
basic clustering problem under this setting which also captures several fundamental applications.
Crowdsourced Entity Resolution. Entity resolution (ER) is an important data mining task that
tries to identify all records in a database that refer to the same underlying entity. Starting with the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
seminal work of Fellegi and Sunter [26], numerous algorithms with variety of techniques have been
developed for ER [24, 28, 40, 19]. Still, due to ambiguity in representation and poor data quality,
accuracy of automated ER techniques has been unsatisfactory. To remedy this, a recent trend in
ER has been to use human in the loop. In this setting, humans are asked simple pair-wise queries
adaptively, "do u and v represent the same entity?", and these answers are used to improve the
final accuracy [30, 54, 56, 27, 52, 21, 29, 37, 55, 46]. Proliferation of crowdsourcing platforms like
Amazon Mechanical Turk (AMT), CrowdFlower etc. allows for easy implementation. However,
data collected from non-expert workers on crowdsourcing platforms are inevitably noisy. A simple
scheme to reduce errors could be to take a majority vote after asking the same question to multiple
independent crowd workers. However, often that is not sufficient. Our experiments on several real
datasets (see experimentation section for details) with answers collected from AMT [31, 52] show
majority voting could even increase the errors. Interestingly, such an observation has been made
by a recent paper as well [51]. There are more complex querying models [51, 55, 53], and involved
heuristics [31, 52] to handle errors in this scenario. Let p, 0 < p < 1/2, be the probability of error¹
of a query answer which might also be the aggregated answer after repeating the query several times.
Therefore, once the answer has been aggregated, it cannot change. In all crowdsourcing works, the
goal is to minimize the number of queries to reduce the cost and time of crowdsourcing, and recover
the entities (clusters). This is exactly clustering with noisy oracle. While several heuristics have been
developed [52, 30, 53], here we provide a rigorous theory with near-optimal algorithms and hardness
bounds.
Another recent work that is conceptually close is by Asthiani et al. [4], where pair-wise queries
are used for clustering. However, the setting is very different. They consider the specific NP-hard
k-means objective with distance matrix which must be a metric and must satisfy a deterministic
separation property.
Signed Edge Prediction. The edge sign prediction problem can be defined as follows. Suppose
we are given a social network with signs on all its edges, but the sign from node u to v, denoted
by s(u, v) ∈ {±1}, is hidden. The goal is to recover these signs as best as possible using a minimal
amount of information. Social interactions or sentiments can be both positive ("like", "trust") and
negative ("dislike", "distrust"). [41] provides several such examples; e.g., Wikipedia, where one can
vote for or against the nomination of others to adminship [10], or Epinions and Slashdots where users
can express trust or distrust, or can declare others to be friends or foes [9, 39]. Initiated by [11, 34],
many techniques and related models using convex optimization, low-rank approximation and learning
theoretic approaches have been used for this problem [17, 12, 14]. Recently [16, 14, 48] proposed
the following model for edge sign prediction. We can query a pair of nodes (u, v) to test whether
s(u, v) = +1, indicating u and v belong to the same cluster, or s(u, v) = −1, indicating they are not.
However, the query fails to return the correct answer with probability 0 < p < 1/2, and we want to
query the minimal possible pairs. This is exactly the case of clustering with noisy oracle. Our result
significantly improves, and generalizes over [16, 14, 48].
Correlation Clustering. In fact, when all pair-wise queries are given, and the goal is to recover the
maximum likelihood (ML) clustering, then our problem is equivalent to noisy correlation clustering
[6, 44]. Introduced by [6], correlation clustering is an extremely well-studied model of clustering. We
are given a graph G = (V, E) with each edge e ∈ E labelled either +1 or −1, and the goal of correlation
clustering is to either (a) minimize the number of disagreements, that is, the number of intra-cluster
−1 edges and inter-cluster +1 edges, or (b) maximize the number of agreements, that is, the number
of intra-cluster +1 edges and inter-cluster −1 edges. Correlation clustering is NP-hard, but can be
approximated well with provable guarantees [6]. In a random noise model, also introduced by [6] and
studied further by [44], we start with a ground truth clustering, and then each edge label is flipped
with probability p. This is exactly the graph we observe if we make all possible pair-wise queries, and
the ML decoding coincides with correlation clustering. The proposed algorithm of [6] can recover in
this case all clusters of size Ω(√(|V| log |V|)), and if "all" the clusters have size Ω(√|V|), then they
can be recovered by [44]. Using our proposed algorithms for clustering with noisy oracle, we can
also recover significantly smaller sized clusters given the number of clusters are not too many. Such a
result is possible to obtain using the repeated-peeling technique of [3]. However, our running time is
significantly better. E.g., for k ≤ n^{1/6}, we have a running time of O(n log n), whereas for [3], it is
dominated by the time to solve a convex optimization over an n-vertex graph, which is at least O(n³).
¹ An approximation of p can often be estimated manually from a small sample of crowd answers.
Stochastic Block Model (SBM). The clustering with faulty oracle is intimately connected with the
planted partition model, also known as the stochastic block model [36, 23, 22, 2, 1, 32, 18, 49]. The
stochastic block model is an extremely well-studied model of random graphs where two vertices within
the same community share an edge with probability p′, and two vertices in different communities
share an edge with probability q′. It is often assumed that k, the number of communities, is a constant
(e.g. k = 2 is known as the planted bisection model and is studied extensively [1, 49, 23]) or a
slowly growing function of n (e.g. k = o(log n)). There is extensive literature on characterizing the
threshold phenomenon in SBM in terms of the gap between p′ and q′ (e.g. see [2] and therein for
many references) for exact and approximate recovery of clusters of nearly equal size. If we allow
for different probability of errors for pairs of elements based on whether they belong to the same
cluster or not, then the resultant faulty oracle model is an intriguing generalization of SBM. Consider
the probability of error for a query on (u, v) is 1 − p′ if u and v belong to the same cluster and q′
otherwise; but now, we can only learn a subset of the entries of an SBM matrix by querying adaptively.
Understanding how the threshold of recovery changes for such an "incomplete" or "space-efficient"
SBM will be a fascinating direction to pursue. In fact, our lower bound results extend to asymmetric
probability values, while designing efficient algorithms and sharp thresholds are ongoing works. In
[15], a locality model where measurements can only be obtained for nearby nodes is studied for two
clusters with non-adaptive querying and allowing repetitions. It would also be interesting to extend
our work with such locality constraints.
In a companion paper, we have studied a related problem where the queries are not noisy and certain
similarity values between each pair of elements are available [47]. Most of the results of the two
papers are available online in a more extensive version [45].
Contributions. Formally the clustering with a noisy oracle is defined as follows.
Problem (Query-Cluster). Consider a set of points V = [n] containing k latent clusters Vi,
i = 1, . . . , k, Vi ∩ Vj = ∅, where k and the subsets Vi ⊆ [n] are unknown. There is an oracle
Op,q : V × V → {±1}, with two error parameters p, q : 0 < p < q < 1. The oracle takes as
input a pair of vertices u, v ∈ V × V, and if u, v belong to the same cluster then Op,q(u, v) = +1
with probability 1 − p and Op,q(u, v) = −1 with probability p. On the other hand, if u, v do not
belong to the same cluster then Op,q(u, v) = +1 with probability 1 − q and Op,q(u, v) = −1 with
probability q. Such an oracle is called a binary asymmetric channel. A special case would be when
p = 1 − q = 1/2 − ε, ε > 0, the binary symmetric channel, where the error rate is the same p for all
pairs. Except for the lower bound, we focus on the symmetric case in this paper. Note that the oracle
returns the same answer on repetition. Now, given V, find Q ⊆ V × V such that |Q| is minimum,
and from the oracle answers it is possible to recover Vi, i = 1, 2, ..., k with high probability³. Note
that the entries of Q can be chosen adaptively based on the answers of previously chosen queries.
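To make the query model concrete, the following minimal sketch (illustrative only; the class and variable names are not from the paper) simulates a symmetric-error oracle that fixes each answer on first use, so that repeating a query never helps:

```python
import random

class NoisyOracle:
    """Binary symmetric channel oracle for "same cluster?" queries.
    Answers are wrong with probability p < 1/2 and are cached, so
    repeating a query returns the same (possibly wrong) answer."""

    def __init__(self, labels, p, seed=0):
        self.labels = labels          # labels[u] = latent cluster id of element u
        self.p = p
        self.rng = random.Random(seed)
        self.cache = {}
        self.num_queries = 0

    def same_cluster(self, u, v):
        key = (min(u, v), max(u, v))
        if key not in self.cache:
            truth = self.labels[u] == self.labels[v]
            flip = self.rng.random() < self.p   # error event, probability p
            self.cache[key] = truth != flip     # XOR flips the true answer
            self.num_queries += 1
        return self.cache[key]
```

For example, NoisyOracle(labels=[0, 0, 1, 1], p=0.3).same_cluster(0, 1) returns the correct answer True with probability 0.7, and the cached value persists on repetition.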
Our contributions are as follows.
• Lower Bound (Section 2). We show that Ω(nk/Δ(p∥q)) is the information theoretic lower bound
on the number of adaptive queries required to obtain the correct clustering with high probability,
even when the clusters are of similar size (see Theorem 1). Here Δ(p∥q) is the Jensen-Shannon
divergence between Bernoulli p and q distributions. For the symmetric case, that is, when p = 1 − q,
Δ(p∥1 − p) = (1 − 2p) log((1 − p)/p). In particular, if p = 1/2 − ε, our lower bound on query complexity
is Ω(nk/ε²) = Ω(nk/(1 − 2p)²). Developing lower bounds in the interactive setting, especially with noisy
answers, appears to be significantly challenging as popular techniques based on Fano-type inequalities
for multiple hypothesis testing [13, 42] do not apply, and we believe our technique will be useful in
other noisy interactive learning settings.
• Information-Theoretic Optimal Algorithm (Section 3 and B.1). For the symmetric error case, we
design an algorithm which asks at most O(nk log n/(1 − 2p)²) queries (Theorem 2), matching the lower bound
within an O(log n) factor whenever p = 1/2 − ε.
• Computationally Efficient Algorithm (Section 3.2 and B.2). We next design an algorithm that is
computationally efficient and runs in O(nk log n + k^{1+2ω}) time, where ω ≤ 2.373 is the exponent
of fast matrix multiplication, and asks at most O(nk log(n) + min(nk² log(n), k⁵ log² n)) queries,
treating p as a constant⁴. Note that most prior works in SBM, or works on edge sign detection, only
consider the case when k is a constant [2, 32, 18], even just k = 2 [49, 1, 16, 14, 48]. For small
values of k, we get a highly efficient algorithm. We can use this algorithm to recover all clusters of
size at least min(k, √n) log n for correlation clustering on a noisy graph, improving upon the results
of [6, 44]. As long as k = o(√n), this improves upon the running time of O(n³) in [3].
• Nonadaptive Algorithm (Section B.3). When the queries must be done up-front, for k = 2, we
give a simple O(n log n) time algorithm that asks O(n log n/(1 − 2p)⁴) queries, improving upon [48] where
a polynomial time algorithm (at least with a running time of O(n³)) is shown with number of
queries O(n log n/(1/2 − p)^{log log n}), and over [16, 14] where O(n poly log n) queries are required
under certain conditions on the clusters. Our result generalizes to k > 2, and we show interesting
lower bounds in this setting (Appendix C in the supplementary material). Further, we derive new
lower bounds showing a trade-off between queries and threshold of recovery for incomplete SBM in
Appendix C.
² Most recent works consider the region of interest as p′ = a log n/n and q′ = b log n/n for some a > b > 0.
³ High probability implies with probability 1 − o_n(1), where o_n(1) → 0 as n → ∞.
⁴ For exact dependency on p see the corresponding section.
2 Lower bound for the faulty-oracle model
Note that we are not allowed to ask the same question multiple times to get the correct answer. In
this case, even for probabilistic recovery, a minimum size bound on cluster size is required. For
example, consider the following two different clusterings: C1 : V = ⊔_{i=1}^{k−2} Vi ⊔ {v1, v2} ⊔ {v3} and
C2 : V = ⊔_{i=1}^{k−2} Vi ⊔ {v1} ⊔ {v2, v3}. Now if one of these two clusterings is given to us uniformly
at random, no matter how many queries we do, we will fail to recover the correct clustering with
positive probability. Therefore, the challenge in proving lower bounds is when clusters all have size
more than a minimum threshold, or when they are all nearly balanced. This removes the constraint on
the algorithm designer on how many times a cluster can be queried with a vertex and the algorithms
can have greater flexibility. Our lower bound holds for a large set of clustering instances. We define
a clustering to be balanced if either of the following two conditions holds: 1) the minimum size of a
cluster is ≥ n/(20k), 2) the maximum size of a cluster is ≤ 4n/k. For any balanced clustering, we prove a
lower bound on the number of queries required.
Our main lower bound in this section uses the Jensen-Shannon (JS) divergence. The well-known KL
divergence is defined between two probability mass functions f and g: D(f∥g) = Σ_i f(i) log(f(i)/g(i)).
Further define the JS divergence as: Δ(f∥g) = (1/2)(D(f∥g) + D(g∥f)). In particular, the KL and
JS divergences between two Bernoulli random variables with parameters p and q are denoted by
D(p∥q) and Δ(p∥q) respectively.
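For intuition on the scale of the bound below, here is a small illustrative computation (not from the paper) of the Bernoulli divergences and the implied Ω(nk/Δ(p∥q)) query count:

```python
import math

def bern_kl(p, q):
    """KL divergence D(p || q) between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def bern_js(p, q):
    """Symmetrized divergence Delta(p || q) = (D(p||q) + D(q||p)) / 2."""
    return 0.5 * (bern_kl(p, q) + bern_kl(q, p))

n, k = 10000, 10
for eps in (0.1, 0.05, 0.01):
    p, q = 0.5 - eps, 0.5 + eps        # symmetric case: p = 1 - q
    delta = bern_js(p, q)              # equals (1 - 2p) * log((1 - p) / p)
    print(f"eps={eps}: Delta={delta:.5f}, nk/Delta ~ {n * k / delta:,.0f}")
```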
Theorem 1 (Query-Cluster Lower Bound). For any balanced clustering instance, if any (randomized)
algorithm does not make Ω(nk/Δ(p∥q)) expected number of queries, then the recovery will be
incorrect with probability at least 0.29 − O(1/k).
Note that the lower bound is more effective when p and q are close. Moreover, our actual lower bound
is slightly tighter, with the expected number of queries required given by Ω(nk/min{D(q∥p), D(p∥q)}).
Proof of Theorem 1. We have V to be the n-element set to be clustered: V = ⊔_{i=1}^k Vi. To prove
Theorem 1 we first show that, if the number of queries is small, then there exist Ω(k) clusters
that are not being sufficiently queried with. Then we show that, since the size of the clusters
cannot be too large or too small, there exists a decent number of vertices in these clusters.
The main piece of the proof of Theorem 1 is Lemma 1. We provide a sketch of this lemma here; the
full proof, which is inspired by a technique for lower bounding regret in multi-arm bandit problems
(see [5, 38]), is given in Appendix A in the supplementary material.
Lemma 1. Suppose there are k clusters. There exist at least 4k/5 clusters such that for each element
v from any of these clusters, v will be assigned to a wrong cluster by any randomized algorithm with
probability 0.29 − 10/k unless the total number of queries involving v is more than k/(10Δ(p∥q)).
Proof-sketch of Lemma 1. Let us assume that the k clusters are already formed, and all elements
except for one element v have already been assigned to a cluster. Note that queries that do not involve
v play no role in this stage.
Now the problem reduces to a hypothesis testing problem where the i-th hypothesis Hi, for i = 1, . . . , k,
denotes that the true cluster for v is Vi. We can also add a null hypothesis H0 that stands for the
vertex belonging to none of the clusters (hypothetical). Let Pi denote the joint probability distribution
of our observations (the answers to the queries involving vertex v) when Hi is true, i = 1, . . . , k.
That is, for any event A we have Pi(A) = Pr(A | Hi).
Suppose T denotes the total number of queries made by a (possibly randomized) algorithm at this
stage before assigning a cluster. Let the random variable Ti denote the number of queries involving
cluster Vi, i = 1, . . . , k. In the second step, we need to identify a set of clusters that are not queried
enough by the algorithm.
We must have Σ_{i=1}^k E0[Ti] = T. Let J1 ≡ {i ∈ {1, . . . , k} : E0[Ti] ≤ 10T/k}. That is, J1
contains clusters which were involved in less than 10T/k queries before assignment. Let Ei ≡
{the algorithm outputs cluster Vi} and J2 = {i ∈ {1, . . . , k} : P0(Ei) ≤ 10/k}. The set of clusters
J = J1 ∩ J2 has size |J| ≥ 2 · (9k/10) − k = 4k/5.
Now let us assume that we are given an element v ∈ Vj for some j ∈ J to cluster (Hj is the true
hypothesis). The probability of correct clustering is Pj(Ej). In the last step, we give an upper bound
on the probability of correct assignment for this element.
We must have Pj(Ej) = P0(Ej) + Pj(Ej) − P0(Ej) ≤ 10/k + |P0(Ej) − Pj(Ej)| ≤ 10/k + ∥P0 −
Pj∥_TV ≤ 10/k + √((1/2) D(P0∥Pj)), where ∥P0 − Pj∥_TV denotes the total variation distance between
the two distributions, and in the last step we have used the relation between total variation and
divergence (Pinsker's inequality). Since P0 and Pj are the joint distributions of the independent
random variables (answers to queries) that are identical to one of two Bernoulli random variables, Y,
which is Bernoulli(p), or Z, which is Bernoulli(q), it is possible to show D(P0∥Pj) ≤ (10T/k) D(q∥p).
Now plugging this in,

Pj(Ej) ≤ 10/k + √((1/2) · (10T/k) · D(q∥p)) ≤ 10/k + √(1/2) = 10/k + 0.707,

if T ≤ k/(10 D(q∥p)). Had we bounded the total variation distance with D(Pj∥P0) in Pinsker's
inequality, then we would have D(p∥q) in the denominator.
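As a quick numeric sanity check of this last step (illustrative values, not from the paper), at the stated budget T = k/(10 D(q∥p)) the success probability is capped exactly as the sketch claims:

```python
import math

def bern_kl(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

k, p, q = 50, 0.4, 0.6
T = k / (10 * bern_kl(q, p))                      # query budget from Lemma 1
cap = 10 / k + math.sqrt(0.5 * (10 * T / k) * bern_kl(q, p))
print(f"T = {T:.1f}: success probability <= {cap:.3f}")   # equals 10/k + 0.707
# hence the failure probability is >= 0.29 - 10/k, as claimed in Lemma 1
```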
Now we are ready to prove Theorem 1.
Proof of Theorem 1. We will show the claim by considering a balanced input. Recall that for a
balanced input either the maximum size of a cluster is ≤ 4n/k or the minimum size of a cluster is
≥ n/(20k). We will consider the two cases separately for the proof.
Case 1: the maximum size of a cluster is ≤ 4n/k.
Suppose the total number of queries is T′. That means the number of vertices involved in the queries is
≤ 2T′. Note that there are k clusters and n elements. Let U be the set of vertices that are involved in
less than 16T′/n queries. Clearly, (n − |U|) · 16T′/n ≤ 2T′, or |U| ≥ 7n/8.
Now we know from Lemma 1 that there exist 4k/5 clusters such that a vertex v from any one of these
clusters will be assigned to a wrong cluster by any randomized algorithm with probability 1/4 unless
the expected number of queries involving this vertex is more than k/(10Δ(q∥p)).
We claim that U must have an intersection with at least one of these 4k/5 clusters. If not, then more
than 7n/8 vertices must belong to less than k − 4k/5 = k/5 clusters, so the maximum size of a cluster will
be at least (7n/8)/(k/5) = 35n/(8k) > 4n/k, which is prohibited according to our assumption.
Now each vertex in the intersection of U and the 4k/5 clusters is going to be assigned to an incorrect
cluster with positive probability if 16T′/n ≤ k/(10Δ(p∥q)). Therefore we must have T′ ≥ nk/(160Δ(p∥q)).
Case 2: the minimum size of a cluster is ≥ n/(20k).
Let U′ be the set of clusters that are involved in at most 16T′/k queries. That means (k − |U′|) · 16T′/k ≤
2T′. This implies |U′| ≥ 7k/8. Now we know from Lemma 1 that there exist 4k/5 clusters (say U*) such
that a vertex v from any one of these clusters will be assigned to a wrong cluster by any randomized
algorithm with probability 1/4 unless the expected number of queries involving this vertex is more
than k/(10Δ(p∥q)). Quite clearly |U* ∩ U′| ≥ 7k/8 + 4k/5 − k = 27k/40.
Consider a cluster Vi such that i ∈ U* ∩ U′, which is always possible because the intersection is
nonempty. Vi is involved in at most 16T′/k queries. Let the minimum size of any cluster be t. Now,
at least half of the vertices of Vi must each be involved in at most 32T′/(kt) queries. Now each of these
vertices must be involved in at least k/(10Δ(p∥q)) queries (see Lemma 1) to avoid being assigned to a
wrong cluster with positive probability. This means 32T′/(kt) ≥ k/(10Δ(p∥q)), or T′ = Ω(nk/Δ(p∥q)),
since t ≥ n/(20k).
3 Algorithms
Let V = ⊔_{i=1}^k Vi be the true clustering and V = ⊔_{i=1}^k V̂i be the maximum likelihood (ML) estimate
of the clustering that can be found when all (n choose 2) queries have been made to the faulty oracle. Our first
result obtains a query complexity upper bound within an O(log n) factor of the information theoretic
lower bound. The algorithm runs in quasi-polynomial time, and we show this is the optimal possible
assuming the famous planted clique hardness. Next, we show how the ideas can be extended to make
it computationally efficient. We consider both the adaptive and non-adaptive versions. The missing
proofs and details are provided in Appendix B in the supplementary document.
3.1 Information-Theoretic Optimal Algorithm
In particular, we prove the following theorem.
Theorem 2. There exists an algorithm with query complexity O(nk log n/(1 − 2p)²) for Query-Cluster that
returns the ML estimate with high probability when query answers are incorrect with probability
p < 1/2. Moreover, the algorithm returns all true clusters of V of size at least C log n/(1 − 2p)², for a suitable
constant C, with probability 1 − o_n(1).
Remark 1. Assuming p = 1/2 − ε, as ε → 0, Δ(p∥1 − p) = (1 − 2p) ln((1 − p)/p) =
2ε ln((1/2 + ε)/(1/2 − ε)) = 2ε ln(1 + 2ε/(1/2 − ε)) ≈ 4ε²/(1/2 − ε) = O(ε²) = O((1 − 2p)²),
matching the query complexity lower bound within an O(log n) factor.
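A quick numeric check of Remark 1 (illustrative, not part of the paper): the ratio Δ(p∥1 − p)/(1 − 2p)² indeed approaches a constant as p → 1/2.

```python
import math

for eps in (0.1, 0.01, 0.001):
    p = 0.5 - eps
    delta = (1 - 2 * p) * math.log((1 - p) / p)   # Delta(p || 1 - p)
    print(f"eps={eps}: ratio = {delta / (1 - 2 * p) ** 2:.4f}")
# the ratio tends to 2, so Delta(p || 1 - p) = Theta((1 - 2p)^2) near p = 1/2
```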
Algorithm 1. The algorithm that we propose is completely deterministic and has several phases.
Phase 1: Selecting a small subgraph. Let c = 16/(1 − 2p)².
1. Select c log n vertices arbitrarily from V. Let V′ be the set of selected vertices. Create a
subgraph G′ = (V′, E′) by querying for every (u, v) ∈ V′ × V′ and assigning a weight of
ω(u, v) = +1 if the query answer is "yes" and ω(u, v) = −1 otherwise.
2. Extract the heaviest weight subgraph S in G′. If |S| ≥ c log n, move to Phase 2.
3. Else we have |S| < c log n. Select a new vertex u, add it to V′, and query u with every
vertex in V′ \ {u}. Move to step (2).
Phase 2: Creating an Active List of Clusters. Initialize an empty list called active when Phase 2 is
executed for the first time.
1. Add S to the list active.
2. Update G′ by removing S from V′ and every edge incident on S. For every vertex z ∈ V′,
if Σ_{u∈S} ω(z, u) > 0, include z in S and remove z from G′ with all edges incident to it.
3. Extract the heaviest weight subgraph S in G′. If |S| ≥ c log n, move to step (1). Else move
to Phase 3.
Phase 3: Growing the Active Clusters. We now have a set of clusters in active.
1. Select an unassigned vertex v not in V′ (that is, previously unexplored), and for every cluster
C ∈ active, pick c log n distinct vertices u1, u2, . . . , ul in the cluster and query v with them.
If the majority of these answers are "yes", then include v in C.
2. Else we have for every C ∈ active the majority answer is "no" for v. Include v in V′ and
query v with every node in V′ \ {v} and update E′ accordingly. Extract the heaviest weight
subgraph S from G′ and if its size is at least c log n move to Phase 2 step (1). Else move to
Phase 3 step (1) by selecting another unexplored vertex.
Phase 4: Maximum Likelihood (ML) Estimate.
1. When there is no new vertex to query in Phase 3, extract the maximum likelihood clustering
of G′ and return it along with the active clusters, where the ML estimation is defined as

max over partitions V′ = ⊔_ℓ S_ℓ, ℓ = 1, 2, . . . of  Σ_ℓ Σ_{i,j ∈ S_ℓ, i ≠ j} ω(i, j).   (1)   (see Appendix B.1)
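To illustrate the growing step of Phase 3, here is a minimal sketch (assuming the NoisyOracle class from the earlier snippet; function and variable names are illustrative, and Phases 1, 2 and 4 are omitted):

```python
import math

def grow_clusters(oracle, unexplored, active, p, n):
    """Phase 3 sketch: attach an unexplored vertex to an active cluster when
    a majority of c*log(n) noisy queries against that cluster answer "yes"."""
    c = math.ceil(16 / (1 - 2 * p) ** 2)
    budget = c * math.ceil(math.log(n))
    leftover = []
    for v in unexplored:
        for cluster in active:
            sample = cluster[:budget]            # any c*log(n) distinct members
            yes = sum(oracle.same_cluster(v, u) for u in sample)
            if 2 * yes > len(sample):            # majority says "yes"
                cluster.append(v)
                break
        else:
            leftover.append(v)   # would be added to V' in the full algorithm
    return leftover
```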
Analysis. The high level steps of the analysis are as follows. Suppose all (n choose 2) queries on V × V
have been made. If the ML estimate of the clustering with these (n choose 2) answers is the same as the true
clustering of V, that is, ⊔_{i=1}^k Vi ≡ ⊔_{i=1}^k V̂i, then the algorithm for the noisy oracle finds the true clustering
with high probability.
Without loss of generality, let |V̂1| ≥ ... ≥ |V̂l| ≥ 6c log n > |V̂_{l+1}| ≥ ... ≥ |V̂k|. We will show that
Phases 1-3 recover V̂1, V̂2, ..., V̂l with probability at least 1 − 1/n. The remaining clusters are recovered in
Phase 4.
A subcluster is a subset of nodes in some cluster. Lemma 2 shows that any set S that is included in
active in Phase 2 of the algorithm is a subcluster of V . This establishes that all clusters in active at
any time are subclusters of some original cluster in V .
Lemma 2. Let c′ = 6c = 96/(1 − 2p)². Algorithm 1 in Phases 1 and 3 returns a subcluster of V of size at
least c log n with high probability if G′ contains a subcluster of V of size at least c′ log n. Moreover,
it does not return any set of vertices of size at least c log n if G′ does not contain a subcluster of V of
size at least c log n.
Lemma 2 is proven in three steps. Step 1 shows that if V′ contains a subcluster of size ≥ c′ log n, then
S ⊆ Vi for some i ∈ [1, k] will be returned with high probability when G′ is processed. Step 2 shows
that the size of S will be at least c log n, and finally step 3 shows that if there is no subcluster of size at
least c log n in V′, then no subset of size > c log n will be returned by the algorithm when processing
G′, because otherwise that S would span more than one cluster, and the weight of a subcluster contained
in S would be higher than that of S, giving a contradiction.
From Lemma 2, any S added to active in Phase 2 is a subcluster with high probability, and has size at
least c log n. Moreover, whenever G′ contains a subcluster of V of size at least c′ log n, it is retrieved
by the algorithm and added to active. The next lemma shows that each subcluster added to active is
correctly grown to the true cluster: (1) every vertex added to such a cluster is correct, and (2) no two
clusters in active can be merged. Therefore, clusters obtained from active are the true clusters.
Lemma 3. The list active contains all the true clusters of V of size ≥ c′ log n at the end of the
algorithm with high probability.
Finally, once all the clusters in active are grown, we have a fully queried graph in G′ containing the
small clusters, which can be retrieved in Phase 4. This completes the correctness of the algorithm.
With the following lemma, we get Theorem 2.
Lemma 4. The query complexity of the algorithm for the faulty oracle is O(nk log n/(1 − 2p)²).
The running time of this algorithm is dominated by finding the heaviest weight subgraph in G′; each
of those calls can be done in time O([k log n/(2p − 1)²]^{O(log n/(2p−1)²)}), that is, quasi-polynomial in n. We
show that it is unlikely that this running time can be improved, by showing a reduction from the famous
planted clique problem, for which quasi-polynomial time is the best known (see Appendix B.1).
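For concreteness, a naive extraction of the heaviest weight subgraph could look as follows. This is an illustrative exponential-time sketch, not the paper's procedure; it only hints at why this step is expensive on large vertex sets and why, per Appendix B.1, a substantially faster exact routine is unlikely.

```python
from itertools import combinations

def heaviest_subgraph(nodes, weight):
    """Exhaustive search for the induced subgraph of maximum total edge
    weight; weight is a dict keyed by frozenset({u, v}) with values +1/-1.
    Exponential in |nodes|, so only feasible on small vertex sets."""
    best_set, best_weight = set(), float("-inf")
    for r in range(2, len(nodes) + 1):
        for subset in combinations(nodes, r):
            w = sum(weight[frozenset(pair)] for pair in combinations(subset, 2))
            if w > best_weight:
                best_set, best_weight = set(subset), w
    return best_set, best_weight
```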
3.2 Computationally Efficient Algorithm
We now prove the following theorem. We give the algorithm here which is completely deterministic
with known k. The extension to unknown k and a detailed proof of correctness are deferred to
Appendix B.2.
Theorem 3. There exists a polynomial time algorithm with query complexity O(nk²/(2p − 1)⁴) for Query-Cluster with error probability p, which recovers all clusters of size at least Ω(k log n/(2p − 1)⁴).
Algorithm 2. Let N = 64k² log n/(1 − 2p)⁴. We define two thresholds T(a) = pa + ((1 − 2p)/6)·√(N log n)
and Φ(a) = 2p(1 − p)a + 2√(N log n). The algorithm is as follows.
Phase 1-2C: Selecting a Small Subgraph. Initially we have an empty graph G′ = (V′, E′), and all
vertices in V are unassigned to any cluster.
1. Select X new vertices arbitrarily from the unassigned vertices in V \ V′ and add them to V′ such
that the size of V′ is N. If there are not enough vertices left in V \ V′, select all of them. Update
G′ = (V′, E′) by querying for every (u, v) such that u ∈ X and v ∈ V′, and assigning a weight
of ω(u, v) = +1 if the query answer is "yes" and ω(u, v) = −1 otherwise.
2. Let N⁺(u) denote all the neighbors of u in G′ connected by +1-weighted edges. We now
cluster G′. Select every u and v such that u ≠ v and |N⁺(u)|, |N⁺(v)| ≥ T(|V′|). Then if
|N⁺(u) \ N⁺(v)| + |N⁺(v) \ N⁺(u)| ≤ Φ(|V′|) (the symmetric difference of these neighborhoods),
include u and v in the same cluster. Include in active all clusters formed in this step that have size
at least 64k log n/(1 − 2p)⁴. If there is no such cluster, abort. Remove all vertices in such clusters from V′ and
any edge incident on them from E′.
Phase 3C: Growing the Active Clusters.
1. For every unassigned vertex v ∈ V \ V′, and for every cluster C ∈ active, pick c log n distinct
vertices u1, u2, . . . , ul in the cluster and query v with them. If the majority of these answers are
"yes", then include v in C.
2. Output all the clusters in active and move to Phase 1 step (1) to obtain the remaining clusters.
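A minimal sketch of the Phase 1-2C clustering rule follows (illustrative; variable names are assumptions, and the exact bookkeeping of Appendix B.2 is omitted): two vertices are grouped when their +1-neighborhoods are both large and nearly identical.

```python
import math

def cluster_by_plus_neighborhoods(Vp, omega, p, k, n):
    """Group vertices of the queried subgraph G' = (Vp, omega) whose
    +1-neighborhoods have size >= T and symmetric difference <= Phi;
    omega is keyed by frozenset({u, v}) with values +1/-1."""
    N = 64 * k ** 2 * math.log(n) / (1 - 2 * p) ** 4
    a = len(Vp)
    T = p * a + ((1 - 2 * p) / 6) * math.sqrt(N * math.log(n))
    Phi = 2 * p * (1 - p) * a + 2 * math.sqrt(N * math.log(n))

    plus = {u: {v for v in Vp if v != u and omega[frozenset((u, v))] == +1}
            for u in Vp}
    clusters, assigned = [], set()
    for u in Vp:
        if u in assigned or len(plus[u]) < T:
            continue
        group = {u} | {v for v in Vp
                       if v != u and v not in assigned and len(plus[v]) >= T
                       and len(plus[u] ^ plus[v]) <= Phi}  # symmetric difference
        clusters.append(group)
        assigned |= group
    return clusters
```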
The running time of the algorithm can be shown to be O(nk log n/(1 − 2p)² + kN^ω), where ω ≤ 2.373 is the
exponent of fast matrix multiplication⁵. Thus for small values of k, we get a highly efficient
algorithm. The query complexity of the algorithm is O(nk² log n/(2p − 1)⁴), since each vertex is involved in
at most O(k² log n/(2p − 1)⁴) queries within G′ and O(k log n/(2p − 1)²) across the active clusters. In fact, in each
iteration, the number of queries within G′ is O(N²) and since there could be at most k rounds, the
overall query complexity is O(nk log n/(2p − 1)² + min(nk² log n/(2p − 1)⁴, kN²)). Moreover, using the algorithm for
unknown k verbatim, we can obtain a correlation clustering algorithm for the random noise model that
recovers all clusters of size Ω(min(k, √n) log n/(2p − 1)⁴), improving over [6, 44] for k < √n/log n, since our ML
estimate on G′ is correlation clustering.
3.3 Non-adaptive Algorithm
Finally, for non-adaptive querying, that is, when querying must be done up front, we prove the following.
This shows that while for k = 2 nonadaptive algorithms are as powerful as adaptive algorithms, for
k ≥ 3 substantial advantage can be gained by allowing adaptive querying. For details, see Appendix
B.3 in the supplementary material.
Theorem 4. • For k = 2, there exists an O(n log n) time nonadaptive algorithm that recovers the
clusters with high probability with query complexity O(n log n/(1 − 2p)⁴). For k ≥ 3, if R is the ratio between
the maximum and minimum cluster size, then there exists a randomized nonadaptive algorithm that
recovers all clusters with high probability with query complexity O(Rnk log n/(1 − 2p)²). Moreover, there exists
a computationally efficient algorithm for the same with query complexity O(Rnk² log n/(1 − 2p)⁴).
• For k ≥ 3, if the minimum cluster size is r, then any deterministic nonadaptive algorithm must
make Ω(n²/r) queries, even when query answers are perfect, to recover the clusters exactly. This shows
that adaptive algorithms are much more powerful than their nonadaptive counterparts.
⁵ Fast matrix multiplication can be avoided by slightly increasing the dependency on k.
4 Experiments
In this section, we report some experimental results on real and synthetic datasets.
Real Datasets. We use the following three real datasets where the answers are generated from
Amazon Mechanical Turk.
• landmarks consists of images of famous landmarks in Paris and Barcelona. Since the images are
of different sides and clicked at different angles, it is difficult for humans to label them correctly. It
consists of 266 nodes, 12 clusters with a total of 35245 edges, out of which 3738 are intra-cluster
edges [31].
• captcha consists of CAPTCHA images, each showing a four-digit number. It consists of 244
nodes, 69 clusters with a total of 29890 edges, out of which only 386 are intra-cluster edges [52].
• gym contains images of gymnastics athletes, where it is very difficult to distinguish the face of
the athlete, e.g. when the athlete is upside down on the uneven bars. It consists of 94 nodes, 12
clusters and 4371 edges, out of which 449 are intra-cluster edges [52].
Repeating queries vs no repetition. Interestingly, we make the following observations. In
the landmarks dataset, when a majority vote is taken after asking each pairwise query 10 times,
we get a total of 3696 erroneous answers. However, just using the first crowd answer, the
erroneous answers reduce to 2654. This shows that not only does a simple strategy of repeating
each query and taking a majority vote fail to reduce error, it can in fact amplify errors due to correlated answers from the crowd members. We observed the same phenomenon
in the gym dataset where 449 answers are incorrect when majority voting is used over five answers for each query, compared to 310 by just using the first crowd user. For captcha, the
error rate slightly decreases when using majority voting from 241 erroneous answers to 201.
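A comparison of this kind is easy to reproduce on one's own crowdsourced answers; the following sketch is illustrative (the data layout is an assumption, not the paper's format):

```python
from collections import Counter

def compare_aggregation(answers, truth):
    """answers: dict mapping a pair (u, v) to its list of crowd answers, in
    collection order; truth: dict with the true same-cluster labels.
    Returns error counts for first-answer vs. majority-vote aggregation."""
    first_err = majority_err = 0
    for pair, votes in answers.items():
        if votes[0] != truth[pair]:
            first_err += 1
        majority = Counter(votes).most_common(1)[0][0]  # ties: first seen wins
        if majority != truth[pair]:
            majority_err += 1
    return first_err, majority_err
```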
Synthetic Datasets. We also did experiments on the following synthetic datasets from [27].
• skew and sqrtn contain fictitious hospital patient data, including name, phone number, birth date and
address. The errors are generated synthetically with error probability p = 0.2. Each of them has 900
nodes and 404550 edges. skew has 8175 intra-cluster edges, whereas sqrtn contains 13050 intra-cluster
edges.
[Figure 1: Number of Queries vs Accuracy Trade-off]
Number of Queries vs Accuracy. Figure 1 plots the number of queries vs accuracy trade-off of our
computationally efficient adaptive algorithm.
Among the vertices that are currently clustered, we count the number of induced edges that are
classified correctly and then divide it by the total number of edges in the dataset to calculate accuracy.
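Concretely, this accuracy measure can be computed as in the following sketch (illustrative; names are assumptions): every induced pair of currently clustered vertices counts as correct when its predicted same/different relation matches the ground truth.

```python
from itertools import combinations

def clustered_edge_accuracy(assignment, truth, total_edges):
    """assignment: cluster ids for the currently clustered vertices;
    truth: ground-truth cluster ids; total_edges: number of edges
    (vertex pairs) in the whole dataset."""
    correct = sum(
        (assignment[u] == assignment[v]) == (truth[u] == truth[v])
        for u, v in combinations(list(assignment), 2)
    )
    return correct / total_edges
```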
Given that the gap between maximum and minimum cluster size is significant in all real datasets, nonadaptive algorithms do not perform well. Moreover, if we select queries randomly and look at the
queried edges in each cluster, then even to achieve an intra-cluster minimum degree of two in every
reasonably sized cluster, we waste a huge number of queries on inter-cluster edges. While we make only
389 queries in gym to get an accuracy of 90%, the total number of random queries is 1957 considering
only the clusters of size at least nine. For landmark dataset, the number of queries is about 7400
to get an accuracy of 90%, whereas the total number of random queries is 21675 considering the
clusters of size at least seven. This can be easily explained by the huge discrepancy in the number of
intra and inter-cluster edges where random edge querying cannot perform well. Among the edges
that were mislabeled by our adaptive algorithm, 70-90% of them are inter-cluster with very few
errors in intra-cluster edges; that is, the clusters returned are often supersets of the original clusters.
Similarly, the querying cost is also dominated by the inter-cluster edge queries. For example, out of
4339 queries issued by skew, 3844 are for inter-cluster edges. By using some side information such
as a similarity matrix, a significant reduction in query complexity may be possible.
Acknowledgements: This work is supported in parts by NSF awards CCF 1642658, CCF 1642550,
CCF 1464310, CCF 1652303, a Yahoo ACE Award and a Google Faculty Research Award. The
authors are thankful to an anonymous reviewer whose comments led to many improvements in
the presentation. The authors would also like to thank Sanjay Subramanian for his help with the
experiments.
References
[1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE
Trans. Information Theory, 62(1):471–487, 2016.
[2] E. Abbe and C. Sandon. Community detection in general stochastic block models: Fundamental
limits and efficient algorithms for recovery. In IEEE 56th Annual Symposium on Foundations of
Computer Science, FOCS, pages 670–688, 2015.
[3] N. Ailon, Y. Chen, and H. Xu. Breaking the small cluster barrier of graph clustering. In
Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pages
995–1003, 2013.
[4] H. Ashtiani, S. Kushagra, and S. Ben-David. Clustering with same-cluster queries. NIPS, 2016.
[5] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit
problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[6] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine Learning, 56(1-3):89–113,
2004.
[7] M. Braverman and E. Mossel. Noisy sorting without resampling. In Proceedings of the
nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 268–276. Society for
Industrial and Applied Mathematics, 2008.
[8] M. Braverman and E. Mossel. Sorting from noisy information. CoRR, abs/0910.1191, 2009.
[9] M. J. Brzozowski, T. Hogg, and G. Szabo. Friends and foes: ideological social networking. In
Proceedings of the SIGCHI conference on human factors in computing systems, pages 817–820.
ACM, 2008.
[10] M. Burke and R. Kraut. Mopping up: modeling wikipedia promotion decisions. In Proceedings
of the 2008 ACM conference on Computer supported cooperative work, pages 27–36. ACM,
2008.
[11] D. Cartwright and F. Harary. Structural balance: a generalization of Heider's theory. Psychological review, 63(5):277, 1956.
[12] N. Cesa-Bianchi, C. Gentile, F. Vitale, G. Zappella, et al. A correlation clustering approach to
link classification in signed networks. In COLT, pages 34-1, 2012.
[13] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees
in the extended planted partition model. In COLT, pages 35-1, 2012.
[14] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex
optimization. Journal of Machine Learning Research, 15(1):2213–2238, 2014.
[15] Y. Chen, G. Kamath, C. Suh, and D. Tse. Community recovery in graphs with locality. In
Proceedings of The 33rd International Conference on Machine Learning, pages 689–698, 2016.
[16] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in neural information
processing systems, pages 2204–2212, 2012.
[17] K.-Y. Chiang, C.-J. Hsieh, N. Natarajan, I. S. Dhillon, and A. Tewari. Prediction and clustering
in signed networks: a local to global perspective. Journal of Machine Learning Research,
15(1):1177–1213, 2014.
[18] P. Chin, A. Rao, and V. Vu. Stochastic block model and community detection in the sparse
graphs: A spectral algorithm with optimal rate of recovery. arXiv preprint arXiv:1501.05021,
2015.
[19] P. Christen. Data matching: concepts and techniques for record linkage, entity resolution, and
duplicate detection. Springer Science and Business Media, 2012.
[20] T. M. Cover and J. A. Thomas. Elements of information theory, 2nd Ed. John Wiley & Sons,
2012.
[21] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In
WWW, pages 285–294, 2013.
[22] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E,
84(6):066106, 2011.
[23] M. E. Dyer and A. M. Frieze. The solution of some random NP-hard problems in polynomial
expected time. Journal of Algorithms, 10(4):451–489, 1989.
[24] A. K. Elmagarmid, P. G. Ipeirotis, and V. S. Verykios. Duplicate record detection: A survey.
IEEE Trans. Knowl. Data Eng., 19(1):1–16, 2007.
[25] U. Feige, P. Raghavan, D. Peleg, and E. Upfal. Computing with noisy information. SIAM
Journal on Computing, 23(5):1001–1018, 1994.
[26] I. P. Fellegi and A. B. Sunter. A theory for record linkage. Journal of the American Statistical
Association, 64(328):1183–1210, 1969.
[27] D. Firmani, B. Saha, and D. Srivastava. Online entity resolution using an oracle. PVLDB,
9(5):384–395, 2016.
[28] L. Getoor and A. Machanavajjhala. Entity resolution: theory, practice & open challenges.
PVLDB, 5(12):2018–2019, 2012.
[29] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse
detection in user-generated content. In EC, pages 167–176, 2011.
[30] C. Gokhale, S. Das, A. Doan, J. F. Naughton, N. Rampalli, J. Shavlik, and X. Zhu. Corleone:
Hands-off crowdsourcing for entity matching. In SIGMOD Conference, pages 601–612, 2014.
[31] A. Gruenheid, B. Nushi, T. Kraska, W. Gatterbauer, and D. Kossmann. Fault-tolerant entity
resolution with the crowd. CoRR, abs/1512.00537, 2015.
[32] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite
programming: Extensions. IEEE Transactions on Information Theory, 62(10):5918–5937,
2016.
[33] T. S. Han and S. Verdú. Generalizing the Fano inequality. IEEE Transactions on Information
Theory, 40(4):1247–1251, 1994.
[34] F. Harary et al. On the notion of balance of a signed graph. The Michigan Mathematical Journal,
2(2):143–146, 1953.
[35] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the
American statistical association, 58(301):13–30, 1963.
[36] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social
networks, 5(2):109–137, 1983.
[37] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In
NIPS, pages 1953–1961, 2011.
[38] R. Kleinberg. Lecture notes in learning, games, and electronic markets, 2007.
[39] C. A. Lampe, E. Johnston, and P. Resnick. Follow the reader: filtering comments on slashdot.
In Proceedings of the SIGCHI conference on Human factors in computing systems, pages
1253–1262. ACM, 2007.
[40] M. D. Larsen and D. B. Rubin. Iterative automated record linkage using mixture models.
Journal of the American Statistical Association, 96(453):32–41, 2001.
[41] J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online
social networks. In Proceedings of the 19th international conference on World wide web, pages
641–650. ACM, 2010.
[42] S. H. Lim, Y. Chen, and H. Xu. Clustering from labels and time-varying graphs. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in
Neural Information Processing Systems 27, pages 1188–1196. Curran Associates, Inc., 2014.
[43] K. Makarychev, Y. Makarychev, and A. Vijayaraghavan. Correlation clustering with noisy partial
information. In Proceedings of The 28th Conference on Learning Theory, pages 1321–1342,
2015.
[44] C. Mathieu and W. Schudy. Correlation clustering with noisy input. In Proceedings of the
Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas,
USA, January 17-19, 2010, pages 712–728, 2010.
[45] A. Mazumdar and B. Saha. Clustering via crowdsourcing. arXiv preprint arXiv:1604.01839,
2016.
[46] A. Mazumdar and B. Saha. A Theoretical Analysis of First Heuristics of Crowdsourced Entity
Resolution. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017.
[47] A. Mazumdar and B. Saha. Query complexity of clustering with side information. In Advances
in Neural Information Processing Systems (NIPS) 31, 2017.
[48] M. Mitzenmacher and C. E. Tsourakakis. Predicting signed edges with O(n^{1+o(1)} log n) queries.
CoRR, abs/1609.00750, 2016.
[49] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In
Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages
69–75. ACM, 2015.
[50] Y. Polyanskiy and S. Verdú. Arimoto channel coding converse and Rényi divergence. In
Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on,
pages 1327–1333. IEEE, 2010.
[51] D. Prelec, H. S. Seung, and J. McCoy. A solution to the single-question crowd wisdom problem.
Nature, 541(7638):532–535, 2017.
[52] V. Verroios and H. Garcia-Molina. Entity resolution with crowd errors. In 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13-17, 2015,
pages 219–230, 2015.
[53] V. Verroios, H. Garcia-Molina, and Y. Papakonstantinou. Waldo: An adaptive human interface
for crowd entity resolution. In SIGMOD, pages 219–230, 2017.
[54] N. Vesdapunt, K. Bellare, and N. Dalvi. Crowdsourcing algorithms for entity resolution. PVLDB,
7(12):1071–1082, 2014.
[55] R. K. Vinayak and B. Hassibi. Crowdsourced clustering: Querying edges vs triangles. In
Advances in Neural Information Processing Systems, pages 1316–1324, 2016.
[56] J. Wang, T. Kraska, M. J. Franklin, and J. Feng. Crowder: Crowdsourcing entity resolution.
PVLDB, 5(11):1483–1494, 2012.
6,812 | 7,162 | Higher-Order Total Variation Classes on Grids:
Minimax Theory and Trend Filtering Methods
Veeranjaneyulu Sadhanala
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Yu-Xiang Wang
Carnegie Mellon University/Amazon AI
Pittsburgh, PA 15213/Palo Alto, CA 94303
[email protected]
James Sharpnack
University of California, Davis
Davis, CA 95616
[email protected]
Ryan J. Tibshirani
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We consider the problem of estimating the values of a function over n nodes of a
d-dimensional grid graph (having equal side lengths $n^{1/d}$) from noisy observations.
The function is assumed to be smooth, but is allowed to exhibit different amounts
of smoothness at different regions in the grid. Such heterogeneity eludes classical
measures of smoothness from nonparametric statistics, such as Holder smoothness.
Meanwhile, total variation (TV) smoothness classes allow for heterogeneity, but
are restrictive in another sense: only constant functions count as perfectly smooth
(achieve zero TV). To move past this, we define two new higher-order TV classes,
based on two ways of compiling the discrete derivatives of a parameter across the
nodes. We relate these two new classes to Holder classes, and derive lower bounds
on their minimax errors. We also analyze two naturally associated trend filtering
methods; when d = 2, each is seen to be rate optimal over the appropriate class.
1 Introduction
In this work, we focus on estimation of a mean parameter defined over the nodes of a d-dimensional
grid graph $G = (V, E)$, with equal side lengths $N = n^{1/d}$. Let us enumerate $V = \{1, \dots, n\}$ and $E = \{e_1, \dots, e_m\}$, and consider data $y = (y_1, \dots, y_n) \in \mathbb{R}^n$ observed over $V$, distributed as
$$y_i \sim N(\theta_{0,i}, \sigma^2), \quad \text{independently, for } i = 1, \dots, n, \qquad (1)$$
where $\theta_0 = (\theta_{0,1}, \dots, \theta_{0,n}) \in \mathbb{R}^n$ is the mean parameter to be estimated, and $\sigma^2 > 0$ the common noise variance. We will assume that $\theta_0$ displays some kind of regularity or smoothness over $G$, and are specifically interested in notions of regularity built around the total variation (TV) operator
$$\|D\theta\|_1 = \sum_{(i,j) \in E} |\theta_i - \theta_j|, \qquad (2)$$
defined with respect to $G$, where $D \in \mathbb{R}^{m \times n}$ is the edge incidence matrix of $G$, which has $\ell$th row $D_\ell = (0, \dots, -1, \dots, 1, \dots, 0)$, with $-1$ in location $i$ and $1$ in location $j$, provided that the $\ell$th edge is $e_\ell = (i, j)$ with $i < j$. There is an extensive literature on estimators based on TV regularization,
both in Euclidean spaces and over graphs. Higher-order TV regularization, which, loosely speaking,
considers the TV of derivatives of the parameter, is much less understood, especially over graphs.
In this paper, we develop statistical theory for higher-order TV smoothness classes, and we analyze
associated trend filtering methods, which are seen to achieve the minimax optimal estimation error
rate over such classes. This can be viewed as an extension of the work in [22] for the zeroth-order
TV case, where by "zeroth-order" we refer to the usual TV operator as defined in (2).
Motivation. TV denoising over grid graphs, specifically 1d and 2d grid graphs, is a well-studied
problem in signal processing, statistics, and machine learning, some key references being [20, 5, 26].
Given data $y \in \mathbb{R}^n$ as per the setup described above, the TV denoising or fused lasso estimator over the grid $G$ is defined as
$$\hat\theta = \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \tfrac{1}{2}\|y - \theta\|_2^2 + \lambda \|D\theta\|_1, \qquad (3)$$
where $\lambda \ge 0$ is a tuning parameter. The TV denoising estimator generalizes seamlessly to arbitrary
graphs. The problem of denoising over grids, the setting we focus on, is of particular relevance to a
number of important applications, e.g., in time series analysis, and image and video processing.
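To make this concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the helper name grid_incidence_matrix is hypothetical) that builds the edge incidence matrix $D$ of an $N \times N$ grid and evaluates the TV penalty $\|D\theta\|_1$ from (2). Feeding this $D$ into any generalized lasso solver would yield the estimator in (3).

```python
import numpy as np
import scipy.sparse as sp

def grid_incidence_matrix(N):
    """Edge incidence matrix D of an N x N grid graph (nodes in row-major order).

    Row l has a -1 at node i and a +1 at node j for the l-th edge (i, j) with
    i < j, matching the definition of D below equation (2)."""
    idx = np.arange(N * N).reshape(N, N)
    nodes_i = np.concatenate([idx[:, :-1].ravel(),   # horizontal edges
                              idx[:-1, :].ravel()])  # vertical edges
    nodes_j = np.concatenate([idx[:, 1:].ravel(),
                              idx[1:, :].ravel()])
    m = nodes_i.size
    rows = np.concatenate([np.arange(m), np.arange(m)])
    cols = np.concatenate([nodes_i, nodes_j])
    vals = np.concatenate([-np.ones(m), np.ones(m)])
    return sp.csr_matrix((vals, (rows, cols)), shape=(m, N * N))

N = 8
D = grid_incidence_matrix(N)
theta = np.random.randn(N * N)
tv_penalty = np.abs(D @ theta).sum()  # ||D theta||_1, as in (2)
```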
A strength of the nonlinear TV denoising estimator in (3) (where by "nonlinear" we mean that $\hat\theta$ is nonlinear as a function of $y$) is that it can adapt to heterogeneity in the local level of smoothness of the underlying signal $\theta_0$. Moreover, it adapts to such heterogeneity at an extent that is beyond what
linear estimators are capable of capturing. This principle is widely evident in practice and has been
championed by many authors in the signal processing literature. It is also backed by statistical theory,
i.e., [8, 16, 27] in the 1d setting, and most recently [22] in the general d-dimensional setting.
Note that the TV denoising estimator $\hat\theta$ in (3) takes a piecewise constant structure by design, i.e., at many adjacent pairs $(i, j) \in E$ we will have $\hat\theta_i = \hat\theta_j$, and this will be generally more common for larger $\lambda$. For some problems, this structure may not be ideal and we might instead seek a piecewise
smooth estimator, that is still able to cope with local changes in the underlying level of smoothness,
but offers a richer structure (beyond a simple constant structure) for the base trend. In a 1d setting,
this is accomplished by trend filtering methods, which move from piecewise constant to piecewise
polynomial structure, via TV regularization of discrete derivatives of the parameter [24, 13, 27]. An
extension of trend filtering to general graphs was developed in [31]. In what follows, we study the
statistical properties of this graph trend filtering method over grids, and we propose and analyze a
more specialized trend filtering estimator for grids based on the idea that something like a Euclidean
coordinate system is available at any (interior) node. See Figure 1 for a motivating illustration.
Related work. The literature on TV denoising is enormous and we cannot give a comprehensive
review, but only some brief highlights. Important methodological and computational contributions
are found in [20, 5, 26, 4, 10, 6, 28, 15, 7, 12, 1, 25], and notable theoretical contributions are found
in [16, 19, 9, 23, 11, 22, 17]. The literature on higher-order TV-based methods is more sparse and
more concentrated on the 1d setting. Trend filtering methods in 1d were pioneered in [24, 13], and
analyzed statistically in [27], where they were also shown to be asymptotically equivalent to locally
adaptive regression splines of [16]. An extension of trend filtering to additive models was given in
[21]. A generalization of trend filtering that operates over an arbitrary graph structure was given in
[31]. Trend filtering is not the only avenue for higher-order TV regularization: the signal processing
community has also studied higher-order variants of TV, see, e.g., [18, 3]. The construction of the
discrete versions of these higher-order TV operators is somewhat similar to that in [31] as well our
Kronecker trend filtering proposal, however, the focus of the work is quite different.
Summary of contributions. An overview of our contributions is given below.
• We propose a new method for trend filtering over grid graphs that we call Kronecker trend filtering (KTF), and compare its properties to the more general graph trend filtering (GTF) proposal of [31].
• For 2d grids, we derive estimation error rates for GTF and KTF, each of these rates being a function of the regularizer evaluated at the mean $\theta_0$.
• For d-dimensional grids, we derive minimax lower bounds for estimation over two higher-order TV classes defined using the operators from GTF and KTF. When $d = 2$, these lower bounds match the upper bounds in rate (apart from log factors) derived for GTF and KTF, ensuring that each method is minimax rate optimal (modulo log factors) for its own notion of regularity. Also, the KTF class contains a Holder class of an appropriate order, and KTF is seen to be rate optimal (modulo log factors) for this more homogeneous class as well.
[Figure 1 panels: Underlying signal and data; Laplacian smoothing, large λ; Laplacian smoothing, small λ; Graph trend filtering; Kronecker trend filtering; TV denoising. The plotted points are not recoverable from the extraction.]
Figure 1: Top left: an underlying signal $\theta_0$ and associated data $y$ (shown as black points). Top middle and top
right: Laplacian smoothing fit to y, at large and small tuning parameter values, respectively. Bottom left, middle,
and right: TV denoising (3), graph trend filtering (5), and Kronecker trend filtering (5) fit to y, respectively (the
latter two are of order k = 2, with penalty operators as described in Section 2). In order to capture the larger of
the two peaks, Laplacian smoothing must significantly undersmooth throughout; with more regularization, it oversmooths throughout. TV denoising is able to adapt to heterogeneity in the smoothness of the underlying signal, but exhibits "staircasing" artifacts, as it is restricted to fitting piecewise constant functions. Graph and
Kronecker trend filtering overcome this, while maintaining local adaptivity.
Notation. For deterministic sequences $a_n, b_n$ we write $a_n = O(b_n)$ to denote that $a_n/b_n$ is upper bounded for large enough $n$, and $a_n \asymp b_n$ to denote that both $a_n = O(b_n)$ and $a_n^{-1} = O(b_n^{-1})$. For random sequences $A_n, B_n$, we write $A_n = O_{\mathbb{P}}(B_n)$ to denote that $A_n/B_n$ is bounded in probability.
Given a d-dimensional grid $G = (V, E)$, where $V = \{1, \dots, n\}$, as before, we will sometimes index a parameter $\theta \in \mathbb{R}^n$ defined over the nodes in the following convenient way. Letting $N = n^{1/d}$ and $Z_d = \{(i_1/N, \dots, i_d/N) : i_1, \dots, i_d \in \{1, \dots, N\}\} \subseteq [0,1]^d$, we will index the components of $\theta$ by their lattice positions, denoted $\theta(x)$, $x \in Z_d$. Further, for each $j = 1, \dots, d$, we will define the discrete derivative of $\theta$ in the $j$th coordinate direction at a location $x$ by
$$(D_{x_j}\theta)(x) = \begin{cases} \theta(x + e_j/N) - \theta(x) & \text{if } x, x + e_j/N \in Z_d, \\ 0 & \text{else}. \end{cases} \qquad (4)$$
Naturally, we denote by $D_{x_j}\theta \in \mathbb{R}^n$ the vector with components $(D_{x_j}\theta)(x)$, $x \in Z_d$. Higher-order discrete derivatives are simply defined by repeated application of the above definition. We use the abbreviations $(D_{x_j^2}\theta)(x) = (D_{x_j}(D_{x_j}\theta))(x)$, for $j = 1, \dots, d$, and $(D_{x_j, x_\ell}\theta)(x) = (D_{x_j}(D_{x_\ell}\theta))(x)$, for $j, \ell = 1, \dots, d$, and so on.
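As a quick illustration of (4), the following hedged sketch (function name ours) computes the discrete derivative of a parameter stored as a d-dimensional array over the lattice, with zeros at the boundary, and obtains higher-order derivatives by repeated application.

```python
import numpy as np

def discrete_derivative(theta, j):
    """(D_{x_j} theta)(x) from (4): forward difference along coordinate j,
    set to zero wherever x + e_j / N falls outside the lattice."""
    out = np.zeros_like(theta)
    cur = [slice(None)] * theta.ndim
    fwd = [slice(None)] * theta.ndim
    cur[j] = slice(0, -1)    # points x with a forward neighbor
    fwd[j] = slice(1, None)  # the neighbors x + e_j / N
    out[tuple(cur)] = theta[tuple(fwd)] - theta[tuple(cur)]
    return out

theta = np.random.randn(10, 10)  # a parameter on a 10 x 10 lattice (d = 2)
Dx1 = discrete_derivative(theta, 0)
Dx1x1 = discrete_derivative(Dx1, 0)  # (D_{x_1^2} theta) by repeated application
```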
Given an estimator $\hat\theta$ of the mean parameter $\theta_0$ in (1), and $K \subseteq \mathbb{R}^n$, two quantities of interest are:
$$\mathrm{MSE}(\hat\theta, \theta_0) = \frac{1}{n}\|\hat\theta - \theta_0\|_2^2 \quad \text{and} \quad R(K) = \inf_{\hat\theta} \sup_{\theta_0 \in K} \mathbb{E}\big[\mathrm{MSE}(\hat\theta, \theta_0)\big].$$
The first quantity here is called the mean squared error (MSE) of $\hat\theta$; we will also call $\mathbb{E}[\mathrm{MSE}(\hat\theta, \theta_0)]$ the risk of $\hat\theta$. The second quantity is called the minimax risk over $K$ (the infimum being taken over all estimators $\hat\theta$).
2 Trend filtering methods
Review: graph trend filtering. To review the family of estimators developed in [31], we start by
introducing a general-form estimator called the generalized lasso signal approximator [28],
$$\hat\theta = \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \tfrac{1}{2}\|y - \theta\|_2^2 + \lambda \|\Delta\theta\|_1, \qquad (5)$$
for a matrix $\Delta \in \mathbb{R}^{r \times n}$, referred to as the penalty operator. For an integer $k \ge 0$, the authors of [31] defined the graph trend filtering (GTF) estimator of order $k$ by (5), with the penalty operator being
$$\Delta^{(k+1)} = \begin{cases} D L^{k/2} & \text{for } k \text{ even}, \\ L^{(k+1)/2} & \text{for } k \text{ odd}. \end{cases} \qquad (6)$$
Here, as before, we use $D$ for the edge incidence matrix of $G$. We also use $L = D^T D$ for the graph Laplacian matrix of $G$. The intuition behind the above definition is that $\Delta^{(k+1)}\theta$ gives something roughly like the $(k+1)$st order discrete derivatives of $\theta$ over the graph $G$.
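Assuming a sparse incidence matrix D is at hand (e.g., from the grid sketch earlier), the GTF penalty operator in (6) can be assembled as below; this is a sketch of the definition, not the authors' reference implementation.

```python
import scipy.sparse as sp

def gtf_operator(D, k):
    """GTF penalty operator Delta^{(k+1)} from (6): D L^{k/2} for even k,
    L^{(k+1)/2} for odd k, where L = D^T D is the graph Laplacian."""
    L = (D.T @ D).tocsr()
    if k % 2 == 0:
        op = D.tocsr()
        for _ in range(k // 2):
            op = op @ L
    else:
        op = L
        for _ in range((k + 1) // 2 - 1):
            op = op @ L
    return op

Delta2 = gtf_operator(D, k=1)  # Delta^{(2)} = L, with D from the earlier grid sketch
```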
Note that the GTF estimator reduces to TV denoising in (3) when k = 0. Also, like TV denoising,
GTF applies to arbitrary graph structures; see [31] for more details and for the study of GTF over
general graphs. Our interest is of course its behavior over grids, and we will now use the notation
introduced in (4), to shed more light on the GTF penalty operator in (6) over a d-dimensional grid. For any signal $\theta \in \mathbb{R}^n$, we can write $\|\Delta^{(k+1)}\theta\|_1 = \sum_{x \in Z_d} d_x$, where at all points $x \in Z_d$ (except for those close to the boundary),
$$d_x = \begin{cases} \displaystyle\sum_{j_1=1}^{d} \left| \sum_{j_2,\dots,j_q=1}^{d} \big(D_{x_{j_1},\,x_{j_2}^2,\dots,\,x_{j_q}^2}\theta\big)(x) \right| & \text{for } k \text{ even, where } q = k/2 + 1, \\[1em] \displaystyle\left| \sum_{j_1,\dots,j_q=1}^{d} \big(D_{x_{j_1}^2,\,x_{j_2}^2,\dots,\,x_{j_q}^2}\theta\big)(x) \right| & \text{for } k \text{ odd, where } q = (k+1)/2. \end{cases} \qquad (7)$$
Written in this form, it appears that the GTF operator $\Delta^{(k+1)}$ aggregates derivatives in somewhat of an unnatural way. But we must remember that for a general graph structure, only first derivatives and divergences have obvious discrete analogs, given by application of $D$ and $L$, respectively. Hence,
GTF, which was originally designed for general graphs, relies on combinations of D and L to produce
something like higher-order discrete derivatives. This explains the form of the aggregated derivatives
in (6), which is entirely based on divergences.
Kronecker trend filtering. There is a natural alternative to the GTF penalty operator that takes
advantage of the Euclidean-like structure available at the (interior) nodes of a grid graph. At a point
$x \in Z_d$ (not close to the boundary), consider using
$$d_x = \sum_{j=1}^{d} \left| \big(D_{x_j}^{k+1}\theta\big)(x) \right| \qquad (8)$$
as a basic building block for penalizing derivatives, rather than (7). This gives rise to a method we
call Kronecker trend filtering (KTF), which for an integer order $k \ge 0$ is defined by (5), but now with the choice of penalty operator
$$\widetilde\Delta^{(k+1)} = \begin{bmatrix} D_{1d}^{(k+1)} \otimes I \otimes \cdots \otimes I \\ I \otimes D_{1d}^{(k+1)} \otimes \cdots \otimes I \\ \vdots \\ I \otimes I \otimes \cdots \otimes D_{1d}^{(k+1)} \end{bmatrix}. \qquad (9)$$
Here, $D_{1d}^{(k+1)} \in \mathbb{R}^{(N-k-1) \times N}$ is the 1d discrete derivative operator of order $k+1$ (e.g., as used in univariate trend filtering, see [27]), $I \in \mathbb{R}^{N \times N}$ is the identity matrix, and $A \otimes B$ is the Kronecker product of matrices $A, B$. Each group of rows in (9) features a total of $d-1$ Kronecker products.
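The block structure of (9) maps directly onto sparse Kronecker products. The sketch below (helper names diff_operator_1d and ktf_operator are ours) assembles the KTF penalty operator; which grid axis each block differences along depends on the vectorization order, which we take here to be row-major.

```python
import numpy as np
import scipy.sparse as sp

def diff_operator_1d(N, order):
    """1d discrete difference operator of the given order, shape (N - order) x N,
    built by composing first differences (as in univariate trend filtering)."""
    D = sp.eye(N, format="csr")
    for o in range(order):
        ones = np.ones(N - o - 1)
        D1 = sp.diags([-ones, ones], [0, 1], shape=(N - o - 1, N - o)).tocsr()
        D = D1 @ D
    return D

def ktf_operator(N, d, k):
    """KTF penalty operator from (9): stack d Kronecker-product blocks, each
    placing D_1d^{(k+1)} in one coordinate slot and identities elsewhere."""
    D1d = diff_operator_1d(N, k + 1)
    I = sp.identity(N, format="csr")
    blocks = []
    for j in range(d):
        factors = [I] * d
        factors[j] = D1d
        op = factors[0]
        for f in factors[1:]:
            op = sp.kron(op, f, format="csr")
        blocks.append(op)
    return sp.vstack(blocks).tocsr()

Delta = ktf_operator(N=8, d=2, k=1)  # 2 * (8 - 2) * 8 = 96 rows, 64 columns
```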
KTF reduces to TV denoising in (3) when $k = 0$, and thus also to GTF with $k = 0$. But for $k \ge 1$, GTF and KTF are different estimators. A look at the action of their penalty operators, as displayed in (7), (8), reveals some of their differences. For example, we see that GTF considers mixed derivatives
of total order k + 1, but KTF only considers directional derivatives of order k + 1 that are parallel to
the coordinate axes. Also, GTF penalizes aggregate derivatives (i.e., sums of derivatives), whereas
KTF penalizes individual ones.
More subtle differences between GTF and KTF have to do with the structure of their estimates, as we
discuss next. Another subtle difference lies in how the GTF and KTF operators (6), (9) relate to more
classical notions of smoothness, particularly, Holder smoothness. This is covered in Section 4.
Structure of estimates. It is straightforward to see that the GTF operator (6) has a 1-dimensional
null space, spanned by $\mathbb{1} = (1, \dots, 1) \in \mathbb{R}^n$. This means that GTF lets constant signals pass through unpenalized, but nothing else; or, in other words, it preserves the projection of $y$ onto the space of constant signals, $\bar y \mathbb{1}$, but nothing else. The KTF operator, meanwhile, has a much richer null space.
Lemma 1. The null space of the KTF operator (9) has dimension $(k+1)^d$, and it is spanned by a polynomial basis made up of elements
$$p(x) = x_1^{a_1} x_2^{a_2} \cdots x_d^{a_d}, \quad x \in Z_d, \quad \text{where } a_1, \dots, a_d \in \{0, \dots, k\}.$$
The proof is elementary and (as with all proofs in this paper) is given in the supplement. The lemma
shows that KTF preserves the projection of y onto the space of polynomials of max degree k, i.e., lets
much more than just constant signals pass through unpenalized.
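Lemma 1 is easy to sanity-check numerically: with the ktf_operator sketch above, every polynomial of max degree k on a small 2d grid should be annihilated.

```python
import numpy as np

# Check Lemma 1 on a small 2d grid: polynomials p(x) = x_1^{a_1} x_2^{a_2}
# with a_1, a_2 <= k lie in the null space of the KTF operator.
N, d, k = 8, 2, 2
Delta = ktf_operator(N, d, k)
x1, x2 = np.meshgrid(np.arange(1, N + 1) / N, np.arange(1, N + 1) / N,
                     indexing="ij")
for a1 in range(k + 1):
    for a2 in range(k + 1):
        p = (x1 ** a1 * x2 ** a2).ravel()
        assert np.allclose(Delta @ p, 0.0)  # annihilated, as the lemma states
```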
Beyond the differences in these base trends (represented by their null spaces), GTF and KTF admit
estimates with similar but generally different structures. KTF has the advantage that this structure is
more transparent: its estimates are piecewise polynomial functions of max degree k, with generally
fewer pieces for larger $\lambda$. This is demonstrated by a functional representation for KTF, given next.
Lemma 2. Let $h_i : [0,1] \to \mathbb{R}$, $i = 1, \dots, N$ be the (univariate) falling factorial functions [27, 30] of order $k$, defined over the knots $1/N, 2/N, \dots, 1$:
$$h_i(t) = \prod_{\ell=1}^{i-1} \Big(t - \frac{\ell}{N}\Big), \quad t \in [0,1],\ i = 1, \dots, k+1,$$
$$h_{i+k+1}(t) = \prod_{\ell=1}^{k} \Big(t - \frac{i+\ell}{N}\Big) \cdot 1\Big\{t > \frac{i+k}{N}\Big\}, \quad t \in [0,1],\ i = 1, \dots, N-k-1. \qquad (10)$$
(For $k = 0$, our convention is for the empty product to equal 1.) Let $\mathcal{H}_d$ be the space spanned by all d-wise tensor products of falling factorial functions, i.e., $\mathcal{H}_d$ contains $f : [0,1]^d \to \mathbb{R}$ of the form
$$f(x) = \sum_{i_1,\dots,i_d=1}^{N} \alpha_{i_1,\dots,i_d}\, h_{i_1}(x_1) h_{i_2}(x_2) \cdots h_{i_d}(x_d), \quad x \in [0,1]^d,$$
for coefficients $\alpha \in \mathbb{R}^n$ (whose components we index by $\alpha_{i_1,\dots,i_d}$, for $i_1, \dots, i_d = 1, \dots, N$). Then the KTF estimator defined in (5), (9) is equivalent to the functional optimization problem
$$\hat f = \operatorname*{argmin}_{f \in \mathcal{H}_d} \; \frac{1}{2} \sum_{x \in Z_d} \big(y(x) - f(x)\big)^2 + \lambda \sum_{j=1}^{d} \sum_{x_{-j} \in Z_{d-1}} \mathrm{TV}\!\left(\frac{\partial^k f(\cdot, x_{-j})}{\partial x_j^k}\right), \qquad (11)$$
where $f(\cdot, x_{-j})$ denotes $f$ as a function of the $j$th dimension with all other dimensions fixed at $x_{-j}$, $\partial^k/\partial x_j^k\,(\cdot)$ denotes the $k$th partial weak derivative operator with respect to $x_j$, for $j = 1, \dots, d$, and $\mathrm{TV}(\cdot)$ denotes the total variation operator. The discrete (5), (9) and functional (11) representations are equivalent in that $\hat f$ and $\hat\theta$ match at all grid locations $x \in Z_d$.
Aside from shedding light on the structure of KTF solutions, the functional optimization problem in (11) is of practical importance: the function $\hat f$ is defined over all of $[0,1]^d$ (as opposed to $\hat\theta$, which is of course only defined on the grid $Z_d$) and thus we may use it to interpolate the KTF estimate to
non-grid locations. It is not clear to us that a functional representation as in (11) (or even a sensible
interpolation strategy) is available for GTF on d-dimensional grids.
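Before moving on, the falling factorial functions in (10) are simple to tabulate; the sketch below (function name ours) evaluates all N basis functions at a set of input points, which is also what one would need to evaluate the interpolant $\hat f$ in (11).

```python
import numpy as np

def falling_factorial_basis(t, N, k):
    """Evaluate the N falling factorial functions h_1, ..., h_N of order k
    from (10), with knots 1/N, 2/N, ..., at the points in the array t."""
    t = np.asarray(t, dtype=float)
    H = np.ones((t.size, N))                 # column 0 is h_1 = 1 (empty product)
    for i in range(2, k + 2):                # h_i for i = 2, ..., k + 1
        H[:, i - 1] = np.prod([t - l / N for l in range(1, i)], axis=0)
    for i in range(1, N - k):                # h_{i+k+1} for i = 1, ..., N - k - 1
        piece = (np.prod([t - (i + l) / N for l in range(1, k + 1)], axis=0)
                 if k >= 1 else np.ones_like(t))
        H[:, i + k] = piece * (t > (i + k) / N)
    return H

H = falling_factorial_basis(np.linspace(0, 1, 200), N=10, k=2)
```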
3 Upper bounds on estimation error
In this section, we assume that d = 2, and derive upper bounds on the estimation error of GTF and
KTF for 2d grids. Upper bounds for generalized lasso estimators were studied in [31], and we will
leverage one of their key results, which is based on what these authors call incoherence of the left
singular vectors of the penalty operator $\Delta$. A slightly refined version of this result is stated below.
Theorem 1 (Theorem 6 in [31]). Suppose that $\Delta \in \mathbb{R}^{r \times n}$ has rank $q$, and denote by $\xi_1 \ge \dots \ge \xi_q$ its nonzero singular values. Also let $u_1, \dots, u_q$ be the corresponding left singular vectors. Assume that these vectors, except for the first $i_0$, are incoherent, meaning that for a constant $\mu \ge 1$,
$$\|u_i\|_\infty \le \mu/\sqrt{n}, \quad i = i_0 + 1, \dots, q.$$
Then for $\lambda \asymp \sqrt{(\log r/n) \sum_{i=i_0+1}^{q} \xi_i^{-2}}$, the generalized lasso estimator $\hat\theta$ in (5) satisfies
$$\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\left( \frac{\mathrm{nullity}(\Delta)}{n} + \frac{i_0}{n} + \sqrt{\frac{\log r}{n} \sum_{i=i_0+1}^{q} \frac{1}{\xi_i^2}} \cdot \frac{\mu}{n} \|\Delta\theta_0\|_1 \right).$$
For GTF and KTF, we will apply this result, balancing an appropriate choice of $i_0$ with the partial sum of reciprocal squared singular values $\sum_{i=i_0+1}^{q} \xi_i^{-2}$. The main challenge, as we will see, is in establishing incoherence of the singular vectors.
Error bounds for graph trend filtering. The authors in [31] have already used Theorem 1 (their
Theorem 6) in order to derive error rates for GTF on 2d grids. However, their results (specifically, their Corollary 8) can be refined using a tighter upper bound for the partial sum term $\sum_{i=i_0+1}^{q} \xi_i^{-2}$.
No real further tightening is possible, since, as we show later, the results below match the minimax
lower bound in rate, up to log factors.
Theorem 2. Assume that $d = 2$. For $k = 0$, $C_n = \|\Delta^{(1)}\theta_0\|_1$ (i.e., $C_n$ equal to the TV of $\theta_0$, as in (2)), and $\lambda \asymp \log n$, the GTF estimator in (5), (6) (i.e., the TV denoising estimator in (3)) satisfies
$$\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\left( \frac{1}{n} + \frac{\log n}{n}\, C_n \right).$$
For any integer $k \ge 1$, $C_n = \|\Delta^{(k+1)}\theta_0\|_1$ and $\lambda \asymp n^{\frac{k}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{-\frac{k}{k+2}}$, GTF satisfies
$$\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\left( n^{-\frac{2}{k+2}} (\log n)^{\frac{2}{k+2}} C_n^{\frac{2}{k+2}} + \frac{1}{n} \right).$$
Remark 1. The result for $k = 0$ in Theorem 2 was essentially already established by [11] (a small difference is that the above rate is sharper by a factor of $\log n$; though to be fair, [11] also take into account $\ell_0$ sparsity). It is interesting to note that the case $k = 0$ appears to be quite special, in that the GTF estimator, i.e., TV denoising estimator, is adaptive to the underlying smoothness parameter $C_n$ (the prescribed choice of tuning parameter $\lambda \asymp \log n$ does not depend on $C_n$).
The technique for upper bounding $\sum_{i=i_0+1}^{q} \xi_i^{-2}$ in the proof of Theorem 2 can be roughly explained as follows. The GTF operator $\Delta^{(k+1)}$ on a 2d grid has squared singular values:
$$\left( 4\sin^2\Big(\frac{\pi(i_1-1)}{2N}\Big) + 4\sin^2\Big(\frac{\pi(i_2-1)}{2N}\Big) \right)^{k+1}, \quad i_1, i_2 = 1, \dots, N.$$
We can upper bound the sum of squared reciprocal singular values with an integral over $[0,1]^2$, make use of the identity $\sin x \ge x/2$ for small enough $x$, and then switch to polar coordinates to calculate the integral (similar to [11], in analyzing TV denoising). The arguments to verify incoherence of the left singular vectors of $\Delta^{(k+1)}$ are themselves somewhat delicate, but were already given in [31].
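The closed form above is easy to check numerically; the sketch below tabulates the squared singular values of the 2d GTF operator and the partial sum of reciprocals, skipping only the single zero value (the choice $i_0 = 1$ here is purely illustrative).

```python
import numpy as np

# Squared singular values of the 2d GTF operator via the closed form above,
# and the partial sum of their reciprocals after dropping the i_0 smallest.
N, k = 64, 1
i1, i2 = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
lam = (4 * np.sin(np.pi * (i1 - 1) / (2 * N)) ** 2
       + 4 * np.sin(np.pi * (i2 - 1) / (2 * N)) ** 2)
sq_sv = np.sort((lam ** (k + 1)).ravel())  # squared singular values, ascending
i0 = 1                                     # skip the zero value (constant eigenvector)
partial_sum = np.sum(1.0 / sq_sv[i0:])
```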
Error bounds for Kronecker trend filtering. In comparison to the GTF case, the application of
Theorem 1 to KTF is a much more difficult task, because (to the best of our knowledge) the KTF operator $\widetilde\Delta^{(k+1)}$ does not admit closed-form expressions for its singular values and vectors. This is true in any dimension (i.e., even for $d = 1$, where KTF reduces to univariate trend filtering). As
it turns out, the singular values can be handled with a relatively straightforward application of the
Cauchy interlacing theorem. It is establishing the incoherence of the singular vectors that proves to
be the real challenge. This is accomplished by leveraging specialized approximation bounds for the
eigenvectors of Toeplitz matrices from [2].
Theorem 3. Assume that $d = 2$. For $k = 0$, since KTF reduces to GTF with $k = 0$ (and to TV denoising), it satisfies the result stated in the first part of Theorem 2.
For any integer $k \ge 1$, $C_n = \|\widetilde\Delta^{(k+1)}\theta_0\|_1$ and $\lambda \asymp n^{\frac{k}{k+2}} (\log n)^{\frac{1}{k+2}} C_n^{-\frac{k}{k+2}}$, the KTF estimator in (5), (9) satisfies
$$\mathrm{MSE}(\hat\theta, \theta_0) = O_{\mathbb{P}}\left( n^{-\frac{2}{k+2}} (\log n)^{\frac{2}{k+2}} C_n^{\frac{2}{k+2}} + \frac{1}{n} \right).$$
The results in Theorems 2 and 3 match, in terms of their dependence on $n$, $k$, $d$ and the smoothness parameter $C_n$. As we will see in the next section, the smoothness classes defined by the GTF and KTF operators are similar, though not exactly the same, and each of GTF and KTF is minimax rate optimal with respect to its own smoothness class, up to log factors.
Beyond 2d? To analyze GTF and KTF on grids of dimension $d \ge 3$, we would need to establish
incoherence of the left singular vectors of the GTF and KTF operators. This should be possible by
extending the arguments given in [31] (for GTF) and in the proof of Theorem 3 (for KTF), and is left
to future work.
4 Lower bounds on estimation error
We present lower bounds on the minimax estimation error over smoothness classes defined by the
operators from GTF (6) and KTF (9), denoted
$$T_d^k(C_n) = \{\theta \in \mathbb{R}^n : \|\Delta^{(k+1)}\theta\|_1 \le C_n\}, \qquad (12)$$
$$\widetilde T_d^k(C_n) = \{\theta \in \mathbb{R}^n : \|\widetilde\Delta^{(k+1)}\theta\|_1 \le C_n\}, \qquad (13)$$
respectively (where the subscripts mark the dependence on the dimension d of the underlying grid
graph). Before we derive such lower bounds, we examine embeddings of (the discretization of) the
class of Holder smooth functions into the GTF and KTF classes, both to understand the nature of
these new classes, and to define what we call a ?canonical? scaling for the radius parameter Cn .
Embedding of Holder spaces and canonical scaling. Given an integer $k \ge 0$ and $L > 0$, recall that the Holder class $H(k+1, L; [0,1]^d)$ contains $k$ times differentiable functions $f : [0,1]^d \to \mathbb{R}$, such that for all integers $\alpha_1, \dots, \alpha_d \ge 0$ with $\alpha_1 + \cdots + \alpha_d = k$,
$$\left| \frac{\partial^k f(x)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} - \frac{\partial^k f(z)}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} \right| \le L \|x - z\|_2, \quad \text{for all } x, z \in [0,1]^d.$$
To compare Holder smoothness with the GTF and KTF classes defined in (12), (13), we discretize the class $H(k+1, L; [0,1]^d)$ by considering function evaluations over the grid $Z_d$, defining
$$H_d^{k+1}(L) = \big\{ \theta \in \mathbb{R}^n : \theta(x) = f(x),\ x \in Z_d,\ \text{for some } f \in H(k+1, L; [0,1]^d) \big\}. \qquad (14)$$
Now we ask: how does the (discretized) Holder class in (14) compare to the GTF and KTF classes in (12), (13)? Beginning with a comparison to KTF, fix $\theta \in H_d^{k+1}(L)$, corresponding to evaluations of $f \in H(k+1, L; [0,1]^d)$, and consider a point $x \in Z_d$ that is away from the boundary. Then the KTF penalty at $x$ is
$$\begin{aligned} \big|\big(D_{x_j}^{k+1}\theta\big)(x)\big| &= \big|\big(D_{x_j}^{k}\theta\big)(x + e_j/N) - \big(D_{x_j}^{k}\theta\big)(x)\big| \\ &\le N^{-k} \left| \frac{\partial^k}{\partial x_j^k} f(x + e_j/N) - \frac{\partial^k}{\partial x_j^k} f(x) \right| + N^{-k} \epsilon(N) \\ &\le L N^{-k-1} + c L N^{-k-1}. \end{aligned} \qquad (15)$$
In the second line above, we define $\epsilon(N)$ to be the sum of absolute errors in the discrete approximations to the partial derivatives (i.e., the error in approximating $\partial^k f(x)/\partial x_j^k$ by $N^k (D_{x_j}^k \theta)(x)$, and similarly at $x + e_j/N$). In the third line, we use Holder smoothness to upper bound the first term, and we use standard numerical analysis (details in the supplement) for the second term to ensure that $\epsilon(N) \le cL/N$ for a constant $c > 0$ depending only on $k$. Summing the bound in (15) over $x \in Z_d$ as appropriate gives a uniform bound on the KTF penalty at $\theta$, and leads to the next result.
Lemma 3. For any integers $k \ge 0$, $d \ge 1$, the (discretized) Holder and KTF classes defined in (14), (13) satisfy $H_d^{k+1}(L) \subseteq \widetilde T_d^k(cLn^{1-(k+1)/d})$, where $c > 0$ is a constant depending only on $k$.
This lemma has three purposes. First, it provides some supporting evidence that the KTF class is an
interesting smoothness class to study, as it shows the KTF class contains (discretizations of) Holder
smooth functions, which are a cornerstone of classical nonparametric regression theory. In fact, this
containment is strict and the KTF class contains more heterogeneous functions in it as well. Second,
it leads us to define what we call the canonical scaling $C_n \asymp n^{1-(k+1)/d}$ for the radius of the KTF class (13). This will be helpful for interpreting our minimax lower bounds in what follows; at this scaling, note that we have $H_d^{k+1}(1) \subseteq \widetilde T_d^k(C_n)$. Third and finally, it gives us an easy way to establish
lower bounds on the minimax estimation error over KTF classes, by invoking well-known results on
minimax rates for Holder classes. This will be described shortly.
As for GTF, calculations similar to (15) are possible, but complications ensue for x on the boundary
of the grid Zd . Importantly, unlike the KTF penalty, the GTF penalty includes discrete derivatives at
the boundary and so these complications have serious consequences, as stated next.
Lemma 4. For any integers $k, d \ge 1$, there are elements in the (discretized) Holder class $H_d^{k+1}(1)$ in (14) that do not lie in the GTF class $T_d^k(C_n)$ in (12) for arbitrarily large $C_n$.
This lemma reveals a very subtle drawback of GTF caused by the use of discrete derivatives at the
boundary of the grid. The fact that GTF classes do not contain (discretized) Holder classes makes
them seem less natural (and perhaps, in a sense, less interesting) than KTF classes. In addition, it
means that we cannot use standard minimax theory for Holder classes to establish lower bounds for
the estimation error over GTF classes. However, as we will see next, we can construct lower bounds
for GTF classes via another (more purely geometric) embedding strategy; interestingly, the resulting
rates match the Holder rates, suggesting that, while GTF classes do not contain all (discretized)
Holder functions, they do contain ?enough? of these functions to admit the same lower bound rates.
Minimax rates for GTF and KTF classes. Following from classical minimax theory for Holder
classes [14, 29], and Lemma 3, we have the following result for the minimax rates over KTF classes.
Theorem 4. For any integers $k \ge 0$, $d \ge 1$, the KTF class defined in (13) has minimax estimation error satisfying
$$R\big(\widetilde T_d^k(C_n)\big) = \Omega\big(n^{-\frac{2d}{2k+2+d}}\, C_n^{\frac{2d}{2k+2+d}}\big).$$
For GTF classes, we use a different strategy. We embed an ellipse, then rotate the parameter space
and embed a hypercube, leading to the following result.
Theorem 5. For any integers $k \ge 0$, $d \ge 1$, the GTF class defined in (12) has minimax estimation error satisfying
$$R\big(T_d^k(C_n)\big) = \Omega\big(n^{-\frac{2d}{2k+2+d}}\, C_n^{\frac{2d}{2k+2+d}}\big).$$
Several remarks are in order.
Remark 2. Plugging in the canonical scaling $C_n \asymp n^{1-(k+1)/d}$ in Theorems 4 and 5, we see that
$$R\big(\widetilde T_d^k(C_n)\big) = \Omega\big(n^{-\frac{2k+2}{2k+2+d}}\big) \quad \text{and} \quad R\big(T_d^k(C_n)\big) = \Omega\big(n^{-\frac{2k+2}{2k+2+d}}\big),$$
both matching the usual rate for the Holder class $H_d^{k+1}(1)$. For KTF, this should be expected, as its
lower bound is constructed via the Holder embedding given in Lemma 3. But for GTF, it may come
as somewhat of a surprise: despite the fact it does not embed a Holder class as in Lemma 4, we see that the GTF class shares the same rate, suggesting it still contains something like "hardest" Holder
smooth signals.
Remark 3. For $d = 2$ and all $k \ge 0$, we can certify that the lower bound rate in Theorem 4 is tight, modulo log factors, by comparing it to the upper bound in Theorem 3. Likewise, we can certify that the lower bound rate in Theorem 5 is tight, up to log factors, by comparing it to the upper bound in Theorem 2. For $d \ge 3$, the lower bound rates in Theorems 4 and 5 will not be tight for some values of $k$. For example, when $k = 0$, at the canonical scaling $C_n \asymp n^{1-1/d}$, the lower bound rate (given by either theorem) is $n^{-2/(2+d)}$; however, [22] prove that the minimax error of the TV class scales (up to log factors) as $n^{-1/d}$ for $d \ge 2$, so we see there is a departure in the rates for $d \ge 3$.
[Figure 2 diagram: the GTF class, the KTF class, and the Holder class; see the caption below.]
Figure 2: Illustration of the two higher-order TV classes, namely the GTF and KTF classes, as they relate to
the (discretized) Holder class. The horizontally/vertically checkered region denotes the part of the Holder class not
contained in the GTF class. As explained in Section 4, this is due to the fact that the GTF operator penalizes
discrete derivatives on the boundary of the grid graph. The diagonally checkered region (also colored in blue)
denotes the part of the Holder class contained in the GTF class. The minimax lower bound rates we derive for
the GTF class in Theorem 5 match the well-known Holder rates, suggesting that this region is actually sizeable
and contains the "hardest" Holder smooth signals.
In general, we conjecture that the Holder embedding for the KTF class (and ellipse embedding for
GTF) will deliver tight lower bound rates, up to log factors, when k is large enough compared to d.
This would have interesting implications for adaptivity to smoother signals (see the next remark); a
precise study will be left to future work, along with tight minimax lower bounds for all k, d.
Remark 4. Again by comparing Theorems 3 and 4, as well as Theorems 2 and 5, we find that, for $d = 2$ and all $k \ge 0$, KTF is rate optimal for the KTF smoothness class and GTF is rate optimal for the GTF smoothness class, modulo log factors. We conjecture that this will continue to hold for all $d \ge 3$, which will be examined in future work. Moreover, an immediate consequence of Theorem 3 and the Holder embedding in Lemma 3 is that KTF adapts automatically to Holder smooth signals, i.e., it achieves a rate (up to log factors) of $n^{-(k+1)/(k+2)}$ over $H_2^{k+1}(1)$, matching the well-known
minimax rate for the more homogeneous Holder class. It is not clear that GTF shares this property.
5 Discussion
In this paper, we studied two natural higher-order extensions of the TV estimator on d-dimensional
grid graphs. The first was graph trend filtering (GTF) as defined in [31], applied to grids; the second
was a new Kronecker trend filtering (KTF) method, which was built with the special (Euclidean-like)
structure of grids in mind. GTF and KTF exhibit some similarities, but are different in important
ways. Notably, the notion of smoothness defined using the KTF operator is somewhat more natural,
and is a strict generalization of the standard notion of Holder smoothness (in the sense that the KTF
smoothness class strictly contains a Holder class of an appropriate order). This is not true for the
notion of smoothness defined using the GTF operator. Figure 2 gives an illustration.
When $d = 2$, we derived tight upper bounds for the estimation error achieved by the GTF and KTF estimators, tight in the sense that these upper bounds match in rate (modulo log factors) the lower
bounds on the minimax estimation errors for the GTF and KTF classes. We constructed the lower
bound for the KTF class by leveraging the fact that it embeds a Holder class; for the GTF class, we
used a different (more geometric) embedding. While these constructions proved to be tight for d = 2
and all $k \ge 0$, we suspect this will no longer be the case in general, when $d$ is large enough relative to $k$. We will examine this in future work, along with upper bounds for GTF and KTF when $d \ge 3$.
Another important consideration for future work is the minimax linear rate over GTF and KTF classes, i.e., the minimax rate when we restrict attention to linear estimators. We anticipate that a
gap will exist between minimax linear and nonlinear rates for all k, d (as it does for k = 0, as shown
in [22]). This would, e.g., provide some rigorous backing to the claim that the KTF class is larger
than its embedded Holder class (the latter having matching minimax linear and nonlinear rates).
Acknowledgements. We thank Sivaraman Balakrishnan for helpful discussions regarding minimax
rates for Holder classes on grids. JS was supported by NSF Grant DMS-1712996. VS, YW, and RT
were supported by NSF Grants DMS-1309174 and DMS-1554123.
References
[1] Alvaro Barbero and Suvrit Sra. Modular proximal optimization for multidimensional total-variation regularization. arXiv: 1411.0589, 2014.
[2] Johan M. Bogoya, Albrecht Bottcher, Sergei M. Grudsky, and Egor A. Maximenko. Eigenvectors of Hermitian Toeplitz matrices with smooth simple-loop symbols. Linear Algebra and its Applications, 493:606-637, 2016.
[3] Kristian Bredies, Karl Kunisch, and Thomas Pock. Total generalized variation. SIAM Journal on Imaging Sciences, 3(3):492-526, 2010.
[4] Antonin Chambolle and Jerome Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84:288-307, 2009.
[5] Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167-188, 1997.
[6] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40:120-145, 2011.
[7] Laurent Condat. A direct algorithm for 1d total variation denoising. HAL: 00675043, 2012.
[8] David L. Donoho and Iain M. Johnstone. Minimax estimation via wavelet shrinkage. Annals of Statistics, 26(8):879-921, 1998.
[9] Zaid Harchaoui and Celine Levy-Leduc. Multiple change-point estimation with a total variation penalty. Journal of the American Statistical Association, 105(492):1480-1493, 2010.
[10] Holger Hoefling. A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4):984-1006, 2010.
[11] Jan-Christian Hutter and Philippe Rigollet. Optimal rates for total variation denoising. Annual Conference on Learning Theory, 29:1115-1146, 2016.
[12] Nicholas Johnson. A dynamic programming algorithm for the fused lasso and ℓ0-segmentation. Journal of Computational and Graphical Statistics, 22(2):246-260, 2013.
[13] Seung-Jean Kim, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. ℓ1 trend filtering. SIAM Review, 51(2):339-360, 2009.
[14] Aleksandr P. Korostelev and Alexandre B. Tsybakov. Minimax Theory of Image Reconstruction. Springer, 2003.
[15] Arne Kovac and Andrew Smith. Nonparametric regression on a graph. Journal of Computational and Graphical Statistics, 20(2):432-447, 2011.
[16] Enno Mammen and Sara van de Geer. Locally adaptive regression splines. Annals of Statistics, 25(1):387-413, 1997.
[17] Oscar Hernan Madrid Padilla, James Sharpnack, James Scott, and Ryan J. Tibshirani. The DFS fused lasso: Linear-time denoising over general graphs. arXiv: 1608.03384, 2016.
[18] Christiane Poschl and Otmar Scherzer. Characterization of minimizers of convex regularization functionals. In Frames and Operator Theory in Analysis and Signal Processing, volume 451, pages 219-248. AMS eBook Collections, 2008.
[19] Alessandro Rinaldo. Properties and refinements of the fused lasso. Annals of Statistics, 37(5):2922-2952, 2009.
[20] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259-268, 1992.
[21] Veeranjaneyulu Sadhanala and Ryan J. Tibshirani. Additive models via trend filtering. arXiv: 1702.05037, 2017.
[22] Veeranjaneyulu Sadhanala, Yu-Xiang Wang, and Ryan J. Tibshirani. Total variation classes beyond 1d: Minimax rates, and the limitations of linear smoothers. Advances in Neural Information Processing Systems, 29, 2016.
[23] James Sharpnack, Alessandro Rinaldo, and Aarti Singh. Sparsistency via the edge lasso. International Conference on Artificial Intelligence and Statistics, 15, 2012.
[24] Gabriel Steidl, Stephan Didas, and Julia Neumann. Splines in higher order TV regularization. International Journal of Computer Vision, 70(3):214-255, 2006.
[25] Wesley Tansey and James Scott. A fast and flexible algorithm for the graph-fused lasso. arXiv: 1505.06475, 2015.
[26] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67(1):91-108, 2005.
[27] Ryan J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. Annals of Statistics, 42(1):285-323, 2014.
[28] Ryan J. Tibshirani and Jonathan Taylor. The solution path of the generalized lasso. Annals of Statistics, 39(3):1335-1371, 2011.
[29] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[30] Yu-Xiang Wang, Alexander Smola, and Ryan J. Tibshirani. The falling factorial basis and its statistical applications. International Conference on Machine Learning, 31, 2014.
[31] Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan J. Tibshirani. Trend filtering on graphs. Journal of Machine Learning Research, 17(105):1-41, 2016.
6,813 | 7,163 | Training Quantized Nets: A Deeper Understanding
Hao Li1?, Soham De1?, Zheng Xu1 , Christoph Studer2 , Hanan Samet1 , Tom Goldstein1
1
Department of Computer Science, University of Maryland, College Park
2
School of Electrical and Computer Engineering, Cornell University
{haoli,sohamde,xuzh,hjs,tomg}@cs.umd.edu, [email protected]
Abstract
Currently, deep neural networks are deployed on low-power portable devices by first training
a full-precision model using powerful hardware, and then deriving a corresponding lowprecision model for efficient inference on such systems. However, training models directly
with coarsely quantized weights is a key step towards learning on embedded platforms that
have limited computing resources, memory capacity, and power consumption. Numerous
recent publications have studied methods for training quantized networks, but these studies
have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training
methods under convexity assumptions. We then look at the behavior of these algorithms for
non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods
lack, which explains the difficulty of training using low-precision arithmetic.
1 Introduction
Deep neural networks are an integral part of state-of-the-art computer vision and natural language
processing systems. Because of their high memory requirements and computational complexity,
networks are usually trained using powerful hardware. There is an increasing interest in training
and deploying neural networks directly on battery-powered devices, such as cell phones or other
platforms. Such low-power embedded systems are memory and power limited, and in some cases
lack basic support for floating-point arithmetic.
To make neural nets practical on embedded systems, many researchers have focused on training nets
with coarsely quantized weights. For example, weights may be constrained to take on integer/binary
values, or may be represented using low-precision (8 bits or less) fixed-point numbers. Quantized nets
offer the potential of superior memory and computation efficiency, while achieving performance that
is competitive with state-of-the-art high-precision nets. Quantized weights can dramatically reduce
memory size and access bandwidth, increase power efficiency, exploit hardware-friendly bitwise
operations, and accelerate inference throughput [1?3].
Handling low-precision weights is difficult and motivates interest in new training methods. When
learning rates are small, stochastic gradient methods make small updates to weight parameters.
Binarization/discretization of weights after each training iteration "rounds off" these small updates and causes training to stagnate [1]. Thus, the naive approach of quantizing weights using a rounding
procedure yields poor results when weights are represented using a small number of bits. Other
approaches include classical stochastic rounding methods [4], as well as schemes that combine
full-precision floating-point weights with discrete rounding procedures [5]. While some of these
schemes seem to work in practice, results in this area are largely experimental, and little work has
been devoted to explaining the excellent performance of some methods, the poor performance of
others, and the important differences in behavior between these methods.
?
* Equal contribution. Author ordering determined by a cryptographically secure random number generator.
Contributions This paper studies quantized training methods from a theoretical perspective, with
the goal of understanding the differences in behavior, and reasons for success or failure, of various
methods. In particular, we present a convergence analysis showing that classical stochastic rounding
(SR) methods [4] as well as newer and more powerful methods like BinaryConnect (BC) [5] are
capable of solving convex discrete problems up to a level of accuracy that depends on the quantization
level. We then address the issue of why algorithms that maintain floating-point representations, like
BC, work so well, while fully quantized training methods like SR stall before training is complete.
We show that the long-term behavior of BC has an important annealing property that is needed for
non-convex optimization, while classical rounding methods lack this property.
2 Background and Related Work
The arithmetic operations of deep networks can be truncated down to 8-bit fixed-point without
significant deterioration in inference performance [4, 6-9]. The most extreme scenario of quantization
is binarization, in which only 1-bit (two states) is used for weight representation [10, 5, 1, 3, 11, 12].
Previous work on obtaining a quantized neural network can be divided into two categories: quantizing
pre-trained models with or without retraining [7, 13, 6, 14, 15], and training a quantized model from
scratch [4, 5, 3, 1, 16]. We focus on approaches that belong to the second category, as they can be
used for both training and inference under constrained resources.
For training quantized NNs from scratch, many authors suggest maintaining a high-precision floating
point copy of the weights while feeding quantized weights into backprop [5, 11, 3, 16], which results
in good empirical performance. There are limitations in using such methods on low-power devices,
however, where floating-point arithmetic is not always available or not desirable. Another widely
used solution using only low-precision weights is stochastic rounding [17, 4]. Experiments show
that networks using 16-bit fixed-point representations with stochastic rounding can deliver results
nearly identical to 32-bit floating-point computations [4], while lowering the precision down to 3-bit
fixed-point often results in a significant performance degradation [18]. Bayesian learning has also
been applied to train binary networks [19, 20]. A more comprehensive review can be found in [3].
3 Training Quantized Neural Nets
We consider empirical risk minimization problems of the form:
$$\min_{w \in \mathcal{W}} F(w) := \frac{1}{m}\sum_{i=1}^{m} f_i(w), \qquad (1)$$
where the objective function decomposes into a sum over many functions $f_i : \mathbb{R}^d \to \mathbb{R}$. Neural
networks have objective functions of this form where each $f_i$ is a non-convex loss function. When
floating-point representations are available, the standard method for training neural networks is
stochastic gradient descent (SGD), which on each iteration selects a function $\tilde f$ randomly from
$\{f_1, f_2, \ldots, f_m\}$, and then computes
$$\text{SGD:} \quad w^{t+1} = w^t - \alpha_t \nabla \tilde f(w^t), \qquad (2)$$
for some learning rate $\alpha_t$. In this paper, we consider the problem of training convolutional neural
networks (CNNs). Convolutions are computationally expensive; low precision weights can be used
to accelerate them by replacing expensive multiplications with efficient addition and subtraction
operations [3, 9] or bitwise operations [11, 16].
To train networks using a low-precision representation of the weights, a quantization function Q(·)
is needed to convert a real-valued number w into a quantized/rounded version ŵ = Q(w). We use
the same notation for quantizing vectors, where we assume Q acts on each dimension of the vector.
Different quantized optimization routines can be defined by selecting different quantizers, and also
by selecting when quantization happens during optimization. The common options are:
Deterministic Rounding (R) A basic uniform or deterministic quantization function snaps a
floating point value to the closest quantized value as:
$$Q_d(w) = \mathrm{sign}(w) \cdot \Delta \cdot \left\lfloor \frac{|w|}{\Delta} + \frac{1}{2} \right\rfloor, \qquad (3)$$
where Δ denotes the quantization step or resolution, i.e., the smallest positive number that is
representable. One exception to this definition is when we consider binary weights, where all weights
are constrained to have two values w ∈ {−1, 1} and uniform rounding becomes Q_d(w) = sign(w).
The deterministic rounding SGD maintains quantized weights with updates of the form:
$$\text{Deterministic Rounding:} \quad w_b^{t+1} = Q_d\!\left(w_b^t - \alpha_t \nabla \tilde f(w_b^t)\right), \qquad (4)$$
where w_b denotes the low-precision weights, which are quantized using Q_d immediately after applying
the gradient descent update. If gradient updates are significantly smaller than the quantization step,
this method loses gradient information and weights may never be modified from their starting values.
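As a concrete illustration, the quantizer (3) and update (4) translate directly into NumPy. This is a minimal sketch of the definitions above, not an implementation from the paper; `delta` stands for the quantization step Δ:

```python
import numpy as np

def quantize_deterministic(w, delta):
    """Deterministic rounding (3): snap each entry of w to the nearest
    multiple of the quantization step delta."""
    return np.sign(w) * delta * np.floor(np.abs(w) / delta + 0.5)

def r_sgd_step(w_b, grad, lr, delta):
    """Deterministic-rounding update (4): take the gradient step in full
    precision, then immediately re-quantize the weights."""
    return quantize_deterministic(w_b - lr * grad, delta)
```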
Stochastic Rounding (SR) The quantization function for stochastic rounding is defined as:
$$Q_s(w) = \Delta \cdot \begin{cases} \lfloor w/\Delta \rfloor + 1, & \text{for } p \le \frac{w}{\Delta} - \left\lfloor \frac{w}{\Delta} \right\rfloor,\\ \lfloor w/\Delta \rfloor, & \text{otherwise,} \end{cases} \qquad (5)$$
where p ∈ [0, 1] is produced by a uniform random number generator. This operator is non-deterministic, and rounds its argument up with probability w/Δ − ⌊w/Δ⌋, and down otherwise.
This quantizer satisfies the important property E[Q_s(w)] = w. Similar to the deterministic rounding
method, the SR optimization method also maintains quantized weights with updates of the form:
$$\text{Stochastic Rounding:} \quad w_b^{t+1} = Q_s\!\left(w_b^t - \alpha_t \nabla \tilde f(w_b^t)\right). \qquad (6)$$
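The stochastic quantizer (5) admits an equally short sketch (again ours, not reference code); the Monte Carlo check at the end illustrates the unbiasedness property E[Q_s(w)] = w:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_stochastic(w, delta):
    """Stochastic rounding (5): round each entry of w up to the next
    multiple of delta with probability w/delta - floor(w/delta)."""
    low = np.floor(w / delta)
    p = rng.random(np.shape(w))
    return delta * (low + (p < w / delta - low))

# Unbiasedness check: the empirical mean of Q_s(w) approaches w.
samples = quantize_stochastic(np.full(100_000, 0.3), delta=0.5)
print(samples.mean())  # close to 0.3, although every sample is 0.0 or 0.5
```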
BinaryConnect (BC) The BinaryConnect algorithm [5] accumulates gradient updates using a
full-precision buffer w_r, and quantizes weights just before gradient computations as follows:
$$\text{BinaryConnect:} \quad w_r^{t+1} = w_r^t - \alpha_t \nabla \tilde f\!\left(Q(w_r^t)\right). \qquad (7)$$
Either stochastic rounding Q_s or deterministic rounding Q_d can be used for quantizing the weights
w_r, but in practice, Q_d is the common choice. The original BinaryConnect paper constrains the
low-precision weights to be {−1, 1}, which can be generalized to {−Δ, Δ}. A more recent method,
Binary-Weights-Net (BWN) [3], allows different filters to have different scales for quantization,
which often results in better performance on large datasets.
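A minimal sketch of one BC step (7) for the binary case, where `grad_fn` is a hypothetical stochastic-gradient oracle standing in for ∇f̃:

```python
import numpy as np

def bc_step(w_r, grad_fn, lr):
    """One BinaryConnect update (7): binarize the full-precision buffer,
    evaluate the stochastic gradient at the binary weights, and apply
    the update to the buffer (never to the binary weights)."""
    w_b = np.sign(w_r)  # Q_d in the binary case; note np.sign(0) == 0,
                        # so a real implementation would break ties
    return w_r - lr * grad_fn(w_b)
```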
Notation For the rest of the paper, we use Q to denote both Q_s and Q_d unless the situation requires
this to be distinguished. We also drop the subscripts on w_r and w_b, and simply write w.
4 Convergence Analysis
We now present convergence guarantees for the Stochastic Rounding (SR) and BinaryConnect
(BC) algorithms, with updates of the form (6) and (7), respectively. For the purposes of deriving
theoretical guarantees, we assume each $f_i$ in (1) is differentiable and convex, and the domain
$\mathcal{W}$ is convex and has dimension d. We consider both the case where F is μ-strongly convex:
$\langle \nabla F(w'), w - w' \rangle \le F(w) - F(w') - \frac{\mu}{2}\|w - w'\|^2$, as well as where F is weakly convex. We also
assume the (stochastic) gradients are bounded: $\mathbb{E}\|\nabla \tilde f(w^t)\|^2 \le G^2$. Some results below also assume
the domain of the problem is finite. In this case, the rounding algorithm clips values that leave the
domain. For example, in the binary case, rounding returns bounded values in {−1, 1}.
4.1 Convergence of Stochastic Rounding (SR)
We can rewrite the update rule (6) as:
$$w^{t+1} = w^t - \alpha_t \nabla \tilde f(w^t) + r_t,$$
where $r_t = Q_s(w^t - \alpha_t \nabla \tilde f(w^t)) - w^t + \alpha_t \nabla \tilde f(w^t)$ denotes the quantization error on the t-th
iteration. We want to bound this error in expectation. To this end, we present the following lemma.
Lemma 1. The stochastic rounding error $r_t$ on each iteration can be bounded, in expectation, as:
$$\mathbb{E}\|r_t\|^2 \le \sqrt{d}\,\Delta\,\alpha_t\,G,$$
where d denotes the dimension of w.
Proofs for all theoretical results are presented in the Appendices. From Lemma 1, we see that
the rounding error per step decreases as the learning rate $\alpha_t$ decreases. This is intuitive since the
probability of an entry in $w^{t+1}$ differing from $w^t$ is small when the gradient update is small relative
to Δ. Using the above lemma, we now present convergence rate results for Stochastic Rounding (SR)
in both the strongly-convex case and the non-strongly convex case. Our error estimates are ergodic,
i.e., they are in terms of $\bar{w}^T = \frac{1}{T}\sum_{t=1}^{T} w^t$, the average of the iterates.
Theorem 1. Assume that F is μ-strongly convex and the learning rates are given by $\alpha_t = \frac{1}{\mu(t+1)}$.
Consider the SR algorithm with updates of the form (6). Then, we have:
$$\mathbb{E}[F(\bar{w}^T) - F(w^\star)] \le \frac{(1 + \log(T+1))G^2}{2\mu T} + \frac{\sqrt{d}\,\Delta G}{2},$$
where $w^\star = \arg\min_w F(w)$.
Theorem 2. Assume the domain has finite diameter D, and learning rates are given by $\alpha_t = \frac{c}{\sqrt{t}}$, for
a constant c. Consider the SR algorithm with updates of the form (6). Then, we have:
$$\mathbb{E}[F(\bar{w}^T) - F(w^\star)] \le \frac{D^2}{c\sqrt{T}} + \frac{\sqrt{T+1}}{2T}\,cG^2 + \frac{\sqrt{d}\,\Delta G}{2}.$$
We see that in both cases, SR converges until it reaches an "accuracy floor." As the quantization
becomes more fine grained, our theory predicts that the accuracy of SR approaches that of high-precision floating point at a rate linear in Δ. This extra term caused by the discretization is unavoidable
since this method maintains quantized weights.
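This floor is easy to observe numerically. The following sketch (our illustration, not an experiment from the paper) runs SR-SGD on the one-dimensional quadratic f(w) = 0.5w² with unit-variance gradient noise and the learning rate of Theorem 1 (μ = 1); the average loss stalls at a level that shrinks with Δ:

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_quadratic(delta, T=20_000):
    """Run SR-SGD on f(w) = 0.5 * w^2 (minimizer w* = 0) with noisy
    gradients and learning rate 1/(t+1); report the average loss."""
    w, avg_loss = 1.0, 0.0
    for t in range(T):
        g = w + rng.normal()                  # stochastic gradient of 0.5*w^2
        step = w - g / (t + 1)                # full-precision gradient step
        low = np.floor(step / delta)
        w = delta * (low + (rng.random() < step / delta - low))  # Q_s
        avg_loss += 0.5 * w * w
    return avg_loss / T

for delta in [0.5, 0.1, 0.02]:
    print(delta, sr_quadratic(delta))  # the loss floor shrinks with delta
```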
4.2 Convergence of Binary Connect (BC)
When analyzing the BC algorithm, we assume that the Hessian satisfies the Lipschitz bound:
$\|\nabla^2 f_i(x) - \nabla^2 f_i(y)\| \le L_2 \|x - y\|$ for some $L_2 \ge 0$. While this is a slightly non-standard
assumption, we will see that it enables us to gain better insights into the behavior of the algorithm.
The results here hold for both stochastic and uniform rounding. In this case, the quantization error r
does not approach 0 as in SR-SGD. Nonetheless, the effect of this rounding error diminishes with
shrinking $\alpha_t$ because $\alpha_t$ multiplies the gradient update, and thus implicitly the rounding error as well.
Theorem 3. Assume F is L-Lipschitz smooth, the domain has finite diameter D, and learning rates
are given by $\alpha_t = \frac{c}{\sqrt{t}}$. Consider the BC-SGD algorithm with updates of the form (7). Then, we have:
$$\mathbb{E}[F(\bar{w}^T) - F(w^\star)] \le \frac{D^2}{2c\sqrt{T}} + \frac{\sqrt{T+1}}{2T}\,cG^2 + \frac{\sqrt{d}}{2}\,\Delta L D.$$
As with SR, BC can only converge up to an error floor. So far this looks a lot like the convergence
As with SR, BC can only converge up to an error floor. So far this looks a lot like the convergence
guarantees for SR. However, things change when we assume strong convexity and bounded Hessian.
Theorem 4. Assume that F is μ-strongly convex and the learning rates are given by $\alpha_t = \frac{1}{\mu(t+1)}$.
Consider the BC algorithm with updates of the form (7). Then we have:
$$\mathbb{E}[F(\bar{w}^T) - F(w^\star)] \le \frac{(1 + \log(T+1))G^2}{2\mu T} + \frac{D L_2 \sqrt{d}\,\Delta}{2}.$$
Now, the error floor is determined by both Δ and $L_2$. For a quadratic least-squares problem, the
gradient of F is linear and the Hessian is constant. Thus, $L_2 = 0$ and we get the following corollary.
Corollary 1. Assume that F is quadratic and the learning rates are given by $\alpha_t = \frac{1}{\mu(t+1)}$. The BC
algorithm with updates of the form (7) yields
$$\mathbb{E}[F(\bar{w}^T) - F(w^\star)] \le \frac{(1 + \log(T+1))G^2}{2\mu T}.$$
We see that the real-valued weights accumulated in BC can converge to the true minimizer of quadratic
losses. Furthermore, this suggests that, when the function behaves like a quadratic on the distance
Figure 1: The SR method starts at some location x (in this case 0), adds a perturbation to x, and then rounds.
As the learning rate α gets smaller, the distribution of the perturbation gets "squished" near the origin, making
the algorithm less likely to move. The "squishing" effect is the same for the part of the distribution lying to the
left and to the right of x, and so it does not affect the relative probability of moving left or right.
scale Δ, one would expect BC to perform fundamentally better than SR. While this may seem
like a restrictive condition, there is evidence that even non-convex neural networks become well
approximated as a quadratic in the later stages of optimization within a neighborhood of a local
minimum [21].
Note, our convergence results on BC are for $w_r$ instead of $w_b$, and these measures of convergence are
not directly comparable. It is not possible to bound $w_b$ when BC is used, as the values of $w_b$ may
not converge in the usual sense (e.g., in the ±1 binary case $w_r$ might converge to 0, in which case
arbitrarily small perturbations to $w_r$ might send $w_b$ to +1 or −1).
5 What About Non-Convex Problems?
The global convergence results presented above for convex problems show that, in general, both
the SR and BC algorithms converge to within O(Δ) accuracy of the minimizer (in expected value).
However, these results do not explain the large differences between these methods when applied to
non-convex neural nets. We now study how the long-term behavior of SR differs from BC. Note
that this section makes no convexity assumptions, and the proposed theoretical results are directly
applicable to neural networks.
Typical (continuous-valued) SGD methods have an important exploration-exploitation tradeoff. When
the learning rate is large, the algorithm explores by moving quickly between states. Exploitation
happens when the learning rate is small. In this case, noise averaging causes the algorithm to more
greedily pursue local minimizers with lower loss values. Thus, the distribution of iterates produced
by the algorithm becomes increasingly concentrated near minimizers as the learning rate vanishes
(see, e.g., the large-deviation estimates in [22]). BC maintains this property as well; indeed, we saw
in Corollary 1 a class of problems for which the iterates concentrate on the minimizer for small $\alpha_t$.
In this section, we show that the SR method lacks this important tradeoff: as the stepsize gets small
and the algorithm slows down, the quality of the iterates produced by the algorithm does not improve,
and the algorithm does not become progressively more likely to produce low-loss iterates. This
behavior is illustrated in Figures 1 and 2.
To understand this problem conceptually, consider the simple case of a one-variable optimization
problem starting at $x^0 = 0$ with Δ = 1 (Figure 1). On each iteration, the algorithm computes a
stochastic approximation $\nabla \tilde f$ of the gradient by sampling from a distribution, which we call p. This
gradient is then multiplied by the stepsize to get $\alpha \nabla \tilde f$. The probability of moving to the right (or
left) is then roughly proportional to the magnitude of $\alpha \nabla \tilde f$. Note the random variable $\alpha \nabla \tilde f$ has
distribution $p_\alpha(z) = \alpha^{-1} p(z/\alpha)$.
Now, suppose that α is small enough that we can neglect the tails of $p_\alpha(z)$ that lie outside the interval
[−1, 1]. The probability of transitioning from $x^0 = 0$ to $x^1 = 1$ using stochastic rounding, denoted
by $T_\alpha(0, 1)$, is then
$$T_\alpha(0, 1) \approx \int_0^1 z\, p_\alpha(z)\, dz = \frac{1}{\alpha}\int_0^1 z\, p(z/\alpha)\, dz = \alpha \int_0^{1/\alpha} p(x)\,x\, dx \approx \alpha \int_0^{\infty} p(x)\,x\, dx,$$
where the first approximation is because we neglected the unlikely case that $\alpha \nabla \tilde f > 1$, and the
second approximation appears because we added a small tail probability to the estimate.
[Figure 2: the left panel plots the loss value of (8) against the weight w; the histogram panels correspond to learning rates (a) α = 1.0, (b) α = 0.1, (c) α = 0.01, (d) α = 0.001.]
Figure 2: Effect of shrinking the learning rate in SR vs BC on a toy problem. The left figure plots the objective
function (8). Histograms plot the distribution of the quantized weights over 10^6 iterations. The top row of plots
correspond to BC, while the bottom row is SR, for different learning rates α. As the learning rate α shrinks, the
BC distribution concentrates on a minimizer, while the SR distribution stagnates.
These approximations get more accurate for small α. We see that, assuming the tails of p are "light" enough,
we have $T_\alpha(0, 1) \approx \alpha \int_0^{\infty} p(x)\,x\, dx$ as α → 0. Similarly, $T_\alpha(0, -1) \approx -\alpha \int_{-\infty}^{0} p(x)\,x\, dx$ as α → 0.
What does this observation mean for the behavior of SR? First of all, the probability of leaving $x^0$ on
an iteration is
$$T_\alpha(0, -1) + T_\alpha(0, 1) \approx \alpha \left( \int_0^{\infty} p(x)\,x\, dx - \int_{-\infty}^{0} p(x)\,x\, dx \right),$$
which vanishes for small α. This means the algorithm slows down as the learning rate drops off,
which is not surprising. However, the conditional probability of ending up at $x^1 = 1$ given that the
algorithm did leave $x^0$ is
$$T_\alpha(0, 1 \mid x^1 \neq x^0) \approx \frac{T_\alpha(0, 1)}{T_\alpha(0, -1) + T_\alpha(0, 1)} \approx \frac{\int_0^{\infty} p(x)\,x\, dx}{\int_0^{\infty} p(x)\,x\, dx - \int_{-\infty}^{0} p(x)\,x\, dx},$$
which does not depend on α. In other words, provided α is small, SR, on average, makes the same
decisions/transitions with learning rate α as it does with learning rate α/10; it just takes 10 times
longer to make those decisions when α/10 is used. In this situation, there is no exploitation benefit in
decreasing α.
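This scale-invariance is easy to verify numerically. The sketch below (our illustration; the Gaussian gradient distribution is an assumption) estimates T_α(0, 1) and T_α(0, −1) by Monte Carlo and shows that the conditional probability of moving right is essentially unchanged as α shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_probs(alpha, mean=-1.0, n=1_000_000):
    """Estimate T_alpha(0, 1) and T_alpha(0, -1) for one SR step with
    Delta = 1 from w = 0, where the stochastic gradient is N(mean, 1)
    and the scaled step -alpha * g is stochastically rounded."""
    step = -alpha * rng.normal(mean, 1.0, size=n)
    low = np.floor(step)
    rounded = low + (rng.random(n) < step - low)
    return (rounded == 1).mean(), (rounded == -1).mean()

for alpha in [0.5, 0.05, 0.005]:
    up, down = transition_probs(alpha)
    print(alpha, up / (up + down))  # conditional probability: nearly constant
```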
5.1 Toy Problem
To gain more intuition about the effect of shrinking the learning rate in SR vs BC, consider the
following simple 1-dimensional non-convex problem:
$$\min_w f(w) := \begin{cases} w^2 + 2, & \text{if } w < 1,\\ (w - 2.5)^2 + 0.75, & \text{if } 1 \le w < 3.5,\\ (w - 4.75)^2 + 0.19, & \text{if } w \ge 3.5. \end{cases} \qquad (8)$$
Figure 2 shows a plot of this loss function. To visualize the distribution of iterates, we initialize at
w = 4.0, and run SR and BC for 10^6 iterations using a quantization resolution of 0.5.
Figure 2 shows the distribution of the quantized weight parameters w over the iterations when
optimized with SR and BC for different learning rates α. As we shift from α = 1 to α = 0.001, the
distribution of BC iterates transitions from a wide/explorative distribution to a narrow distribution
in which iterates aggressively concentrate on the minimizer. In contrast, the distribution produced
by SR concentrates only slightly and then stagnates; the iterates are spread widely even when the
learning rate is small.
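The toy experiment can be sketched as follows (our reimplementation of the setup described above, not the authors' code; the additive Gaussian gradient noise is our assumption, since the text does not specify the noise model):

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA = 0.5

def grad_f(w):
    """Gradient of the piecewise-quadratic objective (8)."""
    if w < 1.0:
        return 2.0 * w
    if w < 3.5:
        return 2.0 * (w - 2.5)
    return 2.0 * (w - 4.75)

def q_d(w):
    """Deterministic rounding to the DELTA-grid (round half up)."""
    return DELTA * np.floor(w / DELTA + 0.5)

def q_s(w):
    """Stochastic rounding to the DELTA-grid."""
    low = np.floor(w / DELTA)
    return DELTA * (low + (rng.random() < w / DELTA - low))

def histogram(method, lr, iters=100_000):
    """Distribution of the quantized iterate under SR (6) or BC (7),
    starting at w = 4.0, with additive Gaussian gradient noise."""
    w, counts = 4.0, {}
    for _ in range(iters):
        if method == "SR":
            w = q_s(w - lr * (grad_f(w) + rng.normal()))
            wq = w
        else:  # BC keeps a full-precision buffer; gradient at Q_d(w)
            wq = q_d(w)
            w = w - lr * (grad_f(wq) + rng.normal())
        counts[wq] = counts.get(wq, 0) + 1
    return counts

print(sorted(histogram("BC", 0.01).items()))
print(sorted(histogram("SR", 0.01).items()))
```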
5.2 Asymptotic Analysis of Stochastic Rounding
The above argument is intuitive, but also informal. To make these statements rigorous, we interpret
the SR method as a Markov chain. On each iteration, SR starts at some state (iterate) x, and moves to
[Figure 3: two diagrams of a 3-state Markov chain on states A, B, C, annotated with transition probabilities.]
Figure 3: Markov chain example with 3 states. In the right figure, we halved each transition probability for
moving between states, with the remaining probability put on the self-loop. Notice that halving all the transition
probabilities would not change the equilibrium distribution, and instead would only increase the mixing time of
the Markov chain.
a new state y with some transition probability $T_\alpha(x, y)$ that depends only on x and the learning rate
α. For fixed α, this is clearly a Markov process with transition matrix² $T_\alpha(x, y)$.
The long-term behavior of this Markov process is determined by the stationary distribution of
$T_\alpha(x, y)$. We show below that for small α, the stationary distribution of $T_\alpha(x, y)$ is nearly invariant
to α, and thus decreasing α below some threshold has virtually no effect on the long term behavior of
the method. This happens because, as α shrinks, the relative transition probabilities remain the same
(conditioned on the fact that the parameters change), even though the absolute probabilities decrease
(see Figure 3). In this case, there is no exploitation benefit to decreasing α.
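The invariance claimed in the caption of Figure 3 is easy to check: replacing a transition matrix T by 0.5T + 0.5I halves every move and puts the remaining mass on self-loops, yet leaves the stationary distribution fixed. A small sketch (the numbers are illustrative, not read off Figure 3):

```python
import numpy as np

def stationary(T):
    """Stationary distribution: left eigenvector of T for eigenvalue 1."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# A 3-state chain on {A, B, C}; rows sum to one.
T = np.array([[0.2, 0.6, 0.2],
              [0.8, 0.0, 0.2],
              [0.4, 0.2, 0.4]])

half = 0.5 * T + 0.5 * np.eye(3)  # halve every move; rest to self-loops
print(stationary(T))
print(stationary(half))  # identical stationary distribution, slower mixing
```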
Theorem 5. Let $p_{x,k}$ denote the probability distribution of the kth entry in $\nabla \tilde f(x)$, the stochastic gradient estimate at x. Assume there is a constant $C_1$ such that for all x, k, and α we have
$\int_{1/\alpha}^{\infty} p_{x,k}(z)\, dz \le C_1 \alpha^2$, and some $C_2$ such that both $\int_0^{C_2} p_{x,k}(z)\, dz > 0$ and $\int_{-C_2}^{0} p_{x,k}(z)\, dz > 0$.
Define the matrix
$$\tilde U(x, y) = \begin{cases} \int_0^{\infty} p_{x,k}(z)\,\frac{z}{\Delta}\, dz, & \text{if } x \text{ and } y \text{ differ only at coordinate } k, \text{ and } y_k = x_k + \Delta,\\ \int_{-\infty}^{0} p_{x,k}(z)\,\frac{|z|}{\Delta}\, dz, & \text{if } x \text{ and } y \text{ differ only at coordinate } k, \text{ and } y_k = x_k - \Delta,\\ 0, & \text{otherwise,} \end{cases}$$
and the associated Markov chain transition matrix
$$\tilde T_{\alpha_0} = I - \alpha_0 \cdot \mathrm{diag}(\mathbf{1}^T \tilde U) + \alpha_0 \tilde U, \qquad (9)$$
where $\alpha_0$ is the largest constant that makes $\tilde T_{\alpha_0}$ non-negative. Suppose $\tilde T_{\alpha_0}$ has a stationary distribution, denoted $\tilde\pi$. Then, for sufficiently small α, $T_\alpha$ has a stationary distribution $\pi_\alpha$, and
$$\lim_{\alpha \to 0} \pi_\alpha = \tilde\pi.$$
Furthermore, this limiting distribution satisfies $\tilde\pi(x) > 0$ for any state x, and is thus not concentrated
on local minimizers of f.
While the long term stationary behavior of SR is relatively insensitive to α, the convergence speed
of the algorithm is not. To measure this, we consider the mixing time of the Markov chain. Let $\pi_\alpha$
denote the stationary distribution of a Markov chain. We say that the ε-mixing time of the chain is
$M_\epsilon$ if $M_\epsilon$ is the smallest integer such that [23]
$$|\mathbb{P}(x^{M_\epsilon} \in A \mid x^0) - \pi_\alpha(A)| \le \epsilon, \quad \text{for all } x^0 \text{ and all subsets of states } A \subseteq X. \qquad (10)$$
We show below that the mixing time of the Markov chain gets large for small α, which means
exploration slows down, even though no exploitation gain is being realized.
Theorem 6. Let $p_{x,k}$ satisfy the assumptions of Theorem 5. Choose some ε sufficiently small that
there exists a proper subset of states $A \subset X$ with stationary probability $\pi_\alpha(A)$ greater than ε. Let
$M_\epsilon(\alpha)$ denote the ε-mixing time of the chain with learning rate α. Then,
$$\lim_{\alpha \to 0} M_\epsilon(\alpha) = \infty.$$
² Our analysis below does not require the state space to be finite, so $T_\alpha(x, y)$ may be a linear operator rather
than a matrix. Nonetheless, we use the term "matrix" as it is standard.
Table 1: Top-1 test error after training with full-precision (ADAM), binarized weights (R-ADAM, SR-ADAM,
BC-ADAM), and binarized weights with big batch size (Big SR-ADAM).

                         CIFAR-10                      CIFAR-100   ImageNet
              VGG-9  VGG-BC  ResNet-56  WRN-56-2      ResNet-56   ResNet-18
ADAM           7.97    7.12       8.10      6.62          33.98       36.04
BC-ADAM       10.36    8.21       8.83      7.17          35.34       52.11
Big SR-ADAM   16.95   16.77      19.84     16.04          50.79       77.68
SR-ADAM       23.33   20.56      26.49     21.58          58.06       88.86
R-ADAM        23.99   21.88      33.56     27.90          68.39       91.07

6 Experiments
To explore the implications of the theory above, we train both VGG-like networks [24] and Residual
networks [25] with binarized weights on image classification problems. On CIFAR-10, we train
ResNet-56, wide ResNet-56 (WRN-56-2, with 2X more filters than ResNet-56), VGG-9, and the
high capacity VGG-BC network used for the original BC model [5]. We also train ResNet-56 on
CIFAR-100, and ResNet-18 on ImageNet [26].
We use Adam [27] as our baseline optimizer as we found it to frequently give better results than
well-tuned SGD (an observation that is consistent with previous papers on quantized models [1–5]),
and we train with the three quantized algorithms mentioned in Section 3, i.e., R-ADAM, SR-ADAM
and BC-ADAM. The image pre-processing and data augmentation procedures are the same as [25].
Following [3], we only quantize the weights in the convolutional layers, but not linear layers, during
training (See Appendix H.1 for a discussion of this issue, and a detailed description of experiments).
We set the initial learning rate to 0.01 and decrease the learning rate by a factor of 10 at epochs 82 and
122 for CIFAR-10 and CIFAR-100 [25]. For ImageNet experiments, we train the model for 90 epochs
and decrease the learning rate at epochs 30 and 60. See Appendix H for additional experiments.
Results The overall results are summarized in Table 1. The binary model trained by BC-ADAM
has comparable performance to the full-precision model trained by ADAM. SR-ADAM outperforms
R-ADAM, which verifies the effectiveness of Stochastic Rounding. There is a performance gap
between SR-ADAM and BC-ADAM across all models and datasets. This is consistent with our
theoretical results in Sections 4 and 5, which predict that keeping track of the real-valued weights as
in BC-ADAM should produce better minimizers.
Exploration vs exploitation tradeoffs Section 5 discusses the exploration/exploitation tradeoff
of continuous-valued SGD methods and predicts that fully discrete methods like SR are unable to
enter a greedy phase. To test this effect, we plot the percentage of changed weights (signs different
from the initialization) as a function of the training epochs (Figures 4 and 5). SR-ADAM explores
aggressively; it changes more weights in the conv layers than both R-ADAM and BC-ADAM, and
keeps changing weights until nearly 40% of the weights differ from their starting values (in a binary
model, randomly re-assigning weights would result in 50% change). The BC method never changes
more than 20% of the weights (Fig 4(b)), indicating that it stays near a local minimizer and explores
less. Interestingly, we see that the weights of the conv layers were not changed at all by R-ADAM;
when the tails of the stochastic gradient distribution are light, this method is ineffective.
6.1 A Way Forward: Big Batch Training
We saw in Section 5 that SR is unable to exploit local minima because, for small learning rates,
shrinking the learning rate does not produce additional bias towards moving downhill. This was
illustrated in Figure 1. If this is truly the cause of the problem, then our theory predicts that we can
improve the performance of SR for low-precision training by increasing the batch size. This shrinks
the variance of the gradient distribution in Figure 1 without changing the mean and concentrates
more of the gradient distribution towards downhill directions, making the algorithm more greedy.
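This variance-reduction effect can be checked in isolation (our illustration with arbitrary numbers): for per-example gradients with mean 1 and standard deviation 3, the probability that the mini-batch gradient has the same sign as the true gradient grows rapidly with the batch size.

```python
import numpy as np

rng = np.random.default_rng(0)

for batch in [1, 128, 1024]:
    # The mini-batch gradient is the mean of `batch` i.i.d. N(1, 3^2)
    # per-example gradients, i.e. it is distributed as N(1, 3^2 / batch).
    g = rng.normal(1.0, 3.0 / np.sqrt(batch), size=100_000)
    print(batch, (g > 0).mean())  # fraction matching the true gradient's sign
```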
To verify this, we tried different batch sizes for SR including 128, 256, 512 and 1024, and found that
the larger the batch size, the better the performance of SR. Figure 5(a) illustrates the effect of a batch
size of 1024 for BC and SR methods. We find that the BC method, like classical SGD, performs best
with a small batch size. However, a large batch size is essential for the SR method to perform well.
Figure 5(b) shows the percentage of weights changed by SR and BC during training. We see that the
large batch methods change the weights less aggressively than the small batch methods, indicating
less exploration. Figure 5(c) shows the percentage of weights changed during each 5 epochs of
training. It is clear that small-batch SR changes weights much more frequently than using a big batch.
This property of big batch training clearly benefits SR; we see in Figure 5(a) and Table 1 that big
batch training improved performance over SR-ADAM consistently.

[Figure 4 plots, for each layer of VGG-BC (conv_1–conv_6, linear_1–linear_3), the percentage of changed weights (%) against training epochs; panels (a) R-ADAM, (b) BC-ADAM, (c) SR-ADAM.]
Figure 4: Percentage of weight changes during training of VGG-BC on CIFAR-10.

[Figure 5 compares BC-ADAM and SR-ADAM with batch sizes 128 and 1024; panels (a) BC-ADAM vs SR-ADAM (error (%) vs. epochs), (b) weight changes since beginning, (c) weight changes every 5 epochs.]
Figure 5: Effect of batch size on SR-ADAM when tested with ResNet-56 on CIFAR-10. (a) Test error vs epoch.
Test error is reported with dashed lines, train error with solid lines. (b) Percentage of weight changes since
initialization. (c) Percentage of weight changes per every 5 epochs.
In addition to providing a means of improving fixed-point training, this suggests that recently
proposed methods using big batches [28, 29] may be able to exploit lower levels of precision to
further accelerate training.
7 Conclusion
The training of quantized neural networks is essential for deploying machine learning models
on portable and ubiquitous devices. We provide a theoretical analysis to better understand the
BinaryConnect (BC) and Stochastic Rounding (SR) methods for training quantized networks. We
proved convergence results for BC and SR methods that predict an accuracy bound that depends
on the coarseness of discretization. For general non-convex problems, we proved that SR differs
from conventional stochastic methods in that it is unable to exploit greedy local search. Experiments
confirm these findings, and show that the mathematical properties of SR are indeed observable (and
very important) in practice.
Acknowledgments
T. Goldstein was supported in part by the US National Science Foundation (NSF) under grant
CCF-1535902, by the US Office of Naval Research under grant N00014-17-1-2078, and by the Sloan
Foundation. C. Studer was supported in part by Xilinx, Inc. and by the US NSF under grants
ECCS-1408006, CCF-1535897, and CAREER CCF-1652065. H. Samet was supported in part by the
US NSF under grant IIS-13-20791.
References
[1] Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized neural networks: Training
deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830
(2016)
[2] Marchesi, M., Orlandi, G., Piazza, F., Uncini, A.: Fast neural networks without multipliers. IEEE
Transactions on Neural Networks 4(1) (1993) 53–62
[3] Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: ImageNet Classification Using Binary
Convolutional Neural Networks. ECCV (2016)
[4] Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision.
In: ICML. (2015)
[5] Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural networks with binary
weights during propagations. In: NIPS. (2015)
[6] Lin, D., Talathi, S., Annapureddy, S.: Fixed point quantization of deep convolutional networks. In: ICML.
(2016)
[7] Hwang, K., Sung, W.: Fixed-point feedforward deep neural network design using weights +1, 0, and −1.
In: IEEE Workshop on Signal Processing Systems (SiPS). (2014)
[8] Lin, Z., Courbariaux, M., Memisevic, R., Bengio, Y.: Neural networks with few multiplications. ICLR
(2016)
[9] Li, F., Zhang, B., Liu, B.: Ternary weight networks. arXiv preprint arXiv:1605.04711 (2016)
[10] Kim, M., Smaragdis, P.: Bitwise neural networks. In: ICML Workshop on Resource-Efficient Machine
Learning. (2015)
[11] Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: Training
neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061 (2016)
[12] Baldassi, C., Ingrosso, A., Lucibello, C., Saglietti, L., Zecchina, R.: Subdominant dense clusters allow for
simple learning and high computational performance in neural networks with discrete synapses. Physical
review letters 115(12) (2015) 128101
[13] Anwar, S., Hwang, K., Sung, W.: Fixed point optimization of deep convolutional neural networks for
object recognition. In: ICASSP, IEEE (2015)
[14] Zhu, C., Han, S., Mao, H., Dally, W.J.: Trained ternary quantization. ICLR (2017)
[15] Zhou, A., Yao, A., Guo, Y., Xu, L., Chen, Y.: Incremental network quantization: Towards lossless CNNs
with low-precision weights. ICLR (2017)
[16] Zhou, S., Wu, Y., Ni, Z., Zhou, X., Wen, H., Zou, Y.: Dorefa-net: Training low bitwidth convolutional
neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160 (2016)
[17] Höhfeld, M., Fahlman, S.E.: Probabilistic rounding in neural network learning with limited precision.
Neurocomputing 4(6) (1992) 291–299
[18] Miyashita, D., Lee, E.H., Murmann, B.: Convolutional neural networks using logarithmic data representation. arXiv preprint arXiv:1603.01025 (2016)
[19] Soudry, D., Hubara, I., Meir, R.: Expectation backpropagation: Parameter-free training of multilayer
neural networks with continuous or discrete weights. In: NIPS. (2014)
[20] Cheng, Z., Soudry, D., Mao, Z., Lan, Z.: Training binary multilayer neural networks for image classification
using expectation backpropagation. arXiv preprint arXiv:1503.03562 (2015)
[21] Martens, J., Grosse, R.: Optimizing neural networks with kronecker-factored approximate curvature. In:
International Conference on Machine Learning. (2015) 2408–2417
[22] Lan, G., Nemirovski, A., Shapiro, A.: Validation analysis of mirror descent stochastic approximation
method. Mathematical programming 134(2) (2012) 425–458
[23] Levin, D.A., Peres, Y., Wilmer, E.L.: Markov chains and mixing times. American Mathematical Soc.
(2009)
[24] Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. In:
ICLR. (2015)
[25] He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: CVPR. (2016)
[26] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A.,
Bernstein, M., et al.: Imagenet Large Scale Visual Recognition Challenge. IJCV (2015)
[27] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. ICLR (2015)
[28] De, S., Yadav, A., Jacobs, D., Goldstein, T.: Big batch SGD: Automated inference using adaptive batch
sizes. arXiv preprint arXiv:1610.05792 (2016)
[29] Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.:
Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677 (2017)
[30] Lax, P.: Linear Algebra and Its Applications. Number v. 10 in Linear algebra and its applications. Wiley
(2007)
[31] Krizhevsky, A.: Learning multiple layers of features from tiny images. (2009)
[32] Zagoruyko, S., Komodakis, N.: Wide residual networks. arXiv preprint arXiv:1605.07146 (2016)
[33] Collobert, R., Kavukcuoglu, K., Farabet, C.: Torch7: A matlab-like environment for machine learning. In:
BigLearn, NIPS Workshop. (2011)
[34] Ioffe, S., Szegedy, C.: Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift. (2015)
Permutation-based Causal Inference Algorithms
with Interventions
Yuhao Wang
Laboratory for Information and Decision Systems
and Institute for Data, Systems and Society
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Karren Dai Yang
Institute for Data, Systems and Society
and Broad Institute of MIT and Harvard
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Liam Solus
Department of Mathematics
KTH Royal Institute of Technology
Stockholm, Sweden
[email protected]
Caroline Uhler
Laboratory for Information and Decision Systems
and Institute for Data, Systems and Society
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Learning directed acyclic graphs using both observational and interventional data is
now a fundamentally important problem due to recent technological developments
in genomics that generate such single-cell gene expression data at a very large
scale. In order to utilize this data for learning gene regulatory networks, efficient
and reliable causal inference algorithms are needed that can make use of both
observational and interventional data. In this paper, we present two algorithms
of this type and prove that both are consistent under the faithfulness assumption.
These algorithms are interventional adaptations of the Greedy SP algorithm and
are the first algorithms using both observational and interventional data with
consistency guarantees. Moreover, these algorithms have the advantage that they
are nonparametric, which makes them useful also for analyzing non-Gaussian data.
In this paper, we present these two algorithms and their consistency guarantees,
and we analyze their performance on simulated data, protein signaling data, and
single-cell gene expression data.
1 Introduction
Discovering causal relations is a fundamental problem across a wide variety of disciplines including
computational biology, epidemiology, sociology, and economics [5, 18, 20, 22]. DAG models can
be used to encode causal relations in terms of a directed acyclic graph (DAG) G, where each node
is associated to a random variable and the arrows represent their causal influences on one another.
The non-arrows of G encode a collection of conditional independence (CI) relations through the socalled Markov properties. While DAG models are extraordinarily popular within the aforementioned
research fields, it is in general a difficult task to recover the underlying DAG G from samples from the
joint distribution on the nodes. In fact, since different DAGs can encode the same set of CI relations,
from observational data alone the underlying DAG G is in general only identifiable up to Markov
equivalence, and interventional data is needed to identify the complete DAG.
In recent years, the new drop-seq technology has allowed obtaining high-resolution observational
single-cell gene expression data at a very large scale [12]. In addition, earlier this year this technology
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
was combined with the CRISPR/Cas9 system into perturb-seq, a technology that allows obtaining
high-throughput interventional gene expression data [4]. An imminent question now is how to make
use of a combination of observational and interventional data (of the order of 100,000 cells / samples
on 20,000 genes / variables) in the causal discovery process. Therefore, the development of efficient
and consistent algorithms using both observational and interventional data that are implementable
within genomics is now a crucial goal. This is the purpose of the present paper.
The remainder of this paper is structured as follows: In Section 2 we discuss related work. Then
in Section 3, we recall fundamental facts about DAG models and causal inference that we will use
in the coming sections. In Section 4, we present the two algorithms and discuss their consistency
guarantees. In Section 5, we analyze the performance of the two algorithms on both simulated and
real datasets. We end with a short discussion in Section 6.
2 Related Work
Causal inference algorithms based on observational data can be classified into three categories:
constraint-based, score-based, and hybrid methods. Constraint-based methods, such as the PC
algorithm [22], treat causal inference as a constraint satisfaction problem and rely on CI tests to
recover the model via its Markov properties. Score-based methods, on the other hand, assign a
score function such as the Bayesian Information Criterion (BIC) to each DAG and optimize the
score via greedy approaches. An example is the prominent Greedy Equivalence Search (GES) [14].
Hybrid methods either alternate between score-based and constraint-based updates, as in Max-Min
Hill-Climbing [26], or use score functions based on CI tests, as in the recently introduced Greedy SP
algorithm [23].
Based on the growing need for efficient and consistent algorithms that accommodate observational and
interventional data [4], it is natural to consider extensions of the previously described algorithms that
can accommodate interventional data. Such options have been considered in [8], in which the authors
propose GIES, an extension of GES that accounts for interventional data. This algorithm can be
viewed as a greedy approach to ℓ₀-penalized maximum likelihood estimation with interventional data,
an otherwise computationally infeasible score-based approach. Hence GIES is a parametric approach
(relying on Gaussianity) and while it has been applied to real data [8, 9, 15], we will demonstrate via
an example in Section 3 that it is in general not consistent. In this paper, we assume causal sufficiency,
i.e., that there are no latent confounders in the data-generating DAG. In addition, we assume that the
interventional targets are known. Methods such as ACI [13], HEJ [10], COmbINE [25] and ICP [15]
allow for latent confounders with possibly unknown interventional targets. In addition, other methods
have been developed specifically for the analysis of gene expression data [19]. A comparison of the
method presented here and some of these methods in the context of gene expression data is given in
the Supplementary Material.
The main purpose of this paper is to provide the first algorithms (apart from enumerating all DAGs)
for causal inference based on observational and interventional data with consistency guarantees.
These algorithms are adaptations of the Greedy SP algorithm [23]. As compared to GIES, another
advantage of these algorithms is that they are nonparametric and hence do not assume Gaussianity, a
feature that is crucial for applications to gene expression data which is inherently non-Gaussian.
3 Preliminaries
DAG models. Given a DAG G = ([p], A) with node set [p] := {1, . . . , p} and a collection of arrows
A, we associate the nodes of G to a random vector (X1 , . . . , Xp ) with joint probability distribution P.
For a subset of nodes S ? [p], we let PaG (S), AnG (S), ChG (S), DeG (S), and NdG (S), denote the
parents, ancestors, children, descendants, and nondescendants of S in G. Here, we use the typical
graph theoretical definitions of these terms as given in [11]. By the Markov property, the collection of
non-arrows of G encode a set of CI relations $X_i \perp\!\!\!\perp X_{\mathrm{Nd}(i)\setminus\mathrm{Pa}(i)} \mid X_{\mathrm{Pa}(i)}$. A distribution P is said to
satisfy the Markov assumption (a.k.a. be Markov) with respect to G if it entails these CI relations. A
fundamental result about DAG models is that the complete set of CI relations implied by the Markov
assumption for G is given by the d-separation relations in G [11, Section 3.2.2]; i.e., P satisfies the
Markov assumption with respect to G if and only if $X_A \perp\!\!\!\perp X_B \mid X_C$ in P whenever A and B are
[Figure 1 shows two DAGs on seven nodes: the generating DAG (left) and the GIES local maximum (right), with the covered arrow 5 → 4 in the right DAG colored red.]
Figure 1: A generating DAG (left) and its GIES local maxima (right) for which GIES is not consistent.
d-separated in G given C. The faithfulness assumption is the assertion that the only CI relations
entailed by P are those implied by d-separation in G.
Two DAGs G and H with the same set of d-separation statements are called Markov equivalent,
and the complete set of DAGs that are Markov equivalent to G is called its Markov equivalence
class (MEC), denoted [G]. The MEC of G is represented combinatorially by a partially directed
graph Gb := ([p], D, E), called its CP-DAG or essential graph [1]. The arrows D are precisely
those arrows in G that have the same orientation in all members of [G], and the edges E represent
those arrows that change direction between distinct members of the MEC. In [2], the authors give a
transformational characterization of the members of [G]. An arrow i ? j in G is called a covered
arrow if PaG (j) = PaG (i) ? {i}. Two DAGs G and H are Markov equivalent if and only if there
exists a sequence of covered arrow reversals transforming G into H [2]. This transformational
characterization plays a fundamental role in GES [14], GIES [8], Greedy SP [23], as well as the
algorithms we introduce in this paper.
Learning from Interventions. In this paper, we consider multiple interventions. Given an ordered
list of subsets of [p] denoted by I := {I1 , I2 , . . . , IK }, for each Ij we generate an interventional
distribution, denoted Pj , by forcing the random variables Xi for i ? Ij to the value of some
independent random variables. We assume throughout that $I_j = \emptyset$ for some j, i.e., that we have access
to a combination of observational and interventional data. If P is Markov with respect to G = ([p], A),
then the intervention DAG of $I_j$ is the subDAG $G^j := ([p], A^j)$ where $A^j = \{(i, j) \in A : j \notin I_j\}$;
i.e., $G^j$ is given by removing the incoming arrows to all intervened nodes in G. Notice that $P_j$ is
always Markov with respect to $G^j$. This fact allows us to naturally extend the notions of Markov
equivalence and essential graphs to the interventional setting, as described in [8]. Two DAGs G and H
are I-Markov equivalent for the collection of interventions I if they have the same skeleton and the
same set of immoralities, and if $G^j$ and $H^j$ have the same skeleton for all j = 1, . . . , K [8, Theorem
10]. Hence, any two I-Markov equivalent DAGs lie in the same MEC. The I-Markov equivalence
class (I-MEC) of G is denoted [G]I . The I-essential graph of G is the partially directed graph
$\hat{\mathcal G}_{\mathcal I} := \left([p],\, \cup_{j=1}^{K} D^j,\, \cup_{j=1}^{K} E^j\right)$, where $\hat{\mathcal G}^j = ([p], D^j, E^j)$. The arrows of $\hat{\mathcal G}_{\mathcal I}$ are called I-essential
arrows of G.
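For concreteness, constructing the intervention DAG $G^j$ amounts to deleting the arrows into intervened nodes; a minimal sketch using the networkx library (our illustration, not code from the paper):

```python
import networkx as nx

def intervention_dag(g, targets):
    """Return G^j: a copy of the DAG g with all arrows into intervened
    nodes removed, i.e. A^j = {(i, j) in A : j not in I_j}."""
    gj = g.copy()
    gj.remove_edges_from([(u, v) for u, v in g.edges() if v in targets])
    return gj

g = nx.DiGraph([(1, 2), (2, 3), (1, 3)])
print(intervention_dag(g, {3}).edges())  # only 1 -> 2 survives
```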
Greedy Interventional Equivalence Search (GIES). GIES is a three-phase score-based algorithm:
In the forward phase, GIES initializes with an empty I-essential graph Ĝ⁰. Then it sequentially
steps from one I-essential graph Ĝⁱ to a larger one Ĝⁱ⁺¹ given by adding a single arrow to Ĝⁱ. In
the backward phase, it steps from one essential graph Ĝⁱ to a smaller one Ĝⁱ⁺¹ containing precisely
one less arrow than Ĝⁱ. In the turning phase, the algorithm reverses the direction of arrows. It first
considers reversals of non-I-essential arrows and then the reversal of I-essential arrows, allowing
it to move between I-MECs. At each step in all phases the maximal scoring candidate is chosen,
and the phase is only terminated when no higher-scoring I-essential graph exists. GIES repeatedly
executes the forward, backward, and turning phases, in that order, until no higher-scoring I-essential
graph can be found. It is amenable to any score that is constant on an I-MEC, such as the BIC.
The question whether GIES is consistent was left open in [8]. We now prove that GIES is in general
not consistent; i.e., if $n_j$ i.i.d. samples are drawn from the interventional distribution $P_j$, then even
as $n_1 + \cdots + n_K \to \infty$ and under the faithfulness assumption, GIES may not recover the optimal
I-MEC with probability 1. Consider the data-generating DAG depicted on the left in Figure 1.
Algorithm 1:
Input: Observations X̂, an initial permutation π₀, a threshold $\delta_n > \sum_{k=1}^{K} \lambda_{n_k}$, and a set of
interventional targets I = {I₁, . . . , I_K}.
Output: A permutation π and its minimal I-MAP $G_\pi$.
1. Set $G_\pi := \mathrm{argmax}_{G \text{ consistent with } \pi} \mathrm{Score}(G)$;
2. Using a depth-first search approach with root π, search for a permutation $\pi_s$ with
$\mathrm{Score}(G_{\pi_s}) > \mathrm{Score}(G_\pi)$ that is connected to π through a sequence of permutations
$\pi_0 = \pi, \pi_1, \cdots, \pi_{s-1}, \pi_s$,
where each permutation $\pi_k$ is produced from $\pi_{k-1}$ by a transposition that corresponds to a
covered edge in $G_{\pi_{k-1}}$ such that $\mathrm{Score}(G_{\pi_k}) > \mathrm{Score}(G_{\pi_{k-1}}) - \delta_n$. If no such $G_{\pi_s}$ exists,
return π and $G_\pi$; else set $\pi := \pi_s$ and repeat.
Suppose we take interventions I consisting of $I_1 = \emptyset$, $I_2 = \{4\}$, $I_3 = \{5\}$, and that GIES arrives at
the DAG G depicted on the right in Figure 1. If the data collected grows as $n_1 = Cn_2 = Cn_3$ for
some constant C > 1, then we can show that the BIC score of G is a local maximum with probability
1/2 as $n_1$ tends to infinity. The proof of this fact relies on the observation that GIES must initialize the
turning phase at G, and that G contains precisely one covered arrow 5 → 4, which is colored red in
Figure 1. The full proof is given in the Supplementary Material.
Greedy SP. In this paper we adapt the hybrid algorithm Greedy SP to provide consistent algorithms
that use both interventional and observational data. Greedy SP is a permutation-based algorithm that
associates a DAG to every permutation of the random variables and greedily updates the DAG by
transposing elements of the permutation. More precisely, given a set of observed CI relations C and a
permutation ? = ?1 ? ? ? ?p , the Greedy SP algorithm assigns a DAG G? := ([p], A? ) to ? via the rule
? i ? ? j ? A?
??
i < j and ?i 6?
? ?j | {?1 , . . . , ?max(i,j) }\{?i , ?j },
for all 1 ? i < j ? p. The DAG G? is a minimal I-MAP (independence map) with respect to C,
since any DAG G? is Markov with respect to C and any proper subDAG of G? encodes a CI relation
that is not in C [17]. Using a depth-first search approach, the algorithm reverses covered edges in
G? , takes a linear extension ? of the resulting DAG and re-evaluates against C to see if G? has fewer
arrows than G? . If so, the algorithm reinitializes at ? , and repeats this process until no sparser DAG
can be recovered. In the observational setting, Greedy SP is known to be consistent whenever the
data-generating distribution is faithful to the sparsest DAG [23].
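The minimal I-MAP construction is straightforward to implement given a CI oracle; in the sketch below, `ci_test(a, b, S)` is a hypothetical callable returning True when a and b are judged conditionally independent given the set S:

```python
def minimal_imap(perm, ci_test):
    """Build the DAG G_pi: include the arrow perm[i] -> perm[j] (i < j)
    iff perm[i] and perm[j] are dependent given all other variables among
    the first max(i, j) + 1 positions of perm."""
    p, arrows = len(perm), []
    for i in range(p):
        for j in range(i + 1, p):
            cond = set(perm[: j + 1]) - {perm[i], perm[j]}
            if not ci_test(perm[i], perm[j], cond):
                arrows.append((perm[i], perm[j]))
    return arrows
```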
4 Two Permutation-Based Algorithms with Interventions
We now introduce our two interventional adaptations of Greedy SP and prove that they are consistent
under the faithfulness assumption. In the first algorithm, presented in Algorithm 1, we use the same
moves as Greedy SP, but we optimize with respect to a new score function that utilizes interventional
data, namely the sum of the interventional BIC scores. To be more precise, for a collection of
interventions I = {I₁, . . . , I_K}, the new score function is
$$\mathrm{Score}(G) := \sum_{k=1}^{K}\left[\max_{(A,\varepsilon)\in G^k} \ell_k\!\left(\hat X^k; A, \varepsilon\right)\right] - \sum_{k=1}^{K} \lambda_{n_k} |G^k|,$$
where $\ell_k$ denotes the log-likelihood of the interventional distribution $P_k$, $(A, \varepsilon)$ are any parameters
consistent with $G^k$, $|G|$ denotes the number of arrows in G, and $\lambda_{n_k} = \frac{\log n_k}{n_k}$.
When Algorithm 1 has access to observational and interventional data, uniform consistency
follows using similar techniques to those used to prove uniform consistency of Greedy SP in [23]. A
full proof of the following consistency result for Algorithm 1 is given in the Supplementary Material.
Theorem 4.1. Suppose P is Markov with respect to an unknown I-MAP $G^\star$. Suppose also that
observational and interventional data are drawn from P for a collection of interventional targets
$I = \{I_1 := \emptyset, I_2, \ldots, I_K\}$. If $P_k$ is faithful to $(G^\star)^k$ for all $k \in [K]$, then Algorithm 1 returns the
I-MEC of the data-generating DAG $G^\star$ almost surely as $n_k \to \infty$ for all $k \in [K]$.
Algorithm 2: Interventional Greedy SP (IGSP)
Input: A collection of interventional targets I = {I₁, . . . , I_K} and a starting permutation π₀.
Output: A permutation π and its minimal I-MAP $G_\pi$.
1. Set $G := G_{\pi_0}$;
2. Using a depth-first-search approach with root π, search for a minimal I-MAP $G_\tau$ with
$|G| > |G_\tau|$ that is connected to G by a list of I-covered edge reversals. Along the search,
prioritize the I-covered edges that are also I-contradicting edges. If such $G_\tau$ exists, set
$G := G_\tau$, update the number of I-contradicting edges, and repeat this step. If not, output $G_\tau$
with $|G| = |G_\tau|$ that is connected to G by a list of I-covered edges and minimizes the number
of I-contradicting edges.
A problematic feature of Algorithm 1 from a computational perspective is the slack parameter $\delta_n$.
In fact, if this parameter were not included, then Algorithm 1 would not be consistent. This can be
seen via an application of Algorithm 1 to the example depicted in Figure 1. Using the same set-up
as the inconsistency example for GIES, suppose that the left-most DAG G in Figure 1 is the data
generating DAG, and that we draw $n_k$ i.i.d. samples from the interventional distribution $P_k$ for the
collection of targets $I = \{I_1 = \emptyset, I_2 = \{4\}, I_3 = \{5\}\}$. Suppose also that $n_1 = Cn_2 = Cn_3$ for
some constant C > 1, and now additionally assume that we initialize Algorithm 1 at the permutation
π = 1276543. Then the minimal I-MAP $G_\pi$ is precisely the DAG presented on the right in Figure 1.
This DAG contains one covered arrow, namely 5 → 4. Reversing it produces the minimal I-MAP $G_\tau$
for τ = 1276453. Computing the score difference $\mathrm{Score}(G_\tau) - \mathrm{Score}(G_\pi)$ using [16, Lemma 5.1]
shows that as $n_1$ tends to infinity, $\mathrm{Score}(G_\tau) < \mathrm{Score}(G_\pi)$ with probability 1/2. Hence, Algorithm 1
would not be consistent without the slack parameter $\delta_n$. This calculation can be found in the
Supplementary Material.
Our second interventional adaptation of the Greedy SP algorithm, presented in Algorithm 2, leaves the
score function the same (i.e., the number of edges of the minimal I-MAP), but restricts the possible
covered arrow reversals that can be queried at each step. In order to describe this restricted set of
moves we provide the following definitions.
Definition 4.2. Let $\mathcal{I} = \{I_1, \ldots, I_K\}$ be a collection of interventions, and for $i, j \in [p]$ define the collection of indices
$$\mathcal{I}_{i \setminus j} := \{k \in [K] : i \in I_k \text{ and } j \notin I_k\}.$$
For a minimal I-MAP $\mathcal{G}_\pi$ we say that a covered arrow $i \rightarrow j \in \mathcal{G}_\pi$ is I-covered if
$$\mathcal{I}_{i \setminus j} = \emptyset \quad\text{or}\quad i \rightarrow j \notin (\mathcal{G}^k)_\pi \ \text{ for all } k \in \mathcal{I}_{i \setminus j}.$$
Definition 4.3. We say that an arrow $i \rightarrow j \in \mathcal{G}_\pi$ is I-contradicting if the following three conditions hold: (a) $\mathcal{I}_{i \setminus j} \cup \mathcal{I}_{j \setminus i} \neq \emptyset$, (b) $\mathcal{I}_{i \setminus j} = \emptyset$ or $i \perp\!\!\!\perp j$ in distribution $P_k$ for all $k \in \mathcal{I}_{i \setminus j}$, (c) $\mathcal{I}_{j \setminus i} = \emptyset$ or there exists $k \in \mathcal{I}_{j \setminus i}$ such that $i \not\perp\!\!\!\perp j$ in distribution $P_k$.
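Definitions 4.2 and 4.3 translate directly into code. In the sketch below, `targets` is the list of intervention target sets, `arrow_in_Gk(k, i, j)` is a hypothetical predicate for whether $i \rightarrow j$ appears in the interventional minimal I-MAP $(\mathcal{G}^k)_\pi$, and `dependent_in(k, i, j)` reports dependence of i and j under $P_k$; both predicates are assumptions standing in for whatever tests an implementation uses.

```python
def I_setminus(targets, i, j):
    """The index set I_{i\\j} = {k : i in I_k and j not in I_k}."""
    return {k for k, I_k in enumerate(targets) if i in I_k and j not in I_k}

def is_I_covered(i, j, targets, arrow_in_Gk):
    """Definition 4.2: a covered arrow i -> j is I-covered."""
    Iij = I_setminus(targets, i, j)
    return not Iij or all(not arrow_in_Gk(k, i, j) for k in Iij)

def is_I_contradicting(i, j, targets, dependent_in):
    """Definition 4.3: conditions (a), (b), and (c)."""
    Iij = I_setminus(targets, i, j)
    Iji = I_setminus(targets, j, i)
    a = bool(Iij | Iji)
    b = (not Iij) or all(not dependent_in(k, i, j) for k in Iij)
    c = (not Iji) or any(dependent_in(k, i, j) for k in Iji)
    return a and b and c
```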
In the observational setting, GES and Greedy SP utilize covered arrow reversals to transition between members of a single MEC as well as between MECs [2, 3, 23]. Since an I-MEC is characterized by the skeleta and immoralities of each of its interventional DAGs, I-covered arrows represent the natural candidate for analogous transitionary moves between I-MECs in the interventional setting. It is possible that reversing an I-covered edge $i \rightarrow j$ in a minimal I-MAP $\mathcal{G}_\pi$ results in a new minimal I-MAP $\mathcal{G}_\tau$ that is in the same I-MEC as $\mathcal{G}_\pi$; namely, this happens when $i \rightarrow j$ is a non-I-essential edge in $\mathcal{G}_\pi$. Similar to Greedy SP, Algorithm 2 implements a depth-first-search approach that allows for such I-covered arrow reversals, but it prioritizes those I-covered arrow reversals that produce a minimal I-MAP $\mathcal{G}_\tau$ that is not I-Markov equivalent to $\mathcal{G}_\pi$. These arrows are the I-contradicting arrows. The result of this refined search via I-covered arrow reversals is an algorithm that is consistent under the faithfulness assumption.
Theorem 4.4. Algorithm 2 is consistent under the faithfulness assumption.
The proof of Theorem 4.4 is given in the Supplementary Material. When only observational data is available, Algorithm 2 reduces to Greedy SP. We remark that the number of queries conducted in a given step of Algorithm 2 is, in general, strictly less than in the purely observational setting; that is, I-covered arrows generally constitute a strict subset of the covered arrows in a DAG.
(a) p = 10, K = 1
(b) p = 10, K = 2
(c) p = 20, K = 1
(d) p = 20, K = 2
Figure 2: The proportion of consistently estimated DAGs for 100 Gaussian DAG models on p nodes
with K single-node interventions.
At first glance, keeping track of the I-covered edges may appear computationally inefficient. However, at each step we only need to update this list locally, so the computational complexity of the algorithm is not drastically impacted by this procedure. Hence, access to interventional data is beneficial in two ways: it allows us to reduce the search directions at every step, and it often allows us to estimate the true DAG more accurately, since an I-MEC is in general smaller than an MEC. Note that in this paper all the theoretical analyses are based on the low-dimensional setting, where $p \ll n$. The high-dimensional consistency of Greedy SP is shown in [23], and it is not difficult to see that the same high-dimensional consistency guarantees also apply to IGSP.
5 Evaluation
In this section, we compare Algorithm 2, which we call Interventional Greedy SP (IGSP), with GIES on both simulated and real data. Algorithm 1 is of interest from a theoretical perspective, but it is computationally inefficient since it requires performing two variable selection procedures per update; therefore, it will not be analyzed in this section. The code utilized for the following experiments can be found at https://github.com/yuhaow/sp-intervention.
5.1 Simulations
Our simulations are conducted for linear structural equation models with Gaussian noise:
$$(X_1, \ldots, X_p)^T = \big((X_1, \ldots, X_p) A\big)^T + \epsilon,$$
where $\epsilon \sim \mathcal{N}(0, \mathbf{1}_p)$ and $A = (a_{ij})_{i,j=1}^{p}$ is an upper-triangular matrix of edge weights with $a_{ij} \neq 0$ if and only if $i \rightarrow j$ is an arrow in the underlying DAG $\mathcal{G}^*$. For each simulation study we generated
100 realizations of an (Erdős–Rényi) random p-node Gaussian DAG model for $p \in \{10, 20\}$ with an expected edge density of 1.5. The collections of interventional targets $\mathcal{I} = \{I_0 := \emptyset, I_1, \ldots, I_K\}$ always consist of the empty set $I_0$ together with $K = 1$ or $2$ further targets. For $p = 10$, the size of each intervention set was 5 for $K = 1$ and 4 for $K = 2$. For $p = 20$, the sizes were increased to 10 and 8, respectively, to keep the proportion of intervened nodes constant. In each study, we compared GIES with Algorithm 2
using n samples for each intervention, with $n = 10^3, 10^4, 10^5$. Figure 2 shows the proportion of consistently estimated DAGs as a function of the cut-off parameter for the partial correlation tests. Interestingly, although GIES is not consistent on random DAGs, in some cases it performs better than IGSP, in particular for smaller sample sizes. However, as implied by the consistency guarantees given in Theorem 4.4, IGSP performs better as the sample size increases.
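The simulated data can be generated along the following lines; the do-intervention mechanism shown (severing edges into intervened nodes and driving them with fresh noise) is one plausible reading of the setup, so treat details such as the intervention distribution as assumptions.

```python
import numpy as np

def sample_sem(A, n, intervened=(), rng=None):
    """Draw n samples from (X_1,...,X_p)^T = ((X_1,...,X_p)A)^T + eps.

    A is the upper-triangular weight matrix; a do-intervention on a node
    cuts its incoming edges so the node is driven by noise alone.
    """
    rng = rng or np.random.default_rng()
    p = A.shape[0]
    A = A.copy()
    A[:, list(intervened)] = 0.0        # sever arrows into intervened nodes
    eps = rng.standard_normal((n, p))
    # X = X A + eps  =>  X = eps (I - A)^{-1}
    return eps @ np.linalg.inv(np.eye(p) - A)
```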
We also conducted a focused simulation study on models for which the data-generating DAG $\mathcal{G}$ is the one depicted on the left in Figure 1, for which GIES is not consistent. In this simulation study, we took 100 realizations of Gaussian models for the data-generating DAG $\mathcal{G}$, with the nonzero edge weights $a_{ij}$ drawn randomly from $[-1, -c) \cup (c, 1]$ for $c = 0.1, 0.25, 0.5$. The interventional targets were $\mathcal{I} = \{I_0 = \emptyset, I_1\}$, where $I_1$ was chosen uniformly at random from $\{4\}$, $\{5\}$, and $\{4, 5\}$. Figure 3 shows, for each choice of c, the proportion of times $\mathcal{G}$ was consistently estimated as a function of the cut-off parameter for the partial correlation tests. We see from these plots that, as expected from our theoretical results, GIES recovers $\mathcal{G}$ at a lower rate than Algorithm 2.
(a) c = 0.1
(b) c = 0.25
(c) c = 0.5
Figure 3: Proportion of times the DAG $\mathcal{G}$ from Figure 1 (left) is consistently estimated under GIES and Algorithm 2 for Gaussian DAG models with edge weights drawn from $[-1, -c) \cup (c, 1]$.
5.2 Application to Real Data
In the following, we report results for studies conducted on two real datasets coming from genomics.
The first dataset is the protein signaling dataset of Sachs et al. [21], and the second is the single-cell
gene expression data generated using perturb-seq in [4].
Analysis of protein signaling data. The dataset of Sachs et al. [21] consists of 7466 measurements of
the abundance of phosphoproteins and phospholipids recorded under different experimental conditions
in primary human immune system cells. The different experimental conditions are generated using
various reagents that inhibit or activate signaling nodes, and thereby correspond to interventions
at different nodes in the protein signaling network. The dataset is purely interventional and most
interventions take place at more than one target. Since some of the experimental perturbations
effect receptor enzymes instead of the measured signaling molecules, we consider only the 5846
measurements in which the perturbations of receptor enzymes are identical. In this way, we can define
the observational distribution to be that of molecule abundances in the model where only the receptor
enzymes are perturbed. This results in 1755 observational measurements and 4091 interventional
measurements. Table E.2 in the Supplementary Material summarizes the number of samples as
well as the targets for each intervention. For this dataset we compared the GIES results reported
in [9] with Algorithm 2, using both a linear Gaussian and a kernel-based independence criterion [6, 24]. A crucial advantage of Algorithm 2 over GIES is that it is nonparametric and does not require Gaussianity. In particular, it supports kernel-based CI tests, which are in general better able to deal with non-linear relationships and non-Gaussian noise, a feature that is typical of datasets such as this one.
For the GIES algorithm we present the results of [8] in which the authors varied the number of edge
additions, deletions, and reversals as tuning parameters. For the linear Gaussian and kernel-based
implementations of IGSP our tuning parameter is the cut-off value for the CI tests, just as in the
simulated data studies in Section 5.1. Figure 4 reports our results for thirteen different cut-off values
in $[10^{-4}, 0.7]$, which label the corresponding points in the plots. The linear Gaussian and kernel-based
implementations of IGSP are comparable and generally both outperform GIES. The Supplementary
Material contains a comparison of the results obtained by IGSP on this dataset to other recent methods
that also allow for latent confounders, such as ACI, COmbINE, and ICP.
Analysis of perturb-seq gene expression data. We analyzed the performance of GIES and IGSP
on perturb-seq data published by Dixit et al. [4]. The dataset contains observational data as well as
interventional data from approximately 30,000 bone marrow-derived dendritic cells (BMDCs). Each data point
contains gene expression measurements of 32,777 genes, and each interventional data point comes
from a cell where a single gene has been targeted for deletion using the CRISPR/Cas9 system.
After processing the data for quality, the data consists of 992 observational samples and 13,435
interventional samples from eight gene deletions. The number of samples collected under each of the
eight interventions is shown in the Supplementary Material. These interventions were chosen based
(a) Directed edge recovery
(b) Skeleton recovery
Figure 4: ROC plot of the models estimated from the data [21] using GIES as reported in [8] and the
linear Gaussian and kernel-based versions of IGSP with different cut-off values for the CI tests. The
solid line indicates the accuracy achieved by random guessing.
on empirical evidence that the gene deletion was effective.¹ We used GIES and IGSP to learn causal
DAGs over 24 of the measured genes, including the ones targeted by the interventions, using both
observational and interventional data. We followed [4] in focusing on these 24 genes, as they are
general transcription factors known to regulate each other as well as numerous other genes [7].
We evaluated the learned causal DAGs based on their accuracy in predicting the true effects of each of
the interventions (shown in Figure 5(a)) when leaving out the data for that intervention. Specifically,
if the predicted DAG indicates an arrow from gene A to gene B, we count this as a true positive if knocking out gene A caused a significant change² in the distribution of gene B, and a false positive otherwise. For each inference algorithm and for every choice of the tuning parameters, we learned
eight causal DAGs, each one trained with one of the interventional datasets being left out. We then
evaluated each algorithm based on how well the causal DAGs are able to predict the corresponding
held-out interventional data. As seen in Figure 5(b), IGSP predicted the held-out interventional data
better than GIES (as implemented in the R-package pcalg) and random guessing, for a number of
choices of the cut-off parameter. The true and reconstructed networks for both genomics datasets are
shown in the Supplementary Material.
6 Discussion
We have presented two hybrid algorithms for causal inference using both observational and interventional data, and we proved that both algorithms are consistent under the faithfulness assumption.
These algorithms are both interventional adaptations of the Greedy SP algorithm and are the first
algorithms of this type that have consistency guarantees. While Algorithm 1 suffers from a high level of
inefficiency, IGSP is implementable and competitive with the state-of-the-art, i.e., GIES. Moreover,
IGSP has the distinct advantage that it is nonparametric and therefore does not require a linear
Gaussian assumption on the data-generating distribution. We conducted real data studies for protein
signaling and single-cell gene expression datasets, which are typically non-linear with non-Gaussian
noise. In general, IGSP outperformed GIES. This purports IGSP as a viable method for analyzing the
new high-resolution datasets now being produced by procedures such as perturb-seq. An important
¹ An intervention was considered effective if the distribution of the gene expression levels of the deleted gene is significantly different from the distribution of its expression levels without intervention, based on a Wilcoxon rank-sum test with α = 0.05. Ineffective interventions on a gene are typically due to poor targeting ability of the guide-RNA designed for that gene.
² Based on a Wilcoxon rank-sum test with α = 0.05, which is approximately equivalent to a q-value of magnitude ≥ 3 in Figure 5(a).
(a) True effects of gene deletions
(b) Causal effect prediction accuracy rate
Figure 5: (a) Heatmap of the true effects of each gene deletion on each measured gene. The q-value
has the same magnitude as the log p-value of the Wilcoxon rank-sum test between the distributions of
observational data and the interventional data. Positive and negative q-values indicate increased and
decreased abundance as a result of deletion respectively. (b) ROC plot of prediction accuracy by the
causal DAGs learned by IGSP and GIES. The solid line indicates the accuracy achieved by random
guessing.
An important challenge for future work is to make these algorithms scale to 20,000 nodes, i.e., the typical number
of genes in such studies. In addition, in future work it would be interesting to extend IGSP to allow
for latent confounders. An advantage of not allowing for latent confounders is that a DAG is usually
more identifiable. For example, if we consider a DAG with two observable nodes, a DAG without
confounders is fully identifiable by intervening on only one of the two nodes, but the same is not true
for a DAG with confounders.
Acknowledgements
Yuhao Wang was supported by DARPA (W911NF-16-1-0551) and ONR (N00014-17-1-2147).
Liam Solus was supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship
(DMS - 1606407). Karren Yang was supported by the MIT Department of Biological Engineering.
Caroline Uhler was partially supported by DARPA (W911NF-16-1-0551), NSF (1651995) and ONR
(N00014-17-1-2147). We thank Dr. Sofia Triantafillou from the University of Crete for helping us
run COmbINE.
References
[1] S. A. Andersson, D. Madigan, and M. D. Perlman. A characterization of Markov equivalence
classes for acyclic digraphs. The Annals of Statistics 25.2 (1997): 505-541.
[2] D. M. Chickering. A transformational characterization of equivalent Bayesian network structures. Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence. Morgan
Kaufmann Publishers Inc., 1995.
[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine
Learning Research 3.Nov (2002): 507-554.
[4] A. Dixit, O. Parnas, B. Li, J. Chen, C. P. Fulco, L. Jerby-Arnon, N. D. Marjanovic, D. Dionne,
T. Burks, R. Raychowdhury, B. Adamson, T. M. Norman, E. S. Lander, J. S. Weissman,
N. Friedman and A. Regev. Perturb-seq: dissecting molecular circuits with scalable single-cell
RNA profiling of pooled genetic screens. Cell 167.7 (2016): 1853-1866.
[5] N. Friedman, M. Linial, I. Nachman and D. Peter. Using Bayesian networks to analyze expression data. Journal of Computational Biology 7.3-4 (2000): 601?620.
[6] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence.
Advances in Neural Information Processing Systems. 2008.
[7] M. Garber, N. Yosef, A Goren, R Raychowdhury, A. Thielke, M. Guttman, J. Robinson,
B. Minie, N. Chevrier, Z. Itzhaki, R. Blecher-Gonen, C. Bornstein, D. Amann-Zalcenstein,
A. Weiner, D. Friedrich, J. Meldrim, O. Ram, C. Chang, A. Gnirke, S. Fisher, N. Friedman,
B. Wong, B. E. Bernstein, C. Nusbaum, N. Hacohen, A. Regev, and I. Amit. A high throughput
Chromatin ImmunoPrecipitation approach reveals principles of dynamic gene regulation in
mammals Mol. Cell. 447.5 (2012): 810-822
[8] A. Hauser and P. Bühlmann. Characterization and greedy learning of interventional Markov
equivalence classes of directed acyclic graphs. Journal of Machine Learning Research 13.Aug
(2012): 2409-2464.
[9] A. Hauser and P. Bühlmann. Jointly interventional and observational data: estimation of
interventional Markov equivalence classes of directed acyclic graphs. Journal of the Royal
Statistical Society: Series B (Statistical Methodology) 77.1 (2015): 291-318.
[10] A. Hyttinen, F. Eberhardt, and M. Järvisalo. Constraint-based Causal Discovery: Conflict
Resolution with Answer Set Programming. UAI. 2014.
[11] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[12] E. Z. Macosko, A. Basu, R. Satija, J. Nemesh, K. Shekhar, M. Goldman, I. Tirosh, A. R. Bialas,
N. Kamitaki, E. M. Martersteck, J. J. Trombetta, D. A. Weitz, J. R. Sanes, A. K. Shalek,
A. Regev, and S. A. McCarroll. Highly parallel genome-wide expression profiling of individual
cells using nanoliter droplets. Cell 161.5 (2015): 1202-1214.
[13] S. Magliacane, T. Claassen, and J. M. Mooij. Ancestral causal inference. Advances In Neural
Information Processing Systems. 2016.
[14] C. Meek. Graphical Models: Selecting causal and statistical models. Diss. PhD thesis, Carnegie
Mellon University, 1997.
[15] N. Meinshausen, A. Hauser, J. M. Mooij, J. Peters, P. Versteeg, and P. Bühlmann. Methods
for causal inference from gene perturbation experiments and validation. Proceedings of the
National Academy of Sciences, USA. 113.27 (2016): 7361-7368.
[16] P. Nandy, A. Hauser, and M. H. Maathuis. High-dimensional consistency in score-based and
hybrid structure learning. ArXiv preprint arXiv: 1507.02608 (2015).
[17] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufman, San Mateo, 1988.
[18] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge,
2000.
[19] A. Rau, F. Jaffrézic, and G. Nuel. Joint estimation of causal effects from observational and
intervention gene expression data. BMC Systems Biology 7.1 (2013): 111.
[20] J. M. Robins, M. A. Hernán and B. Brumback. Marginal structural models and causal inference
in epidemiology. Epidemiology 11.5 (2000): 550-560.
[21] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger and G. P. Nolan. Causal protein-signaling
networks derived from multiparameter single-cell data. Science 308.5721 (2005): 523-529.
[22] P. Spirtes, C. N. Glymour and R. Scheines. Causation, Prediction, and Search. MIT Press,
Cambridge, 2001.
[23] L. Solus, Y. Wang, C. Uhler, and L. Matejovicova. Consistency guarantees for permutationbased causal inference algorithms. ArXiv preprint arXiv: 1702.03530 (2017).
[24] R. E. Tillman, A. Gretton, and P. Spirtes. Nonlinear directed acyclic structure learning with
weakly additive noise model. Advances in neural information processing systems. 2009.
[25] S. Triantafillou and I. Tsamardinos. Constraint-based causal discovery from multiple interventions over overlapping variable sets. Journal of Machine Learning Research 16 (2015):
2147-2205.
[26] I. Tsamardinos, L. E. Brown, and C. F. Aliferis. The max-min hill-climbing Bayesian network
structure learning algorithm. Machine Learning 65.1 (2006): 31-78.
Time-dependent spatially varying graphical models,
with application to brain fMRI data analysis
Kristjan Greenewald
Department of Statistics
Harvard University
Seyoung Park
Department of Biostatistics
Yale University
Shuheng Zhou
Department of Statistics
University of Michigan
Alexander Giessing
Department of Statistics
University of Michigan
Abstract
In this work, we present an additive model for space-time data that splits the data
into a temporally correlated component and a spatially correlated component. We
model the spatially correlated portion using a time-varying Gaussian graphical
model. Under assumptions on the smoothness of changes in covariance matrices,
we derive strong single sample convergence results, confirming our ability to estimate meaningful graphical structures as they evolve over time. We apply our
methodology to the discovery of time-varying spatial structures in human brain
fMRI signals.
1 Introduction
Learning structured models of high-dimensional datasets from relatively few training samples is an
important task in statistics and machine learning. Spatiotemporal data, in the form of n variables
evolving over m time points, often fits this regime due to the high (mn) dimension and potential
difficulty in obtaining independent samples. In this work, we develop a nonparametric framework
for estimating time varying spatiotemporal graphical structure using an $\ell_1$ regularization method.
The covariance of a spatiotemporal array $X = [x^1, \ldots, x^m] \in \mathbb{R}^{n \times m}$ is an $mn \times mn$ matrix
$$\Sigma = \mathrm{Cov}\big(\mathrm{vec}([x^1, \ldots, x^m])\big), \tag{1}$$
where $x^i \in \mathbb{R}^n$, $i = 1, \ldots, m$, denotes the n variables or features of interest at the ith time point.
Even for moderately large m and n the number of degrees of freedom (mn(mn + 1)/2) in the
covariance matrix can greatly exceed the number of training samples available for estimation. One
way to handle this problem is to introduce structure and/or sparsity, thus reducing the number of
parameters to be estimated. Spatiotemporal data is often highly structured, hence the design of
estimators that model and exploit appropriate covariance structure can provide significant gains.
We aim to develop a nonparametric framework for estimating time varying graphical structure for
matrix-variate distributions. Associated with each $x^i \in \mathbb{R}^n$ is its undirected graph $G(i)$. Under the assumption that the law $\mathcal{L}(x^i)$ of $x^i$ changes smoothly, Zhou et al. (2010) introduced a nonparametric method to estimate the graph sequence $G(1), G(2), \ldots$, assuming that the $x^i \sim \mathcal{N}(0, B(i/m))$ are independent, where $B(t)$ is a smooth function over $t \in [0, 1]$ and we have mapped the indices i onto points $t = i/m$ on the interval $[0, 1]$. In this work, we are interested in the general time series model where the $x^i$, $i = 1, \ldots, m$, are dependent and the $B^{-1}(t)$ graphs change over time.
One way to introduce dependency into the $x^i$ is to study the following covariance structure. Let $A = (a_{ij}) \in \mathbb{R}^{m \times m}$ and $B(t) = (b_{ij}(t)) \in \mathbb{R}^{n \times n}$, $t \in [0, 1]$, be symmetric positive definite covariance
matrices. Let $\mathrm{diag}(v)$, $v = (v_1, \ldots, v_m)$, be the diagonal matrix with elements $v_i$ along the diagonal. Consider the random matrix X with row vectors $y^j$ corresponding to measurements at the jth spatial location, and columns $x^i$ corresponding to the m measurements at times $i/m$, $i = 1, \ldots, m$:
$$\forall j = 1, \ldots, n, \quad y^j \sim \mathcal{N}_m(0, A^j), \quad \text{where } A^j = A + \mathrm{diag}(b_{jj}(1), \ldots, b_{jj}(m)); \text{ and} \tag{2}$$
$$\forall i = 1, \ldots, m, \quad x^i \sim \mathcal{N}_n(0, a_{ii} I + B(i/m)), \quad \text{where } B(t) \text{ changes smoothly over } t \in [0, 1]; \tag{3}$$
that is, the covariance of the column vectors $x^i$ corresponding to each time point changes smoothly
with time (if aii is a smooth function of i). This provides ultimate flexibility in parameterizing
spatial correlations, for example, across different geographical scales through variograms (Cressie,
2015), each of which is allowed to change over seasons. Observe that while we have used the normal
distribution here for simplicity, all our results hold for general subgaussian distributions.
The model (3) also allows modeling dynamic gene regulatory and brain connectivity networks with topological (e.g., Erdős–Rényi random graph, small-world graph, or modular graph) constraints via degree specifications, as well as spatial constraints, in the set $\{B(t), t = 1, 2, \ldots\}$. When $A = 0$, we return to the case of Zhou et al. (2010), where there is no temporal correlation, i.e., $y^1, \ldots, y^n$ are assumed to be independent.
We propose methodologies to study the model constructed in (2) and (3). Building upon and extending techniques of Zhou et al. (2010), Rudelson & Zhou (2017), and Zhou (2014), we aim to design estimators for the graph sequence $G(1), G(2), \ldots$, where the temporal graph H and the spatial graphs $G(i)$ are determined by the zeros of $A^{-1}$ and $B(t)^{-1}$. Intuitively, the temporal and spatial correlations are modeled as two additive processes. The covariance of X is now
$$\mathrm{Cov}[\mathrm{vec}(X)] = \Sigma = A \otimes I_n + \sum_{i=1}^{m} (e_i e_i^T) \otimes B(i/m), \tag{4}$$
where $e_i \in \mathbb{R}^m$, $i = 1, \ldots, m$, are the m-dimensional standard basis vectors.
In the context of this model, we aim to develop a nonparametric method for estimating time varying graphical structure for matrix-variate normal distributions using an $\ell_1$ regularization method. We will show that, as long as the covariances change smoothly over time, we can estimate the spatial
and temporal covariance matrices well in terms of predictive risk even when n, m are both large. We
will investigate the following theoretical properties: (a) consistency and rate of convergence in the
operator and Frobenius norm of the covariance matrices and their inverse, (b) large deviation results
for covariance matrices for simultaneously correlated and non-identically distributed observations,
and (c) conditions that guarantee smoothness of the covariances.
Besides the model (4), another well-studied option for modeling spatio-temporal covariances $\Sigma$ is to introduce structure via the Kronecker product of smaller symmetric positive definite matrices, i.e. $\Sigma = A \otimes B$. The Kronecker product model, however, is restrictive when applied to general spatio-temporal covariances, as it assumes the covariance is separable (disallowing such simple scenarios
as the presence of additive noise), and does not allow for time varying spatial structure. When used
to estimate covariances not following Kronecker product structure, many estimators will respond to
the model mismatch by giving ill-conditioned estimates (Greenewald & Hero, 2015).
Human neuroscience data is a notable application where time-varying structure emerges. In neuroscience, one must take into account temporal correlations as well as spatial correlations, which
reflect the connectivity formed by the neural pathways. It is conceivable that the brain connectivity
graph will change over a sufficiently long period of measurements. For example, as a child learns to
associate symbols in the environment, certain pathways within the brain are reinforced. When they
begin to associate images with words, the correlation between a particular sound like Mommy and
the sight of a face becomes stronger and forms a well worn pathway. On the other hand, long term
non-use of connections between sensory and motor neurons can result in a loss of the pathway.
1.1 Datasets and Related Work
Estimating graphical models (connectomes) in fMRI data using sparse inverse covariance techniques
has enjoyed wide application (Huang et al., 2010; Varoquaux et al., 2010; Narayan et al., 2015; Kim
et al., 2015). However, recent research has only now begun exploring observed phenomena such
as temporal correlations and additive temporally correlated noise (Chen et al., 2015; Arbabshirani
et al., 2014; Kim et al., 2015; Qiu et al., 2016), and time-varying dynamics and graphical models
(connectomes) (Calhoun et al., 2014; Liu & Duyn, 2013; Chang & Glover, 2010; Chen et al., 2015).
We consider the ADHD-200 fMRI dataset (Biswal et al., 2010), and study resting state fMRIs for
a variety of healthy patients in the dataset at different stages of development. Using our methods,
we are able to directly estimate age-varying graphical models across brain regions, chronicling the
development of brain structure throughout childhood.
Several models have emerged to generalize the Kronecker product model to allow it to model more
realistic covariances while still maintaining many of the gains associated with Kronecker structure.
Kronecker PCA, discussed in Tsiligkaridis & Hero (2013), approximates the covariance matrix using
a sum of Kronecker products. An algorithm (Permuted Rank-penalized Least Squares (PRLS)) for
fitting the KronPCA model to a measured sample covariance matrix was introduced in (Tsiligkaridis
& Hero, 2013) and was shown to have strong high dimensional MSE performance guarantees. From
a modeling perspective, the strengths of Kronecker PCA lie in its ability to handle ?near separable?
covariances and a variety of time-varying effects. While the Kronecker PCA model is very general,
so far incorporation of sparsity in the inverse covariance has not been forthcoming. This motivates
our introduction of the sparse model (4), which we demonstrate experimentally in Section 10 of the
supplement to enjoy better statistical convergence.
Carvalho et al. (2007) proposed a Bayesian additive time-varying graphical model, where the
spatially-correlated noise term is a parameter of the driving noise covariance in a temporal dynamical model. Unlike our method, they did not estimate the temporal correlation, instead requiring the
dynamical model to be pre-set. Our proposed method has wholly independent spatial and temporal
models, directly estimating an inverse covariance graphical model for the temporal relationships of
the data. This allows for a much richer temporal model and increases its applicability.
In the context of fMRI, the work of Qiu et al. (2016) used a similar kernel-weighted estimator for the spatial covariance; however, they modeled the temporal covariance with a simple AR-1 model which they did not estimate, and their estimator did not attempt to remove it. Similarly, Monti et al. (2014)
used a smoothed kernel estimator for B ?1 (t) with a penalty to further promote smoothness, but did
not model the temporal correlations. Our additive model allows the direct estimation of the temporal
behavior, revealing a richer structure than a simple AR-1, and allowing for effective denoising of the
data, and hence better estimation of the spatial graph structures.
2 The model and method
Let the elements of $A \succ 0$ and $B(t)$ be denoted as $[A]_{ij} := a_{ij}$ and $[B(t)]_{ij} := b_{ij}(t)$, $t \in [0, 1]$. Similar to the setting in (Zhou et al., 2010), we assume that $b_{ij}(t)$ is a smooth function of time t for all i, j, and assume that $B^{-1}(t)$ is sparse. Furthermore, we suppose that $m > n$, corresponding to there being more time points than spatial variables. For a random variable Y, the subgaussian norm of Y, $\|Y\|_{\psi_2}$, is defined via $\|Y\|_{\psi_2} = \sup_{p \geq 1} p^{-1/2} (\mathbb{E}|Y|^p)^{1/p}$. Note that if $\mathbb{E}[Y] = 0$, we also have $\mathbb{E}[\exp(tY)] \leq \exp(C t^2 \|Y\|_{\psi_2}^2)$ for all $t \in \mathbb{R}$. Define an $n \times m$ random matrix Z with independent, zero mean entries $Z_{ij}$ satisfying $\mathbb{E}[Z_{ij}^2] = 1$ and having subgaussian norm $\|Z_{ij}\|_{\psi_2} \leq K$. Matrices $Z_1, Z_2$ denote independent copies of Z. We now write an additive generative model for subgaussian data $X \in \mathbb{R}^{n \times m}$ having the covariance given in (4). Let
for subgaussian data X ? Rn?m having covariance given in (4). Let
X = Z1 A1/2 + ZB
1/2
1/2
(5)
1/2
where ZB = [B(1/m) Z2 e1 , . . . , B(i/m) Z2 ei , . . . , B(1)
m-dimensional standard basis vectors. Then the covariance
m
Z2 em ], and ei ? R , ?i are the
? = Cov[vec(X)] = Cov[vec(Z1 A1/2 )] + Cov[vec(ZB )]
Xm
= Cov[vec(Z1 A1/2 )] +
(ei eTi ) ? Cov[B(i/m)1/2 Z2 ei ]
i=1
Xm
= A ? In +
(ei eTi ) ? B(i/m).
i=1
Thus (5) is a generative model for data following the covariance model (4).
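For intuition (and for checking the estimators below on synthetic data), one can sample directly from (5); the sketch uses Gaussian $Z_1, Z_2$ for simplicity, though the model only requires subgaussianity.

```python
import numpy as np

def sample_additive_model(A, B_of_t, rng=None):
    """Draw X in R^{n x m} with Cov[vec(X)] = A kron I_n + sum_i e_i e_i^T kron B(i/m).

    A is m x m; B_of_t(t) returns the n x n spatial covariance at time t.
    """
    rng = rng or np.random.default_rng()
    m = A.shape[0]
    n = B_of_t(1.0 / m).shape[0]
    A_half = np.linalg.cholesky(A)               # a valid square root of A
    X = rng.standard_normal((n, m)) @ A_half.T   # Z_1 A^{1/2}: temporal part
    for i in range(m):                           # Z_B: spatial part, column-wise
        B_half = np.linalg.cholesky(B_of_t((i + 1) / m))
        X[:, i] += B_half @ rng.standard_normal(n)
    return X
```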
2.1 Estimators
As in Rudelson & Zhou (2017), we can exploit the large-m convergence of $Z_1 A Z_1^T$ to $\mathrm{tr}(A) I$ to project out the A part and create an estimator for the B covariances. As $B(t)$ is time-varying, we use a weighted average across time to create local estimators of the spatial covariance matrix $B(t)$.
It is often assumed that knowledge of the trace of one of the factors is available a priori. For example, the spatial signal variance may be known and time invariant, corresponding to tr(B(t)) being
known. Alternatively, the temporal component variance may be constant and known, corresponding
to tr(A) being known. In our analysis below, we suppose that tr(A) is known or otherwise estimated
(similar results hold when tr(B(t)) is known). For simplicity in stating the trace estimators, in what
follows we suppose that tr(B(t)) = tr(B) is constant, and without loss of generality that the data
has been normalized such that diagonal elements Aii are constant over i.
As $B(t)$ is smoothly varying over time, the estimate at time $t_0$ should depend strongly on the time samples close to $t_0$, and less on the samples further from $t_0$. For any time of interest $t_0$, we thus construct a weighted estimator using a weight vector $w_i(t_0)$ such that $\sum_{i=1}^{m} w_i(t_0) = 1$. Our weighted, unstructured sample-based estimator for $B(t_0)$ is then given by
$$\hat{S}_m(t_0) := \sum_{i=1}^{m} w_i(t_0)\, x_i x_i^T - \frac{\mathrm{tr}(A)}{m} I_n, \quad \text{where } w_i(t_0) = \frac{1}{mh} K\!\left(\frac{i/m - t_0}{h}\right), \tag{6}$$
and we have considered the class of weight vectors $w_i(t_0)$ arising from a symmetric nonnegative
kernel function K with compact support [0, 1] and bandwidth determined by parameter h. A list of
minor regularity assumptions on K are listed in the supplement. For kernels such as the Gaussian
kernel, this $w_i(t_0)$ will result in samples close to $t_0$ being highly weighted, with the "weight decay" away from $t_0$ scaling with the bandwidth h. A wide bandwidth will be appropriate for slowly-varying $B(t)$, and a narrow bandwidth for quickly-varying $B(t)$.
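Concretely, (6) can be computed as below; the paper's kernel K has compact support, but a Gaussian kernel is used here purely for illustration, and the weights are renormalized so that they sum to one.

```python
import numpy as np

def S_hat(X, t0, h, tr_A,
          kernel=lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)):
    """Kernel-weighted spatial covariance estimate S_m(t0) from (6).

    X is n x m with column i observed at time i/m.
    """
    n, m = X.shape
    times = np.arange(1, m + 1) / m
    w = kernel((times - t0) / h) / (m * h)
    w = w / w.sum()                    # enforce sum_i w_i(t0) = 1
    S = (X * w) @ X.T                  # sum_i w_i(t0) x_i x_i^T
    return S - (tr_A / m) * np.eye(n)
```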
To enforce sparsity in the estimator for $B^{-1}(t_0)$, we substitute $\hat{S}_m(t_0)$ into the widely-used GLasso objective function, resulting in a penalized estimator for $B(t_0)$ with regularization parameter $\lambda_m$:
$$\hat{B}_\lambda(t_0) := \arg\min_{B \succ 0} \Big\{ \mathrm{tr}\big(B^{-1} \hat{S}_m(t_0)\big) + \log |B| + \lambda_m |B^{-1}|_1 \Big\}. \tag{7}$$
For a matrix B, we let $|B|_1 := \sum_{ij} |B_{ij}|$. Increasing the parameter $\lambda_m$ gives an increasingly sparse $\hat{B}^{-1}(t_0)$.
?
constant-trace assumption, we construct an estimator for tr(B)
Xm
n
1
?
tr(B)
=
wi kXi k22 ? tr(A), with wi = .
(8)
i=1
m
m
For a time-varying trace $\mathrm{tr}(B(t))$, use the time-averaged kernel
$$\widehat{\mathrm{tr}}(B(t_0)) = \sum_{i=1}^{m} w_i(t_0) \|X_i\|_2^2 - \frac{n}{m} \mathrm{tr}(A), \quad \text{with } w_i(t_0) = \frac{1}{mh} K\!\left(\frac{i/m - t_0}{h}\right). \tag{9}$$
In the future we will derive rates for the time varying case by choosing an optimal h. These estimators allow us to construct a sample covariance matrix for A:
$$\hat{A} = \frac{1}{n} X^T X - \frac{1}{n}\, \mathrm{diag}\big\{\widehat{\mathrm{tr}}(B(1/m)), \ldots, \widehat{\mathrm{tr}}(B(1))\big\}. \tag{10}$$
We apply the GLasso approach to $\hat{A}$, similarly to $\hat{B}(t)$. Note that with $m > n$, $\hat{A}$ has negative eigenvalues, since $\lambda_{\min}\big(\frac{1}{n} X^T X\big) = 0$. We obtain a positive semidefinite matrix $\hat{A}_+$ as
$$\hat{A}_+ = \arg\min_{A \succeq 0} \|\hat{A} - A\|_{\max}. \tag{11}$$
We use the alternating direction method of multipliers (ADMM) to solve (11), as in Boyd et al. (2011), and prove that this retains a tight elementwise error bound. Note that while we chose this method of obtaining a positive semidefinite $\hat{A}_+$ for its simplicity, there may exist other possible projections; the exact method is not critical to our overall Kronecker sum approach. In fact, if the GLasso is not used, it is not necessary to do the projection (11), as the elementwise bounds also hold for $\hat{A}$.
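One minimal ADMM splitting for (11) alternates a projection onto the PSD cone with the proximal operator of the entrywise $\ell_\infty$ distance to $\hat{A}$ (computed via the Moreau decomposition and an $\ell_1$-ball projection). This is our own sketch in the spirit of Boyd et al. (2011); the authors' exact splitting may differ.

```python
import numpy as np

def proj_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1 ball of radius z (Duchi et al., 2008)."""
    if np.abs(v).sum() <= z:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u) - z
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * ks > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, t):
    """prox_{t ||.||_inf}(v) = v - t * proj_{l1 ball}(v / t) (Moreau decomposition)."""
    return v - t * proj_l1_ball(v / t, 1.0)

def project_psd_maxnorm(A_hat, penalty=1.0, iters=500):
    """ADMM sketch for A_plus = argmin over psd A of ||A - A_hat||_max, as in (11)."""
    n = A_hat.shape[0]
    Z = A_hat.copy()
    U = np.zeros_like(A_hat)
    for _ in range(iters):
        M = (Z - U + (Z - U).T) / 2.0                  # symmetrize before eig
        vals, vecs = np.linalg.eigh(M)
        A = (vecs * np.maximum(vals, 0.0)) @ vecs.T    # projection onto PSD cone
        V = (A + U - A_hat).ravel()
        Z = A_hat + prox_linf(V, 1.0 / penalty).reshape(n, n)
        U += A - Z
    return A
```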
We provide a regularized estimator for the correlation matrix $\rho(A) = \mathrm{diag}(A)^{-1/2} A\, \mathrm{diag}(A)^{-1/2}$, using the positive semidefinite $\hat{A}_+$ as the initial input to the GLasso problem
$$\hat{\rho}_\lambda(A) = \arg\min_{A_\lambda \succ 0} \Big\{ \mathrm{tr}\big(A_\lambda^{-1} \rho(\hat{A}_+)\big) + \log |A_\lambda| + \lambda_n |A_\lambda|_{1,\mathrm{off}} \Big\}, \tag{12}$$
where $\lambda_n > 0$ is a regularization parameter and $|\cdot|_{1,\mathrm{off}}$ is the $\ell_1$ norm on the off-diagonal.
Form the estimate for A as $\frac{\mathrm{tr}(A)}{m}\, \hat{\rho}_\lambda(A)$. Observe that our method has three tuning parameters, two if $\mathrm{tr}(A)$ is known or can be estimated. If $\mathrm{tr}(A)$ is not known, we present several methods to choose it in Section 7.1 of the supplement. Once $\mathrm{tr}(A)$ is chosen, the estimators (7) and (12) for A and $B(t)$ respectively do not depend on each other, allowing $\lambda_m$ and $\lambda_n$ to be tuned independently.
3 Statistical convergence
We first bound the estimation error for the time-varying $B(t)$. Since $\hat{B}(t)$ is based on a kernel-smoothed sample covariance, $\hat{B}(t)$ is a biased estimator, with the bias depending on the kernel width and the smoothness of $B(t)$. In Section 12.1 of the supplement, we derive the bias and variance of $\hat{S}_m(t_0)$, using arguments from kernel smoothing and subgaussian concentration respectively.
In the following results, we assume that the eigenvalues of the matrices A and $B(t)$ are bounded:
Assumption 1: There exist positive constants $c_A$, $c_B$ such that $\frac{1}{c_A} \leq \lambda_{\min}(A) \leq \lambda_{\max}(A) \leq c_A$ and $\frac{1}{c_B} \leq \lambda_{\min}(B(t)) \leq \lambda_{\max}(B(t)) \leq c_B$ for all t.
Assumption 2: $B(t)$ has entries with bounded second derivatives on $[0, 1]$.
Putting the bounds on the bias and variance together and optimizing the rate of h, we obtain the following, which we prove in the supplementary material.
Theorem 1. Suppose that the above Assumptions hold, the entries $B_{ij}(t)$ of $B(t)$ have bounded second derivatives for all i, j and $t \in [0, 1]$, $s_b + n = o((m/\log m)^{2/3})$, and $h \asymp (m^{-1} \log m)^{1/3}$. Then with probability at least $1 - \frac{c}{m^{8/3}}$, $\hat{S}_m(t_0)$ is positive definite, and for some constant C,
$$\max_{ij} \big|\hat{S}_m(t_0, i, j) - B(t_0, i, j)\big| \leq C \big(m^{-1} \log m\big)^{1/3}.$$
This result confirms that the mh temporal samples selected by the kernel act as replicates for estimating $B(t)$. We can now substitute this elementwise bound on $\hat{S}_m(t_0)$ into the GLasso proof, obtaining the following theorem, which demonstrates that $\hat{B}(t)$ successfully exploits sparsity in $B^{-1}(t)$.
Theorem 2. Suppose the conditions of Theorem 1 and that $B^{-1}(t)$ has at most $s_b$ nonzero off-diagonal elements for all t. If $\lambda_m \gtrsim \sqrt{\frac{\log m}{m^{2/3}}}$, then the GLasso estimator (7) satisfies
$$\big\|\hat{B}(t_0) - B(t_0)\big\|_F = O_P\!\left(\sqrt{\frac{(s_b + n) \log m}{m^{2/3}}}\right), \qquad \big\|\hat{B}^{-1}(t_0) - B^{-1}(t_0)\big\|_F = O_P\!\left(\sqrt{\frac{(s_b + n) \log m}{m^{2/3}}}\right).$$
Observe that this single-sample bound converges whenever the A part dimensionality m grows.
The proof follows from the concentration bound in Theorem 1 using the argument in Zhou et al.
(2010), Zhou et al. (2011), and Rothman et al. (2008). Note that $\lambda_m$ goes to zero as m increases, in
accordance with the standard bias/variance tradeoff.
We now turn to the estimator for the A part. As it does not involve kernel smoothing, we simply need to bound the variance. We have the following bound on the error of $\hat{A}$:
Theorem 3. Suppose the above Assumptions hold. Then
$$\max_{ij} \big|\hat{A}_{ij} - A_{ij}\big| \leq C (c_A + c_B) \sqrt{n^{-1} \log m}$$
with probability $1 - \frac{c}{m^4}$, for some constants $C, c > 0$.
Recall that we have assumed that $m > n$, so the probability converges to 1 with increasing m or n. While $\hat{A}$ is not positive definite, the triangle inequality implies a bound on the positive semidefinite projection $\hat{A}_+$ of (11):
$$\|\hat{A}_+ - A\|_{\max} \leq \|\hat{A}_+ - \hat{A}\|_{\max} + \|\hat{A} - A\|_{\max} \leq 2 \|\hat{A} - A\|_{\max} = O_P\big(\sqrt{n^{-1} \log m}\big). \tag{13}$$
Thus, similarly to the earlier result for $B(t)$, the estimator (12) formed by substituting the positive semidefinite $\rho(\hat{A}_+)$ into the GLasso objective enjoys the following error bound (Zhou et al., 2011).
Theorem 4. Suppose the conditions of Theorem 3 and that $A^{-1}$ has at most $s_a = o(n/\log m)$ nonzero off-diagonal elements. If $\lambda_n \gtrsim \sqrt{\frac{\log m}{n}}$, then the GLasso estimator (12) satisfies
$$\big\|\hat{A} - A\big\|_F = O_P\!\left(\sqrt{\frac{s_a \log m}{n}}\right), \qquad \big\|\hat{A}^{-1} - A^{-1}\big\|_F = O_P\!\left(\sqrt{\frac{s_a \log m}{n}}\right).$$
Observe that this single-sample bound converges whenever the $B(t)$ dimensionality n grows, since the sparsity satisfies $s_a = o(n/\log m)$. For relaxation of this stringent sparsity assumption, one can use other assumptions; see for example Theorem 3.3 in Zhou (2014).
4 Simulation study
We generated a time varying sequence of spatial covariances $B(t_i) = \Theta(t_i)^{-1}$ according to the method of Zhou et al. (2010), which follows a type of Erdős–Rényi random graph model. Initially we set $\Theta(0) = 0.25 I_{n \times n}$, where $n = 100$. Then, we randomly select k edges and update $\Theta(t)$ as follows: for each new edge $(i, j)$, a weight $a > 0$ is chosen uniformly at random from $[0.1, 0.3]$; we subtract a from $\Theta_{ij}$ and $\Theta_{ji}$, and increase $\Theta_{ii}$, $\Theta_{jj}$ by a. This keeps $B(t)$ positive definite. When we later delete an existing edge from the graph, we reverse the above procedure.
We consider $t \in [0, 1]$, changing the graph structure at the points $t_i = i/5$ as follows. At each $t_i$, five existing edges are deleted, and five new edges are added. For each of the five new edges, a target weight is chosen. Linear interpolation of the edge weights between the $t_i$ is used to smoothly add the new edges and gradually delete the ones to be removed. Thus, almost always, there are 105 edges in the graph, and 10 edges have weights that are varying smoothly (Figure 1).
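A sketch of one update step in this graph-generation procedure is given below (the bookkeeping for scheduling which edges to add or delete at each change point is omitted; edge deletion simply reverses the update):

```python
import numpy as np

def add_edges(Theta, new_edges, rng=None):
    """Add edges to a precision matrix while preserving positive definiteness."""
    rng = rng or np.random.default_rng()
    Theta = Theta.copy()
    for i, j in new_edges:
        a = rng.uniform(0.1, 0.3)
        Theta[i, j] -= a
        Theta[j, i] -= a
        Theta[i, i] += a        # diagonal compensation keeps Theta pos. def.
        Theta[j, j] += a
    return Theta
```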
t = 0/5
t = 1/5
t = 2/5
t = 3/5
t = 4/5
t = 5/5
Figure 1: Example sequence of Erdős–Rényi $B^{-1}(t) = \Theta(t)$ graphs. At each time point, the 100 edges connecting $n = 100$ nodes are shown. Changes are indicated by red and green edges: red edges indicate edges that will be deleted in the next increment, and green indicates new edges.
In the first set of experiments we consider $B(t)$ generated from the ER time-varying graph procedure and A an AR(1) covariance with parameter ρ. The magnitudes of the two factors are balanced. We set $n = 100$ and vary m from 200 to 2400. For each $(n, m)$ pair, we vary the $B(t)$ regularization parameter λ, estimating every $B(t)$, $t = 1/m, \ldots, 1$, for each. We evaluate performance using the mean relative Frobenius $B(t)$ estimation error $\|\hat{B}(t) - B(t)\|_F / \|B(t)\|_F$, the mean relative L2 estimation error $\|\hat{B}(t) - B(t)\|_2 / \|B(t)\|_2$, and the Matthews correlation coefficient (MCC).
The MCC quantifies edge support estimation performance, and is defined as follows. Let the number of true positive edge detections be TP, true negatives TN, false positives FP, and false negatives FN. The Matthews correlation coefficient is defined as
$$\mathrm{MCC} = \frac{\mathrm{TP} \cdot \mathrm{TN} - \mathrm{FP} \cdot \mathrm{FN}}{\sqrt{(\mathrm{TP} + \mathrm{FP})(\mathrm{TP} + \mathrm{FN})(\mathrm{TN} + \mathrm{FP})(\mathrm{TN} + \mathrm{FN})}}.$$
Increasing values of MCC imply better edge estimation performance, with $\mathrm{MCC} = 0$ implying complete failure and $\mathrm{MCC} = 1$ implying perfect edge set estimation.
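For reference, the MCC computation over the off-diagonal support is straightforward:

```python
import numpy as np

def mcc(edges_true, edges_est, n):
    """Matthews correlation coefficient for estimated edge support.

    `edges_true` / `edges_est` are sets of undirected pairs (i, j), i < j,
    out of the n*(n-1)/2 possible off-diagonal entries.
    """
    tp = len(edges_true & edges_est)
    fp = len(edges_est - edges_true)
    fn = len(edges_true - edges_est)
    tn = n * (n - 1) // 2 - tp - fp - fn
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```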
Results are shown in Figure 2, for ρ = .5 with 50 edges in B, ρ = .5 with 100 edges in B, and ρ = .95 with 100 edges in B. As predicted by the theory, increasing m improves performance and increasing ρ decreases performance. Increasing the number of edges in B changes the optimal λ, as expected. Figure 3 shows performance results for the penalized estimator $\hat{A}$ using MCC, Frobenius error, and L2 error, where A follows an AR(1) model with ρ = 0.5 and B follows a random ER model. Note that the MCC, Frobenius, and spectral norm errors improve with larger n. In the supplement (Section 11), we repeat these experiments using alternate random graph topologies, with similar results.
5 fMRI Application
The ADHD-200 resting-state fMRI dataset (Biswal et al., 2010) was collected from 973 subjects,
197 of which were diagnosed with ADHD types 1, 2, or 3.
[Figure 2 panels: MCC, Frobenius error, and L2 error plotted against the regularization parameter λ for m = 200, 400, 800, 2400; see the caption below.]
Figure 2: MCC, Frobenius, and L2 norm error curves for B a random ER graph and n = 100. Top: A is an AR(1) covariance with ρ = .5 and 50 edges in B; Middle: A is an AR(1) covariance with ρ = .5 and B having 100 edges; Bottom: A is an AR(1) covariance with ρ = .95 and 100 edges in B.
[Figure 3 panels: MCC, Frobenius error, and L2 error for n = 200, 400, 800, at m = 200 (top row) and m = 800 (bottom row); see the caption below.]
Figure 3: MCC, Frobenius, and L2 norm error curves for $\hat{A}$, with A an AR(1) covariance with ρ = 0.5 and B a random ER graph. From top to bottom: m = 200 and m = 800.
The fMRI images have varying numbers of voxels, which we divide into 90 regions of interest for graphical model analysis (Wehbe et al., 2014); between 76 and 276 images exist for each subject. Provided covariates for the subjects include age, gender, handedness, and IQ. Previous works such as Qiu et al. (2016) used this dataset
to establish that the brain network density increases with age, corresponding to brain development
as subjects mature. We revisit this problem using our additive approach. Our additive model allows the direct estimation of the temporal behavior, revealing a richer structure than a simple AR-1, and effectively allowing a denoising of the data, and hence better estimation of the spatial graph structure.
We estimate the temporal A covariances for each subject using the voxels contained in the regions
of interest, with example results shown in Figure 5 in the supplement. We choose $\lambda_B$ as the lower limit of the eigenvalues of $\frac{1}{n} X^T X$, as in the high-sample regime it is an upper bound on $\lambda_B$.
We then estimate the brain connectivity network at a range of ages from 8 to 18, using both our
proposed method and the method of Monti et al. (2014), as it is an optimally-penalized version
of the estimator in Qiu et al. (2016). We use a Gaussian kernel with bandwidth h, and estimate
the graphs using a variety of values of λ and h. Subjects with fewer than 120 time samples were
eliminated, and those with more were truncated to 120 to reduce bias towards longer scans. The
number of edges in the estimated graphs is shown in Figure 4. Note the consistent increase in
network density with age, becoming more smooth with increasing h.
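The post-processing behind Figure 4 amounts to counting off-diagonal nonzeros in each age-indexed precision estimate; schematically (each $\hat{B}^{-1}(\text{age})$ is produced by the Section 2 estimator with the kernel centered at that age — the sketch below covers only the bookkeeping):

```python
import numpy as np

def count_edges(precision, tol=1e-8):
    """Number of edges encoded by an estimated precision matrix."""
    off = precision[np.triu_indices_from(precision, k=1)]
    return int((np.abs(off) > tol).sum())

def edge_curve(precision_by_age):
    """Map {age: B_inv_hat(age)} to the edge counts plotted in Figure 4."""
    return {age: count_edges(P) for age, P in sorted(precision_by_age.items())}
```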
10
12
14
16
18
600
400
? = 0.875
? = 0.9
? = 0.925
200
0
8
800
0
8
10
Age
12
14
16
18
8
10
Age
12
14
16
18
Age
1200
1200
1000
1000
1000
800
600
400
? = 0.5
? = 0.6
? = 0.7
200
0
8
10
12
14
Age
16
800
600
400
? = 0.5
? = 0.6
? = 0.7
200
18
Number of edges
1200
Number of edges
Number of edges
(a) Non-additive method of Monti et al. (2014) (optimally penalized version of Qiu et al. (2016)).
(b) Our proposed additive method, allowing for denoising of the time-correlated data.
Figure 4: Number of edges in the estimated $B^{-1}(t)$ graphical models across 90 brain regions as a function of age. Shown are results using three different values of the regularization parameter λ, and from left to right the kernel bandwidth parameter used is h = 1.5, 2, and 3. Note the consistently increasing edge density in our estimate, corresponding to predictions of increased brain connectivity as the brain develops, leveling off in the late teenage years. Compare this to the method of Monti et al. (2014), which successfully detects the trend in the years 11–14 but fails for other ages.
6 Conclusion
In this work, we presented estimators for time-varying graphical models in the presence of timecorrelated signals and noise. We revealed a bias-variance tradeoff scaling with the underlying rate
of change, and proved strong single sample convergence results in high dimensions. We applied our
methodology to an fMRI dataset, discovering meaningful temporal changes in functional connectivity, consistent with scientifically expected childhood growth and development.
Acknowledgement
This work was supported in part by NSF under Grant DMS-1316731, Elizabeth Caroline Crosby
Research Award from the Advance Program at the University of Michigan, and by AFOSR grant
FA9550-13-1-0043.
References
Arbabshirani, M., Damaraju, E., Phlypo, R., Plis, S., Allen, E., Ma, S., Mathalon, D., Preda, A.,
Vaidya, J., and Adali, T. Impact of autocorrelation on functional connectivity. Neuroimage, 102:
294–308, 2014.
Biswal, B., Mennes, M., Zuo, X., Gohel, S., Kelly, C., Smith, S., Beckmann, C., Adelstein, J.,
Buckner, R., and Colcombe, S. Toward discovery science of human brain function. Proceedings
of the National Academy of Sciences, 107(10):4734?4739, 2010.
Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. Distributed optimization and statistical
R in Machine Learning, 3(1):1?122, 2011.
learning via ADMM. Foundations and Trends
Calhoun, V., Miller, R., Pearlson, G., and Adal?, T. The chronnectome: time-varying connectivity
networks as the next frontier in fMRI data discovery. Neuron, 84(2):262?274, 2014.
Carvalho, C., West, M., et al. Dynamic matrix-variate graphical models. Bayesian analysis, 2(1):
69?97, 2007.
Chang, C. and Glover, G. Time?frequency dynamics of resting-state brain connectivity measured
with fmri. Neuroimage, 50(1):81?98, 2010.
Chen, S., Liu, K., Yang, Y., Xu, Y., Lee, S., Lindquist, M., Caffo, B., and Vogelstein,
J. An m-estimator for reduced-rank high-dimensional linear dynamical system identification.
arXiv:1509.03927, 2015.
Cressie, N. Statistics for spatial data. John Wiley & Sons, 2015.
Greenewald, K. and Hero, A. Robust kronecker product PCA for spatio-temporal covariance estimation. Signal Processing, IEEE Transactions on, 63(23):6368?6378, Dec 2015.
Huang, S., Li, J., Sun, L., Ye, J., Fleisher, A., Wu, T., Chen, K., and Reiman, E. Learning brain
connectivity of alzheimer?s disease by sparse inv. cov. est. NeuroImage, 50(3):935?949, 2010.
Kim, J., Pan, W., Initiative, Alzheimer?s Disease Neuroimaging, et al. Highly adaptive tests for
group differences in brain functional connectivity. NeuroImage: Clinical, 9:625?639, 2015.
Liu, X. and Duyn, J. Time-varying functional network information extracted from brief instances of
spontaneous brain activity. Proc. of the Natl. Academy of Sciences, 110(11):4392?4397, 2013.
Monti, R., Hellyer, P., Sharp, D., Leech, R., Anagnostopoulos, C., and Montana, G. Estimating
time-varying brain conn. networks from fMRI time series. NeuroImage, 103:427?443, 2014.
Narayan, M., Allen, G., and Tomson, S. Two sample inference for populations of graphical models
with applications to functional connectivity. arXiv preprint arXiv:1502.03853, 2015.
Qiu, H., Han, F., Liu, H., and Caffo, B. Joint estimation of multiple graphical models from high
dimensional time series. Journal of the Royal Statistical Society: Series B, 78(2):487?504, 2016.
Rothman, A., Bickel, P., Levina, E., Zhu, J., et al. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494?515, 2008.
Rudelson, M. and Zhou, S. Errors-in-variables models with dependent measurements. The Electronic Journal of Statistics, 11(1):1699?1797, 2017.
Tsiligkaridis, T. and Hero, A. Covariance estimation in high dimensions via kronecker product
expansions. IEEE Trans. on Sig. Proc., 61(21):5347?5360, 2013.
Varoquaux, G., Gramfort, A., Poline, J-B., and Thirion, B. Brain covariance selection: better individual functional connectivity models using population prior. Advances in Neural Information
Processing Systems, 23:2334?2342, 2010.
Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., and Mitchell, T. Simultaneously
uncovering the patterns of brain regions involved in different story reading subprocesses. PLOS
ONE, 9(11):e112575, 2014.
Zhou, S. Gemini: Graph estimation with matrix variate normal instances. The Annals of Statistics,
42(2):532?562, 2014.
Zhou, S., Lafferty, J., and Wasserman, L. Time varying undirected graphs. Machine Learning, 80
(2-3):295?319, 2010.
Zhou, S., R?utimann, P., Xu, M., and B?uhlmann, P. High-dimensional covariance estimation based
on gaussian graphical models. The Journal of Machine Learning Research, 12:2975?3026, 2011.
9
6,816 | 7,166 | Gradient Methods for Submodular Maximization
Hamed Hassani
ESE Department
University of Pennsylvania
Philadelphia, PA
[email protected]
Mahdi Soltanolkotabi
EE Department
University of Southern California
Los Angeles, CA
[email protected]
Amin Karbasi
ECE Department
Yale University
New Haven, CT
[email protected]
Abstract
In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving
utility functions in active learning and sensing, matrix approximations and network
inference. Despite the apparent lack of convexity in such functions, we prove that
stochastic projected gradient methods can provide strong approximation guarantees
for maximizing continuous submodular functions with convex constraints. More
specifically, we prove that for monotone continuous DR-submodular functions, all
fixed points of projected gradient ascent provide a factor 1/2 approximation to the
global maxima. We also study stochastic gradient methods and show that after
$O(1/\epsilon^2)$ iterations these methods reach solutions which achieve in expectation
objective values exceeding $(\mathrm{OPT}/2 - \epsilon)$. An immediate application of our results is to
maximize submodular functions that are defined stochastically, i.e. the submodular
function is defined as an expectation over a family of submodular functions with an
unknown distribution. We will show how stochastic gradient methods are naturally
well-suited for this setting, leading to a factor 1/2 approximation when the function is monotone. In particular, it allows us to approximately maximize discrete,
monotone submodular optimization problems via projected gradient ascent on a
continuous relaxation, directly connecting the discrete and continuous domains.
Finally, experiments on real data demonstrate that our projected gradient methods
consistently achieve the best utility compared to other continuous baselines while
remaining competitive in terms of computational effort.
1 Introduction
Submodular set functions exhibit a natural diminishing returns property, resembling concave functions
in continuous domains. At the same time, they can be minimized exactly in polynomial time (while
they can only be maximized approximately), which makes them similar to convex functions. They have
found numerous applications in machine learning, including viral marketing [1], dictionary learning
[2], network monitoring [3, 4], sensor placement [5], product recommendation [6, 7], document and
corpus summarization [8], data summarization [9], crowd teaching [10, 11], and probabilistic models
[12, 13]. However, submodularity is in general a property that goes beyond set functions and can
be defined for continuous functions. In this paper, we consider the following stochastic continuous
submodular optimization problem
$$\max_{x \in \mathcal{K}} F(x) \triangleq \mathbb{E}_{\theta \sim D}[F_\theta(x)], \qquad (1.1)$$
where $\mathcal{K}$ is a bounded convex body, $D$ is generally an unknown distribution, and the $F_\theta$'s are continuous
submodular functions for every $\theta \sim D$. We also use $\mathrm{OPT} \triangleq \max_{x \in \mathcal{K}} F(x)$ to denote the optimum
value. We note that the function $F(x)$ is itself also continuous submodular, as a non-negative
combination of submodular functions is still submodular [14]. The formulation covers popular
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
instances of submodular optimization. For instance, when D puts all the probability mass on a single
function, (1.1) reduces to deterministic continuous submodular optimization. Another common
objective is the finite-sum continuous submodular optimization where $D$ is uniformly distributed over
$m$ instances, i.e., $F(x) \triangleq \frac{1}{m}\sum_{\theta=1}^{m} F_\theta(x)$.
A natural approach to solving problems of the form (1.1) is to use projected stochastic methods.
As we shall see in Section 5, these local search heuristics are surprisingly effective. However, the
reasons for this empirical success are completely unclear. The main challenge is that maximizing F
corresponds to a nonconvex optimization problem (as the function F is not concave), and a priori it is
not clear why gradient methods should yield a reliable solution. This leads us to the main challenge
of this paper
Do projected gradient methods lead to provably good solutions for continuous
submodular maximization with general convex constraints?
We answer the above question in the affirmative, proving that projected gradient methods produce a
competitive solution with respect to the optimum. More specifically, given a general bounded convex
body K and a continuous function F that is monotone, smooth, and (weakly) DR-submodular we
show that
• All stationary points of a DR-submodular function $F$ over $\mathcal{K}$ provide a 1/2 approximation
to the global maximum. Thus, projected gradient methods with sufficiently small step sizes
(a.k.a. gradient flows) always lead to solutions with 1/2 approximation guarantees.
• Projected gradient ascent after $O(L_2/\epsilon)$ iterations produces a solution with objective value
larger than $(\mathrm{OPT}/2 - \epsilon)$. When calculating the gradient is difficult but an unbiased estimate
can be easily obtained, the stochastic projected gradient ascent in $O(L_2/\epsilon + \sigma^2/\epsilon^2)$ iterations
finds a solution with objective value exceeding $(\mathrm{OPT}/2 - \epsilon)$. Here, $L_2$ is the smoothness
of the continuous submodular function measured in the $\ell_2$-norm, $\sigma^2$ is the variance of the
stochastic gradient with respect to the true gradient, and OPT is the function value at the
global optimum.
• More generally, for weakly continuous DR-submodular functions with parameter $\gamma$ (defined
in (2.6)) we prove the above results with a $\gamma^2/(1+\gamma^2)$ approximation guarantee.
Our results have some important implications. First, they show that projected gradient methods are an
efficient way of maximizing the multilinear extension of (weakly) submodular set functions for any
submodularity ratio $\gamma$ (note that $\gamma = 1$ corresponds to submodular functions) [2]. Second, in contrast
to conditional gradient methods for submodular maximization that require the initial point to be the
origin [15, 16], projected gradient methods can start from any initial point in the constraint set K
and still produce a competitive solution. Third, such conditional gradient methods, when applied
to the stochastic setting (with a fixed batch size), perform poorly and can produce arbitrarily bad
solutions when applied to continuous submodular functions (see [17, Appendix B] in the long version
of this paper for an example and further discussion on why conditional gradient methods do not easily
admit stochastic variants). In contrast, stochastic projected gradient methods are stable by design
and provide a solution with an approximation ratio of at least 1/2 in expectation. Finally, our work
provides a unifying approach for solving the stochastic submodular maximization problem [18]
$$f(S) \triangleq \mathbb{E}_{\theta \sim D}[f_\theta(S)], \qquad (1.2)$$
where the functions $f_\theta: 2^V \to \mathbb{R}_+$ are submodular set functions defined over the ground set $V$. Such
objective functions naturally arise in many data summarization applications [19] and have been
recently introduced and studied in [18]. Since D is unknown, problem (1.2) cannot be directly solved.
Instead, [18] showed that in the case of coverage functions, it is possible to efficiently maximize f by
lifting the problem to the continuous domain and using stochastic gradient methods on a continuous
relaxation to reach a solution that is within a factor $(1 - 1/e)$ of the optimum. In contrast, our work
provides a general recipe with 1/2 approximation guarantee for problem (1.2) in which the $f_\theta$'s can be
any monotone submodular function.
2 Continuous Submodular Maximization
A set function $f: 2^V \to \mathbb{R}_+$, defined on the ground set $V$, is called submodular if for all subsets
$A, B \subseteq V$, we have
$$f(A) + f(B) \ge f(A \cup B) + f(A \cap B).$$
Even though submodularity is mostly considered on discrete domains, the notion can be naturally
extended to arbitrary lattices [20]. To this aim, let us consider a subset of $\mathbb{R}^n_+$ of the form $\mathcal{X} = \prod_{i=1}^{n} \mathcal{X}_i$
where each $\mathcal{X}_i$ is a compact subset of $\mathbb{R}_+$. A function $F: \mathcal{X} \to \mathbb{R}_+$ is submodular [21] if for all
$(x, y) \in \mathcal{X} \times \mathcal{X}$, we have
$$F(x) + F(y) \ge F(x \vee y) + F(x \wedge y), \qquad (2.1)$$
where $x \vee y \triangleq \max(x, y)$ (component-wise) and $x \wedge y \triangleq \min(x, y)$ (component-wise). A submodular
function is monotone if for any $x, y \in \mathcal{X}$ obeying $x \le y$, we have $F(x) \le F(y)$ (here, by $x \le y$ we
mean that every element of $x$ is less than that of $y$). Like set functions, we can define submodularity in
an equivalent way, reminiscent of diminishing returns, as follows [14]: the function $F$ is submodular
if for any $x \in \mathcal{X}$, any two distinct basis vectors $e_i, e_j \in \mathbb{R}^n$, and any two non-negative real numbers
$z_i, z_j \in \mathbb{R}_+$ obeying $x_i + z_i \in \mathcal{X}_i$ and $x_j + z_j \in \mathcal{X}_j$, we have
$$F(x + z_i e_i) + F(x + z_j e_j) \ge F(x) + F(x + z_i e_i + z_j e_j). \qquad (2.2)$$
Clearly, the above definition includes submodularity over a set (by restricting the $\mathcal{X}_i$'s to $\{0, 1\}$) or over
an integer lattice (by restricting the $\mathcal{X}_i$'s to $\mathbb{Z}_+$) as special cases. However, in the remainder of this paper
we consider continuous submodular functions defined on products of sub-intervals of $\mathbb{R}_+$. We note that
when twice differentiable, $F$ is submodular if and only if all cross-second-derivatives are non-positive
[14], i.e.,
$$\forall i \neq j, \ \forall x \in \mathcal{X}, \quad \frac{\partial^2 F(x)}{\partial x_i \partial x_j} \le 0. \qquad (2.3)$$
The above expression makes it clear that continuous submodular functions are neither convex nor concave
in general, as concavity (convexity) implies $\nabla^2 F \preceq 0$ (resp. $\nabla^2 F \succeq 0$). Indeed, we can have
functions that are both submodular and convex/concave. For instance, for a concave function $g$
and non-negative weights $\lambda_i \ge 0$, the function $F(x) = g(\sum_{i=1}^{n} \lambda_i x_i)$ is submodular and concave.
Trivially, affine functions are submodular, concave, and convex. A proper subclass of submodular
functions are called DR-submodular [16, 22] if for any $x, y \in \mathcal{X}$ obeying $x \le y$, any standard basis
vector $e_i \in \mathbb{R}^n$, and any non-negative number $z \in \mathbb{R}_+$ obeying $z e_i + x \in \mathcal{X}$ and $z e_i + y \in \mathcal{X}$, we have
$$F(z e_i + x) - F(x) \ge F(z e_i + y) - F(y). \qquad (2.4)$$
One can easily verify that for a differentiable DR-submodular function the gradient is an antitone
mapping, i.e., for all $x, y \in \mathcal{X}$ such that $x \le y$ we have $\nabla F(x) \ge \nabla F(y)$ [16]. When twice
differentiable, DR-submodularity is equivalent to
$$\forall i \ \& \ j, \ \forall x \in \mathcal{X}, \quad \frac{\partial^2 F(x)}{\partial x_i \partial x_j} \le 0. \qquad (2.5)$$
The above twice differentiable functions are sometimes called smooth submodular functions in the
literature [23]. However, in this paper, we say a differentiable submodular function F is L-smooth
w.r.t. a norm $\|\cdot\|$ (and its dual norm $\|\cdot\|_*$) if for all $x, y \in \mathcal{X}$ we have
$$\|\nabla F(x) - \nabla F(y)\|_* \le L \|x - y\|.$$
Here, $\|\cdot\|_*$ is the dual norm of $\|\cdot\|$, defined as $\|g\|_* = \sup_{x \in \mathbb{R}^n : \|x\| \le 1} g^T x$. When the function is
smooth w.r.t. the $\ell_2$-norm we use $L_2$ (note that the $\ell_2$ norm is self-dual). We say that a function is
weakly DR-submodular with parameter $\gamma$ if
$$\gamma = \inf_{\substack{x, y \in \mathcal{X} \\ x \le y}} \ \inf_{i \in [n]} \ \frac{[\nabla F(x)]_i}{[\nabla F(y)]_i}. \qquad (2.6)$$
See [24] for related definitions. Clearly, for a differentiable DR-submodular function we have $\gamma = 1$.
An important example of a DR-submodular function is the multilinear extension [15] $F: [0, 1]^n \to \mathbb{R}$
of a discrete submodular function $f$, namely,
$$F(x) = \sum_{S \subseteq V} \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j) \, f(S).$$
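Since this sum has $2^{|V|}$ terms, in practice $F(x)$ and its partial derivatives are estimated by sampling. A minimal sketch of such unbiased Monte-Carlo estimators (our own illustration; the function names are hypothetical) could look as follows:

    import numpy as np

    def multilinear_value(f, x, n_samples=200, rng=None):
        # Unbiased estimate of F(x): include item i independently with
        # probability x[i] and average the set-function values.
        rng = np.random.default_rng() if rng is None else rng
        n = len(x)
        return np.mean([f(rng.random(n) < x) for _ in range(n_samples)])

    def multilinear_partial(f, x, i, n_samples=200, rng=None):
        # Unbiased estimate of dF/dx_i = E[f(S + {i}) - f(S - {i})] with S ~ x,
        # which follows from the multilinearity of F.
        rng = np.random.default_rng() if rng is None else rng
        n = len(x)
        total = 0.0
        for _ in range(n_samples):
            mask = rng.random(n) < x
            up, down = mask.copy(), mask.copy()
            up[i], down[i] = True, False
            total += f(up) - f(down)
        return total / n_samples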
We note that for set functions, DR-submodularity (i.e., Eq. 2.4) and submodularity (i.e., Eq. 2.1) are
equivalent. However, this is not true for the general submodular functions defined on integer lattices
or product of sub-intervals [16, 22].
The focus of this paper is on continuous submodular maximization as defined in Problem (1.1). More
specifically, we assume that $\mathcal{K} \subseteq \mathcal{X}$ is a general bounded convex set (not necessarily down-closed
as considered in [16]) with diameter $R$. Moreover, we consider the $F_\theta$'s to be monotone (weakly)
DR-submodular functions with parameter $\gamma$.
3 Background and Related Work
Submodular set functions [25, 20] originated in combinatorial optimization and operations research,
but they have recently attracted significant interest in machine learning. Even though they are usually
considered over discrete domains, their optimization is inherently related to continuous optimization
methods. In particular, Lovász [26] showed that the Lovász extension is convex if and only if
the corresponding set function is submodular. Moreover, minimizing a submodular set-function is
equivalent to minimizing the Lovász extension.1 This idea has been recently extended to minimization
of strict continuous submodular functions (i.e., cross-order derivatives in (2.3) are strictly negative)
[14]. Similarly, approximate submodular maximization is linked to a different continuous extension
known as multilinear extension [28]. Multilinear extension (which is an example of DR-submodular
functions studied in this paper) is neither concave nor convex in general. However, a variant of the conditional
gradient method, called continuous greedy, can be used to approximately maximize them. Recently,
Chekuri et al [23] proposed an interesting multiplicative weight update algorithm that achieves
a $(1 - 1/e - \epsilon)$ approximation guarantee after $\tilde{O}(n/\epsilon^2)$ steps for twice differentiable monotone
DR-submodular functions (they are also called smooth submodular functions) subject to a polytope
constraint. Similarly, Bian et al [16] proved that a conditional gradient method, similar to the
continuous greedy algorithm, achieves a $(1 - 1/e - \epsilon)$ approximation guarantee after $O(L_2/\epsilon)$ iterations
for maximizing monotone DR-submodular functions subject to special convex constraints called
down-closed convex bodies. A few remarks are in order. First, the proposed conditional gradient
methods cannot handle the general stochastic setting we consider in Problem (1.1) (in fact, projection
is the key). Second, there is no near-optimality guarantee if conditional gradient methods do not start
from the origin. More precisely, for the continuous greedy algorithm it is necessary to start from
the 0 vector (to be able to remain in the convex constraint set at each iteration). Furthermore, the
0 vector must be a feasible point of the constraint set. Otherwise, the iterates of the algorithm may
fall out of the convex constraint set leading to an infeasible final solution. Third, due to the starting
point requirement, they can only handle special convex constraints, called down-closed. And finally,
the dependency on $L_2$ is very suboptimal as it can be as large as the dimension $n$ (e.g., for the
multilinear extensions of some submodular set functions, see [17, Appendix B] in the long version of
this paper). Our work resolves all of these issues by showing that projected gradient methods can also
approximately maximize monotone DR-submodular functions subject to general convex constraints,
albeit, with a lower 1/2 approximation guarantee.
Generalization of submodular set functions has lately received a lot of attention. For instance, a line
of recent work considered DR-submodular function maximization over an integer lattice [29, 30, 22].
Interestingly, Ene and Nguyen [31] provided an efficient reduction from an integer-lattice DR-submodular function to a submodular set function, thus suggesting a simple way to solve integer-lattice
DR-submodular maximization. Note that such reductions cannot be applied to the optimization
problem (1.1) as expressing general convex body constraints may require solving a continuous
optimization problem.
4 Algorithms and Main Results
In this section we discuss our algorithms together with the corresponding theoretical guarantees. In
what follows, we assume that $F$ is a weakly DR-submodular function with parameter $\gamma$.
1 The idea of using stochastic methods for submodular minimization has recently been used in [27].
4.1 Characterizing the quality of stationary points
We begin with the definition of a stationary point.
Definition 4.1 A vector $x \in \mathcal{K}$ is called a stationary point of a function $F: \mathcal{X} \to \mathbb{R}_+$ over the set
$\mathcal{K} \subseteq \mathcal{X}$ if $\max_{y \in \mathcal{K}} \langle \nabla F(x), y - x \rangle \le 0$.
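Definition 4.1 suggests a direct numerical check. The following minimal sketch (our own illustration, not part of the paper) evaluates the stationarity gap $\max_{y \in \mathcal{K}} \langle \nabla F(x), y - x \rangle$ in closed form when $\mathcal{K}$ is a box; for a general polytope one would call a linear maximization oracle (an LP) in place of the closed form:

    import numpy as np

    def stationarity_gap_box(grad, x, lo=0.0, hi=1.0):
        # max over y in [lo, hi]^n of <grad, y - x>, in closed form for a box:
        # the maximizer sets y_i = hi where grad_i > 0 and y_i = lo otherwise.
        # x satisfies Definition 4.1 iff the returned gap is <= 0.
        y = np.where(grad > 0, hi, lo)
        return float(grad @ (y - x))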
Stationary points are of interest because they characterize the fixed points of the Gradient Ascent
(GA) method. Furthermore, (projected) gradient ascent with a sufficiently small step size is known
to converge to a stationary point for smooth functions [32]. To gain some intuition regarding this
connection, let us consider the GA procedure. Roughly speaking, at any iteration $t$ of the GA
procedure, the value of $F$ increases (to the first order) by $\langle \nabla F(x_t), x_{t+1} - x_t \rangle$. Hence, the progress
at time $t$ is at most $\max_{y \in \mathcal{K}} \langle \nabla F(x_t), y - x_t \rangle$. If at any time $t$ we have $\max_{y \in \mathcal{K}} \langle \nabla F(x_t), y - x_t \rangle \le 0$,
then the GA procedure will not make any progress and it will be stuck once it falls into a stationary
point.
The next natural question is how small can the value of $F$ be at a stationary point compared to the
global maximum? The following result relates the value of $F$ at a stationary point to OPT.
Theorem 4.2 Let $F: \mathcal{X} \to \mathbb{R}_+$ be monotone and weakly DR-submodular with parameter $\gamma$ and
assume $\mathcal{K} \subseteq \mathcal{X}$ is a convex set. Then,
(i) If $x$ is a stationary point of $F$ in $\mathcal{K}$, then $F(x) \ge \frac{\gamma^2}{1+\gamma^2}\,\mathrm{OPT}$.
(ii) Furthermore, if $F$ is $L$-smooth, gradient ascent with a step size smaller than $1/L$ will
converge to a stationary point.
The theorem above guarantees that all fixed points of the GA method yield a solution whose function
value is at least $\frac{\gamma^2}{1+\gamma^2}\,\mathrm{OPT}$. Thus, all fixed points of GA provide a factor $\frac{\gamma^2}{1+\gamma^2}$ approximation ratio. The
particular case of $\gamma = 1$, i.e., when $F$ is DR-submodular, asserts that at any stationary point $F$ is at
least OPT/2. This lower bound is in fact tight. In the long version of this paper (in particular [17,
Appendix A]) we provide a simple instance of a differentiable DR-submodular function that attains
OPT/2 at a stationary point that is also a local maximum.
We would like to note that our result on the quality of stationary points (i.e., first part of Theorem 4.2
above) can be viewed as a simple extension of the results in [33]. In particular, the special case of
$\gamma = 1$ follows directly from [28, Lemma 3.2]. See the long version of this paper [17, Section 7] for
how this lemma is used in our proofs. However, we note that the main focus of this paper is whether
such a stationary point can be found efficiently using stochastic schemes that do not require exact
evaluations of gradients. This is the subject of the next section.
4.2 (Stochastic) gradient methods
We now discuss our first algorithmic approach. For simplicity we focus our exposition on the DR-submodular
case, i.e., $\gamma = 1$, and discuss how this extends to the more general case in the long version
of this paper ([17, Section 7]). A simple approach to maximizing DR-submodular functions is to use
the (projected) Gradient Ascent (GA) method. Starting from an initial estimate $x_1 \in \mathcal{K}$ obeying the
constraints, GA iteratively applies the following update
$$x_{t+1} = \mathcal{P}_\mathcal{K}\left(x_t + \mu_t \nabla F(x_t)\right). \qquad (4.1)$$
Here, $\mu_t$ is the learning rate and $\mathcal{P}_\mathcal{K}(v)$ denotes the Euclidean projection of $v$ onto the set $\mathcal{K}$.
However, in many problems of practical interest we do not have direct access to the gradient of F . In
these cases it is natural to use a stochastic estimate of the gradient in lieu of the actual gradient. This
leads to the Stochastic Gradient Method (SGM). Starting from an initial estimate $x_0 \in \mathcal{K}$ obeying the
constraints, SGM iteratively applies the following updates
$$x_{t+1} = \mathcal{P}_\mathcal{K}\left(x_t + \mu_t g_t\right). \qquad (4.2)$$
Specifically, at every iteration $t$, the current iterate $x_t$ is updated by adding $\mu_t g_t$, where $g_t$ is an
unbiased estimate of the gradient $\nabla F(x_t)$ and $\mu_t$ is the learning rate. The result is then projected onto
the set $\mathcal{K}$. We note that when $g_t = \nabla F(x_t)$, i.e., when there is no randomness in the updates, then
Algorithm 1 (Stochastic) Gradient Method for Maximizing $F(x)$ over a convex set $\mathcal{K}$
Parameters: Integer $T > 0$ and scalars $\mu_t > 0$, $t \in [T]$
Initialize: $x_1 \in \mathcal{K}$
for $t = 1$ to $T$ do
    $y_{t+1} \leftarrow x_t + \mu_t g_t$, where $g_t$ is a random vector s.t. $\mathbb{E}[g_t \mid x_t] = \nabla F(x_t)$
    $x_{t+1} = \arg\min_{x \in \mathcal{K}} \|x - y_{t+1}\|_2$
end for
Pick $\tau$ uniformly at random from $\{1, 2, \ldots, T\}$.
Output $x_\tau$
the SGM updates (4.2) reduce to the GA updates (4.1). We detail the SGM method in Algorithm 1.
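As an illustration of Algorithm 1, the following minimal Python sketch (our own illustration; the function names, data layout, and the step-size choice from Theorem 4.3 below are assumptions, not the authors' code) implements the stochastic gradient method:

    import numpy as np

    def sgm(grad_oracle, project, x1, T, L, R, sigma, rng=None):
        # grad_oracle(x): unbiased estimate of grad F(x);
        # project(y): Euclidean projection onto K;
        # step size follows Theorem 4.3: mu_t = 1 / (L + (sigma / R) * sqrt(t)).
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x1, dtype=float)
        iterates = [x.copy()]                     # iterates[t-1] stores x_t
        for t in range(1, T + 1):
            mu = 1.0 / (L + (sigma / R) * np.sqrt(t))
            x = project(x + mu * grad_oracle(x))
            iterates.append(x.copy())
        # Output x_tau with tau drawn uniformly from {1, ..., T}.
        return iterates[rng.integers(0, T)]

Note that setting sigma = 0 recovers deterministic projected gradient ascent with the constant step size 1/L.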
As we shall see in our experiments detailed in Section 5, the SGM method is surprisingly effective
for maximizing monotone DR-submodular functions. However, the reasons for this empirical success
were previously unclear. The main challenge is that maximizing F corresponds to a nonconvex
optimization problem (as the function F is not concave), and a priori it is not clear why gradient
methods should yield a competitive ratio. Thus, studying gradient methods for such nonconvex
problems poses new challenges:
Do (stochastic) gradient methods converge to a stationary point?
The next theorem addresses some of these challenges. To be able to state this theorem let us recall the
standard definition of smoothness. We say that a continuously differentiable function $F$ is $L$-smooth
(in Euclidean norm) if the gradient $\nabla F$ is $L$-Lipschitz, that is $\|\nabla F(x) - \nabla F(y)\|_{\ell_2} \le L \|x - y\|_{\ell_2}$.
We also define the diameter (in Euclidean norm) as $R^2 = \sup_{x, y \in \mathcal{K}} \frac{1}{2} \|x - y\|_{\ell_2}^2$. We now have all
the elements in place to state our first theorem.
Theorem 4.3 (Stochastic Gradient Method) Let us assume that $F$ is $L$-smooth w.r.t. the Euclidean
norm $\|\cdot\|_{\ell_2}$, monotone and DR-submodular. Furthermore, assume that we have access to a stochastic
oracle $g_t$ obeying
$$\mathbb{E}[g_t] = \nabla F(x_t) \quad \text{and} \quad \mathbb{E}\left[\|g_t - \nabla F(x_t)\|_{\ell_2}^2\right] \le \sigma^2.$$
We run stochastic gradient updates of the form (4.2) with $\mu_t = \frac{1}{L + \frac{\sigma}{R}\sqrt{t}}$. Let $\tau$ be a random variable
taking values in $\{1, 2, \ldots, T\}$ with equal probability. Then,
$$\mathbb{E}[F(x_\tau)] \ge \frac{\mathrm{OPT}}{2} - \left(\frac{R^2 L + \mathrm{OPT}}{2T} + \frac{R\sigma}{\sqrt{T}}\right). \qquad (4.3)$$
Remark 4.4 We would like to note that if we pick $\tau$ to be a random variable taking values in
$\{2, \ldots, T-1\}$ with probability $\frac{1}{T-1}$, and $1$ and $T$ each with probability $\frac{1}{2(T-1)}$, then
$$\mathbb{E}[F(x_\tau)] \ge \frac{\mathrm{OPT}}{2} - \left(\frac{R^2 L}{2T} + \frac{R\sigma}{\sqrt{T}}\right).$$
The above results roughly state that $T = O\left(\frac{R^2 L}{\epsilon} + \frac{R^2 \sigma^2}{\epsilon^2}\right)$ iterations of the stochastic gradient method,
from any initial point, yield a solution whose objective value is at least $\frac{\mathrm{OPT}}{2} - \epsilon$. Stated differently,
$T = O\left(\frac{R^2 L}{\epsilon} + \frac{R^2 \sigma^2}{\epsilon^2}\right)$ iterations of the stochastic gradient method provide in expectation
a value that exceeds the $\left(\frac{\mathrm{OPT}}{2} - \epsilon\right)$ approximation ratio for DR-submodular maximization. As explained in
Section 4.1, it is not possible to go beyond the factor 1/2 approximation ratio using gradient ascent from an
arbitrary initialization.
An important aspect of the above result is that it only requires an unbiased estimate of the gradient.
This flexibility is crucial for many DR-submodular maximization problems (see (1.1)) as in many
cases calculating the function F and its derivative is not feasible. However, it is possible to provide a
good unbiased estimator for these quantities.
We would like to point out that our results are similar in nature to known results about stochastic
methods for convex optimization. Indeed, this result interpolates between the $1/\sqrt{T}$ rate for stochastic
smooth optimization and the $1/T$ rate for deterministic smooth optimization. The special case of $\sigma = 0$,
which corresponds to Gradient Ascent, deserves particular attention. In this case, and under the
assumptions of Theorem 4.3, it is possible to show that $F(x_T) \ge \frac{\mathrm{OPT}}{2} - \frac{R^2 L}{T}$, without the need for a
randomized choice of $\tau \in [T]$.
Finally, we would like to note that while the first term in (4.3) decreases as 1/T , the pre-factor L
could be rather large in many applications. For instance, this quantity may depend on the dimension
of the input n (see [17, Appendix C] in the extended version of this paper). Thus, the number of
iterations for reaching a desirable accuracy may be very large. Such a large computational load renders
(stochastic) gradient methods infeasible in some application domains. It is possible to overcome this
deficiency by using stochastic mirror methods (see [17, Section 4.3] in the extended version of this
paper).
5 Experiments
In our experiments, we consider a movie recommendation application [19] consisting of N users and
n movies. Each user i has a user-specific utility function fi for evaluating sets of movies. The goal is
to find a set of $k$ movies such that in expectation over users' preferences it provides the highest utility,
i.e., $\max_{|S| \le k} f(S)$, where $f(S) \triangleq \mathbb{E}_{i \sim D}[f_i(S)]$. This is an instance of the stochastic submodular
maximization problem defined in (1.2). We consider a setting that consists of $N$ users and consider
the empirical objective function $\frac{1}{N} \sum_{j=1}^{N} f_j$. In other words, the distribution $D$ is assumed to be
uniform on the integers between 1 and N . We can then run the (discrete) greedy algorithm on the
empirical objective function to find a good set of size k. However, as N is a large number, the greedy
algorithm will require a high computational complexity. Another way of solving this problem is
to evaluate the multilinear extension Fi of any sampled function fi and solve the problem in the
continuous domain as follows. Let $F(x) = \mathbb{E}_{i \sim D}[F_i(x)]$ for $x \in [0, 1]^n$ and define the constraint set
$\mathcal{P}_k = \{x \in [0, 1]^n : \sum_{i=1}^{n} x_i \le k\}$. The discrete and continuous optimization formulations lead to the
same optimal value [15]:
$$\max_{S : |S| \le k} f(S) = \max_{x \in \mathcal{P}_k} F(x).$$
Therefore, by running the stochastic versions of projected gradient methods, we can find a solution
in the continuous domain that is at least 1/2 approximation to the optimal value. By rounding that
fractional solution (for instance via randomized Pipage rounding [15]) we obtain a set whose utility is
at least 1/2 of the optimum solution set of size k. We note that randomized Pipage rounding does not
need access to the value of f . We also remark that projection onto Pk can be done very efficiently in
O(n) time (see [18, 34, 35]). Therefore, such an approach easily scales to big data scenarios where
the size of the data set (e.g. number of users) or the number of items n (e.g. number of movies) are
very large.
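For concreteness, here is one simple way to compute this projection; the sketch below (our own illustration) uses bisection rather than the more involved O(n) schemes of [34, 35]:

    import numpy as np

    def project_Pk(y, k, iters=60):
        # Euclidean projection onto P_k = {x in [0, 1]^n : sum(x) <= k}.
        x = np.clip(y, 0.0, 1.0)
        if x.sum() <= k:
            return x
        # Constraint is tight: find rho >= 0 with sum(clip(y - rho, 0, 1)) = k.
        lo, hi = 0.0, float(np.max(y))
        for _ in range(iters):
            rho = 0.5 * (lo + hi)
            if np.clip(y - rho, 0.0, 1.0).sum() > k:
                lo = rho
            else:
                hi = rho
        return np.clip(y - hi, 0.0, 1.0)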
In our experiments, we consider the following baselines:
(i) Stochastic Gradient Ascent (SG): We use the step size $\mu_t = c/\sqrt{t}$ and mini-batch size $B$.
The details for computing an unbiased estimator for the gradient of $F$ are given in [17,
Appendix D] of the extended version of this paper.
(ii) Frank-Wolfe (FW) variant of [16]: We use $T$ to denote the total number of iterations and
$B$ to denote mini-batch sizes (we further set the two remaining parameters of that method
to 1 and 0; see Algorithm 1 in [16] for more details).
(iii) Batch-mode Greedy (Greedy): We run the vanilla greedy algorithm (in the discrete domain)
in the following way. At each round of the algorithm (for selecting a new element), $B$
random users are picked and the function $f$ is estimated by the average over the $B$ selected
users (a minimal sketch of this baseline follows the list).
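The following sketch of baseline (iii) for the facility-location objective is our own illustration; the authors' implementation details may differ:

    import numpy as np

    def batch_greedy_facility(R, k, B, rng=None):
        # R: (N, n) ratings matrix. At each of the k rounds, B random users are
        # sampled and f is estimated by their average best rating over S.
        rng = np.random.default_rng() if rng is None else rng
        N, n = R.shape
        S = []
        for _ in range(k):
            users = rng.integers(0, N, size=B)
            base = np.zeros(B) if not S else R[np.ix_(users, S)].max(axis=1)
            # estimated f(S + {j}) for every candidate movie j
            gains = np.maximum(R[users], base[:, None]).mean(axis=0)
            gains[S] = -np.inf                    # do not re-pick chosen movies
            S.append(int(np.argmax(gains)))
        return S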
To run the experiments we use the MovieLens data set. It consists of 1 million ratings (from 1 to 5) by
$N = 6041$ users for $n = 4000$ movies. Let $r_{i,j}$ denote the rating of user $i$ for movie $j$ (if such a rating
does not exist we assign $r_{i,j}$ to 0). In our experiments, we consider two well-motivated objective
functions. The first one is called "facility location", where the valuation function of user $i$ is defined
[Figure 1 residue: four panels of objective-value curves. (a) Concave Over Modular: objective vs. k for SG(B = 20), Greedy(B = 100), Greedy(B = 1000), FW(B = 20), FW(B = 100). (b) Concave Over Modular: objective vs. iterations T for SG(B = 20, c = 1), SG(B = 20, c = 10), and Greedy. (c) Facility Location: objective vs. k for the same methods as (a). (d) Facility Location: objective vs. number of function computations (×10^8) for SG(B = 20, c = 1), SG(B = 20, c = 2), and Greedy.]
Figure 1: (a) shows the performance of the algorithms w.r.t. the cardinality constraint $k$ for the
concave over modular objective. Each of the continuous algorithms (i.e., SG and FW) runs for
$T = 2000$ iterations. (b) shows the performance of the SG algorithm versus the number of iterations
for fixed $k = 20$ for the concave over modular objective. The green dashed line indicates the value
obtained by Greedy (with $B = 1000$). Recall that the step size of SG is $c/\sqrt{t}$. (c) shows the
performance of the algorithms w.r.t. the cardinality constraint $k$ for the facility location objective
function. Each of the continuous algorithms (SG and FW) runs for $T = 2000$ iterations. (d) shows
the performance of different algorithms versus the number of simple function computations (i.e. the
number of $f_i$'s evaluated during the algorithm) for the facility location objective function. For the
greedy algorithm, a larger number of function computations corresponds to a larger batch size. For SG,
a larger time corresponds to more iterations.
as $f(S, i) = \max_{j \in S} r_{i,j}$. In words, the way user $i$ evaluates a set $S$ is by picking the highest-rated
movie in $S$. Thus, the objective function is equal to
$$f_{\mathrm{fac}}(S) = \frac{1}{N} \sum_{i=1}^{N} \max_{j \in S} r_{i,j}.$$
In our second experiment, we consider a different user-specific valuation function which is a concave
function composed with a modular function, i.e., $f(S, i) = \left(\sum_{j \in S} r_{i,j}\right)^{1/2}$. Again, by considering the
uniform distribution over the set of users, we obtain
$$f_{\mathrm{con}}(S) = \frac{1}{N} \sum_{i=1}^{N} \Big(\sum_{j \in S} r_{i,j}\Big)^{1/2}.$$
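Both objectives are straightforward to evaluate from the ratings matrix; a minimal sketch (our own illustration) is:

    import numpy as np

    def f_fac(R, S):
        # Facility location: average over users of the best rating within S.
        return float(R[:, S].max(axis=1).mean()) if S else 0.0

    def f_con(R, S):
        # Concave over modular: average of the square root of summed ratings.
        return float(np.sqrt(R[:, S].sum(axis=1)).mean()) if S else 0.0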
Note that the multilinear extensions of $f_{\mathrm{fac}}$ and $f_{\mathrm{con}}$ are neither concave nor convex.
Figure 1 depicts the performance of different algorithms for the two proposed objective functions. As
Figures 1a and 1c show, the FW algorithm needs a much higher mini-batch size to be comparable
in performance to our stochastic gradient methods. Note that a smaller batch size leads to less
computational effort (using the same value for B and T , the computational complexity of FW and
SGA is almost the same). Figure 1b shows that after a few hundred iterations SG with B = 20 obtains
almost the same utility as the Greedy method with a large batch size (B = 1000). Finally, Figure 1d
shows the performance of the algorithms with respect to the number of times the single functions
($f_i$'s) are evaluated. This further shows that gradient-based methods have comparable complexity
w.r.t. the Greedy algorithm in the discrete domain.
6 Conclusion
In this paper we studied gradient methods for submodular maximization. Despite the lack of
convexity of the objective function, we demonstrated that local search heuristics are effective at
finding approximately optimal solutions. In particular, we showed that all fixed points of projected
gradient ascent provide a factor 1/2 approximation to the global maxima. We also demonstrated that
stochastic gradient and mirror methods achieve an objective value of $\mathrm{OPT}/2 - \epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations.
We further demonstrated the effectiveness of our methods with experiments on real data.
While in this paper we have focused on convex constraints, our framework may allow non-convex
constraints as well. For instance it may be possible to combine our framework with recent results in
[36, 37, 38] to deal with general nonconvex constraints. Furthermore, in some cases projection onto
the constraint set may be computationally intensive or even intractable, but calculating an approximate
projection may be possible with significantly less effort. One of the advantages of gradient descent-based
proofs is that they continue to work even when some perturbations are introduced in the updates.
Therefore, we believe that our framework can deal with approximate projections and we hope to
pursue this in future work.
Acknowledgments
This work was done while the authors were visiting the Simons Institute for the Theory of Computing.
A. K. is supported by DARPA YFA D16AP00046. The authors would like to thank Jeff Bilmes, Volkan
Cevher, Chandra Chekuri, Maryam Fazel, Stefanie Jegelka, Mohammad-Reza Karimi, Andreas
Krause, Mario Lucic, and Andrea Montanari for helpful discussions.
References
[1] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In KDD, 2003.
[2] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. ICML, 2011.
[3] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. Van Briesen, and N. Glance. Cost-effective outbreak detection in networks. In KDD, 2007.
[4] R. M. Gomez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. In Proceedings of KDD, 2010.
[5] C. Guestrin, A. Krause, and A. P. Singh. Near-optimal sensor placements in Gaussian processes. In ICML, 2005.
[6] K. El-Arini, G. Veda, D. Shahaf, and C. Guestrin. Turning down the noise in the blogosphere. In KDD, 2009.
[7] B. Mirzasoleiman, A. Badanidiyuru, and A. Karbasi. Fast constrained submodular maximization: Personalized data summarization. In ICML, 2016.
[8] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 2011.
[9] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, 2013.
[10] A. Singla, I. Bogunovic, G. Bartok, A. Karbasi, and A. Krause. Near-optimally teaching the crowd to classify. In ICML, 2014.
[11] B. Kim, O. Koyejo, and R. Khanna. Examples are not enough, learn to criticize! Criticism for interpretability. In NIPS, 2016.
[12] J. Djolonga and A. Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In NIPS, 2014.
[13] R. Iyer and J. Bilmes. Submodular point processes with applications to machine learning. In Artificial Intelligence and Statistics, 2015.
[14] F. Bach. Submodular functions: from discrete to continuous domains. arXiv preprint arXiv:1511.00394, 2015.
[15] G. Calinescu, C. Chekuri, M. Pal, and J. Vondrak. Maximizing a submodular set function subject to a matroid constraint. SIAM Journal on Computing, 2011.
[16] A. Bian, B. Mirzasoleiman, J. M. Buhmann, and A. Krause. Guaranteed non-convex optimization: Submodular maximization over continuous domains. arXiv preprint arXiv:1606.05615, 2016.
[17] H. Hassani, M. Soltanolkotabi, and A. Karbasi. Gradient methods for submodular maximization. arXiv preprint arXiv:1708.03949, 2017.
[18] M. Karimi, M. Lucic, H. Hassani, and A. Krause. Stochastic submodular maximization: The case for coverage functions. 2017.
[19] S. A. Stan, M. Zadimoghaddam, A. Krause, and A. Karbasi. Probabilistic submodular maximization in sub-linear time. ICML, 2017.
[20] S. Fujishige. Submodular functions and optimization, volume 58. Annals of Discrete Mathematics, North Holland, Amsterdam, 2nd edition, 2005.
[21] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982.
[22] T. Soma and Y. Yoshida. A generalization of submodular cover via the diminishing return property on the integer lattice. In NIPS, 2015.
[23] C. Chekuri, T. S. Jayram, and J. Vondrak. On multiplicative weight updates for concave and submodular function maximization. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 201–210. ACM, 2015.
[24] R. Eghbali and M. Fazel. Designing smoothing functions for improved worst-case competitive ratio in online optimization. In Advances in Neural Information Processing Systems, pages 3287–3295, 2016.
[25] J. Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1(1):127–136, 1971.
[26] László Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art, pages 235–257. Springer, 1983.
[27] D. Chakrabarty, Y. T. Lee, A. Sidford, and S. C. W. Wong. Subquadratic submodular function minimization. In STOC, 2017.
[28] C. Chekuri, J. Vondrák, and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. In Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC), 2011.
[29] T. Soma, N. Kakimura, K. Inaba, and K. Kawarabayashi. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In ICML, 2014.
[30] C. Gottschalk and B. Peis. Submodular function maximization on the bounded integer lattice. In International Workshop on Approximation and Online Algorithms, 2015.
[31] A. Ene and H. L. Nguyen. A reduction for optimizing lattice submodular functions with diminishing returns. arXiv preprint arXiv:1606.08362, 2016.
[32] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
[33] J. Vondrak, C. Chekuri, and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 783–792. ACM, 2011.
[34] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, 1984.
[35] P. M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46(1):321–328, 1990.
[36] S. Oymak, B. Recht, and M. Soltanolkotabi. Sharp time–data tradeoffs for linear inverse problems. arXiv preprint arXiv:1507.04793, 2015.
[37] M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.
[38] M. Soltanolkotabi. Learning ReLUs via gradient descent. arXiv preprint arXiv:1705.04591, 2017.
6,817 | 7,167 | Smooth Primal-Dual Coordinate Descent Algorithms
for Nonsmooth Convex Optimization
Ahmet Alacaoglu¹  Quoc Tran-Dinh²  Olivier Fercoq³  Volkan Cevher¹
¹ Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland
{ahmet.alacaoglu, volkan.cevher}@epfl.ch
² Department of Statistics and Operations Research, UNC-Chapel Hill, NC, USA
[email protected]
³ LTCI, Télécom ParisTech, Université Paris-Saclay, Paris, France
[email protected]
Abstract
We propose a new randomized coordinate descent method for a convex optimization
template with broad applications. Our analysis relies on a novel combination
of four ideas applied to the primal-dual gap function: smoothing, acceleration,
homotopy, and coordinate descent with non-uniform sampling. As a result, our
method features the first convergence rate guarantees among the coordinate descent
methods, that are the best-known under a variety of common structure assumptions
on the template. We provide numerical evidence to support the theoretical results
with a comparison to state-of-the-art algorithms.
1 Introduction
We develop randomized coordinate descent methods to solve the following composite convex problem:
	F* = min_{x ∈ R^p} { F(x) = f(x) + g(x) + h(Ax) },    (1)

where f : R^p → R, g : R^p → R ∪ {+∞}, and h : R^m → R ∪ {+∞} are proper, closed and
convex functions, and A ∈ R^{m×p} is a given matrix. The optimization template (1) covers many
important applications including support vector machines, sparse model selection, and logistic
regression. It is also convenient to formulate generic constrained convex problems by choosing an
appropriate h.
Within convex optimization, coordinate descent methods have recently become increasingly popular
in the literature [1-6]. These methods are particularly well-suited to solve huge-scale problems
arising from machine learning applications where matrix-vector operations are prohibitive [1].
To our knowledge, there is no coordinate descent method for the general three-composite form (1)
within our structure assumptions studied here that has rigorous convergence guarantees. Our paper
specifically fills this gap. For such a theoretical development, coordinate descent algorithms require
specific assumptions on the convex optimization problems [1, 4, 6]. As a result, to rigorously handle
the three-composite case, we assume that (i) f is smooth, (ii) g is non-smooth but decomposable
(each component has an ?efficiently computable? proximal operator), and (iii) h is non-smooth.
Our approach: In a nutshell, we generalize [4, 7] to the three composite case (1). For this purpose,
we combine several classical and contemporary ideas: We exploit the smoothing technique in [8],
the efficient implementation technique in [4, 14], the homotopy strategy in [9], and the nonuniform
coordinate selection rule in [7] in our algorithm, to achieve the best known complexity estimate for
the template.
Surprisingly, the combination of these ideas is achieved in a very natural and elementary primal-dual
gap-based framework. However, the extension is indeed not trivial since it requires to deal with the
composition of a non-smooth function h and a linear operator A.
While our work has connections to the methods developed in [7, 10, 11], it is rather distinct. First,
we consider a more general problem (1) than the one in [4, 7, 10]. Second, our method relies on
Nesterov's accelerated scheme rather than a primal-dual method as in [11]. Moreover, we obtain
the first rigorous convergence rate guarantees as opposed to [11]. In addition, we allow using any
sampling distribution for choosing the coordinates.
Our contributions: We propose a new smooth primal-dual randomized coordinate descent method
for solving (1) where f is smooth, g is nonsmooth, separable and has a block-wise proximal operator,
and h is a general nonsmooth function. Under such a structure, we show that our algorithm achieves
the best known O(n/k) convergence rate, where k is the iteration count and to our knowledge, this is
the first time that this convergence rate is proven for a coordinate descent algorithm.
We instantiate our algorithm to solve special cases of (1) including the case g = 0 and constrained
problems. We analyze the convergence rate guarantees of these variants individually and discuss the
choices of sampling distributions.
Exploiting the strategy in [4, 14], our algorithm can be implemented in parallel by breaking up the
full vector updates. We also provide a restart strategy to enhance practical performance.
Paper organization: We review some preliminary results in Section 2. The main contribution of
this paper is in Section 3 with the main algorithm and its convergence guarantee. We also present
special cases of the proposed algorithm. Section 4 provides numerical evidence to illustrate the
performance of our algorithms in comparison to existing methods. The proofs are deferred to the
supplementary document.
2 Preliminaries
Notation: Let [n] := {1, 2, ..., n} be the set of n positive integer indices. Let us decompose the
variable vector x into n blocks denoted by x_i, as x = [x_1; x_2; ...; x_n], such that each block x_i
has size p_i ≥ 1 with Σ_{i=1}^n p_i = p. We also decompose the identity matrix I_p of R^p into n
blocks as I_p = [U_1, U_2, ..., U_n], where U_i ∈ R^{p×p_i} has p_i unit vectors. In this case, any
vector x ∈ R^p can be written as x = Σ_{i=1}^n U_i x_i, and each block becomes x_i = U_i^T x for
i ∈ [n]. We define the partial gradients as ∇_i f(x) = U_i^T ∇f(x) for i ∈ [n]. For a convex
function f, we use dom(f) to denote its domain, f*(x) := sup_u { u^T x − f(u) } to denote its
Fenchel conjugate, and prox_f(x) := argmin_u { f(u) + (1/2)‖u − x‖² } to denote its proximal
operator. For a convex set X, δ_X(·) denotes its indicator function. We also need the following
weighted norms:

	‖x_i‖²_(i) = ⟨H_i x_i, x_i⟩,      (‖y_i‖*_(i))² = ⟨H_i^{-1} y_i, y_i⟩,    (2)
	‖x‖²_[α] = Σ_{i=1}^n L_i^α ‖x_i‖²_(i),      (‖y‖*_[α])² = Σ_{i=1}^n L_i^{−α} (‖y_i‖*_(i))².

Here, H_i ∈ R^{p_i×p_i} is a symmetric positive definite matrix, L_i ∈ (0, ∞) for i ∈ [n], and
α > 0. In addition, we use ‖·‖ to denote ‖·‖_2.
Formal assumptions on the template: We require the following assumptions to tackle (1):

Assumption 1. The functions f, g and h are all proper, closed and convex. Moreover, they satisfy:
(a) The partial derivative ∇_i f(·) of f is Lipschitz continuous with Lipschitz constant
    L̂_i ∈ [0, +∞), i.e., ‖∇_i f(x + U_i d_i) − ∇_i f(x)‖*_(i) ≤ L̂_i ‖d_i‖_(i) for all x ∈ R^p, d_i ∈ R^{p_i}.
(b) The function g is separable, i.e., it has the form g(x) = Σ_{i=1}^n g_i(x_i).
(c) One of the following assumptions on h holds, for Subsections 3.3 and 3.4 respectively:
    i.  h is Lipschitz continuous, which is equivalent to the boundedness of dom(h*).
    ii. h is the indicator function of an equality constraint, i.e., h(Ax) := δ_{c}(Ax).
Now, we briefly describe the main techniques used in this paper.
Acceleration: Acceleration techniques in convex optimization date back to the seminal work of
Nesterov [13] and are among the standard tools of convex optimization. We exploit such a scheme
to achieve the best known O(1/k) rate for the nonsmooth template (1).
Nonuniform distribution: We assume that i_k, the coordinate chosen at iteration k, is a random
index on [n] associated with a probability distribution q = (q_1, ..., q_n)^T such that

	P{ i_k = i } = q_i > 0 for i ∈ [n], and Σ_{i=1}^n q_i = 1.    (3)

When q_i = 1/n for all i ∈ [n], we obtain the uniform distribution. Let i_0, i_1, ..., i_k be i.i.d.
realizations of the random index after k iterations. We define F_{k+1} = σ(i_0, i_1, ..., i_k) as the
σ-field generated by these realizations.
Smoothing techniques: We can write the convex function h(u) = sup_y { ⟨u, y⟩ − h*(y) } using its
Fenchel conjugate h*. Since h in (1) is convex but possibly nonsmooth, we smooth h as

	h_β(u) := max_{y ∈ R^m} { ⟨u, y⟩ − h*(y) − (β/2) ‖y − ẏ‖² },    (4)

where ẏ ∈ R^m is given and β > 0 is the smoothness parameter. Moreover, the quadratic function
b(y, ẏ) = (1/2) ‖y − ẏ‖² is defined based on a given norm in R^m. Let us denote by y*_β(u) the
unique solution of the concave maximization problem in (4), i.e.:

	y*_β(u) := argmax_{y ∈ R^m} { ⟨u, y⟩ − h*(y) − (β/2) ‖y − ẏ‖² } = prox_{(1/β) h*}( ẏ + (1/β) u ),    (5)

where prox_{h*} is the proximal operator of h*. If we assume that h is Lipschitz continuous, or
equivalently that dom(h*) is bounded, then it holds that

	h_β(u) ≤ h(u) ≤ h_β(u) + (β/2) D²_{h*},  where  D_{h*} := max_{y ∈ dom(h*)} ‖y − ẏ‖ < +∞.    (6)

Let us define a new smoothed function ψ_β(x) := f(x) + h_β(Ax), and write F_β(x) := ψ_β(x) + g(x)
for the corresponding smoothed objective. Then ψ_β is differentiable, and its block partial gradient

	∇_i ψ_β(x) = ∇_i f(x) + A_i^T y*_β(Ax)    (7)

is also Lipschitz continuous with Lipschitz constant L_i(β) := L̂_i + ‖A_i‖²/β, where L̂_i is given in
Assumption 1, and A_i ∈ R^{m×p_i} is the i-th block of A.
Homotopy: In smoothing-based methods, the choice of the smoothness parameter is critical. This
choice may require knowledge of the desired accuracy, the maximum number of iterations, or the
diameters of the primal and/or dual domains as in [8]. In order to make this choice flexible and our
method applicable to the constrained problems, we employ a homotopy strategy developed in [9] for
deterministic algorithms, to gradually update the smoothness parameter while making sure that it
converges to 0.
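To make the smoothing step concrete, here is a minimal Python sketch of (4)-(5) for the special
case h = ‖·‖_1, so that h* is the indicator of the ℓ∞ unit ball and prox_{(1/β)h*} reduces to a clip;
the function name and test values are our own illustration, not part of the paper.

```python
import numpy as np

def smoothed_l1(u, y_dot, beta):
    """Sketch of the Nesterov smoothing (4)-(5) for h = ||.||_1.

    For the l1-norm, h* is the indicator of the l-infinity unit ball, so
    prox_{h*/beta} is a projection onto [-1, 1]^m, i.e. a coordinatewise clip.
    """
    y_star = np.clip(y_dot + u / beta, -1.0, 1.0)                     # (5)
    h_beta = u @ y_star - 0.5 * beta * np.sum((y_star - y_dot) ** 2)  # value of (4)
    return h_beta, y_star

# The bound (6): h_beta(u) <= h(u) <= h_beta(u) + (beta/2) * D_{h*}^2.
u = np.array([3.0, -0.2, 0.0])
for beta in (1.0, 0.1, 0.01):
    h_beta, _ = smoothed_l1(u, np.zeros(3), beta)
    print(beta, h_beta, np.abs(u).sum())  # h_beta approaches ||u||_1 as beta -> 0
```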
3 Smooth primal-dual randomized coordinate descent
In this section, we develop a smoothing primal-dual method to solve (1). Our approach combines the
four key techniques mentioned above: smoothing, acceleration, homotopy, and randomized coordinate
descent. Similar to [7], we allow an arbitrary nonuniform sampling distribution, which makes it
possible to design a distribution that captures the underlying structure of specific problems.
3.1 The algorithm
Algorithm 1 below smooths, accelerates, and randomizes the coordinate descent method.
Algorithm 1. SMooth, Accelerate, Randomize The Coordinate Descent (SMART-CD)

Input: Choose β_1 > 0 and α ∈ [0, 1] as two input parameters. Choose x^0 ∈ R^p.
1:  Set B_i^0 := L̂_i + ‖A_i‖²/β_1 for i ∈ [n]. Compute S_α := Σ_{i=1}^n (B_i^0)^α and
    q_i := (B_i^0)^α / S_α for all i ∈ [n].
2:  Set τ_0 := min { q_i | 1 ≤ i ≤ n } ∈ (0, 1]. Set x̄^0 = x̃^0 := x^0.
3:  for k = 0, 1, ..., k_max do
4:    Update x̂^k := (1 − τ_k) x̄^k + τ_k x̃^k and compute û^k := A x̂^k.
5:    Compute the dual step  ẏ*^k := y*_{β_{k+1}}(û^k) = prox_{(1/β_{k+1}) h*}( ẏ + (1/β_{k+1}) û^k ).
6:    Select a block coordinate i_k ∈ [n] according to the probability distribution q.
7:    Set x̃^{k+1} := x̃^k, and compute the primal i_k-block coordinate:
        x̃^{k+1}_{i_k} := argmin_{x_{i_k} ∈ R^{p_{i_k}}} { ⟨∇_{i_k} f(x̂^k) + A_{i_k}^T ẏ*^k, x_{i_k} − x̂^k_{i_k}⟩
                          + g_{i_k}(x_{i_k}) + (τ_k B^k_{i_k} / (2 τ_0)) ‖x_{i_k} − x̃^k_{i_k}‖²_{(i_k)} }.
8:    Update x̄^{k+1} := x̂^k + (τ_k/τ_0)(x̃^{k+1} − x̃^k).
9:    Compute τ_{k+1} ∈ (0, 1) as the unique positive root of τ³ + τ² + τ_k² τ − τ_k² = 0.
10:   Update β_{k+2} := β_{k+1} / (1 + τ_{k+1}) and B_i^{k+1} := L̂_i + ‖A_i‖²/β_{k+2} for i ∈ [n].
11: end for
From the update x̄^k := x̂^{k−1} + (τ_{k−1}/τ_0)(x̃^k − x̃^{k−1}) and x̂^k := (1 − τ_k) x̄^k + τ_k x̃^k, it
directly follows that x̂^k = (1 − τ_k)( x̂^{k−1} + (τ_{k−1}/τ_0)(x̃^k − x̃^{k−1}) ) + τ_k x̃^k. Therefore, it is
possible to implement the algorithm without forming x̄^k.
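The following is a minimal Python sketch of Algorithm 1 for scalar blocks (p_i = 1), under the
assumption that the user supplies callables grad_f, prox_g, and prox_hstar; all names are ours, and
Step 9 is solved by bisection since the cubic has a unique root in (0, 1). Step 7 uses the closed
form x̃_i − (τ_0/(τ_k B_i)) ∇_i followed by a prox of g_i.

```python
import numpy as np

def solve_tau(tau_k):
    """Step 9: unique positive root of t^3 + t^2 + tau_k^2 t - tau_k^2 = 0,
    located in (0, 1) and found by bisection."""
    f = lambda t: t**3 + t**2 + tau_k**2 * t - tau_k**2
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def smart_cd(grad_f, prox_g, prox_hstar, A, L_hat, x0, beta1, alpha=1.0,
             iters=1000, seed=0):
    """Sketch of SMART-CD (Algorithm 1) with scalar blocks (p_i = 1).

    grad_f(x)        -> full gradient of f (partials are read per coordinate)
    prox_g(i, v, s)  -> prox of s * g_i evaluated at v
    prox_hstar(v, s) -> prox of s * h* evaluated at v (e.g. a projection)
    """
    rng = np.random.default_rng(seed)
    n = x0.size
    y_dot = np.zeros(A.shape[0])
    beta = beta1
    col_norms = np.sum(A**2, axis=0)                  # ||A_i||^2
    B = L_hat + col_norms / beta                      # B_i^0
    q = B**alpha / np.sum(B**alpha)                   # sampling distribution
    tau0 = tau = q.min()
    x_bar, x_tilde = x0.copy(), x0.copy()
    for _ in range(iters):
        x_hat = (1 - tau) * x_bar + tau * x_tilde                 # Step 4
        y = prox_hstar(y_dot + (A @ x_hat) / beta, 1.0 / beta)    # Step 5
        i = rng.choice(n, p=q)                                    # Step 6
        grad_i = grad_f(x_hat)[i] + A[:, i] @ y                   # partial gradient (7)
        step = tau0 / (tau * B[i])
        x_new_i = prox_g(i, x_tilde[i] - step * grad_i, step)     # Step 7
        x_bar = x_hat.copy()
        x_bar[i] += (tau / tau0) * (x_new_i - x_tilde[i])         # Step 8
        x_tilde = x_tilde.copy()
        x_tilde[i] = x_new_i
        tau = solve_tau(tau)                                      # Step 9
        beta = beta / (1 + tau)                                   # Step 10
        B = L_hat + col_norms / beta
    return x_bar
```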
3.2 Efficient implementation
While the basic variant in Algorithm 1 requires full vector updates at each iteration, we exploit the
idea in [4, 14] and show that we can partially update these vectors in a more efficient manner.
Algorithm 2. Efficient SMART-CD

Input: Choose β_1 > 0 and α ∈ [0, 1] as two input parameters. Choose x^0 ∈ R^p.
1:  Set B_i^0 := L̂_i + ‖A_i‖²/β_1 for i ∈ [n]. Compute S_α := Σ_{i=1}^n (B_i^0)^α and
    q_i := (B_i^0)^α / S_α for all i ∈ [n].
2:  Set τ_0 := min { q_i | 1 ≤ i ≤ n } ∈ (0, 1] and c_0 := 1 − τ_0. Set u^0 = z̃^0 := x^0.
3:  for k = 0, 1, ..., k_max do
4:    Compute the dual step
        y*_{β_{k+1}}(c_k A u^k + A z̃^k) := prox_{(1/β_{k+1}) h*}( ẏ + (1/β_{k+1})(c_k A u^k + A z̃^k) ).
5:    Select a block coordinate i_k ∈ [n] according to the probability distribution q.
6:    Let ∇^k_{i_k} := ∇_{i_k} f(c_k u^k + z̃^k) + A_{i_k}^T y*_{β_{k+1}}(c_k A u^k + A z̃^k). Compute
        t^{k+1}_{i_k} := argmin_{t ∈ R^{p_{i_k}}} { ⟨∇^k_{i_k}, t⟩ + g_{i_k}(t + z̃^k_{i_k}) + (τ_k B^k_{i_k} / (2 τ_0)) ‖t‖²_{(i_k)} }.
7:    Update z̃^{k+1}_{i_k} := z̃^k_{i_k} + t^{k+1}_{i_k}.
8:    Update u^{k+1}_{i_k} := u^k_{i_k} − ((1 − τ_k/τ_0)/c_k) t^{k+1}_{i_k}.
9:    Compute τ_{k+1} ∈ (0, 1) as the unique positive root of τ³ + τ² + τ_k² τ − τ_k² = 0.
10:   Update β_{k+2} := β_{k+1} / (1 + τ_{k+1}) and B_i^{k+1} := L̂_i + ‖A_i‖²/β_{k+2} for i ∈ [n].
11: end for
We present the following result, which shows the equivalence between Algorithm 1 and Algorithm 2;
the proof can be found in the supplementary document.

Proposition 3.1. Let c_k = Π_{l=0}^k (1 − τ_l), ẑ^k = c_k u^k + z̃^k and z̄^k = c_{k−1} u^k + z̃^k. Then
x̃^k = z̃^k, x̂^k = ẑ^k and x̄^k = z̄^k for all k ≥ 0, where x̃^k, x̂^k, and x̄^k are defined in Algorithm 1.
According to Algorithm 2, we never need to form or update full-dimensional vectors. The only times
we need x̂^k are when computing the gradient and the dual variable y*_{β_{k+1}}. We present two special
cases, common in machine learning, in which we can compute these steps efficiently.

Remark 3.2. Under the following assumptions, we can characterize the per-iteration complexity
explicitly. Let A, M ∈ R^{m×p}, and

(a) f has the form f(x) = Σ_{j=1}^m φ_j(e_j^T M x), where e_j is the j-th standard unit vector;
(b) h is separable, as in h(Ax) = δ_{c}(Ax) or h(Ax) = ‖Ax‖_1.

Assuming that we store and maintain the residuals r^k_{u,f} = M u^k, r^k_{z̃,f} = M z̃^k, r^k_{u,h} = A u^k,
and r^k_{z̃,h} = A z̃^k, the per-iteration cost is O(max{ |{j : A_{ji} ≠ 0}|, |{j : M_{ji} ≠ 0}| }) arithmetic
operations. If h is partially separable as in [3], the complexity of each iteration remains moderate.
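A sketch of the residual maintenance behind Remark 3.2, with our own variable names: after block i
receives the update t_i (Step 7 of Algorithm 2) and the corresponding change
du_i = −((1 − τ_k/τ_0)/c_k) t_i (Step 8), only column i of M and A is touched.

```python
# Residual maintenance from Remark 3.2 (a sketch, names are ours).  The cost
# of one refresh is O(nnz(M[:, i]) + nnz(A[:, i])) arithmetic operations.
def update_residuals(res, M, A, i, t_i, du_i):
    res["z_f"] += M[:, i] * t_i    # r_{z,f} = M z_tilde
    res["z_h"] += A[:, i] * t_i    # r_{z,h} = A z_tilde
    res["u_f"] += M[:, i] * du_i   # r_{u,f} = M u
    res["u_h"] += A[:, i] * du_i   # r_{u,h} = A u
```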
3.3 Case 1: Convergence analysis of SMART-CD for Lipschitz continuous h
We provide the following main theorem, which characterizes the convergence rate of Algorithm 1.

Theorem 3.3. Let x* be an optimal solution of (1) and let β_1 > 0 be given. In addition, let
τ_0 := min { q_i | i ∈ [n] } ∈ (0, 1] and β_0 := (1 + τ_0) β_1 be given parameters. For all k ≥ 1, the
sequence x̄^k generated by Algorithm 1 satisfies:

	E[ F(x̄^k) ] − F* ≤ C*(x^0) / (τ_0 (k − 1) + 1) + β_1 (1 + τ_0) D²_{h*} / (2 (τ_0 k + 1)),    (8)

where C*(x^0) := (1 − τ_0)(F_{β_1}(x^0) − F*) + Σ_{i=1}^n (τ_0 B_i^0 / (2 q_i)) ‖x*_i − x^0_i‖²_(i), and
D_{h*} is as defined by (6).

In the special case where we use the uniform distribution, τ_0 = q_i = 1/n, the convergence rate
reduces to

	E[ F(x̄^k) ] − F* ≤ n C*(x^0) / (k + n − 1) + (n + 1) β_1 D²_{h*} / (2k + 2n),

where C*(x^0) := (1 − 1/n)(F_{β_1}(x^0) − F*) + Σ_{i=1}^n (B_i^0 / 2) ‖x*_i − x^0_i‖²_(i). This estimate
shows that the convergence rate of Algorithm 1 is O(n/k), which is, to the best of our knowledge, the
best known so far for a coordinate descent method on this template.
3.4 Case 2: Convergence analysis of SMART-CD for non-smooth constrained optimization

In this section, we instantiate Algorithm 1 to solve constrained convex optimization problems with
possibly non-smooth terms in the objective. Clearly, if we choose h(·) = δ_{c}(·) in (1) as the
indicator function of the set {c} for a given vector c ∈ R^m, then we obtain a constrained problem:

	F* := min_{x ∈ R^p} { F(x) = f(x) + g(x) | Ax = c },    (9)

where f and g are defined as in (1), A ∈ R^{m×p}, and c ∈ R^m.

We can specialize Algorithm 1 to solve this constrained problem by modifying the following two steps,
as sketched in the code after this list:
(a) The update of y*_{β_{k+1}}(A x̂^k) at Step 5 is changed to

	y*_{β_{k+1}}(A x̂^k) := ẏ + (1/β_{k+1}) (A x̂^k − c),    (10)

which requires one matrix-vector multiplication in A x̂^k.
(b) The updates of τ_{k+1} at Step 9 and β_{k+2} at Step 10 are changed to

	τ_{k+1} := τ_k / (1 + τ_k)  and  β_{k+2} := (1 − τ_{k+1}) β_{k+1}.    (11)
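In code, both modifications are one-liners; this is a sketch with our own names, following the same
conventions as the earlier SMART-CD sketch.

```python
# Sketch of the constrained-case modifications (names are ours).
def constrained_dual_step(y_dot, A, x_hat, c, beta):
    return y_dot + (A @ x_hat - c) / beta     # (10): one matrix-vector product

def constrained_params(tau, beta):
    tau_next = tau / (1.0 + tau)              # (11)
    beta_next = (1.0 - tau_next) * beta
    return tau_next, beta_next
```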
Now, we analyze the convergence of this algorithm by providing the following theorem.
Theorem 3.4. Let x̄^k be the sequence generated by Algorithm 1 for solving (9) using the updates
(10) and (11), and let y* be an arbitrary optimal solution of the dual problem of (9). In addition,
let τ_0 := min { q_i | i ∈ [n] } ∈ (0, 1] and β_0 := (1 + τ_0) β_1 be given parameters. Then, we have
the following estimates:

	E[ F(x̄^k) ] − F* ≤ C*(x^0) / (τ_0 (k − 1) + 1) + β_1 ‖ẏ‖² / (2 (τ_0 (k − 1) + 1)) + ‖y*‖ E[ ‖A x̄^k − c‖ ],
	E[ ‖A x̄^k − c‖ ] ≤ (β_1 / (τ_0 (k − 1) + 1)) [ ‖y* − ẏ‖ + ( ‖y* − ẏ‖² + (2/β_1) C*(x^0) )^{1/2} ],    (12)

where C*(x^0) := (1 − τ_0)(F_{β_1}(x^0) − F*) + Σ_{i=1}^n (τ_0 B_i^0 / (2 q_i)) ‖x*_i − x^0_i‖²_(i). We note
that the lower bound −‖y*‖ E[ ‖A x̄^k − c‖ ] ≤ E[ F(x̄^k) ] − F* always holds.
xk ) F ? .
3.5 Other special cases
We consider the following special cases of Algorithm 1:
The case h = 0: In this case, we obtain an algorithm similar to the one studied in [7] except that
we have non-uniform sampling instead of importance sampling. If the distribution is uniform, then
we obtain the method in [4].
The case g = 0: In this case, we have F (x) = f (x) + h(Ax), which can handle the linearly
constrained problems with smooth objective function. In this case, we can choose ?0 = 1, and the
coordinate proximal gradient step, Step 7 in Algorithm 1, is simplified as
?
?
q ik
1
?
x
?k+1
:= x
?kik ? B
rik f (?
x k ) + A>
uk ) .
(13)
k H ik
ik y k+1 (?
ik
k
ik
In addition, we replace Step 8 with
x
?k+1
=x
?ki +
i
?k k+1
(?
x
qi i
We then obtain the following results:
5
x
?ki ), 8i 2 [n].
(14)
Corollary 3.5. Assume that Assumption 1 holds. Let τ_0 = 1, β_1 > 0, and let Steps 7 and 8 of
Algorithm 1 be replaced by (13) and (14), respectively. If, in addition, h is Lipschitz continuous,
then we have

	E[ F(x̄^k) ] − F* ≤ (1/k) Σ_{i=1}^n (B_i^0 / (2 q_i²)) ‖x*_i − x^0_i‖²_(i) + β_1 D²_{h*} / (k + 1),    (15)

where D_{h*} is defined by (6).

If, instead of a Lipschitz continuous h, we have h(·) = δ_{c}(·) and solve the constrained
problem (9) with g = 0, then we have

	E[ F(x̄^k) ] − F* ≤ C*(x^0)/k + β_1 ‖ẏ‖² / (2k) + ‖y*‖ E[ ‖A x̄^k − c‖ ],
	E[ ‖A x̄^k − c‖ ] ≤ (β_1/k) [ ‖y* − ẏ‖ + ( ‖y* − ẏ‖² + (2/β_1) C*(x^0) )^{1/2} ],    (16)

where C*(x^0) := Σ_{i=1}^n (B_i^0 / (2 q_i²)) ‖x*_i − x^0_i‖²_(i).
3.6 Restarting SMART-CD
It is known that restarting an accelerated method significantly enhances its practical performance
when the underlying problem admits a (restricted) strong convexity condition. As a result, we
describe below how to restart (i.e., reset the momentum term of) Efficient SMART-CD. If the restart
is injected at the k-th iteration, then we restart the algorithm with the following steps:

	u^{k+1}        ← 0,
	r^{k+1}_{u,f}  ← 0,
	r^{k+1}_{u,h}  ← 0,
	ẏ              ← y*_{β_{k+1}}( c_k r^k_{u,h} + r^k_{z̃,h} ),
	β_{k+1}        ← β_1,
	τ_{k+1}        ← τ_0,
	c_k            ← 1.

The first three steps of the restart procedure restart the primal variable, which is classical [15].
Restarting ẏ is also suggested in [9]. The cost of this procedure is essentially equal to the cost of
one iteration as described in Remark 3.2; therefore, even restarting once every epoch does not cause
a significant difference in per-iteration cost.
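A sketch of the restart logic, assuming the algorithm state is kept in a dictionary and that
state["dual_step"] evaluates y*_β(·) as in Step 4 of Algorithm 2; the state layout and names are
ours, not the paper's code.

```python
# Sketch of the restart step of Section 3.6 (names and layout are assumptions).
def restart(state, beta1, tau0):
    # Re-center the dual first, using the residuals from *before* the reset:
    # y_dot <- y*_{beta_{k+1}}(c_k r_{u,h}^k + r_{z,h}^k).
    state["y_dot"] = state["dual_step"](state["c"] * state["r_u_h"]
                                        + state["r_z_h"])
    state["u"][:] = 0.0          # u^{k+1} <- 0
    state["r_u_f"][:] = 0.0      # r_{u,f}^{k+1} <- 0
    state["r_u_h"][:] = 0.0      # r_{u,h}^{k+1} <- 0
    state["beta"] = beta1        # beta_{k+1} <- beta_1
    state["tau"] = tau0          # tau_{k+1} <- tau_0
    state["c"] = 1.0             # c_k <- 1
```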
4 Numerical evidence

We illustrate the performance of Efficient SMART-CD in brain imaging and support vector machine
applications. We also include one representative example of a degenerate linear program to illustrate
why the convergence rate guarantees of our algorithm matter. We compare SMART-CD with
Vu-Condat-CD [11], which is a coordinate descent variant of Vu-Condat's algorithm [16], FISTA [17],
ASGARD [9], Chambolle-Pock's primal-dual algorithm [18], L-BFGS [19] and SDCA [5].
4.1 A degenerate linear program: Why do convergence rate guarantees matter?
We consider the following degenerate linear program studied in [9]:

	min_{x ∈ R^p}  2 x_p
	s.t.  Σ_{k=1}^{p−1} x_k = 1,
	      x_p − Σ_{k=1}^{p−1} x_k = 0,   (repeated for j = 2, ..., d),
	      x_p ≥ 0.    (17)

Here, the constraint x_p − Σ_{k=1}^{p−1} x_k = 0 is repeated d times. This problem satisfies the linear
constraint qualification condition, which guarantees primal-dual optimality. If we define

	f(x) = 2 x_p,   g(x) = δ_{x_p ≥ 0}(x_p),   h(Ax) = δ_{c}(Ax),   c = [1, 0, ..., 0]^T,

where

	Ax = [ Σ_{k=1}^{p−1} x_k,  x_p − Σ_{k=1}^{p−1} x_k,  ...,  x_p − Σ_{k=1}^{p−1} x_k ]^T,

we can fit this problem and its dual form into our template (1).
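A sketch of how one might build this instance in NumPy (with p = 10 and d = 200 as in the
experiment below); the mapping follows (17) and the names are ours.

```python
import numpy as np

p, d = 10, 200                                               # as in the experiment
row_sum = np.concatenate([np.ones(p - 1), [0.0]])            # sum_{k<p} x_k
row_deg = np.concatenate([-np.ones(p - 1), [1.0]])           # x_p - sum_{k<p} x_k
A = np.vstack([row_sum] + [row_deg] * d)                     # shape (d + 1, p)
c = np.zeros(d + 1)
c[0] = 1.0                                                   # c = [1, 0, ..., 0]^T

def grad_f(x):                    # f(x) = 2 x_p
    g = np.zeros(p)
    g[-1] = 2.0
    return g

def prox_g(i, v, s):              # g = indicator of {x_p >= 0}
    return max(v, 0.0) if i == p - 1 else v
```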
[Figure 1: two plots of F(x) − F* and ‖Ax − c‖ versus epochs (0 to 1000) for SMART-CD,
SMART-CD-Restart, and Vu-Condat-CD.]
Figure 1: The convergence behavior of 3 algorithms on a degenerate linear program.
For this experiment, we select the dimensions p = 10 and d = 200. We implement our algorithm and
compare it with Vu-Condat-CD. We also combine our method with the restarting strategy proposed
above. We use the same mapping to fit the problem into the template of Vu-Condat-CD.
Figure 1 illustrates the convergence behavior of Vu-Condat-CD and SMART-CD. We compare
primal suboptimality and feasibility in the plots. The explicit solution of the problem is used to
generate the plot with primal suboptimality. We observe that degeneracy of the problem prevents
Vu-Condat-CD from making any progress towards the solution, whereas SMART-CD preserves the O(1/k)
rate predicted by the theory. We emphasize that the authors in [11] proved almost sure convergence
for Vu-Condat-CD but they did not provide a convergence rate guarantee for this method. Since the
problem is certainly non-strongly convex, restarting does not significantly improve performance of
SMART-CD.
4.2 Total variation and ℓ1-regularized least squares regression with functional MRI data
In this experiment, we consider a computational neuroscience application where prediction is done
based on a sequence of functional MRI images. Since the images are high dimensional and the number
of samples that can be taken is limited, TV-ℓ1 regularization is used to obtain stable and predictive
estimation results [20]. The convex optimization problem we solve is of the form:
	min_{x ∈ R^p}  (1/2) ‖M x − b‖² + λ r ‖x‖_1 + λ (1 − r) ‖x‖_TV.    (18)

This problem fits our template with

	f(x) = (1/2) ‖M x − b‖²,   g(x) = λ r ‖x‖_1,   h(u) = λ (1 − r) ‖u‖_1,

where D is the 3D finite difference operator that defines the total variation norm ‖·‖_TV and u = Dx.
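As an illustration, the objective (18) with a 1-D finite-difference operator standing in for the
3-D operator (an assumption made only to keep the sketch small):

```python
import numpy as np

def tv_l1_objective(x, M, b, lam, r):
    """Objective (18) with D = 1-D forward differences (our simplification)."""
    Dx = np.diff(x)                                 # (Dx)_j = x_{j+1} - x_j
    return (0.5 * np.sum((M @ x - b) ** 2)          # f(x)
            + lam * r * np.abs(x).sum()             # g(x) = lam * r * ||x||_1
            + lam * (1 - r) * np.abs(Dx).sum())     # h(Dx) = lam * (1-r) * ||Dx||_1

rng = np.random.default_rng(0)
M, b, x = rng.normal(size=(5, 8)), rng.normal(size=5), rng.normal(size=8)
print(tv_l1_objective(x, M, b, lam=0.001, r=0.5))
```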
We use an fMRI dataset where the primal variable x is a 3D image of the brain containing 33,177
voxels. The feature matrix M has 768 rows, each representing the brain activity for the corresponding
example [20]. We compare our algorithm with Vu-Condat's algorithm, FISTA, ASGARD, Chambolle-Pock's
primal-dual algorithm, L-BFGS and Vu-Condat-CD.
[Figure 2: three plots of F(x) versus time (s) for Chambolle-Pock, Vu-Condat, FISTA, ASGARD,
L-BFGS, Vu-Condat-CD, and SMART-CD.]
Figure 2: The convergence of 7 algorithms for problem (18). The regularization parameters are
λ = 0.001, r = 0.5 for the first plot; λ = 0.001, r = 0.9 for the second plot; and λ = 0.01, r = 0.5
for the third plot.
Figure 2 illustrates the convergence behavior of the algorithms for different values of the
regularization parameters. The per-iteration costs of SMART-CD and Vu-Condat-CD are similar, so the
behavior of these two algorithms is quite similar in this experiment. Since Vu-Condat's,
Chambolle-Pock's, FISTA and ASGARD methods work with full-dimensional variables, they have slow
convergence in time. L-BFGS has performance close to the coordinate descent methods. The simulation
in Figure 2 is performed using the benchmarking tool of [20]. The algorithms are tuned for the best
parameters in practice.
4.3 Linear support vector machines problem with bias
In this section, we consider an application of our algorithm to the support vector machines (SVM)
problem for binary classification. Given a training set with m examples {a_1, a_2, ..., a_m} such
that a_i ∈ R^p, and class labels {b_1, b_2, ..., b_m} such that b_i ∈ {−1, +1}, we define the soft
margin primal support vector machines problem with bias as

	min_{w ∈ R^p, w_0 ∈ R}  Σ_{i=1}^m C_i max{ 0, 1 − b_i (⟨a_i, w⟩ + w_0) } + (λ/2) ‖w‖².    (19)

As is common practice, we solve its dual formulation, which is a constrained problem:

	min_{x ∈ R^m}  (1/(2λ)) ‖M D(b) x‖² − Σ_{i=1}^m x_i
	s.t.  0 ≤ x_i ≤ C_i,  i = 1, ..., m,   b^T x = 0,    (20)

where D(b) is the diagonal matrix with the class labels b_i on its diagonal, and M ∈ R^{p×m} is
formed by the example vectors. If we define

	f(x) = (1/(2λ)) ‖M D(b) x‖² − Σ_{i=1}^m x_i,   g_i(x_i) = δ_{[0, C_i]}(x_i),   c = 0,   A = b^T,

then we can fit this problem into our template (9).
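A sketch of this mapping in NumPy; the factory name and conventions are ours, matching the earlier
SMART-CD sketch.

```python
import numpy as np

def make_dual_svm(M, b, C, lam):
    """Cast the dual SVM (20) into template (9): smooth f, box g_i, A = b^T."""
    Db = M * b                         # columns of M scaled by labels: M D(b)
    def grad_f(x):                     # f(x) = (1/(2 lam)) ||M D(b) x||^2 - sum_i x_i
        return (Db.T @ (Db @ x)) / lam - 1.0
    def prox_g(i, v, s):               # prox of the box indicator = clipping
        return min(max(v, 0.0), C[i])
    A = b.reshape(1, -1)               # single linear constraint b^T x = 0
    c = np.zeros(1)
    return grad_f, prox_g, A, c
```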
We apply the version of SMART-CD specialized to the constrained setting of Section 3.4 and compare
it with Vu-Condat-CD and SDCA. Even though SDCA is a state-of-the-art method for SVMs, we are not
able to handle the bias term using SDCA. Hence, it only applies to (20) when the b^T x = 0 constraint
is removed, which causes SDCA not to converge to the optimal solution when there is a bias term in
problem (19). The following table summarizes the properties of the classification datasets we used.
	Data Set              Training Size    Number of Features    Convergence Plot
	rcv1.binary [21, 22]  20,242           47,236                Figure 3, plot 1
	a8a [21, 23]          22,696           123                   Figure 3, plot 2
	gisette [21, 24]      6,000            5,000                 Figure 3, plot 3
Figure 3 illustrates the performance of the algorithms for solving the dual formulation of SVM in (20).
We compute the duality gap for each algorithm and present the results with epochs in the horizontal
axis since per-iteration complexity of the algorithms is similar. As expected, SDCA gets stuck at
a low accuracy since it ignores one of the constraints in the problem. We demonstrate this fact in
the first experiment and then limit the comparison to SMART-CD and Vu-Condat-CD. Equipped
with restart strategy, SMART-CD shows the fastest convergence behavior due to the restricted strong
convexity of (20).
[Figure 3: three log-log plots of duality gap versus epochs for SMART-CD, SMART-CD-Restart,
Vu-Condat-CD, and (first plot only) SDCA.]
Figure 3: The convergence of 4 algorithms on the dual SVM (20) with bias. We only used SDCA on
the first dataset since it stagnates at a very low accuracy.
5 Conclusions

Coordinate descent methods have been increasingly deployed to tackle huge-scale machine learning
problems in recent years. The most notable works include [1-6]. Our method relates to several works
in the literature, including [1, 4, 7, 9, 10, 12]. The algorithms developed in [2-4] only considered
a special case of (1) with h = 0, and cannot be trivially extended to the general setting (1).
Here, our algorithm can be viewed as an adaptive variant of the method developed in [4], extended to
the sum of three functions. The idea of homotopy strategies relates to [9] for first-order primal-dual
methods. This paper further extends such an idea to randomized coordinate descent methods for
solving (1). We note that a naive application of the method developed in [4] to the smoothed problem
with a carefully chosen fixed smoothness parameter would result in the complexity O(n²/k), whereas
using our homotopy strategy on the smoothness parameter, we reduce this complexity to O(n/k).
With an additional strong convexity assumption on the problem template (1), it is possible to obtain
an O(1/k²) rate by using deterministic first-order primal-dual algorithms [9, 18]. It remains future
work to incorporate strong convexity into coordinate descent methods for solving nonsmooth
optimization problems with a faster convergence rate.
Acknowledgments

The work of O. Fercoq was supported by a public grant as part of the Investissement d'avenir project,
reference ANR-11-LABX-0056-LMH, LabEx LMH. The work of Q. Tran-Dinh was partly supported by NSF
grant DMS-1619884, USA. The work of A. Alacaoglu and V. Cevher was supported by the European
Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme
(grant agreement no. 725594 - time-data).
References

[1] Y. Nesterov, "Efficiency of coordinate descent methods on huge-scale optimization problems,"
SIAM Journal on Optimization, vol. 22, no. 2, pp. 341-362, 2012.
[2] P. Richtárik and M. Takáč, "Iteration complexity of randomized block-coordinate descent methods
for minimizing a composite function," Mathematical Programming, vol. 144, no. 1-2, pp. 1-38, 2014.
[3] P. Richtárik and M. Takáč, "Parallel coordinate descent methods for big data optimization,"
Mathematical Programming, vol. 156, no. 1-2, pp. 433-484, 2016.
[4] O. Fercoq and P. Richtárik, "Accelerated, parallel, and proximal coordinate descent," SIAM
Journal on Optimization, vol. 25, no. 4, pp. 1997-2023, 2015.
[5] S. Shalev-Shwartz and T. Zhang, "Stochastic dual coordinate ascent methods for regularized loss
minimization," Journal of Machine Learning Research, vol. 14, pp. 567-599, 2013.
[6] I. Necoara and D. Clipici, "Parallel random coordinate descent method for composite minimization:
Convergence analysis and error bounds," SIAM J. on Optimization, vol. 26, no. 1, pp. 197-226, 2016.
[7] Z. Qu and P. Richtárik, "Coordinate descent with arbitrary sampling I: Algorithms and complexity,"
Optimization Methods and Software, vol. 31, no. 5, pp. 829-857, 2016.
[8] Y. Nesterov, "Smooth minimization of non-smooth functions," Math. Prog., vol. 103, no. 1,
pp. 127-152, 2005.
[9] Q. Tran-Dinh, O. Fercoq, and V. Cevher, "A smooth primal-dual optimization framework for
nonsmooth composite convex minimization," arXiv preprint arXiv:1507.06243, 2015.
[10] O. Fercoq and P. Richtárik, "Smooth minimization of nonsmooth functions with parallel coordinate
descent methods," arXiv preprint arXiv:1309.5885, 2013.
[11] O. Fercoq and P. Bianchi, "A coordinate descent primal-dual algorithm with large step size and
possibly non separable functions," arXiv preprint arXiv:1508.04625, 2015.
[12] Y. Nesterov and S. U. Stich, "Efficiency of the accelerated coordinate descent method on
structured optimization problems," SIAM J. on Optimization, vol. 27, no. 1, pp. 110-123, 2017.
[13] Y. Nesterov, "A method for unconstrained convex minimization problem with the rate of
convergence O(1/k²)," Doklady AN SSSR, vol. 269, translated as Soviet Math. Dokl., pp. 543-547, 1983.
[14] Y. T. Lee and A. Sidford, "Efficient accelerated coordinate descent methods and faster algorithms
for solving linear systems," in Foundations of Computer Science (FOCS), 2013 IEEE Annual Symp. on,
pp. 147-156, IEEE, 2013.
[15] B. O'Donoghue and E. Candes, "Adaptive restart for accelerated gradient schemes," Foundations of
Computational Mathematics, vol. 15, no. 3, pp. 715-732, 2015.
[16] B. C. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators,"
Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681, 2013.
[17] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse
problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.
[18] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with
applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145, 2011.
[19] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, "A limited memory algorithm for bound constrained
optimization," SIAM Journal on Scientific Computing, vol. 16, no. 5, pp. 1190-1208, 1995.
[20] E. D. Dohmatob, A. Gramfort, B. Thirion, and G. Varoquaux, "Benchmarking solvers for TV-ℓ1
least-squares and logistic regression in brain imaging," in Pattern Recognition in Neuroimaging,
2014 International Workshop on, pp. 1-4, IEEE, 2014.
[21] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on
Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011.
[22] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, "RCV1: A new benchmark collection for text
categorization research," Journal of Machine Learning Research, vol. 5, pp. 361-397, 2004.
[23] M. Lichman, "UCI machine learning repository," 2013.
[24] I. Guyon, S. Gunn, A. Ben-Hur, and G. Dror, "Result analysis of the NIPS 2003 feature selection
challenge," in Advances in Neural Information Processing Systems, pp. 545-552, 2005.
[25] P. Tseng, "On accelerated proximal gradient methods for convex-concave optimization," Submitted
to SIAM J. Optim., 2008.
6,818 | 7,168 | The Importance of Communities for
Learning to Influence
Eric Balkanski
Harvard University
[email protected]
Nicole Immorlica
Microsoft Research
[email protected]
Yaron Singer
Harvard University
[email protected]
Abstract
We consider the canonical problem of influence maximization in social networks.
Since the seminal work of Kempe, Kleinberg, and Tardos [KKT03] there have been
two, largely disjoint efforts on this problem. The first studies the problem associated
with learning the generative model that produces cascades, and the second focuses
on the algorithmic challenge of identifying a set of influencers, assuming the
generative model is known. Recent results on learning and optimization imply that
in general, if the generative model is not known but rather learned from training data,
no algorithm for influence maximization can yield a constant factor approximation
guarantee using polynomially-many samples, drawn from any distribution.
In this paper we describe a simple algorithm for maximizing influence from training
data. The main idea behind the algorithm is to leverage the strong community
structure of social networks and identify a set of individuals who are influentials
but whose communities have little overlap. Although in general, the approximation
guarantee of such an algorithm is unbounded, we show that this algorithm performs
well experimentally. To analyze its performance, we prove this algorithm obtains a
constant factor approximation guarantee on graphs generated through the stochastic
block model, traditionally used to model networks with community structure.
1 Introduction
For well over a decade now, there has been extensive work on the canonical problem of influence
maximization in social networks. First posed by Domingos and Richardson [DR01, RD02] and
elegantly formulated and further developed by Kempe, Kleinberg, and Tardos [KKT03], influence
maximization is the algorithmic challenge of selecting individuals who can serve as early adopters of
a new idea, product, or technology in a manner that will trigger a large cascade in the social network.
In their seminal paper, Kempe, Kleinberg, and Tardos characterize a family of natural influence
processes for which selecting a set of individuals that maximize the resulting cascade reduces to
maximizing a submodular function under a cardinality constraint. Since submodular functions can be
maximized within a 1 1/e approximation guarantee, one can then obtain desirable guarantees for
the influence maximization problem. There have since been two, largely separate, agendas of research
on the problem. The first line of work is concerned with learning the underlying submodular function
from observations of cascades [LK03, AA05, LMF+ 07, GBL10, CKL11, GBS11, NS12, GLK12,
DSSY12, ACKP13, DSGRZ13, FK14, DBB+ 14, CAD+ 14, DGSS14, DLBS14, NPS15, HO15].
The second line of work focuses on algorithmic challenges revolving around maximizing influence,
assuming the underlying function that generates the diffusion process is known [KKT05, MR07,
SS13, BBCL14, HS15, HK16, AS16].
In this paper, we consider the problem of learning to influence where the goal is to maximize influence
from observations of cascades. This problem synthesizes both problems of learning the function
from training data and of maximizing influence given the influence function. A natural approach for
learning to influence is to first learn the influence function from cascades, and then apply a submodular
optimization algorithm on the function learned from data. Somewhat counter-intuitively, it turns
out that this approach yields desirable guarantees only under very strong learnability conditions1 .
In some cases, when there are sufficiently many samples, and one can observe exactly which node
attempts to influence whom at every time step, these learnability conditions can be met. A slight
relaxation however (e.g. when there are only partial observations [NPS15, HXKL16]), can lead to
sharp inapproximability.
A recent line of work shows that even when a function is statistically learnable, optimizing the
function learned from data can be inapproximable [BRS17, BS17]. In particular, even when the
submodular function f : 2^N → R is a coverage function (which is PMAC-learnable [BDF+12, FK14]),
one would need to observe exponentially many samples {(S_i, f(S_i))}_{i=1}^m to obtain a constant
factor approximation guarantee. Since coverage functions are special cases of the well-studied models of
approximation guarantee. Since coverage functions are special cases of the well studied models of
influence (independent cascade, linear and submodular threshold), this implies that when the influence
function is not known but learned from data, the influence maximization problem is intractable.
Learning to influence social networks. As with all impossibility results, the inapproximability
discussed above holds for worst case instances, and it may be possible that such instances are rare
for influence in social networks. In recent work, it was shown that when a submodular function has
bounded curvature, there is a simple algorithm that can maximize the function under a cardinality
constraint from samples [BRS16]. Unfortunately, simple examples show that submodular functions
that dictate influence processes in social networks do not have bounded curvature. Are there other
reasonable conditions on social networks that yield desirable approximation guarantees?
Main result. In this paper we present a simple algorithm for learning to influence. This algorithm
leverages the idea that social networks exhibit strong community structure. At a high level, the
algorithm observes cascades and aims to select a set of nodes that are influential, but belong to
different communities. Intuitively, when an influential node from a certain community is selected
to initiate a cascade, the marginal contribution of adding another node from that same community
is small, since the nodes in that community were likely already influenced. This observation can
be translated into a simple algorithm which performs very well in practice. Analytically, since
community structure is often modeled using stochastic block models, we prove that the algorithm
obtains a constant factor approximation guarantee in such models, under mild assumptions.
1.1 Technical overview
The analysis for the approximation guarantees lies at the intersection of combinatorial optimization
and random graph theory. We formalize the intuition that the algorithm leverages the community
structure of social networks in the standard model to analyze communities, which is the stochastic
block model. Intuitively, the algorithm obtains good approximations by picking the nodes that have
the largest individual influence while avoiding picking multiple nodes in the same community by
pruning nodes with high influence overlap. The individual influence of nodes and their overlap
are estimated by the algorithm with what we call first and second order marginal contributions of
nodes, which can be estimated from samples. We then uses phase transition results of Erd?os?R?nyi
random graphs and branching processes techniques to compare these individual influences for nodes
in different communities in the stochastic block model and bound the overlap of pairs of nodes.
The optimization from samples model. Optimization from samples was recently introduced by
[BRS17] in the context of submodular optimization; we give the definition for general set
functions.
In general, the submodular function f : 2N ! R needs to be learnable everywhere within arbitrary precision,
i.e. for every set S one needs to assume that the learner can produce a surrogate function f? : 2N ! R s.t. for
every S ? N the surrogate guarantees to be (1 ?)f (S) ? f?(S) ? (1 + ?)f (S), for ? 2 o(1)[HS16, HS17].
2
Definition 1. A class of functions F = {f : 2^N → R} is α-optimizable from samples over
distribution D under constraint M if there exists an algorithm s.t. for all f ∈ F, given a set of
samples {(S_i, f(S_i))}_{i=1}^m where the sets S_i are drawn i.i.d. from D, the algorithm returns
S ∈ M s.t.:

	Pr_{S_1,...,S_m ∼ D} [ E[f(S)] ≥ α · max_{T ∈ M} f(T) ] ≥ 1 − δ,

where the expectation is over the decisions of the algorithm and m ∈ poly(|N|, 1/δ).
We focus on bounded product distributions D, so every node a is, independently, in S ∼ D with some
probability p_a ∈ [1/poly(n), 1 − 1/poly(n)]. We assume this is the case throughout the paper.
Influence process. We assume that the influence process follows the standard independent cascade
model, in which a node a influences each of its neighbors b with some probability q_{ab},
independently. Thus, given a seed set of nodes S, the set of influenced nodes is the set of nodes
connected to some node in S in the random subgraph of the network which contains every edge ab
independently with probability q_{ab}. We define f(S) to be the expected number of nodes influenced
by S according to the independent cascade model over some weighted social network.
The learning to influence model: optimization from samples for influence maximization. The
learning to influence model is an interpretation of the optimization from samples model [BRS17]
for the specific problem of influence maximization in social networks. We are given a collection of
samples {(S_i, |cc(S_i)|)}_{i=1}^m, where the sets S_i are the seed sets of nodes and |cc(S_i)| is
the number of nodes influenced by S_i, i.e., the number of nodes that are connected to S_i in the
random subgraph of the network. This number of nodes is a random variable with expected value
f(S_i) := E[|cc(S_i)|] over the realization of the influence process. Each sample is an independent
realization of the influence process. The goal is then to find a set of nodes S under a cardinality
constraint k which maximizes the influence in expectation, i.e., find a set S of size at most k which
maximizes the expected number of nodes f(S) influenced by seed set S.
2 The Algorithm
We present the main algorithm, COPS. This algorithm is based on a novel optimization from samples
technique which detects overlap in the marginal contributions of two different nodes, which is useful
to avoid picking two nodes who have intersecting influence over a same collection of nodes.
2.1
Description of COPS
COPS, consists of two steps. It first orders nodes in decreasing order of first order marginal
contribution, which is the expected marginal contribution of a node a to a random set S ? D. Then,
it iteratively removes nodes a whose marginal contribution overlaps with the marginal contribution of
at least one node before a in the ordering. The solution is the k first nodes in the pruned ordering.
Algorithm 1 COPS: learning to influence networks with COmmunity Pruning from Samples.
Input: Samples S = {(S, f(S))}, acceptable overlap α.
  Order nodes according to their first order marginal contributions.
  Iteratively remove from this ordering every node a whose marginal contribution has overlap of at
  least α with the contribution of at least one node before a in this ordering.
  return the k first nodes in the ordering
The strong performance of this algorithm for the problem of influence maximization is best explained
with the concept of communities. Intuitively, this algorithm first orders nodes in decreasing order of
their individual influence and then removes nodes which are in a same community. This second step
allows the algorithm to obtain a diverse solution which influences multiple different communities
of the social network. In comparison, previous algorithms in optimization from samples [BRS16,
BRS17] only use first order marginal contributions and perform well if the function is close to linear.
Due to the high overlap in influence between nodes in a same community, influence functions are far
from being linear, and these algorithms have poor performance for influence maximization since they
only pick nodes from a very small number of communities.
2.2 Computing overlap using second order marginal contributions

We define second order marginal contributions, which are used to compute the overlap between the
marginal contributions of two nodes.

Definition 2. The second order expected marginal contribution of a node a to a random set S
containing node b is

	v_b(a) := E_{S ∼ D : a ∉ S, b ∈ S} [ f(S ∪ {a}) − f(S) ].

The first order marginal contribution v(a) of node a is defined similarly as the marginal contribution
of a node a to a random set S, i.e., v(a) := E_{S ∼ D : a ∉ S} [ f(S ∪ {a}) − f(S) ]. These
contributions can be estimated arbitrarily well for product distributions D by taking the difference
between the average value of samples containing a and b and the average value of samples containing b
but not a (see Appendix B for details).
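A sketch of these estimators in Python (our own names; we assume each conditioning event occurs at
least once in the sample set, otherwise the empirical means below are undefined):

```python
import numpy as np

def first_order(samples, a):
    """Estimate of v(a): average value of samples containing a, minus the
    average value of samples not containing a.  `samples` is a list of
    (set_of_nodes, observed_value) pairs."""
    with_a = [v for s, v in samples if a in s]
    without_a = [v for s, v in samples if a not in s]
    return np.mean(with_a) - np.mean(without_a)

def second_order(samples, a, b):
    """Estimate of v_b(a): average over samples containing both a and b,
    minus the average over samples containing b but not a."""
    both = [v for s, v in samples if a in s and b in s]
    only_b = [v for s, v in samples if a not in s and b in s]
    return np.mean(both) - np.mean(only_b)
```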
The subroutine OVERLAP(a, b, α), α ∈ [0, 1], compares the second order marginal contribution of
a to a random set containing b with the first order marginal contribution of a to a random set. If b
causes the marginal contribution of a to decrease by at least a factor of 1 − α, then we say that a has
marginal contribution with overlap of at least α with node b.

Algorithm 2 OVERLAP(a, b, α): returns true if a and b have marginal contributions that overlap by
at least a factor of α.
Input: Samples S = {(S, f(S))}, nodes a and b, acceptable overlap α.
  If the second order marginal contribution v_b(a) is at least a factor of 1 − α smaller than the
  first order marginal contribution v(a),
  return Node a has overlap of at least α with node b

OVERLAP is used to detect nodes in a same community. In the extreme case where two nodes a and b
are in a community C where any node in C influences all of C, the second order marginal contribution
v_b(a) of a to a random set S containing b is v_b(a) = 0, since b already influences all of C so a
does not add any value, while v(a) ≈ |C|. In the opposite case where a and b are in two communities
which are not connected in the network, we have v(a) = v_b(a), since adding b to a random set S has
no impact on the value added by a.
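Putting the pieces together, a sketch of COPS with the OVERLAP test, reusing the estimators from
the previous sketch; pruning against the kept nodes (rather than against all earlier nodes, pruned
or not) is one natural reading of the ordering rule:

```python
def overlap(samples, a, b, alpha):
    """OVERLAP(a, b, alpha): does b shrink a's contribution by a factor 1 - alpha?"""
    return second_order(samples, a, b) < (1 - alpha) * first_order(samples, a)

def cops(samples, nodes, k, alpha):
    """COPS: order by first order contribution, then prune overlapping nodes."""
    order = sorted(nodes, key=lambda a: first_order(samples, a), reverse=True)
    kept = []
    for a in order:
        if not any(overlap(samples, a, b, alpha) for b in kept):
            kept.append(a)
        if len(kept) == k:
            break
    return kept
```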
2.3 Analyzing community structure

The main benefit of COPS is that it leverages the community structure of social networks. To
formalize this explanation, we analyze our algorithm in the standard model used to study the
community structure of networks, the stochastic block model. In this model, a fixed set of nodes
V is partitioned into communities C_1, ..., C_ℓ. The network is then a random graph G = (V, E)
where edges are added to E independently, and where an intra-community edge is in E with much
larger probability than an inter-community edge. These edges are added with identical probability
q_C^{sb} for every edge inside a same community C, but with different probabilities for edges
between different communities C_i and C_j. We illustrate this model in Figure 1.
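For intuition, here is a sketch of drawing one sample (S, |cc(S)|) under the model analyzed in
Section 3 below (no inter-community edges, so each community can be processed independently, and
each intra-community edge survives with probability p_C); all names are ours:

```python
import numpy as np

def sample_cascade(community_sizes, p_edge, p_seed, rng):
    """Draw one sample (S, |cc(S)|): seeds enter S independently w.p. p_seed,
    each community C is realized as an Erdos-Renyi graph G_{|C|, p_C}, and the
    influenced set is found by flood fill from the seeds."""
    S, influenced, start = set(), set(), 0
    for size, pC in zip(community_sizes, p_edge):
        nodes = list(range(start, start + size))
        start += size
        seeds = [v for v in nodes if rng.random() < p_seed]
        S.update(seeds)
        adj = {v: [] for v in nodes}
        for idx, u in enumerate(nodes):              # realize G[C]
            for w in nodes[idx + 1:]:
                if rng.random() < pC:
                    adj[u].append(w)
                    adj[w].append(u)
        stack = list(seeds)
        while stack:                                 # cc(S) within C
            u = stack.pop()
            if u not in influenced:
                influenced.add(u)
                stack.extend(adj[u])
    return S, len(influenced)

rng = np.random.default_rng(0)
print(sample_cascade([50, 30], [0.10, 0.02], p_seed=0.02, rng=rng))
```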
3 Dense Communities and Small Seed Set in the Stochastic Block Model

In this section, we show that COPS achieves a 1 − O(|C_k|^{−1}) approximation, where C_k is the k-th
largest community, in the regime with dense communities and a small seed set, which is described
below. We show that the algorithm picks a node from each of the k largest communities with
high probability, which is the optimal solution. In the next section, we show a constant factor
approximation algorithm for a generalization of this setting, which requires a more intricate analysis.

In order to focus on the main characteristics of the community structure as an explanation for the
performance of the algorithm, we make the following simplifying assumptions for the analysis.
Figure 1: An illustration of the stochastic block model with communities C_1, C_2, C_3 and C_4 of
sizes 6, 4, 4 and 4. The optimal solution for influence maximization with k = 4 is in green. Picking
the k first nodes in the ordering by marginal contributions without pruning, as in [BRS16], leads to a
solution with nodes from only C_1 (red). By removing nodes with overlapping marginal contributions,
COPS obtains a diverse solution.
We first assume that there are no inter-community edges.² We also assume that the random graph
obtained from the stochastic block model is redrawn for every sample, and that we aim to find a good
solution in expectation over both the stochastic block model and the independent cascade model.

Formally, let G = (V, E) be the random graph over n nodes obtained from an independent cascade
process over the graph generated by the stochastic block model. As for the stochastic block model,
edge probabilities for the independent cascade model may vary between different communities and
are identical within a single community C, where all edges have weight q_C^{ic}. Thus, an edge e
between two nodes in a community C is in E with probability p_C := q_C^{ic} · q_C^{sb},
independently for every edge, where q_C^{ic} and q_C^{sb} are the edge probabilities in the
independent cascade model and the stochastic block model respectively. The total influence of a seed
set S is then |cc_G(S)|, where cc_G(S) is the set of nodes connected to S in G, and we drop the
subscript when it is clear from context. Thus, the objective function is f(S) := E_G[|cc(S)|]. We
now describe the two assumptions for this section.

Dense communities. We assume that for the k largest communities C, p_C > 3 log |C| / |C| and
C has super-constant size (|C| = ω(1)). This assumption corresponds to communities where the
probability p_C that a node a_i ∈ C influences another node a_j ∈ C is large. Since the subgraph
G[C] of G induced by a community C is an Erdős–Rényi random graph, we get that G[C] is connected
with high probability (see Appendix C).

Lemma 3. [ER60] Assume C is a "dense" community. Then the subgraph G[C] of G is connected
with probability 1 − O(|C|^{−2}).

Small seed set. We also assume that the seed sets S ∼ D are small enough that they rarely
intersect with a fixed community C, i.e., Pr_{S∼D}[S ∩ C = ∅] ≥ 1 − o(1). This assumption
corresponds to cases where the set of early influencers is small, which is usually the case in
cascades.
The analysis in this section relies on two main lemmas. We first show that the first order marginal
contribution of a node is approximately the size of the community it belongs to (Lemma 4). Thus,
the ordering by marginal contributions orders elements by the size of the community they belong to.
Then, we show that any node a ∈ C such that there is a node b ∈ C before a in the ordering is pruned
(Lemma 5). Regarding the distribution S ∼ D generating the samples, as previously mentioned, we
consider any bounded product distribution. This implies that, w.p. 1 − 1/poly(n), the algorithm can
compute marginal contribution estimates ṽ that are all a 1/poly(n)-additive approximation to the
true marginal contributions v (see Appendix B for a formal analysis of the estimates). Thus, we give
the analysis for the true marginal contributions, which, with probability 1 − 1/poly(n) over the
samples, easily extends to arbitrarily good estimates.

The following lemma shows that the ordering by first order marginal contributions corresponds to the
ordering by decreasing community size.

Lemma 4. For all a ∈ C, where C is one of the k largest communities, the first order marginal
contribution of node a is approximately the size of its community, i.e., (1 − o(1))|C| ≤ v(a) ≤ |C|.
Proof. Assume $a$ is a node in one of the $k$ largest communities. Let $D_a$ and $D_{-a}$ denote the distributions $S \sim D$ conditioned on $a \in S$ and $a \notin S$ respectively. We also denote marginal contributions by $f_S(a) := f(S \cup \{a\}) - f(S)$. We obtain
²The analysis easily extends to cases where inter-community edges form with probability significantly smaller than $q_C^{sb}$, for all $C$.
$$v(a) = \mathbb{E}_{S \sim D_{-a},\, G}[f_S(a)] \geq \Pr_{S \sim D_{-a}}[S \cap C = \emptyset] \cdot \Pr_G[cc(a) = C] \cdot \mathbb{E}_{S \sim D_{-a}:\, S \cap C = \emptyset,\; G:\, cc(a) = C}[f_S(a)] = \Pr_{S \sim D_{-a}}[S \cap C = \emptyset] \cdot \Pr_G[cc(a) = C] \cdot |C| \geq (1 - o(1)) \cdot |C|$$
where the last inequality is by the small seed set assumption and since $C$ is connected with probability $1 - o(1)$ (Lemma 3 and $|C| = \omega(1)$ by the dense community assumption). For the upper bound, $v(a)$ is trivially at most the size of $a$'s community since there are no inter-community edges.
The next lemma shows that the algorithm does not pick two nodes in the same community.
Lemma 5. With probability $1 - o(1)$, for all pairs of nodes $a, b$ such that $a, b \in C$ where $C$ is one of the $k$ largest communities, OVERLAP$(a, b, \alpha)$ = True for any constant $\alpha \in [0, 1)$.
Proof. Let $a, b$ be two nodes in one of the $k$ largest communities $C$ and $D_{-a,b}$ denote the distribution $S \sim D$ conditioned on $a \notin S$ and $b \in S$. Then,
$$v_b(a) = \mathbb{E}_{S \sim D_{-a,b}}[f_S(a)] \leq \Pr[b \in cc(a)] \cdot 0 + \Pr[b \notin cc(a)] \cdot |C| = o(1) \leq o(1) \cdot v(a)$$
where the last equality is since $G[C]$ is not connected w.p. $O(|C|^{-2})$ by Lemma 3 and since $|C| = \omega(1)$ by the dense community assumption, which concludes the proof.
By combining Lemmas 4 and 5, we obtain the main result for this section (proof in Appendix D).
Theorem 6. In the dense communities and small seed set setting, COPS with $\alpha$-overlap allowed, for any constant $\alpha \in (0, 1)$, is a $(1 - o(1))$-approximation algorithm for learning to influence from samples from a bounded product distribution $D$.
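Lemmas 4 and 5 suggest the shape of the algorithm: order nodes by estimated first order marginal contribution and prune a node whose contribution collapses once an already-selected node is conditioned on. The sketch below is our own simplification; in particular, the OVERLAP test shown (declare overlap when $v_b(a) \leq (1 - \alpha)\, v(a)$) is one reading consistent with Lemmas 5, 10 and 11, not a verbatim transcription of COPS.

```python
def overlap(v_a, v_b_a, alpha):
    """One possible OVERLAP(a, b, alpha) test: node a overlaps with b when
    a's contribution conditioned on b drops below a (1 - alpha) fraction of
    its unconditioned contribution v_a."""
    return v_b_a <= (1 - alpha) * v_a

def cops(nodes, v, v_cond, k, alpha):
    """Order nodes by first order marginal contribution v[a]; greedily keep
    a node unless it overlaps with an already-kept node. v_cond[(a, b)] is
    an estimate of the second order contribution v_b(a)."""
    kept = []
    for a in sorted(nodes, key=lambda x: v[x], reverse=True):
        if all(not overlap(v[a], v_cond[(a, b)], alpha) for b in kept):
            kept.append(a)
        if len(kept) == k:
            break
    return kept
```

Under this reading, a larger allowed overlap $\alpha$ prunes less aggressively, which matches the overlap experiment described in Section 5.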
4 Constant Approximation for General Stochastic Block Model
In this section, we relax assumptions from the previous section and show that COPS is a constant factor approximation algorithm in this more demanding setting. Recall that $G$ is the random graph obtained from both the stochastic block model and the independent cascade model. A main observation used in the analysis is that the random subgraph $G[C]$, for some community $C$, is an Erdős–Rényi random graph $G_{|C|, p_C}$.
Relaxation of the assumptions. Instead of only considering dense communities where $p_C = \Omega((\log |C|)/|C|)$, we consider both tight communities $C$ where $p_C \geq (1 + \epsilon)/|C|$ for some constant $\epsilon > 0$ and loose communities $C$ where $p_C \leq (1 - \epsilon)/|C|$ for some constant $\epsilon > 0$.³ We also relax the small seed set assumption to the reasonable non-ubiquitous seed set assumption. Instead of having a seed set $S \sim D$ rarely intersect with a fixed community $C$, we only assume that $\Pr_{S \sim D}[S \cap C = \emptyset] \geq \beta$ for some constant $\beta > 0$. Again, since seed sets are of small sizes in practice, it seems reasonable that with some constant probability a community does not contain any seeds.
Overview of analysis. At a high level, the analysis exploits the remarkably sharp threshold for the
phase transition of Erdős–Rényi random graphs. This phase transition (Lemma 7) tells us that a tight
community C contains w.h.p. a giant connected component with a constant fraction of the nodes
from C. Thus, a single node from a tight community influences a constant fraction of its community
in expectation. The ordering by first order marginal contributions thus ensures a constant factor
approximation of the value from nodes in tight communities (Lemma 10). On the other hand, we show
that a node from a loose community influences only at most a constant number of nodes in expectation
(Lemma 8) by using branching processes. Since the algorithm checks for overlap using second order
marginal contributions, the algorithm picks at most one node from any tight community (Lemma 11).
Combining all the pieces together, we obtain a constant factor approximation (Theorem 12).
³Thus, we consider all possible sizes of communities except sizes that converge to exactly $1/p_C$, which is unlikely to occur in practice.
We first state the result for the giant connected component in a tight community, which is an immediate corollary of the prominent giant connected component result in the Erdős–Rényi model.
Lemma 7. [ER60] Let $C$ be a tight community with $|C| = \omega(1)$, then $G[C]$ has a "giant" connected component containing a constant fraction of the nodes in $C$ w.p. $1 - o(1)$.
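The sharp threshold the analysis leans on is easy to observe numerically (a small standalone demo using networkx, not part of the paper's experiments; parameters are our own):

```python
import networkx as nx

n = 10_000
for label, p in [("loose", 0.5 / n), ("tight", 1.5 / n)]:
    G = nx.erdos_renyi_graph(n, p, seed=0)
    giant = max(nx.connected_components(G), key=len)
    print(label, len(giant) / n)
# Typical output: the loose graph's largest component is a vanishing fraction
# of the nodes, while the tight graph has a giant component containing a
# constant fraction (~0.58 for p = 1.5/n).
```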
The following lemma analyzes the influence of a node in a loose community through the lens of Galton–Watson branching processes to show that such a node influences at most a constant number of nodes in expectation. The proof is deferred to Appendix E.
Lemma 8. Let $C$ be a loose community, then $f(\{a\}) \leq c$ for all $a \in C$ and some constant $c$.
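The intuition behind Lemma 8 can be sanity-checked directly: in a loose community, the number of newly influenced nodes is dominated by a subcritical Galton–Watson process with mean offspring $1 - \epsilon < 1$, whose expected total progeny is the constant $1/\epsilon$. A quick simulation (our own toy, with Poisson offspring counts):

```python
import numpy as np

rng = np.random.default_rng(0)

def total_progeny(m, cap=100_000):
    """Total size of a Galton-Watson tree with Poisson(m) offspring counts."""
    size = frontier = 1
    while frontier and size < cap:
        frontier = int(rng.poisson(m * frontier))  # sum of Poissons is Poisson
        size += frontier
    return size

eps = 0.3
sizes = [total_progeny(1 - eps) for _ in range(20_000)]
print(np.mean(sizes), 1 / eps)  # both close to 1/eps ~ 3.33
```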
We can now upper bound the value of the optimal solution $S^\star$. Let $C_1, \ldots, C_t$ be the $t \leq k$ tight communities that have at least one node in the optimal solution $S^\star$ and that are of super-constant size, i.e., $|C| = \omega(1)$. Without loss of generality, we order these communities in decreasing order of their size $|C_i|$.
Lemma 9. Let $S^\star$ be the optimal set of nodes and $C_i$ and $t$ be defined as above. There exists a constant $c$ such that $f(S^\star) \leq \sum_{i=1}^{t} |C_i| + c \cdot k$.
Proof. Let $S^\star_A$ and $S^\star_B$ be a partition of the optimal nodes into nodes that are in tight communities with super-constant individual influence and nodes that are not in such a community. The influence $f(S^\star_A)$ is trivially upper bounded by $\sum_{i=1}^{t} |C_i|$. Next, there exists some constant $c$ s.t. $f(S^\star_B) \leq \sum_{a \in S^\star_B} f(\{a\}) \leq c \cdot k$, where the first inequality is by submodularity and the second since nodes in loose communities have constant individual influence by Lemma 8 and nodes in tight communities without super-constant individual influence have constant influence by definition. We conclude that by submodularity, $f(S^\star) \leq f(S^\star_A) + f(S^\star_B) \leq \sum_{i=1}^{t} |C_i| + c \cdot k$.
Next, we argue that the solution returned by the algorithm is within a constant factor of $\sum_{i=1}^{t} |C_i|$.
Lemma 10. Let $a$ be the $i$th node in the ordering by first order marginal contribution after the pruning and $C_i$ be the $i$th largest tight community with super-constant individual influence and with at least one node in the optimal solution $S^\star$. Then, $f(\{a\}) \geq \gamma |C_i|$ for some constant $\gamma > 0$.
Proof. By definition of $C_i$, we have $|C_1| \geq \cdots \geq |C_i|$, all tight communities. Let $b$ be a node in $C_j$ for $j \in [i]$, $\mathbb{1}_{gc(C)}$ be the indicator variable indicating if there is a giant component in community $C$, and $gc(C)$ be this giant component. We get
$$v(b) \geq \Pr[\mathbb{1}_{gc(C_j)}] \cdot \Pr_{S \sim D_{-b}}[S \cap C_j = \emptyset] \cdot \Pr[b \in gc(C_j)] \cdot \mathbb{E}[|gc(C_j)| : b \in gc(C_j)] \geq (1 - o(1)) \cdot c_1 \cdot c_2 \cdot c_3 |C_j| \geq \gamma |C_j|$$
for some constants $c_1, c_2, c_3, \gamma > 0$ by Lemma 7 and the non-ubiquitous assumption. Similarly as in Theorem 6, if $a$ and $b$ are in different communities, OVERLAP$(a, b, \alpha)$ = False for $\alpha \in (0, 1]$. Thus, there is at least one node $b \in \cup_{j=1}^{i} C_j$ at position $i$ or after in the ordering after the pruning, and $v(b) \geq \gamma |C_j|$ for some $j \in [i]$. By the ordering by first order marginal contributions and since node $a$ is in the $i$th position, $v(a) \geq v(b)$, and we get that $f(\{a\}) \geq v(a) \geq v(b) \geq \gamma |C_j| \geq \gamma |C_i|$.
Next, we show that the algorithm never picks two nodes from the same tight community; we defer the proof to Appendix E.
Lemma 11. If $a, b \in C$ and $C$ is a tight community, then OVERLAP$(a, b, \alpha)$ = True for $\alpha = o(1)$.
We combine the above lemmas to obtain the approximation guarantee of COPS (proof in Appendix E).
Theorem 12. With overlap allowed $\alpha = 1/\mathrm{poly}(n)$, COPS is a constant factor approximation algorithm for learning to influence from samples drawn from a bounded product distribution $D$ in the setting with tight and loose communities and non-ubiquitous seed sets.
5 Experiments
In this section, we compare the performance of COPS and three other algorithms on real and synthetic networks. We show that COPS performs well in practice: it outperforms the previous optimization from samples algorithm and gets closer to the solution obtained when given complete access to the influence function.
[Figure 2 shows six panels (DBLP, Facebook, Stochastic Block Model 1, Stochastic Block Model 2, Erdős–Rényi, Preferential Attachment), each plotting the performance of GREEDY, COPS, MARGI, and RANDOM as a function of the edge weight q, the cardinality k, or the number of nodes n.]
Figure 2: Empirical performance of COPS against the GREEDY upper bound, the previous optimization from samples algorithm MARGI, and a random set.
Experimental setup. The first synthetic network considered is the stochastic block model, SBM 1, where communities have random sizes, with one community of size significantly larger than the other communities. We maintained the same expected community size as n varied. In the second stochastic block model, SBM 2, all communities have the same expected size and the number of communities was fixed as n varied. The third and fourth synthetic networks were an Erdős–Rényi (ER) random graph and the preferential attachment model (PA). Experiments were also conducted on two real networks
publicly available ([LK15]). The first is a subgraph of the Facebook social network with n = 4k
and m = 88k. The second is a subgraph of the DBLP co-authorship network, which has ground
truth communities as described in [LK15], where nodes of degree at most 10 were pruned to obtain
n = 54k, m = 361k and where the 1.2k nodes with degree at least 50 were considered as potential
nodes in the solution.
Benchmarks. We considered three different benchmarks to compare the COPS algorithm against. The standard GREEDY algorithm in the value query model is an upper bound since it is the optimal efficient algorithm given value query access to the function, while COPS is in the more restricted setting with only samples. MARGI is the optimization from samples algorithm which picks the k nodes with highest first order marginal contribution ([BRS16]) and does not use second order marginal contributions. RANDOM simply returns a random set. All the samples are drawn from the product distribution with marginal probability k/n, so that samples have expected size k. We further describe the parameters of each plot in Appendix F.
Empirical evaluation. COPS significantly outperforms the previous optimization from samples algorithm MARGI, getting much closer to the GREEDY upper bound. We observe that the more community structure there is in the network, the better the performance of COPS compared to MARGI, e.g., SBM vs ER and PA (which do not have a community structure). When the edge weight $q := q^{ic}$ for the cascades is small, the function is near-linear and MARGI performs well, whereas when it is large, there is a lot of overlap and COPS performs better. The performance of COPS as a function of the overlap allowed (experiment in Appendix F) can be explained as follows: its performance slowly increases as the overlap allowed increases and COPS can pick from a larger collection of nodes, until it drops when it allows too much overlap and picks mostly very close nodes from the same community. For SBM 1 with one larger community, MARGI is trapped into only picking nodes from that larger community and performs even worse than RANDOM. As n increases, the number of nodes influenced increases roughly linearly for SBM 2 when the number of communities is fixed, since the number of nodes per community increases linearly, which is not the case for SBM 1.
References
[AA05] Eytan Adar and Lada A. Adamic. Tracking information epidemics in blogspace. In WI, 2005.
[ACKP13] Bruno D. Abrahao, Flavio Chierichetti, Robert Kleinberg, and Alessandro Panconesi. Trace
complexity of network inference. In KDD, 2013.
[AS16] Rico Angell and Grant Schoenebeck. Don't be greedy: Leveraging community structure to find
high quality seed sets for influence maximization. arXiv preprint arXiv:1609.06520, 2016.
[BBCL14] Christian Borgs, Michael Brautbar, Jennifer T. Chayes, and Brendan Lucier. Maximizing social
influence in nearly optimal time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium
on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 946–957,
2014.
[BDF+ 12] Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, and
Tim Roughgarden. Sketching valuation functions. In Proceedings of the twenty-third annual
ACM-SIAM symposium on Discrete Algorithms, pages 1025–1035. Society for Industrial and
Applied Mathematics, 2012.
[BHK] Avrim Blum, John Hopcroft, and Ravindran Kannan. Foundations of data science.
[BRS16] Eric Balkanski, Aviad Rubinstein, and Yaron Singer. The power of optimization from samples. In
Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information
Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4017–4025, 2016.
[BRS17] Eric Balkanski, Aviad Rubinstein, and Yaron Singer. The limitations of optimization from samples.
In STOC, 2017.
[BS17] Eric Balkanski and Yaron Singer. The sample complexity of optimizing a convex function. In
COLT, 2017.
[CAD+ 14] Justin Cheng, Lada A. Adamic, P. Alex Dow, Jon M. Kleinberg, and Jure Leskovec. Can cascades
be predicted? In WWW, 2014.
[CKL11] Flavio Chierichetti, Jon M. Kleinberg, and David Liben-Nowell. Reconstructing patterns of
information diffusion from incomplete observations. In NIPS, 2011.
[DBB+ 14] Abir De, Sourangshu Bhattacharya, Parantapa Bhattacharya, Niloy Ganguly, and Soumen
Chakrabarti. Learning a linear influence model from transient opinion dynamics. In CIKM,
2014.
[DGSS14] Hadi Daneshmand, Manuel Gomez-Rodriguez, Le Song, and Bernhard Schölkopf. Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm.
In ICML, 2014.
[DLBS14] Nan Du, Yingyu Liang, Maria-Florina Balcan, and Le Song. Influence function learning in
information diffusion networks. In ICML, 2014.
[DR01] Pedro Domingos and Matthew Richardson. Mining the network value of customers. In KDD,
2001.
[DSGRZ13] Nan Du, Le Song, Manuel Gomez-Rodriguez, and Hongyuan Zha. Scalable influence estimation
in continuous-time diffusion networks. In NIPS, 2013.
[DSSY12] Nan Du, Le Song, Alexander J. Smola, and Ming Yuan. Learning networks of heterogeneous
influence. In NIPS, 2012.
[ER60] Paul Erdős and Alfréd Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 5(1):17–60, 1960.
[FK14] Vitaly Feldman and Pravesh Kothari. Learning coverage functions and private release of marginals.
In COLT, 2014.
[GBL10] Amit Goyal, Francesco Bonchi, and Laks VS Lakshmanan. Learning influence probabilities in
social networks. In KDD, 2010.
[GBS11] Manuel Gomez-Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal
dynamics of diffusion networks. In ICML, 2011.
9
[GLK12] Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion
and influence. ACM Transactions on Knowledge Discovery from Data, 5(4):21, 2012.
[HK16] Xinran He and David Kempe. Robust influence maximization. In Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA,
USA, August 13-17, 2016, pages 885–894, 2016.
[HO15] Jean Honorio and Luis Ortiz. Learning the structure and parameters of large-population graphical
games from behavioral data. Journal of Machine Learning Research, 16:1157–1210, 2015.
[HS15] Thibaut Horel and Yaron Singer. Scalable methods for adaptively seeding a social network. In
Proceedings of the 24th International Conference on World Wide Web, WWW 2015, Florence, Italy,
May 18-22, 2015, pages 441–451, 2015.
[HS16] Thibaut Horel and Yaron Singer. Maximization of approximately submodular functions. In
Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information
Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3045–3053, 2016.
[HS17] Avinatan Hassidim and Yaron Singer. Submodular maximization under noise. In COLT, 2017.
[HXKL16] Xinran He, Ke Xu, David Kempe, and Yan Liu. Learning influence functions from incomplete
observations. In Advances in Neural Information Processing Systems 29: Annual Conference on
Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2065–2073, 2016.
[KKT03] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a
social network. In KDD, 2003.
[KKT05] David Kempe, Jon M. Kleinberg, and Éva Tardos. Influential nodes in a diffusion model for social
networks. In Automata, Languages and Programming, 32nd International Colloquium, ICALP
2005, Lisbon, Portugal, July 11-15, 2005, Proceedings, pages 1127–1138, 2005.
[LK03] David Liben-Nowell and Jon M. Kleinberg. The link prediction problem for social networks. In
CIKM, 2003.
[LK15] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection, 2015.
[LMF+ 07] Jure Leskovec, Mary McGlohon, Christos Faloutsos, Natalie S. Glance, and Matthew Hurst.
Patterns of cascading behavior in large blog graphs. In SDM, 2007.
[MR07] Elchanan Mossel and Sébastien Roch. On the submodularity of influence in social networks. In
STOC, 2007.
[NPS15] Harikrishna Narasimhan, David C. Parkes, and Yaron Singer. Learnability of influence in networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural
Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3186–3194, 2015.
[NS12] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In SIGMETRICS/Performance, 2012.
[RD02] Matthew Richardson and Pedro Domingos. Mining knowledge-sharing sites for viral marketing.
In KDD, 2002.
[SS13] Lior Seeman and Yaron Singer. Adaptive seeding in social networks. In 54th Annual IEEE
Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA,
USA, pages 459–468, 2013.
6,819 | 7,169 | Multiplicative Weights Update with Constant
Step-Size in Congestion Games: Convergence, Limit
Cycles and Chaos
Gerasimos Palaiopanos∗
SUTD
Singapore
[email protected]
Ioannis Panageas†
MIT
Cambridge, MA 02139
[email protected]
Georgios Piliouras‡
SUTD
Singapore
[email protected]
Abstract
The Multiplicative Weights Update (MWU) method is a ubiquitous meta-algorithm
that works as follows: A distribution is maintained on a certain set, and at each
step the probability assigned to action $\gamma$ is multiplied by $(1 - \epsilon C(\gamma)) > 0$ where $C(\gamma)$ is the "cost" of action $\gamma$ and then rescaled to ensure that the new values form a distribution. We analyze MWU in congestion games where agents use arbitrary admissible constants as learning rates $\epsilon$ and prove convergence to exact Nash equilibria. Interestingly, this convergence result does not carry over to the nearly homologous MWU variant where at each step the probability assigned to action $\gamma$ is multiplied by $(1 - \epsilon)^{C(\gamma)}$, even for the simplest case of two-agent, two-strategy
load balancing games, where such dynamics can provably lead to limit cycles or
even chaotic behavior.
1 Introduction
The Multiplicative Weights Update (MWU) is a ubiquitous meta-algorithm with numerous applications in different fields [2]. It is particularly useful in game theory due to its regret-minimizing
properties [24, 11]. It is typically introduced in two nearly identical variants, the one in which at
each step the probability assigned to action $\gamma$ is multiplied by $(1 - \epsilon C(\gamma))$ and the one in which it is multiplied by $(1 - \epsilon)^{C(\gamma)}$, where $C(\gamma)$ is the cost of action $\gamma$. We will refer to the first as the linear variant, MWUℓ, and the second as the exponential, MWUe (also known as Hedge). In the
literature there is little distinction between these two variants as both carry the same advantageous
regret-minimizing property. It is also well known that in order to achieve sublinear regret, the learning
rate must be decreasing as time progresses. This constraint raises a natural question: Are there
interesting classes of games where MWU behaves well without the need to fine-tune its learning rate?
A natural setting to test the learning behavior of MWU with constant learning rates is the wellstudied class of congestion games. Unfortunately, even for the simplest instances of congestion
games MWUe fails to converge to equilibria. For example, even in the simplest case of two balls two
∗Gerasimos Palaiopanos would like to acknowledge a SUTD Presidential fellowship.
†Ioannis Panageas would like to acknowledge a MIT-SUTD postdoctoral fellowship. Part of this work was completed while Ioannis Panageas was a PhD student at Georgia Institute of Technology and a visiting scientist at the Simons Institute for the Theory of Computing.
‡Georgios Piliouras would like to acknowledge SUTD grant SRG ESD 2015 097, MOE AcRF Tier 2 Grant 2016-T2-1-170 and a NRF Fellowship. Part of this work was completed while Georgios Piliouras was a visiting scientist at the Simons Institute for the Theory of Computing.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
bins games,⁴ MWUe with $\epsilon = 1 - e^{-10}$ is shown to converge to a limit cycle of period 2 for infinitely
many initial conditions (Theorem 4.1). If the cost functions of the two edges are not identical then we
create instances of two player load balancing games such that MWUe has periodic orbits of length k
for all k > 0, as well as uncountably many initial conditions which never settle on any periodic orbit
but instead exhibit an irregular behavior known as Li-Yorke chaos (Theorem 4.2, see Corollary 4.3).
The source of these problems is exactly the large, fixed learning rate $\epsilon$, e.g., $\epsilon \approx 1$ for costs in [0, 1].
Intuitively, the key aspect of the problem can be captured by (simultaneous) best response dynamics.
If both agents start from the same edge and best-respond simultaneously they will land on the second
edge which now has a load of two. In the next step they will both jump back to the first edge and this
motion will be continued perpetually. Naturally, MWUe dynamics are considerably more intricate as
they evolve over mixed strategies and allow for more complicated non-equilibrium behavior but the
key insight is correct. Each agent has the right goal, decrease his own cost and hence the potential of
the game, however, as they pursue this goal too aggressively they cancel each other?s gains and lead
to unpredictable non-converging behavior.
In a sense, the cautionary tales above agree with our intuition. Large, constant learning rates nullify
the known performance guarantees of MWU. We should expect erratic behavior in such cases. The
typical way to circumvent these problems is through careful monitoring and possibly successive
halving of the parameter, a standard technique in the MWU literature. In this paper, we explore an
alternative, cleaner, and surprisingly elegant solution to this problem. We show that applying MWU` ,
the linear variant of MWU, suffices to guarantee convergence in all congestion games.
Our key contributions. Our key result is the proof of convergence of MWU` in congestion games.
The main technical contribution is a proof that the potential of the mixed state is always strictly
decreasing along any nontrivial trajectory (Theorem 3.1). This result holds for all congestion games,
irrespective of the number of agents or the size, topology of the strategy sets. Moreover, each agent i
may be applying different learning rates $\epsilon_i$ which will be constant along the dynamics ($\epsilon_i$ does not depend on the number of iterations $T$ of the dynamics and therefore is bounded away from zero as $T \to \infty$; this is not the case for most of the results in the literature). The only restriction on the set of allowable learning rates $\epsilon_i$ is that for each agent the multiplicative factor $(1 - \epsilon_i C_i(s))$ should be positive for all strategy outcomes $s$.⁵ Arguing convergence to equilibria for all initial conditions
(Theorem 3.4) and further, convergence to Nash equilibria for all interior initial conditions (Theorem
3.8) follows. Proving that the potential always decreases (Theorem 3.1) hinges upon discovering a
novel interpretation of MWU dynamics. Specifically, we show that the class of dynamical systems
derived by applying MWU` in congestion games is a special case of a convergent class of dynamical
systems introduced by Baum and Eagon [5] (see Theorem 2.4). The most well known member of this
class is the classic Baum-Welch algorithm, the standard instantiation of the Expectation-Maximization
(EM) algorithm for hidden Markov models (HMM). Effectively, the proof of convergence of both
these systems boils down to a proof of membership to the same class of Baum-Eagon systems (see
section 2.3 for more details on these connections).
In the second part we provide simple congestion games where MWUe provably fails to converge. The
first main technical contribution of this section is proving convergence to a limit cycle, specifically a
periodic orbit of length two, for the simplest case of two balls two bins games for infinitely many initial
conditions (Theorem 4.1). Moreover, after normalizing costs to lie in [0, 1], i.e. c(x) = x/2, we prove
that almost all symmetric non-equilibrium initial conditions converge to a unique limit cycle when
both agents use learning rate = 1?e?10 . In contrast, since 1??C(s) ? 1?(1?e?10 )1 = e?10 > 0,
MWU` successfully converges to equilibrium. In other words, for the same learning rates, MWUe
exhibits chaotic behavior whereas MWU` converges to Nash equilibrium. Establishing chaotic
behavior for the case of edges with different cost functions is rather straightforward in comparison
(Theorem 4.2). The key step is to exploit symmetries in the system to reduce it to a single dimensional
one and then establish the existence of a periodic orbit of length three. The existence of periodic
orbits of any length as well as chaotic orbits then follows from the Li-Yorke theorem 2.3 [30] (see
section 2.2 for background on chaos and dynamical systems). Finally, for any learning rate 1 > > 0,
we construct n-player games so that MWUe has chaotic behavior for uncountably many starting
points.
⁴n balls n bins games are symmetric load balancing games with n agents and n edges/elements, each with a cost function of c(x) = x. We normalize costs to c(x) = x/n so that they lie in [0, 1].
⁵This is an absolutely minimal restriction so that the denominator of MWUℓ cannot become equal to zero.
Related work and Extensions/Implications of our results.
Connections to learning in games and price of anarchy: Several recent papers, e.g., [40, 22] focus
on proving welfare guarantees of no-regret dynamics in games exploiting connections to (robust)
price of anarchy literature [37] by establishing fast convergence of the time average behavior to
(approximate) coarse correlate equilibria. Although these approaches are rather powerful they are
not always applicable. For example, it is well known that when we consider the makespan (i.e. the
load of the most congested machine) instead of the social/total cost there can be an exponential gap
between the performance of coarse correlated equilibria and Nash equilibria. For example the price
of anarchy for the makespan objective for n balls n bins games is $O(\log(n)/\log \log(n))$ whereas for the worst no-regret algorithm it can be $\Omega(\sqrt{n})$ [9]. Moreover, even if we focus on the social cost, the
price of anarchy guarantees do not carry over if we perform affine transformation to the cost functions
(e.g. if there exist users of different tiers/types that the system designer wants to account for in a
differential manner). In contrast, our convergence results are robust to any affine cost transformation.
In fact, our results apply for all weighted potential games [32] (Remark 3.5).
Connections to distributed computation and adversarial agent scheduling: A rather realistic
concern about results on learning in games has to do with their sensitivity to the ordering of the moves
of the agent dynamics. For example, better-response dynamics in congestion games are guaranteed to
converge only if in every round, exactly one agent deviates to a better strategy. A series of recent
papers has established strong non-termination (cycling) results for large classes of bounded recall
dynamics with a wide variety of interesting and timely applications: game theory, circuit design,
social networks, routing and congestion control [26, 19, 34, 25]. In the case of games, these results
translate to corollaries such as: ?If there are two or more pure Nash equilibria in a game with unique
best responses, then all bounded-recall self-independent dynamics6 for which those equilibria are
fixed points can fail to converge in asynchronous environments." Even the simplest 2 balls 2 bins
game satisfies these properties (two pure Nash and unique best responses) which shows the strength
of this impossibility result. In contrast, our convergence result holds for any adversarial scheduling
with the minimal fairness assumption that given any mixed state at least one agent who is not best
responding eventually will be given the possibility to update their behavior, answering open questions
in [26, 25]. In fact, our convergence result is in a sense the strongest possible, no matter how many
agents get to update their behavior (as long as one of them does) then the potential of the game will
strictly decrease (Corollary 3.6).
Connections to complexity theory: Whereas the complexity of computing both mixed Nash equilibria in general games (PPAD-complete [17]) as well as the complexity of finding pure Nash equilibria
in congestion games (PLS-complete [20]) have both been completely characterized and are thus
unlikely to admit an efficient time algorithm, the complexity of computing mixed Nash equilibria
in congestion games has withstood so far an exhaustive characterization. Naturally, it lies on the
intersection of both PPAD and PLS, known as CLS [18]. Such an equilibrium can be found both via
an end-of-line type of argument as well as a local search type of argument, but it is still not known
if it is CLS-complete. Given the active interest for producing CLS-complete problems [16, 21] our
constructive/convergence proof may help shed light on this open question.
Chaos for arbitrarily small learning rates $\epsilon$: Although our example of chaotic behavior uses a very high learning rate $\epsilon = 1 - e^{-10}$, it should be noted that for any learning rate $\epsilon$ (e.g. $\epsilon = e^{-10}$), as well as for any number of agents n, we can create congestion games with n agents where MWUe
exhibits chaotic behavior (Corollary 4.3).
Congestion/potential games: Congestion games are amongst the most well known and thoroughly
studied class of games. Proposed in [36] and isomorphic to potential games [32], they have been
successfully employed in myriad modeling problems. Despite the numerous positive convergence
results for concurrent dynamics in congestion games, e.g., [33, 23, 7, 1, 6, 28, 10, 13, 12, 31], we
know of no prior work establishing such a deterministic convergence result of the day-to-day agent
behavior to exact Nash equilibria for general atomic congestion games. MWU has also been studied
in congestion games. In [29] randomized variants of the exponential version of the MWU are shown
to converge w.h.p. to pure Nash equilibria as long as the learning rate is small enough. In contrast
our positive results for the linear MWUℓ hold deterministically and for all learning rates. Recently, [14] showed that if the Hedge algorithm is run with a suitably decreasing learning factor $\epsilon$, the sequence
⁶A dynamic is called self-independent if the agent's response does not depend on his actions.
of play converges to a Nash equilibrium with probability 1 (in the bandit case). The result and the
techniques are orthogonal to ours, since we assume fixed learning rates.
Non-convergent dynamics: Outside the class of congestion games, there exist several negative
results in the literature concerning the non-convergence of MWU and variants thereof. In particular,
in [15] it was shown that the multiplicative updates algorithm fails to find the unique Nash equilibrium
of the $3 \times 3$ Shapley game. Similar non-convergent results have been proven for perturbed zero-sum
games [4], as well as for the continuous time version of MWU, the replicator dynamics [27, 35]. The
possibility of applying Li-Yorke type arguments for MWU in congestion games with two agents
was inspired by a remark in [3] for the case of continuum of agents. Our paper is the first to our
knowledge where non-convergent MWU behavior in congestion games is formally proven capturing
both limit cycles and chaos and we do so in the minimal case of two balls two bin games.
2 Preliminaries
Notation. We use boldface letters, e.g., x, to denote column vectors (points). For a function
$f : \mathbb{R}^m \to \mathbb{R}^m$, by $f^n$ we denote the composition of $f$ with itself $n$ times, namely $f \circ f \circ \cdots \circ f$ ($n$ times).
2.1 Congestion Games
A congestion game [36] is defined by the tuple $(\mathcal{N}; E; (S_i)_{i \in \mathcal{N}}; (c_e)_{e \in E})$ where $\mathcal{N}$ is the set of agents, $N = |\mathcal{N}|$, $E$ is a set of resources (also known as edges or bins or facilities) and each player $i$ has a set $S_i$ of subsets of $E$ ($S_i \subseteq 2^E$) and $|S_i| \geq 1$. Each strategy $s_i \in S_i$ is a set of edges and $c_e$ is a positive cost (latency) function associated with facility $e$. We use small Greek characters like $\gamma, \delta$ to denote different strategies/paths. For a strategy profile $s = (s_1, s_2, \ldots, s_N)$, the cost of player $i$ is given by $c_i(s) = \sum_{e \in s_i} c_e(\ell_e(s))$, where $\ell_e(s)$ is the number of players using $e$ in $s$ (the load of edge $e$). The potential function is defined to be $\Phi(s) = \sum_{e \in E} \sum_{j=1}^{\ell_e(s)} c_e(j)$.
For each $i \in \mathcal{N}$ and $\gamma \in S_i$, $p_{i\gamma}$ denotes the probability player $i$ chooses strategy $\gamma$. We denote by $\Delta(S_i) = \{\mathbf{p} \geq 0 : \sum_{\gamma} p_{i\gamma} = 1\}$ the set of mixed (randomized) strategies of player $i$ and $\Delta = \times_i \Delta(S_i)$ the set of mixed strategies of all players. We use $c_{i\gamma} = \mathbb{E}_{s_{-i} \sim \mathbf{p}_{-i}} c_i(\gamma, s_{-i})$ to denote the expected cost of player $i$ given that he chooses strategy $\gamma$, and $\hat{c}_i = \sum_{\gamma \in S_i} p_{i\gamma} c_{i\gamma}$ to denote his expected cost.
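For a two-player game these expectations are simple sums over the opponent's strategies. A minimal sketch (our own helper, using the two-edge load balancing game with $c_e(l) = l/2$ that reappears in Section 4):

```python
def expected_cost(i_strategies, p_opp, cost):
    """c_{i,gamma} = E_{s_{-i} ~ p_{-i}} c_i(gamma, s_{-i}) for a 2-player game.
    cost[g][h] is player i's cost when i plays g and the opponent plays h;
    p_opp[h] is the opponent's mixed strategy."""
    return {g: sum(p_opp[h] * cost[g][h] for h in p_opp) for g in i_strategies}

# Two agents, two edges, c_e(l) = l/2: sharing an edge costs 1, alone costs 0.5.
cost = {"e1": {"e1": 1.0, "e2": 0.5}, "e2": {"e1": 0.5, "e2": 1.0}}
p_opp = {"e1": 0.4, "e2": 0.6}
c = expected_cost(["e1", "e2"], p_opp, cost)     # c_{i,e1} = 0.7, c_{i,e2} = 0.8
p_i = {"e1": 0.7, "e2": 0.3}
c_hat = sum(p_i[g] * c[g] for g in p_i)          # \hat{c}_i
print(c, c_hat)
```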
2.2 Dynamical Systems and Chaos
Let $\mathbf{x}^{(t+1)} = f(\mathbf{x}^{(t)})$ be a discrete time dynamical system with update rule $f : \mathbb{R}^m \to \mathbb{R}^m$. The point $\mathbf{z}$ is called a fixed point of $f$ if $f(\mathbf{z}) = \mathbf{z}$. A sequence $(f^t(\mathbf{x}^{(0)}))_{t \in \mathbb{N}}$ is called a trajectory or orbit of the dynamics with $\mathbf{x}^{(0)}$ as starting point. A common technique to show that a dynamical system converges to a fixed point is to construct a function $P : \mathbb{R}^m \to \mathbb{R}$ such that $P(f(\mathbf{x})) > P(\mathbf{x})$ unless $\mathbf{x}$ is a fixed point. We call $P$ a Lyapunov or potential function.
Definition 2.1. $C = \{z_1, \ldots, z_k\}$ is called a periodic orbit of length $k$ if $z_{i+1} = f(z_i)$ for $1 \leq i \leq k - 1$ and $f(z_k) = z_1$. Each point $z_1, \ldots, z_k$ is called a periodic point of period $k$. If the dynamics converges to some periodic orbit, we also use the term limit cycle.
Some dynamical systems converge and their behavior can be fully understood and some others
have strange, chaotic behavior. There are many different definitions for what chaotic behavior and
chaos means. In this paper we follow the definition of chaos by Li and Yorke. Let us first give
the definition of a scrambled set. Given a dynamical system with update rule f , a pair x and y is
called "scrambled" if $\liminf_{n \to \infty} |f^n(x) - f^n(y)| = 0$ (the trajectories get arbitrarily close) and also $\limsup_{n \to \infty} |f^n(x) - f^n(y)| > 0$ (the trajectories move apart). A set $S$ is called "scrambled" if $\forall x, y \in S$, the pair is "scrambled".
Definition 2.2 (Li and Yorke). A discrete time dynamical system with update rule $f$, $f : X \to X$ continuous on a compact set $X \subset \mathbb{R}$, is called chaotic if (a) for each $k \in \mathbb{Z}^+$, there exists a periodic point $p \in X$ of period $k$ and (b) there is an uncountably infinite set $S \subseteq X$ that is "scrambled".
Li and Yorke proved the following theorem [30] (there is another theorem of similar flavor due to
Sharkovskii [38]):
Theorem 2.3 (Period three implies chaos). Let $J$ be an interval and let $F : J \to J$ be continuous. Assume there is a point $a \in J$ for which the points $b = F(a)$, $c = F^2(a)$ and $d = F^3(a)$ satisfy $d \leq a < b < c$ (or $d \geq a > b > c$). Then
1. For every $k = 1, 2, \ldots$ there is a periodic point in $J$ having period $k$.
2. There is an uncountable set $S \subseteq J$ (containing no periodic points), which satisfies the following conditions:
   • For every $p, q \in S$ with $p \neq q$, $\limsup_{n \to \infty} |F^n(p) - F^n(q)| > 0$ and $\liminf_{n \to \infty} |F^n(p) - F^n(q)| = 0$.
   • For every point $p \in S$ and periodic point $q \in J$, $\limsup_{n \to \infty} |F^n(p) - F^n(q)| > 0$.
Notice that if there is a periodic point with period 3, then the hypothesis of the theorem will be
satisfied.
2.3 Baum-Eagon Inequality, Baum-Welch and EM
We start this subsection by stating the Baum-Eagon inequality. This inequality will be used to show
that MWU` converges to fixed points and more specifically Nash equilibria for congestion games.
Theorem 2.4 (Baum-Eagon inequality [5]). Let $P(\mathbf{x}) = P(\{x_{ij}\})$ be a polynomial with nonnegative coefficients homogeneous of degree $d$ in its variables $\{x_{ij}\}$. Let $\mathbf{x} = \{x_{ij}\}$ be any point of the domain $D: x_{ij} \geq 0$, $\sum_{j=1}^{q_i} x_{ij} = 1$, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, q_i$. For $\mathbf{x} = \{x_{ij}\} \in D$ let $\Im(\mathbf{x}) = \Im\{x_{ij}\}$ denote the point of $D$ whose $i, j$ coordinate is
$$\Im(\mathbf{x})_{ij} = x_{ij} \frac{\partial P}{\partial x_{ij}}(\mathbf{x}) \Bigg/ \sum_{j'=1}^{q_i} x_{ij'} \frac{\partial P}{\partial x_{ij'}}(\mathbf{x}).$$
Then $P(\Im(\mathbf{x})) > P(\mathbf{x})$ unless $\Im(\mathbf{x}) = \mathbf{x}$.
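A quick numerical illustration on a toy polynomial of our own choosing: $P(x_1, x_2) = x_1^2 x_2 + x_1 x_2^2$ is homogeneous of degree 3 with nonnegative coefficients, and the growth transform strictly increases it until it reaches the maximizer $(1/2, 1/2)$ on the simplex:

```python
def P(x1, x2):
    return x1**2 * x2 + x1 * x2**2

def growth_transform(x1, x2):
    g1 = x1 * (2 * x1 * x2 + x2**2)   # x1 * dP/dx1
    g2 = x2 * (x1**2 + 2 * x1 * x2)   # x2 * dP/dx2
    z = g1 + g2                       # equals 3*P by Euler's identity
    return g1 / z, g2 / z

x = (0.9, 0.1)
for _ in range(5):
    nxt = growth_transform(*x)
    print(x, P(*x), "->", P(*nxt))   # P increases monotonically toward 1/4
    x = nxt
```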
The Baum-Welch algorithm is a classic technique used to find the unknown parameters of a hidden
Markov model (HMM). A HMM describes the joint probability of a collection of ?hidden" and
observed discrete random variables. It relies on the assumption that the i-th hidden variable given the
(i ? 1)-th hidden variable is independent of previous hidden variables, and the current observation
variables depend only on the current hidden state. The Baum-Welch algorithm uses the well known
EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model
given a set of observed feature vectors. More detailed exposition of these ideas can be found here
[8]. The probability of making a specific time series of observations of length T can be shown to
be a homogeneous polynomial P of degree T with nonnegative (integer) coefficients of the model
parameters. Baum-Welch algorithm is homologous to the iterative process derived by applying the
Baum-Eagon theorem to polynomial P [5, 41].
In a nutshell, both Baum-Welch and MWU` in congestion games are special cases of the Baum-Eagon
iterative process (for different polynomials P ).
2.4 Multiplicative Weights Update
In this section, we describe the MWU dynamics (both the linear MWUℓ and the exponential MWUe variants) applied in congestion games. The update rule (function) $\xi : \Delta \to \Delta$ (where $\mathbf{p}(t+1) = \xi(\mathbf{p}(t))$) for the linear variant MWUℓ is as follows:
$$p_{i\gamma}(t+1) = (\xi(\mathbf{p}(t)))_{i\gamma} = p_{i\gamma}(t)\, \frac{1 - \epsilon_i c_{i\gamma}(t)}{1 - \epsilon_i \hat{c}_i(t)}, \quad \forall i \in \mathcal{N}, \forall \gamma \in S_i, \qquad (1)$$
where $\epsilon_i$ is a constant (it can depend on player $i$ but not on $\mathbf{p}$) such that both the numerator and the denominator of the fraction in (1) are positive (and thus the fraction is well defined). Under the assumption that $1/\epsilon_i > 1/\alpha := \sup_{i, \mathbf{p} \in \Delta, \gamma \in S_i} \{c_{i\gamma}\}$, it follows that $1/\epsilon_i > c_{i\gamma}$ for all $i, \gamma$ and hence $1/\epsilon_i > \hat{c}_i$.
The update rule (function) $\eta : \Delta \to \Delta$ (where $\mathbf{p}(t+1) = \eta(\mathbf{p}(t))$) for the exponential variant MWUe is as follows:
$$p_{i\gamma}(t+1) = (\eta(\mathbf{p}(t)))_{i\gamma} = p_{i\gamma}(t)\, \frac{(1 - \epsilon_i)^{c_{i\gamma}(t)}}{\sum_{\gamma' \in S_i} p_{i\gamma'}(t) (1 - \epsilon_i)^{c_{i\gamma'}(t)}}, \quad \forall i \in \mathcal{N}, \forall \gamma \in S_i, \qquad (2)$$
where $\epsilon_i < 1$ is a constant (it can depend on player $i$ but not on $\mathbf{p}$). Note that $\epsilon_i$ can be small when the number of agents $N$ is large enough.
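For concreteness, here is a direct transcription of (1) and (2) for a single agent's probability vector (a sketch; `costs` stands for the expected costs $c_{i\gamma}(t)$ against the opponents' current mixed strategies, which the caller must supply):

```python
import numpy as np

def mwu_linear_step(p, costs, eps):
    """One step of MWU_l, equation (1); requires eps * max(costs) < 1 so the
    denominator 1 - eps * expected_cost stays positive."""
    expected = p @ costs                       # \hat{c}_i
    return p * (1 - eps * costs) / (1 - eps * expected)

def mwu_exponential_step(p, costs, eps):
    """One step of MWU_e, equation (2), a.k.a. Hedge with multiplier 1 - eps."""
    w = p * (1 - eps) ** costs
    return w / w.sum()

p = np.array([0.5, 0.5])
costs = np.array([0.75, 0.25])
print(mwu_linear_step(p, costs, 0.9), mwu_exponential_step(p, costs, 0.9))
```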
Remark 2.5. Observe that $\Delta$ is invariant under the discrete dynamics (1), (2) defined above. If $p_{i\gamma} = 0$ then $p_{i\gamma}$ remains zero, and if it is positive, it remains positive (both numerator and denominator are positive), and it also holds that $\sum_{\gamma \in S_i} p_{i\gamma} = 1$ for all agents $i$. A point $\mathbf{p}^\star$ is called a fixed point if it stays invariant under the update rule of the dynamics, namely $\xi(\mathbf{p}^\star) = \mathbf{p}^\star$ or $\eta(\mathbf{p}^\star) = \mathbf{p}^\star$. A point $\mathbf{p}^\star$ is a fixed point of (1), (2) if for all $i, \gamma$ with $p^\star_{i\gamma} > 0$ we have that $c_{i\gamma} = \hat{c}_i$. To see why, observe that if $p^\star_{i\gamma}, p^\star_{i\gamma'} > 0$, then $c_{i\gamma} = c_{i\gamma'}$ and thus $c_{i\gamma} = \hat{c}_i$. We conclude that the sets of fixed points of both dynamics (1), (2) coincide and are supersets of the set of Nash equilibria of the corresponding congestion game.
3 Convergence of MWUℓ to Nash Equilibria
We first prove that MWUℓ (1) converges to fixed points.⁷ Technically, we establish that the function $\Psi := \mathbb{E}_{s \sim \mathbf{p}}[\Phi(s)]$ is strictly decreasing along any nontrivial (i.e. non-equilibrium) trajectory, where $\Phi$ is the potential function of the congestion game as defined in Section 2. Formally we show the following theorem:
Theorem 3.1 ($\Psi$ is decreasing). The function $\Psi$ is decreasing w.r.t. time, i.e., $\Psi(\mathbf{p}(t+1)) \leq \Psi(\mathbf{p}(t))$, where equality $\Psi(\mathbf{p}(t+1)) = \Psi(\mathbf{p}(t))$ holds only at fixed points.
We define the function
$$Q(\mathbf{p}) := \underbrace{\sum_{i \in \mathcal{N}} \Big(\tfrac{1}{\epsilon_i} - \tfrac{1}{\alpha}\Big) \Big(\sum_{\gamma \in S_i} p_{i\gamma}\Big) + \tfrac{1}{\alpha} \prod_{i \in \mathcal{N}} \Big(\sum_{\gamma \in S_i} p_{i\gamma}\Big)}_{\text{constant term}} - \Psi(\mathbf{p}), \qquad (3)$$
and show that $Q(\mathbf{p})$ is strictly increasing w.r.t. time, unless $\mathbf{p}$ is a fixed point. Observe that $\sum_{\gamma \in S_i} p_{i\gamma} = 1$ since $\mathbf{p}$ lies in $\Delta$, but we include these terms in $Q$ for technical reasons that will be made clear later in the section. By showing that $Q$ is increasing with time, Theorem 3.1 trivially follows since $Q = \mathrm{const} - \Psi$ where $\mathrm{const} = \sum_{i \in \mathcal{N}} 1/\epsilon_i - (1/\alpha)(N - 1)$. To show that $Q(\mathbf{p})$ is strictly increasing w.r.t. time, unless $\mathbf{p}$ is a fixed point, we use a generalization of an inequality by Baum and Eagon [5] on the function $Q$.
Corollary 3.2 (Generalization of Baum-Eagon). Theorem 2.4 holds even if P is non-homogeneous.
We want to apply Corollary 3.2 on Q. To do so, it suffices to show that Q(p) is a polynomial with
nonnegative coefficients.
Lemma 3.3. $Q(\mathbf{p})$ is a polynomial with respect to $p_{i\gamma}$ and has nonnegative coefficients.
Using Lemma 3.3 and Corollary 3.2 we show the following:
Theorem 3.4. Let $Q$ be the function defined in (3). Let also $\mathbf{p}(t) \in \Delta$ be the point MWUℓ (1) outputs at time $t$ with update rule $\xi$. It holds that $Q(\mathbf{p}(t+1)) := Q(\xi(\mathbf{p}(t))) > Q(\mathbf{p}(t))$ unless $\xi(\mathbf{p}(t)) = \mathbf{p}(t)$ (fixed point). Namely, $Q$ is strictly increasing with respect to the number of iterations $t$ unless MWUℓ is at a fixed point.
⁷All missing proofs can be found in the full version of this paper: http://arxiv.org/abs/1703.01138.
Remark 3.5 (Weighted potential games). A congestion game is a potential game because if a player
deviates, the difference he experiences in his cost is exactly captured by the deviation of the global
(same for all players) function $\Phi = \sum_{e \in E} \sum_{j=1}^{\ell_e(s)} c_e(j)$. In a weighted potential game, it holds that $c_i(s_i, s_{-i}) - c_i(s'_i, s_{-i}) = w_i(\Phi(s_i, s_{-i}) - \Phi(s'_i, s_{-i}))$, where $w_i$ is some constant, not necessarily 1 (as in the potential games case), and the vector $s_{-i}$ captures the strategies of all players but $i$. It is not hard to see that Lemma 3.3 and thus Theorems 3.4 and 3.1 hold in this particular class of games (which is a generalization of congestion games), and so do the rest of the theorems of the section. Effectively, in terms of the weighted potential games analysis, it is possible to reduce it to the standard potential games analysis as follows: Consider the system with learning rates $\epsilon_i$ and cost functions $w_i c_i$ so that the game with cost functions $c_i$ is a potential game. The only necessary condition that we ask of this system is that $\epsilon_i w_i c_i(s) < 1$ for all $i$ (as in the standard case) so that the numerators/denominators are positive.
By reduction, we can show that for every round T , even if a subset (that depends on the round T )
of the players update their strategy according to MWU` and the rest remain fixed, the potential still
decreases.
Corollary 3.6 (Any subset). Assume that at time $t$ we partition the players into two sets $S_t, S'_t$ so that we allow only players in $S_t$ to apply MWUℓ dynamics, whereas the players in $S'_t$ remain fixed. It holds that the expected potential function of the game at time $t$ decreases.
As stated earlier in the section, if $Q(\mathbf{p}(t))$ is strictly increasing with respect to time $t$ unless $\mathbf{p}(t)$ is a fixed point, it follows that the expected potential function $\Psi(\mathbf{p}(t)) = \mathrm{const} - Q(\mathbf{p}(t))$ is strictly decreasing unless $\mathbf{p}(t)$ is a fixed point, and Theorem 3.1 is proved. Moreover, we can derive the fact that our dynamics converges to fixed points as a corollary of Theorem 3.1.
that our dynamics converges to fixed points as a corollary of Theorem 3.1.
Theorem 3.7 (Convergence to fixed points). MWU` dynamics (1) converges to fixed points.
We conclude the section by strengthening the convergence result (i.e., Theorem 3.7). We show that if the initial distribution $\mathbf{p}$ is in the interior of $\Delta$ then we have convergence to Nash equilibria.
Theorem 3.8 (Convergence to Nash equilibria). Assume that the fixed points of (1) are isolated. Let $\mathbf{p}(0)$ be a point in the interior of $\Delta$. It follows that $\lim_{t \to \infty} \mathbf{p}(t) = \mathbf{p}^\star$ is a Nash equilibrium.
Proof. We showed in Theorem 3.7 that MWUℓ dynamics (1) converges, hence $\lim_{t \to \infty} \mathbf{p}(t)$ exists (under the assumption that the fixed points are isolated) and is equal to a fixed point $\mathbf{p}^\star$ of the dynamics. Also it is clear from the dynamics that $\Delta$ is invariant, i.e., $\sum_{\gamma \in S_j} p_{j\gamma}(t) = 1$ and $p_{j\gamma}(t) > 0$ for all $j$ and $t \geq 0$, since $\mathbf{p}(0)$ is in the interior of $\Delta$.
Assume that $\mathbf{p}^\star$ is not a Nash equilibrium; then there exists a player $i$ and a strategy $\gamma \in S_i$ so that $c_{i\gamma}(\mathbf{p}^\star) < \hat{c}_i(\mathbf{p}^\star)$ (on mixed strategies $\mathbf{p}^\star$) and $p^\star_{i\gamma} = 0$. Fix a $\delta > 0$ and let $U_\delta = \{\mathbf{p} : c_{i\gamma}(\mathbf{p}) < \hat{c}_i(\mathbf{p}) - \delta\}$. By continuity we have that $U_\delta$ is open. It is also true that $\mathbf{p}^\star \in U_\delta$ for $\delta$ small enough. Since $\mathbf{p}(t)$ converges to $\mathbf{p}^\star$ as $t \to \infty$, there exists a time $t_0$ so that for all $t \geq t_0$ we have that $\mathbf{p}(t) \in U_\delta$. However, from MWUℓ dynamics (1) we get that if $\mathbf{p}(t) \in U_\delta$ then $1 - \epsilon_i c_{i\gamma}(t) > 1 - \epsilon_i \hat{c}_i(t)$ and hence $p_{i\gamma}(t+1) = p_{i\gamma}(t)\, \frac{1 - \epsilon_i c_{i\gamma}(t)}{1 - \epsilon_i \hat{c}_i(t)} \geq p_{i\gamma}(t) > 0$, i.e., $p_{i\gamma}(t)$ is positive and increasing for $t \geq t_0$. We reached a contradiction since $p_{i\gamma}(t) \to p^\star_{i\gamma} = 0$; thus $\mathbf{p}^\star$ is a Nash equilibrium.
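To see Theorems 3.1 and 3.8 in action on the smallest instance (two agents, two identical edges with $c(x) = x/2$, the game revisited in Section 4), the following sketch (our own, with ad hoc parameter choices) runs MWUℓ from an interior point and prints the expected potential, which decreases at every step:

```python
import numpy as np

def psi(x, y):
    """Expected potential: both on one edge gives Phi = 3/2, split gives 1."""
    same = x * y + (1 - x) * (1 - y)
    return 1.5 * same + 1.0 * (1 - same)

def mwu_l_step(x, y, eps):
    c1 = np.array([(1 + y) / 2, (2 - y) / 2])   # agent 1's expected edge costs
    c2 = np.array([(1 + x) / 2, (2 - x) / 2])
    p1 = np.array([x, 1 - x]) * (1 - eps * c1)  # normalizing divides by 1 - eps*c_hat
    p2 = np.array([y, 1 - y]) * (1 - eps * c2)
    return (p1 / p1.sum())[0], (p2 / p2.sum())[0]

x, y, eps = 0.9, 0.4, 0.9   # eps * max cost = 0.9 < 1, so (1) is well defined
for t in range(10):
    print(t, x, y, psi(x, y))   # Psi is monotonically decreasing
    x, y = mwu_l_step(x, y, eps)
```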
4 Non-Convergence of MWUe: Limit Cycle and Chaos
We consider a symmetric two agent congestion game with two edges $e_1, e_2$. Both agents have the same two available strategies $\gamma_1 = \{e_1\}$ and $\gamma_2 = \{e_2\}$. We denote by $x, y$ the probabilities that the first and the second agent respectively choose strategy $\gamma_1$.
(a) Exponential MWUe: Plot of function $G$ (blue) and its iterated versions $G^2$ (red), $G^3$ (yellow). Function $y(x) = x$ is also included. (b) Linear MWUℓ: Plot of function $G_\ell$ (blue) and its iterated versions $G_\ell^2$ (red) and $G_\ell^3$ (yellow). Function $y(x) = x$ is also included. (c) Exponential MWUe: Plot of function $G^{10}$. Function $y(x) = x$ is also included. (d) Linear MWUℓ: Plot of function $G_\ell^{10}$. Function $y(x) = x$ is also included.
Figure 1: We compare and contrast MWUe (left) and MWUℓ (right) in the same two agent two strategy/edges congestion game with $c_{e_1}(l) = \frac{1}{4} \cdot l$ and $c_{e_2}(l) = \frac{1.4}{4} \cdot l$ and the same learning rate $\epsilon = 1 - e^{-40}$. MWUe exhibits sensitivity to initial conditions whereas MWUℓ equilibrates. Function $y(x) = x$ is also included in the graphs to help identify fixed points and periodic points.
For the first example, we assume that $c_{e_1}(l) = \frac{1}{2} \cdot l$ and $c_{e_2}(l) = \frac{1}{2} \cdot l$. Computing the expected costs we get that $c_{1\gamma_1} = \frac{1+y}{2}$, $c_{1\gamma_2} = \frac{2-y}{2}$, $c_{2\gamma_1} = \frac{1+x}{2}$, $c_{2\gamma_2} = \frac{2-x}{2}$. MWUe then becomes
$$x_{t+1} = x_t \frac{(1-\epsilon_1)^{\frac{y_t+1}{2}}}{x_t (1-\epsilon_1)^{\frac{y_t+1}{2}} + (1-x_t)(1-\epsilon_1)^{\frac{2-y_t}{2}}} \;\text{(first player)}, \quad y_{t+1} = y_t \frac{(1-\epsilon_2)^{\frac{x_t+1}{2}}}{y_t (1-\epsilon_2)^{\frac{x_t+1}{2}} + (1-y_t)(1-\epsilon_2)^{\frac{2-x_t}{2}}} \;\text{(second player)}.$$
We assume that $\epsilon_1 = \epsilon_2$ and also that $x_0 = y_0$ (players start with the same mixed strategy). Due to symmetry, it follows that $x_t = y_t$ for all $t \in \mathbb{N}$, thus it suffices to keep track of only one variable (we have reduced the number of variables of the update rule of the dynamics to one) and the dynamics becomes
$$x_{t+1} = x_t \frac{(1-\epsilon)^{\frac{x_t+1}{2}}}{x_t (1-\epsilon)^{\frac{x_t+1}{2}} + (1-x_t)(1-\epsilon)^{\frac{2-x_t}{2}}}.$$
Finally, we choose $\epsilon = 1 - e^{-10}$ and we get
$$x_{t+1} = H(x_t) = x_t \frac{e^{-5(x_t+1)}}{x_t e^{-5(x_t+1)} + (1-x_t) e^{-5(2-x_t)}},$$
i.e., we denote $H(x) = \frac{x e^{-5(x+1)}}{x e^{-5(x+1)} + (1-x) e^{-5(2-x)}}$.
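A direct iteration of $H$ illustrates the behavior established in Theorem 4.1 below (a numerical check, not a proof):

```python
import math

def H(x):
    a = x * math.exp(-5 * (x + 1))
    b = (1 - x) * math.exp(-5 * (2 - x))
    return a / (a + b)

x = 0.3
for _ in range(200):
    x = H(x)
print(x, H(x), H(H(x)))
# After a transient, the orbit alternates between two values b1, b2 with
# H(b1) = b2 and H(b2) = b1, while the fixed point 1/2 is repelling
# (|H'(1/2)| = 1.5 > 1).
```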
For the second example, we assume that $c_{e_1}(l) = \frac{1}{4} \cdot l$ and $c_{e_2}(l) = \frac{1.4}{4} \cdot l$. Computing the expected costs we get that $c_{1\gamma_1} = \frac{1+y}{4}$, $c_{1\gamma_2} = \frac{1.4(2-y)}{4}$, $c_{2\gamma_1} = \frac{1+x}{4}$, $c_{2\gamma_2} = \frac{1.4(2-x)}{4}$. MWUe then becomes
$$x_{t+1} = x_t \frac{(1-\epsilon_1)^{\frac{y_t+1}{4}}}{x_t (1-\epsilon_1)^{\frac{y_t+1}{4}} + (1-x_t)(1-\epsilon_1)^{\frac{1.4(2-y_t)}{4}}} \;\text{(first player)}, \quad y_{t+1} = y_t \frac{(1-\epsilon_2)^{\frac{x_t+1}{4}}}{y_t (1-\epsilon_2)^{\frac{x_t+1}{4}} + (1-y_t)(1-\epsilon_2)^{\frac{1.4(2-x_t)}{4}}} \;\text{(second player)}.$$
We assume that $\epsilon_1 = \epsilon_2$ and also that $x_0 = y_0$ (players start with the same mixed strategy). Similarly, due to symmetry, it follows that $x_t = y_t$ for all $t \in \mathbb{N}$, thus it suffices to keep track of only one variable and the dynamics becomes
$$x_{t+1} = x_t \frac{(1-\epsilon)^{\frac{x_t+1}{4}}}{x_t (1-\epsilon)^{\frac{x_t+1}{4}} + (1-x_t)(1-\epsilon)^{\frac{1.4(2-x_t)}{4}}}.$$
Finally, we choose $\epsilon = 1 - e^{-40}$ and we get
$$x_{t+1} = G(x_t) = x_t \frac{e^{-10(x_t+1)}}{x_t e^{-10(x_t+1)} + (1-x_t) e^{-14(2-x_t)}},$$
i.e., we denote $G(x) = \frac{x e^{-10(x+1)}}{x e^{-10(x+1)} + (1-x) e^{-14(2-x)}}$.
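One can numerically probe the hypothesis of Theorem 2.3 for $G$ (a quick grid scan of our own; finding a witness on a finite grid is supporting evidence, not a proof):

```python
import math

def G(x):
    num = x * math.exp(-10 * (x + 1))
    return num / (num + (1 - x) * math.exp(-14 * (2 - x)))

# Search for a point satisfying d = G^3(a) <= a < b = G(a) < c = G^2(a),
# or the mirrored ordering d >= a > b > c, as required by Theorem 2.3.
for i in range(1, 10_000):
    a = i / 10_000
    b, c = G(a), G(G(a))
    d = G(c)
    if (d <= a < b < c) or (d >= a > b > c):
        print("Li-Yorke witness:", a, b, c, d)
        break
# On this grid a witness shows up near a ~ 0.9997 in the mirrored ordering.
```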
We show the following three statements, the proofs of which can be found in the full version.
Theorem 4.1. For all but a measure-zero set S of x ∈ (0, 1) we get that lim_{t→∞} H^{2t}(x) = α1 or α2. Moreover, H(α1) = α2 and H(α2) = α1, i.e., {α1, α2} is a periodic orbit. Thus, all but a measure-zero set S of initial conditions converge to the limit cycle {α1, α2}. Finally, the initial points in S converge to the equilibrium 1/2.
Theorem 4.2. There exist two-player two-strategy symmetric congestion games such that MWUe has periodic orbits of length n for any natural number n > 0, as well as an uncountably infinite set of "scrambled" initial conditions (Li-Yorke chaos).
Using Theorem 4.2, we conclude with the following corollary.
Corollary 4.3. For any 1 > ε > 0 and n, there exists an n-player congestion game G(ε) (depending on ε) so that the MWUe dynamics exhibits Li-Yorke chaos for uncountably many starting points.
5 Conclusion and Future Work
We have analyzed MWUℓ in congestion games where agents use arbitrary admissible constants as learning rates and showed convergence to exact Nash equilibria. We have also shown that this result is not true for the nearly homologous exponential variant MWUe, even in the simplest case of two-agent, two-strategy load balancing games. There we prove that such dynamics can provably lead to limit cycles or even chaotic behavior.
For a small enough learning rate ε, the behavior of MWUe approaches that of its smooth variant, replicator dynamics, and hence convergence is once again guaranteed [29]. This means that as we increase the learning rate from near-zero values, we start off with a convergent system and end up with a chaotic one. Numerical experiments establish that between the convergent region and the chaotic region there exists a range of values of ε for which the system exhibits periodic behavior. Period doubling is known as a standard route to 1-dimensional chaos (e.g. the logistic map) and is characterized by unexpected regularities such as the Feigenbaum constant [39]. Elucidating these connections is an interesting open problem. More generally, what other types of regularities can be established in these non-equilibrium systems?
Another interesting question has to do with developing a better understanding of the set of conditions that result in non-converging trajectories. So far, it has been critical for our non-convergent examples that the system starts from a symmetric initial condition. Whether such irregular MWUe trajectories can be constructed for generic initial conditions, possibly in larger congestion games, is not known. Nevertheless, the non-convergence results, despite their non-generic nature, are rather useful, since they imply that we cannot hope to leverage the power of Baum-Eagon techniques for MWUe. In conclusion, establishing generic (non)convergence results (e.g. for most initial conditions, most congestion games) for MWUe with constant step size is an interesting future direction.
References
[1] H. Ackermann, P. Berenbrink, S. Fischer, and M. Hoefer. Concurrent imitation dynamics in congestion games. In PODC, pages 63–72, New York, USA, 2009. ACM.
[2] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[3] I. Avramopoulos. Evolutionary stability implies asymptotic stability under multiplicative weights. CoRR, abs/1601.07267, 2016.
[4] M.-F. Balcan, F. Constantin, and R. Mehta. The weighted majority algorithm does not converge in nearly zero-sum games. In ICML Workshop on Markets, Mechanisms and Multi-Agent Models, 2012.
[5] L. E. Baum and J. A. Eagon. An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model of ecology. Bulletin of the American Mathematical Society, 73(3):360–363, 1967.
[6] P. Berenbrink, M. Hoefer, and T. Sauerwald. Distributed selfish load balancing on networks. In ACM Transactions on Algorithms (TALG), 2014.
[7] P. Berenbrink, T. Friedetzky, L. A. Goldberg, P. W. Goldberg, Z. Hu, and R. Martin. Distributed selfish load balancing. SIAM J. Comput., 37(4):1163–1181, November 2007.
[8] J. A. Bilmes et al. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. International Computer Science Institute, 4(510):126, 1998.
[9] A. Blum, M. Hajiaghayi, K. Ligett, and A. Roth. Regret minimization and the price of total anarchy. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC, pages 373–382, 2008.
[10] I. Caragiannis, A. Fanelli, N. Gravin, and A. Skopalik. Efficient computation of approximate pure Nash equilibria in congestion games. In FOCS, 2011.
[11] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[12] P. Chen and C. Lu. Generalized mirror descents in congestion games. Artificial Intelligence, 241:217–243, 2016.
[13] S. Chien and A. Sinclair. Convergence to approximate Nash equilibria in congestion games. In Games and Economic Behavior, pages 315–327, 2011.
[14] J. Cohen, A. Heliou, and P. Mertikopoulos. Learning with bandit feedback in potential games. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[15] C. Daskalakis, R. Frongillo, C. Papadimitriou, G. Pierrakos, and G. Valiant. On learning algorithms for Nash equilibria. Symposium on Algorithmic Game Theory (SAGT), pages 114–125, 2010.
[16] C. Daskalakis, C. Tzamos, and M. Zampetakis. A converse to Banach's fixed point theorem and its CLS completeness. ArXiv e-prints, February 2017.
[17] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. pages 71–78. ACM Press, 2006.
[18] C. Daskalakis and C. Papadimitriou. Continuous local search. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '11, pages 790–804, Philadelphia, PA, USA, 2011. Society for Industrial and Applied Mathematics.
[19] R. Engelberg, A. Fabrikant, M. Schapira, and D. Wajc. Best-response dynamics out of sync: Complexity and characterization. In Proceedings of the Fourteenth ACM Conference on Electronic Commerce, EC '13, pages 379–396, New York, NY, USA, 2013. ACM.
[20] A. Fabrikant, C. Papadimitriou, and K. Talwar. The complexity of pure Nash equilibria. In ACM Symposium on Theory of Computing (STOC), pages 604–612. ACM, 2004.
[21] J. Fearnley, S. Gordon, R. Mehta, and R. Savani. CLS: New problems and completeness. ArXiv e-prints, February 2017.
[22] D. J. Foster, T. Lykouris, K. Sridharan, and E. Tardos. Learning in games: Robustness of fast convergence. In Advances in Neural Information Processing Systems, pages 4727–4735, 2016.
[23] D. Fotakis, A. C. Kaporis, and P. G. Spirakis. Atomic congestion games: Fast, myopic and concurrent. In Burkhard Monien and Ulf-Peter Schroeder, editors, Algorithmic Game Theory, volume 4997 of Lecture Notes in Computer Science, pages 121–132. Springer Berlin Heidelberg, 2008.
[24] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. MIT Press Books. The MIT Press, 1998.
[25] A. D. Jaggard, N. Lutz, M. Schapira, and R. N. Wright. Dynamics at the boundary of game theory and distributed computing. ACM Transactions on Economics and Computation (TEAC), 2017.
[26] A. D. Jaggard, M. Schapira, and R. N. Wright. Distributed computing with adaptive heuristics. In ICS, 2011.
[27] R. Kleinberg, K. Ligett, G. Piliouras, and É. Tardos. Beyond the Nash equilibrium barrier. In Symposium on Innovations in Computer Science (ICS), 2011.
[28] R. Kleinberg, G. Piliouras, and É. Tardos. Load balancing without regret in the bulletin board model. Distributed Computing, 24(1):21–29, 2011.
[29] R. Kleinberg, G. Piliouras, and É. Tardos. Multiplicative updates outperform generic no-regret learning in congestion games. In ACM Symposium on Theory of Computing (STOC), 2009.
[30] T. Li and J. A. Yorke. Period three implies chaos. The American Mathematical Monthly, 82(10):985–992, 1975.
[31] P. Mertikopoulos and A. L. Moustakas. The emergence of rational behavior in the presence of stochastic perturbations. The Annals of Applied Probability, 20(4):1359–1388, 2010.
[32] D. Monderer and L. S. Shapley. Potential games. Games and Economic Behavior, pages 124–143, 1996.
[33] D. Monderer and L. S. Shapley. Fictitious play property for games with identical interests. Journal of Economic Theory, 68(1):258–265, 1996.
[34] N. Nisan, M. Schapira, and A. Zohar. Asynchronous best-reply dynamics. In International Workshop on Internet and Network Economics, pages 531–538. Springer, 2008.
[35] G. Piliouras and J. S. Shamma. Optimization despite chaos: Convex relaxations to complex limit sets via Poincaré recurrence. In SODA, 2014.
[36] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2(1):65–67, 1973.
[37] T. Roughgarden. Intrinsic robustness of the price of anarchy. In Proc. of STOC, pages 513–522, 2009.
[38] A. N. Sharkovskii. Co-existence of cycles of a continuous mapping of the line into itself. Ukrainian Math. J., 16:61–71, 1964.
[39] S. Strogatz. Nonlinear Dynamics and Chaos. Perseus Publishing, 2000.
[40] V. Syrgkanis, A. Agarwal, H. Luo, and R. E. Schapire. Fast convergence of regularized learning in games. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS '15, pages 2989–2997, Cambridge, MA, USA, 2015. MIT Press.
[41] L. R. Welch. Hidden Markov models and the Baum-Welch algorithm. IEEE Information Theory Society Newsletter, 53(4):10–13, 2003.
6,820 | 717 | Amplifying and Linearizing Apical
Synaptic Inputs
to Cortical Pyramidal Cells.
Öjvind Bernander, Christof Koch*
Computation and Neural Systems Program,
California Institute of Technology, 139-74
Pasadena, CA 91125, USA.
Rodney J. Douglas
Anatomical Neuropharmacology Unit,
Dept. Pharmacology,
Oxford, UK.
Abstract
Intradendritic electrophysiological recordings reveal a bewildering
repertoire of complex electrical spikes and plateaus that are difficult to reconcile with conventional notions of neuronal function.
In this paper we argue that such dendritic events are just an exuberant expression of a more important mechanism - a proportional
current amplifier whose primary task is to offset electrotonic losses.
Using the example of functionally important synaptic inputs to the
superficial layers of an anatomically and electrophysiologically reconstructed layer 5 pyramidal neuron, we derive and simulate the
properties of conductances that linearize and amplify distal synaptic input current in a graded manner. The amplification depends
on a potassium conductance in the apical tuft and calcium conductances in the apical trunk.
*To whom all correspondence should be addressed.
1 INTRODUCTION
About half the pyramidal neurons in layer 5 of neocortex have long apical dendrites
that arborize extensively in layers 1-3. There the dendrites receive synaptic input
from the inter-areal feedback projections (Felleman and van Essen, 1991) that play
an important role in many models of brain function (Rockland and Virga, 1989).
At first sight this seems to be an unsatisfactory arrangement. In light of traditional
passive models of dendritic function the distant inputs cannot have a significant
effect on the output discharge of the pyramidal cell. The distal inputs are at least
one to two space constants removed from the soma in layer 5 and so only a small
fraction of the voltage signal will reach there. Nevertheless, experiments in cortical
slices have shown that synapses located in even the most superficial cortical layers
can provide excitation strong enough to elicit action potentials in the somata of
layer 5 pyramidal cells (Cauller and Connors, 1992, 1994). These results suggest
that the apical dendrites are active rather than passive, and able to amplify the
signal en route to the soma. Indeed, electrophysiological recordings from cortical
pyramidal cells provide ample evidence for a variety of voltage-dependent dendritic
conductances that could perform such amplification (Spencer and Kandel, 1961;
Regehr et al., 1993; Yuste and Tank, 1993; Pockberger, 1991; Amitai et al., 1993;
Kim and Connors, 1993).
Although the available experimental data on the various active conductances provide direct support for amplification, they are not adequate to specify the mechanism by which it occurs. Consequently, notions of dendritic amplification have been
informal, usually favoring voltage gain, and mechanisms that have a binary (high
gain) quality. In this paper, we formalize what conductance properties are required
for a current amplifier, and derive the required form of their voltage dependency by
analysis.
We propose that current amplification depends on two active conductances: a
voltage-dependent K+ conductance, gK, in the superficial part of the dendritic
tree that linearizes synaptic input, and a voltage-dependent Ca 2 + conductance,
gc a, in layer 4 that amplifies the result of the linearization stage. Spencer and Kandel (1961) hypothesized the presence of dendritic calcium channels that amplify
distal inputs. More recently, a modeling study of a cerebellar Purkinje cell suggests
that dendritic calcium counteracts attenuation of distal inputs so that the somatic
response is independent of synaptic location (De Schutter and Bower, 1992). A
gain-control mechanism involving both potassium and calcium has also been proposed in locust non-spiking interneurons (Laurent, 1993). In these cells, the two
conductances counteract the nonlinearity of graded transmitter release, so that the
output of the interneuron was independent of its membrane voltage. The principle
that we used can be explained with the help of a highly simplified three-compartment model (Fig. 1A). The leftmost node represents the soma and is clamped to -50 mV. The justification for this is that the time-averaged somatic voltage is remarkably constant and close to -50 mV for a wide range of spike rates. The middle node represents the apical trunk containing gCa, and the rightmost node represents the apical tuft with a synaptically induced conductance change gsyn in parallel with gK. For simplicity we assume that the model is in steady state and has an infinite membrane resistance, Rm.
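As a concrete companion to this description, here is a minimal steady-state sketch (our own) of the passive version of the circuit (gK = gCa = 0), with the parameter values taken from the Fig. 1 caption; it reproduces the saturating passive curve of Fig. 1B:

```python
import numpy as np

# Steady-state sketch of the three-compartment circuit of Fig. 1A with
# gK = gCa = 0 (passive case). Units: nS and mV, so currents come out in pA.
# Kirchhoff's current law at the middle node (V1) and distal node (V2):
#   g*(Vs - V1) + g*(V2 - V1) = 0
#   g*(V1 - V2) + gsyn*(Esyn - V2) = 0
g, Vs, Esyn = 40.0, -50.0, 0.0   # values taken from the Fig. 1 caption

def soma_current(gsyn):
    A = np.array([[-2.0 * g, g],
                  [g, -(g + gsyn)]])
    rhs = np.array([-g * Vs, -gsyn * Esyn])
    V1, V2 = np.linalg.solve(A, rhs)
    return g * (V1 - Vs)          # axial current flowing into the soma (pA)

for gsyn in [10.0, 50.0, 100.0, 200.0]:
    print(gsyn, round(soma_current(gsyn)))
# Isoma saturates near g*|Vs|/2 = 1000 pA = 1 nA as gsyn grows, because the
# distal EPSP erodes the synaptic driving potential (the saturation of Fig. 1B).
```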
[Figure 1 appears here: (A) circuit schematic with conductances g, the clamped Vsoma, and batteries EK, ECa, Esyn; (B) somatic current versus gsyn (nS), with passive, linearized, and amplified curves; (C) activation curves versus V (mV).]
linearization, and amplification. (A) Circuit diagram. The somatic compartment was clamped to V$oma = -50 mV with ECa = 115 mV, EK = -95 mV,
E$yn = 0 m V, and g = 40 nS. The membrane capacitance was ignored, since
only steady state properties were studied, and membrane leak was not included for
simplicity. (B) Somatic current, I$oma, in response to synaptic input. The passive response (thin dashed line) is sublinear and saturates for low values of gsyn.
The linearized response (thick solid line) is obtained by introducing an inactivating
potassium conductance, OK ("gA" in c). A persistent persistent OK results in a
somewhat sub-linear response (thick dashed line; "gM" in c). The addition of a calcium conductance amplifies the response (thin solid line). (C) Analytically derived
activation curves. The inactivating potassium conductance ("IA") was derived, but
the persistent version (" IM") proved to be more stable.
2 RESULTS
Fig. 1B shows the computed relationship between the excitatory synaptic input conductance and the axial current, Isoma, flowing into the somatic (leftmost) compartment. The synaptic input rapidly saturates; increasing gsyn beyond about 50 nS leads to little further increase in Isoma. This saturation is due to the EPSP in the distal compartment reducing the effective synaptic driving potential. We propose that the first goal of dendritic amplification is to linearize this relationship, so that the soma is more sensitive to the exact amount of excitatory input impinging on the apical tuft, by introducing a potassium conductance that provides a hyperpolarizing current in proportion to the degree of membrane depolarization. The voltage-dependence of such a conductance can be derived by postulating a linear relationship between the synaptic current flowing into the somatic node and the synaptic input, i.e. Isoma = constant · gsyn. In conjunction with Ohm's law and current conservation, this relation leads to a simple fractional polynomial for the voltage dependency of gK (labeled "gA" in Fig. 1C). As the membrane potential depolarizes, gK activates and pulls it back towards EK. At large depolarizations gK inactivates, similar to the "A" potassium conductance, resulting overall in a linear relationship between input and output (Fig. 1B). As the slope conductance of this particular form of gK can become negative, causing amplification of the synaptic input, we use a variant of gK that is monotonized by leveling out the activation curve after it has reached its maximum, similar to the "M" current (Fig. 1C). Incorporating this non-inactivating K+ conductance into the distal compartment results in a slightly sublinear relationship between input and output (Fig. 1B).
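Written out for the simplified circuit, the derivation takes the following form; this is a sketch in our own notation, under the assumptions gCa = 0 and a soma clamped at Vs, with κ the postulated proportionality constant:

```latex
% Sketch of the g_K derivation for the circuit of Fig. 1A (g_Ca = 0).
% V_1, V_2: potentials of the middle and distal compartments; kappa: target slope.
\begin{align*}
\text{middle node: } & g(V_s - V_1) + g(V_2 - V_1) = 0
   \;\Rightarrow\; V_1 = \tfrac{1}{2}(V_s + V_2) \\
\text{postulate: } & I_{\mathrm{soma}} = g(V_1 - V_s) = \tfrac{g}{2}(V_2 - V_s)
   = \kappa\, g_{\mathrm{syn}}
   \;\Rightarrow\; g_{\mathrm{syn}} = \tfrac{g}{2\kappa}(V_2 - V_s) \\
\text{distal node: } & g_{\mathrm{syn}}(E_{\mathrm{syn}} - V_2)
   + g_K(V_2)\,(E_K - V_2) + g(V_1 - V_2) = 0 \\
\Rightarrow\; & g_K(V_2) =
   \frac{\tfrac{g}{2\kappa}(V_2 - V_s)(E_{\mathrm{syn}} - V_2)
         + \tfrac{g}{2}(V_s - V_2)}{V_2 - E_K}
\end{align*}
```

which is indeed a fractional polynomial in the distal membrane potential.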
With gK in place, amplification of Isoma is achieved by introducing an inward current between the soma and the postsynaptic site. The voltage-dependency of the amplification conductance can be derived by postulating Isoma = gain · constant · gsyn. This leads to the non-inactivating gCa shown in Fig. 1C, in which the overall relationship between synaptic input and somatic output current (Fig. 1B) reflects the amplification.
We extend this concept of deriving the form of the required conductances to a detailed model of a morphologically reconstructed layer 5 pyramidal cell from cat visual cortex (Douglas et al., 1991; Fig. 2A). We assume a passive dendritic tree, and include a complement of eight common voltage-dependent conductances in its soma. 500 non-NMDA synapses are distributed on the dendritic tuft throughout layers 1, 2 and 3, and we assume a proportionality between the presynaptic firing frequency fin and the time-averaged synaptically induced conductance change. When fin is increased, the detailed model exhibits the same saturation as seen in the simple model (Fig. 2B). Even if all 500 synapses are activated at fin = 500 Hz, only 0.65 nA of current is delivered to the soma. This saturation is caused when the synaptic input current flows into the high input resistances of the distal dendrites, thereby reducing the synaptic driving potential. Layer 1 and 2 input together can contribute a maximum of 0.25 nA to the soma. This is too little current to cause the cell to spike, in contrast with the experimental evidence (Cauller and Connors, 1994), in which spike discharge was evoked reliably. Electrotonic losses make only a minor contribution to the small somatic signal. Even when the membrane leak current is eliminated by setting Rm to infinity, Isoma only increases a mere 2% to 0.66 nA.
[Figure 2 appears here: (A) reconstructed cell morphology with cortical layers marked; (B) current delivered to soma versus fin (Hz), for layer 5 versus layers 1-3 input and passive versus linearized-and-amplified dendrite; (C) activation curves versus V (mV); (D) input-output behavior.]
Figure 2: Amplification in the detailed model. (A) The morphology of this layer V pyramidal cell was reconstructed from an HRP-stained cell in area 17 of the adult cat (Douglas et al., 1991). The layers are marked in alternating black and grey. The boundaries between superficial layers are not exact but rough estimates, and were chosen at branch points; a few basal dendrites may reach into layer 6. Axon not shown. (B) Current delivered to the soma by stimulation of 500 AMPA synapses throughout either layer 5 or layers 1-3. (C) Derived activation curves for gK and gCa. Sigmoidal fits of the form g(V) = 1/(1 + e^{(Vhalf − V)/K}) resulted in K_K = 3.9 mV, Vhalf,K = −51 mV, K_Ca = 13.7 mV, Vhalf,Ca = −14 mV. (D) Output spike rate as a function of input activation rate of 500 AMPA synapses in layers 1-3, with and without the derived conductances. The dashed line shows the fout rate predicted by using the linear target Isoma as a function of fin in combination with the somatic f-I relationship.
[Figure 3 appears here: trace of the membrane potential Vm (mV) versus t (msec).]
Figure 3: Dendritic calcium spikes. All-or-nothing dendritic Ca2+ spikes can be generated by adding a voltage-independent but Ca2+-dependent K+ conductance to the apical tree with gmax = 11.4 nS. The trace shown is in response to sustained intradendritic current injection of 0.5 nA. For clamp currents of 0.3 nA or less, no calcium spikes are triggered and only single somatic spikes are obtained (not shown). These currents do not substantially affect the current amplifier effect.
By analogy with the simple model of Fig. 1, we eliminate the saturating response by introducing a non-inactivating form of gK spread evenly throughout layers 1-3. The resulting linearized response is amplified by a Ca2+ conductance located at the base of the apical tuft, where the apical dendrite crosses from layer 4 to layer 3 (Fig. 2A). This is in agreement with recent calcium imaging experiments, which established that layer 5 neocortical pyramidal cells have a calcium hot spot in the apical tree about 500-600 μm away from the soma (Tank et al., 1988). Although the derivation of the voltage-dependency of these two conductances is more complicated than in the three-compartment model, the principle of the derivation is similar (Bernander, 1993; Fig. 2C). We derive a Ca2+ conductance, for a synaptic current gain of two, resembling a non-inactivating, high-threshold calcium conductance. The curve relating synaptic input frequency to the axial current flowing into the soma (Fig. 2B) shows both the linearized and amplified relationships.
Once above threshold, the model cell has a linear current-discharge relation with a slope of about 50 spikes per second per nA, in good agreement with experimental observations in vitro (Mason and Larkman, 1990) and in vivo (Ahmed et al., 1993). Given a sustained synaptic input frequency, the somatic f-I relationship can be used to convert the synaptic current flowing into the soma, Isoma, into an equivalent output frequency (Abbott, 1991; Powers et al., 1992; Fig. 2D). This simple transformation accounts for all the relevant nonlinearities, including synaptic saturation, interaction and the threshold mechanism at the soma or elsewhere. We confirmed the validity of our transformation method by explicitly computing the expected relationship between fin and fout, without constraining the somatic potential, and comparing the two. Qualitatively, both methods lead to very similar results (Fig. 2D): in the
presence of dendritic gCa, superficial synaptic input can robustly drive the cell, in a proportional manner over a large input range.
The amplification mechanism derived above is continuous in the input rate. It does not exhibit the slow calcium spikes described in the literature (Pockberger, 1991; Amitai et al., 1993; Kim and Connors, 1993). However, it is straightforward to add a calcium-dependent potassium conductance yielding such spikes. Incorporating such a conductance into the apical trunk leads to calcium spikes (Fig. 3) in response to an intradendritic current injection of 0.4 nA or more, while for weaker inputs no such events are seen. In response to synaptic input to the tuft of 120 Hz or more, these spikes are activated, resulting in a moderate depression (25% or less) of the average output rate, fout (not shown).
In our view, the function of the dendritic conductances underlying this all-or-none
voltage event is the gradual current amplification of superficial input, without amplifying synaptic input to the basal dendrites (Bernander, 1993). Because gCa
depolarizes the membrane, further activating gCa, the gain of the current amplifier
is very sensitive to the density and shape of the dendritic gCa. Thus, neuromodulators that act upon gCa control the extent to which cortical feedback pathways,
acting via superficial synaptic input, have access to the output of the cell.
Acknowledgements
This work was supported by the Office of Naval Research, the National Institute of
Mental Health through the Center for Neuroscience, the Medical Research Council
of the United Kingdom, and the International Human Frontier Science Program.
References
[1] L.F. Abbott. Realistic synaptic inputs for model neuronal networks. Network, 2:245–258, 1991.
[2] B. Ahmed, J.C. Anderson, R.J. Douglas, K.A.C. Martin, and J.C. Nelson. The polyneuronal innervation of spiny stellate neurons in cat visual cortex. Submitted, 1993.
[3] Y. Amitai, A. Friedman, B.W. Connors, and M.J. Gutnick. Regenerative activity in apical dendrites of pyramidal cells in neocortex. Cerebral Cortex, 3:26–38, 1993.
[4] Ö. Bernander. Synaptic integration and its control in neocortical pyramidal cells. Ph.D. thesis, California Institute of Technology, May 1993.
[5] L.J. Cauller and B.W. Connors. Functions of very distal dendrites: experimental and computational studies of layer I synapses on neocortical pyramidal cells. In T. McKenna, J. Javis, and S.F. Zarnetzer, editors, Single Neuron Computation, chapter 8, pages 199–229. Academic Press, Boston, MA, 1992.
[6] L.J. Cauller and B.W. Connors. J. Neuroscience, in press.
[7] E. De Schutter and J.M. Bower. Firing rate of Purkinje cells does not depend on the dendritic location of parallel fiber inputs. Eur. J. of Neurosci., S5:17, 1992.
[8] R.J. Douglas, K.A.C. Martin, and D. Whitteridge. An intracellular analysis of the visual responses of neurones in cat visual cortex. J. Physiology, 440:659–696, 1991.
[9] D.J. Felleman and D.C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1:1–47, 1991.
[10] H.G. Kim and B.W. Connors. Apical dendrites of the neocortex: Correlation between sodium- and calcium-dependent spiking and pyramidal cell morphology. J. Neuroscience, in press.
[11] G. Laurent. A dendritic gain-control mechanism in axonless neurons of the locust, Schistocerca americana. J. Physiology (London), 470:45–54, 1993.
[12] A. Mason and A.U. Larkman. Correlations between morphology and electrophysiology of pyramidal neurons in slices of rat visual cortex. II. Electrophysiology. J. Neuroscience, 10(5):1415–1428, 1990.
[13] H. Pockberger. Electrophysiological and morphological properties of rat motor cortex neurons in vivo. Brain Research, 539:181–190, 1991.
[14] P.K. Powers, R.F. Tobinson, and M.A. Konodi. Effective synaptic current can be estimated from measurements of neuronal discharge. J. Neurophysiology, 68(3):964–968, 1992.
[15] W.G. Regehr, J. Kehoe, P. Ascher, and C.M. Armstrong. Synaptically triggered action-potentials in dendrites. Neuron, 11(1):145–151, 1993.
[16] K.S. Rockland and A. Virga. Terminal arbors of individual "feedback" axons projecting from area V2 to V1 in the macaque monkey: a study using immunohistochemistry of anterogradely transported Phaseolus vulgaris-leucoagglutinin. J. Comp. Neurol., 285:54–72, 1989.
[17] W.A. Spencer and E.R. Kandel. Electrophysiology of hippocampal neurons. IV. Fast prepotentials. J. Neurophysiology, 24:272–285, 1961.
[18] D.W. Tank, M. Sugimori, J.A. Connor, and R.R. Llinás. Spatially resolved calcium dynamics of mammalian Purkinje cells in cerebellar slice. Science, 242:773–777, 1988.
[19] R. Yuste, K.R. Delaney, M.J. Gutnick, and D.W. Tank. Spatially localized calcium accumulations in apical dendrites of layer 5 neocortical neurons. In Neuroscience Abstr. 19, page 616.2, 1993.
6,821 | 7,170 | Learning Neural Representations of
Human Cognition across Many fMRI Studies
Arthur Mensch*
Inria
[email protected]
Julien Mairal†
Inria
[email protected]
Danilo Bzdok
Department of Psychiatry, RWTH
[email protected]
Bertrand Thirion*
Inria
[email protected]
Gaël Varoquaux*
Inria
[email protected]
Abstract
Cognitive neuroscience is enjoying rapid increase in extensive public brain-imaging
datasets. It opens the door to large-scale statistical models. Finding a unified
perspective for all available data calls for scalable and automated solutions to
an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations/cognitive
processes/psychological tasks to brain networks? We cast this challenge in a
machine-learning approach to predict conditions from statistical brain maps across
different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry
cognitive information and can be robustly associated with psychological stimuli.
Our multi-dataset classification model achieves the best prediction performance
on several large reference datasets, compared to models without cognitive-aware
low-dimension representations; it brings a substantial performance boost to the
analysis of small datasets, and can be introspected to identify universal template
cognitive concepts.
Due to the advent of functional brain-imaging technologies, cognitive neuroscience is accumulating
quantitative maps of neural activity responses to specific tasks or stimuli. A rapidly increasing
number of neuroimaging studies are publicly shared (e.g., the human connectome project, HCP [1]),
opening the door to applying large-scale statistical approaches [2]. Yet, it remains a major challenge
to formally extract structured knowledge from heterogeneous neuroscience repositories. As stressed
in [3], aggregating knowledge across cognitive neuroscience experiments is intrinsically difficult due
to the diverse nature of the hypotheses and conclusions of the investigators. Cognitive neuroscience
experiments aim at isolating brain effects underlying specific psychological processes: they yield
statistical maps of brain activity that measure the neural responses to carefully designed stimulus.
Unfortunately, neither regional brain responses nor experimental stimuli can be considered to be
atomic: a given experimental stimulus recruits a spatially distributed set of brain regions [4], while
each brain region is observed to react to diverse stimuli. Taking advantage of the resulting data
richness to build formal models describing psychological processes requires to describe each cognitive
* Inria, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette, France
† Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
conclusion on a common basis for brain response and experimental study design. Uncovering atomic
basis functions that capture the neural building blocks underlying cognitive processes is therefore a
primary goal of neuroscience [5], for which we propose a new data-driven approach.
Several statistical approaches have been proposed to tackle the problem of knowledge aggregation in
functional imaging. A first set of approaches relies on coordinate-based meta-analysis to define robust
neural correlates of cognitive processes: those are extracted from the descriptions of experiments (based on categories defined by text mining [6] or by experts [7]) and correlated with brain coordinates
related to these experiments. Although quantitative meta-analysis techniques provide useful summaries of the existing literature, they are hindered by label noise in the experiment descriptions, and
weak information on brain activation as the maps are reduced to a few coordinates [8]. A second,
more recent set of approaches models directly brain maps across studies, either focusing on studies
on similar cognitive processes [9], or tackling the entire scope of cognition [10, 11]. Decoding, i.e.
predicting the cognitive process from brain activity, across many different studies touching different
cognitive questions is a key goal for cognitive neuroimaging as it provides a principled answer to
reverse inference [12]. However, a major roadblock to scaling this approach is the necessity to label
cognitive tasks across studies in a rich but consistent way, e.g., building an ontology [13].
We follow a more automated approach and cast dataset accumulation into a multi-task learning
problem: our model is trained to decode simultaneously different datasets, using a shared architecture.
Machine-learning techniques can indeed learn universal representations of inputs that give good
performance in multiple supervised problems [14, 15]. They have been successful, especially with
the development of deep neural network [see, e.g., 16], in sharing representations and transferring
knowledge from one dataset prediction model to another (e.g., in computer-vision [17] and audioprocessing [18]). A popular approach is to simultaneously learn to represent the inputs of the
different datasets in a low-dimensional space and to predict the outputs from the low-dimensional
representatives. Using very deep model architectures in functional MRI is currently thwarted by the
signal-to-noise ratio of the available recordings and the relative little size of datasets [19] compared
to computer vision and text corpora. Yet, we show that multi-dataset representation learning is a
fertile ground for identifying cognitive systems with predictive power for mental operations.
Contribution. We introduce a new model architecture dedicated to multi-dataset classification, that
performs two successive linear dimension reductions of the input statistical brain images and predicts
psychological conditions from a learned low-dimensional representation of these images, linked to
cognitive processes. In contrast to previous ontology-based approaches, imposing a structure across
different cognitive experiments is not needed in our model: the representation of brain images is
learned using the raw set of experimental conditions for each dataset. To our knowledge, this work is
the first to propose knowledge aggregation and transfer learning in between functional MRI studies
with such modest level of supervision. We demonstrate the performance of our model on several
openly accessible and rich reference datasets in the brain-imaging domain. The different aspects of
its architecture bring a substantial increase in out-of-sample accuracy compared to models that forgo
learning a cognitive-aware low-dimensional representation of brain maps. Our model remains simple
enough to be interpretable: it can be collapsed into a collection of classification maps, while the space
of low-dimensional representatives can be explored to uncover a set of meaningful latent components.
1 Model: multi-dataset classification of brain statistical images
Our general goal is to extract and integrate biological knowledge across many brain-imaging studies
within the same statistical learning framework. We first outline how analyzing large repositories of
fMRI experiments can be cast as a classification problem. Here, success in capturing brain-behavior
relationships is measured by out-of-sample prediction accuracy. The proposed model (Figure 1)
solves a range of these classification problems in an identical statistical estimation and imposes
a shared latent structure across the single-dataset classification parameters. These shared model
parameters may be viewed as a chain of two dimension reductions. The first reduction layer leverages
knowledge about brain spatial regularities; it is learned from resting-state data and designed to capture
neural activity patterns at different coarseness levels. The second reduction layer projects data on
directions generally relevant for cognitive-state prediction. The combination of both reductions
yields low-dimensional representatives that are less affected by noise and subject variance than
Figure 1: Model architecture: Three-layer multi-dataset classification. The first layer (orange) is
learned from data acquired outside of cognitive experiments and captures a spatially coherent signal
at multiple scales, the second layer (blue) embeds these representations in a space common to all
datasets, from which the conditions are predicted (pink) from multinomial models.
the high-dimensional samples: classification is expected to have better out-of-sample prediction
performance.
1.1 Problem setting: predicting conditions from brain activity in multiple studies
We first introduce our notations and terminology, and formalize a general prediction problem applicable to any task fMRI dataset. In a single fMRI study, each subject performs different experiments in the scanner. During such an experiment, the subjects are presented a set of sensory stimuli (i.e., conditions) that aim at recruiting a target set of cognitive processes. We fit a first-level general linear model for every record to obtain z-score maps that quantify the importance of each condition in explaining each voxel. Formally, the n statistical maps (xi)i∈[n] of a given study form a sequence in R^p, where p is the number of voxels in the brain. Each such observation is labelled by a condition ci in [1, k] whose effect it captures. A single study typically features one or a few (if experiments are repeated) statistical maps per condition and per subject, and may present up to k = 30 conditions. Across the studies, the observed brain maps can be modeled as generated from an unknown joint distribution ((xi, ci))i∈[n] of brain activity and associated cognitive conditions, where variability across trials and subjects acts as confounding noise. In this context, we wish to learn a decoding model that predicts the condition c from brain activity x measured on new subjects or new studies.
Inspired by recent work [10, 20, 21], we frame the condition prediction problem as the estimation of a multinomial classification model. Our models estimate a probability vector of x being labeled by each condition in C. This vector is modeled as a function of (W, b) in R^{p×k} × R^k that takes the softmax form. For all j in [1, k], its j-th coordinate is defined as

p(x, W, b)_j ≜ P[c = j | x, W, b] = exp(W^{(j)⊤} x + b_j) / Σ_{l∈C} exp(W^{(l)⊤} x + b_l).   (1)
Fitting the model weights is done by minimizing the cross-entropy between (p(xi))i and the true labels ((1[ci = j])j∈[k])i, with respect to (W, b), with or without imposing parameter regularization. In this model, an input image is classified between all conditions presented in the whole study. It is possible to restrict this classification to the set of conditions used in a given experiment; the empirical results of this study can be reproduced in this setting.
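For concreteness, a minimal sketch of the prediction rule (1); the toy shapes are assumptions (real statistical maps have on the order of 2·10^5 voxels), and W, b are taken as already fitted by cross-entropy minimization:

```python
import numpy as np

# Minimal sketch of the multinomial decoder of Eq. (1).

def predict_proba(x, W, b):
    logits = W.T @ x + b              # one logit per condition
    logits -= logits.max()            # for numerical stability
    e = np.exp(logits)
    return e / e.sum()                # the softmax vector p(x, W, b)

rng = np.random.default_rng(0)
p_vox, k = 1000, 5                    # illustrative sizes
W, b = 0.01 * rng.normal(size=(p_vox, k)), np.zeros(k)
x = rng.normal(size=p_vox)            # one statistical map
proba = predict_proba(x, W, b)
print(proba, proba.argmax())          # predicted condition = argmax probability
```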
The challenge of model parameter estimation. A major inconvenience of the vanilla multinomial
model lies in the ratio between the limited number of samples provided by a typical fMRI dataset
and the overwhelming number of model weights to be estimated. Fitting the model amounts to
estimating k discriminative brain maps, i.e. millions of parameters (4M for the 23 conditions of HCP), whereas most brain-imaging studies yield less than a hundred observations and therefore only a few thousand samples. This makes it hard to reasonably approximate the population parameters for
successful generalization, especially because the variance between subjects is high compared to the
variance between conditions. The obstacle is usually tackled in one of two major ways in brain imaging: 1) we can impose sparsity or a-priori structure over the model weights. Alternatively, 2) we
can reduce the dimension of input data by performing spatial clustering or univariate feature selection
by ANOVA. However, we note that, on the one hand, regularization strategies frequently incur costly
computational budgets if one wants to obtain interpretable weights [22] and they introduce artificial
bias. On the other hand, existing techniques developed in fMRI analysis for dimension reduction can
lead to distorted signal and accuracy losses [23]. Most importantly, previous statistical approaches
are not tuned to identifying conditions from task fMRI data. We therefore propose to use a dimension
reduction that is estimated from data and tuned to capture the common hidden aspects shared by
statistical maps across studies ? we aggregate several classification models that share parameters.
1.2 Learning shared representation across studies for decoding
We now consider several fMRI studies. (xi)i∈[n] is the union of all statistical maps from all datasets. We write D the set of all studies, Cd the set of all kd conditions from study d, k ≜ Σ_d kd the total number of conditions, and Sd the subset of [n] that indexes samples of study d. For each study d, we estimate the parameters (Wd, bd) for the classification problem defined above. Adapting the multi-task learning framework of [14], we constrain the weights (Wd)d to share a common latent structure: namely, we fix a latent dimension l ≤ p, and enforce that for all datasets d,

Wd = We Wd′,   (2)

where the matrix We in R^{p×l} is shared across datasets, and (Wd′)d are dataset-specific classification matrices from an l-dimensional input space. Intuitively, We should be a "consensus" projection matrix that projects every sample xi from every dataset onto a lower-dimensional representation We⊤ xi in R^l that is easy to label correctly.
The latent dimension l may be chosen larger than k. In this case, regularization is necessary to ensure that the factorization (2) is indeed useful, i.e., that the multi-dataset classification problem does not reduce to separate multinomial regressions on each dataset. To regularize our model, we apply Dropout [24] to the projected data representation. Namely, during successive training iterations, we set a random fraction r of the reduced data features to 0. This prevents the co-adaptation of matrices We and (Wd′)d and ensures that every direction of We is useful for classifying every dataset. Formally, Dropout amounts to sampling binary diagonal matrices M in R^{l×l} during training, with Bernoulli-distributed coefficients; for all datasets d, Wd′ is estimated through the task of classifying Dropout-corrupted reduced data (M We⊤ xi)_{i∈Sd, M∼M}.
In practice, the matrices W_e and (W_d')_d are learned by jointly minimizing the following expected risk,
where the objective is the sum of the single-study cross-entropies, averaged over the Dropout noise:

    min_{W_e, (W_d')_d}  Σ_{d∈D} (1/|S_d|) Σ_{i∈S_d} Σ_{j∈C_d} E_M[ −1_{j=c_i} log p_d[x_i, W_e M W_d', b_d]_j ]    (3)
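A minimal PyTorch sketch of this factored architecture (the module below is our own illustrative rendering, with layer names of our choosing, not the released code):

import torch.nn as nn

class FactoredDecoder(nn.Module):
    # Shared projection W_e plus per-study softmax heads W_d', as in (2)-(3).
    def __init__(self, p, l, conditions_per_study, dropout_rate=0.75):
        super().__init__()
        self.shared = nn.Linear(p, l, bias=False)      # implements W_e^T
        self.dropout = nn.Dropout(dropout_rate)        # Bernoulli masks M
        self.heads = nn.ModuleDict({
            study: nn.Linear(l, k_d)                   # W_d'^T and bias b_d
            for study, k_d in conditions_per_study.items()
        })

    def forward(self, x, study):
        z = self.dropout(self.shared(x))               # corrupted latent code
        return self.heads[study](z)                    # logits over C_d

# loss for one mini-batch of study d (one cross-entropy term of Eq. (3)):
# loss = nn.functional.cross_entropy(model(x_batch, d), labels_batch)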
Imposing a common structure on the classification matrices (W_d)_d is natural, as the classes to be
distinguished do share some common neural organization – brain maps have a correlated spatial
structure, while the psychological conditions of the different datasets may trigger shared cognitive
primitives underlying human cognition [21, 20]. With our design, we aim at learning a matrix W_e
that captures these common aspects and thus benefits the generalization performance of all the
classifiers. As W_e is estimated from data, brain maps from one study are enriched by the maps from
all the other studies, even if the conditions to be classified are not shared among studies. In so doing,
our modeling approach allows transfer learning among all the classification tasks.
Unfortunately, estimators provided by solving (3) may have limited generalization performance, as n
remains relatively small (∼ 20,000) compared to the number of parameters. We address this issue by
performing an initial dimension reduction that captures the spatial structure of brain maps.
1.3  Initial dimension reduction using localized rest-fMRI activity patterns
The projection expressed by W_e ignores the signal structure of statistical brain maps. Acknowledging
this structure in commonly acquired brain measurements should allow us to reduce the dimensionality
of the data with little signal loss, possibly with the additional benefit of a denoising effect. Several recent
studies [25] in the brain-imaging domain suggest using fMRI data acquired in experiment-free studies
for such dimension reduction. For this reason, we introduce a first reduction of dimension that is not
estimated from statistical maps, but from resting-state data. Formally, we enforce W_e = W_g W_e',
where g > l (g ≈ 300), W_g ∈ ℝ^{p×g} and W_e' ∈ ℝ^{g×l}. Intuitively, the multiplication by the matrix W_g
should summarize the spatial distribution of brain maps, while multiplying by W_e', which is estimated
by solving (3), should find low-dimensional representations able to capture cognitive features. W_e'
is now of reasonable size (g × l ≈ 15,000 coefficients): solving (3) should estimate parameters with better
generalization performance. Defining an appropriate matrix W_g is the purpose of the next paragraphs.
Resting-state decomposition. The initial dimension reduction determines the relative contribution
of statistical brain maps over what is commonly interpreted by neuroscience investigators as functional
networks. We discover such macroscopical brain networks by performing a sparse matrix factorization
over the massive resting-state dataset provided in the HCP900 release [1]: such a decomposition
technique, described e.g. in [26, 27], efficiently provides (i.e., in the order of a few hours) a given
number of sparse spatial maps that decompose the resting-state signal with good reconstruction
performance. That is, it finds a sparse and positive matrix D in ℝ^{p×g} and loadings A in ℝ^{g×m} such
that the m resting-state brain images X_rs in ℝ^{p×m} are well approximated by D A. D is thus a set
of slightly overlapping networks – each voxel belongs to at most two networks. To maximally
preserve Euclidean distances when performing the reduction, we perform an orthogonal projection,
which amounts to setting W_g ≜ D(D^T D)^{-1}. Replacing in (3), we obtain the reduced expected risk
minimization problem, where the input dimension is now the number g of dictionary components:
    min_{W_e' ∈ ℝ^{g×l}, (W_d')_d}  Σ_{d∈D} (1/|S_d|) Σ_{i∈S_d} Σ_{j∈C_d} E_M[ −1_{j=c_i} log p_d[W_g^T x_i, W_e' M W_d', b_d]_j ].    (4)
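In code, this resting-state projection is straightforward once the dictionary D is available (a sketch; D is assumed precomputed by the factorization above):

import numpy as np

def orthogonal_projector(D):
    # W_g = D (D^T D)^{-1}: W_g^T x gives the coordinates of the orthogonal
    # projection of x onto span(D).
    return D @ np.linalg.inv(D.T @ D)       # (p, g); the Gram matrix is small

# X: (n, p) statistical maps; reduced inputs of problem (4):
# X_reduced = X @ orthogonal_projector(D)   # (n, g)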
Multiscale projection. Selecting the "best" number of brain networks g is an ill-posed problem [28]: the size of the functional networks that will prove relevant for condition classification is
unknown to the investigator. To address this issue, we propose to reduce high-resolution data (x_i)_i
in a multi-scale fashion: we initially extract 3 sparse spatial dictionaries (D_j)_{j∈[3]} with 16, 64
and 512 components respectively. Then, we project statistical maps onto each of the dictionaries,
and concatenate the loadings, in a process analogous to projecting on an overcomplete dictionary in
computer vision [e.g., 29]. This amounts to defining the matrix W_g as the concatenation

    W_g ≜ [ D_1 (D_1^T D_1)^{-1},  D_2 (D_2^T D_2)^{-1},  D_3 (D_3^T D_3)^{-1} ] ∈ ℝ^{p×(16+64+512)}.    (5)
With this definition, the reduced data (W_g^T x_i)_i carry information about the network activations
at different scales. As such, it makes the classification maps learned by the model more regular
than when using a single-scale dictionary, and indeed yields more interpretable classification maps.
However, it brings only a small improvement in terms of predictive accuracy compared to using
a single dictionary of 512 components. We further discuss the multi-scale decomposition in Appendix A.2.
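The multi-scale matrix of Eq. (5) is then just a column-wise concatenation of such projectors (same assumptions as the sketch above):

import numpy as np

def multiscale_projector(dictionaries):
    # dictionaries: arrays D_j of shape (p, g_j), e.g. g_j in (16, 64, 512);
    # returns W_g of shape (p, sum of g_j), as in Eq. (5).
    return np.concatenate(
        [D @ np.linalg.inv(D.T @ D) for D in dictionaries], axis=1
    )

# X_multi = X @ multiscale_projector([D1, D2, D3])   # loadings at all scales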
1.4  Training with stochastic gradient descent
As illustrated in Figure 1, our model may be interpreted as a three-layer neural network with linear
activations and several read-out heads, each corresponding to a specific dataset. The model can be
trained using stochastic gradient descent, following a previously employed alternated training scheme
[18]: we cycle through the datasets d ∈ D and select, at each iteration, a mini-batch of samples (x_i)_{i∈B},
where B ⊂ S_d has the same size for all datasets. We perform a gradient step – the weights W_d', b_d
and W_e' are updated, while the others are left unchanged. The optimizer thus sees the same number
of samples for each dataset, and the expected stochastic gradient is the gradient of (4), so that the
empirical risk decreases in expectation and we find a critical point of (4) asymptotically. We use the
Adam solver [30] as a flavor of stochastic gradient descent, as it allows faster convergence.
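A sketch of this alternated scheme, reusing the FactoredDecoder above (the data-loader plumbing and step budget are our own):

import itertools
import torch

def train(model, loaders, n_steps, lr=1e-3):
    # loaders: dict study -> iterator yielding (x, y) batches of equal size
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    cycler = itertools.cycle(loaders.items())
    for _ in range(n_steps):
        study, loader = next(cycler)            # cycle through the datasets
        x, y = next(loader)
        loss = torch.nn.functional.cross_entropy(model(x, study), y)
        optimizer.zero_grad()
        loss.backward()     # gradients reach W_e' and this study's head only
        optimizer.step()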
Computational cost. Training the model on the projected data (W_g^T x_i)_i takes 10 minutes on a conventional single-CPU machine with an Intel Xeon 3.21 GHz. The initial step of computing the dictionaries
(D_1, D_2, D_3) from all HCP900 resting-state records (4 TB of data) takes 5 hours using [27], while
transforming the data from all the studies with the W_g projection takes around 1 hour. Adding a new dataset
with 30 subjects to our model and performing the joint training takes no more than 20 minutes. This
is much less than the cost of fitting a first-level GLM on this dataset (∼ 1 h per subject).
2  Experiments
We characterize the behavior and performance of our model on several large, publicly available
brain-imaging datasets. First, to validate the relevance of all the elements of our model, we perform an
ablation study. It proves that the multi-scale spatial dimension reduction and the use of multi-dataset
classification substantially improve classification performance, and suggests that the proposed
model captures a new and interesting latent structure of brain images. We further illustrate the effect
of transfer learning by systematically varying the number of subjects in a single dataset: we show
how multi-dataset learning helps mitigate the decrease in accuracy due to smaller train size – a
result of much use for analysing cognitive experiments on small cohorts. Finally, we illustrate the
interpretability of our model and show how the latent "cognitive space" can be explored to uncover
template brain maps associated with related conditions in different datasets.
2.1  Datasets and tools
Datasets. Our experimental study features 5 publicly available task fMRI studies. We use all resting-state records from the HCP900 release [1] to compute the sparse dictionaries that are used in the first
dimension reduction materialized by W_g. We succinctly describe the conditions of each dataset –
we refer the reader to the original publications for further details.
• HCP: gambling, working memory, motor, language, social and relational tasks. 800 subjects.
• Archi [31]: localizer protocol, motor, social and relational task. 79 subjects.
• Brainomics [32]: localizer protocol. 98 subjects.
• CamCan [33]: audio-video task, with frequency variation. 606 subjects.
• LA5c consortium [34]: task-switching, balloon analog risk taking, stop-signal and spatial working memory capacity tasks – high-level tasks. 200 subjects.
The last four datasets are target datasets, on which we measure out-of-sample prediction performance.
The larger HCP dataset serves as a knowledge-transferring dataset, which should boost their performance when considered in the multi-dataset model. We register the task time-series in the reference
MNI space before fitting a general linear model (GLM) and computing the maps (standardized by
z-scoring) associated with each base condition – no manual design of contrasts is involved. More
details on the pipeline used for z-map extraction are provided in Appendix A.1.
Tools. We use pytorch¹ to define and train the proposed models, nilearn [35] to handle brain datasets,
along with scikit-learn [36] to design the experimental pipelines. Sparse brain decompositions were
computed from the whole HCP900 resting-state data. The code for reproducing experiments is
available at http://github.com/arthurmensch/cogspaces. Our model involves a few non-critical hyperparameters: we use batches of size 256, set the latent dimension l = 100 and use a
Dropout rate r = 0.75 in the latent cognitive space – this value performs slightly better than r = 0.5.
We use a multi-scale dictionary with 16, 64 and 512 components, as it yields the best quantitative
and qualitative results². Finally, test accuracy is measured on half of the subjects of each dataset,
which are removed from the training sets beforehand. Benchmarks are repeated 20 times with random split
folds to estimate the variance in performance.
2.2  Dimension reduction and transfer improve test accuracy
For the four benchmark studies, the proposed model brings between +1.3% and +13.4% extra test
accuracy compared to simple multinomial classification. To further quantify which aspects of the
model improve performance, we perform an ablation study: we measure the prediction accuracy
of six models, from the simplest to the most complete model described in Section 1. The first
three experiments study the effect of the initial dimension reduction and of regularization³. The last three
experiments measure the performance of the proposed factored model and the effect of multi-dataset
classification.
¹ http://pytorch.org/
² Note that using only the 512-component dictionary yields comparable predictive accuracy. Quantitatively, the multi-scale approach is beneficial when using dictionaries with fewer components (e.g., 16, 64, 128) – see Appendix A.2 for a quantitative validation of the multi-scale approach.
³ For these models, the ℓ2 and Dropout regularization parameters are estimated by nested cross-validation.
[Figure 2 shows test accuracy on Archi, Brainomics, CamCan and LA5C for the six ablated models: full input + ℓ2, dimension reduction + ℓ2, dimension reduction + Dropout, factored model + Dropout, transfer from HCP, and transfer from all datasets.]
Figure 2: Ablation results. Each dimension reduction of the model makes a relevant contribution.
Dropout regularization is very effective when applied to the cognitive latent space. Learning this
latent space allows knowledge to be transferred between datasets.
[Figure 3 shows test-accuracy learning curves on Archi, Brainomics and CamCan as the number of training subjects grows, for three settings: no transfer, transfer from HCP, and transfer from all datasets.]
Figure 3: Learning curves in the single-dataset and multi-dataset setting. Estimating the latent
cognitive space from multiple datasets is very useful for studying small cohorts.
1. Baseline ℓ2-penalized multinomial classification, where we predict c from x ∈ ℝ^p directly.
2. Multinomial classification after projection on a dictionary, i.e. predicting c from W_g^T x.
3. Same as experiment 2, using Dropout noise on the projected data W_g^T x.
4. Factored model in the single-study case: solving (4) with the target study only.
5. Factored model in a two-study case: using the target study alongside HCP.
6. Factored model in the multi-study case: using the target study alongside all other studies.
The results are summarized in Figure 2. On average, both dimension reductions, introduced by W_g
and W_e', are beneficial to generalization performance. Using many datasets for prediction brings a
further increase in performance, providing evidence of transfer learning between datasets.
In detail, the comparison between experiments 1, 2 and 3 confirms that projecting brain images onto
functional networks of interest is a good strategy to capture cognitive information [20, 25]. Note that,
in addition to improving the statistical properties of the estimators, the projection drastically reduces
the computational complexity of training our full model. Experiments 2 and 3 measure the impact of
the regularization method without learning a further latent projection. Using Dropout on the input
space performs consistently better than ℓ2 regularization (+1% to +5%); this can be explained in
view of [37], which interprets input-Dropout as an ℓ2 regularization on the natural model parametrization.
Experiment 4 shows that Dropout regularization becomes much more powerful when learning a
second dimension reduction, i.e. when solving problem (4). Even when using a single study for
learning, we observe a significant improvement (+3% to +7%) in performance on three out of four
datasets. Learning a latent-space projection together with Dropout-based data augmentation in this
space is thus a much better regularization strategy than simple ℓ2 or input-Dropout regularization.
Finally, the comparison between experiments 4, 5 and 6 exhibits the expected transfer effect. On
three out of four target studies, learning the projection matrix W_e' using several datasets leads to an
accuracy gain from +1.1% to +1.6%, consistent across folds. The more datasets are used, the higher
the accuracy gain – note also that this gain increases with smaller train size. Jointly classifying
images on several datasets thus brings extra information to the cognitive model, which allows finding
better representative brain maps for the target study. In particular, we conjecture that the large number
of subjects in HCP helps modeling inter-subject noise. On the other hand, we observe a negative
transfer effect on LA5c, as the tasks of this dataset share few cognitive aspects with the tasks of the
other datasets. This encourages us to use richer dataset repositories for further improvement.
[Figure 4 shows classification maps for face viewing (z = −10 mm) and audio calculation (z = 46 mm), comparing the multi-scale spatial projection, the single-study latent cognitive space, and the multi-study latent cognitive space.]
Figure 4: Classification maps from our model are more specific to higher-level functions: they focus
more on the FFA for faces, and on the left intraparietal sulci for calculations.
Figure 5: The latent space of our model can be explored to unveil template brain statistical
maps that correspond to bags of conditions related across color-coded datasets.
2.3  Transfer learning is very effective on small datasets
To further demonstrate the benefits of the multi-dataset model, we vary the size of the target datasets
(Archi, Brainomics and CamCan) and compare the performance of the single-study model with the
model that aggregates the Archi, Brainomics, CamCan and HCP studies. Figure 3 shows that the effect
of transfer learning increases as we reduce the training size of the target dataset. This suggests that
the learned data embedding W_g W_e' does capture some universal cognitive information, and can be
learned from different data sources. As a consequence, a larger study can be aggregated to mitigate the
small number of training samples in the target dataset. With only 5 subjects, the gain in accuracy due
to transfer is +13% on Archi, +8% on Brainomics, and +6% on CamCan. Multi-study learning
should thus prove very useful for classifying conditions in studies with ten or so subjects, which are still
very common in neuroimaging.
2.4  Introspecting classification maps
At prediction time, our multi-dataset model can be collapsed into one multinomial model per dataset.
Each dataset d is then classified using the matrix W_g W_e' W_d'. Similar to the linear models classically
used for decoding, the model weights for each condition can be represented as a brain map. Figure 4
shows the maps associated with digit computation and face viewing for the Archi dataset. Models 2, 4 and 5 from the ablation study are compared. Although it is hard to assess the intrinsic
quality of the maps, we can see that the introduction of the second projection layer and the multi-study problem formulation (here, appending the HCP dataset) yields maps with more weight on the
high-level functional regions known to be specific to the task: for face viewing, the FFA stands out
more compared to primary visual cortices; for calculations, the weights of the intraparietal sulci
become left-lateralized, as has been reported for symbolic number processing [38].
2.5  Exploring the latent space
Within our model, classification is performed in the same l-dimensional space E for all datasets,
which is learned during training. To further show that this space captures cognitive information,
we extract from E template brain images associated with general cognitive concepts. Fitting our
model on the Archi, Brainomics, CamCan and HCP studies, we extract representative vectors of E
with a k-means clustering over the projected data and consider the centroids (y_j)_j of 50 clusters.
Each centroid y_j can be associated with a brain image t_j ∈ ℝ^p that lies in the span of D_1, D_2
and D_3. In doing so, we go backward through the model and obtain a representative of y_j with
well-delineated spatial regions. Going forward, we compute the classification probability vectors
from W_d'^T y_j = W_d'^T W_e'^T W_g^T t_j for each study d. Together, these probability vectors give an indication
of the cognitive functions that t_j captures. Figure 5 represents six template images, associated
with their probability vectors, shown as word clouds. We clearly obtain interpretable pairs of brain
image/cognitive concepts. These pairs capture, across datasets, clusters of experiment conditions with
similar brain representations.
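A sketch of this exploration (matrix names follow the text; the helper itself, and the back-projection via transposes, are our own simple rendering):

import numpy as np
from sklearn.cluster import KMeans

def latent_templates(Z, W_g, W_e, heads, n_clusters=50):
    # Z: (n, l) latent codes of the training maps; W_g: (p, g); W_e: (g, l);
    # heads: dict study -> (W_d_prime of shape (l, k_d), b_d of shape (k_d,))
    centroids = KMeans(n_clusters=n_clusters).fit(Z).cluster_centers_
    # backward pass: latent centroids -> brain images in the dictionary span
    templates = centroids @ W_e.T @ W_g.T                  # (n_clusters, p)
    # forward pass: latent centroids -> per-study condition scores
    scores = {s: centroids @ W + b for s, (W, b) in heads.items()}
    return templates, scores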
3  Discussion
We compare our model to a previously proposed formulation for brain image classification. We show
how our model differs from convex multi-task learning, and stress the importance of Dropout.
Task fMRI classification. Our model is related to a previous semi-supervised classification
model [20] that also performs multinomial classification of conditions in a low-dimensional space:
the dimension reduction they propose is the equivalent of our projection W_g. Our approach differs
in two aspects. First, we replace the initial semi-supervised dimension reduction with an unsupervised
analysis of resting state, using a much more tractable approach that we have shown to be conservative
of cognitive signals. Second, we introduce the additional cognitive-aware projection W_e', learned
on multiple studies. It substantially improves out-of-sample prediction performance, especially on
small datasets, and above all allows us to uncover a cognitive-aware latent space, as we have shown in
our experiments.
Convex multi-task learning. Due to the Dropout regularization and the fact that l is allowed to be
larger than k, our formulation differs from the classical approach [39] to the multi-task problem, which
would estimate Θ = W_e'[W_1', . . . , W_D'] ∈ ℝ^{g×k} by solving a convex empirical risk minimization
problem with a trace-norm penalization that encourages Θ to be low-rank. We tested this formulation,
which does not perform better than the explicit factorization formulation with Dropout regularization.
Trace-norm regularized regression has the further drawback of being slower to train, as it typically
operates with full gradients, e.g. using FISTA [40]. In contrast, the non-convex explicit factorization
model is easily amenable to large-scale stochastic optimization – hence our focus.
Importance of Dropout. The use of Dropout regularization is crucial in our model. Without
Dropout, in the single-study case with l > k, solving the factored problem (4) yields a solution worse
in terms of empirical risk than solving the simple multinomial problem on (W_g^T x_i)_i, which finds a
global minimizer of (4). Yet, Figure 2 shows that the model enriched with a latent space (red) has
better test accuracy than the simple model (orange), thanks to the Dropout noise
applied to the latent-space representation of the input data. Dropout is thus a promising novel way of
regularizing fMRI models.
4  Conclusion
We proposed and characterized a novel cognitive neuroimaging modeling scheme that blends latent
factor discovery and transfer learning. It can be applied to many different cognitive studies jointly
without requiring explicit correspondences between the cognitive tasks. The model helps identify
the fundamental building blocks underlying the diversity of cognitive processes that the human
mind can realize. It produces a basis of cognitive processes whose generalization power is validated
quantitatively, and extracts representations of brain activity that ground the transfer of knowledge
from existing fMRI repositories to newly acquired task data. The captured cognitive representations
will improve as we provide the model with a growing number of studies and cognitive conditions.
5  Acknowledgments
This project has received funding from the European Union's Horizon 2020 Framework Programme
for Research and Innovation under grant agreement No 720270 (Human Brain Project SGA1). Julien
Mairal was supported by the ERC grant SOLARIS (No 714381) and a grant from ANR (MACARON
project ANR-14-CE23-0003-01). We thank Olivier Grisel for his most helpful insights.
References
[1] David Van Essen, Kamil Ugurbil, and others. The Human Connectome Project: A data acquisition
perspective. NeuroImage, 62(4):2222?2231, 2012.
[2] Russell A. Poldrack, Chris I. Baker, Joke Durnez, Krzysztof J. Gorgolewski, Paul M. Matthews, Marcus R.
Munafò, Thomas E. Nichols, Jean-Baptiste Poline, Edward Vul, and Tal Yarkoni. Scanning the horizon:
Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2):
115?126, 2017.
[3] Allen Newell. You can?t play 20 questions with nature and win: Projective comments on the papers of this
symposium. 1973.
[4] John D. Medaglia, Mary-Ellen Lynall, and Danielle S. Bassett. Cognitive Network Neuroscience. Journal
of Cognitive Neuroscience, 27(8):1471?1491, 2015.
[5] Lisa Feldman Barrett. The future of psychology: Connecting mind to brain. Perspectives on psychological
science, 4(4):326?339, 2009.
[6] Tal Yarkoni, Russell A. Poldrack, Thomas E. Nichols, David C. Van Essen, and Tor D. Wager. Large-scale
automated synthesis of human functional neuroimaging data. Nature methods, 8(8):665?670, 2011.
[7] Angela R. Laird, Jack J. Lancaster, and Peter T. Fox. Brainmap. Neuroinformatics, 3(1):65?77, 2005.
[8] Gholamreza Salimi-Khorshidi, Stephen M. Smith, John R. Keltner, Tor D. Wager, and Thomas E. Nichols.
Meta-analysis of neuroimaging data: A comparison of image-based and coordinate-based pooling of
studies. NeuroImage, 45(3):810?823, 2009.
[9] Tor D. Wager, Lauren Y. Atlas, Martin A. Lindquist, Mathieu Roy, Choong-Wan Woo, and Ethan Kross.
An fMRI-Based Neurologic Signature of Physical Pain. New England Journal of Medicine, 368(15):
1388?1397, 2013.
[10] Yannick Schwartz, Bertrand Thirion, and Gael Varoquaux. Mapping paradigm ontologies to and from the
brain. In Advances in Neural Information Processing Systems, pages 1673?1681. 2013.
[11] Oluwasanmi Koyejo and Russell A. Poldrack. Decoding cognitive processes from functional MRI. In
NIPS Workshop on Machine Learning for Interpretable Neuroimaging, pages 5?10, 2013.
[12] Russell A. Poldrack, Yaroslav O. Halchenko, and Stephen José Hanson. Decoding the large-scale structure
of brain function by classifying mental states across individuals. Psychological Science, 20(11):1364?1372,
2009.
[13] Jessica A. Turner and Angela R. Laird. The cognitive paradigm ontology: Design and application.
Neuroinformatics, 10(1):57?66, 2012.
[14] Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and
unlabeled data. Journal of Machine Learning Research, 6(Nov):1817?1853, 2005.
[15] Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classification
with dirichlet process priors. Journal of Machine Learning Research, 8(Jan):35?63, 2007.
[16] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436?444, 2015.
[17] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. In International
Conference on Machine Learning, volume 32, pages 647?655, 2014.
[18] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural
networks with multitask learning. In International Conference on Machine Learning, pages 160?167,
2008.
[19] Danilo Bzdok and B. T. Thomas Yeo. Inference in the age of big data: Future perspectives on neuroscience.
NeuroImage, 155(Supplement C):549–564, 2017.
[20] Danilo Bzdok, Michael Eickenberg, Olivier Grisel, Bertrand Thirion, and Gaël Varoquaux. Semi-supervised
factored logistic regression for high-dimensional neuroimaging data. In Advances in Neural Information
Processing Systems, pages 3348?3356, 2015.
[21] Timothy Rubin, Oluwasanmi O Koyejo, Michael N Jones, and Tal Yarkoni. Generalized CorrespondenceLDA Models (GC-LDA) for Identifying Functional Regions in the Brain. In Advances in Neural Information
Processing Systems, pages 1118?1126, 2016.
[22] Alexandre Gramfort, Bertrand Thirion, and Gaël Varoquaux. Identifying Predictive Regions from fMRI
with TV-L1 Prior. In International Workshop on Pattern Recognition in Neuroimaging, pages 17?20, 2013.
[23] Bertrand Thirion, Gaël Varoquaux, Elvis Dohmatob, and Jean-Baptiste Poline. Which fMRI clustering
gives good brain parcellations? Frontiers in neuroscience, 8:167, 2014.
[24] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):
1929?1958, 2014.
[25] Thomas Blumensath, Saad Jbabdi, Matthew F. Glasser, David C. Van Essen, Kamil Ugurbil, Timothy E.J.
Behrens, and Stephen M. Smith. Spatially constrained hierarchical parcellation of the brain with restingstate fMRI. NeuroImage, 76:313?324, 2013.
[26] Arthur Mensch, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. Dictionary learning for massive
matrix factorization. In International Conference on Machine Learning, pages 1737?1746, 2016.
[27] Arthur Mensch, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. Stochastic Subsampling for
Factorizing Huge Matrices. IEEE Transactions on Signal Processing, 99(to appear), 2017.
[28] Simon B. Eickhoff, Bertrand Thirion, Gaël Varoquaux, and Danilo Bzdok. Connectivity-based parcellation:
Critique and implications. Human brain mapping, 36(12):4771?4792, 2015.
[29] Stéphane G. Mallat and Zhifeng Zhang. Matching pursuits with time-frequency dictionaries. IEEE
Transactions on Signal Processing, 41(12):3397?3415, 1993.
[30] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International
Conference for Learning Representations, 2015.
[31] Philippe Pinel, Bertrand Thirion, Sébastien Meriaux, Antoinette Jobert, Julien Serres, Denis Le Bihan,
Jean-Baptiste Poline, and Stanislas Dehaene. Fast reproducible identification and large-scale databasing of
individual functional cognitive networks. BMC Neuroscience, 8(1):91, 2007.
[32] Dimitri Papadopoulos Orfanos, Vincent Michel, Yannick Schwartz, Philippe Pinel, Antonio Moreno, Denis
Le Bihan, and Vincent Frouin. The Brainomics/Localizer database. NeuroImage, 144:309?314, 2017.
[33] Meredith A. Shafto, Lorraine K. Tyler, Marie Dixon, Jason R. Taylor, James B. Rowe, Rhodri Cusack,
Andrew J. Calder, William D. Marslen-Wilson, John Duncan, Tim Dalgleish, Richard N. Henson,
Carol Brayne, and Fiona E. Matthews. The Cambridge Centre for Ageing and Neuroscience (Cam-CAN)
study protocol: A cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing.
BMC Neurology, 14:204, 2014.
[34] RA Poldrack, Eliza Congdon, William Triplett, KJ Gorgolewski, KH Karlsgodt, JA Mumford, FW Sabb,
NB Freimer, ED London, TD Cannon, et al. A phenome-wide examination of neural and cognitive function.
Scientific Data, 3:160110, 2016.
[35] Alexandre Abraham, Fabian Pedregosa, Michael Eickenberg, Philippe Gervais, Andreas Mueller, Jean
Kossaifi, Alexandre Gramfort, Bertrand Thirion, and Gael Varoquaux. Machine learning for neuroimaging
with scikit-learn. Frontiers in Neuroinformatics, 8:14, 2014.
[36] Fabian Pedregosa, Ga?l Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel,
Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos,
David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine
learning in Python. Journal of Machine Learning Research, 12:2825?2830, 2011.
[37] Stefan Wager, Sida Wang, and Percy S Liang. Dropout Training as Adaptive Regularization. In Advances
in Neural Information Processing Systems, pages 351?359. 2013.
[38] Stephanie Bugden, Gavin R. Price, D. Adam McLean, and Daniel Ansari. The role of the left intraparietal
sulcus in the relationship between symbolic number processing and children?s arithmetic competence.
Developmental Cognitive Neuroscience, 2(4):448?457, 2012.
[39] Nathan Srebro, Jason Rennie, and Tommi S. Jaakkola. Maximum-margin matrix factorization. In Advances
in Neural Information Processing Systems, pages 1329?1336, 2004.
[40] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems.
SIAM Journal on Imaging Sciences, 2(1):183?202, 2009.
6,822 | 7,171 | A KL-LUCB Bandit Algorithm for
Large-Scale Crowdsourcing
Ervin Tánczos* and Robert Nowak*
University of Wisconsin-Madison
[email protected], [email protected]
Bob Mankoff
Former Cartoon Editor of the New Yorker
[email protected]
Abstract
This paper focuses on best-arm identification in multi-armed bandits with bounded
rewards. We develop an algorithm that is a fusion of lil-UCB and KL-LUCB,
offering the best qualities of the two algorithms in one method. This is achieved by
proving a novel anytime confidence bound for the mean of bounded distributions,
which is the analogue of the LIL-type bounds recently developed for sub-Gaussian
distributions. We corroborate our theoretical results with numerical experiments
based on the New Yorker Cartoon Caption Contest.
1  Multi-Armed Bandits for Large-Scale Crowdsourcing
This paper develops a new multi-armed bandit (MAB) algorithm for large-scale crowdsourcing, in the style
of KL-UCB [4, 9, 3]. Our work is strongly motivated by crowdsourcing contests, like the New
Yorker Cartoon Caption contest [10]³. The new approach targets the "best-arm identification problem"
[1] in the fixed confidence setting and addresses two key limitations of existing theory and algorithms:
(i) State-of-the-art algorithms for best-arm identification are based on sub-Gaussian confidence bounds
[5] and fail to exploit the fact that rewards are usually bounded in crowdsourcing applications.
(ii) Existing KL-UCB algorithms for best-arm identification do exploit bounded rewards [8], but have
suboptimal performance guarantees in the fixed confidence setting, both in terms of dependence
on problem-dependent hardness parameters (Chernoff information) and on the number of arms,
which can be large in crowdsourcing applications.
The new algorithm we propose and analyze is called lil-KLUCB, since it is inspired by the lil-UCB
algorithm [5] and the KL-LUCB algorithm [8]. The lil-UCB algorithm is based on sub-Gaussian
bounds and has a sample complexity for best-arm identification that scales as
    Σ_{i≥2} Δ_i^{−2} log( δ^{−1} log(Δ_i^{−2}) ),
where δ ∈ (0, 1) is the desired confidence and Δ_i = μ_1 − μ_i is the gap between the means of the
best arm (denoted as arm 1) and arm i. If the rewards are in [0, 1], then the KL-LUCB algorithm has
* This work was partially supported by the NSF grant IIS-1447449 and the AFOSR grant FA9550-13-1-0138.
³ For more details on the New Yorker Cartoon Caption Contest, see the Supplementary Materials.
a sample complexity scaling essentially like⁴
    Σ_{i≥2} (D_i*)^{−1} log( n δ^{−1} (D_i*)^{−1} ),
where n is the number of arms and D_i* := D*(μ_1, μ_i) is the Chernoff information between a Ber(μ_1)
and a Ber(μ_i) random variable⁵. Ignoring the logarithmic factor, this bound is optimal for the case
of Bernoulli rewards [7, 11]. Comparing these two bounds, we observe that KL-LUCB may offer
benefits since D_i* = D*(μ_1, μ_i) ≥ (μ_1 − μ_i)²/2 = Δ_i²/2, but lil-UCB has better logarithmic
dependence on the Δ_i² and no explicit dependence on the number of arms n. Our new algorithm
lil-KLUCB offers the best of both worlds, providing a sample complexity that scales essentially like
    Σ_{i≥2} (D_i*)^{−1} log( δ^{−1} log (D_i*)^{−1} ).
The key to this result is a novel anytime confidence bound for sums of bounded random variables,
which requires a significant departure from previous analyses of KL-based confidence bounds.
The practical benefit of lil-KLUCB is illustrated in terms of the New Yorker Caption Contest problem
[10]. The goal of that crowdsourcing task is to identify the funniest cartoon caption from a batch
of n ≈ 5000 captions submitted to the contest each week. The crowd provides "3-star" ratings for
the captions, which can be mapped to {0, 1/2, 1}, for example. Unfortunately, many of the captions
are not funny, getting average ratings close to 0 (and consequently very small variances). This
fact, however, is ideal for KL-based confidence intervals, which are significantly tighter than those
based on sub-Gaussianity and the worst-case variance of 1/4. Compared to existing methods, the
lil-KLUCB algorithm better addresses the two key features of this sort of application: (1) a very large
number of arms, and (2) bounded reward distributions which, in many cases, have very low variance.
In certain instances, this can have a profound effect on sample complexity (e.g., O(n²) complexity
for algorithms using sub-Gaussian bounds vs. O(n log n) for lil-KLUCB, as shown in Table 1).
The paper is organized as follows. Section 2 defines the best-arm identification problem, gives the
lil-KLUCB algorithm and states the main results. We also briefly review related literature, and
compare the performance of lil-KLUCB to that of previous algorithms. Section 3 provides the main
technical contribution of the paper, a novel anytime confidence bound for sums of bounded random
variables. Section 4 analyzes the performance of the lil-KLUCB algorithm. Section 5 provides
experimental support for the lil-KLUCB algorithm using data from the New Yorker Caption Contest.
2  Problem Statement and Main Results
Consider a MAB problem with n arms. We use the shorthand notation [n] := {1, . . . , n}. For every
i ∈ [n] let {X_{i,j}}_{j∈ℕ} denote the reward sequence of arm i, and suppose that P(X_{i,j} ∈ [0, 1]) = 1 for
all i ∈ [n], j ∈ ℕ. Furthermore, assume that all rewards are independent, and that X_{i,j} ∼ P_i for all
j ∈ ℕ. Let the mean reward of arm i be denoted by μ_i and assume w.l.o.g. that μ_1 > μ_2 ≥ · · · ≥ μ_n.
We focus on the best-arm identification problem in the fixed-confidence setting. At every time t ∈ ℕ
we are allowed to select an arm to sample (based on past rewards) and observe the next element in
its reward sequence. Based on the observed rewards, we wish to find the arm with the highest mean
reward. In the fixed confidence setting, we prescribe a probability of error δ ∈ (0, 1) and our goal is
to construct an algorithm that finds the best arm with probability at least 1 − δ. Among 1 − δ accurate
algorithms, one naturally favors those that require fewer samples. Hence proving upper bounds on
the sample complexity of a candidate algorithm is of prime importance.
The lil-KLUCB algorithm that we propose is a fusion of lil-UCB [5] and KL-LUCB [8], and its
operation is essentially a special instance of LUCB++ [11]. At each time step t, let T_i(t) denote the
total number of samples drawn from arm i so far, and let μ̂_{i,T_i(t)} denote the corresponding empirical
mean. The algorithm is based on lower and upper confidence bounds of the following general form:
⁴ A more precise characterization of the sample complexity is given in Section 2.
⁵ The Chernoff information between random variables Ber(x) and Ber(y) (0 < x < y < 1) is D*(x, y) = D(z*, x) = D(z*, y), where D(z, x) = z log(z/x) + (1 − z) log((1 − z)/(1 − x)) and z* is the unique z ∈ (x, y) such that D(z, x) = D(z, y).
for each i ∈ [n] and any δ ∈ (0, 1),

    L_i(t, δ) = inf{ m < μ̂_{i,T_i(t)} : D(μ̂_{i,T_i(t)}, m) ≤ c log( ξ log₂(2T_i(t)) / δ ) / T_i(t) },
    U_i(t, δ) = sup{ m > μ̂_{i,T_i(t)} : D(μ̂_{i,T_i(t)}, m) ≤ c log( ξ log₂(2T_i(t)) / δ ) / T_i(t) },
where c and ξ are small constants (defined in the next section). These bounds are designed so that,
with probability at least 1 − δ, L_i(T_i(t), δ) ≤ μ_i ≤ U_i(T_i(t), δ) holds for all t ∈ ℕ. For any t ∈ ℕ,
let TOP(t) be the index of the arm with the highest empirical mean, breaking ties at random. With
this notation, we state the lil-KLUCB algorithm and our main theoretical result.
lil-KLUCB
1. Initialize by sampling every arm once.
2. While L_{TOP(t)}(T_{TOP(t)}(t), δ/(n−1)) ≤ max_{i≠TOP(t)} U_i(T_i(t), δ) do:
   • sample the following two arms: TOP(t), and argmax_{i≠TOP(t)} U_i(T_i(t), δ),
   and update means and confidence bounds.
3. Output TOP(t)
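A compact Python sketch of the bounds and the sampling loop (the constants c and ξ, the reward oracle pull, and all names here are our own illustrative choices; the exact constants are given in Section 3 and the Supplementary Material):

import math

def kl(p, q):
    # Bernoulli KL divergence D(p, q), clipped away from 0 and 1
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_bounds(mu_hat, t, delta, c=1.0, xi=1.0):
    # bisection for L_i(t, delta) and U_i(t, delta) from the displays above
    rhs = c * math.log(xi * math.log2(2 * t) / delta) / t
    def solve(lo, hi, upper):
        for _ in range(60):
            mid = (lo + hi) / 2
            if kl(mu_hat, mid) <= rhs:
                lo, hi = (mid, hi) if upper else (lo, mid)
            else:
                lo, hi = (lo, mid) if upper else (mid, hi)
        return mid
    return solve(0.0, mu_hat, upper=False), solve(mu_hat, 1.0, upper=True)

def lil_klucb(pull, n, delta, max_steps=10**6):
    # pull(i) returns one reward in [0, 1] from arm i
    counts = [1] * n
    sums = [pull(i) for i in range(n)]             # step 1: sample every arm
    top = 0
    for _ in range(max_steps):
        mu = [s / cnt for s, cnt in zip(sums, counts)]
        top = max(range(n), key=lambda i: mu[i])   # ties broken arbitrarily
        lower_top = kl_bounds(mu[top], counts[top], delta / (n - 1))[0]
        uppers = {i: kl_bounds(mu[i], counts[i], delta)[1]
                  for i in range(n) if i != top}
        challenger = max(uppers, key=uppers.get)
        if lower_top > uppers[challenger]:         # stopping rule of step 2
            return top
        for i in (top, challenger):                # sample both, update stats
            sums[i] += pull(i)
            counts[i] += 1
    return top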
Theorem 1. For every i ≥ 2 let μ̃_i ∈ (μ_i, μ_1), and let μ̃ = max_{i≥2} μ̃_i. With probability at least 1 − 2δ,
lil-KLUCB returns the arm with the largest mean and the total number of samples it collects is upper
bounded by

    inf_{μ̃_2,...,μ̃_n} [ c₀ log( (n−1) δ^{−1} log D*(μ_1, μ̃)^{−1} ) / D*(μ_1, μ̃)  +  Σ_{i≥2} c₀ log( δ^{−1} log D*(μ_i, μ̃_i)^{−1} ) / D*(μ_i, μ̃_i) ],

where c₀ is some universal constant and D*(x, y) is the Chernoff information.
Remark 1. Note that the LUCB++ algorithm of [11] is general enough to handle identification of
the top k arms (not just the best-arm). All arguments presented in this paper also go through when
considering the top-k problem for k > 1. However, to keep the arguments clear and concise, we
chose to focus on the best-arm problem only.
2.1  Comparison with previous work
We now compare the sample complexity of lil-KLUCB to that of the two most closely related
algorithms, KL-LUCB [8] and lil-UCB [5]. For a detailed review of the history of MAB problems
and the use of KL-confidence intervals for bounded rewards, we refer the reader to [3, 9, 4].
For the KL-LUCB algorithm, Theorem 3 of [8] guarantees a high-probability sample complexity
upper bound scaling as
    inf_{c∈(μ_2, μ_1)} Σ_{i≥1} (D*(μ_i, c))^{−1} log( n δ^{−1} (D*(μ_i, c))^{−1} ).
Our result improves this in two ways. On one hand, we eliminate the unnecessary logarithmic
dependence on the number of arms n in every term. Note that the log n factor still appears in
Theorem 1 in the term corresponding to the number of samples on the best arm. It is shown in [11]
that this factor is indeed unavoidable. The other improvement lil-KLUCB offers over KL-LUCB
is improved logarithmic dependence on the Chernoff-information terms. This is due to the tighter
confidence intervals derived in Section 3.
Comparing Theorem 1 to the sample complexity of lil-UCB, we see that the two are of the same form,
the exception being that the Chernoff-information terms take the place of the squared mean-gaps
(which arise due to the use of sub-Gaussian (SG) bounds). To give a sense of the improvement this
can provide, we compare the sums⁶

    S_KL = Σ_{i≥2} 1 / D*(μ_i, μ_1)    and    S_SG = Σ_{i≥2} 1 / Δ_i².

Let μ, μ′ ∈ (0, 1), μ < μ′ and Δ = |μ − μ′|. Note that the Chernoff information between Ber(μ)
and Ber(μ′) can be expressed as

    D*(μ, μ′) = max_{x∈[μ,μ′]} min{ D(x, μ), D(x, μ′) } = D(x*, μ) = D(x*, μ′) = ( D(x*, μ) + D(x*, μ′) ) / 2,

for some unique x* ∈ [μ, μ′]. It follows that

    D*(μ, μ′) ≥ min_{x∈[μ,μ′]} ( D(x, μ) + D(x, μ′) ) / 2 = log[ 1 / ( √(μ(μ+Δ)) + √((1−μ)(1−μ−Δ)) ) ].

Using this with every term in S_KL gives us an upper bound on that sum. If the means are all bounded
well away from 0 and 1, then S_KL may not differ that much from S_SG. There are some situations,
however, when the two expressions behave radically differently. As an example, consider a situation
when μ_1 = 1. In this case we get

    S_KL ≤ Σ_{i≥2} 2 / log(1/(1−Δ_i)) ≤ Σ_{i≥2} 2 / Δ_i ≪ Σ_{i≥2} 1 / Δ_i² = S_SG.
Table 1 illustrates the difference between the scaling of the sums S_KL and S_SG when the gaps have
the parametric form Δ_i = (i/n)^α.

Table 1: S_KL versus S_SG for mean gaps Δ_i = (i/n)^α, i = 1, . . . , n

    α      | (0, 1/2) | 1/2     | (1/2, 1) | 1       | (1, ∞)
    S_KL   | n        | n       | n        | n log n | n^α
    S_SG   | n        | n log n | n^{2α}   | n²      | n^{2α}
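These scalings are easy to check numerically; a sketch reusing the kl helper from the lil-KLUCB code above (parameters are our own, chosen to mirror the α = 1 column):

def chernoff(x, y):
    # D*(x, y): bisect for the unique z* in (x, y) with D(z, x) = D(z, y),
    # as in footnote 5
    a, b = min(x, y), max(x, y)
    lo, hi = a, b
    for _ in range(60):
        z = (lo + hi) / 2
        if kl(z, a) < kl(z, b):   # z still "closer" to a in KL: move right
            lo = z
        else:
            hi = z
    return kl(z, a)

n, alpha, mu1 = 1000, 1.0, 1.0
gaps = [(i / n) ** alpha for i in range(2, n + 1)]
S_KL = sum(1.0 / chernoff(mu1 - d, mu1) for d in gaps)  # grows like n log n
S_SG = sum(1.0 / d ** 2 for d in gaps)                  # grows like n**2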
We see that KL-type confidence bounds can sometimes provide a significant advantage in terms of
the sample complexity. Intuitively, the gains will be greatest when many of the means are close to 0
or 1 (and hence have low variance). We will illustrate in Section 5 that such gains often also manifest
in practical applications like the New Yorker Caption Contest problem.
3  Anytime Confidence Intervals for Sums of Bounded Random Variables
The main step in our analysis is proving a sharp anytime confidence bound for the mean of
bounded random variables. These will be used to show, in Section 4, that lil-KLUCB draws at
most O((D_i*)^{−1} log log (D_i*)^{−1}) samples from a suboptimal arm i, where D_i* := D*(μ_1, μ_i) is
the Chernoff information between a Ber(μ_1) and a Ber(μ_i) random variable and arm 1 is the arm
with the largest mean. The iterated log factor is a necessary consequence of the law of the iterated
logarithm [5], and it is in this sense that we call the bound sharp. Prior work on MAB algorithms
based on KL-type confidence bounds [4, 9, 3] did not focus on deriving tight anytime confidence
bounds.
Consider a sequence of iid random variables Y_1, Y_2, . . . that are bounded in [0, 1] and have mean μ.
Let μ̂_t = (1/t) Σ_{j∈[t]} Y_j be the empirical mean of the observations up to time t ∈ ℕ.
Theorem 2. Let μ ∈ [0, 1] and δ ∈ (0, 1) be arbitrary. Fix any l ≥ 0 and set N = 2^l, and define

    ξ(N) = δ^{1/(N+1)} ( 1{l≠0} Σ_{t∈[N]} log₂(2t)^{−(N+1)/N} + N Σ_{k≥l} (k+1)^{−(N+1)/N} )^{N/(N+1)}.

⁶ Consulting the proof of Theorem 1, it is clear that the number of samples on the sub-optimal arms of lil-KLUCB scales essentially as S_KL w.h.p. (ignoring doubly logarithmic terms), and a similar argument can be made about lil-UCB. This justifies considering these sums in order to compare lil-KLUCB and lil-UCB.
(i) Define the sequence z_t ∈ (0, 1 − μ], t ∈ ℕ, such that

        D( μ + (N/(N+1)) z_t, μ ) = log( ξ(N) log₂(2t) / δ ) / t,        (1)

    if a solution exists, and z_t = 1 − μ otherwise. Then P(∃t ∈ ℕ : μ̂_t − μ > z_t) ≤ δ.

(ii) Define the sequence z_t > 0, t ∈ ℕ, such that

        D( μ − (N/(N+1)) z_t, μ ) = log( ξ(N) log₂(2t) / δ ) / t,

    if a solution exists, and z_t = μ otherwise. Then P(∃t ∈ ℕ : μ̂_t − μ < −z_t) ≤ δ.
The result above can be used to construct anytime confidence bounds for the mean as follows. Consider
part (i) of Theorem 2 and fix δ. The result gives a sequence z_t that upper bounds the deviations of
the empirical mean. It is defined through an equation of the form D(μ + N z_t/(N+1), μ) = f_t.
Note that the arguments of the function on the left must be in the interval [0, 1], in particular
N z_t/(N+1) < 1 − μ, and the maximum of D(μ + x, μ) for x > 0 is D(1, μ) = log μ^{−1}. Hence,
equation (1) does not have a solution if f_t is too large (that is, if t is small). In these cases we set
z_t = 1 − μ. However, since f_t is decreasing, equation (1) does have a solution when t ≥ T (for some
T depending on μ), and this solution is unique (since D(μ + x, μ) is strictly increasing).
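For reference, z_t itself can be computed by the same kind of bisection (reusing the kl helper above; xi stands in for the constant ξ(N)):

import math

def z_sequence(mu, t, delta, N, xi):
    # solve D(mu + (N/(N+1)) z, mu) = log(xi * log2(2t) / delta) / t for z
    f_t = math.log(xi * math.log2(2 * t) / delta) / t
    if kl(mu + N / (N + 1) * (1 - mu), mu) < f_t:
        return 1 - mu              # no solution: f_t too large (t is small)
    lo, hi = 0.0, 1 - mu
    for _ in range(60):
        z = (lo + hi) / 2
        if kl(mu + N / (N + 1) * z, mu) < f_t:
            lo = z                 # left-hand side still below the target
        else:
            hi = z
    return z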
With high probability, μ̂_t − μ ≤ z_t for all t ∈ ℕ by Theorem 2. Furthermore, the function D(μ + x, μ)
is increasing in x ≥ 0. By combining these facts we get that, with probability at least 1 − δ,

    D( μ + (N/(N+1)) z_t, μ ) ≥ D( (N μ̂_t + μ)/(N+1), μ ).
On the other hand
log (?(N ) log2 (2t)/?)
,
t
by definition. Chaining these two inequalities leads to the lower confidence bound
log (?(N ) log (2t)/?)
N?
bt +m
2
L(t, ?) = inf m < ?
bt : D N +1 , m ?
t
D ?+
N
N +1 zt , ?
?
(2)
which holds for all times t with probability at least 1 ? ?. Considering the left deviations of ?
bt ? ?
we can get an upper confidence bound in a similar manner:
log (?(N ) log (2t)/?)
+m
2
U (t, ?) = sup m > ?
bt : D NN?bt+1
,m ?
.
(3)
t
That is, for all times t, with probability at least 1 ? 2? we have L(t, ?) ? ?
bt ? U (t, ?).
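As an illustration of how (2) and (3) can be evaluated in practice, here is a minimal sketch (ours, not from the paper) that computes $L(t,\delta)$ and $U(t,\delta)$ by bisection, using the monotonicity of $m \mapsto D(\frac{N\hat\mu_t+m}{N+1}, m)$ on either side of $\hat\mu_t$; the default value for $\rho(N)$ is a crude stand-in consistent with $\log\rho(N) \lesssim 2\log_2 N$ and should be replaced by the exact constant from Theorem 2:

```python
import math

def kl_div(p, q, eps=1e-12):
    """Bernoulli KL divergence D(p, q), clamped away from {0, 1}."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def anytime_bounds(mu_hat, t, delta, N=8, rho=None, iters=60):
    """L(t, delta) and U(t, delta) from Eqs. (2)-(3)."""
    rho = float(N * N) if rho is None else rho          # stand-in for rho(N)
    f_t = math.log(rho * math.log2(2 * t) / delta) / t  # right-hand side of (2)-(3)
    inside = lambda m: kl_div((N * mu_hat + m) / (N + 1), m) <= f_t
    lo, hi = 0.0, mu_hat                # L: smallest m < mu_hat still inside the set
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if inside(mid) else (mid, hi)
    lower = hi
    lo, hi = mu_hat, 1.0                # U: largest m > mu_hat still inside the set
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(mid) else (lo, mid)
    upper = lo
    return lower, upper

print(anytime_bounds(mu_hat=0.6, t=100, delta=0.01))
```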
Note that the constant $\log \rho(N) \lesssim 2\log_2(N)$, so the choice of $N$ plays a relatively mild role in
the bounds. However, we note here that if $N$ is sufficiently large, then $\frac{N\hat\mu_t + m}{N+1} \approx \hat\mu_t$, and thus
$D(\frac{N\hat\mu_t + m}{N+1}, m) \approx D(\hat\mu_t, m)$, in which case the bounds above are easily compared to those in prior
works [4, 9, 3]. We make this connection more precise and show that the confidence intervals defined
as
$$L'(t, \delta) = \inf\Big\{m < \hat\mu_t : D(\hat\mu_t, m) \le \frac{c(N)\log(\rho(N)\log_2(2t)/\delta)}{t}\Big\}, \text{ and}$$
$$U'(t, \delta) = \sup\Big\{m > \hat\mu_t : D(\hat\mu_t, m) \le \frac{c(N)\log(\rho(N)\log_2(2t)/\delta)}{t}\Big\},$$
satisfy $L'(t, \delta) \le \mu \le U'(t, \delta)$ for all $t$, with probability $1 - 2\delta$. The constant $c(N)$ is defined in
Theorem 1 in the Supplementary Material, where the correctness of $L'(t, \delta)$ and $U'(t, \delta)$ is shown.
Proof of Theorem 2. The proofs of parts (i) and (ii) are completely analogous, hence in what follows
we only prove part (i). Note that $\{\hat\mu_t - \mu > z_t\} \iff \{S_t > t z_t\}$, where $S_t = \sum_{j\in[t]}(Y_j - \mu)$
denotes the centered sum up to time $t$. We start with a simple union bound
$$P(\exists t \in \mathbb{N} : S_t > t z_t) \le P(\exists t \in [N] : S_t > t z_t) + \sum_{k \ge l} P\big(\exists t \in [2^k, 2^{k+1}] : S_t > t z_t\big). \qquad (4)$$
First, we bound each summand in the second term individually. In an effort to save space, we define
the event $A_k = \{\exists t \in [2^k, 2^{k+1}] : S_t > t z_t\}$. Let $t_{j,k} = (1 + \frac{j}{N}) 2^k$. In what follows we use the
notation $t_j \equiv t_{j,k}$. We have
$$P(A_k) \le \sum_{j\in[N]} P(\exists t \in [t_{j-1}, t_j] : S_t > t z_t) \le \sum_{j\in[N]} P\big(\exists t \in [t_{j-1}, t_j] : S_t > t_{j-1} z_{t_{j-1}}\big),$$
where the last step is true if $t z_t$ is non-decreasing in $t$. This technical claim is formally shown in
Lemma 1 in the Supplementary Material. However, to give a short heuristic, it is easy to see that
$t z_t$ has an increasing lower bound. Noting that $D(\mu + x, \mu)$ is convex in $x$ (the second derivative is
positive), and that $D(\mu, \mu) = 0$, we have $D(1, \mu)x \ge D(\mu + x, \mu)$. Hence $z_t \gtrsim t^{-1} \log \log t$.
Using a Chernoff-type bound together with Doob's inequality, we can continue as
$$P(A_k) \le \sum_{j\in[N]} \inf_{\lambda > 0} P\big(\exists t \in [t_{j-1}, t_j] : \exp(\lambda S_t) > \exp(\lambda t_{j-1} z_{t_{j-1}})\big)$$
$$\le \sum_{j\in[N]} \exp\Big(-\sup_{\lambda > 0}\big(\lambda t_{j-1} z_{t_{j-1}} - \log \mathbb{E}\, e^{\lambda S_{t_j}}\big)\Big) = \sum_{j\in[N]} \exp\Big(-t_j \sup_{\lambda \ge 0}\big(\lambda \tfrac{N+j-1}{N+j} z_{t_{j-1}} - \log \mathbb{E}\, e^{\lambda(Y_1 - \mu)}\big)\Big). \qquad (5)$$
Using $\mathbb{E}(e^{\lambda Y_1}) \le \mathbb{E}(e^{\lambda \eta})$ where $\eta \sim \mathrm{Ber}(\mu)$ (see Lemma 9 of [4]), and the notation $\gamma_j = \frac{N+j-1}{N+j}$,
$$P(A_k) \le \sum_{j\in[N]} \exp\Big(-t_j \sup_{\lambda \ge 0}\big(\lambda \gamma_j z_{t_{j-1}} - \log \mathbb{E}\, e^{\lambda(\eta - \mu)}\big)\Big) = \sum_{j\in[N]} \exp\big(-t_j D(\mu + \gamma_j z_{t_{j-1}}, \mu)\big), \qquad (6)$$
since the rate function of a Bernoulli random variable can be explicitly computed, namely we have
$\sup_{\lambda > 0}(\lambda x - \log \mathbb{E}(e^{\lambda(\eta-\mu)})) = D(\mu + x, \mu)$ (see [2]).
Again, we use the convexity of $D(\mu + x, \mu)$. For any $\zeta \in (0, 1)$ we have $\zeta D(\mu + x, \mu) \ge D(\mu + \zeta x, \mu)$, since $D(\mu, \mu) = 0$. Using this with $\zeta = \frac{N}{\gamma_j(N+1)}$ and $x = \gamma_j z_{t_{j-1}}$, we get that
$$\tfrac{N}{\gamma_j(N+1)}\, D(\mu + \gamma_j z_{t_{j-1}}, \mu) \ge D\big(\mu + \tfrac{N}{N+1} z_{t_{j-1}}, \mu\big).$$
This implies
$$P(A_k) \le \sum_{j\in[N]} \exp\Big(-t_j \tfrac{N+1}{N}\gamma_j\, D\big(\mu + \tfrac{N}{N+1} z_{t_{j-1}}, \mu\big)\Big). \qquad (7)$$
Plugging in the definition of $t_j$ and the sequence $z_t$, and noting that $t_j \gamma_j = t_{j-1}$, we arrive at the bound
$$P(A_k) \le \sum_{j\in[N]} \exp\Big(-\tfrac{N+1}{N}\log\big(\rho(N)\log_2(2^{k+1})/\delta\big)\Big) = N\Big(\frac{\delta}{\rho(N)(k+1)}\Big)^{\frac{N+1}{N}}.$$
Regarding the first term in (4), again using the Bernoulli rate function bound we have
$$P(\exists t \in [N] : \hat\mu_t - \mu > z_t) \le \sum_{t\in[N]} P(\hat\mu_t - \mu > z_t) \le \sum_{t\in[N]} \exp\big(-t D(\mu + z_t, \mu)\big).$$
Using the convexity of $D(\mu + x, \mu)$ as before, we can continue as
$$P(\exists t \in [N] : \hat\mu_t - \mu > z_t) \le \sum_{t\in[N]} \exp\Big(-t \tfrac{N+1}{N} D\big(\mu + \tfrac{N}{N+1} z_t, \mu\big)\Big) \le \sum_{t\in[N]} \exp\Big(-\tfrac{N+1}{N}\log\big(\rho(N)\log_2(2t)/\delta\big)\Big)$$
$$= \delta^{\frac{N+1}{N}}\rho(N)^{-\frac{N+1}{N}} \sum_{t\in[N]} \log_2(2t)^{-\frac{N+1}{N}}.$$
Plugging the two bounds back into (4) we conclude that
$$P(\exists t : \hat\mu_t - \mu > z_t) \le \delta^{\frac{N+1}{N}}\rho(N)^{-\frac{N+1}{N}}\Big(\mathbf{1}\{l \ne 0\}\sum_{j\in[N]}\log_2(2j)^{-\frac{N+1}{N}} + N\sum_{k\ge l}(k+1)^{-\frac{N+1}{N}}\Big) = \delta,$$
by the definition of $\rho(N)$.
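As a quick numerical sanity check on part (i) (our own experiment, with an illustrative value standing in for the exact $\rho(N)$), one can estimate the probability that the empirical mean ever exceeds $\mu + z_t$ over a finite horizon and verify that it stays below $\delta$:

```python
import math
import random

def kl(p, q, eps=1e-12):
    """Bernoulli KL divergence D(p, q), clamped away from {0, 1}."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def z_t(mu, t, delta, N=8, rho=64.0, iters=50):
    """Solve D(mu + N z/(N+1), mu) = log(rho log2(2t)/delta)/t for z (Eq. 1)."""
    f_t = math.log(rho * math.log2(2 * t) / delta) / t
    if kl(mu + N * (1.0 - mu) / (N + 1), mu) <= f_t:
        return 1.0 - mu                        # no solution: saturate, as in Thm. 2
    lo, hi = 0.0, 1.0 - mu
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kl(mu + N * mid / (N + 1), mu) < f_t else (lo, mid)
    return hi

mu, delta, T, trials = 0.3, 0.1, 500, 200
zs = [z_t(mu, t, delta) for t in range(1, T + 1)]
random.seed(0)
violations = 0
for _ in range(trials):
    s = 0.0
    for t in range(1, T + 1):
        s += random.random() < mu              # Bernoulli(mu) observation
        if s / t - mu > zs[t - 1]:
            violations += 1
            break
print("empirical violation rate:", violations / trials, "(target <= delta =", delta, ")")
```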
4 Analysis of lil-KLUCB
Recall that the lil-KLUCB algorithm uses confidence bounds of the form $U_i(t, \delta) = \sup\{m > \hat\mu_t :
D(\hat\mu_t, m) \le f_t(\delta)\}$ with some decreasing sequence $f_t(\delta)$. In this section we make this dependence
explicit, and use the notations $U_i(f_t(\delta))$ and $L_i(f_t(\delta))$ for upper and lower confidence bounds. For
any $\epsilon > 0$ and $i \in [n]$, define the events $\Omega_i(\epsilon) = \{\forall t \in \mathbb{N} : \mu_i \in [L_i(f_t(\epsilon)), U_i(f_t(\epsilon))]\}$.
The correctness of the algorithm follows from the correctness of the individual confidence intervals,
as is usually the case with LUCB algorithms. This is shown formally in Proposition 1 provided in the
Supplementary Materials. The main focus in this section is to show a high probability upper bound
on the sample complexity. This can be done by combining arguments frequently used for analyzing
LUCB algorithms and those used in the analysis of lil-UCB [5]. The proof is very similar in spirit
to that of the LUCB++ algorithm [11]. Due to spatial restrictions, we only provide a proof sketch
here, while the detailed proof is provided in the Supplementary Materials.
Proof sketch of Theorem 1. Observe that at each time step two things can happen (apart from stopping): (1) Arm 1 is not sampled (two sub-optimal arms are sampled); (2) Arm 1 is sampled together
with some other (suboptimal) arm. Our aim is to upper bound the number of times any given arm is
sampled for either of the reasons above. We do so by conditioning on the event
$$\Omega' = \Omega_1(\delta) \cap \Big(\bigcap_{i \ge 2} \Omega_i(\epsilon_i)\Big), \text{ for a certain choice of } \{\epsilon_i\} \text{ defined below.}$$
For instance, if arm 1 is not sampled at a given time $t$, we know that $\mathrm{TOP}(t) \ne 1$, which means
there must be an arm $i \ge 2$ such that $U_i(T_i(t), \delta) \ge U_1(T_1(t), \delta)$. However, on the event $\Omega_1(\delta)$, the
UCB of arm 1 is accurate, implying that $U_i(T_i(t), \delta) \ge \mu_1$. This implies that $T_i(t)$ cannot be too
big, since on $\Omega_i(\epsilon_i)$, $\hat\mu_{i,t}$ is "close" to $\mu_i$, and also $U_i(T_i(t), \delta)$ is not much larger than $\hat\mu_i$. All this is
made formal in Lemma 2, yielding the following upper bound on the number of times arm $i$ is sampled
for reason (1):
$$\tau_i(\delta \wedge \epsilon_i) = \min\{t \in \mathbb{N} : f_t(\delta \wedge \epsilon_i) < D^*(\mu_i, \mu_1)\}.$$
Similar arguments can be made about the number of samples of any suboptimal arm $i$ for reason (2),
and also the number of samples on arm 1. This results in the sample complexity upper bound
$$\frac{K_1 \log\big((n-1)\delta^{-1}\log D^*(\mu_1, \tilde\mu)^{-1}\big)}{D^*(\mu_1, \tilde\mu)} + \sum_{i \ge 2} \frac{K_1\big(\log(\delta^{-1}\log D^*(\mu_i, \tilde\mu_i)^{-1}) + \log \epsilon_i^{-1}\big)}{D^*(\mu_i, \tilde\mu_i)},$$
on the event $\Omega'$, where $K_1$ is a universal constant. Finally, we define the quantities $\epsilon_i = \sup\{\epsilon >
0 : U_i(f_t(\epsilon)) \ge \mu_i \;\forall t \in \mathbb{N}\}$. Note that we have $P(\epsilon_i < \epsilon) = P(\exists t \in \mathbb{N} : U_i(f_t(\epsilon)) < \mu_i) \le \epsilon$
according to Theorem 1 in the Supplementary Material. Substituting $\epsilon = \exp(-D^*(\mu_i, \tilde\mu_i)z)$ we get
$$P\Big(\frac{\log \epsilon_i^{-1}}{D^*(\mu_i, \tilde\mu_i)} \ge z\Big) \le \exp(-D^*(\mu_i, \tilde\mu_i)z).$$
Hence $\{\epsilon_i\}_{i \ge 2}$ are independent sub-exponential variables, which allows us to control their contribution to the sum above using standard techniques.
5 Real-World Crowdsourcing
We now compare the performance of lil-KLUCB to that of other algorithms in the literature. We do
this using both synthetic data and real data from the New Yorker Cartoon Caption contest [10].7 To
keep comparisons fair, we run the same UCB algorithm for all the competing confidence bounds.
7 These data can be found at https://github.com/nextml/caption-contest-data
We set $N = 8$ and $\delta = 0.01$ in our experiments. The confidence bounds are [KL]: the KL-bound
derived based on Theorem 2, [SG1]: a matching sub-Gaussian bound derived using the proof of
Theorem 2, using sub-Gaussian tails instead of the KL rate-function (the exact derivations are in the
Supplementary Material), and [SG2]: the sharper sub-Gaussian bound provided by Theorem 8 of [7].
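For concreteness, the shared experimental protocol can be sketched as the following generic optimistic-sampling loop (our own schematic, not the authors' released code), in which `upper_bound` stands for any of the competing anytime bounds ([KL], [SG1], [SG2]); the `sg_bound` shown is only an illustrative sub-Gaussian-style bound, not the exact [SG1] or [SG2]:

```python
import numpy as np

def run_ucb(means, horizon, upper_bound, seed=0):
    """Generic UCB loop; upper_bound(mu_hat, pulls) is a pluggable confidence bound."""
    rng = np.random.default_rng(seed)
    n = len(means)
    pulls = np.ones(n)
    sums = rng.binomial(1, means).astype(float)   # initialize: pull every arm once
    best = int(np.argmax(means))
    hits = []
    for _ in range(horizon):
        i = int(np.argmax(upper_bound(sums / pulls, pulls)))  # most optimistic arm
        sums[i] += rng.binomial(1, means[i])
        pulls[i] += 1
        top5 = np.argsort(-(sums / pulls))[:5]    # current empirically best 5 arms
        hits.append(best in top5)
    return np.array(hits)

def sg_bound(mu_hat, pulls, delta=0.01):
    """Illustrative sub-Gaussian-style anytime bound."""
    return mu_hat + np.sqrt(2.0 * np.log(np.log2(2.0 * pulls) / delta) / pulls)

means = 1.0 - (np.arange(1000) / 1000.0) ** 0.5   # mean profile as in Figure 1
hits = run_ucb(means, horizon=5000, upper_bound=sg_bound)
print("P(best arm in top 5) near the end:", hits[-500:].mean())
```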
[Figure 1 appears here: two panels plotting P(best arm in top 5) against the number of samples (10 thousands), averaged over 250 trials, for Kaufmann lil-UCB, KL-UCB, and SG lil-UCB; left panel alpha = 1, right panel alpha = 1/2.]
We compare these methods by computing the empirical probability that the best arm is among the top
5 empirically best arms, as a function of the total number of samples. We do so using synthetic
data in Figure 1, where the Bernoulli rewards simulate cases from Table 1, and using real human
response data from two representative New Yorker caption contests in Figure 2.
Figure 1: Probability of the best-arm in the top 5 empirically best arms, as a function of the number of samples,
based on 250 repetitions. $\mu_i = 1 - ((i-1)/n)^\alpha$, with $\alpha = 1$ in the left panel, and $\alpha = 1/2$ in the right panel.
The mean-profile is shown above each plot. [KL] Blue; [SG1] Red; [SG2] Black.
As seen in Table 1, the KL confidence bounds have the potential to greatly outperform the sub-Gaussian ones. To illustrate that this indeed translates into superior performance, we simulate two cases,
with means $\mu_i = 1 - ((i-1)/n)^\alpha$, with $\alpha = 1/2$ and $\alpha = 1$, and $n = 1000$. As expected, the
KL-based method requires significantly fewer samples (about 20% for $\alpha = 1$ and 30% for $\alpha = 1/2$)
to find the best arm. Furthermore, the arms with means below the median are sampled about 15% and
25% of the time, respectively, which is key in crowdsourcing applications, since having participants answer
fewer irrelevant (and potentially annoying) questions improves both efficiency and user experience.
[Figure 2 appears here: two panels plotting P(best arm in top 5) against the number of samples (10 thousands), averaged over 250 bootstrapped trials, for Kaufmann lil-UCB, KL-UCB, and SG lil-UCB; left panel Contest 558, right panel Contest 512.]
Figure 2: Probability of the best-arm in the top 5 empirically best arms vs. number of samples, based on 250
bootstrapped repetitions. Data from New Yorker contest 558 ($\mu_1 = 0.536$) on left, and contest 512 ($\mu_1 = 0.8$)
on right. Mean-profile above each plot. [KL] Blue; [SG1] Red; [SG2] Black.
To see how these methods fare on real data, we also run these algorithms on bootstrapped human
response data from the real New Yorker Caption Contest. The mean reward of the best arm in these
contests is usually between 0.5 and 0.85, hence we choose one contest from each end of this spectrum.
At the lower end of the spectrum, the three methods fare comparably. This is expected because the
sub-Gaussian bounds are relatively good for means about 0.5. However, in cases where the top mean
is significantly larger than 0.5 we see a marked improvement in the KL-based algorithm.
Extension to numerical experiments
Since a large number of algorithms have been proposed in the literature for best arm identification,
we include another algorithm in the numerical experiments for comparison.
Previously we compared lil-KLUCB to lil-UCB for two reasons. First, this comparison best illustrates the gains of using the novel anytime confidence bounds as opposed to those using
sub-Gaussian tails. Second, since lil-UCB is the state-of-the-art algorithm, any other algorithm will
likely perform worse.
The authors of [6] compare a number of different best arm identification methods, and conclude
that two of them seem to stand out: lil-UCB and Thompson sampling. Therefore, we now include
Thompson sampling [Th] in our numerical experiments for the New Yorker data.
We implemented the method as prescribed in [6]. As can be seen in Figure 3, Thompson sampling
seems to perform somewhat worse than the previous methods in these two instances.
[Figure 3 appears here: the same two panels as Figure 2 with Thompson sampling added; curves for Kaufmann lil-UCB, KL-UCB, SG lil-UCB, and Thompson, averaged over 250 bootstrapped trials.]
Figure 3: Probability of the best-arm in the top 5 empirically best arms vs. number of samples, based on 250
bootstrapped repetitions. Data from New Yorker contest 558 ($\mu_1 = 0.536$) on left, and contest 512 ($\mu_1 = 0.8$)
on right. Mean-profile above each plot. [KL] Blue; [SG1] Red; [SG2] Black; [Th] Purple.
References
[1] Jean-Yves Audibert and Sébastien Bubeck. Best arm identification in multi-armed bandits. In COLT-23th Conference on Learning Theory-2010, pages 13-p, 2010.
[2] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013.
[3] Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz, et al. Kullback-Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516-1541, 2013.
[4] Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, pages 359-376, 2011.
[5] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil-UCB: An optimal exploration algorithm for multi-armed bandits. In Conference on Learning Theory, pages 423-439, 2014.
[6] Kevin G Jamieson, Lalit Jain, Chris Fernandez, Nicholas J Glattard, and Rob Nowak. NEXT: A system for real-world development, evaluation, and application of active learning. In Advances in Neural Information Processing Systems, pages 2656-2664, 2015.
[7] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best arm identification in multi-armed bandit models. The Journal of Machine Learning Research, 2016.
[8] Emilie Kaufmann and Shivaram Kalyanakrishnan. Information complexity in bandit subset selection. In COLT, pages 228-251, 2013.
[9] Odalric-Ambrym Maillard, Rémi Munos, Gilles Stoltz, et al. A finite-time analysis of multi-armed bandits problems with Kullback-Leibler divergences. In COLT, pages 497-514, 2011.
[10] B. Fox Rubin. How New Yorker cartoons could teach computers to be funny. CNET News, 2016. https://www.cnet.com/news/how-new-yorker-cartoons-could-teach-computers-to-be-funny/.
[11] Max Simchowitz, Kevin Jamieson, and Benjamin Recht. The simulator: Understanding adaptive sampling in the moderate-confidence regime. arXiv preprint arXiv:1702.05186, 2017.
Collaborative Deep Learning in Fixed Topology Networks
Zhanhong Jiang¹, Aditya Balu¹, Chinmay Hegde², and Soumik Sarkar¹
¹Department of Mechanical Engineering, Iowa State University,
zhjiang, baditya, [email protected]
²Department of Electrical and Computer Engineering, Iowa State University, chinmay@iastate.edu
Abstract
There is significant recent interest to parallelize deep learning algorithms in order
to handle the enormous growth in data and model sizes. While most advances
focus on model parallelization and engaging multiple computing agents via using
a central parameter server, aspect of data parallelization along with decentralized
computation has not been explored sufficiently. In this context, this paper presents
a new consensus-based distributed SGD (CDSGD) (and its momentum variant,
CDMSGD) algorithm for collaborative deep learning over fixed topology networks
that enables data parallelization as well as decentralized computation. Such a framework can be extremely useful for learning agents with access to only local/private
data in a communication constrained environment. We analyze the convergence
properties of the proposed algorithm with strongly convex and nonconvex objective
functions with fixed and diminishing step sizes using concepts of Lyapunov function construction. We demonstrate the efficacy of our algorithms in comparison
with the baseline centralized SGD and the recently proposed federated averaging
algorithm (that also enables data parallelism) based on benchmark datasets such as
MNIST, CIFAR-10 and CIFAR-100.
1 Introduction
In this paper, we address the scalability of optimization algorithms for deep learning in a distributed
setting. Scaling up deep learning [1] is becoming increasingly crucial for large-scale applications
where the sizes of both the available data as well as the models are massive [2]. Among various
algorithmic advances, many recent attempts have been made to parallelize stochastic gradient descent
(SGD) based learning schemes across multiple computing agents. An early approach called Downpour
SGD [3], developed within Google's DistBelief software framework, primarily focuses on model
parallelization (i.e., splitting the model across the agents). A different approach known as elastic
averaging SGD (EASGD) [4] attempts to perform multiple SGDs in parallel; this method
uses a central parameter server that helps in assimilating parameter updates from the computing
agents. However, none of the above approaches concretely address the issue of data parallelization,
which is an important issue for several learning scenarios: for example, data parallelization enables
privacy-preserving learning in scenarios such as distributed learning with a network of mobile and
Internet-of-Things (IoT) devices. A recent scheme called Federated Averaging SGD [5] attempts
such a data parallelization in the context of deep learning with significant success; however, they still
use a central parameter server.
In contrast, deep learning with decentralized computation can be achieved via gossip SGD algorithms [6, 7], where agents communicate probabilistically without the aid of a parameter server.
However, decentralized computation in the sense of gossip SGD is not feasible in many real life
applications. For instance, consider a large (wide-area) sensor network [8, 9] or multi-agent robotic
Table 1: Comparisons between different optimization approaches
Method             | f         | grad f    | Step Size | Con. Rate | D.P. | D.C. | C.C.T.
-------------------+-----------+-----------+-----------+-----------+------+------+-------
SGD                | Str-con   | Lip.      | Con.      | O(g^k)    | No   | No   | No
Downpour SGD [3]   | Nonconvex | Lip.      | Con.&Ada. | N/A       | Yes  | No   | No
EASGD [4]          | Str-con   | Lip.      | Con.      | O(g^k)    | No   | No   | No
Gossip SGD [7]     | Str-con   | Lip.&Bou. | Con.      | O(g^k)    | No   | Yes  | No
                   | Str-con   | Lip.&Bou. | Dim.      | O(1/k^s)  |      |      |
FedAvg [5]         | Nonconvex | Lip.      | Con.      | N/A       | Yes  | No   | No
CDSGD [This paper] | Str-con   | Lip.&Bou. | Con.      | O(g^k)    | Yes  | Yes  | Yes
                   | Str-con   | Lip.&Bou. | Dim.      | O(1/k)    |      |      |
                   | Nonconvex | Lip.&Bou. | Con.      | N/A       |      |      |
                   | Nonconvex | Lip.&Bou. | Dim.      | N/A       |      |      |

Con. Rate: convergence rate; Str-con: strongly convex; Lip.&Bou.: Lipschitz continuous and
bounded; Con.: constant; Con.&Ada.: constant & adagrad; Dim.: diminishing. g in (0, 1) and
s in (0.5, 1] are positive constants. D.P.: data parallelism. D.C.: decentralized computation.
C.C.T.: constrained communication topology.
network that aims to learn a model of the environment in a collaborative manner [10, 11]. For such
cases, it may be infeasible for arbitrary pairs of agents to communicate on-demand; typically, agents
are only able to communicate with their respective neighbors in a communication network in a fixed
(or evolving) topology.
Contribution: This paper introduces a new class of approaches for deep learning that enables both
data parallelization and decentralized computation. Specifically, we propose consensus-based distributed SGD (CDSGD) and consensus-based distributed momentum SGD (CDMSGD) algorithms for
collaborative deep learning that, for the first time, satisfies all three requirements: data parallelization,
decentralized computation, and constrained communication over fixed topology networks. Moreover,
while most existing studies solely rely on empirical evidence from simulations, we present rigorous
convergence analysis for both (strongly) convex and non-convex objective functions, with both fixed
and diminishing step sizes using a Lyapunov function construction approach. Our analysis reveals
several advantages of our method: we match the best existing rates of convergence in the centralized
setting, while simultaneously supporting data parallelism as well as constrained communication
topologies; to our knowledge, this is the first approach that achieves all three desirable properties; see
Table 1 for a detailed comparison.
Finally, we validate our algorithms? performance on benchmark datasets, such as MNIST, CIFAR-10,
and CIFAR-100. Apart from centralized SGD as a baseline, we also compare performance with
that of Federated Averaging SGD as it also enables data parallelization. Empirical evidence (for a
given number of agents and other hyperparametric conditions) suggests that while our method is
slightly slower, we can achieve higher accuracy compared to the best available algorithm (Federated
Averaging (FedAvg)). Empirically, the proposed framework in this work is suitable for situations
without central parameter servers, but is also robust to a central parameter server failure situation.
Related work: Apart from the algorithms mentioned above, a few other related works exist, including
a distributed system called Adam for large deep neural network (DNN) models [12] and a distributed
methodology by Strom [13] for DNN training by controlling the rate of weight-update to reduce the
amount of communication. Natural Gradient Stochastic Gradient Descent (NG-SGD) based on model
averaging [14] and staleness-aware async-SGD [15] have also been developed for distributed deep
learning. A method called CentralVR [16] was proposed for reducing the variance and conducting
parallel execution with linear convergence rate. Moreover, a decentralized algorithm based on
gossip protocol called the multi-step dual accelerated (MSDA) [17] was developed for solving
deterministically smooth and strongly convex distributed optimization problems in networks with a
provable optimal linear convergence rate. A new class of decentralized primal-dual methods [18]
was also proposed recently in order to improve inter-node communication efficiency for distributed
convex optimization problems. To minimize a finite sum of nonconvex functions over a network, the
authors in [19] proposed a zeroth-order distributed algorithm (ZENITH) that was globally convergent
with a sublinear rate. From the perspective of distributed optimization, the proposed algorithms
have similarities with the approaches of [20, 21]. However, we distinguish our work due to the
collaborative learning aspect with data parallelization and extension to the stochastic setting and
nonconvex objective functions. In [20] the authors only considered convex objective functions in a
2
deterministic setting, while the authors in [21] presented results for non-convex optimization problems
in a deterministic setting. Our proof techniques are different from those in [20, 21] with the choice
of Lyapunov function, as well as the notion of stochastic Lyapunov gradient. More importantly, we
provide an extensive and thorough suite of numerical comparisons with both centralized methods and
distributed methods on benchmark datasets.
The rest of the paper is organized as follows. While section 2 formulates the distributed, unconstrained
stochastic optimization problem, section 3 presents the CDSGD algorithm and the Lyapunov stochastic gradient required for analysis presented in section 4. Validation experiments and performance
comparison results are described in section 5. The paper is summarized, concluded in section 6
along with future research directions. Detailed proofs of analytical results, extensions (e.g., effect of
diminishing step size) and additional experiments are included in the supplementary section 7.
2 Formulation
We consider the standard (unconstrained) empirical risk minimization problem typically used in
machine learning problems (such as deep learning):
$$\min_x\; \frac{1}{n}\sum_{i=1}^{n} f^i(x), \qquad (1)$$
where $x \in \mathbb{R}^d$ denotes the parameter of interest and $f : \mathbb{R}^d \to \mathbb{R}$ is a given loss function, and $f^i$
is the function value corresponding to a data point $i$. In this paper, we are interested in learning
problems where the computational agents exhibit data parallelism, i.e., they only have access to
their own respective training datasets. However, we assume that the agents can communicate over
a static undirected graph $G = (V, E)$, where $V$ is a vertex set (with nodes corresponding to agents)
and $E$ is an edge set. With $N$ agents, we have $V = \{1, 2, \ldots, N\}$ and $E \subseteq V \times V$. If $(j, l) \in E$,
then Agent $j$ can communicate with Agent $l$. The neighborhood of agent $j \in V$ is defined as:
$Nb(j) \triangleq \{l \in V : (j, l) \in E \text{ or } j = l\}$. Throughout this paper we assume that the graph $G$ is
connected. Let $D_j$, $j = 1, \ldots, N$ denote the subset of the training data (comprising $n_j$ samples)
corresponding to the $j$-th agent such that $\sum_{j=1}^N n_j = n$. With this setup, we have the following
simplification of Eq. 1:
$$\min_x\; \frac{1}{n}\sum_{j=1}^{N}\sum_{i\in D_j} f^i(x) = \frac{N}{n}\sum_{j=1}^{N}\sum_{i\in D_j} f_j^i(x), \qquad (2)$$
where $f_j(x) = \frac{1}{N} f(x)$ is the objective function specific to Agent $j$. This formulation enables us to
state the optimization problem in a distributed manner, where $f(x) = \sum_{j=1}^N f_j(x)$.¹ Furthermore,
the problem (1) can be reformulated as
$$\min_x\; \frac{N}{n}\mathbf{1}^T F(x) := \frac{N}{n}\sum_{j=1}^{N}\sum_{i\in D_j} f_j^i(x^j) \qquad (3a)$$
$$\text{s.t. } x^j = x^l \;\;\forall (j, l) \in E, \qquad (3b)$$
where $x := (x^1, x^2, \ldots, x^N)^T \in \mathbb{R}^{N\times d}$ and $F(x)$ can be written as
$$F(x) = \Big[\sum_{i\in D_1} f_1^i(x^1),\; \sum_{i\in D_2} f_2^i(x^2),\; \ldots,\; \sum_{i\in D_N} f_N^i(x^N)\Big]^T \qquad (4)$$
Note that with $d > 1$, the parameter set $x$ as well as the gradient $\nabla F(x)$ correspond to matrix
variables. However, for simplicity in presenting our analysis, we set $d = 1$ in this paper, which
corresponds to the case where $x$ and $\nabla F(x)$ are vectors.
¹ Note that in our formulation, we are assuming that every agent has the same local objective function, while
in general distributed optimization problems they can be different.
We now introduce several key definitions and assumptions that characterize the objective functions
and the agent interaction matrix.
Definition 1. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $H$-strongly convex, if for all $x, y \in \mathbb{R}^d$, we have $f(y) \ge
f(x) + \nabla f(x)^T(y-x) + \frac{H}{2}\|y-x\|^2$.
Definition 2. A function $f : \mathbb{R}^d \to \mathbb{R}$ is $\gamma$-smooth if for all $x, y \in \mathbb{R}^d$, we have $f(y) \le f(x) +
\nabla f(x)^T(y-x) + \frac{\gamma}{2}\|y-x\|^2$.
As a consequence of Definition 2, we can conclude that $\nabla f$ is Lipschitz continuous, i.e., $\|\nabla f(y) -
\nabla f(x)\| \le \gamma\|y-x\|$ [22].
Definition 3. A function $c$ is said to be coercive if it satisfies: $c(x) \to \infty$ when $\|x\| \to \infty$.
Assumption 1. The objective functions $f_j : \mathbb{R}^d \to \mathbb{R}$ are assumed to satisfy the following conditions:
a) Each $f_j$ is $\gamma_j$-smooth; b) each $f_j$ is proper (not everywhere infinite) and coercive; and c) each $f_j$
is $L_j$-Lipschitz continuous, i.e., $|f_j(y) - f_j(x)| < L_j\|y-x\| \;\forall x, y \in \mathbb{R}^d$.
As a consequence of Assumption 1, we can conclude that $\sum_{j=1}^N f_j(x^j)$ possesses a Lipschitz continuous gradient with parameter $\gamma_m := \max_j \gamma_j$. Similarly, each $f_j$ is strongly convex with $H_j$ such that
$\sum_{j=1}^N f_j(x^j)$ is strongly convex with $H_m = \min_j H_j$.
Regarding the communication network, we use $\Pi$ to denote the agent interaction matrix, where the
element $\pi_{jl}$ signifies the link weight between agents $j$ and $l$.
Assumption 2. a) If $(j, l) \notin E$, then $\pi_{jl} = 0$; b) $\Pi^T = \Pi$; c) $\mathrm{null}\{I - \Pi\} = \mathrm{span}\{\mathbf{1}\}$; and d)
$I \succeq \Pi \succeq 0$.
The main outcome of Assumption 2 is that the probability transition matrix is doubly stochastic
and that we have $\lambda_1(\Pi) = 1 > \lambda_2(\Pi) \ge \cdots \ge \lambda_N(\Pi) \ge 0$, where $\lambda_z(\Pi)$ denotes the $z$-th largest
eigenvalue of $\Pi$.
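For intuition on Assumption 2, the sketch below (ours, not from the paper) builds a symmetric doubly stochastic interaction matrix for a ring of N agents; the self-weight of 1/2 is one choice that keeps the matrix positive semi-definite, so $I \succeq \Pi \succeq 0$ holds and the eigenvalues can be inspected directly:

```python
import numpy as np

def ring_interaction_matrix(N, self_weight=0.5):
    """Symmetric doubly stochastic mixing matrix for a ring topology."""
    pi = np.zeros((N, N))
    off = (1.0 - self_weight) / 2.0
    for j in range(N):
        pi[j, j] = self_weight
        pi[j, (j - 1) % N] = off                 # left neighbor
        pi[j, (j + 1) % N] = off                 # right neighbor
    return pi

pi = ring_interaction_matrix(5)
assert np.allclose(pi.sum(axis=0), 1.0) and np.allclose(pi.sum(axis=1), 1.0)
eigs = np.sort(np.linalg.eigvalsh(pi))[::-1]     # lambda_1 >= ... >= lambda_N
print("lambda_1 =", eigs[0], " lambda_2 =", eigs[1], " lambda_N =", eigs[-1])
```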
3 Proposed Algorithm
3.1 Consensus Distributed SGD
For solving stochastic optimization problems, SGD and its variants have been commonly used in
centralized and distributed problem formulations. Therefore, the following algorithm is proposed
based on SGD and the concept of consensus to solve the problem laid out in Eq. 2:
$$x_{k+1}^j = \sum_{l\in Nb(j)} \pi_{jl}\, x_k^l - \alpha\, g_j(x_k^j) \qquad (5)$$
where $Nb(j)$ indicates the neighborhood of agent $j$, $\alpha$ is the step size, and $g_j(x_k^j)$ is the stochastic gradient
of $f_j$ at $x_k^j$, which corresponds to a minibatch of sampled data points at the $k$-th epoch. More
formally, $g_j(x_k^j) = \frac{1}{b'}\sum_{q'\in D'} \nabla f_j^{q'}(x_k^j)$, where $b'$ is the size of the minibatch $D'$ randomly selected
from the data subset $D_j$. While the pseudo-code of CDSGD is shown below in Algorithm 1,
momentum versions of CDSGD based on Polyak momentum [23] and Nesterov momentum [24] are
also presented in the supplementary section 7. In experiments, Nesterov momentum is used as it has
been shown in the traditional SGD implementations that the Nesterov variant outperforms the Polyak
momentum. Note that mini-batch implementations of these algorithms are straightforward, hence,
are not discussed here in detail, and that the convergence analysis of momentum variants is out of
scope in this paper and will be presented in our future work.
Algorithm 1: CDSGD
Input: $m$, $\alpha$, $N$
Initialize: $x_0^j$, $(j = 1, 2, \ldots, N)$
Distribute the training dataset to $N$ agents.
for each agent do
    for $k = 0 : m$ do
        Randomly shuffle the corresponding data subset $D_j$ (without replacement)
        $w_{k+1}^j = \sum_{l\in Nb(j)} \pi_{jl}\, x_k^l$
        $x_{k+1}^j = w_{k+1}^j - \alpha\, g_j(x_k^j)$
    end
end
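A minimal NumPy sketch of the CDSGD update follows (ours; simple quadratic losses stand in for the deep network, and the fully connected uniform mixing matrix is just one admissible choice):

```python
import numpy as np

def cdsgd(grads, pi, x0, alpha=0.01, epochs=200):
    """grads[j](x) returns a stochastic gradient of f_j at agent j's iterate x."""
    x = np.array(x0, dtype=float)                  # shape (N, d): one row per agent
    for _ in range(epochs):
        w = pi @ x                                 # consensus step: mix neighbor iterates
        g = np.stack([grads[j](x[j]) for j in range(len(grads))])
        x = w - alpha * g                          # gradient step, Eq. (5)
    return x

# toy example: agent j holds data centered at mu_j and minimizes E||x - mu_j||^2
rng = np.random.default_rng(0)
mus = [rng.normal(size=3) for _ in range(5)]
grads = [lambda x, m=m: 2.0 * (x - m) + 0.1 * rng.normal(size=3) for m in mus]
pi = np.full((5, 5), 0.2)                          # fully connected, uniform weights
x = cdsgd(grads, pi, x0=np.zeros((5, 3)))
print("per-agent estimates approach the global mean", np.mean(mus, axis=0))
```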
3.2 Tools for convergence analysis
We now analyze the convergence properties of the iterates $\{x_k^j\}$ generated by Algorithm 1. The
following section summarizes some key intermediate concepts required to establish our main results.
First, we construct an appropriate Lyapunov function that will enable us to establish convergence.
Observe that the update law in Alg. 1 can be expressed as:
$$x_{k+1} = \Pi x_k - \alpha g(x_k), \qquad (6)$$
where $g(x_k) = [g_1(x_k^1)\; g_2(x_k^2)\;\ldots\; g_N(x_k^N)]^T$.
Denoting $w_k = \Pi x_k$, the update law can be re-written as $x_{k+1} = w_k - \alpha g(x_k)$. Moreover,
$x_{k+1} = x_k - x_k + w_k - \alpha g(x_k)$. Rearranging the last equality yields the following relation:
$$x_{k+1} = x_k - \alpha\big(g(x_k) + \alpha^{-1}(x_k - w_k)\big) = x_k - \alpha\big(g(x_k) + \alpha^{-1}(I-\Pi)x_k\big) \qquad (7)$$
where the last term in Eq. 7 is the Stochastic Lyapunov Gradient. From Eq. 7, we observe that
the "effective" gradient step is given by $g(x_k) + \alpha^{-1}(I-\Pi)x_k$. Rewriting $\nabla J^i(x_k) = g(x_k) +
\alpha^{-1}(I-\Pi)x_k$, the updates of CDSGD can be expressed as:
$$x_{k+1} = x_k - \alpha \nabla J^i(x_k). \qquad (8)$$
The above expression naturally motivates the following Lyapunov function candidate:
$$V(x, \alpha) := \frac{N}{n}\mathbf{1}^T F(x) + \frac{1}{2\alpha}\|x\|^2_{I-\Pi} \qquad (9)$$
where $\|\cdot\|_{I-\Pi}$ denotes the norm with respect to the PSD matrix $I - \Pi$. Since $\sum_{j=1}^N f_j(x^j)$ has a
$\gamma_m$-Lipschitz continuous gradient, $\nabla V(x)$ is also Lipschitz continuous with parameter:
$$\hat\gamma := \gamma_m + \alpha^{-1}\lambda_{\max}(I-\Pi) = \gamma_m + \alpha^{-1}(1 - \lambda_N(\Pi)).$$
Similarly, as $\sum_{j=1}^N f_j(x^j)$ is $H_m$-strongly convex, then $V(x)$ is strongly convex with parameter:
$$\hat H := H_m + (2\alpha)^{-1}\lambda_{\min}(I-\Pi) = H_m + (2\alpha)^{-1}(1 - \lambda_2(\Pi)).$$
Based on Definition 1, $V$ has a unique minimizer, denoted by $x^*$ with $V^* = V(x^*)$. Correspondingly,
using strong convexity of $V$, we can obtain the relation:
$$2\hat H(V(x) - V^*) \le \|\nabla V(x)\|^2 \text{ for all } x \in \mathbb{R}^N. \qquad (10)$$
From strong convexity and the Lipschitz continuous property of $\nabla f_j$, the constants $H_m$ and $\gamma_m$
further satisfy $H_m \le \gamma_m$ and hence, $\hat H \le \hat\gamma$.
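To see concretely how the topology enters these constants, a small sketch (ours) computes $\hat\gamma$ and $\hat H$ from the spectrum of a given mixing matrix, directly from the two expressions above:

```python
import numpy as np

def lyapunov_constants(pi, gamma_m, H_m, alpha):
    """hat(gamma) and hat(H) from the spectrum of I - Pi."""
    eigs = np.sort(np.linalg.eigvalsh(pi))[::-1]    # lambda_1 >= ... >= lambda_N
    gamma_hat = gamma_m + (1.0 - eigs[-1]) / alpha  # gamma_m + (1 - lambda_N)/alpha
    H_hat = H_m + (1.0 - eigs[1]) / (2.0 * alpha)   # H_m + (1 - lambda_2)/(2 alpha)
    return gamma_hat, H_hat
```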
Next, we introduce two key lemmas that will help establish our main theoretical guarantees. Due to
space limitations, all proofs are deferred to the supplementary material in Section 7.
Lemma 1. Under Assumptions 1 and 2, the iterates of CDSGD satisfy $\forall k \in \mathbb{N}$:
$$\mathbb{E}[V(x_{k+1})] - V(x_k) \le -\alpha\,\nabla V(x_k)^T\,\mathbb{E}[\nabla J^i(x_k)] + \frac{\hat\gamma\alpha^2}{2}\,\mathbb{E}[\|\nabla J^i(x_k)\|^2] \qquad (11)$$
At a high level, since $\mathbb{E}[\nabla J^i(x_k)]$ is the unbiased estimate of $\nabla V(x_k)$, using the updates $\nabla J^i(x_k)$
will lead to sufficient decrease in the Lyapunov function. However, unbiasedness is not enough, and
we also need to control higher order moments of $\nabla J^i(x_k)$ to ensure convergence. Specifically, we
consider the variance of $\nabla J^i(x_k)$:
$$\mathrm{Var}[\nabla J^i(x_k)] := \mathbb{E}[\|\nabla J^i(x_k)\|^2] - \|\mathbb{E}[\nabla J^i(x_k)]\|^2 \qquad (12)$$
To bound the variance of $\nabla J^i(x_k)$, we use a standard assumption presented in [25] in the context of
(centralized) deep learning. Such an assumption aims at providing an upper bound for the "gradient
noise" caused by the randomness in the minibatch selection at each iteration.
Assumption 3. a) There exist scalars $\zeta_2 \ge \zeta_1 > 0$ such that $\nabla V(x_k)^T\mathbb{E}[\nabla J^i(x_k)] \ge
\zeta_1\|\nabla V(x_k)\|^2$ and $\|\mathbb{E}[\nabla J^i(x_k)]\| \le \zeta_2\|\nabla V(x_k)\|$ for all $k \in \mathbb{N}$; b) There exist scalars $Q \ge 0$ and
$Q_V \ge 0$ such that $\mathrm{Var}[\nabla J^i(x_k)] \le Q + Q_V\|\nabla V(x_k)\|^2$ for all $k \in \mathbb{N}$.
Remark 1. While Assumption 3(a) guarantees the sufficient descent of $V$ in the direction of
$-\nabla J^i(x_k)$, Assumption 3(b) states that the variance of $\nabla J^i(x_k)$ is bounded above by the second moment of $\nabla V(x_k)$. The constant $Q$ can be considered to represent the second moment of the
"gradient noise" in $\nabla J^i(x_k)$. Therefore, the second moment of $\nabla J^i(x_k)$ can be bounded above as
$\mathbb{E}[\|\nabla J^i(x_k)\|^2] \le Q + Q_m\|\nabla V(x_k)\|^2$, where $Q_m := Q_V + \zeta_2^2 \ge \zeta_1^2 > 0$.
Lemma 2. Under Assumptions 1, 2, and 3, the iterates of CDSGD satisfy $\forall k \in \mathbb{N}$:
$$\mathbb{E}[V(x_{k+1})] - V(x_k) \le -\Big(\zeta_1 - \frac{\hat\gamma}{2}\alpha Q_m\Big)\alpha\|\nabla V(x_k)\|^2 + \frac{\hat\gamma}{2}\alpha^2 Q. \qquad (13)$$
In Lemma 2, the first term is strictly negative if the step size satisfies the following necessary
condition:
$$0 < \alpha \le \frac{2\zeta_1}{\hat\gamma Q_m} \qquad (14)$$
However, in later analysis, when such a condition is substituted into the convergence analysis, it may
produce a larger upper bound. For obtaining a tight upper bound, we impose a sufficient condition
for the rest of the analysis as follows:
$$0 < \alpha \le \frac{\zeta_1}{\hat\gamma Q_m} \qquad (15)$$
As $\hat\gamma$ is a function of $\alpha$, the above inequality can be rewritten as $0 < \alpha \le \frac{\zeta_1 - (1-\lambda_N(\Pi))Q_m}{\gamma_m Q_m}$.
4 Main Results
We now present our main theoretical results establishing the convergence of CDSGD. First, we show
that for most generic loss functions (whether convex or not), CDSGD achieves consensus across
different agents in the graph, provided the step size (which is fixed across iterations) does not exceed
a natural upper bound.
Proposition 1. (Consensus with fixed step size) Under Assumptions 1 and 2, the iterates of CDSGD
(Algorithm 1) satisfy $\forall k \in \mathbb{N}$:
$$\mathbb{E}[\|x_k^j - s_k\|] \le \frac{\alpha L}{1 - \lambda_2(\Pi)} \qquad (16)$$
where $\alpha$ satisfies $0 < \alpha \le \frac{\zeta_1 - (1-\lambda_N(\Pi))Q_m}{\gamma_m Q_m}$, $L$ is an upper bound of $\mathbb{E}[\|g(x_k)\|]$, $\forall k \in \mathbb{N}$
(defined properly and discussed in Lemma 4 in the supplementary section 7), and $s_k = \frac{1}{N}\sum_{j=1}^N x_k^j$
represents the average parameter estimate.
The proof of this proposition can be adapted from [26, Lemma 1].
Next, we show that for strongly convex loss functions, CDSGD converges linearly to a neighborhood
of the global optimum.
Theorem 1. (Convergence of CDSGD with fixed step size, strongly convex case) Under Assumptions 1, 2 and 3, the iterates of CDSGD satisfy the following inequality $\forall k \in \mathbb{N}$:
$$\mathbb{E}[V(x_k) - V^*] \le (1 - \alpha\hat H\zeta_1)^{k-1}(V(x_1) - V^*) + \frac{\alpha^2\hat\gamma Q}{2}\sum_{l=0}^{k-1}(1 - \alpha\hat H\zeta_1)^l$$
$$= \big(1 - (\alpha H_m + 1 - \lambda_2(\Pi))\zeta_1\big)^{k-1}(V(x_1) - V^*) + \frac{(\alpha^2\gamma_m + \alpha(1-\lambda_N(\Pi)))Q}{2}\sum_{l=0}^{k-1}\big(1 - (\alpha H_m + 1 - \lambda_2(\Pi))\zeta_1\big)^l \qquad (17)$$
when the step size satisfies $0 < \alpha \le \frac{\zeta_1 - (1-\lambda_N(\Pi))Q_m}{\gamma_m Q_m}$.
A detailed proof is presented in the supplementary section 7. We observe from Theorem 1 that
the sequence of Lyapunov function values $\{V(x_k)\}$ converges linearly to a neighborhood of the
optimal value, i.e., $\lim_{k\to\infty}\mathbb{E}[V(x_k) - V^*] \le \frac{\alpha\hat\gamma Q}{2\hat H\zeta_1} = \frac{(\alpha\gamma_m + 1 - \lambda_N(\Pi))Q}{2(H_m + \alpha^{-1}(1-\lambda_2(\Pi)))\zeta_1}$. We also observe that
the term on the right hand side decreases with the spectral gap of the agent interaction matrix $\Pi$,
i.e., $1 - \lambda_2(\Pi)$, which suggests an interesting relation between convergence and topology of the
graph. Moreover, we observe that the upper bound is proportional to the step size parameter $\alpha$, and
smaller step sizes lead to smaller radii of convergence. (However, choosing a very small step-size
may negatively affect the convergence rate of the algorithm). Finally, if the gradient in this context is
not stochastic (i.e., the parameter $Q = 0$), then linear convergence to the optimal value is achieved,
which matches known rates of convergence with (centralized) gradient descent under strong convexity
and smoothness assumptions.
Remark 2. Since $\mathbb{E}[\frac{N}{n}\mathbf{1}^T F(x_k)] \le \mathbb{E}[V(x_k)]$ and $\frac{N}{n}\mathbf{1}^T F(x^*) = V^*$, the sequence of objective
function values are themselves upper bounded as follows: $\mathbb{E}[\frac{N}{n}\mathbf{1}^T F(x_k) - \frac{N}{n}\mathbf{1}^T F(x^*)] \le \mathbb{E}[V(x_k) -
V^*]$. Therefore, using Theorem 1 we can establish analogous convergence rates in terms of the true
objective function values $\{\frac{N}{n}\mathbf{1}^T F(x_k)\}$ as well.
The above convergence result for CDSGD is limited to the case when the objective functions are
strongly convex. However, most practical deep learning systems (such as convolutional neural
network learning) involve optimizing over highly non-convex objective functions, which are much
harder to analyze. Nevertheless, we show that even under such situations, CDSGD exhibits a (weaker)
notion of convergence.
Theorem 2. (Convergence of CDSGD with fixed step size, nonconvex case) Under Assumptions 1, 2,
and 3, the iterates of CDSGD satisfy $\forall m \in \mathbb{N}$:
$$\mathbb{E}\Big[\sum_{k=1}^{m}\|\nabla V(x_k)\|^2\Big] \le \frac{\hat\gamma m\alpha Q}{\zeta_1} + \frac{2(V(x_1) - V_{\inf})}{\zeta_1\alpha} = \frac{(\gamma_m\alpha + 1 - \lambda_N(\Pi))mQ}{\zeta_1} + \frac{2(V(x_1) - V_{\inf})}{\zeta_1\alpha}, \qquad (18)$$
when the step size satisfies $0 < \alpha \le \frac{\zeta_1 - (1-\lambda_N(\Pi))Q_m}{\gamma_m Q_m}$.
Remark 3. Theorem 2 states that in the absence of "gradient noise" (i.e., when $Q = 0$),
the quantity $\mathbb{E}[\sum_{k=1}^m\|\nabla V(x_k)\|^2]$ remains finite. Therefore, necessarily $\{\|\nabla V(x_k)\|\} \to 0$ and
the estimates approach a stationary point. On the other hand, if the gradient calculations are
stochastic, then a similar claim cannot be made. However, for this case we have the upper bound
$\lim_{m\to\infty}\mathbb{E}[\frac{1}{m}\sum_{k=1}^m\|\nabla V(x_k)\|^2] \le \frac{(\gamma_m\alpha + 1 - \lambda_N(\Pi))Q}{\zeta_1}$. This tells us that while we cannot guarantee
convergence in terms of the sequence of objective function values, we can still assert that the average
of the second moment of gradients is strictly bounded from above even for the case of nonconvex
objective functions.
Moreover, the upper bound cannot be solely controlled via the step-size parameter $\alpha$ (which is
different from what is implied in the strongly convex case by Theorem 1). In general, the upper bound
becomes tighter as $\lambda_N(\Pi)$ increases; however, an increase in $\lambda_N(\Pi)$ may result in a commensurate
increase in $\lambda_2(\Pi)$, leading to worse connectivity in the graph and adversely affecting consensus
among agents. Again, our upper bounds are reflective of interesting tradeoffs between consensus and
convergence in the gradients, and their dependence on graph topology.
[Figure 1 appears here: accuracy vs. number of epochs for the CIFAR-10 experiment; panel (a) SGD vs. CDSGD, panel (b) SGD, CDSGD, CDMSGD, and Federated Averaging.]
Figure 1: Average training (solid lines) and validation (dashed lines) accuracy for (a) comparison of
CDSGD with centralized SGD and (b) CDMSGD with the Federated Averaging method
The above results are for fixed step size $\alpha$, and we can prove complementary results for CDSGD even
for the (more prevalent) case of diminishing step size $\alpha_k$. These are presented in the supplementary
material due to space constraints.
5 Experimental Results
This section presents the experimental results using the benchmark image recognition dataset, CIFAR-10. We use a deep convolutional neural network (CNN) model (with 2 convolutional layers with 32
filters each followed by a max pooling layer, then 2 more convolutional layers with 64 filters each
followed by another max pooling layer and a dense layer with 512 units, ReLU activation is used in
convolutional layers) to validate the proposed algorithm. We use a fully connected topology with 5
agents and a uniform agent interaction matrix unless mentioned otherwise. A mini-batch size of 128
and a fixed step size of 0.01 are used in these experiments. The experiments are performed using Keras
and TensorFlow [27, 28] and the codes will be made publicly available soon. While we included the
training and validation accuracy plots for the different case studies here, the corresponding training
loss plots, results with other becnmark datasets such as MNIST and CIFAR-100 and decaying as well
as different fixed step sizes are presented in the supplementary section 7.
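For concreteness, the per-agent model can be written down in a few lines of Keras. The sketch below follows the architecture described above; the 3x3 kernel size, the padding, and the 2x2 pooling are illustrative assumptions, since those details are not stated in the text.

```python
# Minimal Keras sketch of the per-agent CNN (kernel/pool sizes are assumptions).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import SGD

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(10, activation='softmax'),   # 10 CIFAR-10 classes
])
model.compile(optimizer=SGD(lr=0.01),  # fixed step size 0.01; mini-batches of 128
              loss='categorical_crossentropy', metrics=['accuracy'])
```

Each of the 5 agents would hold a copy of this model trained on its local data shard, coupled through the consensus step sketched in the next subsection.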
5.1 Performance comparison with benchmark methods
We begin by comparing the accuracy of CDSGD with that of the centralized SGD algorithm, as shown in Fig. 1(a). While the convergence rate of CDSGD is, as expected, significantly slower than that of SGD, we observe that CDSGD can eventually achieve high accuracy, comparable with centralized SGD. Another interesting observation is that the generalization gap (the difference between training and validation accuracy, as defined in [29]) for the proposed CDSGD algorithm is significantly smaller than that of SGD, which is a useful property. We also compare both CDSGD and CDMSGD with the Federated averaging SGD (FedAvg) algorithm, which also performs data parallelization (see Fig. 1(b)). For the sake of comparison, we use the same number of agents and choose E = 1 and C = 1 as the hyperparameters in the FedAvg algorithm, as this is close to the fully connected topology scenario considered in the CDSGD and CDMSGD experiments. As CDSGD is significantly slower, we mainly compare CDMSGD with FedAvg, which have similar convergence rates (CDMSGD being slightly slower). The main observation is that CDMSGD performs better than FedAvg at steady state and can achieve centralized-SGD-level performance. It is important to note that FedAvg does not perform decentralized computation. Essentially, it runs a brute-force parameter averaging on a central parameter server at every epoch (i.e., consensus at every epoch) and then broadcasts the updated parameters to the agents. Hence, it tends to be slightly faster than CDMSGD, which uses a truly decentralized computation over a network.
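The contrast between the two update schemes can be made concrete with a small NumPy sketch (variable names are illustrative; pi denotes the doubly stochastic agent interaction matrix):

```python
import numpy as np

def cdsgd_step(X, pi, grads, alpha):
    """One CDSGD update. Each agent first averages its neighbours' parameters
    through pi, then takes a local stochastic gradient step.

    X:     (N, d) array; row j holds agent j's flattened parameters
    pi:    (N, N) doubly stochastic interaction matrix (network topology)
    grads: (N, d) stochastic gradients of the local losses at the rows of X
    """
    return pi @ X - alpha * grads

def fedavg_round(X):
    """FedAvg with C = 1 and E = 1: a central server averages all agents'
    parameters once per epoch and broadcasts the result back (consensus at
    every epoch, but no decentralized computation)."""
    return np.tile(X.mean(axis=0), (X.shape[0], 1))
```

CDMSGD keeps the same consensus term and adds a Nesterov-style momentum update on top of it.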
5.2 Effect of network size and topology
In this section, we investigate the effects of network size and topology on the performance of the proposed algorithms. Figure 2(a) shows the change in training performance as the number of agents grows from 2 to 8 and then to 16. Although the convergence rate slows down as the number of agents increases, all networks are able to achieve similar accuracy levels. Finally, we investigate the impact of network sparsity (as quantified by the second largest eigenvalue) on the learning performance. The primary observation is that convergence of the average accuracy value happens faster for sparser networks (i.e., those with a higher second largest eigenvalue).

[Figure 2: Average training (solid lines) and validation (dashed lines) accuracy, along with the accuracy variance over agents, for the CDMSGD algorithm with (a) varying network size (2, 8, and 16 agents) and (b) varying network topology (fully connected with $\lambda_2(\Pi) = 0$; sparse topologies with $\lambda_2(\Pi) = 0.54$ and $\lambda_2(\Pi) = 0.86$).]

This is similar to the trend observed for the FedAvg algorithm
while reducing the client fraction (C), which makes the (stochastic) agent interaction matrix sparser. However, from the plot of the variance of the accuracy values over agents (smoothed with a moving-average filter), it can be observed that the level of consensus is more stable for denser networks than for sparser ones. This is also expected, as discussed in Proposition 1. Note that with the availability of a central parameter server (as in federated averaging), a sparser topology may be useful for faster convergence; however, consensus (and hence topology density) is critical for a collaborative learning paradigm with decentralized computation.
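The second largest eigenvalue used here to quantify sparsity is easy to compute from the interaction matrix. In the sketch below, the uniform fully connected matrix gives $\lambda_2(\Pi) = 0$; as an illustrative assumption about how a sparse topology might be built, a 5-agent ring in which each agent averages with its two neighbours yields $\lambda_2(\Pi) \approx 0.54$, matching one of the topologies in Figure 2(b).

```python
import numpy as np

def second_largest_eigenvalue(pi):
    # For a symmetric doubly stochastic matrix the eigenvalues are real and
    # the largest is 1; lambda_2 measures how far the topology is from
    # fully connected (0 = fully connected, close to 1 = very sparse).
    return np.sort(np.linalg.eigvalsh(pi))[-2]

N = 5
fully_connected = np.full((N, N), 1.0 / N)
ring = sum(np.roll(np.eye(N), s, axis=1) for s in (-1, 0, 1)) / 3.0

print(second_largest_eigenvalue(fully_connected))  # 0.0
print(second_largest_eigenvalue(ring))             # ~0.539
```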
6 Conclusion and Future Work
This paper addresses the collaborative deep learning (and many other machine learning) problem in a completely distributed manner (i.e., with data parallelism and decentralized computation) over networks with fixed topology. We establish a consensus-based distributed SGD framework and propose associated learning algorithms that can prove to be extremely useful in practice. Using a Lyapunov function construction approach, we show that the proposed CDSGD algorithm can achieve a linear convergence rate with a sufficiently small fixed step size, and a sublinear convergence rate with a diminishing step size (see supplementary section 7 for details), for strongly convex and Lipschitz-differentiable objective functions. Moreover, decaying gradients can be observed for nonconvex objective functions using CDSGD. Experimental results on benchmark datasets show that CDSGD can achieve centralized-SGD-level accuracy with sufficient training epochs while maintaining a significantly lower generalization error. The momentum variant of the proposed algorithm, CDMSGD, can outperform the recently proposed FedAvg algorithm, which also uses data parallelism but does not perform decentralized computation, i.e., it uses a central parameter server. The effects of network size and topology are also explored experimentally, and the findings conform to the analytical understanding. While current and future research focuses on extensive testing and validation of the proposed framework, especially for large networks, a few technical research directions include: (i) collaborative learning with extreme non-IID data; (ii) collaborative learning over directed, time-varying graphs; and (iii) understanding the dependencies between learning rate and consensus.
Acknowledgments
This paper is based upon research partially supported by the USDA-NIFA under Award No. 2017-67021-25965, and the National Science Foundation under Grant No. CNS-1464279 and No. CCF-1566281. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
References
[1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436?444,
2015.
[2] Suyog Gupta, Wei Zhang, and Josh Milthorpe. Model accuracy and runtime tradeoff in
distributed deep learning. arXiv preprint arXiv:1509.04210, 2015.
[3] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew
Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In
Advances in neural information processing systems, pages 1223?1231, 2012.
[4] Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging
sgd. In Advances in Neural Information Processing Systems, pages 685?693, 2015.
[5] H Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al. Communicationefficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629,
2016.
[6] Michael Blot, David Picard, Matthieu Cord, and Nicolas Thome. Gossip training for deep
learning. arXiv preprint arXiv:1611.09726, 2016.
[7] Peter H Jin, Qiaochu Yuan, Forrest Iandola, and Kurt Keutzer. How to scale distributed deep
learning? arXiv preprint arXiv:1611.04581, 2016.
[8] Kushal Mukherjee, Asok Ray, Thomas Wettergren, Shalabh Gupta, and Shashi Phoha. Real-time
adaptation of decision thresholds in sensor networks for detection of moving targets. Automatica,
47(1):185 ? 191, 2011.
[9] Chao Liu, Yongqiang Gong, Simon Laflamme, Brent Phares, and Soumik Sarkar. Bridge damage
detection using spatiotemporal patterns extracted from dense sensor network. Measurement
Science and Technology, 28(1):014011, 2017.
[10] H.-L. Choi and J. P. How. Continuous trajectory planning of mobile sensors for informative
forecasting. Automatica, 46(8):1266?1275, 2010.
[11] D. K. Jha, P. Chattopadhyay, S. Sarkar, and A. Ray. Path planning in gps-denied environments
with collective intelligence of distributed sensor networks. International Journal of Control, 89,
2016.
[12] Trishul M Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project
adam: Building an efficient and scalable deep learning training system. In OSDI, volume 14,
pages 571?582, 2014.
[13] Nikko Strom. Scalable distributed dnn training using commodity gpu cloud computing. In
INTERSPEECH, volume 7, page 10, 2015.
[14] Hang Su and Haoyu Chen. Experiments on parallel training of deep neural network using model
averaging. arXiv preprint arXiv:1507.01239, 2015.
[15] Wei Zhang, Suyog Gupta, Xiangru Lian, and Ji Liu. Staleness-aware async-sgd for distributed
deep learning. arXiv preprint arXiv:1511.05950, 2015.
[16] Soham De and Tom Goldstein. Efficient distributed sgd with variance reduction. In Data Mining
(ICDM), 2016 IEEE 16th International Conference on, pages 111?120. IEEE, 2016.
[17] Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié. Optimal
algorithms for smooth and strongly convex distributed optimization in networks. arXiv preprint
arXiv:1702.08704, 2017.
[18] Guanghui Lan, Soomin Lee, and Yi Zhou. Communication-efficient algorithms for decentralized
and stochastic optimization. arXiv preprint arXiv:1701.03961, 2017.
[19] Davood Hajinezhad, Mingyi Hong, and Alfredo Garcia. Zenith: A zeroth-order distributed
algorithm for multi-agent nonconvex optimization.
[20] Angelia Nedic and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48?61, 2009.
[21] Jinshan Zeng and Wotao Yin. On nonconvex decentralized gradient descent. arXiv preprint
arXiv:1608.05766, 2016.
[22] Angelia Nedić and Alex Olshevsky. Stochastic gradient-push for strongly convex functions on
time-varying directed graphs. IEEE Transactions on Automatic Control 61.12, pages 3936?3947,
2016.
[23] Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR
Computational Mathematics and Mathematical Physics, 4(5):1?17, 1964.
[24] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87.
Springer Science & Business Media, 2013.
[25] Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine
learning. arXiv preprint arXiv:1606.04838, 2016.
[26] Kun Yuan, Qing Ling, and Wotao Yin. On the convergence of decentralized gradient descent.
arXiv preprint arXiv:1310.7063, 2013.
[27] François Chollet. Keras. https://github.com/fchollet/keras, 2015.
[28] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale
machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467,
2016.
[29] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding
deep learning requires rethinking generalization. CoRR, abs/1611.03530, 2016.
[30] Angelia Nedić and Alex Olshevsky. Distributed optimization over time-varying directed graphs.
IEEE Transactions on Automatic Control, 60(3):601?615, 2015.
[31] S. Ram, A. Nedic, and V. Veeravalli. A new class of distributed optimization algorithms:
application to regression of distributed data. Optimization Methods and Software, 27(1):71? 88,
2012.
Fast-Slow Recurrent Neural Networks
Asier Mujika
Department of Computer Science
ETH Zürich, Switzerland
[email protected]
Florian Meier
Department of Computer Science
ETH Zürich, Switzerland
[email protected]
Angelika Steger
Department of Computer Science
ETH Zürich, Switzerland
[email protected]
Abstract
Processing sequential data of variable length is a major challenge in a wide range
of applications, such as speech recognition, language modeling, generative image
modeling and machine translation. Here, we address this challenge by proposing
a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FSRNN). The FS-RNN incorporates the strengths of both multiscale RNNs and
deep transition RNNs as it processes sequential data on different timescales and
learns complex transition functions from one time step to the next. We evaluate
the FS-RNN on two character level language modeling data sets, Penn Treebank
and Hutter Prize Wikipedia, where we improve state of the art results to 1.19
and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two
FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best
known compression algorithm with respect to the BPC measure. We also present
an empirical investigation of the learning and network dynamics of the FS-RNN,
which explains the improved performance compared to other RNN architectures.
Our approach is general as any kind of RNN cell is a possible building block for
the FS-RNN architecture, and thus can be flexibly applied to different tasks.
1 Introduction
Processing, modeling and predicting sequential data of variable length is a major challenge in the
field of machine learning. In recent years, recurrent neural networks (RNNs) [34, 32, 39, 41] have
been the most popular tool to approach this challenge. RNNs have been successfully applied to
improve state of the art results in complex tasks like language modeling and speech recognition. A
popular variation of RNNs are long short-term memories (LSTMs) [18], which have been proposed
to address the vanishing gradient problem [16, 5, 17]. LSTMs maintain constant error flow and thus
are more suitable to learn long-term dependencies compared to standard RNNs.
Our work contributes to the ongoing debate on how to interconnect several RNN cells with the goals
of promoting the learning of long-term dependencies, favoring efficient hierarchical representations of
information, exploiting the computational advantages of deep over shallow networks and increasing
computational efficiency of training and testing. In deep RNN architectures, RNNs or LSTMs
are stacked layer-wise on top of each other [9, 20, 11]. The additional layers enable the network
to learn complex input-to-output relations and encourage an efficient hierarchical representation
of information. In these architectures, the hidden states of all the hierarchical layers are updated
once per time step (by one time step we refer to the time between two consecutive input elements).
In multiscale RNN architectures [35, 9, 25, 6], the operation on different timescales is enforced
by updating the higher layers less frequently, which further encourages an efficient hierarchical
representation of information. Updating higher layers in fewer time steps leads to computationally
efficient implementations and gives rise to short gradient paths that favor the learning of long-term
dependencies. In deep transition RNN architectures, intermediate sequentially connected layers are
interposed between two consecutive hidden states in order to increase the depth of the transition
function from one time step to the next, as for example in deep transition networks [31] or Recurrent
Highway Networks (RHN) [43]. The intermediate layers enable the network to learn complex nonlinear transition functions. Thus, the model exploits the fact that deep models can represent some
functions exponentially more efficiently than shallow models [4]. We interpret these networks as
several RNN cells that update a single hidden state sequentially. Observe that any RNN cell can be
used to build a deep transition RNN by connecting several of these cells sequentially.
Here, we propose the Fast-Slow RNN (FS-RNN) architecture, a novel way of interconnecting RNN
cells, that combines advantages of multiscale RNNs and deep transition RNNs. The architecture
consists of k sequentially connected RNN cells in the lower hierarchical layer and one RNN cell in
the higher hierarchical layer, see Figure 1 and Section 3. Therefore, the hidden state of the lower
layer is updated k times per time step, whereas the hidden state of the higher layer is updated only
once per time step. We evaluate the FS-RNN on two standard character level language modeling data
sets, namely Penn Treebank and Hutter Prize Wikipedia. Additionally, following [31], we present an
empirical analysis that reveals advantages of the FS-RNN architecture over other RNN architectures.
The main contributions of this paper are:
? We propose the FS-RNN as a novel RNN architecture.
? We improve state of the art results on the Penn Treebank and Hutter Prize Wikipedia data
sets.
? We surpass the BPC performance of the best known text compression algorithm evaluated
on Hutter Prize Wikipedia by using an ensemble of two FS-RNNs.
? We show empirically that the FS-RNN incorporates strengths of both multiscale RNNs and
deep transition RNNs, as it stores long-term dependencies efficiently and it adapts quickly
to unexpected input.
? We provide our code in the following URL https://github.com/amujika/Fast-Slow-LSTM.
2 Related work
In the following, we review the work that relates to our approach in more detail. First, we focus
on deep transition RNNs and multiscale RNNs since these two architectures are the main sources
of inspiration for the FS-RNN architecture. Then, we discuss how our approach differs from these
two architectures. Finally, we review other approaches that address the issue of learning long-term
dependencies when processing sequential data.
Pascanu et al. [31] investigated how a RNN can be converted into a deep RNN. In standard RNNs,
the transition function from one hidden state to the next is shallow, that is, the function can be
written as one linear transformation concatenated with a point wise non-linearity. The authors added
intermediate layers to increase the depth of the transition function, and they found empirically that
such deeper architectures boost performance. Since deeper architectures are more difficult to train,
they equip the network with skip connections, which give rise to shorter gradient paths (DT(S)-RNN,
see [31]). Following a similar line of research, Zilly et al. [43] further increased the transition depth
between two consecutive hidden states. They used highway layers [38] to address the issue of training
deep architectures. The resulting RHN [43] achieved state of the art results on the Penn Treebank and
Hutter Prize Wikipedia data sets. Furthermore, a vague similarity to deep transition networks can be
seen in adaptive computation [12], where an LSTM cell learns how many times it should update its
state after receiving the input to produce the next output.
Multiscale RNNs are obtained by stacking multiple RNNs with decreasing order of update frequencies
on top of each other. Early attempts proposed such architectures for sequential data compression
[35], where the higher layer is only updated in case of prediction errors of the lower layer, and for
sequence classification [9], where the higher layers are updated with a fixed smaller frequency. More
recently, Koutnik et al. [25] proposed the Clockwork RNN, in which the hidden units are divided into
[Figure 1: Diagram of a Fast-Slow RNN with k Fast cells. Observe that only the second Fast cell receives the input from the Slow cell.]
several modules, of which the $i$-th module is only updated every $2^i$-th time step. General advantages
of this multiscale RNN architecture are improved computational efficiency, efficient propagation
of long-term dependencies and flexibility in allocating resources (units) to the hierarchical layers.
Multiscale RNNs have been applied for speech recognition in [3], where the slower operating RNN
pools information over time and the timescales are fixed hyperparameters as in Clockwork RNNs. In
[36], multiscale RNNs are applied to make context-aware query suggestions. In this case, explicit
hierarchical boundary information is provided. Chung et al. [6] presented a hierarchical multiscale
RNN (HM-RNN) that discovers the latent hierarchical structure of the sequence without explicitly
given boundary information. If a parametrized boundary detector indicates the end of a segment, then
a summarized representation of the segment is fed to the upper layer and the state of the lower layer
is reset [6].
Our FS-RNN architectures borrows elements from both deep transition RNNs and multiscale RNNs.
The lower hierarchical layer is a deep transition RNN, that updates the hidden state several times per
time step, whereas the higher hierarchical layer updates the hidden state only once per time step.
Many approaches aim at solving the problem of learning long-term dependencies in sequential data.
A very popular one is to use external memory cells that can be accessed and modified by the network,
see Neural Turing Machines [13], Memory Networks [40] and Differentiable Neural Computer [14].
Other approaches focus on different optimization techniques rather than network architectures. One
attempt is Hessian Free optimization [29], a second order training method that achieved good results
on RNNs. The use of different optimization techniques can improve learning in a wide range of RNN
architectures and therefore, the FS-RNN may also benefit from it.
3
Fast-Slow RNN
We propose the FS-RNN architecture, see Figure 1. It consists of k sequentially connected RNN
cells F1 , . . . , Fk on the lower hierarchical layer and one RNN cell S on the higher hierarchical layer.
We call F1 , . . . , Fk the Fast cells, S the Slow cell and the corresponding hierarchical layers the Fast
and Slow layer, respectively. S receives input from F1 and feeds its state to F2 . F1 receives the
sequential input data xt , and Fk outputs the predicted probability distribution yt of the next element
of the sequence.
Intuitively, the Fast cells are able to learn complex transition functions from one time step to the
next one. The Slow cell gives rise to shorter gradient paths between sequential inputs that are distant
in time, and thus, it facilitates the learning of long-term dependencies. Therefore, the FS-RNN
architecture incorporates advantages of deep transition RNNs and of multiscale RNNs, see Section 2.
Since any kind of RNN cell can be used as building block for the FS-RNN architecture, we state
the formal update rules of the FS-RNN for arbitrary RNN cells. We define a RNN cell Q to be a
differentiable function f Q (h, x) that maps a hidden state h and an additional input x to a new hidden
state. Note that x can be input data or input from a cell in a higher or lower hierarchical layer. If a
cell does not receive an additional input, then we will omit x. The following equations define the
FS-RNN architecture for arbitrary RNN cells F1 , . . . , Fk and S.
$$h_t^{F_1} = f^{F_1}\big(h_{t-1}^{F_k}, x_t\big)$$
$$h_t^{S} = f^{S}\big(h_{t-1}^{S}, h_t^{F_1}\big)$$
$$h_t^{F_2} = f^{F_2}\big(h_t^{F_1}, h_t^{S}\big)$$
$$h_t^{F_i} = f^{F_i}\big(h_t^{F_{i-1}}\big) \quad \text{for } 3 \le i \le k$$

The output $y_t$ is computed as an affine transformation of $h_t^{F_k}$. It is possible to extend the FS-RNN
architecture in order to further facilitate the learning of long-term dependencies by adding hierarchical
layers, each of which operates on a slower timescale than the ones below, resembling clockwork
RNNs [25]. However, for the tasks considered in Section 4, we observed that this led to overfitting
the training data even when applying regularization techniques and reduced the performance at test
time. Therefore, we will not further investigate this extension of the model in this paper, even though
it might be beneficial for other tasks or larger data sets.
In the experiments in Section 4, we use LSTM cells as building blocks for the FS-RNN architecture.
For completeness, we state the update function $f^Q$ for an LSTM $Q$. The state of an LSTM is a pair $(h_t, c_t)$, consisting of the hidden state and the cell state. The function $f^Q$ maps the previous state and input $(h_{t-1}, c_{t-1}, x_t)$ to the next state $(h_t, c_t)$ according to

$$\begin{pmatrix} f_t \\ i_t \\ o_t \\ g_t \end{pmatrix} = W_h^Q h_{t-1} + W_x^Q x_t + b^Q$$
$$c_t = \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \tanh(g_t)$$
$$h_t = \sigma(o_t) \odot \tanh(c_t),$$

where $f_t$, $i_t$ and $o_t$ are commonly referred to as the forget, input and output gates, and $g_t$ are the new candidate cell states. Moreover, $W_h^Q$, $W_x^Q$ and $b^Q$ are the learnable parameters, $\sigma$ denotes the sigmoid function, and $\odot$ denotes element-wise multiplication.
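A self-contained NumPy sketch of one FS-RNN time step, directly following the update rules above, is given below. Weight shapes are illustrative assumptions, and cells F3, ..., Fk are modeled with no additional input; the authors' actual TensorFlow implementation is linked in Section 2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(params, h_prev, c_prev, x):
    """The update f^Q from the equations above. params = (W_h, W_x, b) with
    shapes (4u, u), (4u, d) and (4u,), where u is the cell size."""
    W_h, W_x, b = params
    z = W_h @ h_prev + W_x @ x + b
    f, i, o, g = np.split(z, 4)              # forget, input, output, candidate
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def fs_rnn_step(fast_params, slow_params, fast_state, slow_state, x_t):
    """One time step of an FS-RNN with k = len(fast_params) Fast LSTM cells."""
    (h_f, c_f), (h_s, c_s) = fast_state, slow_state
    h_f, c_f = lstm_cell(fast_params[0], h_f, c_f, x_t)      # F1 reads x_t
    h_s, c_s = lstm_cell(slow_params, h_s, c_s, h_f)         # S reads F1's output
    h_f, c_f = lstm_cell(fast_params[1], h_f, c_f, h_s)      # F2 reads S's output
    for params in fast_params[2:]:                           # F3 .. Fk
        h_f, c_f = lstm_cell(params, h_f, c_f, np.zeros(0))  # no extra input (W_x is (4u, 0))
    return (h_f, c_f), (h_s, c_s)  # y_t is an affine map of h_f followed by a softmax
```

Unrolling fs_rnn_step over a sequence and applying a softmax to an affine map of the final Fast state gives the predictive distribution $y_t$.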
4 Experiments
For the experiments, we consider the Fast-Slow LSTM (FS-LSTM), an FS-RNN in which each RNN cell is an LSTM cell. The FS-LSTM is evaluated on two character-level language modeling data
sets, namely Penn Treebank and Hutter Prize Wikipedia, which will be referred to as enwik8 in this
section. The task consists of predicting the probability distribution of the next character given all the
previous ones. In Section 4.1, we compare the performance of the FS-LSTM with other approaches.
In Section 4.2, we empirically compare the network dynamics of different RNN architectures and
show that the FS-LSTM combines the benefits of both deep transition RNNs and multiscale RNNs.
4.1 Performance on Penn Treebank and Hutter Prize Wikipedia
The FS-LSTM achieves 1.19 BPC and 1.25 BPC on the Penn Treebank and enwik8 data sets,
respectively. These results are compared to other approaches in Table 1 and Table 2 (the baseline
LSTM results without citations are taken from [44] for Penn Treebank and from [15] for enwik8).
For the Penn Treebank, the FS-LSTM outperforms all previous approaches with significantly less
parameters than the previous top approaches. We did not observe any improvement when increasing
the model size, probably due to overfitting. In the enwik8 data set, the FS-LSTM surpasses all other
neural approaches. Following [13], we compare the results with text compression algorithms using
the BPC measure. An ensemble of two FS-LSTM models (1.20 BPC) outperforms cmix (1.23 BPC)
[24], the current best text compression algorithm on enwik8 [27]. However, a fair comparison is
difficult. Compression algorithms are usually evaluated by the final size of the compressed data set
including the decompressor size. For character prediction models, the network size is usually not
taken into account and the performance is measured on the test set. We remark that as the FS-LSTM is
evaluated on the test set, it should achieve similar performance on any part of the English Wikipedia.
Table 1: BPC on Penn Treebank

Model                     BPC      Param Count
Zoneout LSTM [2]          1.27     -
2-Layers LSTM             1.243    6.6M
HM-LSTM [6]               1.24     -
HyperLSTM - small [15]    1.233    5.1M
HyperLSTM [15]            1.219    14.4M
NASCell - small [44]      1.228    6.6M
NASCell [44]              1.214    16.3M
FS-LSTM-2 (ours)          1.190    7.2M
FS-LSTM-4 (ours)          1.193    6.5M
The FS-LSTM-2 and FS-LSTM-4 models consist of two and four cells in the Fast layer, respectively.
The FS-LSTM-4 model outperforms the FS-LSTM-2 model, but its processing time for one time step
is 25% higher than the one of the FS-LSTM-2. Adding more cells to the Fast layer could further
improve the performance as observed for RHN [43], but would increase the processing time, because
the cell states are computed sequentially. Therefore, we did not further increase the number of Fast
cells.
The model is trained to minimize the cross-entropy loss between the predictions and the training data. Formally, the loss function is defined as $\mathcal{L} = -\frac{1}{n}\sum_{i=1}^{n} \log p_\theta(x_i \mid x_1, \ldots, x_{i-1})$, where $p_\theta(x_i \mid x_1, \ldots, x_{i-1})$ is the probability that a model with parameters $\theta$ assigns to the next character $x_i$ given all the previous ones. The model is evaluated by the BPC measure, which uses the binary
logarithm instead of the natural logarithm in the loss function. All the hyperparameters used for the
experiments are summarized in Table 3. We regularize the FS-LSTM with dropout [37]. In each
time step, a different dropout mask is applied for the non-recurrent connections [42], and Zoneout
[2] is applied for the recurrent connections. The network is trained with minibatch gradient descent
using the Adam optimizer [23]. If the gradients have norm larger than 1 they are normalized to 1.
Truncated backpropagation through time (TBPTT) [34, 10] is used to approximate the gradients,
and the final hidden state is passed to the next sequence. The learning rate is divided by a factor of 10 for the last 20 epochs in the Penn Treebank experiments, and it is divided by a factor of 10 whenever the validation error does not improve in two consecutive epochs in the enwik8 experiments. The
forget bias of every LSTM cell is initialized to 1, and all weight matrices are initialized to orthogonal
matrices. Layer normalization [1] is applied to the cell and to each gate separately. The network with
the smallest validation error is evaluated on the test set. The two data sets that we use for evaluation
are:
Penn Treebank [28] The dataset is a collection of Wall Street Journal articles written in English.
It only contains 10000 different words, all written in lower-case, and rare words are replaced with
"< unk >". Following [30], we split the data set into train, validation and test sets consisting of
5.1M, 400K and 450K characters, respectively.
Hutter Prize Wikipedia [19] This dataset is also known as enwik8 and it consists of "raw"
Wikipedia data, that is, English articles, tables, XML data, hyperlinks and special characters. The
data set contains 100M characters with 205 unique tokens. Following [7], we split the data set into
train, validation and test sets consisting of 90M, 5M and 5M characters, respectively.
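The byte-level preprocessing and the 90M/5M/5M split can be reproduced in a few lines (a sketch; it assumes the enwik8 file has been downloaded and unzipped, and the byte-level tokenization is how the 205 unique symbols arise):

```python
# Hypothetical preprocessing sketch for enwik8.
with open('enwik8', 'rb') as f:
    data = f.read()                      # 100M bytes of raw Wikipedia XML

train = data[:90_000_000]
valid = data[90_000_000:95_000_000]
test  = data[95_000_000:]

vocab = sorted(set(data))                # the 205 unique byte values
char_to_id = {c: i for i, c in enumerate(vocab)}
```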
4.2 Comparison of network dynamics of different architectures
We compare the FS-LSTM architecture with the stacked-LSTM and the sequential-LSTM architectures, depicted in Figure 2, by investigating the network dynamics. In order to conduct a fair
comparison we chose the number of parameters to roughly be the same for all three models. The
FS-LSTM consists of one Slow and four Fast LSTM cells of 450 units each. The stacked-LSTM
consists of five LSTM cells stacked on top of each other, consisting of 375 units each, which will be referred to as Stacked-1, ..., Stacked-5, from bottom to top.
Table 2: BPC on enwik8

Model                              BPC      Param Count
LSTM, 2000 units                   1.461    18M
Layer Norm LSTM, 1800 units        1.402    14M
HyperLSTM [15]                     1.340    27M
HM-LSTM [6]                        1.32     35M
Surprisal-driven Zoneout [33]      1.31     64M
ByteNet [22]                       1.31     -
RHN - depth 5 [43]                 1.31     23M
RHN - depth 10 [43]                1.30     21M
Large RHN - depth 10 [43]          1.27     46M
FS-LSTM-2 (ours)                   1.290    27M
FS-LSTM-4 (ours)                   1.277    27M
Large FS-LSTM-4 (ours)             1.245    47M
2 × Large FS-LSTM-4 (ours)         1.198    2 × 47M
cmix v13 [24]                      1.225    -

Table 3: Hyperparameters for the character-level language model experiments.

                          Penn Treebank           enwik8
                          FS-LSTM-2  FS-LSTM-4    FS-LSTM-2  FS-LSTM-4  Large FS-LSTM-4
Non-recurrent dropout     0.35       0.35         0.2        0.2        0.25
Cell zoneout              0.5        0.5          0.3        0.3        0.3
Hidden zoneout            0.1        0.1          0.05       0.05       0.05
Fast cell size            700        500          900        730        1200
Slow cell size            400        400          1500       1500       1500
TBPTT length              150        150          150        150        100
Minibatch size            128        128          128        128        128
Input embedding size      128        128          256        256        256
Initial learning rate     0.002      0.002        0.001      0.001      0.001
Epochs                    200        200          35         35         50
The sequential-LSTM consists of five
sequentially connected LSTM cells of 500 units each. All three models require roughly the same time
to process one time step. The models are trained on enwik8 for 20 epochs with minibatch gradient
descent using the Adam optimizer [23] without any regularization, but layer normalization [1] is
applied on the cell states of the LSTMs. The hyperparameters are not optimized for any of the three
models. We repeat each experiment 5 times and report the mean and standard deviation.
The experiments suggest that the FS-LSTM architecture favors the learning of long-term dependencies
(Figure 3), enforces hidden cell states to change at different rates (Figure 4) and facilitates a quick
adaptation to unexpected inputs (Figure 5). Moreover, the FS-LSTM achieves a mean performance
of 1.49 BPC with a standard deviation of 0.007 BPC and outperforms the stacked-LSTM (mean 1.60
BPC, standard deviation 0.022 BPC ) and the sequential-LSTM (mean 1.58 BPC, standard deviation
0.008 BPC ).
In Figure 3, we assess the ability to capture long-term dependencies by investigating the effect of the cell state on the loss at later time points, following [2]. We measure the effect of the cell state at time $t-k$ on the loss at time $t$ by the gradient $\big\|\frac{\partial \mathcal{L}_t}{\partial c_{t-k}}\big\|$.
[Figure 2: Diagram of (a) stacked-LSTM and (b) sequential-LSTM with 5 cells each.]

[Figure 3: Long-term effect of the cell states on the loss function. The average value of $\|\partial \mathcal{L}_t / \partial c_{t-k}\|$, which is the effect of the cell state at time $t-k$ on the loss function at time $t$, is plotted against $k$ for the different layers in the three RNN architectures. The shaded area shows the standard deviation. For the sequential-LSTM only the first cell is considered.]
This gradient is the largest for the Slow LSTM, and it is small and steeply decaying as $k$ increases for the Fast LSTM. Evidently, the Slow cell captures long-term dependencies, whereas the Fast cell only stores short-term information. In the stacked-LSTM, the gradients decrease from the top layer to the bottom layer, which can be explained by the vanishing gradient problem. The small, steeply decaying gradients of the sequential-LSTM indicate that it is less capable of learning long-term dependencies than the other two models.
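This kind of measurement is straightforward to reproduce with automatic differentiation. Below is a sketch in PyTorch for a single LSTM cell (the wiring and sizes are illustrative assumptions; extending it to the Slow and Fast states of an FS-LSTM is analogous):

```python
import torch

def cell_state_grad_norms(cell, embed, out_proj, xs, target):
    """Return ||dL_t / dc_{t-k}|| for k = 0 .. t-1, for an LSTM cell
    unrolled over the token ids xs = (x_1, ..., x_t); target is x_{t+1}."""
    h = torch.zeros(1, cell.hidden_size)
    c = torch.zeros(1, cell.hidden_size)
    cell_states = []
    for x in xs:
        h, c = cell(embed(x).unsqueeze(0), (h, c))
        c.retain_grad()                  # keep grads of intermediate cell states
        cell_states.append(c)
    loss = torch.nn.functional.cross_entropy(out_proj(h), target.view(1))
    loss.backward()
    return [cell_states[-1 - k].grad.norm().item()
            for k in range(len(cell_states))]

# Example wiring (sizes are arbitrary):
# cell = torch.nn.LSTMCell(64, 128); embed = torch.nn.Embedding(205, 64)
# out_proj = torch.nn.Linear(128, 205)
```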
Figure 4 gives further evidence that the FS-LSTM stores long-term dependencies efficiently in the
Slow LSTM cell. It shows that among all the layers of the three RNN architectures, the cell states of
the Slow LSTM change the least from one time step to the next. The highest change is observed for
the cells of the sequential model followed by the Fast LSTM cells.
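The statistic plotted in Figure 4 is cheap to compute from recorded cell states; a sketch, assuming the states of one layer have been stacked into an array:

```python
import numpy as np

def rate_of_change(C):
    """C: (T, n) array of cell states over T time steps.
    Returns (1/n) * sum_i (c_{t,i} - c_{t-1,i})^2 averaged over all time
    steps, i.e., the quantity plotted in Figure 4."""
    return float(np.mean(np.sum(np.diff(C, axis=0) ** 2, axis=1)) / C.shape[1])
```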
In Figure 5, we investigate whether the FS-LSTM quickly adapts to unexpected characters, that is,
whether it performs well on the subsequent ones. In text modeling, the initial character of a word
has the highest entropy, whereas later characters in a word are usually less ambiguous [10]. Since
the first character of a word is the most difficult one to predict, the performance at the following
positions should reflect the ability to adapt to unexpected inputs. While the prediction qualities at
the first position are rather close for all three models, the FS-LSTM outperforms the stacked-LSTM
and sequential-LSTM significantly on subsequent positions. It is possible that new information is
incorporated quickly in the Fast layer, because it only stores short-term information, see Figure 3.
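Curves like those in Figure 5 can be derived from per-character losses; a sketch (the word segmentation follows the definition given in the figure caption below):

```python
import re
from collections import defaultdict

def bpc_by_word_position(text, bpc):
    """text: decoded test string; bpc: per-character bits (same length).
    Averages the loss at each within-word position, where a word is a run
    of >= 2 lower-case letters between two spaces."""
    sums, counts = defaultdict(float), defaultdict(int)
    for m in re.finditer(r'(?<= )[a-z]{2,}(?= )', text):
        for pos, idx in enumerate(range(m.start(), m.end()), start=1):
            sums[pos] += bpc[idx]
            counts[pos] += 1
    return {pos: sums[pos] / counts[pos] for pos in sorted(sums)}
```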
[Figure 4: Rate of change of the cell states from one time step to the next. We plot $\frac{1}{n}\sum_{i=1}^{n} (c_{t,i} - c_{t-1,i})^2$ averaged over all time steps, where $c_{t,i}$ is the value of the $i$-th unit at time step $t$, for the different layers of the three RNN architectures. The error bars show the standard deviation. For the sequential-LSTM only the first cell is considered.]
[Figure 5: Bits-per-character at each character position. The left panel shows the average bits-per-character at each character position in the test set. The right panel shows the average relative loss with respect to the stacked-LSTM at each character position. The shaded area shows the standard deviation. For this figure, a word is considered to be a sequence of lower-case letters of length at least 2 between two spaces.]
5 Conclusion
In this paper, we have proposed the FS-RNN architecture. To our knowledge, it is the first architecture that incorporates ideas of both multiscale and deep transition RNNs. The FS-RNN
architecture improved state of the art results on character level language modeling evaluated on
the Penn Treebank and Hutter Prize Wikipedia data sets. An ensemble of two FS-RNNs achieves
better BPC performance than the best known compression algorithm. Further experiments provided
evidence that the Slow cell enables the network to learn long-term dependencies, while the Fast cells
enable the network to quickly adapt to unexpected inputs and learn complex transition functions from
one time step to the next.
Our FS-RNN architecture provides a general framework for connecting RNN cells as any type of
RNN cell can be used as building block. Thus, there is a lot of flexibility in applying the architecture
to different tasks. For instance using RNN cells with good long-term memory, like EURNNs [21]
or NARX RNNs [26, 8], for the Slow cell might boost the long-term memory of the FS-RNN
architecture. Therefore, the FS-RNN architecture might improve performance in many different
applications.
Acknowledgments
We thank Julian Zilly for many helpful discussions.
References
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016.
[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Zoneout: Regularizing rnns by randomly
preserving hidden activations. arXiv preprint arXiv:1607.06450, 2016.
[3] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end
attention-based large vocabulary speech recognition. Acoustics, Speech and Signal Processing (ICASSP),
2016 IEEE International Conference, 2016.
[4] Yoshua Bengio et al. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[5] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient
descent is difficult. IEEE transactions on neural networks, 5(2):157?166, 1994.
[6] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks.
arXiv preprint arXiv:1609.01704, 2016.
[7] Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural
networks. In ICML, pages 2067?2075, 2015.
[8] Robert DiPietro, Nassir Navab, and Gregory D. Hager. Revisiting narx recurrent neural networks for
long-term dependencies, 2017.
[9] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In
Nips, volume 409, 1995.
[10] Jeffrey L. Elman. Finding structure in time. COGNITIVE SCIENCE, 14:179?211, 1990.
[11] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[12] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983,
2016.
[13] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401,
2014.
[14] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.
[15] David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1611.01578, 2016.
[16] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991.
[17] Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem
solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107?116,
1998.
[18] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[19] Marcus Hutter. The human knowledge compression contest. http://prize.hutter1.net, 2012.
[20] Herbert Jaeger. Discovering multiscale dynamical features with hierarchical echo state networks. Technical
report, Jacobs University Bremen, 2007.
[21] Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. Tunable efficient unitary neural networks (EUNN) and their application to RNNs, 2016.
[22] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray
Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
[23] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[24] Bryon Knoll. Cmix. http://www.byronknoll.com/cmix.html, 2017. Accessed: 2017-05-18.
[25] Jan Koutník, Klaus Greff, Faustino Gomez, and Jürgen Schmidhuber. A clockwork rnn. arXiv preprint
arXiv:1603.08983, 2016.
[26] Tsungnan Lin, Bill G Horne, Peter Tino, and C Lee Giles. Learning long-term dependencies in narx
recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329?1338, 1996.
[27] Matt Mahoney. Large text compression benchmark. http://mattmahoney.net/dc/text.html, 2017.
Accessed: 2017-05-18.
[28] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus
of english: The penn treebank. Comput. Linguist., 19(2):313?330, June 1993.
[29] James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In
Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1033?1040,
2011.
[30] Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Černocký. Subword language modeling with neural networks. Preprint: http://www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf, 2012.
[31] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent
neural networks. arXiv preprint arXiv:1312.6026, 2013.
[32] AJ Robinson and Frank Fallside. The utility driven dynamic error propagation network. University of
Cambridge Department of Engineering, 1987.
[33] Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj.
arXiv:1610.07675, 2016.
Surprisal-driven zoneout.
arXiv preprint
[34] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by backpropagating errors. Cognitive modeling, 5(3):1, 1988.
[35] J?rgen Schmidhuber. Learning complex, extended sequences using the principle of history compression.
Neural Computation, 4(2):234?242, 1992.
[36] Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In
Proceedings of the 24th ACM International on Conference on Information and Knowledge Management,
pages 553?562. ACM, 2015.
[37] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929?1958, January
2014.
[38] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint
arXiv:1505.00387, 2015.
[39] Paul J Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural
networks, 1(4):339?356, 1988.
[40] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916,
2014.
[41] Ronald J Williams. Complexity of exact gradient computation algorithms for recurrent neural networks.
Technical report, Technical Report Technical Report NU-CCS-89-27, Boston: Northeastern University,
College of Computer Science, 1989.
[42] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv
preprint arXiv:1409.2329, 2014.
[43] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway
networks. arXiv preprint arXiv:1607.03474, 2016.
[44] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. arXiv preprint
arXiv:1611.01578, 2016.
6,825 | 7,174 | Learning Disentangled Representations with
Semi-Supervised Deep Generative Models
N. Siddharth
University of Oxford
[email protected]
Brooks Paige
Alan Turing Institute
University of Cambridge
Jan-Willem van de Meent
Northeastern University
[email protected]
[email protected]
Alban Desmaison
University of Oxford
Noah D. Goodman
Stanford University
Pushmeet Kohli ?
Deepmind
[email protected]
[email protected]
[email protected]
Frank Wood
University of Oxford
Philip H.S. Torr
University of Oxford
[email protected]
[email protected]
Abstract
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic
encoder and decoder network. Typically these models encode all features of the data into a
single variable. Here we are interested in learning disentangled representations that encode
distinct aspects of the data into separate variables. We propose to learn such representations
using model architectures that generalise from standard VAEs, employing a general graphical
model structure in the encoder and decoder. This allows us to train partially-specified models
that make relatively strong assumptions about a subset of interpretable variables and rely on
the flexibility of neural networks to learn representations for the remaining variables. We
further define a general objective for semi-supervised learning in this model class, which can be
approximated using an importance sampling procedure. We evaluate our framework?s ability
to learn disentangled representations, both by qualitative exploration of its generative capacity,
and quantitative evaluation of its discriminative ability on a variety of models and datasets.
1 Introduction
Learning representations from data is one of the fundamental challenges in machine learning and
artificial intelligence. Characteristics of learned representations can depend on their intended use.
For the purposes of solving a single task, the primary characteristic required is suitability for that
task. However, learning separate representations for each and every such task involves a large amount
of wasteful repetitive effort. A representation that has some factorisable structure, and consistent
semantics associated to different parts, is more likely to generalise to a new task.
Probabilistic generative models provide a general framework for learning representations: a model is
specified by a joint probability distribution both over the data and over latent random variables, and a
representation can be found by considering the posterior on latent variables given specific data. The
learned representation ? that is, inferred values of latent variables ? depends then not just on the
data, but also on the generative model in its choice of latent variables and the relationships between
the latent variables and the data. There are two extremes of approaches to constructing generative
models. At one end are fully-specified probabilistic graphical models [18, 21], in which a practitioner
decides on all latent variables present in the joint distribution, the relationships between them, and
the functional form of the conditional distributions which define the model. At the other end are
* Author was at Microsoft Research during this project.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
deep generative models [7, 16, 19, 20], which impose very few assumptions on the structure of the
model, instead employing neural networks as flexible function approximators that can be used to
train a conditional distribution on the data, rather than specify it by hand.
The tradeoffs are clear. In an explicitly constructed graphical model, the structure and form of the
joint distribution ensures that latent variables will have particular semantics, yielding a disentangled
representation. Unfortunately, defining a good probabilistic model is hard: in complex perceptual
domains such as vision, extensive feature engineering (e.g. Berant et al. [1], Siddharth et al. [30]) may
be necessary to define a suitable likelihood function. Deep generative models completely sidestep
the difficulties of feature engineering. Although they address learning representations which then
enable them to better reconstruct data, the representations themselves do not always exhibit consistent
meaning along axes of variation: they produce entangled representations. While such approaches
have considerable merit, particularly in the absence of any side information about the data,
there are often situations in which aspects of variation in the data can be, or are desired to be, characterised.
Bridging this gap is challenging. One way to enforce a disentangled representation is to hold different
axes of variation fixed during training [20]. Johnson et al. [13] combine a neural net likelihood
with a conjugate exponential family model for the latent variables. In this class of models, efficient
marginalisation over the latent variables can be performed by learning a projection onto the same
conjugate exponential family in the encoder. Here we propose a more general class of partially-specified graphical models: probabilistic graphical models in which the modeller only needs to specify
the exact relationship for some subset of the random variables in the model. Factors left undefined in
the model definition are then learned, parametrised by flexible neural networks. This provides the
ability to situate oneself at a particular point on a spectrum, by specifying precisely those axes of
variations (and their dependencies) we have information about or would like to extract, and learning
disentangled representations for them, while leaving the rest to be learned in an entangled manner.
A subclass of partially-specified models that is particularly common is that where we can obtain
supervision data for some subset of the variables. In practice, there is often variation in the data
which is (at least conceptually) easy to explain, and therefore annotate, whereas other variation is less
clear. For example, consider the MNIST dataset of handwritten digits: the images vary both in terms
of content (which digit is present), and style (how the digit is written), as is visible in the right-hand
side of Fig. 1. Having an explicit ?digit? latent variable captures a meaningful and consistent axis of
variation, independent of style; using a partially-specified graphical model means we can define a
?digit? variable even while leaving unspecified the semantics of the different styles, and the process
of rendering a digit to an image. In a fully unsupervised learning procedure there is generally no
guarantee that inference on a model with 10 classes will in fact recover the 10 digits. However, given
a small amount of labelled examples, this task becomes significantly easier. Beyond the ability to
encode variation along some particular axes, we may also want to interpret the same data in different
ways. For example, when considering images of people's faces, we might wish to capture the person's
identity in one context, and the lighting conditions on the faces in another.
In this paper we introduce a recipe for learning and inference in partially-specified models, a flexible
framework that learns disentangled representations of data by using graphical model structures to
encode constraints to interpret the data. We present this framework in the context of variational
autoencoders (VAEs), developing a generalised formulation of semi-supervised learning with DGMs
that enables our framework to automatically employ the correct factorisation of the objective for
any given choice of model and set of latents taken to be observed. In this respect our work extends
previous efforts to introduce supervision into variational autoencoders [17, 23, 31]. We introduce a
variational objective which is applicable to a more general class of models, allowing us to consider
graphical-model structures with arbitrary dependencies between latents, continuous-domain latents,
and those with dynamically changing dependencies. We provide a characterisation of how to compile
partially-supervised generative models into stochastic computation graphs, suitable for end-to-end
training. This approach allows us also amortise inference [6, 22, 28, 33], simultaneously learning
a network that performs approximate inference over representations at the same time we learn the
unknown factors of the model itself. We demonstrate the efficacy of our framework on a variety of
tasks, involving classification, regression, and predictive synthesis, including its ability to encode
latents of variable dimensionality.
[Figure 1 appears here: the generative model, recognition model, and stochastic computation graph, with nodes x (data), y (digit label; partially labelled) and z (handwriting style).]

Figure 1: Semi-supervised learning in structured variational autoencoders, illustrated on MNIST digits. Top-Left: Generative model. Bottom-Left: Recognition model. Middle: Stochastic computation graph, showing expansion of each node to its corresponding sub-graph. Generative-model dependencies are shown in blue and recognition-model dependencies are shown in orange. See Section 2.2 for a detailed explanation. Right: learned representation.

2 Framework and Formulation

VAEs [16, 27] are a class of deep generative models that simultaneously train both a probabilistic encoder and decoder for elements of a data set D = {x^1, ..., x^N}. The central analogy is that an encoding z can be considered a latent variable, casting the decoder as a conditional probability density p_θ(x | z). The parameters η_θ(z) of this distribution are the output of a deterministic neural network with parameters θ (most commonly MLPs or CNNs) which takes z as input. By placing a weak prior over z, the decoder defines a posterior and joint distribution p_θ(z | x) ∝ p_θ(x | z) p(z).

Inference in VAEs can be performed using a variational method that approximates the posterior distribution p_θ(z | x) using an encoder q_φ(z | x), whose parameters λ_φ(x) are the output of a network (with parameters φ) that is referred to as an 'inference network' or a 'recognition network'. The generative and inference networks, denoted by solid and dashed lines respectively in the graphical model, are trained jointly by performing stochastic gradient ascent on the evidence lower bound (ELBO) L(θ, φ; D) ≤ log p_θ(D),

    L(θ, φ; D) = ∑_{n=1}^{N} L(θ, φ; x^n) = ∑_{n=1}^{N} E_{q_φ(z|x^n)}[ log p_θ(x^n | z) + log p(z) − log q_φ(z | x^n) ].   (1)

Typically, the first term E_{q_φ(z|x^n)}[ log p_θ(x^n | z) ] is approximated by a Monte Carlo estimate and the remaining two terms are expressed as a divergence −KL(q_φ(z | x^n) ‖ p(z)), which can be computed analytically when the encoder model and prior are Gaussian. In this paper, we will consider models in which both the generative model p_θ(x, y, z) and the approximate posterior q_φ(y, z | x) can have arbitrary conditional dependency structures involving random variables defined over a number of different distribution types. We are interested in defining
VAE architectures in which a subset of variables y are interpretable. For these variables, we assume
that supervision labels are available for some fraction of the data. The VAE will additionally retain
some set of variables z for which inference is performed in a fully unsupervised manner. This is in
keeping with our central goal of defining and learning in partially-specified models. In the running
example for MNIST, y corresponds to the classification label, whereas z captures all other implicit
features, such as the pen type and handwriting style.
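To make Eq. (1) concrete, the following is a minimal single-sample ELBO estimate for a Gaussian encoder and a Bernoulli decoder in PyTorch. This is an illustrative sketch under those distributional assumptions; the encoder and decoder callables are placeholders, not code from the released library.

import torch
import torch.nn.functional as F

def elbo_single_sample(x, encoder, decoder):
    # q_phi(z | x): encoder returns mean and log-variance of a diagonal Gaussian
    mu, logvar = encoder(x)
    # reparametrised sample: z = mu + sigma * eps, with eps ~ N(0, I)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # log p_theta(x | z): Bernoulli likelihood over pixels
    logits = decoder(z)
    log_px = -F.binary_cross_entropy_with_logits(logits, x, reduction='none').sum(-1)
    # -KL(q_phi(z | x) || p(z)), in closed form for a Gaussian encoder and prior
    neg_kl = 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return (log_px + neg_kl).mean()  # maximise via stochastic gradient ascent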
This class of models is more general than the models in the work by Kingma et al. [17], who consider
three model designs with a specific conditional dependence structure. We also do not require p(y, z)
to be a conjugate exponential family model, as in the work by Johnson et al. [14]. To perform
semi-supervised learning in this class of models, we need to i) define an objective that is suitable to
general dependency graphs, and ii) define a method for constructing a stochastic computation graph
[29] that incorporates both the conditional dependence structure in the generative model and that of
the recognition model into this objective.
2.1 Objective Function
Previous work on semi-supervised learning for deep generative models [17] defines an objective over N unsupervised data points D = {x^1, ..., x^N} and M supervised data points D_sup = {(x^1, y^1), ..., (x^M, y^M)},

    L(θ, φ; D, D_sup) = ∑_{n=1}^{N} L(θ, φ; x^n) + γ ∑_{m=1}^{M} L_sup(θ, φ; x^m, y^m).   (2)

[The accompanying graphical model shows an unsupervised plate of size N over (z^n, y^n, x^n) and a supervised plate of size M over (z^m, y^m, x^m).]

Our model's joint distribution factorises into unsupervised and supervised collections of terms over D and D_sup, as shown in the graphical model. The standard variational bound on the joint evidence of all observed data (including supervision) also factorises as shown in Eq. (2). As the factor corresponding to the unsupervised part of the graphical model is exactly that of Eq. (1), we focus on the supervised term in Eq. (2), expanded below, incorporating an additional weighted component as in Kingma et al. [17]:

    L_sup(θ, φ; x^m, y^m) = E_{q_φ(z|x^m,y^m)}[ log ( p_θ(x^m, y^m, z) / q_φ(z | x^m, y^m) ) ] + α log q_φ(y^m | x^m).   (3)
Note that the formulation in Eq. (2) introduces a constant γ that controls the relative strength of the supervised term. While the joint distribution in our model implicitly weights the two terms, in situations where the relative sizes of D and D_sup are vastly different, having control over the relative weights of the terms can help ameliorate such discrepancies.
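Read concretely, Eq. (2) is a weighted sum of per-example bounds. The sketch below assumes elbo and elbo_sup implement Eqs. (1) and (3) respectively; both names are illustrative.

def semi_supervised_objective(unlabelled, labelled, gamma, elbo, elbo_sup):
    # unsupervised ELBO terms over D, plus gamma-weighted supervised terms over D_sup
    loss = sum(elbo(x) for x in unlabelled)
    loss += gamma * sum(elbo_sup(x, y) for (x, y) in labelled)
    return loss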
This definition in Eq. (3) implicitly assumes that we can evaluate the conditional probability q_φ(z | x, y) and the marginal q_φ(y | x) = ∫ dz q_φ(y, z | x). This was indeed the case for the models considered by Kingma et al. [17], which have a factorisation q_φ(y, z | x) = q_φ(z | x, y) q_φ(y | x). Here we will derive an estimator for L_sup that generalises to models in which q_φ(y, z | x) can have an arbitrary conditional dependence structure. For purposes of exposition, we will for the moment consider the case where q_φ(y, z | x) = q_φ(y | x, z) q_φ(z | x). For this factorisation, generating samples z^{m,s} ∼ q_φ(z | x^m, y^m) requires inference, which means we can no longer compute a simple Monte Carlo estimator by sampling from the unconditioned distribution q_φ(z | x^m). Moreover, we also cannot evaluate the density q_φ(z | x^m, y^m).
In order to address these difficulties, we re-express the supervised terms in the objective as

    L_sup(θ, φ; x^m, y^m) = E_{q_φ(z|x^m,y^m)}[ log ( p_θ(x^m, y^m, z) / q_φ(y^m, z | x^m) ) ] + (1 + α) log q_φ(y^m | x^m),   (4)
which removes the need to evaluate q_φ(z | x^m, y^m). We can then use (self-normalised) importance sampling to approximate the expectation. To do so, we sample proposals z^{m,s} ∼ q_φ(z | x^m) from the unconditioned encoder distribution, and define the estimator

    E_{q_φ(z|x^m,y^m)}[ log ( p_θ(x^m, y^m, z) / q_φ(y^m, z | x^m) ) ] ≃ (1/S) ∑_{s=1}^{S} (w^{m,s} / Z^m) log ( p_θ(x^m, y^m, z^{m,s}) / q_φ(y^m, z^{m,s} | x^m) ),   (5)

where the unnormalised importance weights w^{m,s} and normaliser Z^m are defined as

    w^{m,s} := q_φ(y^m, z^{m,s} | x^m) / q_φ(z^{m,s} | x^m),    Z^m = (1/S) ∑_{s=1}^{S} w^{m,s}.   (6)
To approximate log q_φ(y^m | x^m), we use a Monte Carlo estimator of the lower bound that is normally used in maximum likelihood estimation,

    log q_φ(y^m | x^m) ≥ E_{q_φ(z|x^m)}[ log ( q_φ(y^m, z | x^m) / q_φ(z | x^m) ) ] ≃ (1/S) ∑_{s=1}^{S} log w^{m,s},   (7)

using the same samples z^{m,s} and weights w^{m,s} as in Eq. (5). When we combine the terms in Eqs. (5)
using the same samples zm,s and weights wm,s as in Eq. (5). When we combine the terms in Eqs. (5)
and (7), we obtain the estimator
S
1 X wm,s
p? (xm , ym , zm,s )
L?sup (?, ?; xm, ym ) :=
log
+ (1 + ?) log wm,s .
m
S s=1 Z
q? (ym , zm,s | xm )
4
(8)
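A sketch of the estimator in Eq. (8). The log-density callables passed in are assumed to be provided by the model; their names are illustrative, not part of the released library.

import math
import torch

def supervised_estimator(x, y, S, alpha, sample_z, log_q_z, log_q_yz, log_p_xyz):
    # draw S proposals z^s ~ q_phi(z | x) from the unconditioned encoder
    zs = [sample_z(x) for _ in range(S)]
    # log importance weights: log w^s = log q(y, z^s | x) - log q(z^s | x)
    log_w = torch.stack([log_q_yz(y, z, x) - log_q_z(z, x) for z in zs])
    # log Z = log((1/S) sum_s w^s), computed stably
    log_Z = torch.logsumexp(log_w, dim=0) - math.log(S)
    w_norm = torch.exp(log_w - log_Z)  # self-normalised weights w^s / Z
    inner = torch.stack([log_p_xyz(x, y, z) - log_q_yz(y, z, x) for z in zs])
    # Eq. (8): average of the weighted log-ratio plus (1 + alpha) log w^s
    return (w_norm * inner + (1 + alpha) * log_w).mean(dim=0)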
We note that this estimator applies to any conditional dependence structure. Suppose that we were to define an encoder q_φ(z_2, y_1, z_1 | x) with factorisation q_φ(z_2 | y_1, z_1, x) q_φ(y_1 | z_1, x) q_φ(z_1 | x). If we propose z_2 ∼ q_φ(z_2 | y_1, z_1, x) and z_1 ∼ q_φ(z_1 | x), then the importance weights w^{m,s} for the estimator in Eq. (8) are defined as

    w^{m,s} := q_φ(z_2^{m,s}, y_1^m, z_1^{m,s} | x^m) / [ q_φ(z_2^{m,s} | y_1^m, z_1^{m,s}, x^m) q_φ(z_1^{m,s} | x^m) ] = q_φ(y_1^m | z_1^{m,s}, x^m).
In general, the importance weights are simply the product of conditional probabilities of the supervised variables y in the model. Note that this also applies to the models in Kingma et al. [17], whose objective we can recover by taking the weights to be the constants w^{m,s} = q_φ(y^m | x^m).
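For instance, under the two-layer factorisation above, the weight computation collapses to a single conditional density evaluation; a brief illustrative sketch:

# For q(z2, y1, z1 | x) = q(z2 | y1, z1, x) q(y1 | z1, x) q(z1 | x), proposing
# z1 ~ q(z1 | x) and z2 ~ q(z2 | y1, z1, x) cancels the proposal densities,
# leaving only the supervised variable's conditional in the weight:
def log_weight(y1, z1, x, log_q_y1):
    return log_q_y1(y1, z1, x)  # log w = log q(y1 | z1, x)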
We can also define an objective analogous to the one used in importance-weighted autoencoders [2], in which we compute the logarithm of a Monte Carlo estimate, rather than the Monte Carlo estimate of a logarithm. This objective takes the form

    L̂_sup,iw(θ, φ; x^m, y^m) := log [ (1/S) ∑_{s=1}^{S} p_θ(x^m, y^m, z^{m,s}) / q_φ(z^{m,s} | x^m) ] + α log [ (1/S) ∑_{s=1}^{S} w^{m,s} ],   (9)

which can be derived by moving the sums in Eq. (8) into the logarithms and applying the substitution w^{m,s} / q_φ(y^m, z^{m,s} | x^m) = 1 / q_φ(z^{m,s} | x^m).
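The variant in Eq. (9) only moves the logarithm outside the Monte Carlo averages; a numerically stable sketch:

import math
import torch

def log_mean_exp(v, dim=0):
    # log((1/S) * sum_s exp(v_s)), computed stably
    return torch.logsumexp(v, dim=dim) - math.log(v.shape[dim])

def supervised_estimator_iw(log_p_over_q, log_w, alpha):
    # Eq. (9): log_p_over_q[s] = log p(x, y, z^s) - log q(z^s | x),
    # and log_w holds the log importance weights of Eq. (6)
    return log_mean_exp(log_p_over_q) + alpha * log_mean_exp(log_w)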
2.2 Construction of the Stochastic Computation Graph
To perform gradient ascent on the objective in Eq. (8), we map the graphical models for p_θ(x, y, z) and q_φ(y, z | x) onto a stochastic computation graph in which each stochastic node forms a sub-graph. Figure 1 shows this expansion for the simple VAE for MNIST digits from [16]. In this model, y is a discrete variable that represents the underlying digit, our latent variable of interest, for which we have partial supervision data. An unobserved Gaussian-distributed variable z captures the remainder of the latent information. This includes features such as the hand-writing style and stroke thickness. In the generative model (Fig. 1 top-left), we assume a factorisation p_θ(x, y, z) = p_θ(x | y, z) p(y) p(z) in which y and z are independent under the prior. In the recognition model (Fig. 1 bottom-left), we use a conditional dependency structure q_φ(y, z | x) = q_{φz}(z | y, x) q_{φy}(y | x) to disentangle the digit label y from the handwriting style z (Fig. 1 right).
The generative and recognition models jointly form a stochastic computation graph (Fig. 1 centre) containing a sub-graph for each stochastic variable. These can correspond to fully supervised, partially supervised and unsupervised variables. This example graph contains three types of sub-graphs, corresponding to the three possibilities for supervision and gradient estimation:

• For the fully supervised variable x, we compute the likelihood p under the generative model, that is p_θ(x | y, z) = N(x; η_θ(y, z)). Here η_θ(y, z) is a neural net with parameters θ that returns the parameters of a normal distribution (i.e. a mean vector and a diagonal covariance).
• For the unobserved variable z, we compute both the prior probability p(z) = N(z; η_z) and the conditional probability q_φ(z | x, y) = N(z; λ_{φz}(x, y)). Here the usual reparametrisation trick is used to sample z from q_φ(z | x, y): we first sample ε ∼ N(0, I) and set z = g(ε, λ_φ(x, y)).

• For the partially observed variable y, we also compute probabilities p(y) = Discrete(y; η_y) and q_{φy}(y | x) = Discrete(y; λ_{φy}(x)). The value y is treated as observed when available, and sampled otherwise. In this particular example, we sample y from q_{φy}(y | x) using a Gumbel-softmax [12, 24] relaxation of the discrete distribution (a sampling sketch follows below).
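As a concrete illustration of that relaxation, a minimal Gumbel-softmax sampler is given below; recent PyTorch releases also ship this as torch.nn.functional.gumbel_softmax.

import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    # g_i ~ Gumbel(0, 1), via -log(-log(u)) with u ~ Uniform(0, 1)
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # temperature-controlled relaxed one-hot sample, differentiable in logits
    return F.softmax((logits + g) / tau, dim=-1)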
The example in Fig. 1 illustrates a general framework for defining VAEs with arbitrary dependency
structures. We begin by defining a node for each random variable. For each node we then specify
a distribution type and parameter function η, which determines how the probability under the
generative model depends on the other variables in the network. This function can be a constant, fully
deterministic, or a neural network whose parameters are learned from the data. For each unsupervised
and semi-supervised variable we must additionally specify a function λ that returns the parameter
values in the recognition model, along with a (reparametrised) sampling procedure.
Given this specification of a computation graph, we can now compute the importance sampling estimate in Eq. (8) by simply running the network forward repeatedly to obtain samples from q_φ(· | ·) for all unobserved variables. We then calculate p_θ(x, y, z), q_φ(y | x), q_φ(y, z | x), and the importance weight w, which is the joint probability of all semi-supervised variables for which labels are available. This estimate can then be optimised with respect to the parameters θ and φ to train the autoencoder.

Figure 2: (a) Visual analogies for the MNIST data, partially supervised with just 100 labels (out of 50000). We infer the style variable z and then vary the label y. (b) Exploration in style space with label y held fixed and (2D) style z varied. Visual analogies for the SVHN data when (c) partially supervised with just 1000 labels, and (d) fully supervised.
3 Experiments
We evaluate our framework along a number of different axes pertaining to its ability to learn disentangled representations through the provision of partial graphical-model structures for the latents and
weak supervision. In particular, we evaluate its ability to (i) function as a classifier/regressor for
particular latents under the given dataset, (ii) learn the generative model in a manner that preserves
the semantics of the latents with respect to the data generated, and (iii) perform these tasks, in a
flexible manner, for a variety of different models and data.
For all the experiments run, we choose architecture and parameters that are considered standard
for the type and size of the respective datasets. Where images are concerned (with the exception
of MNIST), we employ (de)convolutional architectures, and employ a standard GRU recurrence
in the Multi-MNIST case. For learning, we used Adam [15] with the learning rate and momentum-correction terms set to their default values. The mini-batch sizes varied from 100 to 700 depending on the dataset being used and the size of the labelled subset D_sup. All of the above, including further details of precise parameter values and the source code, including our PyTorch-based library for specifying arbitrary graphical models in the VAE framework, is available at
https://github.com/probtorch/probtorch.
3.1 MNIST and SVHN
We begin with an experiment involving a simple dependency structure, in fact the very same as that
in Kingma et al. [17], to validate the performance of our importance-sampled objective in the special
case where the recognition network and generative models factorise as indicated in Fig. 1(left), giving
us importance weights that are constant, w^{m,s} = q_φ(y^m | x^m). The model is tested on its ability to
classify digits and perform conditional generation on the MNIST and Google Street-View House
Numbers (SVHN) datasets. As Fig. 1(left) shows, the generative and recognition models have the
'digit' label, denoted y, partially specified (and partially supervised) and the 'style' factor, denoted
z, assumed to be an unobserved (and unsupervised) variable.
Figure 2(a) and (c) illustrate the conditional generation capabilities of the learned model, where we
show the effect of first transforming a given input (leftmost column) into the disentangled latent
space, and with the style latent variable fixed, manipulating the digit through the generative model to
generate data with expected visual characteristics. Note that both these results were obtained with
partial supervision ? 100 (out of 50000) labelled data points in the case of MNIST and 1000 (out
of 70000) labelled data points in the case of SVHN. The style latent variable z was taken to be a
diagonal-covariance Gaussian of 10 and 15 dimensions respectively. Figure 2(d) shows the same for
SVHN with full supervision. Figure 2(b) illustrates the alternate mode of conditional generation,
where the style latent, here taken to be a 2D Gaussian, is varied with the digit held fixed.
Next, we evaluate our model's ability to effectively learn a classifier from partial supervision. We
compute the classification error on the label-prediction task on both datasets, and the results are reported in the table in Fig. 3.

[Figure 3 plots classification error (%) against the supervision rate for MNIST (N = 50000; M = 100, 600, 1000, 3000) and SVHN (N = 70000; M = 1000, 3000), alongside the following table.]

MNIST:
M       Ours             M2 [17]
100     9.71 (± 0.91)    11.97 (± 1.71)
600     3.84 (± 0.86)     4.94 (± 0.13)
1000    2.88 (± 0.79)     3.60 (± 0.56)
3000    1.57 (± 0.93)     3.92 (± 0.63)

SVHN:
M       Ours             M1+M2 [17]
1000    38.91 (± 1.06)   36.02 (± 0.10)
3000    29.07 (± 0.83)   n/a

Figure 3: Right: Classification error rates for different labelled-set sizes M over multiple runs, with supervision rate ρ = γM/(N + γM), γ = 1. For SVHN, we compare against a multi-stage process (M1+M2) [17], where our model only uses a single stage. Left: Classification error over different labelled set sizes and supervision rates for MNIST (top) and SVHN (bottom). Here, scaling of the classification objective is held fixed at α = 50 (MNIST) and α = 70 (SVHN). Note that for sparsely labelled data (M ≪ N), a modicum of over-representation (γ > 1) helps improve generalisation with better performance on the test set. Conversely, too much over-representation leads to overfitting.

Note that there are a few minor points of difference in the setup
between our method and those we compare against [17]. We always run our models directly on the
data, with no pre-processing or pre-learning on the data. Thus, for MNIST, we compare against
model M2 from the baseline which does just the same. However, for SVHN, the baseline method
does not report errors for the M2 model; only the two-stage M1+M2 model which involves a separate
feature-extraction step on the data before learning a semi-supervised classifier.
As the results indicate, our model and objective do indeed perform on par with the setup considered in Kingma et al. [17], serving as basic validation of our framework. We note, however, that from
the perspective of achieving the lowest possible classification error, one could adopt any number of
alternate factorisations [23] and innovations in neural-network architectures [26, 32].
Supervision rate: As discussed in Section 2.1, we formulate our objective to provide a handle on the relative weight between the supervised and unsupervised terms. For a given unsupervised set size N, supervised set size M, and scaling term γ, the relative weight is ρ = γM/(N + γM). Figure 3 shows exploration of this relative weight parameter over the MNIST and SVHN datasets and over different supervised set sizes M. Each line in the graph measures the classification error for a given M, over γ, starting at γ = 1, i.e. ρ = M/(N + M). In line with Kingma et al. [17], we use α = 0.1/ρ. When the labelled data is very sparse (M ≪ N), over-representing the labelled examples during training can help aid generalisation by improving performance on the test data. In our experiments, for the most part, choosing this factor to be ρ = M/(N + M) provides good results. However, as is to be expected, over-fitting occurs when ρ is increased beyond a certain point.
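As a quick worked example on the MNIST setting above (N = 50000): with M = 100 and γ = 50, the supervision rate is ρ = γM/(N + γM) = 5000/55000 ≈ 0.09, compared with ρ = 100/50100 ≈ 0.002 at γ = 1, so each labelled point is over-represented by roughly a factor of 50 relative to the unlabelled data.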
3.2 Intrinsic Faces
We next move to a more complex domain involving generative models of faces. As can be seen in the
graphical models for this experiment in Fig. 4, the dependency structures employed here are more
complex in comparison to those from the previous experiment. Here, we use the 'Yale B' dataset [5]
as processed by Jampani et al. [11] for the results in Fig. 5. We are interested in showing that our
model can learn disentangled representations of identity and lighting and evaluate its performance
on the tasks of (i) classification of person identity, and (ii) regression for lighting direction.
Note that our generative model assumes no special structure: we simply specify a model where all latent variables are independent under the prior. Previous work [11] assumed a generative model with latent variables identity i, lighting l, shading s, and reflectance r, following the relationship (n · l) × r + ε for the pixel data. Here, we wish to demonstrate that our generative model still learns
the correct relationship over these latent variables, by virtue of the structure in the recognition model
and given (partial) supervision.
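For reference, the pixel relationship from [11] quoted above can be written as a small function. The Lambertian form below is an illustrative sketch of that relationship, not code from either paper.

import torch

def render_pixels(normals, light, reflectance, eps=None):
    # shading from surface normals and a global light direction: n . l
    shading = (normals * light).sum(dim=-1, keepdim=True).clamp(min=0)
    # pixel intensities: (n . l) * r, plus optional noise eps
    out = shading * reflectance
    return out if eps is None else out + eps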
Note that in the recognition model (Fig. 4), the lighting l is a latent variable with continuous domain,
and one that we partially supervise. Further, we encode identity i as a categorical random variable,
instead of constructing a pixel-wise surface-normal map (each assumed to be independent Gaussian) as is customary. This formulation allows us to address the task of predicting identity directly, instead of applying surrogate evaluation methods (e.g. nearest-neighbour classification based on inferred reflectance). Figure 5 presents both qualitative and quantitative evaluation of the framework to jointly learn both the structured recognition model, and the generative model parameters.

Figure 4: Generative and recognition models for the intrinsic-faces and multi-MNIST experiments. [Intrinsic faces: models over identity i, lighting ℓ, shading s, reflectance r, and image x. Multi-MNIST: models over per-digit latents z_k, y_k, a_k with recurrent state h_k, for k = 1, ..., K.]

[Figure 5 shows input/reconstruction pairs and generations with varying identity and varying lighting, alongside the following error rates.]

                                   Identity         Lighting
Ours (Full Supervision)            1.9% (± 1.5)     3.1% (± 3.8)
Ours (Semi-Supervised)             3.5% (± 3.4)     17.6% (± 1.8)
Jampani et al. [11]
(plot asymptotes)                  ≈ 30             ≈ 10

Figure 5: Left: Exploring the generative capacity of the supervised model by manipulating identity and lighting given a fixed (inferred) value of the other latent variables. Right: Classification and regression error rates for identity and lighting latent variables, fully-supervised, and semi-supervised (with 6 labelled example images for each of the 38 individuals, a supervision rate of ρ = 0.5, and α = 10). Classification is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest-neighbour loss based on the inferred reflectance. Regression loss is angular distance.
3.3 Multi-MNIST
Finally, we conduct an experiment that extends the complexity from the prior models even further.
Particularly, we explore the capacity of our framework to handle models with stochastic dimensionality, having the number of latent variables itself determined by a random variable, and models that can be composed of other smaller (sub-)models. We conduct this experiment in the domain of multi-MNIST. This is an apposite choice as it satisfies both the requirements above: each image can
have a varying number of individual digits, which essentially dictates that the model must learn to
count, and as each image is itself composed of (scaled and translated) exemplars from the MNIST
data, we can employ the MNIST model itself within the multi-MNIST model.
The model structure that we assume for the generative and recognition networks is shown in Fig. 4.
We extend the models from the MNIST experiment by composing them with a stochastic sequence
generator, in which the loop length K is a random variable. For each loop iteration k = 1, . . . , K,
the generative model iteratively samples a digit yk , style zk , and uses these to generate a digit image
xk in the same manner as in the earlier MNIST example. Additionally, an affine tranformation is also
sampled for each digit in each iteration to transform the digit images xk into a common, combined
canvas that represents the final generated image x, using a spatial transformer network [10].
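A minimal sketch of this compositing step with a spatial transformer in PyTorch; the per-digit 2x3 affine parameters and the additive compositing are assumptions made for illustration, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def composite_digits(digit_images, thetas, canvas_hw):
    # digit_images: (K, 1, 28, 28); thetas: (K, 2, 3) affine parameters mapping
    # canvas coordinates to digit coordinates
    H, W = canvas_hw
    canvas = torch.zeros(1, 1, H, W)
    for img, theta in zip(digit_images, thetas):
        grid = F.affine_grid(theta.unsqueeze(0), [1, 1, H, W], align_corners=False)
        canvas = canvas + F.grid_sample(img.unsqueeze(0), grid, align_corners=False)
    return canvas.clamp(0, 1)  # combined multi-MNIST canvas x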
In the recognition model, we predict the number of digits K from the pixels in the image. For each
loop iteration k = 1, . . . , K, we define a Bernoulli-distributed digit image xk . When supervision is
available, we compute the probability of xk from the binary cross-entropy in the same manner as in
the likelihood term for the MNIST model. When no supervision is available, we deterministically set
x_k to the mean of the distribution. This can be seen as akin to providing bounding boxes around the
constituent digits as supervision for the labelled data, which must be taken into account when learning
the affine transformations that decompose a multi-MNIST image into its constituent MNIST-like
images. This model design is similar to the one used in DRAW [9], recurrent VAEs [3], and AIR [4].
8
Input
Reconstruction
Decomposition
M
M +N
0.1
0.5
1.0
Count Error (%)
w/o MNIST
w/ MNIST
85.45 (? 5.77)
93.27 (? 2.15)
99.81 (? 1.81)
76.33 (? 8.91)
80.27 (? 5.45)
84.79 (? 5.11)
Figure 6: Left: Example input multi-MNIST images and reconstructions. Top-Right: Decomposition
of Multi-MNIST images into constituent MNIST digits. Bottom-Right: Count accuracy over
different supervised set sizes M for given dataset size M + N = 82000.
In the absence of a canonical multi-MNIST dataset, we created our own from the MNIST dataset by
manipulating the scale and positioning of the standard digits into a combined canvas, evenly balanced
across the counts (1-3) and digits. We then conducted two experiments within this domain. In the
first experiment, we seek to measure how well the stochastic sequence generator learns to count
on its own, with no heed paid to disentangling the latent representations for the underlying digits.
Here, the generative model presumes the availability of individual MNIST-digit images, generating
combinations under sampled affine transformations. In the second experiment, we extend the above
model to now also incorporate the same pre-trained MNIST model from the previous section, which
allows the generative model to sample MNIST-digit images, while also being able to predict the
underlying digits. This also demonstrates how we can leverage compositionality of models: when
a complex model has a known simpler model as a substructure, the simpler model and its learned
weights can be dropped in directly.
The count accuracies across different supervised set sizes M, reconstructions for a random set of
inputs, and the decomposition of a given set of inputs into their constituent individual digits, are
shown in Fig. 6. All reconstructions and image decompositions shown correspond to the nested-model
configuration. We observe that not only are we able to reliably infer the counts of the digits in the
given images, but we are also able to simultaneously reconstruct the inputs as well as their constituent parts.
4 Discussion and Conclusion
In this paper we introduce a framework for learning disentangled representations of data using
partially-specified graphical model structures and semi-supervised learning schemes in the domain of
variational autoencoders (VAEs). This is accomplished by defining hybrid generative models which
incorporate both structured graphical models and unstructured random variables in the same latent
space. We demonstrate the flexibility of this approach by applying it to a variety of different tasks
in the visual domain, and evaluate its efficacy at learning disentangled representations in a semi-supervised manner, showing strong performance. Such partially-specified models yield recognition
networks that make predictions in an interpretable and disentangled space, constrained by the structure
provided by the graphical model and the weak supervision.
The framework is implemented as a PyTorch library [25], enabling the construction of stochastic
computation graphs which encode the requisite structure and computation. This provides another
direction to explore in the future: the extension of the stochastic computation graph framework to
probabilistic programming [8, 34, 35]. Probabilistic programs go beyond the presented framework to
permit more expressive models, incorporating recursive structures and higher-order functions. The
combination of such frameworks with neural networks has recently been studied in Le et al. [22] and
Ritchie et al. [28], indicating a promising avenue for further exploration.
Acknowledgements
This work was supported by the EPSRC, ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant
Seebibyte EP/M013774/1, and EPSRC/MURI grant EP/N019474/1. BP was supported by The Alan
Turing Institute under the EPSRC grant EP/N510129/1. FW and NDG were supported under DARPA
PPAML through the U.S. AFRL under Cooperative Agreement FA8750-14-2-0006.
References
[1] Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad
Huang, Peter Clark, and Christopher D Manning. Modeling biological processes for reading
comprehension. In EMNLP, 2014.
[2] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders.
arXiv preprint arXiv:1509.00519, 2015.
[3] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua
Bengio. A recurrent latent variable model for sequential data. In Advances in neural information
processing systems, pages 2980?2988, 2015.
[4] S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and
Geoffrey. E Hinton. Attend, infer, repeat: Fast scene understanding with generative models.
arXiv preprint arXiv:1603.08575, 2016.
[5] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: Illumination cone
models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach.
Intelligence, 23(6):643?660, 2001.
[6] Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In
CogSci, 2014.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural
Information Processing Systems, pages 2672?2680, 2014.
[8] ND Goodman, VK Mansinghka, D Roy, K Bonawitz, and JB Tenenbaum. Church: A language
for generative models. In Uncertainty in Artificial Intelligence, pages 220?229, 2008.
[9] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. Draw: A
recurrent neural network for image generation. In Proceedings of the 32nd International
Conference on Machine Learning (ICML-15), pages 1462?1471, 2015.
[10] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In
Advances in Neural Information Processing Systems, pages 2017?2025, 2015.
[11] Varun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli, and John Winn. Consensus
message passing for layered graphical models. In International Conference on Artificial
Intelligence and Statistics, pages 425?433, 2015.
[12] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. arXiv
preprint arXiv:1611.01144, 2016.
[13] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta.
Composing graphical models with neural networks for structured representations and fast
inference. In Advances in Neural Information Processing Systems, pages 2946?2954, 2016.
[14] Matthew J. Johnson, David K. Duvenaud, Alex B. Wiltschko, Sandeep R. Datta, and Ryan P.
Adams. Composing graphical models with neural networks for structured representations and
fast inference. In Advances in Neural Information Processing Systems, 2016.
[15] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR,
abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
[16] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the
2nd International Conference on Learning Representations, 2014.
[17] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semisupervised learning with deep generative models. In Advances in Neural Information Processing
Systems, pages 3581?3589, 2014.
[18] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques.
MIT press, 2009.
[19] Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture:
A probabilistic programming language for scene perception. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, pages 4390?4399, 2015.
[20] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages
2530?2538, 2015.
[21] Steffen L Lauritzen and David J Spiegelhalter. Local computations with probabilities on
graphical structures and their application to expert systems. Journal of the Royal Statistical
Society. Series B (Methodological), pages 157?224, 1988.
[22] Tuan Anh Le, Atilim Gunes Baydin, and Frank Wood. Inference compilation and universal
probabilistic programming. arXiv preprint arXiv:1610.09900, 2016.
[23] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models.
arXiv preprint arXiv:1602.05473, 2016.
[24] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of
discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[25] PyTorch. PyTorch. http://pytorch.org/, 2017. Accessed: 2017-11-4.
[26] A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and Raiko. T. Semi-supervised learning with
ladder networks. In Advances in Neural Information Processing Systems, pages 3532?3540,
2015.
[27] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation
and approximate inference in deep generative models. In Proceedings of The 31st International
Conference on Machine Learning, pages 1278?1286, 2014.
[28] Daniel Ritchie, Paul Horsfall, and Noah D Goodman. Deep amortized inference for probabilistic
programs. arXiv preprint arXiv:1610.05735, 2016.
[29] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation
using stochastic computation graphs. In Advances in Neural Information Processing Systems,
pages 3510?3522, 2015.
[30] N. Siddharth, A. Barbu, and J. M. Siskind. Seeing what you?re told: Sentence-guided activity
recognition in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), pages 732?39, June 2014.
[31] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using
deep conditional generative models. In Advances in Neural Information Processing Systems,
pages 3465?3473, 2015.
[32] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. Ladder variational
autoencoders. In Advances in Neural Information Processing Systems, 2016.
[33] Andreas Stuhlmüller, Jacob Taylor, and Noah Goodman. Learning stochastic inverses. In
Advances in neural information processing systems, pages 3048?3056, 2013.
[34] David Wingate, Andreas Stuhlmueller, and Noah D Goodman. Lightweight implementations
of probabilistic programming languages via transformational compilation. In International
Conference on Artificial Intelligence and Statistics, pages 770?778, 2011.
[35] Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Artificial Intelligence and Statistics, pages 1024?1032,
2014.
6,826 | 7,175 | Self-Supervised Intrinsic Image Decomposition
Michael Janner
MIT
Jiajun Wu
MIT
Tejas D. Kulkarni
DeepMind
[email protected]
[email protected]
[email protected]
Ilker Yildirim
MIT
Joshua B. Tenenbaum
MIT
[email protected]
[email protected]
Abstract
Intrinsic decomposition from a single image is a highly challenging task, due to
its inherent ambiguity and the scarcity of training data. In contrast to traditional
fully supervised learning approaches, in this paper we propose learning intrinsic
image decomposition by explaining the input image. Our model, the Rendered
Intrinsics Network (RIN), joins together an image decomposition pipeline, which
predicts reflectance, shape, and lighting conditions given a single image, with a
recombination function, a learned shading model used to recompose the original
input based on intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate
representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object
categories, lighting conditions, and shapes. Extensive experiments demonstrate
that our method performs well on both intrinsic image decomposition and knowledge transfer.
1 Introduction
There has been remarkable progress in computer vision, particularly for answering questions such
as "what is where?" given raw images. This progress has been possible due to large labeled training
sets and representation learning techniques such as convolutional neural networks [LeCun et al.,
2015]. However, the general problem of visual scene understanding will require algorithms that
extract not only object identities and locations, but also their shape, reflectance, and interactions
with light. Intuitively disentangling the contributions from these three components, or intrinsic
images, is a major triumph of human vision and perception. Conferring this type of intuition to an
algorithm, though, has proven a difficult task, constituting a major open problem in computer vision.
This problem is challenging in particular because it is fundamentally underconstrained. Consider
the porcelain vase in Figure 1a. Most individuals would have no difficulty identifying the true colors
and shape of the vase, along with estimating the lighting conditions and the resultant shading on the
object, as those shown in 1b. However, the alternatives in 1c, which posits a flat shape, and 1d, with
unnatural red lighting, are entirely consistent in that they compose to form the correct observed vase
in 1a.
The task of finding appropriate intrinsic images for an object is then not a question of simply finding a valid answer, as there are countless factorizations that would be equivalent in terms of their
rendered combination, but rather of finding the most probable answer. Roughly speaking, there are
two methods of tackling such a problem: a model must either (1) employ handcrafted priors on the
reflectance, shape, and lighting conditions found in the natural world in order to assign probabilities
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
a
b
c
d
e
Figure 1: A porcelain vase (a) along with three predictions (b-d) for its underlying intrinsic images.
The set in (c) assumes the contribution from shading is negligible by predicting a completely flat
rather than rounded shape. The reflectance is therefore indistinguishable from the observed image.
The set in (d) includes the correct shape but assumes red lighting and a much brighter blue color in
the regions affected by shading. While the decomposition in (b) is much more intuitively pleasing
than either of these alternatives, all of these options are valid in that they combine to exactly form
the observed vase. (e) shows a sphere with our visualized normals map as a shape reference.
to intrinsic image proposals or (2) have access to a library of ground truth intrinsic images and their
corresponding composite images.
Unfortunately, there are limitations to both methods. Although there has been success with the
first route in the past [Barron and Malik, 2015], strong priors are often difficult to hand-tune in a
generally useful fashion. On the other hand, requiring access to complete, high quality ground truth
intrinsic images for real world scenes is also limiting, as creating such a training set requires an
enormous amount of human effort and millions of crowd-sourced annotations [Bell et al., 2014].
In this paper, we propose a deep structured autoencoder, the Rendered Intrinsics Network (RIN),
that disentangles intrinsic image representations and uses them to reconstruct the input. The decomposition model consists of a shared convolutional encoder for the observation and three separate
decoders for the reflectance, shape, and lighting. The shape and lighting predictions are used to
train a differentiable shading function. The output of the shader is combined with the reflectance
prediction to reproduce the observation. The minimal structure imposed in the model (namely, that
intrinsic images provide a natural way of disentangling real images and that they provide enough
information to be used as input to a graphics engine) makes RIN act as an autoencoder with useful
intermediate representations.
The structure of RIN also exploits two natural sources of supervision: one applied to the intermediate representations themselves, and the other to the reconstructed image. This provides a way for
RIN to improve its representations with unlabeled data. By avoiding the need for intrinsic image
labels for all images in the dataset, RIN can adapt to new types of inputs even in the absence of
ground truth data. We demonstrate the utility of this approach in three transfer experiments. RIN is
first trained on a simple set of five geometric primitives in a supervised manner and then transferred
to common computer vision test objects. Next, RIN is trained on a dataset with a skewed underlying
lighting distribution and fills in the missing lighting conditions on the basis of unlabeled observations. Finally, RIN is trained on a single Shapenet category and then transferred to a separate, highly
dissimilar category.
Our contributions are three-fold. First, we propose a novel formulation for intrinsic image decomposition, incorporating a differentiable, unsupervised reconstruction loss into the loop. Second, we
instantiate this approach with the RIN, a new model that uses convolutional neural networks for both
intrinsic image prediction and recombination via a learned shading function. This is also the first
work to apply deep learning to the full decomposition into reflectance, shape, lights, and shading,
as prior work has focused on the reflectance-shading decomposition. Finally, we show that RIN can
make use of unlabeled data to improve its intermediate intrinsic image representations and transfer
knowledge to new objects unseen during training.
Figure 2: RIN contains two convolutional encoder-decoders, one used for predicting the intrinsic
images from an input and another for predicting the shading stemming from a light source applied
on a shape. The two networks together function as a larger structured autoencoder, forcing a specific
type of intermediate representation in order to reconstruct the input image.
2 Related Work
Intrinsic images were introduced by Barrow and Tenenbaum as useful mid-level scene descriptors [Barrow and Tenenbaum, 1978]. The model posits that an image can be expressed as the
pointwise product between contributions from the true colors of an object, or its reflectance, and
contributions from the shading on that object:
$\text{image } I = \text{reflectance } R \odot \text{shading } S \qquad (1)$
Decomposing one step further, the shading is expressed as some function of an object's shape and
the ambient lighting conditions. The exact nature of this shading function varies by implementation.
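As a concrete (if simplified) illustration of this model, the sketch below composes an image from reflectance, shape, and lighting via Eq. (1), using a basic Lambertian dot-product shading function; the function names and the Lambertian choice are ours, not a claim about any particular system's shader.

```python
# A minimal NumPy sketch of Eq. (1) with an assumed Lambertian shading function.
import numpy as np

def lambertian_shading(normals, light_dir, intensity):
    # Per-pixel unit normals (H x W x 3) lit by one directional light (3,).
    s = intensity * np.einsum('hwc,c->hw', normals, light_dir)
    return np.clip(s, 0.0, None)[..., None]        # (H, W, 1), non-negative

def compose_image(reflectance, normals, light_dir, intensity):
    # image I = reflectance R (pointwise product) shading S, as in Eq. (1).
    return reflectance * lambertian_shading(normals, light_dir, intensity)

# Example: a flat surface facing the camera, lit head-on.
H, W = 4, 4
R = np.random.rand(H, W, 3)
N = np.zeros((H, W, 3)); N[..., 2] = 1.0
I = compose_image(R, N, np.array([0.0, 0.0, 1.0]), intensity=1.0)
```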
Early work on intrinsic image decomposition was based on insights from Land?s Retinex Theory [Land and McCann, 1971]. Horn [1974] separated images into true colors and shading using the
assumption that large image gradients tend to correspond to reflectance changes and small gradients
to lighting changes. While this assumption works well for a hypothetical Mondrian World of flat
colors, it does not always hold for natural images. In particular, Weiss [2001] found that this model
of reflectance and lighting is rarely true for outdoor scenes.
More recently, Barron and Malik [2015] developed an iterative algorithm called SIRFS that maximizes the likelihood of intrinsic image proposals under priors derived from regularities in natural
images. SIRFS proposes shape and lighting estimates and combines them via a spherical harmonics
renderer to produce a shading image. Lombardi and Nishino [2012, 2016] and Oxholm and Nishino
[2016] proposed a Bayesian formulation of such an optimization procedure, also formulating priors
based on the distribution of material properties and the physics of lighting in the real world. Researchers have also explored reconstructing full 3D shapes through intrinsic images by making use
of richer generative models [Kar et al., 2015, Wu et al., 2017].
Tang et al. [2012] combined Lambertian reflectance assumptions with Deep Belief Networks to
learn a prior over the reflectance of greyscale images and applied their Deep Lambertian Network
to one-shot face recognition. Narihira et al. [2015b] applied deep learning to intrinsic images first
using human judgments on real images and later in the context of animated movie frames [Narihira
et al., 2015a]. Rematas et al. [2016] and Hold-Geoffroy et al. [2017] also used convolutional neural
networks to estimate reflectance maps and illumination parameters, respectively, in unconstrained
outdoor settings.
Innamorati et al. [2017] generalized the intrinsic image decomposition by considering the contributions of specularity and occlusion in a direction-dependent model. Shi et al. [2017] found improved
performance in the full decomposition by incorporating skip layer connections [He et al., 2016] in
the network architecture, which were used to generate much crisper images. Our work can be seen
Figure 3: In contrast to simple Lambertian shading techniques, our learned shading model can
handle shadows cast between objects. Inputs to the shader are shape and lighting parameter pairs.
as a further extension of these models which aims to relax the need for a complete set of ground
truth data by modeling the image combination process, as in Nalbach et al. [2017].
Incorporating a domain-specific decoder to reconstruct input images has been explored by Hinton
et al. in their transforming autoencoders [Hinton et al., 2011], which also learned natural representations of images in use by the vision community. Our work differs in the type of representation in
question, namely images rather than descriptors like affine transformations or positions. Kulkarni
et al. [2015] were also interested in learning disentangled representations in an autoencoder, which
they achieved by selective gradient updates during training. Similarly, Chen et al. [2016] showed
that a mutual information objective could drive disentanglement of a deep network's intermediate
representation.
3 Model
3.1 Use of Reconstruction
RIN differs most strongly with past work in its use of the reconstructed input. Other approaches
have fallen into roughly two groups in this regard:
1. Those that solve for one of the intrinsic images to match the observed image. SIRFS, for example, predicts shading and then solves equation 1 for reflectance given its prediction and the
input [Barron and Malik, 2015]. This ensures that the intrinsic image estimations combine to
form exactly the observed image, but also deprives the model of any reconstruction error.
2. Data-driven techniques that rely solely on ground truth labelings [Narihira et al., 2015a, Shi
et al., 2017]. These approaches assume access to ground truth labels for all inputs and do not
explicitly model the reconstruction of the input image based on intrinsic image predictions.
Making use of the reconstruction for this task has been previously unexplored because such an
error signal can be difficult to interpret. Just as the erroneous intrinsic images in Fig 1c-d combine
to reconstruct the input exactly, one cannot assume that low reconstruction error implies accurate
intrinsic images. An even simpler degenerate solution that yields zero reconstruction error is:
$\hat{R} = I \quad \text{and} \quad \hat{S} = \mathbf{1}\mathbf{1}^T, \qquad (2)$
where $\hat{S}$ is the all-ones matrix. It is necessary to further constrain the predictions such that the model
does not converge to such explanations.
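A two-line check makes the degenerate solution of Eq. (2) concrete; this is an illustrative snippet rather than anything from the authors' code:

```python
# The degenerate prediction R_hat = I, S_hat = all-ones reconstructs any input
# exactly, so reconstruction error alone cannot certify good intrinsic images.
import numpy as np

I = np.random.rand(4, 4, 3)            # any observed image
R_hat, S_hat = I.copy(), np.ones_like(I)
assert np.allclose(R_hat * S_hat, I)   # zero reconstruction error
```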
3.2 Shading Engine
RIN decomposes an observation into reflectance, shape, and lighting conditions. As opposed to
models which estimate only reflectance and shading, which may make direct use of Equation 1
to generate a reconstruction, we must employ a function that transforms our shape and lighting
predictions into a shading estimate. Linear Lambertian assumptions could reduce such a function to
a straightforward dot product, but would produce a shading function incapable of modeling lighting
conditions that drastically change across an image or ray-tracing for the purposes of casting shadows.
Figure 4: Our shading model's outputs after training only on synthetic car models from the ShapeNet
dataset [Chang et al., 2015]. (a) shows the effect of panning the light horizontally and (b) shows
the effect of changing the intensity of the light. The input lights are visualized by rendering them
onto a sphere. Even though the shader was trained only on synthetic data, it generalizes well to real
shapes with no further training. The shape input to (c) is an estimated normals map of a Beethoven
bust [Quéau and Durou, 2015].
Instead, we opt to learn a shading model. Such a model is not limited in the way that a pre-defined
shading function would be, as evidenced by shadows cast between objects in Fig 3. Learning a
shader also has the benefit of allowing for different representations of lighting conditions. In our
experiments, lights are defined by a position in three-dimensional space and a magnitude, but alternate representations such as the radius, orientation, and color of a spotlight could be just as easily
adopted.
3.3 Architecture
Our model consists of two convolutional encoder-decoder networks, the first of which predicts intrinsic images from an observed image, and the second of which approximates the shading process of
a rendering engine. Both networks employ mirror-link connections introduced by Shi et al. [2017],
which connect layers of the encoder and decoder of the same size. These connections yield sharper
results than the blurred outputs characteristic of many deconvolutional models.
The first network has a single encoder for the observation and three separate decoders for the reflectance, lighting, and shape. Unlike Shi et al. [2017], we do not link layers between the decoders
so that it is possible to update the weights of one of the decoders without substantially affecting
the others, as is useful in the transfer learning experiments. The encoder has 5 convolutional layers with {16, 32, 64, 128, 256} filters of size 3×3 and stride of 2. Batch normalization [Ioffe and
Szegedy, 2015] and ReLU activation are applied after every convolutional layer. The layers in the
reflectance and shape decoders have the same number of features as the encoder but in reverse order
plus a final layer with 3 output channels. Spatial upsampling is applied after the convolutional layers in the decoders. The lighting decoder is a simple linear layer with an output dimension of four
(corresponding to a position in three-dimensional space and an intensity of the light).
The shape is passed as input to the shading encoder directly. The lighting estimate is passed to
a fully-connected layer with output dimensionality matching that of the shading encoder's output,
which is concatenated to the encoded shading representation. The shading decoder architecture is
the same as that of the first network. The final component of RIN, with no learnable parameters,
is a componentwise multiplication between the output of the shading network and the predicted
reflectance.
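The sketch below assembles these pieces into a single module. It is an illustrative PyTorch reading of the description, not the authors' code: it follows the stated widths, 3×3 kernels, and strides, but omits the mirror-link connections, and the global pooling feeding the lighting decoder is our assumption.

```python
# An illustrative PyTorch sketch of the RIN architecture described above
# (simplified: no mirror-link connections).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # 3x3 convolution with stride 2, then batch norm and ReLU.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU())

class Encoder(nn.Module):
    def __init__(self, c_in=3, widths=(16, 32, 64, 128, 256)):
        super().__init__()
        chans = (c_in,) + widths
        self.net = nn.Sequential(*[conv_block(chans[i], chans[i + 1])
                                   for i in range(len(widths))])
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Encoder widths in reverse, upsampling after each conv, 3-channel output.
    def __init__(self, widths=(256, 128, 64, 32, 16)):
        super().__init__()
        layers = []
        for a, b in zip(widths, widths[1:] + (widths[-1],)):
            layers += [nn.Conv2d(a, b, 3, padding=1), nn.BatchNorm2d(b),
                       nn.ReLU(), nn.Upsample(scale_factor=2)]
        layers.append(nn.Conv2d(widths[-1], 3, 3, padding=1))
        self.net = nn.Sequential(*layers)
    def forward(self, h):
        return self.net(h)

class RIN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.reflectance_dec, self.shape_dec = Decoder(), Decoder()
        self.light_dec = nn.Linear(256, 4)    # light position (3) + intensity (1)
        self.shading_enc = Encoder()          # shading net: shape + light -> shading
        self.light_fc = nn.Linear(4, 256)     # lift light to the encoder's width
        self.shading_dec = Decoder(widths=(512, 128, 64, 32, 16))

    def forward(self, x):
        h = self.encoder(x)
        reflectance = self.reflectance_dec(h)
        shape = self.shape_dec(h)
        light = self.light_dec(h.mean(dim=(2, 3)))       # pooled (an assumption)
        hs = self.shading_enc(shape)
        hl = self.light_fc(light)[:, :, None, None].expand_as(hs)
        shading = self.shading_dec(torch.cat([hs, hl], dim=1))
        return reflectance * shading, reflectance, shape, light, shading
```

For a 32×32 input, the five stride-2 convolutions reduce the spatial size to 1×1 and the five upsamplings restore 32×32, so the rendered output matches the input resolution.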
Figure 5: Intrinsic image prediction from our model on objects from the training category (motorbikes) as well as an example from outside this category (an airplane). The quality of the airplane
intrinsic images is significantly lower, which is reflected in the reconstruction (labeled "Render" in
the RIN rows). This allows reconstruction to drive the improvement of the intermediate intrinsic
image representations. Predictions from SIRFS are shown for comparison. Note that the reflectance
in SIRFS is defined based on the difference between the observation and shading prediction, so there
is not an analogous reconstruction.
                   Motorbike (Train)                 Airplane (Transfer)
            Reflectance   Shape    Lights     Reflectance   Shape    Lights
RIN            0.0021     0.0044   0.1398        0.0042     0.0119   0.4873
SIRFS          0.0059     0.0094     --          0.0054     0.0080     --
Table 1: MSE of our model and SIRFS on a test set of ShapeNet motorbikes, the category used
to train RIN, and airplanes, a held-out class. The lighting representation of SIRFS (a vector with
27 components) is sufficiently different from that of our model that we do not attempt to compare
performance here directly. Instead, see the visualization of lights in Fig 5.
4 Experiments
RIN makes use of unlabeled data by comparing its reconstruction to the original input image. Because our shading model is fully differentiable, as opposed to most shaders that involve ray-tracing,
the reconstruction error may be backpropagated to the intrinsic image predictions and optimized via
a standard coordinate ascent algorithm. RIN has one shared encoder for the intrinsic images but
three separate decoders, so the appropriate decoder can be updated while the others are held fixed.
In the following experiments, we first train RIN (including the shading model) on a dataset with
ground truth labels for intrinsic images. This is treated as a standard supervised learning problem
using mean squared error on the intrinsic image predictions as a loss. The model is then trained
further on an additional set of unlabeled data using only reconstruction loss as an error signal.
We refer to this as the self-supervised transfer. For both modes of learning, we optimize using
Adam [Kingma and Ba, 2015].
During transfer, one half of a minibatch will consist of the unlabeled transfer data and the other half will
come from the labeled data. This ensures that the representations do not shift too far from those
learned during the initial supervised phase, as the underconstrained nature of the problem can drive
Figure 6: Predictions of RIN before ("Direct transfer") and after ("Self-supervised") it adapts to new
shapes on the basis of unlabeled data.
                    Stanford Bunny        Utah Teapot        Blender Suzanne
                    Shape   Shading       Shape   Shading    Shape   Shading
Direct transfer     0.074    0.071        0.036    0.043     0.086    0.104
Self-supervised     0.048    0.005        0.029    0.003     0.058    0.007
Table 2: MSE of RIN trained on five geometric primitives before and after self-supervised learning
of more complicated shapes.
the model to degenerate solutions. When evaluating our model on test data, we use the outputs of the
three decoders and the learned shader directly; we do not enforce that the predictions must explain
the input exactly.
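One transfer step of this procedure might look as follows; this is a schematic under our own naming (the model is assumed to return a rendered reconstruction plus the three intrinsic predictions, as in the earlier sketch), not the authors' training script.

```python
# A schematic self-supervised transfer step: supervised loss on labeled data,
# reconstruction loss only on unlabeled data, summed in one minibatch.
import torch
import torch.nn.functional as F

def transfer_step(model, optimizer, labeled_batch, unlabeled_images):
    x, r_true, s_true, l_true = labeled_batch    # images + ground-truth intrinsics
    _, r, s, l, _ = model(x)
    supervised = (F.mse_loss(r, r_true) + F.mse_loss(s, s_true)
                  + F.mse_loss(l, l_true))
    render_u, *_ = model(unlabeled_images)
    self_supervised = F.mse_loss(render_u, unlabeled_images)  # reconstruction only
    optimizer.zero_grad()
    (supervised + self_supervised).backward()
    optimizer.step()

# For shape transfer, only the shape decoder is updated (the shader stays frozen):
# for p in model.parameters():           p.requires_grad = False
# for p in model.shape_dec.parameters(): p.requires_grad = True
# optimizer = torch.optim.Adam(model.shape_dec.parameters())
```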
Below, we demonstrate that our model can effectively transfer to different shapes, lighting conditions, and object categories without ground truth intrinsic images. However, for this unsupervised
transfer to yield benefits, there must be a sufficient number of examples of the new, unlabeled data.
For example, the MIT Intrinsic Images dataset [Grosse et al., 2009], containing twenty real-world
images, is not large enough for the unsupervised learning to affect the representations of our model.
In the absence of any unsupervised training, our model is similar to that of Shi et al. [2017] adapted
to predict the full set of intrinsic images.
4.1 Supervised training
Data The majority of data was generated from ShapeNet [Chang et al., 2015] objects rendered in
Blender. For the labeled datasets, the rendered composite images were accompanied by the object?s
reflectance, a map of the surface normals at each point, and the parameters of the lamp used to
light the scene. Surface normals are visualized by mapping the XYZ components of normals to
appropriate RGB ranges. For the following supervised learning experiments, we used a dataset size
of 40,000 images.
Intrinsic image decomposition
The model in Fig 5 was trained on ShapeNet motorbikes. Although it accurately predicts the intrinsic images of the train class, its performance drops when
tested on other classes. In particular, the shape predictions suffer the most, as they are the most
dissimilar from anything seen in the training set. Crucially, the poor intrinsic image predictions are
reflected in the reconstruction of the input image. This motivates the use of reconstruction error to
drive improvement of intrinsic images when there is no ground truth data.
Shading model
In contrast with the intrinsic image decomposition, shading prediction generalized well outside of the training set. The shader was trained on the shapes and lights from the same
set of rendered synthetic cars as above. Even though this represents only a narrow distribution over
Figure 7: Predictions of RIN trained on left-lit images before and after self-supervised learning
on right-lit images. RIN uncovers the updated lighting distribution without external supervision or
ground truth data.
shapes, we found that the shader produced plausible predictions for even real-world objects (Fig 4).
Because the shader generalized without any further effort, its parameters were never updated during
self-supervised training. Freezing the parameters of the shader prevents our model from producing
nonsensical shading images.
4.2 Shape transfer
Data
We generated a dataset of five shape primitives (cubes, spheres, cones, cylinders, and
toruses) viewed at random orientations using the Blender rendering engine. These images are
used for supervised training. Three common reference shapes (Stanford bunny, Utah teapot, and
Blender?s Suzanne) are used as the unlabeled transfer class. To isolate the effects of shape mismatch
in the labeled versus unlabeled data, all eight shapes were rendered with random monochromatic
materials and a uniform distribution over lighting positions within a contained region of space in
front of the object. The datasets consisted of each shape rendered with 500 different colors, with
each colored shape being viewed at 10 orientations.
Results By only updating weights for the shape decoder during self-supervised transfer, the predictions for held-out shapes improve by 29% (averaged across the three shapes). Because a shape
only affects a rendered image via shading, the improvement in shapes comes alongside an improvement in shading predictions as well. Shape-specific results are given in Table 2 and visualized in
Fig 6.
4.3 Lighting transfer
Data
Cars from the ShapeNet 3D model repository were rendered at random orientations and
scales. In the labeled data, they were lit only from the left side. In the unlabeled data, they were lit
from both the left and right.
Results Before self-supervised training on the unlabeled data, the model's distribution over lighting predictions mirrored that of the labeled training set. When tested on images lit from the right,
then, it tended to predict centered lighting. After updating the lighting decoder based on reconstruction error from these right-lit images, though, the model's lighting predictions more accurately
reflected the new distribution, and the lighting mean-squared error reduced by 18%. Lighting predictions,
along with reconstructions, for right-lit images are shown in Fig 7.
4.4 Category transfer
In the previous transfer experiments, only one intrinsic image was mismatched between the labeled
and unlabeled data, so only one of RIN's decoders needed updating during transfer. When transferring between object categories, though, there is no such guarantee. Although it might be expected
that a model trained on sufficiently many object categories would learn a generally-useful distribution over reflectances, it is difficult to ensure that this is the case. We are interested in these sorts of
Figure 8: RIN was first trained on ShapeNet airplanes and then tested on cars. Because most of the
airplanes were white, the reflectance predictions were washed out even for colorful cars. RIN fixed
the mismatch between datasets without any ground truth intrinsic images of cars.
                    Reflectance   Shape   Lights   Shading   Render
Direct transfer        0.019      0.014   0.584     0.065    0.035
Self-supervised        0.015      0.014   0.572     0.044    0.006
Table 3: MSE of RIN trained on ShapeNet airplanes before and after self-supervised transfer to cars.
Although RIN improves its shading predictions, these are not necessarily driven by an improvement
in shape prediction.
scenarios to determine how well self-supervised transfer works when more than one decoder needs
to be updated to account for unlabeled data.
Data
Datasets of ShapeNet cars and airplanes were created analogously to those in Section 4.1.
The airplanes had a completely different color distribution than the cars as they were mostly white,
whereas the cars had a more varied reflectance distribution. The airplanes were used as the labeled
category to ensure a mismatch between the train and transfer data.
Results To transfer to the new category, we allowed updates to all three of the RIN decoders. (The
shader was left fixed as usual.) There were pronounced improvements in the shading predictions
(32%) accompanied by modest improvements in reflectances (21%). The shading predictions were
not always caused by improved shape estimates. Because there is a many-to-one mapping from
shape to shading (conditioned on a lighting condition), it is possible for the shape predictions to
worsen in order to improve the shading estimates. The lighting predictions also remained largely
unchanged, although for the opposite reason: because no lighting regions were intentionally left out
of the training data, the lighting predictions were adequate on the transfer classes even without
self-supervised learning.
5 Conclusion
In this paper, we proposed the Rendered Intrinsics Network for intrinsic image prediction. We
showed that by learning both the image decomposition and recombination functions, RIN can make
use of reconstruction loss to improve its intermediate representations. This allowed unlabeled data
to be used during training, which we demonstrated with a variety of transfer tasks driven solely
by self-supervision. When there existed a mismatch between the underlying intrinsic images of
the labeled and unlabeled data, RIN could also adapt its predictions in order to better explain the
unlabeled examples.
Acknowledgements
This work is supported by ONR MURI N00014-16-1-2007, the Center for Brain, Minds and Machines (NSF #1231216), Toyota Research Institute, and Samsung.
References
Jonathan T Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. IEEE TPAMI, 37
(8):1670–1687, 2015.
H.G. Barrow and J.M. Tenenbaum. Recovering intrinsic scene characteristics from images. Computer Vision
Systems, 1978.
Sean Bell, Kavita Bala, and Noah Snavely. Intrinsic images in the wild. ACM TOG, 33(4):159, 2014.
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio
Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
Xi Chen, Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan:
Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
Roger Grosse, Micah K. Johnson, Edward H. Adelson, and William T. Freeman. Ground-truth dataset and
baseline evaluations for intrinsic image algorithms. In ICCV, 2009.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
CVPR, 2016.
Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In ICANN, 2011.
Yannick Hold-Geoffroy, Kalyan Sunkavalli, Sunil Hadap, Emiliano Gambaretto, and Jean-Francois Lalonde.
Deep outdoor illumination estimation. In CVPR, 2017.
Berthold K.P. Horn. Determining lightness from an image. Computer Graphics and Image Processing, 3:
277–299, 1974.
Carlo Innamorati, Tobias Ritschel, Tim Weyrich, and Niloy J. Mitra. Decomposing single images for layered
photo retouching. Computer Graphics Forum, 36:15–25, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In ICML, 2015.
Abhishek Kar, Shubham Tulsiani, Joao Carreira, and Jitendra Malik. Category-specific object reconstruction
from a single image. In CVPR, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse
graphics network. In NIPS, 2015.
Edwin H. Land and John J. McCann. Lightness and retinex theory. Journal of the Optical Society of America,
61:1–11, 1971.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
Stephen Lombardi and Ko Nishino. Single image multimaterial estimation. In CVPR, 2012.
Stephen Lombardi and Ko Nishino. Reflectance and illumination recovery in the wild. IEEE TPAMI, 38(1):
129–141, 2016.
Oliver Nalbach, Elena Arabadzhiyska, Dushyant Mehta, Hans-Peter Seidel, and Tobias Ritschel. Deep shading:
Convolutional neural networks for screen-space shading. Computer Graphics Forum, 36(4), 2017.
Takuya Narihira, Michael Maire, and Stella X. Yu. Direct intrinsics: Learning albedo-shading decomposition
by convolutional regression. In ICCV, 2015a.
Takuya Narihira, Michael Maire, and Stella X. Yu. Learning lightness from human judgement on relative
reflectance. In CVPR, 2015b.
Geoffrey Oxholm and Ko Nishino. Shape and reflectance estimation in the wild. IEEE TPAMI, 38(2):376–389,
2016.
Yvain Quéau and Jean-Denis Durou. Edge-preserving integration of a normal field: Weighted least-squares,
TV and L1 approaches. In International Conference on Scale Space and Variational Methods in Computer
Vision, 2015.
Konstantinos Rematas, Tobias Ritschel, Mario Fritz, Efstratios Gavves, and Tinne Tuytelaars. Deep reflectance
maps. In CVPR, June 2016.
Jian Shi, Yue Dong, Hao Su, and Stella X. Yu. Learning non-lambertian object intrinsics across shapenet
categories. In CVPR, 2017.
Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Deep lambertian networks. In ICML, 2012.
Yair Weiss. Deriving intrinsic images from image sequences. In ICCV, 2001.
Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T Freeman, and Joshua B Tenenbaum. Marrnet:
3d shape reconstruction via 2.5d sketches. In NIPS, 2017.
6,827 | 7,176 | Exploring Generalization in Deep Learning
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan Srebro
Toyota Technological Institute at Chicago
{bneyshabur, srinadh, mcallester, nati}@ttic.edu
Abstract
With a goal of understanding what drives generalization in deep networks, we
consider several recently suggested explanations, including norm-based control,
sharpness and robustness. We study how these measures can ensure generalization,
highlighting the importance of scale normalization, and making a connection
between sharpness and PAC-Bayes theory. We then investigate how well the
measures explain different observed phenomena.
1 Introduction
Learning with deep neural networks has enjoyed huge empirical success in recent years across a wide
variety of tasks. Despite being a complex, non-convex optimization problem, simple methods such as
stochastic gradient descent (SGD) are able to recover good solutions that minimize the training error.
More surprisingly, the networks learned this way exhibit good generalization behavior, even when
the number of parameters is significantly larger than the amount of training data [20, 30].
In such an over parametrized setting, the objective has multiple global minima, all minimize the
training error, but many of them do not generalize well. Hence, just minimizing the training error is
not sufficient for learning: picking the wrong global minima can lead to bad generalization behavior.
In such situations, generalization behavior depends implicitly on the algorithm used to minimize
the training error. Different algorithmic choices for optimization such as the initialization, update
rules, learning rate, and stopping condition, will lead to different global minima with different
generalization behavior [7, 12, 18]. For example, Neyshabur et al. [18] introduced Path-SGD, an
optimization algorithm that is invariant to rescaling of weights, and showed better generalization
behavior over SGD for both feedforward and recurrent neural networks [18, 22]. Keskar et al. [12]
noticed that the solutions found by stochastic gradient descent with large batch sizes generalize
worse than those found with smaller batch sizes, and Hardt et al. [10] discuss how stochastic
gradient descent ensures uniform stability, thereby helping generalization for convex objectives.
What is the bias introduced by these algorithmic choices for neural networks? What ensures generalization in neural networks? What is the relevant notion of complexity or capacity control?
As mentioned above, simply accounting for complexity in terms of the number of parameters, or any
measure which is uniform across all functions representable by a given architecture, is not sufficient
to explain the generalization ability of neural networks trained in practice. For linear models, norms
and margin-based measures, and not the number of parameters, are commonly used for capacity
control [5, 9, 25]. Also norms such as the trace norm and max norm are considered as sensible
inductive biases in matrix factorization and are often more appropriate than parameter-counting
measures such as the rank [27, 28]. In a similar spirit, Bartlett [3], Neyshabur et al. [20] and in
parallel to this work, Bartlett et al. [2] suggested different norms of network parameters to measure
the capacity of neural networks. In a different line of work, Keskar et al. [12] suggested ?sharpness?
(robustness of the training error to perturbations in the parameters) as a complexity measure for neural
networks. Others, including Langford and Caruana [13] and more recently Dziugaite and Roy [8],
propose a PAC-Bayes analysis.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
What makes a complexity measure appropriate for explaining generalization in deep learning?
First, an appropriate complexity measure must be sufficient in ensuring generalization. Second,
networks learned in practice should be of low complexity under this measure. This can happen if our
optimization algorithms bias us toward lower complexity models under this measure and it is possible
to capture real data using networks of low complexity. In particular, the complexity measure should
help explain several recently observed empirical phenomena that are not explained by a uniform
notion of complexity:
• It is possible to obtain zero training error on random labels using the same architecture
for which training with real labels leads to good generalization [30]. We would expect
the networks learned using real labels (and which generalizes well) to have much lower
complexity, under the suggested measure, than those learned using random labels (and
which obviously do not generalize well).
• Increasing the number of hidden units, thereby increasing the number of parameters, can
lead to a decrease in generalization error even when the training error does not decrease [20].
We would expect to see the complexity measure decrease as we increase the number of
hidden units.
• When training the same architecture, with the same training set, using two different optimization methods (or different algorithmic or parameter choices), one method results
in better generalization even though both lead to zero training error [18, 12]. We would
expect to see a correlation between the complexity measure and generalization ability among
zero-training error models.
In this paper we examine complexity measures that have recently been suggested, or could be
considered, in explaining generalization in deep learning. We evaluate the measures based on their
ability to theoretically guarantee generalization, and their empirical ability to explain the above
phenomena. Studying how each measure can guarantee generalization also let us better understand
how it should be computed and compared in order to explain the empirical phenomena.
We investigate complexity measures including norms, robustness and sharpness of the network.
We emphasize in our theoretical and empirical study the importance of relating the scale of the
parameters and the scale of the output of the network, e.g. by relating norm and margin. In this light,
we discuss how sharpness by itself is not sufficient for ensuring generalization, but can be combined,
through PAC-Bayes analysis, with the norm of the weights to obtain an appropriate complexity
measure. The role of sharpness in PAC-Bayesian analysis of neural networks was also recently
noted by Dziugaite and Roy [8], who used numerical techniques to numerically optimize the overall
PAC-Bayes bound; here we emphasize the distinct role of sharpness as a balance for norm.
Notation
Let $f_w(x)$ be the function computed by a $d$ layer feed-forward network with parameters $w$ and
Rectified Linear Unit (ReLU) activations, $f_w(x) = W_d\,\sigma(W_{d-1}\,\sigma(\cdots \sigma(W_1 x)))$ where $\sigma(z) = \max\{0, z\}$.
Let $h_i$ be the number of nodes in layer $i$, with $h_0 = n$. Therefore, for any layer $i$, we
have $W_i \in \mathbb{R}^{h_i \times h_{i-1}}$. Given any input $x$, the loss of the prediction by the function $f_w$ is then given by
$\ell(w, x)$. We also denote by $L(w)$ the expected loss and by $\hat{L}(w)$ the empirical loss over the training
set. For any integer $k$, $[k]$ denotes the set $\{1, 2, \cdots, k\}$. Finally, $\|\cdot\|_F$, $\|\cdot\|_2$, $\|\cdot\|_1$, $\|\cdot\|_\infty$ denote the
Frobenius norm, the spectral norm, the element-wise $\ell_1$-norm and the element-wise $\ell_\infty$-norm respectively.
2 Generalization and Capacity Control in Deep Learning
In this section, we discuss complexity measures that have been suggested, or could be used for capacity
control in neural networks. We discuss advantages and weaknesses of each of these complexity
measures and examine their abilities to explain the observed generalization phenomena in deep
learning.
We consider the statistical capacity of a model class in terms of the number of examples required to
ensure generalization, i.e. that the population (or test error) is close to the training error, even when
minimizing the training error. This also roughly corresponds to the maximum number of examples
on which one can obtain small training error even with random labels.
Given a model class H, such as all the functions representable by some feedforward or convolutional
networks, one can consider the capacity of the entire class H; this corresponds to learning with
a uniform "prior" or notion of complexity over all models in the class. Alternatively, we can also
consider some complexity measure, which we take as a mapping that assigns a non-negative number
to every hypothesis in the class, $M : \{H, S\} \to \mathbb{R}^+$, where $S$ is the training set. It is then sufficient
to consider the capacity of the restricted class $H_{M,\alpha} = \{h : h \in H,\; M(h) \leq \alpha\}$ for a given $\alpha \geq 0$.
One can then ensure generalization of a learned hypothesis $h$ in terms of the capacity of $H_{M,M(h)}$.
Having a good hypothesis with low complexity, and being biased toward low complexity (in terms of
M) can then be sufficient for learning, even if the capacity of the entire H is high. And if we are
indeed relying on M for ensuring generalization (and in particular, biasing toward models with lower
complexity under M), we would expect a learned h with lower value of M(h) to generalize better.
For some of the measures discussed, we allow M to depend also on the training set. If this is done
carefully, we can still ensure generalization for the restricted class $H_{M,\alpha}$.
We will consider several possible complexity measures. For each candidate measure, we first investigate whether it is sufficient for generalization, and analyze the capacity of $H_{M,\alpha}$. Understanding the
capacity corresponding to different complexity measures also allows us to relate between different
measures and provides guidance as to what and how we should measure: From the above discussion,
it is clear that any monotone transformation of a complexity measure leads to an equivalent notion
of complexity. Furthermore, complexity is meaningful only in the context of a specific hypothesis
class H, e.g. specific architecture or network size. The capacity, as we consider it (in units of sample
complexity), provides a yardstick by which to measure complexity (we should be clear though, that
we are vague regarding the scaling of the generalization error itself, and only consider the scaling
in terms of complexity and model class, thus we obtain only a very crude yardstick sufficient for
investigating trends and relative phenomena, not a quantitative yardstick).
2.1 Network Size
For any model, if its parameters have finite precision, its capacity is linear in the total number of
parameters. Even without making an assumption on the precision of parameters, the VC dimension
of feedforward networks can be bounded in terms of the number of parameters dim(w) [1, 3, 6, 23].
In particular, Bartlett [4] and Harvey et al. [11], following Bartlett et al. [6], give the following tight
(up to logarithmic factors) bound on the VC dimension and hence capacity of feedforward networks
with ReLU activations:
$\text{VC-dim} = \tilde{O}(d \cdot \dim(w)) \qquad (1)$
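As a small illustration of the quantity dim(w) entering this bound, the snippet below counts the weights of a fully connected network; the example widths are arbitrary.

```python
# Total parameter count dim(w) for layer widths h_0, ..., h_d (weights only).
def dim_w(widths):
    return sum(widths[i + 1] * widths[i] for i in range(len(widths) - 1))

assert dim_w([784, 1000, 1000, 10]) == 784 * 1000 + 1000 * 1000 + 1000 * 10
```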
In the over-parametrized settings, where the number of parameters is more than the number of
samples, complexity measures that depend on the total number of parameters are too weak and
cannot explain the generalization behavior. Neural networks used in practice often have significantly
more parameters than samples, and indeed can perfectly fit even random labels, obviously without
generalizing [30]. Moreover, measuring complexity in terms of number of parameters cannot explain
the reduction in generalization error as the number of hidden units increase [20] (see also Figure 4).
2.2 Norms and Margins
Capacity of linear predictors can be controlled independent of the number of parameters, e.g. through
regularization of its `2 norm. Similar norm based complexity measures have also been established for
feedforward neural networks with ReLU activations. For example, capacity can be bounded based
on the $\ell_1$ norm of the weights of hidden units in each layer, and is proportional to $\prod_{i=1}^d \|W_i\|_{1,\infty}^2$,
where $\|W_i\|_{1,\infty}$ is the maximum over hidden units in layer $i$ of the $\ell_1$ norm of incoming weights to
the hidden unit [5]. More generally, Neyshabur et al. [19] considered group norms $\ell_{p,q}$ corresponding
to the $\ell_q$ norm over hidden units of the $\ell_p$ norm of incoming weights to the hidden unit. This includes
$\ell_{2,2}$, which is equivalent to the Frobenius norm, where the capacity of the network is proportional to
$\prod_{i=1}^d \|W_i\|_F^2$. They further motivated a complexity measure that is invariant to node-wise rescaling
reparametrization¹, suggesting $\ell_p$ path norms, which is the minimum over all node-wise rescalings
of $\prod_{i=1}^d \|W_i\|_{p,\infty}$ and is equal to the $\ell_p$ norm of a vector with coordinates each of which is the product
of weights along a path from an input node to an output node in the network. While preparing this
manuscript, we became aware of parallel work, Bartlett et al. [2], that proves generalization bounds
with capacity proportional to $\prod_{i=1}^d \|W_i\|_2^2 \Big( \sum_{j=1}^d \big( \|W_j\|_1 / \|W_j\|_2 \big)^{2/3} \Big)^3$.

¹ Node-rescaling can be defined as a sequence of reparametrizations, each of which corresponds to multiplying the
incoming weights and dividing the outgoing weights of a hidden unit by a positive scalar $\alpha$. The resulting network
computes the same function as the network before the reparametrization.
Capacity control in terms of norm, when using a zero/one loss (i.e. counting errors) requires us in
addition to account for scaling of the output of the neural networks, as the loss is insensitive to this
scaling but the norm only makes sense in the context of such scaling. For example, dividing all the
weights by the same number will scale down the output of the network but does not change the 0/1
loss, and hence it is possible to get a network with arbitrarily small norm and the same 0/1 loss. Using
a scale-sensitive loss, such as the cross entropy loss, does address this issue (if the outputs are scaled
down toward zero, the loss becomes trivially bad), and one can obtain generalization guarantees in
terms of norm and the cross entropy loss.
However, we should be careful when comparing the norms of different models learned by minimizing
the cross entropy loss, in particular when the training error goes to zero. When the training error goes
to zero, in order to push the cross entropy loss (or any other positive loss that diminish at infinity)
to zero, the outputs of the network must go to infinity, and thus the norm of the weights (under any
norm) should also go to infinity. This means that minimizing the cross entropy loss will drive the
norm toward infinity. In practice, the search is terminated at some finite time, resulting in large,
but finite norm. But the value of this norm is mostly an indication of how far the optimization is
allowed to progress?using a stricter stopping criteria (or higher allowed number of iterations) would
yield higher norm. In particular, comparing the norms of models found using different optimization
approaches is meaningless, as they would all go toward infinity.
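A quick numeric check of this point (illustrative values only): scaling the outputs, e.g. by scaling the weights of a positively homogeneous network, sends the cross entropy loss toward zero while the 0/1 loss never changes.

```python
# Cross entropy shrinks under output scaling even though the prediction is fixed.
import numpy as np

def cross_entropy(scores, y):
    return -np.log(np.exp(scores[y]) / np.exp(scores).sum())

scores = np.array([2.0, 1.0, -1.0])   # class 0 already wins: 0/1 loss is zero
for c in [1, 10, 100]:
    print(c, cross_entropy(c * scores, 0))   # loss -> 0 as c grows
```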
Instead, to meaningfully compare norms of the network, we should explicitly take into account the
scaling of the outputs of the network. One way this can be done, when the training error is indeed
zero, is to consider the ?margin? of the predictions in addition to the norms of the parameters. We
refer to the margin for a single data point x as the difference between the score of the correct label
and the maximum score of other labels, i.e.
$f_w(x)[y_{\text{true}}] - \max_{y \neq y_{\text{true}}} f_w(x)[y] \qquad (2)$
In order to measure scale over an entire training set, one simple approach is to consider the "hard
margin", which is the minimum margin among all training points. However, this definition is very
sensitive to extreme points as well as to the size of the training set. We consider instead a more
robust notion that allows a small portion of data points to violate the margin. For a given training
set and small value $\epsilon > 0$, we define the margin $\gamma_{\text{margin}}$ as the lowest value of $\gamma$ such that $\lceil \epsilon m \rceil$ data
points have margin lower than $\gamma$, where $m$ is the size of the training set. We found empirically that the
qualitative and relative nature of our empirical results is almost unaffected by reasonable choices of
$\epsilon$ (e.g. between 0.001 and 0.1).
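The margin and the ε-margin are straightforward to compute from a matrix of network outputs; a sketch (with ties handled loosely and names of our choosing):

```python
# Margins per Eq. (2) and the epsilon-margin: the value below which roughly an
# eps fraction of training points' margins fall.
import numpy as np

def margins(scores, labels):
    # scores: (m, num_classes) outputs f_w(x_i); labels: (m,) true classes.
    m = len(labels)
    true_score = scores[np.arange(m), labels]
    masked = scores.copy()
    masked[np.arange(m), labels] = -np.inf       # exclude the true label
    return true_score - masked.max(axis=1)

def eps_margin(scores, labels, eps=0.05):
    g = np.sort(margins(scores, labels))
    k = int(np.ceil(eps * len(g)))               # allow ceil(eps * m) violations
    return g[k]

# eps = 0.05 corresponds to the 5th percentile used in Figure 1.
```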
The measures we investigate in this work and their corresponding capacity bounds are as follows²:
• `2 norm with capacity proportional to $\frac{1}{\gamma_{\mathrm{margin}}^2} \prod_{i=1}^{d} 4\|W_i\|_F^2$ [19].
• `1-path norm with capacity proportional to $\frac{1}{\gamma_{\mathrm{margin}}^2} \Big( \sum_{j \in \prod_{k=0}^{d}[h_k]} \big| \prod_{i=1}^{d} 2 W_i[j_i, j_{i-1}] \big| \Big)^2$ [5, 19].
• `2-path norm with capacity proportional to $\frac{1}{\gamma_{\mathrm{margin}}^2} \sum_{j \in \prod_{k=0}^{d}[h_k]} \prod_{i=1}^{d} 4 h_i W_i[j_i, j_{i-1}]^2$.
• spectral norm with capacity proportional to $\frac{1}{\gamma_{\mathrm{margin}}^2} \prod_{i=1}^{d} h_i \|W_i\|_2^2$.
where $\prod_{k=0}^{d}[h_k]$ is the Cartesian product over the sets $[h_k]$. The above bounds indicate that capacity can
be bounded in terms of either the `2 norm or the `1-path norm, independent of the number of parameters.
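For a fully-connected network, all four quantities can be computed directly from the weight matrices. The sketch below (biases omitted and constant factors such as $4^d$ dropped, as in Figure 1) uses the fact that the `1-path norm (resp. squared `2-path norm) equals the all-ones vector propagated through the entrywise absolute (resp. squared) weights:

```python
import numpy as np

def l2_measure(weights):
    # prod_i ||W_i||_F^2
    return np.prod([np.sum(W ** 2) for W in weights])

def l1_path_norm(weights):
    # Sum over all input-output paths of |product of weights along the path|.
    v = np.ones(weights[0].shape[1])
    for W in weights:
        v = np.abs(W) @ v
    return v.sum()

def l2_path_norm(weights):
    # sqrt of the sum over all paths of the squared product of weights.
    v = np.ones(weights[0].shape[1])
    for W in weights:
        v = (W ** 2) @ v
    return np.sqrt(v.sum())

def spectral_measure(weights):
    # prod_i h_i ||W_i||_2^2, with h_i the output width of layer i.
    out = 1.0
    for W in weights:
        out *= W.shape[0] * np.linalg.norm(W, 2) ** 2
    return out
```

Each measure is then divided by $\gamma_{\mathrm{margin}}^2$ before plotting, as in Figure 1.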
2 We have dropped the term that only depends on the norm of the input. The bounds based on the `2-path norm
and spectral norm can be derived directly from those based on the `1-path norm and `2 norm respectively.
Without further conditions on the weights, the exponential dependence on depth is tight, but the $4^d$ dependence might be
loose [19]. As we discussed at the beginning of this subsection, in parallel work, Bartlett et al. [2] have improved
the spectral bound.
[Figure 1 plots: four panels (`2 norm, `1-path norm, `2-path norm, spectral norm), each showing the measure vs. training set size (10K–50K) for true and random labels.]
Figure 1: Comparing different complexity measures on a VGG network trained on subsets of the CIFAR10
dataset with true (blue line) or random (red line) labels. We plot norm divided by margin to avoid scaling issues (see Section 2), where for each complexity measure we drop the terms that only depend on
depth or number of hidden units; e.g. for the `2-path norm we plot $\gamma_{\mathrm{margin}}^{-2} \sum_{j \in \prod_{k=0}^{d}[h_k]} \prod_{i=1}^{d} W_i[j_i, j_{i-1}]^2$.
We also set the margin over a training set S to be the 5th percentile of the margins of the data points in S, i.e.
$\mathrm{Prc}_5\{f_w(x_i)[y_i] - \max_{y \ne y_i} f_w(x_i)[y] \mid (x_i, y_i) \in S\}$. In all experiments, the training error of the learned network is zero. The plots indicate that these measures can explain the generalization, as the complexity of the model
learned with random labels is always higher than the one learned with true labels. Moreover, the gap between the
complexity of models learned with true and random labels increases as we increase the size of the training set.
The `2-path norm dependence on the number of hidden units in each layer is unavoidable. However, it is
not clear whether a bound that depends only on the product of spectral norms is possible.
As an initial empirical investigation of the appropriateness of the different complexity measures,
we compared the complexity (under each of the above measures) of models trained on true versus
random labels. We would expect to see two phenomena: first, the complexity of models trained
on true labels should be substantially lower than those trained on random labels, corresponding to
their better generalization ability. Second, when training on random labels, we expect capacity to
increase almost linearly with the number of training examples, since every extra example requires
new capacity in order to fit its random label. However, when training on true labels we expect the
model to capture the true functional dependence between input and output and thus fitting more
training examples should only require small increases in the capacity of the network. The results are
reported in Figure 1. We indeed observe a gap between the complexity of models learned on real and
random labels for all four norms, with the difference in increase in capacity between true and random
labels being most pronounced for the `2 norm and `2 -path norm.
Lipschitz Continuity and Robustness The measures/norms we discussed so far also control the
Lipschitz constant of the network with respect to its input. Is the capacity control achieved through the
bound on the Lipschitz constant? Is bounding the Lipschitz constant alone enough for generalization?
In Appendix A, we show that the current bounds based on the Lipschitz constant have exponential dependence on the
input dimension, and therefore the capacity bounds discussed above are not merely a consequence of
bounding the Lipschitz constant.
In Section 3 we present further empirical investigations of the appropriateness of these complexity
measures to explain other phenomena.
2.3
Sharpness
The notion of sharpness as a generalization measure was recently suggested by Keskar et al. [12] and
corresponds to robustness to adversarial perturbations on the parameter space:
$$\zeta_\alpha(w) = \frac{\max_{|\nu_i| \le \alpha(|w_i|+1)} \hat L(f_{w+\nu}) - \hat L(f_w)}{1 + \hat L(f_w)} \simeq \max_{|\nu_i| \le \alpha(|w_i|+1)} \hat L(f_{w+\nu}) - \hat L(f_w), \quad (3)$$
where the training error $\hat L(f_w)$ is generally very small in the case of neural networks in practice, so
we can simply drop it from the denominator without a significant change in the sharpness value.
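The max in (3) can be approximated by projected gradient ascent on the perturbation within the box $|\nu_i| \le \alpha(|w_i| + 1)$. A minimal sketch, assuming hypothetical callables loss_fn(w) for $\hat L(f_w)$ and grad_fn(w) for its gradient:

```python
import numpy as np

def sharpness(w, loss_fn, grad_fn, alpha=5e-4, steps=20, lr=0.1):
    # Approximates max_{|nu_i| <= alpha(|w_i|+1)} L(f_{w+nu}) - L(f_w)
    # by projected gradient ascent over the box constraint.
    box = alpha * (np.abs(w) + 1.0)
    nu = np.zeros_like(w)
    base = loss_fn(w)
    for _ in range(steps):
        nu = nu + lr * grad_fn(w + nu)         # ascend the training loss
        nu = np.clip(nu, -box, box)            # project back onto the box
    return loss_fn(w + nu) - base              # the 1 + L(f_w) denominator is dropped
```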
As we will explain below, sharpness defined this way does not capture the generalization behavior.
To see this, we first examine whether sharpness can predict the generalization behavior for networks
trained on true vs random labels. In the left plot of Figure 2, we plot the sharpness for networks
trained on true vs random labels. While sharpness correctly predicts the generalization behavior for
[Figure 2 plots: left panel, sharpness vs. training set size (10K–50K) for true and random labels; middle and right panels, expected sharpness vs. KL (×10^8) for true and random labels respectively.]
Figure 2: Sharpness and PAC-Bayes measures on a VGG network trained on subsets of the CIFAR10 dataset with
true or random labels. In the left panel, we plot max sharpness, calculated as suggested by Keskar et al. [12],
where the perturbation for parameter $w_i$ has magnitude $5 \cdot 10^{-4}(|w_i| + 1)$. The middle and right plots show the
relationship between expected sharpness and KL divergence in the PAC-Bayes bound for true and random labels
respectively. For the PAC-Bayes plots, each point in the plot corresponds to a choice of $\sigma$, where the standard deviation
of the perturbation for parameter $w_i$ is $\sigma(10|w_i| + 1)$. The KL corresponding to each $\sigma$ is a weighted `2
norm, where the weight for each parameter is the inverse of the standard deviation of the perturbation.
bigger networks, for networks of smaller size those trained on random labels have less sharpness
than the ones trained on true labels. Furthermore, sharpness as defined above depends on the scale of w
and can be artificially increased or decreased by changing the scale of the parameters. Therefore,
sharpness alone is not sufficient to control the capacity of the network.
Instead, we advocate viewing a related notion of expected sharpness in the context of the PAC-Bayesian framework. Viewed this way, it becomes clear that sharpness controls only one of two
relevant terms, and must be balanced with some other measure such as norm. Together, sharpness and
norm do provide capacity control and can explain many of the observed phenomena. This connection
between sharpness and the PAC-Bayes framework was also recently noted by Dziugaite and Roy [8].
The PAC-Bayesian framework [16, 17] provides guarantees on the expected error of a randomized
predictor (hypothesis), drawn from a distribution denoted Q and sometimes referred to as a 'posterior'
(although it need not be the Bayesian posterior), that depends on the training data. Let $f_w$ be any
predictor (not necessarily a neural network) learned from training data. We consider a distribution
Q over predictors with weights of the form $w + \nu$, where w is a single predictor learned from the
training set, and $\nu$ is a random variable. Then, given a 'prior' distribution P over the hypotheses that
is independent of the training data, with probability at least $1 - \delta$ over the draw of the training data,
the expected error of $f_{w+\nu}$ can be bounded as follows [15]:
$$\mathbb{E}_\nu[L(f_{w+\nu})] \le \mathbb{E}_\nu[\hat L(f_{w+\nu})] + 4\sqrt{\frac{KL(w + \nu \,\|\, P) + \ln\frac{2m}{\delta}}{m}} \quad (4)$$
Substituting $\mathbb{E}_\nu[\hat L(f_{w+\nu})]$ with $\hat L(f_w) + \big(\mathbb{E}_\nu[\hat L(f_{w+\nu})] - \hat L(f_w)\big)$, we can see that the PAC-Bayes
bound depends on two quantities: (i) the expected sharpness, and (ii) the Kullback-Leibler (KL)
divergence to the 'prior' P. The bound is valid for any distribution measure P, any perturbation
distribution $\nu$, and any method of choosing w dependent on the training set. A simple way to
instantiate the bound is to set P to be a zero mean, $\sigma^2$ variance Gaussian distribution. Choosing the
perturbation $\nu$ to also be a zero mean spherical Gaussian with variance $\sigma^2$ in every direction yields
the following guarantee (w.p. $1 - \delta$ over the training set):
$$\mathbb{E}_{\nu \sim N(0,\sigma)^n}[L(f_{w+\nu})] \le \hat L(f_w) + \underbrace{\mathbb{E}_{\nu \sim N(0,\sigma)^n}[\hat L(f_{w+\nu})] - \hat L(f_w)}_{\text{expected sharpness}} + 4\sqrt{\frac{1}{m}\Big(\underbrace{\frac{\|w\|_2^2}{2\sigma^2}}_{\text{KL}} + \ln\frac{2m}{\delta}\Big)}, \quad (5)$$
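Each point of the bi-criteria plots discussed below can be obtained by fixing $\sigma$, estimating the expected sharpness by Monte Carlo over Gaussian perturbations, and computing the KL term $\|w\|_2^2 / (2\sigma^2)$; a minimal sketch with a hypothetical loss_fn:

```python
import numpy as np

def pac_bayes_point(w, loss_fn, sigma, n_samples=10, seed=0):
    # Returns (KL, expected sharpness) for perturbations nu ~ N(0, sigma^2 I).
    rng = np.random.default_rng(seed)
    base = loss_fn(w)
    perturbed = [loss_fn(w + sigma * rng.standard_normal(w.shape))
                 for _ in range(n_samples)]
    kl = np.sum(w ** 2) / (2 * sigma ** 2)     # KL term for a N(0, sigma^2 I) prior
    return kl, float(np.mean(perturbed) - base)

# Sweeping sigma traces out one curve of a bi-criteria plot:
# points = [pac_bayes_point(w, loss_fn, s) for s in np.logspace(-3, 0, 20)]
```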
Another interesting approach is to set the variance of the perturbation of each parameter with respect
to the magnitude of the parameter. For example, if $\sigma_i = \alpha|w_i| + \beta$, then the KL term in the above
expression changes to $\sum_i \frac{w_i^2}{2\sigma_i^2}$. The above generalization guarantees give a clear way to think about
capacity control jointly in terms of both the expected sharpness and the norm, and as we discussed
earlier this indicates that sharpness by itself cannot control the capacity without considering the scaling.
In the above generalization bound, norms and sharpness interact in a direct way depending on $\sigma$,
[Figure 3 plots: left panel, training and test error vs. number of random labels (0–5K); middle panel, measures (`2 norm, spectral norm, path-`1 norm, path-`2 norm, sharpness) vs. number of random labels; right panel, expected sharpness vs. KL (×10^7).]
Figure 3: Experiments on global minima with poor generalization. For each experiment, a VGG network is
trained on the union of a subset of CIFAR10 of size 10000 containing samples with true labels and another subset
of CIFAR10 of varying size containing random labels. The learned networks are all global minima for
the objective function on the subset with true labels. The left plot indicates the training and test errors based on
the size of the set with random labels. The middle plot shows the change in different measures based on the
size of the set with random labels. The right plot indicates the relationship between expected sharpness
and KL in PAC-Bayes for each of the experiments. Measures are calculated as explained in Figures 1 and 2.
as increasing the norm by decreasing $\sigma$ causes a decrease in sharpness and vice versa. It is therefore
important to find the right balance between the norm and sharpness by choosing $\sigma$ appropriately in
order to get a reasonable bound on the capacity.
In our experiments we observe that looking at both these measures jointly indeed makes a better predictor for the generalization error. As discussed earlier, Dziugaite and Roy [8] numerically optimize
the overall PAC-Bayes generalization bound over a family of multivariate Gaussian distributions
(different choices of perturbations and priors). Since the precise way the sharpness and KL-divergence
are combined is not tight, certainly not in (5), nor in the more refined bound used by Dziugaite and
Roy [8], we prefer to shy away from numerically optimizing the balance between sharpness and the
KL-divergence. Instead, we propose using bi-criteria plots, where sharpness and KL-divergence are
plotted against each other, as we vary the perturbation variance. For example, in the center and right
panels of Figure 2 we show such plots for networks trained on true and random labels respectively.
We see that although sharpness by itself is not sufficient for explaining generalization in this setting
(as we saw in the left panel), the bi-criteria plots are significantly lower for the true labels. Even more
so, the change in the bi-criteria plot as we increase the number of samples is significantly larger with
random labels, correctly capturing the required increase in capacity. For example, to get a fixed value
of expected sharpness, such as 0.05, networks trained with random labels require a higher norm
compared to those trained with true labels. This behavior is in agreement with our earlier discussion,
that sharpness is sensitive to scaling of the parameters and is not a capacity control measure as it can
be artificially changed by scaling the network. However, combined with the norm, sharpness does
seem to provide a capacity measure.
3
Empirical Investigation
In this section we investigate the ability of the discussed measures to explain the generalization
phenomena discussed in the Introduction. We already saw in Figures 1 and 2 that these measures
capture the difference in generalization behavior of models trained on true or random labels, including
the increase in capacity as the sample size increases, and the difference in this increase between true
and random labels.
Different Global Minima Given different global minima of the training loss on the same training
set and with the same model class, can these measures indicate which model is going to generalize
better? In order to verify this property, we can calculate each measure on several different global
minima and see if lower values of the measure imply lower generalization error. In order to find
different global minima for the training loss, we design an experiment where we force the optimization
methods to converge to different global minima with varying generalization abilities by forming a
confusion set that includes samples with random labels. The optimization is done on the loss that
includes examples from both the confusion set and the training set. Since deep learning models have
very high capacity, the optimization over the union of confusion set and training set generally leads
to a point with zero error over both confusion and training sets, which is thus a global minimum for the
[Figure 4 plots: left panel, training and test error vs. number of hidden units (32–8K); middle panel, measures vs. number of hidden units; right panel, expected sharpness vs. KL (×10^6).]
Figure 4: The generalization of a two-layer perceptron trained on MNIST with varying number of hidden units.
The left plot indicates the training and test errors. The test error decreases as the size increases. The middle
plot shows measures for each of the trained networks. The right plot indicates the relationship between
sharpness and KL in PAC-Bayes for each experiment. Measures are calculated as explained in Figures 1 and 2.
training set. We randomly select a subset of CIFAR10 dataset with 10000 data points as the training
set and our goal is to find networks that have zero error on this set but different generalization abilities
on the test set. In order to do that, we train networks on the union of the training set with fixed size
10000 and confusion sets of varying sizes consisting of CIFAR10 samples with random labels,
and we evaluate the learned model on an independent test set. The trained network achieves zero
training error but as shown in Figure 3, the test error of the model increases with increasing size of
the confusion set. The middle panel of this figure suggests that the norm of the learned networks can
indeed be predictive of their generalization behavior. However, we again observe that sharpness
behaves poorly in these experiments. The right panel of this figure also suggests that the PAC-Bayes
measure of joint sharpness and KL divergence has better behavior: for a fixed expected sharpness,
networks that have higher generalization error have higher norms.
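The construction of the confusion-set training data is straightforward to sketch (the array inputs X, y stand in for CIFAR10; dataset loading is hypothetical):

```python
import numpy as np

def make_confusion_split(X, y, n_true=10000, n_confusion=5000, n_classes=10, seed=0):
    # Union of a fixed-size subset with true labels and a confusion set
    # whose labels are drawn uniformly at random.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    true_idx = idx[:n_true]
    conf_idx = idx[n_true:n_true + n_confusion]
    X_train = np.concatenate([X[true_idx], X[conf_idx]])
    y_train = np.concatenate([y[true_idx],
                              rng.integers(0, n_classes, size=n_confusion)])
    return X_train, y_train
```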
Increasing Network Size We also repeat the experiments conducted by Neyshabur et al. [20] where
a fully connected feedforward network is trained on the MNIST dataset with a varying number of hidden
units and we check the values of different complexity measures on each of the learned networks. The
left panel in Figure 4 shows the training and test error for this experiment. While 32 hidden units are
enough to fit the training data, we observe that networks with more hidden units generalize better.
Since the optimization is done without any explicit regularization, the only possible explanation for
this phenomenon is the implicit regularization by the optimization algorithm. Therefore, we expect a
sensible complexity measure to decrease beyond 32 hidden units and behave similar to the test error.
Different measures are reported for the learned networks. The middle panel suggests that all margin/norm
based complexity measures decrease for larger networks up to 128 hidden units. For networks with
more hidden units, the `2 norm and `1-path norm increase with the size of the network. The middle panel
suggests that the `2-path norm and spectral norm can provide some explanation for this phenomenon.
However, as we discussed in Section 2, the actual complexity measure based on `2 -path norm and
spectral norm also depends on the number of hidden units and taking this into account indicates that
these measures cannot explain this phenomenon. In Appendix A, we discuss another complexity
measure that also depends on the spectral norm through a Lipschitz continuity or robustness argument.
Even though this bound is very loose (exponential in input dimension), it is monotonic with respect
to the spectral norm that is reported in the plots. The right panel shows that the joint PAC-Bayes
measure decreases for larger networks up to size 128 but fails to explain this generalization behavior
for larger networks. This suggests that the measures examined so far are not sufficient to explain all the
generalization phenomena observed in neural networks.
4
Conclusion
Learning with deep neural networks displays good generalization behavior in practice, a phenomenon
that remains largely unexplained. In this paper we discussed different candidate complexity measures
that might explain generalization in neural networks. We outline a concrete methodology for
investigating such measures, and report on experiments studying how well the measures explain
different phenomena. While there is no clear choice yet, some combination of expected sharpness
and norms do seem to capture much of the generalization behavior of neural networks. A major issue
still left unresolved is how the choice of optimization algorithm biases such complexity to be low,
and what is the precise relationship between optimization and implicit regularization.
References
[1] M. Anthony and P. L. Bartlett. Neural network learning: Theoretical foundations. Cambridge
University Press, 2009.
[2] P. Bartlett, D. J. Foster, and M. Telgarsky. Spectrally-normalized margin bounds for neural
networks. arXiv preprint arXiv:1706.08498, 2017.
[3] P. L. Bartlett. The sample complexity of pattern classification with neural networks: the size of
the weights is more important than the size of the network. IEEE Transactions on Information
Theory, 44(2):525–536, 1998.
[4] P. L. Bartlett. The impact of the nonlinearity on the VC-dimension of a deep network. Preprint,
2017.
[5] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and
structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
[6] P. L. Bartlett, V. Maiorov, and R. Meir. Almost linear VC dimension bounds for piecewise
polynomial networks. Neural Computation, 10(8):2159–2173, 1998.
[7] P. Chaudhari, A. Choromanska, S. Soatto, and Y. LeCun. Entropy-sgd: Biasing gradient descent
into wide valleys. arXiv preprint arXiv:1611.01838, 2016.
[8] G. K. Dziugaite and D. M. Roy. Computing nonvacuous generalization bounds for deep
(stochastic) neural networks with many more parameters than training data. arXiv preprint
arXiv:1703.11008, 2017.
[9] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines.
Advances in Computational Mathematics, 13(1):1–50, 2000.
[10] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient
descent. In ICML, 2016.
[11] N. Harvey, C. Liaw, and A. Mehrabian. Nearly-tight vc-dimension bounds for piecewise linear
neural networks. arXiv preprint arXiv:1703.02930, 2017.
[12] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836,
2016.
[13] J. Langford and R. Caruana. (Not) bounding the true error. In Proceedings of the 14th
International Conference on Neural Information Processing Systems: Natural and Synthetic,
pages 809–816. MIT Press, 2001.
[14] U. v. Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. Journal
of Machine Learning Research, 5(Jun):669–695, 2004.
[15] D. McAllester. Simplified PAC-Bayesian margin bounds. Lecture Notes in Computer Science,
pages 203–215, 2003.
[16] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual
Conference on Computational Learning Theory, pages 230–234. ACM, 1998.
[17] D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual
Conference on Computational Learning Theory, pages 164–170. ACM, 1999.
[18] B. Neyshabur, R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in
deep neural networks. In Advanced in Neural Information Processsing Systems (NIPS), 2015.
[19] B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In
Proceeding of the 28th Conference on Learning Theory (COLT), 2015.
[20] B. Neyshabur, R. Tomioka, and N. Srebro. In search of the real inductive bias: On the role
of implicit regularization in deep learning. Proceeding of the International Conference on
Learning Representations workshop track, 2015.
[21] B. Neyshabur, R. Tomioka, R. Salakhutdinov, and N. Srebro. Data-dependent path normalization
in neural networks. In the International Conference on Learning Representations, 2016.
[22] B. Neyshabur, Y. Wu, R. Salakhutdinov, and N. Srebro. Path-normalized optimization of
recurrent neural networks with relu activations. Advances in Neural Information Processing
Systems, 2016.
[23] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to
algorithms. Cambridge university press, 2014.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556, 2014.
[25] A. J. Smola, B. Schölkopf, and K.-R. Müller. The connection between regularization operators
and support vector kernels. Neural Networks, 11(4):637–649, 1998.
[26] J. Sokolic, R. Giryes, G. Sapiro, and M. R. Rodrigues. Generalization error of invariant
classifiers. arXiv preprint arXiv:1610.04574, 2016.
[27] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In International Conference on
Computational Learning Theory, pages 545–560. Springer Berlin Heidelberg, 2005.
[28] N. Srebro, J. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances
in Neural Information Processing Systems, pages 1329–1336, 2005.
[29] H. Xu and S. Mannor. Robustness and generalization. Machine Learning, 86(3):391–423, 2012.
[30] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires
rethinking generalization. In International Conference on Learning Representations, 2017.
| 7176 |@word middle:6 polynomial:1 norm:90 twelfth:1 accounting:1 sgd:5 thereby:2 reduction:1 initial:1 score:2 current:1 wd:2 comparing:3 activation:4 yet:1 must:3 numerical:1 chicago:1 happen:1 plot:20 drop:2 update:1 v:2 alone:2 instantiate:1 beginning:1 provides:3 mannor:1 node:6 zhang:1 along:1 direct:1 qualitative:1 consists:1 fitting:1 advocate:1 eleventh:1 theoretically:1 expected:20 indeed:6 roughly:1 bneyshabur:1 examine:3 nor:1 behavior:17 salakhutdinov:3 relying:1 spherical:1 decreasing:1 actual:1 considering:1 increasing:5 becomes:2 notation:1 bounded:4 moreover:2 panel:9 bhojanapalli:1 what:7 lowest:1 substantially:1 spectrally:1 shraibman:1 transformation:1 guarantee:6 sapiro:1 quantitative:1 every:3 stricter:1 wrong:1 k2:3 scaled:1 control:15 unit:22 classifier:1 positive:2 before:1 dropped:1 consequence:1 despite:1 path:22 might:2 initialization:1 suggests:3 factorization:2 bi:3 lecun:1 practice:6 union:3 pontil:1 empirical:10 significantly:4 suggest:2 get:3 cannot:4 close:1 valley:1 operator:1 context:3 risk:1 optimize:2 equivalent:2 center:1 go:5 convex:2 sharpness:54 assigns:1 rule:1 stability:2 population:1 notion:7 rhi:1 coordinate:1 rodrigues:1 hypothesis:6 agreement:1 element:2 roy:6 trend:1 recognition:1 predicts:1 observed:5 role:3 preprint:8 capture:5 calculate:1 ensures:2 connected:1 decrease:8 technological:1 mentioned:1 balanced:1 pd:1 complexity:53 trained:19 depend:3 tight:4 predictive:1 vague:1 joint:2 train:2 distinct:1 kp:2 choosing:3 h0:1 refined:1 shalev:1 larger:5 rennie:1 ability:9 simonyan:1 think:1 jointly:2 itself:4 maiorov:1 obviously:2 advantage:1 sequence:1 indication:1 propose:2 product:3 unresolved:1 relevant:2 frobenius:2 pronounced:1 rademacher:1 telgarsky:1 ben:1 help:1 depending:1 recurrent:2 progress:1 dividing:2 indicate:3 qd:9 appropriateness:2 direction:1 correct:1 stochastic:5 vc:6 mcallester:5 viewing:1 require:2 nonvacuous:1 generalization:64 investigation:3 exploring:1 helping:1 considered:3 diminish:1 algorithmic:3 mapping:1 predict:1 substituting:1 major:1 vary:1 achieves:1 label:48 unexplained:1 pacbayesian:1 sensitive:3 saw:2 vice:1 weighted:1 mit:1 always:1 gaussian:4 avoid:1 varying:5 jaakkola:1 derived:1 rank:2 indicates:6 check:1 hk:5 adversarial:1 sense:1 dim:3 dependent:2 stopping:2 entire:3 hidden:20 going:1 choromanska:1 overall:2 among:2 issue:3 classification:2 denoted:1 colt:1 equal:1 aware:1 evgeniou:1 having:1 beach:1 preparing:1 kw:2 y6:1 icml:1 nearly:1 others:1 report:1 piecewise:2 randomly:1 divergence:6 huge:1 investigate:5 certainly:1 weakness:1 extreme:1 light:1 ytrue:2 cifar10:6 poggio:1 plotted:1 guidance:1 theoretical:2 increased:1 earlier:3 caruana:2 measuring:1 deviation:2 subset:6 uniform:4 predictor:6 conducted:1 too:1 reported:3 traning:5 combined:3 synthetic:1 st:1 recht:2 international:5 randomized:1 picking:1 together:1 concrete:1 w1:1 again:1 unavoidable:1 containing:2 worse:1 rescaling:3 suggesting:1 account:3 includes:3 explicitly:1 depends:8 analyze:1 portion:1 red:1 bayes:14 recover:1 parallel:3 reparametrization:2 minimize:3 convolutional:2 became:1 keskar:5 who:1 variance:4 yield:2 correspond:1 largely:1 generalize:6 weak:1 bayesian:6 lkopf:1 multiplying:1 drive:2 rectified:1 unaffected:1 explain:18 chaudhari:1 definition:1 mudigere:1 against:1 dataset:4 hardt:3 subsection:1 carefully:1 manuscript:1 feed:1 higher:6 methodology:1 zisserman:1 improved:1 done:4 though:3 furthermore:2 just:1 implicit:3 smola:1 langford:2 correlation:1 continuity:2 usa:1 dziugaite:6 verify:1 true:24 
normalized:3 inductive:2 hence:3 regularization:7 soatto:1 leibler:1 i2:1 noted:2 percentile:1 liaw:1 criterion:4 outline:1 confusion:6 image:1 wise:4 recently:7 functional:1 empirically:1 ji:6 insensitive:1 discussed:10 relating:2 numerically:3 refer:1 significant:1 versa:1 cambridge:2 enjoyed:1 trivially:1 mathematics:1 nonlinearity:1 posterior:2 multivariate:1 recent:1 showed:1 optimizing:1 harvey:2 success:1 yi:3 processsing:1 minimum:14 converge:1 ller:1 ii:1 multiple:1 violate:1 faster:1 cross:5 long:1 divided:1 maxy6:1 bigger:1 controlled:1 ensuring:3 prediction:2 impact:1 denominator:1 arxiv:14 iteration:1 normalization:2 sometimes:1 kernel:1 achieved:1 addition:2 decreased:1 rescalings:1 appropriately:1 biased:1 meaningless:1 extra:1 w2:1 sch:1 kwi:7 kwk22:1 meaningfully:1 spirit:1 seem:2 integer:1 structural:1 counting:2 feedforward:6 bengio:1 enough:2 variety:1 relu:4 fit:3 architecture:4 perfectly:1 regarding:1 vgg:3 reparametrizations:1 whether:2 motivated:1 expression:1 bartlett:12 cause:1 deep:16 generally:3 clear:6 amount:1 meir:1 correctly:2 track:1 blue:1 group:1 four:1 drawn:1 changing:1 nocedal:1 monotone:1 merely:1 year:1 luxburg:1 inverse:1 mehrabian:1 almost:3 reasonable:2 family:1 wu:1 draw:1 appendix:2 scaling:10 prefer:1 capturing:1 bound:26 layer:7 hi:4 display:1 annual:2 infinity:5 bousquet:1 nathan:1 argument:1 smelyanskiy:1 combination:1 representable:2 poor:2 across:2 smaller:2 wi:11 making:2 explained:3 invariant:3 restricted:2 ln:2 remains:1 discus:5 loose:2 singer:1 studying:2 generalizes:2 neyshabur:10 observe:4 away:1 appropriate:4 spectral:11 batch:3 robustness:7 denotes:1 ensure:4 k1:3 prof:1 objective:3 noticed:1 already:1 quantity:1 looked:1 dependence:5 exhibit:1 gradient:5 distance:1 berlin:1 capacity:43 parametrized:2 sensible:2 rethinking:1 me:1 toward:6 relationship:4 minimizing:4 balance:3 mostly:1 relate:1 trace:2 negative:1 design:1 datasets:1 finite:3 descent:5 behave:1 situation:1 looking:1 precise:2 perturbation:10 arbitrary:1 sharp:1 ttic:1 david:2 introduced:2 required:2 kl:16 giryes:1 connection:3 learned:19 established:1 nip:2 address:1 able:1 suggested:8 beyond:1 below:1 pattern:1 biasing:2 including:4 max:7 explanation:3 natural:1 force:1 advanced:1 imply:1 hm:4 jun:1 prior:4 understanding:4 nati:1 kf:3 relative:2 loss:18 expect:8 fully:1 lecture:1 interesting:1 proportional:7 srebro:8 versus:1 foundation:1 sufficient:11 foster:1 changed:1 surprisingly:1 repeat:1 bias:5 allow:1 understand:1 perceptron:1 institute:1 wide:2 explaining:3 taking:1 dimension:7 depth:2 calculated:3 valid:1 computes:1 forward:1 commonly:1 simplified:1 far:3 transaction:1 nov:1 emphasize:2 implicitly:1 kullback:1 global:11 investigating:2 incoming:3 xi:2 shwartz:1 alternatively:1 search:2 nature:1 robust:1 ca:1 interact:1 heidelberg:1 complex:1 artificially:2 necessarily:1 anthony:1 linearly:1 terminated:1 bounding:3 allowed:2 xu:1 referred:1 precision:2 fails:1 tomioka:3 explicit:1 exponential:3 candidate:2 crude:1 toyota:1 srinadh:2 tang:1 down:2 theorem:1 bad:2 specific:2 pac:19 behnam:1 mendelson:1 workshop:1 mnist:2 importance:2 magnitude:2 push:1 cartesian:1 margin:23 gap:3 entropy:6 generalizing:1 logarithmic:1 simply:2 forming:1 highlighting:1 vinyals:1 scalar:1 monotonic:1 springer:1 corresponds:4 acm:2 goal:2 viewed:1 careful:1 lipschitz:8 fw:14 change:5 hard:1 averaging:1 total:2 meaningful:1 select:1 support:2 yardstick:3 evaluate:2 outgoing:1 phenomenon:16 |
6,828 | 7,177 | A framework for Multi-A(rmed)/B(andit) Testing
with Online FDR Control
Fanny Yang
Dept. of EECS, U.C. Berkeley
[email protected]
Aaditya Ramdas
Dept. of EECS and Statistics, U.C. Berkeley
[email protected]
Kevin Jamieson
Allen School of CSE, U. of Washington
[email protected]
Martin Wainwright
Dept. of EECS and Statistics, U.C. Berkeley
[email protected]
Abstract
We propose an alternative framework to existing setups for controlling false alarms
when multiple A/B tests are run over time. This setup arises in many practical
applications, e.g. when pharmaceutical companies test new treatment options
against control pills for different diseases, or when internet companies test their
default webpages versus various alternatives over time. Our framework proposes to
replace a sequence of A/B tests by a sequence of best-arm MAB instances, which
can be continuously monitored by the data scientist. When interleaving the MAB
tests with an online false discovery rate (FDR) algorithm, we can obtain the
best of both worlds: low sample complexity and any time online FDR control. Our
main contributions are: (i) to propose reasonable definitions of a null hypothesis for
MAB instances; (ii) to demonstrate how one can derive an always-valid sequential
p-value that allows continuous monitoring of each MAB test; and (iii) to show that
using rejection thresholds of online-FDR algorithms as the confidence levels for
the MAB algorithms results in sample-optimality, high power, and low FDR
at any point in time. We run extensive simulations to verify our claims, and also
report results on real data collected from the New Yorker Cartoon Caption contest.
1
Introduction
Randomized trials are the default option to determine whether potential improvements of an alternative
method (e.g. website design for a tech company, or medication in clinical trials for pharmaceutical
companies) are significant compared to a well-established default. In the applied domain, this is often
colloquially referred to as A/B testing or A/B/n testing for several alternatives. The standard practice
is to divert a small amount of the traffic or patients to the alternative and control. If an alternative
appears to be significantly better, it is implemented; otherwise, the default setting is maintained.
At first glance, this procedure seems intuitive and simple. However, in cases where the aim is to
optimize over one particular metric, one can do better. In particular, this common tool suffers from
several downsides. (1) First, one may wish to allocate more traffic to a better treatment if it is clearly
better. Yet typical A/B/n testing frameworks split the traffic uniformly over alternatives. Adaptive
techniques should help to detect better alternatives faster. (2) Second, companies often desire to
continuously monitor an ongoing A/B test as they may adjust their termination criteria as time goes
by and possibly stop earlier or later than originally intended. However, this practice may result in
many more false alarms if not properly accounted for. This is one of the reasons for the lack of
reproducibility of scientific results, an issue recently receiving increased attention from the public
media. (3) Third, the lack of sufficient evidence or an insignificant improvement of the metric may
make it undesirable from a practical or financial perspective to replace the default. Therefore, when a
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
company runs hundreds to thousands of A/B tests within a year, ideally the number of statistically
insignificant changes that it made should be small relative to the total number of changes made. While
controlling the false alarm rate of each individual test does not achieve this type of false discovery
rate (FDR) control, there are known procedures in the multiple testing literature that are tailored to
this problem.
In this paper, we provide a novel framework that addresses the above shortcomings of A/B or A/B/n
testing. The first concern is tackled by employing recent advances in adaptive sampling like the pure-exploration multi-armed bandit (MAB) algorithm. For the second concern, we adopt the notion of
any-time p-values for guilt-free continuous monitoring. Finally, we handle the third issue using recent
results in online FDR control. Hence the combined framework can be described as doubly-sequential
(sequences of MAB tests, each of which is itself sequential). Although each of those problems
has been studied in hitherto disparate communities, how to leverage the best of all worlds, if at all
possible, has remained an open problem. The main contributions of this paper are in successfully
merging these ideas in a meta framework and presenting the conditions under which it can be shown
to yield near-optimal sample complexity and FDR control.
The remainder of this paper is organized as follows. In Section 2, we lay out the conceptual challenges
that we address in the paper, and describe a meta-algorithm that combines adaptive sampling strategies
with FDR control procedures. Section 3 is devoted to the description of a concrete procedure, along
with some theoretical guarantees on its properties. In Section 4, we discuss some results of our
extensive experiments on both simulated and real-world data sets available to us.
2
Formal experimental setup and a meta-algorithm
In this section we provide a high-level overview of our proposed combined framework aimed at addressing the shortcomings mentioned in the introduction. A specific instantiation of this meta-algorithm
along with detailed theoretical guarantees are specified in Section 3.
For concreteness, we refer to the system designer, whether a tech company or a pharmaceutical
company, as a (data) scientist. We assume that the scientist needs to possibly conduct an infinite
number of experiments sequentially, indexed by j. Each experiment has one default setting, referred
to as the control, and K = K(j) alternative settings, called the treatments or alternatives. The
scientist must return one of the K + 1 options that is the ?best? according to some predefined metric,
before the next experiment is started. Such a setup is a simple mathematical model both for clinical
trials run by pharmaceutical labs, and A/B/n testing used at scale by tech companies.
One full experiment consists of a sequence of steps. In each step, the scientist assigns a new person to
one of the K + 1 options and observes an outcome. In practice, the role of the scientist could be taken
by an adaptive algorithm, which determines the assignment at time step j by careful consideration
of all previous outcomes. Borrowing terminology from the multi-armed bandit (MAB) literature,
we refer to each of the K + 1 options as an arm, and each assignment to arm i is termed 'pulling
arm i'. For concreteness, we assign the index 0 to the control arm and note that it is known to the
algorithm. Furthermore, we assume that the observable metric from each pull of arm $i = 0, 1, \ldots, K$
corresponds to an independent draw from an unknown probability distribution with expectation $\mu_i$.
In the sequel we use $\mu_{i^*} := \max_{i=1,\ldots,K} \mu_i$ to denote the mean of the best arm. We refer the reader to
Table 1 in Appendix A for a glossary of the notation used throughout this paper.
2.1
Some desiderata and difficulties
Given the setup above, how can we mathematically describe the guarantees that the companies might
desire from an improved multiple-A/B/n testing framework? For which parts can we leverage known
results and what challenges remain?
For the purpose of addressing the first question, let us adopt terminology from the hypothesis testing
literature and view each experiment as a test of a null hypothesis. Any claim that an alternative arm is
the best is called a discovery, and when such a claim is erroneous, it is called a false discovery. When
multiple hypotheses are to be tested, the scientist needs to define the quantity it wants to control.
While we may desire that the probability of even a single false discovery is small, this is usually
far too stringent for a large and unknown number of tests and results in low power. For this reason,
[1] proposed that it may be more useful to control the expected ratio of false discoveries to the total
number of discoveries (called the False Discovery Rate, or FDR for short) or the ratio of expected
number of false discoveries to the expected number of total discoveries (called the modified FDR
or mFDR for short). Over the past decades, the FDR and its variants like the mFDR have become
standard quantities for multiple testing applications. In the following, if not otherwise specified, we
use the term FDR to denote both measures in order to simplify the presentation. In Section 3, we
show that both mFDR and FDR can be controlled for different choices of procedures.
2.1.1
Challenges in viewing an MAB instance as a hypothesis test
In our setup, we want to be able to control the FDR at any time in an online manner. Online FDR
procedures were first introduced by Foster and Stine [2], and have since been studied by other authors
(e.g., [3, 4]). They are based on comparing a valid p-value $P^j$ with carefully chosen levels $\alpha_j$ for
each hypothesis test¹. We reject the null hypothesis, represented as $R_j = 1$, when $P^j \le \alpha_j$, and we
set $R_j = 0$ otherwise.
As mentioned, we want to use adaptive MAB algorithms to test each hypothesis, since they can find a
best arm among K + 1 with near-optimal sample complexity. However the traditional MAB setup
does not account for the asymmetry between the arms as is the case in a testing setup, with one being
the default (control) and others being alternatives (treatments). This is the standard scenario in A/B/n
testing applications, as e.g. a company might prefer wrong claims that the control is the best (false
negative), rather than wrong claims that an alternative is the best (false positive), simply because new
system-wide adoption of selected alternatives might involve high costs. What would be a suitable
null hypothesis in this hybrid setting? For the sake of continuous monitoring, is it possible to define
and compute always-valid p-values that are super-uniformly distributed under the null hypothesis
when computed at any time t?
In addition to asymmetry, the practical scientist might have a different incentive than the ideal outcome
for MAB algorithms as he/she might not want to find the best alternative if it is not substantially
better than the control. Indeed, if the net gain is small, it might be offset by the cost of implementing
the change from the existing default choice. By similar reasoning, we may not require identifying
the single best arm if there is a set of arms with similar means all larger than the rest. We propose a
sensible null-hypothesis for each experiment which incorporates the approximation and minimum
improvement requirement as described above, and provide an always valid p-value which can be
easily calculated at each time step in the experiment. We show that a slight modification of the usual
LUCB algorithm caters to this specific null-hypothesis while still maintaining near-optimal sample
complexity.
[Figure 1 diagram: a desired FDR level α enters the online FDR procedure, which outputs test levels $\alpha_j$; each MAB experiment Exp j produces a p-value $P^j(\alpha_j)$, the test $P^j < \alpha_j$ yields a rejection decision, and the decision feeds back into the online FDR procedure to produce $\alpha_{j+1}$ for the next experiment (MAB-FDR meta algorithm).]
Figure 1: Diagram of our MAB-FDR meta algorithm. The green solid arrows symbolize the interaction
between the MAB and FDR procedures via the FDR test levels $\alpha_j$ and rejection indicator variables $R_j$.
Notice that the $P^j$-values are now dependent, as each $\alpha_j$ depends on the past rejections $R_1, \ldots, R_{j-1}$.
The eyes represent possible continuous monitoring by the scientist.
2.1.2
Interaction between MAB and FDR
In order to take advantage of the sample efficiency of best-arm bandit algorithms, it is crucial to set the
confidence levels close to what is needed. Given a user-defined level α, at each hypothesis j, online
1 A valid $P^j$ must be stochastically dominated by a uniform distribution on [0, 1], which we henceforth refer
to as super-uniformly distributed.
FDR procedures automatically output the significance levels $\alpha_j$ that are sufficient to guarantee FDR
control, based on past decisions. Can we directly set the MAB confidence levels to these output levels
$\alpha_j$? If we do, our p-values are no longer independent across different hypotheses: $P^j$ directly
depends on the FDR level $\alpha_j$, and each $\alpha_j$ in turn depends on past MAB rejections, and thus on past
MAB p-values (see Figure 1). Does this new interaction compromise FDR guarantees?
Although known procedures as in [2, 4] guarantee FDR control for independent p-values, this does
not hold for dependent p-values in general. Hence FDR control guarantees cannot simply be obtained
out of the box. A key insight that emerges from our analysis is that an appropriate bandit algorithm
actually shapes the p-value distribution under the null in a 'good' way that allows us to control FDR.
2.2
A meta-algorithm
Procedure 1 summarizes our doubly-sequential procedure, with a corresponding flowchart in Figure 1.
We will prove theoretical guarantees after instantiating the separate modules. Note that our framework
allows the scientist to plug in their favorite best-arm MAB algorithm or online FDR procedure. The
choice for each of them determines which guarantees can be proven for the entire setup. Any
independent improvement in either of the two parts would immediately lead to an overall performance
boost of the overall framework.
Procedure 1 MAB-FDR meta-algorithm skeleton
1. The scientist sets a desired FDR control rate α.
2. For each j = 1, 2, . . . :
• Experiment j receives a designated control arm and some number of alternative arms.
• An online-FDR procedure returns an $\alpha_j$ that is some function of the past values $\{P^\ell\}_{\ell=1}^{j-1}$.
• An MAB procedure is executed with inputs (a) the control arm and K(j) alternative arms, and
(b) confidence level $\alpha_j$; it maintains an always valid p-value for each t and, if the procedure
self-terminates, returns a recommended arm.
• When the MAB procedure is terminated at time t, by itself or by the user, if the arm with the
highest empirical mean is not the control arm and $P^j_t \le \alpha_j$, then we return $P^j := P^j_t$,
and the control arm is rejected in favor of this empirically best arm.
3
A concrete procedure with guarantees
We now take the high-level road map given in Procedure 1, and show that we can obtain a concrete,
practically implementable framework with FDR control and power guarantees. We first discuss the
key modeling decisions we have to make in order to seamlessly embed MAB algorithms into an
online FDR framework. We then outline a modified version of a commonly used best-arm algorithm,
before we finally prove FDR and power guarantees for the concrete combined procedure.
3.1
Defining null hypotheses and constructing p-values
Our first task is to define a null hypothesis for each experiment. As mentioned before, the choice of
the null is not immediately obvious, since we sample from multiple distributions adaptively instead
of independently. In particular, we will generally not have the same number of samples for all arms.
Given a default mean $\mu_0$ and alternative means $\{\mu_i\}_{i=1}^{K}$, we propose that the null hypothesis for the
j-th experiment should be defined as
$$H_0^j : \mu_0 \ge \mu_i - \epsilon \quad \text{for all } i = 1, \ldots, K, \quad (1)$$
where we usually omit the index j for simplicity. It remains to define an always valid p-value
(previously defined by Johari et al. [5]) for each experiment for the purpose of continuous monitoring.
It is defined as a stochastic process $\{P_t\}_{t=1}^{\infty}$ such that for all fixed and random stopping times T,
under any distribution $P_0$ over the arm rewards such that the null hypothesis is true, we have
$$P_0(P_T \le \alpha) \le \alpha. \quad (2)$$
When all arms are drawn independently an equal number of times, by linearity of expectation one can
regard the distance of each pair of samples as a random variable drawn i.i.d. from a distribution with
mean $\tilde\mu := \mu_0 - \mu_i$. We can then view the problem as testing the standard hypothesis $H_0^j : \tilde\mu \ge -\epsilon$.
However, when the arms are pulled adaptively, a different solution needs to be found; indeed, in this
4
case, the sample means are not unbiased estimators of the true means, since the number of times an
arm was pulled now depends on the empirical means of all the arms.
Our strategy is to construct always valid p-values by using the fact that p-values can be obtained
by inverting confidence intervals. To construct always-valid confidence bounds, we resort to the
fundamental concept of the law of the iterated logarithm (LIL), for which non-asymptotic versions
have been recently derived and used for both bandits and testing problems (see [6], [7]).
To elaborate, define the function
$$\phi_n(\delta) = \sqrt{\frac{\log(\tfrac{1}{\delta}) + 3\log(\log(\tfrac{1}{\delta})) + \tfrac{3}{2}\log(\log(en))}{n}}. \quad (3)$$
If $\hat\mu_{i,n}$ is the empirical average of n independent samples from a sub-Gaussian distribution, then it is
known (see, for instance, [8, Theorem 8]) that for all $\delta \in (0, 1)$, we have
$$\max\Big\{ P\Big(\bigcup_{n=1}^{\infty}\{\hat\mu_{i,n} - \mu_i > \phi_n(\delta \wedge 0.1)\}\Big),\; P\Big(\bigcup_{n=1}^{\infty}\{\hat\mu_{i,n} - \mu_i < -\phi_n(\delta \wedge 0.1)\}\Big) \Big\} \le \delta, \quad (4)$$
where $\delta \wedge 0.1 := \min\{\delta, 0.1\}$.
We are now ready to propose single arm p-values of the form
$$P_{i,t} := \sup\Big\{\delta \in [0,1] \;:\; \hat\mu_{i,n_i(t)} - \phi_{n_i(t)}\big(\tfrac{\delta}{2K}\big) \le \hat\mu_{0,n_0(t)} + \phi_{n_0(t)}\big(\tfrac{\delta}{2}\big) + \epsilon\Big\} = \sup\big\{\delta \in [0,1] : \mathrm{LCB}_i(t) \le \mathrm{UCB}_0(t) + \epsilon\big\}. \quad (5)$$
Here we set $P_{i,t} = 1$ if the supremum is taken over an empty set. Given these single arm p-values,
the always-valid p-value for the experiment is defined as
$$P_t := \min_{s \le t}\ \min_{i=1,\ldots,K} P_{i,s}. \quad (6)$$
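In practice, $P_{i,t}$ in (5) can be evaluated numerically: since $\phi_n(\delta)$ shrinks as δ grows, the condition $\mathrm{LCB}_i(t) \le \mathrm{UCB}_0(t) + \epsilon$ holds for all δ below a threshold, so the supremum can be found by binary search. A minimal sketch, reusing the phi helper above (an assumption):

```python
def single_arm_p_value(mu_i, n_i, mu_0, n_0, K, eps=0.0, iters=50):
    # P_{i,t} from (5): the largest delta in [0, 1] for which arm i's LCB
    # does not exceed the control arm's UCB by more than eps.
    def holds(delta):
        return (mu_i - phi(n_i, delta / (2 * K))
                <= mu_0 + phi(n_0, delta / 2) + eps)
    if holds(1.0):
        return 1.0
    lo, hi = 0.0, 1.0                          # holds(lo) true, holds(hi) false
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if holds(mid) else (lo, mid)
    return lo
```

The experiment-wise p-value (6) is then the running minimum of these values over arms and time steps.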
We claim that this procedure leads to an always valid p-value (with proof in Appendix C).
Proposition 1. The sequence $\{P_t\}_{t=1}^{\infty}$ defined via equation (6) is an always valid p-value.
3.2
Adaptive sampling for best-arm identification
In the traditional A/B testing setting described in the introduction, samples are allocated uniformly
to the different alternatives. But by allowing adaptivity, decisions can be made with the same
statistical significance using far fewer samples. Suppose moreover that there is a unique maximizer
$i^* := \arg\max_{i=0,1,\ldots,K} \mu_i$, so that $\Delta_i := \mu_{i^*} - \mu_i > 0$ for all $i \ne i^*$. Then for any $\delta \in (0, 1)$, best-arm
MAB algorithms can identify $i^*$ with probability at least $1 - \delta$ based on at most² $\sum_{i \ne i^*} \Delta_i^{-2} \log(1/\delta)$
total samples (see the paper [9] for a brief survey and [10] for an application to clinical trials). In
contrast, if samples are allocated uniformly to the alternatives under the same conditions, then the
most natural procedures require $K \max_{i \ne i^*} \Delta_i^{-2} \log(K/\delta)$ samples before returning $i^*$ with probability
at least $1 - \delta$.
However, standard best-arm bandit algorithms do not by default incorporate the asymmetry induced by null
hypotheses as in definition (1). Furthermore, recall that a practical scientist might desire
the ability to incorporate approximation and a minimum improvement requirement. More precisely,
it is natural to consider the requirement that the returned arm $\hat i$ satisfies the bounds $\mu_{\hat i} \ge \mu_0 + \epsilon$
and $\mu_{\hat i} \ge \mu_{i^*} - \epsilon$ for some $\epsilon > 0$. In Algorithm 1 we present a modified MAB algorithm based on
the common LUCB algorithm (see [11, 12]) which incorporates the above desiderata. We provide a
visualization of how $\epsilon$ affects the usual stopping condition in Figure 4 in Appendix A.1.
The following proposition applies to Algorithm 1 run with a control arm indexed by i = 0 with mean
$\mu_0$ and alternative arms indexed by $i = 1, \ldots, K$ with means $\mu_i$, respectively. Let $\hat i$ denote the
random arm returned by the algorithm assuming that it exits, and define the set
$$S^* := \{i \ne 0 \mid \mu_i \ge \max_{j=1,\ldots,K} \mu_j - \epsilon \ \text{and}\ \mu_i > \mu_0 + \epsilon\}. \quad (7)$$
2 Here we have ignored some doubly-logarithmic factors.
Algorithm 1 Best-arm identification with a control arm, for confidence δ and precision ε ≥ 0
For all t let $n_i(t)$ be the number of times arm i has been pulled up to time t. In addition, for each arm
i let $\hat\mu_i(t) = \frac{1}{n_i(t)} \sum_{\tau=1}^{n_i(t)} r_i(\tau)$, and define
$$\mathrm{LCB}_i(t) := \hat\mu_{i,n_i(t)} - \phi_{n_i(t)}\big(\tfrac{\delta}{2K}\big) \quad\text{and}\quad \mathrm{UCB}_i(t) := \hat\mu_{i,n_i(t)} + \phi_{n_i(t)}\big(\tfrac{\delta}{2}\big).$$
1. Set t = 1 and sample every arm once.
2. Repeat: Compute $h_t = \arg\max_{i=0,1,\ldots,K} \hat\mu_i(t)$ and $\ell_t = \arg\max_{i=0,1,\ldots,K,\, i \ne h_t} \mathrm{UCB}_i(t)$.
(a) If $\mathrm{LCB}_0(t) > \mathrm{UCB}_i(t) - \epsilon$ for all $i \ne 0$, then output 0 and terminate.
Else if $\mathrm{LCB}_{h_t}(t) > \mathrm{UCB}_{\ell_t}(t) - \epsilon$ and $\mathrm{LCB}_{h_t}(t) > \mathrm{UCB}_0(t) + \epsilon$, then output $h_t$ and
terminate.
(b) If ε > 0, let $u_t = \arg\max_{i \ne 0} \mathrm{UCB}_i(t)$ and pull all distinct arms in $\{0, u_t, h_t, \ell_t\}$ once.
If ε = 0, pull arms $h_t$ and $\ell_t$. Set t = t + 1.
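A compact rendering of the sampling loop in Python may help fix ideas; it is a sketch, not a reference implementation, and assumes a hypothetical pull(i) that returns one fresh reward for arm i, together with the phi helper sketched after equation (3):

```python
import numpy as np

def best_arm_with_control(pull, K, delta, eps=0.0, max_steps=100000):
    # Sketch of Algorithm 1; pull(i) returns one fresh reward for arm i.
    n = np.ones(K + 1)
    s = np.array([pull(i) for i in range(K + 1)], dtype=float)   # reward sums
    for _ in range(max_steps):
        mu = s / n
        lcb = np.array([mu[i] - phi(n[i], delta / (2 * K)) for i in range(K + 1)])
        ucb = np.array([mu[i] + phi(n[i], delta / 2) for i in range(K + 1)])
        h = int(np.argmax(mu))
        l = int(np.argmax(np.where(np.arange(K + 1) == h, -np.inf, ucb)))
        if all(lcb[0] > ucb[i] - eps for i in range(1, K + 1)):
            return 0                            # control certified eps-best
        if lcb[h] > ucb[l] - eps and lcb[h] > ucb[0] + eps:
            return h                            # alternative h certified
        u = 1 + int(np.argmax(ucb[1:]))
        arms = {0, u, h, l} if eps > 0 else {h, l}
        for i in arms:
            s[i] += pull(i)
            n[i] += 1
    return None                                 # sampling budget exhausted
```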
Note that the mean associated with any index $i \in S^*$, assuming that the set is non-empty, is
guaranteed to be ε-superior to the control mean, and at most ε-inferior to the maximum mean over all
arms.
Proposition 2. Algorithm 1 terminates in finite time with probability one. Furthermore, suppose
that the samples from each arm are independent and sub-Gaussian with scale 1. Then for any
$\delta \in (0, 1)$ and $\epsilon \ge 0$, Algorithm 1 has the following guarantees:
(a) Suppose that $\mu_0 > \max_{i=1,\ldots,K} \mu_i - \epsilon$. Then with probability at least $1 - \delta$, the algorithm exits with
$\hat i = 0$ after taking at most $O\big(\sum_{i=0}^{K} \Delta_i^{-2} \log(K \log(\Delta_i^{-2})/\delta)\big)$ time steps, where
$$\Delta_0 = \min\big\{(\mu_0 + \epsilon) - \max_{j=1,\ldots,K} \mu_j,\ \epsilon\big\} \quad\text{and}\quad \Delta_i = \min\big\{(\mu_0 + \epsilon) - \mu_i,\ \epsilon\big\}.$$
(b) Otherwise, suppose that the set $S^*$ as defined in equation (7) is non-empty. Then with probability
at least $1 - \delta$, the algorithm exits with $\hat i \in S^*$ after taking at most
$O\big(\sum_{i=0}^{K} \Delta_i^{-2} \log(K \log(\Delta_i^{-2})/\delta)\big)$ time steps, where
$$\Delta_0 = \min\big\{\max_{j=1,\ldots,K} \mu_j - (\mu_0 + \epsilon),\ \epsilon\big\} \quad\text{and}\quad \Delta_i = \max\big\{\mu_{i^*} - \mu_i,\ \min\{\max_{j=1,\ldots,K} \mu_j - (\mu_0 + \epsilon),\ \epsilon\}\big\}.$$
See Appendix D for the proof of this claim. Part (a) of Proposition 2 guarantees that when no
alternative arm is ε-superior to the control arm (i.e. under the null hypothesis), the algorithm stops
and returns the control arm with probability at least $1 - \delta$. Part (b) guarantees that if there is in fact at
least one alternative that is ε-superior to the control arm (i.e. under the alternative), then the algorithm
will find at least one of them that is at most ε-inferior to the best of all possible arms.
As our algorithm is a slight modification of the LUCB algorithm, the results of [11, 12] provide insight into the number of samples taken before the algorithm terminates. Indeed, when
ε = 0 and $i^* = \arg\max_{i=0,1,\ldots,K} \mu_i$ is a unique maximizer, the nearly optimal sample complexity result of [12] implies that the algorithm terminates under settings (a) and (b) after at most
$\max_{j \ne i^*} \Delta_j^{-2} \log(K \log(\Delta_j^{-2})/\delta) + \sum_{i \ne i^*} \Delta_i^{-2} \log(\log(\Delta_i^{-2})/\delta)$ samples are taken (ignoring constants), where $\Delta_i = \mu_{i^*} - \mu_i$.
In our development to follow, we now bring back the index for experiment j, in particular using $P^j$
to denote the quantity $P^j_T$ at any stopping time T. Here the stopping time can either be defined by the
scientist, or in an algorithmic manner.
3.3
Best-arm MAB interacting with online FDR
After having established null hypotheses and p-values in the context of best-arm MAB algorithms, we
are now ready to embed them into an online FDR procedure. In the following, we consider the p-value
for the j-th experiment $P^j := P^j_{T_j}$, which is just the p-value as defined in equation (6) at the stopping
time $T_j$, which depends on $\alpha_j$.
We denote the sets of true null and false null hypotheses up to experiment J as $\mathcal{H}_0(J)$ and $\mathcal{H}_1(J)$
respectively, where we drop the argument whenever it is clear from the context. The variable
$R_j = \mathbb{1}_{P^j \le \alpha_j}$ indicates whether the null hypothesis of experiment j has been rejected, where
$R_j = 1$ denotes a claimed discovery that an alternative was better than the control. The false
discovery rate (FDR) and modified FDR up to experiment J are then defined as
$$\mathrm{FDR}(J) := \mathbb{E}\,\frac{\sum_{j \in \mathcal{H}_0} R_j}{\big(\sum_{i=1}^{J} R_i\big) \vee 1} \quad\text{and}\quad \mathrm{mFDR}(J) := \frac{\mathbb{E}\sum_{j \in \mathcal{H}_0} R_j}{\mathbb{E}\sum_{i=1}^{J} R_i + 1}. \quad (8)$$
Here the expectations are taken with respect to the distributions of the arm pulls and the respective
sampling algorithm. In general, it is not true that control of one quantity implies control of the other.
Nevertheless, in the long run (when the law of large numbers is a good approximation), one does not
expect a major difference between the two quantities in practice.
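Given rejection indicators and ground-truth null flags over many simulated runs, both quantities in (8) are simple to estimate; a minimal sketch:

```python
import numpy as np

def fdr_mfdr(R, is_null):
    # R, is_null: (runs, J) boolean arrays of rejections and true-null flags.
    false_disc = (R & is_null).sum(axis=1)
    disc = R.sum(axis=1)
    fdr = float(np.mean(false_disc / np.maximum(disc, 1)))   # E[V / (R or 1)]
    mfdr = float(false_disc.mean() / (disc.mean() + 1))      # E[V] / (E[R] + 1)
    return fdr, mfdr
```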
The set of true nulls $\mathcal{H}_0$ thus includes all experiments for which $H_0^j$ is true, and the FDR and mFDR are
well-defined for any number of experiments J, since we often desire to control FDR(J) or mFDR(J)
for all $J \in \mathbb{N}$. In order to measure power, we define the ε-best-arm discovery rate as
$$\mathrm{BDR}_\epsilon(J) := \frac{\mathbb{E}\sum_{j \in \mathcal{H}_1} R_j\, \mathbb{1}_{\mu_{\hat i} \ge \mu_{i^*} - \epsilon}\, \mathbb{1}_{\mu_{\hat i} \ge \mu_0 + \epsilon}}{|\mathcal{H}_1(J)|}. \quad (9)$$
We provide a concrete procedure (Procedure 2) for our doubly sequential framework, where we use a particular
online FDR algorithm due to Javanmard and Montanari [4] known as LORD; the reader should note
that other online FDR procedures could be used to obtain essentially the same set of guarantees. Given
a desired level α, the LORD procedure starts off with an initial 'α-wealth' of $W(0) < \alpha$. Based on
an infinite sequence $\{\gamma_i\}_{i=1}^{\infty}$ that sums to one, and the time $\tau_j$ of the most recent discovery, it uses up
a fraction $\gamma_{j - \tau_j}$ of the remaining α-wealth to test. Whenever there is a rejection, we increase the
α-wealth by $\alpha - W(0)$. A feasible choice for a stopping time in practice is $T_j := \min\{T(\alpha_j), T_S\}$,
where $T_S$ is a maximal number of samples the scientist wants to pull and $T(\alpha_j)$ is the stopping time
of the best-arm MAB algorithm run at confidence $\alpha_j$.
Procedure 2 MAB-LORD: best-arm identification with online FDR control
1. Initialize $W(0) < \alpha$, set $\tau_0 = 0$, and choose a sequence $\{\gamma_i\}$ s.t. $\sum_{i=1}^{\infty} \gamma_i = 1$.
2. At each step j, compute $\alpha_j = \gamma_{j - \tau_j} W(\tau_j)$ and $W(j + 1) = W(j) - \alpha_j + R_j\,(\alpha - W(0))$.
3. Output $\alpha_j$ and run Algorithm 1 using $\alpha_j$-confidence and stop at a stopping time $T_j$.
4. Algorithm 1 returns $P^j$ and we reject the null hypothesis if $P^j \le \alpha_j$.
5. Set $R_j = \mathbb{1}_{P^j \le \alpha_j}$, $\tau_j = \max\{\tau_{j-1},\, j R_j\}$, update $j = j + 1$ and go back to step 2.
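The wealth bookkeeping of Procedure 2 is only a few lines; a minimal sketch of one LORD variant, assuming a hypothetical run_experiment(alpha) that runs Algorithm 1 at confidence alpha and returns the p-value $P^j$ (the γ sequence is normalized so that it sums to one):

```python
import numpy as np

def mab_lord(run_experiment, alpha, J, W0=None):
    W0 = alpha / 2 if W0 is None else W0        # initial wealth W(0) < alpha
    j_idx = np.arange(1, J + 1)
    gamma = 0.07 * np.log(np.maximum(j_idx, 2)) / (j_idx * np.exp(np.sqrt(np.log(j_idx))))
    gamma = gamma / gamma.sum()                 # make the sequence sum to one
    wealth, tau, wealth_at_tau = W0, 0, W0
    rejections = []
    for j in range(1, J + 1):
        a_j = gamma[j - tau - 1] * wealth_at_tau    # alpha_j = gamma_{j - tau_j} W(tau_j)
        R_j = run_experiment(a_j) <= a_j
        wealth = wealth - a_j + R_j * (alpha - W0)
        if R_j:
            tau, wealth_at_tau = j, wealth
        rejections.append(bool(R_j))
    return rejections
```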
The following theorem provides guarantees on mFDR and power for the MAB-LORD procedure.
Theorem 1 (Online mFDR control for MAB-LORD).
(a) Procedure 2 achieves mFDR control at level α for stopping times $T_j = \min\{T(\alpha_j), T_S\}$.
(b) Furthermore, if we set $T_S = \infty$, Procedure 2 satisfies
$$\mathrm{BDR}(J) \ge \frac{\sum_{j=1}^{J} \mathbb{1}_{j \in \mathcal{H}_1} (1 - \alpha_j)}{|\mathcal{H}_1(J)|}. \quad (10)$$
See Appendix E for the proof of this claim. Note that by the arguments in the proof of Theorem 1,
mFDR control itself is actually guaranteed for any generalized α-investing procedure [3] combined
with any best-arm MAB algorithm. In fact, we could use any adaptive stopping time $T_j$ that depends
on the history only via the rejections $R_1, \ldots, R_{j-1}$. Furthermore, using a modified LORD proposed
by Javanmard and Montanari [13], we can also guarantee FDR control; this result is deferred to
Appendix F due to space constraints. It is noteworthy that small values of α not only guarantee a
smaller FDR but also a higher BDR. However, there is no free lunch: a smaller α implies a
smaller $\alpha_j$ at each experiment, resulting in a larger required number of pulls for the best-arm
MAB algorithm.
4 Experimental results
In the following, we briefly describe some results of our experiments3 on both simulated and real-world data sets, which illustrate that, apart from FDR control, MAB-FDR (used interchangeably with
MAB-LORD here) is highly advantageous in terms of sample complexity and power compared to a
straightforward embedding of A/B testing in online FDR procedures. Unless otherwise noted, we set
ε = 0 in all of our simulations to focus on the main ideas and keep the discussion concise.
Competing procedures There are two natural frameworks to compare against MAB-FDR. The
first, called AB-FDR or AB-LORD, swaps the MAB part for an A/B (i.e. A/B/n) test (uniformly
sampling all alternatives until termination). The second comparator exchanges the online FDR
control for independent testing at level α for all hypotheses; we call this MAB-IND. Formally, AB-FDR
swaps step 3 in Procedure 2 with "Output α_j and uniformly sample each arm until stopping time T_j",
while MAB-IND swaps step 4 in Procedure 2 with "The algorithm returns P^j and we reject the null
hypothesis if P^j ≤ α". In order to compare the performances of these procedures, we ran three sets
of simulations using Procedure 2 with ε = 0 and $\gamma_j = 0.07 \frac{\log(j \vee 2)}{j e^{\sqrt{\log j}}}$ as in [4].
Our experiments are run on artificial data with Gaussian/Bernoulli draws and real-world Bernoulli
draws from the New Yorker Cartoon Caption Contest. Recall that the sample complexity of the
best-arm MAB algorithm is determined by the gaps Δ_j = μ_{i⋆} − μ_j. One of the main relevant
differences to consider between an experiment of artificial or real-world nature is thus the distribution
of the means μ_i for i = 1, ..., K. The artificial-data simulations are run with a fixed gap Δ_2, while
the means of the other arms are set uniformly in [0, μ_2]. For our real-world simulations, we use
empirical means computed from the cartoon caption contest (see details in Appendix B.1.1). In
addition, the contests actually follow a natural chronological order, which makes this dataset highly
relevant to our purposes. In all simulations, 60% of all the hypotheses are true nulls, and their indices
are chosen uniformly. Due to space constraints, the experimental results for artificial and real-world
Bernoulli draws are deferred to Appendix B.
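As one illustrative reading of this setup, the artificial arm means could be generated as in the sketch below; the specific constants and the placement of the runner-up arm are our assumptions for illustration, not values taken from the paper.

import numpy as np

def artificial_means(num_arms, mu_star=8.0, gap=2.0, seed=0):
    """Illustrative generator for the Gaussian simulations: the best arm has
    mean mu_star, the runner-up sits a fixed gap Delta_2 below it, and the
    remaining means are drawn uniformly in [0, mu_2]."""
    rng = np.random.RandomState(seed)
    mu2 = mu_star - gap
    others = rng.uniform(0.0, mu2, size=max(num_arms - 2, 0))
    return np.concatenate(([mu_star, mu2], others))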
Figure 2: (a) Power vs. truncation time T_S (per hypothesis) for 50 arms, and (b) sample complexity
vs. number of arms for truncation time T_S = 300, for Gaussian draws with fixed μ_{i⋆} = 8, σ = 3 over 500
hypotheses with 200 non-nulls, averaged over 100 runs with α = 0.1.
Power and sample complexity In this section we include figures on artificial Gaussian trials which
confirm that the total number of pulls needed to determine significance is much smaller for MAB-FDR than for AB-FDR. In Fig. 2(a) we fix the number of arms and plot the BDR with ε = 0 (BDR
for short) for both procedures over different choices of truncation times T_S. A low BDR indicates that
the algorithm often reaches the truncation time before it can stop. For Fig. 2(b) we fix T_S and show
how the sample complexity varies with the number of arms.
Observe in Fig. 2(a) that the power at any given truncation time is much higher for MAB-FDR than for
AB-FDR. This is because the best-arm MAB is more likely to satisfy the stopping criterion before
any given truncation time than the uniform sampling algorithm. Fig. 2(b) qualitatively shows how
the total number of necessary arm pulls for AB-FDR increases much faster with the number of arms
than for MAB-FDR, before it plateaus due to the truncation. Recall that whenever the best-arm MAB
stops before the truncation time in each hypothesis, the stopping criterion is met, i.e. the best arm is
identified with probability at least 1 − α_j, so that the power is bound to be close to one whenever
T_j = T(α_j).
3 The code for reproducing all experiments and plots in this paper is publicly available at
https://github.com/fanny-yang/MABFDR
mFDR control For Fig. 3, we again consider Gaussian draws as in Fig. 2. This time, however, for
each true null hypothesis we skip the bandit experiment and directly draw P^j ∼ U[0, 1] to compare
with the significance levels α_j from our online FDR procedure (see App. B.2 for motivation of this
setting). By Theorem 1, mFDR should still be controlled, as it only requires the p-values to
be super-uniform. In Fig. 3(a) we plot the instantaneous false discovery proportion
$$\mathrm{FDP}(J) = \frac{\sum_{j \in \mathcal{H}_0(J)} R_j}{\sum_{j=1}^{J} R_j}$$
over the hypothesis index for different runs with the same settings. Apart from initial fluctuations due
to the relatively small denominator, observe how the guarantee on FDR(J) = E[FDP(J)], whose
empirical value is shown by the red line, transfers to the control of each individual run (blue lines).
Figure 3: (a) Single runs of MAB-LORD (blue) and their average (red), with uniformly drawn p-values
for null hypotheses and Gaussian draws as in Figure 2. (b) mFDR over different proportions of
non-nulls π_1, with the same settings, averaged over 80 runs.
In Figure 3(b), we compare the mFDR of MAB-FDR against MAB-IND and a Bonferroni-type
correction. The latter uses a simple union bound and chooses $\alpha_j = \frac{6\alpha}{\pi^2 j^2}$, such that
$\sum_{j=1}^{\infty} \alpha_j \leq \alpha$, which trivially yields any-time FWER control and hence implies FDR control. As expected, Bonferroni is too
conservative and barely makes any rejections, whereas the naive MAB-IND approach does not control
FDR. LORD avoids both extremes and controls FDR while retaining reasonable power.
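The normalization behind this Bonferroni-type choice is elementary: since Σ_{j≥1} 1/j² = π²/6, the levels below sum to at most α. A two-line numerical check of this arithmetic (our own illustration):

import numpy as np

alpha = 0.1
j = np.arange(1, 10**6)
alpha_j = 6 * alpha / (np.pi ** 2 * j ** 2)
print(alpha_j.sum())  # approaches alpha = 0.1 from below as the horizon grows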
5 Discussion
The recent focus in popular media on the lack of reproducibility of scientific results erodes the
public's confidence in published scientific research. To maintain the credibility of claimed discoveries,
simply decreasing the statistical significance level α of each individual experimental work (e.g.,
rejecting at level 0.001 rather than 0.05) would drastically hurt power. A common approach is instead
to control the ratio of false discoveries to claimed discoveries at some desired value over many
sequential experiments, requiring the statistical significances α_j to change from experiment to
experiment. Unlike earlier works on online FDR control, our framework synchronously interacts
with adaptive sampling methods like MABs to make the overall sampling procedure per experiment
much more efficient than uniform sampling. To the best of our knowledge, this is the first work that
successfully combines the benefits of adaptive sampling and FDR control. It is worthwhile to note that
any improvement, theoretical or practical, to either online FDR algorithms or best-arm identification
in MABs immediately results in a corresponding improvement for our MAB-FDR framework.
More general notions of FDR with corresponding online procedures have recently been developed by
Ramdas et al. [14]. In particular, they incorporate the notion of memory and a priori importance of
each hypothesis. This could prove to be a valuable extension for our setting, especially in cases where
only the percentage of wrong rejections in the recent past matters. It would be useful to establish
FDR control for these generalized notions of FDR as well.
Acknowledgements
This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force
Office of Scientific Research Grant AFOSR-FA9550-14-1-001, and National Science Foundation
Grants CIF-31712-23800 and DMS-1309356.
References
[1] Y. Benjamini and Y. Hochberg, "Controlling the false discovery rate: a practical and powerful approach to multiple testing," Journal of the Royal Statistical Society, Series B (Methodological), pp. 289–300, 1995.
[2] D. P. Foster and R. A. Stine, "α-investing: a procedure for sequential control of expected false discoveries," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 70, no. 2, pp. 429–444, 2008.
[3] E. Aharoni and S. Rosset, "Generalized α-investing: definitions, optimality results and application to public databases," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 76, no. 4, pp. 771–794, 2014.
[4] A. Javanmard and A. Montanari, "Online rules for control of false discovery rate and false discovery exceedance," The Annals of Statistics, 2017.
[5] R. Johari, L. Pekelis, and D. J. Walsh, "Always valid inference: Bringing sequential analysis to A/B testing," arXiv preprint arXiv:1512.04922, 2015.
[6] K. G. Jamieson, M. Malloy, R. D. Nowak, and S. Bubeck, "lil'UCB: An optimal exploration algorithm for multi-armed bandits," in COLT, vol. 35, 2014, pp. 423–439.
[7] A. Balsubramani and A. Ramdas, "Sequential nonparametric testing with the law of the iterated logarithm," in Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2016, pp. 42–51.
[8] E. Kaufmann, O. Cappé, and A. Garivier, "On the complexity of best arm identification in multi-armed bandit models," The Journal of Machine Learning Research, 2015.
[9] K. Jamieson and R. Nowak, "Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting," in Information Sciences and Systems (CISS), 2014 48th Annual Conference on. IEEE, 2014, pp. 1–6.
[10] S. S. Villar, J. Bowden, and J. Wason, "Multi-armed bandit models for the optimal design of clinical trials: benefits and challenges," Statistical Science: a review journal of the Institute of Mathematical Statistics, vol. 30, no. 2, p. 199, 2015.
[11] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone, "PAC subset selection in stochastic multi-armed bandits," in Proceedings of the 29th International Conference on Machine Learning (ICML-12), 2012, pp. 655–662.
[12] M. Simchowitz, K. Jamieson, and B. Recht, "The simulator: Understanding adaptive sampling in the moderate-confidence regime," arXiv preprint arXiv:1702.05186, 2017.
[13] A. Javanmard and A. Montanari, "On online control of false discovery rate," arXiv preprint arXiv:1502.06197, 2015.
[14] A. Ramdas, F. Yang, M. J. Wainwright, and M. I. Jordan, "Online control of the false discovery rate with decaying memory," in Advances in Neural Information Processing Systems 2017, arXiv preprint arXiv:1710.00499, 2017.
6,829 | 7,178 | Fader Networks:
Manipulating Images by Sliding Attributes
Guillaume Lample1,2, Neil Zeghidour1,3, Nicolas Usunier1,
Antoine Bordes1, Ludovic Denoyer2, Marc'Aurelio Ranzato1
{gl,neilz,usunier,abordes,ranzato}@fb.com
[email protected]
Abstract
This paper introduces a new encoder-decoder architecture that is trained to reconstruct images by disentangling the salient information of the image and the
values of attributes directly in the latent space. As a result, after training, our
model can generate different realistic versions of an input image by varying the
attribute values. By using continuous attribute values, we can choose how much a
specific attribute is perceivable in the generated image. This property could allow
for applications where users can modify an image using sliding knobs, like faders
on a mixing console, to change the facial expression of a portrait, or to update
the color of some objects. Compared to the state-of-the-art which mostly relies
on training adversarial networks in pixel space by altering attribute values at train
time, our approach results in much simpler training schemes and nicely scales to
multiple attributes. We present evidence that our model can significantly change
the perceived value of the attributes while preserving the naturalness of images.
1 Introduction
We are interested in the problem of manipulating natural images by controlling some attributes
of interest. For example, given a photograph of the face of a person described by their gender,
age, and expression, we want to generate a realistic version of this same person looking older
or happier, or an image of a hypothetical twin of the opposite gender. This task and the related
problem of unsupervised domain transfer recently received a lot of interest [18, 25, 10, 27, 22, 24],
as a case study for conditional generative models but also for applications like automatic image
editing. The key challenge is that the transformations are ill-defined and training is unsupervised: the
training set contains images annotated with the attributes of interest, but there is no example of the
transformation: in many cases, such as the "gender swapping" example above, there are no pairs of
images representing the same person as a male or as a female. In other cases, collecting examples
requires a costly annotation process, like taking pictures of the same person with and without glasses.
Our approach relies on an encoder-decoder architecture where, given an input image x with its
attributes y, the encoder maps x to a latent representation z, and the decoder is trained to reconstruct
x given (z, y). At inference time, a test image is encoded in the latent space, and the user chooses
the attribute values y that are fed to the decoder. Even with binary attribute values at train time,
each attribute can be considered as a continuous variable during inference to control how much it is
perceived in the final image. We call our architecture Fader Networks, in analogy to the sliders of an
audio mixing console, since the user can choose how much of each attribute they want to incorporate.
1 Facebook AI Research
2 Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6
3 LSCP, ENS, EHESS, CNRS, PSL Research University, INRIA
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Interpolation between different attributes (Zoom in for better resolution). Each line shows
reconstructions of the same face with different attribute values, where each attribute is controlled as a
continuous variable. It is then possible to make an old person look older or younger, a man look more
manly or to imagine his female version. Left images are the originals.
The fundamental feature of our approach is to constrain the latent space to be invariant to the attributes
of interest. Concretely, it means that the distribution over images of the latent representations should
be identical for all possible attribute values. This invariance is obtained by using a procedure similar
to domain-adversarial training (see e.g., [21, 6, 15]). In this process, a classifier learns to predict the
attributes y given the latent representation z during training while the encoder-decoder is trained
based on two objectives at the same time. The first objective is the reconstruction error of the decoder,
i.e., the latent representation z must contain enough information to allow for the reconstruction of the
input. The second objective consists in fooling the attribute classifier, i.e., the latent representation
must prevent it from predicting the correct attribute values. In this model, achieving invariance is a
means to filter out, or hide, the properties of the image that are related to the attributes of interest.
A single latent representation thus corresponds to different images that share a common structure
but with different attribute values. The reconstruction objective then forces the decoder to use the
attribute values to choose, from the latent representation, the intended image.
Our motivation is to learn a disentangled latent space in which we have explicit control on some
attributes of interest, without supervision of the intended result of modifying attribute values. With
a similar motivation, several approaches have been tested on the same tasks [18, 25], on related
image-to-image translation problems [10, 27], or for more specific applications like the creation of
parametrized avatars [24]. In addition to a reconstruction loss, the vast majority of these works rely
on adversarial training in pixel space, which compares during training images generated with an
intentional change of attributes from genuine images for the target attribute values. Our approach is
different both because we use adversarial training for the latent space instead of the output, but also
because adversarial training aims at learning invariance to attributes. The assumption underlying our
work is that a high fidelity to the input image is less conflicting with the invariance criterion, than
with a criterion that forces the hallucinated image to match images from the training set.
As a consequence of this principle, our approach results in much simpler training pipelines than those
based on adversarial training in pixel space, and is readily amenable to controlling multiple attributes,
by adding new output variables to the discriminator of the latent space. As shown in Figure 1 on test
images from the CelebA dataset [14], our model can make subtle changes to portraits that end up
sufficient to alter the perceived value of attributes while preserving the natural aspect of the image
and the identity of the person. Our experiments show that our model outperforms previous methods
based on adversarial training on the decoders' output, like [18], in terms of both reconstruction loss
and generation quality as measured by human subjects. We believe this disentanglement approach is
a serious competitor to the widespread adversarial losses on the decoder output for such tasks.
In the remainder of the paper, we discuss in more details the related work in Section 2. We then
present the training procedure in Section 3 before describing the network architecture and the
implementation in Section 4. Experimental results are shown in Section 5.
2 Related work
There is substantial literature on attribute-based and/or conditional image generation that can be split
in terms of required supervision, with three different levels. At one extreme are fully supervised
approaches developed to model known transformations, where examples take the form of (input,
transformation, result of the transformation). In that case, the model needs to learn the desired
transformation. This setting was previously explored to learn affine transformations [9], 3D rotations
[26], lighting variations [12] and 2D video game animations [20]. The methods developed in these
works however rely on the supervised setting, and thus cannot be applied in our setup.
At the other extreme of the supervision spectrum lie fully unsupervised methods that aim at learning
deep neural networks that disentangle the factors of variations in the data, without specification of
the attributes. Example methods are InfoGAN [4], or the predictability minimization framework
proposed in [21]. The neural photo editor [3] disentangles factors of variations in natural images for
image edition. [8] introduced the beta-VAE, a modification of the variational autoencoder (VAE)
framework that can learn latent factorized representations in a completely unsupervised manner. This
setting is considerably harder than the one we consider, and in general, it may be difficult with these
methods to automatically discover high-level concepts such as gender or age.
Our work lies in between the two previous settings; it is related to the adversarial disentangling of
factors of variation as in [16]. Methods
developed for unsupervised domain transfer [10, 27, 22, 24] can also be applied in our case: given
two different domains of images such as "drawings" and "photographs", one wants to map an image
from one domain to the other without supervision; in our case, a domain would correspond to an
attribute value. The mappings are trained using adversarial training in pixel space as mentioned in
the introduction, using separate encoders and/or decoders per domain, and thus do not scale well to
multiple attributes.
attributes, the Invertible conditional GAN [18] first trains a GAN conditioned on the attribute values,
and in a second step learns to map input images to the latent space of the GAN, hence the name of
invertible GANs. It is used as a baseline in our experiments. Antipov et al. [1] use a pre-trained face
recognition system instead of a conditional GAN to learn the latent space, and only focuses on the
age attribute. The attribute-to-image approach [25] is a variational auto-encoder that disentangles
foreground and background to generate images using attribute values only. Conditional generation is
performed by inferring the latent state given the correct attributes and then changing the attributes.
Additionally, our work is related to work on learning invariant latent spaces using adversarial training
in domain adaptation [6], fair classification [5] and robust inference [15]. The training criterion
we use for enforcing invariance is similar to the one used in those works, the difference is that the
end-goal of these works is only to filter out nuisance variables or sensitive information. In our case,
we learn generative models, and invariance is used as a means to force the decoder to use attribute
information in its reconstruction.
Finally, for the application of automatically modifying faces using attributes, the feature interpolation
approach of [23] presents a means to generate alterations of images based on attributes using a
pre-trained network on ImageNet. While their approach is interesting from an application perspective,
their inference is costly and, since it relies on pre-trained models, it cannot naturally incorporate
factors or attributes that have not been foreseen during the pre-training.
3 Fader Networks
Let X be an image domain and Y the set of possible attributes associated with images in X, where
in the case of people's faces typical attributes are glasses/no glasses, man/woman, young/old. For
simplicity, we consider here the case where attributes are binary, but our approach could be extended
to categorical attributes. In that setting, Y = {0, 1}^n, where n is the number of attributes. We have a
training set D = {(x^1, y^1), ..., (x^m, y^m)} of m (image, attribute) pairs (x^i ∈ X, y^i ∈ Y). The end
goal is to learn from D a model that will generate, for any attribute vector y′, a version of an input
image x whose attribute values correspond to y′.
Encoder-decoder architecture Our model, described in Figure 2, is based on an encoder-decoder
architecture with domain-adversarial training on the latent space. The encoder E_θenc : X → R^N is a
convolutional neural network with parameters θ_enc that maps an input image to its N-dimensional
latent representation E_θenc(x). The decoder D_θdec : (R^N, Y) → X is a deconvolutional network with
parameters θ_dec that produces a new version of the input image given its latent representation E_θenc(x)
and any attribute vector y′. When the context is clear, we simply use D and E to denote D_θdec and
E_θenc. The precise architectures of the neural networks are described in Section 4.
loss associated to this architecture is a classical mean squared error (MSE) that measures the quality
of the reconstruction of a training input x given its true attribute vector y:
$$\mathcal{L}_{AE}(\theta_{enc}, \theta_{dec}) = \frac{1}{m} \sum_{(x,y) \in \mathcal{D}} \left\| D_{\theta_{dec}}\!\left(E_{\theta_{enc}}(x), y\right) - x \right\|_2^2$$
The exact choice of the reconstruction loss is not fundamental in our approach, and adversarial losses
such as PatchGAN [13] could be used in addition to the MSE at this stage to obtain better textures or
sharper images, as in [10]. Using a mean absolute or mean squared error is still necessary to ensure
that the reconstruction matches the original image.
Ideally, modifying y in D(E(x), y) would generate images with different perceived attributes, but
similar to x in every other aspect. However, without additional constraints, the decoder learns to
ignore the attributes, and modifying y at test time has no effect.
Learning attribute-invariant latent representations To avoid this behavior, our approach is to
learn latent representations that are invariant with respect to the attributes. By invariance, we mean
that given two versions x and x′ of the same object that are the same up to their attribute values, for
instance two images of the same person with and without glasses, the two latent representations
E(x) and E(x′) should be the same. When such an invariance is satisfied, the decoder must use the
attribute to reconstruct the original image. Since the training set does not contain different versions
of the same image, this constraint cannot be trivially added in the loss.
We hence propose to incorporate this constraint by doing adversarial training on the latent space.
This idea is inspired by the work on predictability minimization [21] and adversarial training for
domain adaptation [6, 15] where the objective is also to learn an invariant latent representation using
an adversarial formulation of the learning objective. To that end, an additional neural network called
the discriminator is trained to identify the true attributes y of a training pair (x, y) given E(x). The
invariance is obtained by learning the encoder E such that the discriminator is unable to identify the
right attributes. As in GANs [7], this corresponds to a two-player game where the discriminator aims
at maximizing its ability to identify attributes, and E aims at preventing it from being a good discriminator.
The exact structure of our discriminator is described in Section 4.
Discriminator objective The discriminator outputs the probability P_θdis(y | E(x)) of an attribute
vector, where θ_dis are the discriminator's parameters. Using the subscript k to refer to
the k-th attribute, we have $\log P_{\theta_{dis}}(y \mid E(x)) = \sum_{k=1}^{n} \log P_{\theta_{dis},k}(y_k \mid E(x))$. Since the objective of the
discriminator is to predict the attributes of the input image given its latent representation, its loss
depends on the current state of the encoder and is written as:
$$\mathcal{L}_{dis}(\theta_{dis} \mid \theta_{enc}) = -\frac{1}{m} \sum_{(x,y) \in \mathcal{D}} \log P_{\theta_{dis}}\!\left(y \mid E_{\theta_{enc}}(x)\right) \qquad (1)$$
Adversarial objective The objective of the encoder is now to compute a latent representation that
optimizes two objectives. First, the decoder should be able to reconstruct x given E(x) and y; at
the same time, the discriminator should not be able to predict y given E(x). We consider that a
mistake is made when the discriminator predicts 1 − y_k for attribute k. Given the discriminator's
parameters, the complete loss of the encoder-decoder architecture is then:
$$\mathcal{L}(\theta_{enc}, \theta_{dec} \mid \theta_{dis}) = \frac{1}{m} \sum_{(x,y) \in \mathcal{D}} \Big[ \left\| D_{\theta_{dec}}\!\left(E_{\theta_{enc}}(x), y\right) - x \right\|_2^2 \; - \; \lambda_E \log P_{\theta_{dis}}\!\left(1 - y \mid E_{\theta_{enc}}(x)\right) \Big], \qquad (2)$$
where λ_E > 0 controls the trade-off between the quality of the reconstruction and the invariance
of the latent representations. Large values of λ_E will restrain the amount of information about x
contained in E(x) and result in blurry images, while low values limit the decoder's dependency on
the latent code y and will result in poor effects when altering attributes.
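To make equations (1) and (2) concrete, here is a minimal PyTorch-style sketch of the two losses. The module interfaces (encoder, decoder, discriminator) and the use of one Bernoulli logit per attribute are our assumptions for illustration, not the authors' released implementation; y is assumed to be a float tensor of 0/1 attribute values.

import torch
import torch.nn.functional as F

def discriminator_loss(discriminator, z, y):
    # Equation (1): minimize -log P(y | E(x)); with one Bernoulli per attribute
    # this is a binary cross-entropy over the n attribute logits.
    logits = discriminator(z.detach())  # detach: this loss trains only the discriminator
    return F.binary_cross_entropy_with_logits(logits, y)

def encoder_decoder_loss(decoder, discriminator, z, x, y, lambda_e):
    # Equation (2): reconstruction MSE plus the adversarial term, which is
    # -log P(1 - y | E(x)), i.e. a binary cross-entropy with flipped targets.
    reconstruction = F.mse_loss(decoder(z, y), x)
    adversarial = F.binary_cross_entropy_with_logits(discriminator(z), 1.0 - y)
    return reconstruction + lambda_e * adversarial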
Figure 2: Main architecture. An (image, attribute) pair (x, y) is given as input. The encoder maps x
to the latent representation z; the discriminator is trained to predict y given z whereas the encoder
is trained to make it impossible for the discriminator to predict y given z only. The decoder should
reconstruct x given (z, y). At test time, the discriminator is discarded and the model can generate
different versions of x when fed with different attribute values.
Learning algorithm Overall, given the current state of the encoder, the optimal discriminator
parameters satisfy $\theta_{dis}^{\star}(\theta_{enc}) \in \mathrm{argmin}_{\theta_{dis}} \mathcal{L}_{dis}(\theta_{dis} \mid \theta_{enc})$. If we ignore problems related to multiple
(and local) minima, the overall objective function is
$$\theta_{enc}^{\star}, \theta_{dec}^{\star} = \underset{\theta_{enc}, \theta_{dec}}{\mathrm{argmin}}\; \mathcal{L}\!\left(\theta_{enc}, \theta_{dec} \mid \theta_{dis}^{\star}(\theta_{enc})\right).$$
In practice, it is unreasonable to solve for $\theta_{dis}^{\star}(\theta_{enc})$ at each update of θ_enc. Following the practice of
adversarial training for deep networks, we use stochastic gradient updates for all parameters, considering the current value of θ_dis as an approximation for $\theta_{dis}^{\star}(\theta_{enc})$. Given a training example (x, y), let
us denote by $\mathcal{L}_{dis}(\theta_{dis} \mid \theta_{enc}, x, y)$ the discriminator loss restricted to (x, y) and by $\mathcal{L}(\theta_{enc}, \theta_{dec} \mid \theta_{dis}, x, y)$
the corresponding auto-encoder loss. The update at time t, given the current parameters $\theta_{dis}^{(t)}$, $\theta_{enc}^{(t)}$,
$\theta_{dec}^{(t)}$ and the training example $(x^{(t)}, y^{(t)})$, is:
$$\theta_{dis}^{(t+1)} = \theta_{dis}^{(t)} - \eta \nabla_{\theta_{dis}} \mathcal{L}_{dis}\!\left(\theta_{dis}^{(t)} \mid \theta_{enc}^{(t)}, x^{(t)}, y^{(t)}\right)$$
$$[\theta_{enc}^{(t+1)}, \theta_{dec}^{(t+1)}] = [\theta_{enc}^{(t)}, \theta_{dec}^{(t)}] - \eta \nabla_{\theta_{enc}, \theta_{dec}} \mathcal{L}\!\left(\theta_{enc}^{(t)}, \theta_{dec}^{(t)} \mid \theta_{dis}^{(t+1)}, x^{(t)}, y^{(t)}\right).$$
The details of training and models are given in the next section.
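The alternating update above can be sketched as follows, reusing the two loss functions from the previous sketch; the optimizers and learning rate are supplied by the caller, and this is again an illustrative sketch rather than the authors' code.

def training_step(x, y, encoder, decoder, discriminator, opt_dis, opt_ae, lambda_e):
    # (i) Discriminator update on the current encoder (first update above).
    l_dis = discriminator_loss(discriminator, encoder(x), y)
    opt_dis.zero_grad(); l_dis.backward(); opt_dis.step()
    # (ii) Encoder-decoder update against the freshly updated discriminator.
    # (Gradients that leak into the discriminator here are cleared at the next step (i).)
    l_ae = encoder_decoder_loss(decoder, discriminator, encoder(x), x, y, lambda_e)
    opt_ae.zero_grad(); l_ae.backward(); opt_ae.step()
    return l_dis.item(), l_ae.item()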
4 Implementation
We adapt the architecture of our network from [10]. Let C_k be a Convolution-BatchNorm-ReLU
layer with k filters. Convolutions use 4 × 4 kernels, with a stride of 2 and a padding of 1, so
that each layer of the encoder divides the size of its input by 2. We use leaky-ReLUs with a slope of
0.2 in the encoder, and simple ReLUs in the decoder.
The encoder consists of the following 7 layers:
$$C_{16} \rightarrow C_{32} \rightarrow C_{64} \rightarrow C_{128} \rightarrow C_{256} \rightarrow C_{512} \rightarrow C_{512}$$
Input images have a size of 256 × 256. As a result, the latent representation of an image consists of
512 feature maps of size 2 × 2. In our experiments, using 6 layers gave us similar results, while 8
layers significantly decreased the performance, even when using more feature maps in the latent state.
To provide the decoder with image attributes, we append the latent code to each layer given as input to
the decoder, where the latent code of an image is the concatenation of the one-hot vectors representing
the values of its attributes (binary attributes are represented as [1, 0] and [0, 1]). We append the latent
code as additional constant input channels for all the convolutions of the decoder. Denoting by n the
number of attributes (hence a code of size 2n), the decoder is symmetric to the encoder, but uses
transposed convolutions for the up-sampling:
$$C_{512+2n} \rightarrow C_{512+2n} \rightarrow C_{256+2n} \rightarrow C_{128+2n} \rightarrow C_{64+2n} \rightarrow C_{32+2n} \rightarrow C_{16+2n}.$$
The discriminator is a C_512 layer followed by a fully-connected neural network with two layers of size
512 and n respectively.

Model         Naturalness                Accuracy
              Mouth   Smile   Glasses    Mouth   Smile   Glasses
Real Image    92.6    87.0    88.6       89.0    88.3    97.6
IcGAN AE      22.7    21.7    14.8       88.1    91.7    86.2
IcGAN Swap    11.4    22.9     9.6       10.1     9.9    47.5
FadNet AE     88.4    75.2    78.8       91.8    90.1    94.5
FadNet Swap   79.0    31.4    45.3       66.2    97.1    76.6

Table 1: Perceptual evaluation of naturalness and swap accuracy for each model. The naturalness
score is the percentage of images that were labeled as "real" by human evaluators to the question "Is
this image a real photograph or a fake generated by a graphics engine?". The accuracy score is the
classification accuracy by human evaluators on the values of each attribute.
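A rough PyTorch sketch of the C_k building block and the encoder stack described above; the final projection back to RGB and the exact concatenation mechanics in the decoder are omitted, and all names are our own.

import torch
import torch.nn as nn

def C(c_in, c_out, transpose=False):
    # Ck block: 4x4 convolution (stride 2, padding 1), BatchNorm, then
    # leaky-ReLU(0.2) in the encoder or plain ReLU in the decoder.
    conv = (nn.ConvTranspose2d if transpose else nn.Conv2d)(c_in, c_out, 4, 2, 1)
    act = nn.ReLU(inplace=True) if transpose else nn.LeakyReLU(0.2, inplace=True)
    return nn.Sequential(conv, nn.BatchNorm2d(c_out), act)

widths = [16, 32, 64, 128, 256, 512, 512]
encoder = nn.Sequential(*[C(i, o) for i, o in zip([3] + widths[:-1], widths)])
# A 256x256 RGB input yields 512 feature maps of size 2x2, as stated above.

def append_code(h, code):
    # Tile the 2n-dimensional attribute code as constant channels of the
    # feature map h; the decoder applies this before every transposed conv.
    b, _, height, width = h.shape
    tiled = code.view(b, -1, 1, 1).expand(-1, -1, height, width)
    return torch.cat([h, tiled], dim=1)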
Dropout We found it beneficial to add dropout in our discriminator. We hypothesized that dropout
helped the discriminator to rely on a wider set of features in order to infer the current attributes,
improving and stabilizing its accuracy, and consequently giving better feedback to the encoder. We
set the dropout rate to 0.3 in all our experiments. Following [10], we also tried to add dropout in
the first layers of the decoder, but in our experiments, this turned out to significantly decrease the
performance.
Discriminator cost scheduling Similarly to [2], we use a variable weight for the discriminator loss
coefficient λ_E. We initially set λ_E to 0 and the model is trained like a normal auto-encoder. Then,
λ_E is linearly increased to 0.0001 over the first 500,000 iterations to slowly encourage the model
to produce invariant representations. This scheduling turned out to be critical in our experiments.
Without it, we observed that the encoder was too affected by the loss coming from the discriminator,
even for low values of λ_E.
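The schedule itself is a one-liner; the function name is ours:

def lambda_e_schedule(step, target=0.0001, warmup_steps=500_000):
    # Linear ramp from 0 to the final discriminator weight over the warm-up.
    return target * min(1.0, step / warmup_steps)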
Model selection Model selection was first performed automatically using two criteria. First, we
used the reconstruction error on original images as measured by the MSE. Second, we also want the
model to properly swap the attributes of an image. For this second criterion, we train a classifier
to predict image attributes. At the end of each epoch, we swap the attributes of each image in the
validation set and measure how well the classifier performs on the decoded images. These two
metrics were used to shortlist potentially good models. The final model was selected based on human
evaluation on images from the train set reconstructed with swapped attributes.
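A sketch of the swap-accuracy criterion used for model selection, assuming a separately trained attribute classifier that outputs one logit per binary attribute; all interfaces here are illustrative assumptions.

import torch

@torch.no_grad()
def swap_accuracy(encoder, decoder, attribute_classifier, loader):
    correct, total = 0, 0
    for x, y in loader:
        z = encoder(x)
        y_swapped = 1.0 - y                # flip every binary attribute
        x_swapped = decoder(z, y_swapped)  # decode with swapped attributes
        pred = (attribute_classifier(x_swapped) > 0).float()
        correct += (pred == y_swapped).sum().item()
        total += y.numel()
    return correct / total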
5 Experiments
5.1 Experiments on the CelebA dataset
Experimental setup We first present experiments on the CelebA dataset [14], which contains
200,000 images of celebrities of shape 178 × 218, annotated with 40 attributes. We used the standard
training, validation and test split. All pictures presented in the paper or used for evaluation have been
taken from the test set. For pre-processing, we cropped images to 178 × 178 and resized them to
256 × 256, which is the resolution used in all figures of the paper. Image values were normalized
to [−1, 1]. All models were trained with Adam [11], using a learning rate of 0.002, β_1 = 0.5, and a
batch size of 32. We performed data augmentation by flipping images horizontally with a probability
of 0.5 at each iteration. As a baseline, we used IcGAN [18] with the model provided by the authors
and trained on the same dataset.4
4 https://github.com/Guim3/IcGAN
Figure 3: Swapping the attributes of different faces. Zoom in for better resolution.
Qualitative evaluation Figure 3 shows examples of images generated when swapping different
attributes: the generated images have a high visual quality and clearly handle the attribute value
changes, for example by adding realistic glasses to the different faces. These generated images
confirm that the latent representation learned by Fader Networks is invariant to the attribute
values, while still capturing the information needed to generate any version of a face, for any attribute
value. Indeed, when looking at the shape of the generated glasses, different glasses shapes and colors
have been integrated into the original face depending on the face: our model is not only adding
"generic" glasses to all faces, but generates plausible glasses depending on the input.
Quantitative evaluation protocol We performed a quantitative evaluation of Fader Networks on
Mechanical Turk, using IcGAN as a baseline. We chose the three attributes Mouth (Open/Close),
Smile (With/Without) and Glasses (With/Without) as they were attributes in common between IcGAN
and our model. We evaluated two different aspects of the generated images: the naturalness, which
measures the quality of generated images, and the accuracy, which measures how well swapping an
attribute value is reflected in the generation. Both measures are necessary to assess that we generate
natural images, and that the swap is effective. We compare: Real Image, which provides original
images without transformation; FadNet AE and IcGAN AE, which reconstruct original images
without attribute alteration; and FadNet Swap and IcGAN Swap, which generate images with one
swapped attribute, e.g., With Glasses → Without Glasses. Before being submitted to Mechanical Turk,
all images were cropped and resized following the same processing as IcGAN. As a result, output
images were displayed in 64 × 64 resolution, also preventing Workers from basing their judgment on
the sharpness of presented images exclusively.
Technically, we should also assess that the identity of a person is preserved when swapping attributes.
This seemed to be a problem for GAN-based methods, but the reconstruction quality of our model is
very good (RMSE on test of 0.0009, to be compared to 0.028 for IcGAN), and we did not observe
this issue. Therefore, we did not evaluate this aspect.
For naturalness, the first 500 images from the test set, selected such that there are 250 images for each attribute
value, were shown to Mechanical Turk Workers, 100 for each of the 5 different models presented
above. For each image, we asked whether the image seems natural or generated. The description
given to the Workers to understand their task showed 4 examples of real images, and 4 examples of
fake images (1 FadNet AE, 1 FadNet Swap, 1 IcGAN AE, 1 IcGAN Swap).
The accuracy of each model on each attribute was evaluated in a different classification task, resulting
in a total of 15 experiments. For example, the FadNet/Glasses experiment consisted in asking
Workers whether people with glasses added by FadNet Swap effectively possess glasses,
and vice-versa. This allows us to evaluate how perceptible the swaps are to the human eye. In each
experiment, 100 images were shown (50 images per class, in the order they appear in the test set).
In both quantitative evaluations, each experiment was performed by 10 Workers, resulting in 5,000
samples per experiment for naturalness, and 1,000 samples per classification experiment on swapped
attributes. The results on both tasks are shown in Table 1.
Figure 4: (Zoom in for better resolution.) Examples of multi-attribute swap (Gender / Opened eyes /
Eye glasses) performed by the same model. Left images are the originals.
Quantitative results In the naturalness experiments, only around 90% of real images were classified as "real" by the Workers, indicating how high the bar is for generating natural images. Our
model obtained high naturalness accuracies when reconstructing images without swapping attributes:
88.4%, 75.2% and 78.8%, compared to IcGAN reconstructions whose accuracy does not exceed 23%,
whether it be for reconstructed or swapped images. For the swap, FadNet Swap still consistently
outperforms IcGAN Swap by a large margin. However, the naturalness accuracy varies a lot based
on the swapped attribute: from 79.0% for the opening of the mouth, down to 31.4% for the smile.
Classification experiments show that reconstructions with FadNet AE and IcGAN AE have very
high classification scores, and are even on par with real images on both Mouth and Smile. FadNet
Swap obtains an accuracy of 66.2% for the mouth, 76.6% for the glasses and 97.1% for the smile,
indicating that our model can swap these attributes with a very high efficiency. On the other hand,
with accuracies of 10.1%, 47.5% and 9.9% on these same attributes, IcGAN Swap does not seem
able to generate convincing swaps.
Multi-attribute swapping We present qualitative results for the ability of our model to swap
multiple attributes at once in Figure 4, by jointly modifying the gender, open eyes and glasses
attributes. Even in this more difficult setting, our model can generate convincing images with multiple
swaps.
5.2 Experiments on the Flowers dataset
We performed additional experiments on the Oxford-102 dataset, which contains about 9,000 images
of flowers classified into 102 categories [17]. Since the dataset does not contain other labels than the
flower categories, we built a list of color attributes from the flower captions provided by [19]. Each
flower is provided with 10 different captions. For a given color, we gave a flower the associated color
attribute if that color appears in at least 5 out of the 10 different captions. Although naive, this
approach was enough to create accurate labels. We resized images to 64 × 64. Figure 5 shows
reconstructed flowers with different values of the "pink" attribute. We can observe that the color of
the flower changes in the desired direction, while keeping the background cleanly unchanged.
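A sketch of this caption-based labeling rule; the color vocabulary below is an illustrative assumption, not the exact list used by the authors.

def color_attributes(captions, colors=("pink", "yellow", "orange", "red", "white"),
                     min_votes=5):
    # A flower gets a color attribute when the color word appears in at
    # least `min_votes` of its captions (5 out of 10 in the paper).
    return {c for c in colors
            if sum(c in caption.lower() for caption in captions) >= min_votes}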
Figure 5: Examples of reconstructed flowers with different values of the pink attribute. First row
images are the originals. Increasing the value of that attribute will turn flower colors into pink, while
decreasing it in images with originally pink flowers will make them turn yellow or orange.
6 Conclusion
We presented a new approach to generate variations of images by changing attribute values. The
approach is based on enforcing the invariance of the latent space w.r.t. the attributes. A key advantage
of our method compared to many recent models [27, 10] is that it generates realistic images of high
resolution without needing to apply a GAN to the decoder output. As a result, it could easily be
extended to other domains like speech or text, where backpropagation through the decoder can
be really challenging because of, for instance, the non-differentiable text generation process. However,
methods commonly used in vision to assess the visual quality of generated images, like PatchGAN,
could readily be applied on top of our model.
Acknowledgments
The authors would like to thank Yedid Hoshen for initial discussions about the core ideas of the paper,
Christian Pursch and Alexander Miller for their help in setting up the experiments and Mechanical
Turk evaluations. The authors are also grateful to David Lopez-Paz and Mouhamadou Moustapha
Cisse for useful feedback and support on this project.
References
[1] Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative adversarial networks. arXiv preprint arXiv:1702.01983, 2017.
[2] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[3] Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
[4] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[5] Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
[6] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.
[7] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[8] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. Proceedings of ICLR 2017, 2017.
[9] Geoffrey Hinton, Alex Krizhevsky, and Sida Wang. Transforming auto-encoders. Artificial Neural Networks and Machine Learning – ICANN 2011, pages 44–51, 2011.
[10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015.
[13] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer, 2016.
[14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
[15] Gilles Louppe, Michael Kagan, and Kyle Cranmer. Learning to pivot with adversarial networks. arXiv preprint arXiv:1611.01046, 2016.
[16] Michael F. Mathieu, Junbo Jake Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pages 5041–5049, 2016.
[17] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP '08. Sixth Indian Conference on, pages 722–729. IEEE, 2008.
[18] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and José M. Álvarez. Invertible conditional GANs for image editing. arXiv preprint arXiv:1611.06355, 2016.
[19] Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 49–58, 2016.
[20] Scott E. Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In Advances in Neural Information Processing Systems, pages 1252–1260, 2015.
[21] Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
[22] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
[23] Paul Upchurch, Jacob Gardner, Kavita Bala, Robert Pless, Noah Snavely, and Kilian Weinberger. Deep feature interpolation for image content changes. arXiv preprint arXiv:1611.05507, 2016.
[24] Lior Wolf, Yaniv Taigman, and Adam Polyak. Unsupervised creation of parameterized avatars. arXiv preprint arXiv:1704.05693, 2017.
[25] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016.
[26] Jimei Yang, Scott E. Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In Advances in Neural Information Processing Systems, pages 1099–1107, 2015.
[27] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
6,830 | 7,179 | Action Centered Contextual Bandits
Kristjan Greenewald
Department of Statistics
Harvard University
[email protected]
Ambuj Tewari
Department of Statistics
University of Michigan
[email protected]
Predrag Klasnja
School of Information
University of Michigan
[email protected]
Susan Murphy
Departments of Statistics and Computer Science
Harvard University
[email protected]
Abstract
Contextual bandits have become popular as they offer a middle ground between
very simple approaches based on multi-armed bandits and very complex approaches
using the full power of reinforcement learning. They have demonstrated success in
web applications and have a rich body of associated theoretical guarantees. Linear
models are well understood theoretically and preferred by practitioners because
they are not only easily interpretable but also simple to implement and debug.
Furthermore, if the linear model is true, we get very strong performance guarantees.
Unfortunately, in emerging applications in mobile health, the time-invariant linear
model assumption is untenable. We provide an extension of the linear model for
contextual bandits that has two parts: baseline reward and treatment effect. We
allow the former to be complex but keep the latter simple. We argue that this
model is plausible for mobile health applications. At the same time, it leads to
algorithms with strong performance guarantees as in the linear model setting, while
still allowing for complex nonlinear baseline modeling. Our theory is supported by
experiments on data gathered in a recently concluded mobile health study.
1 Introduction
In the theory of sequential decision-making, contextual bandit problems (Tewari & Murphy, 2017)
occupy a middle ground between multi-armed bandit problems (Bubeck & Cesa-Bianchi, 2012) and
full-blown reinforcement learning (usually modeled using Markov decision processes along with
discounted or average reward optimality criteria (Sutton & Barto, 1998; Puterman, 2005)). Unlike
bandit algorithms, which cannot use any side-information or context, contextual bandit algorithms
can learn to map the context into appropriate actions. However, contextual bandits do not consider
the impact of actions on the evolution of future contexts. Nevertheless, in many practical domains
where the impact of the learner's action on future contexts is limited, contextual bandit algorithms
have shown great promise. Examples include web advertising (Abe & Nakamura, 1999) and news
article selection on web portals (Li et al., 2010).
An influential thread within the contextual bandit literature models the expected reward for any
action in a given context using a linear mapping from a d-dimensional context vector to a real-valued
reward. Algorithms using this assumption include LinUCB and Thompson Sampling, for both of
which regret bounds have been derived. These analyses often allow the context sequence to be chosen
adversarially, but require the linear model, which links rewards to contexts, to be time-invariant.
There has been little effort to extend these algorithms and analyses when the data follow an unknown
nonlinear or time-varying model.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we consider a particular type of non-stationarity and non-linearity that is motivated
by problems arising in mobile health (mHealth). Mobile health is a fast developing field that uses
mobile and wearable devices for health care delivery. These devices provide us with a real-time
stream of dynamically evolving contextual information about the user (location, calendar, weather,
physical activity, internet activity, etc.). Contextual bandit algorithms can learn to map this contextual
information to a set of available intervention options (e.g., whether or not to send a medication
reminder). However, human behavior is hard to model using stationary, linear models. We make a
fundamental assumption in this paper that is quite plausible in the mHealth setting. In these settings,
there is almost always a "do nothing" action, usually called action 0. The expected reward for this
action is the baseline reward and it can change in a very non-stationary, non-linear fashion. However,
the treatment effect of a non-zero action, i.e., the incremental change over the baseline reward due to
the action, can often be plausibly modeled using standard stationary, linear models.
We show, both theoretically and empirically, that the performance of an appropriately designed
action-centered contextual bandit algorithm is agnostic to the high model complexity of the baseline
reward. Instead, we get the same level of performance as expected in a stationary, linear model setting.
Note that it might be tempting to make the entire model non-linear and non-stationary. However, the
sample complexity of learning very general non-stationary, non-linear models is likely to be so high
that they will not be useful in mHealth where data is often noisy, missing, or collected only over a
few hundred decision points.
We connect our algorithm design and theoretical analysis to the real world of mHealth by using data
from a pilot study of HeartSteps, an Android-based walking intervention. HeartSteps encourages
walking by sending individuals contextually-tailored suggestions to be active. Such suggestions can
be sent up to five times a day (in the morning, at lunchtime, mid-afternoon, at the end of the workday,
and in the evening), and each suggestion is tailored to the user's current context: location, time of day,
day of the week, and weather. HeartSteps contains two types of suggestions: suggestions to go for a
walk, and suggestions to simply move around in order to disrupt prolonged sitting. While the initial
pilot study of HeartSteps micro-randomized the delivery of activity suggestions (Klasnja et al., 2015;
Liao et al., 2015), delivery of activity suggestions is an excellent candidate for the use of contextual
bandits, as the effect of delivering (vs. not) a suggestion at any given time is likely to be strongly
influenced by the user's current context, including location, time of day, and weather.
This paper's main contributions can be summarized as follows. We introduce a variant of the standard
linear contextual bandit model that allows the baseline reward model to be quite complex while
keeping the treatment effect model simple. We then introduce the idea of using action centering in
contextual bandits as a way to decouple the estimation of the above two parts of the model. We show
that action centering is effective in dealing with time-varying and non-linear behavior in our model,
leading to regret bounds that scale as nicely as previous bounds for linear contextual bandits. Finally,
we use data gathered in the recently conducted HeartSteps study to validate our model and theory.
1.1 Related Work
Contextual bandits have been the focus of considerable interest in recent years. Chu et al. (2011) and
Agrawal & Goyal (2013) have examined UCB and Thompson sampling methods respectively for
linear contextual bandits. Works such as Seldin et al. (2011), Dudik et al. (2011) considered contextual
bandits with fixed policy classes. Methods for reducing the regret under complex reward functions
include the nonparametric approach of May et al. (2012), the "contextual zooming" approach of
Slivkins (2014), the kernel-based method of Valko et al. (2013), and the sparse method of Bastani
& Bayati (2015). Each of these approaches has regret that scales with the complexity of the overall
reward model including the baseline, and requires the reward function to remain constant over time.
2 Model and Problem Setting
Consider a contextual bandit with a baseline (zero) action and $N$ non-baseline arms (actions or treatments). At each time $t = 1, 2, \dots$, a context vector $\bar{s}_t \in \mathbb{R}^{d'}$ is observed, an action $a_t \in \{0, \dots, N\}$ is chosen, and a reward $r_t(a_t)$ is observed. The bandit learns a mapping from a state vector $s_{t,a_t}$, depending on $\bar{s}_t$ and $a_t$, to the expected reward $r_t(s_{t,a_t})$. The state vector $s_{t,a_t} \in \mathbb{R}^d$ is a function of $a_t$ and $\bar{s}_t$. This form is used to achieve maximum generality, as it allows for infinitely many possible actions so long as the reward can be modeled using a $d$-dimensional $s_{t,a}$. In the most unstructured case with $N$ actions, we can simply encode the reward with a $d = N d'$-dimensional state vector
$$s_{t,a_t}^T = [\mathbb{I}(a_t = 1)\,\bar{s}_t^T, \dots, \mathbb{I}(a_t = N)\,\bar{s}_t^T],$$
where $\mathbb{I}(\cdot)$ is the indicator function.

For maximum generality, we assume the context vectors are chosen by an adversary on the basis of the history $H_{t-1}$ of arms $a_\tau$ played, states $\bar{s}_\tau$, and rewards $r_\tau(\bar{s}_\tau, a_\tau)$ received up to time $t-1$, i.e.,
$$H_{t-1} = \{a_\tau, \bar{s}_\tau, r_\tau(\bar{s}_\tau, a_\tau),\ \tau = 1, \dots, t-1\}.$$
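As a concrete illustration of this encoding, the following C++ sketch (our own, with hypothetical names chosen for the example) builds $s_{t,a}$ from the raw context $\bar{s}_t$ by copying it into the block selected by the action; for $a = 0$ the encoding is the zero vector, consistent with $\mathbb{I}(a_t > 0)$:

#include <vector>

std::vector<double> encode_state(const std::vector<double>& s_bar, int a, int N) {
    int dp = (int)s_bar.size();            /* d' = dimension of raw context */
    std::vector<double> s(N * dp, 0.0);    /* d = N d'; all blocks start at zero */
    if (a >= 1 && a <= N)                  /* block a holds s_bar; others stay 0 */
        for (int j = 0; j < dp; ++j) s[(a - 1) * dp + j] = s_bar[j];
    return s;                              /* a = 0 maps to the zero vector */
}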
Consider the model $E[r_t(\bar{s}_t, a_t) \mid \bar{s}_t, a_t] = \bar{f}_t(\bar{s}_t, a_t)$, where $\bar{f}_t$ can be decomposed into a fixed component dependent on action and a time-varying component that does not depend on action:
$$E[r_t(\bar{s}_t, a_t) \mid \bar{s}_t, a_t] = \bar{f}_t(\bar{s}_t, a_t) = f(s_{t,a_t})\,\mathbb{I}(a_t > 0) + g_t(\bar{s}_t),$$
where $\bar{f}_t(\bar{s}_t, 0) = g_t(\bar{s}_t)$ due to the indicator function $\mathbb{I}(a_t > 0)$. Note that the optimal action depends in no way on $g_t$, which merely confounds the observation of regret. We hypothesize that the regret bounds for such a contextual bandit asymptotically depend only on the complexity of $f$, not of $g_t$. We emphasize that we do not require any assumptions about or bounds on the complexity or smoothness of $g_t$, allowing $g_t$ to be arbitrarily nonlinear and to change abruptly in time. These conditions create a partially agnostic setting where we have a simple model for the interaction but the baseline cannot be modeled with a simple linear function. In what follows, for simplicity of notation we drop $\bar{s}_t$ from the argument for $r_t$, writing $r_t(a_t)$ with the dependence on $\bar{s}_t$ understood.
In this paper, we consider the linear model for the reward difference at time $t$:
$$r_t(a_t) - r_t(0) = f(s_{t,a_t})\,\mathbb{I}(a_t > 0) + n_t = s_{t,a_t}^T \theta\,\mathbb{I}(a_t > 0) + n_t \qquad (1)$$
where $n_t$ is zero-mean sub-Gaussian noise with variance $\sigma^2$ and $\theta \in \mathbb{R}^d$ is a vector of coefficients. The goal of the contextual bandit is to estimate $\theta$ at every time $t$ and use the estimate to decide which actions to take under a series of observed contexts. As is common in the literature, we assume that both the baseline and interaction rewards are bounded by a constant for all $t$.

The task of the action-centered contextual bandit is to choose the probabilities $\pi(a, t)$ of playing each arm $a_t$ at time $t$ so as to maximize the expected differential reward
$$E[r_t(a_t) - r_t(0) \mid H_{t-1}, s_{t,a}] = \sum_{a=0}^{N} \pi(a,t)\, E[r_t(a) - r_t(0) \mid H_{t-1}, s_{t,a}] = \sum_{a=0}^{N} \pi(a,t)\, s_{t,a}^T \theta\,\mathbb{I}(a > 0). \qquad (2)$$
This task is closely related to obtaining a good estimate of the reward function coefficients $\theta$.
2.1 Probability-constrained optimal policy

In the mHealth setting, a contextual bandit must choose at each time point whether to deliver to the user a behavior-change intervention, and if so, what type of intervention to deliver. Whether or not an intervention, such as an activity suggestion or a medication reminder, is sent is a critical aspect of the user experience. If a bandit sends too few interventions to a user, it risks the user's disengaging with the system, and if it sends too many, it risks the user's becoming overwhelmed or desensitized to the system's prompts. Furthermore, standard contextual bandits will eventually converge to a policy that maps most states to a near-100% chance of sending or not sending an intervention. Such regularity could not only worsen the user's experience, but also ignores the fact that users have changing routines and cannot be perfectly modeled. We are thus motivated to introduce a constraint on the size of the probabilities of delivering an intervention. We constrain
$$0 < \pi_{min} \le 1 - P(a_t = 0 \mid \bar{s}_t) \le \pi_{max} < 1,$$
where $1 - P(a_t = 0 \mid \bar{s}_t)$ is the conditional bandit-chosen probability of delivering an intervention at time $t$. The constants $\pi_{min}$ and $\pi_{max}$ are not learned by the algorithm, but chosen using domain science, and might vary for different components of the same mHealth system. We constrain $P(a_t = 0 \mid \bar{s}_t)$, not each $P(a_t = i \mid \bar{s}_t)$, as which intervention is delivered is less critical to the user experience than being prompted with an intervention in the first place. User habituation can be mitigated by implementing the nonzero actions ($a = 1, \dots, N$) to correspond to several types or categories of messages, with the exact message sent being randomized from a set of differently worded messages.

Conceptually, we can view the bandit as pulling two arms at each time $t$: the probability of sending a message (constrained to lie in $[\pi_{min}, \pi_{max}]$) and which message to send if one is sent. While these probability constraints are motivated by domain science, they also enable our proposed action-centering algorithm to effectively orthogonalize the baseline and interaction term rewards, achieving sublinear regret in complex scenarios that often occur in mobile health and other applications and for which existing approaches have large regret.
Under this probability constraint, we can now derive the optimal policy against which to compare the bandit. The policy that maximizes the expected reward (2) plays the optimal action
$$a_t^* = \arg\max_{i \in \{0,\dots,N\}} s_{t,i}^T \theta\,\mathbb{I}(i > 0)$$
with the highest allowed probability. The remainder of the probability is assigned as follows. If the optimal action is nonzero, the optimal policy then plays the zero action with the remaining probability (which is the minimum allowed probability of playing the zero action). If the optimal action is zero, the optimal policy plays the nonzero action with the highest expected reward,
$$\bar{a}_t^* = \arg\max_{i \in \{1,\dots,N\}} s_{t,i}^T \theta,$$
with the remaining probability, i.e. $\pi_{min}$. To summarize, under the constraint $1 - \pi_t^*(0,t) \in [\pi_{min}, \pi_{max}]$, the expected-reward-maximizing policy plays arm $a$ with probability $\pi^*(a, t)$, where
$$\text{If } a_t^* \neq 0:\ \pi^*(a_t^*, t) = \pi_{max},\quad \pi^*(0, t) = 1 - \pi_{max},\quad \pi^*(a, t) = 0\ \ \forall a \neq 0, a_t^*;$$
$$\text{If } a_t^* = 0:\ \pi^*(0, t) = 1 - \pi_{min},\quad \pi^*(\bar{a}_t^*, t) = \pi_{min},\quad \pi^*(a, t) = 0\ \ \forall a \neq 0, \bar{a}_t^*. \qquad (3)$$
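A small C++ sketch of this constrained policy (our own illustration; the array expected_reward[a] stands in for $s_{t,a}^T\theta$, with index 0 unused) makes the two cases explicit:

#include <vector>

std::vector<double> optimal_policy(const std::vector<double>& expected_reward,
                                   double pi_min, double pi_max) {
    int N = (int)expected_reward.size() - 1;
    std::vector<double> pi(N + 1, 0.0);
    int best = 1;                              /* best nonzero arm */
    for (int a = 2; a <= N; ++a)
        if (expected_reward[a] > expected_reward[best]) best = a;
    if (expected_reward[best] > 0.0) {         /* optimal action a_t^* is nonzero */
        pi[best] = pi_max;
        pi[0] = 1.0 - pi_max;
    } else {                                   /* optimal action a_t^* is zero */
        pi[0] = 1.0 - pi_min;
        pi[best] = pi_min;                     /* best nonzero arm still gets pi_min */
    }
    return pi;
}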
3 Action-centered contextual bandit
Since the observed reward always contains the sum of the baseline reward and the differential reward we are estimating, and the baseline reward is arbitrarily complex, the main challenge is to isolate the differential reward at each time step. We do this via an action-centering trick, which randomizes the action at each time step, allowing us to construct an estimator whose expectation is proportional to the differential reward $r_t(\bar{a}_t) - r_t(0)$, where $\bar{a}_t$ is the nonzero action chosen by the bandit at time $t$ to be randomized against the zero action. For simplicity of notation, we set the probability of the bandit taking a nonzero action, $P(a_t > 0)$, equal to $1 - \pi(0, t) = \pi_t$.

3.1 Centering the actions - an unbiased $r_t(\bar{a}_t) - r_t(0)$ estimate

To determine a policy, the bandit must learn the coefficients $\theta$ of the model for the differential reward $r_t(\bar{a}_t) - r_t(0) = s_{t,\bar{a}_t}^T \theta$ as a function of $\bar{a}_t$. If the bandit had access at each time $t$ to the differential reward $r_t(\bar{a}_t) - r_t(0)$, we could estimate $\theta$ using a penalized least-squares approach by minimizing
$$\arg\min_{\theta} \sum_{t=1}^{T} \left( r_t(\bar{a}_t) - r_t(0) - \theta^T s_{t,\bar{a}_t} \right)^2 + \lambda \|\theta\|_2^2$$
over $\theta$, where $r_t(a)$ is the reward under action $a$ at time $t$ (Agrawal & Goyal, 2013). This corresponds to the Bayesian estimator when the reward is Gaussian. Although we have access only to $r_t(a_t)$, not $r_t(\bar{a}_t) - r_t(0)$, observe that given $\bar{a}_t$, the bandit randomizes to $a_t = \bar{a}_t$ with probability $\pi_t$ and $a_t = 0$ otherwise. Thus
$$E[(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t) \mid H_{t-1}, \bar{a}_t, \bar{s}_t] = \pi_t (1 - \pi_t)\, r_t(\bar{a}_t) - (1 - \pi_t)\,\pi_t\, r_t(0) = \pi_t (1 - \pi_t)\left( r_t(\bar{a}_t) - r_t(0) \right). \qquad (4)$$
Thus $(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$, which uses only the observed $r_t(a_t)$, is proportional to an unbiased estimator of $r_t(\bar{a}_t) - r_t(0)$. Recalling that $\bar{a}_t$ and $a_t$ are both known, since they are chosen by the bandit at time $t$, we create the estimate of the differential reward between $\bar{a}_t$ and action 0 at time $t$ as
$$\hat{r}_t(\bar{a}_t) = (\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t).$$
The corresponding penalized weighted least-squares estimator for $\theta$ using $\hat{r}_t(\bar{a}_t)$ is the minimizer of
$$\sum_{t=1}^{T} \pi_t (1 - \pi_t)\left( \frac{\hat{r}_t(\bar{a}_t)}{\pi_t (1 - \pi_t)} - \theta^T s_{t,\bar{a}_t} \right)^2 + \|\theta\|_2^2 \qquad (5)$$
$$= \sum_{t=1}^{T} \frac{(\hat{r}_t(\bar{a}_t))^2}{\pi_t (1 - \pi_t)} - 2\,\hat{r}_t(\bar{a}_t)\,\theta^T s_{t,\bar{a}_t} + \pi_t (1 - \pi_t)(\theta^T s_{t,\bar{a}_t})^2 + \|\theta\|_2^2 = c - 2\,\theta^T \hat{b} + \theta^T B \theta,$$
where for simplicity of presentation we have used unit penalization $\|\theta\|_2^2$, and
$$\hat{b} = \sum_{t=1}^{T} (\mathbb{I}(a_t > 0) - \pi_t)\, s_{t,\bar{a}_t}\, r_t(a_t), \qquad B = I + \sum_{t=1}^{T} \pi_t (1 - \pi_t)\, s_{t,\bar{a}_t} s_{t,\bar{a}_t}^T.$$
The weighted least-squares weights are $\pi_t(1 - \pi_t)$, since
$$\mathrm{var}\left[ \frac{\hat{r}_t(\bar{a}_t)}{\pi_t(1-\pi_t)} \,\Big|\, H_{t-1}, \bar{a}_t, \bar{s}_t \right] = \frac{\mathrm{var}[\hat{r}_t(\bar{a}_t) \mid H_{t-1}, \bar{a}_t, \bar{s}_t]}{(\pi_t(1-\pi_t))^2}$$
and the standard deviation of $\hat{r}_t(\bar{a}_t) = (\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$ given $H_{t-1}, \bar{a}_t, \bar{s}_t$ is of order $g_t(\bar{s}_t) = O(1)$. The minimizer of (5) is $\hat{\theta} = B^{-1}\hat{b}$.
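In code, the centered pseudo-reward is a one-liner; this sketch (our own) mirrors the definition above:

/* r_hat = (I(a_t > 0) - pi_t) * r_t(a_t): proportional, by (4), to an
   unbiased estimate of the differential reward r_t(a_bar_t) - r_t(0). */
double pseudo_reward(int a_t, double pi_t, double observed_reward) {
    double indicator = (a_t > 0) ? 1.0 : 0.0;
    return (indicator - pi_t) * observed_reward;
}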
3.2 Action-Centered Thompson Sampling

As the Thompson sampling approach generates probabilities of taking an action, rather than selecting an action, Thompson sampling is particularly suited to our regression approach. We follow the basic framework of the contextual Thompson sampling approach presented by Agrawal & Goyal (2013), extending and modifying it to incorporate our action-centered estimator and probability constraints.

The critical step in Thompson sampling is randomizing the model coefficients according to the prior $\mathcal{N}(\hat{\theta}, v^2 B^{-1})$ for $\theta$ at time $t$. A $\theta' \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$ is generated, and the action $a_t$ is chosen to maximize $s_{t,a}^T \theta'$. The probability that this procedure selects any action $a$ is determined by the distribution of $\theta'$; however, it may select action 0 with a probability not in the required range $[1 - \pi_{max}, 1 - \pi_{min}]$. We thus introduce a two-step hierarchical procedure. After generating the random $\theta'$, we instead choose the nonzero $\bar{a}_t$ maximizing the expected reward,
$$\bar{a}_t = \arg\max_{a \in \{1,\dots,N\}} s_{t,a}^T \theta'.$$
Then we randomly determine whether to take the nonzero action, choosing $a_t = \bar{a}_t$ with probability
$$\pi_t = P(a_t > 0) = \max(\pi_{min}, \min(\pi_{max}, P(s_{t,\bar{a}_t}^T \tilde{\theta} > 0))), \qquad (6)$$
and $a_t = 0$ otherwise, where $\tilde{\theta} \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$. Here $P(s_{t,\bar{a}_t}^T \tilde{\theta} > 0)$ is the probability that the expected relative reward $s_{t,\bar{a}_t}^T \tilde{\theta}$ of action $\bar{a}_t$ is higher than zero for $\tilde{\theta} \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$; this probability is easily computed using the normal CDF. Finally the bandit updates $\hat{b}$ and $B$ and computes an updated $\hat{\theta} = B^{-1}\hat{b}$.

Algorithm 1 Action-Centered Thompson Sampling
1: Set $B = I$, $\hat{\theta} = 0$, $\hat{b} = 0$, and choose $[\pi_{min}, \pi_{max}]$.
2: for $t = 1, 2, \dots$ do
3:   Observe current context $\bar{s}_t$ and form $s_{t,a}$ for each $a \in \{1, \dots, N\}$.
4:   Randomly generate $\theta' \sim \mathcal{N}(\hat{\theta}, v^2 B^{-1})$.
5:   Let $\bar{a}_t = \arg\max_{a \in \{1,\dots,N\}} s_{t,a}^T \theta'$.
6:   Compute the probability $\pi_t$ of taking a nonzero action according to (6).
7:   Play action $a_t = \bar{a}_t$ with probability $\pi_t$; else play $a_t = 0$.
8:   Observe reward $r_t(a_t)$ and update
     $B = B + \pi_t(1-\pi_t)\, s_{t,\bar{a}_t} s_{t,\bar{a}_t}^T$, $\quad \hat{b} = \hat{b} + s_{t,\bar{a}_t}(\mathbb{I}(a_t > 0) - \pi_t)\, r_t(a_t)$, $\quad \hat{\theta} = B^{-1}\hat{b}$.
9: end for

Our action-centered Thompson sampling algorithm is summarized in Algorithm 1.
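To make the control flow concrete, the following self-contained C++ sketch implements one decision point and the subsequent update. This is our own illustrative code, not the authors' implementation: the dense Cholesky-based linear algebra, all function names, and the convention that s[a] holds $s_{t,a}$ for $a = 1, \dots, N$ (index 0 unused) are assumptions made for the example.

#include <vector>
#include <random>
#include <cmath>
#include <algorithm>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

/* Cholesky factor L with B = L L^T (B symmetric positive definite). */
Mat cholesky(const Mat& B) {
    int d = (int)B.size();
    Mat L(d, Vec(d, 0.0));
    for (int i = 0; i < d; ++i)
        for (int j = 0; j <= i; ++j) {
            double s = B[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    return L;
}

/* Solve B x = b given the Cholesky factor L (forward, then backward pass). */
Vec solve_spd(const Mat& L, const Vec& b) {
    int d = (int)b.size();
    Vec y(d), x(d);
    for (int i = 0; i < d; ++i) {
        double s = b[i];
        for (int k = 0; k < i; ++k) s -= L[i][k] * y[k];
        y[i] = s / L[i][i];
    }
    for (int i = d - 1; i >= 0; --i) {
        double s = y[i];
        for (int k = i + 1; k < d; ++k) s -= L[k][i] * x[k];
        x[i] = s / L[i][i];
    }
    return x;
}

/* Draw theta' ~ N(theta_hat, v^2 B^{-1}) as theta_hat + v L^{-T} z, z ~ N(0, I). */
Vec sample_posterior(const Vec& theta_hat, const Mat& L, double v, std::mt19937& gen) {
    int d = (int)theta_hat.size();
    std::normal_distribution<double> n01(0.0, 1.0);
    Vec z(d), x(d);
    for (int i = 0; i < d; ++i) z[i] = n01(gen);
    for (int i = d - 1; i >= 0; --i) {         /* solve L^T x = z */
        double s = z[i];
        for (int k = i + 1; k < d; ++k) s -= L[k][i] * x[k];
        x[i] = s / L[i][i];
    }
    for (int i = 0; i < d; ++i) x[i] = theta_hat[i] + v * x[i];
    return x;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

double normal_cdf(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

/* One decision point: steps 3-7 of Algorithm 1. Returns the action a_t. */
int act_choose(const std::vector<Vec>& s, const Vec& theta_hat, const Mat& B,
               double v, double pi_min, double pi_max,
               double& pi_t, int& a_bar, std::mt19937& gen) {
    Mat L = cholesky(B);
    Vec theta_prime = sample_posterior(theta_hat, L, v, gen);
    a_bar = 1;                                 /* argmax over the nonzero actions */
    for (int a = 2; a < (int)s.size(); ++a)
        if (dot(s[a], theta_prime) > dot(s[a_bar], theta_prime)) a_bar = a;
    /* s^T theta_tilde is Gaussian: mean s^T theta_hat, variance v^2 s^T B^{-1} s */
    Vec Binv_s = solve_spd(L, s[a_bar]);
    double mu = dot(s[a_bar], theta_hat);
    double sd = v * std::sqrt(dot(s[a_bar], Binv_s));
    pi_t = std::min(pi_max, std::max(pi_min, normal_cdf(mu / sd)));
    std::bernoulli_distribution send(pi_t);
    return send(gen) ? a_bar : 0;
}

/* Step 8 of Algorithm 1, after the reward r = r_t(a_t) is observed. */
void act_update(const Vec& s_abar, int a_t, double pi_t, double r,
                Mat& B, Vec& b_hat, Vec& theta_hat) {
    int d = (int)s_abar.size();
    double w = pi_t * (1.0 - pi_t);
    double centered = ((a_t > 0) ? 1.0 : 0.0) - pi_t;
    for (int i = 0; i < d; ++i) {
        b_hat[i] += centered * s_abar[i] * r;
        for (int j = 0; j < d; ++j) B[i][j] += w * s_abar[i] * s_abar[j];
    }
    theta_hat = solve_spd(cholesky(B), b_hat);
}

Because $B$ starts at the identity and each update adds a positive semidefinite rank-one term, the Cholesky factorization above always exists.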
4 Regret analysis

Classically, the regret of a bandit is defined as the difference between the reward achieved by taking the optimal action $a_t^*$ and the expected reward received by playing the arm $a_t$ chosen by the bandit,
$$\mathrm{regret}_{\mathrm{classical}}(t) = s_{t,a_t^*}^T \theta - s_{t,a_t}^T \theta, \qquad (7)$$
where the expectation is taken conditionally on $a_t$, $s_{t,a_t}$, and $H_{t-1}$. For simplicity, let $\pi_t^* = 1 - \pi_t^*(0,t)$ be the probability that the optimal policy takes a nonzero action, and recall that $\pi_t = 1 - \pi_t(0,t)$ is the probability that the bandit takes a nonzero action. The probability constraint implies that the optimal policy (3) plays the optimal arm with a probability bounded away from 0 and 1; hence definition (7) is no longer meaningful. We can instead create a regret that is the difference in expected rewards conditioned on $\bar{a}_t$, $\pi_t$, $s_{t,a_t}$, and $H_{t-1}$, but not on the randomized action $a_t$:
$$\mathrm{regret}(t) = \pi_t^*\, s_{t,\bar{a}_t^*}^T \theta - \pi_t\, s_{t,\bar{a}_t}^T \theta, \qquad (8)$$
where we have recalled that, given $\bar{a}_t$, the bandit plays action $a_t = \bar{a}_t$ with probability $\pi_t$ and plays $a_t = 0$, with differential reward 0, otherwise. The action-centered contextual bandit attempts to minimize the cumulative regret $R(T) = \sum_{t=1}^{T} \mathrm{regret}(t)$ over the horizon $T$.
4.1 Regret bound for Action-Centered Thompson Sampling

In the following theorem we show that, with high probability, the probability-constrained Thompson sampler has low regret relative to the optimal probability-constrained policy.

Theorem 1. Consider the action-centered contextual bandit problem, where $\bar{f}_t$ is potentially time-varying and $\bar{s}_t$ at time $t$ given $H_{t-1}$ is chosen by an adversary. Under this regime, the total regret at time $T$ for the action-centered Thompson sampling contextual bandit (Algorithm 1) satisfies
$$R(T) \le C\, \frac{d^2}{\epsilon} \sqrt{T^{1+\epsilon}} \left( \log(Td) \log\frac{1}{\delta} \right)$$
with probability at least $1 - 3\delta/2$, for any $0 < \epsilon < 1$, $0 < \delta < 1$. The constant $C$ is given in the proof.

Observe that this regret bound does not depend on the number of actions $N$, is sublinear in $T$, and scales only with the complexity $d$ of the interaction term, not the complexity of the baseline reward $g$. Furthermore, $\epsilon = 1/\log(T)$ can be chosen, giving a regret of order $O(d^2\sqrt{T})$.

This bound is of the same order as that of the baseline Thompson sampling contextual bandit in the adversarial setting when the baseline is identically zero (Agrawal & Goyal, 2013). When the baseline can be modeled with $d'$ features where $d' > d$, our method achieves $O(d^2\sqrt{T})$ regret whereas the standard Thompson sampling approach has $O((d + d')^2\sqrt{T})$ regret. Furthermore, when the baseline reward is time-varying, the worst-case regret of the standard Thompson sampling approach is $O(T)$, while the regret of our method remains $O(d^2\sqrt{T})$.
4.2 Proof of Theorem 1 - Decomposition of the regret

We will first bound the regret (8) at time $t$:
$$\mathrm{regret}(t) = \pi_t^*\, s_{t,\bar{a}_t^*}^T \theta - \pi_t\, s_{t,\bar{a}_t}^T \theta = (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta) + \pi_t^*(s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta) \qquad (9)$$
$$\le (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta) + (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta), \qquad (10)$$
where the inequality holds since $s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta \ge 0$ and $0 < \pi_t^* < 1$ by definition. Then
$$R(T) = \sum_{t=1}^{T} \mathrm{regret}(t) \le \underbrace{\sum_{t=1}^{T} (\pi_t^* - \pi_t)(s_{t,\bar{a}_t}^T \theta)}_{I} + \underbrace{\sum_{t=1}^{T} (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta)}_{II}.$$
Observe that we have decomposed the regret into a term $I$ that depends on the choice of the randomization $\pi_t$ between the zero and nonzero action, and a term $II$ that depends only on the choice of the potential nonzero action $\bar{a}_t$ prior to the randomization. We bound $I$ using concentration inequalities, and bound $II$ using arguments paralleling those for standard Thompson sampling.

Lemma 1. Suppose that the conditions of Theorem 1 apply. Then with probability at least $1 - \delta/2$,
$$I \le C \sqrt{d^3 T \log(Td) \log(1/\delta)}$$
for some constant $C$ given in the proof.

Lemma 2. Suppose that the conditions of Theorem 1 apply. Then term $II$ can be bounded as
$$II = \sum_{t=1}^{T} (s_{t,\bar{a}_t^*}^T \theta - s_{t,\bar{a}_t}^T \theta) \le C' \frac{d^2}{\epsilon} \sqrt{T^{1+\epsilon}} \log\frac{1}{\delta} \log(Td),$$
where the inequality holds with probability at least $1 - \delta$.

The proofs are contained in Sections 4 and 5 of the supplement, respectively. In the derivation, the "pseudo-actions" $\bar{a}_t$ that Algorithm 1 chooses prior to the $\pi_t$ baseline-versus-nonzero randomization correspond to the actions in the standard contextual bandit setting. Note that $I$ involves only $\bar{a}_t$, not $\bar{a}_t^*$, hence it is not surprising that its bound is smaller than that for $II$. Combining Lemmas 1 and 2 via the union bound gives Theorem 1.
5 Results

5.1 Simulated data

We first conduct experiments with simulated data, using $N = 2$ possible nonzero actions. In each experiment, we choose a true reward generative model $r_t(s, a)$ inspired by data from the HeartSteps study (for details see Section 1.1 in the supplement), and generate two length-$T$ sequences of state vectors $s_{t,a} \in \mathbb{R}^{NK}$ and $\bar{s}_t \in \mathbb{R}^{L}$, where the $\bar{s}_t$ are i.i.d. Gaussian and $s_{t,a}$ is formed by stacking the columns $\mathbb{I}(a = i)[1; \bar{s}_t]$ for $i = 1, \dots, N$. We consider both nonlinear and nonstationary baselines, while keeping the treatment effect models the same. The bandit under evaluation iterates through the $T$ time points, at each choosing an action and receiving a reward generated according to the chosen model. We set $\pi_{min} = 0.2$, $\pi_{max} = 0.8$.

At each time step, the reward under the optimal policy is calculated and compared to the reward received by the bandit to form the regret $\mathrm{regret}(t)$. We can then plot the cumulative regret
$$\mathrm{cumulative\ regret}(t) = \sum_{\tau=1}^{t} \mathrm{regret}(\tau).$$
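A minimal sketch of this bookkeeping (our own illustrative C++; the two input arrays are hypothetical per-step expected rewards under the optimal policy and under the bandit, as in (8)):

#include <vector>

std::vector<double> cumulative_regret(const std::vector<double>& optimal_reward,
                                      const std::vector<double>& bandit_reward) {
    std::vector<double> cum(optimal_reward.size());
    double total = 0.0;
    for (size_t t = 0; t < optimal_reward.size(); ++t) {
        total += optimal_reward[t] - bandit_reward[t];   /* regret(t), accumulated */
        cum[t] = total;
    }
    return cum;
}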
[Figure 1: Nonlinear baseline reward g, in scenario with 2 nonzero actions and reward function based on collected HeartSteps data. Cumulative regret shown for the proposed Action-Centered approach, compared to the baseline contextual bandit; median computed over 100 random trials. Panel (a): median cumulative regret; panel (b): median with 1st and 3rd quartiles (dashed).]

In the first experiment, the baseline reward is nonlinear. Specifically, we generate rewards using
$$r_t(s_{t,a_t}, \bar{s}_t, a_t) = \theta^T s_{t,a_t} + 2\,\mathbb{I}(|[\bar{s}_t]_1| < 0.8) + n_t,$$
where $n_t \sim \mathcal{N}(0, 1)$ and $\theta \in \mathbb{R}^8$ is a fixed vector listed in supplement Section 1.1. This simulates the quite likely scenario that, for a given individual, the baseline reward is higher for small absolute deviations from the mean of the first context feature, i.e. rewards are higher when the feature at the decision point is "near average", with reward decreasing for abnormally high or low values. We run the benchmark Thompson sampling algorithm (Agrawal & Goyal, 2013) and our proposed action-centered Thompson sampling algorithm, computing the cumulative regrets and taking the median over 500 random trials. The results are shown in Figure 1, demonstrating linear growth of the benchmark Thompson sampling algorithm and significantly lower, sublinear regret for our proposed method.

We then consider a scenario with the baseline reward function $g_t(\cdot)$ changing in time. We generate rewards as
$$r_t(s_{t,a_t}, \bar{s}_t, a_t) = \theta^T s_{t,a_t} + \eta_t^T \bar{s}_t + n_t,$$
where $n_t \sim \mathcal{N}(0, 1)$, $\theta$ is a fixed vector as above, and the time-varying coefficients $\eta_t \in \mathbb{R}^7$ and contexts $\bar{s}_t$ are generated as smoothly varying Gaussian processes (supplement Section 1.1). The cumulative regret is shown in Figure 2, again demonstrating linear regret for the baseline approach and significantly lower sublinear regret for our proposed action-centering algorithm, as expected.
[Figure 2: Nonstationary baseline reward g, in scenario with 2 nonzero actions and reward function based on collected HeartSteps data. Cumulative regret shown for the proposed Action-Centered approach, compared to the baseline contextual bandit; median computed over 100 random trials. Panel (a): median cumulative regret; panel (b): median with 1st and 3rd quartiles (dashed).]
5.2 HeartSteps study data
The HeartSteps study collected the sensor and weather-based features shown in Figure 1 at 5 decision
points per day for each study participant. If the participant was available at a decision point, a
message was sent with constant probability 0.6. The sent message could be one of several activity
or anti-sedentary messages chosen by the system. The reward for that message was defined to be
log(0.5 + x) where x is the step count of the participant in the 30 minutes following the suggestion.
As noted in the introduction, the baseline reward, i.e. the step count of a subject when no message is
sent, does not only depend on the state in a complex way but is likely dependent on a large number of
unobserved variables. Because of these unobserved variables, the mapping from the observed state to
the reward is believed to be strongly time-varying. Both these characteristics (complex, time-varying
baseline reward function) suggest the use of the action-centering approach.
We run our contextual bandit on the HeartSteps data, considering the binary action of whether or not
to send a message at a given decision point based on the features listed in Figure 1 in the supplement.
Each user is considered independently, for maximum personalization and independence of results.
As above we set $\pi_{min} = 0.2$, $\pi_{max} = 0.8$.
We perform offline evaluation of the bandit using the method of Li et al. (2011). Li et al. (2011)
uses the sequence of states, actions, and rewards in the data to form a near-unbiased estimate of
the average expected reward achieved by each algorithm, averaging over all users. We used a total
of 33797 time points to create the reward estimates. The resulting estimates for the improvement
in average reward over the baseline randomization, averaged over 100 random seeds of the bandit
algorithm, are shown in Figure 2 of the supplement with the proposed action-centering approach
achieving the highest reward. Since the reward is logarithmic in the number of steps, the results imply
that the benchmark Thompson sampling approach achieves an average 1.6% increase in step counts
over the non-adaptive baseline, while our proposed method achieves an increase of 3.9%.
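The replay-style estimator of Li et al. (2011) evaluates a policy on logged data; for data logged with known randomization probabilities, an importance-weighted variant can be written compactly. The C++ sketch below is our own simplification for illustration, not the exact estimator used to produce the reported numbers:

#include <vector>

struct LoggedStep { int action; double reward; double log_prob; };

/* Importance-weighted average reward: each logged reward is weighted by
   pi(a_t | s_t) / p_log(a_t | s_t), where p_log is the logging probability
   (a constant 0.6 for sent messages in the HeartSteps pilot). */
double replay_value(const std::vector<LoggedStep>& log,
                    const std::vector<std::vector<double>>& policy_prob) {
    double total = 0.0;
    for (size_t t = 0; t < log.size(); ++t)
        total += log[t].reward *
                 policy_prob[t][log[t].action] / log[t].log_prob;
    return total / log.size();
}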
6 Conclusion
Motivated by emerging challenges in adaptive decision making in mobile health, in this paper we
proposed the action-centered Thompson sampling contextual bandit, exploiting the randomness of
the Thompson sampler and an action-centering approach to orthogonalize out the baseline reward.
We proved that our approach enjoys low regret bounds that scale only with the complexity of the
interaction term, allowing the baseline reward to be arbitrarily complex and time-varying.
Acknowledgments
This work was supported in part by grants R01 AA023187, P50 DA039838, U54EB020404, R01
HL125440 NHLBI/NIA, NSF CAREER IIS-1452099, and a Sloan Research Fellowship.
References

Abe, Naoki and Nakamura, Atsuyoshi. Learning to optimally schedule internet banner advertisements. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 12-21. Morgan Kaufmann Publishers Inc., 1999.

Agrawal, Shipra and Goyal, Navin. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 127-135, 2013.

Bastani, Hamsa and Bayati, Mohsen. Online decision-making with high-dimensional covariates. Available at SSRN 2661896, 2015.

Bubeck, Sébastien and Cesa-Bianchi, Nicolò. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.

Chu, Wei, Li, Lihong, Reyzin, Lev, and Schapire, Robert E. Contextual bandits with linear payoff functions. In International Conference on Artificial Intelligence and Statistics, pp. 208-214, 2011.

Dudik, Miroslav, Hsu, Daniel, Kale, Satyen, Karampatziakis, Nikos, Langford, John, Reyzin, Lev, and Zhang, Tong. Efficient optimal learning for contextual bandits. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence, pp. 169-178. AUAI Press, 2011.

Klasnja, Predrag, Hekler, Eric B., Shiffman, Saul, Boruvka, Audrey, Almirall, Daniel, Tewari, Ambuj, and Murphy, Susan A. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychology, 34(Suppl):1220-1228, Dec 2015.

Li, Lihong, Chu, Wei, Langford, John, and Schapire, Robert E. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pp. 661-670. ACM, 2010.

Li, Lihong, Chu, Wei, Langford, John, and Wang, Xuanhui. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pp. 297-306. ACM, 2011.

Liao, Peng, Klasnja, Predrag, Tewari, Ambuj, and Murphy, Susan A. Sample size calculations for micro-randomized trials in mHealth. Statistics in Medicine, 2015.

May, Benedict C., Korda, Nathan, Lee, Anthony, and Leslie, David S. Optimistic Bayesian sampling in contextual-bandit problems. The Journal of Machine Learning Research, 13(1):2069-2106, 2012.

Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2005.

Seldin, Yevgeny, Auer, Peter, Shawe-Taylor, John S., Ortner, Ronald, and Laviolette, François. PAC-Bayesian analysis of contextual bandits. In Advances in Neural Information Processing Systems, pp. 1683-1691, 2011.

Slivkins, Aleksandrs. Contextual bandits with similarity information. The Journal of Machine Learning Research, 15(1):2533-2568, 2014.

Sutton, Richard S and Barto, Andrew G. Reinforcement Learning: An Introduction. MIT Press, 1998.

Tewari, Ambuj and Murphy, Susan A. From ads to interventions: Contextual bandits in mobile health. In Rehg, Jim, Murphy, Susan A., and Kumar, Santosh (eds.), Mobile Health: Sensors, Analytic Methods, and Applications. Springer, 2017.

Valko, Michal, Korda, Nathan, Munos, Rémi, Flaounas, Ilias, and Cristianini, Nello. Finite-time analysis of kernelised contextual bandits. In Uncertainty in Artificial Intelligence, pp. 654, 2013.
| 7179 |@word trial:5 middle:2 d2:2 decomposition:1 tr:1 initial:1 contains:2 series:1 selecting:1 daniel:2 existing:1 current:3 contextual:46 nt:7 surprising:1 michal:1 chu:4 must:2 john:5 ronald:1 analytic:1 hypothesize:1 designed:1 interpretable:1 drop:1 update:2 v:1 stationary:6 generative:1 plot:1 device:2 intelligence:3 iterates:1 location:3 zhang:1 five:1 kristjan:1 along:1 become:1 differential:8 introduce:4 theoretically:2 peng:1 expected:14 behavior:3 multi:3 inspired:1 discounted:1 decomposed:2 decreasing:1 prolonged:1 little:1 armed:3 considering:1 estimating:1 linearity:1 notation:2 bounded:3 agnostic:2 mitigated:1 maximizes:1 what:2 emerging:2 unobserved:2 guarantee:3 tewaria:1 pseudo:1 every:1 auai:1 growth:1 k2:1 unit:1 grant:1 intervention:13 understood:2 benedict:1 naoki:1 randomizes:2 sutton:2 lev:2 becoming:1 might:2 examined:1 dynamically:1 contextually:1 limited:1 range:1 averaged:1 practical:1 acknowledgment:1 union:1 regret:48 implement:1 goyal:6 kernelised:1 procedure:2 evolving:1 significantly:2 weather:4 suggest:1 get:2 cannot:3 selection:1 context:15 risk:2 writing:1 map:3 demonstrated:1 missing:1 maximizing:2 send:3 go:1 kale:1 independently:1 thompson:23 simplicity:4 unstructured:1 estimator:5 rehg:1 updated:1 pt:1 play:9 suppose:2 user:14 exact:1 paralleling:1 programming:1 us:3 harvard:4 trick:1 trend:1 particularly:1 walking:2 observed:6 wang:1 worst:1 susan:5 news:3 highest:3 complexity:8 covariates:1 reward:73 cristianini:1 dynamic:1 depend:4 mohsen:1 deliver:2 eric:1 learner:1 basis:1 shipra:1 easily:2 lunchtime:1 differently:1 derivation:1 fast:1 effective:1 artificial:3 choosing:2 quite:3 whose:1 plausible:2 valued:1 otherwise:3 calendar:1 satyen:1 statistic:5 noisy:1 delivered:1 online:1 sequence:3 agrawal:6 predrag:3 interaction:5 remainder:1 combining:1 reyzin:2 achieve:1 sixteenth:1 validate:1 workday:1 exploiting:1 regularity:1 extending:1 generating:1 incremental:1 depending:1 derive:1 andrew:1 school:1 received:3 strong:2 ois:1 involves:1 implies:1 closely:1 contextualbandit:1 modifying:1 quartile:2 stochastic:2 centered:20 human:1 enable:1 implementing:1 require:2 randomization:4 extension:1 hold:2 around:1 considered:2 ground:2 normal:1 stt:35 great:1 seed:1 mapping:3 week:1 achieves:3 vary:1 estimation:1 create:4 weighted:2 mit:1 sensor:2 always:2 gaussian:4 rather:1 mobile:10 barto:2 varying:9 encode:1 derived:1 focus:1 improvement:1 karampatziakis:1 medication:2 adversarial:1 baseline:35 dependent:2 entire:1 bandit:66 selects:1 overall:1 arg:5 constrained:4 santosh:1 field:1 construct:1 nicely:1 beach:1 sampling:22 equal:1 adversarially:1 r7:1 icml:1 future:2 micro:2 few:2 ortner:1 richard:1 randomly:2 individual:2 murphy:6 recalling:1 attempt:1 stationarity:1 interest:1 message:11 mining:1 evaluation:3 personalization:1 experience:3 conduct:1 taylor:1 walk:1 theoretical:2 android:1 miroslav:1 korda:2 column:1 modeling:1 leslie:1 stacking:1 deviation:2 hundred:1 conducted:1 too:2 optimally:1 connect:1 randomizing:1 chooses:1 banner:1 st:44 fundamental:1 randomized:5 international:5 lee:1 receiving:1 again:1 cesa:2 choose:5 classically:1 leading:1 li:6 potential:1 summarized:2 coefficient:4 inc:1 sloan:1 depends:3 stream:1 ad:1 view:1 optimistic:1 option:1 participant:3 worsen:1 contribution:1 minimize:1 square:3 formed:1 variance:1 characteristic:1 kaufmann:1 gathered:2 sitting:1 correspond:2 confounds:1 conceptually:1 bayesian:3 iid:1 advertising:1 randomness:1 history:1 influenced:1 ed:1 definition:2 centering:9 against:1 pp:8 
associated:1 proof:4 mi:1 wearable:1 pilot:2 proved:1 treatment:5 popular:1 hsu:1 recall:1 reminder:2 schedule:1 routine:1 auer:1 higher:3 day:5 follow:2 wei:3 strongly:2 generality:2 furthermore:4 just:1 langford:3 web:5 navin:1 nonlinear:6 morning:1 pulling:1 usa:1 effect:5 k22:4 true:2 unbiased:4 former:1 evolution:1 assigned:1 hence:2 nonzero:16 puterman:2 conditionally:1 encourages:1 noted:1 criterion:1 tt:1 p50:1 recently:2 common:1 physical:1 empirically:1 rl:1 extend:1 smoothness:1 rd:5 debug:1 shawe:1 had:1 lihong:3 access:2 longer:1 similarity:1 etc:1 gt:8 nicolo:1 recent:1 scenario:5 inequality:3 binary:1 success:1 arbitrarily:3 morgan:1 minimum:1 care:1 dudik:2 nikos:1 abnormally:1 converge:1 maximize:2 determine:2 tempting:1 dashed:2 ii:7 full:2 d0:4 calculation:1 offer:1 long:2 believed:1 ilias:1 impact:2 variant:1 regression:1 basic:1 liao:2 expectation:2 kernel:1 tailored:2 suppl:1 achieved:2 dec:1 whereas:1 fellowship:1 else:1 median:7 concluded:1 sends:2 appropriately:1 publisher:1 unlike:1 isolate:1 subject:1 sent:7 simulates:1 habituation:1 practitioner:1 nonstationary:2 near:3 identically:1 independence:1 psychology:1 nonstochastic:1 perfectly:1 idea:1 thread:1 motivated:4 whether:5 effort:1 abruptly:1 peter:1 action:85 useful:1 tewari:5 delivering:3 listed:2 nonparametric:1 mid:1 category:1 generate:4 occupy:1 schapire:2 nsf:1 blown:1 arising:1 per:1 discrete:1 promise:1 nevertheless:1 demonstrating:2 achieving:2 bastani:2 changing:2 d3:1 ht:10 asymptotically:1 merely:1 year:1 sum:1 run:2 uncertainty:2 fourth:1 place:1 almost:1 decide:1 fran:1 delivery:3 decision:14 bound:14 internet:2 played:1 annual:1 activity:6 untenable:1 occur:1 constraint:7 constrain:2 personalized:1 generates:1 aspect:1 nathan:2 argument:2 optimality:1 min:14 kumar:1 ssrn:1 martin:1 department:3 influential:1 developing:2 according:3 remain:1 smaller:1 son:1 making:3 invariant:2 taken:1 remains:1 eventually:1 count:3 wrt:4 end:2 umich:2 sending:4 available:3 apply:2 observe:5 hierarchical:1 away:1 appropriate:1 remaining:2 include:3 laviolette:1 medicine:1 giving:1 plausibly:1 r01:2 move:1 fa:2 concentration:1 rt:47 dependence:1 linucb:1 link:1 zooming:1 simulated:2 argue:1 collected:4 disengaging:1 nello:1 length:1 modeled:6 prompted:1 minimizing:1 unfortunately:1 robert:2 potentially:1 design:2 policy:17 unknown:1 perform:1 allowing:4 bianchi:2 observation:1 markov:2 benchmark:3 finite:1 t:8 anti:1 payoff:2 jim:1 rn:1 aleksandrs:1 abe:2 prompt:1 david:1 required:1 slivkins:2 recalled:1 learned:1 nip:1 adversary:2 usually:2 regime:1 summarize:1 challenge:2 ambuj:4 including:2 max:16 power:1 critical:3 nakamura:2 valko:2 indicator:2 arm:7 imply:1 health:11 prior:3 literature:2 relative:2 sublinear:4 suggestion:11 proportional:2 var:2 bayati:2 penalization:1 foundation:1 article:3 playing:3 penalized:2 supported:2 keeping:2 enjoys:1 offline:2 side:1 allow:2 saul:1 wide:1 taking:5 munos:1 absolute:1 sparse:1 calculated:1 xn:2 world:2 cumulative:13 rich:1 computes:1 ignores:1 reinforcement:3 adaptive:3 emphasize:1 preferred:1 keep:1 dealing:1 active:1 disrupt:1 search:1 evening:1 learn:3 ca:1 career:1 obtaining:1 nia:1 excellent:1 complex:10 anthony:1 domain:3 main:2 xuanhui:1 noise:1 yevgeny:1 nothing:1 allowed:2 body:1 fashion:1 tong:1 wiley:1 sub:1 candidate:1 lie:1 advertisement:1 learns:1 theorem:6 minute:1 xt:9 bastien:1 pac:1 r8:1 sequential:1 effectively:1 supplement:6 portal:1 overwhelmed:1 conditioned:1 horizon:1 twentyseventh:1 suited:1 smoothly:1 michigan:2 logarithmic:1 simply:2 
likely:4 bubeck:2 seldin:2 contained:1 partially:1 recommendation:2 springer:1 afternoon:1 corresponds:1 chance:1 minimizer:2 satisfies:1 cdf:1 acm:3 conditional:1 goal:1 presentation:1 considerable:1 hard:1 change:4 infinite:1 determined:1 reducing:1 specifically:1 sampler:2 averaging:1 decouple:1 lemma:3 called:1 total:2 experimental:1 orthogonalize:2 ucb:1 meaningful:1 select:1 latter:1 nhlbi:1 greenewald:1 incorporate:1 |
6,831 | 718 | A Massively-Parallel SIMD Processor for
Neural Network and Machine Vision
Applications
Michael A. Glover
Current Technology, Inc.
99 Madbury Road
Durham, NH 03824
W. Thomas Miller, III
Department of Electrical and Computer Engineering
The University of New Hampshire
Durham, NH 03824
Abstract
This paper describes the MM32k, a massively-parallel SIMD computer which is easy to program, high in performance, low in cost
and effective for implementing highly parallel neural network architectures. The MM32k has 32768 bit serial processing elements,
each of which has 512 bits of memory, and all of which are interconnected by a switching network. The entire system resides on
a single PC-AT compatible card. It is programmed from the host
computer using a C++ language class library which abstracts the
parallel processor in terms of fast arithmetic operators for vectors
of variable precision integers.
1 INTRODUCTION
Many well known neural network techniques for adaptive pattern classification and
function approximation are inherently highly parallel, and thus have proven difficult to implement for real-time applications at a reasonable cost. This includes
a variety of learning systems such as radial basis function networks [Moody 1989],
Kohonen self-organizing networks [Kohonen 1982], ART family networks [Carpenter
1988], and nearest-neighbor interpolators [Duda 1973], among others. This paper
describes the MM32k, a massively-parallel SIMD computer which is easy to program, high in performance, low in cost and effective for implementing highly parallel
neural network architectures. The MM32k acts as a coprocessor to accelerate vector
arithmetic operations on PC-AT class computers, and can achieve giga-operation
per second performance on suitable problems. It is programmed from the host
computer using a C++ language class library, which overloads typical arithmetic
operators, and supports variable precision arithmetic. The MM32k has 32768 bit
serial PEs, or processing elements, each of which has 512 bits of memory, and all
of which are interconnected by a switching network. The PEs are combined with
their memory on an single DRAM memory chip giving 2048 processors per chip.
The entire 32768 processor system resides on a single ISA bus compatible card. It
is much more cost effective than other SIMD processors [Hammerstrom 1990; Hillis
1985; Nickolls 1990; Potter 1985] and more flexible than fixed purpose chips [Holler
1991].
2 SIMD ARCHITECTURE
The SIMD PE array contains 32768 one bit processors, each with 512 bits of memory
and a connection to the interconnection network. The PE array design is unique
in that 2048 PEs, including their PE memory, are realized on a single chip. The
total PE array memory is 2 megabytes and has a peak memory bandwidth is 25
gigabytes per second. The PE array can add 8 bit integers at 2.5 gigaoperations
per second. It also dissipates less than 10 watts of power and is shown in Figure 1.
Each PE has three one bit registers, a 512 bit memory, and a one bit AL U. It
performs bit serial arithmetic and can therefore vary the number of bits of precision
to fit the problem at hand, saving SIMD instruction cycles and SIMD memory.
There are 17 instructions in the PE instruction set, all of which execute at a 6.25
MIPS rate. The PE instruction set is functionally complete in that it can perform
boolean NOT and OR functions and can therefore perform any operation, including
arithmetic and conditional operations. A single PE is shown in Figure 2.
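As a small host-side illustration (our own, not PE microcode) of why NOT and OR suffice, the remaining boolean operations can be composed from them via De Morgan's law:

/* Single-bit operands in {0, 1}; bit_not and bit_or stand in for the PE's
   native operations, and AND and XOR are built from them. */
int bit_not(int a)        { return a ^ 1; }
int bit_or (int a, int b) { return a | b; }
int bit_and(int a, int b) { return bit_not(bit_or(bit_not(a), bit_not(b))); }
int bit_xor(int a, int b) { return bit_and(bit_or(a, b),
                                           bit_not(bit_and(a, b))); }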
The interconnection network allows data to be sent from one PE to another. It is
implemented by a 64*64 full crossbar switch with 512 PEs connected to each port
of the switch. It allows data to be sent from one PE to another PE, an arbitrary
distance away, in constant time. The peak switch bandwidth is 280 megabytes per
second. The switch also allows the PE array to perform data reduction operations,
such as taking the sum or maximum over data elements distributed across all PEs.
3
C++ PROGRAMMING ENVIRONMENT
The purpose of the C++ programming environment is to allow a programmer to
declare and manipulate vectors on the MM32k as if they were variables in a program running on the host computer. Programming is performed entirely on the
host, using standard MS-DOS or Windows compatible C++ compilers. The C++
programming environment for the MM32k is built around a C++ class, named
[Figure 1: A block diagram of the MM32k. The host computer (PC-AT) issues vector instructions and data to the controller, which broadcasts PE instructions and data to PEs 0 through 32767; all PEs connect to the switch.]
[Figure 2: A block diagram of a single processing element (PE): a 512-bit memory addressed by a 9-bit address from the controller, a PE ALU driven by an opcode from the controller, one-bit A, M, and B registers, and data paths to and from the switch.]
Table 1: 8 Bit Operations With 32768 and 262144 Elements

8 bit operation        Actual MOPS (length 32768)   Actual MOPS (length 262144)
copy                   1796                          9429
vector+vector          1455                          2074
vector+scalar          1864                          3457
vector*vector          206                           215
vector*scalar          426                           450
vector>scalar          1903                          6223
align(vector,scalar)   186                           213
sum(vector)            52                            306
maximum(vector)        114                           754
MM_VECTOR, which represents a vector of integers. Most of the standard C arithmetic operators, such as +, -, *, /, =, and >, have been overloaded to work with this class. Some basic functions, such as absolute value, square root, minimum, maximum, align, and sum, have also been overloaded or defined to work with the class.

The significance of the class MM_VECTOR is that instances of it look and act like ordinary variables in a C++ program. So a programmer may add, subtract, assign, and manipulate these vector variables from a program running on the host computer, but the storage associated with them is in the SIMD memory and the vector operations are performed in parallel by the SIMD PEs. MM_VECTORs can be longer than 32768. This is managed (transparently to the host program) by placing two or more vector elements in the SIMD memory of each PE. The class library keeps track of the number of words per PE. MM_VECTORs can be represented by different numbers of bits. The class library automatically keeps track of the number of bits needed to represent each MM_VECTOR without overflow. For example, if two 12-bit integers were added together, then 13 bits would be needed to represent the sum without overflow. The resulting MM_VECTOR would have 13 bits. This saves SIMD memory space and SIMD PE instruction cycles. The performance of the MM32k on simple operators running under the class library is listed in Table 1.
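A minimal sketch of what such a class interface might look like is shown below. This is our own illustration of the operator-overloading idea; the actual class library's interface and internals are not documented here, so all member names are assumptions.

/* Illustrative interface only; bodies would queue bit-serial PE instructions. */
class MM_VECTOR {
public:
    explicit MM_VECTOR(long n);                 /* allocate n elements in SIMD memory */
    MM_VECTOR operator+(const MM_VECTOR& rhs) const;
    MM_VECTOR operator-(const MM_VECTOR& rhs) const;
    MM_VECTOR operator*(const MM_VECTOR& rhs) const;
    MM_VECTOR& operator=(long scalar);          /* broadcast a scalar to all elements */
private:
    long length;   /* number of elements, distributed across the 32768 PEs */
    int  bits;     /* current precision; widened automatically to avoid overflow */
};

The key design point is that each overloaded operator would compile down to a queued sequence of bit-serial PE instructions, so ordinary-looking host code drives all 32768 PEs at once.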
4 NEURAL NETWORK EXAMPLES
A common operation found in neural network classifiers (Kohonen, ART, etc.) is
the multi-dimensional nearest-neighbor match. If the network has a large number
of nodes, this operation is particularly inefficient on single processor systems, which
must compute the distance metric for each node sequentially. Using the MM32k, the
distance metrics for all nodes (up to 32768 nodes) can be computed simultaneously,
and the identification of the minimum distance can be made using an efficient tree
compare included in the system microcode.
Table 2: Speedup on Nearest Neighbor Search

Processor   Time for 32768 nodes   Time for 65536 nodes   MM32k speedup (32768)   MM32k speedup (65536)
MM32k       2.2 msec               3.1 msec               1:1                     1:1
i486        350 msec               700 msec               159:1                   226:1
MIPS        970 msec               1860 msec              441:1                   600:1
Alpha       81 msec                177 msec               37:1                    57:1
SPARC       410 msec               820 msec               186:1                   265:1
Figure 3 shows a C++ code example for performing a 16-dimensional nearest neighbor search over 32768 nodes. The global MM_VECTOR variable state[16] defines the 16-dimensional location of each node. Each logical element of state[] (state[0], state[1], etc.) is actually a vector with 32768 elements distributed across all processors. The routine find_best_match() computes the euclidean distance between each node's state and the current test vector test_input[], which resides on the host processor. Note that the equations appear to be scalar in nature, but in fact direct vector operations to be performed by all processors simultaneously.

The performance of the nearest neighbor search shown in Figure 3 is listed in Table 2. Performance on the same task is also listed for four comparison processors: a Gateway 2000 model 4DX2-66V with 66 MHz 80486 processor (i486), a DECstation 5000 Model 200 with 25 MHz MIPS R3000A processor (MIPS), a DECstation 3000 Model 500AXP with 150 MHz Alpha AXP processor (Alpha), and a Sun SPARCstation 10 Model 30 with 33 MHz SuperSPARC processor (SPARC). There are 16 subtractions, 16 additions, 16 absolute values, one global minimum, and one global first operation performed. The MM32k is tested on problems with 32768 and 65536 exemplars and compared against four popular serial machines performing equivalent searches. The MM32k requires 3.1 milliseconds to search 65536 exemplars, which is 265 times faster than a SPARC 10.
The flexibility of the MM32k for neural network applications was demonstrated by implementing complete fixed-point neural network paradigms on the MM32k and on the four comparison processors (Table 3). Three different neural network examples were evaluated. The first was a radial basis function network with 32,768 basis functions (rational function approximations to gaussian functions). Each basis function had 9 8-bit inputs, 3 16-bit outputs (a vector basis function magnitude), and independent width parameters for each of the nine inputs. The performances listed in the table (RBF) are for feedforward response only. The second example was a Kohonen self-organizing network with a two-dimensional sheet of Kohonen nodes of dimension 200x150 (30,000 nodes). The problem was to map a nonlinear robotics forward kinematics transformation with eight degrees of freedom (8-bit parameters) onto the two-dimensional Kohonen layer. The performances listed in the table (Kohonen) are for self-organizing training. The third example problem was a neocognitron for target localization in a 256x256 8-bit input image. The first hidden layer of the neocognitron had 8 256x256 sheets of linear convolution units
/* declare 16-D MM32k exemplars */
MM_VECTOR state[16] = {
    MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768),
    MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768),
    MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768),
    MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768), MM_VECTOR(32768)
};

/* return PE number of processor with closest match */
long find_best_match(long test_input[16])
{
    int i;
    MM_VECTOR difference(32768);   /* differences */
    MM_VECTOR distance(32768);     /* distances   */

    /* compute the 16-D distance scores */
    distance = 0;
    for (i = 0; i < 16; ++i) {
        difference = state[i] - test_input[i];
        distance = distance + (difference * difference);
    }

    /* return the PE number for minimum distance */
    return first(distance == minimum(distance));
}

Figure 3: A C++ code example implementing a nearest neighbor search.
Table 3: MM32k Speedup for Select Neural Network Paradigms

Processor   RBF      Kohonen    NCGTRN
MM32k       1:1      1:1        1:1
i486        161:1    76:1       336:1
MIPS        180:1    69:1       207:1
Alpha       31:1     11:1       35:1
SPARC       94:1     49:1       378:1
with 16x16 receptive fields in the input image. The second hidden layer of the
neocognitron had 8 256x256 sheets of sigmoidal units (fixed-point rational function
approximations to sigmoid functions) with 3x3x8 receptive fields in the first hidden
layer. The output layer of the neocognitron had 256x256 sigmoidal units with
3x3x8 receptive fields in the second hidden layer. The performances listed in the
table (NCGTRN) correspond to feedforward response followed by backpropagation
training. The absolute computation times for the MM32k were 5.1 msec, 10 msec,
and 1.3 sec, for the RBF, Kohonen, and NCGTRN neural networks, respectively.
Acknowledgements
This work was supported in part by a grant from the Advanced Research Projects
Agency (ARPA/ONR Grant #N00014-92-J-1858).
Estimating Mutual Information for
Discrete-Continuous Mixtures
Sreeram Kannan
Department of Electrical Engineering
University of Washington
[email protected]
Weihao Gao
Department of ECE
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
[email protected]
Sewoong Oh
Department of IESE
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
[email protected]
Pramod Viswanath
Department of ECE
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
[email protected]
Abstract
Estimation of mutual information from observed samples is a basic primitive in
machine learning, useful in several learning tasks including correlation mining,
information bottleneck, Chow-Liu tree, and conditional independence testing in
(causal) graphical models. While mutual information is a quantity well-defined
for general probability spaces, estimators have been developed only in the special
case of discrete or continuous pairs of random variables. Most of these estimators
operate using the 3H-principle, i.e., by calculating the three (differential) entropies
of X, Y and the pair (X, Y ). However, in general mixture spaces, such individual
entropies are not well defined, even though mutual information is. In this paper, we
develop a novel estimator for estimating mutual information in discrete-continuous
mixtures. We prove the consistency of this estimator theoretically as well as
demonstrate its excellent empirical performance. This problem is relevant in a
wide-array of applications, where some variables are discrete, some continuous,
and others are a mixture between continuous and discrete components.
1
Introduction
A fundamental quantity of interest in machine learning is mutual information (MI), which characterizes the shared information between a pair of random variables (X, Y ). MI obeys several intuitively
appealing properties including the data-processing inequality, invariance under one-to-one transformations and chain rule [10]. Therefore, mutual information is widely used in machine learning for
canonical tasks as classification [35], clustering [32, 49, 8, 29] and feature selection [2, 13]. Mutual
information also emerges as the ?correct" quantity in several graphical model inference problems
(e.g., the Chow-Liu tree [9] and conditional independence testing [6]). MI is also pervasively used
in many data science application domains, such as sociology [40], computational biology [28], and
computational neuroscience [41].
An important problem in any of these applications is to estimate mutual information effectively
from samples. While mutual information has been the de facto measure of information in several
applications for decades, the estimation of mutual information from samples remains an active
research problem. Recently, there has been a resurgence of interest in entropy, relative entropy and
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
mutual information estimators, on both the theoretical as well as practical fronts [46, 31, 44, 45, 22,
19, 7, 15, 14, 17, 16].
The previous estimators focus on either of two cases: the data is either purely discrete or purely
continuous. In these special cases, the mutual information can be calculated based on the three
(differential) entropies of X, Y and (X, Y). We term estimators based on this principle 3H-estimators (since they estimate three entropy terms), and a majority of previous estimators fall under
this category [19, 16, 46].
In practical downstream applications, we often have to deal with a mixture of continuous and discrete
random variables. Random variables can be mixed in several ways. First, one random variable can be
discrete whereas the other is continuous. For example, we want to measure the strength of relationship
between children's age and height; here age X is discrete and height Y is continuous. Secondly,
a single scalar random variable itself can be a mixture of discrete and continuous components.
For example, consider X taking a zero-inflated-Gaussian distribution, which takes value 0 with
probability 0.1 and is a Poisson distribution with mean 10 with probability 0.9. This distribution has
both a discrete component as well as a component with density, and is a well-known model for gene
expression readouts [24, 37]. Finally, X and/or Y can be high-dimensional vectors, each of whose
components may be discrete, continuous or mixed.
In all of the aforementioned mixed cases, mutual information is well-defined through the Radon-Nikodym derivative (see Section 2) but cannot be expressed as a function of the entropies or differential entropies of the random variables. Crucially, entropy is not well defined when a single
scalar random variable comprises both discrete and continuous components, in which case 3H
estimators (the vast majority of prior art) cannot be directly employed. In this paper, we address this
challenge by proposing an estimator that can handle all these cases of mixture distributions. The
estimator directly estimates the Radon-Nikodym derivative using the k-nearest neighbor distances
from the samples; we prove $\ell_2$ consistency of the estimator and demonstrate its excellent practical
performance through a variety of experiments on both synthetic and real datasets. Most relevantly, it
strongly outperforms natural baselines that discretize the mixed random variables (by quantization)
or make them continuous by adding small Gaussian noise.
The rest of the paper is organized as follows. In Section 2, we review the general definition of
mutual information via the Radon-Nikodym derivative and show that it is well-defined for all the cases of
mixtures. In Section 3, we propose our estimator of mutual information for mixed random variables.
In Section 4, we prove that our estimator is `2 consistent under certain technical assumptions and
verify that the assumptions are satisfied for most practical cases. Section 5 contains the results of our
detailed synthetic and real-world experiments testing the efficacy of the proposed estimator.
2
Problem Formulation
In this section, we define mutual information for general distributions as follows (e.g., [39]).
Definition 2.1. Let $P_{XY}$ be a probability measure on the space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are both
Euclidean spaces. For any measurable sets $A \subseteq \mathcal{X}$ and $B \subseteq \mathcal{Y}$, define $P_X(A) = P_{XY}(A \times \mathcal{Y})$ and
$P_Y(B) = P_{XY}(\mathcal{X} \times B)$. Let $P_X P_Y$ be the product measure $P_X \otimes P_Y$. Then the mutual information
$I(X;Y)$ of $P_{XY}$ is defined as
$$I(X;Y) \;\equiv\; \int_{\mathcal{X} \times \mathcal{Y}} \log \frac{dP_{XY}}{dP_X P_Y}\, dP_{XY}, \qquad (1)$$
where $\frac{dP_{XY}}{dP_X P_Y}$ is the Radon-Nikodym derivative.
We prove that for any probability measure $P$ on $\mathcal{X} \times \mathcal{Y}$, the joint measure $P_{XY}$ is absolutely
continuous with respect to the product measure PX PY , hence mutual information is well-defined.
See Appendix ?? for the detailed proof. Notice that this general definition includes the following
cases of mixtures: (1) X is discrete and Y is continuous (or vice versa); (2) X or Y has many
components each, where some components are discrete and some are continuous; (3) X or Y or their
joint distribution is a mixture of continuous and discrete distributions.
3
Estimators of Mutual Information
Review of prior work. The estimation problem is quite different depending on whether the underlying distribution is discrete, continuous or mixed. As pointed out earlier, most existing estimators for
mutual information are based on the 3H principle: they estimate the three entropy terms first. This
3H principle can be applied only in the purely discrete or purely continuous case.
Discrete data: For entropy estimation of a discrete variable X, the straightforward approach to plug-in
the estimated probabilities $\hat{p}_X(x)$ into the formula for entropy has been shown to be suboptimal
[33, 1]. Novel entropy estimators with sub-linear sample complexity have been proposed [48, 53, 19,
21, 20, 23]. MI estimation can then be performed using the 3H principle, and such an approach is
shown to be worst-case optimal for mutual-information estimation [19].
Continuous data: There are several estimators for differential entropy of continuous random variables,
which have been exploited in a 3H principle to calculate the mutual information [3]. One family of
entropy estimators is based on kernel density estimators [34] followed by re-substitution estimation. An alternate family of entropy estimators is based on k-Nearest Neighbor (k-NN) estimates,
beginning with the pioneering work of Kozachenko and Leonenko [26] (the so-called KL estimator).
Recent progress involves an inspired mixture of an ensemble of kernel and k-NN estimators [46, 4].
Exponential concentration bounds under certain conditions are in [43].
Mixed Random Variables: Since the entropies themselves may not be well defined for mixed random
variables, there is no direct way to apply the 3H principle. However, once the data is quantized, this
principle can be applied in the discrete domain. That mutual information in arbitrary measure spaces
can indeed be computed as a maximum over quantization is a classical result [18, 36, 38]. However,
the choice of quantization is complicated and while some quantization schemes are known to be
consistent when there is a joint density [11], the mixed case is complex. Estimators of the average of
Radon-Nikodym derivative has been studied in [50, 51]. Very recent work generalizing the ensemble
entropy estimator when some components are discrete and others continuous is in [31].
Beyond 3H estimation: In an inspired work, [27] proposed a direct method for estimating mutual
information (the KSG estimator) when the variables have a joint density. The estimator starts from
the 3H estimator built on k-NN differential entropy estimates, and employs a
contain any theoretical proof, even of consistency, its excellent practical performance has encouraged
widespread adoption. Recent work [17] has established the consistency of this estimator along with its
convergence rate. Further, recent works [14, 16] involving a combination of kernel density estimators
and k-NN methods have been proposed to further improve the KSG estimator. [42] extends the KSG
estimator to the case when one variable is discrete and another is scalar continuous.
None of these works considers the case where even one of the components has a mixture of continuous and
discrete distributions, let alone general probability distributions. There are two generic options:
(1) add small independent noise to each sample to break ties among repeated samples and apply
a continuous-valued MI estimator (like KSG), or (2) quantize and apply discrete MI estimators,
although the performance in the high-dimensional case is poor. These form baselines to compare against in our
detailed simulations.
Mixed Regime. We first examine the behavior of other estimators in the mixed regime, before
proceeding to develop our estimator. Let us consider the case when X is discrete (but real valued)
and Y possesses a density. In this case, we will examine the consequence of using the 3H principle,
with differential entropy estimated by the K-nearest neighbors. To do this, fix a parameter k, that
determines the number of neighbors and let $\rho_{i,z}$ denote the distance to the $k$-th nearest neighbor of
$z$, where $z = x$ or $z = y$ or $z = (x, y)$. Then
$$\hat{I}^{(N)}_{3H}(X;Y) = \frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{N c_x \rho_{i,x}^{d}}{k} + a(k)\right) + \frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{N c_y \rho_{i,y}^{d}}{k} + a(k)\right) - \frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{N c_{xy} \rho_{i,xy}^{d}}{k} + a(k)\right),$$
where $\psi(\cdot)$ is the digamma function and $a(x) = \log(x) - \psi(x)$. In the case that X is discrete and Y
has a density, $\hat{I}_{3H}(X;Y) = -\infty + a - b = -\infty$, which is clearly wrong.
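To make this failure mode concrete, the following is a minimal NumPy sketch of the Kozachenko-Leonenko entropy estimate and the 3H mutual information estimate built from it; the function names and the brute-force O(N^2) distance computation are our own illustration, not code from this paper. With a discrete X, repeated samples force some $\rho_{i,x} = 0$, and the estimate returns $-\infty$ exactly as described above.

# Illustrative sketch (ours): KL entropy estimator and the 3H MI estimate built on it.
import numpy as np
from math import lgamma, pi
from scipy.special import digamma

def kl_entropy(z, k=5):
    """Kozachenko-Leonenko differential entropy estimate; -inf if samples repeat."""
    z = z.reshape(len(z), -1)
    n, d = z.shape
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)  # all pairwise distances
    np.fill_diagonal(dist, np.inf)                                 # exclude the point itself
    rho = np.sort(dist, axis=1)[:, k - 1]                          # k-NN distance of each point
    log_cd = (d / 2.0) * np.log(pi) - lgamma(d / 2.0 + 1.0)        # log volume of the unit d-ball
    with np.errstate(divide="ignore"):                             # log(0) -> -inf for duplicates
        return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(rho))

def mi_3h(x, y, k=5):
    """3H estimate H(X) + H(Y) - H(X,Y); diverges when a marginal has repeated samples."""
    x, y = x.reshape(len(x), -1), y.reshape(len(y), -1)
    return kl_entropy(x, k) + kl_entropy(y, k) - kl_entropy(np.hstack([x, y]), k)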
The basic idea of the KSG estimator is to ensure that the radius $\rho$ is the same for $x$, $y$ and $(x, y)$, and that the
difference is instead in the number of nearest neighbors. Let $n_{x,i}$ be the number of samples $X_j$
within distance $\rho_{i,xy}$ of $X_i$, and $n_{y,i}$ the number of samples $Y_j$ within distance $\rho_{i,xy}$ of $Y_i$. Then the KSG
estimator is given by
$$\hat{I}^{(N)}_{KSG} \;\equiv\; \frac{1}{N}\sum_{i=1}^{N}\Big(\, \psi(k) + \log(N) - \log(n_{x,i} + 1) - \log(n_{y,i} + 1) \,\Big),$$
where $\psi(\cdot)$ is the digamma function.
In the case of X being discrete and Y being continuous, it turns out that the KSG estimator does not
blow up (unlike the 3H estimator), since the distances do not go to zero. However, in the mixed case,
the estimator has a non-trivial bias due to discrete points and is no longer consistent.
Proposed Estimator. We propose the following estimator for general probability distributions,
inspired by the KSG estimator. The intuition is as follows. First notice that MI is the average of the
logarithm of the Radon-Nikodym derivative, so we compute the Radon-Nikodym derivative at each
sample $i$ and take the empirical average. The re-substitution estimator for MI is then given as follows:
$$\hat{I}(X;Y) \;\equiv\; \frac{1}{n}\sum_{i=1}^{n} \log \frac{dP_{XY}}{dP_X P_Y}\bigg|_{(x_i, y_i)} .$$
The basic idea behind our estimate of the Radon-Nikodym derivative at each sample point is as follows:
- When the point is discrete (which can be detected by checking if the k-nearest neighbor
distance of sample $i$ is zero), we can assert that sample $i$ is in a discrete component, and we
can use a plug-in estimator for the Radon-Nikodym derivative.
- If the point is such that there is a joint density (locally), the KSG estimator suggests a natural idea: fix the radius and estimate the Radon-Nikodym derivative by
$\psi(k) + \log(N) - \log(n_{x,i} + 1) - \log(n_{y,i} + 1)$.
- If the k-nearest neighbor distance is not zero, then the point may be either purely continuous or mixed.
But we show below that the method for the purely continuous case is also applicable to the mixed case.

Precisely, let $n_{x,i}$ be the number of samples $X_j$ within distance $\rho_{i,xy}$ of $X_i$ and $n_{y,i}$ the number of
samples $Y_j$ within distance $\rho_{i,xy}$ of $Y_i$. Denote by $\tilde{k}_i$ the number of tuples $(X_j, Y_j)$ within distance $\rho_{i,xy}$ of $(X_i, Y_i)$. If
the k-NN distance is zero, which means that the sample $(X_i, Y_i)$ is a discrete point of the probability
measure, we set $k$ to $\tilde{k}_i$, the number of samples that have the same value as $(X_i, Y_i)$.
Otherwise we just keep $\tilde{k}_i$ as $k$. Our proposed estimator is described in detail in Algorithm 1.
Algorithm 1 Mixed Random Variable Mutual Information Estimator

Input: $\{X_i, Y_i\}_{i=1}^{N}$, where $X_i \in \mathcal{X}$ and $Y_i \in \mathcal{Y}$;
Parameter: $k \in \mathbb{Z}^{+}$;
for $i = 1$ to $N$ do
    $\rho_{i,xy}$ := the $k$-th smallest distance among $[\, d_{i,j} := \max\{\|X_j - X_i\|, \|Y_j - Y_i\|\},\; j \neq i \,]$;
    if $\rho_{i,xy} = 0$ then
        $\tilde{k}_i$ := number of samples such that $d_{i,j} = 0$;
    else
        $\tilde{k}_i := k$;
    end if
    $n_{x,i}$ := number of samples such that $\|X_j - X_i\| \le \rho_{i,xy}$;
    $n_{y,i}$ := number of samples such that $\|Y_j - Y_i\| \le \rho_{i,xy}$;
    $\xi_i := \psi(\tilde{k}_i) + \log N - \log(n_{x,i} + 1) - \log(n_{y,i} + 1)$;
end for
Output: $\hat{I}^{(N)}(X;Y) := \frac{1}{N}\sum_{i=1}^{N} \xi_i$.
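The following is a minimal NumPy transcription of Algorithm 1, assuming the inputs are arrays of shape (N, d_x) and (N, d_y); the function name mixed_mi and the brute-force O(N^2) neighbor search are our own, intended as a readable reference rather than an optimized implementation.

# Sketch (ours) of Algorithm 1: mixed random variable mutual information estimator.
import numpy as np
from scipy.special import digamma

def mixed_mi(x, y, k=5):
    """Estimate I(X;Y) from samples x of shape (N, dx) and y of shape (N, dy)."""
    x, y = x.reshape(len(x), -1), y.reshape(len(y), -1)
    n = len(x)
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances in X
    dy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)  # pairwise distances in Y
    dxy = np.maximum(dx, dy)              # d_ij = max{||Xj - Xi||, ||Yj - Yi||}
    np.fill_diagonal(dxy, np.inf)         # exclude j = i
    xi = np.empty(n)
    for i in range(n):
        rho = np.sort(dxy[i])[k - 1]      # k-NN distance rho_{i,xy}
        if rho == 0.0:                    # discrete point: count exact ties
            k_i = np.count_nonzero(dxy[i] == 0.0)
        else:
            k_i = k
        n_x = np.count_nonzero(dx[i] <= rho) - 1   # neighbors in X, excluding i itself
        n_y = np.count_nonzero(dy[i] <= rho) - 1   # neighbors in Y, excluding i itself
        xi[i] = digamma(k_i) + np.log(n) - np.log(n_x + 1) - np.log(n_y + 1)
    return float(np.mean(xi))

On, say, 1000 samples of a strongly correlated Gaussian pair, the output should be close to the closed-form Gaussian MI; in practice the brute-force search would be replaced by a k-d tree or similar nearest-neighbor index.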
We note that our estimator recovers previous ideas in several canonical settings. If the underlying
distribution is discrete, the k-nearest neighbor distance $\rho_{i,xy}$ equals 0 with high probability, and
our estimator recovers the plug-in estimator. If the underlying distribution does not have probability
masses, then there are no multiple overlapping samples, so $\tilde{k}_i$ equals $k$, and our estimator recovers the
KSG estimator. If X is discrete, Y is single-dimensional continuous, and $P_X(x) > 0$ for all $x$, then
for a sufficiently large dataset the $k$ nearest neighbors of sample $(x_i, y_i)$ will be located at the same $x_i$
with high probability. Therefore, our estimator recovers the discrete vs. continuous estimator of [42].
4
Proof of Consistency
We show that under certain technical conditions on the joint probability measure, the proposed
estimator is consistent. We begin with the following definitions.
$$P_{XY}(x, y, r) \equiv P_{XY}\big(\{(a, b) \in \mathcal{X} \times \mathcal{Y} : \|a - x\| \le r,\ \|b - y\| \le r\}\big), \qquad (2)$$
$$P_X(x, r) \equiv P_X\big(\{a \in \mathcal{X} : \|a - x\| \le r\}\big), \qquad (3)$$
$$P_Y(y, r) \equiv P_Y\big(\{b \in \mathcal{Y} : \|b - y\| \le r\}\big). \qquad (4)$$

Theorem 1. Suppose that

1. $k$ is chosen to be a function of $N$ such that $k_N \to \infty$ and $k_N \log N / N \to 0$ as $N \to \infty$.
2. The set of discrete points $\{(x, y) : P_{XY}(x, y, 0) > 0\}$ is finite.
3. $\frac{P_{XY}(x,y,r)}{P_X(x,r)\, P_Y(y,r)}$ converges to $f(x, y)$ as $r \to 0$, and $f(x, y) \le C$ with probability 1.
4. $\mathcal{X} \times \mathcal{Y}$ can be decomposed into countably many disjoint sets $\{E_i\}_{i=1}^{\infty}$ such that $f(x, y)$ is uniformly continuous on each $E_i$.
5. $\int_{\mathcal{X} \times \mathcal{Y}} \log f(x, y)\, dP_{XY} < +\infty$.

Then we have $\lim_{N \to \infty} \mathbb{E}\big[\hat{I}^{(N)}(X; Y)\big] = I(X; Y)$.
Notice that Assumptions 2,3,4 are satisfied whenever (1) the distribution is (finitely) discrete; (2) the
distribution is continuous; (3) some dimensions are (countably) discrete and some dimensions are
continuous; (4) a mixture of the previous cases. Most real world data can be covered by these cases.
A sketch of the proof is below with the full proof in the supplementary material.
Proof. (Sketch) We start with an explicit form of the Radon-Nikodym derivative $dP_{XY}/(dP_X P_Y)$.

Lemma 4.1. Under Assumptions 3 and 4 in Theorem 1, $(dP_{XY}/(dP_X P_Y))(x, y) = f(x, y) = \lim_{r \to 0} P_{XY}(x, y, r)/(P_X(x, r)\, P_Y(y, r))$.

Notice that $\hat{I}^{(N)}(X;Y) = \frac{1}{N}\sum_{i=1}^{N} \xi_i$, where all $\xi_i$ are identically distributed, so $\mathbb{E}[\hat{I}^{(N)}(X;Y)] = \mathbb{E}[\xi_1]$. Therefore, the bias can be written as:
$$\mathbb{E}\big[\hat{I}^{(N)}(X;Y)\big] - I(X;Y) \;=\; \mathbb{E}_{XY}\Big[\mathbb{E}[\xi_1 \,|\, X, Y]\Big] - \int \log f(X, Y)\, dP_{XY} \;\le\; \int \Big|\, \mathbb{E}[\xi_1 \,|\, X, Y] - \log f(X, Y) \,\Big|\, dP_{XY} . \qquad (5)$$

Now we upper bound $|\, \mathbb{E}[\xi_1 | X, Y] - \log f(X, Y) \,|$ for every $(x, y) \in \mathcal{X} \times \mathcal{Y}$ by dividing the domain
into three parts as $\mathcal{X} \times \mathcal{Y} = \Omega_1 \cup \Omega_2 \cup \Omega_3$, where

- $\Omega_1 = \{(x, y) : f(x, y) = 0\}$;
- $\Omega_2 = \{(x, y) : f(x, y) > 0,\ P_{XY}(x, y, 0) > 0\}$;
- $\Omega_3 = \{(x, y) : f(x, y) > 0,\ P_{XY}(x, y, 0) = 0\}$.

We show that $\lim_{N \to \infty} \int_{\Omega_i} |\, \mathbb{E}[\xi_1 | X, Y] - \log f(X, Y) \,|\, dP_{XY} = 0$ for each $i \in \{1, 2, 3\}$ separately.

- For $(x, y) \in \Omega_1$, we will show that $\Omega_1$ has zero probability with respect to $P_{XY}$, i.e.
$P_{XY}(\Omega_1) = 0$. Hence, $\int_{\Omega_1} |\, \mathbb{E}[\xi_1 | X, Y] - \log f(X, Y) \,|\, dP_{XY} = 0$.
- For $(x, y) \in \Omega_2$, $f(x, y)$ equals $P_{XY}(x, y, 0)/(P_X(x, 0)\, P_Y(y, 0))$, so it can be viewed as
a discrete part. We will first show that the k-nearest neighbor distance $\rho_{k,1} = 0$ with high
probability. We then use the number of samples at $(x, y)$ as $\tilde{k}_i$, and show
that the mean of the estimate $\xi_1$ is close to $\log f(x, y)$.
- For $(x, y) \in \Omega_3$, the point can be viewed as a continuous part. We use a proof technique similar
to [27] to prove that the mean of the estimate $\xi_1$ is close to $\log f(x, y)$.
The following theorem bounds the variance of the proposed estimator.
Theorem 2. Assume in addition that

6. $(k_N \log N)^2 / N \to 0$ as $N \to \infty$.

Then we have
$$\lim_{N\to\infty} \mathrm{Var}\big[\hat{I}^{(N)}(X;Y)\big] = 0 . \qquad (6)$$

Proof. (Sketch) We use the Efron-Stein inequality to bound the variance of the estimator. For simplicity, let $\hat{I}^{(N)}(Z)$ be the estimate based on the original samples $\{Z_1, Z_2, \dots, Z_N\}$, where $Z_i = (X_i, Y_i)$,
and let $\hat{I}^{(N)}(Z_{\setminus j})$ be the estimate from $\{Z_1, \dots, Z_{j-1}, Z_{j+1}, \dots, Z_N\}$. Then a certain version of the Efron-Stein inequality states that:
$$\mathrm{Var}\big[\hat{I}^{(N)}(Z)\big] \;\le\; 2 \sum_{j=1}^{N} \sup_{Z_1,\dots,Z_N} \Big( \hat{I}^{(N)}(Z) - \hat{I}^{(N)}(Z_{\setminus j}) \Big)^2 .$$

Now recall that
$$\hat{I}^{(N)}(Z) = \frac{1}{N}\sum_{i=1}^{N} \xi_i(Z) = \frac{1}{N}\sum_{i=1}^{N} \Big( \psi(\tilde{k}_i) + \log N - \log(n_{x,i}+1) - \log(n_{y,i}+1) \Big) . \qquad (7)$$

Therefore, we have
$$\sup_{Z_1,\dots,Z_N} \Big| \hat{I}^{(N)}(Z) - \hat{I}^{(N)}(Z_{\setminus j}) \Big| \;\le\; \frac{1}{N} \sup_{Z_1,\dots,Z_N} \sum_{i=1}^{N} \big| \xi_i(Z) - \xi_i(Z_{\setminus j}) \big| . \qquad (8)$$

To upper bound the difference $| \xi_i(Z) - \xi_i(Z_{\setminus j}) |$ created by eliminating sample $Z_j$ for different
$i$'s, we consider three different cases: (1) $i = j$; (2) $\rho_{k,i} = 0$; (3) $\rho_{k,i} > 0$, and conclude that
$\sum_{i=1}^{N} | \xi_i(Z) - \xi_i(Z_{\setminus j}) | \le O(k \log N)$ for all $Z_i$'s. The details of the case study are in Section ?? of
the supplementary material. Plugging this into the Efron-Stein inequality, we obtain:
$$\mathrm{Var}\big[\hat{I}^{(N)}(Z)\big] \;\le\; 2 \sum_{j=1}^{N} \sup \Big( \hat{I}^{(N)}(Z) - \hat{I}^{(N)}(Z_{\setminus j}) \Big)^2 \;\le\; 2 \sum_{j=1}^{N} \Big( \frac{1}{N} \sup_{Z_1,\dots,Z_N} \sum_{i=1}^{N} \big| \xi_i(Z) - \xi_i(Z_{\setminus j}) \big| \Big)^2 = O\big( (k \log N)^2 / N \big) . \qquad (9)$$

By Assumption 6, we have $\lim_{N \to \infty} \mathrm{Var}\big[\hat{I}^{(N)}(Z)\big] = 0$.

Combining Theorem 1 and Theorem 2, we have the $\ell_2$ consistency of $\hat{I}^{(N)}(X;Y)$.
5
Simulations
We evaluate the performance of our estimator in a variety of (synthetic and real-world) experiments.
Experiment I. $(X, Y)$ is a mixture of one continuous distribution and one discrete distribution. The
continuous distribution is jointly Gaussian with zero mean and covariance $\Sigma = \begin{pmatrix} 1 & 0.9 \\ 0.9 & 1 \end{pmatrix}$, and
Figure 1: Left: An example of samples from a mixture of continuous (blue) and discrete (red)
distributions, where red points denote multiple samples. Right: An example of samples from a
discrete X and a continuous Y .
the discrete distribution is $P(X = 1, Y = 1) = P(X = -1, Y = -1) = 0.45$ and $P(X = 1, Y = -1) = P(X = -1, Y = 1) = 0.05$. These two distributions are mixed with equal probability. The
scatter plot of a set of samples from this distribution is shown in the left panel of Figure 1, where
the red squares denote multiple samples from the discrete distribution. For all synthetic experiments,
we compare our proposed estimator with a (fixed) partitioning estimator, an adaptive partitioning
estimator [11] implemented by [47], the KSG estimator [27], and a noisy KSG estimator (which adds
Gaussian noise $\mathcal{N}(0, \sigma^2 I)$ to each sample to transform all mixed distributions into continuous ones).
We plot the mean squared error versus the number of samples in Figure 2. The mean squared error is
averaged over 250 independent trials.
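For concreteness, a sampler for this mixture might look as follows; this is our own sketch (the paper does not include code for it):

# Sketch (ours) of the Experiment I sampling scheme.
import numpy as np

def sample_experiment_1(n, rng):
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    cont = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # continuous component
    sign = rng.choice([-1.0, 1.0], size=n)                   # X = +/-1 with probability 0.5
    flip = rng.random(n) < 0.1                               # P(Y = -X | X) = 0.1
    disc = np.stack([sign, np.where(flip, -sign, sign)], axis=1)
    use_cont = rng.random(n) < 0.5                           # mix with equal probability
    xy = np.where(use_cont[:, None], cont, disc)
    return xy[:, :1], xy[:, 1:]                              # X and Y as (n, 1) arrays

x, y = sample_experiment_1(2500, np.random.default_rng(0))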
The KSG estimator is entirely misled by the discrete samples as expected. The noisy KSG estimator
performs better, but the added noise causes the estimate to degrade. In this experiment the estimate
is less sensitive to the noise added, and its line is indistinguishable from the line for KSG. The
partitioning and adaptive partitioning methods quantize all samples, resulting in an extra quantization
error. Note that only the proposed estimator has error decreasing with the sample size.
Experiment II. X is a discrete random variable and Y is a continuous random variable. X is
uniformly distributed over the integers $\{0, 1, \dots, m-1\}$ and Y is uniformly distributed over the range
$[X, X + 2]$ for a given X. The ground truth is $I(X;Y) = \log(m) - (m-1)\log(2)/m$. We choose
$m = 5$; a scatter plot of a set of samples is in the right panel of Figure 1. Notice that in this case
(and the following experiments) our proposed estimator degenerates to KSG if the hyperparameter $k$
is chosen the same, hence KSG is not plotted. In this experiment our proposed estimator outperforms
other methods.
Experiment III. Higher-dimensional mixture. Let $(X_1, Y_1)$ and $(X_2, Y_2)$ have the same joint
distribution as in Experiment II and be independent of each other. We evaluate the mutual information
between $X = (X_1, X_2)$ and $Y = (Y_1, Y_2)$; the ground truth is $I(X;Y) = 2(\log(m) - (m-1)\log(2)/m)$.
We also consider $X = (X_1, X_2, X_3)$ and $Y = (Y_1, Y_2, Y_3)$, where $(X_3, Y_3)$ has the
same joint distribution as in Experiment II and is independent of $(X_1, Y_1)$ and $(X_2, Y_2)$; the ground truth is
$I(X;Y) = 3(\log(m) - (m-1)\log(2)/m)$. The adaptive partitioning algorithm works only for
one-dimensional X and Y and is not compared here.
We can see that the performance of the partitioning estimator is very bad because the number of partitions
grows exponentially with the dimension. The proposed algorithm suffers less from the curse of dimensionality.
In the right panel, the noisy KSG method has smaller error, but we point out that it is unstable with
respect to the noise level added: as the noise level is varied from $\sigma = 0.5$ to $\sigma = 0.7$, the
performance varies significantly (far from convergence).
Experiment IV. Zero-inflated Poissonization. Here $X \sim \mathrm{Exp}(1)$ is a standard exponential random
variable, and Y is a zero-inflated Poissonization of X: $Y = 0$ with probability $p$, and $Y \sim \mathrm{Poisson}(x)$ given $X = x$ with probability $1 - p$. Here the ground truth is
$$I(X;Y) = (1-p)\Big(2\log 2 - \gamma - \sum_{k=1}^{\infty} \log k \cdot 2^{-k}\Big) \approx (1-p)\, 0.3012,$$
where $\gamma$ is the Euler-Mascheroni constant. We repeat the experiment for no zero-inflation ($p = 0$) and for $p = 15\%$. We find that the proposed
estimator is comparable to adaptive partitioning for no zero-inflation and outperforms the others for 15%
zero-inflation.
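A sampler for this setup is again short (our own sketch):

# Sketch (ours) of Experiment IV: X ~ Exp(1), Y a zero-inflated Poissonization of X.
import numpy as np

def sample_experiment_4(n, p, rng):
    x = rng.exponential(1.0, size=n)      # X ~ Exp(1)
    y = rng.poisson(x).astype(float)      # Y | X = x  ~  Poisson(x)
    y[rng.random(n) < p] = 0.0            # zero-inflation: Y = 0 with probability p
    return x, y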
[Figure 2 panels: mean squared error vs. sample size (800-3200) for each estimator; legend entries: Proposed, Partitioning, Adapt Part, Noisy KSG, KSG.]
Figure 2: Mean squared error vs. sample size for synthetic experiments. Top row (left to right):
Experiment I; Experiment II. Middle row (left to right): Experiment III for 4 dimensions and 6
dimensions. Bottom row (left to right): Experiment IV for p = 0 and p = 15%.
We conclude that our proposed estimator is consistent for all these four experiments, and the mean
squared error is always the best or comparable to the best. Other estimators are either not consistent
or have large mean squared error for at least one experiment.
Feature Selection Task. Suppose there is a set of features modeled by independent random variables
$(X_1, \dots, X_p)$, and the data Y depends on a subset of features $\{X_i\}_{i \in S}$, where $\mathrm{card}(S) = q < p$. We
observe the features $(X_1, \dots, X_p)$ and the data Y and try to select which features are related to Y. In
many biological applications, some of the data is lost due to experimental reasons and set to 0; even
the available data is noisy. This setting naturally leads to a mixture of continuous and discrete parts,
which we model by supposing that the observations are $\tilde{X}_i$ and $\tilde{Y}$ instead of $X_i$ and $Y$. Here $\tilde{X}_i$ and
$\tilde{Y}$ equal 0 with a fixed dropout probability, and otherwise follow a Poisson distribution parameterized by $X_i$ or $Y$ (which
corresponds to the noisy observation).

In this experiment, $(X_1, \dots, X_{20})$ are i.i.d. standard exponential random variables and Y is simply
$(X_1, \dots, X_5)$. $\tilde{X}_i$ equals 0 with probability 0.15, and $\tilde{X}_i \sim \mathrm{Poisson}(X_i)$ with probability 0.85.
$\tilde{Y}_i$ equals 0 with probability 0.15, and $\tilde{Y}_i \sim \mathrm{Exp}(Y_i)$ with probability 0.85. Upon observing the $\tilde{X}_i$'s
and $\tilde{Y}$, we evaluate $\mathrm{MI}_i = I(\tilde{X}_i; \tilde{Y})$ using different estimators, and select the features with the top-$r$
highest mutual information. Since the underlying number of features is unknown, we iterate over all
$r \in \{0, \dots, p\}$ and obtain a receiver operating characteristic (ROC) curve, shown in the left panel of Figure 3.
Compared to the partitioning, noisy KSG, and KSG estimators, we conclude that our proposed estimator
outperforms the others.
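Given any MI estimator (e.g., the mixed_mi sketch above), the selection step reduces to ranking features by their estimated MI; the helper below is our own illustration:

# Sketch (ours) of the top-r feature ranking used to trace the ROC curve.
import numpy as np

def rank_features(x_obs, y_obs, mi_estimator, k=5):
    """x_obs: (N, p) observed features; y_obs: (N,) or (N, dy) observed response."""
    y_obs = y_obs.reshape(len(y_obs), -1)
    scores = [mi_estimator(x_obs[:, j:j + 1], y_obs, k=k) for j in range(x_obs.shape[1])]
    return np.argsort(scores)[::-1]       # feature indices, highest estimated MI first

Sweeping r over this ranking and recording true and false positives yields the ROC curve in Figure 3.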
[Figure 3 panels: left, ROC curve (true positive rate vs. false positive rate); right, AUROC vs. level of dropout; legend entries: Proposed, Partitioning, Noisy KSG, KSG.]
Figure 3: Left: ROC curve for the feature selection task. Right: AUROC versus levels of dropout for
gene regulatory network inference.
Gene regulatory network inference. Gene expressions form a rich source of data from which to
infer gene regulatory networks; it is now possible to sequence gene expression data from single cells
using a technology called single-cell RNA-sequencing [52]. However, this technology has a problem
called dropout, which implies that sometimes, even when the gene is present, it is not sequenced
[25, 12]. While we tested our algorithm on real single-cell RNA-seq datasets, it is hard to establish
the ground truth on these datasets. Instead we resorted to a challenge dataset for reconstructing
regulatory networks, the DREAM5 challenge [30]. The simulated (in silico) version of this
dataset contains gene expression for 20 genes, with 660 data points containing various perturbations.
The goal is to reconstruct the true network between the various genes. We used mutual information
as the test statistic in order to obtain the AUROC for the various methods. While the dataset did not have
any dropouts, in order to simulate the effect of dropouts in real data, we simulated various levels of
dropout and compared the AUROC (area under the ROC curve) of the different algorithms in the right panel of Figure 3,
where we find the proposed algorithm to outperform the competing ones.
Acknowledgement
This work was partially supported by NSF grants CNS-1527754, CCF-1553452, CCF-1705007,
CCF-1651236, CCF-1617745, CNS-1718270, and a Google Faculty Research Award.
References

[1] Jayadev Acharya, Hirakendu Das, Alon Orlitsky, and Ananda Suresh. Maximum likelihood approach for symmetric distribution property estimation.
[2] R. Battiti. Using mutual information for selecting features in supervised neural net learning. Neural Networks, IEEE Transactions on, 5(4):537-550, 1994.
[3] Jan Beirlant, Edward J. Dudewicz, László Györfi, and Edward C. Van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6(1):17-39, 1997.
[4] Thomas B. Berrett, Richard J. Samworth, and Ming Yuan. Efficient multivariate entropy estimation via k-nearest neighbour distances. arXiv preprint arXiv:1606.00304, 2016.
[5] Gérard Biau and Luc Devroye. Lectures on the Nearest Neighbor Method. Springer, 2015.
[6] Christopher M. Bishop. Pattern recognition. Machine Learning, 128:1-58, 2006.
[7] Yuheng Bu, Shaofeng Zou, Yingbin Liang, and Venugopal V. Veeravalli. Estimation of KL divergence between large-alphabet distributions. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 1118-1122. IEEE, 2016.
[8] C. Chan, A. Al-Bashabsheh, J. B. Ebrahimi, T. Kaced, and T. Liu. Multivariate mutual information inspired by secret-key agreement. Proceedings of the IEEE, 103(10):1883-1913, 2015.
[9] C. Chow and Cong Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467, 1968.
[10] T. M. Cover and J. A. Thomas. Information theory and statistics. Elements of Information Theory, pages 279-335, 1991.
[11] Georges A. Darbellay and Igor Vajda. Estimation of the information by an adaptive partitioning of the observation space. IEEE Transactions on Information Theory, 45(4):1315-1321, 1999.
[12] Greg Finak, Andrew McDavid, Masanao Yajima, Jingyuan Deng, Vivian Gersuk, Alex K. Shalek, Chloe K. Slichter, Hannah W. Miller, M. Juliana McElrath, Martin Prlic, et al. MAST: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. Genome Biology, 16(1):278, 2015.
[13] F. Fleuret. Fast binary feature selection with conditional mutual information. The Journal of Machine Learning Research, 5:1531-1555, 2004.
[14] S. Gao, G. Ver Steeg, and A. Galstyan. Estimating mutual information by local Gaussian approximation. arXiv preprint arXiv:1508.00536, 2015.
[15] Shuyang Gao, Greg Ver Steeg, and Aram Galstyan. Efficient estimation of mutual information for strongly dependent variables. In Artificial Intelligence and Statistics, pages 277-286, 2015.
[16] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Breaking the bandwidth barrier: Geometrical adaptive entropy estimation. In Advances in Neural Information Processing Systems, pages 2460-2468, 2016.
[17] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. In Information Theory (ISIT), 2017 IEEE International Symposium on, pages 1267-1271. IEEE, 2017.
[18] Izrail Moiseevich Gelfand and A. M. Yaglom. Calculation of the amount of information about a random function contained in another such function. American Mathematical Society, Providence, 1959.
[19] Yanjun Han, Jiantao Jiao, and Tsachy Weissman. Adaptive estimation of Shannon entropy. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 1372-1376. IEEE, 2015.
[20] Yanjun Han, Jiantao Jiao, and Tsachy Weissman. Minimax estimation of discrete distributions under $\ell_1$ loss. IEEE Transactions on Information Theory, 61(11):6343-6354, 2015.
[21] Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionals of discrete distributions. IEEE Transactions on Information Theory, 61(5):2835-2885, 2015.
[22] Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Maximum likelihood estimation of functionals of discrete distributions. IEEE Transactions on Information Theory, 63(10):6774-6798, 2017.
[23] Jiantao Jiao, Kartik Venkat, and Tsachy Weissman. Non-asymptotic theory for the plug-in rule in functional estimation. Available on arXiv, 2014.
[24] Peter V. Kharchenko, Lev Silberstein, and David T. Scadden. Bayesian approach to single-cell differential expression analysis. Nature Methods, 11(7):740-742, 2014.
[25] Peter V. Kharchenko, Lev Silberstein, and David T. Scadden. Bayesian approach to single-cell differential expression analysis. Nature Methods, 11(7):740-742, 2014.
[26] L. F. Kozachenko and Nikolai N. Leonenko. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9-16, 1987.
[27] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, 2004.
[28] Smita Krishnaswamy, Matthew H. Spitzer, Michael Mingueneau, Sean C. Bendall, Oren Litvin, Erica Stone, Dana Pe'er, and Garry P. Nolan. Conditional density-based analysis of T cell signaling in single-cell data. Science, 346(6213):1250689, 2014.
[29] Pan Li and Olgica Milenkovic. Inhomogeneous hypergraph clustering with applications. arXiv preprint arXiv:1709.01249, 2017.
[30] Daniel Marbach, James C. Costello, Robert Küffner, Nicole M. Vega, Robert J. Prill, Diogo M. Camacho, Kyle R. Allison, Manolis Kellis, James J. Collins, Gustavo Stolovitzky, et al. Wisdom of crowds for robust gene network inference. Nature Methods, 9(8):796-804, 2012.
[31] Kevin R. Moon, Kumar Sricharan, and Alfred O. Hero III. Ensemble estimation of mutual information. arXiv preprint arXiv:1701.08083, 2017.
[32] A. C. Müller, S. Nowozin, and C. H. Lampert. Information theoretic clustering using minimum spanning trees. Springer, 2012.
[33] Liam Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191-1253, 2003.
[34] Liam Paninski and Masanao Yajima. Undersmoothed kernel entropy estimators. IEEE Transactions on Information Theory, 54(9):4384-4388, 2008.
[35] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(8):1226-1238, 2005.
[36] A. Perez. Information theory with abstract alphabets. Theory of Probability and its Applications, 4(1), 1959.
[37] Emma Pierson and Christopher Yau. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis. Genome Biology, 16(1):241, 2015.
[38] Mark S. Pinsker. Information and information stability of random variables and processes. 1960.
[39] Yury Polyanskiy and Yihong Wu. Strong data-processing inequalities for channels and Bayesian networks. In Convexity and Concentration, pages 211-249. Springer, 2017.
[40] David N. Reshef, Yakir A. Reshef, Hilary K. Finucane, Sharon R. Grossman, Gilean McVean, Peter J. Turnbaugh, Eric S. Lander, Michael Mitzenmacher, and Pardis C. Sabeti. Detecting novel associations in large data sets. Science, 334(6062):1518-1524, 2011.
[41] Fred Rieke. Spikes: Exploring the Neural Code. MIT Press, 1999.
[42] B. C. Ross. Mutual information between discrete and continuous data sets. PLoS ONE, 9(2):e87357, 2014.
[43] Shashank Singh and Barnabás Póczos. Exponential concentration of a density functional estimator. In Advances in Neural Information Processing Systems, pages 3032-3040, 2014.
[44] Shashank Singh and Barnabás Póczos. Finite-sample analysis of fixed-k nearest neighbor density functional estimators. In Advances in Neural Information Processing Systems, pages 1217-1225, 2016.
[45] Shashank Singh and Barnabás Póczos. Nonparanormal information estimation. arXiv preprint arXiv:1702.07803, 2017.
[46] K. Sricharan, D. Wei, and A. O. Hero. Ensemble estimators for multivariate entropy estimation. Information Theory, IEEE Transactions on, 59(7):4374-4388, 2013.
[47] Zoltán Szabó. Information theoretical estimators toolbox. Journal of Machine Learning Research, 15:283-287, 2014.
[48] Gregory Valiant and Paul Valiant. Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 685-694. ACM, 2011.
[49] G. Ver Steeg and A. Galstyan. Maximally informative hierarchical representations of high-dimensional data. stat, 1050:27, 2014.
[50] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation of continuous distributions based on data-dependent partitions. Information Theory, IEEE Transactions on, 51(9):3064-3074, 2005.
[51] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. Information Theory, IEEE Transactions on, 55(5):2392-2405, 2009.
[52] Angela R. Wu, Norma F. Neff, Tomer Kalisky, Piero Dalerba, Barbara Treutlein, Michael E. Rothenberg, Francis M. Mburu, Gary L. Mantalas, Sopheak Sim, Michael F. Clarke, et al. Quantitative assessment of single-cell RNA-sequencing methods. Nature Methods, 11(1):41-46, 2014.
[53] Yihong Wu and Pengkun Yang. Minimax rates of entropy estimation on large alphabets via best polynomial approximation. IEEE Transactions on Information Theory, 62(6):3702-3720, 2016.
Attention Is All You Need
Ashish Vaswani∗
Google Brain
avaswani@google.com
Llion Jones∗
Google Research
llion@google.com
Noam Shazeer∗
Google Brain
noam@google.com
Niki Parmar∗
Google Research
nikip@google.com
Aidan N. Gomez∗ †
University of Toronto
aidan@cs.toronto.edu
Jakob Uszkoreit∗
Google Research
usz@google.com
Łukasz Kaiser∗
Google Brain
lukaszkaiser@google.com
Illia Polosukhin∗ ‡
illia.polosukhin@gmail.com
Abstract
The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks that include an encoder and a decoder. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer,
based solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to
be superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including
ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,
our model establishes a new single-model state-of-the-art BLEU score of 41.0 after
training for 3.5 days on eight GPUs, a small fraction of the training costs of the
best models from the literature.
1
Introduction
Recurrent neural networks, long short-term memory [12] and gated recurrent [7] neural networks
in particular, have been firmly established as state of the art approaches in sequence modeling and
transduction problems such as language modeling and machine translation [29, 2, 5]. Numerous
efforts have since continued to push the boundaries of recurrent language models and encoder-decoder
architectures [31, 21, 13].
∗ Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started
the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and
has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head
attention and the parameter-free position representation and became the other person involved in nearly every
detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and
tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and
efficient inference and visualizations. Łukasz and Aidan spent countless long days designing various parts of and
implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating
our research.
† Work performed while at Google Brain.
‡ Work performed while at Google Research.
Recurrent models typically factor computation along the symbol positions of the input and output
sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden
states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently
sequential nature precludes parallelization within training examples, which becomes critical at longer
sequence lengths, as memory constraints limit batching across examples. Recent work has achieved
significant improvements in computational efficiency through factorization tricks [18] and conditional
computation [26], while also improving model performance in case of the latter. The fundamental
constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in
the input or output sequences [2, 16]. In all but a few cases [22], however, such attention mechanisms
are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead
relying entirely on an attention mechanism to draw global dependencies between input and output.
The Transformer allows for significantly more parallelization and can reach a new state of the art in
translation quality after being trained for as little as twelve hours on eight P100 GPUs.
2
Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU
[20], ByteNet [15] and ConvS2S [8], all of which use convolutional neural networks as basic building
block, computing hidden representations in parallel for all input and output positions. In these models,
the number of operations required to relate signals from two arbitrary input or output positions grows
in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes
it more difficult to learn dependencies between distant positions [11]. In the Transformer this is
reduced to a constant number of operations, albeit at the cost of reduced effective resolution due
to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as
described in section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions
of a single sequence in order to compute a representation of the sequence. Self-attention has been
used successfully in a variety of tasks including reading comprehension, abstractive summarization,
textual entailment and learning task-independent sentence representations [4, 22, 23, 19].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and
language modeling tasks [28].
To the best of our knowledge, however, the Transformer is the first transduction model relying
entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate
self-attention and discuss its advantages over models such as [14, 15] and [8].
3
Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 29].
Here, the encoder maps an input sequence of symbol representations (x_1, ..., x_n) to a sequence of continuous representations z = (z_1, ..., z_n). Given z, the decoder then generates an output sequence (y_1, ..., y_m) of symbols one element at a time. At each step the model is auto-regressive
[9], consuming the previously generated symbols as additional input when generating the next.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully
connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1,
respectively.
3.1
Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two
sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [10] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.
Figure 1: The Transformer - model architecture.
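To make the sub-layer pattern concrete, here is a minimal NumPy sketch of LayerNorm(x + Sublayer(x)). It is an illustration, not the paper's implementation: the learned per-feature gain and bias of layer normalization [1] are omitted, and the epsilon value is an assumption.

import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's feature vector to zero mean and unit variance.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_sublayer(x, sublayer):
    # Output of each sub-layer: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))

x = np.random.randn(10, 512)                   # (positions, d_model)
y = residual_sublayer(x, lambda h: 0.5 * h)    # toy stand-in for a sub-layer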
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two
sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head
attention over the output of the encoder stack. Similar to the encoder, we employ residual connections
around each of the sub-layers, followed by layer normalization. We also modify the self-attention
sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This
masking, combined with the fact that the output embeddings are offset by one position, ensures that the
predictions for position i can depend only on the known outputs at positions less than i.
3.2
Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output,
where the query, keys, values, and output are all vectors. The output is computed as a weighted sum
of the values, where the weight assigned to each value is computed by a compatibility function of the
query with the corresponding key.
3.2.1
Scaled Dot-Product Attention
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of
queries and keys of dimension dk , and values of dimension dv . We compute the dot products of the
query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.
In practice, we compute the attention function on a set of queries simultaneously, packed together
into a matrix Q. The keys and values are also packed together into matrices K and V . We compute
the matrix of outputs as:
Attention(Q, K, V) = softmax(QK^T / √d_k) V    (1)
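A direct NumPy transcription of Equation (1) may help; this is a sketch for illustration, and the finite fill value standing in for -∞ under masking is an assumption:

import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v).
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # scaled compatibilities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)    # approximate -inf for illegal connections
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

Q = np.random.randn(5, 64)
K = np.random.randn(7, 64)
V = np.random.randn(7, 32)
out = scaled_dot_product_attention(Q, K, V)      # shape (5, 32)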
The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor
of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with
a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is
much faster and more space-efficient in practice, since it can be implemented using highly optimized
matrix multiplication code.
While for small values of dk the two mechanisms perform similarly, additive attention outperforms
dot product attention without scaling for larger values of dk [3]. We suspect that for large values of
dk , the dot products grow large in magnitude, pushing the softmax function into regions where it has
extremely small gradients.^4 To counteract this effect, we scale the dot products by 1/√d_k.
3.2.2
Multi-Head Attention
Instead of performing a single attention function with d_model-dimensional keys, values and queries,
we found it beneficial to linearly project the queries, keys and values h times with different, learned
linear projections to d_k, d_k and d_v dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional
output values. These are concatenated and once again projected, resulting in the final values, as
depicted in Figure 2.
Multi-head attention allows the model to jointly attend to information from different representation
subspaces at different positions. With a single attention head, averaging inhibits this.
^4 To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
    where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
where the projections are parameter matrices W_i^Q ∈ R^{d_model×d_k}, W_i^K ∈ R^{d_model×d_k}, W_i^V ∈ R^{d_model×d_v} and W^O ∈ R^{h·d_v×d_model}.
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use
d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost
is similar to that of single-head attention with full dimensionality.
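The following NumPy sketch spells out the projections and concatenation described above. The random initialization and the self-attention usage at the end are illustrative assumptions; in the model the W matrices are learned.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o):
    # W_q[i], W_k[i]: (d_model, d_k); W_v[i]: (d_model, d_v); W_o: (h*d_v, d_model).
    heads = []
    for Wq, Wk, Wv in zip(W_q, W_k, W_v):
        q, k, v = Q @ Wq, K @ Wk, V @ Wv
        scores = q @ k.T / np.sqrt(q.shape[-1])
        heads.append(softmax(scores) @ v)         # head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)
    return np.concatenate(heads, axis=-1) @ W_o   # Concat(head_1, ..., head_h) W^O

d_model, d_k, h, n = 512, 64, 8, 10
rng = np.random.default_rng(0)
W_q = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_k = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_v = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_o = rng.normal(size=(h * d_k, d_model))
x = rng.normal(size=(n, d_model))
y = multi_head_attention(x, x, x, W_q, W_k, W_v, W_o)   # self-attention, (n, d_model)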
3.2.3
Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
? In "encoder-decoder attention" layers, the queries come from the previous decoder layer,
and the memory keys and values come from the output of the encoder. This allows every
position in the decoder to attend over all positions in the input sequence. This mimics the
typical encoder-decoder attention mechanisms in sequence-to-sequence models such as
[31, 2, 8].
? The encoder contains self-attention layers. In a self-attention layer all of the keys, values
and queries come from the same place, in this case, the output of the previous layer in the
encoder. Each position in the encoder can attend to all positions in the previous layer of the
encoder.
? Similarly, self-attention layers in the decoder allow each position in the decoder to attend to
all positions in the decoder up to and including that position. We need to prevent leftward
information flow in the decoder to preserve the auto-regressive property. We implement this
inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2 and the mask sketch below.
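As a small illustration of this decoder-side masking (the finite fill value is an assumption standing in for −∞):

import numpy as np

n = 6
# mask[i, j] is True when position i may attend to position j, i.e. j <= i.
mask = np.tril(np.ones((n, n), dtype=bool))
scores = np.random.randn(n, n)
masked = np.where(mask, scores, -1e9)   # softmax then assigns ~0 weight to future positions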
3.3
Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully
connected feed-forward network, which is applied to each position separately and identically. This
consists of two linear transformations with a ReLU activation in between.
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2    (2)
While the linear transformations are the same across different positions, they use different parameters
from layer to layer. Another way of describing this is as two convolutions with kernel size 1.
The dimensionality of input and output is d_model = 512, and the inner-layer has dimensionality d_ff = 2048.
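Equation (2) translates directly into NumPy; the random weights below are placeholders for learned parameters.

import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # Applied to each position separately and identically.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2   # FFN(x) = max(0, xW1 + b1)W2 + b2

d_model, d_ff, n = 512, 2048, 10
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
y = position_wise_ffn(rng.normal(size=(n, d_model)), W1, b1, W2, b2)   # (n, d_model)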
3.4
Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input
tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [24]. In the embedding layers, we multiply those weights by √d_model.
3.5
Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the
order of the sequence, we must inject some information about the relative or absolute position of the
tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the
Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

Layer Type                  | Complexity per Layer | Sequential Operations | Maximum Path Length
Self-Attention              | O(n^2 · d)           | O(1)                  | O(1)
Recurrent                   | O(n · d^2)           | O(n)                  | O(n)
Convolutional               | O(k · n · d^2)       | O(1)                  | O(log_k(n))
Self-Attention (restricted) | O(r · n · d)         | O(1)                  | O(n/r)
bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model
as the embeddings, so that the two can be summed. There are many choices of positional encodings,
learned and fixed [8].
In this work, we use sine and cosine functions of different frequencies:
PE(pos, 2i) = sin(pos / 10000^{2i/d_model})
PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model})
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}.
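A NumPy sketch of the sinusoidal encoding defined above:

import numpy as np

def positional_encoding(max_len, d_model):
    pe = np.zeros((max_len, d_model))
    pos = np.arange(max_len)[:, None]               # positions
    i = np.arange(0, d_model, 2)[None, :]           # even dimension indices 2i
    angle = pos / np.power(10000.0, i / d_model)
    pe[:, 0::2] = np.sin(angle)                     # PE(pos, 2i)
    pe[:, 1::2] = np.cos(angle)                     # PE(pos, 2i+1)
    return pe

pe = positional_encoding(50, 512)   # added to the (scaled) input embeddings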
We also experimented with using learned positional embeddings [8] instead, and found that the two
versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version
because it may allow the model to extrapolate to sequence lengths longer than the ones encountered
during training.
4
Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations
(x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ R^d, such as a hidden
layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we
consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can
be parallelized, as measured by the minimum number of sequential operations required.
The third is the path length between long-range dependencies in the network. Learning long-range
dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the
ability to learn such dependencies is the length of the paths forward and backward signals have to
traverse in the network. The shorter these paths between any combination of positions in the input
and output sequences, the easier it is to learn long-range dependencies [11]. Hence we also compare
the maximum path length between any two input and output positions in networks composed of the
different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially
executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of
computational complexity, self-attention layers are faster than recurrent layers when the sequence
length n is smaller than the representation dimensionality d, which is most often the case with
sentence representations used by state-of-the-art models in machine translations, such as word-piece
[31] and byte-pair [25] representations. To improve computational performance for tasks involving
very long sequences, self-attention could be restricted to considering only a neighborhood of size r in
the input sequence centered around the respective output position. This would increase the maximum
path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output
positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels,
or O(logk (n)) in the case of dilated convolutions [15], increasing the length of the longest paths
between any two positions in the network. Convolutional layers are generally more expensive than
recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity
considerably, to O(k · n · d + n · d^2). Even with k = n, however, the complexity of a separable
convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer,
the approach we take in our model.
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions
from our models and present and discuss examples in the appendix. Not only do individual attention
heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic
and semantic structure of the sentences.
5
Training
This section describes the training regime for our models.
5.1
Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million
sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT
2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece
vocabulary [31]. Sentence pairs were batched together by approximate sequence length. Each training
batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000
target tokens.
5.2
Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using
the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We
trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps
(3.5 days).
5.3
Optimizer
We used the Adam optimizer [17] with β_1 = 0.9, β_2 = 0.98 and ε = 10^{−9}. We varied the learning rate over the course of training, according to the formula:
lrate = d_model^{−0.5} · min(step_num^{−0.5}, step_num · warmup_steps^{−1.5})    (3)
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps,
and decreasing it thereafter proportionally to the inverse square root of the step number. We used
warmup_steps = 4000.
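Equation (3) as a plain Python function, for concreteness:

def lrate(step, d_model=512, warmup_steps=4000):
    # Linear warmup for the first warmup_steps steps, then
    # decay proportional to the inverse square root of the step number.
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The schedule peaks at step == warmup_steps and decays afterwards:
print(lrate(100), lrate(4000), lrate(100000))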
5.4
Regularization
We employ three types of regularization during training:
Residual Dropout We apply dropout [27] to the output of each sub-layer, before it is added to the
sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the
positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of
P_drop = 0.1.
Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

Model                           | BLEU EN-DE | BLEU EN-FR | Training Cost (FLOPs) EN-DE | EN-FR
ByteNet [15]                    | 23.75      |            |                             |
Deep-Att + PosUnk [32]          |            | 39.2       |                             | 1.0 × 10^20
GNMT + RL [31]                  | 24.6       | 39.92      | 2.3 × 10^19                 | 1.4 × 10^20
ConvS2S [8]                     | 25.16      | 40.46      | 9.6 × 10^18                 | 1.5 × 10^20
MoE [26]                        | 26.03      | 40.56      | 2.0 × 10^19                 | 1.2 × 10^20
Deep-Att + PosUnk Ensemble [32] |            | 40.4       |                             | 8.0 × 10^20
GNMT + RL Ensemble [31]         | 26.30      | 41.16      | 1.8 × 10^20                 | 1.1 × 10^21
ConvS2S Ensemble [8]            | 26.36      | 41.29      | 7.7 × 10^19                 | 1.2 × 10^21
Transformer (base model)        | 27.3       | 38.1       | 3.3 × 10^18                 | 3.3 × 10^18
Transformer (big)               | 28.4       | 41.0       | 2.3 × 10^19                 | 2.3 × 10^19
Label Smoothing   During training, we employed label smoothing of value ε_ls = 0.1 [30]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
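One common way to realize this, sketched here under the assumption of uniform smoothing over the vocabulary as in [30]:

import numpy as np

def smooth_labels(one_hot, eps_ls=0.1):
    # Move eps_ls of the probability mass from the true class to a uniform distribution.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps_ls) + eps_ls / k

targets = np.eye(4)[[2, 0]]      # two one-hot targets over a 4-symbol vocabulary
print(smooth_labels(targets))    # true class: 0.925, every other class: 0.025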
6
Results
6.1
Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big)
in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0
BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is
listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model
surpasses all previously published models and ensembles, at a fraction of the training cost of any of
the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0,
outperforming all of the previously published single models, at less than 1/4 the training cost of the
previous state-of-the-art model. The Transformer (big) model trained for English-to-French used
dropout rate P_drop = 0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which
were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We
used beam search with a beam size of 4 and length penalty α = 0.6 [31]. These hyperparameters
were chosen after experimentation on the development set. We set the maximum output length during
inference to input length + 50, but terminate early when possible [31].
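Checkpoint averaging amounts to an element-wise mean of the saved weights; a minimal sketch (the dict-of-arrays checkpoint format is an assumption for illustration):

import numpy as np

def average_checkpoints(checkpoints):
    # checkpoints: list of dicts mapping parameter name -> np.ndarray.
    return {name: np.mean([c[name] for c in checkpoints], axis=0)
            for name in checkpoints[0]}

# e.g. the last 5 base-model checkpoints, written at 10-minute intervals:
ckpts = [{"w": np.full((2, 2), float(i))} for i in range(5)]
print(average_checkpoints(ckpts)["w"])   # element-wise mean of the weights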
Table 2 summarizes our results and compares our translation quality and training costs to other model
architectures from the literature. We estimate the number of floating point operations used to train a
model by multiplying the training time, the number of GPUs used, and an estimate of the sustained
single-precision floating-point capacity of each GPU.^5
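As a back-of-the-envelope check, the big model's entry in Table 2 follows from this estimate, using the 9.5 TFLOPS figure for the P100 from footnote 5:

# training time x number of GPUs x sustained single-precision FLOPS
seconds = 3.5 * 24 * 3600        # 3.5 days
flops = seconds * 8 * 9.5e12     # 8 P100 GPUs
print(f"{flops:.1e}")            # ~2.3e+19, matching Table 2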
6.2
Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model
in different ways, measuring the change in performance on English-to-German translation on the
development set, newstest2013. We used beam search as described in the previous section, but no
checkpoint averaging. We present these results in Table 3.
In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions,
keeping the amount of computation constant, as described in Section 3.2.2. While single-head
attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.
^5 We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.
Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base
model. All metrics are on the English-to-German translation development set, newstest2013. Listed
perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to
per-word perplexities.
     | N | d_model | d_ff | h  | d_k | d_v | P_drop | ε_ls | train steps | PPL (dev) | BLEU (dev) | params ×10^6
base | 6 | 512     | 2048 | 8  | 64  | 64  | 0.1    | 0.1  | 100K        | 4.92      | 25.8       | 65
(A)  |   |         |      | 1  | 512 | 512 |        |      |             | 5.29      | 24.9       |
     |   |         |      | 4  | 128 | 128 |        |      |             | 5.00      | 25.5       |
     |   |         |      | 16 | 32  | 32  |        |      |             | 4.91      | 25.8       |
     |   |         |      | 32 | 16  | 16  |        |      |             | 5.01      | 25.4       |
(B)  |   |         |      |    | 16  |     |        |      |             | 5.16      | 25.1       | 58
     |   |         |      |    | 32  |     |        |      |             | 5.01      | 25.4       | 60
(C)  | 2 |         |      |    |     |     |        |      |             | 6.11      | 23.7       | 36
     | 4 |         |      |    |     |     |        |      |             | 5.19      | 25.3       | 50
     | 8 |         |      |    |     |     |        |      |             | 4.88      | 25.5       | 80
     |   | 256     |      |    | 32  | 32  |        |      |             | 5.75      | 24.5       | 28
     |   | 1024    |      |    | 128 | 128 |        |      |             | 4.66      | 26.0       | 168
     |   |         | 1024 |    |     |     |        |      |             | 5.12      | 25.4       | 53
     |   |         | 4096 |    |     |     |        |      |             | 4.75      | 26.2       | 90
(D)  |   |         |      |    |     |     | 0.0    |      |             | 5.77      | 24.6       |
     |   |         |      |    |     |     | 0.2    |      |             | 4.95      | 25.5       |
     |   |         |      |    |     |     |        | 0.0  |             | 4.67      | 25.3       |
     |   |         |      |    |     |     |        | 0.2  |             | 5.47      | 25.7       |
(E)  | positional embedding instead of sinusoids                         | 4.92      | 25.7       |
big  | 6 | 1024    | 4096 | 16 |     |     | 0.3    |      | 300K        | 4.33      | 26.4       | 213
In Table 3 rows (B), we observe that reducing the attention key size dk hurts model quality. This
suggests that determining compatibility is not easy and that a more sophisticated compatibility
function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected,
bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our
sinusoidal positional encoding with learned positional embeddings [8], and observe nearly identical
results to the base model.
7
Conclusion
In this work, we presented the Transformer, the first sequence transduction model based entirely on
attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with
multi-headed self-attention.
For translation tasks, the Transformer can be trained significantly faster than architectures based
on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014
English-to-French translation tasks, we achieve a new state of the art. In the former task our best
model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We
plan to extend the Transformer to problems involving input and output modalities other than text and
to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs
such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful
comments, corrections and inspiration.
References
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. CoRR, abs/1409.0473, 2014.
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural
machine translation architectures. CoRR, abs/1703.03906, 2017.
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine
reading. arXiv preprint arXiv:1601.06733, 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,
and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. CoRR, abs/1406.1078, 2014.
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv
preprint arXiv:1610.02357, 2016.
[7] Junyoung Chung, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation
of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
[8] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.
[9] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 770–778, 2016.
[11] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in
recurrent nets: the difficulty of learning long-term dependencies, 2001.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[13] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring
the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[14] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference
on Learning Representations (ICLR), 2016.
[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2,
2017.
[16] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks.
In International Conference on Learning Representations, 2017.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[18] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint
arXiv:1703.10722, 2017.
[19] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen
Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint
arXiv:1703.03130, 2017.
[20] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural
Information Processing Systems, (NIPS), 2016.
[21] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[22] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention
model. In Empirical Methods in Natural Language Processing, 2016.
[23] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive
summarization. arXiv preprint arXiv:1705.04304, 2017.
[24] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv
preprint arXiv:1608.05859, 2016.
[25] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words
with subword units. arXiv preprint arXiv:1508.07909, 2015.
[26] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts
layer. arXiv preprint arXiv:1701.06538, 2017.
[27] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine
Learning Research, 15(1):1929–1958, 2014.
[28] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory
networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,
Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates,
Inc., 2015.
[29] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.
Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[31] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine
translation system: Bridging the gap between human and machine translation. arXiv preprint
arXiv:1609.08144, 2016.
[32] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with
fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
Recurrent Ladder Networks
Isabeau Prémont-Schwarz, Alexander Ilin, Tele Hotloo Hao,
Antti Rasmus, Rinu Boney, Harri Valpola
The Curious AI Company
{isabeau,alexilin,hotloo,antti,rinu,harri}@cai.fi
Abstract
We propose a recurrent extension of the Ladder networks [22] whose structure
is motivated by the inference required in hierarchical latent variable models. We
demonstrate that the recurrent Ladder is able to handle a wide variety of complex
learning tasks that benefit from iterative inference and temporal modeling. The
architecture shows close-to-optimal results on temporal modeling of video data,
competitive results on music modeling, and improved perceptual grouping based
on higher order abstractions, such as stochastic textures and motion cues. We
present results for fully supervised, semi-supervised, and unsupervised tasks. The
results suggest that the proposed architecture and principles are powerful tools
for learning a hierarchy of abstractions, learning iterative inference and handling
temporal information.
1
Introduction
Many cognitive tasks require learning useful representations on multiple abstraction levels. Hierarchical latent variable models are an appealing approach for learning a hierarchy of abstractions.
The classical way of learning such models is by postulating an explicit parametric model for the
distributions of random variables. The inference procedure, which evaluates the posterior distribution
of the unknown variables, is then derived from the model ? an approach adopted in probabilistic
graphical models (see, e.g., [5]).
The success of deep learning can, however, be explained by the fact that popular deep models focus
on learning the inference procedure directly. For example, a deep classifier like AlexNet [19] is
trained to produce the posterior probability of the label for a given data sample. The representations
that the network computes at different layers are related to the inference in an implicit latent variable
model but the designer of the model does not need to know about them.
However, it is actually tremendously valuable to understand what kind of inference is required by
different types of probabilistic models in order to design an efficient network architecture. Ladder
networks [22, 28] are motivated by the inference required in a hierarchical latent variable model. By
design, the Ladder networks aim to emulate a message passing algorithm, which includes a bottom-up
pass (from input to label in classification tasks) and a top-down pass of information (from label to
input). The results of the bottom-up and top-down computations are combined in a carefully selected
manner.
The original Ladder network implements only one iteration of the inference algorithm but complex
models are likely to require iterative inference. In this paper, we propose a recurrent extension
of the Ladder network for iterative inference and show that the same architecture can be used for
temporal modeling. We also show how to use the proposed architecture as an inference engine in
more complex models which can handle multiple independent objects in the sensory input. Thus, the
proposed architecture is suitable for the type of inference required by rich models: those that can
learn a hierarchy of abstractions, can handle temporal information and can model multiple objects in
the input.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: (a): The structure of the Recurrent Ladder networks. The encoder is shown in red, the
decoder is shown in blue, the decoder-to-encoder connections are shown in green. The dashed line
separates two iterations t−1 and t. (b)-(c): The type of hierarchical latent variable models for which
RLadder is designed to emulate message passing. (b): A graph of a static model. (c): A fragment of
a graph of a temporal model. White circles are unobserved latent variables, gray circles represent
observed variables. The arrows represent the directions of message passing during inference.
2
Recurrent Ladder
Recurrent Ladder networks
In this paper, we present a recurrent extension of the Ladder networks which is conducive to iterative
inference and temporal modeling. Recurrent Ladder (RLadder) is a recurrent neural network whose
units resemble the structure of the original Ladder networks [22, 28] (see Fig. 1a). At every iteration
t, the information first flows from the bottom (the input level) to the top through a stack of encoder
cells. Then, the information flows back from the top to the bottom through a stack of decoder cells.
Both the encoder and decoder cells also use the information that is propagated horizontally. Thus, at
every iteration t, an encoder cell in the l-th layer receives three inputs: 1) the output of the encoder
cell from the level below e_{l−1}(t), 2) the output d_l(t−1) of the decoder cell from the same level from the previous iteration, 3) the encoder state s_l(t−1) from the same level from the previous iteration. It updates its state value s_l(t) and passes the same output e_l(t) both vertically and horizontally:
s_l(t) = f_{s,l}(e_{l−1}(t), d_l(t−1), s_l(t−1))    (1)
e_l(t) = f_{e,l}(e_{l−1}(t), d_l(t−1), s_l(t−1))    (2)
The encoder cell in the bottom layer typically sends observed data (possibly corrupted by noise) as
its output e_1(t). Each decoder cell is stateless; it receives two inputs (the output of the decoder cell from one level above and the output of the encoder cell from the same level) and produces one output
d_l(t) = g_l(e_l(t), d_{l+1}(t)),    (3)
which is passed both vertically and horizontally. The exact computations performed in the cells can
be tuned depending on the task at hand. In practice, we have used LSTM [15] or GRU [8] cells in the
encoder and cells inspired by the original Ladder networks in the decoder (see Appendix A).
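To make Equations (1)-(3) concrete, here is a toy NumPy sketch of one pass of the recurrence. The tanh cells are placeholders: the paper uses LSTM or GRU encoder cells and Ladder-style decoder cells (Appendix A), so everything below the loop structure is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
L, dim, T = 3, 16, 5                         # layers, feature size, iterations

s = [np.zeros(dim) for _ in range(L)]        # encoder states s_l
d = [np.zeros(dim) for _ in range(L)]        # decoder outputs d_l
for t in range(T):
    x = rng.normal(size=dim)                 # observed (possibly corrupted) input
    e, below = [], x
    for l in range(L):                       # bottom-up pass
        s[l] = np.tanh(below + d[l] + s[l])  # Eq. (1): toy stand-in for f_{s,l}
        e.append(np.tanh(s[l]))              # Eq. (2): toy stand-in for f_{e,l}
        below = e[l]
    above = np.zeros(dim)
    for l in reversed(range(L)):             # top-down pass
        d[l] = np.tanh(e[l] + above)         # Eq. (3): toy stand-in for g_l
        above = d[l]
# d[0] would parameterize a low-level task such as next-step prediction.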
Similarly to Ladder networks, the RLadder is usually trained with multiple tasks at different abstraction levels. Tasks at the highest abstraction level (like classification) are typically formulated at the
highest layer. Conversely, the output of the decoder cell in the bottom level is used to formulate
a low-level task which corresponds to abstractions close to the input. The low-level task can be
denoising (reconstruction of a clean input from the corrupted one), other possibilities include object
detection [21], segmentation [3, 23], or in a temporal setting, prediction. A weighted sum of the costs
at different levels is optimized during training.
Connection to hierarchical latent variables and message passing
The RLadder architecture is designed to mimic the computational structure of an inference procedure
in probabilistic hierarchical latent variable models. In an explicit probabilistic graphical model,
inference can be done by an algorithm which propagates information (messages) between the nodes
of a graphical model so as to compute the posterior distribution of the latent variables (see, e.g., [5]).
For static graphical models implicitly assumed by the RLadder (see Fig. 1b), messages need to be
propagated from the input level up the hierarchy to the highest level and from the top to the bottom,
as shown in Fig. 1a. In Appendix B, we present a derived iterative inference procedure for a simple
static hierarchical model to give an example of a message-passing algorithm. We also show how that
inference procedure can be implemented in the RLadder computational graph.
In the case of temporal modeling, the type of a graphical model assumed by the RLadder is shown
in Fig. 1c. If the task is to do next step prediction of observations x, an online inference procedure
should update the knowledge about the latent variables y_t, z_t using observed data x_t and compute the predictive distributions for the input x_{t+1}. Assuming that the distributions of the latent variables at previous time instances (τ < t) are kept fixed, the inference can be done by propagating messages
from the observed variables xt and the latent variables y, z bottom-up, top-down and from the past to
the future, as shown in Fig. 1c. The architecture of the RLadder (Fig. 1a) is designed so as to emulate
such a message-passing procedure, that is the information can propagate in all the required directions:
bottom-up, top-down and from the past to the future. In Appendix C, we present an example of the
message-passing algorithm derived for a temporal hierarchical model to show how it is related to the
RLadders?s computation graph.
Even though the motivation of the RLadder architecture is to emulate a message-passing procedure,
the nodes of the RLadder do not directly correspond to nodes of any specific graphical model.1 The
RLadder directly learns an inference procedure and the corresponding model is never formulated
explicitly. Note also that using stateful encoder cells is not strictly motivated by the message-passing
argument but in practice these skip connections facilitate training of a deep network.
As we mentioned previously, the RLadder is usually trained with multiple tasks formulated at
different representation levels. The purpose of tasks is to encourage the RLadder to learn the right
inference procedure, and hence formulating the right kind of tasks is crucial for the success of training.
For example, the task of denoising encourages the network to learn important aspects of the data
distribution [1, 2]. For temporal modeling, the task of next step prediction plays a similar role. The
RLadder is most useful in problems that require accurate inference on multiple abstraction levels,
which is supported by the experiments presented in this paper.
Related work
The RLadder architecture is similar to that of other recently proposed models for temporal modeling
[10, 11, 9, 27, 20]. In [9], the recurrent connections (from time t−1 to time t) are placed in the
lateral links between the encoder and the decoder. This can make it easier to extend an existing
feed-forward network architecture to the case of temporal data as the recurrent units do not participate
in the bottom-up computations. On the other hand, the recurrent units do not receive information from
the top, which makes it impossible for higher layers to influence the dynamics of lower layers. The
architectures in [10, 11, 27] are quite similar to ours but they could potentially derive further benefit
from the decoder-to-encoder connections between successive time instances (green links in Fig. 1b).
The aforementioned connections are well justified from the message-passing point of view: When
updating the posterior distribution of a latent variable, one should combine the latest information
from the top and from the bottom, and it is the decoder that contains the latest information from the
top. We show empirical evidence to the importance of those connections in Section 3.1.
3
Experiments with temporal data
In this section, we demonstrate that the RLadder can learn an accurate inference algorithm in tasks
that require temporal modeling. We consider datasets in which passing information both in time and
in abstraction hierarchy is important for achieving good performance.
3.1
Occluded Moving MNIST
We use a dataset where we know how to do optimal inference in order to be able to compare the
results of the RLadder to the optimal ones. To this end we designed the Occluded Moving MNIST
^1 To emphasize this, we used different shapes for the nodes of the RLadder network (Fig. 1a) and the nodes of graphical models that inspired the RLadder architecture (Figs. 1b-c).
[Figure 2 row labels: observed frames; frames with occlusion visualized; optimal temporal reconstruction.]
Figure 2: The Occluded Moving MNIST dataset. Bottom row: Optimal temporal recombination for a sequence of occluded frames from the dataset.
dataset. It consists of MNIST digits downscaled to 14 × 14 pixels flying on a 32 × 32 white
background with white vertical and horizontal occlusion bars (4 pixels in width, and spaced by 8
visible pixels apart) which, when the digit flies behind them, occlude the pixels of the digit (see
Fig. 2). We also restrict the velocities to be randomly chosen in the set of eight discrete velocities
{(1, ±2), (−1, ±2), (2, ±1), (−2, ±1)} pixels/frame, so that apart from the bouncing, the movement
is deterministic. The digits are split into training, validation, and test sets according to the original
MNIST split. The primary task is then to classify the digit which is only partially observable at any
given moment, at the end of five time steps.
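A sketch of the data generation, to make the setup concrete. The exact bar placement, pixel polarity, and the random 14 × 14 patch standing in for a downscaled MNIST digit are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
H = W = 32
occluder = np.zeros((H, W), dtype=bool)      # 4-px bars spaced 8 visible px apart
for start in range(8, H, 12):
    occluder[start:start + 4, :] = True      # horizontal bars
    occluder[:, start:start + 4] = True      # vertical bars

velocities = [(sy * a, sx * b) for (a, b) in [(1, 2), (2, 1)]
              for sy in (1, -1) for sx in (1, -1)]   # the eight allowed velocities

def make_sequence(digit, T=5):
    vy, vx = velocities[rng.integers(len(velocities))]
    y, x = rng.integers(0, H - 13), rng.integers(0, W - 13)
    frames = []
    for _ in range(T):
        frame = np.zeros((H, W))
        frame[y:y + 14, x:x + 14] = digit
        frame[occluder] = 0.0                # digit pixels vanish behind the bars
        frames.append(frame)
        if not 0 <= y + vy <= H - 14:        # bounce off the borders
            vy = -vy
        if not 0 <= x + vx <= W - 14:
            vx = -vx
        y, x = y + vy, x + vx
    return np.stack(frames)

seq = make_sequence(rng.random((14, 14)))    # (5, 32, 32) occluded frames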
In order to do optimal classification, one would need to assimilate information about the digit identity
(which is only partially visible at any given time instance) by keeping track of the observed pixels
(see the bottom row of Fig. 2) and then feeding the resultant reconstruction to a classifier.
In order to encourage optimal inference, we add a next step prediction task to the RLadder at the
bottom of the decoder: The RLadder is trained to predict the next occluded frame, that is the network
never sees the un-occluded digit. This thus mimics a realistic scenario where the ground truth is
not known. To assess the importance of the features of the RLadder, we also do an ablation study.
In addition, we compare it to three other networks. In the first comparison network, the optimal
reconstruction of the digit from the five frames (as shown in Fig. 2) is fed to a static feed-forward
network from which the encoder of the RLadder was derived. This is our gold standard, and obtaining
similar results to it implies doing close to optimal temporal inference. The second, a temporal baseline,
is a deep feed-forward network (the one on which the encoder is based) with a recurrent neural
network (RNN) at the top only so that, by design, the network can propagate temporal information
only at a high level, and not at a low level. The third, a hierarchical RNN, is a stack of convolutional
LSTM units with a few convolutional layers in between, which is the RLadder amputated of its
decoder. See Fig. 3 and Appendix D.1 for schematics and details of the architectures.
Fully supervised learning results. The results are presented in Table 1. The first thing to notice
is that the RLadder reaches (up to uncertainty levels) the classification accuracy obtained by the
network which was given the optimal reconstruction of the digit. Furthermore, if the RLadder does
not have a decoder or the decoder-to-encoder connections, or if it is trained without the auxiliary
prediction task, we see the classification error rise almost to the level of the temporal baseline. This
means that even if a network has RNNs at the lowest levels (like with only the feed-forward encoder),
or if it does not have a task which encourages it to develop a good world model (like the RLadder
without the next-frame prediction task), or if the information cannot travel from the decoder to the
encoder, the high level task cannot truly benefit from lower level temporal modeling.
Next, one notices from Table 1 that the top-level classification cost helps the low-level prediction
cost in the RLadder (which in turn helps the top-level cost in a mutually beneficial cycle). This
mutually supportive relationship between high-level and low-level inferences is nicely illustrated by
the example in Fig. 4. Up until time step t = 3 inclusively, the network believes the digit to be a five
[Figure 3 panels, left to right: Temporal baseline network, Hierarchical RNN, RLadder.]
Figure 3: Architectures used for modeling occluded Moving MNIST. Temporal baseline network is a
convolutional network with a fully connected RNN on top.
Table 1: Performance on Occluded Moving MNIST
Model                                        | Classification error (%) | Prediction error, ×10^−5
Optimal reconstruction and static classifier | 0.71 ± 0.03              |
Temporal baseline                            | 2.02 ± 0.16              |
Hierarchical RNN (encoder only)              | 1.60 ± 0.05              |
RLadder w/o prediction task                  | 1.51 ± 0.21              |
RLadder w/o decoder-to-encoder conn.         | 1.24 ± 0.05              | 156.7 ± 0.4
RLadder w/o classification task              |                          | 155.2 ± 2.5
RLadder                                      | 0.74 ± 0.09              | 150.1 ± 0.1
(Fig. 4a). As such, at t = 3, the network predicts that the top right part of the five which has been
occluded so far will stick out from behind the occlusions as the digit moves up and right at the next
time step (Fig. 4b). Using the decoder-to-encoder connections, the decoder can relay this expectation
to the encoder at t = 4. At t = 4 the encoder can compare this expectation with the actual input
where the top right part of the five is absent (Fig. 4c). Without the decoder-to-encoder connections
this comparison would have been impossible. Using the upward path of the encoder, the network can
relay this discrepancy to the higher classification layers. These higher layers with a large receptive
field can then conclude that since it is not a five, then it must be a three (Fig. 4d). Now thanks to the
decoder, the higher classification layers can relay this information to the lower prediction layers so
that they can change their prediction of what will be seen at t = 5 appropriately (Fig. 4e). Without a
decoder which would bring this high level information back down to the low level, this drastic update
of the prediction would be impossible. With this information the lower prediction layer can now
predict that the top-left part of the three (which it has never seen before) will appear at the next time
step from behind the occlusion, which is indeed what happens at t = 5 (Fig. 4f).
Semi-supervised learning results. In the following experiment, we test the RLadder in the semi-supervised scenario when the training data set contains 1,000 labeled sequences and 59,000 unlabeled
ones. To make use of the unlabeled data, we added an extra auxiliary task at the top level which
was the consistency cost with the targets provided by the Mean Teacher (MT) model [26]. Thus,
the RLadder was trained with three tasks: 1) next step prediction at the bottom, 2) classification
at the top, 3) consistency with the MT outputs at the top. As shown in Table 2, the RLadder
improves dramatically by learning a better model with the help of unlabeled data independently and in
addition to other semi-supervised learning methods. The temporal baseline model also improves the
classification accuracy by using the consistency cost but it is clearly outperformed by the RLadder.
3.2
Polyphonic Music Dataset
In this section, we evaluate the RLadder on the midi dataset converted to piano rolls [6]. The dataset
consists of piano rolls (the notes played at every time step, where a time step is, in this case, an eighth
note) of various piano pieces. We train an 18-layer RLadder containing five convolutional LSTMs
and one fully-connected LSTM. More details can be found in Appendix D.2. Table 3 shows the
Figure 4: Example prediction of an RLadder on the occluded moving MNIST dataset. First row: the
ground truth of the digit, which the network never sees and does not train on. Second row: The actual
five frames seen by the network and on which it trains. Third row: the predicted next frames of a
trained RLadder. Fourth row: A stopped-gradient (gradient does not flow into the RLadder) readout
of the bottom layer of the decoder trained on the ground truth to probe what aspects of the digit are
represented by the neurons which predict the next frame. Notice how at t = 1, the network does
not yet know in which direction the digit will move and so it predicts a superposition of possible
movements. Notice further (red annotations a-f), that until t = 3, the network thought the digit was a
five, but when the top bar of the supposed five did not materialize on the other side of the occlusion
as expected at t = 4, the network immediately concluded correctly that it was actually a three.
Table 2: Classification error (%) on semi-supervised Occluded Moving MNIST

                                                1k labeled      1k labeled & 59k unlabeled
                                                                w/o MT           MT
Optimal reconstruction and static classifier    3.50 ± 0.28     3.50 ± 0.28      1.34 ± 0.04
Temporal baseline                               10.86 ± 0.43    10.86 ± 0.43     3.14 ± 0.16
RLadder                                         10.49 ± 0.81    5.20 ± 0.77      1.69 ± 0.14
negative log-likelihoods of the next-step prediction obtained on the music dataset, where our results
are reported as mean plus or minus standard deviation over 10 seeds. We see that the RLadder is
competitive with the best results, and gives the best results among models outputting the marginal
distribution of notes at each time step.
The fact that the RLadder did not beat [16] on the MIDI datasets shows one of the limitations of
the RLadder. Most of the models in Table 3 output a joint probability distribution over notes, unlike
the RLadder, which outputs the marginal probability for each note. That is to say, to output the
probability of a note, those models take as input not only the notes at previous time instances but
also the ground truth of the notes to the left at the same time instance. The RLadder only takes the
past notes played as input. Even so, the example in Section 3.1 of the digit five turning into a three
after seeing a single absent dot shows that internally the RLadder models the joint distribution.
4 Experiments with perceptual grouping
In this section, we show that the RLadder can be used as an inference engine in a complex model which
benefits from iterative inference and temporal modeling. We consider the task of perceptual grouping,
that is identifying which parts of the sensory input belong to the same higher-level perceptual
Table 3: Negative log-likelihood (smaller is better) on the polyphonic music datasets

                              Piano-midi.de   Nottingham    Muse          JSB Chorales
Models outputting a joint distribution of notes:
NADE masked [4]               7.42            3.32          6.48          8.51
NADE [4]                      7.05            2.89          5.54          7.59
RNN-RBM [6]                   7.09            2.39          6.01          6.27
RNN-NADE (HF) [6]             7.05            2.31          5.60          5.56
LSTM-NADE [16]                7.39            2.06          5.03          6.10
TP-LSTM-NADE [16]             5.49            1.64          4.34          5.92
BALSTM [16]                   5.00            1.62          3.90          5.86
Models outputting marginal probabilities for each note:
RNN [4]                       7.88            3.87          7.43          8.76
LSTM [17]                     6.866           3.492         —             —
MUT1 [17]                     6.792           3.254         —             —
RLadder                       6.19 ± 0.02     2.42 ± 0.03   5.69 ± 0.02   5.64 ± 0.02
components (objects). We enhance the previously developed model for perceptual grouping called
Tagger [13] by replacing the originally used Ladder engine with the RLadder. For another perspective
on the problem, see [14], which also extends Tagger to a recurrent neural network, but does so from an
expectation maximization point of view.
4.1 Recurrent Tagger
Tagger is a model designed for perceptual grouping. When applied to images, the modeling assumption is that each pixel x̃_i belongs to one of K objects, which is described by binary variables z_{i,k}: z_{i,k} = 1 if pixel i belongs to object k and z_{i,k} = 0 otherwise. The reconstruction of the whole image using object k only is m_k, a vector with as many elements m_{i,k} as there are pixels. Thus, the assumed probabilistic model can be written as follows:

    p(\tilde{x}, m, z, h) = \prod_{i,k} \mathcal{N}(\tilde{x}_i \mid m_{i,k}, \sigma_k^2)^{z_{i,k}} \prod_{k=1}^{K} p(z_k, m_k \mid h_k)\, p(h_k)    (4)
where z_k is a vector of elements z_{i,k} and h_k is (a hierarchy of) latent variables which define the shape
and the texture of the objects. See Fig. 5a for a graphical representation of the model and Fig. 5b
for possible values of the model variables for the textured MNIST dataset used in the experiments of
Section 4.2. The model in (4) is defined for the noisy image x̃ because Tagger is trained with an auxiliary
low-level task of denoising. The inference procedure in model (4) should evaluate the posterior
distributions of the latent variables z_k, m_k, h_k for each of the K groups given the corrupted data x̃.
Making the approximation that the variables of each of the K groups are independent a posteriori,

    p(z, m, h \mid \tilde{x}) \approx \prod_{k} q(z_k, m_k, h_k),    (5)

the inference procedure could be implemented by iteratively updating each of the K approximate
distributions q(z_k, m_k, h_k), if the model (4) and the approximation (5) were defined explicitly.
Tagger does not explicitly define a probabilistic model (4) but learns the inference procedure directly.
The iterative inference procedure is implemented by a computational graph with K copies of the same
Ladder network, each doing inference for one of the groups (see Fig. 5c). At the end of every iteration, the
inference procedure produces the posterior probabilities π_{i,k} that pixel i belongs to object k and the
point estimates of the reconstructions m_k (see Fig. 5c). Those outputs are used to form the low-level
cost and the inputs for the next iteration (see more details in [13]). In this paper, we replace the
original Ladder engine of Tagger with the RLadder. We refer to the new model as RTagger.
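For intuition, here is a minimal sketch of the per-pixel group posterior implied by the Gaussian likelihood in (4), assuming hypothetical arrays for the reconstructions and group priors; it illustrates the quantity π_{i,k}, not the learned amortized inference.

```python
import numpy as np

def group_posteriors(x, m, sigma, prior, eps=1e-8):
    # x: (P,) corrupted pixels; m: (K, P) per-group reconstructions
    # sigma: (K,) per-group noise scales; prior: (K, P) prior masses q(z_{i,k}=1)
    # The constant -0.5*log(2*pi) cancels in the normalization, so it is omitted.
    log_lik = (-0.5 * ((x[None, :] - m) / sigma[:, None]) ** 2
               - np.log(sigma)[:, None])
    log_post = np.log(prior + eps) + log_lik
    log_post -= log_post.max(axis=0, keepdims=True)  # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=0, keepdims=True)    # pi_{i,k}, sums to 1 over k
```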
4.2 Experiments on grouping using texture information
The goal of the following experiment is to test the efficiency of RTagger in grouping objects using
the texture information. To this end, we created a dataset that contains thickened MNIST digits with
Figure 5: (a): Graphical model for perceptual grouping. White circles are unobserved latent variables,
gray circles represent observed variables. (b): Examples of possible values of model variables for the
textured MNIST dataset. (c): Computational graph that implements iterative inference in the perceptual
grouping task (RTagger). Two graph iterations are drawn. The plate notation represents K copies of
the same graph.
Figure 6: (a): Example image from the Brodatz-textured MNIST dataset. (b): The image reconstruction m_0 by the group that learned the background. (c): The image reconstruction m_1 by the group
that learned the digit. (d): The original image colored using the found grouping π_k.
20 textures from the Brodatz dataset [7]. An example of a generated image is shown in Fig. 6a. To
create a greater diversity of textures (to avoid over-fitting), we randomly rotated and scaled the 20
Brodatz textures when producing the training data.
The network trained on the textured MNIST dataset has the architecture presented in Fig. 5c with
three iterations. The number of groups was set to K = 3. The details of the RLadder architecture are
presented in Appendix D.3. The network was trained on two tasks: The low-level segmentation task
was formulated around denoising, the same way as in the Tagger model [13]. The top-level cost was
the log-likelihood of the digit class at the last iteration.
Table 4 presents the obtained performance on the textured MNIST dataset in both fully supervised
and semi-supervised settings. All experiments were run over 5 seeds. We report our results as mean
plus or minus standard deviation. In some runs, Tagger experiments did not converge to a reasonable
solution (because of unstable or too slow convergence), so we did not include those runs in our
evaluations. Following [13], the segmentation accuracy was computed using the adjusted mutual
information (AMI) score [29], which is the mutual information between the ground-truth segmentation
and the estimated segmentation π_k, scaled to give one when the segmentations are identical and zero
when the output segmentation is random.
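A minimal sketch of this evaluation using scikit-learn's AMI implementation (assuming per-pixel integer group assignments; the array names are illustrative):

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

def segmentation_ami(true_groups, pred_groups):
    # true_groups, pred_groups: (H, W) integer object assignments per pixel.
    # AMI is 1 for identical segmentations and ~0 for a random segmentation.
    return adjusted_mutual_info_score(true_groups.ravel(), pred_groups.ravel())
```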
For comparison, we trained the Tagger model [13] on the same dataset. The other comparison method
was a feed-forward convolutional network which had an architecture resembling the bottom-up pass
(encoder) of the RLadder and which was trained on the classification task only. One thing to notice is
that the results obtained with the RTagger clearly improve over iterations, which supports the idea that
iterative inference is useful in complex cognitive tasks. We also observe that RTagger outperforms
Tagger and both approaches significantly outperform the convolutional network baseline in which the
classification task is not supported by the input-level task. We have also observed that the top-level
classification task makes the RTagger faster to train in terms of the number of updates, which also
supports that the high-level and low-level tasks mutually benefit from each other: Detecting object
Table 4: Results on the Brodatz-textured MNIST. The i-th column corresponds to the intermediate results
of RTagger after the i-th iteration. In the fully supervised case, Tagger was only trained successfully
in 2 of the 5 seeds; the given results are for those 2 seeds. In the semi-supervised case, we were not
able to train Tagger successfully.

                      50k labeled
Segmentation accuracy, AMI:
RTagger      0.55     0.75     0.80 ± 0.01
Tagger       —        —        0.73 ± 0.02
Classification error, %:
RTagger      18.2     8.0      5.9 ± 0.2
Tagger       —        —        12.15 ± 0.1
ConvNet      —        —        14.3 ± 0.46

                      1k labeled + 49k unlabeled
Segmentation accuracy, AMI:
RTagger      0.56     0.74     0.80 ± 0.03
Classification error, %:
RTagger      63.8     28.2     22.6 ± 6.2
ConvNet      —        —        88 ± 0.30
Figure 7: Example of segmentation and generation by the RTagger trained on the Moving MNIST.
First row: frames 0–9 are the input sequence, frames 10–15 are the ground-truth future. Second row:
next-step predictions for frames 1–9 and future frame generation (frames 10–15) by the RTagger; the colors
represent the grouping performed by the RTagger.
boundaries using textures helps classify a digit, while knowing the class of the digit helps detect the
object boundaries. Figs. 6b-d show the reconstructed textures and the segmentation results for the
image from Fig. 6a.
4.3 Experiments on grouping using movement information
The same RTagger model can perform perceptual grouping in video sequences using motion cues. To
demonstrate this, we applied the RTagger to the moving MNIST [25]² sequences of length 20, where the
low-level task was prediction of the next frame. When applied to temporal data, the RTagger assumes
the existence of K objects whose dynamics are independent of each other. Using this assumption,
the RTagger can separate the two moving digits into different groups. We assessed the segmentation
quality by the AMI score, which was computed similarly to [13, 12], ignoring the background (in the
case of a uniform zero-valued background) and overlap regions where different objects have the same
color. The achieved average AMI score was 0.75. An example of segmentation is shown in Fig. 7.
When we tried to use Tagger on the same dataset, we were only able to train successfully in a single
seed out of three. This is possibly because speed is an intermediate level of abstraction that is not
represented at the pixel level. Due to its recurrent connections, the RTagger can keep those representations
from one time step to the next and segment accordingly, something more difficult for Tagger to do, which
might explain the training instability.
5 Conclusions
In the paper, we presented recurrent Ladder networks. The proposed architecture is motivated by
the computations required in a hierarchical latent variable model. We empirically validated that the
recurrent Ladder is able to learn accurate inference in challenging tasks which require modeling
dependencies on multiple abstraction levels, iterative inference and temporal modeling. The proposed
model outperformed strong baseline methods on two challenging classification tasks. It also produced
competitive results on a temporal music dataset. We envision that the proposed Recurrent Ladder
will be a powerful building block for solving difficult cognitive tasks.
² For this experiment, in order to have the ground-truth segmentation, we reimplemented the dataset ourselves.
Acknowledgments
We would like to thank Klaus Greff and our colleagues from The Curious AI Company for their
contribution in the presented work, especially Vikram Kamath and Matti Herranen.
References
[1] Alain, G., Bengio, Y., and Rifai, S. (2012). Regularized auto-encoders estimate local statistics. CoRR,
abs/1211.4246.
[2] Arponen, H., Herranen, M., and Valpola, H. (2017). On the exact relationship between the denoising
function and the data distribution. arXiv preprint arXiv:1709.02797.
[3] Badrinarayanan, V., Kendall, A., and Cipolla, R. (2015). Segnet: A deep convolutional encoder-decoder
architecture for image segmentation. arXiv preprint arXiv:1511.00561.
[4] Berglund, M., Raiko, T., Honkala, M., Kärkkäinen, L., Vetek, A., and Karhunen, J. T. (2015). Bidirectional
recurrent neural networks as generative models. In Advances in Neural Information Processing Systems.
[5] Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics).
Springer-Verlag New York, Inc., Secaucus, NJ, USA.
[6] Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in
high-dimensional sequences: Application to polyphonic music generation and transcription. In Proceedings
of the 29th International Conference on Machine Learning (ICML-12), pages 1159–1166.
[7] Brodatz, P. (1966). Textures: a photographic album for artists and designers. Dover Pubns.
[8] Cho, K., Van Merriënboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine
translation: Encoder–decoder approaches. arXiv preprint arXiv:1409.1259.
[9] Cricri, F., Honkala, M., Ni, X., Aksu, E., and Gabbouj, M. (2016). Video Ladder networks. arXiv preprint
arXiv:1612.01756.
[10] Eyjolfsdottir, E., Branson, K., Yue, Y., and Perona, P. (2016). Learning recurrent representations for
hierarchical behavior modeling. arXiv preprint arXiv:1611.00094.
[11] Finn, C., Goodfellow, I. J., and Levine, S. (2016). Unsupervised learning for physical interaction through
video prediction. In Advances in Neural Information Processing Systems 29.
[12] Greff, K., Srivastava, R. K., and Schmidhuber, J. (2015). Binding via reconstruction clustering. CoRR,
abs/1511.06418.
[13] Greff, K., Rasmus, A., Berglund, M., Hao, T., Valpola, H., and Schmidhuber, J. (2016). Tagger: Deep
unsupervised perceptual grouping. In Advances in Neural Information Processing Systems 29.
[14] Greff, K., van Steenkiste, S., and Schmidhuber, J. (2017). Neural expectation maximization. In ICLR
Workshop.
[15] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.
[16] Johnson, D. D. (2017). Generating polyphonic music using tied parallel networks. In International
Conference on Evolutionary and Biologically Inspired Music and Art.
[17] Jozefowicz, R., Zaremba, W., and Sutskever, I. (2015). An empirical exploration of recurrent network
architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15).
[18] Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In The International
Conference on Learning Representations (ICLR), San Diego.
[19] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional
neural networks. In Advances in neural information processing systems.
[20] Laukien, E., Crowder, R., and Byrne, F. (2016). Feynman machine: The universal dynamical systems
computer. arXiv preprint arXiv:1609.03971.
[21] Newell, A., Yang, K., and Deng, J. (2016). Stacked hourglass networks for human pose estimation. In
European Conference on Computer Vision. Springer.
[22] Rasmus, A., Berglund, M., Honkala, M., Valpola, H., and Raiko, T. (2015). Semi-supervised learning with
Ladder networks. In Advances in Neural Information Processing Systems.
[23] Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image
segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention.
[24] Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all
convolutional net. arXiv preprint arXiv:1412.6806.
[25] Srivastava, N., Mansimov, E., and Salakhutdinov, R. (2015). Unsupervised learning of video representations
using LSTMs. In International Conference on Machine Learning, pages 843–852.
[26] Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency
targets improve semi-supervised deep learning results. In Advances in neural information processing systems.
[27] Tietz, M., Alpay, T., Twiefel, J., and Wermter, S. (2017). Semi-supervised phoneme recognition with
recurrent ladder networks. In International Conference on Artificial Neural Networks 2017.
[28] Valpola, H. (2015). From neural PCA to deep unsupervised learning. Advances in Independent Component
Analysis and Learning Machines.
[29] Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison:
Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct),
2837–2854.
6,835 | 7,183 | Parameter-Free Online Learning via Model Selection
Dylan J. Foster
Cornell University
Satyen Kale
Google Research
Mehryar Mohri
NYU and Google Research
Karthik Sridharan
Cornell University
Abstract
We introduce an efficient algorithmic framework for model selection in online
learning, or parameter-free online learning. Our algorithms satisfy oracle inequalities in the adversarial online learning setting. Unlike previous work in this area that
has focused on specific, highly structured function classes, such as nested balls in a
Hilbert space, we propose a generic meta-algorithm framework that achieves model
selection oracle inequalities under minimal structural assumptions: we give the first
computationally efficient algorithms that work in arbitrary Banach spaces under
mild smoothness assumptions; previous results only applied to Hilbert spaces.
We further derive new oracle inequalities for various matrix classes, non-nested
convex sets, and R^d with generic regularizers. Finally, we generalize these results
by providing oracle inequalities for arbitrary non-linear classes in the contextual
learning model; in particular, we give new algorithms for learning with multiple
kernels. These results are all derived through a unified meta-algorithm scheme
using a novel "multi-scale" algorithm for prediction with expert advice based on
random playout, which may be of independent interest.
1 Introduction
A key problem in the design of learning algorithms is the choice of the hypothesis set F. This is
known as the model selection problem. The choice of F is driven by inherent trade-offs. In the
statistical learning setting, this can be analyzed in terms of the estimation and approximation errors.
A richer or more complex F helps better approximate the Bayes predictor (smaller approximation
error). On the other hand, a hypothesis set that is too complex may have too large a VC-dimension or
an unfavorable Rademacher complexity, thereby resulting in looser guarantees on the difference of
the loss of a hypothesis and that of the best-in class (large estimation error).
In the batch setting, this problem has been extensively studied with the main ideas originating in
the seminal work of [41] and [40] and the principle of Structural Risk Minimization (SRM). It is
typically formulated as follows: let (F_i)_{i ∈ N} be an infinite sequence of hypothesis sets (or models); the
problem consists of using the training sample to select a hypothesis set F_i with a favorable trade-off
and choose the best hypothesis f in F_i.
If we had access to a hypothetical oracle informing us of the best choice of i for a given problem, then
the problem would reduce to the standard one of learning with a fixed hypothesis set. Remarkably
though, techniques such as SRM, or similar penalty-based model selection methods, return a hypothesis f̂ that enjoys finite-sample learning guarantees that are almost as favorable as those that would
be obtained had an oracle informed us of the index i* of the best-in-class classifier's hypothesis set
[40; 13; 37; 22; 4; 25]. Such guarantees are sometimes referred to as oracle inequalities. They can
be derived even for data-dependent penalties [22; 4; 3].
Such results naturally raise the following questions in the online setting, which we study in this paper:
can we develop an analogous theory of model selection in online learning? Can we design online
algorithms for model selection with solutions benefitting from strong guarantees, analogous to the
batch ones? Unlike the statistical setting, in online learning, one cannot split samples to first learn the
optimal predictor within each subclass and then later learn the optimal subclass choice.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
A series of recent works on online learning provide some positive results along that direction. On
the algorithmic side, [26; 28; 31; 32] present solutions that efficiently achieve oracle inequalities
for the (important) special case where F1 , F2 , . . . is a sequence of nested balls in a Hilbert space.
On the theoretical side, a different line of work focusing on general hypothesis classes [14] uses
martingale-based sequential complexity measures to show that, information-theoretically, one can
obtain oracle inequalities in the online setting at a level of generality comparable to that of the batch
statistical learning. However, this last result is not algorithmic.
The first approach that a familiar reader might think of for tackling the online model selection problem
is to run, for each i, an online learning algorithm that minimizes regret against Fi and aggregate
over these algorithms using the multiplicative weights algorithm for prediction with expert advice.
This would work if all the losses or "experts" considered were uniformly bounded by a reasonably
small quantity. However, in most reasonable problems, the losses of predictors or experts for Fi are
bounded by some quantity that grows with i. Using simple aggregation would scale our regret with
the magnitude of the largest F_i and not the i* we want to compare against. This is the main technical
challenge faced in this context and that we fully address in this paper.
This work presents an efficient algorithmic framework for online model selection in the adversarial
setting, an online analogue of SRM. Our results are based on a novel multi-scale algorithm for
prediction with expert advice. This algorithm works in a situation where the different experts? losses
lie in different ranges, and guarantees that the regret to each individual expert is adapted to the range
of its losses. The algorithm can also take advantage of a given prior over the experts reflecting their
importance. This general, abstract setting of prediction with expert advice yields algorithms for a
host of applications detailed below in a straightforward manner.
First, we give efficient algorithms for model selection for nested linear classes that provide oracle
inequalities in terms of the norm of the benchmark to which the algorithm?s performance is compared.
Our algorithm works for any norm, which considerably generalizes previous work [26; 28; 31;
32] from Hilbert spaces to arbitrary normed vector spaces. For most of the classes considered,
we give the first polynomial-time algorithms. This includes online oracle inequalities for highdimensional learning tasks such as online PCA and online matrix prediction. We then generalize these
results even further by providing oracle inequalities for arbitrary non-linear classes in the contextual
learning model. This yields applications for online penalized risk minimization and multiple kernel
learning. Due to space limitations, all proofs in the paper have been moved to the appendices in the
supplementary material, with proof sketches and intuitions in the main body of the paper.
1.1 Preliminaries
Notation. For a given norm ‖·‖, let ‖·‖_* denote the dual norm. Likewise, for any function F, F* will denote its Fenchel conjugate. For a Banach space (B, ‖·‖), the dual is (B*, ‖·‖_*). We use x_{1:n} as shorthand for a sequence of vectors (x_1, ..., x_n). For such sequences, we will use x_t[i] to denote the t-th vector's i-th coordinate. We let e_i denote the i-th standard basis vector. ‖·‖_p denotes the ℓ_p norm, ‖·‖_σ denotes the spectral norm, and ‖·‖_Σ denotes the trace norm.
Setup and Goals. We work in two closely related settings: online convex optimization (Protocol 1) and online supervised learning (Protocol 2). In online convex optimization, the learner selects decisions from a convex subset W of some Banach space B. Regret to a comparator w ∈ W in this setting is defined as ∑_{t=1}^n f_t(w_t) − ∑_{t=1}^n f_t(w). Suppose W can be decomposed into sets W_1, W_2, .... For a fixed set W_k, the optimal regret, if one tailors the algorithm to compete with W_k, is typically characterized by some measure of intrinsic complexity of the class (such as Littlestone's dimension [5] and sequential Rademacher complexity [34]), denoted Comp_n(W_k). We would like to develop algorithms that predict a sequence (w_t) such that

    \sum_{t=1}^{n} f_t(w_t) - \min_{w \in W_k} \sum_{t=1}^{n} f_t(w) \le \mathrm{Comp}_n(W_k) + \mathrm{Pen}_n(k) \qquad \forall k.    (1)

This equation is called an oracle inequality: it states that the performance of the sequence (w_t) matches that of a comparator that minimizes the bias–variance tradeoff min_k { min_{w ∈ W_k} ∑_{t=1}^n f_t(w) + Comp_n(W_k) }, up to a penalty Pen_n(k) whose scale ideally matches that of Comp_n(W_k). We shall see shortly that ensuring that the scale of Pen_n(k) does
Protocol 1 Online Convex Optimization
for t = 1, ..., n do
    Learner selects strategy q_t ∈ Δ(W) for convex decision set W.
    Nature selects convex loss f_t : W → R.
    Learner draws w_t ∼ q_t and incurs loss f_t(w_t).
end for
indeed match is the core technical challenge in developing online oracle inequalities for commonly
used classes.
In the supervised learning setting we measure regret against a benchmark class F = ⋃_{k=1}^∞ F_k of functions f : X → R, where X is some abstract context space, also called the feature space. In this case, the desired oracle inequality has the form:

    \sum_{t=1}^{n} \ell(\hat{y}_t, y_t) - \inf_{f \in F_k} \sum_{t=1}^{n} \ell(f(x_t), y_t) \le \mathrm{Comp}_n(F_k) + \mathrm{Pen}_n(k) \qquad \forall k.    (2)
Protocol 2 Online Supervised Learning
for t = 1, ..., n do
    Nature provides x_t ∈ X.
    Learner selects randomized strategy q_t ∈ Δ(R).
    Nature provides outcome y_t ∈ Y.
    Learner draws ŷ_t ∼ q_t and incurs loss ℓ(ŷ_t, y_t).
end for
2 Online Model Selection
2.1 The need for multi-scale aggregation
Let us briefly motivate the main technical challenge overcome by the model selection approach we consider. The most widely studied oracle inequality in online learning has the following form:

    \sum_{t=1}^{n} f_t(w_t) - \sum_{t=1}^{n} f_t(w) \le O\Big((\|w\|_2 + 1)\sqrt{n \cdot \log((\|w\|_2 + 1)n)}\Big) \qquad \forall w \in \mathbb{R}^d.    (3)
In light of (1), a model selection approach to obtaining this inequality would be to split the set W = R^d into ℓ_2 norm balls of doubling radius, i.e. W_k = {w : ‖w‖_2 ≤ 2^k}. A standard fact [16] is that such a set has Comp_n(W_k) = 2^k √n if one optimizes over it using Mirror Descent, and so obtaining the oracle inequality (1) is sufficient to recover (3), so long as Pen_n(k) is not too large relative to Comp_n(W_k).
Online model selection is fundamentally a problem of prediction with expert advice [8], where the experts correspond to the different model classes one is choosing from. Our basic meta-algorithm, MULTISCALEFTPL (Algorithm 3), operates in the following setup. The algorithm has access to a finite number, N, of experts. In each round, the algorithm is required to choose one of the N experts. Then the losses of all experts are revealed, and the algorithm incurs the loss of the chosen expert. The twist from the standard setup is that the losses of the experts are not uniformly bounded in the same range. Indeed, for the setup described for the oracle inequality (3), class W_k will produce predictions with norm as large as 2^k. Therefore, here, we assume that expert i incurs losses in the range [−c_i, c_i], for some known parameter c_i ≥ 0. The goal is to design an online learning algorithm whose regret to expert i scales with c_i, rather than max_i c_i, which is what previous algorithms for learning from expert advice (such as the standard multiplicative weights strategy or AdaHedge [12]) would achieve. Indeed, any regret bound scaling in max_i c_i will be far too large to achieve (3), as the term Pen_n(k) will dominate. This new type of scale-sensitive regret bound, achieved by our algorithm MULTISCALEFTPL, is stated below.
Algorithm 3
procedure MULTISCALEFTPL(c, π)    ▷ Scale vector c with c_i ≥ 1, prior distribution π.
    for time t = 1, ..., n do
        Draw sign vectors ε_{t+1}, ..., ε_n ∈ {±1}^N, each uniformly at random.
        Compute the distribution

            p_t(\epsilon_{t+1:n}) = \arg\min_{p \in \Delta_N} \sup_{g_t : |g_t[i]| \le c_i} \Big[ \langle p, g_t \rangle + \sup_i \Big( -\sum_{s=1}^{t} \langle e_i, g_s \rangle + 4 \sum_{s=t+1}^{n} \epsilon_s[i]\, c_i - B(i) \Big) \Big],

            where B(i) = 5 c_i \sqrt{n \log(4 c_i^2 n / \pi_i)}.

        Play i_t ∼ p_t.
        Observe loss vector g_t.
    end for
end procedure
Theorem 1. Suppose the loss sequence (g_t)_{t ≤ n} satisfies |g_t[i]| ≤ c_i for a sequence (c_i)_{i ∈ [N]} with each c_i ≥ 1. Let π ∈ Δ_N be a given prior distribution on the experts. Then, playing the strategy (p_t)_{t ≤ n} given by Algorithm 3, MULTISCALEFTPL yields the following regret bound:¹

    \mathbb{E}\Big[\sum_{t=1}^{n} \langle e_{i_t}, g_t \rangle - \sum_{t=1}^{n} \langle e_i, g_t \rangle\Big] \le O\big(c_i \sqrt{n \log(n c_i / \pi_i)}\big) \qquad \forall i \in [N].    (4)
The proof of the theorem is deferred to Appendix A in the supplementary material due to space constraints. Briefly, the proof follows the technique of adaptive relaxations from [14]. It relies on showing that the following function of the first t loss vectors g_{1:t} is an admissible relaxation (see [14] for definitions):

    \mathrm{Rel}(g_{1:t}) \triangleq \mathbb{E}_{\epsilon_{t+1}, \ldots, \epsilon_n \in \{\pm 1\}^N} \sup_i \Big( -\sum_{s=1}^{t} \langle e_i, g_s \rangle + 4 \sum_{s=t+1}^{n} \epsilon_s[i]\, c_i - B(i) \Big).
This implies that if we play the strategy (p_t)_{t ≤ n} given by Algorithm 3, the regret to the i-th expert is bounded by B(i) + Rel(∅), where Rel(∅) denotes the Rel function applied to the empty sequence of loss vectors. Then, as a final step, we bound Rel(∅) using a probabilistic maximal inequality (Lemma 2 in the supplementary material), yielding the given bound. Compared to related FTPL algorithms [35], the analysis is surprisingly delicate, as additive c_i factors can spoil the desired regret bound (4) if the c_i differ by orders of magnitude.
The min-max optimization problem in MULTISCALEFTPL can be solved in polynomial time using linear programming; see Appendix A.1 in the supplementary material for a full discussion.
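To make the scale-sensitive perturbation concrete, here is a minimal simulation sketch in Python. It plays the deterministic perturbed leader rather than solving the exact min-max problem for p_t, so it illustrates the scoring rule of Algorithm 3 rather than reproducing its guarantee exactly; all names are illustrative.

```python
import numpy as np

def multiscale_ftpl_sim(losses, c, prior, seed=0):
    # losses: (n, N) array with |losses[t, i]| <= c[i]
    # c: per-expert scales (c[i] >= 1); prior: (N,) probability vector.
    rng = np.random.default_rng(seed)
    n, N = losses.shape
    B = 5 * c * np.sqrt(n * np.log(4 * c**2 * n / prior))
    cum = np.zeros(N)                 # cumulative losses seen so far
    played = []
    for t in range(n):
        # fresh sign perturbations for the remaining rounds s = t+1, ..., n
        eps = rng.choice([-1.0, 1.0], size=(n - t - 1, N))
        score = -cum + 4 * (eps * c).sum(axis=0) - B
        played.append(int(np.argmax(score)))  # simplified: argmax, not minimax p_t
        cum += losses[t]
    return played
```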
In related work, [7] simultaneously developed a multi-scale experts algorithm which could also be
used in our framework. Their regret bound has sub-optimal dependence on the prior distribution over
experts, but their algorithm is more efficient and is able to obtain multiplicative regret guarantees.
2.2 Online convex optimization
One can readily apply MULTISCALEFTPL to online optimization problems whenever it is possible to obtain good bounds on the losses of the different experts. One such application is to online convex optimization, where such a bound can be obtained via appropriate bounds on the relevant norms of the parameter vectors and the gradients of the loss functions. We detail this application below.

We now show how to apply MULTISCALEFTPL in the online convex optimization framework to derive algorithms for parameter-free online learning and more. All of the algorithms in this section are derived using a unified meta-algorithm strategy, MULTISCALEOCO.

¹ This regret bound holds under expectation over the player's randomization. It is assumed that each g_t is selected before the randomized strategy p_t is revealed, but may adapt to the distribution over p_t. In fact, a slightly stronger version of this bound holds, namely

    \mathbb{E}\Big[\sum_{t=1}^{n} \langle e_{i_t}, g_t \rangle - \min_{i \in [N]} \Big( \sum_{t=1}^{n} \langle e_i, g_t \rangle + O\big(c_i \sqrt{n \log(n c_i / \pi_i)}\big) \Big)\Big] \le 0.

A similar strengthening applies to all subsequent bounds.
4
The setup is as follows. We have access to N sub-algorithms, denoted A LGi for i ? [N ]. In round
t, each sub-algorithm A LGi produces a prediction wti ? Wi , where Wi is a set in a vector space V
over R containing 0. Our desired meta-algorithm is then required to choose one of the predictions
wti . Then, a loss function ft ? V ? R is revealed, whereupon A LGi incurs loss ft (wti ), and the
meta-algorithm suffers the loss of the chosen prediction. We make the following assumption on the
sub-algorithms:
Assumption 1. The sub-algorithms satisfy the following conditions:
? For each i ? [N ], there is an associated norm ???(i) such that supw?Wi ?w?(i) ? Ri .
? For each i ? [N ], the sequence of functions ft are Li -Lipschitz on Wi with respect to ???(i) .
? For each sub-algorithm A LGi , the iterates (wti )t?n enjoy a regret bound ?nt=1 ft (wti ) ?
inf w?Wi ?nt=1 ft (w) ? Regn (i), where Regn (i) may be data- or algorithm-dependent.
Algorithm 4
1: procedure MULTISCALEOCO({ALG_i, R_i, L_i}_{i ∈ [N]}, π)    ▷ Collection of sub-algorithms, prior π.
2:     c ← (R_i · L_i)_{i ∈ [N]}    ▷ Sub-algorithm scale parameters.
3:     for t = 1, ..., n do
4:         w_t^i ← ALG_i(f̄_1, ..., f̄_{t−1}) for each i ∈ [N].
5:         i_t ← MULTISCALEFTPL[c, π](g_1, ..., g_{t−1}).
6:         Play w_t = w_t^{i_t}.
7:         Observe loss function f_t and let f̄_t(w) = f_t(w) − f_t(0).
8:         g_t ← (f̄_t(w_t^i))_{i ∈ [N]}.
9:     end for
10: end procedure
In most applications, W_i will be a convex set and f_t a convex function; this convexity is not necessary to prove a regret bound for the meta-algorithm. We simply need boundedness of the set W_i and Lipschitzness of the functions f_t, as specified in Assumption 1. This assumption implies that for any i, we have |f_t(w) − f_t(0)| ≤ R_i L_i for any w ∈ W_i. Thus, we can design a meta-algorithm for this setup by using MULTISCALEFTPL with c_i = R_i L_i, which is precisely the MULTISCALEOCO algorithm (Algorithm 4). The following theorem provides a bound on the regret of MULTISCALEOCO; it is a direct consequence of Theorem 1.
Theorem 2. Without loss of generality, assume that R_i L_i ≥ 1.² Suppose that the inputs to Algorithm 4 satisfy Assumption 1. Then, the iterates (w_t)_{t ≤ n} returned by Algorithm 4 obey the regret bound

    \mathbb{E}\Big[\sum_{t=1}^{n} f_t(w_t) - \inf_{w \in W_i} \sum_{t=1}^{n} f_t(w)\Big] \le \mathbb{E}[\mathrm{Reg}_n(i)] + O\big(R_i L_i \sqrt{n \log(R_i L_i n / \pi_i)}\big) \qquad \forall i \in [N].    (5)
Theorem 2 shows that if we use Algorithm 4 to aggregate the iterates produced by a collection of sub-algorithms (ALG_i)_{i ∈ [N]}, the regret against any sub-algorithm i will only depend on that algorithm's scale, not on the regret of the worst sub-algorithm.
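As an illustration, here is a minimal sketch of the sub-algorithm side: projected online gradient descent over an ℓ₂ ball of radius R_i, the building block aggregated by the meta-algorithm in the Hilbert-space case. The class and parameter names are ours, not the paper's.

```python
import numpy as np

class BallOGD:
    """Projected online gradient descent over {w : ||w||_2 <= R}."""
    def __init__(self, d, R, L, n):
        self.w = np.zeros(d)
        self.R = R
        self.eta = R / (L * np.sqrt(n))   # standard tuning for R*L*sqrt(n) regret
    def predict(self):
        return self.w
    def update(self, grad):
        w = self.w - self.eta * grad
        nrm = np.linalg.norm(w)
        self.w = w if nrm <= self.R else w * (self.R / nrm)

# Doubling collection of sub-algorithms with scales c_i = R_i * L,
# as fed to the multi-scale experts algorithm.
d, L, n = 10, 1.0, 1000
subalgs = [BallOGD(d, R=2.0**i, L=L, n=n) for i in range(int(np.log2(n)) + 1)]
c = np.array([alg.R * L for alg in subalgs])
```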
Application 1: Parameter-free online learning in uniformly convex Banach spaces. Using our general framework, we can give a generalization of the parameter-free online learning bounds found in [26; 28; 31; 32; 10] from Hilbert spaces to arbitrary uniformly convex Banach spaces. Recall that a Banach space (B, ‖·‖) is (2, λ)-uniformly convex if ½‖·‖² is λ-strongly convex with respect to itself [33]. Our algorithm is efficient whenever mirror descent over (B, ‖·‖) is efficient, because it is an instantiation of MULTISCALEOCO with the following collection A of N = n + 1 sub-algorithms: for each i ∈ [n + 1], we set R_i = 2^{i−1}, L_i = L, W_i = {w ∈ B : ‖w‖ ≤ R_i}, η_i = (R_i/L)√(λ/n), and ALG_i = MIRRORDESCENT(η_i, W_i, ‖·‖). Finally, we set π = Uniform([n + 1]).

Mirror descent is a standard tool for online convex optimization and is described precisely in Appendix A.2 in the supplementary material, but the only feature of its performance that will be important to us is that, when configured as described above, the iterates (w_t^i)_{t ≤ n} produced by ALG_i specified above will satisfy ∑_{t=1}^n f_t(w_t^i) − inf_{w ∈ W_i} ∑_{t=1}^n f_t(w) ≤ O(R_i L √(n/λ)) on any sequence of losses that are L-Lipschitz with respect to ‖·‖_*. Using just this simple fact, combined with the regret bound for MULTISCALEOCO (plus a few technical details in Appendix A.2), we can deduce the following parameter-free regret bound:

² For notational convenience, all Lipschitz bounds are assumed to be at least 1 without loss of generality for the remainder of the paper.
Theorem 3 (Oracle Inequality for Uniformly Convex Banach Spaces). The iterates (w_t)_{t ≤ n} produced by MULTISCALEOCO on any L-Lipschitz sequence of losses (f_t)_{t ≤ n} satisfy

    \mathbb{E}\Big[\sum_{t=1}^{n} f_t(w_t) - \sum_{t=1}^{n} f_t(w)\Big] \le O\Big(L \cdot (\|w\| + 1)\sqrt{n \cdot \log(L(\|w\| + 1)n)}\Big) \qquad \forall w \in B.    (6)
Note that the above oracle inequality applies for any uniformly convex norm ‖·‖. Previous results only obtain bounds of this form efficiently when ‖·‖ is a Hilbert space norm or ℓ_1. As is standard for such oracle inequality results, the bound is weaker than the optimal bound if ‖w‖ were selected in advance, but only by a mild √(log(L(‖w‖ + 1)n)) factor.

Proposition 1. The algorithm can be implemented in time O(T_MD · poly(n)) per iteration, where T_MD is the time complexity of a single mirror descent update.
In the example above, the (2, λ)-uniform convexity condition was mainly chosen because it is a familiar assumption. The result can be straightforwardly generalized to related notions such as q-uniform convexity (see [38] for discussion). More generally, the approach can be used to derive oracle inequalities with respect to a general strongly convex regularizer R defined over the space W. Such a bound would have the form O(L · √(n(R(w) + 1) · log((R(w) + 1)n))) for typical choices of R. This example captures well-known quantile bounds [23] when one takes R to be the KL-divergence and W to be the simplex, or, in the matrix case, takes R to be the quantum relative entropy and W to be the set of density matrices, as in [19].
It is instructive to think of MULTISCALEOCO as executing a (scale-sensitive) online analogue of the structural risk minimization principle. We simply specify a set of subclasses and a prior π specifying the importance of each subclass, and we are guaranteed that the algorithm's performance matches that of each subclass, plus a penalty depending on the prior weight placed on that subclass. The advantage of this approach is that the nested structure used in Theorem 3 is completely inessential. This leads to the exciting prospect of developing parameter-free algorithms over new and exotic set systems. One such example is given below.
Application 2: Oracle inequality for many ℓ_p norms. The MULTISCALEOCO framework easily allows us to obtain an oracle inequality with respect to many ℓ_p norms in R^d simultaneously. To the best of our knowledge, all previous works have only considered a single norm.

Theorem 4. Suppose that the loss functions (f_t)_{t ≤ n} are all L_p-Lipschitz for each p ∈ [1 + δ, 2], for some δ > 0, and that (L_p / L_{p'}) · log(d) = O(1) for all p, p' in this range. Then, there is a computationally efficient algorithm that guarantees, for all w ∈ R^d and all p ∈ [1 + δ, 2],

    \mathbb{E}\Big[\sum_{t=1}^{n} f_t(w_t) - \sum_{t=1}^{n} f_t(w)\Big] \le O\Big((\|w\|_p + 1) L_p \sqrt{n \log((\|w\|_p + 1) L_p \log(d)\, n)/(p - 1)}\Big).    (7)

The configuration in the above theorem is described in full in Appendix A.2 in the supplementary material. This strategy can be trivially extended to handle p in the range (2, ∞). The inequality holds for p ≥ 1 + δ rather than for p ≥ 1 because the ℓ_1 norm is not uniformly convex, but this is easily rectified by changing the regularizer at p = 1; we omit this for simplicity of presentation.
The same strategy can also be applied to matrix optimization over R^{d×d} by replacing the ℓ_p norm with the Schatten S_p norm. The Schatten S_p norm has strong convexity parameter on the order of p − 1 (which matches the ℓ_p norm up to absolute constants [2]), so the only change to Theorem 4 will be the running time T_MD. Likewise, the approach applies to (p, q)-group norms for group-structured sparsity tasks [21].
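For concreteness, here is a minimal sketch of the ℓ_p mirror descent update such a sub-algorithm could use (an unconstrained step via the standard p-norm link function; a projection onto W_i would follow in the constrained case). The function is written by us for illustration.

```python
import numpy as np

def lp_mirror_step(w, grad, eta, p):
    # One mirror descent step with regularizer psi(w) = 0.5 * ||w||_p^2.
    q = p / (p - 1.0)                      # dual exponent, 1/p + 1/q = 1
    def link(v, r):                        # gradient of 0.5 * ||v||_r^2
        nrm = np.linalg.norm(v, ord=r)
        if nrm == 0.0:
            return np.zeros_like(v)
        return np.sign(v) * np.abs(v) ** (r - 1.0) / nrm ** (r - 2.0)
    theta = link(w, p) - eta * grad        # descend in the dual space
    return link(theta, q)                  # map back to the primal space
```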
Application 3: Adapting to rank for Online PCA. For the online PCA task, the learner predicts from a class W_k = {W ∈ R^{d×d} : W ⪰ 0, ‖W‖_σ ≤ 1, ⟨W, I⟩ = k}. For a fixed value of k, such a class is a convex relaxation of the set of all rank-k projection matrices. After producing a prediction W_t, we experience affine loss functions f_t(W_t) = ⟨I − W_t, Y_t⟩, where Y_t ∈ Y = S_+ ∩ B, the positive semidefinite matrices in the unit spectral-norm ball.

We leverage an analysis of online PCA due to [30] together with MULTISCALEOCO to derive an algorithm that competes with many values of the rank simultaneously. This gives the following result:
Theorem 5. There is an efficient algorithm for Online PCA with regret bound

    \mathbb{E}\Big[\sum_{t=1}^{n} \langle I - W_t, Y_t \rangle - \min_{\substack{W \ \mathrm{projection} \\ \mathrm{rank}(W) = k}} \sum_{t=1}^{n} \langle I - W, Y_t \rangle\Big] \le \tilde{O}\Big(\sqrt{n \min\{k, d - k\}^2}\Big) \qquad \forall k \in [d - 2].

For a fixed value of k, the above bound is already optimal up to log factors, but it holds for all k simultaneously.
Application 4: Adapting to norm for Matrix Multiplicative Weights. In the matrix multiplicative weights setting [1], we consider hypothesis classes of the form W_r = {W ∈ R^{d×d} : W ⪰ 0, ‖W‖_Σ ≤ r}. Losses are given by f_t(W) = ⟨W, Y_t⟩, where ‖Y_t‖_σ ≤ 1. For a fixed value of r, the matrix multiplicative weights strategy has regret against W_r bounded by O(r√(n log d)). Using this strategy for fixed r as a sub-algorithm for MULTISCALEOCO, we achieve the following oracle inequality efficiently:

Theorem 6. There is an efficient matrix prediction algorithm with regret bound

    \mathbb{E}\Big[\sum_{t=1}^{n} \langle W_t, Y_t \rangle - \sum_{t=1}^{n} \langle W, Y_t \rangle\Big] \le O\big((\|W\|_{\Sigma} + 1)\sqrt{n \log d \cdot \log((\|W\|_{\Sigma} + 1)n)}\big) \qquad \forall W \succeq 0.    (8)
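For concreteness, a minimal sketch of the fixed-r matrix exponentiated-gradient iterate that such a sub-algorithm could maintain. This is a simplification on our part: we scale the matrix exponential to trace r rather than performing the exact projection onto W_r.

```python
import numpy as np
from scipy.linalg import expm

def mmw_iterate(loss_sum, eta, r):
    # loss_sum: sum of symmetric loss matrices Y_s seen so far; eta: step size.
    E = expm(-eta * loss_sum)      # matrix exponential keeps the iterate PSD
    return r * E / np.trace(E)     # normalize to trace r
```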
A remark on efficiency. All of our algorithms providing bounds of the form (6) instantiate O(n) experts within MULTISCALEFTPL because, in general, the worst-case w for achieving (6) can have norm as large as e^n. If one has an a priori bound, say B, on the range at which each f_t attains its minimum, then the number of experts can be reduced to O(log(B)).
2.3 Supervised learning
We now consider the online supervised learning problem, with the goal being to compete with a sequence of hypothesis classes (F_k)_{k ∈ [N]} simultaneously. Working in this setting makes clear a key feature of the meta-algorithm approach: we can efficiently obtain online oracle inequalities for arbitrary nonlinear function classes, so long as we have an efficient algorithm for each F_k. We obtain a supervised learning meta-algorithm by simply feeding the observed losses ℓ(·, y_t) (which may even be non-convex) to the meta-algorithm MULTISCALEFTPL in the same fashion as MULTISCALEOCO. The resulting strategy, which is described in detail in Appendix A.3 for completeness, is called MULTISCALELEARNING. We make the following assumptions, analogous to Assumption 1, which lead to the performance guarantee for MULTISCALELEARNING given in Theorem 7 below.

Assumption 2. The sub-algorithms satisfy the following conditions:
• For each i ∈ [N], the iterates (ŷ_t^i)_{t ≤ n} produced by sub-algorithm ALG_i satisfy |ŷ_t^i| ≤ R_i.
• For each i ∈ [N], the function ℓ(·, y_t) is L_i-Lipschitz on [−R_i, R_i].
• For each sub-algorithm ALG_i, the iterates (ŷ_t^i)_{t ≤ n} enjoy a regret bound E[∑_{t=1}^n ℓ(ŷ_t^i, y_t) − inf_{f ∈ F_i} ∑_{t=1}^n ℓ(f(x_t), y_t)] ≤ Reg_n(i), where Reg_n(i) may be data- or algorithm-dependent.
Theorem 7. Suppose that the inputs to Algorithm 5 satisfy Assumption 2. Then the iterates (ŷ_t)_{t ≤ n} produced by Algorithm 5 enjoy the regret bound

    \mathbb{E}\Big[\sum_{t=1}^{n} \ell(\hat{y}_t, y_t) - \inf_{f \in F_i} \sum_{t=1}^{n} \ell(f(x_t), y_t)\Big] \le \mathbb{E}[\mathrm{Reg}_n(i)] + O\big(R_i L_i \sqrt{n \log(R_i L_i n / \pi_i)}\big) \qquad \forall i \in [N].    (9)
Online penalized risk minimization. In the statistical learning setting, oracle inequalities for an arbitrary sequence of hypothesis classes F_1, ..., F_K are readily available. Such inequalities are typically stated in terms of complexity parameters for the classes (F_k), such as VC dimension or Rademacher complexity. For the online learning setting, it is well-known that sequential Rademacher complexity Rad_n(F) provides a sequential counterpart to these complexity measures [34], in that it generically characterizes the minimax optimal regret for a given class. We will obtain an oracle inequality in terms of this parameter.

Assumption 3. The sequence of hypothesis classes F_1, ..., F_K is such that:
1. There is an efficient algorithm ALG_k producing iterates (ŷ_t^k)_{t ≤ n} satisfying ∑_{t=1}^n ℓ(ŷ_t^k, y_t) − inf_{f ∈ F_k} ∑_{t=1}^n ℓ(f(x_t), y_t) ≤ C · Rad_n(F_k).
2. Each F_k has output range [−R_k, R_k], where R_k ≥ 1 without loss of generality.
3. Rad_n(F_k) = Ω(R_k √n) — this is obtained by all non-trivial classes.
Theorem 8 (Online penalized risk minimization). Under Assumption 3, there is an efficient (in K) algorithm that achieves the following regret bound for any L-Lipschitz loss:

    \mathbb{E}\Big[\sum_{t=1}^{n} \ell(\hat{y}_t, y_t) - \inf_{f \in F_k} \sum_{t=1}^{n} \ell(f(x_t), y_t)\Big] \le O\Big(L \cdot \mathrm{Rad}_n(F_k) \cdot \sqrt{\log(kn)}\Big) \qquad \forall k.    (10)

As in the previous section, one can of course derive tighter regret bounds and more efficient (e.g. sublinear in K) algorithms if F_1, F_2, ... are nested.
Application: Multiple Kernel Learning.
Theorem 9. Let H_1, ..., H_N be reproducing kernel Hilbert spaces, where each H_k has a kernel K_k such that sup_{x ∈ X} √(K_k(x, x)) ≤ B_k. Then there is an efficient learning algorithm that guarantees

    \mathbb{E}\Big[\sum_{t=1}^{n} \ell(\hat{y}_t, y_t) - \sum_{t=1}^{n} \ell(f(x_t), y_t)\Big] \le O\Big(L B_k (\|f\|_{H_k} + 1)\sqrt{n \log(L B_k n (\|f\|_{H_k} + 1))}\Big) \qquad \forall k,\ \forall f \in H_k,

for any L-Lipschitz loss, whenever an efficient algorithm is available for the unit ball in each H_k.
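As one way to instantiate the per-ball sub-algorithms, here is a minimal sketch of functional online gradient descent in the ball {f : ‖f‖_H ≤ R} of a single RKHS, with a rescaling projection. The function and variable names are ours, and the quadratic-time norm computation is kept naive for clarity.

```python
import numpy as np

def online_kernel_ogd(xs, ys, kernel, R, eta, loss_grad):
    # xs, ys: streams of inputs and outcomes; kernel(u, v): PSD kernel;
    # loss_grad(yhat, y): derivative of the loss in its first argument.
    alpha, seen, preds = [], [], []
    for x, y in zip(xs, ys):
        f_x = sum(a * kernel(xp, x) for a, xp in zip(alpha, seen))
        preds.append(f_x)
        alpha.append(-eta * loss_grad(f_x, y))   # functional gradient step
        seen.append(x)
        # project back into the RKHS ball by rescaling the coefficients
        K = np.array([[kernel(u, v) for v in seen] for u in seen])
        a = np.array(alpha)
        norm = np.sqrt(max(float(a @ K @ a), 0.0))
        if norm > R:
            alpha = list(a * (R / norm))
    return preds
```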
3 Discussion and Further Directions
Related work There are two directions in parameter-free online learning that have been explored
extensively. The first explores bounds of the form (3); namely, the Hilbert space version of the more
general setting explored in Section 2.2. Beginning with [27], which obtained a slightly looser rate
than (3), research has focused on obtaining tighter dependence on ‖w‖_2 and log(n) in this type
of bound [26; 28; 31; 32]; all of these algorithms run in linear time per update step. Recent work
[10; 11] has extended these results to the case where the Lipschitz constant is not known in advance.
These works give lower bounds for general norms, but only give efficient algorithms for Hilbert
spaces. Extending Algorithm 4 to reach this Pareto frontier of regret in the unknown Lipschitz setting
as in [11] may be an interesting direction for future research.
The second direction concerns so-called "quantile bounds" [9; 23; 24; 32] for the experts setting, where the learner's decision set W is the simplex Δ_d and losses are bounded in ℓ_∞. The multi-scale machinery developed in this paper is not needed to obtain bounds for this setting because the losses are
uniformly bounded across all model classes. Indeed, [14] recovered a basic form of quantile bound
using the vanilla multiplicative weights strategy as a meta-algorithm. It is not known whether the
more sophisticated data-dependent quantile bounds given in [23; 24] can be recovered similarly.
Losses with curvature. The O(√n)-type regret bounds for the meta-algorithm derived in this paper are appropriate when the sub-algorithms themselves incur O(√n) regret bounds. However, assuming certain curvature properties (such as strong convexity, exp-concavity, stochastic mixability, etc. [17; 39]) of the loss functions, it is possible to construct sub-algorithms that admit significantly more favorable regret bounds (O(log n) or even O(1)). These are also referred to as "fast rates" in online learning. A natural direction for further study is to design a meta-algorithm that admits logarithmic or constant regret to each sub-algorithm, assuming that the loss functions of interest satisfy similar curvature properties, with the regret to each individual sub-algorithm adapted to the curvature parameters for that sub-algorithm. Perhaps surprisingly, for the special case of the logistic
loss, improper prediction and aggregation strategies similar to those proposed in this paper offer a
way to circumvent known proper learning lower bounds [18]. This approach will be explored in
detail in a forthcoming companion paper.
Computational efficiency. We suspect that a running time of O(n) may be unavoidable for inequalities like (6) through our approach, since we essentially do not make use of the relationship between sub-algorithms beyond using the nested class structure. In this sense, the high level of generality we work in is both a blessing and a curse. Whether the runtime of MULTISCALEFTPL can be brought down to match O(n) is an open question. This question boils down to whether or not the min-max optimization the algorithm must solve can be done in 1) linear time in the number of experts and 2) strongly polynomial time in the scales c_i.
Acknowledgements
We thank Francesco Orabona and Dávid Pál for inspiring initial discussions. Part of this work was
done while DF was an intern at Google Research and while DF and KS were visiting the Simons
Institute for the Theory of Computing. DF is supported by the NDSEG fellowship.
References
[1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[2] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Inventiones Mathematicae, 115(1):463–482, 1994.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2003. ISSN 1532-4435.
[4] Peter L. Bartlett, Stéphane Boucheron, and Gábor Lugosi. Model selection and error estimation. Machine Learning, 48(1–3):85–113, 2002.
[5] Shai Ben-David, Dávid Pál, and Shai Shalev-Shwartz. Agnostic online learning. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[6] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[7] Sébastien Bubeck, Nikhil Devanur, Zhiyi Huang, and Rad Niazadeh. Online auctions and multi-scale online learning. Accepted to the 18th ACM Conference on Economics and Computation (EC 17), 2017.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[9] Kamalika Chaudhuri, Yoav Freund, and Daniel J Hsu. A parameter-free hedging algorithm. In
Advances in neural information processing systems, pages 297?305, 2009.
[10] Ashok Cutkosky and Kwabena A Boahen. Online convex optimization with unconstrained
domains and losses. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett,
editors, Advances in Neural Information Processing Systems 29, pages 748?756. 2016.
[11] Ashok Cutkosky and Kwabena A. Boahen. Online learning without prior information. The 30th
Annual Conference on Learning Theory, 2017.
[12] Steven De Rooij, Tim Van Erven, Peter D Gr?unwald, and Wouter M Koolen. Follow the leader
if you can, hedge if you must. Journal of Machine Learning Research, 15(1):1281?1316, 2014.
[13] Luc Devroye, L?azl?o Gy?orfi, and G?abor Lugosi. A Probabilistic Theory of Pattern Recognition.
Springer, 1996.
9
[14] Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan. Adaptive online learning. In Advances in Neural Information Processing Systems, pages 3375–3383, 2015.
[15] S. Gerchinovitz and J. Yu. Adaptive and optimal online linear regression on ℓ1-balls. Theoretical Computer Science, 2013.
[16] Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
[17] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[18] Elad Hazan, Tomer Koren, and Kfir Y. Levy. Logistic regression: Tight bounds for stochastic and online optimization. In Proceedings of the 27th Conference on Learning Theory, pages 197–209, 2014.
[19] Elad Hazan, Satyen Kale, and Shai Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. SIAM Journal on Computing, 46(2):744–773, 2017.
[20] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems 21, pages 793–800. MIT Press, 2009.
[21] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Regularization techniques for learning with matrices. Journal of Machine Learning Research, 13:1865–1890, 2012.
[22] Vladimir Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
[23] Wouter M. Koolen and Tim van Erven. Second-order quantile methods for experts and combinatorial games. In Proceedings of the 28th Conference on Learning Theory, pages 1155–1175, 2015.
[24] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: AdaNormalHedge. In Proceedings of the 28th Conference on Learning Theory, pages 1286–1304, 2015.
[25] Pascal Massart. Concentration inequalities and model selection. Lecture Notes in Mathematics, 1896, 2007.
[26] Brendan McMahan and Jacob Abernethy. Minimax optimal algorithms for unconstrained linear optimization. In Advances in Neural Information Processing Systems, pages 2724–2732, 2013.
[27] Brendan McMahan and Matthew Streeter. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems, pages 2402–2410, 2012.
[28] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proceedings of the 27th Conference on Learning Theory, pages 1020–1039, 2014.
[29] Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.
[30] Jiazhong Nie, Wojciech Kotłowski, and Manfred K. Warmuth. Online PCA with optimal regrets. In International Conference on Algorithmic Learning Theory, pages 98–112. Springer, 2013.
[31] Francesco Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems, pages 1116–1124, 2014.
[32] Francesco Orabona and Dávid Pál. From coin betting to parameter-free online learning. arXiv preprint arXiv:1602.04128, 2016.
[33] Gilles Pisier. Martingales in Banach spaces (in connection with type and cotype). Course IHP, Feb. 2–8, 2011.
[34] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In Advances in Neural Information Processing Systems 23, pages 1984–1992, 2010.
[35] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. In Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.
[36] James Renegar. A polynomial-time algorithm, based on Newton's method, for linear programming. Mathematical Programming, 40(1):59–93, 1988.
[37] John Shawe-Taylor, Peter L. Bartlett, Robert C. Williamson, and Martin Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998.
[38] Nati Srebro, Karthik Sridharan, and Ambuj Tewari. On the universality of online mirror descent. In Advances in Neural Information Processing Systems, pages 2645–2653, 2011.
[39] Tim van Erven, Peter D. Grünwald, Nishant A. Mehta, Mark D. Reid, and Robert C. Williamson. Fast rates in statistical and online learning. Journal of Machine Learning Research, 16:1793–1861, 2015.
[40] Vladimir Vapnik. Estimation of Dependences Based on Empirical Data, volume 40. Springer-Verlag New York, 1982.
[41] Vladimir Vapnik and Alexey Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
6,836 | 7,184 | Bregman Divergence for Stochastic Variance
Reduction: Saddle-Point and Adversarial Prediction
Zhan Shi
Xinhua Zhang
University of Illinois at Chicago
Chicago, Illinois 60661
{zshi22,zhangx}@uic.edu
Yaoliang Yu
University of Waterloo
Waterloo, ON, N2L3G1
[email protected]
Abstract
Adversarial machines, where a learner competes against an adversary, have regained much recent
interest in machine learning. They are naturally in the form of saddle-point optimization, often with
separable structure but sometimes also with unmanageably large dimension. In this work we show
that adversarial prediction under multivariate losses can be solved much faster than it used to be. We
first reduce the problem size exponentially by using appropriate sufficient statistics, and then we
adapt the new stochastic variance-reduced algorithm of Balamurugan & Bach (2016) to allow any
Bregman divergence. We prove that the same linear rate of convergence is retained and we show
that for adversarial prediction using KL-divergence we can further achieve a speedup of #example
times compared with the Euclidean alternative. We verify the theoretical findings through extensive
experiments on two example applications: adversarial prediction and LPboosting.
1
Introduction
Many algorithmic advances have been achieved in machine learning by finely leveraging the separability in the model. For example, stochastic gradient descent (SGD) algorithms typically exploit the
fact that the objective is an expectation of a random function, with each component corresponding
to a training example. A "dual" approach partitions the problem into blocks of coordinates and
processes them in a stochastic fashion [1]. Recently, by exploiting the finite-sum structure of the
model, variance-reduction based stochastic methods have surpassed the well-known sublinear lower
bound of SGD. Examples include SVRG [2], SAGA [3], SAG [4], Finito [5], MISO [6], and SDCA
[7, 8], just to name a few. Specialized algorithms have also been proposed for accommodating
proximal terms [9], and for further acceleration through the condition number [10-13].

However, not all empirical risks are separable in their plain form, and in many cases dualization is
necessary for achieving separability. This leads to a composite saddle-point problem with convex-concave (saddle) functions K and M:

    (x*, y*) = argmin_x max_y K(x, y) + M(x, y),   where   K(x, y) = (1/n) ∑_{k=1}^n ψ_k(x, y).   (1)

Most commonly used supervised losses for linear models can be written as g*(Xw), where g* is
the Fenchel dual of a convex function g, X is the design matrix, and w is the model vector. So
the regularized risk minimization can be naturally written as min_w max_α α⊤Xw + Ω(w) − g(α),
where Ω is a regularizer. This fits into our framework (1) with a bilinear function K and a decoupled
function M. Optimization for this specific form of saddle-point problems has been extensively
studied. For example, [14] and [15] performed batch updates on w and stochastic updates on α,
while [16] and [17] performed doubly stochastic updates on both w and α, achieving O(1/ε) and
O(log(1/ε)) rates respectively. The latter two also studied the more general form (1). Our interest in this
paper is double stochasticity, aiming to maximally harness the power of separability and stochasticity.
Adversarial machines, where the learner competes against an adversary, have re-gained much recent
interest in machine learning [18?20]. On one hand they ?t naturally into the saddle-point optimization
framework (1) but on the other hand they are known to be notoriously challenging to solve. The central
message of this work is that certain adversarial machines can be solved signi?cantly faster than they
used to be. Key to our development is a new extension of the stochastic variance-reduced algorithm
in [17] such that it is compatible with any Bregman divergence, hence opening the possibility to
largely reduce the quadratic condition number in [17] by better adapting to the underlying geometry
using non-Euclidean norms and Bregman divergences.
Improving condition numbers by Bregman divergence has long been studied in (stochastic, proximal)
gradient descent [21, 22]. The best known algorithm is arguably stochastic mirror descent [23], which
was extended to saddle-points by [16] and to ADMM by [24]. However, they can only achieve the
sublinear rate O(1/) (for an -accurate solution). On the other hand, many recent stochastic variancereduced methods [2?6, 9, 17] that achieve the much faster linear rate O(log 1/) rely inherently on
the Euclidean structure, and their extension to Bregman divergence, although conceptually clear,
remains challenging in terms of the analysis. For example, the analysis of [17] relied on the resolvent
of monotone operators [25] and is hence restricted to the Euclidean norm. In ?2 we extend the notion
of Bregman divergence to saddle functions and we prove a new Pythagorean theorem that may be of
independent interest for analyzing ?rst order algorithms. In ?4 we introduce a fundamentally different
proof technique (details relegated to Appendix C) that overcomes several challenges arising from a
general Bregman divergence (e.g. asymmetry and unbounded gradient on bounded domain), and we
recover similar quantitative linear rate of convergence as [17] but with the ?exibility of using suitable
Bregman divergences to reduce the condition number.
The new stochastic variance-reduced algorithm Breg-SVRG is then applied to the adversarial prediction framework (with multivariate losses such as F-score) [19, 20]. Here we make three novel
contributions: (a) We provide a signi?cant reformulation of the adversarial prediction problem that
reduces the dimension of the optimization variable from 2n to n2 (where n is the number of samples),
hence making it amenable to stochastic variance-reduced optimization (?3). (b) We develop a new
ef?cient algorithm for computing the proximal update with a separable saddle KL-divergence (?5).
(c) We verify that Breg-SVRG accelerates its Euclidean alternative by a factor of n in both theory
and practice (?6), hence con?rming again the uttermost importance of adapting to the underlying
problem geometry. To our best knowledge, this is the ?rst time stochastic variance-reduced methods
have been shown with great promise in optimizing adversarial machines.
Finally, we mention that we expect our algorithm Breg-SVRG to be useful for solving many other
saddle-point problems, and we provide a second example (LPboosting) in experiments (?6).
2
Bregman Divergence and Saddle Functions
In this section we set up some notations, recall some background materials, and extend Bregman
divergences to saddle functions, a key notion in our later analysis.
Bregman divergence. For any convex and differentiable function φ over some closed convex set
C ⊆ R^d, its induced Bregman divergence is defined as:

    ∀x ∈ int(C), ∀x′ ∈ C:   Δ_φ(x′, x) := φ(x′) − φ(x) − ⟨∇φ(x), x′ − x⟩,   (2)

where ∇φ is the gradient and ⟨·, ·⟩ is the standard inner product in R^d. Clearly, Δ_φ(x′, x) ≥ 0 since
φ is convex. We mention two familiar examples of Bregman divergence.

- Squared Euclidean distance: Δ_φ(x′, x) = ½‖x′ − x‖_2², with φ(x) = ½‖x‖_2², where ‖·‖_2 is the ℓ2 norm.
- (Unnormalized) KL-divergence: Δ_φ(x′, x) = ∑_i [x′_i log(x′_i / x_i) − x′_i + x_i], with φ(x) = ∑_i x_i log x_i.
Strong convexity. Following [26] we call a function f φ-convex if f − φ is convex, i.e., for all x, x′:

    f(x′) ≥ f(x) + ⟨∇f(x), x′ − x⟩ + Δ_φ(x′, x).   (3)

Smoothness. A function f is L-smooth wrt a norm ‖·‖ if its gradient ∇f is L-Lipschitz continuous,
i.e., for all x and x′, ‖∇f(x′) − ∇f(x)‖_* ≤ L‖x′ − x‖, where ‖·‖_* is the dual norm of ‖·‖. The
change of a smooth function, in terms of its induced Bregman divergence, can be upper bounded by
the change of its input and lower bounded by the change of its slope, cf. Lemma 2 in Appendix A.
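To ground definitions (2)-(3) and the distance-enforcing property used later, the sketch below (numpy assumed; points, dimensions, and seed are arbitrary choices of ours) evaluates the two example divergences and numerically checks that on the simplex the KL divergence dominates ½‖x′ − x‖_1², i.e., the negative entropy is distance enforcing wrt ℓ1 (Pinsker's inequality).

```python
import numpy as np

def bregman_sq_euclidean(x2, x1):
    # phi(x) = 0.5*||x||_2^2  =>  Delta_phi(x2, x1) = 0.5*||x2 - x1||_2^2
    return 0.5 * np.sum((x2 - x1) ** 2)

def bregman_kl(x2, x1):
    # phi(x) = sum_i x_i log x_i (unnormalized negative entropy)
    # Delta_phi(x2, x1) = sum_i [x2_i log(x2_i/x1_i) - x2_i + x1_i]
    return np.sum(x2 * np.log(x2 / x1) - x2 + x1)

rng = np.random.default_rng(0)
for _ in range(1000):
    x1 = rng.dirichlet(np.ones(10))        # interior points of the simplex
    x2 = rng.dirichlet(np.ones(10))
    lhs = bregman_kl(x2, x1)
    rhs = 0.5 * np.sum(np.abs(x2 - x1)) ** 2
    assert lhs >= rhs - 1e-12              # Pinsker: KL >= 0.5*||.||_1^2
print("KL divergence is distance enforcing wrt the l1 norm -- checked.")
```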
Saddle functions. Recall that a function φ(x, y) over C_z = C_x × C_y is called a saddle function if it
is convex in x for any y ∈ C_y, and concave in y for any x ∈ C_x. Given a saddle function φ, we call
(x*, y*) its saddle point if

    φ(x*, y) ≤ φ(x*, y*) ≤ φ(x, y*),   ∀x ∈ C_x, ∀y ∈ C_y,   (4)

or equivalently (x*, y*) ∈ argmin_{x∈C_x} max_{y∈C_y} φ(x, y). Assuming φ is differentiable, we denote

    G_φ(x, y) := [∇_x φ(x, y); −∇_y φ(x, y)].   (5)

Note the negation sign due to the concavity in y. We can quantify the notion of "saddle": a function
f(x, y) is called φ-saddle iff f − φ is a saddle function, or equivalently, Δ_f(z′, z) ≥ Δ_φ(z′, z) (see
below). Note that any saddle function φ is 0-saddle and φ-saddle.
Bregman divergence for saddle functions. We now define the Bregman divergence induced by a
saddle function φ: for z = (x, y) and z′ = (x′, y′) in C_z,

    Δ_φ(z′, z) := Δ_{φ_y}(x′, x) + Δ_{−φ_x}(y′, y) = φ(x′, y) − φ(x, y′) − ⟨G_φ(z), z′ − z⟩,   (6)

where φ_y(x) = φ(x, y) is a convex function of x for any fixed y, and similarly φ_x(y) = φ(x, y) is a
concave (hence the negation) function of y for any fixed x. The similarity between (6) and the usual
Bregman divergence Δ_φ in (2) is apparent. However, φ is never evaluated at z′ but at z (for G) and the
cross pairs (x′, y) and (x, y′). Key to our subsequent analysis is the following lemma that extends a
result of [27] to saddle functions (proof in Appendix A).

Lemma 1. Let f and g be φ-saddle and ψ-saddle respectively, with one of them being differentiable. Then, for any z = (x, y) and any saddle point (if it exists) z* := (x*, y*) ∈ argmin_x max_y {f(z) + g(z)}, we have f(x, y*) + g(x, y*) ≥ f(x*, y) + g(x*, y) + Δ_{φ+ψ}(z, z*).
Geometry of norms. In the sequel, we will design two convex functions φ_x(x) and φ_y(y) such that
their induced Bregman divergences are "distance enforcing" (a.k.a. 1-strongly convex), that is, wrt
two norms ‖·‖_x and ‖·‖_y that we also design, the following inequality holds:

    Δ_x(x, x′) := Δ_{φ_x}(x, x′) ≥ ½‖x − x′‖_x²,   Δ_y(y, y′) := Δ_{φ_y}(y, y′) ≥ ½‖y − y′‖_y².   (7)

Further, for z = (x, y), we define

    Δ_z(z, z′) := Δ_{φ_x − φ_y}(z, z′) ≥ ½‖z − z′‖_z²,   where   ‖z‖_z² := ‖x‖_x² + ‖y‖_y².   (8)

When it is clear from the context, we simply omit the subscripts and write Δ, ‖·‖, and ‖·‖_*.
3
Adversarial Prediction under Multivariate Loss
A number of saddle-point based machine learning problems have been listed in [17]. Here we
give another example (adversarial prediction under multivariate loss) that is naturally formulated
as a saddle-point problem but also requires a careful adaptation to the underlying geometry, a
challenge that was not addressed in [17] since their algorithm inherently relies on the Euclidean
norm. We remark that adaptation to the underlying geometry has been studied in the (stochastic)
mirror descent framework [23], with significant improvements on condition numbers or gradient
norm bounds. Surprisingly, no analogous efforts have been attempted in the stochastic variance
reduction framework; this is a gap we intend to fill in this work.
The adversarial prediction framework [19, 20, 28], arising naturally as a saddle-point problem, is a
convex alternative to the generative adversarial net [18]. Given a training sample X = [x_1, . . . , x_n]
and ŷ = [ŷ_1, . . . , ŷ_n]⊤ ∈ {0, 1}^n, adversarial prediction optimizes the following saddle function,
an expectation of some multivariate loss ℓ(y, z) (e.g. F-score) over the labels y, z ∈ {0, 1}^n of all
data points:

    min_{p∈Δ_{2^n}} max_{q∈Δ_{2^n}} E_{y∼p, z∼q} ℓ(y, z),   s.t.   E_{z∼q}[(1/n) Xz] = (1/n) Xŷ.   (9)

Here the proponent tries to find a distribution p(·) over the labelings of the entire training set in
order to minimize the loss (Δ_{2^n} is the 2^n-dimensional probability simplex). An opponent in contrast
tries to maximize the expected loss by finding another distribution q(·), but his strategy is subject to
the constraint that the feature expectation matches that of the empirical distribution. Introducing a
Lagrangian variable θ to remove the feature expectation constraint, and specializing the problem to
the F-score, where ℓ(y, z) = 2y⊤z / (1⊤y + 1⊤z) and ℓ(0, 0) := 1, the partial dual problem can be written as

    max_θ −(λ/2)‖θ‖_2² + (1/n) θ⊤Xŷ + min_{p∈Δ_{2^n}} max_{q∈Δ_{2^n}} E_{y∼p, z∼q} [ 2y⊤z / (1⊤y + 1⊤z) − (1/n) θ⊤Xy ],   (10)

where we use y⊤z to denote the standard inner product, and we followed [19] to add an ℓ2² regularizer
on θ penalizing the dual variables on the constraints over the training data. It appears that solving
(10) can be quite challenging, because the variables p and q in the inner minimax problem have 2^n
entries! A constraint sampling algorithm was adopted in [19] to address this challenge, although
no formal guarantee was established. Note that we can maximize the outer unconstrained variable
θ (whose dimension equals the number of features) relatively easily using, for instance, gradient
ascent, provided that we can solve the inner minimax problem quickly; this is a significant challenge to
which we turn our attention below.
Surprisingly, we show here that the inner minimax problem in (10) can be significantly simplified.
The key observation is that the expectation in the objective depends only on a few sufficient statistics
of p and q. Indeed, by interpreting p and q as probability distributions over {0, 1}^n we have:

    E[ 2y⊤z / (1⊤y + 1⊤z) ] = p({0}) q({0}) + ∑_{i=1}^n ∑_{j=1}^n E[ (2y⊤z / (1⊤y + 1⊤z)) [[1⊤y = i]] [[1⊤z = j]] ]   (11)

    = p({0}) q({0}) + ∑_{i=1}^n ∑_{j=1}^n (2ij / (i + j)) α_i⊤ β_j,   where   α_i := (1/i) E(y [[1⊤y = i]]),   β_j := (1/j) E(z [[1⊤z = j]]),   (12)
and [[·]] = 1 if its argument is true, and 0 otherwise. Crucially, the variables α_i and β_j are sufficient for
re-expressing (10), since

    1⊤α_i = (1/i) E(1⊤y [[1⊤y = i]]) = E[[1⊤y = i]] = p({1⊤y = i}),   (13)

    ∑_i i α_i = ∑_i E(y [[1⊤y = i]]) = E y,   (14)

and similar equalities also hold for β_j. In detail, the inner minimax problem of (10) simplifies to:
    min_{α∈S} max_{β∈S} (1/n²) ∑_{i=1}^n ∑_{j=1}^n f_ij(α_i, β_j) − θ⊤X ∑_i i α_i + Ω(α) − Ω(β),
    where   f_ij(α_i, β_j) := (2ijn²/(i+j)) α_i⊤β_j + n² α_i⊤11⊤β_j − n 1⊤α_i − n 1⊤β_j,   (15)

    S = {α ≥ 0 : 1⊤α ≤ 1, ∀i, i α_i ≤ ‖α_i‖_1 · 1},   Ω(α) = μ ∑_{i,j} α_ij log(α_ij).   (16)
Importantly, α = [α_1; . . . ; α_n] (resp. β) has n² entries, which is significantly smaller than the 2^n
entries of p (resp. q) in (10). For later purposes we have also incorporated an entropy regularizer for
α and β respectively in (15).

To justify the constraint set S, note from (12) and (13) that for any distribution p of y:

    since α ≥ 0 and y ∈ {0, 1}^n,   i α_i = E(y [[1⊤y = i]]) ≤ E([[1⊤y = i]]) · 1 = ‖α_i‖_1 · 1.   (17)
Conversely, for any α ∈ S, we can construct a distribution p such that i α_ij = E(y_j [[1⊤y = i]]) =
p({1⊤y = i, y_j = 1}) in the following algorithmic way. Fix i and for each j define Y_j = {y ∈
{0, 1}^n : 1⊤y = i, y_j = 1}. Let U = {1, . . . , n}. Find an index j in U that minimizes α_ij and set
p({y}) = i α_ij / |Y_j| for each y ∈ Y_j. Perform the following updates:

    U ← U \ {j},   ∀k ≠ j:   Y_k ← Y_k \ Y_j,   α_ik ← α_ik − α_ij |Y_k ∩ Y_j| / |Y_j|.   (18)
Continue this procedure until U is empty. Due to the way we choose j, α remains nonnegative, and
by construction α_ij = p({1⊤y = i, y_j = 1}) once we remove j from U.
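The sufficient-statistic reduction (11)-(12) is easy to verify numerically. The sketch below is a hypothetical check we add for illustration (numpy assumed): it enumerates all 2^n labelings for a tiny n, draws random p and q, and compares the brute-force expectation of the F-score payoff with the expression in terms of α_i and β_j.

```python
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(0)
labelings = np.array(list(itertools.product([0, 1], repeat=n)))  # all 2^n labelings
p = rng.dirichlet(np.ones(len(labelings)))
q = rng.dirichlet(np.ones(len(labelings)))

def payoff(y, z):
    s = y.sum() + z.sum()
    return 1.0 if s == 0 else 2.0 * (y @ z) / s    # F-score payoff, l(0,0) := 1

brute = sum(p[a] * q[b] * payoff(labelings[a], labelings[b])
            for a in range(len(labelings)) for b in range(len(labelings)))

# Sufficient statistics: alpha_i = (1/i) E[y [[1'y = i]]], beta_j likewise.
counts = labelings.sum(axis=1)
alpha = {i: (p[counts == i][:, None] * labelings[counts == i]).sum(axis=0) / i
         for i in range(1, n + 1)}
beta = {j: (q[counts == j][:, None] * labelings[counts == j]).sum(axis=0) / j
        for j in range(1, n + 1)}
p0, q0 = p[counts == 0].sum(), q[counts == 0].sum()
reduced = p0 * q0 + sum(2 * i * j / (i + j) * (alpha[i] @ beta[j])
                        for i in range(1, n + 1) for j in range(1, n + 1))
print(np.isclose(brute, reduced))                  # True: (12) matches brute force
```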
The objective function in (15) fits naturally into the framework of (1), with Ω(α) − Ω(β) and the
constraints corresponding to M, and the remaining terms to K. The entropy function Ω is convex wrt the
KL-divergence, which is in turn distance enforcing wrt the ℓ1 norm over the probability simplex [23].
In the next section we propose the SVRG algorithm with Bregman divergences (Breg-SVRG) that (a)
provably optimizes strongly convex saddle functions at a linear convergence rate, and (b) adapts
to the underlying geometry by choosing an appropriate Bregman divergence. Then, in §5 we apply
Breg-SVRG to (15) and achieve a factor-of-n speedup over a straightforward instantiation of [17].
4
Breg-SVRG for Saddle-Point
In this section we propose an efficient algorithm for solving the general saddle-point problem in (1)
and prove its linear rate of convergence. Our main assumption is:

Assumption 1. There exist two norms ‖·‖_x and ‖·‖_y such that each ψ_k is a saddle function and
L-smooth; M is (φ_x, φ_y)-saddle; and φ_x and φ_y are distance enforcing (cf. (7)).

Algorithm 1: Breg-SVRG for Saddle-Point
1: Initialize z_0 randomly. Set z̄ = z_0.
2: for s = 1, 2, . . . do                              (epoch index)
3:     μ_s := ∇K(z̄),   z_0 ← z_0^s := z_m
4:     for t = 1, . . . , m do                         (iteration index)
5:         Randomly pick ξ ∈ {1, . . . , n}.
6:         Compute v_t using (20).
7:         Update z_t using (21).
8:     z̄ ← z̄^s := ∑_{t=1}^m (1+η)^t z_t / ∑_{t=1}^m (1+η)^t.
Note that w.l.o.g. we have scaled the norms so that
the usual strong convexity parameter of M is 1.
Recall we defined ‖z‖_z and Δ_z in (8). For saddle-point optimization, it is common to define a signed
gradient G(z) := [∇_x K(z); −∇_y K(z)] (since K is concave in y). Recall J = K + M, and (x*, y*)
is a saddle-point of J. Using Assumption 1, we measure the gap of an iterate z_t = (x_t, y_t) as follows:

    ε_t = ε(z_t) = J(x_t, y*) − J(x*, y_t) ≥ Δ(z_t, z*) ≥ ½‖z_t − z*‖² ≥ 0.   (19)
Inspired by [2, 9, 17], we propose in Algorithm 1 a new stochastic variance-reduced algorithm for
solving the saddle-point problem (1) using Bregman divergences. The algorithm proceeds in epochs.
In each epoch, we first compute the following stochastic estimate of the signed gradient G(z_t) by
drawing a random component from K:

    v_t = [v_x(z_t); −v_y(z_t)],   where   v_x(z_t) := ∇_x ψ_ξ(z_t) − ∇_x ψ_ξ(z̄) + ∇_x K(z̄),
                                           v_y(z_t) := ∇_y ψ_ξ(z_t) − ∇_y ψ_ξ(z̄) + ∇_y K(z̄).   (20)
Here z̄ is the pivot chosen after completing the previous epoch. We make two important observations:
(1) by construction the stochastic gradient v_t is unbiased: E_ξ[v_t] = G(z_t); (2) the expensive gradient
evaluation ∇K(z̄) need only be computed once in each epoch since z̄ is held unchanged. If z̄ ≈ z*,
then the variance of v_t is largely reduced, hence faster convergence may be possible.
Next, Algorithm 1 performs the following joint proximal update:

    (x_{t+1}, y_{t+1}) = argmin_x max_y  η⟨v_x(z_t), x⟩ + η⟨v_y(z_t), y⟩ + ηM(x, y) + Δ(x, x_t) − Δ(y, y_t),   (21)

where we have the flexibility of choosing a suitable Bregman divergence to better adapt to the
underlying geometry. When Δ(x, x_t) = ½‖x − x_t‖_2², we recover the special case in [17]. However,
to handle the asymmetry in a general Bregman divergence (which does not appear for the Euclidean
distance), we have to choose the pivot z̄ in a significantly different way than [2, 9, 17].
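For intuition, here is a hedged sketch of Breg-SVRG on a toy bilinear game min_{x∈Δ} max_{y∈Δ} x⊤Ay with entropic prox on both blocks (so M = 0 and the prox in (21) is a closed-form softmax step). This is not the constrained prox of §5, the pivot choice is simplified to the last iterate, and the step size is untuned; it is only meant to show the variance-reduced gradient (20) and the mirror update (21) in code. All names are illustrative.

```python
import numpy as np

def mirror_step(u, grad, eta):
    # Entropic prox on the simplex: argmin_v eta*<grad, v> + KL(v, u).
    logits = np.log(u) - eta * grad
    w = np.exp(logits - logits.max())
    return w / w.sum()

def breg_svrg_epoch(A, x, y, x_bar, y_bar, eta, m, rng):
    # One epoch for K(x,y) = x^T A y = (1/n) sum_k psi_k with
    # psi_k(x,y) = n * y_k * (A[:,k] @ x); M = 0 in this sketch.
    n = A.shape[1]
    gx_bar, gy_bar = A @ y_bar, A.T @ x_bar        # full gradient at the pivot
    for _ in range(m):
        k = rng.integers(n)                        # random component, as in (20)
        vx = n * A[:, k] * (y[k] - y_bar[k]) + gx_bar
        vy = gy_bar.copy()
        vy[k] += n * (A[:, k] @ (x - x_bar))
        x = mirror_step(x, vx, eta)                # (21): descent in x
        y = mirror_step(y, -vy, eta)               # (21): ascent in y
    return x, y

def gap(A, x, y):                                  # duality gap of the game
    return (A.T @ x).max() - (A @ y).min()

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
x = y = np.ones(n) / n
print("initial gap:", gap(A, x, y))
for s in range(30):                                # pivot = last iterate (simplified)
    x, y = breg_svrg_epoch(A, x, y, x, y, eta=0.05, m=n, rng=rng)
print("final gap:  ", gap(A, x, y))
```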
We are now ready to present our main convergence guarantee for Breg-SVRG in Algorithm 1.
Theorem 1. Let Assumption 1 hold, and choose a sufficiently small η > 0 such that
m := ⌊ log((1 − ηL)/(18ηL)) / log(1 + η) ⌋ ≥ 1. Then Breg-SVRG enjoys linear convergence in expectation:

    E ε(z̄^s) ≤ (1 + η)^{−ms} [ Δ(z*, z_0) + c(Z + 1) ε(z_0) ],   where   Z = ∑_{t=0}^{m−1} (1 + η)^t,   c = 18η²L² / (1 − ηL).   (22)

For example, we may set η = 1/(45L²) and (1 + η)^m ≈ 64/45, which leads to c = O(1/L²), m = Θ(L²),
and Z = O(L²). Therefore, between epochs, the gap ε(z̄^s) decays (in expectation) by a factor of 45/64,
and each epoch needs to conduct the proximal update (21) m = Θ(L²) times. (We
remind that w.l.o.g. we have scaled the norms so that the usual strong convexity parameter is 1.) In
total, to reduce the gap below some threshold ε, Breg-SVRG needs to call the proximal update (21)
O(L² log(1/ε)) times, plus a similar number of component gradient evaluations.
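The constants in Theorem 1 are straightforward to instantiate. The sketch below, our own illustration, plugs the step size η = 1/(45L²) suggested above into the formulas for m, c, and Z, choosing m so that (1 + η)^m ≈ 64/45 and checking it against the theorem's upper bound.

```python
import math

for L in (5.0, 10.0, 20.0):
    eta = 1.0 / (45 * L ** 2)                          # step size from the text
    m = math.ceil(math.log(64 / 45) / math.log(1 + eta))   # (1+eta)^m >= 64/45
    m_max = math.floor(math.log((1 - eta * L) / (18 * eta * L))
                       / math.log(1 + eta))            # bound from Theorem 1
    c = 18 * eta ** 2 * L ** 2 / (1 - eta * L)
    Z = ((1 + eta) ** m - 1) / eta                     # geometric sum, t = 0..m-1
    assert 1 <= m <= m_max
    print(f"L={L:5.1f}  m/L^2={m / L**2:5.2f}  c*L^2={c * L**2:5.3f}  "
          f"Z/L^2={Z / L**2:5.2f}  per-epoch decay={45 / 64:.3f}")
```

The printed ratios stay essentially constant as L grows, confirming m = Θ(L²), c = O(1/L²), and Z = O(L²).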
Discussions. As mentioned, Algorithm 1 and Theorem 1 extend those in [17], which in turn extend
[2, 9] to saddle-point problems. However, [2, 9, 17] all heavily exploit the Euclidean structure (in
particular symmetry), hence their proofs cannot be applied to an asymmetric Bregman divergence.
Our innovations here include: (a) a new Pythagorean theorem for the newly introduced saddle
Bregman divergence (Lemma 1); (b) a moderate extension of the variance reduction lemma in [9] to
accommodate any norm (Appendix B); (c) a different pivot z̄ adopted in each epoch to handle
asymmetry; (d) a new analysis technique that introduces a crucial auxiliary variable, enabling
us to bound the function gap directly. See our proof in Appendix C for more details. Compared with
classical mirror descent algorithms [16, 23], which can also solve saddle-point problems with Bregman
divergences, our analysis is fundamentally different and we achieve the significantly stronger rate
O(log(1/ε)) than the sublinear O(1/ε) rate of [16], at the expense of a squared instead of linear
dependence on L. A similar tradeoff also appeared in [17].
Variants and acceleration. Our analysis also supports using different η in v_x and v_y. Standard
acceleration methods such as the universal catalyst [10] and non-uniform sampling can be applied directly
(see Appendix E, where L, the largest smoothness constant over all pieces, is replaced by their mean).
5
Application of Breg-SVRG to Adversarial Prediction
The quadratic dependence on L, the smoothness parameter, in Theorem 1 reinforces the need to
choose suitable Bregman divergences. In this section we illustrate how this can be achieved for the
adversarial prediction problem in Section 3. As pointed out in [17], the factorization of K is
important, and we consider three schemes: (a) ψ_k = f_ij; (b) ψ_k = (1/n) ∑_{j=1}^n f_{k,j}; and
(c) ψ_k = (1/n) ∑_{i=1}^n f_{i,k}. W.l.o.g. let us fix the μ in (16) to 1.
Comparison of smoothness constants. Both α and β are n²-dimensional, and the bilinear function
f_ij can be written as α⊤A_ij β, where A_ij ∈ R^{n²×n²} is an n-by-n block matrix, with the (i, j)-th
block being n²((2ij/(i+j)) I + 11⊤) and all other blocks being 0. The linear terms in (15) can be absorbed
into the regularizer Ω without affecting the smoothness parameter.

For scheme (a), the smoothness constant L_2 under the ℓ2 norm depends on the spectral norm of A_ij:
L_2 = max_{i,j} n²(n + 2ij/(i+j)) = Θ(n³). In contrast, the smoothness constant L_1 under the ℓ1 norm depends
on the largest absolute value of the entries of A_ij: L_1 = max_{i,j} n²(1 + 2ij/(i+j)) = Θ(n³); no saving is achieved.

For scheme (b), the bilinear function ψ_k corresponds to (1/n) α⊤ ∑_{j=1}^n A_kj β. Then L_1 = O(n²), while

    L_2² = (1/n²) max_k max_{‖v‖_2=1} ‖∑_{j=1}^n A_kj v‖_2² ≥ n² max_{‖v‖_2=1} ‖11⊤v‖_2² = n⁵.   (23)

Therefore, L_1² saves a factor of n compared with L_2².
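The gap between the two norms here is driven entirely by the all-ones block 11⊤, whose spectral norm is n while its largest entry is 1. A toy check of this factor-of-n gap (numpy assumed; the sizes are arbitrary):

```python
import numpy as np

for n in (10, 100, 1000):
    B = np.ones((n, n))                 # the 11^T block that dominates L_2
    spec = np.linalg.norm(B, 2)         # spectral norm = n (controls L_2)
    entry = np.abs(B).max()             # max |entry| = 1 (controls L_1)
    print(n, spec, entry)               # the ratio grows linearly in n
```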
Comparison of smoothness constants for the overall problem. By strong duality, we may push the
maximization over θ to the innermost level of (10), arriving at an overall problem in α and β only:

    min_{{α_i}⊂S} max_{{β_j}⊂S} (1/n²) ∑_{i=1}^n ∑_{j=1}^n f_ij(α_i, β_j) − (1/(λn²)) ∑_i i c⊤Xα_i
        + (1/(2λn²)) ∑_{i,j} ij α_i⊤X⊤Xα_j + (1/(2λn²)) ‖c‖_2²,   (24)

where c = Xŷ. The quadratic term w.r.t. α can be written as α⊤B_ij α, where B_ij ∈ R^{n²×n²} is an
n-by-n block matrix, with its (i, j)-th block being (ij/(2λ)) X⊤X and all other blocks being 0. And we
assume each ‖x_i‖_2 ≤ 1. The smoothness constant can be bounded separately from A_ij and B_ij; see
(128) in Appendix F.
For scheme (a), the squared smoothness constant L_2² under the ℓ2 norm is upper bounded by the sum of
the squared spectral norms of A_ij and B_ij, so L_2² ≈ max_{i,j} (ij/(2λ))² n² = Θ(n⁶), i.e. L_2 = Θ(n³). In contrast,
the squared smoothness constant L_1² under the ℓ1 norm is at most the sum of the squared maximum absolute
values of the entries of A_ij and B_ij. Hence L_1² ≤ max_{i,j} n⁴(1 + 2ij/(i+j))² + max_{i,j} (ij/(2λ))² = Θ(n⁶),
i.e. L_1 = Θ(n³). So no saving is achieved here.

For scheme (b), ψ_k corresponds to (1/n)(α⊤ ∑_{j=1}^n A_kj β + α⊤ ∑_{j=1}^n B_kj α). Then

    L_1² ≤ (2/n²) max_k [ (max_{‖v‖_1=1} ‖∑_{j=1}^n A_kj v‖_∞)² + (max_{‖v‖_1=1} ‖∑_{j=1}^n B_kj v‖_∞)² ]   (by (128))   (25)
        ≤ (2/n²) max_k max_j ( n²(1 + 2kj/(k+j)) + kj/(2λ) )² = Θ(n⁴),   (26)

and by setting the B part to 0 in (128), we get L_2² ≥ n⁵ similar to (23). Therefore, L_1² saves a factor of n
compared with L_2². Similar results apply to scheme (c) too. We also tried non-uniform sampling, but
it does not change the order in n. It can also be shown that if our scheme randomly samples n entries
from {A_ij, B_ij}, the above L_1 and L_2 cannot be improved by further engineering the factorization.
Computational complexity. We ?nally seek ef?cient algorithms for the proximal update (21) used
by Breg-SVRG. When M (?, ?) = ?(?) ? ?(?) as in (16), we can solve ? and ? separately as:
?ik log(?ik /bik ) ? cik , s.t. 1 ? ? 1, ?i ?k, 0 ? i?ik ? 1 ?i .
(27)
min
?
ik
where bik and cik are constants. In Appendix D we designe an ef?cient ?closed form? algorithm
which ?nds an accurate solution in O(n2 log2 1 ) time, which is also on par with that for computing
the stochastic gradient in schemes (b) and (c). Although scheme (a) reduces the cost of gradient
computation to O(n), its corresponding smoothness parameter L21 is increased by n2 times, hence
?
not worthwhile. We did manage to design an O(n)
algorithm for the proximal update in scheme (a),
but empirically the overall convergence is rather slow.
2
If we use the Euclidean squared distance as the Bregman divergence, then a term ? ? ?t 2 needs to
be added to the objective (27). No ef?cient ?closed form? solution is available, and so in experiments
we simply absorbed M into K, and then the proximal update becomes the Euclidean projection onto
S, which does admit a competitive O(n2 log2 (1/)) time solution.
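For intuition about the entropic prox, the sketch below solves a simplified version of (27) in which only the total-mass constraint 1⊤α ≤ 1 is kept and the coupled constraints i α_ik ≤ 1⊤α_i of S are dropped. This is our own illustration, not the algorithm of Appendix D: in the relaxed case the KKT conditions give a closed form, exponentiate and rescale only if the mass exceeds one.

```python
import numpy as np

def kl_prox_mass(b, c):
    """argmin_{a >= 0, sum(a) <= 1}  sum_i [a_i*log(a_i/b_i) - c_i*a_i].

    Simplification of (27): the coupled constraints of S are dropped
    (illustration only). Stationarity gives a_i = b_i*exp(c_i - 1 - lam),
    with lam >= 0 active only when the mass constraint binds, i.e. the
    solution is a simple rescaling of the unconstrained minimizer.
    """
    a = b * np.exp(c - 1.0)             # lam = 0 candidate
    s = a.sum()
    return a if s <= 1.0 else a / s     # otherwise scale the mass to 1

rng = np.random.default_rng(0)
b, c = rng.random(8) + 0.1, rng.standard_normal(8)
a = kl_prox_mass(b, c)
print(a, a.sum())                       # nonnegative, total mass <= 1
```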
6
Experimental Results
Our major goal here is to show that empirically Entropy-SVRG (Breg-SVRG with KL divergence) is
significantly more efficient than Euclidean-SVRG (Breg-SVRG with squared Euclidean distance) on
some learning problems, especially those with an entropic regularizer and a simplex constraint.
6.1
Entropy regularized LPBoost
We applied Breg-SVRG to an extension of LPBoost using entropy regularization [29]. In a binary
classification setting, the base hypotheses over the training set can be compactly represented as
U = (y_1 x_1, . . . , y_n x_n)⊤. The model then considers a minimax game between a distribution d ∈ Δ_n
over training examples and a distribution w ∈ Δ_m over the hypotheses:

    min_{d∈Δ_n, d_i≤ν} max_{w∈Δ_m} d⊤Uw + μ_d Ω(d) − μ_w Ω(w).   (28)

Here w tries to combine the hypotheses to maximize the edge (prediction confidence) y_i x_i⊤w, while
the adversary d tries to place more weight (bounded by ν) on "hard" examples to reduce the edge.
Settings. We experimented on the adult dataset from the UCI repository, which we partitioned
into n = 32,561 training examples and 16,281 test examples, with m = 123 features. We set
μ_d = μ_w = 0.01 and ν = 0.1 due to its best prediction accuracy. We tried a range of values of the step
size η; the best we found was 10⁻³ for Entropy-SVRG and 10⁻⁶ for Euclidean-SVRG (larger
step sizes made Euclidean-SVRG fluctuate even worse). For both methods, m = 32561/50 iterations per epoch gave good
results.

The stochastic gradient in d was computed from U_{:j} w_j, where U_{:j} is the j-th column and j is randomly
sampled. The stochastic gradient in w is d_i U_{i:}. We tried U_{ij} w_j and U_{ij} d_i (scheme (a) in §5),
but they performed worse. We also tried the universal catalyst in the same form as [17], which
can be directly extended to Entropy-SVRG. Similarly, we used non-uniform sampling based on
the ℓ2 norm of the rows and columns of U. It turned out that Euclidean-SVRG can benefit slightly
from it, while Entropy-SVRG does not; so we only show the "accelerated" results for the former.
To make the computational cost comparable across machines, we introduce a counter called the effective
number of passes: #pass. Assume the proximal operator has been called #po times; then

    #pass := (number of epochs so far) + ((n + m)/(nm)) · #po.   (29)
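A hedged sketch of the doubly stochastic entropic updates for (28): sample a hypothesis column for the d-gradient and an example row for the w-gradient, then take mirror steps. For simplicity the sketch drops the cap d_i ≤ ν (a capped prox is sketched later in this subsection), uses a single entropy weight for both players, and scales the sampled column/row to keep the estimates unbiased; these are our assumptions for illustration, and the experiments' exact estimator is the one described above.

```python
import numpy as np

def entropic_step(p, grad, eta):
    q = p * np.exp(-eta * grad)          # closed-form entropic prox on simplex
    return q / q.sum()

def lpboost_saddle(U, eta=1e-3, mu=1e-2, iters=20000, seed=0):
    """Doubly stochastic entropic updates (sketch) for
    min_{d in simplex} max_{w in simplex} d^T U w + mu*Omega(d) - mu*Omega(w).
    U has shape (n examples, m hypotheses); the cap d_i <= nu is omitted.
    """
    rng = np.random.default_rng(seed)
    n, m = U.shape
    d, w = np.full(n, 1 / n), np.full(m, 1 / m)
    for _ in range(iters):
        j = rng.integers(m)              # column for the d-gradient
        i = rng.integers(n)              # row for the w-gradient
        gd = m * U[:, j] * w[j] + mu * (np.log(d) + 1)   # unbiased grad in d
        gw = n * U[i, :] * d[i] - mu * (np.log(w) + 1)   # unbiased grad in w
        d = entropic_step(d, gd, eta)    # descent on d
        w = entropic_step(w, -gw, eta)   # ascent on w
    return d, w

rng = np.random.default_rng(1)
U = rng.standard_normal((200, 30))       # toy stand-in for (y_i x_i) rows
d, w = lpboost_saddle(U)
print("game value estimate:", d @ U @ w)
```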
We also compared with a "convex" approach. Given d, the optimal w in (28) obviously admits a
closed-form solution. General saddle-point problems certainly do not enjoy such a convenience.
However, we hope to take advantage of this opportunity to study the following question: suppose we
solve (28) as a convex optimization in d and the stochastic gradient were computed from the optimal
w, would it be faster than the saddle SVRG? Since solving w requires visiting the entire U, strictly
speaking the term ((n + m)/(nm)) · #po in the definition of #pass in (29) should be replaced by #po. However,
we stuck with (29) because our interest is whether a more accurate stochastic gradient in d (based
on the optimal w) can outperform doubly stochastic (saddle) optimization. We emphasize that this
comparison is only for conceptual understanding, because in general optimizing the inner variable
requires costly iterative methods.

[Figure 1: Entropy Regularized LPBoost on adult. Panels: (a) primal gap vs. number of effective passes; (b) test accuracy (%) vs. number of effective passes. Curves compare Euclidean vs. Entropy prox, each in Convex and Saddle variants.]

[Figure 2: Adversarial Prediction on the synthetic dataset. Panels: (a) primal gap vs. #pass; (b) primal gap vs. CPU time (mins); (c) test F-score vs. #pass; (d) test F-score vs. CPU time (mins). Curves: Euclidean/Entropy in Convex, Saddle, and Saddle+Catalyst variants.]
Results. Figure 1(a) demonstrates how fast the primal gap (with w optimized out for each d) is
reduced as a function of the number of effective passes. Methods based on the entropic prox are clearly
much more efficient than the Euclidean prox. This corroborates our theory that for problems like (28),
Entropy-SVRG is better suited to the underlying geometry (entropic regularizer with simplex
constraints).

We also observed that, using the entropic prox, our doubly stochastic method is as efficient as the "convex"
method: although at each iteration the w in saddle SVRG is not optimal for the
current d, the overall algorithm still performs as fast as if it were. This suggests that for
general saddle-point problems where no closed-form inner solution is available, our method will remain
efficient and competitive. Note this "convex" method is similar to the optimizer used by [29].

Finally, we investigated the increase of test accuracy as more passes over the data are performed.
Figure 1(b) shows, once more, that the entropic prox allows the accuracy to improve much
faster than the Euclidean prox. Again, the convex and saddle methods perform similarly.

As a final note, the Euclidean/entropic proximal operator for both d and w can be solved either in
closed form or by a 1-D line search based on a partial Lagrangian. So their computational costs differ
by the same order of magnitude as multiplication vs. exponentiation, which is much smaller than the
difference in #pass shown in Figure 1.
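To make the 1-D line search concrete, here is a sketch, our own illustration under the stated assumptions, of the entropic prox over the capped simplex {d ∈ Δ_n : d_i ≤ ν} used for the d-player: the KKT solution clips the exponentiated weights at ν, and a bisection on the normalizer restores unit mass.

```python
import numpy as np

def capped_entropic_prox(d0, g, eta, nu, tol=1e-12):
    """argmin_{d in simplex, d <= nu}  <g, d> + (1/eta) * KL(d, d0).

    KKT: d_i = min(nu, u_i / Z) with u = d0 * exp(-eta*g); the scalar Z is
    found by bisection since sum_i min(nu, u_i / Z) decreases in Z.
    Requires nu >= 1/len(d0) for feasibility (assumption of this sketch).
    """
    u = d0 * np.exp(-eta * g)
    lo, hi = tol, u.sum()                # sum >= 1 at lo, <= 1 at hi
    while hi - lo > tol * hi:
        Z = 0.5 * (lo + hi)
        if np.minimum(nu, u / Z).sum() > 1.0:
            lo = Z
        else:
            hi = Z
    return np.minimum(nu, u / hi)

rng = np.random.default_rng(0)
n = 100
d = capped_entropic_prox(np.full(n, 1 / n), rng.standard_normal(n), 0.1, nu=0.1)
print(d.sum(), d.max())                  # ~1.0 and <= 0.1
```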
6.2
Adversarial prediction with F-score
Datasets. Here we consider two datasets. The first is a synthetic dataset where the positive
examples are drawn from a 200-dimensional normal distribution with mean 0.1·1 and covariance
0.5·I, and negative examples are drawn from N(−0.1·1, 0.5·I). The training set has n = 100
samples, half positive and half negative. The test set has 200 samples with the same class
ratio. Notice that n = 100 means we are optimizing over two 100-by-100 matrices constrained to the
challenging set S, so the optimization problem is indeed not trivial.
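This synthetic setup is easy to reproduce; a minimal sketch (numpy assumed; the seed is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200                                    # feature dimension

def sample(n_pos, n_neg):
    # positives ~ N(0.1*1, 0.5*I), negatives ~ N(-0.1*1, 0.5*I)
    X_pos = 0.1 + np.sqrt(0.5) * rng.standard_normal((n_pos, d))
    X_neg = -0.1 + np.sqrt(0.5) * rng.standard_normal((n_neg, d))
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return X, y

X_train, y_train = sample(50, 50)          # n = 100, balanced
X_test, y_test = sample(100, 100)          # 200 test samples, same ratio
```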
The second dataset, ionosphere, has 211 training examples (122 positive and 89 negative). 89 examples
were used for testing (52 positive and 37 negative). Each example has 34 features.

Methods. To apply saddle SVRG, we used strong duality to push the optimization over θ to the
innermost level of (10), and then eliminated θ because it is a simple quadratic. We thus ended up with
the convex-concave optimization shown in (24), where the K part of (15) is augmented with a quadratic
term in α. The formulae for computing the stochastic gradient using scheme (b) are detailed in
Appendix G. We fixed λ = 1, μ = 0.01 for the ionosphere dataset, and λ = 1, μ = 0.1 for the synthetic dataset.

We also tried the universal catalyst along with non-uniform sampling, where each i was sampled
with probability proportional to ∑_{k=1}^n ‖A_ik‖_F², and similarly for j. Here ‖·‖_F is the Frobenius norm.

[Figure 3: Adversarial Prediction on the ionosphere dataset. Panels: (a) primal gap vs. #pass; (b) primal gap vs. CPU time (mins); (c) test F-score vs. #pass; (d) test F-score vs. CPU time (mins). Curves: Euclidean/Entropy in Convex, Saddle, and Saddle+Catalyst variants.]
Parameter tuning. Since each entry of the n × n matrix α is relatively small when n is large, we
needed a relatively small step size. When n = 100, we used 10⁻² for Entropy-SVRG and 10⁻⁶ for
Euclidean-SVRG (a larger step size makes it over-fluctuate). When applying the catalyst, the catalyst
regularizer can suppress the noise from a larger step size; after carefully trading off the catalyst
regularization parameter against the step size, we managed to achieve faster convergence empirically.
Results. The results on the two datasets are shown in Figures 2 and 3 respectively. We truncated
the #pass and CPU time in subplots (c) and (d) because the F-score has stabilized and we would
rather zoom in on the initial growing phase. In terms of primal gap versus #pass (subplot a), the
entropy based method is significantly more effective than the Euclidean methods on both datasets (Figures
2(a) and 3(a)). Even with the catalyst, Euclidean-Saddle is still much slower than the entropy based
methods on the synthetic dataset in Figure 2(a). The CPU time comparisons (subplot b) follow a
similar trend, except that the "convex methods" should be ignored because they are introduced only
to compare #pass.

The F-score is noisy because, as is well known, it is not monotonic with the primal gap and glitches
can appear. In subplots 2(d) and 3(d), the entropy based methods achieve a higher F-score significantly
faster than the plain Euclidean based methods on both datasets. In terms of passes (subplots 2(c) and
3(c)), Euclidean-Saddle and Entropy-Saddle achieve a similar F-score at first because their primal
gaps are comparable at the beginning. After 20 passes, the F-score of Euclidean-Saddle is overtaken
by Entropy-Saddle as the primal gap of Entropy-Saddle becomes much smaller than that of Euclidean-Saddle.
7
Conclusions and Future Work
We have proposed Breg-SVRG to solve saddle-point optimization and proved its linear rate of
convergence. Application to adversarial prediction confirmed its effectiveness. For future work, we
are interested in relaxing the (potentially hard) proximal update in (21). We will also derive similar
reformulations for DCG and precision@k, with a quadratic number of variables and with a finite-sum
structure that is again amenable to Breg-SVRG, leading to a similar reduction of the condition number
compared to Euclidean-SVRG. These reformulations, however, come with different constraint sets,
and new proximal algorithms with similar complexity as for the F-score can be developed.
References
[1] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, 2013.
[2] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
[3] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.
[4] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 2016.
[5] A. J. Defazio, T. S. Caetano, and J. Domke. Finito: A faster, permutable incremental gradient method for big data problems. In ICML, 2014.
[6] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855, 2015.
[7] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[8] S. Shalev-Shwartz. SDCA without duality, regularization, and individual convexity. In ICML, 2016.
[9] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[10] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In NIPS, 2015.
[11] A. Nitanda. Stochastic proximal gradient descent with acceleration techniques. In NIPS, 2014.
[12] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, 2014.
[13] R. Babanezhad, M. O. Ahmed, A. Virani, M. Schmidt, J. Konečný, and S. Sallinen. Stop wasting my gradients: Practical SVRG. In NIPS, 2015.
[14] Y. Zhang and L. Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In ICML, 2015.
[15] Z. Zhu and A. J. Storkey. Adaptive stochastic primal-dual coordinate descent for separable saddle point problems. In Machine Learning and Knowledge Discovery in Databases, pp. 645–658, 2015.
[16] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[17] P. Balamurugan and F. Bach. Stochastic variance reduction methods for saddle-point problems. In NIPS, 2016.
[18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[19] H. Wang, W. Xing, K. Asif, and B. D. Ziebart. Adversarial prediction games for multivariate losses. In NIPS, 2015.
[20] F. Farnia and D. Tse. A minimax approach to supervised learning. In NIPS, 2016.
[21] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[22] Y. Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on Optimization, 16(1):235–249, 2005.
[23] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[24] H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In NIPS, 2014.
[25] R. T. Rockafellar. Monotone operators associated with saddle functions and minimax problems. Nonlinear Functional Analysis, 18(part 1):397–407, 1970.
[26] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In Proc. Annual Conf. Computational Learning Theory, 2010.
[27] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2009.
[28] K. Asif, W. Xing, S. Behpour, and B. D. Ziebart. Adversarial cost-sensitive classification. In UAI, 2015.
[29] M. K. Warmuth, K. A. Glocer, and S. V. N. Vishwanathan. Entropy regularized LPBoost. In Y. Freund, L. Györfi, and G. Turán, eds., Proc. Intl. Conf. Algorithmic Learning Theory, no. 5254 in Lecture Notes in Artificial Intelligence, pp. 256–271. Springer-Verlag, Budapest, October 2008.
[30] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. Fixed-Point Algorithms for Inverse Problems in Science and Engineering, 49:185–212, 2011.
6,837 | 7,185 | Unbounded cache model for online language
modeling with open vocabulary
Edouard Grave
Facebook AI Research
[email protected]
Moustapha Cisse
Facebook AI Research
[email protected]
Armand Joulin
Facebook AI Research
[email protected]
Abstract
Recently, continuous cache models were proposed as extensions to recurrent neural
network language models, to adapt their predictions to local changes in the data
distribution. These models only capture the local context, of up to a few thousands
tokens. In this paper, we propose an extension of continuous cache models, which
can scale to larger contexts. In particular, we use a large scale non-parametric
memory component that stores all the hidden activations seen in the past. We
leverage recent advances in approximate nearest neighbor search and quantization
algorithms to store millions of representations while searching them efficiently. We
conduct extensive experiments showing that our approach significantly improves
the perplexity of pre-trained language models on new distributions, and can scale
efficiently to much larger contexts than previously proposed local cache models.
1 Introduction
Language models are a core component of many natural language processing applications such
as machine translation [3], speech recognition [2] or dialogue agents [50]. In recent years, deep
learning has led to remarkable progress in this domain, reaching state of the art performance on many
challenging benchmarks [31]. These models are known to be over-parametrized, and large quantities
of data are needed for them to reach their full potential [12]. Consequently, the training time can be
very long (up to weeks) even when vast computational resources are available [31]. Unfortunately, in
many real-world scenarios, either such quantity of data is not available, or the distribution of the data
changes too rapidly to permit very long training. A common strategy to circumvent these problems is
to use a pre-trained model and slowly finetune it on the new source of data. Such an adaptive strategy is
also time-consuming for parametric models since the specificities of the new dataset must be slowly
encoded in the parameters of the model. Additionally, such a strategy is also prone to overfitting and
dramatic forgetting of crucial information from the original dataset. These difficulties directly result
from the nature of parametric models.
In contrast, non-parametric approaches do not require retraining and can efficiently incorporate
new information without damaging the original model. This makes them particularly suitable for
settings requiring rapid adaptation to a changing distribution or to novel examples. However, nonparametric models perform significantly worse than fully trained deep models [12]. In this work,
we are interested in building a language model that combines the best of both non-parametric and
parametric approaches: a deep language model to model most of the distribution and a non-parametric
one to adapt it to the change of distribution.
This solution has been used in speech recognition under the name of cache models [36, 37]. Cache
models exploit the unigram distribution of a recent context to improve the predictive ability of the
model. Recently, Grave et al. [22] and Merity et al. [43] showed that this solution could be applied to
neural networks. However, cache models depend on the local context. Hence, they can only adapt a
parametric model to a local change in the distribution. These specificities limit their usefulness when
the context is unavailable (e.g., tweets) or is enormous (e.g., book reading). This work overcomes
this limitation by introducing a fast non-parametric retrieval system into the hybrid approach. We
demonstrate that this novel combination of a parametric neural language model with a non-parametric
retrieval system can smoothly adapt to changes in the distribution while remaining as consistent as
possible with the history of the data. Our approach is a generalization of cache models which
scales to millions of examples.
2 Related work
This section reviews different settings that require models to adapt to changes in the data distribution,
like transfer learning or open set (continual) learning. We also discuss solutions specific to language
models, and we briefly explain large-scale retrieval methods.
Transfer Learning. Transfer learning [10] is a well-established component of machine learning
practitioners? toolbox. It exploits the commonalities between different tasks to improve the predictive
performance of the models trained to solve them. Notable variants of transfer learning are multitask
learning [10], domain adaptation [6], and curriculum learning [8]. Multitask learning jointly trains
several models to promote sharing of statistical strength. Domain adaptation reuses existing information about a given problem (e.g., data or model) to solve a new task. Curriculum learning takes
one step further by adapting an existing model across a (large) sequence of increasingly difficult
tasks. Models developed for these settings have proven useful in practice. However, they are chiefly
designed for supervised learning and do not scale to the size of the problem we consider in this work.
Class-incremental and Open Set Learning. These methods are concerned with problems where
the set of targets is not known in advance but instead, increases over time. The main difficulty
in this scenario lies in the deterioration of performance on previously seen classes when trying to
accommodate new ones. Kuzborskij et al. [39] proposed to reduce the loss of accuracy when adding
new classes by partly retraining the existing classifier. Muhlbaier et al. [47] introduced an ensemble
model to deal with an increasingly large number of concepts. However, their approach relies on
unrealistic assumptions on the data distribution. Zero-shot learning [41] can deal with new classes
but often requires additional descriptive information about them [1]. Scheirer et al. [49] proposed a
framework for open set recognition based on one-class SVMs.
Adaptive language models. Adaptive language models change their parameters according to the
recent history. Therefore, they implement a form of domain adaptation. A popular approach adds
a cache to the model and has shown early success in the context of speech recognition [36, 38, 37].
Jelinek et al. further extended this strategy [29] into a smoothed trigram language model, reporting a
reduction in both perplexity and word error rates. Della Pietra et al. [15] adapt the cache to a general
n-gram model such that it satisfies marginal constraints obtained from the current document. Closer
to our work, Grave et al. [21] have shown that this strategy can improve modern language models
like recurrent networks without retraining. However, their model assumes that the data distribution
changes smoothly over time, by using a context window to improve the performance. Merity et
al. [43] proposed a similar model, where the cache is jointly trained with the language model.
Other adaptive language models have been proposed in the past: Kneser and Steinbiss [35] and Iyer
and Ostendorf [26] dynamically adapt the parameters of their model to recent history using different
weight interpolation schemes. Bellegarda [5] and Coccaro and Jurafsky [14] use latent semantic
analysis to adapt their models to current context. Similarly, topic features have been used with either
maximum entropy models [33] or recurrent networks [46, 53]. Finally, Lau et al. [42] propose to use
pairs of distant words to capture long-range dependencies.
Large scale retrieval approaches. The standard method for large-scale retrieval is to compress
vectors and query them using a standard efficient algorithm. One of the most popular strategies is
Locality-sensitive hashing (LSH) by Charikar [11], which uses random projections to approximate
the cosine similarity between vectors by a function related to the Hamming distance between their
corresponding binary codes. Several works have built on this initial binarization technique, such as
spectral hashing [54], or Iterative Quantization (ITQ) [19]. Product Quantization (PQ) [28] approximates the distances between vectors by simultaneously learning the codes and the centroids, using
k-means. In the context of text, several works have shown that compression does not significantly
reduce the performance of models [17, 24, 30].
3 Approach
In this section, we first briefly review language modeling and the use of recurrent networks for this
task. We then describe our model, called unbounded cache, and explain how to scale it to large
datasets with millions of words.
3.1 Language modeling
A language model evaluates the probability distribution of sequences of words. It is often framed
as learning the conditional probability of words, given their history [4]. Let V be the size of the
vocabulary; each word is represented by a one-hot encoding vector $x \in \mathbb{R}^V$, corresponding
to its index in the dictionary. Using the chain rule, the probability assigned to a sequence of words
$x_1, \ldots, x_T$ can be factorized as
$$p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_{t-1}, \ldots, x_1). \qquad (1)$$
This conditional probability is traditionally approximated with non-parametric models based on
counting statistics [20]. In particular, smoothed N-gram models [32, 34] have been the dominant type
of models historically, achieving good performance in practice [44]. While the use of parametric
models for language modeling is not new [48], their superiority has only been established with the
recent emergence of neural networks [7, 45]. In particular, recurrent networks are now the standard
approach, achieving state-of-the-art performances on several challenging benchmarks [31, 55].
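To make the factorization in Eq. (1) concrete, here is a minimal Python sketch of how a sequence score is computed from any next-word distribution; the `next_word_dist` callable is a hypothetical interface standing in for whatever model is used, not something defined in the paper.

```python
import numpy as np

def sequence_log_prob(next_word_dist, words):
    """Log-probability of a word sequence via the chain rule (Eq. 1).

    `next_word_dist(prefix)` is any callable returning a probability
    vector over the vocabulary given the list of preceding word ids.
    """
    log_prob = 0.0
    for t, w in enumerate(words):
        p = next_word_dist(words[:t])  # p(x_t | x_1, ..., x_{t-1})
        log_prob += np.log(p[w])
    return log_prob
```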
3.2 Recurrent networks
Recurrent networks are a special case of neural networks specifically designed for sequence modeling.
At each time step, they maintain a hidden representation of the past and make a prediction accordingly.
This representation is maintained by a continuous vector $h_t \in \mathbb{R}^d$ encoding the history $x_t, \ldots, x_1$.
The probability of the next word is then simply parametrized using this hidden vector, i.e.,
$$p(w \mid x_t, \ldots, x_1) \propto \exp(h_t^\top o_w). \qquad (2)$$
The hidden vector $h_t$ is computed by recursively applying an update rule:
$$h_t = \Phi(x_t, h_{t-1}), \qquad (3)$$
where $\Phi$ is a function depending on the architecture of the network. Depending on $\Phi$, the hidden
vectors may have a specific structure adapted to different sequence representation problems. Several
architectures for recurrent networks have been proposed, such as the Elman network [16], the long
short-term memory (LSTM) [25] or the gated recurrent unit (GRU) [13]. For example, the Elman
network [16] is defined by the following update rule
$$h_t = \sigma(L x_t + R h_{t-1}), \qquad (4)$$
where $\sigma$ is a non-linearity such as the logistic or tanh functions, $L \in \mathbb{R}^{d \times V}$ is a word embedding
matrix and $R \in \mathbb{R}^{d \times d}$ is the recurrent matrix. Empirical results have validated the effectiveness of
the LSTM architecture to natural language modeling [31]. We refer the reader to [23] for details on
this architecture. In the rest of this paper, we focus on this structure of recurrent networks.
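As an illustration of Eqs. (2) and (4), the following numpy sketch implements one Elman update and the resulting next-word distribution; the choice of tanh as the non-linearity and the shapes are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def elman_step(x_t, h_prev, L, R):
    # Eq. (4): h_t = sigma(L x_t + R h_{t-1}); tanh is used as sigma here.
    return np.tanh(L @ x_t + R @ h_prev)

def next_word_dist(h_t, O):
    # Eq. (2): p(w | x_t, ..., x_1) proportional to exp(h_t^T o_w),
    # where row w of O is the output embedding o_w.
    logits = O @ h_t
    logits -= logits.max()          # for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```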
Recurrent networks process a sentence one word at a time and update their weights by backpropagating
the prediction error over a fixed window of past time steps. This training procedure
is computationally expensive, and often requires a significant amount of data to achieve good
performance. To circumvent the need of retraining such network for domain adaptation, we propose
to add a non-parametric model that takes care of the fluctuation in the data distribution.
3.3 Unbounded cache
An unbounded cache adds a non-parametric and unconstrained memory to a neural network. Our
approach is inspired by the cache model of Kuhn [36] and can be seen as an extension of Grave et
al. [22] to an unbounded memory structure tailored to deal with out-of-vocabulary and rare words.
Similar to Grave et al. [22], we extend a recurrent neural network with a key-value memory component, storing the pairs $(h_i, w_{i+1})$ of hidden representation and corresponding word. This memory
component also shares similarity with the parametric memory component of the pointer network
introduced by Vinyals et al. [52] and extended by Merity et al. [43]. As opposed to these models
and standard cache models, we do not restrict the cache component to recent history but store all
previously observed words. Using the information stored in the cache component, we can obtain a
probability distribution over the words observed up to time t using the kernel density estimator:
$$p_{cache}(w_t \mid w_1, \ldots, w_{t-1}) \propto \sum_{i=1}^{t-1} \mathbb{1}\{w_t = w_i\}\, K\!\left(\frac{\|h_t - h_i\|}{\theta}\right), \qquad (5)$$
where $K$ is a kernel, such as Epanechnikov or Gaussian, and $\theta$ is a smoothing parameter. If $K$ is
the Gaussian kernel ($K(x) = \exp(-x^2/2)$) and the hidden representations are normalized, this is
equivalent to the continuous cache model.
As the memory grows with the amount of data seen by the model, this probability distribution becomes
impossible to compute. Millions of words and their multiple associated context representations are
stored, and exact exhaustive matching is prohibitive. Instead, we use the approximate k-nearest
neighbors algorithm that is described below in Sec. 3.4 to estimate this probability distribution:
$$p_{cache}(w_t \mid w_1, \ldots, w_{t-1}) \propto \sum_{i \in N(h_t)} \mathbb{1}\{w_t = w_i\}\, K\!\left(\frac{\|h_t - h_i\|}{\theta(h_t)}\right), \qquad (6)$$
where $N(h_t)$ is the set of nearest neighbors and $\theta(h_t)$ is the Euclidean distance from $h_t$ to its $k$-th
nearest neighbor. This estimator is known as variable kernel density estimation [51]. It should be
noted that if the kernel $K$ is equal to zero outside of $[-1, 1]$, taking the sum over the $k$ nearest
neighbors is equivalent to taking the sum over the full data.
The distribution obtained using the estimator defined in Eq. 6 assigns non-zero probability to at
most k words, where k is the number of nearest neighbors used. In order to have non-zero probability
everywhere (and avoid getting infinite perplexity), we propose to linearly interpolate this distribution
with the one from the model:
$$p(w_t \mid w_1, \ldots, w_{t-1}) = (1 - \lambda)\, p_{model}(w_t \mid w_1, \ldots, w_{t-1}) + \lambda\, p_{cache}(w_t \mid w_1, \ldots, w_{t-1}).$$
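The following sketch ties Eqs. (5)-(6) and the interpolation together in numpy: it scores the k nearest stored (hidden state, next word) pairs with a kernel and mixes the result with the model distribution. The Epanechnikov kernel and the bandwidth choice are one admissible instantiation consistent with the text, not the paper's fixed choice.

```python
import numpy as np

def p_cache(h_t, cache_h, cache_w, vocab_size, k=1024):
    """Eq. (6) as a sketch: kernel-weighted votes of the k nearest stored
    hidden states, with theta(h_t) the distance to the k-th neighbor."""
    dists = np.linalg.norm(cache_h - h_t, axis=1)
    nn = np.argpartition(dists, min(k, len(dists)) - 1)[:k]
    scale = dists[nn].max() + 1e-12              # theta(h_t)
    w = np.maximum(0.0, 1.0 - (dists[nn] / scale) ** 2)  # Epanechnikov kernel
    p = np.zeros(vocab_size)
    np.add.at(p, cache_w[nn], w)                 # accumulate votes per word id
    return p / max(p.sum(), 1e-12)

def interpolate(p_model, p_cache_dist, lam):
    # Linear interpolation of the parametric and cache distributions.
    return (1.0 - lam) * p_model + lam * p_cache_dist
```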
3.4 Fast large scale retrieval
Fast computation of the probability of a rare word is crucial to make the cache grow to millions of
potential words. Their representation also needs to be stored with relatively low memory usage. In this
section, we briefly describe a scalable retrieval method introduced by Jegou et al. [27]. Their approach
called Inverted File System Product Quantization (IVFPQ) combines two methods, an inverted file
system [56] and a quantization method, called Product quantization (PQ) [28]. Combining these two
components offers a good compromise between a fast retrieval of approximate nearest neighbors and
a low memory footprint.
Inverted file system. Inverted file systems [56] are a core component of standard large-scale text
retrieval systems, like search engines. When a query x is compared to a set Y of potential elements,
an inverted file avoids an exhaustive search by providing a subset of possible matching candidates.
In the context of continuous vectors, this subset is obtained by measuring some distance between
the query and predefined vector representations of the set. More precisely, these candidates are
selected through ?coarse matching? by clustering all the elements in Y in c groups using k-means.
The centroids are used as the vector representations. Each element of the set Y is associated with
one centroid in an inverted table. The query x is then compared to each centroid and a subset of
them is selected according to their distance to the query. All the elements of Y associated with these
centroids are then compared to the query x. Typically, we take c centroids and keep the cc closest
centroids to a query.
This procedure is quite efficient but very memory consuming, as each vector in the set Y must be
stored. This can be drastically reduced by quantizing the vectors. Product Quantization (PQ) is
a popular quantization method that has shown competitive performance on many retrieval benchmarks [28]. Following Jegou et al. [28], we do not directly quantize the vector y but its residual r,
i.e., the difference between the vector and its associated centroid.
Product Quantization. Product quantization is a data-driven compression algorithm with no
overhead during search [28]. While PQ has been designed for image feature compression, Joulin
et al. [30] have demonstrated its effectiveness for text too. PQ compresses real-valued vectors by
approximating them with the closest vector in a pre-defined structured set of centroids, called a
codebook. This codebook is obtained by splitting each residual vector $r$ into $k$ subvectors $r^i$, each of
dimension $d/k$, and running a k-means algorithm with $s$ centroids on each resulting subspace. The
resulting codebook contains $s^k$ elements, which is too large to be enumerated, and is instead implicitly
defined by its structure: a $d$-dimensional vector $x \in \mathbb{R}^d$ is approximated as
$$\hat{x} = \sum_{i=1}^{k} q_i(x), \qquad (7)$$
where $q_i(x)$ is the closest centroid to subvector $x^i$. For each subspace, there are $s = 2^b$ centroids,
where $b$ is the number of bits required to store the quantization index of the sub-quantizer. Note
that in PQ, the subspaces are aligned with the natural axis, and improvements were made by Ge et
al. [18] to align the subspaces to principal axes in the data. The reconstructed vector can take $2^{kb}$
distinct reproduction values and is stored in $kb$ bits.
PQ estimates the inner product in the compressed domain as
$$x^\top y \approx \hat{x}^\top y = \sum_{i=1}^{k} q_i(x^i)^\top y^i. \qquad (8)$$
In practice, the vector estimate $\hat{x}$ is trivially reconstructed from the codes (i.e., from the quantization
indexes) by concatenating these centroids. PQ uses two parameters, namely the number of sub-quantizers $k$ and the number of bits $b$ per quantization index.
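A numpy sketch of PQ encoding and of the compressed-domain inner product of Eq. (8); the codebooks are assumed to be already learned (one k-means per subspace), and all names here are illustrative.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Eq. (7): split x into k subvectors and keep, per subspace, the index
    of the closest centroid. `codebooks` has shape (k, s, d // k)."""
    k, s, dsub = codebooks.shape
    subs = x.reshape(k, dsub)
    return np.array([np.argmin(((codebooks[i] - subs[i]) ** 2).sum(-1))
                     for i in range(k)])

def pq_inner_product(codes, codebooks, y):
    """Eq. (8): approximate x^T y by summing, over subspaces, the product
    of the stored sub-centroid with the matching sub-query."""
    k, s, dsub = codebooks.shape
    y_subs = y.reshape(k, dsub)
    return sum(codebooks[i, codes[i]] @ y_subs[i] for i in range(k))
```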
4 Experiments
In this section, we present evaluations of our unbounded cache model on different language modeling
tasks. We first briefly describe our experimental setting and the datasets we used, before presenting
the results.
4.1 Experimental setting
One of the motivations of our model is to be able to adapt to changing data distribution. In particular,
we want to incorporate new words in the vocabulary, as they appear in the test data. We thus consider
a setting where we do not replace any words by the <unk> token, and where the test set contains
out-of-vocabulary words (OOV) which were absent at train time. Since we use the perplexity as the
evaluation metric, we need to avoid probabilities equal to zero in the output of our models (which
would result in infinite perplexity). Thus, we always interpolate the probability distributions of the
various models with the uniform distribution over the full vocabulary:
$$p_{uniform}(w_t) = \frac{1}{|\text{vocabulary}|}.$$
This is a standard technique, which was previously used to compare language models trained on
datasets with different vocabularies [9].
Baselines. We compare our unbounded cache model with the static model interpolated with the uniform
distribution, as well as the static model interpolated with the unigram probability distribution observed
up to time t. Our proposal is a direct extension of the local cache model [22]. Therefore, we also
compare to it to highlight the settings where an unbounded cache model is preferable to a local one.
Test set     Size      OoV rate (%)
News 2008    219,796   2.3
News 2009    218,628   2.4
News 2010    205,859   2.4
News 2011    209,187   2.5
Commentary   144,197   4.2
Web          321,072   5.9
Wiki         191,554   5.5
Books        174,037   3.7

Table 1: Vocabulary size and out-of-vocabulary rate for various test sets (for a model trained on News 2007).
4.2 Implementation details
We train recurrent neural networks with 256 LSTM hidden units, using the Adagrad algorithm with a
learning rate of 0.2 and 10 epochs. We compute the gradients using backpropagation through time
(BPTT) over 20 timesteps. Because of the large vocabulary sizes, we use the adaptive softmax [21].
We use the IVFPQ implementation from the FAISS open source library.^1 We use 4,096 centroids
and 8 probes for the inverted file. Unless said otherwise, we query the 1,024 nearest neighbors.
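A minimal usage sketch of the FAISS index described above. The paper specifies 4,096 centroids and 8 probes; the hidden-state dimension, the number of sub-quantizers and the random data here are our assumptions, chosen only to make the snippet self-contained.

```python
import numpy as np
import faiss  # https://github.com/facebookresearch/faiss

d = 256                                # hidden-state dimension (assumed)
coarse = faiss.IndexFlatL2(d)          # coarse quantizer for the inverted file
index = faiss.IndexIVFPQ(coarse, d, 4096, 32, 8)  # 4,096 lists; 32 sub-quantizers
                                                  # of 8 bits each (assumed)
keys = np.random.rand(100_000, d).astype('float32')  # stored hidden states
index.train(keys)                      # learn coarse centroids and PQ codebooks
index.add(keys)
index.nprobe = 8                       # probe 8 inverted lists per query
query = np.random.rand(1, d).astype('float32')
distances, ids = index.search(query, 1024)  # 1,024 approximate nearest neighbors
```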
4.3 Datasets
Most commonly used benchmarks for evaluating language models propose to replace rare words
by the <unk> token. On the contrary, we are interested in open vocabulary settings, and therefore
decided to use datasets without <unk>. We performed experiments on data from the five following
domains:
- News Crawl^2 is a dataset made of news articles, collected from various online publications.
There is one subset of the data for each year, from 2007 to 2011. This dataset will allow
testing the unbounded cache models on data whose distribution slowly changes over time.
The dataset is shuffled at the sentence level. In the following, we refer to this dataset as
news 2007-2011.
- News Commentary consists of political and economic commentaries from the website
https://www.project-syndicate.org/. This dataset is publicly available from the
Statistical Machine Translation workshop website. In the following, we refer to this dataset
as commentary.
- Common Crawl is a text dataset collected from diverse web sources. The dataset is shuffled
at the sentence level. In the following, we refer to this dataset as web.
- WikiText^3 is a dataset derived from high quality English Wikipedia articles, introduced by
Merity et al. [43]. Since we do not replace any tokens by <unk>, we use the raw version.
In the following, we refer to this dataset as wiki.
- The Book Corpus is a dataset of 3,036 English books, collected from Project
Gutenberg^4 [40]. We use a subset of the books, which have a length around 100,000 tokens.
In the following, we refer to this dataset as books.
All these datasets are publicly available. Unless stated otherwise, we use 2 million tokens for training
the static models and 10 million tokens for evaluation. All datasets are lowercased and tokenized
using the europarl dataset tools.^5
^1 https://github.com/facebookresearch/faiss
^2 http://www.statmt.org/wmt14/translation-task.html
^3 https://metamind.io/research/the-wikitext-long-term-dependency-language-modeling-dataset/
^4 http://www.gutenberg.org/
^5 http://statmt.org/europarl/v7/tools.tgz
                             Test set
model                        2007    2008    2009    2010    2011
static                       220.9   237.6   256.2   259.7   268.8
static + unigram             220.3   235.9   252.6   256.1   264.3
static + local cache         218.9   234.5   250.5   256.2   265.2
static + unbounded cache     166.5   191.4   202.6   204.8   214.3

Table 2: Static model trained on news 2007 and tested on news 2007-2011 (test-set perplexity).
[Figure 1: Performance of our model, as a function of the number k of nearest neighbors used to
estimate the probability of words in the unbounded cache. We report the entropy difference with the
static+unigram baseline. Left panel ("News 2008-2011"): entropy difference versus the number k of
nearest neighbors (200 to 1000) for the news 2008 to news 2011 test sets. Right panel ("Domain
adaptation"): the same curves for the commentary, web, wiki and books test sets.]
                                           Test domain
Train domain   model                       News    Commentary   Web     Wiki     Books
News           static                      -       342.7        689.3   1003.2   687.1
               static + unigram            -       303.5        581.1   609.4    349.1
               static + local cache        -       288.5        593.4   316.5    240.3
               static + unbounded cache    -       191.1        383.4   337.4    237.2
Web            static                      624.1   484.0        -       805.3    784.3
               static + unigram            519.2   395.6        -       605.3    352.4
               static + local cache        531.4   391.3        -       321.5    235.8
               static + unbounded cache    306.3   234.9        -       340.2    223.6
Wiki           static                      638.1   626.3        901.0   -        654.6
               static + unigram            537.9   462.2        688.5   -        346.9
               static + local cache        532.8   436.7        694.3   -        228.8
               static + unbounded cache    318.7   255.3        456.1   -        223.8

Table 3: Static models trained on News, Web or Wiki and tested on data from other domains (test-set perplexity).
Dataset      Static model   Local cache   Unbounded cache
News 2008    82             664           433
Commentary   78             613           494
Web          85             668           502
Wiki         87             637           540
Books        81             626           562

Table 4: Computational time (in seconds) to process 10M tokens from different test sets for the static
language model, the local cache (size 10,000) and the unbounded cache.
4.4 Results

[Figure 2: Performance of the unbounded cache model, as a function of the number of test examples
(log scale, 10^5 to 10^7). We report the entropy difference with the static+unigram baseline on
news 2008, news 2009 and news 2010. We observe that, as the number of test examples increases
(and thus, the information stored in the cache), the performance of the unbounded cache increases.]
We demonstrate the effectiveness of using an unbounded cache to complement a language model, as
advocated in the previous sections, by performing two types of experiments representing near domain
and far domain adaptation scenarios. In both experiments, we compare the static model, its unigram
extension, and the unbounded cache model.
Local vs. Unbounded Cache We first study the impact of using an unbounded cache instead of a
local one. To that end, we compare the performance of the two models when trained and tested on
different combinations of the previously described datasets. These datasets can be categorized into
two groups according to their properties and the results obtained by the various models we use.
On the one hand, the Wiki and Books datasets are not shuffled. Hence, the recent history (up to a few
thousand words) contains a wealth of information that can be used by a local cache to reduce the
perplexity of a static model. Indeed, the local cache model achieves respectively 316.5 and 240.3
on the Wiki and Books datasets when trained on the News dataset. This corresponds to about a 3×
reduction in perplexity on both datasets in comparison to the static model. A similar trend holds when
the training data is either Web or Wiki dataset. Surprisingly, the unbounded cache model performs
similarly to the cache model despite using orders of magnitude broader context. A static model
trained on News and augmented with an unbounded cache achieves respectively 337.4 and 237.2
of perplexity. It is also worth noting that our approach is more efficient than the local cache, while
storing a much larger number of elements. Thanks to the use of fast nearest neighbor algorithm,
it takes 502 seconds to process 10M tokens from the test set when using the unbounded cache.
Comparatively, it takes 668 seconds for a local cache model of size 10,000 to perform a similar task.
The timing experiments, reported in Table 4, show a similar trend.
On the other hand, the Commentary and Web datasets are shuffled. Therefore, a local cache can
hardly capture the relevant statistics to significantly improve upon the static model interpolated with
the unigram distribution. Indeed, the perplexity of a local cache model on these datasets when the
static model is trained on the News dataset is respectively 288.5 and 593.4. In comparison, the
unbounded cache model achieves on the same datasets respectively a perplexity of 191.1 and 383.4.
That is an average improvement of about 50% over the local cache in both cases (see Table 3).
Near domain adaptation. We study the benefit of using an unbounded cache model when the test
domain is only slightly different from the source domain. We train the static model on news 2007
and test on the corpus news 2008 to news 2011. All the results are reported in Table 2.
We first observe that the unbounded cache brings a 24.6% improvement relative to the static model
on the in-domain news 2007 corpus by bringing the perplexity from 220.9 down to 166.5. In
comparison, neither using the unigram information nor using a local cache lead to significant
improvement. This result underlines two phenomena. First, the simple distributional information
captured by the unigram or the local cache is already captured by the static model. Second, the
unbounded cache enhances the discrimination capabilities of the static model by capturing useful
non-linearities thanks to the combination of the nearest neighbor and the representation extracted from
the static model. Interestingly, these observations remain consistent when we consider evaluations on
the test sets news 2008-2011. Indeed, the average improvement of the unbounded cache relative to
the static model on the corpus news 2008-2011 is 20.44% while the relative improvement of the
unigram cache is only 1.3%. Similarly to the in-domain experiment, the unigram brings little useful
information to the static model mainly because the source (news 2007) and the target distributions
(news 2008-2011) are very close. In contrast, the unbounded cache still complements the static
model with valuable non-linear information about the target distributions.
Far domain adaptation. Our second set of experiments is concerned with testing on different
domains from the one the static model is trained on. We use the News, Web and Wiki datasets as
source domains, and all five domains as target. The results are reported in Table 3.
First, we observe that the unigram, the local and the unbounded cache significantly help the static
model in all the far domain adaptation experiments. For example, when adapting the static model
from the News domain to the Commentary and Wiki domains, the unigram reduces the perplexity of
the static model by 39.2 and 393.8 in absolute value respectively. The unbounded cache significantly
improves upon the static model and the unigram on all the far domain adaptation experiments. The
smallest relative improvement compared to the static model and the unigram is achieved when
adapting from News to Web and is 79.7% and 51.6% respectively. The more the target domain
differs from the source one, the more useful an unbounded cache model is. Indeed, when adapting
to the Books domain (which is the most different from the other domains) the average improvement
given by the unbounded cache relative to the static model is 69.7%.
Number of nearest neighbors. Figure 1 shows the performance of our model with the number
of nearest neighbors per query. As observed previously by Grave et al [22], the performance of a
language model improves with the size of the context used in the cache. This context is, in some
sense, a constrained version of our set of retained nearest neighbors. Interestingly, we observe the
same phenomenon despite forming the set of possible predictions over a much broader set of potential
candidates than the immediate local context. Since IVFPQ has a linear complexity in the number
of nearest neighbors, setting the number of nearest neighbors to a thousand offers a good trade-off
between speed and accuracy.
Size of the cache. Figure 2 shows the gap between the performance of the static language model with
and without the cache as the size of the test set increases. Despite having a much larger set
of candidates to choose from, our algorithm continues to select relevant information. As the test set is
explored, better representations for rare words are stored, explaining this constant improvement.
5 Conclusion
In this paper, we introduce an extension to recurrent networks for language modeling, which stores
past hidden activations and associated target words. This information can then be used to obtain a
probability distribution over the previous words, allowing the language models to adapt to the current
distribution of the data dynamically. We propose to scale this simple mechanism to large amounts of
data (millions of examples) by using fast approximate nearest neighbor search. We demonstrated on
several datasets that our unbounded cache is an efficient method to adapt a recurrent neural network
to new domains dynamically, and can scale to millions of examples.
Acknowledgements
We thank the anonymous reviewers for their insightful comments.
References
[1] I. Alabdulmohsin, M. Cisse, and X. Zhang. Is attribute-based zero-shot learning an ill-posed strategy? In
ECML-PKDD.
[2] D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski,
A. Coates, G. Diamos, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. In
ICML, 2016.
[3] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In ICLR, 2015.
[4] L. R. Bahl, F. Jelinek, and R. L. Mercer. A maximum likelihood approach to continuous speech recognition.
PAMI, 1983.
[5] J. R. Bellegarda. Exploiting latent semantic information in statistical language modeling. Proceedings of
the IEEE, 2000.
[6] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning
from different domains. Machine learning, 79(1), 2010.
[7] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. JMLR, 2003.
[8] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[9] C. Buck, K. Heafield, and B. van Ooyen. N-gram counts and language models from the common crawl. In
LREC, 2014.
[10] R. Caruana. Multitask learning. In Learning to learn. Springer, 1998.
[11] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[12] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word
benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
[13] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on
sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[14] N. Coccaro and D. Jurafsky. Towards better integration of semantic predictors in statistical language
modeling. In ICSLP, 1998.
[15] S. Della Pietra, V. Della Pietra, R. L. Mercer, and S. Roukos. Adaptive language modeling using minimum
discriminant estimation. In Proceedings of the workshop on Speech and Natural Language, 1992.
[16] J. L. Elman. Finding structure in time. Cognitive science, 1990.
[17] M. Federico, N. Bertoldi, and M. Cettolo. Irstlm: an open source toolkit for handling large scale language
models. In INTERSPEECH, 2008.
[18] T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization for approximate nearest neighbor search.
In CVPR, 2013.
[19] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In
CVPR, 2011.
[20] J. T. Goodman. A bit of progress in language modeling. Computer Speech & Language, 2001.
[21] E. Grave, A. Joulin, M. Cissé, D. Grangier, and H. Jégou. Efficient softmax approximation for GPUs. In
ICML, 2017.
[22] E. Grave, A. Joulin, and N. Usunier. Improving neural language models with a continuous cache. In ICLR,
2017.
[23] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In
ICASSP, 2013.
[24] K. Heafield. Kenlm: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on
Statistical Machine Translation, 2011.
[25] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.
[26] R. M. Iyer and M. Ostendorf. Modeling long distance dependence in language: Topic mixtures versus
dynamic cache models. IEEE Transactions on speech and audio processing, 1999.
[27] H. Jegou, M. Douze, and C. Schmid. Hamming embedding and weak geometric consistency for large scale
image search. In ECCV, 2008.
[28] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. PAMI, 2011.
[29] F. Jelinek, B. Merialdo, S. Roukos, and M. Strauss. A dynamic language model for speech recognition. In
HLT, 1991.
[30] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and T. Mikolov. Fasttext.zip: Compressing text
classification models. arXiv preprint arXiv:1612.03651, 2016.
[31] R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling.
arXiv preprint arXiv:1602.02410, 2016.
[32] S. M. Katz. Estimation of probabilities from sparse data for the language model component of a speech
recognizer. ICASSP, 1987.
[33] S. Khudanpur and J. Wu. Maximum entropy techniques for exploiting syntactic, semantic and collocational
dependencies in language modeling. Computer Speech & Language, 2000.
[34] R. Kneser and H. Ney. Improved backing-off for m-gram language modeling. In ICASSP, 1995.
[35] R. Kneser and V. Steinbiss. On the dynamic adaptation of stochastic language models. In ICASSP, 1993.
[36] R. Kuhn. Speech recognition and the frequency of recently used words: A modified markov model for
natural language. In Proceedings of the 12th conference on Computational linguistics-Volume 1, 1988.
[37] R. Kuhn and R. De Mori. A cache-based natural language model for speech recognition. PAMI, 1990.
[38] J. Kupiec. Probabilistic models of short and long distance word dependencies in running text. In
Proceedings of the workshop on Speech and Natural Language, 1989.
[39] I. Kuzborskij, F. Orabona, and B. Caputo. From n to n+ 1: Multiclass transfer incremental learning. In
CVPR, 2013.
[40] S. Lahiri. Complexity of word collocation networks: A preliminary structural analysis. In Proceedings of
the Student Research Workshop at the 14th Conference of the European Chapter of the Association for
Computational Linguistics, 2014.
[41] C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object
categorization. PAMI, 2014.
[42] R. Lau, R. Rosenfeld, and S. Roukos. Trigger-based language models: A maximum entropy approach. In
ICASSP, 1993.
[43] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. In ICLR, 2017.
[44] T. Mikolov, A. Deoras, S. Kombrink, L. Burget, and J. Černocký. Empirical evaluation and combination of
advanced language modeling techniques. In INTERSPEECH, 2011.
[45] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based
language model. In INTERSPEECH, 2010.
[46] T. Mikolov and G. Zweig. Context dependent recurrent neural network language model. In SLT, 2012.
[47] M. D. Muhlbaier, A. Topalis, and R. Polikar. Learn++ .NC: Combining ensemble of classifiers with
dynamically weighted consult-and-vote for efficient incremental learning of new classes. IEEE transactions
on neural networks, 20(1), 2009.
[48] R. Rosenfeld. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech
and Language, 1996.
[49] W. J. Scheirer, A. de Rezende Rocha, A. Sapkota, and T. E. Boult. Toward open set recognition. PAMI,
2013.
[50] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems
using generative hierarchical neural network models. In AAAI, 2016.
[51] G. R. Terrell and D. W. Scott. Variable kernel density estimation. The Annals of Statistics, 1992.
[52] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In NIPS, 2015.
[53] T. Wang and K. Cho. Larger-context language modelling. In ACL, 2016.
[54] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2009.
[55] J. G. Zilly, R. K. Srivastava, J. Koutník, and J. Schmidhuber. Recurrent highway networks. In ICML, 2017.
[56] J. Zobel and A. Moffat. Inverted files for text search engines. ACM computing surveys (CSUR), 2006.
6,838 | 7,186 | Predictive State Recurrent Neural Networks
Carlton Downey
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Ahmed Hefny
Carnegie Mellon University
Pittsburgh, PA, 15213
[email protected]
Byron Boots
Georgia Tech
Atlanta, GA, 30332
[email protected]
Boyue Li
Carnegie Mellon University
Pittsburgh, PA, 15213
[email protected]
Geoff Gordon
Carnegie Mellon University
Pittsburgh, PA, 15213
[email protected]
Abstract
We present a new model, Predictive State Recurrent Neural Networks (PSRNNs),
for filtering and prediction in dynamical systems. PSRNNs draw on insights from
both Recurrent Neural Networks (RNNs) and Predictive State Representations
(PSRs), and inherit advantages from both types of models. Like many successful
RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer
functions to combine information from multiple sources. We show that such bilinear
functions arise naturally from state updates in Bayes filters like PSRs, in which
observations can be viewed as gating belief states. We also show that PSRNNs
can be learned effectively by combining Backpropagation Through Time (BPTT)
with an initialization derived from a statistically consistent learning algorithm
for PSRs called two-stage regression (2SR). Finally, we show that PSRNNs can
be factorized using tensor decomposition, reducing model size and suggesting
interesting connections to existing multiplicative architectures such as LSTMs and
GRUs. We apply PSRNNs to 4 datasets, and show that we outperform several
popular alternative approaches to modeling dynamical systems in all cases.
1 Introduction
Learning to predict temporal sequences of observations is a fundamental challenge in a range of
disciplines including machine learning, robotics, and natural language processing. While there are a
wide variety of different approaches to modelling time series data, many of these approaches can be
categorized as either recursive Bayes Filtering or Recurrent Neural Networks.
Bayes Filters (BFs) [1] focus on modeling and maintaining a belief state: a set of statistics, which,
if known at time t, are sufficient to predict all future observations as accurately as if we know the
full history. The belief state is generally interpreted as the statistics of a distribution over the latent
state of a data generating process, conditioned on history. BFs recursively update the belief state by
conditioning on new observations using Bayes rule. Examples of common BFs include sequential
filtering in Hidden Markov Models (HMMs) [2] and Kalman Filters (KFs) [3].
Predictive State Representations [4] (PSRs) are a variation on Bayes filters that do not define system
state explicitly, but proceed directly to a representation of state as the statistics of a distribution
of features of future observations, conditioned on history. By defining the belief state in terms of
observables rather than latent states, PSRs can be easier to learn than other filtering methods [5-7].
PSRs also support rich functional forms through kernel mean map embeddings [8], and a natural
interpretation of model update behavior as a gating mechanism. This last property is not unique to
PSRs, as it is also possible to interpret the model updates of other BFs such as HMMs in terms of
gating.
Due to their probabilistic grounding, BFs and PSRs possess a strong statistical theory leading
to efficient learning algorithms. In particular, method-of-moments algorithms provide consistent
parameter estimates for a range of BFs including PSRs [5, 7, 9-11]. Unfortunately, current versions
of method-of-moments initialization restrict BFs to relatively simple functional forms such as linear-Gaussian (KFs) or linear-multinomial (HMMs).
Recurrent Neural Networks (RNNs) are an alternative to BFs that model sequential data via a
parameterized internal state and update function. In contrast to BFs, RNNs are directly trained to
minimize output prediction error, without adhering to any axiomatic probabilistic interpretation.
Examples of popular RNN models include Long-Short Term Memory networks [12] (LSTMs), Gated
Recurrent Units [13] (GRUs), and simple recurrent networks such as Elman networks [14].
RNNs have several advantages over BFs. Their flexible functional form supports large, rich models.
And, RNNs can be paired with simple gradient-based training procedures that achieve state-of-the-art
performance on many tasks [15]. RNNs also have drawbacks however: unlike BFs, RNNs lack an
axiomatic probabilistic interpretation, and are therefore difficult to analyze. Furthermore, despite
strong performance in some domains, RNNs are notoriously difficult to train; in particular it is
difficult to find good initializations.
In summary, RNNs and BFs offer complementary advantages and disadvantages: RNNs offer rich
functional forms at the cost of statistical insight, while BFs possess a sophisticated statistical theory
but are restricted to simpler functional forms in order to maintain tractable training and inference. By
drawing insights from both Bayes Filters and RNNs we develop a novel hybrid model, Predictive
State Recurrent Neural Networks (PSRNNs). Like many successful RNN architectures, PSRNNs
use (potentially deeply composed) bilinear transfer functions to combine information from multiple
sources. We show that such bilinear functions arise naturally from state updates in Bayes filters like
PSRs, in which observations can be viewed as gating belief states. We show that PSRNNs directly
generalize discrete PSRs, and can be learned effectively by combining Backpropogation Through
Time (BPTT) with an approximately consistent method-of-moments initialization based on two-stage
regression. We also show that PSRNNs can be factorized using tensor decomposition, reducing model
size and suggesting interesting connections to existing multiplicative architectures such as LSTMs.
2 Related Work
It is well known that a principled initialization can greatly increase the effectiveness of local search
heuristics. For example, Boots [16] and Zhang et al. [17] use subspace ID to initialize EM for linear
dyanmical systems, and Ko and Fox [18] use N4SID [19] to initialize GP-Bayes filters.
Pasa et al. [20] propose an HMM-based pre-training algorithm for RNNs by first training an HMM,
then using this HMM to generate a new, simplified dataset, and, finally, initializing the RNN weights
by training the RNN on this dataset.
Belanger and Kakade [21] propose a two-stage algorithm for learning a KF on text data. Their
approach consists of a spectral initialization, followed by fine tuning via EM using the ASOS method
of Martens [22]. They show that this approach has clear advantages over either spectral learning or
BPTT in isolation. Despite these advantages, KFs make restrictive linear-Gaussian assumptions that
preclude their use on many interesting problems.
Downey et al. [23] propose a two-stage algorithm for learning discrete PSRs, consisting of a spectral
initialization followed by BPTT. While that work is similar in spirit to the current paper, it is still an
attempt to optimize a BF using BPTT rather than an attempt to construct a true hybrid model. This
results in several key differences: they focus on the discrete setting, and they optimize only a subset
of the model parameters.
Haarnoja et al. [24] also recognize the complementary advantages of Bayes Filters and RNNs, and
propose a new network architecture attempting to combine some of the advantages of both. Their
approach differs substantially from ours as they propose a network consisting of a Bayes Filter
concatenated with an RNN, which is then trained end-to-end via backprop. In contrast our entire
network architecture has a dual interpretation as both a Bayes filter and a RNN. Because of this,
our entire network can be initialized via an approximately consistent method of moments algorithm,
something not possible in [24].
Finally, Kossaifi et al. [25] also apply tensor decomposition in the neural network setting. They
propose a novel neural network layer, based on low rank tensor factorization, which can directly
process tensor input. This is in contrast to a standard approach where the data is flattened to a vector.
While they also recognize the strength of the multilinear structure implied by tensor weights, both
their setting and their approach differ from ours: they focus on factorizing tensor input data, while
we focus on factorizing parameter tensors which arise naturally from a kernelized interpretation of
Bayes rule.
3 Background
3.1 Predictive State Representations
Predictive state representations (PSRs) [4] are a class of models for filtering, prediction, and simulation
of discrete time dynamical systems. PSRs provide a compact representation of a dynamical system
by representing state as a set of predictions of features of future observations.
Let f_t = f(o_{t:t+k-1}) be a vector of features of future observations and let h_t = h(o_{1:t-1}) be a
vector of features of historical observations. Then the predictive state is q_t = q_{t|t-1} = E[f_t | o_{1:t-1}].
The features are selected such that q_t determines the distribution of future observations P(o_{t:t+k-1} | o_{1:t-1}).1 Filtering is the process of mapping a predictive state q_t to q_{t+1} conditioned on o_t, while
prediction maps a predictive state q_t = q_{t|t-1} to q_{t+j|t-1} = E[f_{t+j} | o_{1:t-1}] without intervening
observations.
PSRs were originally developed for discrete data as a generalization of existing Bayes Filters such as
HMMs [4]. However, by leveraging the recent concept of Hilbert Space embeddings of distributions
[26], we can embed a PSR in a Hilbert Space, and thereby handle continuous observations [8].
Hilbert Space Embeddings of PSRs (HSE-PSRs) [8] represent the state as one or more nonparametric
conditional embedding operators in a Reproducing Kernel Hilbert Space (RKHS) [27] and use Kernel
Bayes Rule (KBR) [26] to estimate, predict, and update the state.
For a full treatment of HSE-PSRs see [8]. Let kf , kh , ko be translation invariant kernels [28] defined
on ft , ht , and ot respectively. We use Random Fourier Features [28] (RFF) to define projections
φ_t = RFF(f_t), η_t = RFF(h_t), and ω_t = RFF(o_t) such that k_f(f_i, f_j) ≈ φ_i^T φ_j, k_h(h_i, h_j) ≈ η_i^T η_j, and k_o(o_i, o_j) ≈ ω_i^T ω_j. Using this notation, the HSE-PSR predictive state is q_t = E[φ_t | o_{1:t-1}].
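As a concrete illustration, such RFF projections can be sketched in a few lines of Python; the Gaussian-kernel choice and the shapes below are illustrative assumptions rather than the paper's exact configuration (see the experimental setup for the settings actually used):

    import numpy as np

    def rff(X, n_features, bandwidth, seed=0):
        # Random Fourier Features approximating a Gaussian kernel [28]:
        # k(x, y) = exp(-||x - y||^2 / (2 bandwidth^2)) ~ rff(x) . rff(y)
        rng = np.random.RandomState(seed)
        W = rng.normal(scale=1.0 / bandwidth, size=(X.shape[1], n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # e.g. phi = rff(future_feats, 2000, bw); eta = rff(hist_feats, 2000, bw)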
Formally an HSE-PSR (hereafter simply referred to as a PSR) consists of an initial state b1 , a 3-mode
update tensor W , and a 3-mode normalization tensor Z. The PSR update equation is
q_{t+1} = (W ×_3 q_t)(Z ×_3 q_t)^{-1} ×_2 o_t    (1)
where ×_i denotes tensor multiplication along the i-th mode of the preceding tensor. In some settings (such
as with discrete data) it is possible to read off the observation probability directly from W ×_3 q_t;
however, in order to generalize to continuous observations with RFF features we include Z as a
separate parameter.
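A minimal sketch of one filtering step (Eq. 1) in Python follows; the tensor shapes (state dimension s, observation-feature dimension d) are our own assumption for illustration:

    import numpy as np

    def psr_update(W, Z, q, o):
        # One HSE-PSR filtering step (Eq. 1).
        # W: (s, d, s) update tensor, Z: (d, d, s) normalization tensor,
        # q: (s,) predictive state, o: (d,) observation features.
        A = np.einsum('ijk,k->ij', W, q)   # W x_3 q_t  -> (s, d)
        B = np.einsum('ijk,k->ij', Z, q)   # Z x_3 q_t  -> (d, d)
        C = A @ np.linalg.inv(B)           # (W x_3 q_t)(Z x_3 q_t)^{-1}
        return C @ o                       # x_2 o_t contracts the obs mode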
3.2 Two-stage Regression
Hefny et al. [7] show that PSRs can be learned by solving a sequence of regression problems. This
approach, referred to as Two-Stage Regression or 2SR, is fast, statistically consistent, and reduces to
simple linear algebra operations. In 2SR the PSR model parameters q1 , W , and Z are learned using
1 For convenience we assume that the system is k-observable: that is, the distribution of all future observations is determined by the distribution of the next k observations. (Note: not by the next k observations themselves.) At the cost of additional notation, this restriction could easily be lifted.
the history features η_t defined earlier via the following set of equations:
q_1 = (1/T) Σ_{t=1}^T φ_t    (2)

W = ( Σ_{t=1}^T φ_{t+1} ⊗ ω_t ⊗ η_t ) ×_3 ( Σ_{t=1}^T η_t ⊗ η_t )^+    (3)

Z = ( Σ_{t=1}^T ω_t ⊗ ω_t ⊗ η_t ) ×_3 ( Σ_{t=1}^T η_t ⊗ η_t )^+    (4)
where ^+ is the Moore-Penrose pseudo-inverse. It is possible to view (2)-(4) as first estimating predictive
states by regression from history (stage 1) and then estimating parameters W and Z by regression
among predictive states (stage 2), hence the name Two-Stage Regression; for details see [7]. Finally
in practice we use ridge regression in order to improve model stability, and minimize the destabilizing
effect of rare events while preserving consistency. We could instead use nonlinear predictors in stage
1, but with RFF features, linear regression has been sufficient for our purposes.2 Once we learn model
parameters, we can apply the filtering equation (1) to obtain predictive states q1:T .
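A sketch of these estimates in Python is below; it transcribes Eqs. (2)-(4) literally, with a ridge term added to the pseudo-inverse for stability as just described. The array layout (one row of features per time step) is an assumption for illustration:

    import numpy as np

    def two_stage_regression(phi, omega, eta, ridge=1e-2):
        # phi, omega, eta: (T, dim) arrays of future, observation, and
        # history features. Returns the 2SR estimates of q1, W, and Z.
        T = phi.shape[0] - 1                       # phi_{t+1} needs t < T
        q1 = phi.mean(axis=0)                      # Eq. (2)
        G = eta[:T].T @ eta[:T] + ridge * np.eye(eta.shape[1])
        Ginv = np.linalg.inv(G)                    # ridge-regularized (.)^+
        W = np.einsum('ti,tj,tk->ijk', phi[1:T + 1], omega[:T], eta[:T])
        W = np.einsum('ijk,kl->ijl', W, Ginv)      # Eq. (3)
        Z = np.einsum('ti,tj,tk->ijk', omega[:T], omega[:T], eta[:T])
        Z = np.einsum('ijk,kl->ijl', Z, Ginv)      # Eq. (4)
        return q1, W, Z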
3.3 Tensor Decomposition
The tensor Canonical Polyadic decomposition (CP decomposition) [29] can be viewed as a generalization of the Singular Value Decomposition (SVD) to tensors. If T ∈ R^{d_1 × ... × d_k} is a tensor, then a
CP decomposition of T is:

T = Σ_{i=1}^m a_i^1 ⊗ a_i^2 ⊗ ... ⊗ a_i^k

where a_i^j ∈ R^{d_j} and ⊗ is the Kronecker product. The rank of T is the minimum m such that the
above equality holds. In other words, the CP decomposition represents T as a sum of rank-1 tensors.
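For concreteness, a rank-m CP reconstruction can be written in a few lines of Python (a sketch; the factor layout, one (m, d_j) array per mode, is our own convention):

    import numpy as np

    def cp_reconstruct(factors):
        # factors: list of k arrays, each (m, d_j); row i of the j-th array
        # is the vector a_i^j. Returns the sum of m rank-1 tensors.
        m = factors[0].shape[0]
        T = np.zeros([f.shape[1] for f in factors])
        for i in range(m):
            comp = factors[0][i]
            for f in factors[1:]:
                comp = np.multiply.outer(comp, f[i])
            T += comp
        return T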
4 Predictive State Recurrent Neural Networks
In this section we introduce Predictive State Recurrent Neural Networks (PSRNNs), a new RNN
architecture inspired by PSRs. PSRNNs allow for a principled initialization and refinement via BPTT.
The key contributions which led to the development of PSRNNs are: 1) a new normalization scheme
for PSRs which allows for effective refinement via BPTT; 2) the extention of the 2SR algorithm to a
multilayered architecture; and 3) the optional use of a tensor decomposition to obtain a more scalable
model.
4.1 Architecture
The basic building block of a PSRNN is a 3-mode tensor, which can be used to compute a bilinear
combination of two input vectors. We note that, while bilinear operators are not a new development
(e.g., they have been widely used in a variety of systems engineering and control applications for
many years [30]), the current paper shows how to chain these bilinear components together into a
powerful new predictive model.
Let qt and ot be the state and observation at time t. Let W be a 3-mode tensor, and let q be a vector.
The 1-layer state update for a PSRNN is defined as:
q_{t+1} = (W ×_2 o_t ×_3 q_t + b) / ‖W ×_2 o_t ×_3 q_t + b‖_2    (5)
Here the 3-mode tensor of weights W and the bias vector b are the model parameters. This architecture
is illustrated in Figure 1a. It is similar, but not identical, to the PSR update (Eq. 1); sec 3.1 gives
2 Note that we can train a regression model to predict any quantity from the state. This is useful for general sequence-to-sequence mapping models. However, in this work we focus on predicting future observations.
more detail on the relationship. This model may appear simple, but crucially the tensor contraction
W ×_2 o_t ×_3 q_t integrates information from q_t and o_t multiplicatively, and acts as a gating mechanism,
as discussed in more detail in section 5.
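A sketch of the update (5) in Python, with the same illustrative shape assumptions as the PSR sketch above:

    import numpy as np

    def psrnn_cell(W, b, q, o):
        # One PSRNN state update (Eq. 5): bilinear contraction, bias,
        # then two-norm normalization. W: (s, d, s), b: (s,), q: (s,), o: (d,).
        u = np.einsum('ijk,j,k->i', W, o, q) + b   # W x_2 o_t x_3 q_t + b
        return u / np.linalg.norm(u)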
The typical approach used to increase modeling capability for BFs (including PSRs) is to use an initial
fixed nonlinearity to map inputs up into a higher-dimensional space [31, 30]. PSRNNs incorporate
such a step, via RFFs. However, a multilayered architecture typically offers higher representation
power for a given number of parameters [32].
To obtain a multilayer PSRNN, we stack the 1-layer blocks of Eq. (5) by providing the output of one
layer as the observation for the next layer. (The state input for each layer remains the same.) In this
way we can obtain arbitrarily deep RNNs. This architecture is displayed in Figure 1b.
We choose to chain on the observation (as opposed to on the state) as this architecture leads to a
natural extension of 2SR to multilayered models (see Sec. 4.2). In addition, this architecture is
consistent with the typical approach for constructing multilayered LSTMs/GRUs [12]. Finally, this
architecture is suggested by the full normalized form of an HSE PSR, where the observation is passed
through two layers.
(a) Single Layer PSRNN
(b) Multilayer PSRNN
Figure 1: PSRNN architecture: See equation 5 for details. We omit bias terms to avoid clutter.
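To make the stacking concrete, here is a sketch of the multilayer architecture of Figure 1b, using the psrnn_cell sketch above; the bookkeeping (one (W, b) pair and one state per layer) is our own framing of the description in the text:

    def psrnn_multilayer(params, states, o):
        # params: list of (W, b) pairs, one per layer; states: per-layer states.
        # Each layer's output state is fed as the "observation" of the next.
        new_states, x = [], o
        for (W, b), q in zip(params, states):
            q_new = psrnn_cell(W, b, q, x)
            new_states.append(q_new)
            x = q_new
        return new_states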
4.2 Learning PSRNNs
There are two components to learning PSRNNs: an initialization procedure followed by gradient-based refinement. We first show how a statistically consistent 2SR algorithm derived for PSRs can
be used to initialize the PSRNN model; this model can then be refined via BPTT. We omit the
BPTT equations as they are similar to existing literature, and can be easily obtained via automatic
differentiation in a neural network library such as PyTorch or TensorFlow.
The Kernel Bayes Rule portion of the PSR update (equation 1) can be separated into two terms:
(W ×_3 q_t) and (Z ×_3 q_t)^{-1}. The first term corresponds to calculating the joint distribution, while
the second term corresponds to normalizing the joint to obtain the conditional distribution. In the
discrete case, this is equivalent to dividing the joint distribution of ft+1 and ot by the marginal of ot ;
see [33] for details.
If we remove the normalization term, and replace it with two-norm normalization, the PSR update
becomes q_{t+1} = (W ×_3 q_t ×_2 o_t) / ‖W ×_3 q_t ×_2 o_t‖, which corresponds to calculating the joint distribution (up to a scale
factor), and has the same functional form as our single-layer PSRNN update equation (up to bias).
It is not immediately clear that this modification is reasonable. We show in appendix B that our
algorithm is consistent in the discrete (realizable) setting; however, to our current knowledge we
lose the consistency guarantees of the 2SR algorithm in the full continuous setting. Despite this we
determined experimentally that replacing full normalization with two-norm normalization appears to
have a minimal effect on model performance prior to refinement, and results in improved performance
after refinement. Finally, we note that working with the (normalized) joint distribution in place of the
conditional distribution is a commonly made simplification in the systems literature, and has been
shown to work well in practice [34].
The adaptation of the two-stage regression algorithm of Hefny et al. [7] described above allows us
to initialize 1-layer PSRNNs; we now extend this approach to multilayered PSRNNs. Suppose we
have learned a 1-layer PSRNN P using two-stage regression. We can use P to perform filtering
on a dataset to generate a sequence of estimated states q̂_1, ..., q̂_n. According to the architecture
described in Figure 1b, these states are treated as observations in the second layer. Therefore we
can initialize the second layer by an additional iteration of two-stage regression using our estimated
states q̂_1, ..., q̂_n in place of observations. This process can be repeated as many times as desired to
initialize an arbitrarily deep PSRNN. If the first layer were learned perfectly, the second layer would
be superfluous; however, in practice, we observe that the second layer is able to learn to improve on
the first layer?s performance.
Once we have obtained a PSRNN using the 2SR approach described above, we can use BPTT to
refine the PSRNN. We note that we choose to use 2-norm divisive normalization because it is not
practical to perform BPTT through the matrix inverse required in PSRs: the inverse operation is
ill-conditioned in the neighborhood of any singular matrix. We observe that 2SR provides us with an
initialization which converges to a good local optimum.
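A sketch of this refinement stage in Python/PyTorch follows. All names are illustrative assumptions: W, b are torch tensors holding the 2SR initialization (created with requires_grad=True), q1 is the 2SR initial state, decoder is any differentiable map from states to predicted observations (its parameters are omitted from the optimizer for brevity), and the squared-error objective with plain SGD is a stand-in, not the paper's exact training configuration:

    import torch

    def refine_with_bptt(W, b, q1, observations, targets, decoder,
                         lr=1.0, steps=100):
        # Gradient refinement of a 2SR-initialized PSRNN via BPTT.
        opt = torch.optim.SGD([W, b], lr=lr)
        for _ in range(steps):
            q, loss = q1, 0.0
            for o, y in zip(observations, targets):
                u = torch.einsum('ijk,j,k->i', W, o, q) + b
                q = u / torch.norm(u)                    # Eq. (5)
                loss = loss + torch.sum((decoder(q) - y) ** 2)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return W, b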
4.3 Factorized PSRNNs
In this section we show how the PSRNN model can be factorized to reduce the number of parameters
prior to applying BPTT.
Let (W, b0 ) be a PSRNN block. Suppose we decompose W using CP decomposition to obtain
W = Σ_{i=1}^n a_i ⊗ b_i ⊗ c_i

Let A (similarly B, C) be the matrix whose ith row is a_i (respectively b_i, c_i). Then the PSRNN state
update (equation (5)) becomes (up to normalization):

q_{t+1} = W ×_2 o_t ×_3 q_t + b    (6)
        = (A ⊗ B ⊗ C) ×_2 o_t ×_3 q_t + b    (7)
        = A^T (B o_t ⊙ C q_t) + b    (8)

where ⊙ is the Hadamard product. We call a PSRNN of this form a factorized PSRNN. This
model architecture is illustrated in Figure 2. Using a factorized PSRNN provides us with complete
control over the size of our model via the rank of the factorization. Importantly, it decouples the
number of model parameters from the number of states, allowing us to set these two hyperparameters
independently.
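A sketch of the factorized update (8) in Python; as in the earlier sketches, the shapes (A: (n, s), B: (n, d), C: (n, s), with CP rank n) are assumptions for illustration:

    import numpy as np

    def factorized_psrnn_cell(A, B, C, b, q, o):
        # Factorized PSRNN update (Eq. 8) followed by normalization:
        # q' = A^T (B o_t  *  C q_t) + b, where * is the Hadamard product.
        u = A.T @ ((B @ o) * (C @ q)) + b
        return u / np.linalg.norm(u)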
Figure 2: Factorized PSRNN Architecture
We determined experimentally that factorized PSRNNs are poorly conditioned when compared with
PSRNNs, due to very large and very small numbers often occurring in the CP decomposition. To
alleviate this issue, we need to initialize the bias b in a factorized PSRNN to be a small multiple of
the mean state. This acts to stabilize the model, regularizing gradients and preventing us from moving
away from the good local optimum provided by 2SR.
We note that a similar stabilization happens automatically in randomly initialized RNNs: after the
first few iterations the gradient updates cause the biases to become non-zero, stabilizing the model
and resulting in subsequent gradient descent updates being reasonable. Initialization of the biases is
only a concern for us because we do not want the original model to move away from our carefully
prepared initialization due to extreme gradients during the first few steps of gradient descent.
In summary, we can learn factorized PSRNNs by first using 2SR to initialize a PSRNN, then using CP
decomposition to factorize the tensor model parameters to obtain a factorized PSRNN, then applying
BPTT to refine the factorized PSRNN.
5 Discussion
The value of bilinear units in RNNs was the focus of recent work by Wu et al. [35]. They introduced the
concept of Multiplicative Integration (MI) units, components of the form Ax ⊙ By, and showed
that replacing additive units by multiplicative ones in a range of architectures leads to significantly
improved performance. As Eq. (8) shows, factorizing W leads precisely to an architecture with MI
units.
Modern RNN architectures such as LSTMs and GRUs are known to outperform traditional RNN
architectures on many problems [12]. While the success of these methods is not fully understood,
much of it is attributed to the fact that these architectures possess a gating mechanism which allows
them both to remember information for a long time, and also to forget it quickly. Crucially, we note
that PSRNNs also allow for a gating mechanism. To see this consider a single entry in the factorized
PSRNN update (omitting normalization).
[q_{t+1}]_i = Σ_j A_{ji} ( Σ_k B_{jk} [o_t]_k ) ( Σ_l C_{jl} [q_t]_l ) + b    (9)

The current state q_t will only contribute to the new state if the function Σ_k B_{jk} [o_t]_k of o_t is non-zero.
Otherwise o_t will cause the model to forget this information: the bilinear component of the PSRNN
architecture naturally achieves gating.
We note that similar bilinear forms occur as components of many successful models. For example,
consider the (one layer) GRU update equation:
z_t = σ(W_z o_t + U_z q_t + c_z)
r_t = σ(W_r o_t + U_r q_t + c_r)
q_{t+1} = z_t ⊙ q_t + (1 - z_t) ⊙ σ(W_h o_t + U_h (r_t ⊙ q_t) + c_h)
The GRU update is a convex combination of the existing state q_t and an update term W_h o_t + U_h (r_t ⊙ q_t) + c_h. We see that the core part of this update term, U_h (r_t ⊙ q_t) + c_h, bears a striking similarity to
our factorized PSRNN update. The PSRNN update is simpler, though, since it omits the nonlinearity
σ(·), and hence is able to combine pairs of linear updates inside and outside σ(·) into a single matrix.
Finally, we would like to highlight the fact that, as discussed in section 5, the bilinear form shared in
some form by these models (including PSRNNs) resembles the first component of the Kernel Bayes
Rule update function. This observation suggests that bilinear components are a natural structure to
use when constructing RNNs, and may help explain the success of the above methods over alternative
approaches. This hypothesis is supported by the fact that there are no activation functions (other than
divisive normalization) present in our PSRNN architecture, yet it still manages to achieve strong
performance.
6 Experimental Setup
In this section we describe the datasets, models, model initializations, model hyperparameters, and
evaluation metrics used in our experiments.
We use the following datasets in our experiments:
• Penn Tree Bank (PTB) This is a standard benchmark in the NLP community [36]. Due to
hardware limitations we use a train/test split of 120780/124774 characters.
• Swimmer We consider the 3-link simulated swimmer robot from the open-source package
OpenAI gym.3 The observation model returns the angular position of the nose as well as the
angles of the two joints. We collect 25 trajectories from a robot that is trained to swim forward
(via the cross entropy with a linear policy), with a train/test split of 20/5.
• Mocap This is a Human Motion Capture dataset consisting of 48 skeletal tracks from three human
subjects collected while they were walking. The tracks have 300 timesteps each, and are from
a Vicon motion capture system. We use a train/test split of 40/8. Features consist of the 3D
positions of the skeletal parts (e.g., upper back, thorax, clavicle).
3 https://gym.openai.com/
• Handwriting This is a digit database available on the UCI repository [37, 38] created using a
pressure sensitive tablet and a cordless stylus. Features are x and y tablet coordinates and pressure
levels of the pen at a sampling rate of 100 milliseconds. We use 25 trajectories with a train/test
split of 20/5.
Models compared are LSTMs [30], GRUs [13], basic RNNs [14], KFs [3], PSRNNs, and factorized
PSRNNs. All models except KFs consist of a linear encoder, a recurrent module, and a linear decoder.
The encoder maps observations to a compressed representation; in the context of text data it can be
viewed as a word embedding. The recurrent module maps a state and an observation to a new state
and an output. The decoder maps an output to a predicted observation.4 We initialize the LSTMs and
RNNs with random weights and zero biases according to the Xavier initialization scheme [39]. We
initialize the KF using the 2SR algorithm described in [7]. We initialize PSRNNs and factorized
PSRNNs as described in section 3.1.
In two-stage regression we use a ridge parameter of 10^{-2} n where n is the number of training
examples (this is consistent with the values suggested in [8]). (Experiments show that our approach
works well for a wide variety of hyperparameter values.) We use a horizon of 1 in the PTB experiments,
and a horizon of 10 in all continuous experiments. We use 2000 RFFs from a Gaussian kernel, selected
according to the method of [28], and with the kernel width selected as the median pairwise distance.
We use 20 hidden states, and a fixed learning rate of 1 in all experiments. We use a BPTT horizon of
35 in the PTB experiments, and an infinite BPTT horizon in all other experiments. All models are
single layer unless stated otherwise.
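For reference, the median-pairwise-distance rule for the kernel width mentioned above can be sketched as follows; the exact variant used in the experiments (e.g. distances on raw features versus a subsample of pairs) is an assumption here:

    import numpy as np

    def median_bandwidth(X):
        # Median of the pairwise Euclidean distances among the rows of X.
        diffs = X[:, None, :] - X[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))
        return np.median(dists[np.triu_indices(len(X), k=1)])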
We optimize models on the PTB using Bits Per Character (BPC) and evaluate them using both BPC
and one-step prediction accuracy (OSPA). We optimize and evaluate all continuous experiments using
the Mean Squared Error (MSE).
7 Results
In Figure 3a we compare performance of LSTMs, GRUs, and Factorized PSRNNs on PTB, where
all models have the same number of states and approximately the same number of parameters. To
achieve this we use a factorized PSRNN of rank 60. We see that the factorized PSRNN significantly
outperforms LSTMs and GRUs on both metrics. In Figure 3b we compare the performance of 1- and
2-layer PSRNNs on PTB. We see that adding an additional layer significantly improves performance.
4 This is a standard RNN architecture; e.g., a PyTorch implementation of this architecture for text prediction can be found at https://github.com/pytorch/examples/tree/master/word_language_model.
(a) BPC and OSPA on PTB. All models have the same number of states and approximately the same number of parameters. (b) Comparison between 1- and 2-layer PSRNNs on PTB.
Figure 3: PTB Experiments
(c) Cross-entropy and prediction accuracy on Penn Tree Bank for PSRNNs and factorized PSRNNs of various rank.
In Figure 3c we compare PSRNNs with factorized PSRNNs on the PTB. We see that PSRNNs
outperform factorized PSRNNs regardless of rank, even when the factorized PSRNN has significantly
more model parameters. (In this experiment, factorized PSRNNs of rank 7 or greater have more
model parameters than a plain PSRNN.) This observation makes sense, as the PSRNN provides a
simpler optimization surface: the tensor multiplication in each layer of a PSRNN is linear with respect
to the model parameters, while the tensor multiplication in each layer of a Factorized PSRNN is
bilinear. In addition, we see that higher-rank factorized models outperform lower-rank ones. However,
it is worth noting that even models with low rank still perform well, as demonstrated by our rank 40
model still outperforming GRUs and LSTMs, despite having fewer parameters.
(a) MSE vs Epoch on the Swimmer, Mocap, and Handwriting datasets
(b) Test Data vs Model Prediction on a single feature of Swimmer. The first row shows initial performance. The
second row shows performance after training. In order the columns show KF, RNN, GRU, LSTM, and PSRNN.
Figure 4: Swimmer, Mocap, and Handwriting Experiments
In Figure 4a we compare model performance on the Swimmer, Mocap, and Handwriting datasets.
We see that PSRNNs significantly outperform alternative approaches on all datasets. In Figure 4b we
attempt to gain insight into why using 2SR to initialize our models is so beneficial. We visualize the
one step model predictions before and after BPTT. We see that the behavior of the initialization
has a large impact on the behavior of the refined model. For example the initial (incorrect) oscillatory
behavior of the RNN in the second column is preserved even after gradient descent.
8 Conclusions
We present PSRNNs: a new approach for modelling time-series data that hybridizes Bayes filters with
RNNs. PSRNNs have both a principled initialization procedure and a rich functional form. The basic
PSRNN block consists of a 3-mode tensor, corresponding to bilinear combination of the state and
observation, followed by divisive normalization. These blocks can be arranged in layers to increase
the expressive power of the model. We showed that tensor CP decomposition can be used to obtain
factorized PSRNNs, which allow flexibly selecting the number of states and model parameters. We
showed how factorized PSRNNs can be viewed as both an instance of Kernel Bayes Rule and a gated
architecture, and discussed links to existing multiplicative architectures such as LSTMs. We applied
PSRNNs to 4 datasets and showed that we outperform alternative approaches in all cases.
Acknowledgements The authors gratefully acknowledge support from ONR (grant number
N000141512365) and DARPA (grant number FA87501720152).
References
[1] Sam Roweis and Zoubin Ghahramani. A unifying review of linear gaussian models. Neural
Comput., 11(2):305-345, February 1999. ISSN 0899-7667. doi: 10.1162/089976699300016674.
URL http://dx.doi.org/10.1162/089976699300016674.
[2] Leonard E. Baum and Ted Petrie. Statistical inference for probabilistic functions of finite state
Markov chains. The Annals of Mathematical Statistics, 37:1554-1563, 1966.
[3] R. E. Kalman. A new approach to linear filtering and prediction problems. ASME Journal of
Basic Engineering, 1960.
[4] Michael L. Littman, Richard S. Sutton, and Satinder Singh. Predictive representations of state.
In Advances in Neural Information Processing Systems 14, pages 1555-1561. MIT Press,
2001.
[5] Byron Boots, Sajid Siddiqi, and Geoffrey Gordon. Closing the learning planning loop with
predictive state representations. International Journal of Robotics Research (IJRR), 30:954-956,
2011.
[6] Byron Boots and Geoffrey Gordon. An online spectral learning algorithm for partially observable
nonlinear dynamical systems. In Proceedings of the 25th National Conference on Artificial
Intelligence (AAAI), 2011.
[7] Ahmed Hefny, Carlton Downey, and Geoffrey J. Gordon. Supervised learning for dynamical
system learning. In Advances in Neural Information Processing Systems, pages 1963-1971,
2015.
[8] Byron Boots, Geoffrey J. Gordon, and Arthur Gretton. Hilbert space embeddings of predictive
state representations. CoRR, abs/1309.6819, 2013. URL http://arxiv.org/abs/1309.6819.
[9] Daniel J. Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden
Markov models. CoRR, abs/0811.4413, 2008.
[10] Amirreza Shaban, Mehrdad Farajtabar, Bo Xie, Le Song, and Byron Boots. Learning latent
variable models by improving spectral solutions with exterior point methods. In Proceedings of
The International Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
[11] Peter Van Overschee and Bart De Moor. N4sid: numerical algorithms for state space subspace
system identification. In Proc. of the World Congress of the International Federation of
Automatic Control, IFAC, volume 7, pages 361-364, 1993.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.
[13] KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the
properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259,
2014.
[14] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
[15] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural
networks. CoRR, abs/1409.3215, 2014. URL http://arxiv.org/abs/1409.3215.
[16] Byron Boots. Learning stable linear dynamical systems. [Online]. Available: https://www.ml.cmu.edu/research/dap-papers/dap_boots.pdf, 2009.
[17] Yuchen Zhang, Xi Chen, Denny Zhou, and Michael I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In Advances in Neural Information Processing Systems, pages 1260-1268, 2014.
[18] Jonathan Ko and Dieter Fox. Learning gp-bayesfilters via gaussian process latent variable
models. Autonomous Robots, 30(1):3-23, 2011.
[19] Peter Van Overschee and Bart De Moor. N4sid: Subspace algorithms for the identification of
combined deterministic-stochastic systems. Automatica, 30(1):75-93, January 1994. ISSN 0005-1098. doi: 10.1016/0005-1098(94)90230-5. URL http://dx.doi.org/10.1016/0005-1098(94)90230-5.
[20] Luca Pasa, Alberto Testolin, and Alessandro Sperduti. A hmm-based pre-training approach
for sequential data. In 22th European Symposium on Artificial Neural Networks, ESANN
2014, Bruges, Belgium, April 23-25, 2014. URL http://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2014-166.pdf.
[21] David Belanger and Sham Kakade. A linear dynamical system model for text. In Francis
Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine
Learning, volume 37 of Proceedings of Machine Learning Research, pages 833-842, Lille, France, 07-09 Jul 2015. PMLR.
[22] James Martens. Learning the linear dynamical system with asos. In Proceedings of the 27th
International Conference on Machine Learning (ICML-10), pages 743?750, 2010.
[23] Carlton Downey, Ahmed Hefny, and Geoffrey Gordon. Practical learning of predictive state
representations. Technical report, Carnegie Mellon University, 2017.
[24] Tuomas Haarnoja, Anurag Ajay, Sergey Levine, and Pieter Abbeel. Backprop kf: Learning
discriminative deterministic state estimators. In Advances in Neural Information Processing
Systems, pages 4376-4384, 2016.
[25] Jean Kossaifi, Zachary C Lipton, Aran Khanna, Tommaso Furlanello, and Anima Anandkumar.
Tensor regression networks. arXiv preprint arXiv:1707.08308, 2017.
[26] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13-31.
Springer, 2007.
[27] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404, 1950.
[28] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances
in Neural Information Processing Systems, pages 1177-1184, 2008.
[29] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. Studies in
Applied Mathematics, 6(1-4):164-189, 1927.
[30] Lennart Ljung. System identification. Wiley Online Library, 1999.
[31] Le Song, Byron Boots, Sajid M. Siddiqi, Geoffrey J. Gordon, and Alex J. Smola. Hilbert space
embeddings of hidden Markov models. In Johannes Fürnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 991-998. Omnipress, 2010. URL http://www.icml2010.org/papers/495.pdf.
[32] Yoshua Bengio et al. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
[33] Le Song, Jonathan Huang, Alex Smola, and Kenji Fukumizu. Hilbert space embeddings of
conditional distributions with applications to dynamical systems. In Proceedings of the 26th
Annual International Conference on Machine Learning, pages 961-968. ACM, 2009.
[34] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT press, 2005.
[35] Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On
multiplicative integration with recurrent neural networks. CoRR, abs/1606.06630, 2016. URL
http://arxiv.org/abs/1606.06630.
[36] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated
corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[37] Fevzi Alimoglu and E. Alpaydin. Pen-Based Recognition of Handwritten Digits Data Set.
https://archive.ics.uci.edu/ml/datasets/Pen-Based+Recognition+of+Handwritten+Digits.
[38] E Alpaydin and Fevzi Alimoglu. Pen-based recognition of handwritten digits data set. University
of California, Irvine, Machine Learning Repository. Irvine: University of California, 1998.
[39] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward
neural networks. In In Proceedings of the International Conference on Artificial Intelligence
and Statistics (AISTATS?10). Society for Artificial Intelligence and Statistics, 2010.
Early stopping for kernel boosting algorithms: A
general analysis with localized complexities
Yuting Wei1
Fanny Yang2*
Martin J. Wainwright1,2
Department of Statistics1
Department of Electrical Engineering and Computer Sciences2
UC Berkeley
Berkeley, CA 94720
{ytwei, fanny-yang, wainwrig}@berkeley.edu
Abstract
Early stopping of iterative algorithms is a widely-used form of regularization
in statistics, commonly used in conjunction with boosting and related gradient-type algorithms. Although consistency results have been established in some
settings, such estimators are less well-understood than their analogues based on
penalized regularization. In this paper, for a relatively broad class of loss functions
and boosting algorithms (including L2 -boost, LogitBoost and AdaBoost, among
others), we exhibit a direct connection between the performance of a stopped
iterate and the localized Gaussian complexity of the associated function class.
This connection allows us to show that local fixed point analysis of Gaussian or
Rademacher complexities, now standard in the analysis of penalized estimators,
can be used to derive optimal stopping rules. We derive such stopping rules in
detail for various kernel classes, and illustrate the correspondence of our theory
with practice for Sobolev kernel classes.
1 Introduction
While non-parametric models offer great flexibility, they can also lead to overfitting, and thus poor
generalization performance. For this reason, procedures for fitting non-parametric models must
involve some form of regularization, most commonly done by adding some type of penalty to the
objective function. An alternative form of regularization is based on the principle of early stopping, in
which an iterative algorithm is terminated after a pre-specified number of steps prior to convergence.
While the idea of early stopping is fairly old (e.g., [31, 1, 35]), recent years have witnessed renewed
interest in its properties, especially in the context of boosting algorithms and neural network training
(e.g., [25, 12]). Over the past decade, a line of work has yielded some theoretical insight into early
stopping, including works on classification error for boosting algorithms [3, 13, 18, 23, 39, 40],
L2 -boosting algorithms for regression [8, 7], and similar gradient algorithms in reproducing kernel
Hilbert spaces (e.g. [11, 10, 34, 39, 26]). A number of these papers establish consistency results for
particular forms of early stopping, guaranteeing that the procedure outputs a function with statistical
error that converges to zero as the sample size increases. On the other hand, there are relatively
few results that actually establish rate optimality of an early stopping procedure, meaning that the
achieved error matches known statistical minimax lower bounds. To the best of our knowledge,
Bühlmann and Yu [8] were the first to prove optimality for early stopping of L2-boosting as applied
* Yuting Wei and Fanny Yang contributed equally to this work.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
to spline classes, albeit with a rule that was not computable from the data. Subsequent work by
Raskutti et al. [26] refined this analysis of L2 -boosting for kernel classes and first established an
important connection to the localized Rademacher complexity; see also the related work [39, 27, 9]
with rates for particular kernel classes.
More broadly, relative to our rich and detailed understanding of regularization via penalization
(e.g., see the books [17, 33, 32, 37] and papers [2, 20] for details), the theory for early stopping
regularization is still not as well developed. In particular, for penalized estimators, it is now wellunderstood that complexity measures such as the localized Gaussian width, or its Rademacher
analogue, can be used to characterize their achievable rates [2, 20, 32, 37]. Is such a general and sharp
characterization also possible in the context of early stopping? The main contribution of this paper
is to answer this question in the affirmative for boosting algorithms in regression and classification
problems involving functions in reproducing kernel Hilbert spaces (RKHS).
The remainder of this paper is organized as follows. In Section 2, we provide background on
boosting methods and reproducing kernel Hilbert spaces, and then introduce the updates studied in
this paper. Section 3 is devoted to statements of our main results, followed by a discussion of their
consequences for particular function classes in Section 4. We provide simulations that confirm the
practical effectiveness of our stopping rules and show close agreement with our theoretical predictions.
The proofs for all of our results can be found in the supplemental material.
2 Background and problem formulation
The goal of prediction is to learn a function that maps covariates x ∈ X to responses y ∈ Y. In a
regression problem, the responses are typically real-valued, whereas in a classification problem, the
responses take values in a finite set. In this paper, we study both regression (Y = R) and classification
problems (e.g., Y = {-1, +1} in the binary case) where we observe a collection of n pairs of the
form {(x_i, Y_i)}_{i=1}^n, with fixed covariates x_i ∈ X and corresponding random responses Y_i ∈ Y drawn
independently from a distribution P_{Y|x_i}. In this section, we provide some necessary background on a
gradient-type algorithm which is often referred to as a boosting algorithm.
2.1 Boosting and early stopping
Consider a cost function φ : R × R → [0, ∞), where the non-negative scalar φ(y, θ) denotes the cost
associated with predicting θ when the true response is y. Some common examples of loss functions
that we consider in later sections include:
• the least-squares loss φ(y, θ) := (1/2)(y - θ)^2 that underlies L2-boosting [8],
• the logistic regression loss φ(y, θ) = ln(1 + e^{-yθ}) that underlies the LogitBoost algorithm [14, 15], and
• the exponential loss φ(y, θ) = exp(-yθ) that underlies the AdaBoost algorithm [13].
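For concreteness, these three losses can be written directly in Python (a sketch; y ∈ {-1, +1} is assumed for the latter two):

    import numpy as np

    def least_squares(y, theta):     # underlies L2-boosting
        return 0.5 * (y - theta) ** 2

    def logistic(y, theta):          # underlies LogitBoost
        return np.log1p(np.exp(-y * theta))

    def exponential(y, theta):       # underlies AdaBoost
        return np.exp(-y * theta)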
The least-squares loss is typically used for regression problems (e.g., [8, 11, 10, 34, 39, 26]), whereas
the latter two losses are frequently used in the setting of binary classification (e.g., [13, 23, 15]).
Given some loss function φ and function space F, we define the population cost functional f ↦ L(f)
and the corresponding optimal (minimizing) function† via

L(f) := E_{Y_1^n} [ (1/n) Σ_{i=1}^n φ(Y_i, f(x_i)) ],    f^* := arg min_{f ∈ F} L(f).    (1)
Note that with the covariates {x_i}_{i=1}^n fixed, the functional L is a non-random object. As a standard
example, when we adopt the least-squares loss φ(y, θ) = (1/2)(y - θ)^2, the population minimizer
f^* corresponds to the conditional expectation x ↦ E[Y | x]. Since we do not have access to the
population distribution of the responses however, the computation of f^* is impossible. Given our
samples {Y_i}_{i=1}^n, we consider instead some procedure applied to the empirical loss
L_n(f) := (1/n) Σ_{i=1}^n φ(Y_i, f(x_i)),    (2)
† As clarified in the sequel, our assumptions guarantee uniqueness of f^*.
where the population expectation has been replaced by an empirical expectation. For example, when
L_n corresponds to the negative log likelihood of the samples with φ(Y_i, f(x_i)) = -log[P(Y_i; f(x_i))], direct
unconstrained minimization of Ln would yield the maximum likelihood estimator.
It is well-known that direct minimization of Ln over a rich function class F may lead to overfitting.
A classical method to mitigate this phenomenon is to minimize the sum of the empirical loss with a
penalty term. Adjusting the weight on the regularization term allows for trade-off between fit to the
data, and some form of regularity or smoothness of the fit. The behavior of such penalized estimation
methods is quite well understood (see e.g. the books [17, 33, 32, 37] and papers [2, 20] for details).
In this paper, we study a form of algorithmic regularization, based on applying a gradient-type
algorithm to L_n. In particular, we consider boosting algorithms (see the survey paper [7]) which involve
"boosting" or improving the fit of a function via a sequence of additive updates (see e.g. [28, 13, 6, 5, 29])
and can be understood as forms of functional gradient methods [23, 15]. Instead of running until
convergence, we then stop it "early", that is, after some fixed number of steps. The way in which
the number of steps is chosen is referred to as a stopping rule, and the overall procedure is referred to
as early stopping of a boosting algorithm.
Figure 1: Plots of the squared error ‖f^t - f^*‖_n^2 = (1/n) Σ_{i=1}^n (f^t(x_i) - f^*(x_i))^2 versus the iteration number t for (a) LogitBoost using a first-order Sobolev kernel and (b) AdaBoost using the same first-order Sobolev kernel K(x, x') = 1 + min(x, x'), which generates a class of Lipschitz functions (splines of order one). Both plots correspond to a sample size n = 100.
In more detail, a broad class of boosting algorithms [23] generate a sequence {f^t}_{t=0}^∞ via updates of
the form

f^{t+1} = f^t - α^t g^t    with    g^t ∝ arg max_{‖d‖_F ≤ 1} ⟨∇L_n(f^t), d(x_1^n)⟩,    (3)
where the scalars {α^t}_{t=0}^∞ are step sizes chosen by the user, the constraint ‖d‖_F ≤ 1
defines the unit ball in a given function class F, ∇L_n(f) ∈ R^n denotes the gradient taken at
the vector (f(x_1), ..., f(x_n)), and ⟨h, g⟩ is the usual inner product between vectors h, g ∈ R^n.
For non-decaying step sizes and a convex objective Ln , running this procedure for an infinite
number of iterations will lead to a minimizer of the empirical loss, thus causing overfitting. In
order to illustrate this phenomenon, Figure 1 provides plots of the squared error
‖f^t - f^*‖_n^2 := (1/n) Σ_{i=1}^n (f^t(x_i) - f^*(x_i))^2 versus the iteration number, for LogitBoost in panel (a) and AdaBoost
in panel (b). (See Section 4.2 for more details on how these experiments were set up.)
In these plots, the dotted line indicates the minimum mean-squared error δ_n^2 over all iterates of that
particular run of the algorithm. Both plots are qualitatively similar, illustrating the existence of a
"good" number of iterations to take, after which the MSE greatly increases. Hence a natural problem
is to decide at what iteration T to stop such that the iterate f^T satisfies bounds of the form

L(f^T) - L(f^*) ≲ δ_n^2    and    ‖f^T - f^*‖_n^2 ≲ δ_n^2    (4)
the form (4) do in fact hold with high probability over the randomness in the observed responses.
Moreover, as shown by our later results, under suitable regularity conditions, the expectation of the minimum squared error δ_n^2 is proportional to the statistical minimax risk
inf_{f̂} sup_{f^* ∈ F} E[L(f̂) - L(f^*)], where the infimum is taken over all possible estimators f̂. Coupled with our stopping time guarantee (4) this implies that our estimate achieves the minimax risk up
to constant factors. As a result, our bounds are unimprovable in general (see Corollary 1).
2.2 Reproducing Kernel Hilbert Spaces
The analysis of this paper focuses on algorithms with the update (3) when the function class F is
a reproducing kernel Hilbert space H (RKHS, see standard sources [36, 16, 30, 4]), consisting of
functions mapping a domain X to the real line R. Any RKHS is defined by a bivariate symmetric
kernel function K : X × X → R which is required to be positive semidefinite, i.e. for any integer
N ≥ 1 and any collection of points {x_j}_{j=1}^N in X, the matrix [K(x_i, x_j)]_{ij} ∈ R^{N×N} is positive
semidefinite. The associated RKHS is the closure of the linear span of functions of the form f(·) = Σ_{j≥1} ω_j K(·, x_j),
semidefinite. The associated RKHS is the closure of linear span of the form f (?) = j 1 !j K(?, xj ),
1
where {xj }1
j=1 is some collection of points in X , and {!j }j=1 is a real-valued sequence. For
P `1
two functions f1 , f2 2 H which can be expressed as a finite sum f1 (?) = i=1
? K(?, xi ) and
P `2
P ` 1 P `2 i
f2 (?) = j=1 j K(?, xj ), the inner product is defined as hf1 , f2 iH = i=1 j=1 ?i j K(xi , xj )
P `1 2
with induced norm kf1 k2H = i=1
?i K(xi , xi ). For each x 2 X , the function K(?, x) belongs to
H , and satisfies the reproducing relation hf, K(?, x)iH = f (x) for all f 2 H .
Throughout this paper, we assume that the kernel function is uniformly bounded, meaning that there is a constant $L$ such that $\sup_{x \in \mathcal{X}} K(x, x) \le L$. Such a boundedness condition holds for many kernels used in practice, including the Gaussian, Laplacian, and Sobolev kernels, other types of spline kernels, as well as any trace class kernel with trigonometric eigenfunctions. By rescaling the kernel as necessary, we may assume without loss of generality that $L = 1$. As a consequence, for any function $f$ such that $\|f\|_{\mathcal{H}} \le r$, we have by the reproducing relation that
$$\|f\|_\infty = \sup_x \langle f, K(\cdot, x) \rangle_{\mathcal{H}} \le \|f\|_{\mathcal{H}} \sup_x \|K(\cdot, x)\|_{\mathcal{H}} \le r.$$
Given samples $\{(x_i, y_i)\}_{i=1}^n$, by the representer theorem [19], it is sufficient to restrict ourselves to the linear subspace $\mathcal{H}_n = \mathrm{span}\{K(\cdot, x_i)\}_{i=1}^n$, for which every $f \in \mathcal{H}_n$ can be expressed as
$$f = \frac{1}{\sqrt{n}} \sum_{i=1}^n \omega_i K(\cdot, x_i) \qquad (5)$$
for some coefficient vector $\omega \in \mathbb{R}^n$. Among those functions which achieve the infimum in expression (1), let us define $f^*$ as the one with minimum Hilbert norm. This definition is equivalent to restricting $f^*$ to be in the linear subspace $\mathcal{H}_n$.
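To make these definitions concrete, the following minimal Python/NumPy sketch (the design points and coefficient vector are illustrative assumptions, not taken from the paper) builds the first-order Sobolev kernel used in our experiments, checks positive semidefiniteness of the normalized kernel matrix, and evaluates a function of the form (5) at the design points.

```python
import numpy as np

def sobolev_kernel(x, xp):
    """First-order Sobolev kernel K(x, x') = 1 + min(x, x') on [0, 1]."""
    return 1.0 + np.minimum(x, xp)

n = 100
x = np.linspace(0.0, 1.0, n)                        # illustrative design points
K = sobolev_kernel(x[:, None], x[None, :]) / n      # normalized matrix, K_ij = K(x_i, x_j)/n

# Positive semidefiniteness: all eigenvalues nonnegative up to round-off.
assert np.linalg.eigvalsh(K).min() > -1e-10

# A function in H_n evaluated at the design points, via expression (5):
# f(x_1^n) = sqrt(n) * K @ omega for a coefficient vector omega in R^n.
omega = np.random.randn(n)
f_vals = np.sqrt(n) * (K @ omega)
```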
2.3 Boosting in kernel spaces
For a finite set of covariates $x_i$, $i = 1, \ldots, n$, let us define the normalized kernel matrix $K \in \mathbb{R}^{n \times n}$ with entries $K_{ij} = K(x_i, x_j)/n$. Since we can restrict the minimization of $L_n$ and $L$ from $\mathcal{H}$ to the subspace $\mathcal{H}_n$ w.l.o.g., using expression (5) we can then write the function value vectors $f(x_1^n) := (f(x_1), \ldots, f(x_n))$ as $f(x_1^n) = \sqrt{n}\, K \omega$. As there is a one-to-one correspondence between the $n$-dimensional vectors $f(x_1^n) \in \mathbb{R}^n$ and the corresponding functions $f \in \mathcal{H}_n$ by the representer theorem, minimization of an empirical loss in the subspace $\mathcal{H}_n$ essentially becomes the $n$-dimensional problem of fitting a response vector $y$ over the set $\mathrm{range}(K)$. In the sequel, all updates will thus be performed on the function value vectors $f(x_1^n)$.

With the change of variable $d(x_1^n) = \sqrt{n}\, K z$ we then have
$$d^t(x_1^n) := \arg\max_{\|d\|_{\mathcal{H}} \le 1} \langle \nabla L_n(f^t), d(x_1^n) \rangle = \frac{\sqrt{n}\, K \nabla L_n(f^t)}{\sqrt{\nabla L_n(f^t)^\top K \nabla L_n(f^t)}},$$
where the maximum is taken over vectors $d \in \mathrm{range}(K)$. In this paper we study the choice $g^t = \langle \nabla L_n(f^t), d^t(x_1^n) \rangle\, d^t$ in the boosting update (3), so that the function value iterates take the form
$$f^{t+1}(x_1^n) = f^t(x_1^n) - \alpha\, n K \nabla L_n(f^t), \qquad (6)$$
where $\alpha > 0$ is a constant stepsize choice. Choosing $f^0(x_1^n) = 0$ ensures that all iterates $f^t(x_1^n)$ remain in the range space of $K$. Our goal is to propose a stopping time $T$ such that the averaged function $\bar f^T := \frac{1}{T} \sum_{t=1}^T f^t$ satisfies bounds of the type (4). Importantly, we exhibit such bounds with a statistical error term $\delta_n$ that is specified by the localized Gaussian complexity of the kernel class.
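As a hedged illustration of update (6), the sketch below instantiates the recursion for the least-squares cost, for which $\nabla L_n(f) = (f(x_1^n) - y)/n$ under the normalization $L_n(f) = \frac{1}{2n}\sum_i (y_i - f(x_i))^2$; it is a minimal re-implementation under that assumption, not the authors' released code.

```python
import numpy as np

def kernel_boost_ls(K, y, alpha, T):
    """Kernel boosting updates (6), specialized to the least-squares loss
    L_n(f) = (1/2n) * sum_i (y_i - f(x_i))^2, for which the gradient at the
    function-value vector is nabla L_n(f) = (f(x_1^n) - y)/n.  K is the
    normalized kernel matrix with entries K(x_i, x_j)/n."""
    n = len(y)
    f = np.zeros(n)                     # f^0(x_1^n) = 0 keeps iterates in range(K)
    running_sum = np.zeros(n)
    for _ in range(T):
        grad = (f - y) / n              # nabla L_n(f^t)
        f = f - alpha * n * (K @ grad)  # update (6)
        running_sum += f
    return running_sum / T              # averaged estimate f_bar^T analyzed below
```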
3 Main results

We now turn to the statement of our main results, beginning with the introduction of some regularity assumptions.

3.1 Assumptions
Recall from our earlier set-up that we differentiate between the empirical loss function $L_n$ in expression (2), and the population loss $L$ in expression (1). Apart from assuming differentiability of both functions, all of our remaining conditions are imposed on the population loss. Such conditions at the population level are weaker than their analogues at the empirical level.

For a given radius $r > 0$, let us define the Hilbert ball around the optimal function $f^*$ as
$$B_{\mathcal{H}}(f^*, r) := \{f \in \mathcal{H} \mid \|f - f^*\|_{\mathcal{H}} \le r\}. \qquad (7)$$
Our analysis makes particular use of this ball defined for the squared radius $C_{\mathcal{H}}^2 := 2 \max\{\|f^*\|_{\mathcal{H}}^2, 32\}$. We assume that the population loss is $m$-strongly convex and $M$-smooth over $B_{\mathcal{H}}(f^*, 2C_{\mathcal{H}})$, meaning that the sandwich inequality
$$\frac{m}{2} \|f - g\|_n^2 \;\le\; L(f) - L(g) - \langle \nabla L(g), f(x_1^n) - g(x_1^n) \rangle \;\le\; \frac{M}{2} \|f - g\|_n^2 \qquad (m\text{-}M\text{-condition})$$
holds for all $f, g \in B_{\mathcal{H}}(f^*, 2C_{\mathcal{H}})$. To be clear, here $\nabla L(g)$ denotes the vector in $\mathbb{R}^n$ obtained by taking the gradient of $L$ with respect to the vector $g(x_1^n)$. It can be verified by a straightforward computation that when $L$ is induced by the least-squares cost $\phi(y, \theta) = \frac{1}{2}(y - \theta)^2$, the $m$-$M$-condition holds with $m = M = 1$. The logistic and exponential losses satisfy this condition (see supplementary material), where it is key that we have imposed the condition only locally on the ball $B_{\mathcal{H}}(f^*, 2C_{\mathcal{H}})$.
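For concreteness, the least-squares claim can be checked directly under our reading of the fixed-design population loss, $L(f) = \frac{1}{n}\sum_i \mathbb{E}[\phi(Y_i, f(x_i))]$. Writing $f_i := f(x_i)$, a one-line expansion gives the sandwich with $m = M = 1$:

```latex
% Worked check of the m-M-condition for \phi(y,\theta) = \tfrac{1}{2}(y - \theta)^2:
\begin{aligned}
L(f) - L(g) - \langle \nabla L(g),\, f(x_1^n) - g(x_1^n) \rangle
&= \frac{1}{n}\sum_{i=1}^n \Big[(f_i - g_i)\big(\tfrac{1}{2}(f_i + g_i) - \mathbb{E}[Y_i]\big)
   - (f_i - g_i)\big(g_i - \mathbb{E}[Y_i]\big)\Big] \\
&= \frac{1}{2n}\sum_{i=1}^n (f_i - g_i)^2 \;=\; \frac{1}{2}\,\|f - g\|_n^2 .
\end{aligned}
```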
In addition to the least-squares cost, our theory also applies to losses $L$ induced by scalar functions $\phi$ that satisfy the following condition:
$$\max_{i=1,\ldots,n} \left| \frac{\partial \phi(y, \theta)}{\partial \theta} \right|_{\theta = f(x_i)} \le B \quad \text{for all } f \in B_{\mathcal{H}}(f^*, 2C_{\mathcal{H}}) \text{ and } y \in \mathcal{Y}. \qquad (\phi'\text{-boundedness})$$
This condition holds with $B = 1$ for the logistic loss for all $\mathcal{Y}$, and with $B = \exp(2.5 C_{\mathcal{H}})$ for the exponential loss for binary classification with $\mathcal{Y} = \{-1, 1\}$, using our kernel boundedness condition. Note that whenever this condition holds with some finite $B$, we can always rescale the scalar loss by $1/B$ so that it holds with $B = 1$, and we do so in order to simplify the statement of our results.
3.2 Upper bound in terms of localized Gaussian width
Our upper bounds involve a complexity measure known as the localized Gaussian width. In general, Gaussian widths are widely used to obtain risk bounds for least-squares and other types of M-estimators. In our case, we consider Gaussian complexities for "localized" sets of the form
$$\mathcal{E}_n(\delta, 1) := \left\{ f - g \mid f, g \in \mathcal{H},\ \|f - g\|_{\mathcal{H}} \le 1,\ \|f - g\|_n \le \delta \right\}. \qquad (8)$$
The Gaussian complexity localized at scale $\delta$ is given by
$$\mathcal{G}_n(\mathcal{E}_n(\delta, 1)) := \mathbb{E}\left[ \sup_{g \in \mathcal{E}_n(\delta, 1)} \left| \frac{1}{n} \sum_{i=1}^n w_i g(x_i) \right| \right], \qquad (9)$$
where $(w_1, \ldots, w_n)$ denotes an i.i.d. sequence of standard Gaussian variables.
An essential quantity in our theory is specified by a certain fixed point equation that is now standard in empirical process theory [32, 2, 20, 26]. Let us define the effective noise level
$$\sigma := \begin{cases} \min\left\{ t \mid \max_{i=1,\ldots,n} \mathbb{E}\left[ e^{(Y_i - f^*(x_i))^2 / t^2} \right] < \infty \right\} & \text{for least squares,} \\ 4(2M + 1)(1 + 2C_{\mathcal{H}}) & \text{for } \phi'\text{-bounded losses.} \end{cases} \qquad (10)$$
The critical radius $\delta_n$ is the smallest positive scalar $\delta$ such that
$$\frac{\mathcal{G}_n(\mathcal{E}_n(\delta, 1))}{\delta} \le \frac{\delta}{\sigma}. \qquad (11)$$
We note that past work on localized Rademacher and Gaussian complexity [24, 2] guarantees that there exists a unique $\delta_n > 0$ that satisfies this condition, so that our definition is sensible.
3.2.1 Upper bounds on excess risk and empirical $L^2(P_n)$-error

With this set-up, we are now equipped to state our main theorem. It provides high-probability bounds on the excess risk and $L^2(P_n)$-error of the estimator $\bar f^T := \frac{1}{T} \sum_{t=1}^T f^t$ defined by averaging the $T$ iterates of the algorithm.
Theorem 1. Consider any loss function satisfying the $m$-$M$-condition and the $\phi'$-boundedness condition (if not least squares), for which we generate function iterates $\{f^t\}_{t=0}^\infty$ of the form (6) with step size $\alpha \in (0, \min\{\frac{m}{M}, \frac{1}{M}, m\}]$, initialized at $f^0 = 0$. Then for all iterations $T = 0, 1, \ldots, \lfloor 1/(8\delta_n^2) \rfloor$, the averaged function estimate $\bar f^T$ satisfies the bounds
$$L(\bar f^T) - L(f^*) \le C M \left( \frac{1}{\alpha m T} + \frac{\delta_n^2}{m} \right), \quad \text{and} \qquad (12a)$$
$$\|\bar f^T - f^*\|_n^2 \le C \left( \frac{1}{\alpha m T} + \frac{\delta_n^2}{m} \right), \qquad (12b)$$
where both inequalities hold with probability at least $1 - c_1 \exp\!\left( -c_2 \frac{m^2 n \delta_n^2}{\sigma^2} \right)$.
In these statements, all constants of the form $c_j$ are universal, whereas the constant $C$ depends on the squared radius $C_{\mathcal{H}}^2 := 2\max\{\|f^*\|_{\mathcal{H}}^2, 32\}$. In order to gain intuition for the claims in the theorem, note that (disregarding factors depending on $(m, M)$), for all iterations $T \lesssim 1/\delta_n^2$, the first term $\frac{1}{\alpha m T}$ dominates the second term $\frac{\delta_n^2}{m}$, so that taking further iterations reduces the upper bound on the error, until $T \approx 1/\delta_n^2$, at which point the upper bound on the error is of the order $\delta_n^2$.
Furthermore, note that similar bounds as in Theorem 1 can be obtained for the expected loss (over the responses $y_i$, with the design fixed) by a simple integration argument. Hence if we perform updates with step size $\alpha = \frac{m}{M}$, then after $\tau := \lceil \frac{1}{\delta_n^2 \max\{8, M\}} \rceil$ iterations, the mean squared error is bounded as
$$\mathbb{E}\|\bar f^\tau - f^*\|_n^2 \le C' \frac{\delta_n^2}{m^2}, \qquad (13)$$
where we use $M \ge m$, and where $C'$ is another constant depending on $C_{\mathcal{H}}$. It is worth noting that guarantee (13) matches the best known upper bounds for kernel ridge regression (KRR); indeed, this must be the case, since a sharp analysis of KRR is based on the same notion of localized Gaussian complexity. Thus, our results establish a strong parallel between the algorithmic regularization of early stopping and the penalized regularization of kernel ridge regression. Moreover, as discussed in Section 3.3, under suitable regularity conditions on the RKHS, the critical squared radius $\delta_n^2$ also acts as a lower bound for the expected risk, i.e. our upper bounds are not improvable in general.
Compared with the work of Raskutti et al. [26], which also analyzes the kernel boosting iterates of
the form (6), our theory more directly analyzes the effective function class that is explored in the
boosting process by taking T steps, with the localized Gaussian width (9) appearing more naturally.
In addition, our analysis applies to a broader class of loss functions beyond least-squares.
In the case of reproducing kernel Hilbert spaces, it is possible to sandwich the localized Gaussian complexity by a function of the eigenvalues of the kernel matrix. Mendelson [24] provides this argument in the case of the localized Rademacher complexity, but similar arguments apply to the localized Gaussian complexity. Letting $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_n \ge 0$ denote the ordered eigenvalues of the normalized kernel matrix $K$, define the function
$$\mathcal{R}(\delta) = \frac{1}{\sqrt{n}} \sqrt{ \sum_{j=1}^n \min\{\delta^2, \mu_j\} }. \qquad (14)$$
Up to a universal constant, this function is an upper bound on the Gaussian width $\mathcal{G}_n(\mathcal{E}_n(\delta, 1))$ for all $\delta \ge 0$, and up to another universal constant, it is also a lower bound for all $\delta \ge \frac{1}{\sqrt{n}}$.
Note that the critical radius $\delta_n$ only depends on our observations $\{(x_i, y_i)\}_{i=1}^n$ through the solution of inequality (11). In many cases, with examples given in Section 4, it is possible to compute or upper bound this critical radius, so that a concrete stopping rule can indeed be calculated in advance.
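As a sketch of such a computation (an assumption-laden illustration, not the paper's code): replacing the localized Gaussian width in (11) by its eigenvalue upper bound $\mathcal{R}$ from (14), the critical radius can be found by bisection, since $\mathcal{R}(\delta)/\delta$ is non-increasing while $\delta/\sigma$ is increasing.

```python
import numpy as np

def critical_radius(mu, sigma):
    """Bisection for the critical radius delta_n of inequality (11), with the
    localized Gaussian width replaced by its eigenvalue upper bound R(delta)
    from equation (14).  mu: eigenvalues of the normalized kernel matrix K;
    sigma: effective noise level from (10)."""
    n = len(mu)
    R = lambda d: np.sqrt(np.minimum(d ** 2, mu).sum() / n)
    lo, hi = 1e-8, 10.0
    for _ in range(100):                 # R(d)/d non-increasing, d/sigma increasing
        mid = 0.5 * (lo + hi)
        if R(mid) / mid <= mid / sigma:
            hi = mid
        else:
            lo = mid
    return hi

# Example stopping rule in the spirit of Corollary 1 (M is a hypothetical
# smoothness constant; K a precomputed normalized kernel matrix):
# mu = np.linalg.eigvalsh(K)[::-1]
# T = int(1.0 / (max(8.0, M) * critical_radius(mu, sigma) ** 2))
```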
3.3 Achieving minimax lower bounds
We claim that for a broad class of kernels, upper bound (13) matches the known minimax lower bound, and thus is unimprovable in general. In particular, Yang et al. [38] define the class of regular kernels, which includes the Gaussian and Sobolev kernels as particular cases. For such kernels, the authors provide a minimax lower bound over the unit ball of the Hilbert space involving $\delta_n$, which implies that any estimator $\hat f$ has prediction risk lower bounded as
$$\sup_{\|f^*\|_{\mathcal{H}} \le 1} \mathbb{E}\|\hat f - f^*\|_n^2 \ge c_\ell\, \delta_n^2. \qquad (15)$$
Comparing the lower bound (15) with upper bound (13) for our estimator $\bar f^T$ stopped after $\mathcal{O}(1/\delta_n^2)$ many steps, it follows that the bounds proven in Theorem 1 are unimprovable apart from constant factors. We summarize our findings in the following corollary:

Corollary 1. For the class of regular kernels and any function $f^*$ with $\|f^*\|_{\mathcal{H}} \le 1$, running $T := \lfloor \frac{1}{\delta_n^2 \max\{8, M\}} \rfloor$ iterations with step size $\alpha = \frac{m}{M}$ and $f^0 = 0$ yields an estimate $\bar f^T$ such that
$$\mathbb{E}\|\bar f^T - f^*\|_n^2 \;\asymp\; \inf_{\hat f} \sup_{\|f^*\|_{\mathcal{H}} \le 1} \mathbb{E}\|\hat f - f^*\|_n^2, \qquad (16)$$
where the infimum is taken over all measurable functions of the input data, and the expectation is taken over the randomness of the response variables $\{Y_i\}_{i=1}^n$.
On a high level, the statement in Corollary 1 implies that stopping early essentially prevents us from
overfitting to the data and automatically finds the optimal balance between low training error (i.e.
fitting the data well) and low model complexity (i.e. generalizing well).
4 Consequences for various kernel classes
In this section, we apply Theorem 1 to derive some concrete rates for different kernel spaces and then illustrate them with some numerical experiments. It is known that the complexity of an RKHS in association with fixed covariates $\{x_i\}_{i=1}^n$ can be characterized by the decay rate of the eigenvalues $\{\mu_j\}_{j=1}^n$ of the normalized kernel matrix $K$. The representation power of a kernel class is directly correlated with the eigen-decay: the faster the decay, the smaller the function class.
4.1 Theoretical predictions as a function of decay
In this section, let us consider two broad types of eigen-decay:

• $\gamma$-exponential decay: For some $\gamma > 0$, the kernel matrix eigenvalues satisfy a decay condition of the form $\mu_j \le c_1 \exp(-c_2 j^\gamma)$, where $c_1, c_2$ are universal constants. Examples of kernels in this class include the Gaussian kernel, which for the Lebesgue measure satisfies such a bound with $\gamma = 2$ (real line) or $\gamma = 1$ (compact domain).

• $\beta$-polynomial decay: For some $\beta > 1/2$, the kernel matrix eigenvalues satisfy a decay condition of the form $\mu_j \le c_1 j^{-2\beta}$, where $c_1$ is a universal constant. Examples of kernels in this class include the $k$th-order Sobolev spaces for some fixed integer $k \ge 1$ with Lebesgue measure on a bounded domain. We consider Sobolev spaces that consist of functions that have $k$th-order weak derivatives $f^{(k)}$ being Lebesgue integrable and $f(0) = f^{(1)}(0) = \cdots = f^{(k-1)}(0) = 0$. For such classes, the $\beta$-polynomial decay condition holds with $\beta = k$.
Given eigendecay conditions of these types, it is possible to compute an upper bound on the critical radius $\delta_n$. In particular, using the fact that the function $\mathcal{R}$ from equation (14) is an upper bound on the function $\mathcal{G}_n(\mathcal{E}_n(\delta, 1))$, we can show that for $\gamma$-exponentially decaying kernels we have $\delta_n^2 \lesssim \frac{(\log n)^{1/\gamma}}{n}$, whereas for $\beta$-polynomial kernels we have $\delta_n^2 \lesssim n^{-\frac{2\beta}{2\beta+1}}$, up to universal constants.
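These decay conditions are easy to check numerically. The sketch below (the fitting window and sample size are illustrative choices of ours) estimates the polynomial decay exponent of the normalized first-order Sobolev kernel matrix on an equidistant design, which should come out close to $-2\beta = -2$.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
K = (1.0 + np.minimum(x[:, None], x[None, :])) / n    # normalized Sobolev kernel matrix
mu = np.linalg.eigvalsh(K)[::-1]                      # eigenvalues in decreasing order

# Fit log(mu_j) ~ slope * log(j): the slope should be close to -2*beta = -2.
j = np.arange(1, n + 1)
slope = np.polyfit(np.log(j[5:100]), np.log(mu[5:100]), 1)[0]
print("estimated decay exponent:", slope)
```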
Combining with our Theorem 1, we obtain the following result:

Corollary 2 (Bounds based on eigendecay). Suppose we apply boosting with stepsize $\alpha = \frac{m}{M}$ and initialization $f^0 = 0$ on the empirical loss function $L_n$, which satisfies the $m$-$M$-condition and $\phi'$-boundedness conditions, and is defined on covariate-response pairs $\{(x_i, Y_i)\}_{i=1}^n$ with $Y_i$ drawn from the distribution $P_{Y|x_i}$. Then the error of the averaged iterate $\bar f^T$ satisfies the following upper bounds with high probability, with "$\lesssim$" neglecting dependence on problem parameters such as $(m, M)$:

(a) For kernels with $\gamma$-exponential eigen-decay with respect to $\{x_i\}_{i=1}^n$: $\|\bar f^T - f^*\|_n^2 \lesssim \frac{(\log n)^{1/\gamma}}{n}$ when stopped after $T \asymp \frac{n}{(\log n)^{1/\gamma}}$ steps.

(b) For kernels with $\beta$-polynomial eigen-decay with respect to $\{x_i\}_{i=1}^n$: $\|\bar f^T - f^*\|_n^2 \lesssim n^{-2\beta/(2\beta+1)}$ when stopped after $T \asymp n^{2\beta/(2\beta+1)}$ steps.

In particular, these bounds hold for LogitBoost and AdaBoost.
To the best of our knowledge, this result is the first to show non-asymptotic and optimal statistical rates for the $\|\cdot\|_n^2$-error when using early-stopped LogitBoost or AdaBoost with an explicit dependence of the stopping rule on $n$. Our results also yield similar guarantees for $L^2$-boosting, as has been established in past work [26]. Note that we can observe a similar trade-off between computational efficiency and statistical accuracy as in the case of kernel least-squares regression [39, 26]: although larger kernel classes (e.g. Sobolev classes) yield higher estimation errors, boosting updates reach the optimum faster than for a smaller kernel class (e.g. Gaussian kernels).
4.2 Numerical experiments
We now describe some numerical experiments that provide illustrative confirmations of our theoretical predictions, using the first-order Sobolev kernel as a typical example for kernel classes with polynomial eigen-decay. In particular, we consider the first-order Sobolev space of Lipschitz functions on the unit interval $[0, 1]$, defined by the kernel $K(x, x') = 1 + \min(x, x')$, and with the design points $\{x_i\}_{i=1}^n$ set equidistantly over $[0, 1]$. Note that the equidistant design yields $\beta$-polynomial decay of the eigenvalues of $K$ with $\beta = 1$, so that $\delta_n^2 \asymp n^{-2/3}$. Accordingly, our theory predicts that the stopping time $T = (cn)^{2/3}$ should lead to an estimate $\bar f^T$ such that $\|\bar f^T - f^*\|_n^2 \lesssim n^{-2/3}$.

In our experiments for $L^2$-Boost, we sampled $Y_i$ according to $Y_i = f^*(x_i) + w_i$ with $w_i \sim \mathcal{N}(0, 0.5)$, which corresponds to the probability distribution $P(Y \mid x_i) = \mathcal{N}(f^*(x_i), 0.5)$, where $f^*(x) = |x - \frac{1}{2}| - \frac{1}{4}$ is defined on the unit interval $[0, 1]$. By construction, the function $f^*$ belongs to the first-order Sobolev space with $\|f^*\|_{\mathcal{H}} = 1$. For LogitBoost, we sampled $Y_i$ according to $\mathrm{Bern}(p(x_i))$, where $p(x) = \frac{\exp(f^*(x))}{1 + \exp(f^*(x))}$ with the same $f^*$. We chose $f^0 = 0$ in all cases, and ran the updates (6) for $L^2$-Boost and LogitBoost with the constant step size $\alpha = 0.75$. We compared various stopping rules to the oracle gold standard $G$, which chooses the stopping time $G = \arg\min_{t \ge 1} \|f^t - f^*\|_n^2$ that yields the minimum prediction error among all iterates $\{f^t\}$. Although this procedure is unimplementable in practice, it serves as a convenient lower bound with which to compare.
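A minimal sketch reproducing the $L^2$-Boost portion of this setup (the random seed, iteration budget, and the reading of "N(0, 0.5)" as variance 0.5 are assumptions on our part) is:

```python
import numpy as np

rng = np.random.default_rng(0)                        # arbitrary seed
n = 100
x = np.linspace(0.0, 1.0, n)
f_star = np.abs(x - 0.5) - 0.25                       # f*(x) = |x - 1/2| - 1/4
y = f_star + rng.normal(0.0, np.sqrt(0.5), size=n)    # w_i ~ N(0, 0.5)

K = (1.0 + np.minimum(x[:, None], x[None, :])) / n    # normalized Sobolev kernel matrix
step = 0.75
T_rule = int((7 * n) ** (2.0 / 3.0))                  # theoretical stopping rule T = (7n)^{2/3}

f, errs = np.zeros(n), []
for t in range(3 * T_rule):                           # run past the rule to locate the oracle
    f = f - step * (K @ (f - y))                      # L2-Boost update (6)
    errs.append(np.mean((f - f_star) ** 2))

print("MSE at T = (7n)^(2/3):", errs[T_rule - 1])
print("oracle (gold standard) MSE:", min(errs))       # G = argmin_t ||f^t - f*||_n^2
```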
Figure 2 shows plots of the mean-squared error $\|\bar f^T - f^*\|_n^2$ over the sample size $n$, averaged over 40 trials, for the gold standard $T = G$ and for stopping rules based on $T = (7n)^\kappa$ for different choices of $\kappa$. Error bars correspond to the standard errors computed from our simulations. Panel (a) shows the behavior for $L^2$-boosting, whereas panel (b) shows the behavior for LogitBoost. Note that both plots are qualitatively similar, and that the theoretically derived stopping rule $T = (7n)^\kappa$ with $\kappa^* = 2/3 \approx 0.67$, while slightly worse than the gold standard, tracks its performance closely.

Figure 2: The mean-squared errors for the stopped iterates $\bar f^T$ at the gold standard, i.e. the iterate with the minimum error among all unstopped updates (blue), and at $T = (7n)^\kappa$ (with the theoretically optimal $\kappa = 0.67$ in red, $\kappa = 0.33$ in black, and $\kappa = 1$ in green) for (a) $L^2$-Boost and (b) LogitBoost.
We also performed simulations for some "bad" stopping rules, in particular for an exponent $\kappa$ not equal to $\kappa^* = 2/3$, indicated by the green and black curves. In the log-scale plots in Figure 3 we can clearly see that for $\kappa \in \{0.33, 1\}$ the performance is indeed much worse, with the difference in slope even suggesting a different scaling of the error with the number of observations $n$. Recalling our discussion for Figure 1, this phenomenon likely occurs due to underfitting and overfitting effects.

Figure 3: Logarithmic plots of the mean-squared errors at the gold standard (blue) and at $T = (7n)^\kappa$ (with the theoretically optimal rule $\kappa = 0.67$ in red, $\kappa = 0.33$ in black, and $\kappa = 1$ in green) for (a) $L^2$-Boost and (b) LogitBoost.
5 Discussion
In this paper, we have proven non-asymptotic bounds for early stopping of kernel boosting for a relatively broad class of loss functions. These bounds allowed us to propose simple stopping rules which, for the class of regular kernel functions [38], yield minimax optimal rates of estimation. Although the connection between early stopping and regularization has long been studied and explored in the literature, to the best of our knowledge, this paper is the first to establish a general relationship between the statistical optimality of stopped iterates and the localized Gaussian complexity, a quantity well understood to play a central role in controlling the behavior of regularized estimators based on penalization [32, 2, 20, 37].

There are various open questions suggested by our results. Can fast approximation techniques for kernels be used to approximately compute optimal stopping rules without having to calculate all eigenvalues of the kernel matrix? Furthermore, we suspect that similar guarantees can be shown for the (unaveraged) stopped estimator $f^T$, which we observed to behave similarly to the averaged estimator $\bar f^T$ in our simulations. It would be of interest to establish results on $f^T$ directly.
Acknowledgements

This work was partially supported by DOD Advanced Research Projects Agency W911NF-16-1-0552, National Science Foundation grant NSF-DMS-1612948, and Office of Naval Research grant DOD-ONR-N00014.
References
[1] R. S. Anderssen and P. M. Prenter. A formal comparison of methods proposed for the numerical solution of first kind integral equations. Jour. Australian Math. Soc. (Ser. B), 22:488-500, 1981.
[2] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537, 2005.
[3] P. L. Bartlett and M. Traskin. AdaBoost is consistent. Journal of Machine Learning Research, 8(Oct):2347-2368, 2007.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic, Norwell, MA, 2004.
[5] L. Breiman. Prediction games and arcing algorithms. Neural Computation, 11(7):1493-1517, 1999.
[6] L. Breiman et al. Arcing classifier (with discussion and a rejoinder by the author). Annals of Statistics, 26(3):801-849, 1998.
[7] P. Bühlmann and T. Hothorn. Boosting algorithms: Regularization, prediction and model fitting. Statistical Science, pages 477-505, 2007.
[8] P. Bühlmann and B. Yu. Boosting with L2 loss: Regression and classification. Journal of the American Statistical Association, 98:324-340, 2003.
[9] R. Camoriano, T. Angles, A. Rudi, and L. Rosasco. NYTRO: When subsampling meets early stopping. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1403-1411, 2016.
[10] A. Caponnetto and Y. Yao. Adaptation for regularization operators in learning theory. Technical Report CBCL Paper #265/AI Technical Report #063, Massachusetts Institute of Technology, September 2006.
[11] A. Caponnetto. Optimal rates for regularization operators in learning theory. Technical Report CBCL Paper #264/AI Technical Report #062, Massachusetts Institute of Technology, September 2006.
[12] R. Caruana, S. Lawrence, and C. L. Giles. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems, pages 402-408, 2001.
[13] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[14] J. Friedman, T. Hastie, R. Tibshirani, et al. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Annals of Statistics, 28(2):337-407, 2000.
[15] J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29:1189-1232, 2001.
[16] C. Gu. Smoothing Spline ANOVA Models. Springer Series in Statistics. Springer, New York, NY, 2002.
[17] L. Györfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer Series in Statistics. Springer, 2002.
[18] W. Jiang. Process consistency for AdaBoost. Annals of Statistics, 21:13-29, 2004.
[19] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Jour. Math. Anal. Appl., 33:82-95, 1971.
[20] V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. Annals of Statistics, 34(6):2593-2656, 2006.
[21] M. Ledoux. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001.
[22] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer-Verlag, New York, NY, 1991.
[23] L. Mason, J. Baxter, P. L. Bartlett, and M. R. Frean. Boosting algorithms as gradient descent. In Advances in Neural Information Processing Systems 12, pages 512-518, 1999.
[24] S. Mendelson. Geometric parameters of kernel machines. In Proceedings of the Conference on Learning Theory (COLT), pages 29-43, 2002.
[25] L. Prechelt. Early stopping-but when? In Neural Networks: Tricks of the Trade, pages 55-69. Springer, 1998.
[26] G. Raskutti, M. J. Wainwright, and B. Yu. Early stopping and non-parametric regression: An optimal data-dependent stopping rule. Journal of Machine Learning Research, 15:335-366, 2014.
[27] L. Rosasco and S. Villa. Learning with incremental iterative regularization. In Advances in Neural Information Processing Systems, pages 1630-1638, 2015.
[28] R. E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
[29] R. E. Schapire. The boosting approach to machine learning: An overview. In Nonlinear Estimation and Classification, pages 149-171. Springer, 2003.
[30] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[31] O. N. Strand. Theory and methods related to the singular value expansion and Landweber's iteration for integral equations of the first kind. SIAM J. Numer. Anal., 11:798-825, 1974.
[32] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[33] A. W. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes. Springer-Verlag, New York, NY, 1996.
[34] E. D. Vito, S. Pereverzyev, and L. Rosasco. Adaptive kernel methods using the balancing principle. Foundations of Computational Mathematics, 10(4):455-479, 2010.
[35] G. Wahba. Three topics in ill-posed problems. In M. Engl and G. Groetsch, editors, Inverse and Ill-Posed Problems, pages 37-50. Academic Press, 1987.
[36] G. Wahba. Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, PA, 1990.
[37] M. J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2017.
[38] Y. Yang, M. Pilanci, and M. J. Wainwright. Randomized sketches for kernels: Fast and optimal nonparametric regression. Annals of Statistics, 2017. To appear.
[39] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315, 2007.
[40] T. Zhang and B. Yu. Boosting with early stopping: Convergence and consistency. Annals of Statistics, 33(4):1538-1579, 2005.
SVCCA: Singular Vector Canonical Correlation
Analysis for Deep Learning Dynamics and
Interpretability
Maithra Raghu,1,2 Justin Gilmer,1 Jason Yosinski,3 & Jascha Sohl-Dickstein1
1
Google Brain 2 Cornell University 3 Uber AI Labs
maithrar@gmail.com, gilmer@google.com, yosinski@uber.com, jaschasd@google.com
Abstract
We propose a new technique, Singular Vector Canonical Correlation Analysis
(SVCCA), a tool for quickly comparing two representations in a way that is both
invariant to affine transform (allowing comparison between different layers and
networks) and fast to compute (allowing more comparisons to be calculated than
with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe
learning dynamics throughout training, finding that networks converge to final
representations from the bottom up; to show where class-specific information in
networks is formed; and to suggest new training regimes that simultaneously save
computation and overfit less.
1 Introduction
As the empirical success of deep neural networks ([6, 8, 16]) becomes an indisputable fact, the goal
of better understanding these models escalates in importance. Central to this aim is a core issue
of deciphering learned representations. Facets of this key question have been explored empirically,
particularly for image models, in [1, 2, 9, 11, 12, 13, 14, 17, 18]. Most of these approaches are
motivated by interpretability of learned representations. More recently, [10] studied the similarities
of representations learned by multiple networks by finding permutations of neurons with maximal
correlation.
In this work we introduce a new approach to the study of network representations, based on an analysis of each neuron's activation vector: the scalar outputs it emits on input datapoints. With this interpretation of neurons as vectors (and layers as subspaces, spanned by neurons), we introduce SVCCA, Singular Vector Canonical Correlation Analysis, an amalgamation of Singular Value Decomposition and Canonical Correlation Analysis [4], as a powerful method for analyzing deep representations.
The main contributions resulting from the introduction of SVCCA are the following:
1. We ask: is the dimensionality of a layer's learned representation the same as the number of
neurons in the layer? Answer: No. We show that the trained network performs equally well
with a number of directions just a fraction of the number of neurons with no additional
training, provided they are carefully chosen with SVCCA (Section 2.1). We explore the
consequences for model compression (Section 4.4).
2. We ask: what do deep representation learning dynamics look like? Answer: Networks
broadly converge bottom up. Using SVCCA, we compare layers across time and find they
solidify from the bottom up. This suggests a simple, computationally more efficient method
of training networks, Freeze Training, where lower layers are sequentially frozen after a certain number of timesteps (Sections 4.1, 4.2).

[Figure 1 panels: (a) network schematic; (b) neurons with highest activations (net1, net2); (c) top SVD directions (net1, net2); (d) top SVCCA directions (net1, net2); x-axis: index over dataset.]

Figure 1: To demonstrate SVCCA, we consider a toy regression task (regression target as in Figure 3). (a) We train two networks with four fully connected hidden layers starting from different random initializations, and examine the representation learned by the penultimate (shaded) layer in each network. (b) The neurons with the highest activations in net 1 (maroon) and in net 2 (green). The x-axis indexes over the dataset: in our formulation, the representation of a neuron is simply its value over a dataset (Section 2). (c) The SVD directions, i.e. the directions of maximal variance, for each network. (d) The top SVCCA directions. We see that each pair of maroon/green lines (starting from the top) are almost visually identical (up to a sign). Thus, although looking at just neurons (b) seems to indicate that the networks learn very different representations, looking at the SVCCA subspace (d) shows that the information in the representations is (up to a sign) nearly identical.
3. We develop a method based on the discrete Fourier transform which greatly speeds up the
application of SVCCA to convolutional neural networks (Section 3).
4. We also explore an interpretability question, of when an architecture becomes sensitive to
different classes. We find that SVCCA captures the semantics of different classes, with
similar classes having similar sensitivities, and vice versa. (Section 4.3).
Experimental Details Most of our experiments are performed on CIFAR-10 (augmented with
random translations). The main architectures we use are a convolutional network and a residual
network1 . To produce a few figures, we also use a toy regression task: training a four hidden layer
fully connected network with 1D input and 4D output, to regress on four different simple functions.
2 Measuring Representations in Neural Networks
Our goal in this paper is to analyze and interpret the representations learned by neural networks. The critical question from which our investigation departs is: how should we define the representation of a neuron? Consider that a neuron at a particular layer in a network computes a real-valued function over the network's input domain. In other words, if we had a lookup table of all possible input-to-output mappings for a neuron, it would be a complete portrayal of that neuron's functional form.
However, such infinite tables are not only practically infeasible, but are also problematic to process into a set of conclusions. Our primary interest is not in the neuron's response to random data, but rather in how it represents features of a specific dataset (e.g. natural images). Therefore, in this study we take a neuron's representation to be its set of responses over a finite set of inputs: those drawn from some training or validation set.
More concretely, for a given dataset $X = \{x_1, \cdots, x_m\}$ and a neuron $i$ on layer $l$, $\mathbf{z}^l_i$, we define $\mathbf{z}^l_i$ to be the vector of outputs on $X$, i.e.
$$\mathbf{z}^l_i = (\mathbf{z}^l_i(x_1), \cdots, \mathbf{z}^l_i(x_m)).$$
¹Convnet layers: conv-conv-bn-pool-conv-conv-conv-bn-pool-fc-bn-fc-bn-out. Resnet layers: conv-(x10 c/bn/r block)-(x10 c/bn/r block)-(x10 c/bn/r block)-bn-fc-out.
Note that this is a different vector from the often-considered vector of the "representation at a layer of a single input." Here $\mathbf{z}^l_i$ is a single neuron's response over the entire dataset, not an entire layer's
response for a single input. In this view, a neuron?s representation can be thought of as a single
vector in a high-dimensional space. Broadening our view from a single neuron to the collection of
neurons in a layer, the layer can be thought of as the set of neuron vectors contained within that
layer. This set of vectors will span some subspace. To summarize:
Considered over a dataset $X$ with $m$ examples, a neuron is a vector in $\mathbb{R}^m$. A layer is the subspace of $\mathbb{R}^m$ spanned by its neurons' vectors.
Within this formalism, we introduce Singular Vector Canonical Correlation Analysis (SVCCA) as
a method for analysing representations. SVCCA proceeds as follows:
• Input: SVCCA takes as input two (not necessarily different) sets of neurons (typically layers of a network) $l_1 = \{\mathbf{z}^{l_1}_1, \ldots, \mathbf{z}^{l_1}_{m_1}\}$ and $l_2 = \{\mathbf{z}^{l_2}_1, \ldots, \mathbf{z}^{l_2}_{m_2}\}$.

• Step 1: First, SVCCA performs a singular value decomposition of each subspace to get subsubspaces $l_1' \subset l_1$, $l_2' \subset l_2$ which comprise the most important directions of the original subspaces $l_1, l_2$. In general we take enough directions to explain 99% of the variance in the subspace. This is especially important in neural network representations, where as we will show many low variance directions (neurons) are primarily noise.

• Step 2: Second, compute the Canonical Correlation similarity ([4]) of $l_1', l_2'$: linearly transform $l_1', l_2'$ to be as aligned as possible and compute correlation coefficients. In particular, given the output of step 1, $l_1' = \{\mathbf{z}'^{l_1}_1, \ldots, \mathbf{z}'^{l_1}_{m_1'}\}$, $l_2' = \{\mathbf{z}'^{l_2}_1, \ldots, \mathbf{z}'^{l_2}_{m_2'}\}$, CCA linearly transforms these subspaces $\tilde l_1 = W_X l_1'$, $\tilde l_2 = W_Y l_2'$ such as to maximize the correlations $\mathrm{corrs} = \{\rho_1, \ldots, \rho_{\min(m_1', m_2')}\}$ between the transformed subspaces.

• Output: With these steps, SVCCA outputs pairs of aligned directions, $(\tilde{\mathbf{z}}^{l_1}_i, \tilde{\mathbf{z}}^{l_2}_i)$, and how well they correlate, $\rho_i$. Step 1 also produces intermediate output in the form of the top singular values and directions.
For a more detailed description of each step, see the Appendix. SVCCA can be used to analyse
any two sets of neurons. In our experiments, we utilize this flexibility to compare representations
across different random initializations, architectures, timesteps during training, and specific classes
and layers.
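A minimal NumPy sketch of the two steps, assuming activation matrices with one row per neuron and one column per datapoint (this is an illustrative re-implementation, not the authors' released code):

```python
import numpy as np

def svcca(L1, L2, var_frac=0.99, eps=1e-12):
    """SVCCA sketch.  L1, L2: (num_neurons, num_datapoints) activation
    matrices, one row per neuron.  Returns the canonical correlations
    rho_1 >= rho_2 >= ... between the SVD-reduced subspaces."""
    def top_directions(L):
        L = L - L.mean(axis=1, keepdims=True)            # center each neuron
        _, s, Vt = np.linalg.svd(L, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), var_frac)) + 1
        return Vt[:k]                                    # directions explaining ~99% of variance

    def inv_sqrt(S):                                     # S^{-1/2} for symmetric PSD S
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    X, Y = top_directions(L1), top_directions(L2)
    # CCA via whitening: correlations are singular values of Sxx^{-1/2} Sxy Syy^{-1/2}.
    M = inv_sqrt(X @ X.T) @ (X @ Y.T) @ inv_sqrt(Y @ Y.T)
    return np.clip(np.linalg.svd(M, compute_uv=False), 0.0, 1.0)
```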
Figure 1 shows a simple, intuitive demonstration of SVCCA. We train a small network on a toy
regression task and show each step of SVCCA, along with the resulting very similar representations.
SVCCA is able to find hidden similarities in the representations.
2.1 Distributed Representations
An important property of SVCCA is that it is truly a subspace method: both SVD and CCA work with $\mathrm{span}(\mathbf{z}_1, \ldots, \mathbf{z}_m)$ instead of being axis aligned to the $\mathbf{z}_i$ directions. SVD finds singular vectors $\mathbf{z}'_i = \sum_{j=1}^m s_{ij} \mathbf{z}_j$, and the subsequent CCA finds a linear transform $W$, giving orthogonal canonically correlated directions $\{\tilde{\mathbf{z}}_1, \ldots, \tilde{\mathbf{z}}_m\} = \{\sum_{j=1}^m w_{1j} \mathbf{z}'_j, \ldots, \sum_{j=1}^m w_{mj} \mathbf{z}'_j\}$. In other words, SVCCA has no preference for representations that are neuron (axis) aligned.
If representations are distributed across many dimensions, then this is a desirable property of a
representation analysis method. Previous studies have reported that representations may be more
complex than either fully distributed or axis-aligned [15, 19, 10] but this question remains open.
We use SVCCA as a tool to probe the nature of representations via two experiments:
(a) We find that the subspace directions found by SVCCA are disproportionately important to
the representation learned by a layer, relative to neuron-aligned directions.
(b) We show that at least some of these directions are distributed across many neurons.
Experiments for (a), (b) are shown in Figure 2 as (a), (b) respectively. For both experiments, we first
acquire two different representations, l1 , l2 , for a layer l by training two different random initializations of a convolutional network on CIFAR-10. We then apply SVCCA to l1 and l2 to get directions
[Figure 2 panels: (a) CIFAR-10 accuracy vs. number of directions for SVCCA, max-activation, and random-neuron subspaces of p2 (4096 neurons), fc1 (512 neurons), and fc2 (256 neurons); (b) CIFAR-10 accuracy vs. number of SVCCA directions, after further projecting onto 50-300 neurons of fc1 and p2.]

Figure 2: Demonstration of (a) disproportionate importance of SVCCA directions, and (b) distributed nature of some of these directions. For both panes, we first find the top k SVCCA directions by training two conv nets on CIFAR-10 and comparing corresponding layers. (a) We project the output of the top three layers, pool1, fc1, fc2, onto this top-k subspace. We see accuracy rises rapidly with increasing k, with even k ≪ num neurons giving reasonable performance, with no retraining. Baselines of random k-neuron subspaces and max activation neurons require larger k to perform as well. (b) After projecting onto the top-k subspace (as at left), dotted lines then project again onto m neurons, chosen to correspond highly to the top-k SVCCA subspace. Many more neurons are needed than k for better performance, suggesting distributedness of the SVCCA directions.
$\{\tilde{\mathbf{z}}^{l_1}_1, \ldots, \tilde{\mathbf{z}}^{l_1}_m\}$ and $\{\tilde{\mathbf{z}}^{l_2}_1, \ldots, \tilde{\mathbf{z}}^{l_2}_m\}$, ordered according to importance by SVCCA, with each $\tilde{\mathbf{z}}^{l_i}_j$ being a linear combination of the original neurons, i.e. $\tilde{\mathbf{z}}^{l_i}_j = \sum_{r=1}^m \alpha^{(l_i)}_{jr} \mathbf{z}^{l_i}_r$.

For different values of $k < m$, we can then restrict layer $l_i$'s output to lie in the subspace of $\mathrm{span}(\tilde{\mathbf{z}}^{l_i}_1, \ldots, \tilde{\mathbf{z}}^{l_i}_k)$, the most useful $k$-dimensional subspace as found by SVCCA, done by projecting each neuron into this $k$-dimensional space.
We find, somewhat surprisingly, that very few SVCCA directions are required for the network to perform the task well. As shown in Figure 2(a), for a network trained on CIFAR-10, the first 25 dimensions provide nearly the same accuracy as using all 512 dimensions of a fully connected layer with 512 neurons. The accuracy curve rises rapidly with the first few SVCCA directions, and plateaus quickly afterwards, for k ≪ m. This suggests that the useful information contained in m neurons is well summarized by the subspace formed by the top k SVCCA directions. Two baselines for comparison are picking random and maximum activation neuron aligned subspaces and projecting outputs onto these. Both of these baselines require far more directions (in this case: neurons) before matching the accuracy achieved by the SVCCA directions. These results also suggest approaches to model compression, which are explored in more detail in Section 4.4.
Figure 2(b) next demonstrates that these useful SVCCA directions are at least somewhat distributed
over neurons rather than axis-aligned. First, the top k SVCCA directions are picked and the representation is projected onto this subspace. Next, the representation is further projected onto m
neurons, where the m are chosen as those most important to the SVCCA directions . The resulting
accuracy is plotted for different choices of k (given by x-axis) and different choices of m (different
lines). That, for example, keeping even 100 fc1 neurons (dashed green line) cannot maintain the
accuracy of the first 20 SVCCA directions (solid green line at x-axis 20) suggests that those 20
SVCCA directions are distributed across 5 or more neurons each, on average. Figure 3 shows a
further demonstration of the effect on the output of projecting onto top SVCCA directions, here for
the toy regression case.
Why the two step SV + CCA method is needed. Both SVD and CCA have important properties
for analysing network representations and SVCCA consequently benefits greatly from being a two
step method. CCA is invariant to affine transformations, enabling comparisons without natural
alignment (e.g. different architectures, Section 4.4). See Appendix B for proofs and a demonstrative
figure. While CCA is a powerful method, it also suffers from certain shortcomings, particularly in
determining how many directions were important to the original space X, which is the strength of
SVD. See Appendix for an example where naive CCA performs badly. Both the SVD and CCA steps are critical to the analysis of learning dynamics in Section 4.1.

[Figure 3 panels: original output using 200 directions; projections onto the top 2, 6, 15, and 30 SVCCA directions; x-axis: index over the dataset.]

Figure 3: The effect on the output of a latent representation being projected onto top SVCCA directions in the toy regression task. Representations of the penultimate layer are projected onto the 2, 6, 15, 30 top SVCCA directions (from second pane). By 30, the output looks very similar to the full 200 neuron output (left).
3 Scaling SVCCA for Convolutional Layers
Applying SVCCA to convolutional layers can be done in two natural ways (a reshaping sketch follows the list below):

(1) Same layer comparisons: If X, Y are the same layer (at different timesteps or across random initializations) receiving the same input, we can concatenate along the pixel (height h, width w) coordinates to form a vector: a conv layer of shape h × w × c maps to c vectors, each of dimension hwd, where d is the number of datapoints. This is a natural choice because neurons at different pixel coordinates see different image data patches to each other. When X, Y are two versions of the same layer, these c different views correspond perfectly.

(2) Different layer comparisons: When X, Y are not the same layer, the image patches seen by different neurons have no natural correspondence. But we can flatten an h × w × c conv into hwc neurons, each of dimension d. This approach is valid for convs in different networks or at different depths.
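The reshaping sketch referenced above, assuming activations stored as a (datapoints, height, width, channels) array (the array layout and shapes are assumptions):

```python
import numpy as np

acts = np.random.randn(500, 8, 8, 64)    # (datapoints d, height h, width w, channels c)

# (1) Same-layer comparison: c neuron-vectors, each of dimension h*w*d.
same_layer_view = acts.transpose(3, 1, 2, 0).reshape(64, -1)   # shape (c, h*w*d)

# (2) Different-layer comparison: h*w*c neurons, each of dimension d.
diff_layer_view = acts.reshape(500, -1).T                      # shape (h*w*c, d)
```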
3.1 Scaling SVCCA with Discrete Fourier Transforms
Applying SVCCA to convolutions introduces a computational challenge: the number of neurons (h × w × c) in convolutional layers, especially early ones, is very large, making SVCCA prohibitively expensive due to the large matrices involved. Luckily the problem of approximate dimensionality reduction of large matrices is well studied, and efficient algorithms exist, e.g. [3].
For convolutional layers however, we can avoid dimensionality reduction and perform exact
SVCCA, even for large networks. This is achieved by preprocessing each channel with a Discrete
Fourier Transform (which preserves CCA due to invariances, see Appendix), causing all (covariance) matrices to be block-diagonal. This allows all matrix operations to be performed block by
block, and only over the diagonal blocks, vastly reducing computation. We show:
Theorem 1. Suppose we have a translation invariant (image) dataset X and convolutional layers $l_1, l_2$. Letting $DFT(l_i)$ denote the discrete Fourier transform applied to each channel of $l_i$, the covariance $\mathrm{cov}(DFT(l_1), DFT(l_2))$ is block diagonal, with blocks of size $c \times c$.

We make only two assumptions: 1) all layers below $l_1, l_2$ are either conv or pooling layers (translation equivariance); 2) the dataset X contains all translations of the images $X_i$. This is necessary in the proof for certain symmetries in neuron activations, but these symmetries typically exist in natural images even without translation invariance, as shown in Figure App.2 in the Appendix. Below are key statements, with proofs in the Appendix.

Definition 1. Say a single channel image dataset X of images is translation invariant if for any (wlog $n \times n$) image $X_i \in X$, with pixel values $\{\mathbf{z}_{11}, \ldots, \mathbf{z}_{nn}\}$, $X_i^{(a,b)} = \{\mathbf{z}_{\sigma_a(1)\sigma_b(1)}, \ldots, \mathbf{z}_{\sigma_a(n)\sigma_b(n)}\}$ is also in X, for all $0 \le a, b \le n - 1$, where $\sigma_a(i) = a + i \bmod n$ (and similarly for $b$).

For a multiple channel image $X_i$, an $(a, b)$ translation is an $(a, b)$ height/width shift on every channel separately. X is then translation invariant as above.
To prove Theorem 1, we first show another theorem:
Theorem 2. Given a translation invariant dataset X, and a convolutional layer $l$ with channels $\{c_1, \ldots, c_k\}$ applied to X:

(a) the DFT of $c_i$, $F c_i F^T$, has diagonal covariance matrix (with itself);

(b) the DFTs of $c_i, c_j$, i.e. $F c_i F^T$, $F c_j F^T$, have diagonal covariance with each other.

Finally, both of these theorems rely on properties of circulant matrices and their DFTs:

Lemma 1. The covariance matrix of $c_i$ applied to a translation invariant X is circulant and block circulant.

Lemma 2. The DFT of a circulant matrix is diagonal.
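A sketch of the preprocessing step (the helper name and shapes are illustrative assumptions): applying NumPy's 2-D FFT across the spatial axes of each channel, after which covariance computations decouple into one small c × c problem per spatial frequency.

```python
import numpy as np

def dft_preprocess(acts):
    """Apply a 2-D DFT over the spatial axes of conv activations shaped
    (d, h, w, c).  Per Theorem 1, covariances between DFT'd channels are
    then block diagonal, so CCA can be run block by block."""
    return np.fft.fft2(acts, axes=(1, 2))

f = dft_preprocess(np.random.randn(100, 8, 8, 16))
# One frequency's coordinates across channels: a small (d, c) block,
# giving one c x c CCA problem per spatial frequency.
block = f[:, 3, 5, :]
```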
4 Applications of SVCCA

4.1 Learning Dynamics with SVCCA
We can use SVCCA as a window into learning dynamics by comparing the representation at a
layer at different points during training to its final representation. Furthermore, as the SVCCA
computations are relatively cheap to compute compared to methods that require training an auxiliary
network for each comparison [1, 9, 10], we can compare all layers during training at all timesteps to
all layers at the final time step, producing a rich view into the learning process.
The outputs of SVCCA are the aligned directions $(\tilde x_i, \tilde y_i)$ and how well they align, $\rho_i$, as well as intermediate output from the first step, of singular values and directions, $\lambda_X^{(i)}, x'^{(i)}, \lambda_Y^{(j)}, y'^{(j)}$. We condense these outputs into a single value, the SVCCA similarity $\bar\rho$, that encapsulates how well the representations of two layers are aligned with each other,
$$\bar\rho = \frac{1}{\min(m_1, m_2)} \sum_i \rho_i, \qquad (1)$$
where $\min(m_1, m_2)$ is the size of the smaller of the two layers being compared. The SVCCA similarity $\bar\rho$ is the average correlation across aligned directions, and is a direct multidimensional analogue of Pearson correlation.
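In code, using the svcca() sketch from Section 2, condensing to the similarity (1) is a one-liner; L1 and L2 are again (neurons, datapoints) activation matrices:

```python
# Condense the canonical correlations into the SVCCA similarity of equation (1).
rho = svcca(L1, L2)                                  # from the sketch in Section 2
rho_bar = rho.sum() / min(L1.shape[0], L2.shape[0])  # divide by the smaller layer's size
```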
The SVCCA similarity for all pairs of layers, and all time steps, is shown in Figure 4 for a convnet
and a resnet architecture trained on CIFAR10.
4.2 Freeze Training
Observing in Figure 4 that networks broadly converge from the bottom up, we propose a training
method where we successively freeze lower layers during training, only updating higher and higher
layers, saving all computation needed for deriving gradients and updating in lower layers.
We apply this method to convolutional and residual networks trained on CIFAR-10 (Figure 5), using a linear freezing regime: in the convolutional network, each layer is frozen at a fraction (layer number / total layers) of total training time, while for resnets, each residual block is frozen at a fraction (block number / total blocks). The vertical grey dotted lines show which steps have another set of layers frozen. Aside from saving computation, Freeze Training appears to actively help generalization accuracy, like early stopping but with different layers requiring different stopping points. (A minimal training-loop sketch follows below.)
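A minimal Freeze Training loop sketch in PyTorch style; model.layers, the optimizer setup, and the schedule bookkeeping are our assumptions, not names from the paper's implementation:

```python
import torch
import torch.nn.functional as F

def freeze_training(model, optimizer, loader, total_steps):
    layers = list(model.layers)                  # ordered bottom -> top (assumed attribute)
    per_layer = total_steps // len(layers)       # linear schedule: layer i freezes at (i+1)*per_layer
    step = 0
    for x, y in loader:                          # loader assumed to cycle indefinitely
        for i, layer in enumerate(layers[:-1]):  # the topmost layer is never frozen
            if step >= (i + 1) * per_layer:
                for p in layer.parameters():
                    p.requires_grad_(False)      # frozen params get no gradients or updates
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        step += 1
        if step >= total_steps:
            break
```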
4.3 Interpreting Representations: when are classes learned?
We also can use SVCCA to compare how correlated representations in each layer are with the logits
of each class in order to measure how knowledge about the target evolves throughout the network.
In Figure 6 we apply the DFT CCA technique on the Imagenet Resnet [5]. We take five different
classes and for different layers in the network, compute the DFT CCA similarity between the logit
of that class and the network layer. The results successfully reflect semantic aspects of the classes:
the firetruck class sensitivity line is clearly distinct from the two pairs of dog breeds, and network
develops greater sensitivity to firetruck earlier on. The two pairs of dog breeds, purposefully chosen
so that each pair is similar to the other in appearance, have cca similarity lines that are very close to
each other through the network, indicating these classes are similar to each other.
[Figure 4 panels: Convnet, CIFAR-10 (top) and Resnet, CIFAR-10 (bottom); each pane plots layer (during training) against layer (end of training) at 0%, 35%, 75%, and 100% of training; color: weighted SVCCA scale.]

Figure 4: Learning dynamics plots for conv (top) and res (bottom) nets trained on CIFAR-10. Each pane is a matrix of size layers × layers, with each entry showing the SVCCA similarity $\bar\rho$ between the two layers. Note that learning broadly happens "bottom up": layers closer to the input seem to solidify into their final representations, with the exception of the very top layers. Per-layer plots are included in the Appendix. Other patterns are also visible: batch norm layers maintain nearly perfect similarity to the layer preceding them due to scaling invariance (with a slight reduction since batch norm changes the SVD directions which capture 99% of the variance). In the resnet plot, we see a stripe-like pattern due to skip connections inducing high similarities to previous layers.
[Figure 5 plots: test accuracy versus train step (0 to 160,000) for Freeze Training ("test acc freeze") against the baseline ("test acc base"), for the convnet (left) and resnet (right) on CIFAR-10; the accuracy axis spans roughly 0.70 to 0.90.]
Figure 5: Freeze Training reduces training cost and improves generalization. We apply Freeze Training to a convolutional network and a residual network, both on CIFAR-10. As shown by the grey dotted lines (which indicate the timestep at which another layer is frozen), both networks have a "linear" freezing regime: for the convolutional network, we freeze individual layers at evenly spaced timesteps throughout training; for the residual network, we freeze entire residual blocks at each freeze step. The curves were averaged over ten runs.
4.4 Other Applications: Cross-Model Comparison and Compression
SVCCA similarity can also be used to compare the similarity of representations across different random initializations, and even different architectures. We compare convolutional networks on CIFAR-10 across random initializations (Appendix), and also a convolutional network to a residual network in Figure 7, using the DFT method described in Section 3.
In Figure 3, we saw that projecting onto the subspace of the top few SVCCA directions resulted in comparable accuracy. This observation motivates an approach to model compression. In particular, letting the output vector of layer l be x^(l) ∈ R^(n×1), with weights W^(l), we replace the usual W^(l) x^(l) with (W^(l) P_x^T)(P_x x^(l)), where P_x is a k × n projection matrix projecting x onto the top SVCCA directions. This bottleneck reduces both the parameter count and the inference computational cost for the layer by a factor of ∼ n/k.
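A small numpy sketch of this bottleneck; using the left singular vectors of recorded activations `acts` (shape n × datapoints) as the projection P_x is our illustrative choice:

```python
import numpy as np

def compress_layer(W, acts, k):
    """Replace W @ x with (W @ P.T) @ (P @ x), P spanning top-k directions."""
    U, _, _ = np.linalg.svd(acts - acts.mean(1, keepdims=True),
                            full_matrices=False)
    P = U[:, :k].T               # (k x n) projection onto top directions
    W_small = W @ P.T            # (out_dim x k): fold projection into W
    return W_small, P            # at inference: y = W_small @ (P @ x)
```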
[Figure 6 plot: "CCA Similarity (using DFT) of Layers in Imagenet Resnet to Different Classes"; x-axis: layer number (0 to 80), y-axis: CCA similarity with class (0.3 to 1.0); one line per class: s_terrier, w_terrier, husky, eskimo_dog, fire truck.]
Figure 6: We plot the CCA similarity, using the Discrete Fourier Transform, between the logits of five classes and layers in the Imagenet Resnet. The classes are firetruck and two pairs of dog breeds (terriers, and husky-like dogs: husky and eskimo dog) chosen to be similar to each other. These semantic properties are captured in the CCA similarity, where we see that the line corresponding to firetruck is clearly distinct from the two pairs of dog breeds, while the two lines in each pair are very close to each other, reflecting the fact that each pair consists of visually similar-looking images. Firetruck also appears to be easier for the network to learn, with greater sensitivity displayed much sooner.
[Figure 7 heatmap: "DFT CCA similarity between Resnet and Convnet on CIFAR10"; y-axis: convnet layers (in, c1, c2, bn1, p1, c3, c4, c5, bn2, p2), x-axis: resnet layers (in, followed by alternating bn/cv blocks); color scale 0.0 to 1.0.]
Figure 7: We plot the CCA similarity, using the Discrete Fourier Transform, between convolutional layers of a Resnet and a Convnet trained on CIFAR-10. We find that the lower layers of both models are noticeably similar to each other, and they become progressively less similar as we compare higher layers. Note that the highest layers of the resnet are least similar to the lower layers of the convnet.
In Figure App.5 in the Appendix, we show that we can consecutively compress the top layers with SVCCA by a significant amount (in one case reducing each layer to 0.35 of its original size) while hardly affecting performance.
5 Conclusion
In this paper we present SVCCA, a general method that allows comparison of the learned distributed representations between different neural network layers and architectures. Using SVCCA, we obtain novel insights into the learning dynamics and learned representations of common neural network architectures. These insights motivated a new Freeze Training technique, which can reduce the number of FLOPs required to train networks and potentially even increase generalization performance. We observe that CCA similarity can be a helpful tool for interpretability, with sensitivity to different classes reflecting their semantic properties. The technique also motivates a new algorithm for model compression. Finally, the "lower layers learn first" behavior was also observed for recurrent neural networks, as shown in Figure App.6 in the Appendix.
References
[1] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644, 2016.
[2] David Eigen, Jason Rolfe, Rob Fergus, and Yann LeCun. Understanding deep architectures using a recursive convolutional network. arXiv preprint arXiv:1312.1847, 2013.
[3] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53:217–288, 2011.
[4] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16:2639–2664, 2004.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[6] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
[7] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[9] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 991–999, 2015.
[10] Y. Li, J. Yosinski, J. Clune, H. Lipson, and J. Hopcroft. Convergent learning: Do different neural networks learn the same representations? In International Conference on Learning Representations (ICLR), May 2016.
[11] Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? In Feature Extraction: Modern Questions and Challenges, pages 196–212, 2015.
[12] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188–5196, 2015.
[13] Grégoire Montavon, Mikio L. Braun, and Klaus-Robert Müller. Kernel analysis of deep networks. Journal of Machine Learning Research, 12(Sep):2563–2581, 2011.
[14] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[15] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[16] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[17] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. In Deep Learning Workshop, International Conference on Machine Learning (ICML), 2015.
[18] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
[19] Bolei Zhou, Aditya Khosla, Àgata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene CNNs. In International Conference on Learning Representations (ICLR), volume abs/1412.6856, 2014.
6,841 | 7,189 | Convolutional Phase Retrieval
Qing Qu
Columbia University
[email protected]
Yuqian Zhang
Columbia University
[email protected]
John Wright
Columbia University
[email protected]
Yonina C. Eldar
Technion
[email protected]
Abstract
We study the convolutional phase retrieval problem, which considers recovery of an unknown signal x ∈ C^n from m measurements consisting of the magnitude of its cyclic convolution with a known kernel a of length m. This model is motivated by applications to channel estimation, optics, and underwater acoustic communication, where the signal of interest is acted on by a given channel/filter, and phase information is difficult or impossible to acquire. We show that when a is random and m is sufficiently large, x can be efficiently recovered up to a global phase using a combination of spectral initialization and generalized gradient descent. The main challenge is coping with dependencies in the measurement operator; we overcome this challenge by using ideas from decoupling theory, suprema of chaos processes and the restricted isometry property of random circulant matrices, and recent analysis for alternating minimization methods.
1 Introduction
We study the problem of recovering an unknown signal x ∈ C^n from measurements y = |a ⊛ x|, which consist of the magnitude of the convolution of x and a given filter a ∈ C^m:

    find z,   s.t.   y = |a ⊛ z|,    (1)

where ⊛ denotes cyclic convolution. Let C_a ∈ C^(m×m) be a circulant matrix generated by a, and let A ∈ C^(m×n) be the matrix formed by the first n columns of C_a. Then the convolutional phase retrieval problem can be rewritten in the common matrix-vector form

    find z,   s.t.   y = |Az|.    (2)
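For concreteness, a small numpy sketch of this measurement model; the variable names and sizes are ours:

```python
import numpy as np
from scipy.linalg import circulant

m, n = 1024, 128
a = (np.random.randn(m) + 1j * np.random.randn(m)) / np.sqrt(2)  # Eq. (3) below
x = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)

# y = |a (cyclic-conv) x_pad|, computed via the FFT.
x_pad = np.concatenate([x, np.zeros(m - n, dtype=complex)])
y = np.abs(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x_pad)))

# Equivalent matrix form y = |A x|: A is the first n columns of circulant(a).
A = circulant(a)[:, :n]
assert np.allclose(y, np.abs(A @ x))
```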
This problem is motivated by applications like channel estimation [37, 1], (non)coherent optical communication [14, 24], and underwater acoustic communication [31]. For example, in
millimeter-wave (mm-wave) wireless communications for 5G networks [27], one important
problem is to reconstruct signal angle of arrival (AoA) from measurements, which are taken
by the convolution of signal AoA and the antenna pattern.
Because the phase measurements are either very noisy and unreliable, or expensive to acquire, it is often preferable to take measurements of the signal magnitude only; the phase information is lost.
Most known results on the exact solution of phase retrieval problems [8, 29, 10, 38, 36, 35] pertain to generic random matrices, where the entries of A are independent subgaussian random variables. However, in practice it is almost impossible to design purely random measurement matrices: in many cases, as mentioned above, the measurement is much more
structured, generated by passing a signal through a manually designed filter. Moreover, the structured measurements often admit more efficient numerical methods: by using the fast Fourier transform for matrix-vector products, the benign structure of the convolutional model (1) allows us to design methods with O(m) memory and O(m log m) computational cost per iteration, whereas for generic measurements the cost is around O(mn).
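A sketch of these FFT-based matrix-vector products (our implementation of the structure just described):

```python
import numpy as np

def A_mv(a, z, m, n):
    """A z, with A the first n columns of circulant(a); O(m log m)."""
    z_pad = np.concatenate([z, np.zeros(m - n, dtype=complex)])
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(z_pad))

def A_rmv(a, w, m, n):
    """A^* w (conjugate transpose), again via FFTs."""
    full = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(w))
    return full[:n]
```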
In this work, we study the convolutional phase retrieval problem (1) under the assumption that the kernel a = [a_1, ..., a_m]^T is random, with each entry i.i.d. complex Gaussian:

    a = u + iv,   u, v ∼_iid N(0, (1/2) I).    (3)
Compared to generic random measurements, the random convolution model we study here is far more structured: it is parameterized by only O(m) independent complex normal random variables, whereas the generic model involves O(mn). Since the rows and columns of A are probabilistically dependent, standard techniques (based on concentration of functions of independent random vectors) do not apply.
We propose and analyze a local¹ gradient descent type method, minimizing a weighted, nonconvex and nonsmooth objective

    min_{z∈C^n} f(z) = (1/2m) ‖ b^(1/2) ⊙ (y − |Az|) ‖²,    (4)

where ⊙ denotes the Hadamard product and b ∈ R^m_{++} is a weighting vector. Our result can
be informally summarized as follows.
Theorem 1.1 (Informal) When m ≥ Ω(n poly log n), with high probability, spectral initialization [25, 5] produces an initialization z^(0) that is O(1/poly log n)-close to the optimum. Moreover, when m ≥ Ω((‖C_x‖²/‖x‖²) n poly log n), with high probability, a certain gradient descent method based on (4) converges linearly from this initialization to the set X = { xe^(iφ) | φ ∈ [0, 2π) } of points that differ from the true signal x only by a global phase.
Here, C_x ∈ C^(m×m) denotes the circulant matrix corresponding to cyclic convolution with a length-m zero padding of x, and poly log n denotes a polynomial in log n. A dependence of the sample complexity m on ‖C_x‖ seems inevitable² and is corroborated by experiments.
Our proof is based on ideas from decoupling theory [11], the suprema of chaos processes and the restricted isometry property of random circulant matrices [26, 20], and a recent analysis of alternating minimization methods [35]. Our analysis draws connections between the convergence properties of gradient descent and the classical alternating direction method. This allows us to avoid the need to argue that high-degree polynomials in the structured random matrix A concentrate uniformly, as would be required by a straightforward translation of existing analysis to this new setting. Instead, we control the bulk effect of phase errors uniformly in a neighborhood around the ground truth. This requires us to develop new decoupling and concentration tools for controlling nonlinear phase functions of circulant random matrices, which could be potentially useful for analyzing other random circulant convolution problems such as blind deconvolution [40] and convolutional dictionary learning [18].
Prior art for phase retrieval. The challenge of developing efficient, guaranteed methods for phase retrieval has attracted substantial interest over the past decade [28, 19]. For the generalized phase retrieval problem in which the sensing matrix A is i.i.d. random, the first result on global recovery is based on semidefinite programming (SDP) [8, 3, 36]. However, the computational cost of SDP limits its practicality. Nonconvex methods can be more efficient. [25] showed that the alternating minimization method provably converges to the
¹ It would be nicer to characterize the global geometry of the problem as in [15, 33, 34, 32]. However, the nonhomogeneity of ‖C_x‖ over the space causes tremendous difficulties for concentration with m ≥ Ω(n poly log n) samples.
² The operator norm of C_x is nonhomogeneous over x ∈ CS^(n−1), ranging from constant to O(√n). For instance, ‖C_x‖ = 1 when x is a standard basis vector, and ‖C_x‖ = √n when x = (1/√n)1.
truth, when initialized using a spectral method and provided with fresh samples at each iteration. Candès et al. [5] showed that, with the same initialization, gradient descent on the nonconvex least-squares objective

    min_{z∈C^n} f_1(z) = (1/2m) ‖ y² − |Az|² ‖²,    (5)

provably recovers the ground truth, with near-optimal sample complexity m ≥ Ω(n log n). The works [10, 39, 38] further reduce the sample complexity to m ≥ Ω(n) by using different nonconvex objectives and truncation techniques. Moreover, [34] reveals that the nonconvex objective (5) has a benign global geometry: with high probability, it has no bad critical points for m ≥ Ω(n log³ n) samples³.
Structured random measurements. The study of structured random measurements in signal processing [21] includes random Fourier measurements [7, 9, 12] and partial random convolutions [26, 20] in compressed sensing [6]. However, the study of structured random measurements for phase retrieval is still quite limited. In particular, [17] and [4] studied the performance of SDP methods with t-designs and random masked Fourier transform measurements. The authors in [5, 2] show that the phase retrieval problem with random coded diffraction and STFT measurements can be solved by minimizing nonconvex objectives, although [5] requires resampling for the initialization, and in [2] the contraction radius is not large enough for initialization. In addition, the motivation of these measurement schemes is quite different from ours. For a more detailed review of this subject, we refer the reader to Section 4 of [21].
Notations. We use (·)^T and (·)^* to denote the real and Hermitian transpose, respectively. We use CS^(n−1) to denote the n-dimensional complex sphere. Let ℜ(·) and ℑ(·) denote the real and imaginary parts of a complex variable, respectively. Throughout the paper, we assume the optimal solution is x ∈ C^n. Because the solution is optimal only up to a global phase shift, we define the optimal solution set as X = { xe^(iφ) | φ ∈ [0, 2π) }, and define the distance from a point z ∈ C^n to the set X as

    dist(z, X) = inf_{φ∈[0,2π)} ‖ z − xe^(iφ) ‖.

For any z ∈ C with |z| ≠ 0, we use φ(z) to denote the phase of z, that is, e^(iφ(z)) = z/|z|.
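This infimum has a closed form: it is attained at e^(iφ) = x^* z / |x^* z| (a fact used again in Section 3). A small numerical routine (ours):

```python
import numpy as np

def dist_to_orbit(z, x):
    """dist(z, X) = min over phi of ||z - x exp(i phi)||."""
    c = np.vdot(x, z)                     # x^* z
    phase = c / abs(c) if c != 0 else 1.0
    return np.linalg.norm(z - x * phase)
```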
2 Algorithm
We develop an approach to convolutional phase retrieval based on local nonconvex optimization. Our proposed algorithm has two components: (1) a careful initialization using
the spectral method; (2) local refinement by (generalized) gradient descent. We introduce
the two steps in reverse order.
2.1 Minimization of a nonconvex and nonsmooth objective
We consider minimizing a weighted nonconvex and nonsmooth objective

    f(z) = (1/2m) ‖ b^(1/2) ⊙ (y − |Az|) ‖².    (6)
The introduction of the positive weights b facilitates our analysis by enabling us to compare certain functions of the dependent random matrix A to functions involving more independent random variables. We substantiate this claim in the next section.
Although the function (4) is not complex-differentiable, if one identifies C^n with R^(2n) and treats f(z) as a function on the real domain, f is still differentiable in the real sense. Thus, we adopt the Wirtinger calculus [22], which can be thought of as a clean way of organizing the real partial derivatives [29, 34].
³ [30] tightened the sample complexity to m ≥ Ω(n log n) by using advanced probability tools.
On the other hand, it should also be noted that the absolute value |·| is nonsmooth at 0, and hence f(z) is not differentiable everywhere, even in the real sense. Similar to [38], for any complex number u ∈ C, we uniquely define its phase φ(u) at 0 by

    exp(iφ(u)) = u/|u|  if |u| ≠ 0,   exp(iφ(u)) = 1  otherwise.

Then the Wirtinger gradient of (4) is uniquely determined as

    (∂/∂z̄) f(z) = (1/m) A^* diag(b) [ Az − y ⊙ exp(iφ(Az)) ].    (7)
Starting from some initialization z^(0), we minimize the objective (6) by gradient descent:

    z^(r+1) = z^(r) − τ (∂/∂z̄) f(z^(r)),    (8)

where τ > 0 is the stepsize. Indeed, (∂/∂z̄) f(z) can be interpreted as the gradient of f(z) as in the real case; this method is also referred to as amplitude flow [38].
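A minimal sketch of the update (7)-(8), reusing the FFT-based products A_mv / A_rmv sketched earlier; with b = 1 this is plain amplitude flow:

```python
import numpy as np

def phase(u):
    """exp(i phi(u)), with phi(0) defined so that the phase of 0 is 1."""
    out = np.ones_like(u)
    nz = np.abs(u) > 0
    out[nz] = u[nz] / np.abs(u[nz])
    return out

def gradient_step(z, a, y, b, tau, m, n):
    Az = A_mv(a, z, m, n)
    residual = b * (Az - y * phase(Az))   # diag(b) [Az - y .* exp(i phi(Az))]
    grad = A_rmv(a, residual, m, n) / m   # Eq. (7)
    return z - tau * grad                 # Eq. (8)
```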
2.2 Initialization via spectral method
Similar to [25, 29], we compute the initialization z^(0) via a spectral method, detailed in [29, Algorithm 1]. More specifically, z^(0) is a scaled leading eigenvector of

    Y = (1/m) Σ_{k=1}^m y_k² a_k a_k^* = (1/m) A^* diag(y²) A,    (9)

which is constructed from the knowledge of the sensing vectors and the observations. The leading eigenvector of Y can be efficiently computed via the power method. Note that E[Y] = ‖x‖² I + x x^*, so the leading eigenvector of E[Y] is proportional to the optimal solution x. Under the random convolutional model of A, using probability tools from [21], we show that v^* Y v concentrates to its expectation v^* E[Y] v for all v ∈ CS^(n−1) whenever m ≥ Ω(n poly log n), ensuring that the initialization z^(0) is close to the optimal set X.
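A sketch of this initialization via power iteration on Y, again using the FFT-based products. The scaling below is our crude choice, justified by E[y_k²] = ‖x‖²:

```python
import numpy as np

def spectral_init(a, y, m, n, iters=100):
    z = np.random.randn(n) + 1j * np.random.randn(n)
    z /= np.linalg.norm(z)
    for _ in range(iters):
        # z <- Y z = (1/m) A^* diag(y^2) A z, then renormalize.
        z = A_rmv(a, (y**2) * A_mv(a, z, m, n), m, n) / m
        z /= np.linalg.norm(z)
    return np.sqrt(np.mean(y**2)) * z   # rough estimate of ||x|| (assumption)
```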
3 Main Result and Analysis
In this section, we describe our main theoretical result, which shows that with high probability, the algorithm described in the previous section succeeds.
Theorem 3.1 (Main Result) Whenever m ≥ C_0 n log³¹ n, the spectral method [29, Algorithm 1] produces an initialization z^(0) that satisfies

    dist(z^(0), X) ≤ c_0 log⁻⁶ n ‖x‖

with probability at least 1 − c_1 m^(−c_2). Suppose b = ζ_{σ²}(y), where

    φ_{σ²}(t) = (2πσ²)^(−1/2) exp(−t²/(2σ²)),   ζ_{σ²}(t) = 1 − √(2π) σ φ_{σ²}(t),   t > 0,    (10)

with σ² > 1/2 (equivalently, ζ_{σ²}(t) = 1 − exp(−t²/(2σ²))). Starting from z^(0), with σ² = 0.51 and stepsize τ = 2.02, whenever m ≥ C_1 (‖C_x‖²/‖x‖²) max{ log¹⁷ n, n log⁴ n }, with probability at least 1 − c_3 m^(−c_4), for all iterates z^(r) (r ≥ 1) defined in (8) we have

    dist(z^(r), X) ≤ (1 − ϱ)^r dist(z^(0), X),    (11)

for a small scalar ϱ ∈ (0, 1). Here, c_0, c_1, c_2, c_3, c_4, C_0, C_1 > 0 are numerical constants.
Remark: Our result shows that, after initializing O(1/poly log(n))-close to the optimum via the spectral method, gradient descent (8) converges linearly to the optimal solution. The sample complexity here also depends on ‖C_x‖, which is quite different from the i.i.d. case. For a typical x ∈ CS^(n−1) (e.g., x drawn uniformly at random from CS^(n−1)), ‖C_x‖ remains O(log n), and the sample complexity m ≥ Ω(n poly log n) matches the i.i.d. case up to log factors. However, ‖C_x‖ is nonhomogeneous over x ∈ CS^(n−1): if x is sparse in the Fourier domain (e.g., x = (1/√n)1), the sample complexity can be as large as m ≥ Ω(n² poly log n). Such behavior is also demonstrated in the experiments of Section 4. We believe the (very large!) number of logarithms in our result is an artifact of our analysis, rather than a limitation of the method. We expect a tighter analysis to reduce the sample complexity to m ≥ Ω((‖C_x‖²/‖x‖²) n log⁶ n); this is left for future work. The choices of the weighting b ∈ R^m in (10), σ² = 0.51, and the stepsize τ = 2.02 are purely for the purpose of analysis. In practice, the algorithm converges with b = 1 and a small stepsize τ, or by using a backtracking line search for the stepsize τ.
In the following, we briefly highlight some major challenges and novel proof ideas behind the analysis. The details can be found in our full paper.
3.1 Proof sketch of iterative contraction
Our analysis is largely inspired by the recent analysis of the alternating direction method (ADM) [35]. In the following, we draw connections between the gradient descent method (8) and ADM, and sketch the basic ideas of the convergence analysis.
ADM iteration. ADM is a classical method for solving phase retrieval problems [16, 25, 35], which can be considered a heuristic that solves

    min_{z∈C^n, |u|=1} (1/2) ‖ Az − y ⊙ u ‖².

At every iterate ẑ^(r), ADM proceeds in two steps:

    c^(r+1) = y ⊙ exp(iφ(A ẑ^(r))),   ẑ^(r+1) = arg min_z (1/2) ‖ Az − c^(r+1) ‖²,

which leads to the following update:

    ẑ^(r+1) = A† ( y ⊙ exp(iφ(A ẑ^(r))) ),

where A† = (A^* A)^(−1) A^* is the pseudo-inverse of A. Let φ̂_r = arg min_φ ‖ ẑ^(r) − xe^(iφ) ‖. The distance between ẑ^(r+1) and X is bounded by

    dist(ẑ^(r+1), X) = ‖ ẑ^(r+1) − xe^(iφ̂_{r+1}) ‖ ≤ ‖A†‖ ‖ Axe^(iφ̂_r) − y ⊙ exp(iφ(A ẑ^(r))) ‖.    (12)
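A dense-matrix sketch of this classical ADM update (our implementation):

```python
import numpy as np

def adm_step(A, y, z):
    c = y * np.exp(1j * np.angle(A @ z))             # keep measured magnitudes
    z_next, *_ = np.linalg.lstsq(A, c, rcond=None)   # z = A_dagger c
    return z_next
```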
Gradient descent with b = 1. For simplicity, let us consider the gradient descent update (8) with b = 1 and stepsize τ = 1. Let φ_r = arg min_φ ‖ z^(r) − xe^(iφ) ‖. The distance between the iterate z^(r) and the optimal set X is bounded by

    dist(z^(r+1), X) = ‖ z^(r+1) − xe^(iφ_{r+1}) ‖
        ≤ ‖ I − (1/m) A^* A ‖ ‖ z^(r) − xe^(iφ_r) ‖ + (1/m) ‖A^*‖ ‖ Axe^(iφ_r) − y ⊙ exp(iφ(Az^(r))) ‖.    (13)
Towards iterative contraction. By measure concentration, it can be shown that

    ‖ I − (1/m) A^* A ‖ = o(1),   ‖A‖ ≲ √m,   ‖A†‖ ≲ 1/√m    (14)

hold with high probability whenever m ≥ Ω(n poly log n). Therefore, to show iterative contraction of both methods, based on (12) and (13), it is sufficient to show that

    ‖ Axe^(iφ) − y ⊙ exp(iφ(Az)) ‖ ≤ (1 − η) √m ‖ z − xe^(iφ) ‖    (15)

for some constant η ∈ (0, 1), where φ = arg min_{φ∈[0,2π)} ‖ z − xe^(iφ) ‖, so that e^(iφ) = x^* z / |x^* z|.
Following ideas similar to those used to control (15) for the ADM method [35], this observation provides a new way of analyzing the gradient descent method. As an attempt to show (15) for the random circulant matrix A, we invoke the following lemma, which controls the error in a first-order approximation of exp(iφ(·)).
Lemma 3.2 (Lemma 3.2, [35]) For any ε > 0 and any z, z' ∈ C, we have

    | exp(iφ(z' + z)) − exp(iφ(z')) | ≤ 2 · 1_{|z| ≥ ε|z'|} + (1 − ε)^(−1) | ℑ(z/z') |.

Let us decompose z = αx + βw, where w ∈ CS^(n−1) with w ⊥ x, and α, β ∈ C. Note that φ(α) = φ. Then by Lemma 3.2, for any ε ∈ (0, 1), we have

    ‖ Axe^(iφ) − y ⊙ exp(iφ(Az)) ‖ = ‖ |Ax| ⊙ [ exp(iφ(Ax)) − exp(iφ(Ax + (β/α) Aw)) ] ‖
        ≤ 2 ‖ |Ax| ⊙ 1_{|β/α| |Aw| ≥ ε |Ax|} ‖ + (1 − ε)^(−1) (|β|/|α|) ‖ ℑ( (Aw) ⊙ exp(−iφ(Ax)) ) ‖
        =: T_1 + T_2.
T1
The first term T1 can be bounded using the restricted isometry property of random circulant
matrices [20], together with some auxiliary analysis.
The second term T2 involves a nonlinear function exp (?i?(Ax)) of the random circulant
matrix A. Controlling this nonlinear, highly dependent random process T2 (w) for all w
is a nontrivial task. Next, we explain why controlling T2 is technically challenging, and
sketch the key ideas about how to control a smoothed variant of T2 , by using the weighting
b = ??2 (y) introduced in (10). We also provide intuition for why the weighting b is helpful.
3.2 Controlling the phase term T_2
As elaborated above, the major challenge in showing iterative contraction is bounding the supremum of the nonlinear, dependent random process T_2(w) over the set S = { w ∈ CS^(n−1) | w ⊥ x }. Using the fact that ℑ(u) = (1/2i)(u − ū) for any u ∈ C, we have

    sup_{w∈S} T_2² ≤ (1/2) ‖A‖² + (1/2) sup_{w∈S} | L(a, w) |,   L(a, w) := w^T A^T diag(ς(Ax)) A w,

where ς(t) = exp(−2iφ(t)). Since ‖A‖ ≲ √m by (14), the major remaining task is to show that

    sup_{w∈S} |L(a, w)| < (1 − η') m    (16)
for some constant η' ∈ (0, 1).
Why decoupling? Let A have rows a_1^*, ..., a_m^*. The term

    L(a, w) = w^T A^T diag(ς(Ax)) A w = Σ_{k=1}^m ς(a_k^* x) · w^T a_k a_k^T w

is a summation of random variables that are dependent across k, and for such sums our probability tools are very
limited. To overcome this problem, we deploy ideas from decoupling [11]. Informally, decoupling allows us to compare moments of the original random function to functions of more
independent random variables, which are usually easier to analyze. The book [11] provides
a beautiful introduction to this area. In our problem, notice that the random vector a occurs twice in the definition of L(a, w): once in the phase term ς(Ax) = exp(−2iφ(Ax)), and once in the quadratic term. The general spirit of decoupling is to replace one a with an independent copy a' of the same random vector, yielding a random process with fewer dependencies. Here, we seek to replace L(a, w) with

    Q^L_dec(a, a', w) = w^T A^T diag(ς(A'x)) A w.    (17)

The usefulness of this new, decoupled form Q^L_dec(a, a', w) is that it introduces extra randomness: Q^L_dec(a, a', w) is now a chaos process in a conditioned on a'. This makes analyzing sup_{w∈S} Q^L_dec(a, a', w) amenable to existing analysis of suprema of chaos processes for random circulant matrices [21]. However, achieving the decoupling requires additional work: the most general existing results on decoupling pertain to tetrahedral polynomials, which are polynomials with no monomials involving any power larger than one of any random variable. By appropriately tracking cross terms, these results can also be applied to more general (non-tetrahedral) polynomials in Gaussian random variables [23]. However, our random process L(a, w) involves the nonlinear phase term ς(Ax), which is not a polynomial, and hence is not amenable to a direct appeal to existing results.
Decoupling is "recoupling". Existing results [23] for decoupling polynomials of Gaussian random variables are derived from two simple facts: (i) orthogonal projections of Gaussian variables are independent, and (ii) Jensen's inequality. Indeed, for a ∼ CN(0, I), let us introduce an independent vector δ ∼ CN(0, I), and write

    g¹ = a + δ,   g² = a − δ.

Because of Fact (i), these are independent CN(0, 2I) vectors. By conditional expectation,

    E_δ[ Q^L_dec(g¹, g², w) ] = E_δ[ Q^L_dec(a + δ, a − δ, w) ] =: L̂(a, w).    (18)

Thus, the key idea of decoupling L(a, w) into Q^L_dec(a, a', w) is essentially to "recouple" Q^L_dec(g¹, g², w) via conditional expectation; the "recoupled" term L̂ can be
viewed as an approximation of L(a, w). Notice that by Fact (ii), for any convex function Φ,

    E_a[ sup_{w∈S} Φ( L̂(a, w) ) ] = E_a[ sup_{w∈S} Φ( E_δ[ Q^L_dec(a + δ, a − δ, w) ] ) ]
        ≤ E_{a,δ}[ sup_{w∈S} Φ( Q^L_dec(a + δ, a − δ, w) ) ] = E_{g¹,g²}[ sup_{w∈S} Φ( Q^L_dec(g¹, g², w) ) ].

Thus, by choosing Φ(t) = |t|^p, we can control the moments of sup_{w∈S} L̂(a, w) via

    ‖ sup_{w∈S} L̂(a, w) ‖_{L^p} ≤ ‖ sup_{w∈S} Q^L_dec(g¹, g², w) ‖_{L^p}.    (19)

For tetrahedral polynomials, L̂ = L, so the approximation is exact. As the tail bound of sup_{w∈S} L̂(a, w) can be controlled via its moment bounds [13, Chapter 7.2], this allows us to directly control the object L(a, w) of interest. The reason this control obtains is that the conditional expectation operator E_δ[· | a] "recouples" Q^L_dec(a, a', w) back to the target L(a, w). In slogan form, (Gaussian) decoupling is recoupling.
dec which
?recouples? back to L. In other words, in the existing form, we need to tolerate an approximation error as Lb ?= L. By the triangle inequality,
b
b
w) ? L(a, w) .
w) + sup L(a,
sup |L(a, w)| ? sup L(a,
(20)
w?S
w?S
w?S
b
w) can be sharply controlled via its moments bound
As discussed above, the supw?S L(a,
in (19). Now the bound (20) is useful to derive tight control for L(a, w), if L(a, w) is very
7
b w) uniformly. The question is: for what L is it possible to find a ?well-behaved?
close to L(a,
L
Qdec for which the approximation error is small? To understand this question, recall that the
mechanism that links Qdec back to Lb is the conditional expectation operator E? [? | a]. For
our case, from (18) orthogonality leads to
.
b w) = wA? diag (h(Ax)) Aw,
L(a,
h(t) = Es?CN (0,?x?2 ) [?(t + s)] .
(21)
Thus, by combining the results in (20) and (21), we have
b
2
w) + ?h ? ??L? ?A? .
sup |L(a, w)| ? sup L(a,
|
{z
}
w?S
w?S
(22)
approximation error
Note that the function h is not exactly ?, but generated by convolving ? with a multivariate
Gaussian pdf : indeed, recoupling is Gaussian smoothing. The Fourier transform of a multivariate Gaussian is again a Gaussian; it decays quickly with frequency. So, in order to admit a
small approximation error, the target L must be smooth. However, in our case, the function
?(t) = exp(?2i?(t)) is discontinuous at t = 0; it changes extremely rapidly in the vicinity of
t = 0, and hence its Fourier transform (appropriately defined) does not decay quickly at all.
Therefore, L(a, w) is a poor target for approximation with a smooth function Lb = E? [QL
dec ].
From Fig. 1, the di?erence between h and ? increases as |t| ? 0. The poor approximation
error ?h ? ??L? = 1 results in a trivial bound for supw?S |L(a, w)| instead of (16).
Decoupling and convolutional phase retrieval. The key idea to reduce the approximation error ?? ? h?L? = 1 is to smooth ?. More specifically, we introduce a new objective
(6) with Gaussian weighting b = ??2 (y) in (10), replacing the analyzing target T2 with
(
)
Tb2 =
diag b1/2 ? ((Aw) ? exp (?i?(Ax)))
.
Consequently, we obtain a smoothed variant Ls (a, w) of L(a, w),
Ls (a, w) = w? A? diag (??2 (y) ? ?(Ax)) Aw.
Now the approximation error ?h ? ??L? in (22) is replaced by ?h(t) ? ??2 (t)?(t)?L? . As
observed from Fig. 1, the function ??2 (t) smoothes ?(t) especially near the vicinity of t =
0, such that the new approximation error ?f (t) ? ??2 (t)?(t)?L? is significantly reduced.
Thus, by using similar ideas as above, we can prove a desired bound supw?S |Ls (a, w)| <
(1 ? ?s )m. Finally, because the new weighting b = ??2 (y), the overall analysis needs to be
slightly modified correspondingly. We refer the readers to our full paper for more details.
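A small numerical sketch (ours) of the functions involved: ς, the weighting ζ_{σ²} from (10) (which vanishes at t = 0 and thus suppresses the discontinuity), and a Monte Carlo estimate of the smoothed function h from (21):

```python
import numpy as np

def varsigma(t):
    return np.exp(-2j * np.angle(t))          # exp(-2i phi(t))

def zeta(t, sigma2=0.51):
    # 1 - sqrt(2 pi) sigma * phi_{sigma^2}(t) simplifies to:
    return 1.0 - np.exp(-t**2 / (2 * sigma2))

def h(t, norm_x2=1.0, n_samples=100_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of h(t) = E_{s ~ CN(0, ||x||^2)} varsigma(t + s)."""
    s = rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)
    s *= np.sqrt(norm_x2 / 2)                 # CN(0, ||x||^2) scaling
    return np.mean(varsigma(t + s))
```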
Figure 1: Plots of the functions ζ_{σ²}(t), h(t), and ς(t) for t ∈ R_+.
Figure 2: Phase transition for recovering the signal x ∈ CS^(n−1) with different ‖C_x‖.
4 Experiments
Dependence of sample complexity on ‖C_x‖. First, we investigate the dependence of the sample complexity m on ‖C_x‖. We assume the ground truth x ∈ CS^(n−1) and consider three
cases: (1) x = e_1, with ‖C_x‖ = 1, where e_1 is the standard basis vector; (2) x drawn uniformly at random from CS^(n−1); (3) x = (1/√n)1, with ‖C_x‖ = √n. For each case, we fix the signal length n = 1000 and vary the ratio m/n. For each ratio m/n, we randomly generate the kernel a ∼ CN(0, I) and repeat the experiment 100 times. We initialize the algorithm by the spectral method [29, Algorithm 1] and run the gradient descent (8). Given the algorithm output x̂, we judge the success of recovery by inf_{φ∈[0,2π)} ‖ x̂ − xe^(iφ) ‖ ≤ ε, where ε = 10⁻⁵. From Fig. 2, we can see that the larger ‖C_x‖ is, the more samples are needed for exact recovery.
Figure 3: Experiment on real images.
Experiments on real images. Next, we run the experiment on a real dataset to demonstrate the effectiveness and efficiency of the proposed method. We choose an image of size 200 × 300 as in Fig. 3 and use m = 5 n log n samples for reconstruction. The kernel a ∈ C^m is randomly generated as complex Gaussian CN(0, I). We run the power method for 100 iterations for initialization, and stop the algorithm once the error is smaller than 1 × 10⁻⁴. It takes 197.08 s to reconstruct all the RGB channels. An experiment using general Gaussian measurements A ∈ C^(m×n) could easily run out of memory on a personal computer for problems of this size.
Figure 4: Experiment with real antenna pattern.
Experiments on signal AoA phase recovery for 5G communications. Finally, we demonstrate the effectiveness of the proposed method on a problem arising in 5G communication, as mentioned in the introduction. Fig. 4 (left) shows an antenna pattern a ∈ C^361 obtained from Bell Labs. We observe the modulus of the convolution of this pattern with the signal of interest. For three different types of signals with length n = 20, namely (1) x = e_1, (2) x drawn uniformly at random from CS^(n−1), and (3) x = (1/√n)1, our results in Fig. 4 show that we can achieve almost perfect recovery.
5 Acknowledgement
This work was partially supported by NSF grants CCF 1527809 and IIS 1546411, by grants from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 646804-ERC-COG-BNYQ, and by a grant from the Israel Science Foundation under grant no. 335/14. QQ gratefully acknowledges the generous support of the Microsoft graduate research fellowship. We would like to thank Shan Zhong for helpful discussions of real applications and for providing the antenna data for the experiments, and we thank Ju Sun and Han-Wen Kuo for helpful discussions and input regarding the analysis in this work.
References
[1] Sercan Ö. Arık and Joseph M. Kahn. Direct-detection mode-division multiplexing in modal basis using phase retrieval. Opt. Lett., 41(18):4265–4268, Sep 2016.
[2] T. Bendory, Y. C. Eldar, and N. Boumal. Non-convex phase retrieval from STFT measurements. IEEE Transactions on Information Theory, PP(99):1–1, 2017.
[3] Emmanuel J. Candès, Yonina C. Eldar, Thomas Strohmer, and Vladislav Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1), 2013.
[4] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval from coded diffraction patterns. Applied and Computational Harmonic Analysis, 39(2):277–299, 2015.
[5] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. Information Theory, IEEE Transactions on, 61(4):1985–2007, April 2015.
[6] Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[7] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
[8] Emmanuel J. Candès, Thomas Strohmer, and Vladislav Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
[9] Emmanuel J. Candès and Terence Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406–5425, 2006.
[10] Yuxin Chen and Emmanuel J. Candès. Solving random quadratic systems of equations is nearly as easy as solving linear systems. arXiv preprint arXiv:1505.05114, 2015.
[11] Victor de la Peña and Evarist Giné. Decoupling: From Dependence to Independence. Springer, 1999.
[12] Yonina C. Eldar and Gitta Kutyniok. Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[13] Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Springer, 2013.
[14] Robert M. Gagliardi and Sherman Karp. Optical Communications. Wiley-Interscience, New York, 1976.
[15] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: Online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797–842, 2015.
[16] R. W. Gerchberg and W. Owen Saxton. A practical algorithm for the determination of the phase from image and diffraction plane pictures. Optik, 35:237–246, 1972.
[17] David Gross, Felix Krahmer, and Richard Kueng. A partial derandomization of PhaseLift using spherical designs. arXiv preprint arXiv:1310.2267, 2013.
[18] Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5135–5143, 2015.
[19] Kishore Jaganathan, Yonina C. Eldar, and Babak Hassibi. Phase retrieval: An overview of recent developments. Chapter V, Optical Compressive Imaging, 2016.
[20] Felix Krahmer, Shahar Mendelson, and Holger Rauhut. Suprema of chaos processes and the restricted isometry property. Communications on Pure and Applied Mathematics, 67(11):1877–1904, 2014.
[21] Felix Krahmer and Holger Rauhut. Structured random measurements in signal processing. GAMM-Mitteilungen, 37(2):217–238, 2014.
[22] Ken Kreutz-Delgado. The complex gradient operator and the CR-calculus. arXiv preprint arXiv:0906.4835, 2009.
[23] Stanisław Kwapień. Decoupling inequalities for polynomial chaos. The Annals of Probability, pages 1062–1071, 1987.
[24] Antonio Mecozzi, Cristian Antonelli, and Mark Shtaif. Kramers–Kronig coherent receiver. Optica, 3(11):1220–1227, Nov 2016.
[25] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796–2804, 2013.
[26] Holger Rauhut. Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9:1–92, 2010.
[27] Arash Shahmansoori, Gabriel E. Garcia, Giuseppe Destino, Gonzalo Seco-Granados, and Henk Wymeersch. 5G position and orientation estimation through millimeter wave MIMO. In Globecom Workshops (GC Wkshps), 2015 IEEE, pages 1–6. IEEE, 2015.
[28] Yoav Shechtman, Yonina C. Eldar, Oren Cohen, Henry N. Chapman, Jianwei Miao, and Mordechai Segev. Phase retrieval with application to optical imaging: A contemporary overview. Signal Processing Magazine, IEEE, 32(3):87–109, May 2015.
[29] Mahdi Soltanolkotabi. Algorithms and Theory for Clustering and Nonconvex Quadratic Programming. PhD thesis, Stanford University, 2014.
[30] Mahdi Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. CoRR, abs/1702.06175, 2017.
[31] Milica Stojanovic, Josko A. Catipovic, and John G. Proakis. Phase-coherent digital communications for underwater acoustic channels. IEEE Journal of Oceanic Engineering, 19(1):100–111, 1994.
[32] Ju Sun, Qing Qu, and John Wright. Complete dictionary recovery over the sphere. arXiv preprint arXiv:1504.06785, 2015.
[33] Ju Sun, Qing Qu, and John Wright. When are nonconvex problems not scary? arXiv preprint arXiv:1510.06096, 2015.
[34] Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. arXiv preprint arXiv:1602.06664, 2016.
[35] Irène Waldspurger. Phase retrieval with random Gaussian sensing vectors by alternating projections. arXiv preprint arXiv:1609.03088, 2016.
[36] Irène Waldspurger, Alexandre d'Aspremont, and Stéphane Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47–81, 2015.
[37] P. Walk, H. Becker, and P. Jung. OFDM channel estimation via phase retrieval. In Asilomar 2015, 2015.
[38] G. Wang, G. B. Giannakis, and Y. C. Eldar. Solving systems of random quadratic equations via truncated amplitude flow. IEEE Transactions on Information Theory, PP(99):1–1, 2017.
[39] Huishuai Zhang and Yingbin Liang. Reshaped Wirtinger flow for solving quadratic systems of equations. arXiv preprint arXiv:1605.07719, 2016.
[40] Yuqian Zhang, Yenson Lau, Han-Wen Kuo, Sky Cheung, Abhay Pasupathy, and John Wright. On the global geometry of sphere-constrained sparse blind deconvolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Central and Pairwise Data Clustering by
Competitive Neural Networks
Joachim Buhmann & Thomas Hofmann
Rheinische Friedrich-Wilhelms-Universität
Institut für Informatik II, Römerstraße 164
D-53117 Bonn, Fed. Rep. Germany
Abstract
Data clustering amounts to a combinatorial optimization problem to reduce the complexity of a data representation and to increase its precision.
Central and pairwise data clustering are studied in the maximum entropy framework. For central clustering we derive a set of reestimation
equations and a minimization procedure which yields an optimal number of clusters, their centers and their cluster probabilities. A meanfield
approximation for pairwise clustering is used to estimate assignment
probabilities. A self-consistent solution to multidimensional scaling and
pairwise clustering is derived which yields an optimal embedding and
clustering of data points in a d-dimensional Euclidian space.
1 Introduction
A central problem in information processing is the reduction of the data complexity with
minimal loss in precision to discard noise and to reveal basic structure of data sets. Data
clustering addresses this tradeoff by optimizing a cost function which preserves the original
data as complete as possible and which simultaneously favors prototypes with minimal
complexity (Linde et al., 1980; Gray, 1984; Chou et al., 1989; Rose et al., 1990). We discuss an objective function for the joint optimization of distortion errors and the complexity
of a reduced data representation. A maximum entropy estimation of the cluster assignments yields a unifying framework for clustering algorithms with a number of different
distortion and complexity measures. The close analogy of complexity optimized clustering
with winner-take-all neural networks suggests a neural-like implementation resembling
topological feature maps (see Fig. 1).
[Figure 1: Architecture of a three-layer competitive neural network for central data clustering with $d$ neurons in the input layer, $K$ neurons in the clustering layer with activity $\langle M_{i\alpha}\rangle$, and $G$ neurons in the classification layer. The output neurons estimate the conditional probability $p_{\gamma\mid i}$ of data point $i$ being in class $\gamma$.]
Given is a set of data points which are characterized either by coordinates $\{x_i \mid x_i \in \mathbb{R}^d;\ i = 1,\dots,N\}$ or by pairwise distances $\{D_{ik} \mid i,k = 1,\dots,N\}$. The goal of data clustering is to determine a partitioning of a data set which either minimizes the average distance of data points to their cluster centers or the average distance between data points of the same cluster. The two cases are referred to as central or pairwise clustering. Solutions to central clustering are represented by a set of data prototypes $\{y_\alpha \mid y_\alpha \in \mathbb{R}^d;\ \alpha = 1,\dots,K\}$, and the size $K$ of that set. The assignments $\{M_{i\alpha} \mid \alpha = 1,\dots,K;\ i = 1,\dots,N\}$, $M_{i\alpha} \in \{0,1\}$, denote that data point $i$ is uniquely assigned to cluster $\alpha$ ($\sum_\nu M_{i\nu} = 1$). Rate distortion theory specifies the optimal choice of $y_\alpha$ being the cluster centroids, i.e., $\sum_i M_{i\alpha}\,\partial \mathcal{D}_{i\alpha}(x_i, y_\alpha)/\partial y_\alpha = 0$. Given only a set of distances or dissimilarities, the solution to pairwise clustering is characterized by the expected assignment variables $\langle M_{i\alpha}\rangle$. The complexity $\{C_\alpha \mid \alpha = 1,\dots,K\}$ of a clustering solution depends on the specific information processing application at hand; in particular, we assume that $C_\alpha$ is only a function of the cluster probability $p_\alpha = \sum_{i=1}^N M_{i\alpha}/N$. We propose the central clustering cost function
$$
\mathcal{E}^{c}_K(\{M_{i\nu}\}) = \sum_{i=1}^{N}\sum_{\nu=1}^{K} M_{i\nu}\,\bigl(\mathcal{D}_{i\nu}(x_i, y_\nu) + \lambda\,C_\nu(p_\nu)\bigr) \qquad (1)
$$
and the pairwise clustering cost function
$$
\mathcal{E}^{pc}_K(\{M_{i\nu}\}) = \sum_{i=1}^{N}\sum_{\nu=1}^{K} M_{i\nu}\,\Bigl(\frac{1}{2 p_\nu N}\sum_{k=1}^{N} M_{k\nu}\,D_{ik} + \lambda\,C_\nu(p_\nu)\Bigr). \qquad (2)
$$
The distortion and complexity costs are adjusted in size by the weighting parameter $\lambda$. The cost functions (1, 2) have to be optimized in an iterative fashion: (i) vary the assignment variables $M_{i\alpha}$ for a fixed number $K$ of clusters such that the costs $\mathcal{E}^{c,pc}_K(\{M_{i\alpha}\})$ decrease; (ii) increment the number of clusters $K \to K + 1$ and optimize $M_{i\alpha}$ again.
Complexity costs which penalize small, sparsely populated clusters, i.e., $C_\alpha = 1/p_\alpha^s$, $s = 1, 2, \dots$, favor equal cluster probabilities, thereby emphasizing the hardware aspect of a clustering solution. The special case $s = 1$ with constant costs per cluster corresponds to $K$-means clustering. An alternative complexity measure which estimates encoding costs for data compression and data transmission is the Shannon entropy of a cluster set, $\langle C\rangle \equiv \sum_\nu p_\nu C_\nu = -\sum_\nu p_\nu \log p_\nu$.
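To make the two objectives concrete, the following minimal sketch (our own illustration added here, not part of the original paper; all names are chosen for exposition) evaluates (1) and (2) for hard assignments, using the quadratic distortion and the entropy complexity $C_\nu = -\log p_\nu$ discussed above.

```python
import numpy as np

# Sketch: central cost (1) and pairwise cost (2) for hard assignments
# M (N x K, one-hot rows); lam plays the role of lambda.
def central_cost(X, Y, M, lam):
    p = np.maximum(M.mean(axis=0), 1e-12)                 # cluster probabilities
    dist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    C = -np.log(p)                                        # complexity C_nu
    return float((M * (dist + lam * C)).sum())

def pairwise_cost(D, M, lam):
    N = D.shape[0]
    p = np.maximum(M.mean(axis=0), 1e-12)
    C = -np.log(p)
    intra = (M * (D @ M)).sum(axis=0) / (2.0 * p * N)     # within-cluster term
    return float(intra.sum() + lam * (M.sum(axis=0) * C).sum())
```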
The most common choice for the distortion measure are the squared distances $\mathcal{D}_{i\alpha} = \|x_i - y_\alpha\|^2$, which preserve the permutation symmetry of (1) with respect to the cluster index $\nu$. A data partitioning scheme without permutation invariance of cluster indices is described by the cost function
$$
\mathcal{E}^{t}_K = \sum_{i}\sum_{\nu} M_{i\nu}\,\bigl(\langle\!\langle \mathcal{D}_{i\nu}\rangle\!\rangle + \lambda\,C_\nu(p_\nu)\bigr). \qquad (3)
$$
The generalized distortion error $\langle\!\langle \mathcal{D}_{i\alpha}\rangle\!\rangle \equiv \sum_{\gamma} T_{\alpha\gamma}\,\mathcal{D}_{i\gamma}(x_i, y_\gamma)$ between data point $x_i$ and cluster center $y_\alpha$ quantifies the intrinsic quantization errors $\mathcal{D}_{i\gamma}(x_i, y_\gamma)$ and the additional errors due to transitions $T_{\alpha\gamma}$ from index $\gamma$ to $\alpha$. Such transitions might be caused by noise in communication channels. These index transitions impose a topological order on the set of indices $\{\alpha \mid \alpha = 1,\dots,K\}$, which establishes a connection to self-organizing feature maps (Kohonen, 1984; Ritter et al., 1992) in the case of nearest neighbor transitions in a d-dimensional index space. We refer to such a partitioning of the data space as topology preserving clustering.
2 Maximum Entropy Estimation of Central Clustering
Different combinations of complexity terms, distortion measures and topology constraints
define a variety of central clustering algorithms which are relevant in very different information processing contexts. To derive robust, preferably parallel algorithms for these data
clustering cases, we study the clustering optimization problem in the probabilistic framework of maximum entropy estimation. The resulting Gibbs distribution proved to be the
most stable distribution with respect to changes in expected clustering costs (Tikochinsky
et al., 1984) and, therefore, has to be considered optimal in the sense of robust statistics.
Statistical physics (see e.g. (Amit, 1989; Rose et al., 1990)) states that maximizing the entropy at a fixed temperature $T = 1/\beta$ is equivalent to minimizing the free energy
$$
\mathcal{F}_K = -T\ln Z = -T\ln\Bigl(\sum_{\{M_{i\nu}\}}\exp\bigl(-\beta\,\mathcal{E}^{c}_K\bigr)\Bigr)
= -\lambda N\sum_{\nu} p_\nu^2\,\frac{\partial C_\nu}{\partial p_\nu}
- \frac{1}{\beta}\sum_{i}\log\Bigl(\sum_{\nu}\exp\bigl[-\beta\bigl(\langle\!\langle\mathcal{D}_{i\nu}\rangle\!\rangle + \lambda\tilde{C}_\nu\bigr)\bigr]\Bigr) \qquad (4)
$$
with respect to the variables $p_\nu, y_\nu$. The effective complexity costs are $\tilde{C}_\nu = \partial(p_\nu C_\nu)/\partial p_\nu$. For a derivation of (4) see (Buhmann, Kühnel, 1993b).
The resulting re-estimation equations for the expected cluster probabilities and the expected centroid positions are necessary conditions of $\mathcal{F}_K$ being minimal, i.e.,
$$
p_\alpha = \frac{1}{N}\sum_i \langle M_{i\alpha}\rangle, \qquad (5)
$$
$$
0 = \frac{1}{N}\sum_i\sum_\gamma T_{\gamma\alpha}\,\langle M_{i\gamma}\rangle\,\frac{\partial}{\partial y_\alpha}\mathcal{D}_{i\alpha}(x_i, y_\alpha), \qquad (6)
$$
$$
\langle M_{i\alpha}\rangle = \frac{\exp\bigl[-\beta\bigl(\langle\!\langle\mathcal{D}_{i\alpha}\rangle\!\rangle + \lambda\tilde{C}_\alpha\bigr)\bigr]}{\sum_{\nu=1}^{K}\exp\bigl[-\beta\bigl(\langle\!\langle\mathcal{D}_{i\nu}\rangle\!\rangle + \lambda\tilde{C}_\nu\bigr)\bigr]}. \qquad (7)
$$
The expectation value $\langle M_{i\alpha}\rangle$ of the assignment variable $M_{i\alpha}$ can be interpreted as a fuzzy membership of data point $x_i$ in cluster $\alpha$. The case of supervised clustering can be treated in an analogous fashion (Buhmann, Kühnel, 1993a), which gives rise to the third layer in the neural network implementation (see Fig. 1). The global minimum of the free energy (4) with respect to $p_\alpha, y_\alpha$ determines the maximum entropy solution of the cost function (1). Note that the optimization problem (1) over a $K^N$ state space has been reduced to a $K(d+1)$-dimensional minimization of the free energy $\mathcal{F}_K$ (4). To find the optimal parameters $p_\alpha, y_\alpha$ and the number of clusters $K$ which minimize the free energy, we start with one cluster located at the centroid of the data distribution, split that cluster and re-estimate $p_\alpha, y_\alpha$ using equations (5, 6). The new configuration is accepted as an improved solution if the free energy (4) has been decreased. This splitting and re-estimation loop is continued until we fail to find a new configuration with lower free energy. The temperature determines the fuzziness of a clustering solution, whereas the complexity term penalizes excessively many clusters.
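The inner alternation of (5)-(7) can be summarized in the following sketch (our own illustration, not the authors' code), assuming the quadratic distortion, no index transitions ($T_{\gamma\alpha} = \delta_{\gamma\alpha}$) and the entropy complexity, for which $\tilde{C}_\nu = -\log p_\nu - 1$:

```python
import numpy as np

# Sketch: fixed-K re-estimation loop implementing (5)-(7) with quadratic
# distortion, T = identity and effective complexity -log(p) - 1.
def central_clustering(X, K, beta, lam, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    Y = X[rng.choice(N, K, replace=False)].copy()      # initial centers
    p = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        dist = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
        C_eff = -np.log(np.maximum(p, 1e-12)) - 1.0    # effective costs
        logits = -beta * (dist + lam * C_eff)
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        M = np.exp(logits)
        M /= M.sum(axis=1, keepdims=True)              # eq. (7)
        p = M.mean(axis=0)                             # eq. (5)
        w = np.maximum(M.sum(axis=0), 1e-12)
        Y = (M.T @ X) / w[:, None]                     # centroid condition (6)
    return Y, p, M
```

The outer splitting loop would call this routine for $K$ and $K+1$ clusters and keep the larger model only if the free energy (4) decreases.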
3 Meanfield Approximation for Pairwise Clustering
The maximum entropy estimation for pairwise clustering constitutes a much harder problem than the calculation of the free energy for central clustering. Analytical expressions for the Gibbs distributions are not known except for the quadratic distance measure $D_{ik} = (x_i - x_k)^2$. Therefore, we approximate the free energy by a variational principle commonly referred to as meanfield approximation. Given the cost function (2) we derive a lower bound to the free energy by a system of noninteracting assignment variables. The approximative cost function with the variational parameters $\mathcal{E}_{i\nu}$ is
$$
\mathcal{E}^{0}_K = \sum_{\nu=1}^{K}\sum_{i=1}^{N} M_{i\nu}\,\mathcal{E}_{i\nu}. \qquad (8)
$$
The original cost function for pairwise clustering can be written as $\mathcal{E}^{pc}_K = \mathcal{E}^{0}_K + \mathcal{V}$ with a (small) perturbation term $\mathcal{V} = \mathcal{E}^{pc}_K - \mathcal{E}^{0}_K$ due to cluster interactions. The partition function
$$
Z = \sum_{\{M_{i\nu}\}}\exp\bigl(-\beta\,\mathcal{E}^{pc}_K\bigr)
= \sum_{\{M_{i\nu}\}}\exp\bigl(-\beta\,\mathcal{E}^{0}_K\bigr)\exp\bigl(-\beta\,\mathcal{V}\bigr)
= Z_0\,\bigl\langle\exp(-\beta\,\mathcal{V})\bigr\rangle_0
\;\ge\; Z_0\,\exp\bigl(-\beta\,\langle\mathcal{V}\rangle_0\bigr) \qquad (9)
$$
is bounded from below if terms of the order $O\bigl(\langle(\mathcal{V} - \langle\mathcal{V}\rangle_0)^3\rangle_0\bigr)$ and higher are negligible compared to the quadratic term. Here $Z_0 = \sum_{\{M_{i\nu}\}}\exp(-\beta\,\mathcal{E}^{0}_K)$, and the angular brackets $\langle\cdot\rangle_0$ denote averages over all configurations of the cost function without interactions. The averaged perturbation term $\langle\mathcal{V}\rangle_0$ amounts to
$$
\langle\mathcal{V}\rangle_0 = \sum_{\nu}\sum_{i,k}\langle M_{i\nu}\rangle\langle M_{k\nu}\rangle\,\frac{1}{2 p_\nu N}\,D_{ik}
+ \lambda\sum_{\nu}\sum_{i}\langle M_{i\nu}\rangle\,C_\nu
- \sum_{\nu}\sum_{i}\langle M_{i\nu}\rangle\,\mathcal{E}_{i\nu}, \qquad (10)
$$
with $\langle M_{i\alpha}\rangle$ being the averaged assignment variables
$$
\langle M_{i\alpha}\rangle = \frac{\exp(-\beta\,\mathcal{E}_{i\alpha})}{\sum_{\nu}\exp(-\beta\,\mathcal{E}_{i\nu})}. \qquad (11)
$$
The meanfield approximation with the cost function (8) yields a lower bound to the partition function $Z$ of the original pairwise clustering problem. Therefore, we vary the parameters $\mathcal{E}_{i\alpha}$ to maximize the quantity $\ln Z_0 - \beta\langle\mathcal{V}\rangle_0$, which produces the best lower bound of $Z$ based on an interaction-free cost function. Variation of $\mathcal{E}_{i\alpha}$ leads to the conditions
$$
\mathcal{E}_{i\alpha} = \mathcal{E}^{*}_{i\alpha} \qquad \forall\, i\in\{1,\dots,N\},\ \alpha\in\{1,\dots,K\}, \qquad (12)
$$
with $\mathcal{E}^{*}_{i\alpha}$ being defined in (13). For a given distance matrix $D_{ik}$ the transcendental equations (11, 12) have to be solved simultaneously.
So far the $\mathcal{E}_{i\alpha}$ have been treated as independent variation parameters. An important problem, which is usually discussed in the context of Multidimensional Scaling, is to find an embedding for the data set in a Euclidian space and to cluster the embedded data. The variational framework can be applied to this problem if we consider the parameters $\mathcal{E}_{i\alpha}$ as functions of data coordinates and prototype coordinates, $\mathcal{E}_{i\alpha} = \mathcal{D}_{i\alpha}(x_i, y_\alpha)$, e.g. with a quadratic distortion measure $\mathcal{D}_{i\alpha}(x_i, y_\alpha) = \|x_i - y_\alpha\|^2$. The variables $x_i, y_\alpha \in \mathbb{R}^d$ are the variational parameters which have to be determined by maximizing $\ln Z_0 - \beta\langle\mathcal{V}\rangle_0$.
the following conditions for the data coordinates
After further algebraic manipulations we receive the explicit expression for the data points
KiXi
=~L
(Miv) (iIYvIl 2
v
-
ci*v) (Yv - L (MiJ.t)Y J.t),
J.t
(15)
with the covariance matrix Ki = ((yyT)i - (Y)i(Y);), (Y)i = L.v(Miv)Yv. Let us
assume that the matrix Ki is non-singular which imposes the condition K > d and the
cluster centers {y al a = 1, ... , K} being in general position. For K < d the equations
Cia = cta + Ci are exactly solvable and embedding in dimensions larger than K produces
non-unique solutions without improving the lower bound in (9).
Varying $\ln Z_0 - \beta\langle\mathcal{V}\rangle_0$ with respect to $y_\alpha$ yields a second set of stationarity conditions
$$
\sum_j \langle M_{j\alpha}\rangle\bigl(1 - \langle M_{j\alpha}\rangle\bigr)\bigl(\mathcal{E}_{j\alpha} - \mathcal{E}^{*}_{j\alpha}\bigr)\,(x_j - y_\alpha) = 0, \qquad \forall\,\alpha\in\{1,\dots,K\}. \qquad (16)
$$
The weighting factors in (16), however, decay exponentially fast with the inverse temperature, i.e., $\langle M_{j\alpha}\rangle(1 - \langle M_{j\alpha}\rangle) \sim O(\beta\exp[-\beta c])$, $c > 0$. This implies that the optimal solution for the data coordinates displays only a very weak dependence on the special choice of the prototypes in the low-temperature regime. Fixing the parameters $y_\alpha$ and solving the transcendental equations (14, 15) for $x_i$, the solution will be very close to the optimal approximation. It is thus possible to choose the prototypes as the cluster centroids $y_\alpha = \frac{1}{p_\alpha N}\sum_i \langle M_{i\alpha}\rangle\,x_i$ and, thereby, to solve Eq. (15) in a self-consistent fashion.
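The coordinate update (15) itself is a small linear solve per data point; the following sketch (our own illustration) shows one such pass, to be alternated with the centroid rule and the assignment equation (11):

```python
import numpy as np

# Sketch: one pass of the coordinate update (15).  M holds <M_{i,nu}>,
# Y the prototypes, E_star the (N x K) matrix of E*_{i,nu}.
def update_coordinates(M, Y, E_star):
    N, K = M.shape
    d = Y.shape[1]
    X = np.zeros((N, d))
    for i in range(N):
        y_bar = M[i] @ Y                                     # <y>_i
        K_i = (Y.T * M[i]) @ Y - np.outer(y_bar, y_bar)      # covariance K_i
        coef = M[i] * ((Y ** 2).sum(axis=1) - E_star[i])
        rhs = 0.5 * (coef @ (Y - y_bar))
        X[i] = np.linalg.solve(K_i + 1e-8 * np.eye(d), rhs)  # regularized solve
    return X
```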
[Figure 2: A data distribution (4000 data points) (a), generated by four normally distributed sources, is clustered with the complexity measure $C_\alpha = -\log p_\alpha$ and $\lambda = 0.4$ (b). The plus signs (+) denote the centers of the Gaussians and stars (*) denote cluster centers. Figure (c) shows a topology preserving clustering solution with complexity $C_\alpha = 1/p_\alpha$ and external noise ($\eta = 0.05$).]
If the prototype variables depend on the data coordinates, the derivatives $\partial y_\alpha/\partial x_i$ will not vanish in general and the condition (14) becomes more complicated. Regardless of this complication, the resulting algorithm to estimate data coordinates $x_i$ interleaves the clustering process and the optimization of the embedding in a Euclidian space. The artificial separation of multidimensional scaling from data clustering has been avoided. Data points are embedded and clustered simultaneously. Furthermore, we have derived a maximum entropy approximation which is most robust with respect to changes in the average costs $\langle\mathcal{E}^{pc}_K\rangle$.
4 Clustering Results
Non-topological ($T_{\alpha\gamma} = \delta_{\alpha\gamma}$) clustering results at zero temperature for the logarithmic complexity measure ($C_\alpha = -\log p_\alpha$) are shown in Fig. 2b. In the limit of very small complexity costs the best clustering solution densely covers the data distribution. The specific choice of logarithmic complexity costs causes an almost homogeneous density of cluster centers, a phenomenon which is known from studies of asymptotic codebook densities and which is explained by the vanishing average complexity costs $\langle C_\alpha\rangle = -p_\alpha\log p_\alpha$ of very sparsely occupied clusters (for references see (Buhmann, Kühnel, 1993b)).
Figure 2c shows a clustering configuration assuming a one-dimensional topology in index space with nearest neighbor transitions. The short links between neighboring nodes of the neural chain indicate that the distortions due to cluster index transitions have also been optimized. Note that complexity optimized clustering determines the length of the chain or, for a more general noise distribution, an optimal size of the cluster set. This stopping criterion for adding new cluster nodes generalizes self-organizing feature maps (Kohonen, 1984) and removes arbitrariness in the design of topological mappings. Furthermore, our algorithm is derived from an energy minimization principle in contrast to self-organizing feature maps which "cannot be derived as a stochastic gradient on any energy function" (Erwin et al., 1992).
The complexity optimized clustering scheme has been tested on the real-world task of
[Figure 3: Quantization of a 128x128, 8-bit, gray-level image. (a) Original picture. (b) Image reconstruction from wavelet coefficients quantized with entropic complexity. (c) Reconstruction from wavelet coefficients quantized by K-means clustering. (d,e) Absolute values of reconstruction errors in the images (b,c). Black is normalized in (d,e) to a deviation of 92 gray values.]
image compression (Buhmann, Kühnel, 1993b). Entropy optimized clustering of wavelet decomposed images has reduced the reconstruction error of the compressed images by up to 30 percent. Images of a compression and reconstruction experiment are shown in Fig. 3. The compression ratio is 24.5 for a 128x128 image. According to our efficiency criterion, entropy optimized compression is 36.8% more efficient than K-means clustering for that compression factor. The peak SNR values for (b,c) are 30.1 and 27.1, respectively. The considerably higher error near edges in the reconstruction based on K-means clustering (e) demonstrates that entropy optimized clustering of wavelet coefficients not only results in higher compression ratios but, even more importantly, preserves psychophysically important image features like edges more faithfully than conventional compression schemes.
5 Conclusion
Complexity optimized clustering is a maximum entropy approach to central and pairwise
data clustering which determines the optimal number of clusters as a compromise between
distortion errors and the complexity of a cluster set. The complexity term turns out to be as
important for the design of a cluster set as the distortion measure. Complexity optimized
clustering maps onto a winner-take-all network which suggests hardware implementations
in analog VLSI (Andreou et al., 1991). Topology preserving clustering provides us with a
cost function based approach to limit the size of self-organizing maps.
The maximum entropy estimation for pairwise clustering cannot be solved analytically
but has to be approximated by a meanfield approach. This meanfield approximation of
the pairwise clustering costs with quadratic Euclidian distances establishes a connection
between multidimensional scaling and clustering. Contrary to the usual strategy which
embeds data according to their dissimilarities in a Euclidian space and, in a separate second
step, clusters the embedded data, our approach finds the Euclidian embedding and the data
clusters simultaneously and in a self-consistent fashion.
The proposed framework for data clustering unifies traditional clustering techniques like K-means clustering, entropy-constrained clustering or fuzzy clustering with neural network approaches such as topological vector quantizers. The network size and the cluster parameters are determined by a problem-adapted complexity function which removes considerable arbitrariness present in other non-parametric clustering methods.
Acknowledgement: JB thanks H. Kühnel for insightful discussions. This work was
supported by the Ministry of Science and Research of the state Nordrhein-Westfalen.
References
Amit, D. (1989). Modelling Brain Function. Cambridge: Cambridge University Press.
Andreou, A. G., Boahen, K. A., Pouliquen, P. O., Pavasovic, A., Jenkins, R. E., Strohbehn, K. (1991). Current Mode Subthreshold MOS Circuits for Analog VLSI Neural Systems. IEEE Transactions on Neural Networks, 2, 205-213.
Buhmann, J., Kühnel, H. (1993a). Complexity Optimized Data Clustering by Competitive Neural Networks. Neural Computation, 5, 75-88.
Buhmann, J., Kühnel, H. (1993b). Vector Quantization with Complexity Costs. IEEE Transactions on Information Theory, 39(4), 1133-1145.
Chou, P. A., Lookabaugh, T., Gray, R. M. (1989). Entropy-Constrained Vector Quantization.
IEEE Transactions on Acoustics, Speech and Signal Processing, 37, 31-42.
Erwin, W., Obermayer, K., Schulten, K. (1992). Self-organizing Maps: Ordering, Convergence Properties, and Energy Functions. Biological Cybernetics, 67, 47-55.
Gray, R. M. (1984). Vector Quantization. IEEE Acoustics, Speech and Signal Processing
Magazine, April, 4-29.
Kohonen, T. (1984). Self-organization and Associative Memory. Berlin: Springer.
Linde, Y., Buzo, A., Gray, R. M. (1980). An algorithm for vector quantizer design. IEEE
Transactions on Communications COM, 28, 84-95.
Ritter, H., Martinetz, T., Schulten, K. (1992). Neural Computation and Self-organizing
Maps. New York: Addison Wesley.
Rose, K., Gurewitz, E., Fox, G. (1990). Statistical Mechanics and Phase Transitions in
Clustering. Physical Review Letters, 65(8), 945-948.
Tikochinsky, Y., Tishby, N. Z., Levine, R. D. (1984). Alternative Approach to Maximum-Entropy Inference. Physical Review A, 30, 2638-2644.
Learning Non-Gaussian Multi-Index Model via
Second-Order Stein's Method
Zhuoran Yang†  Krishna Balasubramanian†  Zhaoran Wang‡  Han Liu‡
†Princeton University, email: {zy6, kb18}@princeton.edu
‡Tencent AI Lab & Northwestern University, email: {zhaoranwang, hanliu.cmu}@gmail.com
Abstract
We consider estimating the parametric components of semiparametric multi-index
models in high dimensions. To bypass the requirements of Gaussianity or elliptical
symmetry of covariates in existing methods, we propose to leverage a second-order
Stein?s method with score function-based corrections. We prove that our estimator
achieves a near-optimal statistical rate of convergence even when the score function
or the response variable is heavy-tailed. To establish the key concentration results,
we develop a data-driven truncation argument that may be of independent interest.
We supplement our theoretical findings with simulations.
1 Introduction
We consider the semiparametric index model that relates the response $Y \in \mathbb{R}$ and the covariate $X \in \mathbb{R}^d$ as $Y = f(\langle \beta_1^*, X\rangle, \dots, \langle \beta_k^*, X\rangle) + \epsilon$, where each coefficient $\beta_\ell^* \in \mathbb{R}^d$ ($\ell \in [k]$) is $s^*$-sparse and the noise term $\epsilon$ is zero-mean. Such a model is known as the sparse multiple index model (MIM). Given $n$ i.i.d. observations $\{X_i, Y_i\}_{i=1}^n$ of the above model with possibly $d \gg n$, we aim to estimate the parametric component $\{\beta_\ell^*\}_{\ell\in[k]}$ when the nonparametric component $f$ is unknown. More importantly, we do not impose the assumption that $X$ is Gaussian, which is commonly made in the literature. Special cases of our model include phase retrieval, for which $k = 1$, and dimensionality reduction, for which $k \ge 1$. Motivated by these applications, we make a distinction between the cases of $k = 1$, which is also known as the single index model (SIM), and $k > 1$ in the rest of the paper.
Estimating the parametric component $\{\beta_\ell^*\}_{\ell\in[k]}$ without knowing the exact form of the link function $f$ naturally arises in various applications. For example, in one-bit compressed sensing [3, 39] and sparse generalized linear models [36], we are interested in recovering the underlying signal vector based on nonlinear measurements. In sufficient dimensionality reduction, where $k$ is typically a fixed number greater than one but much less than $d$, we aim to estimate the projection onto the subspace spanned by $\{\beta_\ell^*\}_{\ell\in[k]}$ without knowing $f$. Furthermore, in deep neural networks, which are cascades of MIMs, the nonparametric component corresponds to the activation function, which is prespecified, and the goal is to estimate the linear parametric component, which is used for prediction at the test stage. Hence, it is crucial to develop estimators for the parametric component with both statistical accuracy and computational efficiency for a broad class of possibly unknown link functions.
Challenging aspects of index models: Several subtle issues arise from the optimal estimation of SIM and MIM. Specifically, most existing results depend crucially on restrictive assumptions on $X$ and $f$, and fail to hold when those assumptions are relaxed. Such issues arise even in the low-dimensional setting with $n \gg d$. Let us consider, for example, the case of $k = 1$ with a known link function $f(z) = z^2$. This corresponds to phase retrieval, which is a challenging inverse problem that has regained interest in the last few years along with the success of compressed sensing. A straightforward way to estimate $\beta^*$ is via nonlinear least squares regression [17], which is a nonconvex optimization problem. [6] propose an estimator based on convex relaxation.
[Figure 1: Histogram of the score function based on 10000 independent realizations of the Gamma distribution with shape parameter 5 and scale parameter 0.2. The dark solid histogram concentrated around zero corresponds to the Gamma distribution, and the transparent histogram corresponds to the distribution of the score function of the same Gamma distribution.]
Although their estimator is optimal
when X is sub-Gaussian, it is not agnostic to the link function, i.e., the same result does not hold
if the link function is not quadratic. Direct optimization of the nonconvex phase retrieval problem
is considered by [5] and [30], which propose statistically optimal estimators based on iterative
algorithms. However, they rely on the assumption that X is Gaussian. A careful look at their proofs
shows that extending them to a broader class of distributions is significantly more challenging; for
example, they require sharp concentration inequalities for polynomials of degree four of X, which
leads to a suboptimal statistical rate when X is sub-Gaussian. Furthermore, their results are also not
agnostic to the link function. Similar observations could be made for both convex [21] and nonconvex
estimators [4] for sparse phase retrieval in high dimensions.
In addition, a surprising result for SIM is established in [28]. They show that when X is Gaussian,
even when the link function is unknown, one could estimate $\beta^*$ at the optimal statistical rate with
Lasso. Unfortunately, their assumptions on the link function are rather restrictive, which rule out
several interesting models including phase retrieval. Furthermore, none of the above approaches are
applicable to MIM. A line of work pioneered by Ker-Chau Li [18-20] focuses on the estimation of
MIM in low dimensions. We will provide a discussion about this line of work in the related work
section, but it again requires restrictive assumptions on either the link function or the distribution of X.
For example, in most cases X is assumed to be elliptically symmetric, which limits the applicability.
To summarize, there are several subtleties that arise from the interplay between the assumptions on X
and f in SIM and MIM. An interesting question is whether it is possible to estimate the parametric
component in SIM and MIM with milder assumptions on both X and f in the high-dimensional
setting. In this work, we provide a partial answer to this question. We construct estimators that work
for a broad class of link functions, including the quadratic link function in phase retrieval, and for a
large family of distributions of X, which are assumed to be known a priori. We particularly focus on
the case where X follows a non-Gaussian distribution, which is not necessarily elliptically symmetric
or sub-Gaussian, therefore making our method applicable to various situations that are not feasible
previously. Our estimators are based on a second-order variant of Stein's identity for non-Gaussian
random variables, which utilizes the score function of the distribution of X. As we show in Figure 1,
even when the distribution of X is light-tailed, the distribution of the score function of X could be
arbitrarily heavy-tailed. In order to develop consistent estimators within this context, we threshold
the score function in a data-driven fashion. This enables us to obtain tight concentration bounds
that lead to near-optimal statistical rates of convergence. Moreover, our results also shed light on
two related problems. First, we provide an alternative interpretation of the initialization in [5] for
phase retrieval. Second, our estimators are constructed based on a sparsity constrained semidefinite
programming (SDP) formulation, which is related to a similar formulation of the sparse principal
component analysis (PCA) problem (see Section 4 for a detailed discussion). A consequence of
our results for SIM and MIM is a near-optimal statistical rate of convergence for sparse PCA with
heavy-tailed data in the moderate sample size regime. In summary, our contributions are as follows:
• We construct estimators for the parametric component of high-dimensional SIM and MIM for a class of unknown link functions under the assumption that the covariate distribution is non-Gaussian but known a priori.
• We establish near-optimal statistical rates for our estimators. Our results complement existing ones in the literature and hold in several cases that are previously not feasible.
• We provide numerical simulations that confirm our theoretical results.
Related work: There is a significant body of work on SIMs in the low-dimensional setting. We do not
attempt to cover all of them as we concentrate on the high dimensional setting. The success of Lasso
and related regression estimators in high-dimensions enables the exploration of high-dimensional
SIMs, although this is still very much work in progress. As mentioned previously, [25, 26, 28] show
that Lasso and phase retrieval estimators could also work for SIM in high dimensions assuming the
covariate is Gaussian and the link function satisfies certain properties. Very recently, [10] relax the
Gaussian assumption and show that a modified Lasso-type estimator works for elliptically symmetric
distributions. For the case of monotone link function, [38] analyze a nonconvex least squares estimator
under the assumption that the covariate is sub-Gaussian. However, the success of their estimator hinges
on the knowledge of the link function. Furthermore, [15, 23, 31, 32, 40] analyze the sliced inverse
regression estimator in the high-dimensional setting, focusing primarily on support recovery and
consistency properties. The Gaussian assumption on the covariate restricts them from being applicable
to various real-world applications involving heavy-tailed or non-symmetric covariate, for example,
problems in economics [9, 12]. Furthermore, several results are established on a case-by-case basis
for specific link functions. In specific, [1, 3, 8, 39] consider one-bit compressed sensing and matrix
completion respectively, where the link function is assumed to be the sign function. Also, [4] propose
nonconvex estimators for phase retrieval in high dimensions, where the link function is quadratic. This
line of work, except [1], makes Gaussian assumptions on the covariate and is specialized for particular
link functions. The non-asymptotic result obtained in [1] is under sub-Gaussian assumptions, but the
estimator therein lacks asymptotic consistency.
For MIMs, relatively less work studies the high-dimensional setting. In the low-dimensional setting, a
line of work on the estimation of MIM is proposed by Ker-Chau Li, including inverse regression [18],
principal Hessian directions [19], and regression under link violation [20]. The proposed estimators
are applicable for a class of unknown link functions under the assumption that the covariate follows
Gaussian or symmetric elliptical distributions. Such an assumption is restrictive as often times the
covariate is heavy-tailed or skewed [9, 12]. Furthermore, they concentrate only on the low-dimensional
setting and establish asymptotic results. The estimation of high-dimensional MIM under the subspace
sparsity assumption is previously considered in [7, 32] but also under rather restrictive distribution
assumptions on the covariate.
Notation: We employ $[n]$ to denote the set $\{1,\dots,n\}$. For a vector $v\in\mathbb{R}^d$, we denote by $\|v\|_p$ the $\ell_p$-norm of $v$ for any $p\ge 1$. In addition, we define the support of $v\in\mathbb{R}^d$ as $\mathrm{supp}(v) = \{j\in[d] : v_j \ne 0\}$. We denote by $\lambda_{\min}(A)$ the minimum eigenvalue of matrix $A$. Moreover, we denote the elementwise $\ell_1$-norm, elementwise $\ell_\infty$-norm, operator norm, and Frobenius norm of a matrix $A\in\mathbb{R}^{d_1\times d_2}$ by $\|\cdot\|_1$, $\|\cdot\|_\infty$, $\|\cdot\|_{\mathrm{op}}$, and $\|\cdot\|_F$, correspondingly. We denote by $\mathrm{vec}(A)$ the vectorization of matrix $A$, which is a vector in $\mathbb{R}^{d_1 d_2}$. For two matrices $A, B\in\mathbb{R}^{d_1\times d_2}$, we denote the trace inner product by $\langle A, B\rangle = \mathrm{Trace}(A^\top B)$. Also note that it could be viewed as the vector inner product between $\mathrm{vec}(A)$ and $\mathrm{vec}(B)$. For a univariate function $g:\mathbb{R}\to\mathbb{R}$, we denote by $g(v)$ and $g(A)$ the output of applying $g$ to each element of vector $v$ and matrix $A$, respectively. Finally, for a random variable $X\in\mathbb{R}$ with density $p$, we use $p^{\otimes d}:\mathbb{R}^d\to\mathbb{R}$ to denote the joint density of $X_1,\dots,X_d$, which are $d$ identical copies of $X$.
2 Models and Assumptions
As mentioned previously, we consider the cases of k = 1 (SIM) and k > 1 (MIM) separately. We
first discuss the motivation for our estimators, which highlights the assumptions on the link function
as well. Recall that our estimators are based on the second-order Stein's identity. To begin with, we present the first-order Stein's identity, which motivates Lasso-type estimators for SIMs [25, 28].
Proposition 2.1 (First-Order Stein's Identity [29]). Let $X\in\mathbb{R}^d$ be a real-valued random vector with density $p$. We assume that $p:\mathbb{R}^d\to\mathbb{R}$ is differentiable. In addition, let $g:\mathbb{R}^d\to\mathbb{R}$ be a continuous function such that $\mathbb{E}[\nabla g(X)]$ exists. Then it holds that
$$
\mathbb{E}\bigl[g(X)\cdot S(X)\bigr] = \mathbb{E}\bigl[\nabla g(X)\bigr],
$$
where $S(x) = -\nabla p(x)/p(x)$ is the score function of $p$.
One could apply the above Stein's identity to SIMs to obtain an estimator of $\beta^*$. To see this, note that when $X\sim N(0, I_d)$ we have $S(x) = x$ for $x\in\mathbb{R}^d$. In this case, since $\mathbb{E}(\epsilon\cdot X) = 0$, we have
$$
\mathbb{E}(Y\cdot X) = \mathbb{E}\bigl[f(\langle X,\beta^*\rangle)\cdot X\bigr] = \mathbb{E}\bigl[f'(\langle X,\beta^*\rangle)\bigr]\cdot\beta^*.
$$
Thus, one could estimate $\beta^*$ by estimating $\mathbb{E}(Y\cdot X)$. This observation leads to the estimator proposed in [25, 28]. However, in order for the estimator to work, it is necessary to assume $\mathbb{E}[f'(\langle X,\beta^*\rangle)] \ne 0$. Such a restriction prevents it from being applicable to some widely used cases of SIM, for example, phase retrieval in which $f$ is the quadratic function. Such a limitation of the first-order Stein's identity motivates us to examine the second-order Stein's identity, which is summarized as follows.
Proposition 2.2 (Second-Order Stein's Identity [13]). We assume the density of $X$ is twice differentiable. We define the second-order score function $T:\mathbb{R}^d\to\mathbb{R}^{d\times d}$ as $T(x) = \nabla^2 p(x)/p(x)$. For any twice differentiable function $g:\mathbb{R}^d\to\mathbb{R}$ such that $\mathbb{E}[\nabla^2 g(X)]$ exists, we have
$$
\mathbb{E}\bigl[g(X)\cdot T(X)\bigr] = \mathbb{E}\bigl[\nabla^2 g(X)\bigr]. \qquad (2.1)
$$
Back to the phase retrieval example, when $X\sim N(0, I_d)$, the second-order score function is $T(x) = xx^\top - I_d$, for $x\in\mathbb{R}^d$. Setting $g(x) = \langle x,\beta^*\rangle^2$ in (2.1), we have
$$
\mathbb{E}\bigl[g(X)\cdot T(X)\bigr] = \mathbb{E}\bigl[\langle X,\beta^*\rangle^2\cdot(XX^\top - I_d)\bigr] = 2\,\beta^*\beta^{*\top}. \qquad (2.2)
$$
Hence for phase retrieval, one could extract $\beta^*\beta^{*\top}$ based on the second-order Stein's identity even in the situation where the first-order Stein's identity fails. In fact, (2.2) is implicitly used in [5] to provide a spectral initialization for the Wirtinger flow algorithm in the case of Gaussian phase retrieval. Here, we establish an alternative justification based on Stein's identity for why such an initialization works.
Motivated by this key observation, we propose to employ the second-order Stein's identity to estimate the parametric component of SIM and MIM with a broad class of unknown link functions as well as non-Gaussian covariates. The precise statistical models we consider are defined as follows.
Definition 2.3 (SIM with Second-Order Link). The response $Y\in\mathbb{R}$ and the covariate $X\in\mathbb{R}^d$ are linked via
$$
Y = f(\langle X,\beta^*\rangle) + \epsilon, \qquad (2.3)
$$
where $f:\mathbb{R}\to\mathbb{R}$ is an unknown function, $\beta^*\in\mathbb{R}^d$ is the parameter of interest, and $\epsilon\in\mathbb{R}$ is the exogenous noise with $\mathbb{E}(\epsilon) = 0$. We assume the entries of $X$ are i.i.d. random variables with density $p_0$ and that $\beta^*$ is $s^*$-sparse, i.e., $\beta^*$ contains only $s^*$ nonzero entries. Moreover, since the norm of $\beta^*$ could be absorbed into $f$, we assume that $\|\beta^*\|_2 = 1$ for identifiability. Finally, we assume that $f$ and $X$ satisfy $\mathbb{E}[f''(\langle X,\beta^*\rangle)] > 0$.
Note that in Definition 2.3, we assume without any loss of generality that $\mathbb{E}[f''(\langle X,\beta^*\rangle)]$ is positive. If $\mathbb{E}[f''(\langle X,\beta^*\rangle)]$ is negative, one could replace $f$ by $-f$ by flipping the sign of $Y$. In other words, we essentially only require that $\mathbb{E}[f''(\langle X,\beta^*\rangle)]$ is nonzero. Intuitively, such a restriction on $f$ implies that the second-order cross-moments contain the information of $\beta^*$. Thus, we name this type of link functions the second-order link. Similarly, we define MIM with second-order link.
Definition 2.4 (MIM with Second-Order Link). The response $Y\in\mathbb{R}$ and the covariate $X\in\mathbb{R}^d$ are linked via
$$
Y = f\bigl(\langle X,\beta^*_1\rangle, \dots, \langle X,\beta^*_k\rangle\bigr) + \epsilon, \qquad (2.4)
$$
where $f:\mathbb{R}^k\to\mathbb{R}$ is an unknown link function, $\{\beta^*_\ell\}_{\ell\in[k]}\subseteq\mathbb{R}^d$ are the parameters of interest, and $\epsilon\in\mathbb{R}$ is the exogenous random noise that satisfies $\mathbb{E}(\epsilon) = 0$. In addition, we assume that the entries of $X$ are i.i.d. random variables with density $p_0$ and that $\{\beta^*_\ell\}_{\ell\in[k]}$ span a $k$-dimensional subspace of $\mathbb{R}^d$. Let $B^* = (\beta^*_1 \dots \beta^*_k)\in\mathbb{R}^{d\times k}$. The model in (2.4) could be reformulated as $Y = f(XB^*) + \epsilon$. By QR-factorization, we could write $B^*$ as $Q^*R^*$, where $Q^*\in\mathbb{R}^{d\times k}$ is an orthonormal matrix and $R^*\in\mathbb{R}^{k\times k}$ is invertible. Since $f$ is unknown, $R^*$ could be absorbed into the link function. Thus, we assume that $B^*$ is orthonormal for identifiability. We further assume that $B^*$ is $s^*$-row sparse, that is, $B^*$ contains only $s^*$ nonzero rows. Note that this definition of row sparsity does not depend on the choice of coordinate system. Finally, we assume that $f$ and $X$ satisfy $\lambda_{\min}\bigl(\mathbb{E}[\nabla^2 f(XB^*)]\bigr) > 0$.
In Definition 2.4, the assumption that $\mathbb{E}[\nabla^2 f(XB^*)]$ is positive definite is a multivariate generalization of the condition that $\mathbb{E}[f''(\langle X,\beta^*\rangle)] > 0$ for SIM in Definition 2.3. It essentially guarantees that estimating the projector of the subspace spanned by $\{\beta^*_\ell\}_{\ell\in[k]}$ is information-theoretically feasible.
3 Estimation Method and Main Results
We now introduce our estimators and establish their statistical rates of convergence. Discussion of the optimality of the established rates and the connection to sparse PCA is deferred to §4. Recall that we focus on the case in which $X$ has i.i.d. entries with density $p_0:\mathbb{R}\to\mathbb{R}$. Hence, the joint density of $X$ is $p(x) = p_0^{\otimes d}(x) = \prod_{j=1}^{d} p_0(x_j)$. For notational simplicity, let $s_0(u) = -p_0'(u)/p_0(u)$. Then the first-order score function associated with $p$ is $S(x) = s_0(x)$, applied entrywise. Equivalently, the $j$-th entry of the first-order score function associated with $p$ is given by $[S(x)]_j = s_0(x_j)$. Moreover, the second-order score function is
$$
T(x) = S(x)S(x)^\top - \nabla S(x) = S(x)S(x)^\top - \mathrm{diag}\bigl(s_0'(x)\bigr). \qquad (3.1)
$$
Before we present our estimator, we introduce the assumption on $Y$ and $s_0(\cdot)$.
Assumption 3.1 (Bounded Moment). We assume there exists a constant $M$ such that $\mathbb{E}_{p_0}[s_0(U)^6]\le M$ and $\mathbb{E}(Y^6)\le M$. We denote $\sigma_0^2 = \mathbb{E}_{p_0}[s_0(U)^2] = \mathrm{Var}_{p_0}[s_0(U)]$.
The assumption that $\mathbb{E}_{p_0}[s_0(U)^6]\le M$ allows for a broad family of distributions including Gaussian and more heavy-tailed random variables. Furthermore, we do not require the covariate to be elliptically symmetric as is commonly required in existing methods, which enables our estimator to be applicable to skewed covariates. As for the assumption that $\mathbb{E}(Y^6)\le M$, note that in the case of SIM we have
$$
\mathbb{E}(Y^6) \le C\,\bigl[\mathbb{E}(\epsilon^6) + \mathbb{E}\,f^6(\langle X,\beta^*\rangle)\bigr].
$$
Thus this assumption is satisfied as long as both $\epsilon$ and $f(\langle X,\beta^*\rangle)$ have bounded sixth moments. This is a mild assumption that allows for a heavy-tailed response. Now we are ready to present our estimator for the sparse SIM in Definition 2.3. Recall that by Proposition 2.2 we have
$$
\mathbb{E}\bigl[Y\cdot T(X)\bigr] = C_0\,\beta^*\beta^{*\top}, \qquad (3.2)
$$
where $C_0 = 2\,\mathbb{E}[f''(\langle X,\beta^*\rangle)] > 0$ as in Definition 2.3. Hence, one way to estimate $\beta^*$ is to obtain the leading eigenvector of the sample version of $\mathbb{E}[Y\cdot T(X)]$. Moreover, as $\beta^*$ is sparse, we formulate our estimator as a semidefinite program
$$
\max_{W}\ \bigl\langle W, \widetilde{\Lambda}\bigr\rangle - \lambda\,\|W\|_{1}\quad \text{subject to}\quad 0 \preceq W \preceq I_d,\ \operatorname{Trace}(W) = 1. \qquad (3.3)
$$
Here $\widetilde{\Lambda}$ is an estimator of $\Lambda^* = \mathbb{E}[Y\cdot T(X)]$, which is defined as follows. Note that both the score $T(X)$ and the response variable $Y$ could be heavy-tailed. In order to obtain near-optimal estimates in the finite-sample setting, we apply a truncation technique to handle the heavy tails. Specifically, for a positive threshold parameter $\tau\in\mathbb{R}$, we define the truncated random variables by
$$
\widetilde{Y}_i = \operatorname{sign}(Y_i)\cdot\min\{|Y_i|,\tau\} \quad\text{and}\quad \bigl[\widetilde{T}(X_i)\bigr]_{jk} = \operatorname{sign}\bigl(T_{jk}(X_i)\bigr)\cdot\min\bigl\{|T_{jk}(X_i)|,\tau^2\bigr\}. \qquad (3.4)
$$
Then we define the robust estimator of $\Lambda^*$ as
$$
\widetilde{\Lambda} = \frac{1}{n}\sum_{i=1}^{n}\widetilde{Y}_i\cdot\widetilde{T}(X_i). \qquad (3.5)
$$
We denote by $\widehat{W}$ the solution of the convex optimization problem in (3.3), where $\lambda$ is a regularization parameter to be specified later. The final estimator $\widehat{\beta}$ is defined as the leading eigenvector of $\widehat{W}$. The following theorem quantifies the statistical rate of convergence of the proposed estimator.
Theorem 3.2. Let $\lambda = 10\sqrt{M\log d/n}$ in (3.3) and $\tau = (1.5Mn/\log d)^{1/6}$ in (3.4). Then under Assumption 3.1, we have $\|\widehat{\beta} - \beta^*\|_2 \le 4\sqrt{2}/C_0\cdot s^*\lambda$ with probability at least $1 - d^{-2}$.
Now we introduce the estimator of $B^*$ for the sparse MIM in Definition 2.4. Proposition 2.2 implies that $\mathbb{E}[Y\cdot T(X)] = B^* D_0 B^{*\top}$, where $D_0 = \mathbb{E}[\nabla^2 f(XB^*)]$ is positive definite. Similar to (3.3), we recover the column space of $B^*$ by solving
$$
\max_{W}\ \bigl\langle W, \widetilde{\Lambda}\bigr\rangle - \lambda\,\|W\|_{1}\quad \text{subject to}\quad 0 \preceq W \preceq I_d,\ \operatorname{Trace}(W) = k, \qquad (3.6)
$$
where $\widetilde{\Lambda}$ is defined in (3.5), $\lambda > 0$ is a regularization parameter, and $k$ is the number of indices, which is assumed to be known. Let $\widehat{W}$ be the solution of (3.6), and let the final estimator $\widehat{B}$ contain the top $k$ leading eigenvectors of $\widehat{W}$ as columns. For such an estimator, we have the following theorem quantifying its statistical rate of convergence. Let $\rho_0 = \lambda_{\min}\bigl(\mathbb{E}[\nabla^2 f(XB^*)]\bigr)$.
Theorem 3.3. Let $\lambda = 10\sqrt{M\log d/n}$ in (3.6) and $\tau = (1.5Mn/\log d)^{1/6}$ in (3.4). Then under Assumption 3.1, with probability at least $1 - d^{-2}$, we have
$$
\inf_{O\in\mathcal{O}_k}\bigl\|\widehat{B} - B^*O\bigr\|_F \le 4\sqrt{2}/\rho_0\cdot s^*\lambda,
$$
where $\mathcal{O}_k\subseteq\mathbb{R}^{k\times k}$ is the set of all possible rotation matrices.
Minimax lower bounds for subspace estimation for MIM are established in [22]. For $k$ being fixed, Theorem 3.3 is near-optimal from a minimax point of view. The difference between the optimal rate and the above theorem is roughly a factor of $\sqrt{s^*}$. We will discuss more about this gap in Section 4. The proofs of Theorem 3.2 and Theorem 3.3 are provided in the supplementary material.
Remark 3.4. Recall that our discussion above is under the assumption that the entries of X are i.i.d.,
which could be relaxed to the case of weak dependence between the covariates without any significant
loss in the statistical rates presented above. We do not focus on this extension in this paper as we aim
to clearly convey the main message of the paper in a simpler setting.
4 Optimality and Connection to Sparse PCA
Now we discuss the optimality of the results presented in §3. Throughout the discussion we assume that $k$ is fixed and does not increase with $d$ and $n$. The estimators for SIM in (3.3) and MIM in (3.6) are closely related to the semidefinite program-based estimator for sparse PCA [33]. Specifically, let $X\in\mathbb{R}^d$ be a random vector with $\mathbb{E}(X) = 0$ and covariance $\Sigma = \mathbb{E}(XX^\top)$, which is symmetric and positive definite. The goal of sparse PCA is to estimate the projector onto the subspace spanned by the top $k$ eigenvectors, namely $\{v^*_\ell\}_{\ell\in[k]}$, of $\Sigma$, under the subspace sparsity assumption as specified in Definition 2.4. An estimator based on semidefinite programming is introduced in [33, 34], which is based on solving
$$
\max_{W}\ \bigl\langle W, \widehat{\Sigma}\bigr\rangle - \lambda\,\|W\|_{1}\quad \text{subject to}\quad 0 \preceq W \preceq I_d,\ \operatorname{Trace}(W) = k. \qquad (4.1)
$$
Here $\widehat{\Sigma} = n^{-1}\sum_{i=1}^{n} X_iX_i^\top$ is the sample covariance matrix given $n$ i.i.d. observations $\{X_i\}_{i=1}^n$ of $X$. Note that the main difference between the SIM estimator and the sparse PCA estimator is the use of $\widetilde{\Lambda}$ in place of $\widehat{\Sigma}$. It is known that the sparse PCA problem exhibits an interesting statistical-computational tradeoff [16, 34, 35], which naturally appears in the context of SIM as well. In particular, while the optimal statistical rate for sparse PCA is $O(\sqrt{s^*\log d/n})$, the SDP-based estimator could only attain $O(s^*\sqrt{\log d/n})$ under the assumption that $X$ is light-tailed. It is known that when $n = \Omega(s^{*2}\log d)$, one could obtain the optimal statistical rate of $O(\sqrt{s^*\log d/n})$ by a nonconvex method [37]. However, their results rely on the sharp concentration of $\widehat{\Sigma}$ to $\Sigma$ in the restricted operator norm:
$$
\bigl\|\widehat{\Sigma} - \Sigma\bigr\|_{\mathrm{op},s} = \sup\bigl\{ w^\top(\widehat{\Sigma} - \Sigma)w : \|w\|_2 = 1,\ \|w\|_0 \le s \bigr\} = O\bigl(\sqrt{s\log d/n}\bigr). \qquad (4.2)
$$
When $X$ has heavy-tailed entries, for example, with bounded fourth moment, it is highly unlikely that (4.2) holds.
Heavy-tailed sparse PCA: Recall that our estimators leverage a data-driven truncation argument to handle heavy-tailed distributions. Owing to the close relationship between our SIM/MIM estimators and the sparse PCA estimator, it is natural to ask whether such a truncation argument could lead to a sparse PCA estimator for heavy-tailed $X$. Below we show it is indeed possible to obtain a near-optimal estimator for heavy-tailed sparse PCA based on the truncation technique. For a vector $v\in\mathbb{R}^d$, let $\vartheta(v)$ be a truncation operator that works entrywise as $[\vartheta(v)]_j = \operatorname{sign}(v_j)\cdot\min\{|v_j|,\tau\}$ for $j\in[d]$. Then, our estimator is defined as follows,
$$
\max_{W}\ \bigl\langle W, \bar{\Sigma}\bigr\rangle - \lambda\,\|W\|_{1}\quad \text{subject to}\quad 0 \preceq W \preceq I_d,\ \operatorname{Trace}(W) = k, \qquad (4.3)
$$
where $\bar{\Sigma} = n^{-1}\sum_{i=1}^{n}\bar{X}_i\bar{X}_i^\top$ and $\bar{X}_i = \vartheta(X_i)$, for $i = 1,\dots,n$. For the above estimator, we have the following theorem under the assumption that $X$ has heavy-tailed marginals. Let $V^* = (v^*_1 \dots v^*_k)\in\mathbb{R}^{d\times k}$ and we assume that $\kappa_0 = \lambda_k(\Sigma) - \lambda_{k+1}(\Sigma) > 0$.
Theorem 4.1. Let $\widehat{W}$ be the solution of the optimization in (4.3) and let $\widehat{V}$ contain the $k$ leading eigenvectors of $\widehat{W}$. Also, we set the regularization parameter in (4.3) to be $\lambda = C_1\sqrt{M\log d/n}$ and the truncation parameter to be $\tau = (C_2 Mn/\log d)^{1/4}$, where $C_1$ and $C_2$ are some positive constants. Moreover, we assume that $V^*$ contains only $s^*$ nonzero rows and that $X$ satisfies $\mathbb{E}|X_j|^4\le M$ and $\mathbb{E}|X_i\cdot X_j|^2\le M$. Then, with probability at least $1 - d^{-2}$, we have
$$
\inf_{O\in\mathcal{O}_k}\bigl\|\widehat{V} - V^*O\bigr\|_F \le 4\sqrt{2}/\kappa_0\cdot s^*\lambda,
$$
where $\mathcal{O}_k\subseteq\mathbb{R}^{k\times k}$ is the set of all possible rotation matrices.
The proof of the above theorem is identical to that of Theorem 3.3 and thus we omit it. The above theorem shows that with elementwise truncation, as long as $X$ satisfies a bounded fourth moment condition, the SDP estimator for sparse PCA achieves the near-optimal statistical rate of $O(s^*\sqrt{\log d/n})$.
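The truncated covariance in (4.3) is particularly simple to form; a short sketch (our own illustration):

```python
import numpy as np

# Sketch: entrywise-truncated sample covariance used in (4.3).
def truncated_covariance(X, tau):
    # X: (n, d); apply theta(.) entrywise, then average outer products
    Xbar = np.sign(X) * np.minimum(np.abs(X), tau)
    return Xbar.T @ Xbar / X.shape[0]
```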
We end this section with the following questions based on the above discussions:
1. Could we obtain the minimax optimal statistical rate $O(\sqrt{s^*\log d/n})$ for sparse PCA in the high sample size regime with $n = \Omega(s^{*2}\log d)$ if $X$ has only bounded fourth moment?
2. Could we obtain the minimax optimal statistical rate $O(\sqrt{s^*\log d/n})$ given $n = \Omega(s^{*2}\log d)$ when $f$, $X$, and $Y$ satisfy the bounded moment condition in Assumption 3.1 for MIM?
The answers to both questions lie in constructing truncation-based estimators that concentrate sharply in the restricted operator norm defined in (4.2), or more realistically exhibit one-sided concentration bounds (see, e.g., [24] and [27] for related results and discussion). Obtaining such an estimator seems to be challenging for heavy-tailed sparse PCA and it is not immediately clear if this is even possible. We plan to report our findings for the above problem in the near future.
5 Experimental Results
In this section, we evaluate the finite-sample error of the proposed estimators on simulated data. We concentrate on the case of sparse phase retrieval. Recall that in this case, the link function is known and existing convex and nonconvex estimators are applicable predominantly for the case of Gaussian or light-tailed data. The question of what are the necessary assumptions on the measurement vectors for (sparse) phase retrieval to work is an intriguing one [11]. In the sequel, we demonstrate that using the proposed score-based estimators, one could use heavy-tailed and skewed measurements as well, which significantly extends the class of measurement vectors applicable for sparse phase retrieval.
Recall that the covariate $X$ has i.i.d. entries with distribution $p_0$. Throughout this section, we set $p_0$ to be the Gamma distribution with shape parameter 5 and scale parameter 1 or the Rayleigh distribution with scale parameter 2. The random noise $\epsilon$ is set to be standard Gaussian. Moreover, we solve the optimization problems in (3.3) and (3.6) via the alternating direction method of multipliers (ADMM) algorithm, which introduces a dual variable to handle the constraints and updates the primal and dual variables iteratively.
For SIM, let the link functions be $f_1(u) = u^2$, $f_2(u) = |u|$, and $f_3(u) = 4u^2 + 3\cos(u)$, correspondingly. Here $f_1$ corresponds to the phase retrieval model, and $f_2$ and $f_3$ could be viewed as its robust extensions. Throughout the experiment we vary $n$ and fix $d = 500$ and $s^* = 5$. Also, the support of $\beta^*$ is chosen uniformly at random from all the possible subsets of $[d]$ with cardinality $s^*$. For each $j\in\mathrm{supp}(\beta^*)$, we set $\beta^*_j = 1/\sqrt{s^*}\cdot\eta_j$, where the $\eta_j$'s are i.i.d. Rademacher random variables. Furthermore, we fix the regularization parameter $\lambda = 4\sqrt{\log d/n}$ and threshold parameter $\tau = 20$. In addition, we adopt the cosine distance $\cos\angle(\widehat{\beta},\beta^*) = 1 - |\langle\widehat{\beta},\beta^*\rangle|$ to measure the estimation error. We plot the cosine distance against the theoretical statistical rate of convergence $\sqrt{s^*\log d/n}$ in Figure 2(a)-(c) for each link function, respectively. The plot is based on 100 independent trials for each $n$. It shows that the estimation error is bounded by a linear function of $\sqrt{s^*\log d/n}$, which corroborates the theory.
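The data-generating process of this experiment can be reproduced with a few lines; the sketch below is our own illustration of the setup above, with hypothetical helper names.

```python
import numpy as np

# Sketch: sparse phase retrieval data with i.i.d. Gamma(5, 1) covariates,
# an s*-sparse unit-norm Rademacher signal and Gaussian noise (f1(u) = u^2).
def simulate(n, d=500, s=5, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    beta[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    X = rng.gamma(shape=5.0, scale=1.0, size=(n, d))
    Y = (X @ beta) ** 2 + rng.standard_normal(n)
    return X, Y, beta
```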
7
0.3
0.3
0.3
0.25
0.25
0.25
0.2
0.2
0.2
0.15
0.15
0.15
0.1
0.1
0.1
0.05
0.05
0.05
0
0.1
0.15
0.2
0.25
0.3
f1 (u) = u2
0.35
0.4
0
0.1
0.15
0.2
0.25
0.3
f2 (u) = |u|,
0.35
0.4
0
0.1
0.15
0.2
0.25
0.3
0.35
0.4
f3 (u) = 4u2 + 3 cos(u)
Figure 2: Cosine distances between the true parameter ? and the estimated parameter b in the sparse
SIM with the link function in one of f1 , f2 , and f3 . Here we set d = 500. s? = 5 and vary n.
6 Discussion
In this work, we study estimating the parametric component of SIM and MIM in high dimensions, under fairly general assumptions on the link function $f$ and response $Y$. Furthermore, our estimators are applicable in the non-Gaussian setting in which $X$ is not required to satisfy restrictive Gaussian or elliptical symmetry assumptions. Our estimators are based on a data-driven truncation technique in combination with a second-order Stein's identity.
In the low-dimensional setting, for two-layer neural networks, [14] propose a tensor-based method for estimating the parametric component. Their estimators are sub-optimal even assuming $X$ is Gaussian. An immediate application of our truncation-based estimators enables us to obtain optimal results for a fairly general class of covariate distributions in the low-dimensional setting. Obtaining optimal or near-optimal results in the high-dimensional setting is of great interest for two-layer neural networks, albeit challenging. We plan to extend the results of the current paper to two-layer neural networks in high dimensions and report our findings in the near future.
References
[1] Albert Ai, Alex Lapanowski, Yaniv Plan, and Roman Vershynin. One-bit compressed sensing with non-Gaussian measurements. Linear Algebra and its Applications, 2014.
[2] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013.
[3] Petros T Boufounos and Richard G Baraniuk. 1-bit compressive sensing. In Annual Conference on Information Sciences and Systems, pages 16–21. IEEE, 2008.
[4] T Tony Cai, Xiaodong Li, and Zongming Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.
[5] Emmanuel J Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[6] Emmanuel J Candès, Thomas Strohmer, and Vladislav Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
[7] Xin Chen, Changliang Zou, and Dennis Cook. Coordinate-independent sparse sufficient dimension reduction and variable selection. The Annals of Statistics, 38(6):3696–3723, 2010.
[8] Mark A Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3):189–223, 2014.
[9] Jianqing Fan, Jinchi Lv, and Lei Qi. Sparse high-dimensional models in economics. Annual Review of Economics, 3(1):291–317, 2011.
[10] Larry Goldstein, Stanislav Minsker, and Xiaohan Wei. Structured signal recovery from non-linear and heavy-tailed measurements. arXiv preprint arXiv:1609.01025, 2016.
[11] David Gross, Felix Krahmer, and Richard Kueng. A partial derandomization of PhaseLift using spherical designs. Journal of Fourier Analysis and Applications, 2015.
[12] Joel L Horowitz. Semiparametric and nonparametric methods in econometrics, volume 12. Springer, 2009.
[13] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Score function features for discriminative learning: Matrix and tensor framework. arXiv preprint arXiv:1412.2863, 2014.
[14] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
[15] Bo Jiang and Jun S Liu. Variable selection for general index models via sliced inverse regression. The Annals of Statistics, 42(5):1751–1786, 2014.
[16] Robert Krauthgamer, Boaz Nadler, and Dan Vilenchik. Do semidefinite relaxations solve sparse PCA up to the information limit? The Annals of Statistics, 43(3):1300–1322, 2015.
[17] Guillaume Lecué and Shahar Mendelson. Minimax rate of convergence and the performance of empirical risk minimization in phase retrieval. Electronic Journal of Probability, 20(57):1–29, 2015.
[18] Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327, 1991.
[19] Ker-Chau Li. On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. Journal of the American Statistical Association, 87(420):1025–1039, 1992.
[20] Ker-Chau Li and Naihua Duan. Regression analysis under link violation. The Annals of Statistics, 17(3):1009–1052, 1989.
[21] Xiaodong Li and Vladislav Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM Journal on Mathematical Analysis, 45(5):3019–3033, 2013.
[22] Qian Lin, Xinran Li, Dongming Huang, and Jun S Liu. On the optimality of sliced inverse regression in high dimensions. arXiv preprint arXiv:1701.06009, 2017.
[23] Qian Lin, Zhigen Zhao, and Jun S Liu. On consistency and sparsity for sliced inverse regression in high dimensions. arXiv preprint arXiv:1507.03895, 2015.
[24] Shahar Mendelson. Learning without concentration. In Conference on Learning Theory, pages 25–39, 2014.
[25] Matey Neykov, Jun S Liu, and Tianxi Cai. $\ell_1$-regularized least squares for support recovery of high dimensional single index models with Gaussian designs. Journal of Machine Learning Research, 17(87):1–37, 2016.
[26] Matey Neykov, Zhaoran Wang, and Han Liu. Agnostic estimation for misspecified phase retrieval models. In Advances in Neural Information Processing Systems, pages 4089–4097, 2016.
[27] Roberto Imbuzeiro Oliveira. The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties. arXiv preprint arXiv:1312.2903, 2013.
[28] Yaniv Plan and Roman Vershynin. The generalized lasso with non-linear observations. IEEE Transactions on Information Theory, 62(3):1528–1537, 2016.
[29] Charles Stein, Persi Diaconis, Susan Holmes, and Gesine Reinert. Use of exchangeable pairs in the analysis of simulations. In Stein's Method. Institute of Mathematical Statistics, 2004.
[30] Ju Sun, Qing Qu, and John Wright. A geometric analysis of phase retrieval. arXiv preprint arXiv:1602.06664, 2016.
[31] Kean Ming Tan, Zhaoran Wang, Han Liu, and Tong Zhang. Sparse generalized eigenvalue problem: Optimal statistical rates via truncated Rayleigh flow. arXiv preprint arXiv:1604.08697, 2016.
[32] Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang, and Dennis Cook. A convex formulation for high-dimensional sparse sliced inverse regression. Manuscript, 2016.
[33] Vincent Q Vu, Juhee Cho, Jing Lei, and Karl Rohe. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In Advances in Neural Information Processing Systems, pages 2670–2678, 2013.
[34] Tengyao Wang, Quentin Berthet, and Richard J Samworth. Statistical and computational trade-offs in estimation of sparse principal components. The Annals of Statistics, 44(5):1896–1930, 2016.
[35] Zhaoran Wang, Quanquan Gu, and Han Liu. Sharp computational-statistical phase transitions via oracle computational model. arXiv preprint arXiv:1512.08861, 2015.
[36] Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014.
[37] Zhaoran Wang, Huanran Lu, and Han Liu. Tighten after relax: Minimax-optimal sparse PCA in polynomial time. In Advances in Neural Information Processing Systems, pages 3383–3391, 2014.
[38] Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C Eldar, and Tong Zhang. Sparse nonlinear regression: Parameter estimation and asymptotic inference. International Conference on Machine Learning, 2015.
[39] Xinyang Yi, Zhaoran Wang, Constantine Caramanis, and Han Liu. Optimal linear estimation under unknown nonlinear transform. In Advances in Neural Information Processing Systems, pages 1549–1557, 2015.
[40] Lixing Zhu, Baiqi Miao, and Heng Peng. On sliced inverse regression with high-dimensional covariates. Journal of the American Statistical Association, 101(474):630–643, 2006.
Gaussian Quadrature for Kernel Features
Tri Dao
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]
Christopher De Sa
Department of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
Christopher Ré
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Kernel methods have recently attracted resurgent interest, showing performance
competitive with deep neural networks in tasks such as speech recognition. The
random Fourier features map is a technique commonly used to scale up kernel
machines, but employing the randomized feature map means that $O(\epsilon^{-2})$ samples are required to achieve an approximation error of at most $\epsilon$. We investigate some alternative schemes for constructing feature maps that are deterministic, rather than random, by approximating the kernel in the frequency domain using Gaussian quadrature. We show that deterministic feature maps can be constructed, for any $\gamma > 0$, to achieve error $\epsilon$ with $O(e^{e^\gamma} + \epsilon^{-1/\gamma})$ samples as $\epsilon$ goes to 0. Our method
works particularly well with sparse ANOVA kernels, which are inspired by the
convolutional layer of CNNs. We validate our methods on datasets in different
domains, such as MNIST and TIMIT, showing that deterministic features are faster
to generate and achieve accuracy comparable to the state-of-the-art kernel methods
based on random Fourier features.
1 Introduction
Kernel machines are frequently used to solve a wide variety of problems in machine learning [26].
They have gained resurgent interest and have recently been shown [13, 18, 21, 19, 22] to be competitive with deep neural networks in some tasks such as speech recognition on large datasets. A kernel
machine is one that handles input $x_1, \ldots, x_n$, represented as vectors in $\mathbb{R}^d$, only in terms of some kernel function $k: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ of pairs of data points $k(x_i, x_j)$. This representation is attractive
for classification problems because one can learn non-linear decision boundaries directly on the input
without having to extract features before training a linear classifier.
One well-known downside of kernel machines is the fact that they scale poorly to large datasets.
Naive kernel methods, which operate on the Gram matrix $G_{i,j} = k(x_i, x_j)$ of the data, can take a very long time to run because the Gram matrix itself requires $O(n^2)$ space and many operations on it (e.g., the singular value decomposition) take up to $O(n^3)$ time. Rahimi and Recht [23] proposed a solution to this problem: approximating the kernel with an inner product in a higher-dimensional space. Specifically, they suggest constructing a feature map $z: \mathbb{R}^d \to \mathbb{R}^D$ such that $k(x, y) \approx \langle z(x), z(y)\rangle$.
This approximation enables kernel machines to use scalable linear methods for solving classification
problems and to avoid the pitfalls of naive kernel methods by not materializing the Gram matrix.
In the case of shift-invariant kernels, one technique that was proposed for constructing the function z
is random Fourier features [23]. This data-independent method approximates the Fourier transform
integral (1) of the kernel by averaging Monte-Carlo samples, which allows for arbitrarily-good
estimates of the kernel function $k$. Rahimi and Recht [23] proved that if the feature map has dimension $D = \tilde{\Omega}(d\,\epsilon^{-2})$ then, with constant probability, the approximation $\langle z(x), z(y)\rangle$ is uniformly $\epsilon$-close to the true kernel on a bounded set. While the random Fourier features method has proven
to be effective in solving practical problems, it comes with some caveats. Most importantly, the
accuracy guarantees are only probabilistic and there is no way to easily compute, for a particular
random sample, whether the desired accuracy is achieved.
Our aim is to understand to what extent randomness is necessary to approximate a kernel. We
thus propose a fundamentally different scheme for constructing the feature map z. While still
approximating the kernel's Fourier transform integral (1) with a discrete sum, we select the sample
points and weights deterministically. This gets around the issue of probabilistic-only guarantees
by removing the randomness from the algorithm. For small dimension, deterministic maps yield
significantly lower error. As the dimension increases, some random sampling may become necessary,
and our theoretical insights provide a new approach to sampling. Moreover, for a particular class
of kernels called sparse ANOVA kernels (also known as convolutional kernels as they are similar
to the convolutional layer in CNNs) which have shown state-of-the-art performance in speech
recognition [22], deterministic maps require fewer samples than random Fourier features, both in
terms of the desired error and the kernel size. We make the following contributions:
• In Section 3, we describe how to deterministically construct a feature map $z$ for the class of subgaussian kernels (which can approximate any kernel well) that has exponentially small (in $D$) approximation error.
• In Section 4, for sparse ANOVA kernels, we show that our method produces good estimates using only $O(d)$ samples, whereas random Fourier features requires $O(d^3)$ samples.
• In Section 5, we validate our results experimentally. We demonstrate that, for real classification problems on the MNIST and TIMIT datasets, our method combined with random sampling yields up to 3 times lower kernel approximation error. With sparse ANOVA kernels, our method slightly improves classification accuracy compared to the state-of-the-art kernel methods based on random Fourier features (which have already been shown to match the performance of deep neural networks), all while speeding up the feature generation process.
2 Related Work
Much work has been done on extracting features for kernel methods. The random Fourier features
method has been analyzed in the context of several learning algorithms, and its generalization error
has been characterized and compared to that of other kernel-based algorithms [24]. It has also been
compared to the Nyström method [35], which is data-dependent and thus can sometimes outperform
random Fourier features. Other recent work has analyzed the generalization performance of the
random Fourier features algorithm [17], and improved the bounds on its maximum error [29, 31].
While we focus here on deterministic approximations to the Fourier transform integral and compare
them to Monte Carlo estimates, these are not the only two methods available to us. A possible
middle-ground method is quasi-Monte Carlo estimation, in which low-discrepancy sequences, rather
than the fully-random samples of Monte Carlo estimation, are used to approximate the integral.
This approach
was analyzed
in Yang et al. [34] and shown to achieves an asymptotic error of
?
?
d
1
? = O D (log(D)) . While this is asymptotically better than the random Fourier features
method, the complexity of the quasi-Monte Carlo method coupled with its larger constant factors
prevents it from being strictly better than its predecessor. Our method still requires asymptotically
fewer samples as ? goes to 0.
Our deterministic approach here takes advantage of a long line of work on numerical quadrature
for estimating integrals. Bach [1] analyzed in detail the connection between quadrature and random
feature expansions, thus deriving bounds for the number of samples required to achieve a given
average approximation error (though they did not present complexity results regarding maximum error
nor suggested new feature maps). This connection allows us to leverage longstanding deterministic
numerical integration methods such as Gaussian quadrature [6, 33] and sparse grids [2].
Unlike many other kernels used in machine learning, such as the Gaussian kernel, the sparse ANOVA
kernel allows us to encode prior information about the relationships among the input variables into
the kernel itself. Sparse ANOVA kernels have been shown [30] to work well for many classification
tasks, especially in structural modeling problems that benefit from both the good generalization of a
kernel machine and the representational advantage of a sparse model [9].
3 Kernels and Quadrature
We start with a brief overview of kernels. A kernel function $k: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ encodes the similarity between pairs of examples. In this paper, we focus on shift-invariant kernels (those which satisfy $k(x, y) = k(x - y)$, where we overload the definition of $k$ to also refer to a function $k: \mathbb{R}^d \to \mathbb{R}$) that are positive definite and properly scaled. A kernel is positive definite if its Gram matrix is always positive definite for all non-trivial inputs, and it is properly scaled if $k(x, x) = 1$ for all $x$. In this setting, our results make use of a theorem [25] that also provides the "key insight" behind the random Fourier features method.
Theorem 1 (Bochner's theorem). A continuous shift-invariant properly-scaled kernel $k: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ is positive definite if and only if $k$ is the Fourier transform of a proper probability distribution.
We can then write $k$ in terms of its Fourier transform $\Lambda$ (which is a proper probability distribution):
$$k(x - y) = \int_{\mathbb{R}^d} \Lambda(\omega) \exp\!\left(j\omega^\top (x - y)\right) d\omega. \qquad (1)$$
For $\omega$ distributed according to $\Lambda$, this is equivalent to writing
$$k(x - y) = \mathbf{E}\!\left[\exp\!\left(j\omega^\top (x - y)\right)\right] = \mathbf{E}\!\left[\langle \exp(j\omega^\top x), \exp(j\omega^\top y)\rangle\right],$$
where we use the usual Hermitian inner product $\langle x, y\rangle = \sum_i x_i \overline{y_i}$. The random Fourier features method proceeds by estimating this expected value using Monte Carlo sampling averaged across $D$ random selections of $\omega$. Equivalently, we can think of this as approximating (1) with a discrete sum
at randomly selected sample points.
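For concreteness, here is a minimal sketch of this Monte Carlo estimate for the Gaussian kernel, whose spectrum $\Lambda$ is the standard normal density; the data handling is an assumption of this sketch rather than a detail from the paper.

```python
import numpy as np

def random_fourier_features(X, D, rng):
    """Monte Carlo feature map z with E[<z(x), z(y)>] = exp(-||x - y||^2 / 2).

    For the Gaussian kernel the spectral density is N(0, I), so we draw
    omega_i ~ N(0, I) and use equal weights a_i = 1/D.
    """
    omega = rng.standard_normal((D, X.shape[1]))   # sample points omega_i
    proj = X @ omega.T                             # omega_i^T x for each i
    return np.exp(1j * proj) / np.sqrt(D)          # complex features

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Z = random_fourier_features(X, D=2000, rng=rng)
K_approx = (Z @ Z.conj().T).real                   # approximate Gram matrix
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
```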
Our objective is to choose some points $\omega_i$ and weights $a_i$ to uniformly approximate the integral (1) with $\tilde{k}(x - y) = \sum_{i=1}^{D} a_i \exp(j\omega_i^\top (x - y))$. To obtain a feature map $z: \mathbb{R}^d \to \mathbb{C}^D$ where $\tilde{k}(x - y) = \sum_{i=1}^{D} z_i(x)\,\overline{z_i(y)}$, we can define
$$z(x) = \left[\sqrt{a_1}\,\exp(j\omega_1^\top x) \;\; \ldots \;\; \sqrt{a_D}\,\exp(j\omega_D^\top x)\right]^\top.$$
We aim to bound the maximum error for $x, y$ in a region $\mathcal{M}$ with diameter $M = \sup_{x,y \in \mathcal{M}} \|x - y\|$:
$$\epsilon = \sup_{(x,y) \in \mathcal{M}} \left|k(x - y) - \tilde{k}(x - y)\right| = \sup_{\|u\| \le M} \left| \int_{\mathbb{R}^d} \Lambda(\omega)\, e^{j\omega^\top u}\, d\omega - \sum_{i=1}^{D} a_i\, e^{j\omega_i^\top u} \right|. \qquad (2)$$
A quadrature rule is a choice of $\omega_i$ and $a_i$ to minimize this maximum error. To evaluate a quadrature rule, we are concerned with the sample complexity (for a fixed diameter $M$).
Definition 1. For any $\epsilon > 0$, a quadrature rule has sample complexity $D_{SC}(\epsilon) = D$, where $D$ is the smallest value such that the rule, when instantiated with $D$ samples, has maximum error at most $\epsilon$.
We will now examine ways to construct deterministic quadrature rules and their sample complexities.
3.1 Gaussian Quadrature
Gaussian quadrature is one of the most popular techniques in one-dimensional numerical integration. The main idea is to approximate integrals of the form $\int \Lambda(\omega) f(\omega)\, d\omega \approx \sum_{i=1}^{D} a_i f(\omega_i)$ such that the approximation is exact for all polynomials below a certain degree; $D$ points are sufficient for polynomials of degree up to $2D - 1$. While the points and weights used by Gaussian quadrature depend both on the distribution $\Lambda$ and the parameter $D$, they can be computed efficiently using orthogonal polynomials [10, 32]. Gaussian quadrature produces accurate results when integrating functions that are well-approximated by polynomials, which include all subgaussian densities.
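As a concrete one-dimensional instance, the sketch below computes Gaussian quadrature points and weights for a standard normal spectral density using NumPy's Gauss-Hermite(e) routine; normalizing the weights by $\sqrt{2\pi}$ is our bookkeeping choice so that they sum to one against the density.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_quadrature_normal(D):
    """1-D Gaussian quadrature points/weights for the N(0, 1) density.

    hermegauss integrates against exp(-w^2 / 2); rescaling by sqrt(2*pi)
    turns the weights into a discrete proxy for the normal density.
    """
    points, weights = hermegauss(D)
    return points, weights / np.sqrt(2 * np.pi)

# Exact for polynomials up to degree 2D - 1: with D = 3 the rule already
# reproduces E[w^4] = 3 for w ~ N(0, 1).
p, a = gauss_quadrature_normal(3)
assert np.isclose(np.sum(a * p**4), 3.0)
```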
[Figure 1: estimated maximum error versus region diameter $M$ for (a) polynomially-exact quadrature, (b) sparse grid quadrature, and (c) a subsampled dense grid, each plotted against random Fourier features with the same number of samples.]
Figure 1: Error comparison (empirical maximum over $10^6$ uniformly-distributed samples) of different quadrature schemes and the random Fourier features method.
Definition 2 (Subgaussian Distribution). We say that a distribution $\Lambda: \mathbb{R}^d \to \mathbb{R}$ is subgaussian with parameter $b$ if for $X \sim \Lambda$ and for all $t \in \mathbb{R}^d$, $\mathbf{E}\left[\exp(\langle t, X\rangle)\right] \le \exp\!\left(\tfrac{1}{2} b^2 \|t\|^2\right)$.
We subsequently assume that the distribution $\Lambda$ is subgaussian, which is a technical restriction
compared to random Fourier features. Many of the kernels encountered in practice have subgaussian
spectra, including the ubiquitous Gaussian kernel. More importantly, we can approximate any kernel
by convolving it with the Gaussian kernel, resulting in a subgaussian kernel. The approximation error
can be made much smaller than the inherent noise in the data generation process.
3.2 Polynomially-Exact Rules
Since Gaussian quadrature is so successful in one dimension, as commonly done in the numerical analysis literature [14], we might consider using quadrature rules that are multidimensional analogues of Gaussian quadrature: rules that are accurate for all polynomials up to a certain degree $R$. In higher dimensions, this is equivalent to saying that our quadrature rule satisfies
$$\int_{\mathbb{R}^d} \Lambda(\omega) \prod_{l=1}^{d} (e_l^\top \omega)^{r_l}\, d\omega = \sum_{i=1}^{D} a_i \prod_{l=1}^{d} (e_l^\top \omega_i)^{r_l} \quad \text{for all } r \in \mathbb{N}^d \text{ such that } \sum_{l=1}^{d} r_l \le R, \qquad (3)$$
where the $e_l$ are the standard basis vectors.
To test the accuracy of polynomially-exact quadrature, we constructed a feature map for a Gaussian kernel, $\Lambda(\omega) = (2\pi)^{-d/2} \exp\!\left(-\tfrac{1}{2}\|\omega\|^2\right)$, in $d = 25$ dimensions with $D = 1000$, accurate for all polynomials up to degree $R = 2$. In Figure 1a, we compared this to a random Fourier features rule with the same number of samples, over a range of region diameters $M$ that captures most of the data points in practice (as the kernel is properly scaled). For small regions in particular, a polynomially-exact scheme can have a significantly lower error than a random Fourier feature map.
This experiment motivates us to investigate theoretical bounds on the behavior of this method. For
subgaussian kernels, it is straightforward to bound the maximum error of a polynomially-exact feature
map using the Taylor series approximation of the exponential function in (2).
Theorem 2. Let $k$ be a kernel with $b$-subgaussian spectrum, and let $\tilde{k}$ be its estimation under some quadrature rule with non-negative weights that is exact up to some even degree $R$. Let $\mathcal{M} \subseteq \mathbb{R}^d$ be some region of diameter $M$. Then, for all $x, y \in \mathcal{M}$, the error of the quadrature features approximation is bounded by
$$\left|k(x - y) - \tilde{k}(x - y)\right| \le \frac{2e}{\sqrt{\pi}} \left(\frac{e\, b^2 M^2}{R}\right)^{R/2}.$$
All the proofs are found in the Appendix.
To bound the sample complexity of polynomially-exact quadrature, we need to determine how many quadrature samples we will need to satisfy the conditions of Theorem 2. There are $\binom{d+R}{d}$ constraints in (3), so a series of polynomially-exact quadrature rules that use only about this many sample points can yield a bound on the sample complexity of this quadrature rule.
Corollary 1. Assume that we are given a class of feature maps that satisfy the conditions of Theorem 2, and that all have a number of samples $D \le c\binom{d+R}{d}$ for some fixed constant $c$. Then, for any $\gamma > 0$, the sample complexity of feature maps in this class can be bounded by
$$D(\epsilon) \le c\, 2^d \max\left( \exp\!\left(e^{2\gamma+1} b^2 M^2\right),\ \left(\frac{2e}{\sqrt{\pi}\,\gamma\epsilon}\right)^{1/\gamma} \right).$$
In particular, for a fixed dimension $d$, this means that for any $\gamma$, $D(\epsilon) = O\!\left(\epsilon^{-1/\gamma}\right)$.
The result of this corollary implies that, in terms of the desired error $\epsilon$, the sample complexity increases asymptotically more slowly than any negative power of $\epsilon$. Compared to the result for random Fourier features, which had $D(\epsilon) = O(\epsilon^{-2})$, this has a much weaker dependence on $\epsilon$. While this weaker dependence does come at the cost of an additional factor of $2^d$, it is a constant cost of operating in dimension $d$, and is not dependent on the error $\epsilon$.
The more pressing issue, when comparing polynomially-exact features to random Fourier features, is the fact that we have no way of efficiently constructing quadrature rules that satisfy the conditions of Theorem 2. One possible construction involves selecting random sample points $\omega_i$ and then solving (3) for the values of $a_i$ using a non-negative least squares (NNLS) algorithm; a sketch is given below. While this construction works in low dimensions (it is the method we used for the experiment in Figure 1a), it rapidly becomes infeasible to solve for higher values of $d$ and $R$.
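A minimal sketch of this NNLS construction for a standard normal spectrum. Estimating the moment vector by Monte Carlo is an illustrative shortcut (for a Gaussian spectrum the moments are also available in closed form), and the helper names and candidate-point count are hypothetical.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

def monomial_powers(d, R):
    """All exponent vectors r in N^d with sum(r) <= R."""
    return [r for r in itertools.product(range(R + 1), repeat=d) if sum(r) <= R]

def polynomially_exact_weights(omega, R, n_mc=200_000, seed=0):
    """Fit non-negative weights a so that sum_i a_i prod_l omega_il^r_l
    matches the moments E[prod_l w_l^r_l], w ~ N(0, I), for all |r| <= R."""
    powers = monomial_powers(omega.shape[1], R)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_mc, omega.shape[1]))
    A = np.stack([np.prod(omega ** np.array(r), axis=1) for r in powers])
    b = np.array([np.mean(np.prod(w ** np.array(r), axis=1)) for r in powers])
    a, residual = nnls(A, b)       # min ||A a - b|| subject to a >= 0
    return a, residual

rng = np.random.default_rng(1)
omega = rng.standard_normal((1000, 5))   # randomly selected candidate points
a, res = polynomially_exact_weights(omega, R=2)
```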
We will now show how to overcome this issue by introducing quadrature rules that can be rapidly constructed using grid-based quadrature rules. These rules are constructed directly from products of a one-dimensional quadrature rule, such as Gaussian quadrature, and so avoid the construction-difficulty problems encountered in this section. Although grid-based quadrature rules can be constructed for any kernel function [2], they are easier to conceptualize when the kernel $k$ factors along the dimensions, as $k(u) = \prod_{i=1}^{d} k_i(u_i)$. For simplicity we will focus on this factorizable case.
3.3 Dense Grid Quadrature
The simplest way to do this is with a dense grid (also known as tensor product) construction. A dense grid construction starts by factoring the integral (1) into $k(u) = \prod_{i=1}^{d} \left( \int_{-\infty}^{\infty} \Lambda_i(\omega) \exp(j\omega\, e_i^\top u)\, d\omega \right)$, where the $e_i$ are the standard basis vectors. Since each of the factors is an integral over a single dimension, we can approximate them all with a one-dimensional quadrature rule. In this paper, we focus on Gaussian quadrature, although we could also use other methods such as Clenshaw-Curtis [3]. Taking tensor products of the points and weights results in the dense grid quadrature. The detailed construction is given in Appendix A; a minimal sketch follows below.
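A minimal sketch of the tensor-product construction, reusing the one-dimensional Gauss-Hermite rule from Section 3.1; the level parameter `L` (points per dimension) is a placeholder.

```python
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def dense_grid(d, L):
    """Tensor-product (dense grid) quadrature for a product of N(0, 1)
    spectra: L**d points, with weights multiplied across dimensions."""
    p1, w1 = hermegauss(L)
    w1 = w1 / np.sqrt(2 * np.pi)      # normalize against the density
    points = np.array(list(itertools.product(p1, repeat=d)))    # (L**d, d)
    weights = np.prod(
        np.array(list(itertools.product(w1, repeat=d))), axis=1)
    return points, weights

# Even d = 5, L = 4 already costs 4**5 = 1024 nodes: the curse of
# dimensionality discussed in the text.
pts, wts = dense_grid(5, 4)
assert np.isclose(wts.sum(), 1.0)
```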
The individual Gaussian quadrature rules are exact for all polynomials up to degree $2L - 1$, so the
dense grid is also accurate for all such polynomials. Theorem 2 then yields a bound on its sample
complexity.
Corollary 2. Let $k$ be a kernel with a spectrum that is subgaussian with parameter $b$. Then, for any $\gamma > 0$, the sample complexity of dense grid features can be bounded by
$$D(\epsilon) \le \max\left( \exp\!\left(d\, e^{\gamma}\, \frac{e\, b^2 M^2}{2}\right),\ \left(\frac{2e}{\sqrt{\pi}\,\gamma\epsilon}\right)^{d/\gamma} \right).$$
In particular, as was the case with polynomially-exact features, for a fixed $d$, $D(\epsilon) = O\!\left(\epsilon^{-1/\gamma}\right)$.
Unfortunately, this scheme suffers heavily from the curse of dimensionality, since the sample
complexity is doubly-exponential in d. This means that, even though they are easy to compute, the
dense grid method does not represent a useful solution to the issue posed in Section 3.2.
3.4 Sparse Grid Quadrature
The curse of dimensionality for quadrature in high dimensions has been studied in the numerical
integration setting for decades. One of the more popular existing techniques for getting around
the curse is called sparse grid or Smolyak quadrature [28], originally developed to solve partial
differential equations. Instead of taking the tensor product of the one-dimensional quadrature rule,
we only include points up to some fixed total level A, thus constructing a linear combination of dense
grid quadrature rules that achieves a similar error with exponentially fewer points than a single larger
quadrature rule. The detailed construction is given in Appendix B. Compared to polynomially-exact rules, sparse grid quadrature can be computed quickly and easily (see Algorithm 4.1 from Holtz [12]); a minimal sketch via the difference (combination) formulation is given below.
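A minimal sketch of the Smolyak construction via tensor products of difference rules $\Delta_l = Q_l - Q_{l-1}$. Mapping level $l$ to an $l$-point Gauss-Hermite rule is an assumption of this sketch (the paper defers the exact construction to Appendix B), and the brute-force enumeration of multi-indices is only practical for small $d$.

```python
import itertools
from collections import defaultdict
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def rule_1d(level):
    """1-D Gauss-Hermite rule with `level` points for the N(0, 1) density."""
    p, w = hermegauss(level)
    return p, w / np.sqrt(2 * np.pi)

def smolyak_grid(d, A):
    """Sparse (Smolyak) grid up to total level A: sum over multi-indices l
    with sum(l_i - 1) <= A of tensor products of differences Delta_l."""
    nodes = defaultdict(float)
    for levels in itertools.product(range(1, A + 2), repeat=d):
        if sum(l - 1 for l in levels) > A:
            continue
        factors = []
        for l in levels:
            p, w = rule_1d(l)
            diff = list(zip(p, w))              # + Q_l
            if l > 1:
                p0, w0 = rule_1d(l - 1)
                diff += list(zip(p0, -w0))      # - Q_{l-1}
            factors.append(diff)
        for combo in itertools.product(*factors):
            pt = tuple(round(p, 12) for p, _ in combo)   # merge duplicates
            nodes[pt] += np.prod([w for _, w in combo])
    pts = np.array(list(nodes.keys()))
    wts = np.array(list(nodes.values()))
    return pts, wts

pts, wts = smolyak_grid(d=3, A=2)
assert np.isclose(wts.sum(), 1.0)   # only the all-ones level integrates 1
```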
To measure the performance of sparse grid quadrature, we constructed a feature map for the same
Gaussian kernel analyzed in the previous section, with d = 25 dimensions and up to level A = 2. We
compared this to a random Fourier features rule with the same number of samples, D = 1351, and
plot the results in Figure 1b. As was the case with polynomially-exact quadrature, this sparse grid
scheme has tiny error for small-diameter regions, but this error unfortunately increases to be even
larger than that of random Fourier features as the region diameter increases.
The sparse grid construction yields a bound on the sample count: $D \le 3^A \binom{d+A}{A}$, where $A$ is the bound on the total level. By extending known bounds on the error of Gaussian quadrature, we can similarly bound the error of the sparse grid feature method.
Theorem 3. Let $k$ be a kernel with a spectrum that is subgaussian with parameter $b$, and let $\tilde{k}$ be its estimation under the sparse grid quadrature rule up to level $A$. Let $\mathcal{M} \subseteq \mathbb{R}^d$ be some region of diameter $M$, and assume that $A \ge 8 b^2 M^2$. Then, for all $x, y \in \mathcal{M}$, the error of the quadrature features approximation is bounded by
$$\left|k(x - y) - \tilde{k}(x - y)\right| \le 2^d \left(\frac{4 b^2 M^2}{A}\right)^{A}.$$
This, along with our above upper bound on the sample count, yields a bound on the sample complexity.
Corollary 3. Let $k$ be a kernel with a spectrum that is subgaussian with parameter $b$. Then, for any $\gamma > 0$, the sample complexity of sparse grid features can be bounded by
$$D(\epsilon) \le 2^d \max\left( \exp\!\left(8 e^2 b^2 M^2\right),\ 2^d\, \epsilon^{-1/\gamma} \right).$$
As was the case with all our previous deterministic feature maps, for a fixed $d$, $D(\epsilon) = O\!\left(\epsilon^{-1/\gamma}\right)$.
Subsampled grids. One of the downsides of the dense/sparse grids analyzed above is the difficulty
of tuning the number of samples extracted in the feature map. As the only parameter we can typically
set is the degree of polynomial exactness, even a small change in this (e.g., from 2 to 4) can produce
a significant increase in the number of features. However, we can always subsample the grid points
according to the distribution determined by their weights to both tame the curse of dimensionality and
to have fine-grained control over the number of samples. For simplicity, we focus on subsampling
the dense grid. In Figure 1c, we compare the empirical errors of subsampled dense grid and random
Fourier features, noting that they are essentially the same across all diameters.
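A minimal sketch of the subsampling step: draw grid points with probability proportional to their (non-negative) weights, then assign uniform weights so the quadrature estimate stays unbiased. It consumes the output of the `dense_grid` sketch above.

```python
import numpy as np

def subsample_grid(points, weights, D, rng):
    """Subsample D grid points with probability proportional to their
    weights; reweighting uniformly keeps the estimator unbiased, since
    E[(W/D) * sum_j f(x_{i_j})] = sum_i w_i f(x_i) with W = sum_i w_i."""
    prob = weights / weights.sum()
    idx = rng.choice(len(points), size=D, replace=True, p=prob)
    return points[idx], np.full(D, weights.sum() / D)

rng = np.random.default_rng(0)
sub_pts, sub_wts = subsample_grid(pts, wts, D=200, rng=rng)
```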
3.5 Reweighted Grid Quadrature
Both random Fourier features and dense/sparse grid quadratures are data-independent. We now
describe a data-adaptive method to choose a quadrature for a pre-specified number of samples:
reweighting the grid points to minimize the difference between the approximate and the exact kernel
on a small subset of data. Adjusting the grid to the data distribution yields better kernel approximation.
We approximate the kernel $k(x - y)$ with
$$\tilde{k}(x - y) = \sum_{i=1}^{D} a_i \exp\!\left(j\omega_i^\top (x - y)\right) = \sum_{i=1}^{D} a_i \cos\!\left(\omega_i^\top (x - y)\right),$$
where $a_i \ge 0$, as $k$ is real-valued. We first choose the set of potential grid points $\omega_1, \ldots, \omega_D$ by sampling from a dense grid of Gaussian quadrature points. To solve for the weights $a_1, \ldots, a_D$, we independently sample $n$ pairs $(x_1, y_1), \ldots, (x_n, y_n)$ from the dataset, then minimize the empirical mean squared error (with variables $a_1, \ldots, a_D$):
$$\text{minimize} \quad \frac{1}{n} \sum_{l=1}^{n} \left( k(x_l - y_l) - \tilde{k}(x_l - y_l) \right)^2 \quad \text{subject to} \quad a_i \ge 0, \;\; i = 1, \ldots, D.$$
For an appropriately defined matrix $M$ and vector $b$, this is an NNLS problem of minimizing $\frac{1}{n}\|Ma - b\|^2$ subject to $a \ge 0$, with variable $a \in \mathbb{R}^D$. The solution is often sparse, due to the active elementwise constraints $a \ge 0$. Hence we can pick a larger set of potential grid points $\omega_1, \ldots, \omega_{D'}$ (with $D' > D$) and solve the above problem to obtain a smaller set of grid points (those with $a_j > 0$). To get an even sparser solution, we add an $\ell_1$-penalty term with parameter $\lambda \ge 0$:
$$\text{minimize} \quad \frac{1}{n}\|Ma - b\|^2 + \lambda\, \mathbf{1}^\top a \quad \text{subject to} \quad a_i \ge 0, \;\; i = 1, \ldots, D'.$$
Bisecting on $\lambda$ yields the desired number of grid points; a sketch of the fitting step is given below.
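A minimal sketch of the reweighting step for a Gaussian kernel. For $\lambda = 0$ this is plain NNLS; for $\lambda > 0$ the sketch uses projected gradient descent on the penalized objective rather than a specialized solver, which is our simplification, and the bisection over $\lambda$ is omitted.

```python
import numpy as np
from scipy.optimize import nnls

def reweighted_quadrature(omega, pairs_x, pairs_y, kernel, lam=0.0, iters=5000):
    """Fit non-negative weights a for grid points omega by minimizing
    (1/n)||M a - b||^2 + lam * sum(a) over a >= 0."""
    diff = pairs_x - pairs_y
    n = diff.shape[0]
    M = np.cos(diff @ omega.T)       # rows: cos(omega_i^T (x_l - y_l))
    b = kernel(diff)                 # exact kernel values k(x_l - y_l)
    if lam == 0.0:
        a, _ = nnls(M, b)            # same argmin as the (1/n)-scaled loss
        return a
    a = np.zeros(M.shape[1])
    step = n / (2.0 * np.linalg.norm(M, 2) ** 2)   # 1/L for this quadratic
    for _ in range(iters):
        grad = (2.0 / n) * M.T @ (M @ a - b) + lam
        a = np.maximum(a - step * grad, 0.0)       # project onto a >= 0
    return a

# Example usage with the Gaussian kernel exp(-||u||^2 / 2):
# a = reweighted_quadrature(omega, X1, X2,
#                           kernel=lambda U: np.exp(-0.5 * (U**2).sum(1)),
#                           lam=1e-3)
```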
As this is a data-dependent quadrature, we empirically evaluate its performance on the TIMIT dataset, which we describe in more detail in Section 5. In Figure 2b, we compare the estimated root mean squared error on the dev set of different feature generation schemes against the number of features $D$ (mean and standard deviation over 10 runs). Random Fourier features, quasi-Monte Carlo (QMC) with the Halton sequence, and the subsampled dense grid have very similar approximation error, while reweighted quadrature has much lower approximation error. Reweighted quadrature achieves 2-3 times lower error for the same number of features and requires 3-5 times fewer features for a fixed threshold of approximation error compared to random Fourier features. Moreover, reweighted
features have extremely low variance, even though the weights are adjusted based only on a very
small fraction of the dataset (500 samples out of 1 million data points).
Faster feature generation. Not only does grid-based quadrature yield better statistical performance than random Fourier features, it also has some notable systems benefits. Generating quadrature features
requires a much smaller number of multiplies, as the grid points only take on a finite set of values for
all dimensions (assuming an isotropic kernel). For example, a Gaussian quadrature that is exact up to
polynomials of degree 21 only requires 11 grid points for each dimension. To generate the features, we
multiply the input with these 11 numbers before adding the results to form the deterministic features.
The save in multiples may be particularly significant in architectures such as application-specific
integrated circuits (ASICs). In our experiment on the TIMIT dataset in Section 5, this specialized
matrix multiplication procedure (on CPU) reduces the feature generation time in half.
4 Sparse ANOVA Kernels
One type of kernel that is commonly used in machine learning, for example in structural modeling, is the sparse ANOVA kernel [11, 8]. Such kernels are also called convolutional kernels, as they operate similarly
to the convolutional layer in CNNs. These kernels have achieved state-of-the-art performance on
large real-world datasets [18, 22], as we will see in Section 5. A kernel of this type can be written as
$$k(x, y) = \sum_{S \in \mathcal{S}} \prod_{i \in S} k_1(x_i, y_i),$$
where $\mathcal{S}$ is a set of subsets of the variables in $\{1, \ldots, d\}$, and $k_1$ is a one-dimensional kernel. (Straightforward extensions, which we will not discuss here, include using different one-dimensional kernels for each element of the products, and weighting the sum.) Sparse ANOVA kernels are used to encode sparse dependencies among the variables: two variables are related if they appear together in some $S \in \mathcal{S}$. These sparse dependencies are typically problem-specific: each $S$ could correspond to a factor in the graph if we are analyzing a distribution modeled with a factor graph. Equivalently, we can think of the set $\mathcal{S}$ as a hypergraph, where each $S \in \mathcal{S}$ corresponds to a hyperedge. Using this notion, we define the rank of an ANOVA kernel to be $r = \max_{S \in \mathcal{S}} |S|$, the degree as $\Delta = \max_{i \in \{1, \ldots, d\}} |\{S \in \mathcal{S} \mid i \in S\}|$, and the size of the kernel to be the number of hyperedges $m = |\mathcal{S}|$. For sparse models, it is common for both the rank and the degree to be small, even as the number of dimensions $d$ becomes large, so $m = O(d)$. This is the case we focus on in this section.
It is straightforward to apply the random Fourier features method to construct feature maps for ANOVA kernels: construct feature maps for each of the (at most $r$-dimensional) sub-kernels $k_S(x - y) = \prod_{i \in S} k_1(x_i - y_i)$ individually, and then combine the results. To achieve overall error $\epsilon$, it suffices for each of the sub-kernel feature maps to have error $\epsilon/m$; this can be achieved by random Fourier features using $D_S = \tilde\Omega\!\left(r (\epsilon m^{-1})^{-2}\right) = \tilde\Omega\!\left(r m^2 \epsilon^{-2}\right)$ samples each, where the notation $\tilde\Omega$ hides the $\log 1/\epsilon$ factor. Summed across all the $m$ sub-kernels, this means that the random Fourier features map can achieve error $\epsilon$ with constant probability with a sample complexity of $D(\epsilon) = \tilde\Omega\!\left(r m^3 \epsilon^{-2}\right)$ samples. While it is nice to be able to tackle this problem using random features, the cubic dependence on $m$ in this expression is undesirable: it is significantly larger than the $D = \tilde\Theta(d\, \epsilon^{-2})$ we get in the non-ANOVA case.
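A minimal sketch of the combination step: because the ANOVA kernel is a sum of sub-kernels, concatenating the per-subset feature vectors reproduces the sum of inner products. Here `sub_feature_map` stands in for any of the maps above (random Fourier or quadrature-based); the names are hypothetical.

```python
import numpy as np

def anova_feature_map(X, subsets, sub_feature_map):
    """Build a feature map for k(x, y) = sum_S prod_{i in S} k1(x_i, y_i):
    concatenating per-subset features gives
    <z(x), z(y)> = sum_S <z_S(x_S), z_S(y_S)>."""
    blocks = [sub_feature_map(X[:, list(S)]) for S in subsets]
    return np.concatenate(blocks, axis=1)
```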
Can we construct a deterministic feature map that has a better error bound? It turns out that we can.
Theorem 4. Assume that we use polynomially-exact quadrature to construct features for each of the sub-kernels $k_S$, under the conditions of Theorem 2, and then combine the resulting feature maps to produce a feature map for the full ANOVA kernel. For any $\gamma > 0$, the sample complexity of this method is
$$D(\epsilon) \le m\, 2^r \max\left( \exp\!\left(e^{2\gamma+1} b^2 M^2\right),\ (4\gamma)^{-1/\gamma}\, \epsilon^{-1/\gamma} \right).$$
Compared to random Fourier features, this rate depends only linearly on $m$. For fixed parameters $b$, $M$, $\Delta$, and $r$, and for any $\gamma > 0$, we can bound the sample complexity $D(\epsilon) = O\!\left(m\, \epsilon^{-1/\gamma}\right)$, which is better than random Fourier features both in terms of the kernel size $m$ and the desired error $\epsilon$.
5 Experiments
To evaluate the performance of deterministic feature maps, we analyzed the accuracy of a sparse
ANOVA kernel on the MNIST digit classification task [16] and the TIMIT speech recognition task [5].
Digit classification on MNIST. This task consists of 70,000 examples (60,000 in the training dataset and 10,000 in the test dataset) of hand-written digits which need to be classified. Each
example is a $28 \times 28$ gray-scale image. Clever kernel-based SVM techniques are known to achieve
very low error rates (e.g., 0.79%) on this problem [20]. We do not attempt to compare ourselves with
these rates; rather, we compare random Fourier features and subsampled dense grid features that
both approximate the same ANOVA kernel. The ANOVA kernel we construct is designed to have
a similar structure to the first layer of a convolutional neural network [27]. Just as a filter is run on
each $5 \times 5$ square of the image, for our ANOVA kernel, each of the sub-kernels is chosen to run on a $5 \times 5$ square of the original image (note that there are many, $(28 - 5 + 1)^2 = 576$, such squares).
We choose the simple Gaussian kernel as our one-dimensional kernel.
Figure 2a compares the dense grid subsampling method to random Fourier features across a range of
feature counts. The deterministic feature map with subsampling performs better than the random
Fourier feature map across most large feature counts, although its performance degrades for very
small feature counts. The deterministic feature map is also somewhat faster to compute, taking (for the 28800-feature map) 320 seconds vs. 384 seconds for the random Fourier features, a savings of 17%.
Speech recognition on TIMIT. This task requires producing accurate transcripts from raw audio
recordings of conversations in English, involving 630 speakers, for a total of 5.4 hours of speech.
We use the kernel features in the acoustic modeling step of speech recognition. Each data point
corresponds to a frame (10ms) of audio data, preprocessed using the standard feature space Maximum
Likelihood Linear Regression (fMMLR) [4]. The input x has dimension 40. After generating kernel
features z(x) from this input, we model the corresponding phonemes y by a multinomial logistic
regression model. Again, we use a sparse ANOVA kernel, which is a sum of 50 sub-kernels of the form $\exp(-\gamma \|x_S - y_S\|^2)$, each acting on a subset $S$ of 5 indices. These subsets are randomly
chosen a priori. To reweight the quadrature features, we sample 500 data points out of 1 million.
We plot the phone error rates (PER) of a speech recognizer trained based on different feature
generation schemes against the number of features D in Figure 2c (mean and standard deviation
over 10 runs). Again, subsampled dense grid performs similarly to random Fourier features, QMC
yields slightly higher error, while reweighted features achieve slightly lower phone error rates. All
four methods have relatively high variability in their phone error rates due to the stochastic nature of
the training and decoding steps in the speech recognition pipeline. The quadrature-based features
(subsampled dense grids and reweighted quadrature) are about twice as fast to generate, compared to
random Fourier features, due to the small number of multiplies required. We use the same setup as
May et al. [22], and the performance here matches both that of random Fourier features and deep
neural networks in May et al. [22].
[Figure 2 panels: (a) test accuracy on MNIST vs. number of features (random Fourier vs. subsampled dense grid); (b) kernel RMS approximation error on TIMIT vs. number of features (random Fourier, quasi-Monte Carlo, subsampled dense grid, reweighted quadrature); (c) phone error rate on TIMIT vs. number of features for the same four methods.]
Figure 2: Performance of different feature generation schemes on MNIST and TIMIT.
6 Conclusion
We presented deterministic feature maps for kernel machines. We showed that we can achieve better
scaling in the desired accuracy $\epsilon$ compared to the state-of-the-art method, random Fourier features.
We described several ways to construct these feature maps, including polynomially-exact quadrature,
dense grid construction, sparse grid construction, and reweighted grid construction. Our results apply
well to the case of sparse ANOVA kernels, achieving significant improvements (in the dependency on
the dimension d) over random Fourier features. Finally, we evaluated our results experimentally, and
showed that ANOVA kernels with deterministic feature maps can produce comparable accuracy to
the state-of-the-art methods based on random Fourier features on real datasets.
ANOVA kernels are an example of how structure can be used to define better kernels. Resembling
the convolutional layers of convolutional neural networks, they induce the necessary inductive bias in
the learning process. Given CNNs' recent success in other domains besides images, such as sentence
classification [15] and machine translation [7], we hope that our work on deterministic feature maps
will enable kernel methods such as ANOVA kernels to find new areas of application.
Acknowledgments
This material is based on research sponsored by Defense Advanced Research Projects Agency
(DARPA) under agreement number FA8750-17-2-0095. We gratefully acknowledge the support of
the DARPA SIMPLEX program under No. N66001-15-C-4043, DARPA FA8750-12-2-0335 and
FA8750-13-2-0039, DOE 108845, National Institute of Health (NIH) U54EB020405, the National
Science Foundation (NSF) under award No. CCF-1563078, the Office of Naval Research (ONR)
under awards No. N000141210041 and No. N000141310129, the Moore Foundation, the Okawa
Research Grant, American Family Insurance, Accenture, Toshiba, and Intel. This research was
supported in part by affiliate members and other supporters of the Stanford DAWN project: Intel,
Microsoft, Teradata, and VMware. The U.S. Government is authorized to reproduce and distribute
reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and
conclusions contained herein are those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S.
Government. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of DARPA, AFRL, NSF, NIH, ONR,
or the U.S. government.
References
[1] Francis Bach. On the equivalence between quadrature rules and random features. arXiv preprint arXiv:1502.06800, 2015.
[2] Hans-Joachim Bungartz and Michael Griebel. Sparse grids. Acta Numerica, 13:147–269, 2004.
[3] Charles W Clenshaw and Alan R Curtis. A method for numerical integration on an automatic computer. Numerische Mathematik, 2(1):197–205, 1960.
[4] Mark JF Gales. Maximum likelihood linear transformations for HMM-based speech recognition. Computer Speech & Language, 12(2):75–98, 1998.
[5] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren. DARPA TIMIT acoustic phonetic continuous speech corpus CDROM, 1993. URL http://www.ldc.upenn.edu/Catalog/LDC93S1.html.
[6] Carl Friedrich Gauss. Methodus nova integralium valores per approximationem inveniendi. apvd Henricvm Dieterich, 1815.
[7] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
[8] S. R. Gunn and J. S. Kandola. Structural modelling with sparse kernels. Machine Learning, 48(1-3):137–163, July 2002. ISSN 0885-6125, 1573-0565. doi: 10.1023/A:1013903804720. URL https://link.springer.com/article/10.1023/A:1013903804720.
[9] Steve R. Gunn and Jaz S. Kandola. Structural modelling with sparse kernels. Machine Learning, 48(1-3):137–163, 2002.
[10] Nicholas Hale and Alex Townsend. Fast and accurate computation of Gauss–Legendre and Gauss–Jacobi quadrature nodes and weights. SIAM Journal on Scientific Computing, 35(2):A652–A674, 2013.
[11] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. The Annals of Statistics, pages 1171–1220, 2008.
[12] Markus Holtz. Sparse grid quadrature in high dimensions with applications in finance and insurance, volume 77. Springer Science & Business Media, 2010.
[13] Po-Sen Huang, Haim Avron, Tara N Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on TIMIT. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 205–209. IEEE, 2014.
[14] Eugene Isaacson and Herbert Bishop Keller. Analysis of numerical methods. Courier Corporation, 1994.
[15] Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751.
[16] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[17] Ming Lin, Shifeng Weng, and Changshui Zhang. On the sample complexity of random Fourier features for online learning: How many random Fourier features do we need? ACM Trans. Knowl. Discov. Data, 2014.
[18] Zhiyun Lu, Avner May, Kuan Liu, Alireza Bagheri Garakani, Dong Guo, Aurélien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, and Fei Sha. How to scale up kernel methods to be as good as deep neural nets. arXiv:1411.4000 [cs, stat], November 2014. URL http://arxiv.org/abs/1411.4000.
[19] Zhiyun Lu, Dong Quo, Alireza Bagheri Garakani, Kuan Liu, Avner May, Aurélien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, et al. A comparison between deep neural nets and kernel acoustic models for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 5070–5074. IEEE, 2016.
[20] Subhransu Maji and Jitendra Malik. Fast and accurate digit classification. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2009-159, 2009.
[21] Avner May, Michael Collins, Daniel Hsu, and Brian Kingsbury. Compact kernel models for acoustic modeling via random feature selection. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 2424–2428. IEEE, 2016.
[22] Avner May, Alireza Bagheri Garakani, Zhiyun Lu, Dong Guo, Kuan Liu, Aurélien Bellet, Linxi Fan, Michael Collins, Daniel Hsu, Brian Kingsbury, et al. Kernel approximation methods for speech recognition. arXiv preprint arXiv:1701.03577, 2017.
[23] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[24] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems, pages 1313–1320, 2009.
[25] Walter Rudin. Fourier analysis on groups. Number 12. John Wiley & Sons, 1990.
[26] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT Press, 2002.
[27] Patrice Y Simard, Dave Steinkraus, and John C Platt. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, page 958. IEEE, 2003.
[28] S. A. Smolyak. Quadrature and interpolation formulas for tensor products of certain classes of functions. Dokl. Akad. Nauk SSSR, 148(5):1042–1053, 1963. Transl.: Soviet Math. Dokl. 4:240–243, 1963.
[29] Bharath Sriperumbudur and Zoltan Szabo. Optimal rates for random Fourier features. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1144–1152. Curran Associates, Inc., 2015.
[30] M Stitson, Alex Gammerman, Vladimir Vapnik, Volodya Vovk, Chris Watkins, and Jason Weston. Support vector regression with ANOVA decomposition kernels. Advances in Kernel Methods–Support Vector Learning, pages 285–292, 1999.
[31] Dougal J. Sutherland and Jeff Schneider. On the error of random Fourier features. In Proceedings of the 31st Annual Conference on Uncertainty in Artificial Intelligence (UAI-15). AUAI Press, 2015.
[32] Alex Townsend, Thomas Trogdon, and Sheehan Olver. Fast computation of Gauss quadrature nodes and weights on the whole real line. IMA Journal of Numerical Analysis, page drv002, 2015.
[33] Lloyd N Trefethen. Is Gauss quadrature better than Clenshaw–Curtis? SIAM Review, 50(1):67–87, 2008.
[34] Jiyan Yang, Vikas Sindhwani, Haim Avron, and Michael Mahoney. Quasi-Monte Carlo feature maps for shift-invariant kernels. In Proceedings of The 31st International Conference on Machine Learning (ICML-14), pages 485–493, 2014.
[35] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Advances in Neural Information Processing Systems, pages 476–484, 2012.
6,845 | 7,192 | Value Prediction Network
Junhyuk Oh, Satinder Singh, Honglak Lee
University of Michigan, Google Brain
{junhyuk,baveja,honglak}@umich.edu, [email protected]
Abstract
This paper proposes a novel deep reinforcement learning (RL) architecture, called
Value Prediction Network (VPN), which integrates model-free and model-based
RL methods into a single neural network. In contrast to typical model-based
RL methods, VPN learns a dynamics model whose abstract states are trained
to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that
VPN has several advantages over both model-free and model-based baselines in a
stochastic environment where careful planning is required but building an accurate
observation-prediction model is difficult. Furthermore, VPN outperforms Deep
Q-Network (DQN) on several Atari games even with short-lookahead planning,
demonstrating its potential as a new way of learning a good state representation.
1 Introduction
Model-based reinforcement learning (RL) approaches attempt to learn a model that predicts future
observations conditioned on actions and can thus be used to simulate the real environment and do
multi-step lookaheads for planning. We will call such models an observation-prediction model to
distinguish it from another form of model introduced in this paper. Building an accurate observation-prediction model is often very challenging when the observation space is large [23, 5, 13, 4] (e.g., high-dimensional pixel-level image frames), and even more difficult when the environment is stochastic.
Therefore, a natural question is whether it is possible to plan without predicting future observations.
In fact, raw observations may contain information unnecessary for planning, such as dynamically
changing backgrounds in visual observations that are irrelevant to their value/utility. The starting point
of this work is the premise that what planning truly requires is the ability to predict the rewards and
values of future states. An observation-prediction model relies on its predictions of observations to
predict future rewards and values. What if we could predict future rewards and values directly without
predicting future observations? Such a model could be more easily learnable for complex domains or
more flexible for dealing with stochasticity. In this paper, we address the problem of learning and
planning from a value-prediction model that can directly generate/predict the value/reward of future
states without generating future observations.
Our main contribution is a novel neural network architecture we call the Value Prediction Network
(VPN). The VPN combines model-based RL (i.e., learning the dynamics of an abstract state space
sufficient for computing future rewards and values) and model-free RL (i.e., mapping the learned
abstract states to rewards and values) in a unified framework. In order to train a VPN, we propose
a combination of temporal-difference search [28] (TD search) and n-step Q-learning [20]. In brief,
VPNs learn to predict values via Q-learning and rewards via supervised learning. At the same time,
VPNs perform lookahead planning to choose actions and compute bootstrapped target Q-values.
Our empirical results on a 2D navigation task demonstrate the advantage of VPN over model-free
baselines (e.g., Deep Q-Network [21]). We also show that VPN is more robust to stochasticity in the
environment than an observation-prediction model approach. Furthermore, we show that our VPN
outperforms DQN on several Atari games [2] even with short-lookahead planning, which suggests
that our approach can be potentially useful for learning better abstract-state representations and
reducing sample-complexity.
2 Related Work
Model-based Reinforcement Learning. Dyna-Q [32, 34, 39] integrates model-free and model-based RL by learning an observation-prediction model and using it to generate samples for Q-learning
in addition to the model-free samples obtained by acting in the real environment. Gu et al. [7]
extended these ideas to continuous control problems. Our work is similar to Dyna-Q in the sense that
planning and learning are integrated into one architecture. However, VPNs perform a lookahead tree
search to choose actions and compute bootstrapped targets, whereas Dyna-Q uses a learned model
to generate imaginary samples. In addition, Dyna-Q learns a model of the environment separately
from a value function approximator. In contrast, the dynamics model in VPN is combined with the
value function approximator in a single neural network and indirectly learned from reward and value
predictions through backpropagation.
Another line of work [23, 4, 8, 30] uses observation-prediction models not for planning, but for improving exploration. A key distinction from these prior works is that our method learns abstract-state dynamics not to predict future observations, but instead to predict future rewards/values. For continuous
control problems, deep learning has been combined with model predictive control (MPC) [6, 18, 26],
a specific way of using an observation-prediction model. In cases where the observation-prediction
model is differentiable with respect to continuous actions, backpropagation can be used to find the
optimal action [19] or to compute value gradients [11]. In contrast, our work focuses on learning and
planning using lookahead for discrete control problems.
Our VPNs are related to Value Iteration Networks [35] (VINs) which perform value iteration (VI) by
approximating the Bellman-update through a convolutional neural network (CNN). However, VINs
perform VI over the entire state space, which in practice requires that 1) the state space is small and
representable as a vector with each dimension corresponding to a separate state and 2) the states have
a topology with local transition dynamics (e.g., 2D grid). VPNs do not have these limitations and are
thus more generally applicable, as we will show empirically in this paper.
VPN is close to and in part inspired by Predictron [29] in that a recurrent neural network (RNN) acts
as a transition function over abstract states. VPN can be viewed as a grounded Predictron in that each
rollout corresponds to the transition in the environment, whereas each rollout in Predictron is purely
abstract. In addition, Predictrons are limited to uncontrolled settings and thus policy evaluation,
whereas our VPNs can learn an optimal policy in controlled settings.
Model-free Deep Reinforcement Learning. Mnih et al. [21] proposed the Deep Q-Network
(DQN) architecture which learns to estimate Q-values using deep neural networks. A lot of variations
of DQN have been proposed for learning better state representation [37, 16, 9, 22, 36, 24], including
the use of memory-based networks for handling partial observability [9, 22, 24], estimating both
state-values and advantage-values as a decomposition of Q-values [37], learning successor state
representations [16], and learning several auxiliary predictions in addition to the main RL values [12].
Our VPN can be viewed as a model-free architecture which 1) decomposes Q-value into reward,
discount, and the value of the next state and 2) uses multi-step reward/value predictions as auxiliary
tasks to learn a good representation. A key difference from the prior work listed above is that our
VPN learns to simulate the future rewards/values which enables planning. Although STRAW [36]
can maintain a sequence of future actions using an external memory, it cannot explicitly perform
planning by simulating future rewards/values.
Monte-Carlo Planning. Monte-Carlo Tree Search (MCTS) methods [15, 3] have been used for
complex search problems, such as the game of Go, where a simulator of the environment is already
available and thus does not have to be learned. Most recently, AlphaGo [27] introduced a value
network that directly estimates the value of state in Go in order to better approximate the value of
leaf-node states during tree search. Our VPN takes a similar approach by predicting the value of
abstract future states during tree search using a value function approximator. Temporal-difference
search [28] (TD search) combined TD-learning with MCTS by computing target values for a value
function approximator through MCTS. Our algorithm for training VPN can be viewed as an instance
of TD search, but it learns the dynamics of future rewards/values instead of being given a simulator.
Figure 1: Value prediction network. (a) One-step rollout: VPN learns to predict immediate reward, discount, and the value of the next abstract-state. (b) Multi-step rollout: VPN unrolls the core module in the abstract-state space to compute multi-step rollouts.
3 Value Prediction Network
The value prediction network is developed for semi-Markov decision processes (SMDPs). Let $x_t$ be
the observation or a history of observations for partially observable MDPs (henceforth referred to
as just observation) and let $o_t$ be the option [33, 31, 25] at time t. Each option maps observations
to primitive actions, and the following Bellman equation holds for all policies $\pi$:
$$Q^\pi(x_t, o_t) = \mathbb{E}\left[\textstyle\sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V^\pi(x_{t+k})\right],$$
where $\gamma$ is a discount factor, $r_t$ is the immediate reward at time t, and $k$ is the number of
time steps taken by the option $o_t$ before terminating in observation $x_{t+k}$.
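To make the option-level quantities concrete, the option-reward and option-discount appearing in this Bellman equation can be computed from the primitive rewards collected while the option runs. A minimal sketch in Python (the function name is ours, not from the paper):

```python
def option_outcome(rewards, gamma):
    """Collapse the k primitive rewards seen while an option executes into
    an option-level reward (sum_i gamma^i * r_{t+i}) and discount (gamma^k)."""
    option_reward = sum(gamma ** i * r for i, r in enumerate(rewards))
    option_discount = gamma ** len(rewards)
    return option_reward, option_discount

# Example: an option that terminates after k = 3 steps with gamma = 0.99.
r, g = option_outcome([0.0, 0.0, 2.0], 0.99)   # r = 1.9602, g = 0.970299
```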
A VPN not only learns an option-value function $Q_\theta(x_t, o_t)$ through a neural network parameterized
by $\theta$ like model-free RL, but also learns the dynamics of the rewards/values to perform planning. We
describe the architecture of VPN in Section 3.1. In Section 3.2, we describe how to perform planning
using VPN. Section 3.3 describes how to train VPN in a Q-Learning-like framework [38].
3.1 Architecture
The VPN consists of the following modules parameterized by $\theta = \{\theta^{enc}, \theta^{value}, \theta^{out}, \theta^{trans}\}$:
Encoding $f_\theta^{enc}: x \mapsto s$        Value $f_\theta^{value}: s \mapsto V_\theta(s)$
Transition $f_\theta^{trans}: s, o \mapsto s'$        Outcome $f_\theta^{out}: s, o \mapsto r, \gamma$
• Encoding module maps the observation (x) to the abstract state ($s \in \mathbb{R}^m$) using neural networks
(e.g., CNN for visual observations). Thus, s is an abstract-state representation which will be
learned by the network (and not an environment state or even an approximation to one).
• Value module estimates the value of the abstract-state ($V_\theta(s)$). Note that the value module is not a
function of the observation, but a function of the abstract-state.
• Outcome module predicts the option-reward ($r \in \mathbb{R}$) for executing the option o at abstract-state
s. If the option takes k primitive actions before termination, the outcome module should predict
the discounted sum of the k immediate rewards as a scalar. The outcome module also predicts the
option-discount ($\gamma \in \mathbb{R}$) induced by the number of steps taken by the option.
• Transition module transforms the abstract-state to the next abstract-state ($s' \in \mathbb{R}^m$) in an
option-conditional manner.
Figure 1a illustrates the core module which performs 1-step rollout by composing the above modules:
$f_\theta^{core}: s, o \mapsto r, \gamma, V_\theta(s'), s'$. The core module takes an abstract-state and option as input and makes
separate option-conditional predictions of the option-reward (henceforth, reward), the option-discount
(henceforth, discount), and the value of the abstract-state at option-termination. By combining the
predictions, we can estimate the Q-value as follows: $Q_\theta(s, o) = r + \gamma V_\theta(s')$. In addition, the VPN
recursively applies the core module to predict the sequence of future abstract-states as well as rewards
and discounts given an initial abstract-state and a sequence of options as illustrated in Figure 1b.
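As a rough illustration of how these modules fit together, the sketch below implements the core one-step rollout with PyTorch-style layers. The layer shapes, the sigmoid on the predicted discount, and the linear (rather than convolutional) option-conditional transition are our own simplifying assumptions, not the exact architecture used in the experiments:

```python
import torch
import torch.nn as nn

class VPNCore(nn.Module):
    """Minimal VPN core combining f_enc, f_value, f_out, and f_trans."""

    def __init__(self, obs_channels, state_dim, num_options):
        super().__init__()
        self.num_options = num_options
        self.encode = nn.Sequential(                      # f_enc: x -> s
            nn.Conv2d(obs_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(state_dim))
        self.value = nn.Sequential(                       # f_value: s -> V(s)
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.outcome = nn.Linear(state_dim, 2 * num_options)        # f_out
        self.trans = nn.Linear(state_dim, state_dim * num_options)  # f_trans

    def rollout_step(self, s, o):
        """One-step rollout: option-conditional reward, discount, next value."""
        batch = torch.arange(s.size(0))
        r, g = self.outcome(s).view(-1, self.num_options, 2)[batch, o].unbind(-1)
        delta = self.trans(s).view(-1, self.num_options, s.size(1))[batch, o]
        s_next = s + delta                                # residual transition
        return r, torch.sigmoid(g), self.value(s_next).squeeze(-1), s_next
```

Given the outputs of `rollout_step`, the Q-value estimate is simply `q = r + gamma * v_next`, and the multi-step rollout of Figure 1b is obtained by feeding `s_next` back into `rollout_step`.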
3.2 Planning
VPN has the ability to simulate the future and plan based on the simulated future abstract-states.
Although many existing planning methods (e.g., MCTS) can be applied to the VPN, we implement
a simple planning method which performs rollouts using the VPN up to a certain depth (say d),
henceforth denoted as planning depth, and aggregates all intermediate value estimates as described in
Algorithm 1 and Figure 2. More formally, given an abstract-state s = f?enc (x) and an option o, the
3
Algorithm 1 Q-value from d-step planning
function Q-P LAN(s, o, d)
r, ?, V (s0 ), s0 ? f?core (s, o)
if d = 1 then
return r + ?V (s0 )
end if
A ? b-best options based on Q1 (s0 , o0 )
for o0 ? A do
qo0 ? Q-P LAN(s0 , o0 , d ? 1)
end for
return r +? d1 V (s0 ) + d?1
maxo0 ?A qo0
d
end function
(a) Expansion
(b) Backup
Figure 2: Planning with VPN. (a) Simulate b-best options up
to a certain depth (b = 2 in this example). (b) Aggregate all
possible returns along the best sequence of future options.
Q-value calculated from d-step planning is defined as:
V (s)
Qd? (s, o) = r + ?V?d (s0 )
V?d (s) = 1 ?
d V? (s) +
d?1
d
if d = 1
maxo Q?d?1 (s, o) if d > 1,
(1)
where $s' = f_\theta^{trans}(s, o)$, $V_\theta(s) = f_\theta^{value}(s)$, and $r, \gamma = f_\theta^{out}(s, o)$. Our planning algorithm is divided
into two steps: expansion and backup. At the expansion step (see Figure 2a), we recursively simulate
options up to a depth of d by unrolling the core module. At the backup step, we compute the weighted
average of the direct value estimate $V_\theta(s)$ and $\max_o Q_\theta^{d-1}(s, o)$ to compute $V_\theta^d(s)$ (i.e., value from
d-step planning) in Equation 1. Note that $\max_o Q_\theta^{d-1}(s, o)$ is the average over $d-1$ possible value
estimates. We propose to compute the uniform average over all possible returns by using weights
proportional to 1 and $d-1$ for $V_\theta(s)$ and $\max_o Q_\theta^{d-1}(s, o)$ respectively. Thus, $V_\theta^d(s)$ is the uniform
average of d expected returns along the path of the best sequence of options as illustrated in Figure 2b.
To reduce the computational cost, we simulate only b-best options at each expansion step based on
$Q^1(s, o)$. We also find that choosing only the best option after a certain depth does not compromise
the performance much, which is analogous to using a default policy in MCTS beyond a certain depth.
This heuristic visits reasonably good abstract states during planning, though a more principled way
such as UCT [15] can also be used to balance exploration and exploitation. This planning method
is used for choosing options and computing target Q-values during training, as described in the
following section.
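Algorithm 1 translates into a short recursion. The sketch below assumes a `core(s, o)` callable returning `(r, gamma, v_next, s_next)` as in Figure 1a and a `q1(s)` helper returning the one-step Q-value estimate for every option; both names are ours:

```python
def q_plan(core, q1, s, o, d, b):
    """Q-value of option o at abstract state s from d-step planning (Algorithm 1)."""
    r, gamma, v_next, s_next = core(s, o)
    if d == 1:
        return r + gamma * v_next
    # Expansion: keep only the b best options under the one-step estimate.
    scores = q1(s_next)
    best = sorted(range(len(scores)), key=lambda o2: -scores[o2])[:b]
    q_vals = [q_plan(core, q1, s_next, o2, d - 1, b) for o2 in best]
    # Backup: uniform mixture of the direct value estimate and deeper returns.
    return r + gamma * (v_next / d + (d - 1) / d * max(q_vals))
```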
3.3 Learning
VPN can be trained through any existing value-based RL algorithm for the value predictions combined with supervised learning for reward and discount predictions. In this paper, we present a modification of n-step Q-learning [20] and TD search [28].
The main idea is to generate trajectories by following an ε-greedy policy based on the planning method
described in Section 3.2. Given an n-step trajectory $x_1, o_1, r_1, \gamma_1, x_2, o_2, r_2, \gamma_2, \ldots, x_{n+1}$ generated
by the ε-greedy policy, k-step predictions are defined as follows:
$$s_t^k = \begin{cases} f_\theta^{enc}(x_t) & \text{if } k = 0 \\ f_\theta^{trans}(s_{t-1}^{k-1}, o_{t-1}) & \text{if } k > 0 \end{cases}
\qquad v_t^k = f_\theta^{value}(s_t^k) \qquad r_t^k, \gamma_t^k = f_\theta^{out}(s_t^{k-1}, o_t).$$
Figure 3: Illustration of learning process.
Intuitively, $s_t^k$ is the VPN's k-step prediction of the abstract-state at time t predicted from $x_{t-k}$
following options $o_{t-k}, \ldots, o_{t-1}$ in the trajectory as illustrated in Figure 3. By applying the value
and the outcome module, VPN can compute the k-step prediction of the value, the reward, and the
discount. The k-step prediction loss at step t is defined as:
$$\mathcal{L}_t = \sum_{l=1}^{k} \left(R_t - v_t^l\right)^2 + \left(r_t - r_t^l\right)^2 + \left(\log_\gamma \gamma_t - \log_\gamma \gamma_t^l\right)^2$$
where $R_t = \begin{cases} r_t + \gamma_t R_{t+1} & \text{if } t \le n \\ \max_o Q^d_{\theta^-}(s_{n+1}, o) & \text{if } t = n+1 \end{cases}$
is the target value, and $Q^d_{\theta^-}(s_{n+1}, o)$ is the Q-value computed by the d-step planning method
described in 3.2. Intuitively, $\mathcal{L}_t$ accumulates losses over 1-step to k-step predictions of values,
rewards, and discounts. We find that applying $\log_\gamma$ for the discount prediction loss helps
optimization, which amounts to computing the squared loss with respect to the number of steps.
Our learning algorithm introduces two hyperparameters: the number of prediction steps (k) and
planning depth (dtrain) used for choosing options and computing bootstrapped targets. We also make
use of a target network parameterized by $\theta^-$ which is synchronized with $\theta$ after a certain number
of steps to stabilize training as suggested by [20]. The loss is accumulated over n steps and the
parameter is updated by computing its gradient as follows: $\nabla_\theta \mathcal{L} = \sum_{t=1}^{n} \nabla_\theta \mathcal{L}_t$. The full algorithm
is described in the supplementary material.
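As a sanity check on the indexing, the sketch below accumulates the 1- to k-step losses over one n-step trajectory. The module arguments are plain callables, `Rs[t]` holds the bootstrapped targets defined above, and `gamma0` (the base of the log used for the discount loss) is an illustrative assumption:

```python
import math

def vpn_loss(enc, trans, value, outcome, xs, opts, rs, gs, Rs, k, gamma0=0.99):
    """Sum of 1..k-step value/reward/discount losses (Section 3.3).

    xs[0..n]: observations; opts/rs/gs[0..n-1]: options, rewards, discounts;
    Rs[1..n+1]: target values R_t as defined above.
    """
    total, n = 0.0, len(opts)
    for i in range(n):                    # a rollout starting from x_i
        s = enc(xs[i])                    # s_i^0
        for l in range(1, k + 1):
            t = i + l - 1                 # step whose outcome is predicted
            if t >= n:
                break
            r_hat, g_hat = outcome(s, opts[t])
            s = trans(s, opts[t])         # s_{t+1}^l
            v_hat = value(s)
            total += ((Rs[t + 1] - v_hat) ** 2
                      + (rs[t] - r_hat) ** 2
                      + (math.log(gs[t], gamma0) - math.log(g_hat, gamma0)) ** 2)
    return total
```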
3.4 Relationship to Existing Approaches
VPN is model-based in the sense that it learns an abstract-state transition function sufficient to predict
rewards/discount/values. Meanwhile, VPN can also be viewed as model-free in the sense that it
learns to directly estimate the value of the abstract-state. From this perspective, VPN exploits several
auxiliary prediction tasks, such as reward and discount predictions to learn a good abstract-state
representation. An interesting property of VPN is that its planning ability is used to compute the
bootstrapped target as well as choose options during Q-learning. Therefore, as VPN improves the
quality of its future predictions, it can not only perform better during evaluation through its improved
planning ability, but also generate more accurate target Q-values during training, which encourages
faster convergence compared to conventional Q-learning.
4 Experiments
Our experiments investigated the following questions: 1) Does VPN outperform model-free baselines
(e.g., DQN)? 2) What is the advantage of planning with a VPN over observation-based planning? 3)
Is VPN useful for complex domains with high-dimensional sensory inputs, such as Atari games?
4.1 Experimental Setting
Network Architecture. A CNN was used as the encoding module of VPN, and the transition
module consists of one option-conditional convolution layer which uses different weights depending
on the option followed by a few more convolution layers. We used a residual connection [10] from
the previous abstract-state to the next abstract-state so that the transition module learns the change
of the abstract-state. The outcome module is similar to the transition module except that it does not
have a residual connection and two fully-connected layers are used to produce reward and discount.
The value module consists of two fully-connected layers. The number of layers and hidden units vary
depending on the domain. These details are described in the supplementary material.
Implementation Details. Our algorithm is based on asynchronous n-step Q-learning [20] where
n is 10 and 16 threads are used. The target network is synchronized after every 10K steps.
We used the Adam optimizer [14], and the best learning rate and its decay were chosen from
{0.0001, 0.0002, 0.0005, 0.001} and {0.98, 0.95, 0.9, 0.8} respectively. The learning rate is multiplied by the decay every 1M steps. Our implementation is based on TensorFlow [1].1
VPN has four more hyperparameters: 1) the number of predictions steps (k) during training, 2) the
plan depth (dtrain ) during training, 3) the plan depth (dtest ) during evaluation, and 4) the branching
factor (b) which indicates the number of options to be simulated for each expansion step during
planning. We used k = dtrain = dtest throughout the experiment unless otherwise stated. VPN(d)
represents our model which learns to predict and simulate up to d-step futures during training and
evaluation. The branching factor (b) was set to 4 until depth of 3 and set to 1 after depth of 3, which
means that VPN simulates 4-best options up to depth of 3 and only the best option after that.
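For reference, the hyperparameters just described can be gathered into a single configuration. The values repeat what is stated in this section (with k = 5 standing in for a VPN(5) run); the dictionary itself is only an illustration:

```python
vpn_config = {
    "n_step": 10,                 # n-step Q-learning
    "num_threads": 16,            # asynchronous workers
    "target_sync_steps": 10_000,  # target-network synchronization interval
    "optimizer": "adam",
    "lr_grid": [0.0001, 0.0002, 0.0005, 0.001],
    "lr_decay_grid": [0.98, 0.95, 0.9, 0.8],  # decay applied every 1M steps
    "k": 5, "d_train": 5, "d_test": 5,        # k = d_train = d_test
    "branching": {"up_to_depth_3": 4, "beyond": 1},
}
```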
Baselines. We compared our approach to the following baselines.
¹ The code is available on https://github.com/junhyukoh/value-prediction-network.
Figure 4: Collect domain. (a) Observation: the agent should collect as many goals as possible within a time limit which is given as additional input. (b) DQN's trajectory and (c) VPN's trajectory: DQN collects 5 goals given 20 steps, while VPN(5) found the optimal trajectory via planning which collects 6 goals.
Figure 5: Example of VPN's plan. VPN can plan the best future options just from the current state. The figures show VPN's different plans depending on the time limit: (a) plan with 20 steps, (b) plan with 12 steps.
• DQN: This baseline directly estimates Q-values as its output and is trained through asynchronous
n-step Q-learning. Unlike the original DQN, however, our DQN baseline takes an option as
additional input and applies an option-conditional convolution layer to the top of the last encoding
convolution layer, which is very similar to our VPN architecture.²
• VPN(1): This is identical to our VPN with the same training procedure except that it performs
only 1-step rollout to estimate Q-value as shown in Figure 1a. This can be viewed as a variation of
DQN that predicts reward, discount, and the value of the next state as a decomposition of Q-value.
• OPN(d): We call this Observation Prediction Network (OPN), which is similar to VPN except that
it directly predicts future observations. More specifically, we train two independent networks: a
model network ($f^{model}: x, o \mapsto r, \gamma, x'$) which predicts reward, discount, and the next observation,
and a value network ($f^{value}: x \mapsto V(x)$) which estimates the value from the observation. The
training scheme is similar to our algorithm except that a squared loss for observation prediction is
used to train the model network. This baseline performs d-step planning like VPN(d).
4.2 Collect Domain
Task Description. We defined a simple but challenging 2D navigation task where the agent should
collect as many goals as possible within a time limit, as illustrated in Figure 4. In this task, the
agent, goals, and walls are randomly placed for each episode. The agent has four options: move
left/right/up/down to the first crossing branch or the end of the corridor in the chosen direction. The
agent is given 20 steps for each episode and receives a positive reward (2.0) when it collects a goal by
moving on top of it and a time-penalty (?0.2) for each step. Although it is easy to learn a sub-optimal
policy which collects nearby goals, finding the optimal trajectory in each episode requires careful
planning because the optimal solution cannot be computed in polynomial time.
An observation is represented as a 3D tensor ($\mathbb{R}^{3 \times 10 \times 10}$) with binary values indicating the presence/absence of each object type. The time remaining is normalized to [0, 1] and is concatenated to
the 3rd convolution layer of the network as a channel.
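This observation encoding is straightforward to reproduce; in the sketch below the channel order (agent, goals, walls) is our own assumption:

```python
import numpy as np

def encode_observation(agent_xy, goal_xys, wall_xys, grid=10):
    """Binary 3 x 10 x 10 tensor with one channel per object type."""
    obs = np.zeros((3, grid, grid), dtype=np.float32)
    obs[0][agent_xy] = 1.0
    for xy in goal_xys:
        obs[1][xy] = 1.0
    for xy in wall_xys:
        obs[2][xy] = 1.0
    return obs

obs = encode_observation((0, 0), goal_xys=[(3, 4), (7, 7)], wall_xys=[(5, 5)])
```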
We evaluated all architectures first in a deterministic environment and then investigated the robustness
in a stochastic environment separately. In the stochastic environment, each goal moves by one block
with probability of 0.3 for each step. In addition, each option can be repeated multiple times with
probability of 0.3. This makes it difficult to predict and plan the future precisely.
Overall Performance. The result is summarized in Figure 6. To understand the quality of different
policies, we implemented a greedy algorithm which always collects the nearest goal first and a
shortest-path algorithm which finds the optimal solution through exhaustive search assuming that
the environment is deterministic. Note that even a small gap in terms of reward can be qualitatively
substantial as indicated by the small gap between greedy and shortest-path algorithms.
The results show that many architectures learned a better-than-greedy policy in the deterministic and
stochastic environments except that OPN baselines perform poorly in the stochastic environment. In
addition, the performance of VPN is improved as the plan depth increases, which implies that deeper
predictions are reliable enough to provide more accurate value estimates of future states. As a result,
VPN with 5-step planning, represented by "VPN(5)", performs best in both environments.
² This architecture outperformed the original DQN architecture in our preliminary experiments.
[Figure 6: two panels of learning curves (average reward vs. training step, up to 1.5e7 steps) comparing Greedy, Shortest, DQN, OPN(1/2/3/5), and VPN(1/2/3/5) on (a) the deterministic and (b) the stochastic Collect domain.]
Figure 6: Learning curves on Collect domain. "VPN(d)" represents VPN with d-step planning, while "DQN" and "OPN(d)" are the baselines.
Comparison to Model-free Baselines. Our VPNs outperform DQN and VPN(1) baselines by a
large margin as shown in Figure 6. Figure 4 (b-c) shows an example of trajectories of DQN and
VPN(5) given the same initial state. Although DQN's behavior is reasonable, it ended up with
collecting one less goal compared to VPN(5). We hypothesize that 6 convolution layers used by
DQN and VPN(1) are not expressive enough to find the best route in each episode because finding an
optimal path requires a combinatorial search in this task. On the other hand, VPN can perform such a
combinatorial search to some extent by simulating future abstract-states, which has advantages over
model-free approaches for dealing with tasks that require careful planning.
Comparison to Observation-based Planning. Compared to OPNs which perform planning based
on predicted observations, VPNs perform slightly better or equally well in the deterministic environment. We observed that OPNs can predict future observations very accurately because observations
in this task are simple and the environment is deterministic. Nevertheless, VPNs learn faster than
OPNs in most cases. We conjecture that it takes additional training steps for OPNs to learn to predict
future observations. In contrast, VPNs learn to predict only minimal but sufficient information for
planning: reward, discount, and the value of future abstract-states, which may be the reason why
VPNs learn faster than OPNs.
In the stochastic Collect domain, VPNs significantly outperform OPNs. We observed that OPNs
tend to predict the average of possible future observations ($\mathbb{E}_x[x]$) because OPN is deterministic.
Estimating values on such blurry predictions leads to estimating $V_\theta(\mathbb{E}_x[x])$, which is different from
the true expected value $\mathbb{E}_x[V(x)]$. On the other hand, VPN is trained to approximate the true expected
value because there is no explicit constraint or loss for the predicted abstract state. We hypothesize
that this key distinction allows VPN to learn different modes of possible future states more flexibly in
the abstract state space. This result suggests that a value-prediction model can be more beneficial
than an observation-prediction model when the environment is stochastic and building an accurate
observation-prediction model is difficult.
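The gap between $V_\theta(\mathbb{E}_x[x])$ and $\mathbb{E}_x[V(x)]$ is a Jensen-type effect that any nonlinear value function exhibits; a tiny numeric illustration (the value function here is arbitrary):

```python
import numpy as np

V = lambda x: np.maximum(x, 0.0)    # any nonlinear value function
xs = np.array([-1.0, 1.0])          # two equally likely future observations

print(V(xs.mean()))                 # V(E[x]) = 0.0
print(V(xs).mean())                 # E[V(x)] = 0.5
```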
Generalization Performance. One advantage of a model-based RL approach is that it can generalize
well to unseen environments as long as the dynamics of the environment remains similar. To see if our
VPN has such a property, we evaluated all architectures on two types of previously unseen environments
with either a reduced number of goals (from 8 to 5) or an increased number of walls. It turns out that
our VPN is much more robust to the unseen environments compared to model-free baselines (DQN and
VPN(1)), as shown in Table 1. The model-free baselines perform worse than the greedy algorithm on
unseen environments, whereas VPN still performs well. In addition, VPN generalizes as well as OPN
which can learn a near-perfect model in the deterministic setting, and VPN significantly outperforms
OPN in the stochastic setting. This suggests that VPN has a good generalization property like
model-based RL methods and is robust to stochasticity.

Table 1: Generalization performance. Each number represents average reward. "FGs" and "MWs" represent unseen environments with fewer goals and more walls respectively. Bold-faced numbers represent the highest rewards with 95% confidence level.

              Deterministic              Stochastic
           Original   FGs    MWs     Original   FGs    MWs
Greedy       8.61     5.13   7.79      7.58     4.48   7.04
Shortest     9.71     5.82   8.98      7.64     4.36   7.22
DQN          8.66     4.57   7.08      7.85     4.11   6.72
VPN(1)       8.94     4.92   7.64      7.84     4.27   7.15
OPN(5)       9.30     5.45   8.36      7.55     4.09   6.79
VPN(5)       9.29     5.43   8.31      8.11     4.45   7.46
Table 2: Performance on Atari games. Each number represents average score over 5 top agents.

       Frostbite  Seaquest  Enduro  Alien  Q*Bert  Ms. Pacman  Amidar  Krull  Crazy Climber
DQN      3058       2951     326    1804   12592     2804       535    12438     41658
VPN      3811       5628     382    1429   14517     2689       641    15930     54119
Effect of Planning Depth. To further investigate the effect of planning depth in a VPN, we measured
the average reward in the deterministic environment by varying the planning depth (dtest) from 1 to 10
during evaluation after training VPN with a fixed number of prediction steps and planning depth
(k, dtrain), as shown in Figure 7. Since VPN does not learn to predict observations, there is no
guarantee that it can perform deeper planning during evaluation (dtest) than the planning depth used
during training (dtrain). Interestingly, however, the result in Figure 7 shows that if k = dtrain > 2,
VPN achieves better performance during evaluation through deeper tree search (dtest > dtrain). We
also tested a VPN with k = 10 and dtrain = 5 and found that a planning depth of 10 achieved the best
performance during evaluation. Thus, with a suitably large number of prediction steps during training,
our VPN is able to benefit from deeper planning during evaluation relative to the planning depth during
training. Figure 5 shows examples of good plans of length greater than 5 found by a VPN trained with
planning depth 5. Another observation from Figure 7 is that the performance of planning depth of 1
(dtest = 1) degrades as the planning depth during training (dtrain) increases. This means that a VPN
can improve its value estimations through long-term planning at the expense of the quality of
short-term planning.

Figure 7: Effect of evaluation planning depth. Each curve (VPN(1), VPN(2), VPN(3), VPN(5), VPN(5)*) shows average reward as a function of planning depth, dtest, for each architecture that is trained with a fixed number of prediction steps. "VPN(5)*" was trained to make 10-step predictions but performed 5-step planning during training (k = 10, dtrain = 5).
4.3 Atari Games
To investigate how VPN deals with complex visual observations, we evaluated it on several Atari
games [2]. Unlike in the Collect domain, in Atari games most primitive actions have only small value
consequences and it is difficult to hand-design useful extended options. Nevertheless, we explored if
VPNs are useful in Atari games even with short-lookahead planning using simple options that repeat
the same primitive action over extended time periods by using a frame-skip of 10.³ We pre-processed
the game screen to 84 × 84 gray-scale images. All architectures take last 4 frames as input. We
doubled the number of hidden units of the fully-connected layer for DQN to approximately match the
number of parameters. VPN learns to predict rewards and values but not discount (since it is fixed),
and was trained to make 3-option-step predictions for planning which means that the agent predicts
up to 0.5 seconds ahead in real-time.
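The preprocessing and the repeated-action options amount to something like the following sketch (a gym-style environment is assumed, and the use of OpenCV with default interpolation is our choice; the paper does not specify these details):

```python
import collections
import cv2
import numpy as np

def preprocess(frame_rgb):
    """Game screen -> 84 x 84 gray-scale image in [0, 1]."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    return cv2.resize(gray, (84, 84)).astype(np.float32) / 255.0

frames = collections.deque(maxlen=4)    # the network input: last 4 frames

def step_option(env, action, frame_skip=10):
    """The simple option: repeat one primitive action for frame_skip frames."""
    total_reward, done = 0.0, False
    for _ in range(frame_skip):
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    frames.append(preprocess(obs))
    return np.stack(frames), total_reward, done
```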
As summarized in Table 2 and Figure 8, our VPN outperforms DQN baseline on 7 out of 9 Atari
games and learned significantly faster than DQN on Seaquest, QBert, Krull, and Crazy Climber.
One possible reason why VPN outperforms DQN is that even 3-step planning is indeed helpful for
learning a better policy. Figure 9 shows an example of VPN's 3-step planning in Seaquest. Our VPN
predicts reasonable values given different sequences of actions, which can potentially help choose a
better action by looking at the short-term future. Another hypothesis is that the architecture of VPN
itself, which has several auxiliary prediction tasks for multi-step future rewards and values, is useful
for learning a good abstract-state representation as a model-free agent. Finally, our algorithm which
performs planning to compute the target Q-value can potentially speed up learning by generating more
accurate targets as it performs value backups multiple times from the simulated futures, as discussed
in Section 3.4. These results show that our approach is applicable to complex visual environments
without needing to predict observations.
³ Much of the previous work on Atari games has used a frame-skip of 4. Though using a larger frame-skip generally makes training easier, it may make training harder in some games if they require more fine-grained control [17].
[Figure 8: nine panels of learning curves (Frostbite, Seaquest, Enduro, Alien, QBert, Ms. Pacman, Amidar, Krull, Crazy Climber), each plotting average reward against training steps (up to 4e7) for DQN and VPN.]
Figure 8: Learning curves on Atari games. X-axis and y-axis correspond to steps and average reward over 100
episodes respectively.
Figure 9: Examples of VPN's value estimates. Each figure shows trajectories of different sequences of actions from the initial state (a) along with VPN's value estimates in the parentheses: $r_1 + \gamma r_2 + \gamma^2 r_3 + \gamma^3 V(s_4)$. The action sequences are (b) DownRight-DownRightFire-RightFire (19.3), (c) Up-Up-Up (18.7), (d) Left-Left-Left (18.4), and (e) Up-Right-Right (17.1). VPN predicts the highest value for (b) where the agent kills the enemy and the lowest value for (e) where the agent is killed by the enemy.
5 Conclusion
We introduced value prediction networks (VPNs) as a new deep RL way of integrating planning and
learning while simultaneously learning the dynamics of abstract-states that make option-conditional
predictions of future rewards/discount/values rather than future observations. Our empirical evaluations showed that VPNs outperform model-free DQN baselines in multiple domains, and outperform
traditional observation-based planning in a stochastic domain. An interesting future direction would
be to develop methods that automatically learn the options that allow good planning in VPNs.
Acknowledgement
This work was supported by NSF grant IIS-1526059. Any opinions, findings, conclusions, or
recommendations expressed here are those of the authors and do not necessarily reflect the views of
the sponsor.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser,
M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens,
B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals,
P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation
platform for general agents. arXiv preprint arXiv:1207.4708, 2012.
[3] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez,
S. Samothrakis, and S. Colton. A survey of monte carlo tree search methods. Computational Intelligence
and AI in Games, IEEE Transactions on, 4(1):1–43, 2012.
[4] S. Chiappa, S. Racaniere, D. Wierstra, and S. Mohamed. Recurrent environment simulators. In ICLR,
2017.
[5] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video
prediction. In NIPS, 2016.
[6] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In ICRA, 2017.
[7] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration.
In ICML, 2016.
[8] X. Guo, S. P. Singh, R. L. Lewis, and H. Lee. Deep learning for reward design to improve monte carlo tree
search in atari games. In IJCAI, 2016.
[9] M. Hausknecht and P. Stone. Deep recurrent q-learning for partially observable MDPs. arXiv preprint
arXiv:1507.06527, 2015.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[11] N. Heess, G. Wayne, D. Silver, T. P. Lillicrap, Y. Tassa, and T. Erez. Learning continuous control policies
by stochastic value gradients. In NIPS, 2015.
[12] M. Jaderberg, V. Mnih, W. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement
learning with unsupervised auxiliary tasks. In ICLR, 2017.
[13] N. Kalchbrenner, A. van den Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu.
Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[15] L. Kocsis and C. Szepesvári. Bandit based monte-carlo planning. In ECML, 2006.
[16] T. D. Kulkarni, A. Saeedi, S. Gautam, and S. Gershman. Deep successor reinforcement learning. arXiv
preprint arXiv:1606.02396, 2016.
[17] A. S. Lakshminarayanan, S. Sharma, and B. Ravindran. Dynamic action repetition for deep reinforcement
learning. In AAAI, 2017.
[18] I. Lenz, R. A. Knepper, and A. Saxena. Deepmpc: Learning deep latent features for model predictive
control. In RSS, 2015.
[19] N. Mishra, P. Abbeel, and I. Mordatch. Prediction and control with temporal segment models. In ICML,
2017.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu.
Asynchronous methods for deep reinforcement learning. In ICML, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller,
A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran,
D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature,
518(7540):529–533, 2015.
[22] J. Oh, V. Chockalingam, S. Singh, and H. Lee. Control of memory, active perception, and action in
minecraft. In ICML, 2016.
[23] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks
in atari games. In NIPS, 2015.
[24] E. Parisotto and R. Salakhutdinov. Neural map: Structured memory for deep reinforcement learning. arXiv
preprint arXiv:1702.08360, 2017.
[25] D. Precup. Temporal abstraction in reinforcement learning. PhD thesis, University of Massachusetts,
Amherst, 2000.
[26] T. Raiko and M. Tornio. Variational bayesian learning of nonlinear hidden state-space models for model
predictive control. Neurocomputing, 72(16):3704–3712, 2009.
[27] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser,
I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner,
I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of
go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[28] D. Silver, R. S. Sutton, and M. Müller. Temporal-difference search in computer go. Machine Learning,
87:183–219, 2012.
[29] D. Silver, H. van Hasselt, M. Hessel, T. Schaul, A. Guez, T. Harley, G. Dulac-Arnold, D. Reichert,
N. Rabinowitz, A. Barreto, and T. Degris. The predictron: End-to-end learning and planning. In ICML,
2017.
[30] B. C. Stadie, S. Levine, and P. Abbeel. Incentivizing exploration in reinforcement learning with deep
predictive models. arXiv preprint arXiv:1507.00814, 2015.
[31] M. Stolle and D. Precup. Learning options in reinforcement learning. In SARA, 2002.
[32] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic
programming. In ICML, 1990.
[33] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal
abstraction in reinforcement learning. Artificial intelligence, 112(1):181–211, 1999.
[34] R. S. Sutton, C. Szepesvári, A. Geramifard, and M. H. Bowling. Dyna-style planning with linear function
approximation and prioritized sweeping. In UAI, 2008.
[35] A. Tamar, S. Levine, P. Abbeel, Y. Wu, and G. Thomas. Value iteration networks. In NIPS, 2016.
[36] A. Vezhnevets, V. Mnih, S. Osindero, A. Graves, O. Vinyals, J. Agapiou, and K. Kavukcuoglu. Strategic
attentive writer for learning macro-actions. In NIPS, 2016.
[37] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures
for deep reinforcement learning. In ICML, 2016.
[38] C. J. Watkins and P. Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
[39] H. Yao, S. Bhatnagar, D. Diao, R. S. Sutton, and C. Szepesvári. Multi-step dyna planning for policy
evaluation and control. In NIPS, 2009.
A Learning Error Analysis for Structured Prediction
with Approximate Inference
Yuanbin Wu¹,², Man Lan¹,², Shiliang Sun¹, Qi Zhang³, Xuanjing Huang³
¹ School of Computer Science and Software Engineering, East China Normal University
² Shanghai Key Laboratory of Multidimensional Information Processing
³ School of Computer Science, Fudan University
{ybwu, mlan, slsun}@cs.ecnu.edu.cn, {qz, xjhuang}@fudan.edu.cn
Abstract
In this work, we try to understand the differences between exact and approximate
inference algorithms in structured prediction. We compare the estimation and approximation errors of both underestimate (e.g., greedy search) and overestimate
(e.g., linear relaxation of integer programming) models. The result shows that,
from the perspective of learning errors, the performance of approximate inference
can be as good as that of exact inference. The error analyses also suggest a new margin for existing learning algorithms. Empirical evaluations on text classification,
sequential labelling and dependency parsing witness the success of approximate
inference and the benefit of the proposed margin.
1 Introduction
Given an input x ∈ X, structured prediction is the task of recovering a structure y = h(x) ∈ Y, where Y is a set of combinatorial objects such as sequences (sequential labelling) and trees (syntactic parsing). Usually, the computation of h(x) needs an inference (decoding) procedure to find an optimal y:

$$ h(x) = \arg\max_{y \in \mathcal{Y}} \mathrm{score}(x, y). $$
Solving the "arg max" operation is essential for training and testing structured prediction models, and it is also one of the most time-consuming parts due to its combinatorial nature. In practice, the inference problem often reduces to combinatorial optimization or integer programming problems, which are intractable in many cases. In order to accelerate models, faster approximate inference methods are usually applied. Examples include underestimation algorithms, which output structures with suboptimal scores (e.g., greedy search, max-product belief propagation), and overestimation algorithms, which output structures in a larger output space (e.g., linear relaxation of integer programming). Understanding the trade-offs between computational efficiency and statistical performance is
important for designing effective structured prediction models [Chandrasekaran and Jordan, 2013].
Prior work [Kulesza and Pereira, 2007] shows that approximate inference may not be sufficient
for learning a good statistical model, even with rigorous approximation guarantees. However, the
successful application of various approximate inference algorithms motivates a deeper exploration
of the topic. For example, the recent work [Globerson et al., 2015] shows that an approximate
inference can achieve optimal results on grid graphs. In this work, instead of focusing on specific
models and algorithms, we try to analyze general estimation and approximation errors for structured
prediction with approximate inference.
Recall that given a hypothesis space H, a learning algorithm A receives a set of training samples S = {(x_i, y_i)}_{i=1}^m which are i.i.d. according to a distribution D on the space X × Y, and returns a
hypothesis A(S) ∈ H. Let e(h) = E_D l(y, h(x)) be the risk of a hypothesis h on X × Y (l is a loss function), and h* = argmin_{h∈H} e(h). Applying algorithm A will suffer two types of error:

$$ e(A(S)) = \underbrace{e(h^*)}_{\text{approximation}} \;+\; \underbrace{e(A(S)) - e(h^*)}_{\text{estimation}}. $$
measures whether H is suitable for D, which only depends on the hypothesis space. Our main
theoretical results are:
? For the estimation error, we show that, comparing with exact inference, overestimate inference always has larger estimation error, while underestimate inference can probably have
smaller error. The results are based on the PAC-Bayes framework [McAllester, 2007] for
structured prediction models.
? For the approximation error, we find that the errors of underestimate and exact inference
are not comparable. On the other side, overestimate inference algorithms have a smaller
approximation error than exact inference.
The results may explain the success of exact inference: it makes a good balance between the two
errors. They also suggest that the learning performances of approximate inference can still be improved. Our contributions on empirical algorithms are two-fold.
First, following the PAC-Bayes error bounds, we propose to use a new margin (Definition 3) when
working with approximate algorithms. It introduces a model parameter which can be tuned for
different inference algorithms. We investigate three widely used structured prediction models with
the new margin (structural SVM, structured perceptron and online passive-aggressive algorithm).
Second, we evaluate the algorithms on three NLP tasks: multi-class text classification (a special
case of structured prediction), sequential labelling (chunking, POS tagging, word segmentation)
and high-order non-projective dependency parsing. Results show that the proposed algorithms can
benefit each structured prediction task.
2 Related Work
The first learning error analysis of structured prediction was given in [Collins, 2001]. The bounds
depend on the number of candidate outputs of samples, which grow exponentially with the size
of a sample. To tighten the result, Taskar et al. [2003] provided an improved covering number
argument, where the dependency on the output space size is replaced by the l2 norm of feature
vectors, and London et al. [2013] showed that when the data exhibits weak dependence within each
structure (collective stability), the bound's dependency on structure size could be improved. A concise analysis based on the PAC-Bayes framework was given in [McAllester, 2007]. It enjoys the advantages of Taskar et al.'s bound and has a simpler derivation. Besides the structured hinge loss,
the PAC-Bayes framework was also applied to derive generalization bounds (and consistency results)
for ramp and probit surrogate loss functions [McAllester and Keshet, 2011], and loss functions based
on Gibbs decoders [Honorio and Jaakkola, 2016]. Recently, Cortes et al. [2016] proposed a new
hypothesis space complexity measurement (factor graph complexity) by extending the Rademacher
complexity, and they can get tighter bounds than [Taskar et al., 2003].
For approximate inference algorithms, theoretical results have been given for different learning scenarios, such as the cutting plane algorithm of structured SVMs [Finley and Joachims, 2008, Wang
and Shawe-Taylor, 2009], subgradient descent [Martins et al., 2009], approximate inference via
dual loss [Meshi et al., 2010], pseudo-max approach [Sontag et al., 2010], local learning with decomposed substructures [Samdani and Roth, 2012], perceptron [Huang et al., 2012], and amortized
inference [Kundu et al., 2013, Chang et al., 2015]. Different from previous works, we try to give
a general analysis of approximate inference algorithms which is independent of specific learning
algorithms.
The concept of algorithmic separability is defined in [Kulesza and Pereira, 2007], which showed that without understanding combinations of learning and inference, the learning model could fail. Two recent works gave theoretical analyses of approximate inference showing that it can also obtain promising performance: Globerson et al. [2015] showed that for generative 2D grid models, a two-step approximate inference algorithm achieves optimal learning error, and Meshi et al. [2016] showed that approximations based on LP relaxations are often tight in practice.
The PAC-Bayes approach was initiated by [McAllester, 1999]. Variants of the theory include Seeger's bound [Seeger, 2002], Catoni's bound [Catoni, 2007] and the works [Langford and Shawe-Taylor, 2002, Germain et al., 2009] on linear classifiers.
3 Learning Error Analyses
We will focus on structured prediction with linear discriminant functions. Define exact inference as

$$ h(x, w) = \arg\max_{y \in \mathcal{Y}} w^\top \Phi(x, y), $$

where Φ(x, y) ∈ R^d is the feature vector, and w is the parameter vector in R^d. We consider two types of approximate inference algorithms, namely underestimate approximation and overestimate approximation [Finley and Joachims, 2008]¹.
Definition 1. Given a w, h⁻(x, w) is an underestimate approximation of h(x, w) on a sample x if

$$ \rho\, w^\top \Phi(x, y^*) \le w^\top \Phi(x, y^-) \le w^\top \Phi(x, y^*) $$

for some ρ > 0, where y* = h(x, w), y⁻ = h⁻(x, w) ∈ Y. Similarly, h⁺(x, w) is an overestimate approximation of h(x, w) on sample x if

$$ w^\top \Phi(x, y^*) \le w^\top \Phi(x, y^+) \le \rho\, w^\top \Phi(x, y^*) $$

for some ρ > 0, where y⁺ = h⁺(x, w) ∈ Ŷ and Y ⊆ Ŷ.
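As a concrete reading of Definition 1, the empirical approximation rate of a decoder on a given sample is simply the ratio of the two scores. The sketch below is illustrative only: the feature table and structure names are placeholder assumptions, and it presumes w^⊤Φ(x, y*) > 0, as footnote 1 discusses.

```python
import numpy as np

def approximation_rate(w, features, y_exact, y_approx):
    """Ratio rho with score(y_approx) = rho * score(y_exact).

    `features` maps a structure y to its feature vector Phi(x, y).
    Assumes the exact score is positive (cf. footnote 1).
    Returns rho <= 1 for an underestimate, rho >= 1 for an overestimate.
    """
    s_exact = w @ features[y_exact]
    s_approx = w @ features[y_approx]
    return s_approx / s_exact
```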
Let H, H⁻, H⁺ be hypothesis spaces containing h, h⁻ and h⁺ respectively: H = {h(·, w) | w ∈ R^d}, H⁻ = {h⁻(·, w) | ∀x ∈ X, h⁻(·, w) is an underestimation}, and H⁺ = {h⁺(·, w) | ∀x ∈ X, h⁺(·, w) is an overestimation}. Let l(y, ŷ) ∈ [0, 1] be a structured loss function on Y × Y and I(·) be a 0-1 valued function which equals 1 if the argument is true, 0 otherwise.
3.1 Estimation Error
Our analysis of the estimation error for approximate inference is based on the PAC-Bayes results for exact inference [McAllester, 2007]. PAC-Bayes is a framework for analyzing hypotheses h(·, w) with stochastic parameters: given an input x, first randomly select a parameter w' according to some distribution Q(w'|w), and then make a prediction using h(x, w'). Define

$$ L(Q, D, h(\cdot, w)) = \mathbb{E}_{D,\,Q(w'|w)}\, l(y, h(x, w')), \qquad L(Q, S, h(\cdot, w)) = \frac{1}{m}\sum_{i=1}^{m} \mathbb{E}_{Q(w'|w)}\, l(y_i, h(x_i, w')). $$
Given some prior distribution P(w) on the model parameter w, the following PAC-Bayes theorem [McAllester, 2003] gives an estimation error bound of h(x, w).

Lemma 2 (PAC-Bayes Theorem). Given a w, for any distribution D over X × Y, loss function l(y, ŷ) ∈ [0, 1], prior distribution P(w) over w, and δ ∈ [0, 1], we have with probability at least 1 − δ (over the sample set S) that the following holds for all posterior distributions Q(w'|w):

$$ L(Q, D, h(\cdot, w)) \;\le\; L(Q, S, h(\cdot, w)) + \sqrt{\frac{D_{KL}(Q\|P) + \ln\frac{m}{\delta}}{2(m-1)}}, $$

where D_KL(Q‖P) is the KL divergence between Q and P.
¹ Definition 1 slightly generalizes "undergenerating" and "overgenerating" in [Finley and Joachims, 2008]. Instead of requiring ρ > 0, the "undergenerating" there only considers ρ ∈ (0, 1), and "overgenerating" only considers ρ > 1. Although their definition is more intuitive (i.e., the meaning of "over" and "under" is clearer), it implicitly assumes w^⊤Φ(x, y*) > 0 for all x and w, which limits the size of the hypothesis space. Finally, by adding a bias term, we could make w^⊤Φ(x, y*) + b > 0 for all x, and obtain the same definitions as in [Finley and Joachims, 2008].
Definition 3. For ρ > 0, we extend the definition of margin as m_ρ(x, y, ŷ, w) ≜ w^⊤Δ_ρΦ(x, y, ŷ), where Δ_ρΦ(x, y, ŷ) ≜ ρΦ(x, y) − Φ(x, ŷ).

Clearly, m_ρ(x, y*, y⁻, w) ≥ 0 for underestimation, and m_ρ(x, y*, y⁺, w) ≤ 0 for overestimation.
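The extended margin is a one-line computation once feature vectors are available; the sketch below is illustrative and assumes explicit finite-dimensional features.

```python
import numpy as np

def margin_rho(w, phi_y, phi_yhat, rho):
    """Extended margin m_rho = w^T (rho * Phi(x, y) - Phi(x, yhat)).

    phi_y, phi_yhat: feature vectors Phi(x, y), Phi(x, yhat) as numpy arrays.
    rho = 1 recovers the standard structured margin.
    """
    return w @ (rho * phi_y - phi_yhat)
```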
The following theorem gives an analysis of the estimation error for approximate inference. The proof (in the supplementary) is based on Theorem 2 of [McAllester, 2007], with emphasis on the approximation rate ρ.
Theorem 4. For a training set S = {(x_i, y_i)}_{i=1}^m, assume h^≈(x_i, w) is a ρ_i-approximation of h(x_i, w) on x_i for all w. Denote ρ = max_i ρ_i and M_i = max_y ‖Φ(x_i, y)‖_1. Then, for any D, l(y, ŷ) ∈ [0, 1] and δ ∈ [0, 1], with probability at least 1 − δ, the following upper bound holds:

$$ L(Q, D, h^{\approx}(\cdot, w)) \;\le\; L(w, S) + \sqrt{\frac{(1+\rho)^2 \|w\|^2 \ln\frac{2m\kappa_S}{\|w\|^2}}{m}} + \sqrt{\frac{\|w\|^2 + \ln\frac{m}{\delta}}{2(m-1)}}, \tag{1} $$

$$ L(w, S) = \begin{cases} \dfrac{1}{m}\displaystyle\sum_{i=1}^{m} \max_y l(y_i, y)\, I\big(m_{\rho_i}(x_i, y_i^*, y, w) \le M_i\big) & \text{if } h^{\approx}(\cdot, w) \in \mathcal{H}^-, \\[6pt] \dfrac{1}{m}\displaystyle\sum_{i=1}^{m} \max_y l(y_i, y)\, I\big(m_{\rho_i}(x_i, y_i^*, y, w) \le \rho M_i\big) & \text{if } h^{\approx}(\cdot, w) \in \mathcal{H}^+, \end{cases} $$

where y_i* = h(x_i, w), Q(w'|w) is Gaussian with identity covariance matrix and mean $(1+\rho)\sqrt{2\ln\frac{2m\kappa_S}{\|w\|^2}}\, w$, and κ_S is the maximum number of non-zero features among samples in S: κ_S = max_{i,y} ‖Φ(x_i, y)‖_0.
We compare the bound in Theorem 4 for two hypotheses h_1, h_2 with approximation rates ρ_{1,i}, ρ_{2,i} on sample x_i. Without loss of generality, we assume w^⊤Φ(x_i, y_i*) > 0 and ρ_{1,i} > ρ_{2,i}.

In the case of underestimation, since {y | m_{ρ_{1,i}}(x_i, y_i*, y, w) ≤ M_i} ⊆ {y | m_{ρ_{2,i}}(x_i, y_i*, y, w) ≤ M_i}, the L(w, S) of h_1 is smaller than that of h_2, but h_1 has a larger square-root term. Thus, it is possible that underestimate approximation has a smaller estimation error than exact inference. On the other hand, for overestimation, both L(w, S) and the square-root term of h_1 are larger than those of h_2. It means that the more overestimation an inference algorithm makes, the larger estimation error it may suffer.
Theorem 4 requires that h^≈(·, w) attains approximation rate ρ_i on x_i for all possible w. This assumption could be too restrictive to include many approximate inference algorithms. We will try to relax the requirement of Theorem 4 using the following measurement of the stability of inference algorithms.

Definition 5. h(x, w) is τ-stable on a sample x with respect to a norm ‖·‖ if for any w',

$$ \frac{\left| w^\top \Phi(x, y) - w'^\top \Phi(x, y') \right|}{\left| w^\top \Phi(x, y) \right|} \;\le\; \tau\, \frac{\|w - w'\|}{\|w\|}, $$

where y = h(x, w), y' = h(x, w').
Theorem 6. Assume that h^≈(x_i, w) is a ρ_i-approximation of h(x_i, w) on the sample x_i, and h^≈(·, w) is τ-stable on S with respect to ‖·‖_∞. Then, with the same symbols as in Theorem 4, L(Q, D, h^≈(·, w)) is upper bounded by

$$ L(w, S) + \sqrt{\frac{(1 + 2\rho + \tau)^2 \|w\|^2 \ln\frac{2m\kappa_S}{\|w\|^2}}{m}} + \sqrt{\frac{\|w\|^2 + \ln\frac{m}{\delta}}{2(m-1)}}. $$
Note that we still need to consider all possible w' according to the definition of τ. However, upper bounds on τ can be derived for some approximate inference algorithms. As an example, we discuss the linear programming relaxation (LP-relaxation) of integer linear programming, which covers a broad range of approximate inference algorithms. The τ-stability of LP-relaxation can be obtained from the perturbation theory of linear programming [Renegar, 1994, 1995].

Theorem 7 (Proposition 2.5 of [Renegar, 1995]). For a feasible linear program

$$ \max.\; w^\top z \quad \text{s.t.}\;\; Az \le b,\; z \ge 0, $$
[Figure 1: An example of exact inference with less approximation error than underestimate inference (i.e., e(h) < e(h⁻)).]

[Figure 2: An example of underestimate inference with less approximation error than exact inference (i.e., e(h⁻) < e(h)).]
let ẑ, ẑ' be the solutions of the LP w.r.t. w and w'. Then

$$ \left| w^\top \hat{z} - w'^\top \hat{z}' \right| \;\le\; \frac{\max(\|b\|_\infty,\, |w^\top \hat{z}|)}{d}\, \|w - w'\|_\infty, $$

where d is the l_∞ distance from A, b to the dual infeasible LP (with ‖A, b‖_∞ = max_{i,j,k}{|A_{ij}|, |b_k|}):

$$ d = \inf\{\Delta \mid \exists\, \|\Delta A, \Delta b\|_\infty < \Delta \text{ s.t. the dual problem of the LP with } (A + \Delta A,\, b + \Delta b) \text{ is infeasible}\}. $$
3.2 Approximation Error
In this section, we compare the approximation error of models with different inference algorithms. The discussion is based on the following definition (Definition 1.1 of [Daniely et al., 2012]).

Definition 8. For hypothesis spaces H, H', we say H essentially contains H' if for any h' ∈ H', there is an h ∈ H satisfying e(h) ≤ e(h') for all D, where e(h) = E_D l(y, h(x)). In other words, for any distribution D, the approximation error of H is at most the error of H'.
Our main result is that there exist cases where the approximation errors of exact and underestimate inference are not comparable, in the sense that neither H contains H⁻, nor H⁻ contains H.²

To see that approximation errors can be non-comparable, we consider an approximate inference algorithm h⁻ which always outputs the second-best y for a given w. The two examples in Figures 1 and 2 demonstrate that both e(h) < e(h⁻) and e(h⁻) < e(h) are possible. The following are the details.

We consider an input space containing two samples, X = {x, x'}. Sample x has three possible output structures, named 1, 2, 3 respectively. Sample x' also has three possible y, named 1', 2', 3'. Let the correct outputs of x and x' be 1 and 1'. For sample x, the feature vectors Φ(x, 1), Φ(x, 2), Φ(x, 3) ∈ R² are points on the unit circle and form an equilateral triangle Δ(1, 2, 3). Similarly, the feature vectors Φ(x', 1'), Φ(x', 2'), Φ(x', 3') are vertices of Δ(1', 2', 3'). The parameter space of w is the unit circle (since inference results only depend on the direction of w). Given a w, the exact inference h(x, w) chooses the y whose Φ(x, y) has the largest projection on w (i.e., h(x, w) = argmax_{y∈{1,2,3}} w^⊤Φ(x, y) and h(x', w) = argmax_{y∈{1',2',3'}} w^⊤Φ(x', y)), and h⁻(x, w) chooses the y whose Φ(x, y) has the second-largest projection on w.
² Note that there exist two paradigms for handling the intractability of inference problems. The first one is to develop approximate inference algorithms for the exact problem, which is our focus here. Another one is to develop approximate problems with tractable exact inference algorithms. For example, in probabilistic graphical models, one can add conditional independence assumptions to get a simplified model with efficient inference algorithms. In the second paradigm, it is clear that approximate models are less expressive than the exact model, thus their approximation error is always larger. Our result, however, shows that it is possible to have underestimate inference for the original problem with a smaller approximation error.
We first show that it is possible that e(h) < e(h⁻). In Figure 1, (a) shows that for sample x, any w in the gray arc makes the output of exact inference correct (i.e., h(x, w) = 1). Similarly, in (b), any w in the gray arc guarantees h(x', w) = 1'. (c) shows that the two gray arcs in (a) and (b) overlap on the dark arc. For any w in the dark arc, the exact inference has correct outputs on both x and x', which means that the approximation error of exact inference H is 0.

At the same time, in (d) of Figure 1, the gray arcs contain the w which make the underestimate inference correct on sample x (i.e., h⁻(x, w) = 1), and the gray arcs in (e) are the w with h⁻(x', w) = 1'. (f) shows that the gray arcs in (d) and (e) do not overlap, which means it is impossible to find a w such that h⁻(·, w) is correct on both x and x'. Thus the approximation error of underestimate inference H⁻ is strictly larger than 0, and we have e(h) < e(h⁻).

Similarly, in Figure 2, (a), (b), (c) show that we are able to choose w such that the underestimate inference is correct on both x and x', which implies the approximation error of underestimation H⁻ equals 0. On the other hand, (d), (e), (f) show that the approximation error of exact inference H is strictly larger than 0, and we have e(h⁻) < e(h).
Following the two figures, we can show that when the Φ(x, y) are vertices of convex regular n-gons, both e(h) < e(h⁻) and e(h⁻) < e(h) are possible, where h⁻ is an underestimation outputting the k-th best y. In fact, when we consider the "worst" approximation, which outputs the y with the smallest score, its approximation error equals that of exact inference, since h(x, w) = h⁻(x, −w). Thus, we would like to think that the geometric structure of Φ(x, y) can be complex enough to make both exact and underestimate inference efficient.

To summarize, the examples suggest that underestimation algorithms give us a different family of predictors. For some data distributions, the underestimation family can have a better predictor than the exact inference family.
Finally, for the case of overestimate approximation, we can show that H⁺ contains H using Theorem 1 of [Kulesza and Pereira, 2007].

Theorem 9. For ρ > 1, if the loss function l satisfies l(y_1, y_2) ≤ l(y_1, y_3) + l(y_3, y_2), then H⁺ contains H.
4 Training with the New Margin
Theorems 4 and 6 suggest that we could learn the model parameter w by minimizing a non-convex objective L(w, S) + ‖w‖². The L(w, S) term is related to the size of the set {y | m_ρ(x_i, y_i*, y, w) ≤ M_i}, which can be controlled by the margin m_{ρ²}(x_i, y_i, y_i⁻). Specifically, for underestimation,

$$ m_\rho(x_i, y_i^*, y, w) \;\ge\; \rho w^\top \Phi(x_i, y_i) - w^\top \Phi(x_i, y) \;\ge\; \rho w^\top \Phi(x_i, y_i) - w^\top \Phi(x_i, y_i^*) \;\ge\; \rho w^\top \Phi(x_i, y_i) - \rho^{-1} w^\top \Phi(x_i, y_i^-) \;=\; \rho^{-1} m_{\rho^2}(x_i, y_i, y_i^-, w), \quad \forall y. $$

It implies that the larger m_{ρ²}(x_i, y_i, y_i⁻), the lower L(w, S). Thus, when working with approximate inference, we can apply m_{ρ²}(x_i, y_i, y_i⁻) in existing maximum-margin frameworks instead of m_1(x_i, y_i, y_i⁻) (replacing the exact y_i* with the approximate y_i⁻). For example, the structural SVM in [Finley and Joachims, 2008] becomes min. ½‖w‖² + C Σ_i ξ_i, s.t. m_{ρ²}(x_i, y_i, y_i⁻, w) > 1 − ξ_i. Intuitively, m_{ρ²} aims to improve the learning process by including more information about the inference algorithm. For overestimation, we do not have similar lower bounds as for underestimation, but since m_ρ(x_i, y_i, y_i⁺, w) > 1 ⟹ m_1(x_i, y_i, y_i*, w) > ρ⁻¹, we can apply the margin m_ρ as an approximation of m_1.
In practice, since it is hard to obtain ρ for inference algorithms (even when it is possible, a tight ρ may be inefficient on individual samples, as ρ must consider the worst case over all possible x), we treat it as an algorithm parameter which can be heuristically determined either by prior knowledge or by tuning on development data. We leave the study of how to estimate ρ systematically for future work.

For empirical evaluation, we examine the structural SVM with the cutting-plane learning algorithm [Finley and Joachims, 2008], and we also adapt two widely used online structured learning algorithms with m_ρ: the structured perceptron [Collins, 2002] (Algorithm 3) and the online passive-aggressive algorithm (PA) [Crammer et al., 2006] (Algorithm 4). The mistake bounds of the two algorithms are similar to bounds with exact inference algorithms (given in the supplementary).
Figure 3: Structured perceptron with m_ρ.

1: w_0 = 0
2: for t = 0 to T do
3:   y_t⁻ = h⁻(x_t, w_t)
4:   if y_t⁻ ≠ y_t then
5:     w_{t+1} = w_t + ρΦ(x_t, y_t) − Φ(x_t, y_t⁻)
6:   end if
7: end for
8: return w = w_T

Figure 4: Online PA with m_ρ.

1: w_0 = 0
2: for t = 0 to T do
3:   if m_ρ(x_t, y_t, y_t⁻, w_t) < 1 then
4:     w_{t+1} = argmin_w ‖w − w_t‖²
5:       s.t. m_ρ(x_t, y_t, y_t⁻, w) ≥ 1
6:   end if
7: end for
8: return w = w_T
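For reference, a compact Python rendering of Algorithm 3; it is illustrative only, and the inference routine `approx_inference` and the feature map `phi` are placeholders to be supplied by the task at hand.

```python
import numpy as np

def perceptron_m_rho(data, phi, approx_inference, rho, dim, epochs=1):
    """Structured perceptron with the extended-margin update (Figure 3)."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_t, y_t in data:
            y_minus = approx_inference(x_t, w)  # y_t^- = h^-(x_t, w_t)
            if y_minus != y_t:
                # update with rho * Phi(x_t, y_t) - Phi(x_t, y_t^-)
                w += rho * phi(x_t, y_t) - phi(x_t, y_minus)
    return w
```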
5 Experiments
We present experiments on three natural language processing tasks: multi-class text classification,
sequential labelling and dependency parsing. For text classification, we compare with the vanilla
structural SVM. For sequential labelling, we consider three tasks (phrase chunking (chu), POS
tagging (pos) and Chinese word segmentation (cws)) and the perceptron training. For dependency
parsing, we focus on the second order non-projective parser and the PA algorithm. For each task,
we focus on underestimate inference algorithms.
5.1 Multi-class classification
Multi-class classification is a special case of structured prediction. It has a limited number of class labels and a simple exact inference algorithm (i.e., enumerating the labels). To evaluate the proposed margin constraints, we consider toy approximate algorithms which output the k-th best class label.

We report results on the 20 newsgroups corpus (18,000 documents, 20 classes). The meta data is removed (headers, footers and quotes), and feature vectors are simple tf-idf vectors. We take 20% of the training set as a development set for tuning ρ (grid search in [0, 2] with step size 0.05). The implementation is adapted from SVMmulticlass³.

From the results (Figure 5) we find that, compared with the vanilla structural SVM, the proposed margin constraints are able to improve error rates for different inference algorithms. And, as k becomes larger, the improvement becomes more significant. This property might be attractive since algorithms with loose approximation rates are common in practical use. Another observation is that, as k becomes larger, the best parameter ρ decreases in general. It shows that the tuned parameter can reflect the definition of the approximation rate (Definition 1).
[Figure 5: Results on text classification. Blue points are error rates for different k, and red points are the ρ achieving the best error rate on the development set. The red dotted line is the least-squares linear fit of the red points. The model parameter C = 10⁴.]

5.2 Sequential Labelling
In sequential labelling, we predict sequences y = y¹y²⋯y^K, where y^k ∈ Y is a label (e.g., a POS tag). We consider the first-order Markov assumption: $h(x) = \arg\max_y \sum_{k=1}^{K} w^\top \Phi(x, y^k, y^{k-1})$. The inference problem is tractable using O(KY²) dynamic programming (Viterbi).

We examine a simple and fast greedy iterative decoder ("gid"), which is also known as iterated conditional modes [Besag, 1986]. The algorithm flips each label y^k of y in a greedy way: for fixed y^{k−1} and y^{k+1}, it finds the y^k that yields the largest increase of the decoding objective function.

³ http://www.cs.cornell.edu/People/tj/svm_light/svm_multiclass.html
[Figure 6: Results of the sequential labelling tasks with Algorithm 3. The x-axis represents the random-selection parameter u. The y-axis represents label accuracy.]
The algorithm passes the sequence multiple times and stops when no y^k can be changed. It is faster in practice (speedup of 18x on POS tagging, 1.5x on word segmentation), requires less memory (O(1) space complexity), and can obtain reasonable performance.
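A hedged sketch of this "gid" decoder for a first-order model follows; it is illustrative only, and `score(x, k, label, prev_label)` is a placeholder for w^⊤Φ(x, y^k, y^{k−1}) that is assumed to handle the boundary positions (prev = None).

```python
def gid_decode(x, length, labels, score):
    """Greedy iterative decoding (iterated conditional modes) for sequences."""
    y = [labels[0]] * length            # arbitrary initial labelling
    changed = True
    while changed:                      # keep passing until no label flips
        changed = False
        for k in range(length):
            prev = y[k - 1] if k > 0 else None
            nxt = y[k + 1] if k + 1 < length else None

            def local(lab):             # objective terms touching position k
                s = score(x, k, lab, prev)
                if nxt is not None:
                    s += score(x, k + 1, nxt, lab)
                return s

            best = max(labels, key=local)
            if best != y[k]:
                y[k] = best
                changed = True
    return y
```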
We use the CoNLL 2000 dataset [Sang and Buchholz, 2000] for chunking and POS tagging,
SIGHAN 2005 bake-off corpus (pku and msr) [Emerson, 2005] for word segmentation. We use
Algorithm 3 with 20 iterations and learning step 1. We adopt standard feature sets in all tasks.
To test ρ on more inference algorithms, we apply a simple random-selection strategy to generate a set of in-between inference algorithms: when decoding an example, we select "Viterbi" with probability u and "gid" with probability 1 − u. Heuristically, by varying u, we obtain inference algorithms with different expected approximation rates.
Figure 6 shows the results for ρ ≤ 1⁴. We make the following observations:

• At u = 0 (i.e., inference with "gid"), models with ρ < 1 are significantly better than ρ = 1 on pos and cws (p < 0.01 using the z-test for proportions). Furthermore, on pos and cws, the best "gid" results with parameters ρ < 1 are competitive with the standard perceptron with exact inference (i.e., ρ = 1, u = 1). Thus, it is possible for approximate inference to be both fast and good.

• For 0 < u < 1, we find that the curves for ρ < 1 are above the curve for ρ = 1 in many cases. The largest gap is 0.2% on chu, 0.6% on pos and 2% on cws. Thus, the learning parameter ρ can also provide performance gains for the combined inference algorithms.

• For u = 1 (i.e., using "Viterbi"), it is interesting to see that on pos, ρ < 1 still outperforms ρ = 1 by a large margin. We suspect that the ρ parameter might also help the structured perceptron converge to a better solution.
5.3 Dependency Parsing
Our third experiment is high-order non-projective dependency parsing, for which exact inference is intractable. We follow the approximate inference in MSTParser [McDonald and Pereira, 2006]⁵. The algorithm first finds the best high-order projective tree using O(n³) dynamic programming [Eisner, 1996], then heuristically introduces non-projective edges on the projective tree.

We use the online PA in Algorithm 4 with the above two-phase approximate inference algorithm. The parser is trained and tested on 5 languages from the CoNLL-2007 shared task [Nivre et al., 2007] with more than 20% non-projective sentences. Features are identical to the default MSTParser settings⁶.

Table 1 lists the results with different ρ. It shows that on all languages, tuning the parameter helps to improve the parsing accuracy. As a reference, we also include results of the first-order models. On Basque and Greek, the performance gain from ρ is comparable to the gain from introducing second-order features, but the improvements on Czech, Hungarian and Turkish are limited. We also find that, differently from text classification and sequential labelling, both ρ > 1 and ρ < 1 can obtain optimal scores. Thus, with the feature configuration of MSTParser, the value of w^⊤Φ(x, y*) may not always be positive during the online learning process, and it reflects the fact that the feature space of parsing problems is usually more complex. Finally, setting a global ρ for different training samples could be coarse (so we only get improvements in a small neighborhood of 1), and how to estimate ρ for an individual x is important future work.

⁴ We also test models with ρ > 1, which underperform ρ < 1 in general. Details are in the supplementary.
⁵ http://sourceforge.net/projects/mstparser/
⁶ Features in MSTParser are less powerful than state-of-the-art, but we keep them for an easier implementation and comparison.
Setting         Basque   Czech   Greek   Hungarian   Turkish
1st Order        79.4     82.1    81.1     79.9       85.0
ρ = 1            79.8     82.8    81.7     81.7       85.5
ρ = 1 − 10⁻³     79.7     83.0    81.3     81.1       85.2
ρ = 1 − 10⁻⁴     80.3     82.9    82.2     81.8       85.7
ρ = 1 + 10⁻³     79.4     82.3    81.5     80.7       85.6
ρ = 1 + 10⁻⁴     79.6     83.0    82.5     81.6       85.4

Table 1: Results of the second-order dependency parsing with parameter ρ. We report the unlabelled attachment score (UAS), which is the percentage of words with correct parents.
6 Conclusion
We analyzed the learning errors of structured prediction models with approximate inference. For the estimation error, we gave a PAC-Bayes analysis for underestimation and overestimation inference algorithms. For the approximation error, we showed the incomparability between exact and underestimate inference. The experiments on three NLP tasks with the newly proposed learning algorithms showed encouraging performance. In future work, we plan to explore more adaptive methods for estimating the approximation rate ρ and for combining inference algorithms.
Acknowledgements
The authors wish to thank all reviewers for their helpful comments and suggestions. The corresponding authors are Man Lan and Shiliang Sun. This research is (partially) supported by NSFC
(61402175, 61532011), STCSM (15ZR1410700) and Shanghai Key Laboratory of Trustworthy
Computing (07dz22304201604). Yuanbin Wu is supported by a Microsoft Research Asia Collaborative Research Program.
References

Julian Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B, 48(3):48–259, 1986.

Olivier Catoni. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning, volume 56 of Lecture Notes–Monograph Series. IMS, 2007.

Venkat Chandrasekaran and Michael I. Jordan. Computational and statistical tradeoffs via convex relaxation. In Proc. of the National Academy of Sciences, volume 110, 2013.

Kai-Wei Chang, Shyam Upadhyay, Gourab Kundu, and Dan Roth. Structural learning with amortized inference. In Proc. of AAAI, pages 2525–2531, 2015.

Michael Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Proc. of the Seventh International Workshop on Parsing Technologies, 2001.

Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP, pages 1–8, 2002.

Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. Structured prediction theory based on factor graph complexity. In NIPS, pages 2514–2522, 2016.

Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006.

Amit Daniely, Sivan Sabato, and Shai Shalev-Shwartz. Multiclass learning approaches: A theoretical comparison with implications. In NIPS, pages 494–502, 2012.

Jason M. Eisner. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING, 1996.

Thomas Emerson. The second international Chinese word segmentation bakeoff. In the Second SIGHAN Workshop on Chinese Language Processing, pages 123–133, 2005.

Thomas Finley and Thorsten Joachims. Training structural SVMs when exact inference is intractable. In Proc. of ICML, pages 304–311, 2008.

Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In Proc. of ICML, pages 353–360, 2009.

Amir Globerson, Tim Roughgarden, David Sontag, and Cafer Yildirim. How hard is inference for structured prediction? In Proc. of ICML, pages 2181–2190, 2015.

Jean Honorio and Tommi S. Jaakkola. Structured prediction: From gaussian perturbations to linear-time principled algorithms. In Proc. of UAI, 2016.

Liang Huang, Suphan Fayong, and Yang Guo. Structured perceptron with inexact search. In Proc. of HLT-NAACL, pages 142–151, 2012.

Alex Kulesza and Fernando Pereira. Structured learning with approximate inference. In NIPS, pages 785–792, 2007.

Gourab Kundu, Vivek Srikumar, and Dan Roth. Margin-based decomposed amortized inference. In Proc. of ACL, pages 905–913, 2013.

John Langford and John Shawe-Taylor. PAC-Bayes & margins. In NIPS, pages 423–430, 2002.

Ben London, Bert Huang, Ben Taskar, and Lise Getoor. Collective stability in structured prediction: Generalization from one example. In Proc. of ICML, pages 828–836, 2013.

André F. T. Martins, Noah A. Smith, and Eric P. Xing. Polyhedral outer approximations with application to natural language parsing. In Proc. of ICML, pages 713–720, 2009.

David McAllester. Generalization Bounds and Consistency for Structured Labeling, chapter Predicting Structured Data. MIT Press, 2007.

David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, 1999.

David A. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5–21, 2003.

David A. McAllester and Joseph Keshet. Generalization bounds and consistency for latent structural probit and ramp loss. In NIPS, pages 2205–2212, 2011.

Ryan McDonald and Fernando Pereira. Online learning of approximate dependency parsing algorithms. In Proc. of EACL, 2006.

Ofer Meshi, David Sontag, Tommi S. Jaakkola, and Amir Globerson. Learning efficiently with approximate inference via dual losses. In Proc. of ICML, pages 783–790, 2010.

Ofer Meshi, Mehrdad Mahdavi, Adrian Weller, and David Sontag. Train and test tightness of LP relaxations in structured prediction. In Proc. of ICML, 2016.

Joakim Nivre, Johan Hall, Sandra Kübler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. The CoNLL 2007 shared task on dependency parsing. In Proc. of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915–932, 2007.

James Renegar. Some perturbation theory for linear programming. Mathematical Programming, 65:73–91, 1994.

James Renegar. Incorporating condition measures into the complexity theory of linear programming. SIAM Journal on Optimization, 5(3):506–524, 1995.

Rajhans Samdani and Dan Roth. Efficient decomposed learning for structured prediction. In Proc. of ICML, 2012.

Erik F. Tjong Kim Sang and Sabine Buchholz. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. of CoNLL and LLL, 2000.

Matthias Seeger. PAC-Bayesian generalisation error bounds for gaussian process classification. JMLR, 3:233–269, 2002.

David Sontag, Ofer Meshi, Tommi S. Jaakkola, and Amir Globerson. More data means less inference: A pseudo-max approach to structured learning. In NIPS, pages 2181–2189, 2010.

Benjamin Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In NIPS, pages 25–32, 2003.

Zhuoran Wang and John Shawe-Taylor. Large-margin structured prediction via linear programming. In Proc. of AISTATS, pages 599–606, 2009.
Efficient Second-Order Online Kernel
Learning with Adaptive Embedding
Daniele Calandriello
Alessandro Lazaric
Michal Valko
SequeL team, INRIA Lille - Nord Europe, France
{daniele.calandriello, alessandro.lazaric, michal.valko}@inria.fr
Abstract
Online kernel learning (OKL) is a flexible framework for prediction problems,
since the large approximation space provided by reproducing kernel Hilbert spaces
often contains an accurate function for the problem. Nonetheless, optimizing over
this space is computationally expensive. Not only do first-order methods accumulate
O(√T) more loss than the optimal function, but the curse of kernelization results
in an O(t) per-step complexity. Second-order methods get closer to the optimum
much faster, suffering only O(log T) regret, but second-order updates are even
more expensive, with their O(t²) per-step cost. Existing approximate OKL methods
reduce this complexity either by limiting the support vectors (SV) used by the
predictor, or by avoiding the kernelization process altogether using embedding.
Nonetheless, as long as the size of the approximation space or the number of
SV does not grow over time, an adversarial environment can always exploit the
approximation process. In this paper, we propose PROS-N-KONS, a method that
combines Nyström sketching, to project the input points to a small and accurate
embedded space, with efficient second-order updates in this space. The
embedded space is continuously updated to guarantee that the embedding remains
accurate. We show that the per-step cost only grows with the effective dimension
of the problem and not with T. Moreover, the second-order updates allow us to
achieve logarithmic regret. We empirically compare our algorithm on recent
large-scale benchmarks and show that it performs favorably.
1 Introduction
Online learning (OL) represents a family of efficient and scalable learning algorithms for building a
predictive model incrementally from a sequence of T data points. A popular online learning approach
[24] is to learn a linear predictor using gradient descent (GD) in the input space R^d. Since we can explicitly store and update the d weights of the linear predictor, the total runtime of this algorithm is
O(T d), allowing it to scale to large problems. Unfortunately, it is sometimes the case that no good
predictor can be constructed starting from only the linear combination of the input features. For this
reason, online kernel learning (OKL) [8] first maps the points into a high-dimensional reproducing kernel Hilbert space (RKHS) using a non-linear feature map φ, and then runs GD on the projected points, which is often referred to as functional GD (FGD) [8]. With the kernel approach, each gradient
step does not update a fixed set of weights, but instead introduces the feature-mapped point in the
predictor as a support vector (SV). The resulting kernel-based predictor is flexible and data adaptive,
but the number of parameters, and therefore the per-step space and time cost, now scales with O(t),
the number of SVs included after t steps of GD. This curse of kernelization results in an O(T²) total runtime, and prevents standard OKL methods from scaling to large problems.
Given an RKHS H containing functions with very small prediction loss, the objective of an OL
algorithm is to approach over time the performance of the best predictor in H and thus minimize the
regret, that is the difference in cumulative loss between the OL algorithm and the best predictor in
hindsight. First-order GD achieves O(√T) regret for any arbitrary sequence of convex losses [8]. However, if we know that the losses are strongly convex, setting a more aggressive step-size in first-order GD achieves a smaller O(log T) regret [23]. Unfortunately, most common losses, such as the squared loss, are not strongly convex when evaluated at a single point x_t. Nonetheless, they
possess a certain directional curvature [6] that can be exploited by second-order GD methods, such as the kernelized online Newton step (KONS) [1] and kernel recursive least squares (KRLS) [22], to achieve the O(log T) regret without strong convexity along all directions. The drawback of second-order methods is that they have to store and invert the t × t covariance matrix between all SVs included in the predictor. This requires O(t²) space and time per step, dwarfing the O(t) cost of first-order methods and resulting in an even more infeasible O(T³) runtime.
Contributions In this paper, we introduce PROS-N-KONS, a new OKL method that (1) achieves logarithmic regret for losses with directional curvature using second-order updates, and (2) avoids the curse of kernelization, taking only a fixed per-step time and space cost. To achieve this, we start from KONS, a low-regret exact second-order OKL method proposed in [1], but replace the exact feature map φ with an approximate φ̃ constructed using a Nyström dictionary approximation. For a dictionary of size j, this non-linearly embeds the points in R^j, where we can efficiently perform exact second-order updates in constant O(j²) per-step time, and achieve the desired O(log T) regret. Combined with online dictionary learning (KORS [1]) and an adaptive restart strategy, we show that we never get stuck performing GD in an embedded space that is too distant from the true H, but at the same time the size of the embedding j never grows larger than the effective dimension of the problem. While previous methods [11, 9] used fixed embeddings, we adaptively construct a small dictionary that scales only with the effective dimension of the data. We then construct an accurate approximation of the covariance matrix, to avoid the variance due to dictionary changes, using carefully designed projections.
Related work Although first-order OKL methods cannot achieve logarithmic regret, many approximation methods have been proposed to make them scale to large datasets. Approximate methods usually take one of two approaches: either performing approximate gradient updates in the true RKHS (budgeted perceptron [2], projectron [13], forgetron [4]), preventing SVs from entering the predictor, or exact gradient updates in an approximate RKHS (Nyström [11], random feature expansion [9]), where the points are embedded in a finite-dimensional space and the curse of kernelization does not apply. Overall, the goal is to never exceed a budget of SVs in order to maintain a fixed per-step update cost. Among budgeted methods, weight degradation [15] can be done in many different ways, such as removal [4] or more expensive projection [13] and merging. Nonetheless, as long as the size of the budget is fixed, the adversary can exploit this to increase the regret of the algorithm, and oblivious inclusion strategies such as uniform sampling [7] fail. Another approach is to replace the exact feature map φ with an approximate feature map φ̃ which allows us to explicitly represent the mapped points, and run linear OL on this embedding [11, 19]. When the embedding is oblivious to the data, the method is known as random-feature expansion, while a common data-dependent embedding mapping is known as the Nyström method [17]. Again, if the embedding is fixed or limited in size, the adversary can exploit it. In addition, analyzing a change of embedding during the gradient descent is an open problem, since the underlying RKHS changes with it.
The only approximate second-order method known to achieve logarithmic regret is SKETCHED-KONS. Both SKETCHED-KONS and PROS-N-KONS are based on the exact second-order OL method ONS [6] or its kernelized version KONS [1]. However, SKETCHED-KONS only applies budgeting techniques to the Hessian of the second-order updates and not to the predictor itself, resulting in an O(t) per-step evaluation time cost. Moreover, the Hessian sketching is performed only through SV removal, resulting in high instability. In this paper, we solve these two issues with PROS-N-KONS by directly approximating KONS using a Nyström functional approximation. This results in updates that are closer to SV projection than removal, and that budget both the representation of the Hessian and the predictor.
2 Background
Notation We borrow the notation from [12] and [1]. We use upper-case bold letters A for matrices, lower-case bold letters a for vectors, and lower-case letters a for scalars. We denote by [A]_{ij} and [a]_i the (i, j) element of a matrix and the i-th element of a vector respectively. We denote by I_T ∈ R^{T×T} the identity matrix of dimension T and by Diag(a) ∈ R^{T×T} the diagonal matrix with the vector a ∈ R^T on the diagonal. We use e_{T,i} ∈ R^T to denote the indicator vector of dimension T for element i. When the dimension of I and e_i is clear from the context, we omit the T, and we also indicate the identity operator by I. We use A ⪰ B to indicate that A − B is a positive semi-definite (PSD) matrix. Finally, the set of integers between 1 and T is denoted by [T] := {1, . . . , T}.
Kernels Given an input space X and a kernel function K(·,·) : X × X → R, we denote by H the reproducing kernel Hilbert space (RKHS) induced by K, and by φ(·) : X → H the associated feature map. Using the feature map, the kernel function can be represented as K(x, x') = ⟨φ(x), φ(x')⟩_H, but with a slight abuse of notation we use the simplified notation K(x, x') = φ(x)^⊤φ(x') in the following. Any function f ∈ H can be represented as a (potentially infinite) set of weights w such that f_w(x) = φ(x)^⊤w. Given a set of t points, D_t = {x_s}_{s=1}^t, we denote by Φ_t ∈ R^{∞×t} the feature matrix with φ_s as its s-th column.
Online kernel learning (OKL) We consider online kernel learning, where an adversary chooses an arbitrary sequence of points {x_t}_{t=1}^T and convex differentiable losses {ℓ_t}_{t=1}^T. The learning protocol is the following. At each round t ∈ [T], (1) the adversary reveals the new point x_t, (2) the learner chooses a function f_{w_t} and predicts f_{w_t}(x_t) = φ(x_t)^⊤w_t, (3) the adversary reveals the loss ℓ_t, and (4) the learner suffers ℓ_t(φ(x_t)^⊤w_t) and observes the associated gradient g_t. We are interested in bounding the cumulative regret between the learner and a fixed function w, defined as $R_T(w) = \sum_{t=1}^{T} \ell_t(\phi_t^\top w_t) - \ell_t(\phi_t^\top w)$. Since H is potentially a very large space, we need to restrict the class of comparators w. As in [12], we consider all functions that guarantee bounded predictions, i.e., S = {w : ∀t ∈ [T], |φ_t^⊤w| ≤ C}. We make the following assumptions on the losses.
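For concreteness, the sketch below instantiates this protocol with plain functional gradient descent and the squared loss; it is illustrative only, and the RBF kernel, step size, and data stream are placeholder choices. Note how the predictor stores one support vector per step, so prediction at round t costs O(t), which is the curse of kernelization discussed above.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.linalg.norm(x - z) ** 2)

def fgd_okl(stream, eta=0.1):
    """First-order functional GD for OKL with the squared loss (illustrative)."""
    support, coefs = [], []          # predictor f_t = sum_s coefs[s] * K(x_s, .)
    for x_t, y_t in stream:          # adversary reveals x_t, then the loss
        pred = sum(a * rbf(x_s, x_t) for a, x_s in zip(coefs, support))  # O(t)
        g_t = 2.0 * (pred - y_t)     # gradient of (pred - y_t)^2 w.r.t. pred
        support.append(x_t)          # curse of kernelization: one new SV per step
        coefs.append(-eta * g_t)
    return support, coefs
```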
Assumption 1 (Scalar Lipschitz). The loss functions ℓ_t satisfy |ℓ'_t(z)| ≤ L whenever |z| ≤ C.

Assumption 2 (Curvature). There exists σ_t ≥ σ > 0 such that for all u, w ∈ S and for all t ∈ [T],

$$ \ell_t(\phi_t^\top w) := l_t(w) \ge l_t(u) + \nabla l_t(u)^\top (w - u) + \frac{\sigma_t}{2}\left(\nabla l_t(u)^\top (w - u)\right)^2. $$

This assumption is weaker than strong convexity, as it only requires the losses to be strongly convex in the direction of the gradient. It is satisfied by the squared loss, the squared hinge loss, and, in general, all exp-concave losses [6]. Under this weaker requirement, second-order learning methods [6, 1] obtain the O(log T) regret at the cost of a higher computational complexity w.r.t. first-order methods.
the O(log T ) regret at the cost of a higher computational complexity w.r.t. first-order methods.
Nystr?m approximation A common approach to alleviate the computational cost is to replace
the high-dimensional feature map ? with a finite-dimensional approximate feature map ?.
e Let
I = {xi }ji=1 be a dictionary of j points from the dataset and ?I be the associated feature matrix with
?(xi ) as columns. We define the embedding ?(x)
e
:= ??1 UT ?TI ?(x) ? Rj , where ?I = V?UT
is the singular value decomposition of the feature matrix. While in general ?I is infinite dimensional
and cannot be directly decomposed, we exploit the fact that U?VT V?UT = ?TI ?I = KI =
U?UT and that KI is a (finite-dimensional) PSD matrix. Therefore it is sufficient to compute
the eigenvectors U and eigenvalues ? of KI and take the square root ?1/2 = ?. Note that with
this definition we are effectively replacing the kernel K and H with an approximate KI and HI ,
such that KI (x, x0 ) = ?(x)
e T ?(x
e 0 ) = ?(x)T ?I U??1 ??1 UT ?TI ?(x0 ) = ?(x)T PI ?(x0 ) where
T
?1 T
PI = ?I (?I ?I ) ?I is the projection matrix on the column span of ?I . Since ?
e returns vectors
in Rj , this transformation effectively reduces the computation complexity of kernel operations from t
down to the size of the dictionary j. The accuracy of ?
e is directly related to the accuracy of the
projection PI in approximating the projection Pt = ?t (?Tt ?t )?1 ?Tt , so that for all s, s0 ? [t],
?(x
e s )T ?(x
e s0 ) is close to ?(xs )T Pt ?(xs0 ) = ?(xs )T ?(xs0 ).
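The construction above only needs the j × j kernel matrix of the dictionary. Below is a minimal numpy sketch; it is illustrative, and the eigenvalue-truncation tolerance is a placeholder choice.

```python
import numpy as np

def nystrom_embedding(dictionary, kernel):
    """Build x -> Sigma^{-1} U^T Phi_I^T phi(x) from K_I = U Lambda U^T."""
    K_I = np.array([[kernel(a, b) for b in dictionary] for a in dictionary])
    lam, U = np.linalg.eigh(K_I)              # eigendecomposition of the PSD K_I
    keep = lam > 1e-10                        # drop numerically null directions
    inv_sigma = 1.0 / np.sqrt(lam[keep])      # Sigma^{-1} = Lambda^{-1/2}

    def phi_tilde(x):
        k_x = np.array([kernel(x_i, x) for x_i in dictionary])  # Phi_I^T phi(x)
        return inv_sigma * (U[:, keep].T @ k_x)

    return phi_tilde
```

With this definition, `phi_tilde(x) @ phi_tilde(z)` approximates K(x, z) whenever x, z lie close to the span of the dictionary points.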
Ridge leverage scores All that is left is to find an efficient algorithm to choose a good dictionary I to minimize the error between P_I and P_t. Among dictionary-selection methods, we focus on those that sample points proportionally to their ridge leverage scores (RLSs), because they provide strong reconstruction guarantees. We now define the RLS and the associated effective dimension.

Definition 1. Given a kernel function K, a set of points D_t = {x_s}_{s=1}^t and a parameter γ > 0, the γ-ridge leverage score (RLS) of point i is defined as

$$ \tau_{t,i} = e_{t,i}^\top K_t (K_t + \gamma I_t)^{-1} e_{t,i} = \phi_i^\top(\Phi_t \Phi_t^\top + \gamma I)^{-1}\phi_i, \tag{1} $$

and the effective dimension of D_t as their sum over the examples of D_t,

$$ d_{\mathrm{eff}}^{t}(\gamma) = \sum_{i=1}^{t} \tau_{t,i} = \mathrm{Tr}\big(K_t (K_t + \gamma I_t)^{-1}\big). \tag{2} $$
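Both quantities in Definition 1 are direct matrix computations; a hedged numpy sketch follows (solving against K_t + γI rather than forming an explicit inverse).

```python
import numpy as np

def ridge_leverage_scores(K, gamma):
    """RLS tau_{t,i} = [K (K + gamma I)^{-1}]_{ii} for a t x t kernel matrix K."""
    t = K.shape[0]
    M = np.linalg.solve(K + gamma * np.eye(t), K)  # (K + gamma I)^{-1} K
    return np.diag(M)  # K is symmetric, so this diagonal matches Eq. (1)

def effective_dimension(K, gamma):
    """d_eff(gamma) = Tr(K (K + gamma I)^{-1}) = sum of the RLSs, as in Eq. (2)."""
    return ridge_leverage_scores(K, gamma).sum()
```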
The RLS of a point measures how orthogonal φ_i is w.r.t. the other points in Φ_t, and therefore how important it is to include it in I to obtain an accurate projection P_I. The effective dimension captures the capacity of the RKHS H over the support vectors in D_t. Let {λ_i}_i be the eigenvalues of K_t; since d_eff^t(γ) = Σ_{i=1}^t λ_i/(λ_i + γ), the effective dimension can be seen as the soft rank of K_t, where only eigenvalues above γ are counted.

To estimate the RLS and construct an accurate I, we leverage KORS [1] (see Alg. 1 in App. A), which extends the online row sampling of [3] to kernels. Starting from an empty dictionary, at each round KORS receives a new point x_t, temporarily adds it to the current dictionary I_t, and estimates its associated RLS τ̃_t. Then it draws a Bernoulli r.v. proportionally to τ̃_t. If the outcome is one, the point is deemed relevant and added to the dictionary; otherwise it is discarded and never added. Note that since points get evaluated only once, and are never dropped, the size of the dictionary grows over time and the RKHS H_{I_t} is included in the RKHS H_{I_{t+1}}. We restate the quality of the learned dictionaries and the complexity of the algorithm that we use as a building block.
Proposition 1 ([1, Thm. 2]). Given parameters 0 < ε ≤ 1, 0 < γ, and 0 < δ < 1, if β ≥ 3 log(T/δ)/ε², then the dictionary learned by KORS is such that, w.p. 1 − δ,
(1) for all rounds t ∈ [T], we have 0 ⪯ Φ_t^⊤(P_t − P_{I_t})Φ_t ⪯ (ε/(1 − ε)) γI, and
(2) the maximum size of the dictionary J is bounded by ((1 + ε)/(1 − ε)) 3β d_eff^T(γ) log(2T/δ).
The algorithm runs in Õ(d_eff^T(γ)² log⁴(T)) space and Õ(d_eff^T(γ)³) time per iteration.
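To make the insert-once/never-drop mechanism concrete, here is a stylized caricature of RLS-proportional online sampling. It is not the actual KORS estimator of [1] (which approximates the RLS cheaply against the current dictionary and carries the guarantees of Proposition 1); it only illustrates the control flow, with an exact RLS computed on dictionary plus incoming point:

```python
import numpy as np

def linear_kernel(A, B):
    return A @ B.T

def online_dictionary(xs, kernel, gamma, beta=1.0, seed=0):
    # Caricature of KORS-style sampling: estimate the RLS of the incoming
    # point and keep it with probability min(1, beta * rls). Points are
    # evaluated once and, if added, never removed.
    rng = np.random.default_rng(seed)
    I = []
    for x in xs:
        pts = np.array(I + [x])
        K = kernel(pts, pts)
        rls = (K @ np.linalg.inv(K + gamma * np.eye(len(pts))))[-1, -1]
        if rng.random() < min(1.0, beta * rls):
            I.append(x)          # deemed relevant: added and never dropped
    return np.array(I)

rng = np.random.default_rng(1)
stream = rng.standard_normal((500, 3))      # effectively 3-dimensional stream
D = online_dictionary(stream, linear_kernel, gamma=1.0)
print(len(D))                               # stays near d_eff, far below 500
```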
3  The PROS-N-KONS algorithm
We first use a toy OKL example from [1] to illustrate the main challenges for FGD in achieving both computational efficiency and optimal regret guarantees. We then propose a different approach, which will naturally lead to the definition of PROS-N-KONS.
Consider the case of binary classification with the square loss, where the point presented by the adversary is always the same point x_exp, but each round with an opposite {1, −1} label. Note that the difficulty in this problem arises from the adversarial nature of the labels and not from the dataset itself. The cumulative loss of the comparator w becomes (φ(x_exp)^⊤ w − 1)² + (φ(x_exp)^⊤ w + 1)² + … for T steps. Our goal is to achieve O(log T) regret w.r.t. the best solution in hindsight, which is easily achieved by always predicting 0. Intuitively, an algorithm will do well when the gradient-step magnitude shrinks as 1/t. Note that these losses are not strongly convex, thus exact first-order FGD only achieves O(√T) regret and does not guarantee our goal. Exact second-order methods (e.g., KONS) achieve O(log T) regret, but also store T copies of the SVs and have O(T⁴) runtime. If we try to improve the runtime using approximate updates and a fixed budget of SVs, we lose the O(log T) regime, since skipping the insertion of an SV also slows down the reduction in the step-size, both for first-order and second-order methods. If instead we try to compensate for the scarcity of SV additions due to the budget with larger updates to the step-size, the adversary can exploit such an unstable algorithm; indeed, in [1], avoiding an unstable solution forces the algorithm to introduce SVs with a constant probability. Finally, note that this example can be easily generalized to defeat any algorithm that stores a fixed budget of SVs, by replacing the single x_exp with a set of repeating vectors that exceeds the budget. This also defeats oblivious embedding techniques, such as random feature expansion with a fixed number of random features or a fixed dictionary, and simple strategies that update the SV dictionary by insertion and removal.
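A quick numerical illustration of the step-size phenomenon, on the scalar version of the toy problem (our sketch; the two schedules are the textbook first-order and strongly-convex choices, not the exact updates of any cited algorithm; note (w − y)² is 2-strongly convex in w, hence the 1/(2t) schedule):

```python
import numpy as np

T = 10_000
ys = np.array([1.0, -1.0] * (T // 2))       # same point, alternating labels
best = ((0.0 - ys) ** 2).sum()              # best fixed prediction in hindsight is 0

def ogd_regret(eta):
    w, total = 0.0, 0.0
    for t, y in enumerate(ys, start=1):
        total += (w - y) ** 2
        w -= eta(t) * 2 * (w - y)           # gradient of the scalar square loss
    return total - best

print("eta_t = 1/(2t):       regret =", ogd_regret(lambda t: 1 / (2 * t)))        # O(log T)
print("eta_t = 1/(2 sqrt t): regret =", ogd_regret(lambda t: 1 / (2 * np.sqrt(t))))  # O(sqrt T)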
If we relax the fixed-budget requirement, selection algorithms such as KORS can find an appropriate budget size for the SV dictionary. Indeed, this single-sample problem is intrinsically simple: its effective dimension d_eff^T(γ) ≃ 1 is small, and the induced RKHS H = span{φ(x_exp)} is one-dimensional. Therefore, following an adaptive embedding approach, we can reduce it to a one-dimensional parametric problem and solve it efficiently in this space using exact ONS updates. Alternatively, we can see this approach as constructing an approximate feature map φ̃ that after one step will exactly coincide with the exact feature map φ, but allows us to run exact KONS updates efficiently, replacing K with K̃. Building on this intuition, we propose PROS-N-KONS, a new second-order FGD method that continuously searches for the best embedding space H_{I_t} and, at the same time, exploits the small embedding space H_{I_t} to efficiently perform exact second-order updates.
We start from an empty dictionary I_0 and a null predictor w̃_0 = 0. At each round, PROS-N-KONS (Algorithm 1) receives a new point x_t and invokes KORS to decide whether it should be included in the current dictionary or not. Let t_j with j ∈ [J] be the random step when KORS introduces x_{t_j} into the dictionary. We analyze PROS-N-KONS as an epoch-based algorithm using these milestones t_j. Note that the length h_j = t_{j+1} − t_j and the total number of epochs J are random, and are decided in a data-adaptive way by KORS based on the difficulty of the problem. During epoch j, we have a fixed dictionary I_j that induces a feature matrix Φ_{I_j} containing the samples φ_i ∈ I_j, an embedding φ̃(x) : X → R^j = Σ_j⁻¹ U_j^⊤ Φ_j^⊤ φ(x) based on the singular values Σ_j and singular vectors U_j of Φ_j, with its associated approximate kernel function K̃ and induced RKHS H_j. At each round t_j < t < t_{j+1}, we perform an exact KONS update using the approximate map φ̃. This can be computed explicitly, since φ̃_t is in R^j and can be easily stored in memory. The update rules are

    Ã_t = Ã_{t−1} + (η_t/2) g̃_t g̃_t^⊤,    ũ_t = w̃_{t−1} − Ã_{t−1}⁻¹ g̃_{t−1},    w̃_t = Π^{S_t}_{Ã_{t−1}}(ũ_t) = ũ_t − (h(φ̃_t^⊤ ũ_t) / (φ̃_t^⊤ Ã_{t−1}⁻¹ φ̃_t)) Ã_{t−1}⁻¹ φ̃_t,

where the oblique projection Π^{S_t}_{Ã_{t−1}} is computed using the closed-form solution from [12]. When t = t_j and a new epoch begins, we perform a reset step before taking the first gradient step in the new embedded space. We update the feature map φ̃, but we reset Ã_t and w̃_{t_j} to zero. While this may seem a poor choice, as information learned over time is lost, it leaves the dictionary intact. As long as (a) the dictionary, and therefore the embedded space where we perform our GD, keeps improving, and (b) we do not needlessly reset too often, we can count on the fast second-order updates to quickly catch up to the best function in the current H_j. The motivating reason to reset the descent procedure when we switch subspace is to guarantee that our starting point in the descent cannot be influenced by the adversary, and therefore to allow us to bound the regret for the overall process (Sect. 4).
Input: feasible parameter C, step-sizes η_t, regularizer α
1:  Initialize j = 0, w̃_0 = 0, g̃_0 = 0, Ã_0 = αI, P̃_0 = 0
2:  Start a KORS instance with an empty dictionary I_0
3:  for t = {1, …, T} do
4:    Receive x_t, feed it to KORS; receive z_t (point added to the dictionary or not)
5:    if z_{t−1} = 1 then  {dictionary changed: reset}
6:      j = j + 1
7:      Build K_j from I_j and decompose it as U_j Σ_j Σ_j^⊤ U_j^⊤
8:      Set Ã_{t−1} = αI ∈ R^{j×j}
9:      Set w̃_t = 0 ∈ R^j
10:   else  {execute a gradient-descent step}
11:     Compute the map φ_t and the approximate map φ̃_t = Σ_j⁻¹ U_j^⊤ Φ_j^⊤ φ_t ∈ R^j
12:     Compute ũ_t = w̃_{t−1} − Ã_{t−1}⁻¹ g̃_{t−1}
13:     Compute w̃_t = ũ_t − (h(φ̃_t^⊤ ũ_t) / (φ̃_t^⊤ Ã_{t−1}⁻¹ φ̃_t)) Ã_{t−1}⁻¹ φ̃_t, where h(z) = sign(z) max{|z| − C, 0}
14:   end if
15:   Predict ŷ_t = φ̃_t^⊤ w̃_t
16:   Observe g̃_t = ∇_{w̃_t} ℓ_t(φ̃_t^⊤ w̃_t) = ℓ′_t(ŷ_t) φ̃_t
17:   Update Ã_t = Ã_{t−1} + (η_t/2) g̃_t g̃_t^⊤
18: end for
Figure 1: PROS-N-KONS
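The sketch below paraphrases one descent round of Figure 1 in numpy (our reading of lines 12-17; the variable names and the split into two helpers are ours, and the epoch/KORS bookkeeping is omitted):

```python
import numpy as np

def kons_descent_step(w_prev, A_prev_inv, g_prev, phi_t, C):
    # Lines 12-13: Newton-style step in the current embedded space R^j,
    # followed by the oblique projection onto S = {w : |phi_t^T w| <= C}.
    u = w_prev - A_prev_inv @ g_prev
    z = float(phi_t @ u)
    h = np.sign(z) * max(abs(z) - C, 0.0)         # h(z) = sign(z) max{|z| - C, 0}
    Ainv_phi = A_prev_inv @ phi_t
    return u - (h / float(phi_t @ Ainv_phi)) * Ainv_phi

def kons_observe(A_prev, w_t, phi_t, eta_t, loss_derivative):
    # Lines 15-17: predict, observe the gradient, rank-one update of A.
    y_hat = float(phi_t @ w_t)                    # line 15
    g_t = loss_derivative(y_hat) * phi_t          # line 16: g_t = l'_t(y_hat) phi_t
    A_t = A_prev + (eta_t / 2) * np.outer(g_t, g_t)   # line 17
    return y_hat, g_t, A_t
```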
Computational complexity. PROS-N-KONS's computational complexity is dominated by the Ã_t⁻¹ inversion required to compute the projection and the gradient update, and by the query to KORS, which internally also inverts a j × j matrix. Therefore, a naive implementation requires O(j³) per-step time and O(j²) space, the latter necessary to store Ã_t. Notice that, taking advantage of the fact that KORS only adds SVs to the dictionary and never removes them, and that, similarly, the Ã_t matrix is constructed using rank-one updates, a careful implementation reduces the per-step cost to O(j²). Overall, the total runtime of PROS-N-KONS is then O(TJ²), which, using the bound on J provided by Prop. 1 and neglecting logarithmic terms, reduces to Õ(T d_eff^T(γ)²). Compared to other exact second-order FGD methods, such as KONS or RKLS, PROS-N-KONS dramatically improves the time and space complexity from polynomial to linear. Unlike other approximate second-order methods, PROS-N-KONS does not add a new SV at each step. This way it removes T² from the O(T² + T d_eff^T(γ)³) time complexity of SKETCHED-KONS [1]. Moreover, when min_t τ_{t,t} is small, SKETCHED-KONS needs to compensate by adding a constant probability of adding an SV to the dictionary, resulting in a larger runtime complexity, while PROS-N-KONS has no dependency on the value of the RLS. Even compared to first-order methods, which incur a larger regret, PROS-N-KONS performs favorably, improving on the O(T²) runtime of exact first-order FGD. Compared to other approximate methods, the variant using rank-one updates matches the O(J²) per-step cost of the more accurate first-order methods such as the budgeted perceptron [2], the projectron [13], and Nyström GD [11], while improving on their regret. PROS-N-KONS also closely matches faster but less accurate O(J) methods such as the forgetron [4] and budgeted GD [21].
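The O(j²) per-step cost stems from maintaining the inverse of Ã_t under its rank-one updates, for instance via the Sherman-Morrison identity, as in this sketch (an illustration of the standard identity, not the paper's implementation):

```python
import numpy as np

def sm_update(A_inv, g, eta):
    # (A + (eta/2) g g^T)^{-1} via Sherman-Morrison: O(j^2) per step
    # instead of the O(j^3) of a fresh inversion.
    v = A_inv @ g
    return A_inv - (eta / 2) * np.outer(v, v) / (1.0 + (eta / 2) * (g @ v))

# quick check against a direct inverse
rng = np.random.default_rng(0)
j = 5
B = rng.standard_normal((j, j))
A = np.eye(j) + B @ B.T / 10          # symmetric positive definite
g = rng.standard_normal(j)
lhs = sm_update(np.linalg.inv(A), g, eta=0.3)
rhs = np.linalg.inv(A + 0.15 * np.outer(g, g))
print(np.abs(lhs - rhs).max())        # ~1e-15
```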
4  Regret guarantees
In this section, we study the regret performance of PROS-N-KONS.
Theorem 1 (proof in App. B). For any sequence of losses ℓ_t satisfying Asm. 2 with Lipschitz constant L, let σ = min_t σ_t. If η_t ≥ σ for all t, γ ≤ √T, ε ≤ σ, and predictions are bounded by C, then the regret of PROS-N-KONS over T steps is bounded w.p. 1 − δ as

    R_T(w) ≤ J ( α‖w‖² + (L²/σ) d_eff^T(α) log(2σL²T/α) ) + (γεT / (4(1 − ε))) (σL²/α + 1) + 2JC,    (3)

where J ≤ 3β d_eff^T(γ) log(2T) is the number of epochs. If γ = α/T, the previous bound reduces to

    R_T(w) = O( α‖w‖² d_eff^T(α/T) log(T) + d_eff^T(α/T) d_eff^T(α) log²(T) ).    (4)
Remark (bound). The bound in Eq. 3 is composed of three terms. At each epoch of PROS-N-KONS, an instance of KONS is run on the embedded feature space H_j obtained by using the dictionary I_j constructed up to the previous epoch. As a result, we directly use the bound on the regret of KONS (Thm. 1 in [1]) for each of the J epochs, thus leading to the first term in the regret. Since a new epoch is started whenever a new SV is added to the dictionary, the number of epochs J is at most the size of the dictionary returned by KORS up to step T, which w.h.p. is Õ(d_eff^T(γ)), making the first term scale as Õ(d_eff^T(γ) d_eff^T(α)) overall. Nonetheless, the comparator used in the per-epoch regret of KONS is constrained to the RKHS H_j induced by the embedding used in epoch j. The second term accounts for the difference in performance between the best solutions in the RKHS in epoch j and in the original RKHS H. While this error is directly controlled by KORS through the RLS regularization γ and the parameter ε (hence the factor εγ/(1 − ε) from Property (1) in Prop. 1), its impact on the regret is amplified by the length of each epoch, thus leading to an overall linear term that needs to be regularized. Finally, the last term summarizes the regret suffered every time a new epoch is started and the default prediction ŷ = 0 is returned. Since the values y_t and ŷ_t are constrained in S, this results in a regret of 2JC.
Remark (regret comparison). Tuning the RLS regularization as γ = α/T leads to the bound in Eq. 4. While the bound displays an explicit logarithmic dependency on T, this comes at the cost of increasing the effective dimension, which now depends on the regularization α/T. While in general this could possibly compromise the overall regret, if the sequence of points φ_1, …, φ_T induces a kernel matrix with a rapidly decaying spectrum, the resulting regret is still competitive. For instance, if the eigenvalues of K_T decrease as λ_t = at^{−q} with constants a > 0 and q > 1, then d_eff^T(α/T) ≤ aqT^{1/q}/(q − 1). This shows that for any q > 2 we obtain a regret of o(√T log² T) (here we ignore the term d_eff^T(α), which is a constant w.r.t. T for any constant α), showing that KONS still improves over first-order methods. Furthermore, if the kernel has a low rank or the eigenvalues decrease exponentially, the final regret is poly-logarithmic, thus preserving the full advantage of the second-order approach. Notice that this scenario is always verified when H = R^d, and it is also verified when the adversary draws samples from a stationary distribution and the kernel is, e.g., Gaussian [20] (see also [14, 16]). This result is particularly remarkable when compared to SKETCHED-KONS, whose regret scales as O(α‖w‖² + d_eff^T(γ)(log T)/ρ), where ρ is the fraction of samples which is forced into the dictionary (when ρ = 1, we recover the bound for KONS). Even when the effective dimension is small (e.g., exponentially decaying eigenvalues), SKETCHED-KONS requires setting ρ to T^{−p} for a constant p > 0 to get a subquadratic space complexity, at the cost of increasing the regret to O(T^p log T). On the other hand, PROS-N-KONS achieves a poly-logarithmic regret with time complexity linear in T up to poly-log factors (i.e., T d_eff^T(γ)²), thus greatly improving both the learning and computational performance w.r.t. SKETCHED-KONS. Finally, notice that while γ = α/T is the best choice agnostic to the kernel, better bounds can be obtained by optimizing Eq. 3 for γ depending on d_eff^T(γ). For instance, let γ = α/T^s; then the optimal value of s for a q-polynomially decaying spectrum is s = q/(1 + q), leading to a regret bound Õ(T^{1/(1+q)}), which is always o(√T) for any q > 1.
Remark (comparison in the Euclidean case). In the special case H = R^d, we can make a comparison with existing approximate methods for OL. In particular, the closest algorithm is SKETCHED-ONS by Luo et al. [12]. Unlike PROS-N-KONS, and similarly to SKETCHED-KONS, they take the approach of directly approximating A_t in the exact H = R^d using frequent directions [5] to construct a k-rank approximation of A_t for a fixed k. The resulting algorithm achieves a regret that is bounded by k log T + k Σ_{i=k+1}^d λ_i, where the sum Σ_{i=k+1}^d λ_i is equal to the sum of the d − k smallest eigenvalues of the final (exact) matrix A_T. This quantity can vary from 0, when the data lies in a subspace of rank r ≤ k, to T(d − k)/d when the samples lie orthogonally and in equal number along all d directions available in R^d. Computationally, the algorithm requires O(Tdk) time and O(dk) space. Conversely, PROS-N-KONS automatically adapts its time and space complexity to the effective dimension of the problem d_eff^T(α/T), which is smaller than the rank for any α. As a consequence, it requires only Õ(Tr²) time and Õ(r²) space, achieving an O(r² log T) regret independently of the spectrum of the covariance matrix. Computationally, all of these complexities are smaller than the ones of SKETCHED-ONS in the regime r < k, which is the only one where SKETCHED-ONS can guarantee a sublinear regret, and where the regrets of the two algorithms are close. Overall, while SKETCHED-ONS implicitly relies on the r < k assumption but continues to operate in a d-dimensional space and suffers large regret if r > k, PROS-N-KONS adaptively converts the d-dimensional problem into a simpler one with the appropriate rank, fully reaping the computational and regret benefits.
The bound in Thm. 1 can be refined in the specific case of squared loss as follows.
Theorem 2. For any sequence of squared losses ℓ_t = (y_t − ŷ_t)², L = 4C and σ = 1/(8C²), if η_t ≥ σ for all t, γ ≤ √T and ε ≤ σ, the regret of PROS-N-KONS over T steps is bounded w.p. 1 − δ as

    R_T(w) ≤ Σ_{j=1}^J ( (L²/σ) d_eff^j(α) log(2σ Tr(K_j)/α) + ε′ L̄*_j ) + JLC + ε′ α‖w‖₂²,    (5)

where ε′ = (1 − γε/((1 − ε)α))⁻¹ − 1 and L̄*_j = min_{w∈H} Σ_{t=t_j}^{t_{j+1}−1} (φ_t^⊤ w − y_t)² + α‖w‖₂² is the best regularized cumulative loss in H within epoch j.
Let L̄*_T be the best regularized cumulative loss over all T steps; then L̄*_j ≤ L̄*_T. Furthermore, we have that d_eff^j ≤ d_eff^T, and thus the regret in Eq. 5 can be (loosely) bounded as

    R_T(w) = O( J d_eff^T(α) log(T) + ε′ L̄*_T + ε′ α‖w‖₂² ).

The major difference w.r.t. the general bound in Eq. 3 is that we directly relate the regret of PROS-N-KONS to the performance of the best predictor in H in hindsight, which replaces the O(γT) linear term. As a result, we can set γ = α (for which ε′ = ε/(1 − 2ε)) and avoid increasing the effective dimension of the problem. Furthermore, since L̄*_T is the regularized loss of the optimal batch solution, we expect it to be small whenever H is well designed for the prediction task at hand. For instance, if L̄*_T scales as O(log T) for a given regularization α (e.g., in the realizable case L̄*_T is actually just α‖w‖₂²), then the regret of PROS-N-KONS is directly comparable with KONS, up to a multiplicative factor depending on the number of epochs J, and with a much smaller time and space complexity that adapts to the effective dimension of the problem (see Prop. 1).
5  Experiments
We empirically validate PROS-N-KONS on several regression and binary classification problems, showing that it is competitive with state-of-the-art methods. We focused on verifying (1) the advantage of second-order vs. first-order updates, (2) the effectiveness of data-adaptive embeddings w.r.t. oblivious ones, and (3) the size of the effective dimension in real datasets. Note that our guarantees hold for more challenging (possibly adversarial) settings than the ones we test empirically.
Algorithms. Besides PROS-N-KONS, we introduce two heuristic variants. CON-KONS follows the same update rules as PROS-N-KONS during the descent steps, but at reset steps it does not reset the solution: instead, starting from w̃_{t−1}, it computes w_{t−1} = Φ_{j−1} U_{j−1} Σ_{j−1}⁻¹ w̃_{t−1} and sets w̃_t = Σ_j⁻¹ U_j^⊤ Φ_j^⊤ w_{t−1}. A similar update rule is used to map Ã_{t−1} into the new embedded space without resetting it. B-KONS is a budgeted version of PROS-N-KONS that stops updating the dictionary at a maximum budget J_max and then continues learning in the last space for the rest of the run. Finally, we also include the best BATCH solution in the final space H_J returned by KORS as a best-in-hindsight comparator. We also compare to two state-of-the-art embedding-based first-order methods from [11].
Table 1: Regression datasets (average squared loss ± std. dev., number of SVs, runtime in seconds; "-" denotes a value not reported).

parkinson (n = 5,875, d = 20)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.04909 ± 0.00020       30       -
  NOGD           0.04896 ± 0.00068       30       -
  PROS-N-KONS    0.05798 ± 0.00136       18    5.16
  CON-KONS       0.05696 ± 0.00129       18    5.21
  B-KONS         0.05795 ± 0.00172       18    5.35
  BATCH          0.04535 ± 0.00002        -       -

cpusmall (n = 8,192, d = 12)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.02577 ± 0.00050       30       -
  NOGD           0.02559 ± 0.00024       30       -
  PROS-N-KONS    0.02494 ± 0.00141       20    7.28
  CON-KONS       0.02269 ± 0.00164       20    7.40
  B-KONS         0.02496 ± 0.00177       20    7.37
  BATCH          0.01090 ± 0.00082        -       -

cadata (n = 20,640, d = 8)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.04097 ± 0.00015       30       -
  NOGD           0.03983 ± 0.00018       30       -
  PROS-N-KONS    0.03095 ± 0.00110       20   18.59
  CON-KONS       0.02850 ± 0.00174       19   18.45
  B-KONS         0.03095 ± 0.00118       19   18.65
  BATCH          0.02202 ± 0.00002        -       -

casp (n = 45,730, d = 9)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.08021 ± 0.00031       30       -
  NOGD           0.07844 ± 0.00008       30       -
  PROS-N-KONS    0.06773 ± 0.00105       21   40.73
  CON-KONS       0.06832 ± 0.00315       20   40.91
  B-KONS         0.06775 ± 0.00067       21   41.13
  BATCH          0.06100 ± 0.00003        -       -

slice (n = 53,500, d = 385)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.00726 ± 0.00019       30       -
  NOGD           0.02636 ± 0.00460       30       -
  DUAL-SGD       -                        -       -
  PROS-N-KONS    did not complete
  CON-KONS       did not complete
  B-KONS         0.00913 ± 0.00045      100      60
  BATCH          0.00212 ± 0.00001        -       -

year (n = 463,715, d = 90)
  Algorithm      avg. squared loss      #SV    time
  FOGD           0.01427 ± 0.00004       30       -
  NOGD           0.01427 ± 0.00004       30       -
  DUAL-SGD       0.01440 ± 0.00000      100       -
  PROS-N-KONS    0.01450 ± 0.00014      149  884.82
  CON-KONS       0.01444 ± 0.00017      147  889.42
  B-KONS         0.01302 ± 0.00006      100  505.36
  BATCH          0.01147 ± 0.00001        -       -
NOGD selects the first J points and uses them to construct an embedding, and then performs exact GD in the embedded space. FOGD uses random feature expansion to construct an embedding, and then runs first-order GD in the embedded space. While oblivious embedding methods are cheaper than data-adaptive Nyström, they are usually less accurate. Finally, DUAL-SGD also performs a random feature expansion embedding, but in the dual space. Given the number #SV of SVs stored in the predictor and the input dimension d of the dataset's samples, the time complexity of all first-order methods is O(T d #SV), while that of PROS-N-KONS and variants is O(T(d + #SV)#SV). When #SV ≈ d (as in our case) the two complexities coincide. The space complexities are also close, with PROS-N-KONS's O(#SV²) not much larger than the first-order methods' O(#SV). We do not run SKETCHED-KONS because its T² runtime is prohibitive.
Experimental setup. We replicate the experimental setting in [11] with 9 datasets for regression and 3 datasets for binary classification. We use the same preprocessing as Lu et al. [11]: each feature of the points x_t is rescaled to fit in [0, 1]; for regression the target variable y_t is rescaled to [0, 1], while in binary classification the labels are {−1, 1}. We also do not tune the Gaussian kernel bandwidth, but take the value σ = 8 used by [11]. For all datasets, we set β = 1 and ε = 0.5 for all PROS-N-KONS variants, and J_max = 100 for B-KONS. For each algorithm and dataset, we report the average and standard deviation of the losses. The scores for the competitor baselines are reported as provided in the original papers [11, 10]. We only report scores for NOGD, FOGD, and DUAL-SGD, since they have been shown to outperform other baselines such as the budgeted perceptron [2], the projectron [13], the forgetron [4], and budgeted GD [21]. For the PROS-N-KONS variants we also report the runtime in seconds, but do not compare with the runtimes reported by [11, 10], as that would imply comparing different implementations. Note that since the complexities O(T d #SV) and O(T(d + #SV)#SV) are close, we do not expect large differences. All experiments are run on a single machine with 2 Xeon E5-2630 CPUs for a total of 10 cores, and results are averaged over 15 runs.
Effective dimension and runtime. We use the size of the dictionary returned by KORS as a proxy for the effective dimension of the datasets. As expected, larger datasets and datasets with a larger input dimension have a larger effective dimension. Furthermore, d_eff^T(γ) increases (sublinearly) when we reduce γ from 1 to 0.01 on the ijcnn1 dataset. More importantly, d_eff^T(γ) remains empirically small even for datasets with hundreds of thousands of samples, such as year, ijcnn1 and cod-rna. On the other hand, on the slice dataset the effective dimension is too large for PROS-N-KONS to complete, and we only provide results for B-KONS. Overall, the proposed algorithm can process hundreds of thousands of points in a matter of minutes, which shows that it can practically scale to large datasets.
Table 2: Binary classification datasets (average online error rate ± std. dev., number of SVs, runtime in seconds; "-" denotes a value not reported).

α = 1, γ = 1

ijcnn1 (n = 141,691, d = 22)
  Algorithm      error rate      #SV     time
  FOGD            9.06 ± 0.05    400        -
  NOGD            9.55 ± 0.01    100        -
  DUAL-SGD        8.35 ± 0.20    100        -
  PROS-N-KONS     9.70 ± 0.01    100   211.91
  CON-KONS        9.64 ± 0.01    101   215.71
  B-KONS          9.70 ± 0.01     98   206.53
  BATCH           8.33 ± 0.03      -        -

cod-rna (n = 271,617, d = 8)
  Algorithm      error rate      #SV     time
  FOGD           10.30 ± 0.10    400        -
  NOGD           13.80 ± 2.10    100        -
  DUAL-SGD        4.83 ± 0.21    100        -
  PROS-N-KONS    13.95 ± 1.19     38   270.81
  CON-KONS       18.99 ± 9.47     38   271.85
  B-KONS         13.99 ± 1.16     38   274.94
  BATCH          3.781 ± 0.01      -        -

α = 0.01, γ = 0.01

ijcnn1 (n = 141,691, d = 22)
  Algorithm      error rate      #SV      time
  FOGD            9.06 ± 0.05    400         -
  NOGD            9.55 ± 0.01    100         -
  DUAL-SGD        8.35 ± 0.20    100         -
  PROS-N-KONS    10.73 ± 0.12    436   1003.82
  CON-KONS        6.23 ± 0.18    432    987.33
  B-KONS          4.85 ± 0.08    100    147.22
  BATCH           5.61 ± 0.01      -         -

cod-rna (n = 271,617, d = 8)
  Algorithm      error rate      #SV     time
  FOGD           10.30 ± 0.10    400        -
  NOGD           13.80 ± 2.10    100        -
  DUAL-SGD        4.83 ± 0.21    100        -
  PROS-N-KONS     4.91 ± 0.04    111   459.28
  CON-KONS        5.81 ± 1.96    111   458.90
  B-KONS          4.57 ± 0.05    100   333.57
  BATCH           3.61 ± 0.01      -        -
Regression. All algorithms are trained and evaluated using the squared loss. Notice that whenever the budget J_max is not exceeded, B-KONS and PROS-N-KONS are the same algorithm and obtain the same result. On the regression datasets (Tab. 1) we set α = 1 and γ = 1, which satisfies the requirements of Thm. 2. Note that we did not tune α and γ for optimal performance, as that would require multiple runs and violate the online setting. On smaller datasets such as parkinson and cpusmall, where frequent restarts greatly interfere with the gradient descent and even a small non-adaptive embedding can capture the geometry of the data, PROS-N-KONS is outperformed by simpler first-order methods. As soon as T reaches the order of tens of thousands (cadata, casp), second-order updates and data adaptivity become relevant, and PROS-N-KONS outperforms its competitors, both in the number of SVs and in the average loss. In this intermediate regime, CON-KONS outperforms PROS-N-KONS and B-KONS, since it is less affected by restarts. Finally, when the number of samples rises to hundreds of thousands, the intrinsic effective dimension of the dataset starts playing a larger role. On slice, where the effective dimension is too large for the unbudgeted variants to run, B-KONS still outperforms NOGD with a comparable budget of SVs, showing the advantage of second-order updates.
Binary classification. All algorithms are trained using the hinge loss and evaluated using the average online error rate. Results are reported in Tab. 2. While for regression an arbitrary value of α = γ = 1 is sufficient to obtain good results, it fails for binary classification. Decreasing the two parameters to 0.01 resulted in a 3-fold increase in the number of SVs included and in the runtime, but almost a 2-fold decrease in error rate, placing PROS-N-KONS and B-KONS on par with or ahead of competitors without the need for any further parameter tuning.
6  Conclusions
We presented PROS-N-KONS, a novel algorithm for sketched second-order OKL that achieves O(d_eff^T log T) regret for losses with directional curvature. Our sketching is data-adaptive and, when the effective dimension of the dataset is constant, it achieves a constant per-step cost, unlike SKETCHED-KONS [1], which was previously proposed for the same setting. We empirically showed that PROS-N-KONS is practical, performing on par with or better than state-of-the-art methods on standard benchmarks using small dictionaries on realistic data.
Acknowledgements The research presented was supported by French Ministry of Higher Education and
Research, Nord-Pas-de-Calais Regional Council and French National Research Agency projects ExTra-Learn
(n.ANR-14-CE24-0010-01) and BoB (n.ANR-16-CE23-0003).
References
[1] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Second-order kernel online convex optimization with adaptive sketching. In International Conference on Machine Learning, 2017.
[2] Giovanni Cavallanti, Nicolò Cesa-Bianchi, and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. Machine Learning, 69(2-3):143-167, 2007.
[3] Michael B. Cohen, Cameron Musco, and Jakub Pachocki. Online row sampling. International Workshop on Approximation, Randomization, and Combinatorial Optimization (APPROX), 2016.
[4] Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. The forgetron: A kernel-based perceptron on a budget. SIAM Journal on Computing, 37(5):1342-1372, 2008.
[5] Mina Ghashami, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762-1792, 2016.
[6] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In Conference on Learning Theory. Springer, 2006.
[7] Wenwu He and James T. Kwok. Simple randomized algorithms for online learning with kernels. Neural Networks, 60:17-24, 2014.
[8] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), 2004.
[9] Quoc Le, Tamás Sarlós, and Alex J. Smola. Fastfood - Approximating kernel expansions in loglinear time. In International Conference on Machine Learning, 2013.
[10] Trung Le, Tu Nguyen, Vu Nguyen, and Dinh Phung. Dual space gradient descent for online learning. In Neural Information Processing Systems, 2016.
[11] Jing Lu, Steven C. H. Hoi, Jialei Wang, Peilin Zhao, and Zhi-Yong Liu. Large scale online kernel learning. Journal of Machine Learning Research, 17(47):1-43, 2016.
[12] Haipeng Luo, Alekh Agarwal, Nicolò Cesa-Bianchi, and John Langford. Efficient second-order online learning via sketching. In Neural Information Processing Systems, 2016.
[13] Francesco Orabona, Joseph Keshet, and Barbara Caputo. The projectron: a bounded kernel-based perceptron. In International Conference on Machine Learning, 2008.
[14] Yi Sun, Jürgen Schmidhuber, and Faustino J. Gomez. On the size of the online kernel sparsification dictionary. In International Conference on Machine Learning, 2012.
[15] Zhuang Wang, Koby Crammer, and Slobodan Vucetic. Breaking the curse of kernelization: Budgeted stochastic gradient descent for large-scale SVM training. Journal of Machine Learning Research, 13(Oct):3103-3131, 2012.
[16] Andrew J. Wathen and Shengxin Zhu. On spectral distribution of kernel matrices related to radial basis functions. Numerical Algorithms, 70(4):709-726, 2015.
[17] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Neural Information Processing Systems, 2001.
[18] Yi Xu, Haiqin Yang, Lijun Zhang, and Tianbao Yang. Efficient non-oblivious randomized reduction for risk minimization with improved excess risk guarantee. In AAAI Conference on Artificial Intelligence, 2017.
[19] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In Neural Information Processing Systems, 2012.
[20] Y. Yang, M. Pilanci, and M. J. Wainwright. Randomized sketches for kernels: Fast and optimal nonparametric regression. Annals of Statistics, 2017.
[21] Peilin Zhao, Jialei Wang, Pengcheng Wu, Rong Jin, and Steven C. H. Hoi. Fast bounded online gradient descent algorithms for scalable kernel-based online learning. In International Conference on Machine Learning, 2012.
[22] Fedor Zhdanov and Yuri Kalnishkan. An identity for kernel ridge regression. In Algorithmic Learning Theory, 2010.
[23] Changbo Zhu and Huan Xu. Online gradient descent in function space. arXiv:1512.02394, 2015.
[24] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In International Conference on Machine Learning, 2003.
Implicit Regularization in Matrix Factorization
Suriya Gunasekar
TTI at Chicago
[email protected]
Blake Woodworth
TTI at Chicago
[email protected]
Behnam Neyshabur
TTI at Chicago
[email protected]
Srinadh Bhojanapalli
TTI at Chicago
[email protected]
Nathan Srebro
TTI at Chicago
[email protected]
Abstract
We study implicit regularization when optimizing an underdetermined quadratic
objective over a matrix X with gradient descent on a factorization of X. We
conjecture and provide empirical and theoretical evidence that with small enough
step sizes and initialization close enough to the origin, gradient descent on a full
dimensional factorization converges to the minimum nuclear norm solution.
1  Introduction
When optimizing underdetermined problems with multiple global minima, the choice of optimization
algorithm can play a crucial role in biasing us toward a specific global minima, even though this bias is
not explicitly specified in the objective or problem formulation. For example, using gradient descent
to optimize an unregularized, underdetermined least squares problem would yield the minimum
Euclidean norm solution, while using coordinate descent or preconditioned gradient descent might
yield a different solution. Such implicit bias, which can also be viewed as a form of regularization,
can play an important role in learning.
In particular, implicit regularization has been shown to play a crucial role in training deep models
[14, 13, 18, 11]: deep models often generalize well even when trained purely by minimizing the
training error without any explicit regularization, and when there are more parameters than samples
and the optimization problem is underdetermined. Consequently, there are many zero training error
solutions, all global minima of the training objective, many of which generalize badly. Nevertheless,
our choice of optimization algorithm, typically a variant of gradient descent, seems to prefer solutions
that do generalize well. This generalization ability cannot be explained by the capacity of the
explicitly specified model class (namely, the functions representable in the chosen architecture).
Instead, it seems that the optimization algorithm biases us toward a "simple" model, minimizing some implicit "regularization measure", and that generalization is linked to this measure. But what
are the regularization measures that are implicitly minimized by different optimization procedures?
As a first step toward understanding implicit regularization in complex models, in this paper we
carefully analyze implicit regularization in matrix factorization models, which can be viewed as
two-layer networks with linear transfer. We consider gradient descent on the entries of the factor
matrices, which is analogous to gradient descent on the weights of a multilayer network. We show
how such an optimization approach can indeed yield good generalization properties even when the
problem is underdetermined. We identify the implicit regularizer as the nuclear norm, and show that
even when we use a full dimensional factorization, imposing no constraints on the factored matrix,
optimization by gradient descent on the factorization biases us toward the minimum nuclear norm
solution. Our empirical study leads us to conjecture that with small step sizes and initialization close
to zero, gradient descent converges to the minimum nuclear norm solution, and we provide empirical
and theoretical evidence for this conjecture, proving it in certain restricted settings.
2  Factorized Gradient Descent for Matrix Regression
We consider least squares objectives over matrices X ∈ R^{n×n} of the form:

    min_{X ⪰ 0} F(X) = ‖A(X) − y‖₂².    (1)
where A : R^{n×n} → R^m is a linear operator specified by A(X)_i = ⟨A_i, X⟩, with A_i ∈ R^{n×n}, and y ∈ R^m. Without loss of generality, we consider only symmetric positive semidefinite (p.s.d.) X and symmetric linearly independent A_i (otherwise, consider optimization over a larger matrix [W, X; X^⊤, Z] with A operating symmetrically on the off-diagonal blocks). In particular, this setting covers problems including matrix completion (where the A_i are indicators, [5]), matrix reconstruction from linear measurements [15], and multi-task training (where each column of X is a predictor for a different task and the A_i have a single non-zero column, [2, 1]).
We are particularly interested in the regime where m ≪ n², in which case (1) is underdetermined with many global minima satisfying A(X) = y. For such underdetermined problems, merely minimizing (1) cannot ensure recovery (in matrix completion or recovery problems) or generalization (in prediction problems). For example, in a matrix completion problem (without diagonal observations), we can minimize (1) by setting all non-diagonal unobserved entries to zero, or to any arbitrary value.
Instead of working on X directly, we will study a factorization X = UU^⊤. We can write (1) equivalently as optimization over U as,

    min_{U ∈ R^{n×d}} f(U) = ‖A(UU^⊤) − y‖₂².    (2)

When d < n, this imposes a constraint on the rank of X, but we will be mostly interested in the case d = n, under which no additional constraint is imposed on X (beyond being p.s.d.) and (2) is equivalent to (1). Thus, if m ≪ n², then (2) with d = n is similarly underdetermined and can be optimized in many ways; estimating a global optimum cannot ensure generalization (e.g., imputing zeros in a matrix completion objective). Let us investigate what happens when we optimize (2) by gradient descent on U.
To simulate such a matrix reconstruction problem, we generated m ≪ n² random measurement matrices and set y = A(X*) according to some planted X* ⪰ 0. We minimized (2) by performing gradient descent on U to convergence, and then measured the relative reconstruction error ‖X − X*‖_F / ‖X*‖_F for X = UU^⊤. Figure 1 shows the normalized training objective and reconstruction error as a function of the dimensionality d of the factorization, for different initialization and step-size policies, and three different planted X*.
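A minimal version of this simulation (our sketch, not the paper's code; the step size and iteration budget are untuned and the mild 1/√m normalization of the measurements is our own choice for numerical convenience):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
m = 3 * n * r                                        # m << n^2 measurements
W = rng.standard_normal((n, r))
X_star = W @ W.T                                     # planted rank-r X* >= 0
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / (2 * np.sqrt(m))    # symmetric, mildly normalized
y = np.einsum('kij,ij->k', A, X_star)

U = 1e-4 * rng.standard_normal((n, n))               # d = n, init close to the origin
eta = 5e-4                                           # untuned; illustrative only
for _ in range(20_000):
    r_t = np.einsum('kij,ij->k', A, U @ U.T) - y
    # grad of ||A(UU^T) - y||^2 w.r.t. U is 4 * sum_k r_k A_k U (A_k symmetric)
    U -= eta * 4 * np.einsum('k,kij->ij', r_t, A) @ U

X = U @ U.T
print("relative error:", np.linalg.norm(X - X_star) / np.linalg.norm(X_star))
print("nuclear norms :", np.linalg.eigvalsh(X).clip(0).sum(), "vs", np.trace(X_star))
```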
First, we see that (for sufficiently large d) gradient descent indeed finds a global optimum, as
evidenced by the training error (the optimization objective) being zero. This is not surprising since
with large enough d this non-convex problem has no spurious local minima [4, 9] and gradient
descent converges almost surely to a global optima [12]; there has also been recent work establishing
conditions for global convergence for low d [3, 7].
The more surprising observation is that in panels (a) and (b), even when d > m/n, indeed even for
d = n, we still get good reconstructions from the solution of gradient descent with initialization U0
close to zero and small step size. In this regime, (2) is underdetermined and minimizing it does not
ensure generalization. To emphasize this, we plot the reference behavior of a rank unconstrained
global minimizer Xgd obtained via projected gradient descent for (1) on the X space. For d < n we
also plot an example of an alternate "bad" rank-d global optimum obtained with an initialization based on SVD of X_gd ("SVD Initialization").
When d < m/n, we understand how the low-rank structure can guarantee generalization [16] and reconstruction [10, 3, 7]. What ensures generalization when d ≫ m/n? Is there a strong implicit
regularization at play for the case of gradient descent on factor space and initialization close to zero?
Observing the nuclear norm of the resulting solutions plotted in Figure 2 suggests that gradient descent
implicitly induces a low nuclear norm solution. This is the case even for d = n when the factorization
2
[Figure 1: three panels, (a) low rank X*, (b) low nuclear norm X*, (c) low rank X* with m = nr/4, plotting training error and relative reconstruction error against the dimension d of the factorization (0 to 50), for ‖U₀‖_F ∈ {10⁻⁴, 1} with step size η = 10⁻³ or exact line search η_ELS, together with the X_gd and SVD-initialization references.]
Figure 1: Reconstruction error of the global optima for 50 × 50 matrix reconstruction. (Left) X* is of rank r = 2 and m = 3nr; (Center) X* has a spectrum decaying as O(1/k^1.5), normalized to have ‖X*‖_* = √r ‖X*‖_F, for r = 2 and m = 3nr; and (Right) is a non-reconstructable setting where the number of measurements m = nr/4 is much smaller than the requirement to reconstruct a rank r = 2 matrix. The plots compare the reconstruction error of gradient descent on U for different choices of initialization U₀ and step size η, including a fixed step size and exact line search clipped for stability (η_ELS). Additionally, the orange dashed reference line represents the performance of X_gd, a rank-unconstrained global optimum obtained by projected gradient descent for (1) on X space, and "SVD Initialization" is an example of an alternate rank-d global optimum, where the initialization U₀ is picked based on the SVD of X_gd and gradient descent is run on factor space with a small step size. Training error behaves similarly in all these settings (zero for d ≥ 2) and is plotted for reference. Results are averaged across 3 random initializations and the (near zero) error bars indicate the standard deviation.
[Figure 2: three matching panels, (a) low rank X*, (b) low nuclear norm X*, (c) low rank X* with m = nr/4, plotting the nuclear norm of the solutions against the dimension d, with an additional reference line for min ‖X‖_* s.t. A(X) = y.]
Figure 2: Nuclear norm of the solutions from Figure 1. In addition to the reference of X_gd from Figure 1, the magenta dashed line (almost overlapped by the plot of ‖U₀‖_F = 10⁻⁴, η = 10⁻³) is added as a reference for the (rank unconstrained) minimum nuclear norm global optimum. The error bars indicate the standard deviation across 3 random initializations. We have dropped the plot for ‖U₀‖_F = 1, η = 10⁻³ to reduce clutter.
imposes no explicit constraints. Furthermore, we do not include any explicit regularization and
optimization is run to convergence without any early stopping. In fact, we can see a clear bias toward
low nuclear norm even in problems where reconstruction is not possible: in panel (c) of Figure 2 the
number of samples m = nr/4 is much smaller than those required to reconstruct a rank r ground
truth matrix X ? . The optimization in (2) is highly underdetermined and there are many possible
zero-error global minima, but gradient descent still prefers a lower nuclear norm solution. The
emerging story is that gradient descent biases us to a low nuclear norm solution, and we already know
how having low nuclear norm can ensure generalization [17, 6] and minimizing the nuclear norm
ensures reconstruction [15, 5].
Can we more explicitly characterize this bias? We see that we do not always converge precisely
to the minimum nuclear norm solution. In particular, the choice of step size and initialization
affects which solution gradient descent converges to. Nevertheless, as we formalize in Section 3, we
argue that when U is full dimensional, the step size becomes small enough, and the initialization
approaches zero, gradient descent will converge precisely to a minimum nuclear norm solution, i.e. to
argmin_{X ⪰ 0} ‖X‖_* s.t. A(X) = y.
3  Gradient Flow and Main Conjecture
The behavior of gradient descent with infinitesimally small step size is captured by the differential equation U̇_t := dU_t/dt = −∇f(U_t) with an initial condition for U₀. For the optimization in (2) this is

    U̇_t = −A*(A(U_tU_t^⊤) − y) U_t,    (3)
where A* : R^m → R^{n×n} is the adjoint of A, given by A*(r) = Σ_i r_i A_i. Gradient descent can be seen as a discretization of (3), and approaches (3) as the step size goes to zero.
The dynamics (3) define the behavior of the solution X_t = U_tU_t^⊤, and using the chain rule we can verify that Ẋ_t = U̇_tU_t^⊤ + U_tU̇_t^⊤ = −A*(r_t)X_t − X_tA*(r_t), where r_t = A(X_t) − y is the vector of residuals. That is, even though the dynamics are defined in terms of the specific factorization X_t = U_tU_t^⊤, they are actually independent of the factorization and can be equivalently characterized as

    Ẋ_t = −A*(r_t)X_t − X_tA*(r_t).    (4)

We can now define the limit point X∞(X_init) := lim_{t→∞} X_t for the factorized gradient flow (4) initialized at X₀ = X_init. We emphasize that these dynamics are very different from the standard gradient flow dynamics of (1) on X, corresponding to gradient descent on X, which take the form Ẋ_t = −∇F(X_t) = −A*(r_t).
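The factorization-independence is easy to check numerically: one Euler step of (3) on U and one Euler step of (4) on X agree up to O(dt²) (a sketch of ours, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2              # symmetric measurements

def A_op(X):  return np.einsum('kij,ij->k', A, X)
def A_adj(r): return np.einsum('k,kij->ij', r, A)

y = rng.standard_normal(m)
dt = 1e-6
U = rng.standard_normal((n, n))
X = U @ U.T
r = A_op(X) - y

U_next = U - dt * A_adj(r) @ U                  # discretized (3)
X_from_U = U_next @ U_next.T
X_next = X - dt * (A_adj(r) @ X + X @ A_adj(r)) # discretized (4)
print(np.abs(X_from_U - X_next).max())          # O(dt^2)
```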
Based on the preliminary experiments in Section 2 and a more comprehensive numerical study discussed in Section 5, we state our main conjecture as follows:
Conjecture. For any full rank X_init, if X̂ = lim_{α→0} X∞(αX_init) exists and is a global optimum for (1) with A(X̂) = y, then X̂ ∈ argmin_{X ⪰ 0} ‖X‖_* s.t. A(X) = y.
Requiring a full-rank initial point demands a full dimensional d = n factorization in (2). The assumption of global optimality in the conjecture is generally satisfied: for almost all initializations, gradient flow will converge to a local minimizer [12], and when d = n any such local minimizer is also a global minimum [9]. Since we are primarily concerned with underdetermined problems, we expect the global optimum to achieve zero error, i.e. satisfy A(X) = y. We already know from this existing literature that gradient descent (or gradient flow) will generally converge to a solution satisfying A(X) = y; the question we address here is which of those solutions it will converge to.
The conjecture implies the same behavior for an asymmetric factorization X = UV^⊤ with gradient flow on (U, V), since this is equivalent to gradient flow on the p.s.d. factorization of [W, X; X^⊤, Z].
4  Theoretical Analysis
We will prove our conjecture for the special case where the matrices A_i commute, and discuss the more challenging non-commutative case. But first, let us begin by reviewing the behavior of straightforward gradient descent on X for the convex problem in (1).
Warm up: Consider gradient descent updates on the original problem (1) in X space, ignoring the p.s.d. constraint. The gradient direction ∇F(X) = A*(A(X) − y) is always spanned by the m matrices A_i. Initializing at X_init = 0, we will therefore always remain in the m-dimensional subspace L = {X = A*(s) | s ∈ R^m}. Now consider the optimization problem min_X ‖X‖_F² s.t. A(X) = y. The KKT optimality conditions for this problem are A(X) = y and ∃ν s.t. X = A*(ν). As long as we are in L, the second condition is satisfied, and if we converge to a zero-error global minimum, then the first condition is also satisfied. Since gradient descent stays on this manifold, this establishes that if gradient descent converges to a zero-error solution, it is the minimum Frobenius norm solution.
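This warm-up claim is easy to verify numerically: gradient descent on X from X = 0 lands on the same point as the pseudoinverse (minimum Frobenius norm) solution of the flattened linear system (our sketch; the step size and iteration count are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
A = rng.standard_normal((m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2          # symmetric measurements
y = rng.standard_normal(m)

# Gradient descent on X itself, initialized at X = 0 (psd constraint ignored).
X = np.zeros((n, n))
for _ in range(50_000):
    r = np.einsum('kij,ij->k', A, X) - y
    X -= 4e-3 * np.einsum('k,kij->ij', r, A)   # step along A*(r)

# Minimum Frobenius norm solution via the pseudoinverse of the flattened system.
M = A.reshape(m, -1)                        # rows are vec(A_k)
X_minnorm = (np.linalg.pinv(M) @ y).reshape(n, n)
print(np.abs(X - X_minnorm).max())          # ~0: GD found the min-norm solution
```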
Getting started: m = 1. Consider the simplest case of the factorized problem, when m = 1 with A₁ = A and y₁ = y. The dynamics of (4) are given by Ẋ_t = −r_t(AX_t + X_tA), where r_t is simply a scalar, and the solution for X_t is given by X_t = exp(s_tA) X₀ exp(s_tA), where s_T = −∫₀^T r_t dt. Assuming X̂ = lim_{α→0} X∞(αX₀) exists and A(X̂) = y, we want to show X̂ is an optimum for the following problem

    min_{X ⪰ 0} ‖X‖_*  s.t.  A(X) = y.    (5)
The KKT optimality conditions for (5) are:

    ∃ν ∈ R^m s.t.    A(X) = y,    X ⪰ 0,    A*(ν) ⪯ I,    (I − A*(ν))X = 0.    (6)
We already know that the first condition holds, and the p.s.d. condition is guaranteed by the factorization of X. The remaining complementary slackness and dual feasibility conditions effectively require that X̂ is spanned by the top eigenvector(s) of A. Informally, looking at the gradient flow path above, for any non-zero y, as α → 0 it is necessary that |s∞| → ∞ in order to converge to a global optimum; thus eigenvectors corresponding to the top eigenvalues of A will dominate the span of X∞(αX_init).
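For m = 1 the minimum nuclear norm solution of (5) is available in closed form, (y/λ_max(A)) v₁v₁^⊤ with v₁ the top eigenvector, so this can be checked directly (our sketch; with a small but nonzero initialization and step size the agreement is only approximate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
y = 1.0

lam, V = np.linalg.eigh(A)
v_top = V[:, -1]
X_nuc = (y / lam[-1]) * np.outer(v_top, v_top)   # min nuclear norm solution of (5)

# Factorized gradient descent with small initialization (alpha -> 0 regime).
U = 1e-3 * rng.standard_normal((n, n))
for _ in range(100_000):
    r = np.tensordot(A, U @ U.T) - y             # scalar residual <A, UU^T> - y
    U -= 1e-3 * r * (A @ U)                      # discretized dynamics (3), m = 1

X_hat = U @ U.T
print(np.abs(X_hat - X_nuc).max())               # small: X_hat concentrates on v_top
```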
What we can prove: Commutative {A_i}_{i∈[m]}. The characterization of the gradient flow path from the previous section can be extended to arbitrary m in the case that the matrices A_i commute, i.e. A_iA_j = A_jA_i for all i, j. Defining s_T = −∫₀^T r_t dt (a vector integral), we can verify by differentiating that the solution of (4) is

    X_t = exp(A*(s_t)) X₀ exp(A*(s_t)).    (7)

Theorem 1. In the case where the matrices {A_i}_{i=1}^m commute, if X̂ = lim_{α→0} X∞(αI) exists and is a global optimum for (1) with A(X̂) = y, then X̂ ∈ argmin_{X ⪰ 0} ‖X‖_* s.t. A(X) = y.
Proof. It suffices to show that such an X̂ satisfies the complementary slackness and dual feasibility KKT conditions in (6). Since the matrices A_i commute and are symmetric, they are simultaneously diagonalizable by a basis v₁, …, v_n, and so is A*(s) for any s ∈ R^m. This implies that for any α, X∞(αI) given by (7) and its limit X̂ also have the same eigenbasis. Furthermore, since X∞(αI) converges to X̂, the scalars v_k^⊤ X∞(αI) v_k → v_k^⊤ X̂ v_k for each k ∈ [n]. Therefore, λ_k(X∞(αI)) → λ_k(X̂), where λ_k(·) is defined as the eigenvalue corresponding to eigenvector v_k and not necessarily the k-th largest eigenvalue.
Let τ = −log α; then, using X₀ = e^{−τ} I in (7), λ_k(X∞(αI)) = exp(2λ_k(A*(s∞(τ))) − 2τ). For all k such that λ_k(X̂) > 0, by the continuity of log, we have

    2λ_k(A*(s∞(τ))) − 2τ → log λ_k(X̂)  ⟹  λ_k(A*(s∞(τ)))/τ − 1 → (log λ_k(X̂))/(2τ) → 0.    (8)

Defining ν(τ) = s∞(τ)/τ, we conclude that for all k such that λ_k(X̂) ≠ 0, lim_{τ→∞} λ_k(A*(ν(τ))) = 1. Similarly, for each k such that λ_k(X̂) = 0,

    exp(2λ_k(A*(s∞(τ))) − 2τ) → 0  ⟹  exp(λ_k(A*(ν(τ))) − 1)^{2τ} → 0.    (9)

Thus, for every ε ∈ (0, 1], for sufficiently large τ,

    exp(λ_k(A*(ν(τ))) − 1) < ε^{1/(2τ)} < 1  ⟹  λ_k(A*(ν(τ))) < 1.    (10)

Therefore, we have shown that lim_{τ→∞} A*(ν(τ)) ⪯ I and lim_{τ→∞} A*(ν(τ))X̂ = X̂, establishing the optimality of X̂ for (5).
Interestingly, and similarly to gradient descent on X, this proof does not exploit the particular form
of the "control" r_t and only relies on the fact that the gradient flow path stays within the manifold

    M = \{X = \exp(A^*(s))\, X_init\, \exp(A^*(s)) \mid s \in \mathbb{R}^m\}.   (11)

Since the A_i's commute, we can verify that the tangent space of M at a point X is given by
T_X M = \operatorname{span}\{A_i X + X A_i\}_{i \in [m]}, thus gradient flow will always remain in M. For any control
r_t such that following \dot{X}_t = -A^*(r_t) X_t - X_t A^*(r_t) leads to a zero-error global optimum, that
optimum will be a minimum nuclear norm solution. This implies in particular that the conjecture
extends to gradient flow on (2) even when the Euclidean norm is replaced by certain other norms, or
when only a subset of measurements are used for each step (such as in stochastic gradient descent).
However, unlike gradient descent on X, the manifold M is not flat, and the tangent space at each
point is different. Taking finite-length steps, as in gradient descent, would cause us to "fall off" of the
manifold. To avoid this, we must take infinitesimal steps, as in the gradient flow dynamics.
In the case that X_init and the measurements A_i are diagonal matrices, gradient descent on (2) is
equivalent to a vector least squares problem, parametrized in terms of the square root of the entries:

Corollary 2. Let x_\infty(x_init) be the limit point of gradient flow on \min_{u \in \mathbb{R}^n} \|A x(u) - y\|_2^2 with
initialization x_init, where x(u)_i = u_i^2, A \in \mathbb{R}^{m \times n} and y \in \mathbb{R}^m. If \hat{x} = \lim_{\alpha \to 0} x_\infty(\alpha \vec{1}) exists and
A\hat{x} = y, then \hat{x} \in \operatorname{argmin}_{x \in \mathbb{R}^n_+} \|x\|_1 s.t. Ax = y.
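A small numerical illustration (ours) of Corollary 2: we run gradient descent on u for \|A x(u) - y\|_2^2 with x(u)_i = u_i^2 and a near-zero initialization, then compare the \ell_1 norm of the limit against the minimum-\ell_1 nonnegative solution computed by a linear program. Sizes, step size and iteration counts are illustrative assumptions; whether the limit is exactly the \ell_1 minimizer is the content of the conjecture, so this only illustrates the trend.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 5, 20
A = rng.standard_normal((m, n))
y = A @ (rng.uniform(size=n) * (rng.uniform(size=n) < 0.2))  # sparse nonnegative target

u = np.full(n, 1e-4)                      # x_init = u^2, close to zero
for _ in range(200000):
    r = A @ (u * u) - y                   # residual A x(u) - y
    u -= 2e-3 * 4 * u * (A.T @ r)         # chain rule: dF/du = 4 u * (A^T r)

lp = linprog(np.ones(n), A_eq=A, b_eq=y, bounds=[(0, None)] * n)  # min ||x||_1 s.t. Ax = y, x >= 0
print((u * u).sum(), lp.fun)              # the two l1 norms should nearly agree
```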
The plot thickens: Non-commutative \{A_i\}_{i \in [m]}. Unfortunately, in the case that the matrices A_i
do not commute, the analysis is much more difficult. For a matrix-valued function F, \frac{d}{dt}\exp(F_t) is equal
to \dot{F}_t \exp(F_t) only when F_t and \dot{F}_t commute. Therefore, (7) is no longer a valid solution for (4).
Discretizing the solution path, we can express the solution as the "time ordered exponential":

    X_t = \lim_{\epsilon \to 0} \Big(\prod_{\tau = t/\epsilon}^{1} \exp(-\epsilon A^*(r_\tau))\Big)\, X_0\, \Big(\prod_{\tau = 1}^{t/\epsilon} \exp(-\epsilon A^*(r_\tau))\Big),   (12)

where the order in the products is important. If the A_i commute, the product of exponentials is equal to
an exponential of sums, which in the limit evaluates to the solution in (7). However, since in general
\exp(A_1)\exp(A_2) \neq \exp(A_1 + A_2), the path (12) is not contained in the manifold M defined in
(11).
It is tempting to try to construct a new manifold M' such that \operatorname{span}\{A_i X + X A_i\}_{i \in [m]} \subseteq T_X M'
and X_0 \in M', ensuring the gradient flow remains in M'. However, since the A_i's do not commute,
by combining infinitesimal steps along different directions it is possible to move (very slowly) in
directions that are not of the form A^*(s)X + XA^*(s) for any s \in \mathbb{R}^m. The possible directions
of movement indeed correspond to the Lie algebra defined by the closure of \{A_i\}_{i=1}^m under the
commutator operator [A_i, A_j] := A_i A_j - A_j A_i. Even when m = 2, this closure will generally
encompass all of \mathbb{R}^{n \times n}, allowing us to approach any p.s.d. matrix X with some (wild) control
r_t. Thus, we cannot hope to ensure the KKT conditions for an arbitrary control as we did in the
commutative case; it is necessary to exploit the structure of the residuals A(X_t) - y in some way.
Nevertheless, in order to make finite progress moving along a commutator direction like [A_i, A_j]X_t +
X_t[A_i, A_j]^\top, it is necessary to use an extremely non-smooth control, e.g., looping 1/\epsilon^2 times between
\epsilon-steps in the directions A_i, A_j, -A_i, -A_j, each such loop making an \epsilon^2 step in the desired direction.
We expect the actual residuals r_t to behave much more smoothly, and that for smooth controls the
non-commutative terms in the expansion of the time ordered exponential (12) are asymptotically
lower order than the direct term A^*(s) (as X_init \to 0). This is indeed confirmed numerically, both for
the actual residual controls of the gradient flow path and for other random controls.
5
Empirical Evidence
Beyond the matrix reconstruction experiments of Section 2, we also conducted experiments with
similarly simulated matrix completion problems, including problems where entries are sampled from
power-law distributions (thus not satisfying incoherence), as well as matrix completion problems on
non-simulated Movielens data. In addition to gradient descent, we also looked more directly at the
gradient flow ODE (3) and used a numerical ODE solver provided as part of SciPy [8] to solve
(3), though still with a finite (non-zero) initialization. We also emulated staying on a valid "steering
path" by numerically approximating the time ordered exponential of (12): for a finite discretization
\epsilon, instead of moving linearly in the direction of the gradient \nabla f(U) (as in gradient descent), we
multiply X_t on the right and left by e^{-\epsilon A^*(r_t)}. The results of these experiments are summarized in
Figure 3.
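A sketch (our reconstruction, not the released experiment code) of that "steering path" emulation; the step size, iteration count and function names are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def steering_path(As, y, X0, eps=1e-2, steps=2000):
    """Approximate the time ordered exponential (12) for the flow (4)."""
    X = X0.copy()
    for _ in range(steps):
        r = np.einsum('kij,ij->k', As, X) - y  # residuals r_t = A(X_t) - y
        S = np.einsum('k,kij->ij', r, As)      # A*(r_t)
        E = expm(-eps * S)
        X = E @ X @ E                          # multiply left and right by e^{-eps A*(r_t)}
    return X
```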
In these experiments, we again observe trends similar to those in Section 2. In some panels of
Figure 3, we do see a discernible gap between the minimum nuclear norm global optima and the
nuclear norm of the gradient flow solution with \|U_0\|_F = 10^{-4}. This discrepancy could either be
due to starting at a non-limit point of U_0, or to numerical issues arising from approximations to the ODE,
or it could potentially suggest a weakening of the conjecture. Even if the latter were the case, the
experiments so far provide strong evidence for at least approximate versions of our conjecture being
true under a wide range of problems.
[Figure 3: three sub-figures (i)-(iii), each with panels (a) low rank X^*, (b) low nuclear norm X^*, and (c) low rank X^* with m = nr/4, reporting the nuclear norms of the solutions found by the different methods.]

(i) Gaussian random measurements. We report the nuclear norm of the gradient flow solutions from three
different approximations to (3): a numerical ODE solver (ODE approx., \|U_0\|_F = 10^{-4}), the time ordered
exponential specified in (12) (Time ordered exp., \|U_0\|_F = 10^{-4}, \epsilon = 0.1), and standard gradient descent
with a small step size (Gradient descent, \|U_0\|_F = 10^{-4}, \eta = 10^{-3}). The nuclear norm of the solution from
gradient descent on X space (X_{gd}) and the minimum nuclear norm global minimum (\min \|X\|_* s.t. A(X) = y)
are provided as references. In (a) X^* is rank r and m = 3nr, in (b) X^* has a decaying spectrum with
\|X^*\|_* = \sqrt{r}\,\|X^*\|_F and m = 3nr, and in (c) X^* is rank r with m = nr/4, where n = 50, r = 2.

(ii) Uniform matrix completion: for each i, A_i measures a uniform random entry of X^*. Details on X^*, the
number of measurements, and the legends follow Figure 3(i).

(iii) Power law matrix completion: for each i, A_i measures a random entry of X^* chosen according to a power
law distribution. Details on X^*, the number of measurements, and the legends follow Figure 3(i).

(iv) Benchmark movie recommendation dataset (Movielens 100k). The dataset contains ~100k ratings from
n_1 = 943 users on n_2 = 1682 movies. In this problem, gradient updates are performed on the asymmetric
matrix factorization space X = UV^\top with dimension d = \min(n_1, n_2). The training data is completely fit,
to < 10^{-2} error. Test error is computed on held-out data of 10 ratings per user.

    |              | \operatorname{argmin}_{A(X)=y} \|X\|_* | Gradient descent (\|U_0\|_F = 10^{-3}, \eta = 10^{-2}) | X_{gd} |
    | Test error   | 0.2880 | 0.2631 | 1.000  |
    | Nuclear norm | 8391   | 8876   | 20912  |

Here we are not interested in the recommendation performance (test error) itself, but in observing the bias of
gradient flow with initialization close to zero toward returning a low nuclear norm solution; the test error is
provided merely to demonstrate the effectiveness of such a bias in this application. Also, due to the scale of the
problem, we only report a coarse approximation of the gradient flow (3) from gradient descent with
\|U_0\|_F = 10^{-3}, \eta = 10^{-2}.

Figure 3: Additional matrix reconstruction experiments
Exhaustive search. Finally, we also performed experiments on an exhaustive grid search over small
problems, capturing essentially all possible problems of this size. We performed an exhaustive grid
search over matrix completion problem instances in symmetric p.s.d. 3 x 3 matrices. With m = 4,
there are 15 unique masks, or \{A_i\}_{i \in [4]}'s, that are valid symmetric matrix completion observations.
For each mask, we fill the m = 4 observations with all possible combinations of 10 uniformly spaced
values in the interval [-1, 1]. This gives us a total of 15 \cdot 10^4 problem instances. Of these problem
instances, we discard the ones that do not have a valid PSD completion and run the ODE solver on
every remaining instance with a random U_0 such that \|U_0\|_F = \bar{\alpha}, for different values of \bar{\alpha}. Results
on the deviation from the minimum nuclear norm are reported in Figure 4. For small \bar{\alpha} = 10^{-5}, 10^{-3},
most instances of our grid search returned solutions with near-minimal nuclear norms,
and the deviations are within the possibility of numerical error. This behavior decays for \bar{\alpha} = 1.
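The count of 15 masks quoted above can be checked combinatorially: a symmetric 3 x 3 matrix has 6 distinct entries (3 diagonal and 3 off-diagonal), and choosing m = 4 of them gives C(6, 4) = 15. A two-line check (ours):

```python
from itertools import combinations

positions = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]  # distinct entries of a symmetric 3x3 matrix
print(len(list(combinations(positions, 4))))                  # 15 valid masks for m = 4
```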
[Figure 4 panels: histograms of \Delta(X_\infty) for (a) \bar{\alpha} = 10^{-5}, (b) \bar{\alpha} = 10^{-3}, (c) \bar{\alpha} = 1; the y-axis counts the number of experiments.]

Figure 4: Histogram of the relative sub-optimality of the nuclear norm of X_\infty in the grid search experiments. We plot the
histogram of \Delta(X_\infty) = (\|X_\infty\|_* - \|X_{min}\|_*)/\|X_{min}\|_*, where \|X_{min}\|_* = \min_{A(X)=y} \|X\|_*. The panels correspond to different
values of the norm of the initialization \bar{\alpha} = \|U_0\|_F: (Left) \bar{\alpha} = 10^{-5}, (Center) \bar{\alpha} = 10^{-3}, and (Right) \bar{\alpha} = 1.
6
Discussion
It is becoming increasingly apparent that biases introduced by optimization procedures, especially
for under-determined problems, are playing a key role in learning. Yet, so far we have very little
understanding of the implicit biases associated with different non-convex optimization methods. In
this paper we carefully study such an implicit bias in a two-layer non-convex problem, identify it, and
show how, even though there is no difference in the model class (problems (1) and (2) are equivalent
when d = n, both with very high capacity), the non-convex modeling induces a potentially much
more useful implicit bias.
We also discuss how the bias in the non-convex case is much more delicate than in convex gradient
descent: since we are not restricted to a flat manifold, the bias introduced by optimization depends
on the step sizes taken. Furthermore, for linear least squares problems (i.e. methods based on the
gradients w.r.t. X in our formulation), any global optimization method that uses linear combinations
of gradients, including conjugate gradient descent, Nesterov acceleration and momentum methods,
remains on the manifold spanned by the gradients, and so leads to the same minimum norm solution.
This is not true if the manifold is curved, as using momentum or past gradients will lead us to
"shoot off" the manifold.
Much of the recent work on non-convex optimization, and matrix factorization in particular, has
focused on global convergence: whether, and how quickly, we converge to a global minimum. In
contrast, we address the complementary question of which global minimum we converge to. There has
also been much work on methods ensuring good matrix reconstruction or generalization based on
structural and statistical properties. We do not assume any such properties, nor that reconstruction is
possible, or even that there is anything to reconstruct: for any problem of the form (1) we conjecture
that (4) leads to the minimum nuclear norm solution. Whether such a minimum nuclear norm solution
is good for reconstruction or learning is a separate issue already well addressed by the above literature.
We based our conjecture on extensive numerical simulations, with random, skewed, reconstructible,
non-reconstructible, incoherent, non-incoherent, and exhaustively enumerated problems, some
of which are reported in Section 5. We believe our conjecture holds, perhaps with some additional
technical conditions or corrections. We explain how the conjecture is related to control on manifolds
and the time ordered exponential and discuss a possible approach for proving it.
References
[1] Yonatan Amit, Michael Fink, Nathan Srebro, and Shimon Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the 24th International Conference on Machine Learning, pages 17-24. ACM, 2007.
[2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. Advances in Neural Information Processing Systems, 19:41, 2007.
[3] Srinadh Bhojanapalli, Behnam Neyshabur, and Nathan Srebro. Global optimality of local search for low rank matrix recovery. Advances in Neural Information Processing Systems, 2016.
[4] Samuel Burer and Renato DC Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329-357, 2003.
[5] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[6] Rina Foygel and Nathan Srebro. Concentration-based guarantees for low-rank matrix reconstruction. In COLT, pages 315-340, 2011.
[7] Rong Ge, Jason D. Lee, and Tengyu Ma. Matrix completion has no spurious local minimum. In Advances in Neural Information Processing Systems, pages 2973-2981, 2016.
[8] Eric Jones, Travis Oliphant, Pearu Peterson, et al. SciPy: Open source scientific tools for Python, 2001.
[9] Michel Journée, Francis Bach, P.-A. Absil, and Rodolphe Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327-2351, 2010.
[10] Raghunandan Hulikal Keshavan. Efficient algorithms for collaborative filtering. PhD thesis, Stanford, 2012.
[11] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2016.
[12] Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient descent only converges to minimizers. In 29th Annual Conference on Learning Theory, 2016.
[13] Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, and Nathan Srebro. Geometry of optimization and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017.
[14] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations, 2015.
[15] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[16] Nathan Srebro, Noga Alon, and Tommi S. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances in Neural Information Processing Systems, pages 1321-1328, 2005.
[17] Nathan Srebro and Adi Shraibman. Rank, trace-norm and max-norm. In International Conference on Computational Learning Theory, pages 545-560. Springer, 2005.
[18] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
Random Data Contamination
Matan Gavish
School of Computer Science and Engineering
Hebrew University
Jerusalem, Israel
[email protected]
Danny Barash
School of Computer Science and Engineering
Hebrew University
Jerusalem, Israel
[email protected]
Abstract
A low rank matrix X has been contaminated by uniformly distributed noise, missing
values, outliers and corrupt entries. Reconstruction of X from the singular values
and singular vectors of the contaminated matrix Y is a key problem in machine
learning, computer vision and data science. In this paper, we show that common
contamination models (including arbitrary combinations of uniform noise, missing
values, outliers and corrupt entries) can be described efficiently using a single
framework. We develop an asymptotically optimal algorithm that estimates X by
manipulation of the singular values of Y , which applies to any of the contamination
models considered. Finally, we find an explicit signal-to-noise cutoff, below which
estimation of X from the singular value decomposition of Y must fail, in a well-defined sense.
1
Introduction
Reconstruction of low-rank matrices from noisy and otherwise contaminated data is a key problem in
machine learning, computer vision and data science. Well-studied problems such as dimension reduction [3], collaborative filtering [24, 28], topic models [13], video processing [21], face recognition
[35], predicting preferences [26], analytical chemistry [29] and background-foreground separation
[4] all reduce, under popular approaches, to low-rank matrix reconstruction. A significant part of the
literature on these problems is based on the singular value decomposition (SVD) as the underlying
algorithmic component, see e.g. [7, 19, 23].
Understanding and improving the behavior of SVD in the presence of random data contamination
therefore arises as a crucially important problem in machine learning. While this is certainly a
classical problem [14, 17, 20], it remains of significant interest, owing in part to the emergence of
low-rank matrix models for matrix completion and collaborative filtering [9, 34].
Let X be an m-by-n unknown low-rank matrix of interest (m \le n), and assume that we only observe
the data matrix Y, which is a contaminated or noisy version of X. Let

    Y = \sum_{i=1}^{m} y_i u_i v_i^\top   (1)

be the SVD of the data matrix Y. Any algorithm based on the SVD essentially aims to obtain an
estimate for the target matrix X from (1). Most practitioners simply form the Truncated SVD (TSVD)
estimate [18]

    \hat{X}_r = \sum_{i=1}^{r} y_i u_i v_i^\top   (2)
where r is an estimate of rank(X), whose choice in practice tends to be ad hoc [15].
Recently, [10, 16, 32] have shown that under white additive noise, it is useful to apply a carefully
designed shrinkage function \eta : \mathbb{R} \to \mathbb{R} to the data singular values, and proposed estimators of the
form

    \hat{X}_\eta = \sum_{i=1}^{n} \eta(y_i)\, u_i v_i^\top.   (3)
Such estimators are extremely simple to use, as they involve only simple manipulation of the data
singular values. Interestingly, in the additive white noise case, it was shown that a unique optimal
shrinkage function \eta(y) exists, which asymptotically delivers the same performance as the best
possible rotation-invariant estimator based on the data Y [16]. Singular value shrinkage thus emerged
as a simple yet highly effective method for improving the SVD in the presence of white additive
noise, with the unique optimal shrinker as a natural choice for the shrinkage function. A typical form
of optimal singular value shrinker is shown in Figure 1 below, left panel.
Shrinkage of singular values, an idea that can be traced back to Stein's groundbreaking work on
covariance estimation from the 1970's [33], is a natural generalization of the classical TSVD. Indeed,
\hat{X}_r is equivalent to shrinkage with the hard thresholding shrinker \eta(y) = y\,\mathbf{1}_{y \ge \lambda}, as (2) is equivalent
to

    \hat{X}_\lambda = \sum_{i=1}^{n} \mathbf{1}_{y_i \ge \lambda}\, y_i u_i v_i^\top   (4)
with a specific choice of the so-called hard threshold \lambda. While the choice of the rank r for the truncation
point of TSVD is often ad hoc and based on gut-feeling methods such as the Scree Plot method [11], its
equivalent formulation, namely hard thresholding of singular values, allows formal and systematic
analysis. In fact, restricting attention to hard thresholds alone, [15] has shown that under white
additive noise there exists a unique asymptotically optimal choice of hard threshold for singular
values. The optimal hard threshold is a systematic, rational choice for the number of singular values
that should be included in a truncated SVD of noisy data. [27] has proposed an algorithm that finds
\hat{X}_\eta in the presence of additive noise and missing values, but has not derived an explicit shrinker.
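A tiny numpy illustration (ours, not the paper's code) of the TSVD/hard-thresholding equivalence discussed above: truncating at rank r is the same as hard-thresholding the singular values at any \lambda between y_{r+1} and y_r.

```python
import numpy as np

def hard_threshold_svd(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s >= lam                        # hard thresholding of the singular values
    return (U[:, keep] * s[keep]) @ Vt[keep]

def truncated_svd(Y, r):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]     # equals hard_threshold_svd for y_{r+1} < lam <= y_r
```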
1.1
Overview of main results
In this paper, we extend this analysis to common data contaminations that go well beyond additive
white noise, including an arbitrary combination of additive noise, multiplicative noise, missing-at-random entries, uniformly distributed outliers and uniformly distributed corrupt entries.
The primary contribution of this paper is a formal proof that there exists a unique asymptotically
optimal shrinker for singular values under uniformly random data contaminations, as well as a unique
asymptotically optimal hard threshold. Our results are based on a novel, asymptotically precise
description of the effect of these data contaminations on the singular values and the singular vectors of
the data matrix, extending the technical contribution of [16, 27, 32] to the setting of general uniform
data contamination.
General contamination model. We introduce the model

    Y = A \odot X + B   (5)

where X is the target matrix to be recovered, and A, B are random matrices with i.i.d. entries. Here,
(A \odot B)_{i,j} = A_{i,j} B_{i,j} is the Hadamard (entrywise) product of A and B.
Assume that A_{i,j} \stackrel{iid}{\sim} (\mu_A, \sigma_A^2), meaning that the entries of A are i.i.d. draws from a distribution
with mean \mu_A and variance \sigma_A^2, and that B_{i,j} \stackrel{iid}{\sim} (0, \sigma_B^2). In Section 2 we show that for various
choices of the matrices A and B, this model represents a broad range of uniformly distributed random
contaminations, including an arbitrary combination of additive noise, multiplicative noise, missing-at-random entries, uniformly distributed outliers and uniformly distributed corrupt entries. As a simple
example, if B \equiv 0 and P(A_{i,j} = 1) = \rho, then Y simply has missing-at-random entries.
To quantify what makes a "good" singular value shrinker \eta for use in (3), we use the standard Mean
Square Error (MSE) metric

    L(\eta \mid X) = \|\hat{X}_\eta(Y) - X\|_F^2.

Using the methods of [16], our results can easily be extended to other error metrics, such as the
nuclear norm or operator norm losses. Roughly speaking, an optimal shrinker \eta^* has the property
that, asymptotically as the matrix size grows,

    L(\eta^* \mid X) \le L(\eta \mid X)

for any other shrinker \eta and any low-rank target matrix X.
The design of optimal shrinkers requires a subtle understanding of the random fluctuations of the data
singular values y_1, ..., y_n, which are caused by the random contamination. Such results in random
matrix theory are generally hard to prove, as there are nontrivial correlations between y_i and y_j,
i \ne j. Fortunately, in most applications it is very reasonable to assume that the target matrix X is
low rank. This allows us to overcome this difficulty by following [15, 27, 32] and considering an
asymptotic model for low-rank X, inspired by Johnstone's Spiked Covariance Model [22], in which
the correlation between y_i and y_j, for i \ne j, vanishes asymptotically.
We state our main results informally at first. The first main result of this paper is the existence of a
unique asymptotically optimal hard threshold \lambda^* in (4).
Importantly, as E(Y) = \mu_A X, to apply hard thresholding to Y = A \odot X + B we must from now on
define

    \hat{X}_\lambda = \frac{1}{\mu_A} \sum_{i=1}^{n} \mathbf{1}_{y_i > \lambda}\, y_i u_i v_i^\top.
Theorem 1. (Informal.) Let X be an m-by-n low-rank matrix and assume that we observe the
contaminated data matrix Y given by the general contamination model (5). Then there exists a
unique optimal (Def. 3) hard threshold \lambda^* for the singular values of Y, given by

    \lambda^* = \sigma_B \sqrt{\Big(c + \frac{1}{c}\Big)\Big(c + \frac{\beta}{c}\Big)}

where \beta = m/n and c = \sqrt{\big(1 + \beta + \sqrt{1 + 14\beta + \beta^2}\big)/2}.
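Theorem 1 is straightforward to transcribe into code; the following helper (ours) evaluates \lambda^*. For \beta = 1 and \sigma_B = 1 it reduces to the familiar 4/\sqrt{3} threshold of [15].

```python
import numpy as np

def optimal_hard_threshold(beta, sigma_B):
    c = np.sqrt((1 + beta + np.sqrt(1 + 14 * beta + beta ** 2)) / 2)
    return sigma_B * np.sqrt((c + 1 / c) * (c + beta / c))

print(optimal_hard_threshold(1.0, 1.0))  # 4 / sqrt(3) ~ 2.309
```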
Our second main result is the existence of a unique asymptotically optimal shrinkage function \eta^* in
equation (3). We calculate this shrinker explicitly:

Theorem 2. (Informal.) Assume everything as in Theorem 1. Then there exists a unique optimal (Def.
3) shrinker \eta^* for the singular values of Y, given by

    \eta^*(y) = \begin{cases} \dfrac{\sigma_B^2}{y\,\mu_A} \sqrt{\Big(\big(\tfrac{y}{\sigma_B}\big)^2 - \beta - 1\Big)^2 - 4\beta} & y \ge \sigma_B(1 + \sqrt{\beta}) \\[4pt] 0 & y < \sigma_B(1 + \sqrt{\beta}) \end{cases}
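A direct transcription (ours) of the optimal shrinker of Theorem 2; the inputs are the data singular values, the aspect ratio \beta = m/n, and the contamination levels \mu_A and \sigma_B of model (5). The function name is ours.

```python
import numpy as np

def optimal_shrinker(y, beta, mu_A, sigma_B):
    y = np.asarray(y, dtype=float)
    w = (y / sigma_B) ** 2 - beta - 1
    shrunk = sigma_B ** 2 / (y * mu_A) * np.sqrt(np.maximum(w ** 2 - 4 * beta, 0))
    return np.where(y >= sigma_B * (1 + np.sqrt(beta)), shrunk, 0.0)  # zero below the bulk edge
```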
We also discover that for each contamination model, there is a critical signal-to-noise cutoff, below
which X cannot be reconstructed from the singular values and vectors of Y. Specifically, let \eta_0 be
the zero singular value shrinker, \eta_0(y) \equiv 0, so that \hat{X}_{\eta_0}(Y) \equiv 0. Define the critical signal level for a
shrinker \eta by

    x_{critical}(\eta) = \inf_x \{x : L(\eta \mid X) < L(\eta_0 \mid X)\}

where X = x\,\tilde{u}\tilde{v}^\top is an arbitrary rank-1 matrix with singular value x. In other words, x_{critical}(\eta)
is the smallest singular value of the target matrix for which \eta still outperforms the trivial zero
shrinker \eta_0. As we show in Section 4, a target matrix X with a singular value below x_{critical}(\eta)
cannot be reliably reconstructed using \eta. The critical signal level for the optimal shrinker \eta^* is
of special importance, since a target matrix X with a singular value below x_{critical}(\eta^*) cannot be
reliably reconstructed using any shrinker \eta. Restricting attention to hard thresholds only, we define
x_{critical}(\lambda), the critical level for a hard threshold, similarly. Again, singular values of X that fall
below x_{critical}(\lambda^*) cannot be reliably reconstructed using any hard threshold.
Our third main result is the explicit calculation of these critical signal levels:

Theorem 3. (Informal.) Assume everything as in Theorem 1 and let c be as in Theorem 1. Let \eta^* be
the optimal shrinker from Theorem 2 and let \lambda^* be the optimal hard threshold from Theorem 1. The
critical signal levels for \eta^* and \lambda^* are given by:

    x_{critical}(\eta^*) = (\sigma_B/\mu_A) \cdot \beta^{1/4}
    x_{critical}(\lambda^*) = (\sigma_B/\mu_A) \cdot c.
Finally, one might ask what the improvement is, in terms of mean square error, that is guaranteed
by using the optimal shrinker and optimal threshold. As discussed below, existing methods are either
infeasible in terms of running time on medium and large matrices, or lack a theory that can predict
the reconstruction mean square error. For lack of a better candidate, we compare the optimal shrinker
and optimal threshold to the default method, namely, TSVD.

Theorem 4. (Informal.) Consider \beta = 1, and denote the worst-case mean square error of TSVD, \eta^*
and \lambda^* by M_{TSVD}, M_{\eta^*} and M_{\lambda^*}, respectively, over a target matrix of low rank r. Then

    M_{TSVD} = 5r\,(\sigma_B/\mu_A)^2
    M_{\eta^*} = 2r\,(\sigma_B/\mu_A)^2
    M_{\lambda^*} = 3r\,(\sigma_B/\mu_A)^2.

Indeed, the optimal shrinker offers a significant performance improvement over the TSVD baseline (specifically, an improvement of 3r(\sigma_B/\mu_A)^2).
[Figure 1 panels: left, the shrinker curve \eta(y) for \beta = 0.3, 0.6, 1; right, a heatmap over the (\rho, x) plane with overlaid "Threshold Critical" and "Shrinker Critical" curves.]

Figure 1: Left: Optimal shrinker for additive noise and missing-at-random contamination. Right:
Phase plane for critical signal levels; see Section 6, Simulation 2.
Our main results allow easy calculation of the optimal threshold, optimal shrinkage and signal-to-noise
cutoffs for various specific contamination models. For example:

1. Additive noise and missing-at-random. Let X be an m-by-n low-rank matrix. Assume
that some entries are completely missing and the rest suffer white additive noise. Formally,
we observe the contaminated matrix

    Y_{i,j} = \begin{cases} X_{i,j} + Z_{i,j} & \text{w.p. } \rho \\ 0 & \text{w.p. } 1 - \rho \end{cases}

where Z_{i,j} \stackrel{iid}{\sim} (0, \sigma^2), namely, follows an unknown distribution with mean 0 and variance
\sigma^2. Let \beta = m/n. Theorem 1 implies that in this case, the optimal hard threshold for the
singular values of Y is

    \lambda^* = \sqrt{\sigma^2 \rho\, (c + 1/c)(c + \beta/c)}

where c = \sqrt{\big(1 + \beta + \sqrt{1 + 14\beta + \beta^2}\big)/2}. In other words, the optimal location (w.r.t. mean
square error) at which to truncate the singular values of Y, in order to recover X, is given by \lambda^*. The
optimal shrinker from Theorem 2 for this contamination mode may be calculated similarly,
and is shown in Figure 1, left panel. By Theorem 4, the improvement in mean square
error obtained by using the optimal shrinker, over the TSVD baseline, is 3r\sigma^2/\rho, quite a
significant improvement. (A minimal code sketch of this recipe appears right after these examples.)
2. Additive noise and corrupt-at-random. Let X be an m-by-n low-rank matrix. Assume
that some entries are irrecoverably corrupt (replaced by random entries), and the rest suffer
white additive noise. Formally,

    Y_{i,j} = \begin{cases} X_{i,j} + Z_{i,j} & \text{w.p. } \rho \\ W_{i,j} & \text{w.p. } 1 - \rho \end{cases}

where Z_{i,j} \stackrel{iid}{\sim} (0, \sigma^2), W_{i,j} \stackrel{iid}{\sim} (0, \tau^2), and \tau is typically large. Let \tilde{\sigma} = \sqrt{\rho\sigma^2 + (1-\rho)\tau^2}.
The optimal shrinker, which should be applied to the singular values of Y, is given by:

    \eta(y) = \begin{cases} \dfrac{\tilde{\sigma}^2}{y\rho} \sqrt{\big((y/\tilde{\sigma})^2 - \beta - 1\big)^2 - 4\beta} & y \ge \tilde{\sigma}(1 + \sqrt{\beta}) \\[4pt] 0 & y < \tilde{\sigma}(1 + \sqrt{\beta}) \end{cases}

By Theorem 4, the improvement in mean square error, obtained by using the optimal
shrinker, over the TSVD baseline, is 3r(\rho\sigma^2 + (1-\rho)\tau^2)/\rho^2.
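A sketch (ours, not the authors' code) of the full reconstruction recipe for Example 1, reusing the optimal_shrinker helper from the sketch after Theorem 2: for additive noise and missing-at-random we have \mu_A = \rho and \sigma_B = \sqrt{\rho}\,\sigma, and the shrinker \eta^* already absorbs the 1/\mu_A rescaling at large y. We assume the singular values and \sigma are on the same (normalized) scale as in the paper's signal model.

```python
import numpy as np

def denoise_missing_at_random(Y, rho, sigma):
    beta = Y.shape[0] / Y.shape[1]             # aspect ratio m/n, assuming m <= n
    sigma_B = np.sqrt(rho) * sigma             # effective noise level of B = Z*M
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = optimal_shrinker(s, beta, rho, sigma_B)  # from the Theorem 2 sketch
    return (U * s_shrunk) @ Vt                 # X_hat = sum_i eta*(y_i) u_i v_i'
```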
1.2
Related Work
The general data contamination model we propose includes as special cases several modes extensively
studied in the literature, including missing-at-random and outliers. While it is impossible to propose a
complete list of algorithms to handle such data, we offer a few pointers, organized around the notions
of robust principal component analysis (PCA) and matrix completion. To the best of our knowledge,
the precise effect of general data contamination on the SVD (or the closely related PCA) has not been
documented thus far. The approach we propose, based on careful manipulation of the data singular
values, enjoys three distinct advantages. One, its running time is not prohibitive; indeed, it involves a
small yet important modification on top of the SVD or TSVD, so that it is available whenever the
SVD is available. Two, it is well understood and its performance (say, in mean square error) can be
reliably predicted by the available theory. Three, to the best of our knowledge, none of the approaches
below have become mainstream, and most practitioners still turn to the SVD, even in the presence of
data contamination. Our approach can easily be used in practice, as it relies on the well-known and
very widely used SVD, and can be implemented as a simple modification on top of the existing SVD
implementations.
Robust Principal Component Analysis (RPCA). In RPCA, one assumes Y = X + W, where
X is the low rank target matrix and W is a sparse outliers matrix. Classical approaches such as
influence functions [20], multivariate trimming [17] and random sampling techniques [14] lack a
formal theoretical framework and are not well understood. More modern approaches based on convex
optimization [9, 34] proposed reconstructing X from Y via the nuclear norm minimization

    \min_X \|X\|_* + \lambda \|Y - X\|_1,

whose runtime and memory requirements are both prohibitively large for medium and large matrices.
Matrix Completion. There are numerous heuristic approaches for data analysis in the presence of
missing values [5, 30, 31]. To the best of our knowledge, there are no formal guarantees of their
performance. When the target matrix is known to be low rank, the reconstruction problem is known
as matrix completion. [7-9] and numerous other authors have shown that a semi-definite program
may be used to stably recover the target matrix, even in the presence of additive noise. Here too, the
runtime and memory requirements are both prohibitively large for medium and large matrices, making
these algorithms infeasible in practice.
2
A Unified Model for Uniformly Distributed Contamination
Contamination modes encountered in practice are best described by a combination of primitive modes,
shown in Table 1 below. These primitive contamination modes fit into a single template:
Definition 1. Let A and B be two random variables, and assume that all moments of A and B are
bounded. Define the contamination link function

    f_{A,B}(x) = Ax + B.

Given a matrix X, define the corresponding contaminated matrix Y with entries

    Y_{i,j} \stackrel{indep.}{\sim} f_{A,B}(X_{i,j}).   (6)
Now observe that each of the primitive modes above corresponds to a different choice of random
variables A and B, as shown in Table 1. Specifically, each of the primitive modes is described by a
different assignment to A and B. We employ three different random variables in these assignments:
Z \stackrel{iid}{\sim} (0, \sigma^2/n), a random variable describing multiplicative or additive noise; W \stackrel{iid}{\sim} (0, \tau^2/n), a
random variable describing a large "outlier" measurement; and M \stackrel{iid}{\sim} Bernoulli(\rho), describing a
random choice of "defective" entries, such as a missing value, an outlier and so on.
Table 1: Primitive modes fit into the model (6). By convention, Y is m-by-n, Z \stackrel{iid}{\sim} (0, \sigma^2/n) denotes a
noise random variable, W \stackrel{iid}{\sim} (0, \tau^2/n) denotes an outlier random variable and M \stackrel{iid}{\sim} Bernoulli(\rho) is a
contaminated-entry random variable.

    | mode                        | model                                           | A | B      | levels       |
    | i.i.d. additive noise       | Y_{i,j} = X_{i,j} + Z_{i,j}                     | 1 | Z      | \sigma       |
    | i.i.d. multiplicative noise | Y_{i,j} = X_{i,j} Z_{i,j}                       | Z | 0      | \sigma       |
    | missing-at-random           | Y_{i,j} = M_{i,j} X_{i,j}                       | M | 0      | \rho         |
    | outliers-at-random          | Y_{i,j} = X_{i,j} + M_{i,j} W_{i,j}             | 1 | MW     | \rho, \tau   |
    | corruption-at-random        | Y_{i,j} = M_{i,j} X_{i,j} + (1 - M_{i,j})W_{i,j} | M | (1-M)W | \rho, \tau   |
Actual datasets rarely demonstrate a single primitive contamination mode. To adequately describe
contamination observed in practice, one usually needs to combine two or more of the primitive
contamination modes into a composite mode. While there is no point in enumerating all possible
combinations, Table 2 offers a few notable composite examples, using the framework (6). Many other
examples are possible of course.
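A generator (ours) for the template (6), shown for the composite "additive noise and missing-at-random" mode of Table 2 (A = M, B = ZM); other modes just swap the samplers for A and B. The \sigma^2/n normalization follows Table 1, and the function name is ours.

```python
import numpy as np

def contaminate(X, rho, sigma, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    M = rng.uniform(size=(m, n)) < rho               # M ~ Bernoulli(rho)
    Z = rng.normal(0.0, sigma / np.sqrt(n), (m, n))  # Z ~ (0, sigma^2 / n)
    return M * X + Z * M                             # Y = A.X + B with A = M, B = ZM
```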
3
Signal Model
Following [32] and [15], as we move toward our formal results, we consider an asymptotic
model inspired by Johnstone's Spiked Model [22]. Specifically, we consider a sequence of
increasingly larger data target matrices X_n, and corresponding data matrices Y_n \stackrel{iid}{\sim} f_{A_n,B_n}(X_n). We
make the following assumptions regarding the matrix sequence \{X_n\}:

A1 Limiting aspect ratio: The matrix dimension sequence m_n-by-n converges: m_n/n \to \beta as
n \to \infty. To simplify the results, we assume 0 < \beta \le 1.

A2 Fixed signal column span: Let the rank r > 0 be fixed and choose a vector x \in \mathbb{R}^r with
coordinates x = (x_1, ..., x_r) such that x_1 > ... > x_r > 0. Assume that for all n,

    X_n = \tilde{U}_n\, \mathrm{diag}(x_1, ..., x_r)\, \tilde{V}_n^\top

is an arbitrary singular value decomposition of X_n.
Table 2: Some examples of composite contamination modes and how they fit into the model (6). Z, W, M are
the same as in Table 1.

    | mode                                       | A  | B            | levels               |
    | Additive noise and missing-at-random       | M  | ZM           | \rho, \sigma         |
    | Additive noise and corrupt-at-random       | M  | ZM + W(1-M)  | \rho, \sigma, \tau   |
    | Multiplicative noise and corrupt-at-random | ZM | W(1-M)       | \rho, \sigma, \tau   |
    | Additive noise and outliers                | 1  | Z + W(1-M)   | \sigma, \tau         |
A3 Incoherence of the singular vectors of X_n: We make one of the following two assumptions
regarding the singular vectors of X_n:

A3.1 X_n is random with an orthogonally invariant distribution. Specifically, \tilde{U}_n and \tilde{V}_n
follow the Haar distribution on orthogonal matrices of size m_n and n, respectively.

A3.2 The singular vectors of X_n are non-concentrated. Specifically, each left singular vector
\tilde{u}_{n,i} of X_n (the i-th column of \tilde{U}_n) and each right singular vector \tilde{v}_{n,j} of X_n (the j-th
column of \tilde{V}_n) satisfy^1

    \|\tilde{u}_{n,i}\|_\infty \le C\,\frac{\log^D(m_n)}{\sqrt{m_n}} \quad \text{and} \quad \|\tilde{v}_{n,j}\|_\infty \le C\,\frac{\log^D(n)}{\sqrt{n}}

for any i, j and fixed constants C, D.
Definition 2. (Signal model.) Let A_n \stackrel{iid}{\sim} (\mu_A, \sigma_A^2/n) and B_n \stackrel{iid}{\sim} (0, \sigma_B^2/n) have bounded
moments. Let X_n follow assumptions [A1]-[A3] above. We say that the matrix sequence
Y_n = f_{A_n,B_n}(X_n) follows our signal model, where f_{A,B}(X) is as in Definition 1. We further denote
X_n = \sum_{i=1}^{r} x_i \tilde{u}_{n,i} \tilde{v}_{n,i}^\top for the singular value decomposition of X_n and Y_n = \sum_{i=1}^{m} y_{n,i} u_{n,i} v_{n,i}^\top
for the singular value decomposition of Y_n.
4
Main Results
Having described the contamination and the signal model, we can now formulate our main results.
All proofs are deferred to the Supporting Information. Let X_n and Y_n follow our signal model,
Definition 2, and write x = (x_1, ..., x_r) for the non-zero singular values of X_n. For a shrinker \eta,
we write

    L_\infty(\eta \mid x) = \lim_{n \to \infty} \|\hat{X}_\eta(Y_n) - X_n\|_F^2 \quad a.s.,

assuming the limit exists almost surely. The special case of hard thresholding at \lambda is denoted
L_\infty(\lambda \mid x).

Definition 3. Optimal shrinker and optimal threshold. A shrinker \eta^* is called optimal if

    L_\infty(\eta^* \mid x) \le L_\infty(\eta \mid x)

for any shrinker \eta, any r \ge 1 and any x = (x_1, ..., x_r). Similarly, a threshold \lambda^* is called optimal if
L_\infty(\lambda^* \mid x) \le L_\infty(\lambda \mid x) for any threshold \lambda, any r \ge 1 and any x = (x_1, ..., x_r).

With these definitions, our main results Theorem 1 and Theorem 2 become formal. To make Theorem
3 formal, we need the following lemma and definition.
Lemma 1. Decomposition of the asymptotic mean square error. Let X_n and Y_n follow our signal
model (Definition 2), write x = (x_1, ..., x_r) for the non-zero singular values of X_n, and let \eta be
the optimal shrinker. Then the limit L_\infty(\eta \mid x) exists a.s., and L_\infty(\eta \mid x) = \sum_{i=1}^{r} L_1(\eta \mid x_i), where

    L_1(\eta \mid x) = \begin{cases} x^2\left(1 - \dfrac{(t^4 - \beta)^2}{(t^4 + \beta t^2)(t^4 + t^2)}\right) & t \ge \beta^{1/4} \\[4pt] x^2 & t < \beta^{1/4} \end{cases}

where t = (\mu_A \cdot x)/\sigma_B. Similarly, for a threshold \lambda we have L_\infty(\lambda \mid x) = \sum_{i=1}^{r} L_1(\lambda \mid x_i) with

    L_1(\lambda \mid x) = \begin{cases} \dfrac{\sigma_B^2}{\mu_A^2}\left[\left(t + \dfrac{1}{t}\right)\left(t + \dfrac{\beta}{t}\right) - t^2 + \dfrac{2\beta}{t^2}\right] & \mu_A x \ge x(\lambda) \\[4pt] x^2 & \mu_A x < x(\lambda) \end{cases}

where

    x(y) = \begin{cases} \dfrac{\sigma_B}{\sqrt{2}\,\mu_A} \sqrt{(y/\sigma_B)^2 - \beta - 1 + \sqrt{\big((y/\sigma_B)^2 - \beta - 1\big)^2 - 4\beta}} & y \ge \sigma_B(1 + \sqrt{\beta}) \\[4pt] 0 & y < \sigma_B(1 + \sqrt{\beta}) \end{cases}   (7)
^1 The incoherence assumption is widely used in the related literature [6, 12, 27], and asserts that the singular
vectors are spread out, so that X is not sparse and does not share singular subspaces with the noise.
Definition 4. Let \eta_0 be the zero singular value shrinker, \eta_0(y) \equiv 0, so that \hat{X}_{\eta_0}(Y) \equiv 0. Let \eta be a
singular value shrinker. The critical signal level for \eta is

    x_{critical}(\eta) = \inf_x \{x : L_1(\eta \mid x) < L_1(\eta_0 \mid x)\}.

As we can see, the asymptotic mean square error decomposes over the singular values of the target
matrix, x_1, ..., x_r. Each value x_i that falls below x_{critical}(\eta) is better estimated with the zero
shrinker \eta_0 than with \eta. It follows that any x_i that falls below x_{critical}(\eta^*), where \eta^* is the optimal
shrinker, cannot be reliably estimated by any shrinker \eta, and its corresponding data singular value y_i
should simply be set to zero. This makes Theorem 3 formal.
5
Estimating the model parameters
In practice, using the optimal shrinker we propose requires an estimate of the model parameters. In
general, \sigma_B is easy to estimate from the data via a median-matching method [15], namely

    \hat{\sigma}_B = \frac{y_{med}}{\sqrt{n \cdot \mu_\beta}},

where y_{med} is the median singular value of Y, and \mu_\beta is the median of the Marcenko-Pastur distribution. However, estimation of \mu_A and \sigma_A must be considered on a case-by-case basis. For example, in
the "Additive noise and missing-at-random" mode (Table 2), where A = M, we have \mu_A = \rho, and \rho is
estimated from the fraction of missing values, i.e. by dividing the number of missing values by the matrix size.
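A sketch (ours) of the median-matching estimate above. mp_median computes the median \mu_\beta of the Marcenko-Pastur law numerically; the function names are ours, and the n \cdot \mu_\beta scaling assumes the unnormalized entry-variance convention of [15] (rescale accordingly under the \sigma_B^2/n convention of Definition 2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mp_median(beta):
    lo, hi = (1 - np.sqrt(beta)) ** 2, (1 + np.sqrt(beta)) ** 2
    dens = lambda t: np.sqrt((hi - t) * (t - lo)) / (2 * np.pi * beta * t)  # MP density
    return brentq(lambda x: quad(dens, lo, x)[0] - 0.5, lo + 1e-9, hi - 1e-9)

def estimate_sigma_B(Y):
    m, n = Y.shape
    y_med = np.median(np.linalg.svd(Y, compute_uv=False))  # median singular value of Y
    return y_med / np.sqrt(n * mp_median(m / n))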
6
Simulation
Simulations were performed to verify the correctness of our main results^2. For more details, see the
Supporting Information.

1. Critical signal level x_{critical}(\lambda^*) under increasing noise. Figure 2, left panel, shows
the number of data singular values y_i above x_{critical}(\lambda^*), as a function of the fraction of
missing values \rho. Theorem 3 correctly predicts the exact values of \rho at which the "next"
data singular value falls below x_{critical}(\lambda^*).

2. Phase plane for critical signal levels x_{critical}(\eta^*) and x_{critical}(\lambda^*). Figure 1, right panel,
shows the (x, \rho) plane, where x is the signal level and \rho is the fraction of missing values. At
each point in the plane, several independent data matrices were generated. The heatmap shows
the fraction of the experiments in which the data singular value y_1 was above x_{critical}(\eta^*)
and x_{critical}(\lambda^*). The overlaid graphs are the theoretical predictions of the critical points.

3. Brute-force verification of the optimal shrinker shape. Figure 2, right panel, shows the
shape of the optimal shrinker (Theorem 2). We performed a brute-force search for the value
of \eta(y) that produces the minimal mean square error. The brute-force search, performed with
a relatively small matrix size, matches the asymptotic shape of the optimal shrinker.
7
Conclusions
Singular value shrinkage emerges as an effective method to reconstruct low-rank matrices from
contaminated data that is both practical and well understood. Through simple, carefully designed
manipulation of the data singular values, we obtain an appealing improvement in the reconstruction
mean square error. While beyond our present scope, following [16], it is highly likely that the
optimal shrinker we have developed offers the same mean square error, asymptotically, as the best
rotation-invariant estimator based on the data, making it asymptotically the best SVD-based estimator
for the target matrix.
^2 The full Matlab code that generated the figures in this paper and in the Supporting Information is permanently
available at https://purl.stanford.edu/kp113fq0838.
[Figure 2 panels: left, the number of estimable singular values versus \rho, with theoretical and empirical curves; right, the empirical brute-force values of \eta(y) against the asymptotic shrinker shape.]

Figure 2: Left: empirical validation of the predicted critical signal level (Simulation 1). Right:
empirical validation of the optimal shrinker shape (Simulation 3).
Acknowledgements
DB was supported by Israeli Science Foundation grant no. 1523/16 and German-Israeli Foundation
for scientific research and development program no. I-1100-407.1-2015.
References
[1] Benaych-Georges, Florent and Nadakuditi, Raj Rao. The singular values and vectors of low rank perturbations of large rectangular random matrices. Journal of Multivariate Analysis, 111:120-135, 2012. ISSN 0047-259X.
[2] Bloemendal, Alex, Erdos, Laszlo, Knowles, Antti, Yau, Horng-Tzer, and Yin, Jun. Isotropic local laws for sample covariance and generalized Wigner matrices. Electronic Journal of Probability, 19(33):1-53, 2014. ISSN 1083-6489.
[3] Boutsidis, Christos, Zouzias, Anastasios, Mahoney, Michael W, and Drineas, Petros. Randomized dimensionality reduction for k-means clustering. IEEE Transactions on Information Theory, 61(2):1045-1062, 2015.
[4] Bouwmans, Thierry, Sobral, Andrews, Javed, Sajid, Ki, Soon, and Zahzah, El-hadi. Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset. Computer Science Review, 2016. ISSN 1574-0137.
[5] Buuren, Stef and Groothuis-Oudshoorn, Karin. mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 2011.
[6] Cai, Jian-Feng, Candes, Emmanuel J., and Shen, Zuowei. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
[7] Candes, Emmanuel J. and Plan, Yaniv. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010. ISSN 0018-9219.
[8] Candes, Emmanuel J. and Plan, Yaniv. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010.
[9] Candès, Emmanuel J., Li, Xiaodong, Ma, Yi, and Wright, John. Robust principal component analysis? Journal of the ACM, 58(3):1-37, May 2011. ISSN 0004-5411.
[10] Candes, Emmanuel J., Sing-Long, Carlos A., and Trzasko, Joshua D. Unbiased risk estimates for singular value thresholding and spectral estimators. IEEE Transactions on Signal Processing, 61(19):4643-4657, 2013.
[11] Cattell, Raymond B. The scree test for the number of factors. Multivariate Behavioral Research, 1(2):245-276, 1966.
[12] Chandrasekaran, Venkat, Sanghavi, Sujay, Parrilo, Pablo A., and Willsky, Alan S. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572-596, 2011. ISSN 1052-6234.
[13] Das, Rajarshi, Zaheer, Manzil, and Dyer, Chris. Gaussian LDA for topic models with word embeddings. In ACL (1), pp. 795-804, 2015.
[14] Fischler, Martin A and Bolles, Robert C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
[15] Gavish, Matan and Donoho, David L. The optimal hard threshold for singular values is 4/sqrt(3). IEEE Transactions on Information Theory, 60(8):5040-5053, 2014. ISSN 0018-9448.
[16] Gavish, Matan and Donoho, David L. Optimal shrinkage of singular values. IEEE Transactions on Information Theory, 63(4):2137-2152, 2017.
[17] Gnanadesikan, Ramanathan and Kettenring, John R. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, pp. 81-124, 1972.
[18] Golub, Gene and Kahan, William. Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 2(2):205-224, 1965.
[19] Hastie, Trevor, Tibshirani, Robert, Sherlock, Gavin, Brown, Patrick, Botstein, David, and Eisen, Michael. Imputing missing data for gene expression arrays. Technical Report, pp. 1-9, 1999.
[20] Huber, Peter J. Robust Statistics. Springer, 2011.
[21] Ji, Hui, Liu, Chaoqiang, Shen, Zuowei, and Xu, Yuhong. Robust video denoising using low rank matrix completion. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1791-1798, 2010. ISSN 1063-6919.
[22] Johnstone, Iain M. On the distribution of the largest eigenvalue in principal components analysis. The Annals of Statistics, 29(2):295-327, 2001.
[23] Lin, Zhouchen, Chen, Minming, and Ma, Yi. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. 2013.
[24] Luo, Xin, Zhou, Mengchu, Xia, Yunni, and Zhu, Qingsheng. An efficient non-negative matrix-factorization-based approach to collaborative filtering for recommender systems. IEEE Transactions on Industrial Informatics, 10(2):1273-1284, 2014.
[25] Marcenko, V. A. and Pastur, L. A. Distribution of eigenvalues for some sets of random matrices. Math. USSR-Sbornik, 1(4):457-483, 1967.
[26] Meloun, Milan, Capek, Jindrich, Miksk, Petr, and Brereton, Richard G. Critical comparison of methods predicting the number of components in spectroscopic data. Analytica Chimica Acta, 423(1):51-68, 2000.
[27] Nadakuditi, Raj Rao. OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage. IEEE Transactions on Information Theory, 60(5):3002-3018, 2014. ISSN 0018-9448.
[28] Rao, Nikhil, Yu, Hsiang-Fu, Ravikumar, Pradeep K, and Dhillon, Inderjit S. Collaborative filtering with graph information: Consistency and scalable methods. In Advances in Neural Information Processing Systems, pp. 2107-2115, 2015.
[29] Rennie, Jasson D. M. and Srebro, Nathan. Fast maximum margin matrix factorization for collaborative prediction. Proceedings of the 22nd International Conference on Machine Learning, pp. 713-719, 2005. doi: 10.1145/1102351.1102441. URL http://doi.acm.org/10.1145/1102351.1102441.
[30] Rubin, Donald B. Multiple imputation after 18+ years. Journal of the American Statistical Association, 91(434):473-489, 1996.
[31] Schafer, Joseph L. Analysis of Incomplete Multivariate Data. CRC Press, 1997.
[32] Shabalin, Andrey A and Nobel, Andrew B. Reconstruction of a low-rank matrix in the presence of Gaussian noise. Journal of Multivariate Analysis, 118:67-76, 2013. ISSN 0047-259X.
[33] Stein, Charles M. Lectures on the theory of estimation of many parameters. Journal of Soviet Mathematics, 74(5), 1986. URL http://link.springer.com/article/10.1007/BF01085007.
[34] Wright, John, Peng, Yigang, Ma, Yi, Ganesh, Arvind, and Rao, Shankar. Robust principal component analysis: Exact recovery of corrupted low-rank matrices. Advances in Neural Information Processing Systems (NIPS), pp. 2080-2088, 2009. ISSN 0010-3640.
[35] Yang, Jian, Qian, Jianjun, Luo, Lei, Zhang, Fanlong, and Gao, Yicheng. Nuclear norm based matrix regression with applications to face recognition with occlusion and illumination changes. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp(99):1-1, 2016. ISSN 0162-8828.
6,850 | 7,197 | Countering Feedback Delays in Multi-Agent Learning
Zhengyuan Zhou
Stanford University
[email protected]
Nicholas Bambos
Stanford University
[email protected]
Panayotis Mertikopoulos
Univ. Grenoble Alpes, CNRS, Inria, LIG
[email protected]
Peter Glynn
Stanford University
[email protected]
Claire Tomlin
UC Berkeley
[email protected]
Abstract
We consider a model of game-theoretic learning based on online mirror descent (OMD) with asynchronous and delayed feedback information. Instead of
focusing on specific games, we consider a broad class of continuous games defined
by the general equilibrium stability notion, which we call $\lambda$-variational stability. Our first contribution is that, in this class of games, the actual sequence of
play induced by OMD-based learning converges to Nash equilibria provided that
the feedback delays faced by the players are synchronous and bounded. Subsequently, to tackle fully decentralized, asynchronous environments with (possibly)
unbounded delays between actions and feedback, we propose a variant of OMD
which we call delayed mirror descent (DMD), and which relies on the repeated
leveraging of past information. With this modification, the algorithm converges to
Nash equilibria with no feedback synchronicity assumptions and even when the
delays grow superlinearly relative to the horizon of play.
1
Introduction
Online learning is a broad and powerful theoretical framework enjoying widespread applications and
great success in machine learning, data science, operations research, and many other fields [3, 7, 22].
The prototypical online learning problem may be described as follows: At each round t = 0, 1, . . . , a
player selects an action xt from some convex, compact set, and obtains a reward ut (xt ) based on
some a priori unknown payoff function ut . Subsequently, the player receives some feedback (e.g. the
past history of the reward functions) and selects a new action xt+1 with the goal of maximizing the
obtained reward. Aggregating over the rounds of the process, this is usually quantified by asking that
the player's (external) regret $\mathrm{Reg}(T) \triangleq \max_{x \in \mathcal{X}} \sum_{t=1}^{T} [u_t(x) - u_t(x_t)]$ grow sublinearly with the
horizon of play $T$, a property known as "no regret".
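For concreteness, the following is a minimal sketch of how this regret could be measured for a given play sequence, assuming the simplest linear setting $u_t(x) = \langle g_t, x \rangle$ over the probability simplex; the payoff vectors, plays, and function names here are illustrative, not part of the model above:

```python
import numpy as np

def external_regret(payoff_vectors, plays):
    """Reg(T) = max_x sum_t u_t(x) - sum_t u_t(x_t) for linear payoffs
    u_t(x) = <g_t, x>; over the simplex, the best fixed comparator is a
    vertex, i.e. the best single coordinate of sum_t g_t."""
    G = np.asarray(payoff_vectors, dtype=float)   # shape (T, d): one g_t per round
    X = np.asarray(plays, dtype=float)            # shape (T, d): the actions x_t
    realized = float(np.sum(G * X))               # sum_t <g_t, x_t>
    best_fixed = float(np.max(G.sum(axis=0)))     # max_x sum_t <g_t, x> over the simplex
    return best_fixed - realized
```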
One of the most widely used algorithmic schemes for learning in this context is the online mirror
descent (OMD) class of algorithms [23]. Tracing its origins to [17] for offline optimization problems,
OMD proceeds by taking a gradient step in the dual (gradient) space and projecting it back to the
primal (decision) space via a mirror map generated by a strongly convex regularizer function (with
different regularizers giving rise to different algorithms). In particular, OMD includes as special cases
several seminal learning algorithms, such as Zinkevich's online gradient descent (OGD) scheme
[29], and the multiplicative/exponential weights (EW) algorithm [1, 13]. Several variants of this
class also exist and, perhaps unsurprisingly, they occur with a variety of different names, such as
"Follow-the-Regularized-Leader" [9], dual averaging [18, 25], and so on.
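As a concrete illustration of the template, here is a minimal sketch of a single OMD step over the probability simplex with the entropic regularizer, for which the mirror map reduces to a softmax and the scheme coincides with exponential weights; replacing the regularizer with $\frac{1}{2}\|x\|^2$ would turn the mirror step into a Euclidean projection and recover OGD. The code is our own illustration, not an implementation from any of the cited works:

```python
import numpy as np

def omd_entropic_step(y, gradient, step):
    """One OMD step: gradient step in the dual space, then mirror back to
    the simplex via the map induced by h(x) = sum_i x_i log x_i."""
    y = y + step * gradient      # dual-space (gradient) step
    w = np.exp(y - y.max())      # numerically stable softmax = mirror map
    return y, w / w.sum()        # updated dual state and primal action
```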
When $u_t$ is concave, OMD enjoys a sublinear $O(\sqrt{T})$ regret bound which is known to be universally
tight.1 A common instantiation of this is found in repeated multi-player games, where each player's
payoff function is determined by the actions of all other players via a fixed mechanism: the stage
game. Even though this mechanism may be unknown to the players, the universality of the OMD
regret bounds raises high expectations in terms of performance guarantees, so it is natural to assume
that players adopt some variant thereof when faced with such online decision processes. This leads to
the following central question: if all players of a repeated game employ an OMD updating rule, do
their actions converge to a Nash equilibrium of the underlying one-shot game?
Related Work. Given the prominence of Nash equilibrium as a solution concept in game theory
(compared to coarser notions such as correlated equilibria or the Hannan set), this problem lies at
the heart of multi-agent learning [4]. However, convergence to a Nash equilibrium is, in the words
of [4], "considerably more difficult" than attaining a no-regret state for all players (which leads to
weaker notion of coarse correlated equilibrium in finite games). To study this question, a growing
body of literature has focused on special classes of games (e.g. zero-sum games, routing games) and
established the convergence of the so-called "ergodic average" $T^{-1} \sum_{t=1}^{T} x_t$ of OMD [2, 10, 12].
In general, the actual sequence of play may fail to converge altogether, even in simple, finite games
[16, 24]. On the other hand, there are a number of recent works establishing the convergence of play in
potential games with finite action sets under different assumptions for the number of players involved
(continuous or finite) and the quality of the available feedback (perfect, semi-bandit/imperfect, or
bandit/payoff-based) [5, 11, 14, 19]. However, these works focus on games with finite action sets and
feedback is assumed to be instantly available to the players (i.e. with no delays or asynchronicities),
two crucial assumptions that we do not make in this paper.
A further major challenge arises in decentralized environments (such as transportation networks),
where a considerable delay often occurs between a player's action and the corresponding received
feedback. To study learning in such settings, [20] recently introduced an elegant and flexible delay
framework where the gradient at round $t$ is only available at round $t + d_t - 1$, with $d_t$ being the
delay associated with the player's action at round $t$.2 [20] then considered a very natural extension of
OMD under delays: updating the set of gradients as they are received (see Algorithm 1 for details).
If the total delay after time $T$ is $D(T) = \sum_{t=1}^{T} d_t$, [20] showed that OMD enjoys an $O(D(T)^{1/2})$
regret bound. This natural extension has several strengths: first, no assumption is made on how the
gradients are received (the delayed gradients can be received out-of-order); further, as pointed out
in [6, 8], a gradient "does not need to be timestamped by the round s from which it originates," as
required for example by the pooling strategies of [6, 8].
Our Contributions. Our investigations here differ from existing work in the following aspects:
First, we consider learning in games with asynchronous and delayed feedback by extending the
general single-agent feedback delay framework introduced in [20]. Previous work on the topic has
focused on the regret analysis of single-agent learning with delays, but the convergence properties
of such processes in continuous games are completely unknown. Second, we focus throughout
on the convergence of the actual sequence of play generated by OMD (its "last iterate" in the
parlance of optimization), as opposed to the algorithm's ergodic average $\frac{1}{T} \sum_{t=1}^{T} x_t$. This last point
is worth emphasizing for several reasons: a) this mode of convergence is stronger and theoretically
more appealing because it implies ergodic convergence; b) in a game-theoretic setting, payoffs
are determined by the actual sequence of play, so ergodic convergence diminishes in value if it
is not accompanied by similar conclusions for the players' realized actions; and c) because there
is no inherent averaging, the techniques used to prove convergence of xt provide a much finer
understanding of the evolution of OMD.
The starting point of our paper is the introduction of an equilibrium stability notion which we
call $\lambda$-variational stability, a notion that is motivated by the concept of evolutionary stability in
population games and builds on the characterization of stable Nash equilibria as solutions to a Minty-type variational inequality [15]. This stability notion is intimately related to monotone operators in
variational analysis [21] and can be seen as a strict generalization of operator monotonicity in the
1
In many formulations, a cost function (as opposed to a reward function) is used, in which case such cost
functions need to be convex.
2
Of course, taking dt = 1 yields the classical no-delay setting.
current game-theoretic context.3 By means of this notion, we are able to treat convergence questions
in general games with continuous action spaces, without having to focus on a specific class of games,
such as concave potential or strictly monotone games (though our analysis also covers such games).
Our first result is that, assuming variational stability, the sequence of play induced by OMD converges
to the game's set of Nash equilibria, provided that the delays of all players are synchronous and
bounded (see Theorems 4.3 and 4.4). As an inherited benefit, players adopting this learning algorithm
can receive gradients out-of-order and do not need to keep track of the timestamps from which the
gradients originate. In fact, even in the special case of learning without delays, we are not aware of a
similar convergence result for the actual sequence of play.
An important limitation of this result is that delays are assumed synchronous and bounded, an
assumption which might not hold in large, decentralized environments. To lift this barrier, we
introduce a modification of vanilla OMD which we call delayed mirror descent (DMD), and which
leverages past information repeatedly, even in rounds where players receive no feedback. Thanks
to this modification, play under DMD converges to variationally stable sets of Nash equilibria
(Theorem 5.2), even if the players experience asynchronous and unbounded delays: in particular,
delays could grow superlinearly in the game's horizon, and DMD would still converge.
We mention that the convergence proofs for both OMD and DMD rely on designing a particular
Lyapunov function, the so-called $\lambda$-Fenchel coupling, which serves as a "primal-dual divergence"
measure between actions and gradient variables. Thanks to its Lyapunov properties, the $\lambda$-Fenchel
coupling provides a potent tool for proving convergence and we exploit it throughout. Further, we
present a unified theoretical framework that puts the analysis of both algorithms under different delay
assumptions on the same footing.
2
Problem Setup
2.1
Games with Continuous Action Sets
We start with the definition of a game with continuous action sets, which serves as a stage game and
provides a reward function for each player in an online learning process.
Definition 2.1. A continuous game $\mathcal{G}$ is a tuple $\mathcal{G} = (\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$, where $\mathcal{N}$ is the
set of $N$ players $\{1, 2, \ldots, N\}$, $\mathcal{X}_i$ is a compact convex subset of some finite-dimensional vector
space $\mathbb{R}^{d_i}$ representing the action space of player $i$, and $u_i : \mathcal{X} \to \mathbb{R}$ is the $i$-th player's payoff
function.
Regarding the players' payoff functions, we make the following assumptions throughout:
1. For each $i \in \mathcal{N}$, $u_i(x)$ is continuous in $x$.
2. For each $i \in \mathcal{N}$, $u_i$ is continuously differentiable in $x_i$ and the partial gradient $\nabla_{x_i} u_i(x)$ is
Lipschitz continuous in $x$.
Throughout the paper, $x_{-i}$ denotes the joint action of all players but player $i$. Consequently, the joint
action4 $x$ will frequently be written as $(x_i, x_{-i})$. Two important quantities in the current context are:
Definition 2.2. We let $v(x)$ be the profile of the players' individual payoff gradients,5 i.e. $v(x) = (v_1(x), \ldots, v_N(x))$, where $v_i(x) \triangleq \nabla_{x_i} u_i(x)$.
Definition 2.3. Given a continuous game $\mathcal{G}$, $x^* \in \mathcal{X}$ is called a (pure-strategy) Nash equilibrium if
for each $i \in \mathcal{N}$, $u_i(x_i^*, x_{-i}^*) \ge u_i(x_i, x_{-i}^*)$ for all $x_i \in \mathcal{X}_i$.
2.2
Online Mirror Descent in Games under Delays
In what follows, we consider a general multi-agent delay model extending the single-agent delay
model of [20] to the multi-agent learning case. At a high level, for each agent there can be an arbitrary
3 In the supplement, we give two well-known classes of games that satisfy this equilibrium notion.
4 Note that boldfaced letters are only used to denote joint actions. In particular, $x_i$ is a vector even though it
is not boldfaced.
5 Note that per the last assumption in the definition of a concave game (Definition 2.1), the gradient $v(x)$
always exists and is a continuous function on the joint action space $\mathcal{X}$.
delay between the stage at which an action is played and the stage at which feedback is received about
said action (typically in the form of gradient information). There is no extra assumption imposed on
the feedback delays; in particular, feedback can arrive out-of-order and in a completely asynchronous
manner across agents. Further, the received feedback is not time-stamped, so the player might not
know to which iteration a specific piece of feedback corresponds.
When OMD is applied in this setting, we obtain the following scheme:
Algorithm 1 Online Mirror Descent on Games under Delays
1: Each player $i$ chooses an initial $y_i^0$.
2: for $t = 0, 1, 2, \ldots$ do
3:   for $i = 1, \ldots, N$ do
4:     $x_i^t = \arg\max_{x_i \in \mathcal{X}_i} \{\langle y_i^t, x_i \rangle - h_i(x_i)\}$
5:     $y_i^{t+1} = y_i^t + \gamma^t \sum_{s \in G_i^t} v_i(x^s)$
6:   end for
7: end for
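To make the scheme concrete, below is a toy simulation sketch of Algorithm 1 on a hypothetical two-player game with payoffs $u_i(x) = -(x_i - c_i)^2 - x_i x_{-i}$ on $\mathcal{X}_i = [0, 1]$ and regularizers $h_i(x_i) = x_i^2/2$, so that the choice map in line 4 is simply $C_i(y_i) = \mathrm{clip}(y_i, 0, 1)$; the fixed per-player delays are assumed purely for illustration:

```python
import numpy as np
from collections import defaultdict

def v(x, c=np.array([0.7, 0.3])):
    # Payoff gradients v_i(x) = du_i/dx_i for u_i(x) = -(x_i - c_i)^2 - x_i * x_{-i}.
    return -2.0 * (x - c) - x[::-1]

T, delays = 200, [3, 1]                 # per-player delays d_i^s (assumed constant)
y, history = np.zeros(2), []
inbox = defaultdict(list)               # inbox[t]: (player, gradient) pairs arriving at round t
for t in range(T):
    x = np.clip(y, 0.0, 1.0)            # line 4: choice map for h_i(x_i) = x_i^2 / 2 on [0, 1]
    history.append(x)
    g = v(x)
    for i in range(2):                  # the round-t gradient arrives at round t + d_i - 1
        inbox[t + delays[i] - 1].append((i, g[i]))
    step = 1.0 / (t + 1)                # satisfies the summability assumption below
    for i, gi in inbox.pop(t, []):      # line 5: apply whatever feedback arrived this round
        y[i] += step * gi
print("final play:", history[-1])       # approaches the Nash equilibrium near (0.7, 0)
```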
Three comments are in order here. First, each hi is a regularizer on Xi , as defined below:
Definition 2.4. Let $\mathcal{D}$ be a compact and convex subset of $\mathbb{R}^m$. We say that $g : \mathcal{D} \to \mathbb{R}$ is a regularizer
if $g$ is continuous and strongly convex on $\mathcal{D}$, i.e. there exists some $K > 0$ such that
$$g(td + (1-t)d') \le t\, g(d) + (1-t)\, g(d') - \frac{1}{2} K t (1-t) \|d' - d\|^2 \tag{2.1}$$
for all $t \in [0, 1]$, $d, d' \in \mathcal{D}$.
Second, the gradient step size $\gamma^t$ in Algorithm 1 can be any positive and non-increasing sequence
that satisfies the standard summability assumption: $\sum_{t=0}^{\infty} \gamma^t = \infty$, $\sum_{t=0}^{\infty} (\gamma^t)^2 < \infty$.
Third, regarding the delay model: in Algorithm 1, $G_i^t$ denotes the set of rounds whose gradients
become available for player $i$ at the current round $t$. Denote player $i$'s delay of the gradient at round
$s$ to be $d_i^s$ (a positive integer); then this gradient $v_i(x^s)$ will be available at round $s + d_i^s - 1$, i.e.
$s \in G_i^{s + d_i^s - 1}$. In particular, if $d_i^s = 1$ for all $s$, player $i$ doesn't experience any feedback delays. Note
here again that each player can receive feedback out of order: this can happen if the gradient at an
earlier round has a much larger delay than that of the gradient at a later round.
3
$\lambda$-Variational Stability: A Key Criterion
In this section, we define a key stability notion, called $\lambda$-variational stability. This notion allows us to
obtain strong convergence results for the induced sequence of play, as opposed to results that only hold
in specific classes of games. The supplement provides two detailed special classes of games (convex
potential games and asymmetric Cournot oligopolies) that admit variationally stable equilibria. Other
examples include monotone games (discussed later in this section), pseudo-monotone games [28],
non-atomic routing games [26, 27], symmetric influence network games [11] and many others.
3.1
$\lambda$-Variational Stability
Definition 3.1. Given a game with continuous actions $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$, a set $\mathcal{C} \subseteq \mathcal{X}$ is
called $\lambda$-variationally stable for some $\lambda \in \mathbb{R}^{N}_{++}$ if
$$\sum_{i=1}^{N} \lambda_i \langle v_i(x), x_i - x_i^* \rangle \le 0, \quad \text{for all } x \in \mathcal{X},\ x^* \in \mathcal{C}, \tag{3.1}$$
with equality if and only if $x \in \mathcal{C}$.
Remark 3.1. If $\mathcal{C}$ is $\lambda$-stable with $\lambda_i = 1$ for all $i$, it is called simply stable [15].
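Condition (3.1) can also be probed numerically. The sketch below runs a Monte Carlo check (not a proof) on a hypothetical two-player game with linear payoff gradients $v(x) = Ax + b$ on $[0,1]^2$, with a candidate point $x^*$ and weights $\lambda$ chosen purely for illustration:

```python
import numpy as np

A, b = np.array([[-2.0, -1.0], [-1.0, -2.0]]), np.array([1.4, 0.6])
x_star, lam = np.array([0.7, 0.0]), np.ones(2)   # candidate equilibrium and weights (assumed)

rng = np.random.default_rng(0)
worst = max(
    float(np.sum(lam * (A @ x + b) * (x - x_star)))   # sum_i lam_i <v_i(x), x_i - x_i*>
    for x in rng.uniform(0.0, 1.0, size=(10_000, 2))
)
print("largest sampled value of (3.1):", worst)       # expected <= 0 if x* is stable
```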
We emphasize that in a game setting, $\lambda$-variational stability is more general than an important
concept called operator monotonicity in variational analysis. Specifically, $v(\cdot)$ is called a monotone
operator [21] if the following holds (with equality if and only if $x = \hat{x}$):
$$\langle v(x) - v(\hat{x}), x - \hat{x} \rangle \triangleq \sum_{i=1}^{N} \langle v_i(x) - v_i(\hat{x}), x_i - \hat{x}_i \rangle \le 0, \quad \forall x, \hat{x} \in \mathcal{X}. \tag{3.2}$$
If $v(\cdot)$ is monotone, the game admits a unique Nash equilibrium $x^*$ which (per the property of a Nash
equilibrium) satisfies $\langle v(x^*), x - x^* \rangle \le 0$. Consequently, if $v(\cdot)$ is a monotone operator, it follows
that $\langle v(x), x - x^* \rangle \le \langle v(x^*), x - x^* \rangle \le 0$, where equality is achieved if and only if $x = x^*$. This
implies that when $v(x)$ is a monotone operator, the singleton set of the unique Nash equilibrium is
1-variationally stable, where 1 is the all-ones vector. The converse is not true: when $v(x)$ is not a
monotone operator, we can still have a unique Nash equilibrium that is $\lambda$-variationally stable, or more
generally, have a $\lambda$-variationally stable set $\mathcal{C}$.
3.2
Properties of $\lambda$-Variational Stability
Lemma 3.2. If $\mathcal{C}$ is nonempty and $\lambda$-stable, then it is closed, convex and contains all Nash equilibria
of the game.
The following lemma gives us a convenient sufficient condition ensuring that a singleton $\lambda$-variationally stable set $\{x^*\}$ exists; in this case, we simply say that $x^*$ is $\lambda$-variationally stable.
Lemma 3.3. Given a game with continuous actions $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$, where each $u_i$ is
twice continuously differentiable. For each $x \in \mathcal{X}$, define the $\lambda$-weighted Hessian matrix $H^{\lambda}(x)$ as
follows:
$$H^{\lambda}_{ij}(x) = \frac{1}{2} \lambda_i \nabla_{x_j} v_i(x) + \frac{1}{2} \lambda_j (\nabla_{x_i} v_j(x))^{T}. \tag{3.3}$$
If $H^{\lambda}(x)$ is negative-definite for every $x \in \mathcal{X}$, then the game admits a unique Nash equilibrium $x^*$
that is globally $\lambda$-variationally stable.
Remark 3.2. It is important to note that the Hessian matrix so defined is a block matrix: each $H^{\lambda}_{ij}(x)$
is a $d_i \times d_j$ matrix. Writing it in terms of the utility function, we have $H^{\lambda}_{ij}(x) = \frac{1}{2} \lambda_i \nabla_{x_j} \nabla_{x_i} u_i(x) + \frac{1}{2} \lambda_j (\nabla_{x_i} \nabla_{x_j} u_j(x))^{T}$.
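For the hypothetical linear-gradient game above, $\nabla_{x_j} v_i$ is the constant matrix $A$, so the test of Lemma 3.3 reduces to the eigenvalues of one symmetrized matrix; a minimal numerical sketch:

```python
import numpy as np

A = np.array([[-2.0, -1.0], [-1.0, -2.0]])   # constant Jacobian dv_i/dx_j (assumed game)
lam = np.array([1.0, 1.0])
M = lam[:, None] * A                          # M_ij = lam_i * A_ij
H = 0.5 * (M + M.T)                           # H_ij = (lam_i A_ij + lam_j A_ji) / 2, as in (3.3)
print(np.linalg.eigvalsh(H))                  # all negative => negative-definite H^lam
```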
4
Convergence under Synchronous and Bounded Delays
In this section, we tackle the convergence of the last iterate of OMD under delays. We start by
defining an important divergence measure, the $\lambda$-Fenchel coupling, that generalizes the Bregman divergence.
We then establish its useful properties that play an indispensable role in both this and next sections.
4.1
$\lambda$-Fenchel Coupling
Definition 4.1. Fix a game with continuous action spaces $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$ and for each
player $i$, let $h_i : \mathcal{X}_i \to \mathbb{R}$ be a regularizer with respect to the norm $\|\cdot\|_i$ that is $K_i$-strongly convex.
1. The convex conjugate function $h_i^* : \mathbb{R}^{d_i} \to \mathbb{R}$ of $h_i$ is defined as:
$$h_i^*(y_i) = \max_{x_i \in \mathcal{X}_i} \{\langle x_i, y_i \rangle - h_i(x_i)\}.$$
2. The choice function $C_i : \mathbb{R}^{d_i} \to \mathcal{X}_i$ associated with regularizer $h_i$ for player $i$ is defined as:
$$C_i(y_i) = \arg\max_{x_i \in \mathcal{X}_i} \{\langle x_i, y_i \rangle - h_i(x_i)\}.$$
3. For a $\lambda \in \mathbb{R}^{N}_{++}$, the $\lambda$-Fenchel coupling $F^{\lambda} : \mathcal{X} \times \mathbb{R}^{\sum_{i=1}^{N} d_i} \to \mathbb{R}$ is defined as:
$$F^{\lambda}(x, y) = \sum_{i=1}^{N} \lambda_i \left( h_i(x_i) - \langle x_i, y_i \rangle + h_i^*(y_i) \right).$$
Note that although the domain of $h_i$ is $\mathcal{X}_i \subseteq \mathbb{R}^{d_i}$, the domain of its conjugate (gradient space) $h_i^*$
is $\mathbb{R}^{d_i}$. The two key properties of the $\lambda$-Fenchel coupling that will be important in establishing the
convergence of OMD are given next.
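As a single-player sketch of Definition 4.1, assume the Euclidean regularizer $h(x) = \|x\|^2/2$ on the box $[0,1]^d$ (a hypothetical action set), for which the choice map is a coordinatewise clip:

```python
import numpy as np

def choice_and_conjugate(y):
    """C(y) = argmax_x <x, y> - h(x) over [0, 1]^d, and h*(y) at the argmax."""
    x = np.clip(y, 0.0, 1.0)
    return x, float(np.dot(x, y) - 0.5 * np.dot(x, x))

def fenchel_coupling(x, y, lam=1.0):
    """F(x, y) = lam * (h(x) - <x, y> + h*(y)); nonnegative by Fenchel-Young."""
    _, h_star = choice_and_conjugate(y)
    return lam * (0.5 * float(np.dot(x, x)) - float(np.dot(x, y)) + h_star)

x, y = np.array([0.2, 0.8]), np.array([1.5, -0.3])
print(fenchel_coupling(x, y))   # >= (K/2) * ||C(y) - x||^2 with K = 1, per Lemma 4.2
```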
Lemma 4.2. For each $i \in \{1, \ldots, N\}$, let $h_i : \mathcal{X}_i \to \mathbb{R}$ be a regularizer with respect to the norm
$\|\cdot\|_i$ that is $K_i$-strongly convex and let $\lambda \in \mathbb{R}^{N}_{++}$. Then $\forall x \in \mathcal{X}$, $\forall y, \tilde{y} \in \mathbb{R}^{\sum_{i=1}^{N} d_i}$:
1. $F^{\lambda}(x, y) \ge \frac{1}{2} \sum_{i=1}^{N} K_i \lambda_i \|C_i(y_i) - x_i\|_i^2 \ge \frac{1}{2} (\min_i K_i \lambda_i) \sum_{i=1}^{N} \|C_i(y_i) - x_i\|_i^2$.
2. $F^{\lambda}(x, \tilde{y}) \le F^{\lambda}(x, y) + \sum_{i=1}^{N} \lambda_i \langle \tilde{y}_i - y_i, C_i(y_i) - x_i \rangle + \frac{1}{2} \left( \max_i \frac{\lambda_i}{K_i} \right) \sum_{i=1}^{N} (\|\tilde{y}_i - y_i\|_i^*)^2$,
where $\|\cdot\|_i^*$ is the dual norm of $\|\cdot\|_i$ (i.e. $\|y_i\|_i^* = \max_{\|x_i\|_i \le 1} \langle x_i, y_i \rangle$).
Remark 4.1. Collecting each individual choice map into a vector, we obtain the aggregate choice
map $C : \mathbb{R}^{\sum_{i=1}^{N} d_i} \to \mathcal{X}$, with $C(y) = (C_1(y_1), \ldots, C_N(y_N))$. Since each space $\mathcal{X}_i$ is endowed
with norm $\|\cdot\|_i$, we can define the induced aggregate norm $\|\cdot\|$ on the joint space $\mathcal{X}$ as follows:
$\|x\| = \sum_{i=1}^{N} \|x_i\|_i$. We can also similarly define the aggregate dual norm: $\|y\|_* = \sum_{i=1}^{N} \|y_i\|_i^*$.
Henceforth, it shall be clear that the convergence in the joint space (e.g. $C(y^t) \to x$, $y^t \to y$) will
be defined under the respective aggregate norm.
Finally, we assume throughout the paper that the choice maps are regular in the following (very weak)
sense: a choice map $C(\cdot)$ is said to be $\lambda$-Fenchel coupling conforming if
$$C(y^t) \to x \text{ implies } F^{\lambda}(x, y^t) \to 0 \text{ as } t \to \infty. \tag{4.1}$$
Unless one aims for relatively pathological cases, choice maps induced by typical regularizers are
always $\lambda$-Fenchel coupling conforming: examples include the Euclidean and entropic regularizers.
4.2
Convergence of OMD to Nash Equilibrium
We start by characterizing the assumption on the delay model:
Assumption 1. The delays are assumed to be:
1. Synchronous: $G_i^t = G_j^t$, $\forall i, j, \forall t$.
2. Bounded: $d_i^t \le D$, $\forall i, \forall t$ (for some positive integer $D$).
Theorem 4.3. Fix a game with continuous action spaces $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$ that admits
$x^*$ as the unique Nash equilibrium that is $\lambda$-variationally stable. Under Assumption 1, the OMD
iterate $x^t$ given in Algorithm 1 converges to $x^*$, irrespective of the initial point $x^0$.
Remark 4.2. The proof is rather long and involved. To aid the understanding and enhance the intuition,
we break it down into four main steps, each of which will be proved in the appendix in detail.
1. Since the delays are synchronous, we denote by $G^t$ the common set and $d^t$ the common
delay at round $t$. The gradient update in OMD under delays can then be written as:
$$y_i^{t+1} = y_i^t + \gamma^t \sum_{s \in G^t} v_i(x^s) = y_i^t + \gamma^t \Big\{ |G^t| v_i(x^t) + \sum_{s \in G^t} \{v_i(x^s) - v_i(x^t)\} \Big\}. \tag{4.2}$$
Define $b_i^t = \sum_{s \in G^t} \{v_i(x^s) - v_i(x^t)\}$. We show $\lim_{t \to \infty} \|b_i^t\|_i^* = 0$ for each player $i$.
2. Define $b^t = (b_1^t, \ldots, b_N^t)$ and we have $\lim_{t \to \infty} b^t = 0$ per Claim 1. Since each player's
gradient update can be written as $y_i^{t+1} = y_i^t + \gamma^t (|G^t| v_i(x^t) + b_i^t)$ per Claim 1, we can then
write the joint OMD update (of all players) as:
$$x^t = C(y^t), \tag{4.3}$$
$$y^{t+1} = y^t + \gamma^t \{ |G^t| v(x^t) + b^t \}. \tag{4.4}$$
Let $B(x^*, \epsilon) \triangleq \{x \in \mathcal{X} \mid \|x - x^*\| < \epsilon\}$ be the open ball centered around $x^*$ with radius
$\epsilon$. Then, using the $\lambda$-Fenchel coupling as an "energy" function and leveraging the handle on
$b^t$ given by Claim 1, we can establish that, for any $\epsilon > 0$, the iterate $x^t$ will eventually
enter $B(x^*, \epsilon)$ and visit $B(x^*, \epsilon)$ infinitely often, no matter what the initial point $x^0$ is.
Mathematically, the claim is that $\forall \epsilon > 0, \forall x^0, |\{t \mid x^t \in B(x^*, \epsilon)\}| = \infty$.
3. Fix any $\delta > 0$ and consider the set $\tilde{B}(x^*, \delta) \triangleq \{C(y) \mid F^{\lambda}(x^*, y) < \delta\}$. In other words,
$\tilde{B}(x^*, \delta)$ is some "neighborhood" of $x^*$, which contains every $x$ that is an image of some $y$
(under the choice map $C(\cdot)$) that is within $\delta$ distance of $x^*$ under the $\lambda$-Fenchel coupling
"metric". Although $F^{\lambda}(x^*, y)$ is not a metric, $\tilde{B}(x^*, \delta)$ contains an open ball within it.
Mathematically, the claim is that for any $\delta > 0$, $\exists \epsilon(\delta) > 0$ such that $B(x^*, \epsilon) \subseteq \tilde{B}(x^*, \delta)$.
4. For any "neighborhood" $\tilde{B}(x^*, \delta)$, after long enough rounds, if $x^t$ ever enters $\tilde{B}(x^*, \delta)$, it
will be trapped inside $\tilde{B}(x^*, \delta)$ thereafter. Mathematically, the claim is that for any $\delta > 0$,
$\exists T(\delta)$, such that for any $t \ge T(\delta)$, if $x^t \in \tilde{B}(x^*, \delta)$, then $x^{t'} \in \tilde{B}(x^*, \delta)$, $\forall t' \ge t$.
Putting all four elements above together, we note that the significance of Claim 3 is that, since the
iterate $x^t$ will enter $B(x^*, \epsilon)$ infinitely often (per Claim 2), $x^t$ must enter $\tilde{B}(x^*, \delta)$ infinitely often. It
therefore follows that, per Claim 4, starting from some iteration $t$, $x^t$ will remain in $\tilde{B}(x^*, \delta)$. Since this is
true for any $\delta > 0$, we have $F^{\lambda}(x^*, y^t) \to 0$ as $t \to \infty$. Per Statement 1 in Lemma 4.2, this leads to
$\|C(y^t) - x^*\| \to 0$ as $t \to \infty$, thereby establishing that $x^t = C(y^t) \to x^*$ as $t \to \infty$.
In fact, the result generalizes straightforwardly to multiple Nash equilibria. The proof of the convergence to the set case is line-by-line identical, provided we redefine, in a standard way, every
quantity that measures the distance between two points to the corresponding quantity that measures
the distance between a point and a set (by taking the infimum over the distances between the point
and a point in that set). We directly state the result below.
Theorem 4.4. Fix a game with continuous action spaces $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$ that admits
$\mathcal{X}^*$ as a $\lambda$-variationally stable set (of necessarily all Nash equilibria), for some $\lambda \in \mathbb{R}^{N}_{++}$. Under
Assumption 1, the OMD iterate $x^t$ given in Algorithm 1 satisfies $\lim_{t \to \infty} \mathrm{dist}(x^t, \mathcal{X}^*) = 0$, irrespective
of $x^0$, where $\mathrm{dist}(\cdot, \cdot)$ is the standard point-to-set distance function induced by the norm $\|\cdot\|$.
5
Delayed Mirror Descent: Asynchronous and Unbounded Delays
The synchronous and bounded delay assumption in Assumption 1 is fairly strong. In this section,
by a simple modification of OMD, we propose a new learning algorithm called Delayed Mirror
Descent (DMD), that allows the last-iterate convergence-to-Nash result to be generalized to cases
with arbitrary asynchronous delays among players as well as unbounded delay growth.
5.1
Delayed Mirror Descent in Games
The main idea for the modification is that when player $i$ doesn't receive any gradient on round $t$,
instead of not doing any gradient updates as in OMD, he uses the most recent set of gradients to
perform updates. More formally, define the most recent information set6 as:
$$\tilde{G}_i^t = \begin{cases} G_i^t, & \text{if } G_i^t \neq \emptyset, \\ \tilde{G}_i^{t-1}, & \text{if } G_i^t = \emptyset. \end{cases}$$
Under this definition, Delayed Mirror Descent is given in Algorithm 2 (note that $\tilde{G}_i^t$ is always non-empty there).
We only make the following assumption on the delays:
Assumption 2. For each player $i$, $\lim_{t \to \infty} \sum_{s = \min \tilde{G}_i^t}^{t} \gamma^s = 0$.
This assumption essentially requires that no player's delays grow too fast. Note that in particular,
players' delays can be arbitrarily asynchronous. To make this assumption more concrete, we next give
two more explicit delay conditions that satisfy the main delay assumption. As made formal by the
following lemma, if the delays are bounded (but not necessarily synchronous), then Assumption 2 is
satisfied. Furthermore, by appropriately choosing the sequence $\gamma^t$, Assumption 2 can accommodate
delays that are unbounded and grow super-linearly.
6
There may not be any gradient information in the first few rounds due to delays. Without loss of generality,
we can always start at the first round when there is non-empty gradient information, or equivalently, assume that
some gradient is available at t = 0.
Algorithm 2 Delayed Mirror Descent on Games
1: Each player $i$ chooses an initial $y_i^0$.
2: for $t = 0, 1, 2, \ldots$ do
3:   for $i = 1, \ldots, N$ do
4:     $x_i^t = \arg\max_{x_i \in \mathcal{X}_i} \{\langle y_i^t, x_i \rangle - h_i(x_i)\}$
5:     $y_i^{t+1} = y_i^t + \frac{\gamma^t}{|\tilde{G}_i^t|} \sum_{s \in \tilde{G}_i^t} v_i(x^s)$
6:   end for
7: end for
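The following toy sketch simulates Algorithm 2 on the same hypothetical quadratic game used after Algorithm 1; when no feedback arrives in a round, a player re-applies its cached most recent gradient batch, and the random delays are assumed purely for illustration:

```python
import numpy as np
from collections import defaultdict

def v(x, c=np.array([0.7, 0.3])):
    return -2.0 * (x - c) - x[::-1]   # payoff gradients of the toy game

T = 400
y, history = np.zeros(2), []
inbox = [defaultdict(list), defaultdict(list)]   # per-player arrival queues
last_batch = [[], []]                            # most recent information sets
rng = np.random.default_rng(1)
for t in range(T):
    x = np.clip(y, 0.0, 1.0)                     # line 4 of Algorithm 2
    history.append(x)
    g = v(x)
    for i in range(2):
        d = int(rng.integers(1, 6))              # asynchronous random delays (assumed)
        inbox[i][t + d - 1].append(g[i])
    step = 1.0 / (t + 1)
    for i in range(2):
        batch = inbox[i].pop(t, [])
        if batch:                                # fresh feedback: refresh the cache
            last_batch[i] = batch
        if last_batch[i]:                        # line 5: step with the latest batch average
            y[i] += step * float(np.mean(last_batch[i]))
print("final play:", history[-1])
```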
Lemma 5.1. Let $\{d_i^s\}_{s=1}^{\infty}$ be the delay sequences for player $i$.
1. If each player $i$'s delay is bounded (i.e. $\exists d \in \mathbb{Z}$, $d_i^s \le d$, $\forall s$), then Assumption 2 is satisfied
for any positive, non-increasing, not-summable-but-square-summable sequence $\{\gamma^t\}$.
2. There exists a positive, non-increasing, not-summable-but-square-summable sequence (e.g.
$\gamma^t = \frac{1}{t \log t \log \log t}$) such that if $d_i^s = O(s \log s)$, $\forall i$, then Assumption 2 is satisfied.
Proof: We will only prove Statement 2, the more interesting case. Take $\gamma^t = \frac{1}{t \log t \log \log t}$,
which is obviously positive, non-increasing and square-summable. Since $\int_4^t \frac{ds}{s \log s \log \log s} = \log \log \log t - \log \log \log 4 \to \infty$ as $t \to \infty$, $\gamma^t$ is not summable. Next, let $\tilde{G}_i^t$ be given and let $\tilde{t}$ be the most recent
round (up to and including $t$) such that $G_i^{\tilde{t}}$ is not empty. This means:
$$\tilde{G}_i^t = G_i^{\tilde{t}}, \quad G_i^k = \emptyset, \ \forall k \in (\tilde{t}, t]. \tag{5.1}$$
Note that since the gradient at time $\tilde{t}$ will be available at time $\tilde{t} + d_i^{\tilde{t}} - 1$, it follows that
$$t - \tilde{t} \le d_i^{\tilde{t}}. \tag{5.2}$$
Note that this implies $\tilde{t} \to \infty$ as $t \to \infty$, because otherwise, $\tilde{t}$ is bounded, leading to the right side $d_i^{\tilde{t}}$
being bounded, which contradicts the left side diverging to infinity.
Since $d_i^s = O(s \log s)$, it follows that $d_i^s \le K s \log s$ for some $K > 0$. Consequently, Equation (5.2)
implies: $t \le \tilde{t} + K \tilde{t} \log \tilde{t}$. Denote $s_{\min}^t = \min \tilde{G}_i^t$; Equation (5.1) implies that $s_{\min}^t = \min G_i^{\tilde{t}}$, thereby
yielding $s_{\min}^t + d_i^{s_{\min}^t} - 1 = \tilde{t}$. Therefore:
$$d_i^{s_{\min}^t} = \tilde{t} - s_{\min}^t + 1. \tag{5.3}$$
Equation (5.3) implies that $s_{\min}^t \to \infty$ as $t \to \infty$, because otherwise, the left-hand side of Equation (5.3) is bounded while the right-hand side goes to infinity (since $\tilde{t} \to \infty$ as $t \to \infty$ as established
earlier).
With the above notation, it follows that:
$$\lim_{t \to \infty} \sum_{s = \min \tilde{G}_i^t}^{t} \gamma^s \le \lim_{t \to \infty} \sum_{s = s_{\min}^t}^{t} \gamma^s = \lim_{t \to \infty} \Big\{ \sum_{s = s_{\min}^t}^{\tilde{t}} \gamma^s + \sum_{s = \tilde{t}+1}^{t} \gamma^s \Big\} \tag{5.4}$$
$$\le \lim_{t \to \infty} \Big\{ d_i^{s_{\min}^t} \gamma^{s_{\min}^t} + (K \tilde{t} \log \tilde{t}) \gamma^{\tilde{t}+1} \Big\} \tag{5.5}$$
$$= \lim_{t \to \infty} \Big\{ \frac{d_i^{s_{\min}^t}}{(s_{\min}^t) \log(s_{\min}^t) \log \log(s_{\min}^t)} + \frac{K \tilde{t} \log \tilde{t}}{(\tilde{t}+1) \log(\tilde{t}+1) \log \log(\tilde{t}+1)} \Big\} \tag{5.6}$$
$$\le \lim_{t \to \infty} \Big\{ \frac{K (s_{\min}^t) \log(s_{\min}^t)}{(s_{\min}^t) \log(s_{\min}^t) \log \log(s_{\min}^t)} + \frac{K \tilde{t} \log \tilde{t}}{(\tilde{t}+1) \log(\tilde{t}+1) \log \log(\tilde{t}+1)} \Big\} \tag{5.7}$$
$$\le \lim_{t \to \infty} \Big\{ \frac{K}{\log \log(s_{\min}^t)} + \frac{K}{\log \log(\tilde{t}+1)} \Big\} = 0. \tag{5.8}$$
Remark 5.1. The proof of the second claim of Lemma 5.1 indicates that one can also easily obtain
slightly larger delay growth rates: $O(t \log t \log \log t)$, $O(t \log t \log \log t \log \log \log t)$ and so on, by
choosing the corresponding step size sequences. Further, it is conceivable that one can identify
meaningfully larger delay growth rates that still satisfy Assumption 2, particularly under more
restrictions on the degree of delay asynchrony among the players. We leave that for future work.
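A quick numerical illustration of the step size featured in Lemma 5.1(2): its partial sums keep growing (no summability), while the partial sums of its squares level off; the truncation point below is arbitrary:

```python
import numpy as np

t = np.arange(16, 10**6, dtype=float)            # start past e^e so that log log t > 0
g = 1.0 / (t * np.log(t) * np.log(np.log(t)))    # gamma_t = 1 / (t log t log log t)
print("sum of gamma_t:  ", g.sum())              # grows without bound, but extremely slowly
print("sum of gamma_t^2:", (g ** 2).sum())       # converges
```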
5.2
Convergence of DMD to Nash Equilibrium
Theorem 5.2. Fix a game with continuous action spaces $(\mathcal{N}, \mathcal{X} = \prod_{i=1}^{N} \mathcal{X}_i, \{u_i\}_{i=1}^{N})$ that admits
$x^*$ as the unique Nash equilibrium that is $\lambda$-variationally stable. Under Assumption 2, the DMD
iterate $x^t$ given in Algorithm 2 converges to $x^*$, irrespective of the initial point $x^0$.
The proof here uses a similar framework as the one in Remark 4.2, although the details are somewhat
different. Building on the notation and arguments given in Remark 4.2, we again outline three main
ingredients that together establish the result. Detailed proofs are omitted due to space limitation.
1. The gradient update in DMD can be rewritten as:
$$y_i^{t+1} = y_i^t + \frac{\gamma^t}{|\tilde{G}_i^t|} \sum_{s \in \tilde{G}_i^t} v_i(x^s) = y_i^t + \gamma^t v_i(x^t) + \gamma^t \sum_{s \in \tilde{G}_i^t} \frac{v_i(x^s) - v_i(x^t)}{|\tilde{G}_i^t|}.$$
By defining $b_i^t = \sum_{s \in \tilde{G}_i^t} \frac{v_i(x^s) - v_i(x^t)}{|\tilde{G}_i^t|}$, we can write player $i$'s gradient update as:
$$y_i^{t+1} = y_i^t + \gamma^t (v_i(x^t) + b_i^t).$$
By bounding $b_i^t$'s magnitude using the delay sequence, Assumption 2 allows us to establish
that $b_i^t$ has negligible impact over time. Mathematically, the claim is that $\lim_{t \to \infty} \|b_i^t\|_i^* = 0$.
2. The joint DMD update can be written as:
$$x^t = C(y^t), \tag{5.9}$$
$$y^{t+1} = y^t + \gamma^t (v(x^t) + b^t). \tag{5.10}$$
Here again using the $\lambda$-Fenchel coupling as an "energy" function and leveraging the handle on $b^t$
given by Claim 1, we show that for any $\epsilon > 0$ the iterate $x^t$ will eventually enter $B(x^*, \epsilon)$
and visit $B(x^*, \epsilon)$ infinitely often, no matter what the initial point $x^0$ is. Furthermore, per
Claim 3 in Remark 4.2, $B(x^*, \epsilon) \subseteq \tilde{B}(x^*, \delta)$. This implies that $x^t$ must enter $\tilde{B}(x^*, \delta)$
infinitely often.
3. Again using the $\lambda$-Fenchel coupling, we show that under DMD, for any "neighborhood"
$\tilde{B}(x^*, \delta)$, after long enough iterations, if $x^t$ ever enters $\tilde{B}(x^*, \delta)$, it will be trapped inside $\tilde{B}(x^*, \delta)$ thereafter.
Combining the above three elements, it follows that under DMD, starting from some iteration $t$, $x^t$ will
remain in $\tilde{B}(x^*, \delta)$. Since this is true for any $\delta > 0$, we have $F^{\lambda}(x^*, y^t) \to 0$ as $t \to \infty$, thereby
establishing that $x^t = C(y^t) \to x^*$ as $t \to \infty$.
Here again, the result generalizes straightforwardly to the multiple Nash equilibria case (with identical
proofs modulo using point-to-set distance metric). We omit the statement.
6
Conclusion
We examined a model of game-theoretic learning based on OMD with asynchronous and delayed
information. By focusing on games with $\lambda$-stable equilibria, we showed that the sequence of play
induced by OMD converges whenever the feedback delays faced by the players are synchronous and
bounded. Subsequently, to tackle fully decentralized, asynchronous environments with unbounded
feedback delays (possibly growing sublinearly in the game's horizon), we showed that our convergence result still holds under delayed mirror descent, a variant of vanilla OMD that leverages past
information even in rounds where no feedback is received. To further enhance the distributed aspect
of the algorithm, in future work we intend to focus on the case where the players' gradient input is
not only delayed, but also subject to stochastic imperfections; or, taking this to its logical extreme,
when players only have observations of their in-game payoffs, and have no gradient information.
7
Acknowledgments
Zhengyuan Zhou is supported by Stanford Graduate Fellowship and he would like to thank Walid
Krichene and Alex Bayen for stimulating discussions (and their charismatic research style) that have
firmly planted the initial seeds for this work. Panayotis Mertikopoulos gratefully acknowledges
financial support from the Huawei Innovation Research Program ULTRON and the ANR JCJC project
ORACLESS (grant no. ANR-16-CE33-0004-01). Claire Tomlin is supported in part by the NSF
CPS:FORCES grant (CNS-1239166).
References
[1] S. Arora, E. Hazan, and S. Kale, The multiplicative weights update method: A meta-algorithm and
applications, Theory of Computing, 8 (2012), pp. 121-164.
[2] M. Balandat, W. Krichene, C. Tomlin, and A. Bayen, Minimizing regret on reflexive Banach
spaces and Nash equilibria in continuous zero-sum games, in NIPS '16: Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016.
[3] A. Blum, On-line algorithms in machine learning, in Online algorithms, Springer, 1998, pp. 306-325.
[4] N. Cesa-Bianchi and G. Lugosi, Prediction, learning, and games, Cambridge University Press, 2006.
[5] J. Cohen, A. Héliou, and P. Mertikopoulos, Learning with bandit feedback in potential games, in
NIPS '17: Proceedings of the 31st International Conference on Neural Information Processing Systems,
2017.
[6] T. Desautels, A. Krause, and J. W. Burdick, Parallelizing exploration-exploitation tradeoffs in
Gaussian process bandit optimization, Journal of Machine Learning Research, 15 (2014), pp. 3873-3923.
[7] E. Hazan, Introduction to Online Convex Optimization, Foundations and Trends in Optimization Series,
Now Publishers, 2016.
[8] P. Joulani, A. György, and C. Szepesvári, Online learning under delayed feedback, in Proceedings
of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 1453-1461.
[9] A. Kalai and S. Vempala, Efficient algorithms for online decision problems, Journal of Computer and
System Sciences, 71 (2005), pp. 291-307.
[10] S. Krichene, W. Krichene, R. Dong, and A. Bayen, Convergence of heterogeneous distributed
learning in stochastic routing games, in Communication, Control, and Computing (Allerton), 2015 53rd
Annual Allerton Conference on, IEEE, 2015, pp. 480-487.
[11] W. Krichene, B. Drighès, and A. M. Bayen, Online learning of Nash equilibria in congestion games,
SIAM Journal on Control and Optimization, 53 (2015), pp. 1056-1081.
[12] K. Lam, W. Krichene, and A. Bayen, On learning how players learn: estimation of learning dynamics
in the routing game, in Cyber-Physical Systems (ICCPS), 2016 ACM/IEEE 7th International Conference
on, IEEE, 2016, pp. 1-10.
[13] N. Littlestone and M. K. Warmuth, The weighted majority algorithm, Information and
Computation, 108 (1994), pp. 212-261.
[14] R. Mehta, I. Panageas, and G. Piliouras, Natural selection as an inhibitor of genetic diversity:
Multiplicative weights updates algorithm and a conjecture of haploid genetics, in ITCS '15: Proceedings
of the 6th Conference on Innovations in Theoretical Computer Science, 2015.
[15] P. Mertikopoulos, Learning in games with continuous action sets and unknown payoff functions.
https://arxiv.org/abs/1608.07310, 2016.
[16] P. Mertikopoulos, C. H. Papadimitriou, and G. Piliouras, Cycles in adversarial regularized
learning, in SODA '18: Proceedings of the 29th annual ACM-SIAM symposium on discrete algorithms, to
appear.
[17] A. S. Nemirovski and D. B. Yudin, Problem Complexity and Method Efficiency in Optimization,
Wiley, New York, NY, 1983.
[18] Y. Nesterov, Primal-dual subgradient methods for convex problems, Mathematical Programming, 120
(2009), pp. 221-259.
[19] G. Palaiopanos, I. Panageas, and G. Piliouras, Multiplicative weights update with constant
step-size in congestion games: Convergence, limit cycles and chaos, in NIPS '17: Proceedings of the 31st
International Conference on Neural Information Processing Systems, 2017.
[20] K. Quanrud and D. Khashabi, Online learning with adversarial delays, in Advances in Neural
Information Processing Systems, 2015, pp. 1270-1278.
[21] R. T. Rockafellar and R. J.-B. Wets, Variational analysis, vol. 317, Springer Science & Business
Media, 2009.
[22] S. Shalev-Shwartz et al., Online learning and online convex optimization, Foundations and Trends
in Machine Learning, 4 (2012), pp. 107-194.
[23] S. Shalev-Shwartz and Y. Singer, Convex repeated games and Fenchel duality, in Advances in
Neural Information Processing Systems 19, MIT Press, 2007, pp. 1265-1272.
[24] Y. Viossat and A. Zapechelnyuk, No-regret dynamics and fictitious play, Journal of Economic
Theory, 148 (2013), pp. 825-842.
[25] L. Xiao, Dual averaging methods for regularized stochastic learning and online optimization, Journal of
Machine Learning Research, 11 (2010), pp. 2543-2596.
[26] Z. Zhou, N. Bambos, and P. Glynn, Dynamics on linear influence network games under stochastic
environments, in International Conference on Decision and Game Theory for Security, Springer, 2016,
pp. 114-126.
[27] Z. Zhou, B. Yolken, R. A. Miura-Ko, and N. Bambos, A game-theoretical formulation of influence
networks, in American Control Conference (ACC), 2016, IEEE, 2016, pp. 3802-3807.
[28] M. Zhu and E. Frazzoli, Distributed robust adaptive equilibrium computation for generalized convex
games, Automatica, 63 (2016), pp. 82-91.
[29] M. Zinkevich, Online convex programming and generalized infinitesimal gradient ascent, in ICML '03:
Proceedings of the 20th International Conference on Machine Learning, 2003, pp. 928-936.
6,851 | 7,198 | Asynchronous Coordinate Descent under More Realistic Assumptions*
Tao Sun
National University of Defense Technology
Changsha, Hunan 410073, China
[email protected]
Robert Hannah
University of California, Los Angeles
Los Angeles, CA 90095, USA
[email protected]
Wotao Yin
University of California, Los Angeles
Los Angeles, CA 90095, USA
[email protected]
Abstract
Asynchronous-parallel algorithms have the potential to vastly speed up algorithms
by eliminating costly synchronization. However, our understanding of these algorithms is limited because the current convergence theory of asynchronous block
coordinate descent algorithms is based on somewhat unrealistic assumptions. In
particular, the age of the shared optimization variables being used to update blocks
is assumed to be independent of the block being updated. Additionally, it is
assumed that the updates are applied to randomly chosen blocks.
In this paper, we argue that these assumptions either fail to hold or will imply less
efficient implementations. We then prove the convergence of asynchronous-parallel
block coordinate descent under more realistic assumptions, in particular, always
without the independence assumption. The analysis permits both the deterministic
(essentially) cyclic and random rules for block choices. Because a bound on the
asynchronous delays may or may not be available, we establish convergence for
both bounded delays and unbounded delays. The analysis also covers nonconvex,
weakly convex, and strongly convex functions. The convergence theory involves a
Lyapunov function that directly incorporates both objective progress and delays. A
continuous-time ODE is provided to motivate the construction at a high level.
1
Introduction
In this paper, we consider the asynchronous-parallel block coordinate descent (async-BCD) algorithm
for solving the unconstrained minimization problem
$$\min_{x \in \mathbb{R}^N} f(x) = f(x_1, \ldots, x_N), \tag{1}$$
where $f$ is a differentiable function and $\nabla f$ is $L$-Lipschitz continuous. Async-BCD [14, 13, 16]
has virtually the same implementation as regular BCD. The difference is that the threads doing the
parallel computation do not wait for all others to finish and share their updates before starting the next
iteration, but merely continue to update with the most recent solution-vector information available2 .
* The work is supported in part by the National Key R&D Program of China 2017YFB0202902, China
Scholarship Council, NSF DMS-1720237, and ONR N000141712162.
2 Additionally, the step size needs to be modified to ensure convergence results hold. However, in practice
traditional step sizes appear to still allow convergence, barring extreme circumstances.
In traditional algorithms, latency, bandwidth limits, and unexpected drains on resources that delay
the update of even a single thread will cause the entire system to wait. By eliminating this costly idle
time, asynchronous algorithms can be much faster than traditional ones.
In async-BCD, each agent continually updates the solution vector, one block at a time, leaving
all other blocks unchanged. Each block update is a read-compute-update cycle. It begins with an
agent reading $x$ from shared memory or a parameter server, and saving it in a local cache as $\hat{x}$. The
agent then computes $-\frac{1}{L} \nabla_i f(\hat{x})$, a block partial gradient3. The final step of the cycle depends on
the parallel system setup. In a shared memory setup, the agent reads block $x_i$ again and writes
$x_i - \frac{\gamma^k}{L} \nabla_i f(\hat{x})$ to $x_i$ (where $\gamma^k$ is the step size). In a parameter-server setup, the agent can send
$-\frac{1}{L} \nabla_i f(\hat{x})$ and let the server update $x_i$. Other setups are possible, too. The iteration counter $k$
increments upon the completion of any block update, and the updating block is denoted as $i_k$.
Many iterations may occur between the time a computing node reads the solution vector $\hat{x}$ into
memory, and the time that the node's corresponding update is applied to the shared solution vector.
Because of this, the iteration of async-BCD is, therefore, modeled [14] as
$$x_{i_k}^{k+1} = x_{i_k}^{k} - \frac{\gamma^k}{L} \nabla_{i_k} f(\hat{x}^k), \tag{2}$$
where $\hat{x}^k$ is a potentially outdated version of $x^k$, and $x_j^{k+1} = x_j^k$ for all non-updating blocks $j \neq i_k$.
The convergence behavior of this algorithm depends on the sequence of updated blocks $i_k$, the step
size sequence $\gamma^k$, as well as the ages of the blocks of $\hat{x}^k$ relative to $x^k$. We define the delay vector
$\vec{j}(k) = (j(k,1), j(k,2), \ldots, j(k,N)) \in \mathbb{Z}^N$, which represents how outdated each of the blocks
is. Specifically, we define:
$$\hat{x}^k = (x_1^{k-j(k,1)}, x_2^{k-j(k,2)}, \ldots, x_N^{k-j(k,N)}). \tag{3}$$
The $k$-th delay (or current delay) is $j(k) = \max_{1 \le i \le N} \{j(k,i)\}$.
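A minimal sketch of iteration (2) with the stale read (3), using scalar blocks for simplicity; `grad_block` is a hypothetical callable returning the block partial gradient of $f$, and the delay vector is assumed to satisfy $j(k, i) \le k$:

```python
import numpy as np

def stale_read(history, k, j):
    """Build x_hat^k of (3): block i is read from iteration k - j[i]."""
    return np.array([history[k - j[i]][i] for i in range(len(j))])

def async_bcd_step(history, k, i_k, j, grad_block, step, L):
    """One iteration of (2): only block i_k changes, computed at the stale read."""
    x_hat = stale_read(history, k, j)
    x_next = history[k].copy()
    x_next[i_k] -= (step / L) * grad_block(x_hat, i_k)
    return x_next
```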
1.1
Dependence between delays and blocks
In previous analyses [13, 14, 16, 9], it is assumed that the block index $i_k$ and the delay $\vec{j}(k)$ were
independent sequences. This simplifies proofs, for example, giving $\mathbb{E}_{i_k}(P_{i_k} \nabla f(\hat{x}^k)) = \frac{1}{N} \nabla f(\hat{x}^k)$
when $i_k$ is chosen at random, where $P_i$ denotes the projection to the $i$th block. Without independence,
$\vec{j}(k)$ will depend on $i_k$, causing the distribution of $\hat{x}^k$ to be different for each possible $i_k$, thus
breaking the previous equality. However, the independence assumption is unrealistic in practice.
Consider a problem where some blocks are more expensive to update than others4 . Blocks that take
longer to update should have greater delays when they are updated because more other updates will
have occurred between the time that $\hat{x}$ is read and when the update is applied. For the same reason,
updates on blocks assigned to slower or busier agents will generally have greater delays. Indeed
this turns out to be the case in practice. Experiments were performed on a cluster with 2 nodes,
each with 16 threads running on an Intel Xeon CPU E5-2690 v2. The algorithm was applied to the
logistic regression problem on the 'news20' data set from LIBSVM, with 64 contiguous coordinate
blocks of equal size. Over 2000 epochs, blocks 0, 1, and 15 had average delays of 351, 115, and
28, respectively. Async-BCD completed this over 7x faster than the corresponding synchronous
algorithm using the same computing resources, with a nearly equal decrease in objective function.
Even when blocks have balanced difficulty, and the computing nodes have equal computing power,
this dependence persists. We assigned 20 threads to each core, with each thread assigned to a block
of 40 coordinates with an equal number of nonzeros. The mean delay varied from 29 to 50 over
the threads. This may be due to the cluster scheduler or issues of data locality, which were hard to
examine. Clearly, there is strong dependence of the delays ~j(k) on the updated block ik .
1.2
Stochastic and deterministic block rules
This paper considers two different block rules: deterministic and stochastic. For the stochastic block
rule, at each update, a block is chosen from {1, 2, . . . , N } uniformly at random5 , for instance in
3 The computation can start before the reading is completed. If $\nabla_i f(\hat{x})$ does not require all components of $\hat{x}$, only the required ones are read.
4 Say, because they are larger, bear more nonzero entries in the training set, or suffer poorer data locality.
5 The distribution doesn't have to be uniform. We need only assume that every block has a nonzero probability of being updated. It is easy to adjust our analysis to this case.
[14, 13, 16]. For the deterministic rule, $i_k$ is an arbitrary sequence that is assumed to be essentially cyclic. That is, there is an $N' \in \mathbb{N}$, $N' \ge N$, such that each block $i \in \{1, 2, \ldots, N\}$ is updated at least once in every window of $N'$ iterations; that is,
for each $t \in \mathbb{Z}^+$, there exists an integer $K(i,t) \in \{tN', tN'+1, \ldots, (1+t)N'-1\}$ such that $i_{K(i,t)} = i$.
This encompasses different kinds of cyclic rules such as fixed ordering, random permutation, and greedy selection. The stochastic block rule is easier to analyze because taking expectation yields a good approximation to the full gradient. It ensures that every block is updated at the specified
frequency. However, it can be expensive or even infeasible to implement for the following reasons.
In the shared-memory setup, stochastic block rules require random data access, which is not only significantly slower than sequential data access but also causes frequent cache misses (waiting for data
being fetched from slower cache or the main memory). The cyclic rules clearly avoid these issues
since data requirements are predictable. In the parameter-server setup where workers update randomly
assigned blocks at each step, each worker must either store all the problem data necessary to update
any block (which may mean massive storage requirements) or read the required data from the server
at every step (which may mean massive bandwidth requirements). Clearly, permanently assigning
blocks to agents avoids these issues. On the other hand, the analysis of cyclic rules generally has to
consider the worst-case ordering and necessarily gives worse performance in the worst case [19]. In
practice, worst-case behavior is rare, and cyclic rules often lead to good performance [7, 8, 3].
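As a small illustration (ours, not from the paper) of the essentially cyclic condition above, the following check verifies that every block appears in each window of $N'$ consecutive updates:

```python
# Check whether a block-index sequence is "essentially cyclic" with window N':
# every block in {0, ..., N-1} must appear in each window of N' updates.
def is_essentially_cyclic(seq, N, N_prime):
    T = len(seq) // N_prime
    for t in range(T):
        window = set(seq[t * N_prime : (t + 1) * N_prime])
        if not set(range(N)) <= window:
            return False
    return True

# A fixed cyclic ordering satisfies the condition with N' = N; a sequence that
# skips a block inside some window violates it.
print(is_essentially_cyclic([0, 1, 2, 0, 1, 2, 0, 1, 2], N=3, N_prime=3))  # True
print(is_essentially_cyclic([0, 0, 2, 0, 1, 2], N=3, N_prime=3))           # False
```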
1.3 Bounded and unbounded delays
We consider different delay assumptions as well. Bounded delay means $j(k) \le \tau$ for some fixed $\tau \in \mathbb{Z}^+$ and all iterations $k$, while unbounded delay allows $\sup_k\{j(k)\} = +\infty$. Bounded and unbounded delays can be further divided into deterministic and stochastic. Deterministic delays refer to a sequence of delay vectors $\vec{j}(0), \vec{j}(1), \vec{j}(2), \ldots$ that is arbitrary, or follows an unknown distribution and so is treated as arbitrary. Our stochastic delay results apply to distributions that decay faster than $O(k^{-3})$. Deterministic unbounded delays apply to the case when async-BCD runs on unfamiliar hardware platforms. For convergence, we require a finite $\liminf_k\{j(k)\}$ and the current step size $\eta_k$ to be adaptively chosen according to the current delay $j(k)$, which must be measured or overestimated.
Bounded delays and stochastic unbounded delays apply when the user can provide a bound or delay
distribution, respectively. The user can obtain these from previous experience or by running a pilot
test. In return, a fixed step size allows convergence, and measuring the current delay is not needed.
1.4 Contributions
Our contributions are mainly convergence results for three kinds of delays (bounded, stochastic unbounded, and deterministic unbounded), obtained without the artificial independence assumption between the block index and the delay. The results are provided for nonconvex, convex, and strongly convex
functions with Lipschitz gradients. Sublinear rates and linear rates are provided, which match
the rates for the corresponding synchronous algorithms in terms of order of magnitude. Due to
space limitation, we restrict ourselves to Lipschitz differentiable functions and leave out nonsmooth
proximable functions. Like many analyses of asynchronous algorithms, our proofs are built on
the construction of Lyapunov functions. We provide a simple ODE-based (i.e., continuous time)
construction for bounded delays to motivate the construction of the Lyapunov function in the standard
discrete setting. Our analysis brings great news to the practitioner. Roughly speaking, in a variety of settings, even when there is no load balancing (thus the delays may depend on the block index) or
bound on the delays, convergence of async-BCD can be assured by using our provided step sizes.
Our proofs do not treat asynchronicity as noise, as many papers do6 , because modelling delays in this
way appears to destroy valuable information, and leads to inequalities that are too blunt to obtain
stronger results. This is why sublinear and linear rates can be established for weak and strong convex
problems respectively, even when delays depend on the blocks and are potentially unbounded. Our
main focus was to prove new convergence results in a new setting, not to obtain the best possible
rates. Space limitations make this difficult, and we leave it for future work. The main message is that
even without the independence assumption, convergence of the same order as for the corresponding
synchronous algorithm occurs. The step sizes and rates obtained may be overly pessimistic for the
6 See, for example, (5.1) and (A.10) in [18], and (14) and Lemma 4 in [6].
practitioner to use. In practice, we find that using the standard synchronous step size results in
convergence, and the observed rate of convergence is extremely close to that of the synchronous
counterpart. With the independence assumption, convergence rates for asynchronous algorithms have
recently been proven to be asymptotically the same as their synchronous counterparts [10].
1.5 Related work
Our work extends the theory on asynchronous BCD algorithms such as [18, 14, 13]. However, their
analyses rely on the independence assumption and assume bounded delays. The bounded delay
assumption was weakened by recent papers [9, 17], but independence and random blocks were still
needed. Recently, [12] proposed (in the SGD setting) an innovative "read after" sequence-relabeling
technique to create the independence. However, enforcing independence in this way creates other
artificial implementation requirements that may be problematic: for instance, agents must read "all shared data parameters and historical gradients before starting iterations", even if not all of this is required to compute updates. Our analysis does not require these kinds of implementation fixes. It also works for unbounded delays and deterministic block choices.
Related recent works also include [1, 2], which solve our problem with additional convex block-separable terms in the objective. In the first paper [1], independence between blocks and delays is avoided. However, they require a step size that diminishes at $1/k$ and that the sequence of iterates is bounded (which in general may not be true). The second paper [2] relaxes independence by using a different set of assumptions. In particular, their assumption D3 assumes that, regardless of the previous updates, there is a universally positive chance for every block to be updated in the next step. This Markov-type assumption relaxes the independence assumption but does not avoid it. Paper [15] addressed this issue by decoupling the parameters read by each core from the virtual parameters on which progress is actually defined. Based on the idea of [16], [12] addressed the dependence problem in related work. In the convex case with a bounded delay $\tau$, the step size in paper [14] is $O\big(\frac{1}{\tau^2/\sqrt{N}}\big)$. In their proofs, the Lyapunov function is based on $\|x^k - x^*\|_2^2$. Our analysis uses a Lyapunov function consisting of both the function value and the sequence history, where the latter vanishes when the delays vanish. If $\tau$ is much larger than the number of blocks of the problem, our step size $O(\frac{1}{\tau})$ is better, even under our much weaker conditions. The step size bound in [16, 9, 4] is $O\big(\frac{1}{1+2\tau/\sqrt{N}}\big)$, which is better than ours, but they need the independence assumption and the stochastic block rule. Recently, [20]
introduces an asynchronous primal-dual method for a problem similar to ours but having additional
affine linear constraints. The analysis assumes bounded delays, random blocks, and independence.
1.6 Notation
We let $x^*$ denote any minimizer of $f$. For the update in (2), we use the following notation:
$$\Delta_k := x^{k+1} - x^k \overset{(2)}{=} -\frac{\eta_k}{L}\nabla_{i_k} f(\hat{x}^k), \qquad d_k := x^k - \hat{x}^k. \qquad (4)$$
We also use the convention $\Delta_k := 0$ if $k < 0$. Let $\mathcal{X}^k$ be the sigma algebra generated by $\{x^0, x^1, \ldots, x^k\}$. Let $\mathbb{E}_{\vec{j}(k)}$ denote the expectation over the value of $\vec{j}(k)$ (when it is a random variable). $\mathbb{E}$ denotes the expectation over all random variables.
2 Bounded delays
In this part, we present convergence results under bounded delays. If the gradient of the function is
L-Lipschitz (even if the function is nonconvex), we present the convergence for both the deterministic
and stochastic block rule. If the function is convex, we can obtain a sublinear convergence rate.
Further, if the function is restricted strongly convex, a linear convergence rate is obtained.
2.1 Continuous-time analysis
Let $t$ be time in this subsection. Consider the ODE
$$\dot{x}(t) = -\eta\,\nabla f(\hat{x}(t)), \qquad (5)$$
where $\eta > 0$. If we set $\hat{x}(t) \equiv x(t)$, this system describes a gradient flow, which monotonically decreases $f(x(t))$, and its discretization is the gradient descent iteration. Indeed, we have
$$\frac{d}{dt} f(x(t)) \overset{(5)}{=} \langle \nabla f(x(t)), \dot{x}(t)\rangle = -\frac{1}{\eta}\,\|\dot{x}(t)\|_2^2.$$
Instead, we allow delays (i.e., $\hat{x}(t) \neq x(t)$) and impose the bound $c > 0$ on the delays:
$$\|\hat{x}(t) - x(t)\|_2 \le \int_{t-c}^{t} \|\dot{x}(s)\|_2\, ds. \qquad (6)$$
The delays introduce inexactness to the gradient flow of $f(x(t))$. We lose monotonicity. Indeed,
$$\frac{d}{dt} f(x(t)) = \langle \nabla f(x(t)), \dot{x}(t)\rangle = \langle \nabla f(\hat{x}(t)), \dot{x}(t)\rangle + \langle \nabla f(x(t)) - \nabla f(\hat{x}(t)), \dot{x}(t)\rangle$$
$$\overset{a)}{\le} -\frac{1}{\eta}\|\dot{x}(t)\|_2^2 + L\,\|x(t) - \hat{x}(t)\|_2\,\|\dot{x}(t)\|_2 \overset{b)}{\le} -\frac{1}{2\eta}\|\dot{x}(t)\|_2^2 + \frac{\eta c L^2}{2}\int_{t-c}^{t}\|\dot{x}(s)\|_2^2\,ds, \qquad (7)$$
Here $a)$ is from (5) and the Lipschitzness of $\nabla f$, and $b)$ is from the Cauchy-Schwarz inequalities $L\|x(t) - \hat{x}(t)\|_2\|\dot{x}(t)\|_2 \le \frac{\|\dot{x}(t)\|_2^2}{2\eta} + \frac{\eta L^2 \|x(t)-\hat{x}(t)\|_2^2}{2}$ and $\|x(t)-\hat{x}(t)\|_2^2 \overset{(6)}{\le} c \int_{t-c}^{t} \|\dot{x}(s)\|_2^2\,ds$. The inequalities are generally unavoidable. Therefore, we design an energy function with both $f$ and a weighted total kinetic term, where $\gamma > 0$ will be decided below:
$$\xi(t) = f(x(t)) + \gamma \int_{t-c}^{t} \big(s - (t-c)\big)\,\|\dot{x}(s)\|_2^2\,ds. \qquad (8)$$
By substituting the bound on $\frac{d}{dt}f(x(t))$ in (7), we get the time derivative:
$$\dot{\xi}(t) = \frac{d}{dt}f(x(t)) + \gamma c\,\|\dot{x}(t)\|_2^2 - \gamma\int_{t-c}^{t}\|\dot{x}(s)\|_2^2\,ds \le -\Big(\frac{1}{2\eta} - \gamma c\Big)\|\dot{x}(t)\|_2^2 - \Big(\gamma - \frac{\eta c L^2}{2}\Big)\int_{t-c}^{t}\|\dot{x}(s)\|_2^2\,ds. \qquad (9)$$
As long as $\eta < \frac{1}{Lc}$, there exists $\gamma > 0$ such that $\big(\frac{1}{2\eta} - \gamma c\big) > 0$ and $\big(\gamma - \frac{\eta c L^2}{2}\big) > 0$, so $\xi(t)$ is monotonically nonincreasing. Assume $\min f$ is finite. Since $\xi(t)$ is lower bounded by $\min f$, $\xi(t)$ must converge, hence $\dot{\xi} \to 0$ and $\dot{x}(t) \to 0$ by (8). $\nabla f(\hat{x}(t)) \to 0$ by (5), and $\hat{x}(t) - x(t) \to 0$ by (6). The last two results further yield $\nabla f(x(t)) \to 0$.
2.2 Discrete analysis
The analysis for our discrete iteration (2) is based on the following Lyapunov function:
$$\Phi_k := f(x^k) + \frac{L}{2\epsilon}\sum_{i=k-\tau}^{k-1}\big(i-(k-\tau)+1\big)\,\|\Delta_i\|_2^2, \qquad (10)$$
for some $\epsilon > 0$ determined later based on the step size and on $\tau$, the bound on the delays. The constant $\epsilon$ is not an algorithm parameter. In the lemma below, we present a fundamental inequality, which states that, regardless of which block $i_k$ is updated and which $\hat{x}^k$ is used to compute the update in (2), there is a sufficient descent in our Lyapunov function.

Lemma 1 (sufficient descent for bounded delays) Conditions: Let $f$ be a function (possibly nonconvex) with $L$-Lipschitz gradient and finite $\min f$. Let $(x^k)_{k\ge 0}$ be generated by the async-BCD algorithm (2), and the delays be bounded by $\tau$. Choose the step size $\eta_k \equiv \eta = \frac{2c}{2\tau+1}$ for arbitrary fixed $0 < c < 1$. Result: we can choose $\epsilon > 0$ to obtain
$$\Phi_k - \Phi_{k+1} \ge \frac{1}{2}\Big(\frac{1}{\eta} - \frac{1}{2} - \epsilon\Big) L\,\|\Delta_k\|_2^2. \qquad (11)$$
Consequently,
$$\lim_k \|\Delta_k\|_2 = 0 \qquad (12)$$
and
$$\min_{1\le i\le k} \|\Delta_i\|_2 = o(1/\sqrt{k}). \qquad (13)$$
So we have that the smallest gradient obtained by step $k$ decays faster than $O(1/\sqrt{k})$. Based on the lemma, we obtain a very general result for nonconvex problems.
Theorem 1 Assume the conditions of Lemma 1, for $f$ that may be nonconvex. Under the deterministic block rule, we have
$$\lim_k \|\nabla f(x^k)\|_2 = 0, \qquad \min_{1\le i\le k}\|\nabla f(x^i)\|_2 = o(1/\sqrt{k}). \qquad (14)$$
This rate has the same order of magnitude as standard gradient descent for a nonconvex function.
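As a quick numeric illustration (ours, not from the paper), the Lemma 1 step size $\eta = \frac{2c}{2\tau+1}$ shrinks linearly in the delay bound:

```python
# Step size eta = 2c / (2*tau + 1) from Lemma 1 for several delay bounds tau.
c = 0.99
for tau in [0, 1, 10, 100]:
    print(f"tau = {tau:3d}  ->  eta = {2 * c / (2 * tau + 1):.4f}")
```

With $\tau = 0$ this recovers a step size just below $2$ (i.e., below $2/L$ after the $1/L$ scaling in (2)), and the admissible step shrinks as the delay bound grows.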
2.3 Stochastic block rule
Under the stochastic block rule, an agent picks a block from $\{1, 2, \ldots, N\}$ uniformly at random at the beginning of each update. For the $k$th completed update, the index of the chosen block is $i_k$. Our result in this subsection relies on the following assumption on the random variable $i_k$:
$$\mathbb{E}_{i_k}\big(\|\nabla_{i_k} f(x^{k-\tau})\|^2 \,\big|\, \mathcal{X}^{k-\tau}\big) = \frac{1}{N}\sum_{i=1}^{N}\|\nabla_i f(x^{k-\tau})\|^2, \qquad (15)$$
where $\mathcal{X}^k = \sigma\big(x^0, x^1, \ldots, x^k, \vec{j}(0), \vec{j}(1), \ldots, \vec{j}(k)\big)$, $k = 0, 1, \ldots$, is the filtration that represents the information that is accumulated as our algorithm runs. It is important to note that (15) uses $x^{k-\tau}$ instead of $\hat{x}^k$ because $\hat{x}^k$ may depend on $i_k$. This condition essentially states that, given the information at iteration $k-\tau$ and earlier, $i_k$ is uniform at step $k$. We can relax (15) to nearly-uniform distributions. Indeed, Theorem 2 below only needs that every block has a nonzero probability of being updated given $\mathcal{X}^{k-\tau}$, that is,
$$\mathbb{E}\big(\|\nabla_{i_k} f(x^{k-\tau})\|^2 \,\big|\, \mathcal{X}^{k-\tau}\big) \ge \frac{p_{\min}}{N}\sum_{i=1}^{N}\|\nabla_i f(x^{k-\tau})\|^2, \qquad (16)$$
for some universal $p_{\min} > 0$. The interpretation is that, though $i_k$ and $\nabla f(x^{k-\tau})$ are dependent, since $\tau$ iterations have passed, $\nabla f(x^{k-\tau})$ has a limited influence on the distribution of $i_k$: there is a minimum probability that each index is chosen given sufficient time. For convenience and simplicity, we assume (15) instead of (16).
Next, we present a general result for a possibly nonconvex objective $f$.
Theorem 2 Assume the conditions of Lemma 1. Under the stochastic block rule and assumption (15), we have:
$$\lim_k \mathbb{E}\|\nabla f(x^k)\|_2 = 0, \qquad \min_{1\le i\le k} \mathbb{E}\|\nabla f(x^i)\|_2^2 = o(1/k). \qquad (17)$$

2.3.1 Sublinear rate under convexity
When the function $f$ is convex, we can obtain convergence rates, for which we need a slightly modified Lyapunov function
$$F_k := f(x^k) + \rho \sum_{i=k-\tau}^{k-1}\big(i-(k-\tau)+1\big)\,\|\Delta_i\|_2^2, \qquad (18)$$
where $\rho := \big[1 + \frac{1}{2\epsilon}\big(\frac{1}{\eta} - \frac{1}{2} - \epsilon\big)\big]\frac{L}{2\epsilon}$. Here, we assume $\tau \ge 1$. Since $\tau$ is just an upper bound of the delays, the delays can be 0. We also define $\phi_k := \mathbb{E}(F_k - \min f)$ and $S(k,\tau) := \sum_{i=k-\tau}^{k-1}\|\Delta_i\|_2^2$.

Lemma 2 Assume the conditions of Lemma 1. Furthermore, let $f$ be convex and use the stochastic block rule. Let $\bar{x}^k$ denote the projection of $x^k$ onto $\operatorname{argmin} f$, assumed to exist, and let $\alpha := \max\big\{\frac{8NL^2}{\epsilon^2},\ (12N+2)L^2\tau + \frac{L}{\eta}\big\}$, $\beta := \alpha \big/ \big[\frac{L}{4\epsilon}\big(\frac{1}{\eta} - \frac{1}{2} - \epsilon\big)\big]$. Then we have:
$$(\phi_k)^2 \le \beta\,(\phi_k - \phi_{k+1})\cdot\big(\tau\,\mathbb{E} S(k,\tau) + \mathbb{E}\|x^k - \bar{x}^k\|_2^2\big). \qquad (19)$$
When $\tau = 1$ (nearly no delay), we can obtain $\alpha = O(NL^2/\epsilon^2)$ and $\beta = O(\epsilon\alpha/L) = O(NL/\epsilon)$, which matches the result of standard BCD. This is used to prove sublinear convergence.
Theorem 3 Assume the conditions of Lemma 1. Furthermore, let $f$ be convex and coercive7, and use the stochastic block rule. Then we have:
$$\mathbb{E}\big(f(x^k) - \min f\big) = O(1/k). \qquad (20)$$
7 A function $f$ is coercive if $\|x\| \to \infty$ implies $f(x) \to \infty$.
2.3.2 Linear rate under convexity
We next consider when $f$ is $\nu$-restricted strongly convex8, in addition to having an $L$-Lipschitz gradient. That is, for $x \in \operatorname{dom}(f)$, $\langle \nabla f(x),\ x - \operatorname{Proj}_{\operatorname{argmin} f}(x)\rangle \ge \nu \cdot \operatorname{dist}^2(x, \operatorname{argmin} f)$.
Theorem 4 Assume the conditions of Lemma 1. Furthermore, let $f$ be $\nu$-restricted strongly convex, and use the stochastic block rule. Then we have:
$$\mathbb{E}\big(f(x^k) - \min f\big) = O(c^k), \qquad (21)$$
where $c := \dfrac{\beta/\min\{\nu,1\}}{1 + \beta/\min\{\nu,1\}} < 1$ for $\beta$ given in Lemma 2.

3 Stochastic unbounded delay
In this part, the delay vector $\vec{j}(k)$ is allowed to be an unbounded random variable. Under some mild restrictions on the distribution of $\vec{j}(k)$, we can still establish convergence. In light of our continuous-time analysis, we must develop a new bound for the last inner product in (7), which requires the tail distribution of $j(k)$ to decay sufficiently fast. Specifically, we define a sequence of fixed parameters $p_j$ such that $p_j \ge \mathbb{P}(j(k) = j)$ for all $k$, $s_l = \sum_{j=l}^{+\infty} j\,p_j$, and $c_i := \sum_{l=i}^{+\infty} s_l$. Clearly, $c_0$ is larger than $c_1, c_2, \ldots$, and we need $c_0$ to be finite. Distributions with $p_j = O(j^{-t})$, for $t > 3$, and exponential-decay distributions satisfy this requirement. Define the Lyapunov function $G_k$ as $G_k := f(x^k) + \bar{\epsilon}\sum_{i=0}^{k-1} c_{k-1-i}\,\|\Delta_i\|_2^2$, where $\bar{\epsilon} := \frac{L}{2\eta} + \big(\frac{1}{\eta} - \frac{1}{2}\big)\frac{L}{c_0} - \eta L c_0$. To simplify the presentation, we define $R(k) := \sum_{i=0}^{k} c_{k-i}\,\mathbb{E}\|\Delta_i\|_2^2$.
Lemma 3 (Sufficient descent for stochastic unbounded delays) Conditions: Let $f$ be a function (which may be nonconvex) with $L$-Lipschitz gradient and finite $\min f$. Let the delays be stochastic and unbounded. Use the step size $\eta_k \equiv \eta = \frac{2c}{2c_0+1}$ for arbitrary fixed $0 < c < 1$. Results: we can set $\epsilon > 0$ to ensure sufficient descent:
$$\mathbb{E}[G_k - G_{k+1}] \ge \frac{L}{c_0}\Big(\frac{1}{\eta} - \frac{1}{2} - \epsilon c_0\Big) R(k). \qquad (22)$$
And we have
$$\lim_k \mathbb{E}\|\Delta_k\|_2 = 0 \quad\text{and}\quad \lim_k \mathbb{E}\|d_k\|_2 = 0. \qquad (23)$$

3.1 Deterministic block rule
Theorem 5 Let the conditions of Lemma 3 hold for $f$. Under the deterministic block rule (§1.2), we have:
$$\lim_k \mathbb{E}\|\nabla f(x^k)\|_2 = 0. \qquad (24)$$

3.2 Stochastic block rule
Recall that under the stochastic block rule, the block to update is selected uniformly at random from $\{1, 2, \ldots, N\}$. The previous assumption (15), which was made for bounded delays, needs to be updated into the following assumption for unbounded delays:
$$\mathbb{E}_{i_k}\big(\|\nabla_{i_k} f(x^{k-j(k)})\|_2^2\big) = \frac{1}{N}\sum_{i=1}^{N}\|\nabla_i f(x^{k-j(k)})\|_2^2, \qquad (25)$$
where $j(k)$ is still a random variable on both sides. As argued below (15), the uniform distribution can easily be relaxed to a nearly-uniform distribution, but we use the former for simplicity.
Theorem 6 Let the conditions of Lemma 3 hold. Under the stochastic block rule and assumption (25), we have
$$\lim_k \mathbb{E}\|\nabla f(x^k)\|_2 = 0. \qquad (26)$$
8 A condition weaker than $\nu$-strong convexity and useful for problems involving an underdetermined linear mapping $Ax$; see [11, 13].
3.2.1 Convergence rate
When $f$ is convex, we can derive convergence rates for $\phi_k := \mathbb{E}(G_k - \min f)$.
Lemma 4 Let the conditions of Lemma 3 hold, and let $f$ be convex. Let $\bar{x}^k$ denote the projection of $x^k$ onto $\operatorname{argmin} f$. Let $\alpha = \max\big\{\frac{8NL^2}{\epsilon^2}c_0,\ (12N+2)L^2 + \frac{L}{\eta}\big\}$ and $\beta = \alpha\big/\big[\frac{L}{2}\big(\frac{1}{\eta} - \frac{1}{2} - \epsilon c_0\big)\big]$. Then we have
$$(\phi_k)^2 \le \beta\,(\phi_k - \phi_{k+1})\cdot\big(c_0\,R(k) + \mathbb{E}\|x^k - \bar{x}^k\|_2^2\big). \qquad (27)$$
A sublinear convergence rate can be obtained if $\sup_k\{\mathbb{E}\|x^k - \bar{x}^k\|_2^2\} < +\infty$, which can be ensured by adding a projection to a large artificial box set that surely contains the solution. Here we only present a linear convergence result.
Theorem 7 Let the conditions of Lemma 3 hold. In addition, let $f$ be $\nu$-restricted strongly convex and set the step size $\eta_k \equiv \eta < \frac{2c}{2c_0+1}$, with $c = \dfrac{\beta\max\{1,\frac{1}{\nu}\}}{1+\beta\max\{1,\frac{1}{\nu}\}} < 1$. Then,
$$\mathbb{E}\big(f(x^k) - \min f\big) = O(c^k). \qquad (28)$$

4 Deterministic unbounded delays
In this part, we consider deterministic unbounded delays, which require delay-adaptive step sizes. Fix a positive sequence $(\epsilon_i)_{i\ge 0}$ (which can be optimized later given the actual delays) such that $\sigma_i := \sum_{j=i}^{+\infty} \epsilon_j^{-1}$ obeys $\sigma_1 < +\infty$. Set $D_j := \frac{1}{2} + \frac{\epsilon_j}{2} + \sum_{i=1}^{j}\frac{1}{2\epsilon_i}$. We use a new Lyapunov function $H_k := f(x^k) + \frac{L}{2}\sum_{i=1}^{+\infty}\sigma_i\,\|\Delta_{k-i}\|_2^2$. Let $T \ge \liminf j(k)$, and let $Q_T$ be the subsequence of $\mathbb{N}$ where the current delay is less than $T$. We prove convergence on the family of subsequences $x^k$, $k \in Q_T$. The algorithm is independent of the choice of $T$. The algorithm is run as before, and after completion, an arbitrarily large $T \ge \liminf j(k)$ can be chosen. Extending the result to standard sequence convergence has proven intractable.
Lemma 5 (sufficient descent for unbounded deterministic delays) Conditions: Let $f$ be a function (which may be nonconvex) with $L$-Lipschitz gradient and finite $\min f$. The delays $j(k)$ are deterministic and obey $\liminf j(k) < \infty$. Use the step size $\eta_k = c/D_{j(k)}$ for arbitrary fixed $0 < c < 1$. Results: We have
$$H_k - H_{k+1} \ge L\Big(\frac{1}{\eta_k} - D_{j(k)}\Big)\|\Delta_k\|_2^2, \qquad \lim_k \|\Delta_k\|_2 = 0. \qquad (29)$$
On any subsequence $Q_T$ (for arbitrarily large $T \ge \liminf j(k)$), we have:
$$\lim_{(k\in Q_T)\to\infty} \|d_k\|_2 = 0, \qquad \lim_{(k\in Q_T)\to\infty} \|\nabla_{i_k} f(\hat{x}^k)\|_2 = 0.$$
To prove our next result, we need a new assumption: essentially cyclically semi-unbounded delay (ECSD), which is slightly stronger than the essentially cyclic assumption. In every window of $N'$ steps, every index $i$ is updated at least once with a delay less than $B$ (at iteration $K(i,t)$). The number $B$ just needs to exist and can be arbitrarily large. It does not affect the step size.
Theorem 8 Let the conditions of Lemma 5 hold. For the deterministic index rule under the ECSD assumption, for $T \ge B$, we have:
$$\lim_{(k\in Q_T)\to\infty} \|\nabla f(x^k)\|_2 = 0. \qquad (30)$$

5 Conclusion
In summary, we have proven a selection of convergence results for async-BCD under bounded and
unbounded delays, and stochastic and deterministic block choices. These results do not require
the independence assumption that occurs in the vast majority of other work so far. Therefore they
better model the behavior of real asynchronous solvers. These results were obtained with the use
of Lyapunov function techniques, and treating delays directly, rather than modelling them as noise.
Future work may involve obtaining a more exhaustive list of convergence results, sharper convergence
rates, and an extension to asynchronous stochastic gradient descent-like algorithms, such as SDCA.
References
[1] Loris Cannelli, Francisco Facchinei, Vyacheslav Kungurtsev, and Gesualdo Scutari. Asynchronous parallel
algorithms for nonconvex big-data optimization: Model and convergence. arXiv preprint arXiv:1607.04818,
2016.
[2] Loris Cannelli, Francisco Facchinei, Vyacheslav Kungurtsev, and Gesualdo Scutari. Asynchronous parallel
algorithms for nonconvex big-data optimization. Part II: Complexity and numerical results. arXiv preprint
arXiv:1701.04900, 2017.
[3] Yat Tin Chow, Tianyu Wu, and Wotao Yin. Cyclic coordinate update algorithms for fixed-point problems:
Analysis and applications. SIAM Journal on Scientific Computing, accepted, 2017.
[4] Damek Davis. The asynchronous palm algorithm for nonsmooth nonconvex problems. arXiv preprint
arXiv:1604.00526, 2016.
[5] Damek Davis and Wotao Yin. Convergence rate analysis of several splitting schemes. In Splitting Methods in Communication, Imaging, Science, and Engineering, pages 115–163. Springer, 2016.
[6] Christopher M De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré. Taming the wild: A unified analysis of hogwild-style algorithms. In Advances in Neural Information Processing Systems, pages 2674–2682, 2015.
[7] Jerome Friedman, Trevor Hastie, Holger Höfling, Robert Tibshirani, et al. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[8] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[9] Robert Hannah and Wotao Yin. On unbounded delays in asynchronous parallel fixed-point algorithms. arXiv preprint arXiv:1609.04746, 2016.
[10] Robert Hannah and Wotao Yin. More Iterations per Second, Same Quality – Why Asynchronous Algorithms may Drastically Outperform Traditional Ones. arXiv preprint arXiv:1708.05136, 2017.
[11] Ming-Jun Lai and Wotao Yin. Augmented $\ell_1$ and nuclear-norm models with a globally linearly convergent algorithm. SIAM Journal on Imaging Sciences, 6(2):1059–1091, 2013.
[12] Rémi Leblond, Fabian Pedregosa, and Simon Lacoste-Julien. ASAGA: Asynchronous Parallel SAGA. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 46–54, 2017.
[13] J. Liu and S. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351–376, 2015.
[14] Ji Liu, Stephen J. Wright, Christopher Ré, Victor Bittorf, and Srikrishna Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. J. Mach. Learn. Res., 16(1):285–322, 2015.
[15] Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht, Kannan Ramchandran, and
Michael I Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. arXiv preprint
arXiv:1507.06970, 2015.
[16] Zhimin Peng, Yangyang Xu, Ming Yan, and Wotao Yin. ARock: an algorithmic framework for asynchronous parallel coordinate updates. SIAM Journal on Scientific Computing, 38(5):A2851–A2879, 2016.
[17] Zhimin Peng, Yangyang Xu, Ming Yan, and Wotao Yin. On the convergence of asynchronous parallel iteration with arbitrary delays. arXiv preprint arXiv:1612.04425, 2016.
[18] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 693–701. Curran Associates, Inc., 2011.
[19] Ruoyu Sun and Yinyu Ye. Worst-case Complexity of Cyclic Coordinate Descent: $O(n^2)$ Gap with Randomized Version. arXiv preprint arXiv:1604.07130, 2017.
[20] Yangyang Xu. Asynchronous parallel primal-dual block update methods. arXiv preprint arXiv:1705.06391,
2017.
6,852 | 7,199 | Linear Convergence of a Frank-Wolfe Type
Algorithm over Trace-Norm Balls*
Zeyuan Allen-Zhu
Microsoft Research, Redmond
[email protected]
Wei Hu
Princeton University
[email protected]
Elad Hazan
Princeton University
[email protected]
Yuanzhi Li
Princeton University
[email protected]
Abstract
We propose a rank-k variant of the classical Frank-Wolfe algorithm to solve convex
optimization over a trace-norm ball. Our algorithm replaces the top singular-vector
computation (1-SVD) in Frank-Wolfe with a top-k singular-vector computation
(k-SVD), which can be done by repeatedly applying 1-SVD k times. Alternatively,
our algorithm can be viewed as a rank-k restricted version of projected gradient
descent. We show that our algorithm has a linear convergence rate when the
objective function is smooth and strongly convex, and the optimal solution has rank
at most k. This improves the convergence rate and the total time complexity of the
Frank-Wolfe method and its variants.
1
Introduction
Minimizing a convex matrix function over a trace-norm ball, which is (recall that the trace norm $\|X\|_*$ of a matrix $X$ equals the sum of its singular values):
$$\min_{X\in\mathbb{R}^{m\times n}}\big\{f(X) : \|X\|_* \le \theta\big\}, \qquad (1.1)$$
is an important optimization problem that serves as a convex surrogate to many low-rank machine
learning tasks, including matrix completion [2, 10, 16], multiclass classification [4], phase retrieval [3],
polynomial neural nets [12], and more. In this paper we assume without loss of generality that $\theta = 1$.
One natural algorithm for Problem (1.1) is projected gradient descent (PGD). In each iteration,
PGD first moves X in the direction of the gradient, and then projects it onto the trace-norm ball.
Unfortunately, computing this projection requires the full singular value decomposition (SVD) of the
matrix, which takes O(mn min{m, n}) time in general. This prevents PGD from being efficiently
applied to problems with large m and n.
Alternatively, one can use projection-free algorithms. As first proposed by Frank and Wolfe [5],
one can select a search direction (which is usually the gradient direction) and perform a linear
optimization over the constraint set in this direction. In the case of Problem (1.1), performing linear
optimization over a trace-norm ball amounts to computing the top (left and right) singular vectors
of a matrix, which can be done much faster than full SVD. Therefore, projection-free algorithms
become attractive for convex minimization over trace-norm balls.
Unfortunately, despite its low per-iteration complexity, the Frank-Wolfe (FW) algorithm suffers from
slower convergence rate compared with PGD. When the objective $f(X)$ is smooth, FW requires $O(1/\varepsilon)$ iterations to converge to an $\varepsilon$-approximate minimizer, and this $1/\varepsilon$ rate is tight even if the objective is also strongly convex [6]. In contrast, PGD achieves a $1/\sqrt{\varepsilon}$ rate if $f(X)$ is smooth (under Nesterov's acceleration [14]), and a $\log(1/\varepsilon)$ rate if $f(X)$ is both smooth and strongly convex.
* The full version of this paper can be found on https://arxiv.org/abs/1708.02105.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Recently, there were several results revising the FW method to improve its convergence rate for strongly-convex functions. The $\log(1/\varepsilon)$ rate was obtained when the constraint set is a polyhedron [7, 11], and the $1/\sqrt{\varepsilon}$ rate was obtained when the constraint set is strongly convex [8] or is a spectrahedron [6].
Among these results, the spectrahedron constraint (i.e., the set of all positive semidefinite matrices $X$ with $\mathrm{Tr}(X) = 1$) studied by Garber [6] is almost identical to Problem (1.1), but slightly weaker.2 When
stating the result of Garber [6], we assume for simplicity that it also applies to Problem (1.1).
Our Question. In this paper, we propose to study the following general question:
Can we design a "rank-k variant" of Frank-Wolfe to improve the convergence rate?
(That is, in each iteration it computes the top $k$ singular vectors, i.e., a k-SVD, of some matrix.)
Our motivation to study the above question can be summarized as follows:
• Since FW computes a 1-SVD and PGD computes a full SVD in each iteration, is there a value $k \ll \min\{n, m\}$ such that a rank-k variant of FW can achieve the convergence rate of PGD?
• Since computing k-SVD costs roughly the same (sequential) time as "computing 1-SVD for k times" (see recent work [1, 13]),3 if using a rank-k variant of FW, can the number of iterations be reduced by a factor more than k? If so, then we can improve the sequential running time of FW.
• k-SVD can be computed in a more distributed manner than 1-SVD. For instance, using block Krylov [13], one can distribute the computation of k-SVD to k machines, each in charge of independent matrix-vector multiplications. Therefore, it is beneficial to study a rank-k variant of FW in such settings.
1.1 Our Results
We propose blockFW, a rank-k variant of Frank-Wolfe. Given a convex function $f(X)$ that is $\beta$-smooth, in each iteration $t$, blockFW performs an update $X_{t+1} \leftarrow X_t + \eta(V_t - X_t)$, where $\eta > 0$ is a constant step size and $V_t$ is a rank-$k$ matrix computed from the k-SVD of $\big(-\nabla f(X_t) + \beta\eta X_t\big)$. If $k = \min\{n, m\}$, blockFW can be shown to coincide with PGD, so it can also be viewed as a rank-$k$ restricted version of PGD.

Convergence. Suppose $f(X)$ is also $\sigma$-strongly convex and suppose the optimal solution $X^*$ of Problem (1.1) has rank $k$; then we show that blockFW achieves linear convergence: it finds an $\varepsilon$-approximate minimizer within $O\big(\frac{\beta}{\sigma}\log\frac{1}{\varepsilon}\big)$ iterations, or equivalently, in
$$T = O\Big(\frac{k\beta}{\sigma}\log\frac{1}{\varepsilon}\Big) \quad \text{computations of 1-SVD.}$$
We denote by $T$ the number of 1-SVD computations throughout this paper. In contrast,
$$T_{\mathrm{FW}} = O\Big(\frac{\beta}{\varepsilon}\Big) \ \text{for Frank-Wolfe}, \qquad T_{\mathrm{Gar}} = O\Big(\min\Big\{\frac{\beta}{\varepsilon},\ \frac{\beta\,k^{3/4}}{\varepsilon^{1/4}\sigma^{3/4}},\ \frac{\beta}{\varepsilon^{1/2}\sigma^{1/2}\,\sigma_{\min}(X^*)}\Big\}\Big) \ \text{for Garber [6].}$$
Above, $\sigma_{\min}(X^*)$ is the minimum non-zero singular value of $X^*$. Note that $\sigma_{\min}(X^*) \le \frac{\|X^*\|_*}{\operatorname{rank}(X^*)} \le \frac{1}{k}$.
We note that $T_{\mathrm{Gar}}$ is always outperformed by $\min\{T, T_{\mathrm{FW}}\}$: ignoring the $\log(1/\varepsilon)$ factor, we have
$$\min\Big\{\frac{\beta}{\varepsilon}, \frac{k\beta}{\sigma}\Big\} \le \Big(\frac{\beta}{\varepsilon}\Big)^{1/4}\Big(\frac{k\beta}{\sigma}\Big)^{3/4} = \frac{\beta\,k^{3/4}}{\varepsilon^{1/4}\sigma^{3/4}}, \quad\text{and}$$
$$\min\Big\{\frac{\beta}{\varepsilon}, \frac{k\beta}{\sigma}\Big\} \le \Big(\frac{\beta}{\varepsilon}\Big)^{1/2}\Big(\frac{k\beta}{\sigma}\Big)^{1/2} = \frac{\beta\,k^{1/2}}{\varepsilon^{1/2}\sigma^{1/2}} \le \frac{\beta}{\varepsilon^{1/2}\sigma^{1/2}\,\sigma_{\min}(X^*)}.$$
2 To the best of our knowledge, given an algorithm that works for the spectrahedron, to solve Problem (1.1), one has to define a function $g(Y)$ over $(n+m)\times(n+m)$ matrices by setting $g(Y) = f(2\,Y_{1:m,\,m+1:m+n})$ [10]. After this transformation, the function $g(Y)$ is no longer strongly convex, even if $f(X)$ is strongly convex. In contrast, most algorithms for trace-norm balls, including FW and our later proposed algorithm, work as well for the spectrahedron after minor changes to the analysis.
3 Using block Krylov [13], Lanczos [1], or SVRG [1], at least when k is small, the time complexity of (approximately) computing the top k singular vectors of a matrix is no more than k times the complexity of (approximately) computing the top singular vector of the same matrix. We refer interested readers to [1] for details.
| algorithm | # rank | # iterations | time complexity per iteration |
| PGD [14] | $\min\{m,n\}$ | $O(\kappa\log(1/\varepsilon))$ | $mn\min\{m,n\}$ |
| accelerated PGD [14] | $\min\{m,n\}$ | $O(\sqrt{\kappa}\log(1/\varepsilon))$ | $mn\min\{m,n\}$ |
| Frank-Wolfe [9] | 1 | $\frac{\beta}{\varepsilon}$ | $O\big(\mathrm{nnz}(\nabla)\cdot\min\big\{\frac{\|\nabla\|_2^{1/2}}{\varepsilon^{1/2}},\ \frac{\|\nabla\|_2^{1/2}}{(\sigma_1(\nabla)-\sigma_2(\nabla))^{1/2}}\big\}\big)$ |
| Garber [6] | 1 | $\min\big\{\frac{\beta}{\varepsilon},\ \frac{\beta k^{3/4}}{\varepsilon^{1/4}\sigma^{3/4}},\ \frac{\beta}{\varepsilon^{1/2}\sigma^{1/2}\sigma_{\min}(X^*)}\big\}$ | $O\big(\mathrm{nnz}(\nabla)\cdot\min\big\{\frac{\|\nabla\|_2^{1/2}}{\varepsilon^{1/2}},\ \frac{\|\nabla\|_2^{1/2}}{(\sigma_1(\nabla)-\sigma_2(\nabla))^{1/2}}\big\}\big)$ |
| blockFW (this paper) | $k$ | $O(\kappa\log(1/\varepsilon))$ | $k\cdot\tilde{O}\big(\mathrm{nnz}(\nabla) + k(m+n)\kappa\cdot\min\big\{\frac{(\|\nabla\|_2+\beta)^{1/2}}{\sigma^{1/2}},\ \frac{\kappa(\|\nabla\|_2+\beta)^{1/2}}{\beta^{1/2}\sigma_{\min}(X^*)}\big\}\big)$ |

Table 1: Comparison of first-order methods to minimize a $\beta$-smooth, $\sigma$-strongly convex function over the unit-trace-norm ball in $\mathbb{R}^{m\times n}$. In the table, $k$ is the rank of $X^*$, $\kappa = \frac{\beta}{\sigma}$ is the condition number, $\nabla = \nabla f(X_t)$ is the gradient matrix, $\mathrm{nnz}(\nabla)$ is the complexity to multiply $\nabla$ to a vector, $\sigma_i(X)$ is the $i$-th largest singular value of $X$, and $\sigma_{\min}(X)$ is the minimum non-zero singular value of $X$.
REMARK. The low-rank assumption on $X^*$ should be reasonable: as we mentioned, in most applications of Problem (1.1), the ultimate reason for imposing a trace-norm constraint is to ensure that the optimal solution is low-rank; otherwise the minimization problem may not be interesting to solve in the first place. Also, the immediate prior work [6] assumes $X^*$ to have low rank.
k-SVD Complexity. For theoreticians who are concerned about the time complexity of k-SVD, we also compare it with the 1-SVD complexity of FW and Garber. If one uses LazySVD [1]4 to compute the k-SVD in each iteration of blockFW, then the per-iteration k-SVD complexity can be bounded by
$$k \cdot \tilde{O}\Big(\mathrm{nnz}(\nabla) + k(m+n)\kappa \cdot \min\Big\{\frac{(\|\nabla\|_2+\beta)^{1/2}}{\sigma^{1/2}},\ \frac{\kappa\,(\|\nabla\|_2+\beta)^{1/2}}{\beta^{1/2}\,\sigma_{\min}(X^*)}\Big\}\Big). \qquad (1.2)$$
Above, $\kappa = \frac{\beta}{\sigma}$ is the condition number of $f$, $\nabla = \nabla f(X_t)$ is the gradient matrix of the current iteration $t$, $\mathrm{nnz}(\nabla)$ is the complexity to multiply $\nabla$ to a vector, $\sigma_{\min}(X^*)$ is the minimum non-zero singular value of $X^*$, and $\tilde{O}$ hides poly-logarithmic factors.
In contrast, if using Lanczos, the 1-SVD complexity for FW and Garber can be bounded as (see [6])
$$O\Big(\mathrm{nnz}(\nabla) \cdot \min\Big\{\frac{\|\nabla\|_2^{1/2}}{\varepsilon^{1/2}},\ \frac{\|\nabla\|_2^{1/2}}{(\sigma_1(\nabla) - \sigma_2(\nabla))^{1/2}}\Big\}\Big). \qquad (1.3)$$
Above, $\sigma_1(\nabla)$ and $\sigma_2(\nabla)$ are the top two singular values of $\nabla$, and the gap $\sigma_1(\nabla) - \sigma_2(\nabla)$ can be as small as zero.
We emphasize that our k-SVD complexity (1.2) can be upper bounded by a quantity that only depends poly-logarithmically on $1/\varepsilon$. In contrast, the worst-case 1-SVD complexity (1.3) of FW and Garber depends on $\varepsilon^{-1/2}$, because the gap $\sigma_1 - \sigma_2$ can be as small as zero. Therefore, if one takes this additional $\varepsilon$ dependency into consideration for the convergence rate, then blockFW has rate $\mathrm{polylog}(1/\varepsilon)$, but FW and Garber have rates $\varepsilon^{-3/2}$ and $\varepsilon^{-1}$ respectively. The convergence rates and per-iteration running times of different algorithms for solving Problem (1.1) are summarized in Table 1.
Practical Implementation. Besides our theoretical results above, we also provide practical suggestions for implementing blockFW. Roughly speaking, one can automatically select a different "good" rank k for each iteration. This can be done by iteratively finding the 1st, 2nd, 3rd, etc., top singular
vectors of the underlying matrix, and then stop this process whenever the objective decrease is not
worth further increasing the value k. We discuss the details in Section 6.
4 In fact, LazySVD is a general framework that says, with meaningful theoretical support, that one can apply a reasonable 1-SVD algorithm k times in order to compute a k-SVD. For simplicity, in this paper, whenever referring to LazySVD, we mean applying the Lanczos method k times.
2 Preliminaries and Notation
For a positive integer $n$, we define $[n] := \{1, 2, \ldots, n\}$. For a matrix $A$, we denote by $\|A\|_F$, $\|A\|_2$ and $\|A\|_*$ respectively the Frobenius norm, the spectral norm, and the trace norm of $A$. We use $\langle\cdot,\cdot\rangle$ to denote the (Euclidean) inner products between vectors, or the (trace) inner products between matrices (i.e., $\langle A, B\rangle = \mathrm{Tr}(AB^\top)$). We denote by $\sigma_i(A)$ the $i$-th largest singular value of a matrix $A$, and by $\sigma_{\min}(A)$ the minimum non-zero singular value of $A$. We use $\mathrm{nnz}(A)$ to denote the time complexity of multiplying matrix $A$ to a vector (which is at most the number of non-zero entries of $A$). We define the (unit) trace-norm ball $B_{m,n}$ in $\mathbb{R}^{m\times n}$ as $B_{m,n} := \{X \in \mathbb{R}^{m\times n} : \|X\|_* \le 1\}$.
Definition 2.1. For a differentiable convex function $f : K \to \mathbb{R}$ over a convex set $K \subseteq \mathbb{R}^{m\times n}$, we say
• $f$ is $\beta$-smooth if $f(Y) \le f(X) + \langle\nabla f(X), Y - X\rangle + \frac{\beta}{2}\|X - Y\|_F^2$ for all $X, Y \in K$;
• $f$ is $\sigma$-strongly convex if $f(Y) \ge f(X) + \langle\nabla f(X), Y - X\rangle + \frac{\sigma}{2}\|X - Y\|_F^2$ for all $X, Y \in K$.
For Problem (1.1), we assume $f$ is differentiable, $\beta$-smooth, and $\sigma$-strongly convex over $B_{m,n}$. We denote by $\kappa = \frac{\beta}{\sigma}$ the condition number of $f$, and by $X^*$ the minimizer of $f(X)$ over the trace-norm ball $B_{m,n}$. The strong convexity of $f(X)$ implies:
Fact 2.2. $f(X) - f(X^*) \ge \frac{\sigma}{2}\|X - X^*\|_F^2$ for all $X \in K$.
Proof. The minimality of $X^*$ implies $\langle\nabla f(X^*), X - X^*\rangle \ge 0$ for all $X \in K$. The fact then follows from the $\sigma$-strong convexity of $f$.
The Frank-Wolfe Algorithm. We now quickly review the Frank-Wolfe algorithm (see Algorithm 1) and its relation to PGD.

Algorithm 1 Frank-Wolfe
Input: step sizes $\{\eta_t\}_{t\ge1}$ ($\eta_t \in [0,1]$), starting point $X_1 \in B_{m,n}$
1: for $t = 1, 2, \ldots$ do
2:   $V_t \leftarrow \operatorname{argmin}_{V \in B_{m,n}} \langle\nabla f(X_t), V\rangle$   (by finding the top left/right singular vectors $u_t, v_t$ of $-\nabla f(X_t)$, and taking $V_t = u_t v_t^\top$)
3:   $X_{t+1} \leftarrow X_t + \eta_t(V_t - X_t)$
4: end for
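For concreteness, here is a minimal Python sketch of Algorithm 1 (ours, not the authors' code) for the toy objective $f(X) = \frac12\|X - M\|_F^2$; `scipy.sparse.linalg.svds` plays the role of the 1-SVD oracle in Line 2:

```python
# Frank-Wolfe over the unit trace-norm ball for f(X) = 0.5*||X - M||_F^2
# (illustrative objective; one 1-SVD per iteration).
import numpy as np
from scipy.sparse.linalg import svds

np.random.seed(0)
m, n = 50, 40
M = np.random.randn(m, 10) @ np.random.randn(10, n) / 10.0   # low-rank target

grad = lambda X: X - M                                        # gradient of f

X = np.zeros((m, n))
for t in range(1, 201):
    u, s, vt = svds(-grad(X), k=1)              # top singular pair of -grad
    V = np.outer(u[:, 0], vt[0, :])             # rank-1 vertex with ||V||_* = 1
    eta = 2.0 / (t + 1)                         # standard FW step size Theta(1/t)
    X = X + eta * (V - X)                       # stays in the ball (convex combination)

print("objective:", 0.5 * np.linalg.norm(X - M, "fro") ** 2)
```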
Let $h_t = f(X_t) - f(X^*)$ be the approximation error of $X_t$. The convergence analysis of Algorithm 1 is based on the following relation:
$$h_{t+1} = f\big(X_t + \eta_t(V_t - X_t)\big) - f(X^*) \overset{①}{\le} h_t + \eta_t\langle\nabla f(X_t), V_t - X_t\rangle + \frac{\beta}{2}\eta_t^2\|V_t - X_t\|_F^2$$
$$\overset{②}{\le} h_t + \eta_t\langle\nabla f(X_t), X^* - X_t\rangle + \frac{\beta}{2}\eta_t^2\|V_t - X_t\|_F^2 \overset{③}{\le} (1-\eta_t)\,h_t + \frac{\beta}{2}\eta_t^2\|V_t - X_t\|_F^2. \qquad (2.1)$$
Above, inequality ① uses the $\beta$-smoothness of $f$, inequality ② is due to the choice of $V_t$ in Line 2, and inequality ③ follows from the convexity of $f$. Based on (2.1), a suitable choice of the step size $\eta_t = \Theta(1/t)$ gives the convergence rate $O(\beta/\varepsilon)$ for the Frank-Wolfe algorithm.
If $f$ is also $\sigma$-strongly convex, a linear convergence rate can be achieved if we replace the linear optimization step (Line 2) in Algorithm 1 with a constrained quadratic minimization:
$$V_t \leftarrow \operatorname*{argmin}_{V\in B_{m,n}}\ \langle\nabla f(X_t), V - X_t\rangle + \frac{\beta}{2}\eta_t\,\|V - X_t\|_F^2. \qquad (2.2)$$
In fact, if $V_t$ is defined as above, we have the following relation similar to (2.1):
$$h_{t+1} \le h_t + \eta_t\langle\nabla f(X_t), V_t - X_t\rangle + \frac{\beta}{2}\eta_t^2\|V_t - X_t\|_F^2$$
$$\le h_t + \eta_t\langle\nabla f(X_t), X^* - X_t\rangle + \frac{\beta}{2}\eta_t^2\|X^* - X_t\|_F^2 \le (1 - \eta_t + \kappa\eta_t^2)\,h_t, \qquad (2.3)$$
where the last inequality follows from Fact 2.2. Given (2.3), we can choose $\eta_t = \frac{1}{2\kappa}$ to obtain a linear convergence rate because $h_{t+1} \le (1 - \frac{1}{4\kappa})h_t$. This is the main idea behind the projected gradient
descent (PGD) method. Unfortunately, optimizing $V_t$ from (2.2) requires a projection operation onto $B_{m,n}$, and this further requires a full singular value decomposition of the matrix $\nabla f(X_t) - \beta\eta_t X_t$.
3 A Rank-k Variant of Frank-Wolfe
Our main idea comes from the following simple observation. Suppose we choose $\eta_t \equiv \eta = \frac{1}{2\kappa}$ for all iterations, and suppose $\operatorname{rank}(X^*) \le k$. Then we can add a low-rank constraint to $V_t$ in (2.2):
$$V_t \leftarrow \operatorname*{argmin}_{V\in B_{m,n},\ \operatorname{rank}(V)\le k}\ \langle\nabla f(X_t), V - X_t\rangle + \frac{\beta}{2}\eta\,\|V - X_t\|_F^2. \qquad (3.1)$$
Under this new choice of $V_t$, it is obvious that the same inequalities in (2.3) remain to hold, and thus the linear convergence rate of PGD can be preserved. Let us now discuss how to solve (3.1).
h?f (Xt ), V ? Xt i +
3.1 Solving the Low-Rank Quadratic Minimization (3.1)
Although (3.1) is non-convex, we prove that it can be solved efficiently. To achieve this, we first show
that Vt is in the span of the top k singular vectors of ??Xt ? ?f (Xt ).
Pk
Lemma 3.1. The minimizer Vt of (3.1) can be written as Vt = i=1 ai ui vi> , where a1 , . . . , ak
are nonnegative scalars, and (ui , vi ) is the pair of the left and right singular vectors of At :=
??Xt ? ?f (Xt ) corresponding to its i-th largest singular value.
The proof of Lemma 3.1 is given in the full version of this paper. Now, owing to Lemma 3.1, we
Pk
can perform k-SVD on At to compute {(ui , vi )}i?[k] , plug the expression Vt = i=1 ai ui vi> into
the objective of (3.1), and then search for the optimal values {ai }i?[k] . The last step is equivalent
Pk
Pk
k
to minimizing ? i=1 ?i ai + ?2 ? i=1 a2i (where ?i = u>
i At vi ) over the simplex ? := a ? R :
1
a1 , . . . , ak ? 0, kak1 ? 1 , which is the same as projecting the vector ?? (?1 , . . . , ?k ) onto the
simplex ?. It can be easily solved in O(k log k) time (see for instance the applications in [15]).
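For reference, here is a sketch (ours; the standard sort-based projection, not the authors' code) of this $O(k\log k)$ projection onto $\Delta = \{a \ge 0,\ \|a\|_1 \le 1\}$:

```python
# Projection onto {a : a >= 0, sum(a) <= 1} via the classic sort-based rule.
import numpy as np

def project_onto_simplex_ball(v):
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:                 # already feasible after clipping at zero
        return w
    # Otherwise project onto the simplex {a >= 0, sum(a) = 1}.
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

lam = np.array([3.0, 1.0, -0.5, 0.2])   # example Ritz values u_i^T A_t v_i
beta_eta = 2.0                           # beta * eta from (3.1); illustrative value
a = project_onto_simplex_ball(lam / beta_eta)
print(a, a.sum())                        # nonnegative, sums to at most 1
```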
3.2 Our Algorithm and Its Convergence
We summarize our algorithm in Algorithm 2 and call it blockFW.
Algorithm 2 blockFW
Input: rank parameter $k$, starting point $X_1 = 0$
1: $\eta \leftarrow \frac{1}{2\kappa}$
2: for $t = 1, 2, \ldots$ do
3:   $A_t \leftarrow \beta\eta X_t - \nabla f(X_t)$
4:   $(u_1, v_1, \ldots, u_k, v_k) \leftarrow$ k-SVD$(A_t)$   ($(u_i, v_i)$ is the $i$-th largest pair of left/right singular vectors of $A_t$)
5:   $a \leftarrow \operatorname{argmin}_{a\in\mathbb{R}^k,\, a\ge 0,\, \|a\|_1\le 1} \big\|a - \frac{1}{\beta\eta}\lambda\big\|_2$   where $\lambda := (u_i^\top A_t v_i)_{i=1}^{k}$
6:   $V_t \leftarrow \sum_{i=1}^{k} a_i u_i v_i^\top$
7:   $X_{t+1} \leftarrow X_t + \eta(V_t - X_t)$
8: end for
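Putting Lines 3-7 together, here is a compact sketch of one blockFW iteration (ours, not the authors' implementation; the toy objective below has $\beta = \sigma = 1$, so $\eta = \frac{1}{2\kappa} = \frac12$):

```python
# One blockFW iteration for f(X) = 0.5*||X - M||_F^2 (illustrative objective).
import numpy as np
from scipy.sparse.linalg import svds

def project_onto_simplex_ball(v):
    # same sort-based projection as sketched in Section 3.1
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:
        return w
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1.0))[0][-1]
    return np.maximum(w - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def blockfw_step(X, grad, beta, eta, k):
    A = beta * eta * X - grad(X)                        # Line 3
    u, s, vt = svds(A, k=k)                             # Line 4 (k-SVD)
    lam = np.einsum("mi,mn,ni->i", u, A, vt.T)          # lambda_i = u_i^T A v_i
    a = project_onto_simplex_ball(lam / (beta * eta))   # Line 5
    V = (u * a) @ vt                                    # Line 6: sum_i a_i u_i v_i^T
    return X + eta * (V - X)                            # Line 7

np.random.seed(0)
M = np.random.randn(50, 8) @ np.random.randn(8, 40) / 10.0
X = np.zeros((50, 40))
for t in range(60):
    X = blockfw_step(X, lambda Z: Z - M, beta=1.0, eta=0.5, k=8)
print("error:", 0.5 * np.linalg.norm(X - M, "fro") ** 2)
```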
Since the state-of-the-art algorithms for k-SVD are iterative methods, which in theory can only give approximate solutions, we now study the convergence of blockFW given approximate k-SVD solvers. We introduce the following notion of an approximate solution to the low-rank quadratic minimization problem (3.1).
Definition 3.2. Let $g_t(V) = \langle\nabla f(X_t), V - X_t\rangle + \frac{\beta}{2}\eta\,\|V - X_t\|_F^2$ be the objective function in (3.1), and let $g_t^* = g_t(X^*)$. Given parameters $\gamma \ge 0$ and $\delta \ge 0$, a feasible solution $V$ to (3.1) is called $(\gamma,\delta)$-approximate if it satisfies $g_t(V) \le (1-\gamma)\,g_t^* + \delta$.
Note that the above multiplicative-additive definition makes sense because $g_t^* \le 0$:
Fact 3.3. If $\operatorname{rank}(X^*) \le k$, for our choice of step size $\eta = \frac{1}{2\kappa}$, we have $g_t^* = g_t(X^*) \le -(1 - \kappa\eta)\,h_t = -\frac{h_t}{2} \le 0$ according to (2.3).
The next theorem gives the linear convergence of blockFW under the above approximate solutions to
(3.1). Its proof is simple and uses a variant of (2.3) (see the full version of this paper).
Theorem 3.4. Suppose $\operatorname{rank}(X^*) \le k$ and $\varepsilon > 0$. If each $V_t$ computed in blockFW is a $\big(\frac{1}{2}, \frac{\varepsilon}{8\kappa}\big)$-approximate solution to (3.1), then for every $t$, the error $h_t = f(X_t) - f(X^*)$ satisfies
$$h_t \le \Big(1 - \frac{1}{8\kappa}\Big)^{t-1} h_1 + \frac{\varepsilon}{2}.$$
As a consequence, it takes $O\big(\kappa\log\frac{h_1}{\varepsilon}\big)$ iterations to achieve the target error $h_t \le \varepsilon$.
Based on Theorem 3.4, the per-iteration running time of blockFW is dominated by the time necessary to produce a $\big(\frac{1}{2}, \frac{\varepsilon}{8\kappa}\big)$-approximate solution $V_t$ to (3.1), which we study in Section 4.
4 Per-Iteration Running Time Analysis
In this section, we study the running time necessary to produce a $(\frac{1}{2}, \delta)$-approximate solution $V_t$ to (3.1). In particular, we wish to show a running time that depends only poly-logarithmically on $1/\delta$. The reason is that, since we are concerned with the linear convergence rate (i.e., $\log(1/\varepsilon)$) in this paper, it is not meaningful to have a per-iteration complexity that scales polynomially with $1/\varepsilon$.
Remark 4.1. To the best of our knowledge, the Frank-Wolfe method and Garber's method [6] have their worst-case per-iteration complexities scaling polynomially with $1/\varepsilon$. In theory, this also slows down their overall performance in terms of the dependency on $1/\varepsilon$.
4.1 Step 1: The Necessary k-SVD Accuracy
We first show that if the k-SVD in Line 4 of blockFW is solved sufficiently accurately, then the $V_t$ obtained in Line 6 will be a sufficiently good approximate solution to (3.1). For notational simplicity, in this section we denote $G_t := \|\nabla f(X_t)\|_2 + \beta$, and we let $k^* = \operatorname{rank}(X^*) \le k$.
Lemma 4.2. Suppose $\gamma \in [0,1]$ and $\delta \ge 0$. In each iteration $t$ of blockFW, if the vectors $u_1, v_1, \ldots, u_k, v_k$ returned by the k-SVD in Line 4 satisfy $u_i^\top A_t v_i \ge (1-\gamma)\,\sigma_i(A_t) - \delta$ for all $i \in [k^*]$, then $V_t = \sum_{i=1}^{k} a_i u_i v_i^\top$ obtained in Line 6 is $\big(\big(\frac{6G_t}{h_t} + 2\big)\gamma,\ \delta\big)$-approximate to (3.1).
The proof of Lemma 4.2 is given in the full version of this paper, and is based on our earlier characterization Lemma 3.1.
4.2 Step 2: The Time Complexity of k-SVD
We recall the following complexity statement for k-SVD:
Theorem 4.3 ([1]). The running time to compute the k-SVD of $A \in \mathbb{R}^{m\times n}$ using LazySVD is5
$$\tilde{O}\Big(\frac{k\cdot\mathrm{nnz}(A) + k^2(m+n)}{\sqrt{\gamma}}\Big) \quad\text{or}\quad \tilde{O}\Big(\frac{k\cdot\mathrm{nnz}(A) + k^2(m+n)}{\sqrt{\mathrm{gap}}}\Big).$$
In the former case, we can have $u_i^\top A v_i \ge (1-\gamma)\,\sigma_i(A)$ for all $i \in [k]$; in the latter case, if $0 \le \mathrm{gap} \le \frac{\sigma_{k'}(A) - \sigma_{k'+1}(A)}{\sigma_{k'}(A)}$ for some $k' \in [k]$, then we can guarantee $u_i^\top A v_i \ge \sigma_i(A) - \delta$ for all $i \in [k']$.
The First Attempt. Recall that we need a $(\frac{1}{2}, \delta)$-approximate solution to (3.1). Using Lemma 4.2, it suffices to obtain a $(1-\gamma)$-multiplicative approximation to the k-SVD of $A_t$ (i.e., $u_i^\top A_t v_i \ge (1-\gamma)\,\sigma_i(A_t)$ for all $i \in [k]$), as long as $\gamma \le \frac{1}{12 G_t/h_t + 4}$. Therefore, we can directly apply the first running time in Theorem 4.3: $\tilde{O}\big(\frac{k\cdot\mathrm{nnz}(A_t) + k^2(m+n)}{\sqrt{\gamma}}\big)$. However, when $h_t$ is very small, this running time can be unbounded. In that case, we observe that $\gamma = \frac{\delta}{G_t}$ (independent of $h_t$) also suffices: since $\|A_t\|_2 = \big\|\frac{\sigma}{2}X_t - \nabla f(X_t)\big\|_2 \le \frac{\sigma}{2} + \|\nabla f(X_t)\|_2 \le G_t$ (recall $\beta\eta = \frac{\sigma}{2}$), from $u_i^\top A_t v_i \ge (1 - \delta/G_t)\,\sigma_i(A_t)$ we have $u_i^\top A_t v_i \ge \sigma_i(A_t) - \frac{\delta}{G_t}\sigma_i(A_t) \ge \sigma_i(A_t) - \frac{\delta}{G_t}\|A_t\|_2 \ge \sigma_i(A_t) - \delta$; then according to Lemma 4.2 we can obtain a $(0,\delta)$-approximation to (3.1), which is stronger than a $(\frac{1}{2},\delta)$-approximation. We summarize this running time (using $\gamma = \frac{\delta}{G_t}$) in Claim 4.5; the running time depends polynomially on $\frac{1}{\delta}$.
The Second Attempt. To make our linear convergence rate (i.e., the $\log(1/\varepsilon)$ rate) meaningful, we want the k-SVD running time to depend poly-logarithmically on $1/\varepsilon$. Therefore, when $h_t$ is small, we wish to instead apply the second running time in Theorem 4.3.
5 The first is known as the gap-free result because it does not depend on the gap between any two singular values. The second is known as the gap-dependent result, and it requires a $k\times k$ full SVD after the $k$ approximate singular vectors are computed one by one. The $\tilde{O}$ notation hides poly-log factors in $1/\gamma$, $1/\delta$, $m$, $n$, and $1/\mathrm{gap}$.
Recall that $X^*$ has rank $k^*$, so $\sigma_{k^*}(X^*) - \sigma_{k^*+1}(X^*) = \sigma_{\min}(X^*)$. We can show that this implies $A^* := \frac{\sigma}{2}X^* - \nabla f(X^*)$ also has a large gap $\sigma_{k^*}(A^*) - \sigma_{k^*+1}(A^*)$. Now, according to Fact 2.2, when $h_t$ is small, $X_t$ and $X^*$ are sufficiently close. This means $A_t = \frac{\sigma}{2}X_t - \nabla f(X_t)$ is also close to $A^*$, and thus has a large gap $\sigma_{k^*}(A_t) - \sigma_{k^*+1}(A_t)$. Then we can apply the second running time in Theorem 4.3.
4.2.1 Formal Running Time Statements
Fact 4.4. We can store $X_t$ as a decomposition into at most $\operatorname{rank}(X_t) \le kt$ rank-1 components.6 Therefore, for $A_t = \frac{\sigma}{2}X_t - \nabla f(X_t)$, we have $\mathrm{nnz}(A_t) \le \mathrm{nnz}(\nabla f(X_t)) + (m+n)\operatorname{rank}(X_t) \le \mathrm{nnz}(\nabla f(X_t)) + (m+n)kt$.
If we always use the first running time in Theorem 4.3, then Fact 4.4 implies:
Claim 4.5. The k-SVD computation in the $t$-th iteration of blockFW can be implemented in $\tilde{O}\Big(\big(k\cdot\mathrm{nnz}(\nabla f(X_t)) + k^2(m+n)t\big)\sqrt{G_t/\delta}\Big)$ time.
Remark 4.6. As long as $(m+n)kt \le \mathrm{nnz}(\nabla f(X_t))$, the k-SVD running time in Claim 4.5 becomes $\tilde{O}\big(k\cdot\mathrm{nnz}(\nabla f(X_t))\sqrt{G_t/\delta}\big)$, which roughly equals $k$ times the 1-SVD running time $\tilde{O}\big(\mathrm{nnz}(\nabla)\sqrt{\|\nabla\|_2/\varepsilon}\big)$ of FW and Garber [6]. Since in practice it suffices to run blockFW and FW for a few hundred 1-SVD computations, the relation $(m+n)kt \le \mathrm{nnz}(\nabla f(X_t))$ is often satisfied.
If, as discussed above, we apply the first running time in Theorem 4.3 only for large $h_t$, and apply the second running time in Theorem 4.3 for small $h_t$, then we obtain the following theorem, whose proof is given in the full version of this paper.
Theorem 4.7. The k-SVD computation in the $t$-th iteration of blockFW can be implemented in $\tilde{O}\Big(\big(k\cdot\mathrm{nnz}(\nabla f(X_t)) + k^2(m+n)t\big)\cdot\frac{\kappa\sqrt{G_t/\beta}}{\sigma_{\min}(X^*)}\Big)$ time.
Remark 4.8. Since according to Theorem 3.4 we only need to run blockFW for $O(\kappa\log(1/\varepsilon))$ iterations, we can plug $t = O(\kappa\log(1/\varepsilon))$ into Claim 4.5 and Theorem 4.7, and obtain the running time presented in (1.2). The per-iteration running time of blockFW depends poly-logarithmically on $1/\varepsilon$. In contrast, the per-iteration running times of Garber [6] and FW depend polynomially on $1/\varepsilon$, making their total running times even worse in terms of dependency on $1/\varepsilon$.
5 Maintaining Low-Rank Iterates
One of the main reasons to impose trace-norm constraints is to produce low-rank solutions. However, the rank of the iterate $X_t$ in our algorithm blockFW can be as large as $kt$, which is much larger than $k$, the rank of the optimal solution $X^*$. In this section, we show that by adding a simple modification to blockFW, we can make sure the rank of $X_t$ is $O(k\kappa\log\kappa)$ in all iterations $t$, without hurting the convergence rate much.
We modify blockFW as follows. Whenever $t-1$ is a multiple of $S = \lceil 8\kappa(\log\kappa + 1)\rceil$, we compute (note that this is the same as setting $\eta = 1$ in (3.1))
$$W_t \leftarrow \operatorname*{argmin}_{W\in B_{m,n},\ \operatorname{rank}(W)\le k}\ \langle\nabla f(X_t), W - X_t\rangle + \frac{\beta}{2}\|W - X_t\|_F^2,$$
and let the next iterate $X_{t+1}$ be $W_t$. In all other iterations the algorithm is unchanged. After this change, the function value $f(X_{t+1})$ may be greater than $f(X_t)$, but can be bounded as follows:
Lemma 5.1. Suppose $\operatorname{rank}(X^*) \le k$. Then we have $f(W_t) - f(X^*) \le \kappa h_t$.
Proof. We have the following relation similar to (2.3):
$$f(W_t) - f(X^*) \le h_t + \langle\nabla f(X_t), W_t - X_t\rangle + \frac{\beta}{2}\|W_t - X_t\|_F^2$$
$$\le h_t + \langle\nabla f(X_t), X^* - X_t\rangle + \frac{\beta}{2}\|X^* - X_t\|_F^2 \le h_t - h_t + \kappa h_t = \kappa h_t.$$
6 In Section 5, we show how to ensure that $\operatorname{rank}(X_t)$ is always $O(k\kappa\log\kappa)$, a quantity independent of $t$.
1 S
1 8?(log ?+1)
From Theorem 3.4 we know that hS+1 ? (1 ? 8?
) h1 + 2? ? (1 ? 8?
)
h1 + 2? ?
1
?
?(log ?+1)
e
h1 + 2 = e? h1 + ?/2. Therefore, after setting XS+2 = WS+1 , we still have hS+2 ?
1
??
e h1 + 2 (according to Lemma 5.1). Continuing this analysis (letting the ?? here be the ?new
??), we know that this modified version of blockFW converges to an ?-approximate minimizer in
O ? log ? ? log h?1 iterations.
Remark 5.2. Since in each iteration the rank of X_t is increased by at most k, if we do the modified step every S = O(κ log κ) iterations, we have that throughout the algorithm, rank(X_t) is never more than O(kκ log κ). Furthermore, we can always store X_t using O(kκ log κ) vectors, instead of storing all the singular vectors obtained in previous iterations.
6 Preliminary Empirical Evaluation
We conclude this paper with some preliminary experiments to test the performance of blockFW. We
first recall two machine learning tasks that fall into Problem (1.1).
Matrix Completion. Suppose there is an unknown matrix M ∈ ℝ^{m×n} close to low-rank, and we observe a subset Ω of its entries; that is, we observe M_{i,j} for every (i,j) ∈ Ω. (Think of M_{i,j} as user i's rating of movie j.) One can recover M by solving the following convex program:

  min_{X ∈ ℝ^{m×n}}  { (1/2) Σ_{(i,j)∈Ω} (X_{i,j} − M_{i,j})²  :  ‖X‖_* ≤ θ }.   (6.1)

Although Problem (6.1) is not strongly convex, our experiments show the effectiveness of blockFW on this problem.
Polynomial Neural Networks. Polynomial networks are neural networks with quadratic activation function σ(a) = a². Livni et al. [12] showed that such networks can express any function computed by a Turing machine, similar to networks with ReLU or sigmoid activations. Following [12], we consider the class of 2-layer polynomial networks with inputs from ℝ^d and k hidden neurons:

  P_k = { x ↦ Σ_{j=1}^k a_j (w_j^⊤ x)²  :  ∀j ∈ [k], w_j ∈ ℝ^d, ‖w_j‖₂ = 1, a ∈ ℝ^k }.

If we write A = Σ_{j=1}^k a_j w_j w_j^⊤, we have the following equivalent formulation:

  P_k = { x ↦ x^⊤ A x  :  A ∈ ℝ^{d×d}, rank(A) ≤ k }.
Therefore, if we replace the hard rank constraint with the trace-norm constraint ‖A‖_* ≤ θ, the task of empirical risk minimization (ERM) given training data {(x_1, y_1), …, (x_N, y_N)} ⊆ ℝ^d × ℝ can be formulated as⁷

  min_{A ∈ ℝ^{d×d}}  { (1/2) Σ_{i=1}^N (x_i^⊤ A x_i − y_i)²  :  ‖A‖_* ≤ θ }.   (6.2)

Since f(A) = (1/2) Σ_{i=1}^N (x_i^⊤ A x_i − y_i)² is convex in A, the above problem falls into Problem (1.1). Again, this objective f(A) might not be strongly convex, but we still perform experiments on it.
6.1 Preliminary Evaluation 1: Matrix Completion on Synthetic Data
We consider the following synthetic experiment for matrix completion. We generate a random rank-10 matrix in dimension 1000 × 1000, plus some small noise. We include each entry into Ω with probability 1/2. We scale M to ‖M‖_* = 10000, so we set θ = 10000 in (6.1).
We compare blockFW with FW and Garber [6]. When implementing the three algorithms, we use exact line search. For Garber's algorithm, we tune its parameter η_t = ct with different constant values c, and then exactly search for the optimal η̃_t. When implementing blockFW, we use k = 10 and α = 0.2. We use the MATLAB built-in solver for 1-SVD and k-SVD.
In Figure 1(a), we compare the numbers of 1-SVD computations for the three algorithms. The plot confirms that it suffices to apply a rank-k variant of FW in order to achieve linear convergence.
6.2 Auto Selection of k
In practice, it is often unrealistic to know k in advance. Although one can simultaneously try
k = 1, 2, 4, 8, . . . and output the best possible solution, this can be unpleasant to work with. We
propose the following modification to blockFW which automatically chooses k.
In each iteration t, we first run 1-SVD and compute the objective decrease, denoted by d_1 ≥ 0. Now, given any approximate k-SVD decomposition of the matrix A_t = αX_t − ∇f(X_t), we can compute its (k+1)-SVD using one additional 1-SVD computation according to the LazySVD framework [1].
⁷ We consider square loss for simplicity. It can be any loss function ℓ(x_i^⊤ A x_i, y_i) convex in its first argument.
Figure 1: Partial experimental results, plotting log(error) against the number of 1-SVD computations for FW, Garber, and this paper: (a) matrix completion on synthetic data; (b) matrix completion on MovieLens 1M, θ = 10000; (c) polynomial neural network on MNIST, θ = 0.03. The full 6 plots for MovieLens and 3 plots for MNIST are included in the full version of this paper.
We compute the new objective decrease d_{k+1}. We stop this process and move to the next iteration t+1 whenever d_{k+1}/(k+1) < d_k/k. In other words, we stop whenever it "appears" not worth further increasing k. We count this iteration t as using k+1 computations of 1-SVD.
All the experiments on real-life datasets are performed using the auto-k process above.
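A minimal sketch of the auto-k rule just described; `objective_decrease` is a hypothetical callable standing in for the d_k values computed from the LazySVD decompositions:

```python
def auto_k_step(objective_decrease):
    """Grow k while the average objective decrease per rank improves.
    objective_decrease(k) returns d_k, the decrease achievable with a
    rank-k step (each increment of k costs one extra 1-SVD via LazySVD).
    We stop as soon as d_{k+1} / (k+1) < d_k / k."""
    k = 1
    d_prev = objective_decrease(k)
    while True:
        d_next = objective_decrease(k + 1)
        if d_next / (k + 1) < d_prev / k:
            return k               # not worth increasing k further
        k, d_prev = k + 1, d_next
```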
6.3 Preliminary Evaluation 2: Matrix Completion on MovieLens

We study the same experiment as in Garber [6], the matrix completion Problem (6.1), on the datasets MovieLens 100K (m = 943, n = 1682 and |Ω| = 10⁵) and MovieLens 1M (m = 6040, n = 3952 and |Ω| ≈ 10⁶). In the second dataset, following [6], we further subsample Ω so it contains about half of the original entries. For each dataset, we run FW, Garber, and blockFW with three different choices of θ.⁸ We present the six plots side-by-side in the full version of this paper.
We observe that when θ is large, there is no significant advantage to using blockFW. This is because the rank of the optimal solution X* is also high for large θ. In contrast, when θ is small (so X* is of low rank), as demonstrated for instance by Figure 1(b), it is indeed beneficial to apply blockFW.
6.4 Preliminary Evaluation 3: Polynomial Neural Network on MNIST

We use the 2-layer neural network Problem (6.2) to train a binary classifier on the MNIST dataset of handwritten digits, where the goal is to distinguish images of the digit "0" from images of other digits. The training set contains N = 60000 examples, each of dimension d = 28 × 28 = 784. We set y_i = 1 if the example belongs to digit "0" and y_i = 0 otherwise. We divide the original grey levels by 256 so that x_i ∈ [0,1]^d. We again try three different values of θ, and compare FW, Garber, and blockFW.⁹
We present the three plots side-by-side in the full version of this paper.
The performance of our algorithm is comparable to FW and Garber for large ?, but as demonstrated
for instance by Figure 1(c), when ? is small so rank(X ? ) is small, it is beneficial to use blockFW.
7 Conclusion

In this paper, we develop a rank-k variant of Frank-Wolfe for Problem (1.1) and show that: (1) it converges at a log(1/ε) rate for smooth and strongly convex functions, and (2) its per-iteration complexity scales with polylog(1/ε). Preliminary experiments suggest that the value k can also be selected automatically, and that our algorithm outperforms FW and Garber [6] when X* is of relatively smaller rank.

We hope more rank-k variants of Frank-Wolfe can be developed in the future.
Acknowledgments
Elad Hazan is supported by NSF grant 1523815 and a Google research award.
⁸ We perform exact line search for all algorithms. For Garber [6], we tune the best η_t = ct and exactly search for the optimal η̃_t. For blockFW, we let k be chosen automatically and choose α = 0.01 for all six experiments.
⁹ We perform exact line search for all algorithms. For Garber [6], we tune the best η_t = ct and exactly search for the optimal η̃_t. For blockFW, we let k be chosen automatically and choose α = 0.0005 for all three experiments.
References
[1] Zeyuan Allen-Zhu and Yuanzhi Li. LazySVD: Even faster SVD decomposition yet without agonizing pain. In NIPS, pages 974-982, 2016.
[2] Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111-119, 2012.
[3] Emmanuel J. Candes, Yonina C. Eldar, Thomas Strohmer, and Vladislav Voroninski. Phase retrieval via matrix completion. SIAM Review, 57(2):225-251, 2015.
[4] Miroslav Dudik, Zaid Harchaoui, and Jérôme Malick. Lifted coordinate descent for learning with trace-norm regularization. In AISTATS, pages 327-336, 2012.
[5] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[6] Dan Garber. Faster projection-free convex optimization over the spectrahedron. In NIPS, pages 874-882, 2016.
[7] Dan Garber and Elad Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.
[8] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In ICML, pages 541-549, 2015.
[9] Elad Hazan. Sparse approximate solutions to semidefinite programs. In Latin American Symposium on Theoretical Informatics, pages 306-316. Springer, 2008.
[10] Martin Jaggi and Marek Sulovský. A simple algorithm for nuclear norm regularized problems. In ICML, pages 471-478, 2010.
[11] Simon Lacoste-Julien and Martin Jaggi. An affine invariant linear convergence analysis for Frank-Wolfe algorithms. arXiv preprint arXiv:1312.7864, 2013.
[12] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In NIPS, pages 855-863, 2014.
[13] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. In NIPS, pages 1396-1404, 2015.
[14] Yurii Nesterov. Introductory Lectures on Convex Programming Volume: A Basic Course, volume I. Kluwer Academic Publishers, 2004.
[15] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, December 2005.
[16] Shai Shalev-Shwartz, Alon Gonen, and Ohad Shamir. Large-scale convex minimization with a low-rank constraint. arXiv preprint arXiv:1106.1622, 2011.
'Ensemble' Boltzmann Units
have Collective Computational Properties
like those of Hopfield and Tank Neurons
Mark Derthick and Joe Tebelskis
Department of Computer Science
Carnegie-Mellon University
1 Introduction
There are three existing connectionist models in which network states are assigned
a computational energy. These models-Hopfield nets, Hopfield and Tank nets, and
Boltzmann Machines-search for states with minimal energy. Every link in the network can be thought of as imposing a constraint on acceptable states, and each violation adds to the total energy. This is convenient for the designer because constraint
satisfaction problems can be mapped easily onto a network. Multiple constraints can
be superposed, and those states satisfying the most constraints will have the lowest
energy.
Of course there is no free lunch. Constraint satisfaction problems are generally
combinatorial and remain so even with a parallel implementation. Indeed, Merrick
Furst (personal communication) has shown that an NP-complete problem, graph coloring, can be reduced to deciding whether a connectionist network has a state with
an energy of zero (or below). Therefore designing a practical network for solving a
problem requires more than simply putting the energy minima in the right places. The
topography of the energy space affects the ease with which a network can find good
solutions. If the problem has highly interacting constraints, there will be many local
minima separated by energy barriers. There are two principal approaches to searching these spaces: monotonic gradient descent, introduced by Hopfield [1] and refined
by Hopfield and Tank [2]; and stochastic gradient descent, used by the Boltzmann
Machine [3]. While the monotonic methods are not guaranteed to find the optimal
solution, they generally find good solutions much faster than the Boltzmann Machine.
This paper adds a refinement to the Boltzmann Machine search algorithm analogous
to the Hopfield and Tank technique, allowing the user to trade off the speed of search
for the quality of the solution.
© American Institute of Physics 1988
2 Hopfield nets
A Hopfield net [1] consists of binary-valued units connected by symmetric weighted links. The global energy of the network is defined to be

  E = −(1/2) Σ_i Σ_{j≠i} w_ij s_i s_j − Σ_i I_i s_i

where s_i is the state of unit i, and w_ij is the weight on the link between units i and j.
The search algorithm is: randomly select a unit and probe it until quiescence. During a probe, a unit decides whether to be on or off, determined by the states of its neighbors. When a unit is probed, there are two possible resulting global states. The difference in energy between these states is called the unit's energy gap:

  ΔE_i = Σ_j w_ij s_j + I_i .

The decision rule is

  s_i = { 0 if ΔE_i < 0;  1 otherwise }.

This rule chooses the state with lower energy. With time, the global energy of the network monotonically decreases. Since there are only a finite number of states, the network must eventually reach quiescence.
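For concreteness, a small NumPy sketch of a single deterministic probe (variable names are ours; W is the symmetric weight matrix and I the external inputs):

```python
import numpy as np

def probe_hopfield(s, W, I, i):
    """One deterministic probe of unit i: compute the energy gap
    Delta_i = sum_j w_ij * s_j + I_i and adopt the state of lower
    energy (off when the gap is negative, on otherwise)."""
    gap = W[i] @ s + I[i]
    s[i] = 0 if gap < 0 else 1
    return s
```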
3 Boltzmann Machines

A Boltzmann Machine [3] also has binary units and weighted links, and the same energy function is used. Boltzmann Machines also have a learning rule for updating weights, but it is not used in this paper. Here the important difference is in the decision rule, which is stochastic. As in probing a Hopfield unit, the energy gap is determined. It is used to determine a probability of adopting the on state:

  P(s_i = 1) = 1 / (1 + e^{−ΔE_i/T})
where T is the computational temperature. With this rule, energy does not decrease
monotonically. The network is more likely to adopt low energy states, but it sometimes goes uphill. The idea is that it can search a number of minima, but spends
more time in deeper ones. At low temperatures, the ratio of time spent in the deepest
minima is so large that the chances of not being in the global minimum are negligible.
It has been proven [4] that after searching long enough, the probabilities of the states are given by the Boltzmann distribution, which is strictly a function of energy and temperature, and is independent of topography:

  P_α / P_β = e^{−(E_α − E_β)/T}   (1)
The approach to equilibrium, where equation 1 holds, is speeded by initially searching at a high temperature and gradually decreasing it. Unfortunately, reaching equilibrium still takes exponential time. While the Hopfield net settles quickly and is not guaranteed to find the best solution, a Boltzmann Machine can theoretically be run long enough to guarantee that the global optimum is found. Most of the time the uphill moves which allow the network to escape local minima are a waste of time, however. It is a direct consequence of the guaranteed ability to find the best solution that makes finding even approximate solutions slow.
Hopfield and Tank networks
In Hopfield and Tank nets [2], the units take on continuous values between zero and
one, so the search takes place in the interior of a hypercube rather than only on its
vertices. The search algorithm is deterministic gradient descent. By beginning near
the center of the space and searching in the direction of steepest descent, it seems
likely that the deepest minimum will be found. There is still no guarantee, but good
results have been reported for many problems.
The modified energy equation is

  E = −(1/2) Σ_i Σ_j w_ij s_i s_j + Σ_i (1/R_i) ∫₀^{s_i} g⁻¹(s) ds − Σ_i I_i s_i   (2)

R_i is the input resistance to unit i, and g(u) is the sigmoidal unit transfer function 1/(1 + e^{−2λu}). The second term is zero for extreme values of s_i, and is minimized at s_i = 1/2.
The Hopfield and Tank model is continuous in time as well as value. Instead of proceeding by discrete probes, the system is described by simultaneous differential equations, one for each unit. Hopfield and Tank show that the following equation of motion results in a monotonic decrease in the value of the energy function:

  du_i/dt = −u_i/τ + Σ_j w_ij s_j + I_i

where τ = RC, C is a constant determining the speed of convergence, u_i = g⁻¹(s_i), and the gain, λ, is analogous to (the inverse of) temperature in a Boltzmann Machine.
and the gain, .A, is analgous to (the inverse of) temperature in a Boltzmann Machine .
.A determines how important it is to satisfy the constraints imposed by the links to
other units. When .A is low, these constraints are largely ignored and the second term
dominates, tending to keep the system near the center of the search space, where
there is a single global minimum. At high gains, the minima lie at the corners of
the search space, in the same locations as for the Hopfield model and the Boltzmann
model. If the system is run at high gain, but the initial state is near the center of the
space, the search gradually moves out towards the corners, on the way encountering
"continental divides" between watersheds leading to all the various local minima. The
initial steepness of the watersheds serves as a heuristic for choosing which minima is
226
likely to be lower. This search heuristic emerges automatically from the architecture,
making network design simple. For many problems this single automatic heuristic
results in a system comparable to the best knowledge intensive algorithms in which
many domain specific heuristics are laboriously hand programmed.
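A sketch of one forward-Euler step of these dynamics (the discretization and names are ours; Hopfield and Tank analyse the continuous-time system):

```python
import numpy as np

def hopfield_tank_step(u, W, I, lam, tau, dt):
    """One Euler step of du_i/dt = -u_i/tau + sum_j w_ij s_j + I_i,
    with s = g(u) and gain lam in g(u) = 1 / (1 + exp(-2 * lam * u))."""
    s = 1.0 / (1.0 + np.exp(-2.0 * lam * u))
    u_next = u + dt * (-u / tau + W @ s + I)
    return u_next, s
```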
For many problems, Hopfield and Tank nets seem quite sufficient [5,6]. However
for one network we have been using [7] the Hopfield and Tank model invariably settles
into poor local minima. The solution has been to use a new model combining the
advantages of Boltzmann Machines and Hopfield and Tank networks.
5 'Ensemble' Boltzmann Machines
It seems the Hopfield and Tank model gets its advantage by measuring the actual
gradient, giving the steepest direction to move. This is much more informative than
picking a random direction and deciding which of the two corners of the space to try,
as models using binary units must do. Peter Brown (personal communication) has
investigated continuous Boltzmann Machines, in which units stochastically adopt a
state between zero and one. The scheme presented here has a similar effect, but the
units actually take on discrete states between zero and one. Each ensemble unit can
be thought of as an ensemble of identically connected conventional Boltzmann units.
To probe the ensemble unit, each of its constituents is probed, and the state of the
ensemble unit is the average of its constituents' states. Because this average is over
a number of identical independent binary random variables, the ensemble unit's state
is binomially distributed.
Figure 1 shows an ensemble unit with three constituents. At infinite temperature, all unit states tend toward 1/2, and at zero temperature the states go to zero or one unless the energy gap is exactly zero. This is similar to the behavior of a Hopfield and
Tank network at low and high gain, respectively. In Ensemble Boltzmann Machines
(EBMs) the tendency towards 1/2 in the absence of constraints from other units results
from the shape of the binomial distribution. In contrast, the second term in the energy
equation is responsible for this effect in the Hopfield and Tank model.
Although an EBM proceeds in discrete time using probes, over a large number of
probes the search tends to proceed in the direction of the gradient. Every time a unit
is probed, a move is made along one axis whose length depends on the magnitude of
the gradient in that direction. Because probing still contains a degree of stochasticity,
EBMs can escape from local minima, and if run long enough are guaranteed to find
the global minimum. By varying n, the number of components of each ensemble
unit, the system can exhibit any intermediate behavior in the tradeoff between the
speed of convergence of Hopfield and Tank networks, and the ability to escape local
minima of Boltzmann Machines.
Clearly when n = 1 the performance is identical to a conventional Boltzmann
Machine, because each unit consists of a single Boltzmann unit. As n → ∞ the
Figure 1: The heavy lines depict an 'Ensemble' Boltzmann Machine with two units. With an ensemble size of three, this network behaves like a conventional Boltzmann Machine consisting of six units (light lines). The state of each ensemble unit is the average of the states of its components.
value a unit takes on after probing becomes deterministic. The stable points of the
system are then identical to the ones of the Hopfield and Tank model.
To prove this, it suffices to show that at each probe the ensemble Boltzmann
unit takes on the state which gives rise to the lowest (Hopfield and Tank) energy.
Therefore the energy must monotonically decrease. Further, if the system is not at a
global (Hopfield and Tank) energy minimum, there is some unit which can be probed
so as to lower the energy.
To show that the state resulting from a probe is the minimum possible, we show
first that the derivative of the energy with respect to the unit's state is zero at the
resulting state, and second that the second derivative is positive over the entire range
of possible states, zero to one.
Taking the derivative of equation 2 gives

  dE/ds_k = −Δ_k + (1/R) g⁻¹(s_k).

Now

  g(u) = 1 / (1 + e^{−2λu})

so

  g⁻¹(s) = (1/2λ) ln( s / (1−s) ).

Let T = 1/(2λR). The EBM update rule is

  s_k = 1 / (1 + e^{−Δ_k/T}).

Therefore

  dE/ds_k |_{s_k} = −Δ_k + T ln[ (1/(1+e^{−Δ_k/T})) / (e^{−Δ_k/T}/(1+e^{−Δ_k/T})) ]
                 = −Δ_k + T ln e^{Δ_k/T}
                 = −Δ_k + T (Δ_k/T)
                 = 0

and

  d²E/ds_k² = (1/2λR) · ((1−s_k)/s_k) · [ (1−s_k) − (−s_k) ] / (1−s_k)²
            = 1 / ( 2λR s_k (1−s_k) )
            > 0  on 0 < s_k < 1.
In writing a program to simulate an EBM, it would be wasteful to explicitly represent the components of each ensemble unit. Since each component has an identical energy gap, the average of their values is given by the binomial distribution b(n,p), where n is the ensemble size and p is 1/(1 + e^{−Δ/T}). There are numerical methods for sampling from this distribution in time independent of n [8]. When n is infinite, there is no need to bother with the distribution because the result is just p.
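A sketch of this trick (names and interface are ours): the ensemble unit is probed with a single binomial draw, and the infinite-n limit returns the mean-field value p directly.

```python
import numpy as np

def probe_ensemble(s, W, I, i, T, n, rng=None):
    """Probe an 'ensemble' unit: instead of simulating its n identical
    constituents, draw their mean state in one shot from the binomial
    b(n, p) with p = 1 / (1 + exp(-Delta_i / T)).  With n infinite the
    update is simply the deterministic value p."""
    if rng is None:
        rng = np.random.default_rng()
    gap = W[i] @ s + I[i]
    p = 1.0 / (1.0 + np.exp(-gap / T))
    s[i] = p if np.isinf(n) else rng.binomial(n, p) / n
    return s
```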
Hopfield and Tank suggest [2] that the Hopfield and Tank model is a mean field approximation to the original Hopfield model. In a mean field approximation, the average value of a variable is used to calculate its effect on other variables, rather than calculating all the individual interactions. Consider a large ensemble of Hopfield nets with two units, A and B. To find the distribution of final states exactly, each B unit must be updated based on the A unit in the same network. The calculation must be repeated for every network in the ensemble. Using a mean field approximation, the average value of all the B units is calculated based on the average value of all the A units. This calculation is no harder than that of the state of a single Hopfield network, yet is potentially more informative since it approximates an average property of a whole ensemble of Hopfield networks. The states of Hopfield and Tank units can be viewed as representing the ensemble average of the states of Hopfield units in this way. Peterson and Anderson [9] demonstrate rigorously that the behavior is a mean field approximation.

In the EBM, it is intuitively clear that a mean field approximation is being made. The network can be thought of as a real ensemble of Boltzmann networks, except with additional connections between the networks so that each Boltzmann unit sees not only its neighbors in the same net, but also sees the average state of the neighboring units in all the nets (see figure 1).
6 Traveling Salesman Problem
The traveling salesman problem illustrates the use of energy-based connectionist networks, and the ease with which they may be designed. Given a list of city locations,
the task is to find a tour of minimum length through all the cities and returning to
the starting city. To represent a solution to an n city problem in a network, it is
convenient to use n columns of n rows of units [2]. If a unit at coordinates (i, j) is
on, it indicates that the ith city is the jth to be visited. A valid solution will have n
units on, one in every column and one in every row. The requirements can be divided
into four constraints: there can be no more than one unit on in a row, no more than
one unit on in a column, there must be n units on, and the distances between cities
must be minimized. Hopfield and Tank use the following energy function to effect
these constraints:
  E = (A/2) Σ_X Σ_i Σ_{j≠i} s_{Xi} s_{Xj}
    + (B/2) Σ_i Σ_X Σ_{Y≠X} s_{Xi} s_{Yi}
    + (C/2) ( Σ_X Σ_i s_{Xi} − n )²
    + (D/2) Σ_X Σ_{Y≠X} Σ_i d_{XY} s_{Xi} ( s_{Y,i+1} + s_{Y,i−1} )   (3)
Here units are given two subscripts to indicate their row and column, and the subscripts "wrap around" when outside the range 1 ≤ i ≤ n. The first term is implemented with inhibitory links between every pair of units in a row, and is zero only if no two are on. The second term is inhibition within columns. In the third term, n is the number of cities in the tour. When the system reaches a vertex of the search space, this term is zero only if exactly n units are on. This constraint is implemented with inhibitory links between all n⁴ pairs of units plus an excitatory input current to all units. In the last term d_{XY} is the distance between cities X and Y. At points in the search space representing valid tours, the summation is numerically equal to the length of the tour.
As long as the constraints ensuring that the solution is a valid tour are stronger
than those minimizing distance, the global energy minimum will represent the shortest
tour. However every valid tour will be a local energy minimum. Which tour is chosen
will depend on the random initial starting state, and on the random probing order.
7 Empirical Results
The evidence that convinced me EBMs offer improved performance over Hopfield
and Tank networks was the ease of tuning them for the Ted Turner problem reported
in [7]. However this evidence is entirely subjective; it is impossible to show that
no set of parameters exist which would make the Hopfield and Tank model perform
well. Instead we have chosen to repeat the traveling salesman problem experiments
reported by Hopfield and Tank [2], using the same cities and the same values for the
constants in equation 3. The tour involves 10 cities, and the shortest tour is of length
2.72. An average tour has length 4.55. Hopfield and Tank report finding a valid tour
in 16 of 20 settlings, and that half of these are one of the two shortest tours.
One advantage of Hopfield and Tank nets over Boltzmann Machines is that they
move continuously in the direction of the gradient. EBMs move in discrete jumps
whose size is the value of the gradient along a given axis. When the system is
far from equilibrium these jumps can be quite large, and the search is inefficient.
Although Hopfield and Tank nets can do a whole search at high gain, Boltzmann
Machines usually vary the temperature so the system can remain close to equilibrium
as the low temperature eqUilibrium is approached. For this reason our model was
more sensitive to the gain parameter than the Hopfield and Tank model, and we used
temperatures much higher than
As expected, when n is infinite, an EBM produces results similar to those reported
by Hopfield and Tank. 85 out of 100 settlings resulted in valid tours, and the average
length was 2.73. Table 1 shows how n affects the number of valid tours and the
average tour length. As n decreases from infinity, both the average tour length and
the number of valid tours increases. (We have no explanation for the anomalously
low number of valid tours for n = 40.) Both of these effects result from the increased
sampling noise in determining the ensemble unit states for lower n. With more
noise, the system has an easier time escaping local minima which do not represent
valid tours. Yet at the same time the discriminability between the very best tours
and moderately good tours decreases, because these smaller energy differences are
swamped by the noise.
Rather than stop trials when the network was observed to converge, a constant
number of probes, 200 per unit, was made. However we noted that convergence was
generally faster for larger values of n. Thus for the traveling salesman problem, large
n gives faster and better solutions, but smaller values give the highest reliability.
Depending on the application, a value of either infinity or 50 seems best.
8 Conclusion
'Ensemble' Boltzmann Machines are completely upward compatible with conventional Boltzmann Machines. The above experiment can be taken to show that they
perform better at the traveling salesman problem. In addition, at the limit of infinite
ensemble size they perform similarly to Hopfield and Tank nets. For TSP and perhaps
many other problems, the latter model seems an equally good choice. Perhaps due to
the extreme regularity of the architecture, the energy space must be nicely behaved
Ensemble Size    Percent Valid    Average Tour Length
1                93               3.32
40               84               2.92
50               95               2.79
100              89               2.79
1000             90               2.80
infinity         85               2.73
Table 1: Number of valid tours out of 100 trials and average tour length, as a function
of ensemble size. An ensemble size of one corresponds to a Boltzmann Machine. Infinity
loosely corresponds to a Hopfield and Tank network.
in that the ravine steepness near the center of the space is a good indication of its eventual depth. In this case the ability to escape local minima is not required for good performance.

For the Ted Turner problem, which has a very irregular architecture and many more constraint types, the ability to escape local minima seems essential. Conventional Boltzmann Machines are too noisy, both for efficient search and for debugging. EBMs allow the designer the flexibility to add only as much noise as is necessary. In addition, lower noise can be used for debugging. Even though this may give poorer performance, a more deterministic search is easier for the debugger to understand, allowing the proper fix to be made.
Acknowledgements
We appreciate receiving data and explanations from David Tank, Paul Smolensky,
and Erik Sobel. This research has been supported by an ONR Graduate Fellowship,
by NSF grant EET-8716324, and by the Defense Advanced Research Projects Agency
(DOD), ARPA Order No. 4976 under contract F33615-87-C-1499 and monitored by
the:
Avionics Laboratory
Air Force Wright Aeronautical Laboratories
Aeronautical Systems Division (AFSC)
Wright-Patterson AFB, OH 45433-6543
This research was also sponsored by the same agency under contract N00039-87C-0251 and monitored by the Space and Naval Warfare Systems Command.
References
[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences U.S.A., vol. 79, pp. 2554-2558, April 1982.
[2] J. Hopfield and D. Tank, "'Neural' computation of decisions in optimization problems," Biological Cybernetics, vol. 52, pp. 141-152, 1985.
[3] G. E. Hinton and T. J. Sejnowski, "Learning and relearning in Boltzmann Machines," in Parallel distributed processing: Explorations in the microstructure of cognition, Cambridge, MA: Bradford Books, 1986.
[4] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, pp. 721-741, 1984.
[5] J. L. Marroquin, Probabilistic Solution of Inverse Problems. PhD thesis, MIT, September 1985.
[6] J. Hopfield and D. Tank, "Simple 'Neural' optimization networks: an A/D converter, signal decision circuit and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, pp. 533-541, 1986.
[7] M. Derthick, "Counterfactual reasoning with direct models," in AAAI-87, Morgan Kaufmann, July 1987.
[8] D. E. Knuth, The Art of Computer Programming, Second Edition, Vol. 2, Addison-Wesley, 1981.
[9] C. Peterson and J. R. Anderson, "A mean field theory learning algorithm for neural networks," Tech. Rep. EI-259-87, MCC, August 1987.
Pruning, Committees, Error Bars
and an Application to Spectroscopy
Hans Henrik Thodberg
Danish Meat Research Institute
Maglegaardsvej 2, DK-4000 Roskilde
thodberg@nn.meatre.dk
Abstract
MacKay's Bayesian framework for backpropagation is conceptually
appealing as well as practical. It automatically adjusts the weight
decay parameters during training, and computes the evidence for
each trained network. The evidence is proportional to our belief
in the model. The networks with highest evidence turn out to
generalise well. In this paper, the framework is extended to pruned
nets, leading to an Ockham Factor for "tuning the architecture
to the data". A committee of networks, selected by their high
evidence, is a natural Bayesian construction. The evidence of a
committee is computed. The framework is illustrated on real-world
data from a near infrared spectrometer used to determine the fat
content in minced meat. Error bars are computed, including the
contribution from the dissent of the committee members.
1 THE OCKHAM FACTOR
William of Ockham's (1285-1349) principle of economy in explanations can be
formulated as follows:
If several theories account for a phenomenon we should prefer the
simplest which describes the data sufficiently well.
The principle states that a model has two virtues: simplicity and goodness of fit.
But what is the meaning of "sufficiently well" - i.e. what is the optimal trade-off
between the two virtues? With Bayesian model comparison we can deduce this
trade-off.
We express our belief in a model as its probability given the data, and use Bayes'
formula:
  P(H | D) = P(D | H) P(H) / P(D)   (1)
We assume that the prior belief P(H) is the same for all models, so we can compare models by comparing P(D | H), which is called the evidence for H, and acts as a quality measure in model comparison.

Assume that the model has a single tunable parameter w with a prior range Δw_prior so that P(w | H) = 1/Δw_prior. The most probable (or maximum posterior) value w_MP of the parameter w is given by the maximum of

  P(w | D, H) = P(D | w, H) P(w | H) / P(D | H)   (2)
The width of this distribution is denoted Δw_posterior. The evidence P(D | H) is obtained by integrating over the posterior w distribution and approximating the integral:

  P(D | H) = ∫ P(D | w, H) P(w | H) dw   (3)
           ≈ P(D | w_MP, H) · Δw_posterior / Δw_prior   (4)

  Evidence = Likelihood × Ockham Factor   (5)
The evidence for the model is the product of two factors:
• The best fit likelihood, i.e. the probability of the data given the model and the tuned parameters. It measures how well the tuned model fits the data.
• The integrated probability of the tuned model parameters with their uncertainties, i.e. the collapse of the available parameter space when the data is taken into account. This factor is small when the model has many parameters or when some parameters must be tuned very accurately to fit the data. It is called the Ockham Factor since it is large when the model is simple.
By optimizing the modelling through the evidence framework we can avoid the
overfitting problem as well as the equally important "underfitting" problem.
2 THE FOUR LEVELS OF INFERENCE
In 1991-92 MacKay presented a comprehensive and detailed framework for combining backpropagation neural networks with Bayesian statistics (MacKay, 1992). He
outlined four levels of inference which applies for instance to a regression problem
where we have a training set and want to make predictions for new data:
Level 1 Make predictions including error bars for new input data.
Level 2 Estimate the weight parameters and their uncertainties.
Level 3 Estimate the scale parameters (the weight decay parameters and the noise
scale parameter) and their uncertainties.
Level 4 Select the network architecture and for that architecture select one of the
w-minima. Optionally select a committee to reflect the uncertainty on this
level.
Level 1 is the typical goal in an application. But to make predictions we have to
do some modelling, so at level 2 we pick a net and some weight decay parameters
and train the net for a while. But the weight decay parameters were picked rather
arbitrarily, so on level 3 we set them to their inferred maximum posterior (MP)
value. We alternate between level 2 and 3 until the network has converged. This is
still not the end, because also the network architecture was picked rather arbitrarily.
Hence level 2 and 3 are repeated for other architectures and the evidences of these
are computed on level 4. (Pruning makes level 4 more complicated, see section 6).
When we make inference on each of these levels, there are uncertainties which are
described by the posterior distributions of the parameters which are inferred. The
uncertainty on level 2 is described by the Hessian (the second derivative of the net
cost function with respect to the weights). The uncertainty on level 3 is negligible
if the number of weight decays parameters is small compared to the number of
weights. The uncertainty on level 4 is described by the committee of networks with
highest evidence within some margin (discussed below).
The uncertainties are used for two purposes. Firstly they give rise to error bars on
the predictions on level 1. And secondly the posterior uncertainty divided by the
prior uncertainty (the Ockham Factor) enters the evidence.
MacKay's approach differs in two respects from other Bayesian approaches to neural
nets:
• It assumes the Gaussian approximation to the posterior weight distribution. In contrast, the Monte Carlo approach of (Neal, 1992) does not suffer from this limitation.
• It determines maximum posterior values of the weight decay parameters, rather than integrating them out as done in (Buntine and Weigend, 1991).
It is difficult to justify these choices in general. The Gaussian approximation is
believed to be good when there are at least 3 training examples per weight (MacKay,
1992). The use of MP weight decay parameters is the superior method when there
are ill-defined parameters, as there usually are in neural networks, where some weights
are typically poorly defined by the data (MacKay, 1993).
3 BAYESIAN NEURAL NETWORKS

The training set D consists of N cases of the form (x, t). We model t as a function of x, t = y(x) + ν, where ν is Gaussian noise and y(x) is computed by a neural
network H with weights w. The noise scale is a free parameter β = 1/σ_ν². The probability of the data (the likelihood) is

  P(D | w, β, H) ∝ exp(−β E_D)   (6)

  E_D = (1/2) Σ (y − t)²   (7)

where the sum extends over the N cases.
In Bayesian modelling we must specify the prior distribution of the model parameters. The model contains k adjustable parameters w, called weights, which are in general split into several groups, for instance one per layer of the net. Here we consider the case with all weights in one group. The general case is described in (MacKay, 1992) and in more detail in (Thodberg, 1993). The prior of the weights w is

  P(w | β, ξ, H) ∝ exp(−β ξ E_W)   (8)

  E_W = (1/2) Σ w²   (9)

β and ξ are called the scales of the model and are free parameters determined by the data.
The most probable values of the weights given the data, some values of the scales (to be determined later) and the model, are given by the maximum of

  P(w | D, β, ξ, H) = P(D | w, β, ξ, H) P(w | β, ξ, H) / P(D | β, ξ, H)   (10)
                    ∝ exp(−β C)   (11)

where C = E_D + ξ E_W. So the maximum posterior weights according to the probabilistic interpretation are identical to the weights obtained by minimising the familiar cost function C with weight decay parameter ξ. This is the well-known Bayesian account for weight decay.
4 MACKAY'S FORMULAE

The single most useful result of MacKay's analysis is a simple formula for the MP value of the weight decay parameter:

  ξ_MP = (E_D / E_W) · γ / (N − γ)   (12)

where γ is the number of well-determined parameters, which can be approximated by the actual number of parameters k, or computed more accurately from the eigenvalues λ_i of the Hessian ∇∇E_D:

  γ = Σ_{i=1}^k λ_i / (λ_i + ξ_MP)   (13)

The MP value of the noise scale is β_MP = N/(2C).
The evidence for a neural network H is, as in section 1, obtained by integration over the posterior distribution of the inferred parameters, which gives rise to the Ockham Factors:

  Ev(H) ≡ log P(D | H)
        = −(N − γ)/2 − (N/2) log(4πC/N)
          + log Ock(w) + log Ock(β) + log Ock(ξ)   (14)

  log Ock(w) = (1/2) Σ_{i=1}^k log( ξ_MP / (ξ_MP + λ_i) ) + log h! + h log 2   (15)

  Ock(β) = √(4π/(N − γ)) / log Ω ,   Ock(ξ) = √(4π/γ) / log Ω   (16)
The first line in (14) is the log likelihood. The Ockham Factor for the weights Ock(w) is small when the eigenvalues λ_i of the Hessian are large, corresponding to well-determined weights. Ω is the prior range of the scales and is set (subjectively) to 10³.
The expression (15) (valid for a network with a single hidden layer) contains a symmetry factor h!·2^h. This is because the posterior volume must include all w configurations which are equivalent to the particular one. The hidden units can be permuted, giving a factor h! more posterior volume. And the sign of the weights to and from every hidden unit can be changed, giving 2^h times more posterior volume.
5 COMMITTEES
For a given data set we usually train several networks with different numbers of hidden units and different initial weights. Several of these networks have evidence near or at the maximal value, but the networks differ in their predictions. The different solutions are interpreted as components of the posterior distribution, and the correct Bayesian answer is obtained by averaging the predictions over the solutions, weighted by their posterior probabilities, i.e. their evidences. However, the evidence is not accurately determined, primarily due to the Gaussian approximation. This means that instead of weighting with Ev(H) we should use the weight exp(log Ev / Δ(log Ev)), where Δ(log Ev) is the total uncertainty in the evaluation of log Ev. As an approximation to this, we define the committee as the models with evidence larger than log Ev_max − Δ log Ev, where Ev_max is the largest evidence obtained, and all members enter with the same weight.

To compute the evidence Ev(C) of the committee, we assume for simplicity that all networks in the committee C share the same architecture. Let N_C be the number of truly different solutions in the committee. Of course, we count symmetric realisations only once. The posterior volume, i.e. the Ockham Factor for the weights, is now N_C times larger. This renders the committee more probable; it has a larger evidence:

  log Ev(C) = log N_C + ⟨log Ev(H)⟩   (17)

where ⟨log Ev(H)⟩ denotes the average log evidence of the members. Since the evidence is correlated with the generalisation error, we expect the committee to generalise better than the committee members.
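A sketch of the committee construction and prediction averaging described above, with hypothetical model callables:

```python
import numpy as np

def committee_predict(models, log_evidences, x, delta):
    """Form the committee from all models within `delta` of the best log
    evidence and average their predictions with equal weight; the spread
    of the member predictions measures the committee dissent."""
    top = max(log_evidences)
    members = [m for m, ev in zip(models, log_evidences) if ev > top - delta]
    preds = np.array([m(x) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)
```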
6 PRUNING
We now extend the Bayesian framework to networks which are pruned to adjust the
architecture to the particular problem. This extends the fourth level of inference.
At first sight, the factor h! in the Ockham Factor for the weights in a sparsely connected network appears to be lost, since the network is (in general) not symmetric
with respect to permutations of the hidden units. However, the symmetry reappears
because for every sparsely connected network with tuned weights there are h! other
equivalent network architectures obtained by permuting the hidden units. So the
factor h! remains. If this argument is not found compelling, it can be viewed as an
assumption.
If the data are used to select the architecture, which is the case in pruning designed
to minimise the cost function, an additional Ockham Factor must be included.
With one output unit, only the input-to-hidden layer is sparsely connected, so
consider only these connections. Attach a binary pruning parameter to each of
the m potential connections. A sparsely connected architecture is described by
the values of the pruning parameters. The prior probability of a connection to be
present is described by a hyperparameter cP which is determined from the data i.e.
it is set to the fraction of connections remaining after pruning (notice the analogy
between cP and a weight decay parameter). A non-pruned connection gives an
Ockham Factor cP and a pruned 1 - cP, assuming the data to be certain about the
architecture. The Ockham Factors for the pruning parameters is therefore
log Ock(pruning)
= m(cPMP log cPMP + (1 -
cPMP) 10g(1 - cPMP?
(18)
The tuning of the meta-parameter to the data gives an Ockham factor Ock( cP) :::::;
J2jm, which is rather negligible.
From a minimum description length perspective, (18) reflects the extra information needed to describe the topology of a pruned net relative to a fully connected net. It acts like a barrier against pruning. Pruning is favoured only if the negative contribution log Ock(pruning) is compensated by an increase in, for instance, log Ock(w).
7 APPLICATION TO SPECTROSCOPY
Bayesian Backprop is used in a real-life application from the meat industry. The
data were recorded by a Tecator near-infrared spectrometer which measures the
spectrum of light transmitted through samples of minced pork meat. The absorbance spectrum has 100 channels in the region 850-1050 nm. We want to calibrate
the spectrometer to determine the fat content. The first 10 principal components
of the spectra are used as input to a neural network.
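A minimal sketch of this preprocessing step, with random arrays standing in for the Tecator spectra (the sample count is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
spectra = rng.normal(size=(215, 100))  # assumed: 215 samples, 100 NIR channels

# Centre the data and project onto the first 10 principal components.
centred = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
inputs = centred @ vt[:10].T           # network inputs: 10 scores per sample
print(inputs.shape)                    # (215, 10)
```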
Three weight decay parameters are used: one for the weights and biases of the
hidden layer, one for the connections from the hidden to the output layer, and one
for the direct connections from the inputs to the output as well as the output bias.
The relation between test error and log evidence is shown in figure 1. The test error
is given as standard error of prediction (SEP), i.e. the root mean square error. The
12 networks with 3 hidden units and evidence larger than -270 are selected for a
committee.

Figure 1: The test error as a function of the log evidence for networks trained on the spectroscopic data. High evidence implies low test error. [Scatter plot: prediction error (SEP) versus log evidence, in the range −320 to −260, for networks with 1, 2, 3, 4, 6, and 8 hidden units.]

The committee average gives 6% lower SEP than the members do on
average, and 21% lower SEP than a non-Bayesian analysis using early stopping (see
Thodberg, 1993).
Pruning is applied to the networks with 6 hidden units. The evidence decreases
slightly, i.e. Ock(pruning) dominates. Also the SEP is slightly worse. So the
evidence correctly suggests that pruning is not useful for this problem.¹
The Bayesian error bars are illustrated for the spectroscopic data in figure 2. We
study the model predictions on the line through input space defined by the second
principal component axis, i.e. the second input is varied while all other inputs are
zero. The total prediction variance for a new datum x is
(19)
where Uwu comes from the weight uncertainties (level 2) and Ucu from the committee dissent (level 4).
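In code, the decomposition (19) is a simple sum of the three variance contributions; the numerical values below are illustrative assumptions:

```python
import numpy as np

# Assumed contributions at a test point x (all variances, illustrative values):
var_noise = 0.05      # sigma_nu^2: random noise (level 1)
var_weights = 0.02    # sigma_wu^2(x): weight uncertainty (level 2)
var_committee = 0.01  # sigma_cu^2(x): committee dissent (level 4)

var_total = var_noise + var_weights + var_committee  # equation (19)
print(np.sqrt(var_total))  # total standard error sigma_total(x)
```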
¹ For artificial data generated by a sparsely connected network, the evidence correctly points to pruned nets as better models (see Thodberg, 1993).
[Plot: prediction of the fat content along the second principal component axis (range −4 to 4), showing the committee prediction with its total-uncertainty band, and separate curves for the total uncertainty, the committee uncertainty, 10× the random noise, and 10× the weight uncertainty.]
Figure 2: Prediction of the fat content as a function of the second principal component p₂ of the NIR spectrum. 95% of the training data have |p₂| < 2. The total error bars are indicated by a "1 sigma" band with the dotted lines. The total standard error σ_total(x) and the standard errors of its contributions (σ_ν, σ_wu(x) and σ_cu(x)) are shown separately, multiplied by a factor of 10.
References

W. L. Buntine and A. S. Weigend, "Bayesian Back-Propagation", Complex Systems 5 (1991) 603-643.

R. M. Neal, "Bayesian Learning via Stochastic Dynamics", Neural Information Processing Systems, Vol. 5, ed. C. L. Giles, S. J. Hanson and J. D. Cowan (Morgan Kaufmann, San Mateo, 1993).

D. J. C. MacKay, "A Practical Bayesian Framework for Backpropagation Networks", Neural Computation 4 (1992) 448-472.

D. J. C. MacKay, paper on Bayesian hyperparameters, in preparation 1993.

H. H. Thodberg, "A Review of Bayesian Backprop with an Application to Near Infrared Spectroscopy" and "A Bayesian Approach to Pruning of Neural Networks", submitted to IEEE Transactions on Neural Networks 1993 (in /pub/neuroprose/thodberg.ace-of-bayes*.ps.Z on archive.cis.ohio-state.edu).
6,855 | 7,200 | Hierarchical Clustering Beyond the Worst-Case
Vincent Cohen-Addad
University of Copenhagen
[email protected]
Varun Kanade
University of Oxford
Alan Turing Institute
[email protected]
Frederik Mallmann-Trenn
MIT
[email protected]
Abstract
Hierarchical clustering, that is, computing a recursive partitioning of a dataset to obtain clusters at increasingly finer granularity, is a fundamental problem in data analysis. Although hierarchical clustering has mostly been studied through procedures such as linkage algorithms, or top-down heuristics, rather than as optimization problems, Dasgupta [9] recently proposed an objective function for hierarchical clustering and initiated a line of work developing algorithms that explicitly optimize an objective (see also [7, 22, 8]). In this paper, we consider a fairly general random graph model for hierarchical clustering, called the hierarchical stochastic block model (HSBM), and show that in certain regimes the SVD approach of McSherry [18] combined with specific linkage methods results in a clustering that gives an O(1) approximation to Dasgupta's cost function. Finally, we report empirical evaluation on synthetic and real-world data showing that our proposed SVD-based method does indeed achieve a better cost than other widely-used heuristics and also results in a better classification accuracy when the underlying problem was that of multi-class classification.
1 Introduction
Computing a recursive partitioning of a dataset to obtain a finer and finer classification of the data is a
classic problem in data analysis. Such a partitioning is often referred to as a hierarchical clustering and
represented as a rooted tree whose leaves correspond to data elements and where each internal node
induces a cluster of the leaves of its subtree. There exists a large literature on the design and analysis of
algorithms for hierarchical clustering (see e.g., [21]). Two main approaches have proven to be successful in practice so far: on the one hand divisive heuristics compute the hierarchical clustering tree in a
top-down fashion by recursively partitioning the data (see e.g., [14]). On the other hand, agglomerative
heuristics produce a tree by first defining a cluster for each data element and successively merging
clusters according to a carefully defined function (see e.g., [19]). These heuristics are widely used in
practice and are now part of the data scientists' toolkit: standard machine learning libraries contain
implementations of both types of heuristics.
Agglomerative heuristics have several appealing features: they are easy to implement, easy to tune, and their running time is Õ(n² polylog n) on a dataset of size n. Standard divisive heuristics based on graph partitioning or clustering methods (like for example the bisection k-means or the recursive sparsest-cut approaches) often involve solving or approximating NP-hard problems.¹ Therefore, it is natural to
¹ In some cases, it may be possible to have very fast algorithms based on heuristics to compute partitions; however, we are unaware of any such methods that would have provable guarantees for the kinds of graphs that appear in hierarchical clustering.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
ask how good the solution output by an agglomerative method is compared to the solution output by
a top-down method.
From a qualitative perspective, this question has been addressed in a large body of work (see e.g., [5]).
However, from a quantitative perspective little is known. As Dasgupta observes in his recent work [9],
both agglomerative and divisive heuristics are defined procedurally rather than in terms of an objective function to optimize, a reason why a quantitative comparison of the different heuristics is rather difficult. Dasgupta introduced an objective function to model the problem of finding a hierarchical clustering of a similarity graph; such an objective can be used to explicitly design optimization algorithms
that minimize this cost function as well as serve as a quantitative measure of the quality of the output.
Given a similarity graph, i.e., a graph where vertices represent data elements and edge weights similarities between data elements, Dasgupta's objective function associates a cost to any hierarchical
clustering tree of the graph. He showed that his objective function exhibits several desirable properties:
For example, if the graph is disconnected, i.e., data elements in different connected components are very
dissimilar, a tree minimizing this objective function will first split the graph according to the connected
components.
This axiomatic approach to defining a "meaningful" objective function for hierarchical clustering has
been further explored in recent work by Cohen-Addad et al. [8]. Roughly speaking, they characterize
a family of cost functions, which includes Dasgupta's cost function, such that when the input graph has a "natural" ground-truth hierarchical clustering tree (in other words, a natural classification of the data), this tree has optimal cost (and any tree that is not a "natural" hierarchical clustering tree of the graph has higher cost). Therefore, the results by Dasgupta and Cohen-Addad et al. indicate that Dasgupta's
cost function provides a sound framework for a rigorous quantitative analysis of agglomerative and
divisive heuristics.
A suitable objective function to measure the quality of a clustering also allows one to explicitly design
algorithms that minimize the cost. Dasgupta showed that the recursive sparsest-cut heuristic is an O(log^{3/2} n)-approximation algorithm for his objective function. His analysis has been improved by Charikar and Chatziafratis [7] and Cohen-Addad et al. [8] to O(√log n). Unfortunately, Charikar and Chatziafratis [7] and Roy and Pokutta [22] showed that, for general inputs, the problem cannot be approximated within any constant factor under the Small-Set Expansion hypothesis. Thus, as suggested by Charikar and Chatziafratis [7], a natural way to obtain a more fine-grained analysis of the classic agglomerative and divisive heuristics is to study beyond-worst-case scenarios.
Random Graph Model for Hierarchical Clustering. A natural way to analyse a problem beyond the
worst-case is to consider a suitable random input model, which is the focus of this paper. More precisely,
we introduce a random graph model based on the notion of a "hierarchical stochastic block model" (HSBM) introduced by Cohen-Addad et al., which is a natural extension of the stochastic block model. Our random graph model relies on the notion of an ultrametric, a metric in which the triangle inequality is strengthened by requiring d(x,y) ≤ max(d(x,z), d(y,z)). This is a key concept, as ultrametrics exactly capture the notion of data having a "natural" hierarchical structure (cf. [5]).
The random graphs are generated from data that comes from an ultrametric, but the randomness hides
the natural hierarchical structure. Two natural questions are: Given a random graph generated in such a
fashion, when is it possible to identify the underlying ultrametric, and is the optimization of Dasgupta's cost function easier for graphs generated according to such a model? The former question was partially addressed by Cohen-Addad et al., and our focus is primarily on developing algorithms that achieve an O(1) approximation to the expected Dasgupta cost, not on recovering the underlying ultrametric.
More formally, assume that the data elements lie in an unknown ultrametric space (A, dist) and so exhibit a natural hierarchical clustering defined by this ultrametric. The input is a random graph generated as follows: an edge is added between nodes u, v ∈ A with probability p = f(dist(u,v)), where f is an (unknown) non-increasing function with range (0,1). Thus, vertices that are very close in the ultrametric (and so very similar) have a higher probability of having an edge between them than vertices
that are further apart. Given such a random graph, the goal is to obtain a hierarchical clustering tree
that has a good cost for the objective function. The actual ground-truth tree is optimal in expectation
and we focus on designing algorithms that with high probability output a tree whose cost is within
a constant factor of the expected cost of the ground-truth tree. Although we do not study it in this
work, the question of exact recovery is also an interesting one and the work of Cohen-Addad et al. [8]
addresses this partially in certain regimes.
Algorithmic Results. Even in the case of random graphs, the linkage algorithms may perform quite
poorly, mainly because ties may be broken unfavourably at the very bottom, when the clusters are
singleton nodes; these choices cannot be easily compensated later on in the algorithm. We thus
consider the LINKAGE++ algorithm, which first uses a seeding step based on a standard SVD approach
to build clusters of a significant size, which is an extension of the algorithm introduced in [8]. Then,
we show that using these clusters as starting point, the classic single-linkage approach achieves a
(1+ε)-approximation for the problem (cf. Theorem 2.4).
Experimental Results. We evaluate the performance of LINKAGE++ on real-world data (Scikit-learn) as well as on synthetic hierarchical data. The measure of interest is the Dasgupta cost function, and for completeness we also consider the classification error (see e.g., [22]). Our experiments show that 1) LINKAGE++ performs well on all accounts and 2) a clustering with a low Dasgupta cost appears to be correlated with a good classification. On synthetic data LINKAGE++ seems to be clearly superior.
Related Work. Our work follows the line of research initiated by Dasgupta [9] and further studied by [22, 7, 8]. Dasgupta [9] introduced the cost function studied in this paper and showed that the recursive sparsest-cut approach yields an O(log^{3/2} n) approximation. His analysis was recently improved to O(√log n) by [7, 8]. Roy and Pokutta [22] and Charikar also considered LP and SDP formulations with spreading constraints to obtain approximation algorithms with approximation factors O(log n) and O(√log n)
respectively. Both these works also showed the infeasibility of constant factor approximations under
the small-set expansion hypothesis. Cohen-Addad et al. [8] took an axiomatic approach to identify
suitable cost functions for data generated from ultrametrics, which results in a natural ground-truth
clustering. They also looked at a slightly less general hierarchical stochastic blockmodel (HSBM),
where each bottom-level cluster must have linear size, and with stronger conditions on the allowable probabilities. Their algorithm also has a "seeding phase" followed by an agglomerative approach. We go beyond their bounds by focusing on approximation algorithms (we obtain a (1+ε)-approximation) whereas they aim at recovering the underlying ultrametric. As the experiments show, this trade-off seems not to impact the classification error compared to other classic approaches.
There is also a vast literature on graph partitioning problems in random and semi-random models.
Most of this work (see e.g., [18, 11]) focuses on recovering a hidden subgraph e.g., a clique, whereas
we address the problem of obtaining good approximation guarantees w.r.t. an objective function. The reader may refer to [24, 13] for the definitions and the classic properties of agglomerative and divisive heuristics. Agglomerative and divisive heuristics have been widely studied from either a qualitative
perspective or for classic "flat" clustering objectives like the classic k-median and k-means, see e.g.,
[20, 10, 16, 3, 2]. For further background on hierarchical clustering and its application in machine
learning and data science, the reader may refer to e.g., [15, 23, 12, 6].
Preliminaries. In this paper, we work with an undirected weighted graph G = (V, E, w), where V is a set of vertices, E a set of edges, and w : E → ℝ₊. In the random and semi-random models, we work with unweighted graphs. We slightly abuse notation and extend the function w to subsets of V. Namely, for any A, B ⊆ V, let w(A,B) = Σ_{a∈A, b∈B} w(a,b). We use weights to model similarity, namely w(u,v) ≥ w(u,w) means that data element u is more similar to v than to w. When G is clear from the context, we let |V| = n and |E| = m. For any subset S of vertices of a graph G, let G[S] be the subgraph induced by the nodes of S.
In the following, let G = (V, E, w) be a weighted graph on n vertices. A cluster tree or hierarchical clustering T for G is a rooted binary tree with exactly |V| leaves, each of which is labeled by a distinct vertex v ∈ V. We denote by LCA_T(u,v) the lowest common ancestor of vertices u, v in T. Given a tree T and a node N of T, we say that the subtree of N in T is the connected subgraph containing all the leaves of T that are descendants of N, and denote this set of leaves by V(N). A metric space (X, d) is an ultrametric if for every x, y, z ∈ X, d(x,y) ≤ max{d(x,z), d(y,z)}.
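The strengthened triangle inequality can be tested directly; a small sketch with a hypothetical three-point distance matrix:

```python
import itertools
import numpy as np

def is_ultrametric(d):
    """Check d(x,y) <= max(d(x,z), d(y,z)) for all triples of points."""
    n = d.shape[0]
    return all(d[x, y] <= max(d[x, z], d[y, z])
               for x, y, z in itertools.permutations(range(n), 3))

# A valid 3-point ultrametric: two close points, one far from both.
d = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 3.0],
              [3.0, 3.0, 0.0]])
print(is_ultrametric(d))  # True
```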
We borrow the notion of a (similarity) graph generated from an ultrametric and generating tree introduced by [8]. A weighted graph G = (V, E, w) is generated from an ultrametric if there exists an ultrametric (X, d) such that V ⊆ X, and for every x, y ∈ V, x ≠ y, the edge e = {x,y} exists and w(e) = f(d(x,y)), where f : ℝ₊ → ℝ₊ is a non-increasing function.

Definition 1.1 (Generating Tree). Let G = (V, E, w) be a graph generated by a minimal ultrametric (V, d). Let T be a rooted binary tree with |V| leaves; let N denote the internal nodes and L the set of leaves of T, and let σ : L → V denote a bijection between the leaves of T and nodes of V. We say that T is a generating tree for G if there exists a weight function W : N → ℝ₊ such that for N₁, N₂ ∈ N, if N₁ appears on the path from N₂ to the root, then W(N₁) ≤ W(N₂). Moreover, for every x, y ∈ V, w({x,y}) = W(LCA_T(σ⁻¹(x), σ⁻¹(y))).
As noted in [8], the above notion bears similarities to what is referred to as a dendrogram in the machine learning literature (see e.g., [5]).

Objective Function. We consider the objective function introduced by Dasgupta [9]. Let G = (V, E, w) be a weighted graph and let T = (N, E') be any rooted binary tree with leaf set V. The cost induced by a node N of T is cost_T(N) = |V(N)| · w(V(C₁), V(C₂)), where C₁, C₂ are the children of N in T. The cost of T is cost_T = Σ_{N∈N} cost_T(N). As pointed out by Dasgupta [9], this can be rephrased as cost_T = Σ_{(u,v)∈E} w(u,v) · |V(LCA_T(u,v))|.
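For concreteness, the sketch below computes this rephrased cost on a toy graph; the tree encoding (a dictionary of child lists) and the example weights are our own illustrative choices:

```python
# Tree encoded as child lists; leaves are vertex names, internal nodes 'r', 'a'.
children = {'r': ['a', 'w'], 'a': ['u', 'v']}

def leaves(node):
    # Set of leaf labels below a node (a node absent from `children` is a leaf).
    if node not in children:
        return {node}
    return set().union(*(leaves(c) for c in children[node]))

def lca_leaf_count(root, u, v):
    # Walk down from the root while both leaves lie in the same child subtree.
    node = root
    while node in children:
        sub = [c for c in children[node] if {u, v} <= leaves(c)]
        if not sub:
            break
        node = sub[0]
    return len(leaves(node))

# Weighted edges of an illustrative similarity graph on {u, v, w}.
edges = [('u', 'v', 1.0), ('u', 'w', 0.2), ('v', 'w', 0.2)]
cost = sum(w * lca_leaf_count('r', a, b) for a, b, w in edges)
print(cost)  # 1.0*2 + 0.2*3 + 0.2*3 = 3.2
```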
2 A General Hierarchical Stochastic Block Model
We introduce a generalization of the HSBM studied by [8] and [17]. Cohen-Addad et al. [8] introduce an algorithm to recover a "ground-truth" hierarchical clustering in the HSBM setting. The regime in which their algorithm works is the following: (1) there is a set of hidden clusters that have linear size, and (2) the ratio between the minimum edge probability and the maximum edge probability is O(1). We aim at obtaining an algorithm that "works" in a more general setting. We reach this goal by proposing a (1+ε)-approximation algorithm. Our algorithm is very similar to the widely-used linkage approach and remains easy to implement and parallelize. Thus, the main message of our work is that, on "structured inputs", the agglomerative heuristics perform well, hence making a step toward explaining their success in practice.
The graphs generated from our model possess an underlying, hidden (because of noise) "ground-truth hierarchical clustering tree" (see Definition 2.1). This aims at modeling real-world classification problems for which we believe there is a natural hierarchical clustering, perturbed because of missing information or measurement errors. For example, in the tree of life, there is a natural hidden hierarchical clustering that we would like to reconstruct. Unfortunately, because of extinct species, we don't have a perfect input and must account for noise. We formalize this intuition using the notion of a generating tree (Def. 1.1) which, as hinted at by the definition, can be associated to an ultrametric (and so a "natural" hierarchical clustering). The "ground-truth tree" is the tree obtained from a generating tree on k leaves, to which we will refer as "bottom-level" clusters, containing n₁, n₂, ..., n_k nodes (following the terminology in [8]). Each edge of a generated graph has a fixed probability of being present, which only depends on the underlying ground-truth tree. This probability is a function of the clusters in which its endpoints lie and the underlying graph on k vertices for which the generating tree is generating (as in Def. 1.1).
Definition 2.1 (Hierarchical Stochastic Block Model - Generalization of [8]). Let n be a positive integer. A hierarchical stochastic block model with k bottom-level clusters is defined as follows:

1) Let G̃_k = (Ṽ_k, Ẽ_k, w̃) be a graph generated from an ultrametric, where |Ṽ_k| = k and, for each e ∈ Ẽ_k, w̃(e) ∈ (0,1). Let T̃_k be a tree on k leaves, let Ñ denote the internal nodes of T̃ and L̃ the leaves; let σ̃ : L̃ → [k] be a bijection. Let T̃ be generating for G̃_k with weight function W̃ : Ñ → [0,1).

2) For each i ∈ [k], let p_i ∈ (0,1] be such that p_i ≥ W̃(N), if N denotes the parent of σ̃⁻¹(i) in T̃.

3) For each i ∈ [k], there is a positive integer n_i such that Σ_{i=1}^k n_i = n.

Then a random graph G = (V, E) on n nodes is defined as follows. Each vertex i ∈ [n] is assigned a label ψ(i) ∈ [k], so that exactly n_j nodes are assigned the label j for j ∈ [k]. An edge (i,j) is added to the graph with probability p_{ψ(i)} if ψ(i) = ψ(j), and with probability W̃(N) if ψ(i) ≠ ψ(j) and N is the least common ancestor of σ̃⁻¹(ψ(i)) and σ̃⁻¹(ψ(j)) in T̃. The graph G = (V, E) is returned without any labels.
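A minimal sampler for this model, here specialised to a complete binary ground-truth tree encoded through a cluster-by-cluster probability matrix (our own simplified encoding; the probabilities are illustrative and respect the required monotonicity):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hsbm(sizes, probs):
    """sizes[i]: size of bottom cluster i; probs[i][j]: edge probability
    between clusters i and j (for i == j the within-cluster p_i; for
    i != j the weight W of the least common ancestor in the tree)."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    p = probs[labels[:, None], labels[None, :]]
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, 1)          # keep each pair once, no self-loops
    return adj | adj.T, labels

# Four bottom clusters under a binary tree: siblings share probability 0.5,
# cousins share the root weight 0.2; within-cluster probability 0.9.
probs = np.array([[0.9, 0.5, 0.2, 0.2],
                  [0.5, 0.9, 0.2, 0.2],
                  [0.2, 0.2, 0.9, 0.5],
                  [0.2, 0.2, 0.5, 0.9]])
adj, labels = sample_hsbm([50, 50, 50, 50], probs)
print(adj.shape, labels[:5])
```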
We use, for a generating tree T̃, the notation p_min to denote W̃(N₀), where N₀ is the root node of T̃. Let n_min be the size of the smallest cluster (of the k clusters). As in [8], we will use the notion of the expected graph. The expected graph is the weighted complete graph Ḡ in which an edge (i,j) has weight p_{i,j}, where p_{i,j} is the probability with which it appears in the random graph G. We refer to any tree that is generating for the expected graph Ḡ as a ground-truth tree for G. In order to avoid ambiguity, we denote by cost_T(G) and cost_T(Ḡ) the costs of the cluster tree T for the unweighted (random) graph G and the weighted graph Ḡ, respectively. Observe that, due to linearity of expectation, for any tree T and any admissible cost function, cost_T(Ḡ) = E[cost_T(G)], where the expectation is with respect to the random choices of edges in G. We have
Theorem 2.2. Let n be a positive integer and p_min = Ω(√(log n / n)). Let k be a fixed constant and G be a graph generated from an HSBM (as per Defn. 2.1) where the underlying graph G̃_k has k nodes and the minimum probability is p_min. For any binary tree T with n leaves labelled by the vertices of G, the following holds with high probability: |cost(T) − E[cost(T)]| = o(E[cost(T)]). The expectation is taken only over the random choice of edges. In particular, if T* is a ground-truth tree for G, then, with high probability, cost(T*) ≤ (1+o(1)) min_{T'} cost(T') = (1+o(1)) OPT.
Algorithm LINKAGE++, a (1+ε)-Approximation Algorithm in the HSBM. We consider a simple algorithm, called LINKAGE++, which works in two phases (see Alg. 1). We use a result of [18], which considers the planted partition model. His approach, however, does not directly allow one to recover a hierarchical structure when the input has one.
Algorithm 1 LINKAGE++
1: Input: Graph G = (V, E) generated from an HSBM.
2: Parameter: An integer k.
3: Apply the (SVD) projection algorithm of [18, Thm. 12] with parameters G, k, δ = |V|⁻², to get φ(1), ..., φ(|V|) ∈ ℝ^{|V|} for the vertices in V, where dim(span(φ(1), ..., φ(|V|))) = k.
4: Run the single-linkage algorithm on the points {φ(1), ..., φ(|V|)} until there are exactly k clusters.
5: Let C = {C*₁, ..., C*_k} be the clusters (of points φ(i)) obtained. Let C_i ⊆ V denote the set of vertices corresponding to the cluster C*_i. Define dist : C × C → ℝ₊ by dist(C*_i, C*_j) = w(C*_i, C*_j) / (|C*_i||C*_j|).
6: while there are at least two clusters in C do
7:   Take the pair of clusters C_{i₁}, C_{j₁} of C at maximum dist(C_{i₁}, C_{j₁}). Define a new cluster C' = C_{i₁} ∪ C_{j₁}.
8:   Update dist: dist(C', C_ℓ) = max(dist(C_{i₁}, C_ℓ), dist(C_{j₁}, C_ℓ)).
9:   C ← (C \ {C_{i₁}, C_{j₁}}) ∪ {C'}
10: end while
11: The sequence of merges in the while-loop (Steps 6 to 10) induces a hierarchical clustering tree on {C*₁, ..., C*_k}, say T'_k with k leaves (C*₁, ..., C*_k). Replace each leaf C*_i of T'_k by the tree obtained for C*_i at Step 4 to obtain T.
12: Repeat the algorithm k' = 2k log n times. Let T¹, ..., T^{k'} be the corresponding outputs.
13: Output: Tree T^i (out of the k' candidates) that minimises cost(T^i).
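A compact sketch of the merging phase (Steps 6 to 10), following the max-update rule of Step 8; this is an illustrative implementation of the pseudocode, not the authors' code:

```python
import numpy as np

def merge_order(clusters, adj):
    """Steps 6-10: repeatedly merge the pair of clusters with the largest
    density dist(Ci, Cj) = w(Ci, Cj) / (|Ci| * |Cj|), updating distances
    with the max rule of Step 8. Returns the sequence of merged indices."""
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            w = adj[np.ix_(clusters[i], clusters[j])].sum()
            dist[i, j] = dist[j, i] = w / (sizes[i] * sizes[j])
    active, merges = list(range(k)), []
    while len(active) > 1:
        i, j = max(((a, b) for a in active for b in active if a < b),
                   key=lambda p: dist[p])
        merges.append((i, j))
        # Step 8: the merged cluster keeps index i, with the max rule
        # (distances are not recomputed from scratch).
        for l in active:
            if l not in (i, j):
                dist[i, l] = dist[l, i] = max(dist[i, l], dist[j, l])
        active.remove(j)
    return merges

# Demo: three singleton clusters on a path graph 0 - 1 - 2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
print(merge_order([[0], [1], [2]], adj))  # an adjacent pair merges first
```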
Theorem 2.3 ([18], Observation 11 and a simplification of Theorem 12). Let δ be the confidence parameter. Assume that all u, v belonging to different clusters, with adjacency vectors u, v (i.e., u_i is 1 if the edge (u,i) exists in G and 0 otherwise), satisfy

‖E[u] − E[v]‖₂² ≥ c·k·σ²·(n/n_min + log(n/δ))    (1)

for a large enough constant c, where E[u] is the entry-wise expectation and σ² = Ω(log⁶ n / n) is an upper bound on the variance. Then, the algorithm of [18, Thm. 12] with parameters G, k, δ projects the columns of the adjacency matrix of G to points {φ(1), ..., φ(|V|)} in a k-dimensional subspace of ℝ^{|V|} such that the following holds w.p. at least 1 − δ over the random graph G and with probability 1/k over the random bits of the algorithm. There exists ξ > 0 such that for any u in the i-th cluster and v in the j-th cluster: 1) if i = j then ‖φ(u) − φ(v)‖₂² ≤ ξ, and 2) if i ≠ j then ‖φ(u) − φ(v)‖₂² > 2ξ.
In the remainder we assume δ = 1/|V|². We are ready to state our main theorem.
Theorem 2.4. Let n be a positive integer and ε > 0 a constant. Assume that the separation of bottom clusters given by (1) holds, p_min = Ω(√(log n / n)), and n_min ≥ √n · log^{1/4} n. Let k be a fixed constant and G be a graph generated from an HSBM (as per Defn. 2.1) where the underlying graph G̃_k has k nodes satisfying the above constraints. With high probability, Algorithm 1 with parameter k on graph G outputs a tree T' that satisfies cost_{T'} ≤ (1+ε)·OPT.
We note that k might not be known in advance. However, different values of k can be tested, and an O(1)-estimate of k is enough for the proofs to hold. Thus, it is possible to run Algorithm 1 O(log n) times with different "guesses" for k and take the best of these runs.
Let G = (V, E) be the input graph generated according to an HSBM. Let T be the tree output by
Algorithm 1. We divide the proof into two main lemmas that correspond to the outcome of the two
phases mentioned above.
The algorithm of [18, Thm. 12] might fail for two reasons. The first reason is that the random choices made by the algorithm result in an incorrect clustering. This happens w.p. at most 1 − 1/k, and we can simply repeat the algorithm sufficiently many times to be sure that at least once we get the desired result, i.e., the projections satisfy the conclusion of Thm. 2.3. Lemmas 2.6 and 2.7 show that in this case, Steps 6 to 10 of LINKAGE++ produce a tree that has cost close to optimal. Ultimately, the algorithm simply outputs the tree that has the least cost among all the ones produced (and one of them is guaranteed to have cost (1+ε)OPT with high probability).
The second reason why McSherry's algorithm may fail is that the generated random graph G might "deviate" too much from its expectation. This is controlled by the parameter δ (which we set to 1/|V|²). Deviations from expected behaviour will cause our algorithm to fail as well. We bound this failure probability in terms of two events. The first bad event is that McSherry's algorithm fails for either of the aforementioned reasons. We denote the complement of this event by E₁. The second bad event is that the number of edges between the vertices of two nodes of the ground-truth tree deviates from its expectation. Namely, given two nodes N₁, N₂ of T*, we expect the cut to be E(N₁,N₂) = |V(N₁)|·|V(N₂)|·W(LCA_{T*}(N₁,N₂)). Thus, we define E₂ to be the event that |w(V(N₁),V(N₂)) − E(N₁,N₂)| ≤ (ε/2)·E(N₁,N₂) for all cuts of the k bottom leaves. Note that the number of cuts is bounded by 2^k, and we will show that, due to the sizes of n_min and p_min, this holds w.h.p. The assumptions on the ground-truth tree will ensure that the latter holds w.h.p., allowing us to argue that both events hold w.p. at least Ω(1/k). Thus, from now on we assume that both "good" events E₁ and E₂ occur. We bound the probability of event E₁ in Lemma 2.5. We now prove a structural property of the tree output by the algorithm; to do so, we
introduce the following definition. We say that a tree T = (N, E) is a γ-approximate ground-truth tree for G and T* if there exists a weight function W' : N → ℝ₊ such that for any two vertices a, b, we have that

1. γ⁻¹·W'(LCA_T(a,b)) ≤ W(LCA_{T*}(a,b)) ≤ γ·W'(LCA_T(a,b)), and
2. for any node N of T and any node N' descendant of N in T, W'(N) ≥ W'(N').
Lemma 2.5. Let G be generated by an HSBM. Assume that the separation of bottom clusters given by (1) holds. Let C*₁, ..., C*_k be the hidden bottom-level clusters, i.e., C*_i = {v | ψ(v) = i}. With probability at least Ω(1/k), the clusters obtained after Step 4 correspond to the assignment ψ, i.e., there exists a permutation π : [k] → [k] such that C_j = C*_{π(j)}.
Lemma 2.6. Assume that the separation of bottom clusters given by (1) holds, p_min = Ω(√(log n / n)), and n_min ≥ √n · log^{1/4} n. Let G be generated according to an HSBM and let T* be a ground-truth tree for G. Assume that events E₁ and E₂ occur, and that furthermore, the clusters obtained after Step 4 correspond to the assignment ψ, i.e., there exists a permutation π : [k] → [k] such that for each v ∈ C_i, ψ(v) = π(i). Then, the tree output by the algorithm is a (1+ε)-approximate ground-truth tree.
The following lemma allows us to bound the cost of an approximate ground-truth tree.

Lemma 2.7. Let G be a graph generated according to an HSBM and let T* be a ground-truth tree for G. Let Ḡ be the expected graph associated to T* and G. Let T be a γ-approximate ground-truth tree. Then, cost_T ≤ γ²·OPT.
Proof of Theorem 2.4. Conditioning on E₁ and E₂, which occur w.h.p., and combining Lemmas 2.5, 2.7, and 2.6 together with Theorem 2.2 yields the result. As argued before, E₁ holds w.p. at least 1/k, and it is possible to boost this probability by running Algorithm 1 multiple times. Running it Θ(k log n) times and taking the tree with the smallest cost yields the result. Moreover, E₂ also holds w.h.p.
3 Empirical Evaluation
In this section, we evaluate the effectiveness of LINKAGE++ on real-world and synthetic datasets. We compare our results to the classic agglomerative heuristics for hierarchical clustering, both in terms of the cost function and the classification error. Our goal is answering the question: How good is LINKAGE++ compared to the classic agglomerative approaches on real-world and synthetic data that exhibit a ground-truth clustering?
Datasets. The datasets we use are part of the standard Scikit-learn library [4] (and most of them are available at the UCI machine learning repository [1]). Most of these datasets exhibit a "flat" clustering structure, with the exception of the newsgroup dataset, which is truly hierarchical. The goal of the algorithm is to perform a clustering of the data by finding the underlying classes. The datasets are: iris, digits, newsgroup², diabetes, cancer, boston. For a given dataset, we define similarity between data elements using the cosine similarity; this is a standard approach for defining similarity between data elements (see, e.g., [22]). This induces a weighted similarity graph that is given as input to LINKAGE++.
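A sketch of this construction; the random feature matrix stands in for an actual Scikit-learn dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))   # assumed: 150 data elements, 4 features

# Cosine similarity matrix; its entries are the edge weights of the graph.
norms = np.linalg.norm(X, axis=1, keepdims=True)
W = (X / norms) @ (X / norms).T
np.fill_diagonal(W, 0.0)        # no self-loops
print(W.shape)
```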
Synthetic Data. We generate random graphs of sizes n ∈ {256, 512, 1024} according to the model of Definition 2.1. More precisely, we define a binary tree on ℓ ∈ {4, 8} bottom clusters/leaves. Each leaf represents a "class". We create n/ℓ vertices for each class. The probability of having an edge between two vertices of classes a and b is the probability induced by the lowest common ancestor of the leaves corresponding to a and b respectively. We first define p_min = 2 log n · ℓ / n. The probabilities induced by the vertices of the binary tree are the following: the probability at the root is p = p_min + (1 − p_min)/log(ℓ), and the probability induced by a node at distance d from the root is (d+1)·p. In particular, the probability induced by the leaves is p_min + log(ℓ)·(1 − p_min)/log(ℓ) = 1. We also investigate a less structured setting using a ground-truth tree on three nodes.
Method. We run LINKAGE++ with 9 different breakpoints at which we switch between phase 1 and phase 2 (which corresponds to "guesses" of k). We output the clustering with the smallest cost. To evaluate our algorithm, we compare its performance to classic agglomerative heuristics (for the similarity setting): single linkage and complete linkage (see also [24, 13] for a complete description), and to the approach of performing only phase 1 of LINKAGE++ until only one cluster remains; we denote this approach PCA+. Additionally, we compare ourselves to applying only phase 2 of LINKAGE++; we call this approach density-based linkage. We observe that the running times of the algorithms are of order Õ(n²), stemming already from the agglomerative parts.³ This is close to the Õ(n²) running time achieved by the classic agglomerative heuristics.
We compare the results by using both the cost of the output tree w.r.t. the hierarchical clustering cost function and the classification error. The classification error is a classic tool to compare different (usually flat) clusterings (see, e.g., [22]). For a k-clustering C : V → {1,...,k}, the classification error w.r.t. a ground-truth flat clustering C* : V → {1,...,k} is defined as min_{π∈S_k} (Σ_{x∈V} 1[C(x) ≠ π(C*(x))]) / |V|, where S_k is the set of all permutations π over k elements.
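A direct implementation of this definition, minimising over all permutations (feasible for the small values of k used here; our own sketch):

```python
import itertools
import numpy as np

def classification_error(pred, truth, k):
    """min over permutations pi of the fraction of x with pred(x) != pi(truth(x))."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    best = 1.0
    for pi in itertools.permutations(range(k)):
        pi = np.array(pi)
        best = min(best, float(np.mean(pred != pi[truth])))
    return best

print(classification_error([0, 0, 1, 1], [1, 1, 0, 0], k=2))  # 0.0
```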
We note that the cost function is more relevant for the newsgroup dataset since it exhibits a truly
hierarchical structure and so the cost function is presumably capturing the quality of the classification
at different levels. On the other hand, the classification error is more relevant for the other datasets, as they are intrinsically flat. All experiments are repeated at least 10 times and standard deviations are shown.
Results. The results are summarized in Figures 1, 2, and 3 (App. 3). In almost all experiments LINKAGE++ performs extremely well w.r.t. the cost and classification error. Moreover, we observe that a low cost function correlates with a good classification error. For synthetic data, in both LINKAGE++ and PCA+, we observe in Figure 2b that the classification error drops drastically from k = 4 to k = 8, from 0.5 to 0, as the number of nodes is increased from n = 512 to n = 1024. We observe this threshold phenomenon for all fixed k we considered. We can observe that the normalized cost in Figure 2a for the other linkage algorithms increases in the aforementioned setting.
Moreover, the only dataset where LINKAGE++ and PCA+ differ significantly is the hierarchical dataset newsgroup. Here the cost function of PCA+ is much higher. While the classification error of all algorithms is large, it turns out, by inspecting the final clusterings of LINKAGE++ and PCA+, that the categories which were being misclassified are mostly subcategories of the same category. On the dataset of Figure 3 (App. 3) only LINKAGE++ performs well.
Conclusion. Overall, both LINKAGE++ and single linkage perform considerably better when it comes to real-world data, and LINKAGE++ and PCA+ dominate on our synthetic datasets. However, in general there is no reason to believe that PCA+ would perform well in clustering truly hierarchical data: there are regimes of the HSBM for which applying only phase 1 of the algorithm might lead to a high misclassification error and high cost, and for which we can prove that LINKAGE++ is a (1+ε)-approximation. This is exemplified in Figure 3 (App. 3). Moreover, our experiments suggest that one should use, in addition to LINKAGE++, other linkage algorithms and pick the algorithm with the lowest cost function, which appears to correlate with the classification error. Nevertheless, a high classification error on hierarchical data is not a bad sign per se: a misclassification of subcategories of the same category (as we observe in our experiments) is arguably tolerable, but ignored by the classification error. On the other hand, the cost function captures such errors nicely by its inherently hierarchical nature, and we thus strongly advocate it.

² Due to the enormous size of the dataset, we consider a subset consisting of 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'rec.sport.baseball', 'rec.sport.hockey'.
³ The top k singular vectors of an n×n matrix can be approximately computed in time Õ(kn²).

Figure 1: A comparison of the algorithms on real-world data. (a) The figure shows the cost of each algorithm normalized by the cost of LINKAGE++. (b) The figure shows the percentage of misclassified nodes. By looking more closely at the output of the algorithm, one can see that a large fraction of the misclassifications happen in subgroups of the same group.

Figure 2: A comparison of the algorithms on synthetic data with highly structured ground-truth, for different n, k. PCA+ performs well on these inputs and we conjecture that this is due to the highly structured nature of the ground-truth. (a) The costs of LINKAGE++ and PCA+ are well below the costs of the standard linkage algorithms. (b) We see a threshold phenomenon for k = 8 from n = 512 to n = 1024. Here the classification error drops from 0.5 to 0, which is explained by concentration of the eigenvalues allowing the PCA to separate the bottom clusters correctly.
Figure 3: The clustering obtained by PCA+ on a ground-truth tree on three nodes induced by the adjacency matrix [[1, 0.49, 0.39], [0.49, 0.49, 0.39], [0.39, 0.39, 0.62]] and n = 999 nodes split equally. Here only LINKAGE++ and PCA+ classify the bottom clusters of the subtrees correctly. However, the projection to the Euclidean space (PCA) does not preserve the underlying ultrametric, causing PCA+ to merge incorrectly. (a) LINKAGE++ recovers the ground truth. All other algorithms merge incorrectly. (b) LINKAGE++ and PCA+ classify the bottom clusters correctly, causing the classification to be perfect even though PCA+ failed to correctly reconstruct the ground truth. This suggests that the classification error is a less suitable measure for hierarchical data. (c) PCA+, in contrast to LINKAGE++, merges incorrectly two bottom clusters of different branches in the ground-truth tree (green and blue as opposed to green and red).
Acknowledgement The project leading to this application has received funding from the European
Union?s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant
agreement No. 748094. This work was supported in part by EPSRC grant EP/N510129/1. This work
was supported in part by NSF Award Numbers BIO-1455983, CCF-1461559, and CCF-0939370.
References
[1] D. J. N. A. Asuncion. UCI machine learning repository, 2007.
[2] M. Balcan and Y. Liang. Clustering under perturbation resilience. SIAM J. Comput., 45(1):102-155, 2016.
[3] M.-F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In STOC '08, pages 671-680. ACM.
[4] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Mueller, O. Grisel, V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, R. Layton, J. VanderPlas, A. Joly, B. Holt, and G. Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108-122, 2013.
[5] G. Carlsson and F. Mémoli. Characterization, stability and convergence of hierarchical clustering methods. Journal of Machine Learning Research, 11:1425-1470, 2010.
[6] R. M. Castro, M. J. Coates, and R. D. Nowak. Likelihood based hierarchical clustering. IEEE Transactions on Signal Processing, 52(8):2308-2321, 2004.
[7] M. Charikar and V. Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. In SODA '17, pages 841-854, 2017.
[8] V. Cohen-Addad, V. Kanade, F. Mallmann-Trenn, and C. Mathieu. Hierarchical clustering: Objective functions and algorithms. To appear at SODA '17, 2017.
[9] S. Dasgupta. A cost function for similarity-based hierarchical clustering. In Proc. of the 48th Annual ACM Symposium on Theory of Computing, STOC 2016. ACM, 2016.
[10] S. Dasgupta and P. M. Long. Performance guarantees for hierarchical clustering. Journal of Computer and System Sciences, 70(4):555-569, 2005.
[11] U. Feige and J. Kilian. Heuristics for semirandom graph problems. J. Comput. Syst. Sci., 63(4):639-671, Dec. 2001.
[12] J. Felsenstein and J. Felenstein. Inferring phylogenies, volume 2. Sinauer Associates Sunderland, 2004.
[13] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer, 2001.
[14] A. Guénoche, P. Hansen, and B. Jaumard. Efficient algorithms for divisive hierarchical clustering with the diameter criterion. Journal of Classification, 8(1):5-30, 1991.
[15] N. Jardine and R. Sibson. Mathematical Taxonomy. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, 1972.
[16] G. Lin, C. Nagarajan, R. Rajaraman, and D. P. Williamson. A general approach for incremental approximation and hierarchical clustering. In SODA '06, pages 1147-1156. SIAM, 2006.
[17] V. Lyzinski, M. Tang, A. Athreya, Y. Park, and C. E. Priebe. Community detection and classification in hierarchical stochastic blockmodels. IEEE Transactions on Network Science and Engineering, 4(1):13-26, 2017.
[18] F. McSherry. Spectral partitioning of random graphs. In FOCS '01, pages 529-537.
[19] F. Murtagh. A survey of recent advances in hierarchical clustering algorithms. The Computer Journal, 26(4):354-359, 1983.
[20] C. G. Plaxton. Approximation algorithms for hierarchical location problems. In STOC '03, pages 40-49.
[21] C. K. Reddy and B. Vinzamuri. A survey of partitional and hierarchical clustering algorithms. Data Clustering: Algorithms and Applications, 87, 2013.
[22] A. Roy and S. Pokutta. Hierarchical clustering via spreading metrics. In NIPS '16, pages 2316-2324.
[23] P. H. Sneath and R. R. Sokal. Numerical taxonomy. Nature, 193(4818):855-860, 1962.
[24] M. Steinbach, G. Karypis, and V. Kumar. A comparison of document clustering techniques. In KDD Workshop on Text Mining, 2000.
6,856 | 7,201 | Invariance and Stability
of Deep Convolutional Representations
Julien Mairal
Inria*
[email protected]
Alberto Bietti
Inria*
[email protected]
Abstract
In this paper, we study deep signal representations that are near-invariant to groups
of transformations and stable to the action of diffeomorphisms without losing signal
information. This is achieved by generalizing the multilayer kernel introduced
in the context of convolutional kernel networks and by studying the geometry
of the corresponding reproducing kernel Hilbert space. We show that the signal
representation is stable, and that models from this functional space, such as a large
class of convolutional neural networks, may enjoy the same stability.
1 Introduction
The results achieved by deep neural networks for prediction tasks have been impressive in domains
where data is structured and available in large amounts. In particular, convolutional neural networks
(CNNs) [14] have shown to model well the local appearance of natural images at multiple scales,
while also representing images with some invariance through pooling operations. Yet, the exact nature
of this invariance and the characteristics of functional spaces where convolutional neural networks
live are poorly understood; overall, these models are sometimes only seen as clever engineering black
boxes that have been designed with a lot of insight collected since they were introduced.
Understanding the geometry of these functional spaces is nevertheless a fundamental question. In
addition to potentially bringing new intuition about the success of deep networks, it may for instance
help solving the issue of regularization, by providing ways to control the variations of prediction
functions in a principled manner. Small deformations of natural signals often preserve their main
characteristics, such as the class label in a classification task (e.g., the same digit with different
handwritings may correspond to the same images up to small deformations), and provide a much
richer class of transformations than translations. Representations that are stable to small deformations
allow more robust models that may exploit these invariances, which may lead to improved sample
complexity. The scattering transform [5, 17] is a recent attempt to characterize convolutional
multilayer architectures based on wavelets. The theory provides an elegant characterization of
invariance and stability properties of signals represented via the scattering operator, through a notion
of Lipschitz stability to the action of diffeomorphisms. Nevertheless, these networks do not involve
?learning? in the classical sense since the filters of the networks are pre-defined, and the resulting
architecture differs significantly from the most used ones.
In this work, we study these theoretical properties for more standard convolutional architectures from
the point of view of positive definite kernels [27]. Specifically, we consider a functional space derived
from a kernel for multi-dimensional signals, which admits a multilayer and convolutional structure
that generalizes the construction of convolutional kernel networks (CKNs) [15, 16]. We show that
this functional space contains a large class of CNNs with smooth homogeneous activation functions
in addition to CKNs [15], allowing us to obtain theoretical results for both classes of models.
* Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France
The main motivation for introducing a kernel framework is to study separately data representation
and predictive models. On the one hand, we study the translation-invariance properties of the kernel
representation and its stability to the action of diffeomorphisms, obtaining similar guarantees as the
scattering transform [17], while preserving signal information. When the kernel is appropriately
designed, we also show how to obtain signal representations that are near-invariant to the action of
any group of transformations. On the other hand, we show that these stability results can be translated
to predictive models by controlling their norm in the functional space. In particular, the RKHS norm
controls both stability and generalization, so that stability may lead to improved sample complexity.
Related work. Our work relies on image representations introduced in the context of convolutional
kernel networks [15, 16], which yield a sequence of spatial maps similar to traditional CNNs, but
each point on the maps is possibly infinite-dimensional and lives in a reproducing kernel Hilbert space
(RKHS). The extension to signals with d spatial dimensions is straightforward. Since computing the
corresponding Gram matrix as in classical kernel machines is computationally impractical, CKNs
provide an approximation scheme consisting of learning finite-dimensional subspaces of each RKHS?s
layer, where the data is projected, see [15]. The resulting architecture of CKNs resembles traditional
CNNs with a subspace learning interpretation and different unsupervised learning principles.
Another major source of inspiration is the study of group-invariance and stability to the action of
diffeomorphisms of scattering networks [17], which introduced the main formalism and several proof
techniques from harmonic analysis that were keys to our results. Our main effort was to extend them to
more general CNN architectures and to the kernel framework. Invariance to groups of transformations
was also studied for more classical convolutional neural networks from methodological and empirical
points of view [6, 9], and for shallow learned representations [1] or kernel methods [13, 19, 22].
Note also that other techniques combining deep neural networks and kernels have been introduced.
Early multilayer kernel machines appear for instance in [7, 26]. Shallow kernels for images modelling
local regions were also proposed in [25], and a multilayer construction was proposed in [4]. More
recently, different models based on kernels are introduced in [2, 10, 18] to gain some theoretical
insight about classical multilayer neural networks, while kernels are used to define convex models for
two-layer neural networks in [36]. Finally, we note that Lipschitz stability of deep models to additive
perturbations was found to be important to get robustness to adversarial examples [8]. Our results
show that convolutional kernel networks already enjoy such a property.
Notation and basic mathematical tools. A positive definite kernel K that operates on a set X
implicitly defines a reproducing kernel Hilbert space H of functions from X to R, along with a
mapping φ : X → H. A predictive model associates to every point z in X a label in R; it consists of
a linear function f in H such that f(z) = ⟨f, φ(z)⟩_H, where φ(z) is the data representation. Given
now two points z, z′ in X, the Cauchy-Schwarz inequality allows us to control the variation of the
predictive model f according to the geometry induced by the Hilbert norm ‖·‖_H:
    |f(z) − f(z′)| ≤ ‖f‖_H ‖φ(z) − φ(z′)‖_H.    (1)
This property implies that two points z and z′ that are close to each other according to the RKHS
norm should lead to similar predictions, when the model f has reasonably small norm in H.
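Inequality (1) can be checked numerically, since ‖φ(z) − φ(z′)‖² = K(z, z) + K(z′, z′) − 2K(z, z′) is
computable from kernel evaluations alone. Below is a minimal sketch, assuming a Gaussian kernel and
a function f given by a small finite kernel expansion (both arbitrary illustrative choices, not tied to the
construction of this paper):

import numpy as np

def gaussian_kernel(z, zp, sigma=1.0):
    """K(z, z') = exp(-||z - z'||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((z - zp) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
# f = sum_j alpha_j K(x_j, .): a function in the RKHS with a finite expansion.
anchors = rng.normal(size=(5, 3))
alpha = rng.normal(size=5)

def f(z):
    return sum(a * gaussian_kernel(x, z) for a, x in zip(alpha, anchors))

# ||f||_H^2 = alpha^T G alpha for the Gram matrix G of the anchor points.
gram = np.array([[gaussian_kernel(x, y) for y in anchors] for x in anchors])
f_norm = np.sqrt(alpha @ gram @ alpha)

z, zp = rng.normal(size=3), rng.normal(size=3)
# ||phi(z) - phi(z')||^2 = K(z,z) + K(z',z') - 2 K(z,z').
feat_dist = np.sqrt(gaussian_kernel(z, z) + gaussian_kernel(zp, zp)
                    - 2 * gaussian_kernel(z, zp))
assert abs(f(z) - f(zp)) <= f_norm * feat_dist + 1e-9  # inequality (1)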
Then, we consider notation from signal processing similar to [17]. We call a signal x a function
in L²(Ω, H), where Ω ⊆ R^d is a subset representing spatial coordinates and H is a Hilbert space,
when ‖x‖²_{L²} := ∫_Ω ‖x(u)‖²_H du < ∞, where du is the Lebesgue measure on R^d. Given a linear
operator T : L²(Ω, H) → L²(Ω, H′), the operator norm is defined as ‖T‖_{L²(Ω,H)→L²(Ω,H′)} :=
sup_{‖x‖_{L²(Ω,H)} ≤ 1} ‖Tx‖_{L²(Ω,H′)}. For the sake of clarity, we drop norm subscripts from now on, using
the notation ‖·‖ for Hilbert space norms, L² norms, and L² → L² operator norms, while |·| denotes
the Euclidean norm on R^d. Some useful mathematical tools are also presented in Appendix A.
2 Construction of the Multilayer Convolutional Kernel
We now present the multilayer convolutional kernel, which operates on signals with d spatial dimensions. The construction follows closely that of convolutional kernel networks [15] but generalizes it
to input signals defined on the continuous domain Ω = R^d (which does not prevent signals to have
compact support), as done by Mallat [17] for analyzing the properties of the scattering transform; the
issue of discretization where Ω is a discrete grid is addressed in Section 2.1.
[Figure 1: Construction of the k-th signal representation from the (k−1)-th one. From bottom to top:
x_{k−1} : Ω → H_{k−1}; patch extraction P_k x_{k−1}(v) ∈ P_k; kernel mapping M_k P_k x_{k−1}(v) =
φ_k(P_k x_{k−1}(v)) ∈ H_k; linear pooling x_k(w) = A_k M_k P_k x_{k−1}(w) ∈ H_k, with
x_k := A_k M_k P_k x_{k−1} : Ω → H_k. Note that while Ω is depicted as a box in R² here, our
construction is supported on Ω = R^d. Similarly, a patch is represented as a squared box for
simplicity, but it may potentially have any shape.]
In what follows, an input signal is denoted by x_0 and lives in L²(Ω, H_0), where H_0 is typically
R^{p_0} (e.g., with p_0 = 3, x_0(u) may represent the RGB pixel value at location u). Then, we build
a sequence of RKHSs H_1, H_2, . . ., and transform x_0 into a sequence of "feature maps" supported
on Ω, respectively denoted by x_1 in L²(Ω, H_1), x_2 in L²(Ω, H_2), . . . . As depicted in Figure 1,
a new map x_k is built from the previous one x_{k−1} by applying successively three operators that
perform patch extraction (P_k), kernel mapping (M_k) in a new RKHS H_k, and linear pooling (A_k),
respectively. When going up in the hierarchy, the points x_k(u) carry information from larger signal
neighborhoods centered at u in Ω with more invariance, as we will formally show.
Patch extraction operator. Given the layer x_{k−1}, we consider a patch shape S_k, defined as a
compact centered subset of R^d, e.g., a box [−1, 1] × [−1, 1] for images, and we define the Hilbert
space P_k := L²(S_k, H_{k−1}) equipped with the norm ‖z‖² = ∫_{S_k} ‖z(u)‖² dν_k(u), where dν_k is the
normalized uniform measure on S_k, for every z in P_k. More precisely, we now define the linear patch
extraction operator P_k : L²(Ω, H_{k−1}) → L²(Ω, P_k) such that for all u in Ω,
    P_k x_{k−1}(u) = (v ↦ x_{k−1}(u + v))_{v∈S_k} ∈ P_k.
Note that by equipping P_k with a normalized measure, the operator P_k preserves the norm. By
Fubini's theorem, we have indeed ‖P_k x_{k−1}‖ = ‖x_{k−1}‖ and hence P_k x_{k−1} is in L²(Ω, P_k).
Kernel mapping operator. In a second stage, we map each patch of x_{k−1} to a RKHS H_k with a
kernel mapping φ_k : P_k → H_k associated to a positive definite kernel K_k. It is then possible to
define the non-linear pointwise operator M_k such that
    M_k P_k x_{k−1}(u) := φ_k(P_k x_{k−1}(u)) ∈ H_k.
As in [15], we use homogeneous dot-product kernels of the form
    K_k(z, z′) = ‖z‖ ‖z′‖ κ_k(⟨z, z′⟩ / (‖z‖ ‖z′‖))   with κ_k(1) = 1,    (2)
which ensures that ‖M_k P_k x_{k−1}(u)‖ = ‖P_k x_{k−1}(u)‖ and that M_k P_k x_{k−1} is in L²(Ω, H_k). Concrete
examples of kernels satisfying (2) with some other properties are presented in Appendix B.
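As one concrete instance of (2), the sketch below evaluates a homogeneous dot-product kernel with
κ(u) = exp(α(u − 1)), which satisfies κ(1) = 1; the value of α is an arbitrary illustrative choice (see
Appendix B of the paper for the kernels actually analyzed):

import numpy as np

def homogeneous_dot_product_kernel(z, zp, kappa):
    """K(z, z') = ||z|| ||z'|| kappa(<z, z'> / (||z|| ||z'||)), as in eq. (2)."""
    nz, nzp = np.linalg.norm(z), np.linalg.norm(zp)
    if nz == 0.0 or nzp == 0.0:
        return 0.0
    cos = np.dot(z, zp) / (nz * nzp)
    return nz * nzp * kappa(np.clip(cos, -1.0, 1.0))

# One choice with kappa(1) = 1: kappa(u) = exp(alpha * (u - 1)), alpha > 0.
kappa_exp = lambda u, alpha=0.5: np.exp(alpha * (u - 1.0))

z = np.random.default_rng(1).normal(size=8)
# Norm preservation: K(z, z) = ||z||^2, i.e., ||phi(z)|| = ||z||.
assert np.isclose(homogeneous_dot_product_kernel(z, z, kappa_exp), np.dot(z, z))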
Pooling operator. The last step to build the layer x_k is to pool neighboring values to achieve some
local shift-invariance. As in [15], we apply a linear convolution operator A_k with a Gaussian kernel
at scale σ_k, h_{σ_k}(u) := σ_k^{−d} h(u/σ_k), where h(u) = (2π)^{−d/2} exp(−|u|²/2). Then,
    x_k(u) = A_k M_k P_k x_{k−1}(u) = ∫_{R^d} h_{σ_k}(u − v) M_k P_k x_{k−1}(v) dv ∈ H_k.
Applying Schur's test to the integral operator A_k (see Appendix A), we obtain that ‖A_k‖ ≤ 1. Thus,
‖x_k‖ ≤ ‖M_k P_k x_{k−1}‖ and x_k ∈ L²(Ω, H_k). Note that a similar pooling operator is used in the
scattering representation [5, 17], though in a different way which does not affect subsequent layers.
Multilayer construction. Finally, we obtain a multilayer representation by composing multiple
times the previous operators. In order to increase invariance with each layer, the size of the patch S_k
and pooling scale σ_k typically grow exponentially with k, with σ_k and sup_{c∈S_k} |c| of the same order.
With n layers, the final representation is given by the feature map
    Φ_n(x_0) := x_n = A_n M_n P_n A_{n−1} M_{n−1} P_{n−1} · · · A_1 M_1 P_1 x_0 ∈ L²(Ω, H_n).    (3)
Then, we can define a kernel K_n on two signals x_0 and x′_0 by K_n(x_0, x′_0) := ⟨Φ_n(x_0), Φ_n(x′_0)⟩,
whose RKHS H_{K_n} contains all functions of the form f(x_0) = ⟨w, Φ_n(x_0)⟩ with w ∈ L²(Ω, H_n).
The following lemma shows that this representation preserves all information about the signal at each
layer, and each feature map x_k can be sampled on a discrete set with no loss of information. This
suggests a natural approach for discretization, which we discuss next. For space limitation reasons, all
proofs in this paper are relegated to Appendix C.
Lemma 1 (Signal preservation). Assume that H_k contains linear functions ⟨w, ·⟩ with w in P_k (this
is true for all kernels K_k described in Appendix B); then the signal x_{k−1} can be recovered from a
sampling of x_k = A_k M_k P_k x_{k−1} at discrete locations as soon as the union of patches centered at
these points covers all of Ω. It follows that x_k can be reconstructed from such a sampling.
2.1 From Theory to Practice: Discretization and Signal Preservation
The previous construction defines a kernel representation for general signals in L²(Ω, H_0), which
is an abstract object defined for theoretical purposes, as often done in signal processing [17]. In
practice, signals are discrete, and it is thus important to discuss the problem of discretization, as done
in [15]. For clarity, we limit the presentation to 1-dimensional signals (Ω = R^d with d = 1), but the
arguments can easily be extended to higher dimensions d when using box-shaped patches. Notation
from the previous section is preserved, but we add a bar on top of the discrete analogues of their
continuous counterparts; e.g., x̄_k is a discrete feature map in ℓ²(Z, H̄_k) for some RKHS H̄_k.
Input signals x_0 and x̄_0. Discrete signals acquired by a physical device are often seen as local
integrators of signals defined on a continuous domain (e.g., sensors from digital cameras integrate the
pointwise distribution of photons that hit a sensor in a spatial window). Let us then consider a signal x_0
in L²(Ω, H_0) and s_0 a sampling interval. By defining x̄_0 in ℓ²(Z, H_0) such that x̄_0[n] = x_0(n s_0) for
all n in Z, it is thus natural to assume that x_0 = A_0 x, where A_0 is a pooling operator (local integrator)
applied to an original signal x. The role of A_0 is to prevent aliasing and reduce high frequencies;
typically, the scale σ_0 of A_0 should be of the same magnitude as s_0, which we choose to be s_0 = 1 in
the following, without loss of generality. This natural assumption will be kept later in the analysis.
Multilayer construction. We now want to build discrete feature maps x̄_k in ℓ²(Z, H̄_k) at each
layer k involving subsampling with a factor s_k w.r.t. x̄_{k−1}. We now define the discrete analogues of
the operators P_k (patch extraction), M_k (kernel mapping), and A_k (pooling) as follows: for n ∈ Z,
    P̄_k x̄_{k−1}[n] := e_k^{−1/2} (x̄_{k−1}[n], x̄_{k−1}[n + 1], . . . , x̄_{k−1}[n + e_k − 1]) ∈ P̄_k := H̄_{k−1}^{e_k}
    M̄_k P̄_k x̄_{k−1}[n] := φ̄_k(P̄_k x̄_{k−1}[n]) ∈ H̄_k
    x̄_k[n] = Ā_k M̄_k P̄_k x̄_{k−1}[n] := s_k^{1/2} Σ_{m∈Z} h̄_k[n s_k − m] M̄_k P̄_k x̄_{k−1}[m] = (h̄_k ⋆ M̄_k P̄_k x̄_{k−1})[n s_k] ∈ H̄_k,
where (i) P̄_k extracts a patch of size e_k starting at position n in x̄_{k−1} (defining a patch centered
at n is also possible), which lives in the Hilbert space P̄_k defined as the direct sum of e_k times H̄_{k−1};
(ii) M̄_k is a kernel mapping identical to the continuous case, which preserves the norm, like M_k;
(iii) Ā_k performs a convolution with a Gaussian filter and a subsampling operation with factor s_k.
The next lemma shows that under mild assumptions, this construction preserves signal information.
Lemma 2 (Signal recovery with subsampling). Assume that H̄_k contains the linear functions ⟨w, ·⟩
for all w ∈ P̄_k and that e_k ≥ s_k. Then, x̄_{k−1} can be recovered from x̄_k.
We note that this result relies on recovery by deconvolution of a pooling convolution with filter h̄_k,
which is stable when its scale parameter, typically of order s_k to prevent anti-aliasing, is small enough.
This suggests using small values for e_k, s_k, as in typical recent convolutional architectures [30].
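A minimal sketch of one discrete layer x̄_k = Ā_k M̄_k P̄_k x̄_{k−1} for 1-D maps follows. Since the true
kernel mapping φ̄_k is infinite-dimensional, a finite-dimensional homogeneous non-linearity is used as a
stand-in (CKNs would instead project onto a learned subspace of H̄_k [15]); the filter shapes, the tanh
choice, and the normalized pooling weights are illustrative assumptions only:

import numpy as np

def layer(x_prev, e_k, s_k, sigma_k, W):
    """One discrete layer: patch extraction, a finite-dimensional stand-in
    for the kernel mapping, then Gaussian pooling with subsampling s_k.

    x_prev: array of shape (T, p_prev), a 1-D feature map.
    W:      array of shape (p_next, e_k * p_prev), stand-in "filters".
    """
    T, p_prev = x_prev.shape
    # Patch extraction P: stack e_k consecutive points, scaled by e_k^{-1/2}.
    patches = np.stack([x_prev[n:n + e_k].ravel() / np.sqrt(e_k)
                        for n in range(T - e_k + 1)])
    # Stand-in for the kernel mapping M: a homogeneous map z -> ||z|| g(z/||z||)
    # (the true phi_k maps into an infinite-dimensional RKHS).
    norms = np.linalg.norm(patches, axis=1, keepdims=True) + 1e-12
    mapped = norms * np.tanh(patches @ W.T / norms)
    # Pooling A: Gaussian filter at scale sigma_k, subsampled by factor s_k
    # (weights normalized to sum to one, a simplification of h_k).
    m = len(mapped)
    t = np.arange(m)
    out = []
    for n in range(0, m, s_k):
        weights = np.exp(-(t - n) ** 2 / (2 * sigma_k ** 2))
        weights /= weights.sum()
        out.append(weights @ mapped)
    return np.array(out)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 3))
x1 = layer(x0, e_k=3, s_k=2, sigma_k=2.0, W=rng.normal(size=(16, 9)))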
Links between the parameters of the discrete and continuous models. Due to subsampling, the
patch sizes in the continuous and discrete models are related by a multiplicative factor. Specifically, a
patch of size e_k with discretization corresponds to a patch S_k of diameter e_k s_{k−1} s_{k−2} · · · s_1 in the
continuous case. The same holds true for the scale parameter of the Gaussian pooling.
2.2 From Theory to Practice: Kernel Approximation and Convolutional Kernel Networks
Besides discretization, two modifications are required to use the image representation we have
described in practice. The first one consists of using feature maps with finite spatial support, which
introduces border effects that we did not study, but which are negligible when dealing with large
realistic images. The second one requires finite-dimensional approximation of the kernel maps,
leading to the convolutional kernel network model of [15]. Typically, each RKHS's mapping is
approximated by performing a projection onto a subspace of finite dimension, a classical approach to
make kernel methods work at large scale [12, 31, 34]. One advantage is its compatibility with the
RKHSs (meaning that the approximations live in the respective RKHSs), and the stability results we
will present next are preserved thanks to the non-expansiveness of the projection.
It is then possible to derive theoretical results for the CKN model, which appears as a natural
implementation of the kernel constructed previously; yet, we will also show in Section 5 that the
results apply more broadly to CNNs that are contained in the functional space associated to the kernel.
3 Stability to Deformations and Translation Invariance
In this section, we study the translation-invariance and the stability of the kernel representation
described in Section 2 for continuous signals under the action of diffeomorphisms. We use a
similar characterization of stability to the one introduced by Mallat [17]: for a C¹-diffeomorphism
τ : Ω → Ω, let L_τ denote the linear operator defined by L_τ x(u) = x(u − τ(u)); the representation
Φ(·) is stable under the action of diffeomorphisms if there exist two constants C_1 and C_2 such that
    ‖Φ(L_τ x) − Φ(x)‖ ≤ (C_1 ‖∇τ‖_∞ + C_2 ‖τ‖_∞) ‖x‖,    (4)
where ∇τ is the Jacobian of τ, ‖∇τ‖_∞ := sup_{u∈Ω} ‖∇τ(u)‖, and ‖τ‖_∞ := sup_{u∈Ω} |τ(u)|. As
in [17], our results will assume the regularity condition ‖∇τ‖_∞ < 1/2. In order to have a translation-invariant representation, we want C_2 to be small (a translation is a diffeomorphism with ∇τ = 0),
and indeed we will show that C_2 is proportional to 1/σ_n, where σ_n is the scale of the last pooling
layer, which typically increases exponentially with the number of layers n.
Note that unlike the scattering transform [17], we do not have a representation that preserves the
norm, i.e., such that ‖Φ(x)‖ = ‖x‖. While the patch extraction P_k and kernel mapping M_k operators
do preserve the norm, the pooling operators A_k may remove (or significantly reduce) frequencies
from the signal that are larger than 1/σ_k. Yet, natural signals such as natural images often have high
energy in the low-frequency domain (the power spectra of natural images is often considered to have
a polynomial decay in 1/f², where f is the signal frequency [33]). For such classes of signals, a
large fraction of the signal energy will be preserved by the pooling operator. In particular, with some
additional assumptions on the kernels K_k, it is possible to show [3]:
    ‖Φ(x)‖ ≥ ‖A_n · · · A_0 x‖.
Additionally, when using a Gaussian kernel mapping φ_{n+1} on top of the last feature map as a
prediction layer instead of a linear layer, the final representation Φ_f(x) := φ_{n+1}(Φ_n(A_0 x)) preserves
stability and always has unit norm (see the extended version of the paper [3] for details). This suggests
that norm preservation may be a less relevant concern in our kernel setting.
3.1 Stability Results
In order to study the stability of the representation (3), we assume that the input signal x_0 may be
written as x_0 = A_0 x, where A_0 is an initial pooling operator at scale σ_0, which allows us to control
the high frequencies of the signal in the first layer. As discussed previously in Section 2.1, this
assumption is natural and compatible with any physical acquisition device. Note that σ_0 can be taken
arbitrarily small, making the operator A_0 arbitrarily close to the identity, so that this assumption does
not limit the generality of our results. Moreover, we make the following assumptions for each layer k:
(A1) Norm preservation: ‖φ_k(x)‖ = ‖x‖ for all x in P_k;
(A2) Non-expansiveness: ‖φ_k(x) − φ_k(x′)‖ ≤ ‖x − x′‖ for all x, x′ in P_k;
(A3) Patch sizes: there exists κ > 0 such that at any layer k we have sup_{c∈S_k} |c| ≤ κ σ_{k−1}.
Note that assumptions (A1-2) imply that the operators M_k preserve the norm and are non-expansive.
Appendix B exposes a large class of homogeneous kernels that satisfy assumptions (A1-2).
General bound for stability. The following result gives an upper bound on the quantity of interest,
‖Φ(L_τ x) − Φ(x)‖, in terms of the norm of various linear operators which control how τ affects each
layer. The commutator of linear operators A and B is denoted [A, B] = AB − BA.
Proposition 3. Let Φ(x) = Φ_n(A_0 x) where Φ_n is defined in (3) for x in L²(Ω, H_0). Then,
    ‖Φ(L_τ x) − Φ(x)‖ ≤ ( Σ_{k=1}^{n} ‖[P_k A_{k−1}, L_τ]‖ + ‖[A_n, L_τ]‖ + ‖L_τ A_n − A_n‖ ) ‖x‖.    (5)
In the case of a translation L_τ x(u) = L_c x(u) = x(u − c), it is easy to see that pooling and
patch extraction operators commute with L_c (this is also known as covariance or equivariance to
translations), so that we are left with the term ‖L_c A_n − A_n‖, which should control translation
invariance. For general diffeomorphisms τ, we no longer have exact covariance, but we show below
that commutators are stable to τ, in the sense that ‖[P_k A_{k−1}, L_τ]‖ is controlled by ‖∇τ‖_∞, while
‖L_τ A_n − A_n‖ is controlled by ‖τ‖_∞ and decays with the pooling size σ_n.
Bound on ‖[P_k A_{k−1}, L_τ]‖. We begin by noting that P_k z can be identified with (L_c z)_{c∈S_k}
isometrically for all z in L²(Ω, H_{k−1}), since ‖P_k z‖² = ∫_{S_k} ‖L_c z‖² dν_k(c) by Fubini's theorem. Then,
    ‖P_k A_{k−1} L_τ z − L_τ P_k A_{k−1} z‖² = ∫_{S_k} ‖L_c A_{k−1} L_τ z − L_τ L_c A_{k−1} z‖² dν_k(c)
                                           ≤ sup_{c∈S_k} ‖L_c A_{k−1} L_τ z − L_τ L_c A_{k−1} z‖²,
so that ‖[P_k A_{k−1}, L_τ]‖ ≤ sup_{c∈S_k} ‖[L_c A_{k−1}, L_τ]‖. The following result lets us bound
‖[L_c A_{k−1}, L_τ]‖ when |c| ≤ κ σ_{k−1}, which is satisfied under assumption (A3).
Lemma 4. Let A_σ be the pooling operator with kernel h_σ(u) = σ^{−d} h(u/σ). If ‖∇τ‖_∞ ≤ 1/2,
there exists a constant C_1 such that for any σ and |c| ≤ κσ, we have
    ‖[L_c A_σ, L_τ]‖ ≤ C_1 ‖∇τ‖_∞,
where C_1 depends only on h and κ.
A similar result is obtained in Mallat [17, Lemma E.1] for commutators of the form [A_σ, L_τ], but we
extend it to handle integral operators L_c A_σ with a shifted kernel. The proof (given in Appendix C.4)
relies on the fact that [L_c A_σ, L_τ] is an integral operator in order to bound its norm via Schur's test.
Note that κ can be made larger, at the cost of an increase of the constant C_1 of the order κ^{d+1}.
Bound on ‖L_τ A_n − A_n‖. We bound the operator norm ‖L_τ A_n − A_n‖ in terms of ‖τ‖_∞ using the
following result due to Mallat [17, Lemma 2.11], with σ = σ_n:
Lemma 5. If ‖∇τ‖_∞ ≤ 1/2, we have
    ‖L_τ A_σ − A_σ‖ ≤ (C_2 / σ) ‖τ‖_∞,    (6)
with C_2 = 2^d · ‖∇h‖_1.
Combining Proposition 3 with Lemmas 4 and 5, we immediately obtain the following result.
Theorem 6. Let Φ(x) be a representation given by Φ(x) = Φ_n(A_0 x) and assume (A1-3). If
‖∇τ‖_∞ ≤ 1/2, we have
    ‖Φ(L_τ x) − Φ(x)‖ ≤ ( C_1 (1 + n) ‖∇τ‖_∞ + (C_2 / σ_n) ‖τ‖_∞ ) ‖x‖.    (7)
This result matches the desired notion of stability in Eq. (4), with a translation-invariance factor that
decays with σ_n. The dependence on a notion of depth (the number of layers n here) also appears
in [17], with a factor equal to the maximal length of scattering paths, and with the same condition
‖∇τ‖_∞ ≤ 1/2. However, while the norm of the scattering representation is preserved as the length
of these paths goes to infinity, the norm of Φ(x) can decrease with depth due to pooling layers,
though this concern may be alleviated by using an additional non-linear prediction layer, as discussed
previously (see also [3]).
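To get a feel for the guarantee of Theorem 6, the sketch below evaluates the right-hand side of (7) for
a concrete 1-D warp τ(u) = ε sin(2πu). The constants C_1 and C_2 are not computed explicitly in the
statement, so placeholder values are used purely for illustration:

import numpy as np

def stability_bound(tau, grad_tau, u_grid, n_layers, sigma_n, C1=1.0, C2=1.0):
    """Right-hand side of (7) for a 1-D warp tau, up to the unknown
    constants C1 and C2 (set to 1 here as placeholders)."""
    tau_inf = np.max(np.abs(tau(u_grid)))        # ||tau||_inf
    grad_inf = np.max(np.abs(grad_tau(u_grid)))  # ||grad tau||_inf
    assert grad_inf <= 0.5, "regularity condition of Theorem 6"
    return C1 * (1 + n_layers) * grad_inf + (C2 / sigma_n) * tau_inf

eps = 0.02
tau = lambda u: eps * np.sin(2 * np.pi * u)
grad_tau = lambda u: 2 * np.pi * eps * np.cos(2 * np.pi * u)
u = np.linspace(0.0, 1.0, 1001)

# A larger last-layer pooling scale sigma_n shrinks the translation term.
for sigma_n in (1.0, 4.0, 16.0):
    print(sigma_n, stability_bound(tau, grad_tau, u, n_layers=3, sigma_n=sigma_n))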
3.2 Stability with Kernel Approximations
As in the analysis of the scattering transform of [17], we have characterized the stability and
shift-invariance of the data representation for continuous signals, in order to give some intuition about the
properties of the corresponding discrete representation, which we have described in Section 2.1.
Another approximation performed in the CKN model of [15] consists of adding projection steps on
finite-dimensional subspaces of the RKHS's layers, as discussed in Section 2.2. Interestingly, the
stability properties we have obtained previously are compatible with these steps. We may indeed
redefine the operator M_k as the pointwise operation such that M_k z(u) = Π_k φ_k(z(u)) for any map z
in L²(Ω, P_k), instead of M_k z(u) = φ_k(z(u)); Π_k : H_k → F_k is here a projection operator onto a
linear subspace. Then, M_k does not necessarily preserve the norm anymore, but ‖M_k z‖ ≤ ‖z‖, with a
loss of information corresponding to the quality of approximation of the kernel K_k on the points z(u).
On the other hand, the non-expansiveness of M_k is satisfied thanks to the non-expansiveness of
the projection. Additionally, the CKN construction provides a finite-dimensional representation
at each layer, which preserves the norm structure of the original Hilbert spaces isometrically. In
summary, it is possible to show that the conclusions of Theorem 6 remain valid for this tractable CKN
representation, but we lose signal information in the process. The stability of the predictions can then
be controlled through the norm of the last (linear) layer, which is typically used as a regularizer [15].
4 Global Invariance to Group Actions
In Section 3, we have seen how the kernel representation of Section 2 creates invariance to translations
by commuting with the action of translations at intermediate layers, and how the last pooling layer on
the translation group governs the final level of invariance. It is often useful to encode invariances
to different groups of transformations, such as rotations or reflections (see, e.g., [9, 17, 22, 29]).
Here, we show how this can be achieved by defining adapted patch extraction and pooling operators
that commute with the action of a transformation group G (this is known as group covariance or
equivariance). We assume that G is locally compact, so that we can define a left-invariant Haar
measure μ, that is, a measure on G that satisfies μ(gS) = μ(S) for any Borel set S ⊆ G and g in G.
We assume the initial signal x(u) is defined on G, and we define subsequent feature maps on the
same domain. The action of an element g ∈ G is denoted by L_g, where L_g x(u) = x(g^{−1}u). Then,
we are interested in defining a layer (that is, a succession of patch extraction, kernel mapping, and
pooling operators) that commutes with L_g, in order to achieve equivariance to the group G.
Patch extraction. We define patch extraction as follows:
    P x(u) = (x(uv))_{v∈S}   for all u ∈ G,
where S ⊆ G is a patch centered at the identity. P commutes with L_g since
    P L_g x(u) = (L_g x(uv))_{v∈S} = (x(g^{−1}uv))_{v∈S} = P x(g^{−1}u) = L_g P x(u).
Kernel mapping. The pointwise operator M is defined as in Section 2, and thus commutes with L_g.
Pooling. The pooling operator on the group G is defined in a similar fashion as [22] by
    A x(u) = ∫_G x(uv) h(v) dμ(v) = ∫_G x(v) h(u^{−1}v) dμ(v),
where h is a pooling filter typically localized around the identity element. It is easy to see from the
first expression of A x(u) that A L_g x(u) = L_g A x(u), making the pooling operator G-equivariant.
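For a finite group, the Haar integral becomes a sum over group elements, and the equivariance
A L_g = L_g A can be checked directly. The sketch below does this for the cyclic group Z_m acting on
itself; the pooling filter h is an arbitrary choice localized around the identity:

import numpy as np

m = 8  # cyclic group Z_m, with group operation addition mod m
h = np.array([0.5, 0.25, 0.25] + [0.0] * (m - 3))  # filter near the identity

def pool(x):
    """A x(u) = sum_v x(u + v) h(v), the finite analogue of the Haar integral."""
    return np.array([sum(x[(u + v) % m] * h[v] for v in range(m))
                     for u in range(m)])

def act(g, x):
    """L_g x(u) = x(u - g) for the (abelian) group Z_m."""
    return np.roll(x, g)

x = np.random.default_rng(0).normal(size=m)
for g in range(m):
    assert np.allclose(pool(act(g, x)), act(g, pool(x)))  # A L_g = L_g A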
In our analysis of stability in Section 3, we saw that inner pooling layers are useful to guarantee
stability to local deformations, while global invariance is achieved mainly through the last pooling
layer. In some cases, one only needs stability to a subgroup of G, while achieving global invariance
to the whole group, e.g., in the roto-translation group [21], one might want invariance to a global
rotation but stability to local translations. Then, one can perform pooling just on the subgroup to
stabilize (e.g., translations) in intermediate layers, while pooling on the entire group at the last layer
to achieve the global group invariance.
5 Link with Convolutional Neural Networks
In this section, we study the connection between the kernel representation defined in Section 2 and
CNNs. Specifically, we show that the RKHS H_{K_n} obtained from our kernel construction contains
a set of CNNs on continuous domains with certain types of smooth homogeneous activations. An
important consequence is that the stability results of previous sections apply to this class of CNNs.
CNN maps construction. We now define a CNN function f_σ that takes as input an image x_0 in
L²(Ω, R^{p_0}) with p_0 channels, and builds a sequence of feature maps, represented at layer k as a
function z_k in L²(Ω, R^{p_k}) with p_k channels; it performs linear convolutions with a set of filters
(w_k^i)_{i=1,...,p_k}, followed by a pointwise activation function σ to obtain intermediate feature maps z̃_k,
then applies a linear pooling filter and repeats the same operations at each layer. Note that here, each
w_k^i is in L²(S_k, R^{p_{k−1}}), with channels denoted by w_k^{ij} ∈ L²(S_k, R). Formally, the intermediate map
z̃_k in L²(Ω, R^{p_k}) is obtained for k ≥ 1 by
    z̃_k^i(u) = n_k(u) σ(⟨w_k^i, P_k z_{k−1}(u)⟩ / n_k(u)),    (8)
where z̃_k(u) = (z̃_k^1(u), . . . , z̃_k^{p_k}(u)) in R^{p_k}, and P_k is the patch extraction operator, which operates
here on finite-dimensional maps. The activation involves a pointwise non-linearity σ along with a
quantity n_k(u) that is independent of the filters and that will be made explicit in the sequel. Finally,
the map z_k is obtained by using a pooling operator as in Section 2, with z_k = A_k z̃_k, and z_0 = x_0.
Homogeneous activations. The choice of non-linearity σ relies on Lemma B.2 of the appendix,
which shows that for many choices of smooth functions σ, the RKHSs H_k defined in Section 2 contain
the linear functions z ↦ ‖z‖ σ(⟨g, z⟩/‖z‖) for all g in P_k. While this homogenization involving
the quantities ‖z‖ is not standard in classical CNNs, we note that (i) the most successful activation
function, namely the rectified linear unit, is homogeneous, that is, relu(⟨g, z⟩) = ‖z‖ relu(⟨g, z⟩/‖z‖);
(ii) while relu is non-smooth and thus not in our RKHSs, there exists a smoothed variant that satisfies
the conditions of Lemma B.2 for useful kernels. As noticed in [35, 36], this is for instance the case
for the inverse polynomial kernel described in Appendix B. In Figure 2, we plot and compare these
different variants of relu. Then, we may now define the quantities n_k(u) := ‖P_k x_{k−1}(u)‖ in (8),
which are due to the homogenization, and which are independent of the filters w_k^i.
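Since relu is homogeneous, a layer of the form (8) with σ = relu can be written down directly; the
sketch below does so for a 1-D map and checks that the homogenization n_k(u) σ(·/n_k(u)) indeed
leaves relu unchanged (the shapes and random filters are illustrative assumptions):

import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def homogeneous_activation_layer(z_prev, W, e):
    """Intermediate map of eq. (8): for each position u, take the patch
    p = P z(u), set n(u) = ||p||, and output n(u) * relu(W p / n(u))."""
    T, p_prev = z_prev.shape
    out = []
    for u in range(T - e + 1):
        p = z_prev[u:u + e].ravel()          # patch P z(u)
        n = np.linalg.norm(p)                # n(u) = ||P z(u)||
        if n == 0.0:
            out.append(np.zeros(W.shape[0]))
        else:
            out.append(n * relu(W @ p / n))  # equals relu(W @ p) by homogeneity
    return np.array(out)

rng = np.random.default_rng(0)
z0 = rng.normal(size=(32, 3))
W = rng.normal(size=(8, 3 * 4))              # p_k = 8 filters, patch size e = 4
zt = homogeneous_activation_layer(z0, W, e=4)
# Homogeneity of relu: n * relu(t / n) == relu(t) for n > 0.
assert np.allclose(zt, relu(np.stack([W @ z0[u:u + 4].ravel()
                                      for u in range(32 - 4 + 1)])))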
Classification layer. The final CNN prediction function f_σ is given by inner products with the
feature maps of the last layer:
    f_σ(x_0) = ⟨w_{n+1}, z_n⟩,
with parameters w_{n+1} in L²(Ω, R^{p_n}). The next result shows that for appropriate σ, the function f_σ
is in H_{K_n}. The construction of this function in the RKHS and the proof are given in Appendix D. We
note that a similar construction for fully connected networks with constraints on weights and inputs
was given in [35].
Proposition 7 (CNNs and RKHSs). Assume the activation σ satisfies C_σ(a) < ∞ for all a ≥ 0,
where C_σ is defined for a given kernel in Lemma B.2. Then the CNN function f_σ defined above is in
the RKHS H_{K_n}, with norm
    ‖f_σ‖² ≤ p_n Σ_{i=1}^{p_n} ‖w_{n+1}^i‖²₂ B_{n,i},
where B_{n,i} is defined recursively by B_{1,i} = C_σ²(‖w_1^i‖²₂) and B_{k,i} = C_σ²(p_{k−1} Σ_{j=1}^{p_{k−1}} ‖w_k^{ij}‖²₂ B_{k−1,j}).
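The recursion defining B_{k,i} is straightforward to evaluate once C_σ is known. The sketch below
computes the resulting bound on ‖f_σ‖² from given squared filter norms; the function C_σ used here is
a pure placeholder, since the true C_σ depends on the kernel and activation via Lemma B.2, which is
not reproduced in this paper:

def rkhs_norm_bound(filter_sq_norms, w_last_sq_norms, C_sigma):
    """Upper bound on ||f_sigma||^2 from Proposition 7.

    filter_sq_norms[k][i][j]: ||w_k^{ij}||_2^2 for layers k = 1..n;
    w_last_sq_norms[i]:       ||w_{n+1}^i||_2^2;
    C_sigma:                  the function C_sigma(.) of Lemma B.2 (placeholder).
    """
    # B_{1,i} = C_sigma^2(||w_1^i||^2), with ||w_1^i||^2 = sum_j ||w_1^{ij}||^2.
    B = [C_sigma(sum(w1_ij)) ** 2 for w1_ij in filter_sq_norms[0]]
    for k in range(1, len(filter_sq_norms)):
        p_prev = len(B)
        # B_{k,i} = C_sigma^2(p_{k-1} * sum_j ||w_k^{ij}||^2 B_{k-1,j}).
        B = [C_sigma(p_prev * sum(w_ij * B[j] for j, w_ij in enumerate(wk_i))) ** 2
             for wk_i in filter_sq_norms[k]]
    p_n = len(B)
    return p_n * sum(w * b for w, b in zip(w_last_sq_norms, B))

# Placeholder C_sigma for illustration only (monotone in its argument).
C_sigma = lambda a: 1.0 + a
norms = [[[0.1, 0.2, 0.1]] * 4,      # layer 1: 4 filters over 3 input channels
         [[0.05] * 4] * 6]           # layer 2: 6 filters over 4 channels
print(rkhs_norm_bound(norms, w_last_sq_norms=[0.3] * 6, C_sigma=C_sigma))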
The results of this section imply that our study of the geometry of the kernel representations, and
in particular the stability and invariance properties of Section 3, apply to the generic CNNs defined
above, thanks to the Lipschitz smoothness relation (1). The smoothness is then controlled by the
RKHS norm of these functions, which sheds light on the links between generalization and stability.
In particular, functions with low RKHS norm (a.k.a. "large margin") are known to generalize better to
unseen data (see, e.g., the notion of margin bounds for SVMs [27, 28]). This implies, for instance, that
generalization is harder if the task requires classifying two slightly deformed images with different
labels, since this requires a function with large RKHS norm according to our stability analysis. In
contrast, if a stable function (i.e., with small RKHS norm) is sufficient to do well on a training set,
learning becomes "easier" and few samples may be enough for good generalization.
[Figure 2: Comparison of one-dimensional functions obtained with relu and smoothed relu (sReLU)
activations. (Left) the non-homogeneous setting of [35, 36], f : x ↦ σ(x), comparing ReLU and
sReLU. (Right) our homogeneous setting, f : x ↦ |x| σ(wx/|x|), with curves for sReLU at
w = 0, 0.5, 1, 2 and ReLU at w = 1. Note that for w ≥ 0.5, sReLU and ReLU are indistinguishable.]
Acknowledgements
This work was supported by a grant from ANR (MACARON project under grant number
ANR-14-CE23-0003-01), by the ERC grant number 714381 (SOLARIS project), and by the
MSR-Inria joint center.
References
[1] F. Anselmi, L. Rosasco, and T. Poggio. On invariance and selectivity in representation learning.
Information and Inference, 5(2):134–158, 2016.
[2] F. Anselmi, L. Rosasco, C. Tan, and T. Poggio. Deep convolutional networks are hierarchical
kernel machines. preprint arXiv:1508.01084, 2015.
[3] A. Bietti and J. Mairal. Group invariance and stability to deformations of deep convolutional
representations. preprint arXiv:1706.03078, 2017.
[4] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2011.
[5] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on pattern
analysis and machine intelligence (PAMI), 35(8):1872–1886, 2013.
[6] J. Bruna, A. Szlam, and Y. LeCun. Learning stable group invariant representations with
convolutional networks. preprint arXiv:1301.3537, 2013.
[7] Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information
Processing Systems (NIPS), 2009.
[8] M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving
robustness to adversarial examples. In International Conference on Machine Learning (ICML),
2017.
[9] T. Cohen and M. Welling. Group equivariant convolutional networks. In International Conference on Machine Learning (ICML), 2016.
[10] A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The
power of initialization and a dual view on expressivity. In Advances in Neural Information
Processing Systems (NIPS), 2016.
[11] J. Diestel and J. J. Uhl. Vector Measures. American Mathematical Society, 1977.
[12] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations.
Journal of Machine Learning Research (JMLR), 2:243–264, 2001.
[13] B. Haasdonk and H. Burkhardt. Invariant kernel functions for pattern analysis and machine
learning. Machine Learning, 68(1):35–61, 2007.
[14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D.
Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation,
1(4):541–551, 1989.
[15] J. Mairal. End-to-End Kernel Learning with Supervised Convolutional Kernel Networks. In
Advances in Neural Information Processing Systems (NIPS), 2016.
[16] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In Advances
in Neural Information Processing Systems (NIPS), 2014.
[17] S. Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics,
65(10):1331–1398, 2012.
[18] G. Montavon, M. L. Braun, and K.-R. Müller. Kernel analysis of deep networks. Journal of
Machine Learning Research (JMLR), 12:2563–2581, 2011.
[19] Y. Mroueh, S. Voinea, and T. A. Poggio. Learning with group invariant features: A kernel
perspective. In Advances in Neural Information Processing Systems (NIPS), 2015.
[20] K. Muandet, K. Fukumizu, B. Sriperumbudur, B. Schölkopf, et al. Kernel mean embedding
of distributions: A review and beyond. Foundations and Trends in Machine Learning,
10(1-2):1–141, 2017.
[21] E. Oyallon and S. Mallat. Deep roto-translation scattering for object classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[22] A. Raj, A. Kumar, Y. Mroueh, T. Fletcher, and B. Schoelkopf. Local group invariant representations via orbit embeddings. In International Conference on Artificial Intelligence and Statistics
(AISTATS), 2017.
[23] S. Saitoh. Integral transforms, reproducing kernels and their applications, volume 369. CRC
Press, 1997.
[24] I. J. Schoenberg. Positive definite functions on spheres. Duke Mathematical Journal,
9(1):96–108, 1942.
[25] B. Schölkopf. Support Vector Learning. PhD thesis, Technische Universität Berlin, 1997.
[26] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue
problem. Neural Computation, 10(5):1299–1319, 1998.
[27] B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization,
optimization, and beyond. 2001.
[28] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to
algorithms. Cambridge university press, 2014.
[29] L. Sifre and S. Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition
(CVPR), 2013.
[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. In International Conference on Learning Representations (ICLR), 2014.
[31] A. J. Smola and B. Sch?lkopf. Sparse greedy matrix approximation for machine learning. In
Proceedings of the International Conference on Machine Learning (ICML), 2000.
[32] E. M. Stein. Harmonic Analysis: Real-variable Methods, Orthogonality, and Oscillatory
Integrals. Princeton University Press, 1993.
[33] A. Torralba and A. Oliva. Statistics of natural image categories. Network: computation in
neural systems, 14(3):391–412, 2003.
[34] C. Williams and M. Seeger. Using the Nystr?m method to speed up kernel machines. In
Advances in Neural Information Processing Systems (NIPS), 2001.
[35] Y. Zhang, J. D. Lee, and M. I. Jordan. ℓ1-regularized neural networks are improperly learnable
in polynomial time. In International Conference on Machine Learning (ICML), 2016.
[36] Y. Zhang, P. Liang, and M. J. Wainwright. Convexified convolutional neural networks. In
International Conference on Machine Learning (ICML), 2017.
Statistical Cost Sharing
Eric Balkanski
Harvard University
[email protected]
Umar Syed
Google NYC
[email protected]
Sergei Vassilvitskii
Google NYC
[email protected]
Abstract
We study the cost sharing problem for cooperative games in situations where the
cost function C is not available via oracle queries, but must instead be learned from
samples drawn from a distribution, represented as tuples (S, C(S)), for different
subsets S of players. We formalize this approach, which we call Statistical
Cost Sharing, and consider the computation of the core and the Shapley value.
Expanding on the work by Balcan et al. [2015], we give precise sample complexity
bounds for computing cost shares that satisfy the core property with high probability
for any function with a non-empty core. For the Shapley value, which has
never been studied in this setting, we show that for submodular cost functions
with bounded curvature α it can be approximated from samples from the uniform
distribution to a √(1 − α) factor, and that the bound is tight. We then define
statistical analogues of the Shapley axioms, derive a notion of statistical Shapley
value, and show that it can be approximated arbitrarily well from samples from
any distribution and for any function.
1 Introduction
The cost sharing problem asks for an equitable way to split the cost of a service among all of the
participants. Formally, there is a cost function C defined over all subsets S ? N of a ground set of
elements, or players, and the objective is to fairly divide the cost of the ground set C(N ) among the
players. Unlike traditional learning problems, the goal here is not to predict the cost of the service,
but rather learn which ways of dividing the cost among the players are equitable.
Cost sharing is central to cooperative game theory, and there is a rich literature developing the
key concepts and principles to reason about this topic. Two popular cost sharing concepts are
the core [Gillies, 1959], where no group of players has an incentive to deviate, and the Shapley
value [Shapley, 1953], which is the unique vector of cost shares satisfying four natural axioms.
While both the core and the Shapley value are easy to define, computing them poses additional
challenges. One obstacle is that the computation of the cost shares requires knowledge of costs in
myriad different scenarios. For example, computing the exact Shapley value requires one to look at
the marginal contribution of a player over all possible subsets of others. Recent work [Liben-Nowell
et al., 2012] shows that one can find approximate Shapley values for a restricted subset of cost
functions by looking at the costs for polynomially many specifically chosen subsets. In practice,
however, another roadblock emerges: one cannot simply query for the cost of an arbitrary subset.
Rather, the subsets are passively observed, and the costs of unobserved subsets are simply unknown.
We share the opinion of Balcan et al. [2016] that the main difficulty with using cost sharing methods
in concrete applications is the information needed to compute them.
Concretely, consider the following cost sharing applications.
Attributing Battery Consumption on Mobile Devices. A modern mobile phone or tablet is
typically running a number of distinct apps concurrently. In addition to foreground processes, a lot
of activity may be happening in the background: email clients may be fetching new mail, GPS may
be active for geo-fencing applications, messaging apps are polling for new notifications, and so on.
All of these activities consume power; the question is how much of the total battery consumption
should be attributed to each app? This problem is non-trivial because the operating system induces
cooperation between apps to save battery power. For example there is no need to activate the GPS
sensor twice if two different apps request the current location almost simultaneously.
Understanding Black Box Learning Deep neural networks are prototypical examples of black
box learning, and it is almost impossible to tease out the contribution of a particular feature to the
final output. Particularly in situations where the features are binary, cooperative game theory gives a
formal way to analyze and derive these contributions. While one can evaluate the objective function
on any subset of features, deep networks are notorious for performing poorly on certain out of sample
examples [Goodfellow et al., 2014, Szegedy et al., 2013], which may lead to misleading conclusions
when using traditional cost sharing methods.
We model these cost sharing questions as follows. Let N be the set of possible players (apps or
features), and for a subset S ⊆ N, let C(S) denote the cost of S. This cost represents the total power
consumed over a standard period of time, or the rewards obtained by the learner. We are given ordered
pairs (S_1, C(S_1)), (S_2, C(S_2)), . . . , (S_m, C(S_m)), where each S_i ⊆ N is drawn independently
from some distribution D. The problem of Statistical Cost Sharing asks to look for reasonable
cost sharing strategies in this setting.
1.1 Our results
We build on the approach from Balcan et al. [2015], which studied Statistical Cost Sharing in
the context of the core, and assume that only partial data about the cost function is observed. The
authors showed that cost shares that are likely to respect the core property can be obtained for certain
restricted classes of functions. Our main result is an algorithm that generalizes these results for all
games where the core is non-empty and we derive sample complexity bounds showing exactly the
number of samples required to compute cost shares (Theorems 1 and 2). While the main approach
of Balcan et al. [2015] relied on first learning the cost function and then computing cost shares, we
show how to proceed directly, computing cost shares without explicitly learning a good estimate of
the cost function. This high level idea was independently discovered by Balcan et al. [2016]; our
approach here greatly improves the sample complexity bounds, culminating in a result logarithmic in
the number of players. We also show that approximately satisfying the core with probability one is
impossible in general (Theorem 3).
We then focus on the Shapley value, which has never been studied in the Statistical Cost
Sharing context. We obtain a tight √(1 − α) multiplicative approximation of the Shapley values for
submodular functions with bounded curvature α over the uniform distribution (Theorems 4 and 11),
but show that they cannot be approximated by a bounded factor in general, even for the restricted
class of coverage functions, which are learnable, over the uniform distribution (Theorem 5). We
also introduce a new cost sharing method called data-dependent Shapley value which is the unique
solution (Theorem 6) satisfying four natural axioms resembling the Shapley axioms (Definition 7),
and which can be approximated arbitrarily well from samples for any bounded function and any
distribution (Theorem 7).
1.2 Related work
There are two avenues of work which we build upon. The first is the notion of cost sharing in
cooperative games, first introduced by Von Neumann and Morgenstern [1944]. We consider the
Shapley value and the core, two popular solution concepts for cost-sharing in cooperative games. The
Shapley value [Shapley, 1953] is studied in algorithmic mechanism design [Anshelevich et al., 2008,
Balkanski and Singer, 2015, Feigenbaum et al., 2000, Moulin, 1999]. For applications of the Shapley
value, see the surveys by Roth [1988] and Winter [2002]. A naive computation of the Shapley value
of a cooperative game would take exponential time; recently, methods for efficiently approximating
the Shapley value have been suggested [Bachrach et al., 2010, Fatima et al., 2008, Liben-Nowell
et al., 2012, Mann, 1960] for some restricted settings.
The core, introduced by Gillies [1959], is another well-studied solution concept for cooperative
games. Bondareva [1963] and Shapley [1967] characterized when the core is non-empty. The
core has been studied in the context of multiple combinatorial games, such as facility location
Goemans and Skutella [2004] and maximum flow Deng et al. [1999]. In cases with no solutions in
the core or when it is computationally hard to find one, the balance property has been relaxed to hold
approximately [Devanur et al., 2005, Immorlica et al., 2008]. In applications where players submit
bids, cross-monotone cost sharing, a concept stronger than the core that satisfies the group strategy
proofness property, has attracted a lot of attention [Immorlica et al., 2008, Jain and Vazirani, 2002,
Moulin and Shenker, 2001, Pál and Tardos, 2003]. We note that these applications are sufficiently
different from the ones we are studying in this work.
The second is the recent work in econometrics and computational economics that aims to estimate
critical concepts directly from a limited data set, and reason about the sample complexity of the
computational problems. Specifically, in all of the above papers, the algorithm must be able to query
or compute C(S) for an arbitrary set S ? N . In our work, we are instead given a collection of
samples from some distribution; importantly the algorithm does not know C(S) for sets S that were
not sampled. This approach was first introduced by Balcan et al. [2015], who showed how to compute
an approximate core for some families of games. Their main technique is to first learn the cost
function C from samples and then to use the learned function to compute cost shares. The authors
also showed that there exist games that are not PAC-learnable but that have an approximate core that
can be computed. Independently, in recent follow up work, the authors showed how to extend their
approach to compute a probably approximate core for all games with a non-empty core, and gave
weak sample complexity bounds [Balcan et al., 2016]. We improve upon their bounds, showing that a
logarithmic number of samples suffices when the spread of the cost function is bounded.
2 Preliminaries
A cooperative game is defined by an ordered pair (N, C), where N is the ground set of elements, also
called players, and $C : 2^N \to \mathbb{R}_{\geq 0}$ is the cost function mapping each coalition $S \subseteq N$ to its cost,
C(S). The ground set of size n = |N| is called the grand coalition and we denote the elements by
$N = \{1, \ldots, n\} = [n]$. We assume that $C(\emptyset) = 0$, $C(S) \geq 0$ for all $S \subseteq N$, and that $\max_S C(S)$
is bounded by a polynomial in n, which are standard assumptions. We will slightly abuse notation
and use C(i) instead of $C(\{i\})$ for $i \in N$ when it is clear from the context.
We recall three specific classes of functions. Submodular functions exhibit the property of diminishing
returns: $C_S(i) \geq C_T(i)$ for all $S \subseteq T \subseteq N$ and $i \in N$, where $C_S(i)$ is the marginal contribution of
element i to set S, i.e., $C_S(i) = C(S \cup \{i\}) - C(S)$. Coverage functions are the canonical example
of submodular functions. A function is coverage if it can be written as $C(S) = |\cup_{i \in S} T_i|$ where
$T_i \subseteq U$ for some universe U. Finally, we also consider the simple class of additive functions, such
that $C(S) = \sum_{i \in S} C(i)$.
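As a quick illustration (ours, not the paper's), these classes are easy to instantiate in code on a small ground set; the helper names and the example `covers` dictionary below are hypothetical:

```python
def coverage_cost(covers):
    """Coverage function C(S) = |union of T_i for i in S|, given covers[i] = T_i."""
    return lambda S: len(set().union(*(covers[i] for i in S)))

def additive_cost(weights):
    """Additive function C(S) = sum_{i in S} C(i)."""
    return lambda S: sum(weights[i] for i in S)

def marginal(C, i, S):
    """Marginal contribution C_S(i) = C(S + {i}) - C(S)."""
    return C(tuple(set(S) | {i})) - C(S)

# Example: N = {0, 1, 2} with T_0 = {a, b}, T_1 = {b, c}, T_2 = {c}.
C = coverage_cost({0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}})
assert marginal(C, 2, (0,)) >= marginal(C, 2, (0, 1))  # diminishing returns
```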
A cost allocation $\psi$ is a vector $\psi \in \mathbb{R}^n$ where $\psi_i$ is the share of element i. We call a cost allocation
balanced if $\sum_{i \in N} \psi_i = C(N)$. Given a cooperative game (N, C), the goal in the cost sharing
literature is to find "desirable" balanced cost allocations. Most proposals take an axiomatic approach,
defining a set of axioms that a cost allocation should satisfy. These lead to the concepts of Shapley
value and the core, which we define next. A useful tool to describe and compute these cost sharing
concepts is permutations. We denote by $\sigma$ a uniformly random permutation of N and by $S_\sigma^{<i}$ the
players before $i \in N$ in permutation $\sigma$.
2.1 The core
The core is a balanced cost allocation where no player has an incentive to deviate from the grand
coalition: for any subset of players, the sum of their shares does not exceed their collective cost.
Definition 1. A cost allocation $\psi$ is in the core of function C if the following properties are satisfied:
• Balance: $\sum_{i \in N} \psi_i = C(N)$,
• Core property: for all $S \subseteq N$, $\sum_{i \in S} \psi_i \leq C(S)$.
The core is a natural cost sharing concept. For example, in the battery blame scenario it translates to
the following assurance: No matter what other apps are running concurrently, an app is never blamed
for more battery consumption than if it were running alone. Given that app developers are typically
business competitors, and that a mobile device?s battery is a very scarce resource, such a guarantee
can rather neatly avoid a great deal of finger-pointing. Unfortunately, for a given cost function C
the core may not exist (we say the core is empty), or there may be multiple (or even infinitely many)
cost allocations in the core. For submodular functions C, the core is guaranteed to exist and one
allocation in the core can be computed in polynomial time. Specifically, for any permutation $\sigma$, the
cost allocation $\psi$ such that $\psi_i = C(S_\sigma^{<i} \cup \{i\}) - C(S_\sigma^{<i})$ is in the core.
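A minimal sketch of this construction, under our own naming: charge each player its marginal contribution along a permutation, then verify balance and the core property by brute force on a tiny ground set (here reusing the coverage function C from the sketch above, which is submodular):

```python
from itertools import chain, combinations

def greedy_core_allocation(C, order):
    """psi_i = C(S^{<i} + {i}) - C(S^{<i}) along the permutation `order`."""
    psi, prefix = {}, []
    for i in order:
        psi[i] = C(tuple(prefix + [i])) - C(tuple(prefix))
        prefix.append(i)
    return psi

def in_core(C, psi, N):
    """Brute-force check of balance and the core property (exponential; tiny N only)."""
    subsets = chain.from_iterable(combinations(N, r) for r in range(len(N) + 1))
    balanced = abs(sum(psi.values()) - C(tuple(N))) < 1e-9
    return balanced and all(sum(psi[i] for i in S) <= C(S) + 1e-9 for S in subsets)

psi = greedy_core_allocation(C, order=[0, 1, 2])   # C: the coverage function above
assert in_core(C, psi, [0, 1, 2])
```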
2.2 The Shapley value
The Shapley value provides an alternative cost sharing method. For a game (N, C) we denote it by
$\phi^C$, dropping the superscript when it is clear from the context. While the Shapley value may not
satisfy the core property, it satisfies the following four axioms:
• Balance: $\sum_{i \in N} \phi_i = C(N)$.
• Symmetry: For all $i, j \in N$, if $C(S \cup \{i\}) = C(S \cup \{j\})$ for all $S \subseteq N \setminus \{i, j\}$ then
$\phi_i = \phi_j$.
• Zero element: For all $i \in N$, if $C(S \cup \{i\}) = C(S)$ for all $S \subseteq N$ then $\phi_i = 0$.
• Additivity: For two games $(N, C_1)$ and $(N, C_2)$ with the same players, but different cost
functions $C_1$ and $C_2$, let $\phi^1$ and $\phi^2$ be the respective cost allocations. Consider a new game
$(N, C_1 + C_2)$, and let $\phi'$ be the cost allocation for this game. Then for all elements $i \in N$,
$\phi'_i = \phi^1_i + \phi^2_i$.
Each of these axioms is natural: balance ensures that the cost of the grand coalition is distributed
among all of the players. Symmetry states that two identical players should have equal shares. Zero
element verifies that a player that adds zero cost to any coalition should have zero share. Finally,
additivity just confirms that costs combine in a linear manner. It is surprising that there is a unique
cost allocation satisfying all four axioms: the Shapley value. Moreover, the Shapley value can be
written as the following summation:
$$\phi_i = \mathbb{E}_\sigma\!\left[C(S_\sigma^{<i} \cup \{i\}) - C(S_\sigma^{<i})\right] = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\left(C(S \cup \{i\}) - C(S)\right).$$
This expression is the expected marginal contribution $C(S \cup \{i\}) - C(S)$ of i over a set of players
S who arrived before i in a random permutation of N. As the summation is over exponentially many
terms, the Shapley value generally cannot be computed exactly in polynomial time. However, several
sampling approaches have been suggested to approximate the Shapley value for specific classes of
functions [Bachrach et al., 2010, Fatima et al., 2008, Liben-Nowell et al., 2012, Mann, 1960].
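For concreteness, the standard Monte Carlo permutation estimator looks as follows; this sketch (ours) makes explicit that it needs oracle access to C, which is exactly what the statistical setting below takes away:

```python
import random

def shapley_monte_carlo(C, N, num_permutations=2000, seed=0):
    """Average each player's marginal contribution over random permutations.

    Requires oracle access to C (the query model, not the statistical model)."""
    rng = random.Random(seed)
    phi = {i: 0.0 for i in N}
    for _ in range(num_permutations):
        order = list(N)
        rng.shuffle(order)
        prefix, prev = [], C(())
        for i in order:
            prefix.append(i)
            cur = C(tuple(prefix))
            phi[i] += (cur - prev) / num_permutations
            prev = cur
    return phi
```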
2.3 Statistical cost sharing
With the sole exception of Balcan et al. [2015], previous work in cost-sharing critically assumes
that the algorithm is given oracle access to C, i.e., it can query, or determine, the cost C(S) for any
$S \subseteq N$. In this paper, we aim to (approximately) compute the Shapley value and other cost allocations
from samples, without oracle access to C, and with a number of samples that is polynomial in n.
Definition 2. Consider a cooperative game with players N and cost function C. In the STATISTICAL
COST SHARING problem we are given pairs $(S_1, C(S_1)), (S_2, C(S_2)), \ldots, (S_m, C(S_m))$ where
each $S_i$ is drawn i.i.d. from a distribution D over $2^N$. The goal is to find a cost allocation $\psi \in \mathbb{R}^n$.
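To make the data model concrete, here is how a STATISTICAL COST SHARING instance might be generated, taking D to be the uniform distribution over $2^N$ (our illustration; the function name is ours):

```python
import random

def draw_samples(C, n, m, seed=0):
    """Draw m i.i.d. pairs (S, C(S)) with S uniform over 2^N; only these are revealed."""
    rng = random.Random(seed)
    samples = []
    for _ in range(m):
        S = tuple(i for i in range(n) if rng.random() < 0.5)
        samples.append((S, C(S)))
    return samples
```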
In what follows we will often refer to an individual (S, C(S)) pair as a sample. It is tempting to
reduce STATISTICAL COST SHARING to classical cost sharing by simply collecting enough samples
to use known algorithms. For example, Liben-Nowell et al. [2012] showed how to approximate the
Shapley value with polynomially many queries C(S). However, if the distribution D is not aligned
with these specific queries, which is the case even for the uniform distribution, emulating these
algorithms in our setting requires exponentially many samples. Balcan et al. [2015] showed how to
instead first learn an approximation to C from the given samples and then compute cost shares for
the learned function, but their results hold only for a limited number of games and cost functions C.
We show that a more powerful approach is to compute cost shares directly from the data, without
explicitly learning the cost function first.
3 Approximating the Core from Samples
In this section, we consider the problem of finding cost allocations from samples that satisfy relaxations of the core. A natural approach to this problem is to first learn the underlying model, C, from
the data and to then compute a cost allocation for the learned function. As shown in Balcan et al.
[2015], this approach works if C is PAC-learnable, but there exist functions C that are not PAC-learnable and for which a cost allocation that approximately satisfies the core can still be computed.
The main result of this section shows that a cost allocation that approximates the core property can be
computed from samples for any function with a non-empty core. We first give a sample complexity
bound that is linear in the number n of players, a result which was independently discovered by
Balcan et al. [2016]. With a more intricate analysis, we then improve this sample complexity to be
logarithmic in n, but at the cost of a weaker relaxation. Our algorithm, which runs in polynomial
time, directly computes a cost allocation that empirically satisfies the core property, i.e., it satisfies
the core property on all of the samples. We argue, by leveraging VC-dimension and Rademacher
complexity-based generalization bounds, that the same cost allocation will likely satisfy the core
property on newly drawn samples as well. We also propose a stronger notion of the approximate core,
and prove that it cannot be computed by any algorithm. This hardness result is information theoretic
and is not due to running time limitations. The proofs in this section are deferred to Appendix B.
We begin by defining three notions of the approximate core: the probably approximately stable
(Balcan et al. [2016]), mostly approximately stable, and probably mostly approximately stable cores.
Definition 3. Given $\varepsilon, \alpha > 0$, a cost allocation $\psi$ such that $\sum_{i \in N} \psi_i = C(N)$ is in
• the probably approximately stable core if $\Pr_{S \sim D}\left[\sum_{i \in S} \psi_i \leq C(S)\right] \geq 1 - \varepsilon$ for all D
(see Balcan et al. [2015]),
• the mostly approximately stable core over D if $(1 - \alpha) \sum_{i \in S} \psi_i \leq C(S)$ for all $S \subseteq N$,
• the probably mostly approximately stable core if $\Pr_{S \sim D}\left[(1 - \alpha) \sum_{i \in S} \psi_i \leq C(S)\right] \geq 1 - \varepsilon$
for all D.
For each of these notions, our goal is to efficiently compute a cost allocation in the approximate core,
in the following sense.
Definition 4. A cost allocation $\psi$ is efficiently computable for the class of functions $\mathcal{C}$ over distribution D if, for all $C \in \mathcal{C}$ and any $\varepsilon, \delta, \alpha > 0$, given C(N) and $m = \mathrm{poly}(n, 1/\varepsilon, 1/\delta, 1/\alpha)$ samples
$(S_j, C(S_j))$ with each $S_j$ drawn i.i.d. from distribution D, there exists an algorithm that computes $\psi$
with probability at least $1 - \delta$ over both the samples and the choices of the algorithm.
We refer to the number of samples required to compute approximate cores as the sample complexity
of the algorithm. We first present our result for computing a probably approximately stable core with
sample complexity that is linear in the number of players, which was also independently discovered
by Balcan et al. [2016].
Theorem 1. The class of functions with a non-empty core has cost shares in the probably approximately stable core that are efficiently computable. The sample complexity is
$$O\!\left(\frac{n + \log(1/\delta)}{\varepsilon}\right).$$
The full proof of Theorem 1 is in Appendix B, and can be summarized as follows: We define a class
of halfspaces which contains the core. Since we assume that C has a non-empty core, there exists a
cost allocation $\psi$ in this class of halfspaces that satisfies both the core property on all the samples
and the balance property. Given a set of samples, such a cost allocation can be computed with a
simple linear program. We then use the VC-dimension of the class of halfspaces to show that the
performance on the samples generalizes well to the performance on the distribution D.
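The linear program from this sketch is short to write down. The following is a minimal feasibility version (our own illustration using scipy; the unbounded-variable choice reflects that shares may be negative in general): find $\psi$ with $\sum_i \psi_i = C(N)$ and $\sum_{i \in S_j} \psi_i \leq C(S_j)$ on every sample.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_core_lp(samples, C_N, n):
    """Find psi with sum_i psi_i = C(N) and sum_{i in S_j} psi_i <= C(S_j) on all samples."""
    A_ub = np.array([[1.0 if i in S else 0.0 for i in range(n)] for S, _ in samples])
    b_ub = np.array([float(c) for _, c in samples])
    res = linprog(c=np.zeros(n),                 # pure feasibility: constant objective
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n)), b_eq=np.array([C_N]),
                  bounds=[(None, None)] * n)     # shares may be negative in general
    return res.x if res.success else None
```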
We next show that the sample complexity dependence on n can be improved from linear to logarithmic
if we relax the goal from computing a cost allocation in the probably approximately stable core to
computing one in the probably mostly approximately stable core instead. The sample complexity
of our algorithm also depends on the spread of the function C, defined as $\sigma(C) = \frac{\max_S C(S)}{\min_{S \neq \emptyset} C(S)}$
(we assume $\min_{S \neq \emptyset} C(S) > 0$).
Theorem 2. The class of functions with a non-empty core has cost allocations in the probably mostly
approximately stable core that are efficiently computable with sample complexity
$$\frac{128\,\sigma(C)^2 \log(2n) + 8\,\sigma(C)^2 \log(2/\delta)}{(\varepsilon\alpha)^2} = O\!\left(\left(\frac{\sigma(C)}{\varepsilon\alpha}\right)^2 \left(\log n + \log(1/\delta)\right)\right),$$
where $\sigma(C) = \frac{\max_S C(S)}{\min_{S \neq \emptyset} C(S)}$ is the spread of C.
The full proof of Theorem 2 is in Appendix B. Its main steps are:
1. We find a cost allocation $\psi$ which satisfies the core property on all samples, restricting the
search to cost allocations with bounded $\ell_1$-norm. Such a cost allocation can be found
efficiently since the space of such cost allocations is convex.
2. The analysis begins by bounding the $\ell_1$-norm of any vector in the core (Lemma 3). Combined
with the assumption that the core is non-empty, this implies that a cost allocation satisfying
the previous conditions exists.
3. Let $[x]_+$ denote the function $x \mapsto \max(x, 0)$. Consider the following "loss" function:
$$\left[\frac{\sum_{i \in S} \psi_i}{C(S)} - 1\right]_+.$$
This loss function is convenient since it is equal to 0 if and only if the core property is
satisfied for S, and it is 1-Lipschitz, which is used in the next step.
4. Next, we bound the difference between the empirical loss and the expected loss for all $\psi$
with a known result using the Rademacher complexity of linear predictors with low $\ell_1$-norm
over Lipschitz loss functions (Theorem 10).
5. Finally, given $\psi$ which approximately satisfies the core property in expectation, we show that
$\psi$ is in the probably mostly approximately stable core by Markov's inequality (Lemma 4).
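Steps 1 and 3 can be mimicked with an off-the-shelf solver: minimize the empirical hinge-type loss subject to the balance constraint and an $\ell_1$-norm bound. The sketch below is our own (the solver choice is ours; the theoretical guarantee comes from the paper's analysis, not from this code), and it assumes every sampled cost $C(S_j)$ is strictly positive, consistent with the spread assumption.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_core_l1(samples, C_N, n, l1_bound):
    """Minimize mean_j [sum_{i in S_j} psi_i / C(S_j) - 1]_+
    subject to sum_i psi_i = C(N) and ||psi||_1 <= l1_bound."""
    A = np.array([[1.0 if i in S else 0.0 for i in range(n)] for S, _ in samples])
    b = np.array([float(c) for _, c in samples])  # assumes C(S_j) > 0 for all samples

    def loss(psi):
        return float(np.mean(np.maximum(A @ psi / b - 1.0, 0.0)))

    constraints = [
        {"type": "eq", "fun": lambda psi: np.sum(psi) - C_N},
        {"type": "ineq", "fun": lambda psi: l1_bound - np.sum(np.abs(psi))},
    ]
    res = minimize(loss, x0=np.full(n, C_N / n), method="SLSQP",
                   constraints=constraints)
    return res.x
```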
Since we obtained a probably mostly approximately stable core, a natural question is whether it is possible
to compute cost allocations that are mostly approximately stable over natural distributions. The
answer is negative in general: even for the restricted class of monotone submodular functions, which
always have a solution in the core, the core cannot be mostly approximated from samples, even over
the uniform distribution. The full proof of this impossibility theorem is in Appendix B.
Theorem 3. Cost allocations in the $(1/2 + \epsilon)$-mostly approximately stable core, i.e., such that for
all S,
$$\left(\frac{1}{2} + \epsilon\right) \sum_{i \in S} \psi_i \leq C(S),$$
cannot be computed for monotone submodular functions over the uniform distribution, for any
constant $\epsilon > 0$.
4 Approximating the Shapley Value from Samples
We turn our attention to the STATISTICAL COST SHARING problem in the context of the Shapley
value. Since the Shapley value exists and is unique for all functions, a natural relaxation is to simply
approximate this value from samples. The distributions we consider in this section are the uniform
distribution, and more generally product distributions, which are the standard distributions studied in
the learning literature for combinatorial functions [Balcan and Harvey, 2011, Balcan et al., 2012,
Feldman and Kothari, 2014, Feldman and Vondrak, 2014]. It is easy to see that we need some
restrictions on the distribution D (for example, if the empty set is drawn with probability one, the
Shapley value cannot be approximated).
For submodular functions with bounded curvature, we prove approximation bounds when samples
are drawn from the uniform or a bounded product distribution, and also show that the bound for
the uniform distribution is tight. However, we show that the Shapley value cannot be approximated
from samples even for coverage functions (which are a special case of submodular functions) and
the uniform distribution. Since coverage functions are learnable from samples, this implies the
counter-intuitive observation that learnability does not imply that the Shapley value is approximable
from samples. We defer the full proofs to Appendix C.
Definition 5. An algorithm $\alpha$-approximates, $\alpha \in (0, 1]$, the Shapley value of cost functions $\mathcal{C}$ over
distribution D if, for all $C \in \mathcal{C}$ and all $\delta > 0$, given $\mathrm{poly}(n, 1/\delta, 1/(1-\alpha))$ samples from D, it computes
Shapley value estimates $\tilde{\phi}^C$ such that $\alpha \phi_i \leq \tilde{\phi}_i \leq \frac{1}{\alpha} \phi_i$ for all $i \in N$ such that $\phi_i \geq 1/\mathrm{poly}(n)$
(see Appendix C for the general definition), with probability at least $1 - \delta$ over both the samples and
the choices made by the algorithm.
We consider submodular functions with bounded curvature, a common assumption in the submodular
maximization literature [Iyer and Bilmes, 2013, Iyer et al., 2013, Sviridenko et al., 2015, Vondrák,
2010]. Intuitively, the curvature of a submodular function bounds by how much the marginal
contribution of an element can decrease. This property is useful since the Shapley value of an element
can be written as a weighted sum of its marginal contributions over all sets.
Definition 6. A monotone submodular function C has curvature $\kappa \in [0, 1]$ if $C_{N \setminus \{i\}}(i) \geq (1 - \kappa)\,C(i)$ for all $i \in N$. This curvature is bounded if $\kappa < 1$.
An immediate consequence of this definition is that $C_S(i) \geq (1 - \kappa)\,C_T(i)$ for all S, T such that
$i \notin S \cup T$, by monotonicity and submodularity. The main tool used is estimates $\tilde{v}_i$ of expected
marginal contributions $v_i = \mathbb{E}_{S \sim D \mid i \notin S}[C_S(i)]$, where $\tilde{v}_i = \mathrm{avg}(S_i) - \mathrm{avg}(S_{-i})$ is the difference
between the average value of samples containing i and the average value of samples not containing i.
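Concretely, the estimates $\tilde{v}_i$ can be computed in one pass over the samples; the sketch below (ours) assumes each element appears in at least one sample and is absent from at least one sample:

```python
import numpy as np

def marginal_estimates(samples, n):
    """v_tilde_i = avg cost of samples containing i minus avg cost of samples missing i."""
    with_i = {i: [] for i in range(n)}
    without_i = {i: [] for i in range(n)}
    for S, c in samples:
        members = set(S)
        for i in range(n):
            (with_i if i in members else without_i)[i].append(c)
    return {i: np.mean(with_i[i]) - np.mean(without_i[i]) for i in range(n)}
```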
Theorem 4. Monotone submodular functions with bounded curvature $\kappa$ have Shapley value that is
$\sqrt{1 - \kappa}$-approximable from samples over the uniform distribution, which is tight, and $(1 - \kappa - \epsilon)$-approximable over any bounded product distribution for any constant $\epsilon > 0$.
Consider the algorithm which computes $\tilde{\phi}_i = \tilde{v}_i$. Note that
$$\phi_i = \mathbb{E}\!\left[C(S_\sigma^{<i} \cup \{i\}) - C(S_\sigma^{<i})\right] \geq (1 - \kappa)\,v_i > \frac{1 - \kappa}{1 + \epsilon}\,\tilde{v}_i > (1 - \kappa - \epsilon)\,\tilde{v}_i,$$
where the first inequality is by curvature and the second by Lemma 5, which shows that the estimates
$\tilde{v}_i$ of $v_i$ are arbitrarily good. The other direction follows similarly. The $\sqrt{1 - \kappa}$ result is the main
technical component of the upper bound. We describe two main steps:
1. The expected marginal contribution $\mathbb{E}_{S \sim \mathcal{U} \mid i \notin S, |S| = j}[C_S(i)]$ of i to a uniformly random set
S of size j is decreasing in j, which is by submodularity.
2. Since a uniformly random set has size concentrated close to n/2, this implies that roughly
half of the terms in the summation $\phi_i = \big(\sum_{j=0}^{n-1} \mathbb{E}_{S \sim \mathcal{U}_j \mid i \notin S}[C_S(i)]\big)/n$ are greater than $v_i$
and the other half of the terms are smaller.
For the tight lower bound, we show that there exist two functions that cannot be distinguished from
samples w.h.p. and that have an element with Shapley value which differs by an $\alpha^2$ factor.
We show that the Shapley value of coverage (and submodular) functions is not approximable from
samples in general, even though coverage functions are PMAC-learnable (Balcan and Harvey [2011])
from samples over any distribution [Badanidiyuru et al., 2012].
Theorem 5. There exists no constant $\alpha > 0$ such that coverage functions have Shapley value that is
$\alpha$-approximable from samples over the uniform distribution.
5 Data Dependent Shapley Value
The general impossibility result for computing the Shapley value from samples arises from the fact
that the concept was geared towards the query model, where the algorithm can ask for the cost of any
set $S \subseteq N$. In this section, we develop an analogue that is distribution-dependent. We denote it by
$\phi^{C,D}$ with respect to both C and D. We define four natural distribution-dependent axioms resembling
the Shapley value axioms, and then prove that our proposed value is the unique solution satisfying
them. This value can be approximated arbitrarily well in the statistical model for all functions. The
proofs are deferred to Appendix D. We start by stating the four axioms.
Definition 7. The data-dependent axioms for cost sharing functions are:
• Balance: $\sum_{i \in N} \phi_i^D = \mathbb{E}_{S \sim D}[C(S)]$,
• Symmetry: for all i and j, if $\Pr_{S \sim D}[|S \cap \{i, j\}| = 1] = 0$ then $\phi_i^D = \phi_j^D$,
• Zero element: for all i, if $\Pr_{S \sim D}[i \in S] = 0$ then $\phi_i^D = 0$,
• Additivity: for all i, if $D_1$, $D_2$, $\alpha$, and $\beta$ are such that $\alpha + \beta = 1$, then
$\phi_i^{\alpha D_1 + \beta D_2} = \alpha\,\phi_i^{D_1} + \beta\,\phi_i^{D_2}$, where $\Pr[S \sim \alpha D_1 + \beta D_2] = \alpha \cdot \Pr[S \sim D_1] + \beta \cdot \Pr[S \sim D_2]$.
The similarity to the original Shapley value axioms is readily apparent. The main distinction is that
we expect these to hold with regard to D, which captures the frequency with which different coalitions
S occur. Interpreting the axioms one by one, the balance property ensures that the expected cost is
always accounted for. The symmetry axiom states that if two elements always occur together, they
should have the same share, since they are indistinguishable. If an element is never observed, then it
should have zero share. Finally, costs should combine in a linear manner according to the distribution.
The data-dependent Shapley value is
$$\phi_i^D := \sum_{S : i \in S} \Pr[S \sim D] \cdot \frac{C(S)}{|S|}.$$
Informally, for every set S, the cost C(S) is divided equally between all elements in S and is weighted
with the probability that S occurs according to D. The main appeal of this cost allocation is the
following theorem.
Theorem 6. The data-dependent Shapley value is the unique value satisfying the four data-dependent
axioms.
The data-dependent Shapley value can be approximated from samples with the following empirical
data-dependent Shapley value:
$$\tilde{\phi}_i^D = \frac{1}{m} \sum_{S_j : i \in S_j} \frac{C(S_j)}{|S_j|}.$$
These estimates are arbitrarily good with arbitrarily high probability.
Theorem 7. The empirical data-dependent Shapley value approximates the data-dependent Shapley
value arbitrarily well, i.e.,
$$\left|\tilde{\phi}_i^D - \phi_i^D\right| < \epsilon$$
with $\mathrm{poly}(n, 1/\epsilon, 1/\delta)$ samples and with probability at least $1 - \delta$, for any $\delta, \epsilon > 0$.
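The estimator itself is a one-pass computation; a sketch (ours), with the convention that empty sets contribute nothing (consistent with $C(\emptyset) = 0$):

```python
def empirical_dd_shapley(samples, n):
    """phi_tilde_i = (1/m) * sum over samples S_j with i in S_j of C(S_j) / |S_j|."""
    m = len(samples)
    phi = [0.0] * n
    for S, c in samples:
        if S:  # the empty set contributes nothing
            for i in S:
                phi[i] += c / (len(S) * m)
    return phi
```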
6 Discussion and Future Work
We follow a recent line of work that studies classical algorithmic problems from a statistical perspective, where the input is restricted to a collection of samples. Our results fall into two categories:
we give results for approximating the Shapley value and the core, and we propose new cost sharing
concepts that are tailored for the statistical framework. We use techniques from multiple fields that
encompass statistical machine learning, combinatorial optimization, and, of course, cost sharing. The
cost sharing literature being very rich, the number of directions for future work is considerable. Obvious avenues include studying other cost sharing methods in this statistical framework, considering
other classes of functions for which to approximate known methods, and improving the sample complexity
of previous algorithms. More conceptually, an exciting modeling question arises when designing
"desirable" axioms from data. Traditionally these axioms only depended on the cost function, whereas
in this model they can depend on both the cost function and the distribution, providing an interesting
interplay.
References
Elliot Anshelevich, Anirban Dasgupta, Jon Kleinberg, Eva Tardos, Tom Wexler, and Tim Roughgarden. The price of stability for network design with fair cost allocation. SIAM Journal on
Computing, 38(4):1602?1623, 2008.
Yoram Bachrach, Evangelos Markakis, Ezra Resnick, Ariel D Procaccia, Jeffrey S Rosenschein, and
Amin Saberi. Approximating power indices: theoretical and empirical analysis. Autonomous
Agents and Multi-Agent Systems, 20(2):105?122, 2010.
Ashwinkumar Badanidiyuru, Shahar Dobzinski, Hu Fu, Robert Kleinberg, Noam Nisan, and Tim
Roughgarden. Sketching valuation functions. In Proceedings of the twenty-third annual ACMSIAM symposium on Discrete Algorithms, pages 1025?1035. Society for Industrial and Applied
Mathematics, 2012.
Maria-Florina Balcan and Nicholas JA Harvey. Learning submodular functions. In Proceedings of
the forty-third annual ACM symposium on Theory of computing, pages 793?802. ACM, 2011.
Maria-Florina Balcan, Florin Constantin, Satoru Iwata, and Lei Wang. Learning valuation functions.
In COLT, volume 23, pages 4?1, 2012.
Maria-Florina Balcan, Ariel D. Procaccia, and Yair Zick. Learning cooperative games. In Proceedings
of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos
Aires, Argentina, July 25-31, 2015, pages 475?481, 2015.
Maria-Florina Balcan, Ariel D Procaccia, and Yair Zick. Learning cooperative games. arXiv preprint
arXiv:1505.00039v2, 2016.
Eric Balkanski and Yaron Singer. Mechanisms for fair attribution. In Proceedings of the Sixteenth
ACM Conference on Economics and Computation, pages 529?546. ACM, 2015.
Olga N Bondareva. Some applications of linear programming methods to the theory of cooperative
games. Problemy kibernetiki, 10:119?139, 1963.
Xiaotie Deng, Toshihide Ibaraki, and Hiroshi Nagamochi. Algorithmic aspects of the core of
combinatorial optimization games. Mathematics of Operations Research, 24(3):751?766, 1999.
Nikhil R Devanur, Milena Mihail, and Vijay V Vazirani. Strategyproof cost-sharing mechanisms for
set cover and facility location games. Decision Support Systems, 39(1):11?22, 2005.
Shaheen S Fatima, Michael Wooldridge, and Nicholas R Jennings. A linear approximation method
for the shapley value. Artificial Intelligence, 172(14):1673?1699, 2008.
Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the cost of muliticast transmissions (preliminary version). In Proceedings of the thirty-second annual ACM symposium on
Theory of computing, pages 218?227. ACM, 2000.
Vitaly Feldman and Pravesh Kothari. Learning coverage functions and private release of marginals.
In COLT, pages 679?702, 2014.
Vitaly Feldman and Jan Vondrak. Optimal bounds on approximation of submodular and xos functions
by juntas. In Information Theory and Applications Workshop (ITA), 2014, pages 1?10. IEEE, 2014.
Donald B Gillies. Solutions to general non-zero-sum games. Contributions to the Theory of Games,
4(40):47?85, 1959.
Michel X Goemans and Martin Skutella. Cooperative facility location games. Journal of Algorithms,
50(2):194?214, 2004.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial
examples. CoRR, abs/1412.6572, 2014. URL http://arxiv.org/abs/1412.6572.
Nicole Immorlica, Mohammad Mahdian, and Vahab S Mirrokni. Limitations of cross-monotonic
cost-sharing schemes. ACM Transactions on Algorithms (TALG), 4(2):24, 2008.
Rishabh K Iyer and Jeff A Bilmes. Submodular optimization with submodular cover and submodular
knapsack constraints. In Advances in Neural Information Processing Systems, pages 2436?2444,
2013.
Rishabh K Iyer, Stefanie Jegelka, and Jeff A Bilmes. Curvature and optimal algorithms for learning
and minimizing submodular functions. In Advances in Neural Information Processing Systems,
pages 2742?2750, 2013.
Kamal Jain and Vijay V Vazirani. Equitable cost allocations via primal-dual-type algorithms. In
Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, pages 313?321.
ACM, 2002.
David Liben-Nowell, Alexa Sharp, Tom Wexler, and Kevin Woods. Computing shapley value in
supermodular coalitional games. In International Computing and Combinatorics Conference,
pages 568?579. Springer, 2012.
Irwin Mann. Values of large games, IV: Evaluating the electoral college by Monte Carlo techniques.
Rand Corporation, 1960.
Hervé Moulin. Incremental cost sharing: Characterization by coalition strategy-proofness. Social
Choice and Welfare, 16(2):279?320, 1999.
Hervé Moulin and Scott Shenker. Strategyproof sharing of submodular costs: budget balance versus
efficiency. Economic Theory, 18(3):511?533, 2001.
Martin Pál and Éva Tardos. Group strategy proof mechanisms via primal-dual algorithms. In
Foundations of Computer Science, 2003. Proceedings. 44th Annual IEEE Symposium on, pages
584?593. IEEE, 2003.
Alvin E Roth. The Shapley value: essays in honor of Lloyd S. Shapley. Cambridge University Press,
1988.
Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to
algorithms. Cambridge University Press, 2014.
Lloyd S Shapley. On balanced sets and cores. Naval research logistics quarterly, 14(4):453?460,
1967.
Lloyd S. Shapley. A value for n-person games. 1953.
Maxim Sviridenko, Jan Vondrák, and Justin Ward. Optimal approximation for submodular and
supermodular optimization with bounded curvature. In Proceedings of the Twenty-Sixth Annual
ACM-SIAM Symposium on Discrete Algorithms, pages 1134?1148. SIAM, 2015.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow,
and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013. URL
http://arxiv.org/abs/1312.6199.
John Von Neumann and Oskar Morgenstern. Theory of games and economic behavior. 1944.
Jan Vondrák. Submodularity and curvature: the optimal algorithm. RIMS Kokyuroku Bessatsu B, 23:
253?266, 2010.
Eyal Winter. The shapley value. Handbook of game theory with economic applications, 3:2025?2054,
2002.
The Expressive Power of Neural Networks: A View
from the Width
Zhou Lu1,3 ([email protected])
Hongming Pu1 ([email protected])
Zhiqiang Hu2 ([email protected])
Feicheng Wang1,3 ([email protected])
Liwei Wang2,3 ([email protected])
1, Department of Mathematics, Peking University
2, Key Laboratory of Machine Perception, MOE, School of EECS, Peking University
3, Center for Data Science, Peking University, Beijing Institute of Big Data Research
Abstract
The expressive power of neural networks is important for understanding deep
learning. Most existing works consider this problem from the view of the depth of
a network. In this paper, we study how width affects the expressiveness of neural
networks. Classical results state that depth-bounded (e.g. depth-2) networks with
suitable activation functions are universal approximators. We show a universal
approximation theorem for width-bounded ReLU networks: width-(n + 4) ReLU
networks, where n is the input dimension, are universal approximators. Moreover,
except for a measure zero set, all functions cannot be approximated by width-n
ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth-efficiency of neural networks. That
is, there are classes of deep networks which cannot be realized by any shallow
network whose size is no more than an exponential bound. Here we pose the dual
question on the width-efficiency of ReLU networks: Are there wide networks
that cannot be realized by narrow networks whose size is not substantially larger?
We show that there exist classes of wide networks which cannot be realized by
any narrow network whose depth is no more than a polynomial bound. On the
other hand, we demonstrate by extensive experiments that narrow networks whose
size exceed the polynomial bound by a constant factor can approximate wide and
shallow network with high accuracy. Our results provide more comprehensive
evidence that depth may be more effective than width for the expressiveness of
ReLU networks.
1
Introduction
Deep neural networks have achieved state-of-the-art performance in a wide range of tasks such
as speech recognition, computer vision, natural language processing, and so on. Despite their
promising results in applications, our theoretical understanding of neural networks remains limited.
The expressive power of neural networks, being one of the vital properties, is crucial on the way
towards a more thorough comprehension.
The expressive power describes neural networks? ability to approximate functions. This line of
research dates back at least to 1980?s. The celebrated universal approximation theorem states that
depth-2 networks with suitable activation function can approximate any continuous function on a
compact domain to any desired accuracy [3] [1] [9] [6]. However, the size of such a neural network
can be exponential in the input dimension, which means that the depth-2 network has a very large
width.
From a learning perspective, having universal approximation is just the first step. One must also
consider the efficiency, i.e., the size of the neural network to achieve approximation. Having a small
size requires an understanding of the roles of depth and width for the expressive power. Recently,
there have been a series of works trying to characterize how depth affects the expressiveness of a neural
network. [5] showed the existence of a 3-layer network which cannot be realized by any 2-layer
network to more than a constant accuracy if its size is subexponential in the dimension. [2] proved the existence
of classes of deep convolutional ReLU networks that cannot be realized by shallow ones if its size
is no more than an exponential bound. For any integer k, [15] explicitly constructed networks with
O(k 3 ) layers and constant width which cannot be realized by any network with O(k) layers whose
size is smaller than 2k . This type of results are referred to as depth efficiency of neural networks
on the expressive power: a reduction in depth results in exponential sacrifice in width. However, it
is worth noting that these are existence results. In fact, as pointed out in [2], proving existence is
inevitable; There is always a positive measure of network parameters such that deep nets can?t be
realized by shallow ones without substantially larger size. Thus we should explore more in addition
to proving existence.
Different to most of the previous works which investigate the expressive power in terms of the depth
of neural networks, in this paper we study the problem from the view of width. We argue that
an integration of both views will provide a better understanding of the expressive power of neural
networks.
Firstly, we prove a universal approximation theorem for width-bounded ReLU networks. Let n
denote the input dimension; we show that width-(n + 4) ReLU networks can approximate any
Lebesgue integrable function on n-dimensional space with respect to L1 distance. On the other hand,
except for a zero measure set, all Lebesgue integrable functions cannot be approximated by width-n
ReLU networks, which demonstrate a phase transition. Our result is a dual version of the classical
universal approximation theorem for depth-bounded networks.
Next, we explore quantitatively the role of width for the expressive power of neural networks. Similar
to the depth efficiency, we raise the following question on the width efficiency:
Are there wide ReLU networks that cannot be realized by any narrow network whose size is not
substantially increased?
We argue that investigation of the above question is important for an understanding of the roles of
depth and width for the expressive power of neural networks. Indeed, if the answer to this question is
yes, and the size of the narrow networks must be exponentially larger, then it is appropriate to say
that width has an equal importance as depth for neural networks.
In this paper, we prove that there exists a family of ReLU networks that cannot be approximated by
narrower networks whose depth increase is no more than polynomial. This polynomial lower bound
for width is significantly smaller than the exponential lower bound for depth. However, it does not
rule out the possibility of the existence of an exponential lower bound for width efficiency. On the
other hand, insights from the previous analysis suggest us to study if there is a polynomial upper
bound, i.e., a polynomial increase in depth and size suffices for narrow networks to approximate wide
and shallow networks. Theoretically proving a polynomial upper bound seems very difficult, and we
formally pose it as an open problem. Nevertheless, we conduct extensive experiments and the results
demonstrate that when the depth of the narrow network exceeds the polynomial lower bound by just a
constant factor, it can approximate wide shallow networks to a high accuracy. Together, these results
provide more comprehensive evidence that depth is more effective for the expressive power of ReLU
networks.
Our contributions are summarized as follows:
• We prove a Universal Approximation Theorem for Width-Bounded ReLU Networks. We
show that any Lebesgue-integrable function f from $\mathbb{R}^n$ to $\mathbb{R}$ can be approximated by a
fully-connected width-(n + 4) ReLU network to arbitrary accuracy with respect to the $L^1$
distance. In addition, except for a negligible set, all functions f from $\mathbb{R}^n$ to $\mathbb{R}$ cannot be
approximated by any ReLU network whose width is no more than n.
• We show a width efficiency polynomial lower bound. For integer k, there exists a class of
width-$O(k^2)$ and depth-2 ReLU networks that cannot be approximated by any width-$O(k^{1.5})$
and depth-k networks. On the other hand, experimental results demonstrate that networks
with size slightly larger than the lower bound achieve high approximation accuracy.
1.1 Related Work
Research analyzing the expressive power of neural networks dates back decades. As one of the
most classic works, Cybenko [3] proved that a fully-connected sigmoid neural network with one single
hidden layer can universally approximate any continuous univariate function on a bounded domain
with arbitrarily small error. Barron[1], Hornik et al.[9] ,Funahashi[6] achieved similar results. They
also generalize the sigmoid function to a large class of activation functions, showing that universal
approximation is essentially implied by the network structure. Delalleau et al.[4] showed that there
exists a family of functions which can be represented much more efficiently with deep networks than
with shallow ones as well.
Due to the development and success of deep neural networks recently, there have been much more
works discussing the expressive power of neural networks theoretically. Depth efficiency is among
the most typical results. Eldan et al. [5] showed the existence of a 3-layer network which cannot be
realized by any 2-layer network to more than a constant accuracy if its size is subexponential in the dimension.
Cohen et al. [2] proved the existence of classes of deep convolutional ReLU networks that cannot be
realized by shallow ones if their size is no more than an exponential bound. For any integer k, Telgarsky
[15] explicitly constructed networks with $O(k^3)$ layers and constant width which cannot be realized
by any network with O(k) layers whose size is smaller than $2^k$.
Other works turn to show deep networks' ability to approximate a wide range of functions. For
example, Liang et al. [12] showed that in order to approximate a function which is $\Theta(\log\frac{1}{\epsilon})$-order
derivable with error $\epsilon$ universally, a deep network with $O(\log\frac{1}{\epsilon})$ layers and $O(\mathrm{poly}\log\frac{1}{\epsilon})$ weights
suffices, but $\Theta(\mathrm{poly}\frac{1}{\epsilon})$ weights will be required if there are only $o(\log\frac{1}{\epsilon})$ layers. Yarotsky [16] showed
that $C^n$-functions on $\mathbb{R}^d$ with a bounded domain can be approximated with error $\epsilon$ universally by a
ReLU network with $O(\log\frac{1}{\epsilon})$ layers and $O\big((\frac{1}{\epsilon})^{\frac{d}{n}}\log\frac{1}{\epsilon}\big)$ weights. In addition, for results based on
classic theories, Harvey et al. [7] provided a nearly-tight bound for the VC-dimension of neural networks:
a network with W weights and L layers has VC-dimension $O(WL\log W)$ but $\Omega(WL\log\frac{W}{L})$.
Also, there are several works arguing for width's importance
from other aspects; for example, Nguyen et al. [11] show that if a deep architecture is at the same time
sufficiently wide at one hidden layer, then it has a well-behaved loss surface in the sense that almost
every critical point with full rank weight matrices is a global minimum, from the view of optimization.
The remainder of the paper is organized as follows. In Section 2 we introduce some background
knowledge needed in this article. In Section 3 we present our main result, the Width-Bounded
Universal Approximation Theorem; besides, we show two comparison results related to the theorem.
Then in Section 4 we explore quantitatively the role of width for the expressive power of neural
networks. Finally, Section 5 concludes. All proofs can be found in the Appendix, and we give proof
sketches in the main text as well.
2 Preliminaries
We begin by presenting basic definitions that will be used throughout the paper. A neural network
is a directed computation graph, where the nodes are computation units and the edges describe the
connection pattern among the nodes. Each node receives as input a weighted sum of activations
flowed through the edges, applies some kind of activation function, and releases the output via the
edges to other nodes. Neural networks are often organized in layers, so that nodes only receive signals
from the previous layer and only release signals to the next layer. A fully-connected neural network is
a layered neural network where there exists a connection between every two nodes in adjacent layers.
In this paper, we will study the fully-connected ReLU network, which is a fully-connected neural
3
network with Rectifier Linear Unit (ReLU) activation functions. The ReLU function ReLU : R ? R
can be formally defined as
ReLU(x) = max{x, 0}
(1)
The architecture of a neural network is often specified by the width and the depth of the network. The
depth h of a network is defined as its number of layers (including the output layer but excluding the input
layer), while the width $d_m$ of a network is defined to be the maximal number of nodes in a layer. The
number of input nodes, i.e. the input dimension, is denoted as n.
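For concreteness, a fully-connected ReLU network with prescribed width and depth takes only a few lines of numpy; this sketch is our own illustration (random parameters, arbitrary sizes), following the convention above that depth counts hidden plus output layers:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_network(x, weights, biases):
    """Fully-connected ReLU net: ReLU after every layer except the (linear) output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    W, b = weights[-1], biases[-1]
    return W @ h + b

# A width-5, depth-3 network on 2-dimensional input (2 hidden layers + output layer).
rng = np.random.default_rng(0)
dims = [2, 5, 5, 1]
weights = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(3)]
biases = [rng.standard_normal(dims[k + 1]) for k in range(3)]
y = relu_network(np.array([0.3, -0.7]), weights, biases)
```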
In this paper we study the expressive power of neural networks. The expressive power describes
neural networks' ability to approximate functions. We focus on Lebesgue-integrable functions. A
Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ is a Lebesgue-measurable function satisfying
$$\int_{\mathbb{R}^n} |f(x)|\,dx < \infty, \quad (2)$$
a class which contains continuous functions as well as discontinuous functions such as sgn. Because we
deal with Lebesgue-integrable functions, we adopt the $L^1$ distance as a measure of approximation error,
different from the $L^\infty$ distance used by some previous works which consider continuous functions.
3 Width-bounded ReLU Networks as Universal Approximator
In this section we consider universal approximation with width-bounded ReLU networks. The
following theorem is the main result of this section.
Theorem 1 (Universal Approximation Theorem for Width-Bounded ReLU Networks). For any
Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ and any $\epsilon > 0$, there exists a fully-connected ReLU
network $\mathcal{A}$ with width $d_m \leq n + 4$, such that the function $F_{\mathcal{A}}$ represented by this network satisfies
$$\int_{\mathbb{R}^n} |f(x) - F_{\mathcal{A}}(x)|\,dx < \epsilon. \quad (3)$$
The proof of this theorem is lengthy and is deferred to the supplementary material. Here we provide
an informal description of the high level idea.
For any Lebesgue integrable function and any predefined approximation accuracy, we explicitly
construct a width-(n + 4) ReLU network so that it can approximate the function to the given accuracy.
The network is a concatenation of a series of blocks. Each block satisfies the following properties:
1) It is a depth-(4n + 1) width-(n + 4) ReLU network.
2) It can approximate any Lebesgue integrable function which is uniformly zero outside a cube with
side length $\delta$ to a high accuracy;
3) It can store the output of the previous block, i.e., the approximation of other Lebesgue integrable
functions on different cubes;
4) It can sum up its current approximation and the memory of the previous approximations.
It is not difficult to see that the construction of the whole network is completed once we build the
blocks. We illustrate such a block in Figure 1 . In this block, each layer has n + 4 neurons. Each
rectangle in Figure 1 represents a neuron, and the symbols in the rectangle describes the output of
that neuron as a function of the block. Among the n + 4 neurons, n neurons simply transfer the input
coordinates. For the other 4 neurons, 2 neurons store the approximation fulfilled by previous blocks.
The other 2 neurons help to do the approximation on the current cube. The topology of the block is
rather simple. It is very sparse, each neuron connects to at most 2 neurons in the next layer.
The proof is just to verify the construction illustrated in Figure 1 is correct. Because of the space
limit, we defer all the details to the supplementary materials.
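Although the full block is intricate, its basic ingredient is already visible in one dimension: four ReLU units form a trapezoid that converges to the indicator of an interval in $L^1$. The sketch below is our simplified illustration, not the paper's construction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def soft_indicator(x, a, b, delta=0.01):
    """Ramps from 0 to 1 on [a, a+delta] and back down on [b-delta, b];
    its L1 distance to the indicator of [a, b] is at most delta."""
    return (relu(x - a) - relu(x - a - delta)
            - relu(x - b + delta) + relu(x - b)) / delta
```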
Theorem 1 can be regarded as a dual version of the classical universal approximation theorem, which
proves that depth-bounded networks are universal approximators. If we ignore the size of the network,
Figure 1: One block to simulate the indicator function on $[a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$. For k
from 1 to n, we "chop" two sides in the kth dimension, and for every k the "chopping" process is
completed within a 4-layer sub-network as we show in Figure 1. It is stored in the (n+3)th node as
$L_n$ in the last layer of $\mathcal{A}$. We then use a single layer to record it in the (n+1)th or the (n+2)th node,
and reset the last two nodes to zero. Now the network is ready to simulate another (n+1)-dimensional
cube.
both depth and width themselves are efficient for universal approximation. At the technical level
however, there are a few differences between the two universal approximation theorems. The classical
depth-bounded theorem considers continuous functions on a compact domain and uses the $L^\infty$ distance;
our width-bounded theorem instead deals with Lebesgue-integrable functions on the whole Euclidean
space and therefore uses the $L^1$ distance.
Theorem 1 implies that there is a phase transition for the expressive power of ReLU networks as the
width of the network varies across n, the input dimension. It is not difficult to see that if the width is
much smaller than n, then the expressive power of the network must be very weak. Formally, we
have the following two results.
Theorem 2. For any Lebesgue-integrable function $f : \mathbb{R}^n \to \mathbb{R}$ satisfying that $\{x : f(x) \neq 0\}$ is a
positive measure set in Lebesgue measure, and any function $F_{\mathcal{A}}$ represented by a fully-connected
ReLU network $\mathcal{A}$ with width $d_m \leq n$, the following equation holds:
$$\int_{\mathbb{R}^n} |f(x) - F_{\mathcal{A}}(x)|\,dx = +\infty \quad \text{or} \quad \int_{\mathbb{R}^n} |f(x)|\,dx. \quad (4)$$
Theorem 2 says that even when the width equals n, the approximation ability of the ReLU network is still
weak, at least on the Euclidean space $\mathbb{R}^n$. If we restrict the function to a bounded set, we can still
prove the following theorem.
Theorem 3. For any continuous function $f : [-1, 1]^n \to \mathbb{R}$ which is not constant along any direction,
there exists a universal $\epsilon^* > 0$ such that for any function $F_{\mathcal{A}}$ represented by a fully-connected ReLU
network with width $d_m \leq n - 1$, the $L^1$ distance between f and $F_{\mathcal{A}}$ is at least $\epsilon^*$:
$$\int_{[-1,1]^n} |f(x) - F_{\mathcal{A}}(x)|\,dx \geq \epsilon^*. \quad (5)$$
Theorem 3 thus contrasts directly with Theorem 1, since in Theorem 1 the $L^1$ distance can be made
arbitrarily small.
The main idea of the two theorems is to exploit the disadvantage brought by the insufficiency of
dimension. If the corresponding first-layer values of two different input points are the same, the
output will be the same as well. When the ReLU network's width is not larger than the input layer's
width, we can find, for "most" points, a ray such that the ray passes through the point and the
corresponding first-layer values on the ray are the same. This is akin to a dimension reduction caused
by insufficient width. Utilizing this weakness of thin networks, we can finally prove the two theorems.
4 Width Efficiency vs. Depth Efficiency
Going deeper and deeper has been a trend in recent years, starting from the 8-layer AlexNet [10],
the 19-layer VGG [13], the 22-layer GoogLeNet [14], and finally to the 152-layer and 1001-layer
ResNets [8]. The superiority of a larger depth has been extensively shown in the applications of many
areas. For example, ResNet has largely advanced the state-of-the-art performance in computer vision
related fields, which is claimed solely due to the extremely deep representations. Despite of the great
practical success, theories of the role of depth are still limited.
Theoretical understanding of the strength of depth starts from analyzing the depth efficiency, by
proving the existence of deep neural networks that cannot be realized by any shallow network whose
size is exponentially larger. However, we argue that even for a comprehensive understanding of the
depth itself, one needs to study the dual problem of width efficiency: Because, if we switch the role
of depth and width in the depth efficiency theorems and the resulting statements remain true, then
width would have the same power as depth for the expressiveness, at least in theory. It is worth noting
that a priori, depth efficiency theorems do not imply anything about the validity of width efficiency.
In this section, we study the width efficiency of ReLU networks quantitatively.
Theorem 4. Let n be the input dimension. For any integer $k \geq n + 4$, there exists $F_{\mathcal{A}} : \mathbb{R}^n \to \mathbb{R}$
represented by a ReLU neural network $\mathcal{A}$ with width $d_m = 2k^2$ and depth $h = 3$, such that for any
constant $b > 0$, there exists $\epsilon > 0$ such that for any function $F_{\mathcal{B}} : \mathbb{R}^n \to \mathbb{R}$ represented by a ReLU neural
network $\mathcal{B}$ whose parameters are bounded in $[-b, b]$ with width $d_m \leq k^{3/2}$ and depth $h \leq k + 2$,
the following inequality holds:
$$\int_{\mathbb{R}^n} |F_{\mathcal{A}} - F_{\mathcal{B}}|\,dx \geq \epsilon. \quad (6)$$
Theorem 4 states that there are networks such that reducing width requires increasing in the size to
compensate, which is similar to that of depth qualitatively. However, at the quantitative level, this
theorem is very different to the depth efficiency theorems in [15] [5][2]. Depth efficiency enjoys
exponential lower bound, while for width Theorem 4 is a polynomial lower bound. Of course if a
corresponding polynomial upper bound can be proven, we can say depth plays a more important role
in efficiency, but such a polynomial lower bound still means that depth is not strictly stronger than
width in efficiency; sometimes a narrow network must spend super-linearly more nodes than a wide one.
This raises a natural question: Can we improve the polynomial lower bound? There are at least two
possibilities.
1) Width efficiency has exponential lower bound. To be concrete, there are wide networks that cannot
be approximated by any narrow networks whose size is no more than an exponential bound.
2) Width efficiency has polynomial upper bound. Every wide network can be approximated by a
narrow network whose size increase is no more than a polynomial.
Exponential lower bound and polynomial upper bound have completely different implications. If
exponential lower bound is true, then width and depth have the same strength for the expressiveness,
at least in theory. If the polynomial upper bound is true, then depth plays a significantly stronger role
for the expressive power of ReLU networks.
Currently, neither the exponential lower bound nor the polynomial upper bound seems within the
reach. We pose it as a formal open problem.
4.1 Experiments
We further conduct extensive experiments to provide some insight into the upper bound of such an approximation. To this end, we study a series of network architectures with varied width. For each network architecture, we randomly sample the parameters, which, together with the architecture, represent the function that we would like narrower networks to approximate. The approximation error is empirically calculated as the mean square error between the target function and the approximator function evaluated on a series of uniformly placed inputs. For simplicity and clarity, we refer to the network architectures that represent the target functions when assigned parameters as target networks, and the corresponding network architectures for approximator functions as approximator networks.
In detail, the target networks are fully-connected ReLU networks of input dimension $n$, output dimension 1, width $2k^2$, and depth 3, for $n = 1, 2$ and $k = 3, 4, 5$. For each of these networks, we sample weight parameters according to the standard normal distribution and bias parameters according to the uniform distribution over $[-1, 1)$. The network and the sampled parameters collectively represent a target function, which we approximate using a narrow approximator network of width $3k^{3/2}$ and depth $k + 2$, with the corresponding $k$. The architectures are designed in accordance with Theorem 4: we aim to investigate whether such a lower bound is actually an upper bound. To empirically calculate the approximation error, 20000 uniformly placed inputs from $[-1, 1)^n$ for $n = 1$ and 40000 such inputs for $n = 2$ are evaluated by the target function and the approximator function, and the mean square error is reported. For each target network, we repeat the parameter-sampling process 50 times and report the mean square error in the worst and average case.
We adopt the standard supervised learning approach, searching the parameter space of the approximator network for the best approximator function. Specifically, half of all the test inputs from $[-1, 1)^n$, together with the corresponding values of the target function, constitute the training set. The training set is used to train the approximator network with a mini-batch AdaDelta optimizer and learning rate 1.0. The parameters of the approximator network are randomly initialized according to [8]. The training process runs for 100 epochs for $n = 1$ and 200 epochs for $n = 2$; the best approximator function is recorded.
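The following is a minimal sketch (ours, assuming PyTorch) of the experiment just described. We treat "depth" as the number of weight layers, round the approximator width $3k^{3/2}$ to the nearest integer, draw test inputs uniformly at random rather than on an exact grid, and leave the approximator at PyTorch's default initialization instead of the scheme of [8]; all of these are simplifying assumptions.

```python
# Sketch of the width-efficiency experiment: a random wide-shallow target
# network is approximated by a narrow-deep network.
import torch
import torch.nn as nn

def make_net(n_in, width, depth):
    """Fully-connected ReLU net with `depth` weight layers and scalar output."""
    dims = [n_in] + [width] * (depth - 1) + [1]
    layers = []
    for i in range(depth):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < depth - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

n, k = 1, 3
target = make_net(n, 2 * k**2, 3)
for p in target.parameters():              # random target function:
    if p.dim() > 1:
        nn.init.normal_(p)                 # weights ~ N(0, 1)
    else:
        nn.init.uniform_(p, -1.0, 1.0)     # biases ~ U[-1, 1)

x = torch.rand(20000, n) * 2 - 1           # inputs in [-1, 1)^n
with torch.no_grad():
    y = target(x)

approx = make_net(n, round(3 * k**1.5), k + 2)    # narrow, deeper approximator
opt = torch.optim.Adadelta(approx.parameters(), lr=1.0)
train_x, train_y = x[::2], y[::2]          # half of the inputs form the training set
for epoch in range(100):
    for i in range(0, len(train_x), 256):
        opt.zero_grad()
        loss = ((approx(train_x[i:i+256]) - train_y[i:i+256]) ** 2).mean()
        loss.backward()
        opt.step()

with torch.no_grad():                      # approximation error on all inputs
    mse = ((approx(x) - y) ** 2).mean()
```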
Table 1 lists the results. Figure 2 illustrates the comparison of an example target function and the corresponding approximator function for $n = 1$ and $k = 5$. Note that the target function values vary on a scale of roughly 10 in the given domain, so the (absolute) mean square error is indeed a reasonable measure of the approximation error. The approximation error is shown to be very small for the target networks and approximator networks we study. From Figure 2 we can see that the approximator function is so close to the target function that we have to enlarge a local region to better display the difference. Since the architectures of both the target networks and the approximator networks are determined according to Theorem 4, where the size of the approximator networks is polynomial in that of the target networks, the empirical results give an indication that a polynomially larger depth may be sufficient for a narrow network to approximate a wide network.
5 Conclusion
In this paper, we analyze the expressive power of neural networks from the viewpoint of width, in contrast to many previous works that focus on depth. We establish the Universal Approximation Theorem for Width-Bounded ReLU Networks, a counterpart to the well-known Universal Approximation Theorem, which studies depth-bounded networks. Our result demonstrates a phase transition in expressive power as the width of a ReLU network with a given input dimension varies.
We also explore the role of width in the expressive power of neural networks: we prove that a wide network cannot be approximated by a narrow network unless the narrow network has polynomially more nodes, which gives a lower bound on the number of nodes needed for approximation.
Table 1: Empirical study results. $n$ denotes the input dimension and $k$ is defined in Theorem 4; the width/depth of both the target network and the approximator network are determined in accordance with Theorem 4. We report the mean square error in the worst and average case over 50 runs of randomly sampled parameters for the target network.
n    k    target width    target depth    approx. width    approx. depth    worst case error    average case error
1    3         18              3               16               5              0.002248             0.000345
1    4         36              3               24               6              0.003263             0.000892
1    5         50              3               34               7              0.005643             0.001296
2    3         18              3               16               5              0.008729             0.001990
2    4         36              3               24               6              0.018852             0.006251
2    5         50              3               34               7              0.030114             0.007984
Figure 2: Comparison of an example target function and the corresponding approximator function for
n = 1 and k = 5. A local region is enlarged to better display the difference.
We pose open problems on whether an exponential lower bound or a polynomial upper bound holds for width efficiency, which we think is crucial on the way to a more thorough understanding of the expressive power of neural networks. Our experimental results support the polynomial upper bound and agree with our intuition and insights from the analysis.
The width and the depth are two key components in the design of a neural network architecture. Both are important and should be carefully tuned together for the best performance, since the depth may determine the level of abstraction while the width may influence the loss of information in the forward pass. A comprehensive understanding of the expressive power of neural networks requires a view from both sides.
Acknowledgments
This work was partially supported by National Basic Research Program of China (973 Program)
(grant no. 2015CB352502), NSFC (61573026), and the elite undergraduate training program of
School of Mathematical Science in Peking University. We would like to thank the anonymous
reviewers for their valuable comments on our paper.
References
[1] Andrew R. Barron. Approximation and estimation bounds for artificial neural networks. Machine Learning, 14(1):115–133, 1994.
[2] Nadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pages 698–728, 2016.
[3] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303–314, 1989.
[4] Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666–674, 2011.
[5] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on Learning Theory, pages 907–940, 2016.
[6] Ken-Ichi Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(3):183–192, 1989.
[7] Nick Harvey, Chris Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. COLT 2017, 2017.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[9] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[11] Quynh Nguyen and Matthias Hein. The loss surface of deep and wide neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2603–2612, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
[12] Shiyu Liang and R. Srikant. Why deep neural networks for function approximation? ICLR 2017, 2017.
[13] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[14] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
[15] Matus Telgarsky. Benefits of depth in neural networks. COLT 2016: 1517–1539, 2016.
[16] Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. arXiv preprint arXiv:1610.01145, 2016.
Spectrally-normalized margin bounds for neural networks
Peter L. Bartlett∗
Dylan J. Foster†
Matus Telgarsky‡
Abstract
This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized spectral complexity: their Lipschitz
constant, meaning the product of the spectral norms of the weight matrices, times
a certain correction factor. This bound is empirically investigated for a standard
AlexNet network trained with SGD on the mnist and cifar10 datasets, with both
original and random labels; the bound, the Lipschitz constants, and the excess risks
are all in direct correlation, suggesting both that SGD selects predictors whose
complexity scales with the difficulty of the learning task, and secondly that the
presented bound is sensitive to this complexity.
1 Overview
Neural networks owe their astonishing success not only to their ability to fit any data set: they also generalize well, meaning they provide a close fit on unseen data. A classical statistical adage is that models capable of fitting too much will generalize poorly; what's going on here?
Let's navigate the many possible explanations provided by statistical theory. A first observation is that any analysis based solely on the number of possible labellings of a finite training set, as is the case with VC dimension, is doomed: if the function class can fit all possible labels (as is the case with neural networks in standard configurations [Zhang et al., 2017]), then this analysis cannot distinguish it from the collection of all possible functions!
[Figure 1 legend: curves for cifar excess risk, cifar Lipschitz, cifar Lipschitz/margin, cifar [random] excess risk, and cifar [random] Lipschitz; vertical axis spans excess risk 0.3 to 0.9, horizontal axis spans epochs 10 to 100.]
Figure 1: An analysis of AlexNet [Krizhevsky et al., 2012] trained with SGD on cifar10, both with original and with random labels. Triangle-marked curves track excess risk across training epochs (on a log scale), with an 'x' marking the earliest epoch with zero training error. Circle-marked curves track Lipschitz constants, normalized so that the two curves for random labels meet. The Lipschitz constants tightly correlate with excess risk, and moreover normalizing them by margins (resulting in the square-marked curve) neutralizes growth across epochs.
∗ <[email protected]>; University of California, Berkeley and Queensland University of Technology.
† <[email protected]>; Cornell University.
‡ <[email protected]>; University of Illinois, Urbana-Champaign.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Next let's consider scale-sensitive measures of complexity, such as Rademacher complexity and metric entropy, which work directly with real-valued function classes, and moreover are sensitive to their magnitudes. Figure 1 plots the excess risk (the test error minus the training error) across training epochs against one candidate scale-sensitive complexity measure, the Lipschitz constant of the network (the product of the spectral norms of its weight matrices), and demonstrates that they are tightly correlated (which is not the case for, say, the $\ell_2$ norm of the weights). The data considered in Figure 1 is the standard cifar10 dataset, both with original and with random labels, which has been used as a sanity check when investigating neural network generalization [Zhang et al., 2017].
There is still an issue with basing a complexity measure purely on the Lipschitz constant (although
it has already been successfully employed to regularize neural networks [Cisse et al., 2017]): as
depicted in Figure 1, the measure grows over time, despite the excess risk plateauing. Fortunately,
there is a standard resolution to this issue: investigating the margins (a precise measure of confidence)
of the outputs of the network. This tool has been used to study the behavior of 2-layer networks,
boosting methods, SVMs, and many others [Bartlett, 1996, Schapire et al., 1997, Boucheron et al.,
2005]; in boosting, for instance, there is a similar growth in complexity over time (each training
iteration adds a weak learner), whereas margin bounds correctly stay flat or even decrease. This
behavior is recovered here: as depicted in Figure 1, even though standard networks exhibit growing
Lipschitz constants, normalizing these Lipschitz constants by the margin instead gives a decaying
curve.
1.1 Contributions
This work investigates a complexity measure for neural networks that is based on the Lipschitz constant, but normalized by the margin of the predictor. The two central contributions are as follows.
• Theorem 1.1 below will give the rigorous statement of the generalization bound that is the basis of this work. In contrast to prior work, this bound: (a) scales with the Lipschitz constant (product of spectral norms of weight matrices) divided by the margin; (b) has no dependence on combinatorial parameters (e.g., number of layers or nodes) outside of log factors; (c) is multiclass (with no explicit dependence on the number of classes); (d) measures complexity against a reference network (e.g., for the ResNet [He et al., 2016], the reference network has identity mappings at each layer). The bound is stated below, with a general form and analysis summary appearing in Section 3 and the full details relegated to the appendix.
• An empirical investigation, in Section 2, of neural network generalization on the standard datasets cifar10, cifar100, and mnist using the preceding bound. Rather than using the bound to provide a single number, it can be used to form a margin distribution as in Figure 2. These margin distributions will illuminate the following intuitive observations: (a) cifar10 is harder than mnist; (b) random labels make cifar10 and mnist much more difficult; (c) the margin distributions (and bounds) converge during training, even though the weight matrices continue to grow; (d) $\ell_2$ regularization ('weight decay') does not significantly impact margins or generalization.
A more detailed description of the margin distributions is as follows. Suppose a neural network computes a function $f : \mathbb{R}^d \to \mathbb{R}^k$, where $k$ is the number of classes; the most natural way to convert this to a classifier is to select the output coordinate with the largest magnitude, meaning $x \mapsto \arg\max_j f(x)_j$. The margin, then, measures the gap between the output for the correct label and other labels, meaning $f(x)_y - \max_{j \neq y} f(x)_j$.
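As a concrete illustration, here is a short numpy sketch (ours) of this multiclass margin for a batch of examples; the function and variable names are ours, not the paper's.

```python
# Multiclass margins f(x)_y - max_{j != y} f(x)_j for a batch of examples.
# `logits` has shape (n, k); `labels` has shape (n,).
import numpy as np

def margins(logits, labels):
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]      # f(x)_y
    rival = logits.copy()
    rival[np.arange(n), labels] = -np.inf       # exclude the true class
    return correct - rival.max(axis=1)          # negative iff misclassified

# Example: two points, three classes.
logits = np.array([[2.0, 0.5, -1.0], [0.0, 1.0, 3.0]])
print(margins(logits, np.array([0, 1])))        # [ 1.5, -2.0 ]
```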
Unfortunately, margins alone do not seem to say much; see for instance Figure 2a, where the collections of all margins for all data points (the unnormalized margin distribution) are similar for cifar10 with and without random labels. What is missing is an appropriate normalization, as in Figure 2b. This normalization is provided by Theorem 1.1, which can now be explained in detail.
To state the bound, a little bit of notation is necessary. The networks will use $L$ fixed nonlinearities $(\sigma_1, \ldots, \sigma_L)$, where $\sigma_i : \mathbb{R}^{d_{i-1}} \to \mathbb{R}^{d_i}$ is $\rho_i$-Lipschitz (e.g., as with coordinate-wise ReLU and max-pooling, as discussed in Appendix A.1); occasionally, it will also hold that $\sigma_i(0) = 0$. Given $L$ weight matrices $A = (A_1, \ldots, A_L)$, let $F_A$ denote the function computed by the corresponding network:
$$F_A(x) := \sigma_L(A_L \, \sigma_{L-1}(A_{L-1} \cdots \sigma_1(A_1 x) \cdots)). \qquad (1.1)$$
[Figure 2 legends: cifar vs. random (panel a); cifar vs. cifar random (panel b); margins centered at 0.]
(a) Margins. (b) Normalized margins.
Figure 2: Margin distributions at the end of training AlexNet on cifar10, with and without random
labels. With proper normalization, random labels demonstrably correspond to a harder problem.
The network output $F_A(x) \in \mathbb{R}^{d_L}$ (with $d_0 = d$ and $d_L = k$) is converted to a class label in $\{1, \ldots, k\}$ by taking the $\arg\max$ over components, with an arbitrary rule for breaking ties. Whenever input data $x_1, \ldots, x_n \in \mathbb{R}^d$ are given, collect them as rows of a matrix $X \in \mathbb{R}^{n \times d}$. Occasionally, notation will be overloaded to discuss $F_A(X^T)$, a matrix whose $i$th column is $F_A(x_i)$. Let $W$ denote the maximum of $\{d, d_1, \ldots, d_L\}$. The $\ell_2$ norm $\|\cdot\|_2$ is always computed entry-wise; thus, for a matrix, it corresponds to the Frobenius norm.
Next, define a collection of reference matrices $(M_1, \ldots, M_L)$ with the same dimensions as $A_1, \ldots, A_L$; for instance, to obtain a good bound for ResNet [He et al., 2016], it is sensible to set $M_i := I$, the identity map, and the bound below will worsen as the network moves farther from the identity map; for AlexNet [Krizhevsky et al., 2012], the simple choice $M_i = 0$ suffices. Finally, let $\|\cdot\|_\sigma$ denote the spectral norm, and let $\|\cdot\|_{p,q}$ denote the $(p, q)$ matrix norm, defined by $\|A\|_{p,q} := \big\| (\|A_{:,1}\|_p, \ldots, \|A_{:,m}\|_p) \big\|_q$ for $A \in \mathbb{R}^{d \times m}$. The spectral complexity $R_{F_A} = R_A$ of a network $F_A$ with weights $A$ is defined as
$$R_A := \left( \prod_{i=1}^{L} \rho_i \|A_i\|_\sigma \right) \left( \sum_{i=1}^{L} \frac{\|A_i^\top - M_i^\top\|_{2,1}^{2/3}}{\|A_i\|_\sigma^{2/3}} \right)^{3/2}. \qquad (1.2)$$
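Under our reading of eq. (1.2), the spectral complexity can be computed from the weight matrices directly; the following numpy sketch (ours) is one such implementation, using the fact that $\|A^\top - M^\top\|_{2,1}$ is the sum of the $\ell_2$ norms of the rows of $A - M$.

```python
# Spectral complexity R_A of eq. (1.2), given per-layer weights A_i,
# reference matrices M_i, and Lipschitz constants rho_i of the nonlinearities.
import numpy as np

def spectral_complexity(As, Ms, rhos):
    specs = [np.linalg.norm(A, ord=2) for A in As]        # spectral norms
    lipschitz = np.prod([r * s for r, s in zip(rhos, specs)])
    # ||A^T - M^T||_{2,1} = sum of l2 norms of the rows of (A - M).
    ratios = [np.linalg.norm(A - M, axis=1).sum() ** (2/3) / s ** (2/3)
              for A, M, s in zip(As, Ms, specs)]
    return lipschitz * sum(ratios) ** 1.5

# Toy usage: a 2-layer ReLU network with zero reference matrices.
As = [np.random.randn(64, 32), np.random.randn(10, 64)]
Ms = [np.zeros_like(A) for A in As]
print(spectral_complexity(As, Ms, rhos=[1.0, 1.0]))
```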
The following theorem provides a generalization bound for neural networks whose nonlinearities are fixed but whose weight matrices $A$ have bounded spectral complexity $R_A$.
Theorem 1.1. Let nonlinearities $(\sigma_1, \ldots, \sigma_L)$ and reference matrices $(M_1, \ldots, M_L)$ be given as above (i.e., $\sigma_i$ is $\rho_i$-Lipschitz and $\sigma_i(0) = 0$). Then for $(x, y), (x_1, y_1), \ldots, (x_n, y_n)$ drawn iid from any probability distribution over $\mathbb{R}^d \times \{1, \ldots, k\}$, with probability at least $1 - \delta$ over $((x_i, y_i))_{i=1}^n$, every margin $\gamma > 0$ and network $F_A : \mathbb{R}^d \to \mathbb{R}^k$ with weight matrices $A = (A_1, \ldots, A_L)$ satisfy
$$\Pr\Big[ \arg\max_j F_A(x)_j \neq y \Big] \;\leq\; \widehat{R}_\gamma(F_A) + \widetilde{O}\left( \frac{\|X\|_2 R_A}{\gamma n} \ln(W) + \sqrt{\frac{\ln(1/\delta)}{n}} \right),$$
where $\widehat{R}_\gamma(f) \leq n^{-1} \sum_i \mathbf{1}\big[ f(x_i)_{y_i} \leq \gamma + \max_{j \neq y_i} f(x_i)_j \big]$ and $\|X\|_2 = \sqrt{\sum_i \|x_i\|_2^2}$.
The full proof and a generalization beyond spectral norms are relegated to the appendix, but a sketch is provided in Section 3, along with a lower bound. Section 3 also gives a discussion of related work: briefly, it's essential to note that margin and Lipschitz-sensitive bounds have a long history in the neural networks literature [Bartlett, 1996, Anthony and Bartlett, 1999, Neyshabur et al., 2015]; the distinction here is the sensitivity to the spectral norm, and that there is no explicit appearance of combinatorial quantities such as numbers of parameters or layers (outside of log terms, and indices to summations and products).
To close, miscellaneous observations and open problems are collected in Section 4.
2 Generalization case studies via margin distributions
In this section, we empirically study the generalization behavior of neural networks, via margin
distributions and the generalization bound stated in Theorem 1.1.
[Figure 3 legends: cifar, cifar random, mnist, mnist random, cifar100, random label, random input; margins centered at 0.]
(a) Mnist is easier than cifar10. (b) Random mnist is as hard as random cifar10!
(c) cifar100 is (almost) as hard as cifar10 with random labels! (d) Random inputs are harder than random labels.
Figure 3: A variety of margin distributions. Axes are re-scaled in Figure 3a, but identical in the other subplots; the cifar10 (blue) and random cifar10 (green) distributions are the same each time.
Before proceeding with the plots, it's a good time to give a more refined description of the margin distribution, one that is suitable for comparisons across datasets. Given $n$ pattern/label pairs $((x_i, y_i))_{i=1}^n$, with patterns as rows of a matrix $X \in \mathbb{R}^{n \times d}$, and given a predictor $F_A : \mathbb{R}^d \to \mathbb{R}^k$, the (normalized) margin distribution is the univariate empirical distribution of the labeled data points, each transformed into a single scalar according to
$$(x, y) \mapsto \frac{F_A(x)_y - \max_{i \neq y} F_A(x)_i}{R_A \|X\|_2 / n},$$
where the spectral complexity $R_A$ is from eq. (1.2). The normalization is thus derived from the bound in Theorem 1.1, but ignoring log terms.
Taken this way, the two margin distributions for two datasets can be interpreted as follows. Considering any fixed point on the horizontal axis, if the cumulative distribution of one density is lower than
the other, then it corresponds to a lower right hand side in Theorem 1.1. For no reason other than
visual interpretability, the plots here will instead depict a density estimate of the margin distribution.
The vertical and horizontal axes are rescaled in different plots, but the random and true cifar10
margin distributions are always the same.
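Combining the margin and spectral-complexity sketches above gives the normalized margin distribution directly; a minimal helper (ours):

```python
# Normalized margin distribution, reusing margins() and spectral_complexity()
# from the sketches above; X holds the inputs as rows, so its Frobenius norm
# equals ||X||_2 as defined in the text.
def normalized_margins(logits, labels, X, As, Ms, rhos):
    scale = spectral_complexity(As, Ms, rhos) * np.linalg.norm(X) / len(X)
    return margins(logits, labels) / scale
```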
A little more detail about the experimental setup is as follows. All experiments were implemented in Keras [Chollet et al., 2015]. In order to minimize conflating effects of optimization and regularization, the optimization method was vanilla SGD with step size 0.01, and all regularization (weight decay, batch normalization, etc.) was disabled. 'cifar' in general refers to cifar10; however, cifar100 will also be explicitly mentioned. The network architecture is essentially AlexNet [Krizhevsky et al., 2012] with all normalization/regularization removed, and with no adjustments of any kind (even to the learning rate) across the different experiments.
Comparing datasets. A first comparison is of cifar10 and the standard mnist digit data. mnist is considered 'easy', since any of a variety of methods can achieve roughly 1% test error. The 'easiness' is corroborated by Figure 3a, where the margin distribution for mnist places all its mass far to the right of the mass for cifar10. Interestingly, randomizing the labels of mnist, as in Figure 3b, results in a margin distribution to the left of not only cifar10, but also slightly to the left of (but close to) cifar10 with randomized labels.
[Figure 4 legends: curves at 10, 20, 40, 80, and 160 epochs (panel a); several $\ell_2$ regularization strengths (panel b).]
(a) Margins across epochs for cifar10. (b) Various levels of $\ell_2$ regularization for cifar10.
Figure 4
Next, Figure 3c compares cifar10 and cifar100, where cifar100 uses the same input images as cifar10; indeed, cifar10 is obtained from cifar100 by collapsing the original 100 categories into 10 groups. Interestingly, cifar100, from the perspective of margin bounds, is just as difficult as cifar10 with random labels. This is consistent with the large observed test error on cifar100 (which has not been 'optimized' in any way via regularization).
Lastly, Figure 3d replaces the cifar10 input images with random images sampled from Gaussians
matching the first- and second-order image statistics (see [Zhang et al., 2017] for similar experiments).
Convergence of margins. As was pointed out in Section 1, the weights of the neural networks do
not seem to converge in the usual sense during training (the norms grow continually). However, as
depicted in Figure 4a, the sequence of (normalized) margin distributions is itself converging.
Regularization. As remarked in [Zhang et al., 2017], regularization only seems to bring minor benefits to test error (though adequate to be employed in all cutting-edge results). This observation is certainly consistent with the margin distributions in Figure 4b, which do not improve (e.g., by shifting to the right) in any visible way under regularization. An open question, discussed further in Section 4, is to design regularization that improves margins.
3 Analysis of margin bound
This section will sketch the proof of Theorem 1.1, give a lower bound, and discuss related work.
3.1 Multiclass margin bound
The starting point of this analysis is a margin-based bound for multiclass prediction. To state the bound, first recall that the margin operator $\mathcal{M} : \mathbb{R}^k \times \{1, \ldots, k\} \to \mathbb{R}$ is defined as $\mathcal{M}(v, y) := v_y - \max_{i \neq y} v_i$, and define the ramp loss $\ell_\gamma : \mathbb{R} \to \mathbb{R}^+$ as
$$\ell_\gamma(r) := \begin{cases} 0 & r < -\gamma, \\ 1 + r/\gamma & r \in [-\gamma, 0], \\ 1 & r > 0, \end{cases}$$
and the ramp risk as $R_\gamma(f) := \mathbb{E}\big(\ell_\gamma(-\mathcal{M}(f(x), y))\big)$. Given a sample $S := ((x_1, y_1), \ldots, (x_n, y_n))$, define an empirical counterpart $\widehat{R}_\gamma$ of $R_\gamma$ as $\widehat{R}_\gamma(f) := n^{-1} \sum_i \ell_\gamma(-\mathcal{M}(f(x_i), y_i))$; note that $R_\gamma$ and $\widehat{R}_\gamma$ respectively upper bound the probability and fraction of errors on the source distribution and training set. Lastly, given a set of real-valued functions $\mathcal{H}$, define the Rademacher complexity as $\mathcal{R}(\mathcal{H}|_S) := n^{-1} \mathbb{E} \sup_{h \in \mathcal{H}} \sum_{i=1}^n \epsilon_i h(x_i, y_i)$, where the expectation is over the Rademacher random variables $(\epsilon_1, \ldots, \epsilon_n)$, which are independent, uniform $\pm 1$-valued.
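A tiny numpy sketch (ours) of the ramp loss and the empirical ramp risk, reusing margins() from the earlier sketch; note the loss is evaluated at the negated margin:

```python
# Ramp loss l_gamma and empirical ramp risk; r = -M(f(x), y) is the
# negated margin, so clipping 1 + r/gamma to [0, 1] realizes the cases above.
def ramp_loss(r, gamma):
    return np.clip(1.0 + r / gamma, 0.0, 1.0)

def empirical_ramp_risk(logits, labels, gamma):
    return ramp_loss(-margins(logits, labels), gamma).mean()
```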
With this notation in place, the basic bound is as follows.
Lemma 3.1. Given functions $\mathcal{F}$ with $\mathcal{F} \ni f : \mathbb{R}^d \to \mathbb{R}^k$ and any $\gamma > 0$, define $\mathcal{F}_\gamma := \big\{ (x, y) \mapsto \ell_\gamma(-\mathcal{M}(f(x), y)) : f \in \mathcal{F} \big\}$. Then, with probability at least $1 - \delta$ over a sample $S$ of size $n$, every $f \in \mathcal{F}$ satisfies
$$\Pr[\arg\max_i f(x)_i \neq y] \;\leq\; \widehat{R}_\gamma(f) + 2\mathcal{R}\big((\mathcal{F}_\gamma)|_S\big) + 3\sqrt{\frac{\ln(1/\delta)}{2n}}.$$
This bound is a direct consequence of standard tools in Rademacher complexity. In order to instantiate this bound, covering numbers will be used to directly upper bound the Rademacher complexity term $\mathcal{R}((\mathcal{F}_\gamma)|_S)$. Interestingly, the choice of directly working in terms of covering numbers seems essential to providing a bound with no explicit dependence on $k$; by contrast, prior work primarily handles multiclass via a Rademacher complexity analysis on each coordinate of a $k$-tuple of functions, and pays a factor of $\sqrt{k}$ [Zhang, 2004].
3.2 Covering number complexity upper bounds
This subsection proves Theorem 1.1 via Lemma 3.1 by controlling, via covering numbers, the Rademacher complexity $\mathcal{R}((\mathcal{F}_\gamma)|_S)$ for networks with bounded spectral complexity.
The notation here for (proper) covering numbers is as follows. Let $\mathcal{N}(U, \epsilon, \|\cdot\|)$ denote the least cardinality of any subset $V \subseteq U$ that covers $U$ at scale $\epsilon$ with norm $\|\cdot\|$, meaning
$$\sup_{A \in U} \min_{B \in V} \|A - B\| \leq \epsilon.$$
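As a toy illustration of this definition (ours, not from the paper), a greedy construction yields a proper cover whose size upper bounds $\mathcal{N}(U, \epsilon, \|\cdot\|_2)$ for a finite $U$:

```python
# Toy greedy construction of a proper epsilon-cover of a finite set U
# (rows of a matrix) under the euclidean norm; len(V) bounds N(U, eps, ||.||_2).
import numpy as np

def greedy_cover(U, eps):
    V = []
    for u in U:
        if not any(np.linalg.norm(u - v) <= eps for v in V):
            V.append(u)      # u is not yet covered, so add it to the cover
    return V

U = np.random.randn(500, 3)
print(len(greedy_cover(U, eps=1.0)))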
Choices of $U$ that will be used in the present work include both the image $\mathcal{F}|_S$ of data $S$ under some function class $\mathcal{F}$, as well as the conceptually simpler choice of a family of matrix products.
The full proof has the following steps: (I) A matrix covering bound for the affine transformation
of each layer is provided in Lemma 3.2; handling whole layers at once allows for more flexible
norms. (II) An induction on layers then gives a covering number bound for entire networks; this
analysis is only sketched here for the special case of norms used in Theorem 1.1, but the full proof in
the appendix culminates in a bound for more general norms (cf. Lemma A.7). (III) The preceding
whole-network covering number leads to Theorem 1.1 via Lemma 3.1 and standard techniques.
Step (I), matrix covering, is handled by the following lemma. The covering number considers the matrix product $XA$, where $A$ will be instantiated as the weight matrix for a layer, and $X$ is the data passed through all layers prior to the present layer.
Lemma 3.2. Let conjugate exponents $(p, q)$ and $(r, s)$ be given with $p \leq 2$, as well as positive reals $(a, b, \epsilon)$ and positive integer $m$. Let matrix $X \in \mathbb{R}^{n \times d}$ be given with $\|X\|_p \leq b$. Then
$$\ln \mathcal{N}\Big( \big\{ XA : A \in \mathbb{R}^{d \times m}, \|A\|_{q,s} \leq a \big\}, \ \epsilon, \ \|\cdot\|_2 \Big) \;\leq\; \left\lceil \frac{a^2 b^2 m^{2/r}}{\epsilon^2} \right\rceil \ln(2dm).$$
The proof relies upon the Maurey sparsification lemma [Pisier, 1980], which is stated in terms of sparsifying convex hulls, and in its use here is inspired by covering number bounds for linear predictors [Zhang, 2002]. To prove Theorem 1.1, this matrix covering bound will be instantiated for the case of $\|A\|_{2,1}$. It is possible to instead scale with $\|A\|_2$ and $\|X\|_2$, but even for the case of the identity matrix $X = I$, this incurs an extra dimension factor. The use of $\|A\|_{2,1}$ here thus helps Theorem 1.1 avoid any appearance of $W$ and $L$ outside of log terms; indeed, the goal of covering a whole matrix at a time (rather than the more standard vector covering) was to allow this greater sensitivity and avoid combinatorial parameters.
Step (II) above, the induction on layers, proceeds as follows. Let $X_i$ denote the output of layer $i$ (thus $X_0 = X$), and inductively suppose there exists a cover element $\widehat{X}_i$ depending on covering matrices $(\widehat{A}_1, \ldots, \widehat{A}_{i-1})$ chosen to cover weight matrices in earlier layers. Thanks to Lemma 3.2, there also exists $\widehat{A}_i$ so that $\|A_i \widehat{X}_i - \widehat{A}_i \widehat{X}_i\|_2 \leq \epsilon_i$. The desired cover element is thus $\widehat{X}_{i+1} = \sigma_i(\widehat{A}_i \widehat{X}_i)$, where $\sigma_i$ is the nonlinearity in layer $i$; indeed, supposing $\sigma_i$ is $\rho_i$-Lipschitz,
$$\|X_{i+1} - \widehat{X}_{i+1}\|_2 \;\leq\; \rho_i \|A_i X_i - \widehat{A}_i \widehat{X}_i\|_2 \;\leq\; \rho_i \Big( \|A_i X_i - A_i \widehat{X}_i\|_2 + \|A_i \widehat{X}_i - \widehat{A}_i \widehat{X}_i\|_2 \Big) \;\leq\; \rho_i \|A_i\|_\sigma \|X_i - \widehat{X}_i\|_2 + \rho_i \epsilon_i,$$
where the first term is controlled with the inductive hypothesis. Since $\widehat{X}_{i+1}$ depends on each choice $(\widehat{A}_1, \ldots, \widehat{A}_i)$, the cardinality of the full network cover is the product of the individual matrix covers.
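The recursion in this induction can be unrolled numerically; a small sketch (ours) propagating per-layer cover errors $\epsilon_i$ into a whole-network cover resolution:

```python
# Unroll the induction: err_{i+1} = rho_i * (||A_i||_sigma * err_i + eps_i),
# starting from err_0 = 0 (the data itself is covered exactly).
def network_cover_error(specs, rhos, epss):
    err = 0.0
    for s, rho, eps in zip(specs, rhos, epss):
        err = rho * (s * err + eps)
    return err

print(network_cover_error(specs=[2.0, 1.5, 3.0], rhos=[1.0] * 3, epss=[0.1] * 3))
```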
The preceding proof had no sensitivity to the particular choice of norms; it merely required an
operator norm on Ai , as well as some other norm that allows matrix covering. Such an analysis is
presented in full generality in Appendix A.5. Specializing to the particular case of spectral norms
and (2, 1) group norms leads to the following full-network covering bound.
Theorem 3.3. Let fixed nonlinearities $(\sigma_1, \ldots, \sigma_L)$ and reference matrices $(M_1, \ldots, M_L)$ be given, where $\sigma_i$ is $\rho_i$-Lipschitz and $\sigma_i(0) = 0$. Let spectral norm bounds $(s_1, \ldots, s_L)$ and matrix $(2, 1)$ norm bounds $(b_1, \ldots, b_L)$ be given. Let data matrix $X \in \mathbb{R}^{n \times d}$ be given, where the $n$ rows correspond to data points. Let $\mathcal{H}_X$ denote the family of matrices obtained by evaluating $X$ with all choices of network $F_A$:
$$\mathcal{H}_X := \Big\{ F_A(X^T) : A = (A_1, \ldots, A_L), \ \|A_i\|_\sigma \leq s_i, \ \|A_i^\top - M_i^\top\|_{2,1} \leq b_i \Big\},$$
where each matrix has dimension at most $W$ along each axis. Then for any $\epsilon > 0$,
$$\ln \mathcal{N}(\mathcal{H}_X, \epsilon, \|\cdot\|_2) \;\leq\; \frac{\|X\|_2^2 \ln(2W^2)}{\epsilon^2} \left( \prod_{j=1}^{L} s_j^2 \rho_j^2 \right) \left( \sum_{i=1}^{L} \Big(\frac{b_i}{s_i}\Big)^{2/3} \right)^3.$$
What remains is (III): Theorem 3.3 can be combined with the standard Dudley entropy integral upper
bound on Rademacher complexity (see e.g. Mohri et al. [2012]), which combined with Lemma 3.1
gives Theorem 1.1.
3.3 Rademacher complexity lower bounds
By reduction to the linear case (i.e., removing all nonlinearities), it is easy to provide a lower bound
on the Rademacher complexity of the networks studied here. Unfortunately, this bound only scales
with the product of spectral norms, and not the other terms in RA (cf. eq. (1.2)).
Theorem 3.4. Consider the setting of Theorem 3.3, but where all nonlinearities are the ReLU $z \mapsto \max\{0, z\}$, the output dimension is $d_L = 1$, and all non-output dimensions are at least 2 (and hence $W \geq 2$). Let data $S := (x_1, \ldots, x_n)$ be collected into data matrix $X \in \mathbb{R}^{n \times d}$. Then there is a $c$ such that for any scalar $r > 0$,
$$\mathcal{R}\Big( \big\{ F_A : A = (A_1, \ldots, A_L), \ \textstyle\prod_i \|A_i\|_\sigma \leq r \big\}\big|_S \Big) \;\geq\; c \, \|X\|_2 \, r.$$
Note that, due to the nonlinearity, the lower bound should indeed depend on $\prod_i \|A_i\|_\sigma$ and not $\|\prod_i A_i\|_\sigma$; as a simple sanity check, there exist networks for which the latter quantity is 0, but the network does not compute the zero function.
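A concrete instance of that sanity check (our example, not from the paper): the two-layer ReLU network below computes $|x|$, yet the product of its weight matrices is the zero matrix.

```python
# A_2 @ A_1 = 0, yet the ReLU network x -> A_2 relu(A_1 x) computes |x| != 0.
import numpy as np

A1 = np.array([[1.0], [-1.0]])   # x -> (x, -x)
A2 = np.array([[1.0, 1.0]])      # sums the two rectified coordinates

print(A2 @ A1)                   # [[0.]] : the linear composition vanishes
for x in (-2.0, 3.0):
    print(A2 @ np.maximum(A1 * x, 0.0))   # [[2.]], [[3.]] : the net computes |x|
```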
3.4 Related work
To close this section on proofs, it is a good time to summarize connections to existing literature.
The algorithmic idea of large margin classifiers was introduced in the linear case by Vapnik [1982]
(see also [Boser et al., 1992, Cortes and Vapnik, 1995]). Vapnik [1995] gave an intuitive explanation
of the performance of these methods based on a sample-dependent VC-dimension calculation, but
without generalization bounds. The first rigorous generalization bounds for large margin linear
classifiers [Shawe-Taylor et al., 1998] required a scale-sensitive complexity analysis of real-valued
function classes. At the same time, a large margins analysis was developed for two-layer networks
[Bartlett, 1996], indeed with a proof technique that inspired the layer-wise induction used to prove
Theorem 1.1 in the present work. Margin theory was quickly extended to many other settings (see
for instance the survey by Boucheron et al. [2005]), one major success being an explanation of the
generalization ability of boosting methods, which exhibit an explicit growth in the size of the function
class over time, but a stable excess risk [Schapire et al., 1997]. The contribution of the present work
is to provide a margin bound (and corresponding Rademacher analysis) that can be adapted to various
operator norms at each layer. Additionally, the present work operates in the multiclass setting, and
avoids an explicit dependence on the number of classes k, which seems to appear in prior work
[Zhang, 2004, Tewari and Bartlett, 2007].
There are numerous generalization bounds for neural networks, including VC-dimension and fat-shattering bounds (many of these can be found in [Anthony and Bartlett, 1999]). Scale-sensitive analysis of neural networks started with [Bartlett, 1996], which can be interpreted in the present setting as utilizing data norm $\|\cdot\|_\infty$ and operator norm $\|\cdot\|_{\infty \to \infty}$ (equivalently, the norm $\|A_i^\top\|_{1,\infty}$ on weight matrix $A_i$). This analysis can be adapted to give a Rademacher complexity analysis [Bartlett and Mendelson, 2002], and has been adapted to other norms [Neyshabur et al., 2015], although the $\|\cdot\|_\infty$ setting appears to be necessary to avoid extra combinatorial factors. More work is still needed to develop complexity analyses that have matching upper and lower bounds, and also to determine which norms are well-adapted to neural networks as used in practice.
The present analysis utilizes covering numbers, and is most closely connected to earlier covering number bounds [Anthony and Bartlett, 1999, Chapter 12], themselves based on the earlier fat-shattering analysis [Bartlett, 1996]; however, the technique here of pushing an empirical cover through layers is akin to VC dimension proofs for neural networks [Anthony and Bartlett, 1999]. The use of Maurey's sparsification lemma was inspired by linear predictor covering number bounds [Zhang, 2002].
Comparison to preprint. The original preprint of this paper [Bartlett et al., 2017] featured a slightly different version of the spectral complexity $R_A$, given by
$$\left( \prod_{i=1}^{L} \rho_i \|A_i\|_\sigma \right) \left( \sum_{i=1}^{L} \frac{\|A_i - M_i\|_1^{2/3}}{\|A_i\|_\sigma^{2/3}} \right)^{3/2}.$$
In the present version (1.2), each $\|A_i - M_i\|_1$ term is replaced by $\|A_i^\top - M_i^\top\|_{2,1}$. This is a strict improvement since for any matrix $A \in \mathbb{R}^{d \times m}$ one has $\|A\|_{2,1} \leq \|A\|_1$, and in general the gap between these two norms can be as large as $\sqrt{d}$.
On a related note, all of the figures in this paper use the $\ell_1$ norm in the spectral complexity $R_A$ instead of the $(2, 1)$ norm. Variants of the experiments described in Section 2 were carried out using each of the $\ell_1$, $(2, 1)$, and $\ell_2$ norms in the $(\sum_{i=1}^{L} (\cdot)^{2/3})^{3/2}$ term, with negligible difference in the results.
Since spectrally-normalized margin bounds were first proposed in the preprint [Bartlett et al., 2017], subsequent works [Neyshabur et al., 2017, Neyshabur, 2017] re-derived a similar spectrally-normalized bound using the PAC-Bayes framework. Specifically, these works showed that $R_A$ may be replaced by (up to $\log(W)$ factors):
$$\left( \prod_{i=1}^{L} \rho_i \|A_i\|_\sigma \right) \cdot L \left( \sum_{i=1}^{L} \frac{(\sqrt{W} \|A_i - M_i\|_2)^2}{\|A_i\|_\sigma^2} \right)^{1/2}.$$
Unfortunately, this bound never improves on Theorem 1.1, and indeed can be derived from it as follows. First, the dependence on the individual matrices $A_i$ in the second term of this bound can be obtained from Theorem 1.1 because for any $A \in \mathbb{R}^{d \times m}$ it holds that $\|A^\top\|_{2,1} \leq \sqrt{d} \|A\|_2$. Second, the functional form $(\sum_{i=1}^{L} (\cdot)^{2/3})^{3/2}$ appearing in Theorem 1.1 may be replaced by the form $L (\sum_{i=1}^{L} (\cdot)^2)^{1/2}$ appearing above by using that $\|\xi\|_{2/3} \leq L \|\xi\|_2$ for any $\xi \in \mathbb{R}^L$ (this inequality following, for instance, from Jensen's inequality).
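The norm inequality used in the last step can be spot-checked numerically; a quick sketch (ours):

```python
# Check ||xi||_{2/3} <= L * ||xi||_2, i.e. (sum xi_i^{2/3})^{3/2} <= L * sqrt(sum xi_i^2).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    L = rng.integers(1, 10)
    xi = np.abs(rng.standard_normal(L))
    lhs = (xi ** (2/3)).sum() ** 1.5
    rhs = L * np.sqrt((xi ** 2).sum())
    assert lhs <= rhs + 1e-9
print("inequality holds on all samples")
```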
4 Further observations and open problems
Adversarial examples. Adversarial examples are a phenomenon where the neural network predictions can be altered by adding seemingly imperceptible noise to an input [Goodfellow et al., 2014].
This phenomenon can be connected to margins as follows. The margin is nothing more than the
distance an input must traverse before its label is flipped; consequently, low margin points are more
susceptible to adversarial noise than high margin points. Concretely, taking the 100 lowest margin
inputs from cifar10 and adding uniform noise at scale 0.15 yielded flipped labels on 5.86% of the
images, whereas the same level of noise on high margin points yielded 0.04% flipped labels. Can the
bounds here suggest a way to defend against adversarial examples?
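A minimal sketch (ours) of that noise experiment, assuming a trained classifier is available through a hypothetical predict function; whether "scale 0.15" means noise in $[0, 0.15]$ or $[-0.15, 0.15]$ is our assumption (we use the symmetric version):

```python
# Flip rate of predicted labels under uniform input noise.
# `predict` maps a batch of inputs to class labels; names are illustrative.
import numpy as np

def flip_rate(predict, inputs, scale=0.15, rng=np.random.default_rng(0)):
    clean = predict(inputs)
    noisy = inputs + rng.uniform(-scale, scale, size=inputs.shape)
    return (predict(noisy) != clean).mean()

# e.g. compare flip_rate(model_predict, lowest_margin_inputs)
#      with    flip_rate(model_predict, highest_margin_inputs)
```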
Regularization. It was observed in [Zhang et al., 2017] that explicit regularization contributes little to the generalization performance of neural networks. In the margin framework, standard weight decay ($\ell_2$) regularization seemed to have little impact on margin distributions in Section 2. On the other hand, in the boosting literature, special types of regularization were developed to maximize margins [Shalev-Shwartz and Singer, 2008]; perhaps a similar development can be performed here?
SGD. The present analysis applies to predictors that have large margins; what is missing is an analysis verifying that SGD applied to standard neural networks returns large margin predictors! Indeed, perhaps SGD returns not simply large margin predictors, but predictors that are well-behaved in a variety of other ways that can be directly translated into refined generalization bounds.
Improvements to Theorem 1.1. There are several directions in which Theorem 1.1 might be improved. Can a better choice of layer geometries (norms) yield better bounds on practical networks? Can the nonlinearities' worst-case Lipschitz constant be replaced with an (empirically) averaged quantity? Alternatively, can better lower bounds rule out these directions?
Rademacher vs. covering. Is it possible to prove Theorem 1.1 solely via Rademacher complexity, with no invocation of covering numbers?
Acknowledgements
The authors thank Srinadh Bhojanapalli, Ryan Jian, Behnam Neyshabur, Maxim Raginsky, Andrew
J. Risteski, and Belinda Tzen for useful conversations and feedback. The authors thank Ben Recht
for giving a provocative lecture at the Simons Institute, stressing the need for understanding of
both generalization and optimization of neural networks. M.T. and D.F. acknowledge the use of
a GPU machine provided by Karthik Sridharan and made possible by an NVIDIA GPU grant.
D.F. acknowledges the support of the NDSEG fellowship. P.B. gratefully acknowledges the support
of the NSF through grant IIS-1619362 and of the Australian Research Council through an Australian
Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical
and Statistical Frontiers. The authors thank the Simons Institute for the Theory of Computing Spring
2017 program on the Foundations of Machine Learning. Lastly, the authors are grateful to La Burrita
(both the north and the south Berkeley campus locations) for upholding the glorious tradition of the
California Burrito.
References
Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
Peter Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1706.08498, 2017.
Peter L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In NIPS, 1996.
Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, Nov 2002.
Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92, pages 144–152, New York, NY, USA, 1992. ACM. ISBN 0-89791-497-X.
Stéphane Boucheron, Olivier Bousquet, and Gabor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In ICML, 2017.
Corinna Cortes and Vladimir N. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. 2014. arXiv:1412.6572 [stat.ML].
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
Behnam Neyshabur. Implicit regularization in deep learning. CoRR, abs/1709.01953, 2017. URL http://arxiv.org/abs/1709.01953.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In COLT, 2015.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. CoRR, abs/1707.09564, 2017.
Gilles Pisier. Remarques sur un résultat non publié de B. Maurey. Séminaire Analyse fonctionnelle (dit), pages 1–12, 1980.
Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML, pages 322–330, 1997.
Shai Shalev-Shwartz and Yoram Singer. On the equivalence of weak learnability and linear separability: New relaxations and efficient boosting algorithms. In COLT, 2008.
J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theor., 44(5):1926–1940, September 1998.
Ambuj Tewari and Peter L. Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8:1007–1025, 2007.
Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York, 1982.
Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. ICLR, 2017.
Tong Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
Tong Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225–1251, 2004.
6,860 | 7,205 | Robust and Efficient Transfer Learning with Hidden
Parameter Markov Decision Processes
Taylor Killian?
[email protected]
Harvard University
Samuel Daulton?
[email protected]
Harvard University, Facebook?
George Konidaris
[email protected]
Brown University
Finale Doshi-Velez
[email protected]
Harvard University
Abstract
We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using lowdimensional latent embeddings. Our new framework correctly models the joint
uncertainty in the latent parameters and the state space. We also replace the original
Gaussian Process-based model with a Bayesian Neural Network, enabling more
scalable inference. Thus, we expand the scope of the HiP-MDP to applications
with higher dimensions and more complex dynamics.
1 Introduction
The world is filled with families of tasks with similar, but not identical, dynamics. For example,
consider the task of training a robot to swing a bat with unknown length l and mass m. The task is a
member of a family of bat-swinging tasks. If a robot has already learned to swing several bats with
various lengths and masses {(l_i, m_i)}_{i=1}^N, then the robot should learn to swing a new bat with length
l_0 and mass m_0 more efficiently than learning from scratch. That is, it is grossly inefficient to develop
a control policy from scratch each time a unique task is encountered.
The Hidden Parameter Markov Decision Process (HiP-MDP) [14] was developed to address this
type of transfer learning, where optimal policies are adapted to subtle variations within tasks in an
efficient and robust manner. Specifically, the HiP-MDP paradigm introduced a low-dimensional latent
task parameterization wb that, combined with a state and action, completely describes the system?s
dynamics T (s0 |s, a, wb ). However, the original formulation did not account for nonlinear interactions
between the latent parameterization and the state space when approximating these dynamics, which
required all states to be visited during training. In addition, the original framework scaled poorly
because it used Gaussian Processes (GPs) as basis functions for approximating the task's dynamics.
We present a new HiP-MDP formulation that models interactions between the latent parameters
wb and the state s when transitioning to state s0 after taking action a. We do so by including the
latent parameters wb , the state s, and the action a as input to a Bayesian Neural Network (BNN).
The BNN both learns the common transition dynamics for a family of tasks and models how the
unique variations of a particular instance impact the instance?s overall dynamics. Embedding the
latent parameters in this way allows for more accurate uncertainty estimation and more robust transfer
when learning a control policy for a new and possibly unique task instance. Our formulation also
inherits several desirable properties of BNNs: it can model multimodal and heteroskedastic transition
?
?
Both contributed equally as primary authors
Current affiliation, joined afterward
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
functions, inference scales to data large in both dimension and number of samples, and all output
dimensions are jointly modeled, which reduces computation and increases predictive accuracy [11].
Herein, a BNN can capture complex dynamical systems with highly non-linear interactions between
state dimensions. Furthermore, model uncertainty is easily quantified through the BNN's output
variance. Thus, we can scale to larger domains than previously possible.
We use the improved HiP-MDP formulation to develop control policies for acting in a simple two-dimensional navigation domain, playing acrobot [42], and designing treatment plans for simulated
patients with HIV [15]. The HiP-MDP rapidly determines the dynamics of new instances, enabling
us to quickly find near-optimal instance-specific control policies.
2 Background
Model-based reinforcement learning We consider reinforcement learning (RL) problems in
which an agent acts in a continuous state space S ⊆ R^D and a discrete action space A. We
assume that the environment has some true transition dynamics T(s′|s, a), unknown to the agent,
and we are given a reward function R(s, a) : S × A → R that provides the utility of taking action a
from state s. In the model-based reinforcement learning setting, our goal is to learn an approximate
transition function T̂(s′|s, a) based on observed transitions (s, a, s′) and then use T̂(s′|s, a) to learn a
policy a = π(s) that maximizes long-term expected rewards E[Σ_t γ^t r_t], where γ ∈ (0, 1] governs
the relative importance of immediate and future rewards.
HiP-MDPs A HiP-MDP [14] describes a family of Markov Decision Processes (MDPs) and is
defined by the tuple {S, A, W, T, R, γ, P_W}, where S is the set of states s, A is the set of actions a,
and R is the reward function. The transition dynamics T(s′|s, a, w_b) for each task instance b depend
on the value of the hidden parameters w_b ∈ W; for each instance, the parameters w_b are drawn from
prior PW . The HiP-MDP framework assumes that a finite-dimensional array of hidden parameters wb
can fully specify variations among the true task dynamics. It also assumes the system dynamics are
invariant during a task and the agent is signaled when one task ends and another begins.
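Stated in code, the tuple maps onto a small container type. The sketch below is purely illustrative (field names and types are our assumptions, not the authors' implementation):

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class HiPMDP:
        """Minimal sketch of the HiP-MDP tuple {S, A, W, T, R, gamma, P_W}.
        All names here are illustrative, not from the paper's released code."""
        state_dim: int              # S = continuous state space R^D
        actions: Sequence[int]      # A = discrete action set
        latent_dim: int             # dimensionality of w_b in W
        transition: Callable        # T(s, a, w_b) -> next state s'
        reward: Callable            # R(s, a) -> scalar utility
        gamma: float                # discount factor
        sample_latent: Callable     # draws w_b ~ P_W for a new instance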
Bayesian Neural Networks A Bayesian Neural Network (BNN) is a neural network, f(·, ·; W), in
which the parameters W are random variables with some prior P(W) [27]. We place independent
Gaussian priors on each parameter: P(W) = ∏_{w∈W} N(w; μ, σ²). Exact Bayesian inference for the
posterior over parameters P(W | {(s′, s, a)}) is intractable, but several recent techniques have been
developed to scale inference in BNNs [4, 17, 22, 33]. As probabilistic models, BNNs reduce the
tendency of neural networks to overfit in the presence of low amounts of data, just as GPs do.
In general, training a BNN is more computationally efficient than a GP [22], while still providing
coherent uncertainty measurements. Specifically, predictive distributions can be calculated by taking
averages over samples of W from an approximated posterior distribution over the parameters. As
such, BNNs are being adopted in the estimation of stochastic dynamical systems [11, 18].
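To make the predictive-averaging step concrete, the sketch below estimates a BNN's predictive mean and variance by Monte Carlo over weight samples. The toy network and the factorized Gaussian approximate posterior are stand-in assumptions, not the scalable inference methods of [4, 17, 22, 33]:

    import numpy as np

    rng = np.random.default_rng(0)

    def bnn_forward(x, w):
        """Toy one-hidden-layer network; w packs both weight matrices."""
        W1 = w[:40].reshape(4, 10)   # input dim 4 -> hidden dim 10
        W2 = w[40:].reshape(10, 1)   # hidden dim 10 -> output dim 1
        return np.tanh(x @ W1) @ W2

    # Factorized Gaussian approximate posterior over all 50 weights.
    mu, sigma = np.zeros(50), 0.1 * np.ones(50)

    def predictive(x, n_samples=100):
        """Monte Carlo predictive mean/variance over sampled weights."""
        ws = mu + sigma * rng.standard_normal((n_samples, 50))
        ys = np.stack([bnn_forward(x, w) for w in ws])
        return ys.mean(axis=0), ys.var(axis=0)

    mean, var = predictive(rng.standard_normal((1, 4)))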
3 A HiP-MDP with Joint-Uncertainty
The original HiP-MDP transition function models variation across task instances as:³

    s′_d ≈ Σ_{k=1}^{K} w_{bk} T̂^{(GP)}_{kad}(s) + ε,    w_{bk} ∼ N(μ_{w_k}, σ_w²),    ε ∼ N(0, σ_{ad}²),    (1)
where s_d is the d-th dimension of s. Each basis transition function T̂_{kad} (indexed by the k-th latent
parameter, the action a, and the dimension d) is a GP using only s as input, linearly combined with
instance-specific weights w_{bk}. Inference involves learning the parameters for the GP basis functions
and the weights for each instance. GPs can robustly approximate stochastic state transitions in
3 We present a simplified version that omits their filtering variables z_{kad} ∈ {0, 1} to make the parallels between our formulation and the original more explicit; our simplification does not change any key properties.
continuous dynamical systems in model-based reinforcement learning [9, 35, 36]. GPs have also
been widely used in transfer learning outside of RL (e.g. [5]).
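For intuition, the prediction in Eq. (1) is an instance-weighted sum of basis predictions. A minimal sketch with arbitrary stand-in functions in place of trained GP means (all names and values here are illustrative):

    import numpy as np

    def eq1_predict(s, a, d, w_b, basis_fns):
        """Eq. (1): s'_d ~= sum_k w_bk * T_kad(s).
        basis_fns[(k, a, d)] maps a state s to a scalar prediction."""
        return sum(w_b[k] * basis_fns[(k, a, d)](s) for k in range(len(w_b)))

    # Two toy "GP mean" basis functions for action a=0, state dimension d=0.
    basis = {(0, 0, 0): lambda s: s[0] + 0.1,
             (1, 0, 0): lambda s: -0.5 * s[1]}
    s_next_d = eq1_predict(np.array([0.3, -0.2]), a=0, d=0,
                           w_b=np.array([0.8, 0.2]), basis_fns=basis)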
While this formulation is expressive, it has limitations. The primary limitation is that the uncertainty
in the latent parameters w_bk is modeled independently of the agent's state uncertainty. Hence, the
model does not account for interactions between the latent parameterization wb and the state s. As a
result, Doshi-Velez and Konidaris [14] required that each task instance b performed the same set of
state-action combinations (s, a) during training. While such training may sometimes be possible (e.g.
robots that can be driven to identical positions), it is onerous at best and impossible for other systems
such as human patients. The secondary limitation is that each output dimension sd is modeled
separately as a collection of GP basis functions {T̂_{kad}}_{k=1}^K. The basis functions for output dimension
s_d are independent of the basis functions for output dimension s_{d′}, for d ≠ d′. Hence, the model
does not account for correlation between output dimensions. Modeling such correlations typically
requires knowledge of how dimensions interact in the approximated dynamical system [2, 19]. We
choose not to constrain the HiP-MDP with such a priori knowledge since the aim is to provide basis
functions that can ascertain these relationships through observed transitions.
To overcome these limitations, we include the instance-specific weights wb as input to the transition
function and model all dimensions of the output jointly:
    s′ ∼ T̂^{(BNN)}(s, a, w_b) + ε,    w_b ∼ N(μ_w, Σ_b),    ε ∼ N(0, σ_n²).    (2)
This critical modeling change eliminates all of the above limitations: we can learn directly from
data as observed (which is abundant in many industrial and health domains) and no longer require a
highly constrained training procedure. We can also capture the correlations in the outputs of these
domains, which occur in many natural processes.
Finally, the computational demands of using GPs as the transition function limited the application
of the original HiP-MDP formulation to relatively small domains. In the following, we use a BNN
rather than a GP to model this transition function. The computational requirements needed to learn a
GP-based transition function make a direct comparison to our new BNN-based formulation infeasible
within our experiments (Section 5). We demonstrate, in Appendix A, that the BNN-based transition
model far exceeds the GP-based transition model in both computational and predictive performance.
In addition, BNNs naturally produce multi-dimensional outputs s′ without requiring prior knowledge
of the relationships between dimensions. This allows us to directly model output correlations between
the D state dimensions, leading to a more unified and coherent transition model. Inference in a larger
input space (s, a, w_b) with a large number of samples is tractable using efficient approaches that let
us, given a distribution P(W) and input-output tuples (s, a, s′), estimate a distribution over the
latent embedding P(w_b). This enables more robust, scalable transfer.
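As a concrete picture of this design, the following sketch shows a deterministic stand-in for T̂^{(BNN)}(s, a, w_b); the layer sizes and one-hot action encoding are our assumptions, and a real BNN would additionally maintain a distribution over the network weights:

    import torch
    import torch.nn as nn

    class TransitionNet(nn.Module):
        """Stand-in for T^(BNN)(s, a, w_b): state, action, and latent
        parameters enter one network, and all D output dimensions are
        predicted jointly."""
        def __init__(self, state_dim, n_actions, latent_dim, hidden=64):
            super().__init__()
            self.n_actions = n_actions
            self.net = nn.Sequential(
                nn.Linear(state_dim + n_actions + latent_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, state_dim),   # joint prediction of s'
            )

        def forward(self, s, a, w_b):
            # s: (B, state_dim); a: (B,) int64; w_b: (B, latent_dim)
            a_onehot = nn.functional.one_hot(a, self.n_actions).float()
            return self.net(torch.cat([s, a_onehot, w_b], dim=-1))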
Demonstration We present a toy domain (Figure 1) where an agent is tasked with navigating
to a goal region. The state space is continuous (s ∈ (−2, 2)²), and the action space is discrete (a ∈
{N, E, S, W}). Task instances vary the following domain aspects: the location of a wall that
blocks access to the goal region (either to the left of or below the goal region), the orientation of the
cardinal directions (i.e. whether taking action North moves the agent up or down), and the direction
of a nonlinear wind effect that increases as the agent moves away from the start region. Ignoring the
wall and grid boundaries, the transition dynamics are:
    Δx = (−1)^{θ_b} c a_x − (1 − θ_b) β √((x + 1.5)² + (y + 1.5)²)
    Δy = (−1)^{θ_b} c a_y − θ_b β √((x + 1.5)² + (y + 1.5)²)
    a_x = 1 if a ∈ {E, W}, 0 otherwise
    a_y = 1 if a ∈ {N, S}, 0 otherwise,
where c is the step-size (without wind), θ_b ∈ {0, 1} indicates which of the two classes the instance
belongs to, and β ∈ (0, 1) controls the influence of the wind and is fixed for all instances. The agent
Figure 1: A demonstration of the HiP-MDP modeling the joint uncertainty between the latent
parameters w_b and the state space. On the left, blue and red dots show the exploration during the red
(θ_b = 0) and blue (θ_b = 1) instances. The latent parameters learned from the red instance are used
to predict transitions for taking action E from an area of the state space either unexplored (top right) or
explored (bottom right) during the red instance. The prediction variance provides an estimate of the
joint uncertainty between the latent parameters w_b and the state.
is penalized for trying to cross a wall, and each step incurs a small cost until the agent reaches the
goal region, encouraging the agent to discover the goal region with the shortest route possible. An
episode terminates once the agent enters the goal region or after 100 time steps.
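These dynamics are straightforward to simulate; the sketch below implements one transition exactly as the equations are written (walls and grid boundaries omitted, and the values of c and β are illustrative rather than the paper's settings):

    import numpy as np

    def toy_step(x, y, a, theta_b, c=0.1, beta=0.01):
        """One transition of the 2D navigation domain, ignoring walls and
        boundaries. a in {'N','E','S','W'}; theta_b in {0,1} is the class."""
        wind = beta * np.sqrt((x + 1.5) ** 2 + (y + 1.5) ** 2)
        a_x = 1.0 if a in ('E', 'W') else 0.0   # indicator definitions above
        a_y = 1.0 if a in ('N', 'S') else 0.0
        dx = (-1) ** theta_b * c * a_x - (1 - theta_b) * wind
        dy = (-1) ** theta_b * c * a_y - theta_b * wind
        return x + dx, y + dy

    x, y = toy_step(-1.0, -1.0, 'E', theta_b=0)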
A linear function of the state s and latent parameters wb would struggle to model both classes of
instances (θ_b = 0 and θ_b = 1) in this domain because the state transition resulting from taking an
action a is a nonlinear function with interactions between the state and hidden parameter θ_b.
By contrast, our new HiP-MDP model allows nonlinear interactions between state and the latent
parameters wb , as well as jointly models their uncertainty. In Figure 1, this produces measurable
differences in transition uncertainty in regions where there are few related observed transitions, even
if there are many observations from unrelated instances. Here, the HiP-MDP is trained on two
instances from distinct classes (shown in blue (θ_b = 1) and red (θ_b = 0) on the left). We display the
uncertainty of the transition function, T̂, using the latent parameters w_red inferred for a red instance in
two regions of the domain: 1) an area explored during red instances and 2) an area not explored under
red instances, but explored with blue instances. The transition uncertainty of T̂ is three times larger in
the region where red instances have not been (even if many blue instances have been there) than in
regions where red instances have commonly explored, demonstrating that the latent parameters can
have different effects on the transition uncertainty in different states.
4 Inference
Algorithm 1 summarizes the inference procedure for learning a policy for a new task instance b,
facilitated by a pre-trained BNN for that task, and is similar in structure to prior work [9, 18]. The
procedure involves several parts. Specifically, at the start of a new instance b, we have a global replay
buffer D of all observed transitions (s, a, r, s′) and a posterior over the weights W for our BNN
transition function T̂ learned with data from D. The first objective is to quickly determine the latent
embedding, w_b, of the current instance's specific dynamical variation as transitions (s, a, s′) are
observed from the current instance. Transitions from instance b are stored in both the global replay
buffer D and an instance-specific replay buffer D_b. The second objective is to develop an optimal
control policy using the transition model T̂ and learned latent parameters w_b. The transition model T̂
and latent embedding w_b are separately updated via mini-batch stochastic gradient descent (SGD)
using Adam [26]. Using T̂ for planning increases our sample efficiency as we reduce interactions
with the environment. We describe each of these parts in more detail below.
4.1 Updating embedding w_b and BNN parameters W
For each new instance, a new latent weighting w_b is sampled from the prior P_W (Alg. 1, step 2),
in preparation for estimating the unobserved dynamics introduced by θ_b. Next, we observe transitions
(s, a, r, s′) from the task instance for an initial exploratory episode (Alg. 1, steps 7-10).
Algorithm 1 Learning a control policy w/ the HiP-MDP

Input: Global replay buffer D, BNN transition function T̂, initial state s⁰_b

 1: procedure LEARNPOLICY(D, T̂, s⁰_b)
 2:   Draw new w_b ∼ P_W
 3:   Randomly init. policy π̂_b(θ, θ⁻)
 4:   Init. instance replay buffer D_b
 5:   Init. fictional replay buffer D^f_b
 6:   for i = 0 to N_e episodes do
 7:     repeat
 8:       Take action a ← π̂_b(s)
 9:       Store D, D_b ← (s, a, r, s′, w_b)
10:     until episode is complete
11:     if i = 0 OR T̂ is inaccurate then
12:       D_b, W, w_b ← TUNEMODEL(D_b, W, w_b)
13:     for j = 0 to N_f − 1 episodes do
14:       D^f_b, π̂_b ← SIMEP(D^f_b, T̂, w_b, π̂_b, s⁰_b)
15:     D^f_b, π̂_b ← SIMEP(D^f_b, T̂, w_b, π̂_b, s⁰_b)

 1: function SIMEP(D^f_b, T̂, w_b, π̂_b, s⁰_b)
 2:   for t = 0 to N_t time steps do
 3:     Take action a ← π̂_b(s)
 4:     Approx. s̃′ ∼ T̂(s, a, w_b)
 5:     Calc. reward r̃ ← R(s, a, s̃′)
 6:     Store D^f_b ← (s, a, r̃, s̃′)
 7:     if mod(t, N_θ) = 0 then
 8:       Update π̂_b via θ from D^f_b
 9:       θ⁻ ← τθ + (1 − τ)θ⁻
10:   return D^f_b, π̂_b

 1: function TUNEMODEL(D_b, W, w_b)
 2:   for k = 0 to N_u updates do
 3:     Update w_b from D_b
 4:     Update W from D_b
 5:   return D_b, W, w_b
Given that data, we optimize the latent parameters w_b to minimize the α-divergence of the posterior predictions
of T̂(s, a, w_b | W) and the true state transitions s′ (step 3 in TuneModel) [22]. Here, the minimization
occurs by adjusting the latent embedding w_b while holding the BNN parameters W fixed. After an
initial update of w_b for a newly encountered instance, the parameters W of the BNN transition
function T̂ are optimized (step 4 in TuneModel). As the BNN is trained on multiple instances of
a task, we found that the only additional data needed to refine the BNN and latent wb for some
new instance can be provided by an initial exploratory episode. Otherwise, additional data from
subsequent episodes can be used to further improve the BNN and latent estimates (Alg. 1, steps 11-14).
The mini-batches used for optimizing the latent wb and BNN network parameters W are sampled
from Db with squared error prioritization [31]. We found that switching between small updates to the
latent parameters and small updates to the BNN parameters led to the best transfer performance. If
either the BNN network or latent parameters are updated too aggressively (having a large learning
rate or an excessive number of training epochs), the BNN disregards the latent parameters or state inputs
respectively. After completing an instance, the BNN parameters and the latent parameters are updated
using samples from global replay buffer D. Specific modeling details such as number of epochs,
learning rates, etc. are described in Appendix C.
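The alternating update in TuneModel amounts to two optimizers over disjoint parameter sets. The sketch below is schematic: model and replay are placeholders, w_b is a (1, latent_dim) tensor with requires_grad=True, and a squared-error loss stands in for the α-divergence objective of [22]:

    import torch

    def tune_model(model, w_b, replay, n_updates=10, lr_w=1e-2, lr_W=1e-3):
        # Alternate small updates to the latent w_b (step 3) and the network
        # weights W (step 4); model(s, a, w) predicts s', replay.sample()
        # yields (s, a, s') mini-batches. Both are placeholders.
        opt_w = torch.optim.Adam([w_b], lr=lr_w)
        opt_W = torch.optim.Adam(model.parameters(), lr=lr_W)
        for _ in range(n_updates):
            for opt in (opt_w, opt_W):
                s, a, s_next = replay.sample()
                w = w_b.expand(s.shape[0], -1)    # share w_b across batch
                loss = ((model(s, a, w) - s_next) ** 2).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
        return w_b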
4.2 Updating policy π̂_b
We construct an ε-greedy policy to select actions based on an approximate action-value function
Q̂(s, a). We model the action-value function Q̂(s, a) with a Double Deep Q Network (DDQN) [21, 29].
The DDQN involves training two networks (parametrized by θ and θ⁻ respectively): a primary Q-network, which informs the policy, and a target Q-network, which is a slowly annealed copy of the
primary network (step 9 of SimEp), providing greater stability when updating the policy π̂_b.
With the updated transition function, T̂, we approximate the environment when developing a control
policy (SimEp). We simulate batches of entire episodes of length N_t using the approximate dynamical
model T̂, storing each transition in a fictional experience replay buffer D^f_b (steps 2-6 in SimEp). The
primary network parameters θ are updated via SGD every N_θ time steps (step 8 in SimEp) to minimize
the temporal-difference error between the primary network's and the target network's Q-values. The
mini-batches used in the update are sampled from the fictional experience replay buffer D^f_b, using
TD-error-based prioritization [38].
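The core of this update is compact; a sketch of the Double-DQN temporal-difference loss and the target-network annealing of step 9 (hyper-parameter values are illustrative):

    import torch
    import torch.nn.functional as F

    def ddqn_loss(q_net, target_net, batch, gamma=0.99):
        # Double-DQN TD loss: the primary network selects the next action,
        # the target network evaluates it.
        s, a, r, s_next, done = batch           # a: int64, done: 0/1 float
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            a_star = q_net(s_next).argmax(dim=1, keepdim=True)
            q_next = target_net(s_next).gather(1, a_star).squeeze(1)
            target = r + gamma * (1.0 - done) * q_next
        return F.smooth_l1_loss(q_sa, target)

    def anneal_target(target_net, q_net, tau=0.005):
        # theta_minus <- tau*theta + (1 - tau)*theta_minus (SimEp, step 9).
        for tp, p in zip(target_net.parameters(), q_net.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)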
5 Experiments and Results
Now, we demonstrate the performance of the HiP-MDP with embedded latent parameters in transferring learning across various instances of the same task. We revisit the 2D demonstration problem
from Section 3, as well as describe results on both the acrobot [42] and a more complex healthcare
domain: prescribing effective HIV treatments [15] to patients with varying physiologies.4
For each of these domains, we compare our formulation of the HiP-MDP with embedded latent
parameters (equation 2) with four baselines (one model-free and three model-based) to demonstrate
the efficiency of learning a policy for a new instance b using the HiP-MDP. These comparisons
are made across the first handful of episodes encountered in a new task instance to highlight the
advantage provided by transferring information through the HiP-MDP. The "linear" baseline uses
a BNN to learn a set of basis functions that are linearly combined with the parameters wb (used to
approximate the approach of Doshi-Velez and Konidaris [14], equation 1), which does not allow
interactions between states and weights. The "model-based from scratch" baseline considers each task
instance b as unique, requiring the BNN transition function to be trained only on observations made
from the current task instance. The "average" model baseline is constructed under the assumption that
a single transition function can be used for every instance of the task; T̂ is trained from observations
of all task instances together. For all model-based approaches, we replicated the HiP-MDP procedure
as closely as possible. The BNN was trained on observations from a single episode before being used
to generate a large batch of approximate transition data, from which a policy is learned. Finally, the
model-free baseline learns a DDQN-policy directly from observations of the current instance.
For more information on the experimental specifications and long-run policy learning see Appendix C
and D, respectively.
5.1 Revisiting the 2D demonstration
Figure 2: (a) a demonstration of a model-free control policy, (b) a comparison of learning a policy at
the outset of a new task instance b using the HiP-MDP versus four benchmarks. The HiP-MDP with
embedded w_b outperforms all four benchmarks.
The HiP-MDP and the average model were supplied a transition model T̂ trained on two previous
instances, one from each class, before being updated according to the procedure outlined in Sec. 4
for a newly encountered instance. After the first exploratory episode, the HiP-MDP has sufficiently
determined the latent embedding, evidenced in Figure 2b where the developed policy clearly outperforms all four benchmarks. This implies that the transition model T̂ adequately provides the accuracy
needed to develop an optimal policy, aided by the learned latent parametrization.
The HiP-MDP with linear wb also quickly adapts to the new instance and learns a good policy.
However, the HiP-MDP with linear wb is unable to model the nonlinear interaction between the latent
parameters and the state. Therefore the model is less accurate and learns a less consistent policy than
the HiP-MDP with embedded wb . (See Figure 2a in Appendix A.2)
4 Example code for training and evaluating a HiP-MDP, including the simulators used in this section, can be found at http://github.com/dtak/hip-mdp-public.
Figure 3: (a) the acrobot domain, (b) a comparison of learning a policy for a new task instance b
using the HiP-MDP versus four benchmarks.
With a single episode of data, the model trained from scratch on the current instance is not accurate
enough to learn a good policy. Training a BNN from scratch requires more observations of the true
dynamics than are necessary for the HiP-MDP to learn the latent parameterization and achieve a
high level of accuracy. The model-free approach eventually learns an optimal policy, but requires
significantly more observations to do so, as represented in Figure 2a. The model-free approach has no
improvement in the first 10 episodes. The poor performance of the average model approach indicates
that a single model cannot adequately represent the dynamics of the different task instances. Hence,
learning a latent representation of the dynamics specific to each instance is crucial.
5.2 Acrobot
First introduced by Sutton and Barto [42], acrobot is a canonical RL and control problem. The most
common objective of this domain is for the agent to swing up a two-link pendulum by applying a
positive, neutral, or negative torque on the joint between the two links (see Figure 3a). These actions
must be performed in sequence such that the tip of the bottom link reaches a predetermined height
above the top of the pendulum. The state space consists of the angles θ₁, θ₂ and angular velocities
θ̇₁, θ̇₂, with hidden parameters corresponding to the masses (m₁, m₂) and lengths (l₁, l₂) of the two
links.⁵ See Appendix B.2 for details on how these hidden parameters were varied to create different
task instances. A policy learned on one setting of the acrobot will generally perform poorly on
other settings of the system, as noted in [3]. Thus, subtle changes in the physical parameters require
separate policies to adequately control the varied dynamical behavior introduced. This provides
a perfect opportunity to apply the HiP-MDP to transfer between separate acrobot instances when
learning a control policy ?
?b for the current instance.
Figure 3b shows that the HiP-MDP learns an optimal policy after a single episode, whereas all other
model-based benchmarks required an additional episode of training. As in the toy example, the
model-free approach eventually learns an optimal policy, but requires more time.
5.3 HIV treatment
Determining effective treatment protocols for patients with HIV was introduced as an RL problem by
mathematically representing a patient's physiological response to separate classes of treatments [1, 15].
In this model, the state of a patient's health is recorded via 6 separate markers measured with a
blood test.6 Patients are given one of four treatments on a regular schedule. Either they are given
treatment from one of two classes of drugs, a mixture of the two treatments, or provided no treatment
(effectively a rest period). There are 22 hidden parameters in this system that control a patient's
specific physiology and dictate rates of virulence, cell birth, infection, and death. (See Appendix B.3
5 The centers of mass and moments of inertia can also be varied. For our purposes we left them unperturbed.
6 These markers are: the viral load (V), the number of healthy and infected CD4+ T-lymphocytes (T1, T1*, respectively), the number of healthy and infected macrophages (T2, T2*, respectively), and the number of HIV-specific cytotoxic T-cells (E).
Figure 4: (a) a visual representation of a patient with HIV transitioning from an unhealthy steady
state to a healthy steady state using a proper treatment schedule, (b) a comparison of learning a policy
for a new task instance b using the HiP-MDP versus four benchmarks.
for more details.) The objective is to develop a treatment sequence that transitions the patient from an
unhealthy steady state to a healthy steady state (Figure 4a, see Adams et al. [1] for a more thorough
explanation). Small changes made to these parameters can greatly affect the behavior of the system
and therefore introduce separate steady state regions that require unique policies to transition between
them.
Figure 4b shows that the HiP-MDP develops an optimal control policy after a single episode, learning
an unmatched optimal policy in the shortest time. The HIV simulator is the most complex of our
three domains, and the separation between each benchmark is more pronounced. Modeling a HIV
dynamical system from scratch from a single episode of observations proved to be infeasible. The
average model, which has been trained on a large batch of observations from related dynamical
systems, learns a better policy. The HiP-MDP with linear wb is able to transfer knowledge from
previous task instances and quickly learn the latent parameterization for this new instance, leading
to an even better policy. However, the dynamical system contains nonlinear interactions between
the latent parameters and the state space. Unlike the HiP-MDP with embedded wb , the HiP-MDP
with linear w_b is unable to model those interactions. This demonstrates the superiority of the HiP-MDP with embedded w_b for efficiently transferring knowledge between instances in highly complex
domains.
6 Related Work
There has been a large body of work on solving single POMDP models efficiently [6, 16, 24, 37, 45].
In contrast, transfer learning approaches leverage training done on one task to perform related tasks.
Strategies for transfer learning include: latent variable models, reusing pre-trained model parameters,
and learning a mapping between separate tasks (see review in [43]).
Our work falls into the latent variable model category. Using latent representation to relate tasks has
been particularly popular in robotics where similar physical movements can be exploited across a
variety of tasks and platforms [10, 20]. In Chen et al. [8], these latent representations are encoded
as separate MDPs with an accompanying index that an agent learns while adapting to observed
variations in the environment. Bai et al. [3] take a closely related approach to our updated formulation
of the HiP-MDP by incorporating estimates of unknown or partially observed parameters of a
known environmental model and refining those estimates using model-based Bayesian RL. The
core difference between this and our work is that we learn the transition model and the observed
variations directly from the data while Bai et al. [3] assume it is given and the specific variations
of the parameters are learned. Also related are multi-task approaches that train a single model for
multiple tasks simultaneously [5, 7]. Finally, there have been many applications of reinforcement
learning (e.g. [32, 40, 44]) and transfer learning in the healthcare domain by identifying subgroups
with similar response (e.g. [23, 28, 39]).
More broadly, BNNs are powerful probabilistic inference models that allow for the estimation of
stochastic dynamical systems [11, 18]. Core to this functionality is their ability to represent both
model uncertainty and transition stochasticity [25]. Recent work decomposes these two forms of
uncertainty to isolate the separate streams of information to improve learning. Our use of fixed latent
variables as input to a BNN helps account for model uncertainty when transferring the pretrained
BNN to a new instance of a task. Other approaches use stochastic latent variable inputs to introduce
transition stochasticity [12, 30].
We view the HiP-MDP with latent embedding as a methodology that can facilitate personalization and
do so robustly as it transfers knowledge of prior observations to the current instance. This approach
can be especially useful in extending personalized care to groups of patients with similar diagnoses,
but can also be extended to any control system where variations may be present.
7 Discussion and Conclusion
We present a new formulation for transfer learning among related tasks with similar, but not identical
dynamics, within the HiP-MDP framework. Our approach leverages a latent embedding, learned
and optimized in an online fashion, to approximate the true dynamics of a task. Our adjustment
to the HiP-MDP provides robust and efficient learning when faced with varied dynamical systems,
unique from those previously learned. It is able, by virtue of transfer learning, to rapidly determine
optimal control policies when faced with a unique instance.
The results in this work assume the presence of a large batch of already-collected data. This setting is
common in many industrial and health domains, where there may be months, sometimes years, worth
of operations data on plant function, product performance, or patient health. Even with large batches,
each new instance still requires collapsing the uncertainty around the instance-specific parameters
in order to quickly perform well on the task. In Section 5, we used a batch of transition data from
multiple instances of a task (without any artificial exploration procedure) to train the BNN and
learn the latent parameterizations. Seeded with data from diverse task instances, the BNN and latent
parameters accounted for the variation between instances.
While we were primarily interested in settings where batches of observational data exist, one might
also be interested in more traditional settings in which the first instance is completely new, the second
instance only has information from the first, etc. In our initial explorations, we found that one can
indeed learn the BNN in an online manner for simpler domains. However, even with simple domains,
the model-selection problem becomes more challenging: an overly expressive BNN can overfit to
the first few instances, and have a hard time adapting when it sees data from an instance with very
different dynamics. Model-selection approaches to allow the BNN to learn online, starting from
scratch, is an interesting future research direction.
Another interesting extension is rapidly identifying the latent wb . Exploration to identify wb would
supply the dynamical model with the data from the regions of the domain with the largest uncertainty. This
could lead to a more accurate latent representation of the observed dynamics while also improving the
overall accuracy of the transition model. Also, we found training a DQN requires careful exploration
strategies. When exploration is constrained too early, the DQN quickly converges to a suboptimal,
deterministic policy, often choosing the same action at each step. Training a DQN along the BNN's
trajectories of least certainty could lead to improved coverage of the domain and result in more robust
policies. The development of effective policies would be greatly accelerated if exploration were more
robust and stable. One could also use the hidden parameters wb to learn a policy directly.
Recognizing structure, through latent embeddings, between task variations enables a form of transfer
learning that is both robust and efficient. Our extension of the HiP-MDP demonstrates how embedding
a low-dimensional latent representation with the input of an approximate dynamical model facilitates
transfer and results in a more accurate model of a complex dynamical system, as interactions between
the input state and the latent representation are modeled naturally. We also model correlations in the
output dimensions by replacing the GP basis functions of the original HiP-MDP formulation with a
BNN. The BNN transition function scales significantly better to larger and more complex problems.
Our improvements to the HiP-MDP provide a foundation for robust and efficient transfer learning.
Future improvements to this work will contribute to a general transfer learning framework capable of
addressing the most nuanced and complex control problems.
Acknowledgements We thank Mike Hughes, Andrew Miller, Jessica Forde, and Andrew Ross for
their helpful conversations. TWK was supported by the MIT Lincoln Laboratory Lincoln Scholars
Program. GDK is supported in part by the NIH R01MH109177. The content of this work is solely
the responsibility of the authors and does not necessarily represent the official views of the NIH.
References
[1] BM Adams, HT Banks, H Kwon, and HT Tran. Dynamic multidrug therapies for HIV: optimal and STI
control approaches. Mathematical Biosciences and Engineering, pages 223?241, 2004.
[2] MA Alvarez, L Rosasco, ND Lawrence, et al. Kernels for vector-valued functions: A review. Foundations
and Trends in Machine Learning, 4(3):195-266, 2012.
[3] H Bai, D Hsu, and W S Lee. Planning how to learn. In International Conference on Robotics and
Automation, pages 2853?2859. IEEE, 2013.
[4] C Blundell, J Cornebise, K Kavukcuoglu, and D Wierstra. Weight uncertainty in neural networks. In
Proceedings of The 32nd International Conference on Machine Learning, pages 1613?1622, 2015.
[5] EV Bonilla, KM Chai, and CK Williams. Multi-task Gaussian process prediction. In Advances in Neural
Information Processing Systems, volume 20, pages 153?160, 2008.
[6] E Brunskill and L Li. Sample complexity of multi-task reinforcement learning. In Conference on
Uncertainty in Artificial Intelligence, 2013.
[7] R Caruana. Multitask learning. In Learning to learn, pages 95?133. Springer, 1998.
[8] M Chen, E Frazzoli, D Hsu, and WS Lee. POMDP-lite for robust robot planning under uncertainty. In
International Conference on Robotics and Automation, pages 5427?5433. IEEE, 2016.
[9] MP Deisenroth and CE Rasmussen. PILCO: a model-based and data-efficient approach to policy search.
In Proceedings of the International Conference on Machine Learning, 2011.
[10] B Delhaisse, D Esteban, L Rozo, and D Caldwell. Transfer learning of shared latent spaces between robots
with similar kinematic structure. In International Joint Conference on Neural Networks. IEEE, 2017.
[11] S Depeweg, JM Hernández-Lobato, F Doshi-Velez, and S Udluft. Learning and policy search in stochastic
dynamical systems with Bayesian neural networks. In International Conference on Learning Representations, 2017.
[12] S Depeweg, JM Hernández-Lobato, F Doshi-Velez, and S Udluft. Uncertainty decomposition in bayesian
neural networks with latent variables. arXiv preprint arXiv:1706.08495, 2017.
[13] CR Dietrich and GN Newsam. Fast and exact simulation of stationary gaussian processes through circulant
embedding of the covariance matrix. SIAM Journal on Scientific Computing, 18(4):1088?1107, 1997.
[14] F Doshi-Velez and G Konidaris. Hidden parameter Markov Decision Processes: a semiparametric
regression approach for discovering latent task parametrizations. In Proceedings of the Twenty-Fifth
International Joint Conference on Artificial Intelligence, volume 25, pages 1432?1440, 2016.
[15] D Ernst, G Stan, J Goncalves, and L Wehenkel. Clinical data based optimal STI strategies for HIV: a
reinforcement learning approach. In Proceedings of the 45th IEEE Conference on Decision and Control,
2006.
[16] A Fern and P Tadepalli. A computational decision theory for interactive assistants. In Advances in Neural
Information Processing Systems, pages 577?585, 2010.
[17] Y Gal and Z Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep
learning. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[18] Y Gal, R McAllister, and CE Rasmussen. Improving PILCO with Bayesian neural network dynamics
models. In Data-Efficient Machine Learning workshop, ICML, 2016.
[19] MG Genton, W Kleiber, et al. Cross-covariance functions for multivariate geostatistics. Statistical Science,
30(2):147?163, 2015.
[20] A Gupta, C Devin, Y Liu, P Abbeel, and S Levine. Learning invariant feature spaces to transfer skills with
reinforcement learning. In International Conference on Learning Representations, 2017.
[21] H van Hasselt, A Guez, and D Silver. Deep reinforcement learning with double Q-learning. In Proceedings
of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2094?2100. AAAI Press, 2016.
[22] JM Hernández-Lobato, Y Li, M Rowland, D Hernández-Lobato, T Bui, and RE Turner. Black-box
?-divergence minimization. In Proceedings of the 33rd International Conference on Machine Learning,
2016.
[23] N Jaques, S Taylor, A Sano, and R Picard. Multi-task, multi-kernel learning for estimating individual
wellbeing. In Proceedings of NIPS Workshop on Multimodal Machine Learning, 2015.
[24] LP Kaelbling, ML Littman, and AR Cassandra. Planning and acting in partially observable stochastic
domains. Artificial intelligence, 101(1):99?134, 1998.
[25] A Kendall and Y Gal. What uncertainties do we need in bayesian deep learning for computer vision? arXiv
preprint arXiv:1703.04977, 2017.
[26] D Kingma and J Ba. Adam: A method for stochastic optimization. In International Conference on
Learning Representations, 2015.
[27] D JC MacKay. A practical Bayesian framework for backpropagation networks. Neural computation, 4(3):
448?472, 1992.
[28] VN Marivate, J Chemali, E Brunskill, and M Littman. Quantifying uncertainty in batch personalized
sequential decision making. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence,
2014.
[29] V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K
Fidjeland, G Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518
(7540):529?533, 2015.
[30] TM Moerland, J Broekens, and CM Jonker. Learning multimodal transition dynamics for model-based
reinforcement learning. arXiv preprint arXiv:1705.00470, 2017.
[31] AW Moore and CG Atkeson. Prioritized sweeping: reinforcement learning with less data and less time.
Machine learning, 13(1):103?130, 1993.
[32] BL Moore, LD Pyeatt, V Kulkarni, P Panousis, K Padrez, and AG Doufas. Reinforcement learning for
closed-loop propofol anesthesia: a study in human volunteers. Journal of Machine Learning Research, 15
(1):655?696, 2014.
[33] RM Neal. Bayesian training of backpropagation networks by the hybrid Monte carlo method. Technical
report, Citeseer, 1992.
[34] J Quiñonero-Candela and CE Rasmussen. A unifying view of sparse approximate gaussian process
regression. Journal of Machine Learning Research, 6(Dec):1939?1959, 2005.
[35] CE Rasmussen and M Kuss. Gaussian processes in reinforcement learning. In Advances in Neural
Information Processing Systems, volume 15, 2003.
[36] CE Rasmussen and CKI Williams. Gaussian processes for machine learning. MIT Press, Cambridge,
2006.
[37] B Rosman, M Hawasly, and S Ramamoorthy. Bayesian policy reuse. Machine Learning, 104(1):99?127,
2016.
[38] T Schaul, J Quan, I Antonoglou, and D Silver. Prioritized experience replay. In International Conference
on Learning Representations, 2016.
[39] P Schulam and S Saria. Integrative analysis using coupled latent variable models for individualizing
prognoses. Journal of Machine Learning Research, 17:1?35, 2016.
[40] SM Shortreed, E Laber, DJ Lizotte, TS Stroup, J Pineau, and SA Murphy. Informing sequential clinical
decision-making through reinforcement learning: an empirical study. Machine learning, 84(1-2):109?136,
2011.
[41] E Snelson and Z Ghahramani. Sparse gaussian processes using pseudo-inputs. In Advances in Neural
Information Processing Systems, pages 1257?1264, 2006.
[42] R Sutton and A Barto. Reinforcement learning: an introduction, volume 1. MIT Press, Cambridge, 1998.
[43] ME Taylor and P Stone. Transfer learning for reinforcement learning domains: a survey. Journal of
Machine Learning Research, 10(Jul):1633?1685, 2009.
[44] M Tenenbaum, A Fern, L Getoor, M Littman, V Manasinghka, S Natarajan, D Page, J Shrager, Y Singer,
and P Tadepalli. Personalizing cancer therapy via machine learning. Workshops of NIPS, 2010.
[45] JD Williams and S Young. Scaling POMDPs for dialog management with composite summary point-based
value iteration (CSPBVI). In AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue
Systems, pages 37?42, 2006.
6,861 | 7,206 | Population Matching Discrepancy and
Applications in Deep Learning
Jianfei Chen, Chongxuan Li, Yizhong Ru, Jun Zhu*
Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab for Intell. Tech. & Sys.
Tsinghua University, Beijing, 100084, China
{chenjian14,licx14,ruyz13}@mails.tsinghua.edu.cn, [email protected]
Abstract
A differentiable estimation of the distance between two distributions based on
samples is important for many deep learning tasks. One such estimation is maximum mean discrepancy (MMD). However, MMD suffers from its sensitive kernel
bandwidth hyper-parameter, weak gradients, and large mini-batch size when used
as a training objective. In this paper, we propose population matching discrepancy
(PMD) for estimating the distribution distance based on samples, as well as an
algorithm to learn the parameters of the distributions using PMD as an objective.
PMD is defined as the minimum weight matching of sample populations from each
distribution, and we prove that PMD is a strongly consistent estimator of the first
Wasserstein metric. We apply PMD to two deep learning tasks, domain adaptation
and generative modeling. Empirical results demonstrate that PMD overcomes the
aforementioned drawbacks of MMD, and outperforms MMD on both tasks in terms
of the performance as well as the convergence speed.
1 Introduction
Recent advances in image classification [26], speech recognition [19] and machine translation [9]
suggest that properly building large models with a deep hierarchy can be effective for solving realistic
learning problems. Many deep learning tasks, such as generative modeling [16, 3], domain adaptation [5, 47], model criticism [32] and metric learning [14], require estimating the statistical divergence
of two probability distributions. A challenge is that in many tasks, only the samples instead of the
closed-form distributions are available. Such distributions include implicit probability distributions
and intractable marginal distributions. Without making explicit assumptions about the parametric form,
these distributions are richer and hence can lead to better estimates [35]. In these cases, the estimation
of the statistical divergence based on samples is important. Furthermore, as the distance can be used
as a training objective, it needs to be differentiable with respect to the parameters of the distributions
to enable efficient gradient-based training.
One popular sample-based statistical divergence is the maximum mean discrepancy (MMD) [17],
which compares the kernel mean embedding of two distributions in RKHS. MMD has a closed-form
estimate of the statistical distance in quadratic time, and there are theoretical results on bounding the
approximation error. Due to its simplicity and theoretical guarantees, MMD has been widely adopted
in many tasks such as belief propagation [44], domain adaptation [47] and generative modeling [31].
However, MMD has several drawbacks. For instance, it has a kernel bandwidth parameter that needs
tuning [18], and the kernel can saturate so that the gradient vanishes [3] in a deep generative model.
Furthermore, in order to have a reliable estimate of the distance, the mini-batch size must be large,
e.g., 1000, which slows down the training by stochastic gradient descent [31].
* Corresponding author.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Require: Noise distributions q_X, q_Y and transformations T^X_{θ_X}, T^Y_{θ_Y};
         population size N, mini-batch size |B|.
for each iteration do
    Draw ε ∼ q_X(ε), η ∼ q_Y(η)
    Compute x_{i;θ_X} = T^X_{θ_X}(ε_i) and y_{j;θ_Y} = T^Y_{θ_Y}(η_j)
    M ← MinimumWeightMatching(X_{θ_X}, Y_{θ_Y})
    Align the matched pairs: y_{1;θ_Y}, ..., y_{N;θ_Y} ← y_{M_1;θ_Y}, ..., y_{M_N;θ_Y}
    for each mini-batch s ∈ [0, |B|, 2|B|, ..., N] do
        θ = SGD(θ, (1/|B|) Σ_{i=s}^{s+|B|−1} d(x_{i;θ_X}, y_{i;θ_Y}))
    end for
end for
Figure 1: Pseudocode of PMD for parameter learning with graphical illustration of an iteration. Top:
draw the populations and compute the matching; bottom: update the distribution parameters.
In this paper, we consider a sample-based estimation of the Wasserstein metric [49], which we refer
to as population matching discrepancy (PMD). PMD is the cost of the minimum weight matching
of the two sample populations from the distributions, and we show that it is a strongly consistent
estimator of the first Wasserstein metric. We propose an algorithm that uses PMD as a training objective
to learn the parameters of the distributions, and show that PMD has several advantages over MMD:
PMD has no bandwidth hyper-parameter, provides stronger gradients, and can use a normal mini-batch size,
such as 100, during learning. We compare PMD with MMD on two deep learning tasks, domain
adaptation and generative modeling. PMD outperforms MMD in terms of both performance and
speed of convergence.
2 Population Matching Discrepancy
In this section, we give the definition of the population matching discrepancy (PMD) and propose an
algorithm to learn with PMD.
2.1 Population Matching Discrepancy
Consider the general case where we have two distributions p_X(x) and p_Y(y) whose PDFs are
unknown, but from which we are allowed to draw samples. Let X = {x_i}_{i=1}^N and Y = {y_j}_{j=1}^N
denote the N i.i.d. samples from each distribution, respectively. We define the N-PMD of the two
distributions as

    D_N(X, Y) = min_M (1/N) ∑_{i=1}^N d(x_i, y_{M_i}),    (1)

where d(·,·) is any distance in the sample space (e.g., the Euclidean distance) and M is a permutation
defining a matching between the two sets of samples. The optimal M corresponds to the bipartite
minimum weight matching [27], where each element of the cost matrix is d_ij = d(x_i, y_j) with
i, j ∈ [N], where [N] = {1, ..., N}. Intuitively, PMD is the average distance of the matched pairs
of samples; it is therefore non-negative and symmetric. Furthermore, as we shall see in Sec. 3.1, PMD
is a strongly consistent estimator of the first Wasserstein metric [49] between p_X and p_Y, which is a
valid statistical distance, i.e., D_∞(X, Y) = 0 iff the two distributions p_X and p_Y are identical.
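As a concrete reference, the N-PMD of Eq. (1) can be computed in a few lines with off-the-shelf tools; the sketch below is our own illustration (the function name and toy Gaussian populations are not from the original implementation) and uses SciPy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def pmd(X, Y, metric="cityblock"):
    """N-PMD: average distance over the minimum weight bipartite matching."""
    D = cdist(X, Y, metric=metric)          # N x N cost matrix d_ij = d(x_i, y_j)
    rows, cols = linear_sum_assignment(D)   # exact matching (Hungarian, O(N^3))
    return D[rows, cols].mean()

X = np.random.randn(500, 5)                 # samples from p_X
Y = np.random.randn(500, 5) + 10.0          # samples from p_Y, shifted per dimension
print(pmd(X, Y))                            # approaches W_1(p_X, p_Y) as N grows
```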
2.2 Parameter Learning
While the N -PMD in Eq. (1) itself can serve as a measure of the closeness of two distributions, we
are more interested in learning the parameters of the distributions using PMD as an objective. For
instance, in generative modeling [31], we have a parameterized generator distribution p_X(x; θ_X)
and a data distribution pY (y), and we wish to minimize the distance of these two distributions. We
assume the samples are obtained by applying some parameterized transformations to a known and
fixed noise distribution, i.e.,

    ε_i ~ q_X(ε), x_{i;θ_X} = T^X_{θ_X}(ε_i);   and   η_j ~ q_Y(η), y_{j;θ_Y} = T^Y_{θ_Y}(η_j).

For flexibility, the transformations can be implemented by deep neural networks. Without loss of
generality, we assume both p_X and p_Y are distributions parameterized by θ_X and θ_Y, respectively. If
p_X is a fixed distribution, we can take q_X = p_X and T^X_{θ_X} to be a fixed identity mapping. Our goal for
parameter learning is to minimize the expected N-PMD over different populations

    min_{θ_X, θ_Y} E_{ε,η} D_N(X_{θ_X}, Y_{θ_Y}),    (2)

where ε = {ε_i}_{i=1}^N, η = {η_j}_{j=1}^N, X_{θ_X} = {x_{i;θ_X}}_{i=1}^N and Y_{θ_Y} = {y_{j;θ_Y}}_{j=1}^N, and the expectation
is for preventing over-fitting of the parameters to particular populations. The parameters
can be optimized by stochastic gradient descent (SGD) [7]. At each iteration, we draw ε and η, and
compute an unbiased stochastic gradient
    ∇_θ D_N(X_{θ_X}, Y_{θ_Y}) = ∇_θ min_M (1/N) ∑_{i=1}^N d(x_{i;θ_X}, y_{M_i;θ_Y}) = ∇_θ (1/N) ∑_{i=1}^N d(x_{i;θ_X}, y_{M*_i;θ_Y}),    (3)

where M* = argmin_M ∑_{i=1}^N d(x_{i;θ_X}, y_{M_i;θ_Y}) is the minimum weight matching for X_{θ_X} and
Y_{θ_Y}. The second equality in Eq. (3) holds because the discrete matching M* should not change
for an infinitesimal change of θ, as long as the transformations T^X, T^Y, and the distance d(·,·) are
continuous. In other words, the gradient does not propagate through the matching.
Furthermore, assuming that the matching M* does not change much within a small number of
gradient updates, we can obtain an even cheaper stochastic gradient by subsampling the populations

    ∇_θ D_N(X_{θ_X}, Y_{θ_Y}) ≈ ∇_θ (1/|B|) ∑_{i=1}^{|B|} d(x_{B_i;θ_X}, y_{M*_{B_i};θ_Y}),    (4)

where a mini-batch of |B| samples, e.g., 100, is used to approximate the whole N-sample population.
To clarify, our population size N is known as the mini-batch size in some maximum mean discrepancy
(MMD) literature [31], and is around 1000. Fig. 1 is the pseudocode of parameter learning for PMD
along with a graphical illustration. In the outer loop, we generate populations and compute the
matching; in the inner loop, we perform several SGD updates of the parameter θ, assuming the
matching M does not change much. In the graphical illustration, the distribution p_Y is fixed, and we
want to optimize the parameters of p_X to minimize their PMD.
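The following is a minimal, self-contained sketch of this training loop for a linear generator T_θ(ε) = Wε + b and the L1 distance; the gradient through x = Wε + b is written out by hand rather than obtained by automatic differentiation, and all names and constants are our own illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
W, b = rng.standard_normal((2, 2)), np.zeros(2)   # generator: T(eps) = W @ eps + b
Y = 0.5 * rng.standard_normal((1000, 2)) + 3.0    # fixed data population
N, B, lr = 1000, 100, 0.05

for it in range(100):
    eps = rng.standard_normal((N, 2))             # outer loop: draw a population
    X = eps @ W.T + b
    M = linear_sum_assignment(cdist(X, Y, "cityblock"))[1]   # match once
    for s in range(0, N, B):                      # inner loop: several SGD steps
        e, y = eps[s:s + B], Y[M[s:s + B]]
        x = e @ W.T + b                           # regenerate with current params
        g = np.sign(x - y) / B                    # dL/dx for the mean L1 distance
        W -= lr * (g.T @ e)                       # chain rule through x = W e + b
        b -= lr * g.sum(axis=0)

print(b)   # should drift towards the data mean (3, 3)
```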
2.3 Solving the Matching Problem
The minimum weight matching can be solved exactly in O(N³) time by the Hungarian algorithm [27].
When the problem is simple enough that a small N, e.g., hundreds, suffices for reliable
distance estimation, the O(N³) time complexity is acceptable compared with the O(N · BackProp)
time complexity of computing the gradient with respect to the transformations T^X_{θ_X} and T^Y_{θ_Y}. When
N is larger, e.g., a few thousand, the Hungarian algorithm takes seconds to run, and we resort to Drake
and Hougardy's approximated matching algorithm [11], which runs in O(N²) time. The running time and model
quality of PMD under both matching algorithms are reported in Sec. 5.3. In practice, we find that PMD
works well with both the exact and the approximate matching algorithm. This is not surprising, because
training each sample towards its approximately matched sample is still reasonable. Finally, while we
only implement serial CPU versions of the matching algorithms, both algorithms can be parallelized
on GPU to further improve the running speed [10, 34].
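To give a feel for the trade-off, the snippet below contrasts the exact solver with a simple greedy heuristic that repeatedly takes the cheapest remaining pair; note this greedy rule is only a stand-in for illustration, not an implementation of the Drake and Hougardy algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_matching(D):
    """Greedy heuristic: scan all pairs by increasing cost; O(N^2 log N) overall."""
    N = D.shape[0]
    match = np.full(N, -1)
    row_used, col_used = np.zeros(N, bool), np.zeros(N, bool)
    for flat in np.argsort(D, axis=None):
        i, j = divmod(flat, N)
        if not row_used[i] and not col_used[j]:
            match[i], row_used[i], col_used[j] = j, True, True
    return match

D = np.random.rand(500, 500)
r, c = linear_sum_assignment(D)                     # exact, O(N^3)
print(D[r, c].mean(), D[np.arange(500), greedy_matching(D)].mean())
```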
3 Theoretical Analysis and Connections to Other Discrepancies
In this section, we establish the connections between PMD and the Wasserstein metric and the
maximum mean discrepancy (MMD). We show that PMD is a strongly consistent estimator of the
Wasserstein metric, and compare its advantages and disadvantages with MMD.
3.1 Relationship with the Wasserstein Metric
The Wasserstein metric [49] was initially studied in optimal transport theory, and has been adopted
in computer vision [40], information retrieval [50] and differential privacy [30]. The first Wasserstein
metric of two distributions p_X(x) and p_Y(y) is defined as

    inf_{γ(x,y)} ∫ d(x, y) γ(x, y) dx dy
    s.t. ∫ γ(x, y) dx = p_Y(y), ∀y;  ∫ γ(x, y) dy = p_X(x), ∀x;  γ(x, y) ≥ 0, ∀x, y.    (5)

Intuitively, the Wasserstein metric is the optimal cost to move some mass distributed as p_X to p_Y,
where the transference plan γ(x, y) is the amount of mass to move from x to y. Problem (5) is not
tractable because the PDFs of p_X and p_Y are unknown. We approximate them with the empirical
distributions p̂_X(x) = (1/N) ∑_{i=1}^N δ_{x_i}(x) and p̂_Y(y) = (1/N) ∑_{j=1}^N δ_{y_j}(y), where δ_x(·) is the Dirac delta function
at x. To satisfy the constraints, γ should have the form γ(x, y) = ∑_{i=1}^N ∑_{j=1}^N γ_ij δ_{x_i,y_j}(x, y), where
γ_ij ≥ 0. Letting p_X = p̂_X and p_Y = p̂_Y, we can simplify problem (5) as follows
    min_γ ∑_{i=1}^N ∑_{j=1}^N d(x_i, y_j) γ_ij   s.t.  ∑_{j=1}^N γ_ij = 1/N, ∀i ∈ [N];  ∑_{i=1}^N γ_ij = 1/N, ∀j ∈ [N];  γ_ij ≥ 0.    (6)
The linear program (6) is equivalent to the minimum weight matching problem [27], i.e., there exists
a permutation M_1, ..., M_N such that γ(x_i, y_{M_i}) = 1/N is an optimal solution (see Proposition 5.4
in [6]). Plugging such a γ back into problem (6), we obtain Eq. (1), the original definition of PMD.
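This equivalence is easy to check numerically; the sketch below (our own illustration) solves the linear program (6) with scipy.optimize.linprog and compares its optimum against the matching found by linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linprog, linear_sum_assignment

rng = np.random.default_rng(0)
N = 6
D = rng.random((N, N))                      # cost matrix d_ij

A_eq = np.zeros((2 * N, N * N))             # gamma flattened row-major
for i in range(N):
    A_eq[i, i * N:(i + 1) * N] = 1          # sum_j gamma_ij = 1/N  (row i)
    A_eq[N + i, i::N] = 1                   # sum_i gamma_ij = 1/N  (column i)
lp = linprog(D.ravel(), A_eq=A_eq, b_eq=np.full(2 * N, 1.0 / N))

r, c = linear_sum_assignment(D)             # minimum weight matching
print(lp.fun, D[r, c].mean())               # the two optimal values coincide
```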
Furthermore, we can show that the solution of problem (6), i.e., the N -PMD, is a strongly consistent
estimator of the first Wasserstein metric in problem (5).
Definition 1 (Weak Convergence of Measure [48]). A sequence of probability distributions p_N, N =
1, 2, ..., converges weakly to the probability distribution p, denoted p_N ⇒ p, if lim_{N→∞} E_{p_N}[f] =
E_p[f] for all bounded continuous functions f.

Proposition 3.1 (Varadarajan Theorem [48]). Let x_1, ..., x_N, ... be independent, identically distributed
real random variables with density function p(x), and let p_N(x) = (1/N) ∑_{i=1}^N δ_{x_i}(x), where
δ_{x_i}(·) is the Dirac delta function. Then p_N ⇒ p almost surely.
Proposition 3.2 (Stability of Optimal Transport [49]). Let X and Y be Polish spaces and let
d : X × Y → ℝ be a continuous function s.t. inf d > −∞. Let {p_N^X}_{N∈ℕ} and {p_N^Y}_{N∈ℕ} be
sequences of probability distributions on X and Y, respectively. Assume that p_N^X ⇒ p_X (resp.
p_N^Y ⇒ p_Y). For each N, let γ_N be an optimal transference plan between p_N^X and p_N^Y. If
lim inf_N ∫ d(x, y) γ_N(x, y) dx dy < +∞, then γ_N ⇒ γ, where γ is an optimal transference
plan between p_X and p_Y.
Proposition 3.2 is a special case of Theorem 5.20 in [49] with fixed function d. The following theorem
is the main result of this section.
Theorem 3.3 (Strong Consistency of PMD). Let x_1, ..., x_N, ... and y_1, ..., y_N, ... be independent,
identically distributed real random variables from p_X and p_Y, respectively. We construct a sequence
of PMD problems (6) between p_N^X(x) = (1/N) ∑_{i=1}^N δ_{x_i}(x) and p_N^Y(y) = (1/N) ∑_{i=1}^N δ_{y_i}(y).
Let γ_N be the optimal transference plan of the N-th PMD problem. Then the sequence γ_N ⇒
γ almost surely, where γ is the optimal transference plan between p_X and p_Y. Moreover,
lim_{N→∞} ∫ d(x, y) γ_N(x, y) dx dy = ∫ d(x, y) γ(x, y) dx dy almost surely.
The proof is straightforward by applying Propositions 3.1 and 3.2. We also perform an empirical study
of the approximation error with respect to the population size in Fig. 2(a).
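In one dimension, this consistency can be verified directly, since the minimum weight matching under the L1 cost simply pairs sorted samples; the toy check below (our own, mirroring the setting of Sec. 5.4) uses two unit-variance Gaussians whose first Wasserstein distance is exactly 10:

```python
import numpy as np

rng = np.random.default_rng(0)
for N in [4, 16, 64, 256, 1024, 4096]:
    errs = []
    for _ in range(100):
        x = np.sort(rng.normal(0.0, 1.0, N))    # sorting realizes the matching in 1-D
        y = np.sort(rng.normal(10.0, 1.0, N))
        errs.append(abs(np.abs(x - y).mean() - 10.0) / 10.0)
    print(N, np.mean(errs))                     # relative error decays as N grows
```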
While the Wasserstein metric has been widely adopted in various machine learning and data mining
tasks [40, 50, 30], it is usually used to measure the similarity between two discrete distributions,
e.g., histograms. In contrast, PMD is a stochastic approximation of the Wasserstein metric between
two continuous distributions. There is also work on estimating the Wasserstein metric of continuous
distributions based on samples [45]. Unlike PMD, which approximates the primal problem,
they approximate the dual. Their approximation is not differentiable with respect to the distribution
parameters, because the parameters appear in the constraints instead of the objective.

Figure 2: Empirical analysis results: (a) relative approximation error w.r.t. the population size N;
(b) distribution of normalized gradient magnitudes for PMD and MMD. The detailed experiment
setting is described in Sec. 5.4.

Recently,
Wasserstein GAN (WGAN) [3] proposes approximating the dual Wasserstein metric by using a neural
network "critic" in place of a 1-Lipschitz function. While WGAN has shown excellent performance
on generative modeling, it can only compute a relative value of the Wasserstein metric, up to an
unknown scale factor depending on the Lipschitz constant of the critic network. PMD also
differs from WGAN by not requiring a separate critic network with additional parameters. Instead,
PMD is parameter free and can be computed in polynomial time.
3.2 Relationship with MMD
Maximum mean discrepancy (MMD) [17] is a popular method for estimating the distance between
two distributions from samples, defined as follows:

    D_MMD(X, Y) = (1/N²) ∑_{i=1}^N ∑_{j=1}^N k(x_i, x_j) − (2/(NM)) ∑_{i=1}^N ∑_{j=1}^M k(x_i, y_j) + (1/M²) ∑_{i=1}^M ∑_{j=1}^M k(y_i, y_j),

where k(·,·) is a kernel, e.g., k(x, y) = exp(−‖x − y‖²/2σ²) is the RBF kernel with bandwidth σ.
Both MMD and the Wasserstein metric are integral probability metrics [17], with different function
classes. MMD has a closed-form objective, and can be evaluated in O(NMD) time if x and y are
D-dimensional vectors. In contrast, PMD needs to solve a matching problem, and the time complexity
is O(N²D) for computing the distance matrix, plus O(N³) for exact Hungarian matching or O(N²)
for approximated matching. However, as we argued in Sec. 2.3, the time complexity for computing the
matching is still acceptable compared with the cost of training neural networks.
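For reference, a minimal NumPy version of the (biased) closed-form MMD estimate might read as follows; the vectorized kernel evaluation makes the O(NMD) cost explicit:

```python
import numpy as np

def mmd(X, Y, sigma=1.0):
    """Biased closed-form estimate of (squared) MMD with an RBF kernel."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise ||a - b||^2
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(X, X).mean() - 2.0 * gram(X, Y).mean() + gram(Y, Y).mean()
```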
Compared with MMD, PMD has a number of advantages:

Fewer hyper-parameters. PMD does not have the kernel bandwidth σ, which needs tuning.
Stronger gradient. Using the RBF kernel, the gradient of MMD w.r.t. a particular sample x_i is

    ∇_{x_i} D_MMD(X, Y) = (1/N²) ∑_j k(x_i, x_j) (x_j − x_i)/σ² − (2/(NM)) ∑_j k(x_i, y_j) (y_j − x_i)/σ².

When minimizing MMD, the first term is a repulsive term between the samples from p_X, and the second
term is an attractive term between the samples from p_X and p_Y. The L2 norm of the term between
two samples x and y is k(x, y) ‖x − y‖₂/σ², which is small if ‖x − y‖₂ is either too small or too large.
As a result, if a sample x_i is an outlier, i.e., not close to any sample from p_Y, all the k(x_i, y_j) terms
are small and x_i will not receive strong gradients. On the other hand, if all the samples x_i, i ∈ [N],
are close to each other, x_j − x_i is small, so the repulsive term of the gradient is weak. Both cases
slow down the training. In contrast, if d(x, y) = ‖x − y‖₁ is the L1 distance, the gradient of PMD,
∇_{x_i} D_N(X, Y) = (1/N) sgn(x_i − y_{M_i}), where sgn(·) is the sign function, is always strong regardless
of the closeness between x_i and y_{M_i}. We compare the distribution of the relative magnitude of the
gradient contributed by each sample in Fig. 2(b). The PMD gradients have similar magnitude for
each sample, while many samples have small gradients under MMD.
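This contrast can be reproduced numerically; the sketch below computes per-sample gradient norms for both discrepancies on toy data, following the formulas above (constant factors aside), and reports their relative spread:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
N, sigma = 200, 1.0
X, Y = rng.normal(0, 1, (N, 2)), rng.normal(4, 1, (N, 2))

M = linear_sum_assignment(cdist(X, Y, "cityblock"))[1]
g_pmd = np.sign(X - Y[M]) / N                       # PMD gradient w.r.t. each x_i

dxx = X[None, :, :] - X[:, None, :]                 # dxx[i, j] = x_j - x_i
dxy = Y[None, :, :] - X[:, None, :]                 # dxy[i, j] = y_j - x_i
kxx = np.exp(-(dxx ** 2).sum(-1) / (2 * sigma ** 2))
kxy = np.exp(-(dxy ** 2).sum(-1) / (2 * sigma ** 2))
g_mmd = ((kxx[..., None] * dxx).sum(1)
         - 2 * (kxy[..., None] * dxy).sum(1)) / (N ** 2 * sigma ** 2)

for name, g in [("PMD", g_pmd), ("MMD", g_mmd)]:
    n = np.linalg.norm(g, axis=1)
    print(name, (n / n.mean()).std())               # PMD norms are nearly uniform
```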
Smaller mini-batch size. As we saw in Sec. 2.2, the SGD mini-batch size for PMD can be smaller
than the population size, while the mini-batch size for MMD must be equal to the population size.
This is because PMD only considers the distance between a sample and its matched sample, while
MMD considers the distances between all pairs of samples. As a result of the smaller mini-batch size,
PMD can converge faster than MMD when used as a training objective.
4 Applications
4.1 Domain Adaptation
Now we consider a scenario where labeled data is scarce in some domain of interest (the target
domain) but abundant in some related domain (the source domain). Assuming that the data
distribution p_S(X, y) of the source domain and that of the target domain, i.e., p_T(X, y), are similar
but not the same, unsupervised domain adaptation aims to train a model for the target domain, given
some labeled data {(X_i^S, y_i^S)}_{i=1}^{N_S} from the source domain and some unlabeled data {X_j^T}_{j=1}^{N_T} from
the target domain. According to domain adaptation theory [5], the generalization error on the
target domain depends on the generalization error on the source domain as well as the difference
between the two domains. Therefore, one possible solution for domain adaptation is to learn a feature
extractor φ(X) shared by both domains, which defines feature distributions p_{φS} and p_{φT} for the two
domains, and to minimize some distance between the feature distributions [47] as a regularization. Since
the data distribution is inaccessible, we replace all distributions with their empirical counterparts p̂_S,
p̂_T, p̂_{φS} and p̂_{φT}, and the training objective is

    E_{X,y~p̂_S} L(y, h(φ(X))) + λ D(p̂_{φS}, p̂_{φT}),

where L(·,·) is a loss function, h(·) is a classifier, λ is a hyper-parameter, and D(p̂_{φS}, p̂_{φT}) is the domain
adaptation regularization. While the Wasserstein metric itself between two empirical distributions is tractable,
it can be too expensive to compute due to the large size of the dataset. Therefore, we still approximate
the distance with the (expected) PMD, i.e., D(p̂_{φS}, p̂_{φT}) ≈ E_{X_S~p̂_S, X_T~p̂_T} D_PMD(φ(X_S), φ(X_T)).
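Schematically, one mini-batch of this objective can be assembled as below, where phi, h and loss stand for the feature extractor, classifier and classification loss; in practice these are differentiable network modules and the gradient flows through both terms via automatic differentiation, which this NumPy-level sketch does not show:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def da_objective(phi, h, loss, Xs, ys, Xt, lam, B=100):
    """Mini-batch estimate of E[L(y, h(phi(X)))] + lam * PMD(phi(X_S), phi(X_T))."""
    Fs, Ft = phi(Xs[:B]), phi(Xt[:B])       # source / target features
    D = cdist(Fs, Ft, "cityblock")
    r, c = linear_sum_assignment(D)         # match source features to target features
    return loss(ys[:B], h(Fs)) + lam * D[r, c].mean()
```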
4.2 Deep Generative Modeling
Deep generative models (DGMs) aim at capturing the complex structures of the data by combining
hierarchical architectures and probabilistic modelling. They have recently proven effective for image
generation [38] and semi-supervised learning [23]. There are many different DGMs, including tractable
auto-regressive models [37], latent variable models [24, 39], and implicit probabilistic
models [16, 31]. We focus on learning implicit probabilistic models, which define probability
distributions on the sample space flexibly, without a closed form. However, as described in Sec. 2.2, we
can draw samples X = T^X_{θ_X}(ε) efficiently from such models by transforming random noise ε ~ q(ε),
where q is a simple distribution (e.g., uniform), to X through a parameterized model (e.g., a neural
network). The parameters of the model are trained to minimize some distance between the model
distribution p_X(X) and the empirical data distribution p̂_Y(Y). The distance can be defined based on
a parameterized adversary, i.e., another neural network [16, 3], or directly on the samples [31].
We choose the distance to be the first Wasserstein metric, and employ its finite-sample estimator
(i.e., the N-PMD defined in Eq. (2)) directly as the training objective. Training this model with MMD is
known as generative moment matching networks [31, 12].
5 Experiments
We now study the empirical performance of PMD and compare it with MMD. In the experiments,
PMD always uses the L1 distance, and MMD always uses the RBF kernel. Our experiments are conducted
on a machine with an Nvidia Titan X (Pascal) GPU and an Intel E5-2683v3 CPU. We implement the models
in TensorFlow [1]. The matching algorithms are implemented in C++ with a single thread, and we
write a CUDA kernel for computing the all-pair L1 distances within a population. The CUDA program
is compiled with nvcc 8.0 and the C++ program with g++ 4.8.4, using the -O3 flag for both.
We use the approximate matching for the generative modeling experiment and
exact Hungarian matching for all the other experiments.
5.1 Domain Adaptation
Table 1: Unsupervised domain adaptation accuracy (in percent) on all 6 tasks of the Office dataset
between the amazon (a), dslr (d) and webcam (w) domains. SVM and NN are trained only on the
source domain, where NN uses the same architecture as PMD and MMD but sets λ = 0.

Method      | a→w       | d→w       | a→d        | w→d       | d→a       | w→a       | avg.
DDC [47]    | 59.4 ± .8 | 92.5 ± .3 | –          | 91.7 ± .8 | –         | –         | –
DANN [13]   | 73.0      | 96.4      | –          | 99.2      | –         | –         | –
CMD [52]    | 77.0 ± .6 | 96.3 ± .4 | 79.6 ± .6  | 99.2 ± .2 | 63.8 ± .7 | 63.3 ± .6 | 79.9
JAN-xy [33] | 78.1 ± .4 | 96.4 ± .2 | 77.5 ± .2  | 99.3 ± .1 | 68.4 ± .2 | 65.0 ± .4 | 80.8
SVM         | 65.0      | 96.1      | 70.7       | 99.4      | 56.4      | 55.1      | 73.8
NN          | 67.8 ± .5 | 96.3 ± .2 | 73.9 ± .6  | 99.5 ± .2 | 58.5 ± .3 | 58.1 ± .3 | 75.7
MMD         | 76.9 ± .8 | 96.2 ± .2 | 78.4 ± 1.0 | 99.6 ± .2 | 64.9 ± .5 | 68.1 ± .6 | 80.7
PMD         | 86.2 ± .7 | 96.2 ± .3 | 82.7 ± .8  | 99.5 ± .2 | 64.3 ± .4 | 66.8 ± .4 | 82.6

Figure 3: Convergence speed and parameter sensitivity on the Office d → a task: (a) convergence
speed of PMD and MMD (test accuracy vs. number of iterations); (b) MMD parameter sensitivity
w.r.t. the regularization strength λ and kernel bandwidth σ; (c) PMD parameter sensitivity w.r.t.
the regularization strength λ.

We compare the performance of PMD and MMD on the standard Office [41] object recognition
benchmark for domain adaptation. The dataset contains three domains (amazon, dslr and webcam)
and 31 classes. Following [52], we use the 4096-dimensional VGG-16 [43] features pretrained
on ImageNet as the input. The classifier is a fully-connected neural network with a single hidden
layer of 256 ReLU [15] units, trained with AdaDelta [51]. The domain regularization term is put on
the hidden layer. We apply batch normalization [21] on the hidden layer, and the activations from
the source and the target domain are normalized separately. Following [8], we validate the domain
regularization strength λ and the MMD kernel bandwidth σ on a random 100-sample labeled dataset
from the target domain, while the model itself is trained without any labeled data from the target domain. The
experiment is then repeated 10 times with the hyper-parameters that achieve the best validation error. Since
we perform such validation for both PMD and MMD, the comparison between them is fair. The
result is reported in Table 1: PMD outperforms MMD on the a → w and a → d tasks by a large
margin, and is comparable with MMD on the other 4 tasks.
Then, we compare the convergence speed of PMD and MMD on the d → a task. We choose this
task because PMD and MMD have similar performance on it. The result is shown in Fig. 3(a), where
PMD converges faster than MMD. We also show the parameter sensitivity of MMD and PMD in
Fig. 3(b) and Fig. 3(c), respectively. The performance of MMD is sensitive to both the regularization
parameter λ and the kernel bandwidth σ, so we need to tune both parameters. In contrast, PMD only
has one parameter to tune.
5.2 Generative Modeling
We compare PMD with MMD for image generation on the MNIST [28], SVHN [36] and LFW [20]
datasets. For SVHN, we train the models on the 73257-image training set. The LFW dataset is
converted to 32 × 32 gray-scale images [2], and there are 13233 images for training. The noise
follows a uniform distribution on [−1, 1]^40. We implemented three architectures for the transformation
T^X_{θ_X}: a fully-connected (fc) network, a deconvolutional (conv) network, and a fully-connected
network generating auto-encoder codes (ae) [31], where the auto-encoder is a convolutional one
pre-trained on the dataset. For MMD, we use a mixture of kernels of different bandwidths for the fc
and conv architectures, and the bandwidth is fixed at 1 for the ae architecture, following the settings
in the generative moment matching networks (GMMN) paper. We set the population size N = 2000
for both PMD and MMD, and the mini-batch size |B| = 100 for PMD. We use the Adam optimizer [22]
with batch normalization [21], and train the model for 100 epochs for PMD and 500 epochs for MMD.
Figure 4: Image generation results on SVHN (top two rows) and LFW (bottom two rows); within
each dataset, the rows show MMD and PMD samples, and the columns correspond to the fc, conv
and ae architectures.
Figure 5: Convergence and timing results: (a) PMD sensitivity w.r.t. N and |B| (final PMD vs.
mini-batch size |B| for several population sizes N); (b) sensitivity of MMD w.r.t. N (final MMD vs.
population size N); (c) split of the time per epoch between exact matching, randomized (approximated)
matching and SGD. The "Exact N = 500" curve in (a) uses the Hungarian algorithm, and the rest
use the approximated matching algorithm.
The generated images on the SVHN and LFW datasets are presented in Fig. 4, and the images on the
MNIST dataset can be found in the supplementary material. We observe that the images generated by
PMD are less noisy than those generated by MMD. While MMD only performs well in the autoencoder
code space (ae), PMD generates acceptable images in pixel space. We also notice that the images
generated by PMD on the SVHN and LFW datasets are blurry. One reason for this is that the pixel-level
L1 distance is not a good distance for natural images; therefore, learning the generative model in the
code space helps. To verify that PMD does not trivially reproduce the training dataset, we perform a
circular interpolation in the representation space q(ε) between 5 random points; the result is available
in the supplementary material.
5.3 Convergence Speed and Time Consumption
We study the impact of the population size N, the mini-batch size |B| and the choice of matching
algorithm on PMD. Fig. 5(a) shows the final PMD evaluated on N = 2000 samples on the MNIST
dataset, using the fc architecture, after 100 epochs. The results show that the solution is insensitive
to both the population size N and the choice of the matching algorithm, which implies that we
can use the cheap approximated matching and a relatively small population size for speed. On the
other hand, decreasing the mini-batch size |B| improves the final PMD significantly, supporting our
claim in Sec. 3.2 that the ability to use a small |B| is indeed an advantage of PMD. Unlike PMD,
there is a trade-off in selecting the population size N for MMD, as shown in Fig. 5(b). If N is too
large, the SGD optimization converges slowly; if N is too small, the MMD estimate is unreliable.
Fig. 5(c) shows the total time spent on exact matching, approximated matching and SGD respectively
for each epoch. The cost of approximated matching is comparable with the cost of SGD. Again, we
emphasize that while we only have single-thread implementations of the matching algorithms, both the
exact [10] and approximated matching [34] can be significantly accelerated on GPU.
5.4 Empirical Studies
We examine the approximation error of PMD on a toy dataset. We compute the distances between two
5-dimensional standard isotropic Gaussian distributions. One distribution is centered at the origin and
the other at (10, 0, 0, 0, 0). The first Wasserstein metric between these two distributions is 10. We
vary the population size N and compute the relative approximation error |D_N(X, Y) − 10|/10
over 100 different populations (X, Y) for each N. The result is shown in Fig. 2(a). We perform a
linear regression between log N and the logarithm of the expected approximation error, and find that the
error is roughly proportional to N^{−0.23}.
We also validate the claim in Sec. 3.2 that PMD provides stronger gradients than MMD. We
calculate the magnitude (in L2 norm) of the gradient of the parameters contributed by each sample.
The gradients are computed on the converged model, which is the same as in Sec. 5.3. Because
the scale of the gradients depends on the scale of the loss function, we normalize the magnitudes
by dividing them by the average gradient magnitude. We then show the distribution of
normalized gradient magnitudes in Fig. 2(b). The PMD gradients contributed by each sample are
close to each other, while for MMD many samples contribute small gradients, which
may slow down the fitting of these samples.
6 Conclusions
We present population matching discrepancy (PMD) for estimating the distance between two probability
distributions by samples. PMD is the minimum weight matching between two random
populations from the distributions, and we show that PMD is a strongly consistent estimator of the
first Wasserstein metric. We also propose a stochastic gradient descent algorithm to learn parameters
of the distributions using PMD. Compared with the popular maximum mean discrepancy (MMD),
PMD has no kernel bandwidth hyper-parameter, provides stronger gradients, and admits smaller
mini-batch sizes for gradient-based optimization. We apply PMD to domain adaptation and generative
modeling tasks. Empirical results show that PMD outperforms MMD in terms of both performance and
convergence speed on both tasks. In the future, we plan to derive finite-sample error bounds for PMD,
study its testing power, and accelerate the computation of minimum weight matching with GPU.
Acknowledgments
This work is supported by the National NSF of China (Nos. 61620106010, 61621136008, 61332007),
the MIIT Grant of Int. Man. Comp. Stan (No. 2016ZXFB00001), the Youth Top-notch Talent
Support Program, Tsinghua Tiangong Institute for Intelligent Computing and the NVIDIA NVAIL
Program.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Siddharth Agrawal. Generative Moment Matching Networks. https://github.com/siddharth-agrawal/Generative-Moment-Matching-Networks.
[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875,
2017.
[4] Marc G Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan
Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint
arXiv:1705.10743, 2017.
[5] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman
Vaughan. A theory of learning from different domains. Machine learning, 79(1):151?175, 2010.
[6] Dimitri P Bertsekas. Network optimization: continuous and discrete models. Citeseer, 1998.
[7] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177–186. Springer, 2010.
[8] Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan.
Domain separation networks. In Advances in Neural Information Processing Systems, pages 343?351,
2016.
[9] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical
machine translation. In EMNLP, 2014.
[10] Ketan Date and Rakesh Nagi. Gpu-accelerated hungarian algorithms for the linear assignment problem.
Parallel Computing, 57:52?72, 2016.
[11] Doratha Drake and Stefan Hougardy. Improved linear time approximation algorithms for weighted
matchings. Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques,
pages 21?46, 2003.
[12] Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks
via maximum mean discrepancy optimization. In UAI, 2015.
[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette,
Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of
Machine Learning Research, 17(59):1?35, 2016.
[14] Bo Geng, Dacheng Tao, and Chao Xu. Daml: Domain adaptation metric learning. IEEE Transactions on
Image Processing, 20(10):2980?2989, 2011.
[15] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS,
volume 15, page 275, 2011.
[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing
systems, pages 2672?2680, 2014.
[17] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A
kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723?773, 2012.
[18] Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji
Fukumizu, and Bharath K Sriperumbudur. Optimal kernel choice for large-scale two-sample tests. In
Advances in neural information processing systems, pages 1205?1213, 2012.
[19] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew
Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic
modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing
Magazine, 29(6):82?97, 2012.
[20] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A
database for studying face recognition in unconstrained environments. Technical Report 07-49, University
of Massachusetts, Amherst, October 2007.
[21] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In ICML, 2015.
[22] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2014.
[23] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised
learning with deep generative models. In Advances in Neural Information Processing Systems, pages
3581?3589, 2014.
[24] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114,
2013.
[25] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in neural information processing systems, pages 1097?1105, 2012.
[27] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly,
2(1-2):83?97, 1955.
[28] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278?2324, 1998.
[29] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards
deeper understanding of moment matching network. arXiv preprint arXiv:1705.08584, 2017.
[30] Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. t-closeness: Privacy beyond k-anonymity
and l-diversity. In Data Engineering, 2007. ICDE 2007. IEEE 23rd International Conference on, pages
106?115. IEEE, 2007.
[31] Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, pages
1718?1727, 2015.
[32] James R Lloyd and Zoubin Ghahramani. Statistical model criticism using kernel two sample tests. In
Advances in Neural Information Processing Systems, pages 829?837, 2015.
[33] Mingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation
networks. In ICML, 2017.
[34] Fredrik Manne and Rob Bisseling. A parallel approximation algorithm for the weighted maximum matching
problem. Parallel Processing and Applied Mathematics, pages 708?717, 2008.
[35] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint
arXiv:1610.03483, 2016.
[36] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in
natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised
feature learning, volume 2011, page 5, 2011.
[37] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In
ICML, 2016.
[38] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[39] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[40] Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover?s distance as a metric for image
retrieval. International journal of computer vision, 40(2):99?121, 2000.
[41] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new
domains. Computer Vision?ECCV 2010, pages 213?226, 2010.
[42] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved
techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234?2242,
2016.
[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[44] Le Song, Arthur Gretton, Danny Bickson, Yucheng Low, and Carlos Guestrin. Kernel belief propagation.
In International Conference on Artificial Intelligence and Statistics, pages 707?715, 2011.
[45] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet.
Non-parametric estimation of integral probability metrics. In Information Theory Proceedings (ISIT), 2010
IEEE International Symposium on, pages 1428?1432. IEEE, 2010.
[46] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of
its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26?31, 2012.
[47] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion:
Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
[48] VS Varadarajan. Weak convergence of measures on separable metric spaces. Sankhyā: The Indian Journal of Statistics (1933-1960), 19(1/2):15–22, 1958.
[49] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
[50] Xiaojun Wan. A novel document similarity measure based on earth mover?s distance. Information Sciences,
177(18):3718?3730, 2007.
[51] Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[52] Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, and Susanne Saminger-Platz.
Central moment discrepancy (cmd) for domain-invariant representation learning. In ICLR, 2017.
A Image generation results on MNIST
Figure 6: MNIST image generation. Top: fc, middle: conv, bottom: ae. Left: MMD, right: PMD.
Figure 7: Interpolation in the representation space using the convolutional generator. The first
column shows the randomly selected points and the remaining columns show the interpolations between them.
B Adversarial learning of distances
From Fig. 4 we can see that the images generated by PMD are more blurry than those generated by
generative adversarial networks (GANs) [16]. This is because the pixel-wise distance d(x, y) =
‖x − y‖₁ is not a good distance between natural images. If we translate an image by one pixel, the
pixel-wise distance between the two images will be much greater than zero, yet the two images have
identical semantic meaning, so the ideal distance between them should be close to zero.

Two recent works, MMD GAN [29] and Cramer GAN [4], proposed to solve this problem by
adversarially learning the distance between images. Inspired by these works, we present PMD GAN,
which applies PMD to generative modeling with an adversarially learned image distance. Following the
discussion in Sec. 4.2, we formulate deep generative modeling as minimizing the PMD
between a model distribution p_{X;θ}(x) and a data distribution p_Y(y). We choose the image distance
d(x, y) = √‖F(x) − F(y)‖₁ on the feature space defined by a feature extractor F. Following
MMD GAN, we also define a decoder D such that the reconstruction error between x and D(F (x))
is minimized, to approximately enforce F as a bijection. The overall objective is defined as
    min_θ max_F min_M { (1/N) ∑_{i=1}^N d(F(x_i), F(y_{M_i})) − (λ/N) ∑_{i=1}^N [ (x_i − D(F(x_i)))² + (y_i − D(F(y_i)))² ] },
where the feature extractor F wants to maximize the PMD while minimizing the reconstruction error,
and the generator p_{X;θ} wants to minimize the PMD.
We test the proposed approach on the CIFAR10 dataset [25]. We implement our model in TensorFlow [1]
and make our implementation similar to the PyTorch implementation of MMD GAN. We
compare our model with MMD GAN, which uses a mixture of RBF kernels with bandwidths
[1, 2, 4, 8, 16]. Both models use a batch size of 64 and the RMSProp [46] optimizer with a 5 × 10⁻⁵
learning rate. We clip the weights to the range [−0.01, 0.01] following Wasserstein GAN [3]. The
auto-encoder regularization parameter λ is set to 8. We run PMD GAN, MMD GAN and Wasserstein
GAN for 100 epochs, and the results are shown in Fig. 8. The results confirm that with an adversarially
learned distance, PMD can generate sharp images.
Figure 8: Image generation with adversarially learned distance: (a) MMD, (b) PMD, (c) WGAN.
Inception scores [42] are 4.50 for all three models.
Table 2: Mean and logarithm of standard deviation of the Gaussian mixture experiment.

          | μ1           | μ2           | log σ1             | log σ2
  Data    | (8.34, 14.4) | (0, 6.05)    | diag(0, 0)         | diag(0, 0)
  Learned | (8.17, 14.3) | (0.28, 6.30) | diag(-5.38, -5.53) | diag(-3.25, -4.16)
C Handling multimodal distributions
It is an interesting question whether particle-based methods (PMD or MMD) can handle
distributions with more modes than the number of particles used. The answer is yes. We discuss a
particular Gaussian mixture generation example. In this task, we have an unknown data distribution,
which is a Gaussian mixture with two mixing components N(μ1, σ1²) and N(μ2, σ2²).
We want to learn its parameters μ1, μ2, σ1 and σ2 by minimizing the distance between the model
distribution and the data distribution, using only one particle per gradient step. MMD can still
learn the distribution because it is unbiased [17]. Empirically, PMD can also estimate the mean
and variance, using the distance d(x, y) = −exp(−‖x − y‖₁/σ), where σ is a bandwidth. Table 2
shows the true model parameters and the parameters learned by PMD. Even with N = 1, PMD still
estimates the means fairly well, despite underestimating the variances.
Scalable Planning with Tensorflow for Hybrid
Nonlinear Domains
Ga Wu
Buser Say
Scott Sanner
Department of Mechanical & Industrial Engineering, University of Toronto, Canada
email: {wuga,bsay,ssanner}@mie.utoronto.ca
Abstract
Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent optimization on
GPUs, we ask in this paper whether symbolic gradient optimization tools such as
Tensorflow can be effective for planning in hybrid (mixed discrete and continuous)
nonlinear domains with high dimensional state and action spaces? To this end, we
demonstrate that hybrid planning with Tensorflow and RMSProp gradient descent
is competitive with mixed integer linear program (MILP) based optimization on
piecewise linear planning domains (where we can compute optimal solutions)
and substantially outperforms state-of-the-art interior point methods for nonlinear
planning domains. Furthermore, we remark that Tensorflow is highly scalable,
converging to a strong plan on a large-scale concurrent domain with a total of
576,000 continuous action parameters distributed over a horizon of 96 time steps
and 100 parallel instances in only 4 minutes. We provide a number of insights that
clarify such strong performance including observations that despite long horizons,
RMSProp avoids both the vanishing and exploding gradient problems. Together
these results suggest a new frontier for highly scalable planning in nonlinear hybrid
domains by leveraging GPUs and the power of recent advances in gradient descent
with highly optimized toolkits like Tensorflow.
1 Introduction
Many real-world hybrid (mixed discrete continuous) planning problems such as Reservoir Control [Yeh, 1985], Heating, Ventilation and Air Conditioning (HVAC) [Erickson et al., 2009; Agarwal
et al., 2010], and Navigation [Faulwasser and Findeisen, 2009] have highly nonlinear transition and
(possibly nonlinear) reward functions to optimize. Unfortunately, existing state-of-the-art hybrid
planners [Ivankovic et al., 2014; Löhr et al., 2012; Coles et al., 2013; Piotrowski et al., 2016] are not
compatible with arbitrary nonlinear transition and reward models. While HD-MILP-PLAN [Say et
al., 2017] supports arbitrary nonlinear transition and reward models, it also assumes the availability of
data to learn the state-transitions. Monte Carlo Tree Search (MCTS) methods [Coulom, 2006; Kocsis
and Szepesvári, 2006; Keller and Helmert, 2013] including AlphaGo [Silver et al., 2016] that can use
any (nonlinear) black box model of transition dynamics do not inherently work with continuous action
spaces due to the infinite branching factor. While MCTS with continuous action extensions such as
HOOT [Weinstein and Littman, 2012] have been proposed, their continuous partitioning methods do
not scale to high-dimensional continuous action spaces (for example, 100?s or 1,000?s of dimensions
as used in this paper). Finally, offline model-free reinforcement learning (for example, Q-learning)
with function approximation [Sutton and Barto, 1998; Szepesvári, 2010] and deep extensions [Mnih
et al., 2013] do not require any knowledge of the (nonlinear) transition model or reward, but they also
do not directly apply to domains with high-dimensional continuous action spaces. That is, offline
Figure 1: The evolution of RMSProp gradient descent based Tensorflow planning in a two-dimensional
Navigation domain (panels shown after 10, 20, 40, 80, 160, and 320 epochs), with nested central
rectangles indicating nonlinearly increasing resistance to robot movement. (top) In initial RMSProp
epochs, the plan evolves directly towards the goal shown as a star. (bottom) As later epochs of RMSProp
descend the objective cost surface, the fastest path evolves to avoid the central obstacle entirely.
learning methods like Q-learning [Watkins and Dayan, 1992] require action maximization for every
update, but in high-dimensional continuous action spaces such nonlinear function maximization is
non-convex and computationally intractable at the scale of millions or billions of updates.
To address the above scalability and expressivity limitations of existing methods, we turn to Tensorflow [Abadi et al., 2015], which is a symbolic computation platform used in the machine learning
community for deep learning due to its compilation of complex layered symbolic functions into a
representation amenable to fast GPU-based reverse-mode automatic differentiation [Linnainmaa,
1970] for gradient-based optimization. Given recent results in gradient descent optimization with deep
learning that demonstrate the ability to effectively optimize high-dimensional non-convex functions,
we ask whether Tensorflow can be effective for planning in discrete time, hybrid (mixed discrete and
continuous) nonlinear domains with high dimensional state and action spaces?
Our results answer this question affirmatively, where we demonstrate that hybrid planning with
Tensorflow and RMSProp gradient descent [Tieleman and Hinton, 2012] is surprisingly effective at
planning in complex hybrid nonlinear domains.¹ As evidence, we reference Figure 1, where we show
Tensorflow with RMSProp efficiently finding and optimizing a least-cost path in a two-dimensional
nonlinear Navigation domain. In general, Tensorflow with RMSProp planning results are competitive
with optimal MILP-based optimization on piecewise linear planning domains. The performance
directly extends to nonlinear domains where Tensorflow with RMSProp substantially outperforms
interior point methods for nonlinear function optimization. Furthermore, we remark that Tensorflow
converges to a strong plan on a large-scale concurrent domain with 576,000 continuous actions
distributed over a horizon of 96 time steps and 100 parallel instances in 4 minutes.
To explain such excellent results, we note that gradient descent algorithms such as RMSProp are
highly effective for the non-convex function optimization that occurs in deep learning. Further, we
provide an analysis of many transition functions in planning domains which suggests that gradient descent
on these domains will not suffer from either the vanishing or exploding gradient problems, and hence
will provide a strong signal for optimization over long horizons. Together these results suggest a new
frontier for highly scalable planning in nonlinear hybrid domains by leveraging GPUs and the power
of recent advances in gradient descent with Tensorflow and related toolkits.
2 Hybrid Nonlinear Planning via Tensorflow
In this section, we present a general framework of hybrid nonlinear planning along with a compilation
of the objective in this framework to a symbolic recurrent neural network (RNN) architecture with
action parameter inputs directly amenable to optimization with the Tensorflow toolkit.
2.1 Hybrid Planning
A hybrid planning problem is a tuple $\langle S, A, T, R, C\rangle$ with $S$ denoting the (infinite) set of hybrid
states, with a state represented as a mixed discrete and continuous vector, $A$ the set of actions bounded
by action constraints $C$, $R: S \times A \to \mathbb{R}$ the reward function, and $T: S \times A \to S$ the transition
¹The approach in this paper is implemented in Tensorflow, but it is not specific to Tensorflow. While "scalable
hybrid planning with symbolic representations, auto-differentiation, and modern gradient descent methods for
non-convex functions implemented on a GPU" would make for a more general description of our contributions,
we felt that "Tensorflow" succinctly imparts at least the spirit of all of these points in a single term.
Figure 2: A recurrent neural network (RNN) encoding of a hybrid planning problem: a single-step
reward and transition function of a discrete time decision-process are embedded in an RNN cell.
RNN inputs correspond to the starting state and action; the outputs correspond to reward and next
state. Rewards are additively accumulated in $V$. Since the entire specification of $V$ is a symbolic
representation in Tensorflow with action parameters as inputs, the sequential action plan can be
directly optimized via gradient descent using the auto-differentiated representation of $V$.
function. There is also an initial state $s_0$, and the planning objective is to maximize the cumulative
reward over a decision horizon of $H$ time steps. Before proceeding, we outline the necessary notation:
• $s_t$: mixed discrete, continuous state vector at time $t$.
• $a_t$: mixed discrete, continuous action vector at time $t$.
• $R(s_t, a_t)$: a non-positive reward function (i.e., negated costs).
• $T(s_t, a_t)$: a (nonlinear) transition function.
• $V = \sum_{t=1}^{H} r_t = \sum_{t=0}^{H-1} R(s_t, a_t)$: cumulative reward value to maximize.
In general due to the stochastic nature of gradient descent, we will run a number of planning domain
instances i in parallel (to take the best performing plan over all instances), so we additionally define
instance-specific states and actions:
• $s^i_{tj}$: the $j$th dimension of the state vector of problem instance $i$ at time $t$.
• $a^i_{tj}$: the $j$th dimension of the action vector of problem instance $i$ at time $t$.
2.2
Planning through Backpropagation
Backpropagation [Rumelhart et al.] is a standard method for optimizing the parameters of large multilayer neural networks via gradient descent. Using the chain rule of derivatives, backpropagation
propagates the derivative of the output error of a neural network back to each of its parameters in a
single linear-time pass in the size of the network using what is known as reverse-mode automatic
differentiation [Linnainmaa, 1970]. Despite its relative efficiency, backpropagation in large-scale
(deep) neural networks is still computationally expensive, and it is only with the advent of recent
GPU-based symbolic toolkits like Tensorflow [Abadi et al., 2015] that recent advances in training
very large deep neural networks have become possible.
In this paper, we reverse the idea of training the network's parameters given fixed inputs: we instead
optimize the inputs (i.e., actions) subject to fixed parameters (effectively the transition and reward
parameterization assumed a priori known in planning). That is, as shown in Figure 2, given transition
$T(s_t, a_t)$ and reward function $R(s_t, a_t)$, we want to optimize the input $a_t$ for all $t$ to maximize the
accumulated reward value $V$. Specifically, we want to optimize all actions $a = (a_1, \dots, a_{H-1})$ w.r.t.
a planning loss $L$ (defined shortly) that we minimize via the following gradient update schema
$$a' = a - \eta \frac{\partial L}{\partial a}, \quad (1)$$
where $\eta$ is the optimization rate and the partial derivatives comprising the gradient-based optimization
in problem instance $i$ are computed as
$$\frac{\partial L}{\partial a^i_{tj}} = \frac{\partial L}{\partial L_i}\,\frac{\partial L_i}{\partial a^i_{tj}} = \frac{\partial L}{\partial L_i}\,\frac{\partial L_i}{\partial s^i_{t+1}}\,\frac{\partial s^i_{t+1}}{\partial a^i_{tj}} = \frac{\partial L}{\partial L_i}\,\frac{\partial s^i_{t+1}}{\partial a^i_{tj}} \sum_{\tau=t+2}^{T}\Bigg[\frac{\partial L_i}{\partial r^i_{\tau}}\,\frac{\partial r^i_{\tau}}{\partial s^i_{\tau}} \prod_{\varsigma=t+2}^{\tau}\frac{\partial s^i_{\varsigma}}{\partial s^i_{\varsigma-1}}\Bigg]. \quad (2)$$
We must now connect our planning objective to a standard Tensorflow loss function. First, however,
let us assume that we have $N$ structurally identical instances $i$ of our planning domain given in
Figure 2, each with objective value $V_i$; then let us define $\mathbf{V} = (\dots, V_i, \dots)$. In Tensorflow, we
choose Mean Squared Error (MSE), which given two continuous vectors $Y$ and $\hat{Y}$ is defined as
$\mathrm{MSE}(Y, \hat{Y}) = \frac{1}{N}\|\hat{Y} - Y\|^2$. We specifically choose to minimize $L = \mathrm{MSE}(\mathbf{0}, \mathbf{V})$, with inputs
of the constant vector $\mathbf{0}$ and the value vector $\mathbf{V}$, in order to maximize the value of each instance $i$; we remark
that here we want to independently maximize each non-positive $V_i$, which is equivalent to minimizing each positive $V_i^2$,
and this is exactly what MSE achieves. We will further explain the use of MSE in a moment, but first we digress
to explain why we need to solve multiple problem instances $i$.
Since both transition and reward functions are not assumed to be convex, optimization on a domain
with such dynamics could result in a local minimum. To mitigate this problem, we use randomly
initialized actions in a batch optimization: we optimize multiple mutually independent planning
problem instances i simultaneously since the GPU can exploit their parallel computation, and then
select the best-performing action sequence among the independent simultaneously solved problem
instances. MSE then has dual effects of optimizing each problem instance i independently and
providing fast convergence (faster than optimizing $V$ directly). We remark that simply defining the
objective $V$ and all state variables in terms of predecessor state and action variables
via the transition dynamics (back to the known initial state constants) is enough for Tensorflow to
build the symbolic directed acyclic graph (DAG) representing the objective and take its gradient with
respect to all free action parameters, as shown in (2), using reverse-mode automatic differentiation.
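To make the compilation concrete, the following is a minimal sketch (not the authors' code) of planning by backpropagation: it unrolls a toy transition and reward over the horizon, forms the MSE(0, V) loss over a batch of parallel instances, and updates the action tensor with RMSProp. It is written against the TensorFlow 2.x eager API rather than the 0.12 graph API used in the paper, and the `transition`, `reward`, goal, and dimensions are placeholder assumptions.

```python
# A minimal sketch (not the authors' code) of planning by backpropagation.
import tensorflow as tf

N, H, ACT_DIM = 32, 20, 2                    # parallel instances, horizon, action dims
goal = tf.constant([8.0, 8.0])

def transition(s, a):                        # placeholder dynamics: s' = clip(s + a)
    return tf.clip_by_value(s + a, 0.0, 10.0)

def reward(s, a):                            # negated Manhattan distance to the goal
    return -tf.reduce_sum(tf.abs(s - goal), axis=-1)

actions = tf.Variable(tf.random.uniform([N, H, ACT_DIM], -0.1, 0.1))
opt = tf.keras.optimizers.RMSprop(learning_rate=0.01)

for epoch in range(200):
    with tf.GradientTape() as tape:
        s = tf.zeros([N, 2])                 # every instance starts at the origin
        V = tf.zeros([N])                    # per-instance cumulative reward
        for t in range(H):                   # unroll the RNN-like rollout of Figure 2
            a = actions[:, t, :]
            V = V + reward(s, a)
            s = transition(s, a)
        loss = tf.reduce_mean(V ** 2)        # MSE(0, V): pushes each non-positive V_i up
    grads = tape.gradient(loss, [actions])
    opt.apply_gradients(zip(grads, [actions]))
    actions.assign(tf.clip_by_value(actions, -1.0, 1.0))  # projection step (Section 2.4)

best_instance = int(tf.argmax(V))            # keep the best of the N parallel plans
```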
2.3 Planning over Long Horizons
The Tensorflow compilation of a nonlinear planning problem reflects the same structure as a recurrent
neural network (RNN) that is commonly used in deep learning. The connection here is not superficial,
since a longstanding difficulty with training RNNs lies in the vanishing gradient problem: multiplying long sequences of gradients in the chain rule usually renders them extremely small and
makes them irrelevant for weight updates, especially when using nonlinear transfer functions such
as a sigmoid. However, in hybrid planning problems, continuous state updates often take the form
$s^i_{(t+1)j} = s^i_{tj} + \Delta$ for some $\Delta$ that is a function of the state and action at time $t$. Critically, we note that
the transfer function here is linear in $s^i_{tj}$, which is the largest determiner of $s^i_{(t+1)j}$, hence avoiding
vanishing gradients.
In addition, a gradient can explode with the chain rule through backpropagation if the elements of
the Jacobian matrix of state transitions are too large. In this case, if the planning horizon is large
enough, a simple Stochastic Gradient Descent (SGD) optimizer may suffer from overshooting the
optimum and never converge (as our experiments appear to demonstrate for SGD). The RMSProp
optimization algorithm has a significant advantage for backpropagation-based planning because of
its ability to perform gradient normalization that avoids exploding gradients and additionally deals
with piecewise gradients [Balduzzi et al., 2016] that arise naturally as conditional transitions in
many nonlinear domains (e.g., the Navigation domain of Figure 1 has different piecewise transition
dynamics depending on the starting region). Specifically, instead of naively updating action aitj
through equation 1, RMSProp maintains a decaying root mean squared gradient value G for each
variable, which averages over squared gradients of previous epochs
$$G'_{a^i_{tj}} = 0.9\,G_{a^i_{tj}} + 0.1\left(\frac{\partial L}{\partial a^i_{tj}}\right)^2, \quad (3)$$
and updates each action variable through
$$a'^{\,i}_{tj} = a^i_{tj} - \frac{\eta}{\sqrt{G_{a^i_{tj}} + \epsilon}}\,\frac{\partial L}{\partial a^i_{tj}}. \quad (4)$$
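For concreteness, here is a small self-contained NumPy sketch of the update in Eqs. (3)-(4); the decay constants 0.9/0.1 follow Eq. (3), and `eps` is the usual small stabilizer assumed for numerical safety.

```python
# A self-contained NumPy sketch of the RMSProp update in Eqs. (3)-(4).
import numpy as np

def rmsprop_step(a, grad, G, eta=0.01, eps=1e-8):
    G = 0.9 * G + 0.1 * grad ** 2            # Eq. (3): decaying mean of squared gradients
    a = a - eta * grad / np.sqrt(G + eps)    # Eq. (4): normalized gradient step
    return a, G
```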
With this normalization, the effective gradient step stays relatively small and consistent over iterations. Although the Adagrad [Duchi
et al., 2011] and Adadelta [Zeiler, 2012] optimization algorithms have similar mechanisms, their
learning rates can quickly shrink to an extremely small value when encountering large gradients. In
support of these observations, we note the superior performance of RMSProp in Section 3.
2.4 Handling Constrained and Discrete Actions
In most hybrid planning problems, there exist natural range constraints for actions. To handle those
constraints, we use projected stochastic gradient descent. Projected stochastic gradient descent
(PSGD) is a well-known descent method that can handle constrained optimization problems by
projecting the parameters (actions) into a feasible range after each gradient update. To this end, we
clip all actions to their feasible range after each epoch of gradient descent.
For planning problems with discrete actions, we use a one-hot encoding for optimization purposes
and then use a {0, 1} projection for the maximal action to feed into the forward propagation. In this
paper, we focus on constrained continuous actions which are representative of many hybrid nonlinear
planning problems in the literature.
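As a sketch of the two projections described above (the box bounds and the one-hot relaxation are illustrative assumptions, not the authors' code):

```python
# A sketch of the projection steps used with PSGD.
import numpy as np

def project_box(actions, low, high):
    # PSGD projection: clip actions back into their feasible range after an update.
    return np.clip(actions, low, high)

def project_one_hot(relaxed):
    # {0,1} projection of a one-hot relaxation: keep only the maximal action.
    hard = np.zeros_like(relaxed)
    hard[np.argmax(relaxed)] = 1.0
    return hard
```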
3 Experiments
In this section, we introduce our three benchmark domains and then validate Tensorflow planning
performance in the following steps. (1) We evaluate the optimality of Tensorflow backpropagation
planning on linear and bilinear domains through comparison with the optimal solution given by
Mixed Integer Linear Programming (MILP). (2) We evaluate the performance of Tensorflow
backpropagation planning on nonlinear domains (that MILPs cannot handle) through comparison
with the Matlab-based interior point nonlinear solver FMINCON. (3) We test the scalability of Tensorflow
backpropagation planning on the largest instances of each domain. (4) We investigate the impact of
several popular gradient descent optimizers on planning performance. (5) We evaluate optimization
of the learning rate. (6) We investigate how other state-of-the-art hybrid planners perform.
3.1 Domain Descriptions
Navigation: The Navigation domain is designed to test the optimization ability of Tensorflow
in a relatively small environment that supports transitions of different complexity. Navigation has a
two-dimensional state for the agent location $s$ and a two-dimensional action $a$. Both the state and
action spaces are continuous, and each is constrained by maximum and minimum boundaries.
The objective of the domain is for an agent to move to the goal state as soon as possible (cf. Figure 1).
Therefore, we compute the reward based on the Manhattan distance from the agent to the goal state at
each time step as $R(s_t, a_t) = -\|s_t - g\|_1$, where $g$ is the goal state.
We designed three different transitions; from left to right: nonlinear (5), bilinear (6), and linear (7).

Nonlinear:
$$d_t = \|s_t - z\|_2, \qquad \lambda = \frac{2}{1 + \exp(-2 d_t)} - 0.99. \quad (5)$$

Bilinear:
$$d_t = \|s_t - z\|_1, \qquad \lambda = \begin{cases} d_t/4, & d_t < 4 \\ 1, & d_t \ge 4. \end{cases} \quad (6)$$

Linear:
$$d_t = \sum_{j=1}^{2} |s_{tj} - z_j|, \qquad \lambda = \begin{cases} 1, & d_t \ge 4 \\ 0.8, & 3.6 \le d_t < 4 \\ 0.6, & 2.4 \le d_t < 3.6 \\ 0.4, & 1.6 \le d_t < 2.4 \\ 0.2, & 0.8 \le d_t < 1.6 \\ 0.05, & d_t < 0.8. \end{cases} \quad (7)$$

In all three cases, the proposed next state is $p = s_t + \lambda a_t$ and the transition is $T(s_t, a_t) = \max(l, \min(u, p))$, i.e., $p$ clipped to the domain box.
The nonlinear transition has a velocity reduction zone based on the Euclidean distance to the center $z$.
Here, $d_t$ is the distance from the deceleration zone center $z$, $p$ is the proposed next state, $\lambda$ is the velocity
reduction factor, and $u, l$ are the upper and lower boundaries of the domain, respectively.
The bilinear domain is designed to compare with MILP where domain discretization is possible. In
this setting, we evaluate the efficacy of approximately discretizing bilinear planning problems into
MILPs. Equation 6 shows the bilinear transition function.
The linear domain is the discretized version of the bilinear domain used for MILP optimization. We
also test Tensorflow on this domain to see the optimality of the Tensorflow solution. Equation 7
shows the linear transition function.
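The nonlinear variant of Eq. (5) is small enough to state directly in code. The following NumPy sketch (our illustration, not the authors' implementation) applies the sigmoid velocity-reduction factor and clips the proposed state to the domain box:

```python
# A NumPy sketch of the nonlinear Navigation step, Eq. (5).
import numpy as np

def navigation_step(s, a, z, l, u):
    d = np.linalg.norm(s - z)                     # Euclidean distance to the center z
    lam = 2.0 / (1.0 + np.exp(-2.0 * d)) - 0.99   # velocity reduction factor
    p = s + lam * a                               # proposed next state
    return np.maximum(l, np.minimum(u, p))        # clip to the domain boundaries

def navigation_reward(s, g):
    return -np.sum(np.abs(s - g))                 # negated Manhattan distance to goal g
```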
Reservoir Control: Reservoir Control [Yeh, 1985] is a system to control multiple connected
reservoirs. Each of the reservoirs in the system has a single state $s_j \in \mathbb{R}$ that denotes the water level
of reservoir $j$ and a corresponding action to permit a flow $a_j \in [0, s_j]$ from the reservoir to the
next downstream reservoir.
The objective of the domain is to maintain the target water level of each reservoir in a safe range and
as close to half of its capacity as possible. Therefore, we compute the reward through:
$$c_j = \begin{cases} 0, & L_j \le s_j \le U_j \\ -5, & s_j < L_j \\ -100, & s_j > U_j, \end{cases} \qquad R(s_t, a_t) = -\Big\| c - 0.1 \cdot \Big| \frac{(u - l)}{2} - s_t \Big| \,\Big\|_1,$$
where $c_j$ is the cost value of reservoir $j$ that penalizes water levels outside a safe range.
In this domain, we introduce two settings, Nonlinear and Linear. For the nonlinear domain, the
nonlinearity is due to the water loss $e_j$ for each reservoir $j$, which includes water usage and evaporation. The
transition function is
$$e_t = 0.5 \cdot s_t \odot \sin(s_t / m), \qquad T(s_t, a_t) = s_t + r_t - e_t - a_t + a_t \Gamma, \quad (8)$$
where $\odot$ represents an elementwise product, $r$ is a rain quantity parameter, $m$ is the maximum
capacity of the largest tank, and $\Gamma$ is a lower triangular adjacency matrix that indicates connections to
upstream reservoirs.
For the linear domain, we only replace the nonlinear water loss with a linear function:
$$e_t = 0.1 \cdot s_t, \qquad T(s_t, a_t) = s_t + r_t - e_t - a_t + a_t \Gamma. \quad (9)$$
Unlike Navigation, we do not limit the state dimension of the whole system to two dimensions. In
the experiments, we use a setting with a network of 20 reservoirs.
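A sketch of the nonlinear Reservoir step of Eq. (8); here `Gamma` is assumed to be the lower-triangular adjacency matrix with `Gamma[j, k] = 1` if reservoir $j$ feeds reservoir $k$:

```python
# A sketch of the nonlinear Reservoir step, Eq. (8).
import numpy as np

def reservoir_step(s, a, r, m, Gamma):
    e = 0.5 * s * np.sin(s / m)        # nonlinear water loss (usage and evaporation)
    return s + r - e - a + a @ Gamma   # rain in, loss and release out, upstream inflow in
```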
HVAC: Heating, Ventilation, and Air Conditioning [Erickson et al., 2009; Agarwal et al., 2010] is
a centralized control problem, with concurrent control of multiple rooms and multiple connected
buildings. For each room $j$ there is a state variable $s_j$ denoting the temperature and an action $a_j$ for
sending a specified volume of heated air to room $j$ via vent actuation.
The objective of the domain is to maintain the temperature of each room in a comfortable range while
consuming as little energy as possible in doing so. Therefore, we compute the reward through:
$$d_t = \Big| \frac{(u - l)}{2} - s_t \Big|, \qquad e_t = a_t \cdot C, \qquad R(s_t, a_t) = -\|e_t + d_t\|_1,$$
where $C$ is the unit electricity cost.
Since thermal models for HVAC are inherently nonlinear, we only present one version, with a nonlinear
transition function:
$$\theta_t = a_t \odot (F^{vent} - s_t), \qquad \phi_t = \Big(s_t Q - s_t \sum_{j=1}^{J} q_j\Big) / w_q,$$
$$\psi_t = (F_t^{out} - s_t) \odot o / w_o, \qquad \omega_t = (F_t^{hall} - s_t) \odot h / w_h,$$
$$T(s_t, a_t) = s_t + \delta \odot (\theta_t + \phi_t + \psi_t + \omega_t), \quad (10)$$
where $F^{vent}$, $F_t^{out}$ and $F_t^{hall}$ are the temperatures of the room vent, outside, and hallway, respectively;
$Q$, $o$ and $h$ are respectively the adjacency matrix of rooms, the adjacency vector of outside areas, and the
adjacency vector of hallways; and $w_q$, $w_o$ and $w_h$ are the thermal resistances between rooms, with the
outside walls, and with the hallway, respectively.
In the experiments, we work with a building layout with five floors and 12 rooms on each floor, for a
total of 60 rooms. For scalability testing, we apply batched backpropagation on 100 instances of this
domain simultaneously, which requires 576,000 actions to be planned concurrently.
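A sketch of the HVAC step of Eq. (10) with its four heat-exchange terms; the argument names mirror the symbols above, and the matrix/vector shapes are assumptions for illustration only:

```python
# A sketch of the nonlinear HVAC step, Eq. (10).
import numpy as np

def hvac_step(s, a, F_vent, F_out, F_hall, Q, o, h, wq, wo, wh, delta):
    theta = a * (F_vent - s)                    # heat carried in by vented air
    phi = (s @ Q - s * Q.sum(axis=1)) / wq      # exchange with adjacent rooms
    psi = (F_out - s) * o / wo                  # exchange through outside walls
    omega = (F_hall - s) * h / wh               # exchange with the hallway
    return s + delta * (theta + phi + psi + omega)
```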
3.2 Planning Performance
In this section, we investigate the performance of Tensorflow optimization through comparison with
the MILP on linear domains and with Matlab's fmincon nonlinear interior point solver on nonlinear
domains. We ran our experiments on an Ubuntu Linux system with one E5-1620 v4 CPU, 16GB RAM,
and one GTX1080 GPU. The Tensorflow version is beta 0.12.1, the Matlab version is R2016b, and
the MILP version is IBM ILOG CPLEX 12.6.3.
3.2.1 Performance in Linear Domains
[Figure 3: three bar-chart panels of total reward vs. horizon in {30, 60, 120} for Heuristic, MILP, and TF: (a) Navigation Linear, (b) Navigation Bilinear, (c) Reservoir Linear.]
Figure 3: The total reward comparison (values are negative, lower bars are better) among Tensorflow
(Red), MILP optimization guided planning (Green), and a domain-specific heuristic policy (Blue). Error
bars show the standard deviation across the parallel Tensorflow instances; most are too small to be visible.
The heuristic policy is a manually designed baseline solution. In the linear domains (a) and (c), the
MILP is optimal, and Tensorflow is near-optimal for five out of six domains.
In Figure 3, we show that Tensorflow backpropagation results in lower cost plans than domain-specific
heuristic policies, and the overall cost is close to the MILP-optimal solution in five of six linear
domains.
While Tensorflow backpropagation planning generally shows strong performance, when comparing
the performance of Tensorflow on the bilinear and linear domains of Navigation to the MILP solution
(recall that the linear domain was discretized from the bilinear case), we notice that Tensorflow does
much better relative to the MILP on the bilinear domain than on the discretized linear domain. The
reason for this is quite simple: gradient optimization of smooth bilinear functions is actually much
easier for Tensorflow than of the piecewise linear discretized version, whose large piecewise steps
make it hard for RMSProp to get a consistent and smooth gradient signal. We additionally note that
the standard deviation of the linear Navigation domain is much larger than the others. This is because
the piecewise constant transition function computing the speed reduction factor $\lambda$ provides a flat loss
surface with no curvature to aid gradient descent methods, leading to high variation depending on the
initial random starting point in the instance.
3.2.2 Performance in Nonlinear Domains
In figure 4, we show Tensorflow backpropagation planning always achieves the best performance
compared to the heuristic solution and the Matlab nonlinear optimizer fmincon. For relatively simple
domains like Navigation, we see the fmincon nonlinear solver provides a very competitive solution,
while, for the complex domain HVAC with a large concurrent action space, the fmincon solver shows
a complete failure at solving the problem in the given time period.
In figure 5(a), Tensorflow backpropagation planning shows 16 times faster optimization in the first
15s, reaching a value close to the result given by fmincon at 4 minutes.
[Figure 4: three bar-chart panels of total reward vs. horizon for Heuristic, FMC (fmincon), and TF: (a) Navigation Nonlinear, (b) Reservoir Nonlinear, (c) HVAC Nonlinear.]
Figure 4: The total reward comparison (values are negative, lower bars are better) among Tensorflow
backpropagation planning (Red), Matlab nonlinear solver fmincon guided planning (Purple) and
domain-specific heuristic policy (Blue). We gathered the results after 16 minutes of optimization time
to allow all algorithms to converge to their best solution.
[Figure 5: total reward vs. wall-clock time (15s to 16m, log scale) for TF and FMC: (a) Reservoir, Horizon 60; (b) Reservoir, Horizon 120.]
Figure 5: Optimization comparison between Tensorflow RMSProp gradient planning (Green) and
Matlab nonlinear solver fmincon interior point optimization planning (Orange) on Nonlinear Reservoir
Domains with Horizon (a) 60 and (b) 120. As a function of the logarithmic time x-axis, Tensorflow is
substantially faster and more optimal than fmincon.
In figure 5(b), the optimization speed of Tensorflow shows it to be hundreds of times faster than the
fmincon nonlinear solver at achieving the same value (if fmincon ever reaches it). These remarkable
results demonstrate the power of the fast parallel GPU computation of the Tensorflow framework.
3.2.3 Scalability
In table 1, we show the scalability of Tensorflow backpropagation planning via the running times
required to converge for different domains. The results demonstrate the extreme efficiency with
which Tensorflow can converge on exceptionally large nonlinear hybrid planning domains.
Domain | Dim | Horizon | Batch | Actions | Time
Nav.   |   2 |     120 |   100 |  24,000 | < 1 min
Res.   |  20 |     120 |   100 | 240,000 | 4 min
HVAC   |  60 |      96 |   100 | 576,000 | 4 min
Table 1: Timing evaluation of the largest instances of the three domains we tested. All of these tests
were performed on the nonlinear versions of the respectively named domains.
3.2.4 Optimization Methods
In this experiment, we investigate the effects of different backpropagation optimizers. In figure 6(a),
we show that the RMSProp optimizer provides exceptionally fast convergence among the five standard
optimizers of Tensorflow. This observation reflects the previous analysis and discussion concerning
equation (4) that RMSProp manages to avoid exploding gradients. As mentioned, although Adagrad
and Adadelta have similar mechanisms, their normalization methods may cause vanishing gradients
after several epochs, which corresponds to our observation of nearly flat curves for these methods.
This is a strong indicator that exploding gradients are a significant concern for hybrid planning with
gradient descent and that RMSProp performs well despite this well-known potential problem for
gradients over long horizons.
[Figure 6: total reward vs. epoch (0 to 4000) in the HVAC domain: (a) SGD, Adagrad, Adadelta, Adam, and RMSProp optimizers; (b) RMSProp with optimization rates 1, 0.1, 0.01, and 0.001.]
Figure 6: (a) Comparison of Tensorflow gradient methods in the HVAC domain. All of these
optimizers use the same learning rate of 0.001. (b) Optimization learning rate comparison of
Tensorflow with the RMSProp optimizer on the HVAC domain. The optimization rate 0.1 (Orange) gave
the fastest initial convergence speed but was not able to reach the best score that optimization rate
0.001 (Blue) found.
3.2.5 Optimization Rate
In figure 6(b), we show that the best optimization rate for the HVAC domain is 0.01, since this
rate converges to a near-optimal value extremely fast. The overall trend is that smaller optimization rates have a
better opportunity to reach a better final solution, but can be extremely slow, as shown for
optimization rate 0.001. Hence, while larger optimization rates may overshoot the optima,
rates that are too small may simply converge too slowly for practical use. This suggests a critical
need to tune the optimization rate per planning domain.
3.3 Comparison to State-of-the-art Hybrid Planners
Finally, we discuss and test the scalability of state-of-the-art hybrid planners on our hybrid domains.
We note that neither DiNo [Piotrowski et al., 2016], dReal [Bryce et al., 2015], nor SMTPlan [Cashmore et al., 2016] supports general metric optimization. We ran ENHSP [Scala et al., 2016] on a
much smaller version of the HVAC domain with only 2 rooms over multiple horizon settings. We
found that ENHSP returned a feasible solution to the instance with horizon equal to 2 in 31 seconds,
whereas the rest of the instances with greater horizon settings timed out with an hour limit.
4 Conclusion
We investigated the practical feasibility of using the Tensorflow toolbox to do fast, large-scale
planning in hybrid nonlinear domains. We worked with a direct symbolic (nonlinear) planning domain
compilation to Tensorflow for which we optimized planning actions directly through gradient-based
backpropagation. We then investigated planning over long horizons and suggested that RMSProp
avoids both the vanishing and exploding gradient problems and showed experiments to corroborate
this finding. Our key empirical results demonstrated that Tensorflow with RMSProp is competitive
with MILPs on linear domains (where the optimal solution is known ? indicating near optimality
of Tensorflow and RMSProp for these non-convex functions) and strongly outperforms Matlab?s
state-of-the-art interior point optimizer on nonlinear domains, optimizing up to 576,000 actions in
under 4 minutes. These results suggest a new frontier for highly scalable planning in nonlinear hybrid
domains by leveraging GPUs and the power of recent advances in gradient descent such as RMSProp
with highly optimized toolkits like Tensorflow.
For future work, we plan to further investigate Tensorflow-based planning improvements for domains
with discrete action and state variables as well as difficult domains with only terminal rewards that
provide little gradient signal guidance to the optimizer.
References
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
Yuvraj Agarwal, Bharathan Balaji, Rajesh Gupta, Jacob Lyles, Michael Wei, and Thomas Weng. Occupancy-driven energy management for smart building automation. In Proceedings of the 2nd ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Building, pages 1-6. ACM, 2010.
David Balduzzi, Brian McWilliams, and Tony Butler-Yeoman. Neural Taylor approximations: Convergence and exploration in rectifier networks. arXiv preprint arXiv:1611.02345, 2016.
Daniel Bryce, Sicun Gao, David Musliner, and Robert Goldman. SMT-based nonlinear PDDL+ planning. In 29th AAAI, pages 3247-3253, 2015.
Michael Cashmore, Maria Fox, Derek Long, and Daniele Magazzeni. A compilation of the full PDDL+ language into SMT. In ICAPS, pages 79-87, 2016.
Amanda Jane Coles, Andrew Coles, Maria Fox, and Derek Long. A hybrid LP-RPG heuristic for modelling numeric resource flows in planning. J. Artif. Intell. Res. (JAIR), 46:343-412, 2013.
Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games, pages 72-83. Springer Berlin Heidelberg, 2006.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
Varick L. Erickson, Yiqing Lin, Ankur Kamthe, Rohini Brahme, Alberto E. Cerpa, Michael D. Sohn, and Satish Narayanan. Energy efficient building environment control strategies using real-time occupancy measurements. In Proceedings of the 1st ACM Workshop on Embedded Sensing Systems for Energy-Efficient Buildings (BuildSys 2009), pages 19-24, Berkeley, CA, USA, November 2009. ACM.
Timm Faulwasser and Rolf Findeisen. Nonlinear model predictive path-following control. In Nonlinear Model Predictive Control: Towards New Challenging Applications, Lecture Notes in Control and Information Sciences, pages 335-343. Springer, Berlin, Heidelberg, 2009.
Franc Ivankovic, Patrik Haslum, Sylvie Thiebaux, Vikas Shivashankar, and Dana Nau. Optimal planning with global numerical state constraints. In International Conference on Automated Planning and Scheduling (ICAPS), pages 145-153, Portsmouth, New Hampshire, USA, June 2014.
Thomas Keller and Malte Helmert. Trial-based heuristic tree search for finite horizon MDPs. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling, ICAPS 2013, Rome, Italy, June 10-14, 2013.
Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the 17th European Conference on Machine Learning (ECML-06), pages 282-293, 2006.
Seppo Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, pages 6-7, 1970.
Johannes Löhr, Patrick Eyerich, Thomas Keller, and Bernhard Nebel. A planning based framework for controlling hybrid systems. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling, ICAPS 2012, Atibaia, São Paulo, Brazil, June 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.
Wiktor Mateusz Piotrowski, Maria Fox, Derek Long, Daniele Magazzeni, and Fabio Mercorio. Heuristic planning for hybrid systems. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 4254-4255, 2016.
David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1.
Buser Say, Ga Wu, Yu Qing Zhou, and Scott Sanner. Nonlinear hybrid planning with deep net learned transition models and mixed-integer linear programming. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 750-756, 2017.
Enrico Scala, Patrik Haslum, Sylvie Thiébaux, and Miquel Ramírez. Interval-based relaxation for general numeric planning. In ECAI, pages 655-663, 2016.
David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-503, 2016.
Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998.
Csaba Szepesvári. Algorithms for Reinforcement Learning. Morgan & Claypool, 2010.
Tijmen Tieleman and Geoffrey E. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26-31, 2012.
Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279-292, May 1992.
Ari Weinstein and Michael L. Littman. Bandit-based planning and learning in continuous-action Markov decision processes. In Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling, ICAPS 2012, Atibaia, São Paulo, Brazil, June 2012.
William G. Yeh. Reservoir management and operations models: A state-of-the-art review. Water Resources Research, 21(12):1797-1818, 1985.
Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
6,863 | 7,208 | Boltzmann Exploration Done Right
Nicolò Cesa-Bianchi
Università degli Studi di Milano
Milan, Italy
[email protected]
Claudio Gentile
INRIA Lille - Nord Europe
Villeneuve d'Ascq, France
[email protected]
Gábor Lugosi
ICREA & Universitat Pompeu Fabra
Barcelona, Spain
[email protected]
Gergely Neu
Universitat Pompeu Fabra
Barcelona, Spain
[email protected]
Abstract
Boltzmann exploration is a classic strategy for sequential decision-making under
uncertainty, and is one of the most standard tools in Reinforcement Learning (RL).
Despite its widespread use, there is virtually no theoretical understanding about
the limitations or the actual benefits of this exploration scheme. Does it drive
exploration in a meaningful way? Is it prone to misidentifying the optimal actions
or spending too much time exploring the suboptimal ones? What is the right tuning
for the learning rate? In this paper, we address several of these questions for the
classic setup of stochastic multi-armed bandits. One of our main results is showing
that the Boltzmann exploration strategy with any monotone learning-rate sequence
will induce suboptimal behavior. As a remedy, we offer a simple non-monotone
schedule that guarantees near-optimal performance, albeit only when given prior
access to key problem parameters that are typically not available in practical
situations (like the time horizon $T$ and the suboptimality gap $\Delta$). More importantly,
we propose a novel variant that uses different learning rates for different arms, and
achieves a distribution-dependent regret bound of order $\frac{K \log^2 T}{\Delta}$ and a
distribution-independent bound of order $\sqrt{KT \log K}$ without requiring such prior knowledge.
To demonstrate the flexibility of our technique, we also propose a variant that
guarantees the same performance bounds even if the rewards are heavy-tailed.
1 Introduction
Exponential weighting strategies are fundamental tools in a variety of areas, including Machine Learning, Optimization, Theoretical Computer Science, and Decision Theory [3]. Within Reinforcement
Learning [23, 25], exponential weighting schemes are broadly used for balancing exploration and
exploitation, and are equivalently referred to as Boltzmann, Gibbs, or softmax exploration policies
[22, 14, 24, 19]. In the most common version of Boltzmann exploration, the probability of choosing
an arm is proportional to an exponential function of the empirical mean of the reward of that arm.
Despite the popularity of this policy, very little is known about its theoretical performance, even in
the simplest reinforcement learning setting of stochastic bandit problems.
The variant of Boltzmann exploration we focus on in this paper is defined by
$$p_{t,i} \propto e^{\eta_t \hat\mu_{t,i}}, \quad (1)$$
where $p_{t,i}$ is the probability of choosing arm $i$ in round $t$, $\hat\mu_{t,i}$ is the empirical average of the rewards
obtained from arm $i$ up until round $t$, and $\eta_t > 0$ is the learning rate. This variant is broadly used
in reinforcement learning [23, 25, 14, 26, 16, 18].
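Concretely, one round of this heuristic can be implemented as the following NumPy sketch; subtracting the maximum is a standard numerical-stability trick and not part of the definition:

```python
# A NumPy sketch of one round of the Boltzmann policy in Eq. (1).
import numpy as np

def boltzmann_action(mu_hat, eta, rng=np.random.default_rng()):
    logits = eta * mu_hat - np.max(eta * mu_hat)   # stabilize before exponentiating
    p = np.exp(logits) / np.sum(np.exp(logits))    # the distribution of Eq. (1)
    return rng.choice(len(mu_hat), p=p)
```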
In the multiarmed bandit literature, exponential-weights algorithms are also widespread, but they typically use importance-weighted estimators for
the rewards (see, e.g., [6, 8] for the nonstochastic setting, [12] for the stochastic setting, and
[20] for both stochastic and nonstochastic regimes). The theoretical behavior of these algorithms
is generally well understood. For example, in the stochastic bandit setting, Seldin and Slivkins [20]
show a regret bound of order $\frac{K \log^2 T}{\Delta}$, where $\Delta$ is the suboptimality gap (i.e., the smallest difference
between the mean reward of the optimal arm and the mean reward of any other arm).
In this paper, we aim to achieve a better theoretical understanding of the basic variant of the
Boltzmann exploration policy that relies on the empirical mean rewards. We first show that any
monotone learning-rate schedule will inevitably force the policy to either spend too much time
drawing suboptimal arms or completely fail to identify the optimal arm. Then, we show that a specific
non-monotone schedule of the learning rates can lead to a regret bound of order $\frac{K \log T}{\Delta^2}$. However, the
learning schedule has to rely on full knowledge of the gap $\Delta$ and the number of rounds $T$. Moreover,
our negative result helps us to identify a crucial shortcoming of the Boltzmann exploration policy: it
does not reason about the uncertainty of the empirical reward estimates. To alleviate this issue, we
propose a variant that takes this uncertainty into account by using separate learning rates for each
arm, where the learning rates account for the uncertainty of each reward estimate. We show that
the resulting algorithm guarantees a distribution-dependent regret bound of order $\frac{K \log^2 T}{\Delta}$, and a
distribution-independent bound of order $\sqrt{KT \log K}$.
Our algorithm and analysis are based on the so-called Gumbel-softmax trick that connects the
exponential-weights distribution with the maximum of independent random variables from the
Gumbel distribution.
2 The stochastic multi-armed bandit problem

Consider the setting of stochastic multi-armed bandits: each arm $i \in [K] \stackrel{\mathrm{def}}{=} \{1, 2, \dots, K\}$ yields a
reward with distribution $\nu_i$ and mean $\mu_i$, with the optimal mean reward being $\mu^* = \max_i \mu_i$. Without
loss of generality, we will assume that the optimal arm is unique and has index 1. The gap of arm $i$ is
defined as $\Delta_i = \mu^* - \mu_i$. We consider a repeated game between the learner and the environment,
where in each round $t = 1, 2, \dots$, the following steps are repeated:
1. The learner chooses an arm $I_t \in [K]$,
2. the environment draws a reward $X_{t,I_t} \sim \nu_{I_t}$ independently of the past,
3. the learner receives and observes the reward $X_{t,I_t}$.
The performance of the learner is measured in terms of the pseudo-regret, defined as
$$R_T = \mu^* T - \sum_{t=1}^{T} \mathbb{E}\,[X_{t,I_t}] = \mu^* T - \mathbb{E}\bigg[\sum_{t=1}^{T} \mu_{I_t}\bigg] = \mathbb{E}\bigg[\sum_{t=1}^{T} \Delta_{I_t}\bigg] = \sum_{i=1}^{K} \Delta_i\, \mathbb{E}\,[N_{T,i}], \quad (2)$$
where we defined $N_{t,i} = \sum_{s=1}^{t} \mathbb{I}_{\{I_s = i\}}$, that is, the number of times that arm $i$ has been chosen until
the end of round $t$. We aim at constructing algorithms that guarantee that the regret grows sublinearly.
We will consider the above problem under various assumptions on the distribution of the rewards. For
most of our results, we will assume that each $\nu_i$ is $\sigma$-subgaussian with a known parameter $\sigma > 0$,
that is, that
$$\mathbb{E}\Big[e^{y(X_{1,i} - \mathbb{E}[X_{1,i}])}\Big] \le e^{\sigma^2 y^2 / 2}$$
holds for all $y \in \mathbb{R}$ and $i \in [K]$. It is easy to see that any random variable bounded in an interval of
length $B$ is $B^2/4$-subgaussian. Under this assumption, it is well known that any algorithm will suffer
a regret of at least $\Omega\big(\sum_{i > 1} \frac{\sigma^2 \log T}{\Delta_i}\big)$, as shown in the classic paper of Lai and Robbins [17]. There
exist several algorithms guaranteeing matching upper bounds, even for finite horizons [7, 10, 15]. We
refer to the survey of Bubeck and Cesa-Bianchi [9] for an exhaustive treatment of the topic.
3 Boltzmann exploration done wrong
We now formally describe the heuristic form of Boltzmann exploration that is commonly used in
the reinforcement learning literature [23, 25, 14]. This strategy works by maintaining the empirical
estimates of each $\mu_i$, defined as
$$\hat\mu_{t,i} = \frac{\sum_{s=1}^{t} X_{s,i}\, \mathbb{I}_{\{I_s = i\}}}{N_{t,i}}, \quad (3)$$
and computing the exponential-weights distribution (1) for an appropriately tuned sequence of
learning rate parameters $\eta_t > 0$ (which are often referred to as the inverse temperature). As noted on
several occasions in the literature, finding the right schedule for $\eta_t$ can be very difficult in practice
[14, 26]. Below, we quantify this difficulty by showing that natural learning-rate schedules may
fail to achieve near-optimal regret guarantees. More precisely, they may draw suboptimal arms
too much even after having estimated all the means correctly, or commit too early to a suboptimal
arm and never recover afterwards. We partially circumvent this issue by proposing an admittedly
artificial learning-rate schedule that actually guarantees near-optimal performance. However, a
serious limitation of this schedule is that it relies on prior knowledge of the problem parameters $\Delta$ and $T$
that are typically unknown at the beginning of the learning procedure. These observations lead us
to the conclusion that the Boltzmann exploration policy as described by Equations (1) and (3) is no
more effective for regret minimization than the simplest alternative of $\varepsilon$-greedy exploration [23, 7].
Before we present our own technical results, we mention that Singh et al. [21] propose a learning-rate
schedule $\eta_t$ for Boltzmann exploration that simultaneously guarantees that all arms will be drawn
infinitely often as $T$ goes to infinity, and that the policy becomes greedy in the limit. This property
is proven by choosing a learning-rate schedule adaptively to ensure that in each round $t$, each arm
gets drawn with probability at least $\frac{1}{t}$, making it similar in spirit to $\varepsilon$-greedy exploration. While
this strategy clearly leads to sublinear regret, it is easy to construct examples on which it suffers a
regret of at least $\Omega(T^{1-\varepsilon})$ for any small $\varepsilon > 0$. In this paper, we pursue a more ambitious goal: we
aim to find out whether Boltzmann exploration can actually guarantee polylogarithmic regret. In
the rest of this section, we present both negative and positive results concerning the standard variant
of Boltzmann exploration, and then move on to providing an efficient generalization that achieves
consistency in a more universal sense.
3.1 Boltzmann exploration with monotone learning rates is suboptimal
In this section, we study the most natural variant of Boltzmann exploration that uses a monotone
learning-rate schedule. It is easy to see that in order to achieve sublinear regret, the learning rate $\eta_t$
needs to increase with $t$ so that the suboptimal arms are drawn with less and less probability as time
progresses. For the sake of clarity, we study the simplest possible setting with two arms with a gap of
$\Delta$ between their means. We first show that, asymptotically, the learning rate has to increase at least
at a rate $\frac{\log t}{\Delta}$ even when the mean rewards are perfectly known. In other words, this is the minimal
affordable learning rate.

Proposition 1. Let us assume that $\hat\mu_{t,i} = \mu_i$ for all $t$ and both $i$. If $\eta_t = o\big(\frac{\log(t\Delta^2)}{\Delta}\big)$, then the regret
grows at least as fast as $R_T = \omega\big(\frac{\log T}{\Delta}\big)$.
Proof. Let us define $\eta_t^* = \frac{\log(t\Delta^2)}{\Delta}$ for all $t$. The probability of pulling the suboptimal arm can be
asymptotically bounded as
$$P[I_t = 2] = \frac{e^{-\eta_t \Delta}}{1 + e^{-\eta_t \Delta}} = \omega\Big(\frac{e^{-\eta_t^* \Delta}}{2}\Big) = \omega\Big(\frac{1}{\Delta^2 t}\Big).$$
Summing up for all $t$, we get that the regret is at least
$$R_T = \Delta \sum_{t=1}^{T} P[I_t = 2] = \omega\Big(\sum_{t=1}^{T} \frac{1}{\Delta t}\Big) = \omega\Big(\frac{\log T}{\Delta}\Big),$$
thus proving the statement.
This simple proposition thus implies an asymptotic lower bound on the schedule of learning rates $\eta_t$.
In contrast, Theorem 1 below shows that all learning rate sequences that grow faster than $2 \log t$ yield
a linear regret, provided this schedule is adopted from the beginning of the game. This should be
contrasted with Theorem 2, which exhibits a schedule achieving logarithmic regret where $\eta_t$ grows
faster than $2 \log t$ only after the first $\tau$ rounds.

Theorem 1. There exists a 2-armed stochastic bandit problem with rewards bounded in $[0, 1]$ where
Boltzmann exploration using any learning rate sequence $\eta_t$ such that $\eta_t > 2 \log t$ for all $t \ge 1$ has
regret $R_T = \Omega(T)$.
Proof. Consider the case where arm 2 gives a reward deterministically equal to $\frac{1}{2}$, whereas the optimal
arm 1 has a Bernoulli distribution with parameter $p = \frac{1}{2} + \Delta$ for some $0 < \Delta < \frac{1}{2}$. Note that the regret
of any algorithm satisfies $R_T \ge \Delta (T - t_0)\, P[\forall t > t_0:\ I_t = 2]$. Without loss of generality, assume
that $\hat\mu_{1,1} = 0$ and $\hat\mu_{1,2} = 1/2$. Then for all $t$, independent of the algorithm, $\hat\mu_{t,2} = 1/2$ and
$$p_{t,1} = \frac{e^{\eta_t \mathrm{Bin}(N_{t-1,1},\,p)}}{e^{\eta_t/2} + e^{\eta_t \mathrm{Bin}(N_{t-1,1},\,p)}} \qquad \text{and} \qquad p_{t,2} = \frac{e^{\eta_t/2}}{e^{\eta_t/2} + e^{\eta_t \mathrm{Bin}(N_{t-1,1},\,p)}}.$$
For $t_0 \ge 1$, let $E_{t_0}$ be the event that $\mathrm{Bin}(N_{t_0,1}, p) = 0$, that is, up to time $t_0$, arm 1 gives only zero
reward whenever it is sampled. Then
$$P[\forall t > t_0:\ I_t = 2] \ge P[E_{t_0}]\,\big(1 - P[\exists t > t_0:\ I_t = 1 \mid E_{t_0}]\big) \ge \Big(\frac{1}{2} - \Delta\Big)^{t_0} \big(1 - P[\exists t > t_0:\ I_t = 1 \mid E_{t_0}]\big).$$
For $t > t_0$, let $A_{t,t_0}$ be the event that arm 1 is sampled at time $t$ but not at any of the times
$t_0 + 1, t_0 + 2, \dots, t - 1$. Then, for any $t_0 \ge 1$,
$$P[\exists t > t_0:\ I_t = 1 \mid E_{t_0}] = P\Big[\bigcup_{t > t_0} A_{t,t_0} \,\Big|\, E_{t_0}\Big] \le \sum_{t > t_0} P[A_{t,t_0} \mid E_{t_0}] = \sum_{t > t_0} \frac{1}{1 + e^{\eta_t/2}} \prod_{s=t_0+1}^{t-1} \Big(1 - \frac{1}{1 + e^{\eta_s/2}}\Big) \le \sum_{t > t_0} e^{-\eta_t/2}.$$
Therefore
$$R_T \ge \Delta (T - t_0) \Big(\frac{1}{2} - \Delta\Big)^{t_0} \Big(1 - \sum_{t > t_0} e^{-\eta_t/2}\Big).$$
Assume $\eta_t \ge c \log t$ for some $c > 2$ and all $t \ge t_0$. Then
$$\sum_{t > t_0} e^{-\eta_t/2} \le \sum_{t > t_0} t^{-c/2} \le \int_{t_0}^{\infty} x^{-c/2}\, dx = \frac{1}{\frac{c}{2} - 1}\, t_0^{-(\frac{c}{2} - 1)} \le \frac{1}{2}$$
whenever $t_0 \ge (2a)^{1/a}$, where $a = \frac{c}{2} - 1$. This implies $R_T = \Omega(T)$.

3.2 A learning-rate schedule with near-optimal guarantees
The above negative result relies heavily on the assumption that $\eta_t > 2 \log t$ holds from
the beginning. If we instead start off from a constant learning rate which we keep for a logarithmic
number of rounds, then a logarithmic regret bound can be shown. Arguably, this results in a rather
simplistic exploration scheme, which can be essentially seen as an explore-then-commit strategy
(e.g., [13]). Despite its simplicity, this strategy can be shown to achieve near-optimal performance
if the parameters are tuned as a function of the suboptimality gap $\Delta$ (although its regret scales at the
suboptimal rate of $1/\Delta^2$ with this parameter). The following theorem (proved in Appendix A.1)
states this performance guarantee.

Theorem 2. Assume the rewards of each arm are in $[0, 1]$ and let $\tau = \frac{16 e K \log T}{\Delta^2}$. Then the regret of
Boltzmann exploration with learning rate $\eta_t = \mathbb{I}_{\{t < \tau\}} + \frac{\log(t\Delta^2)}{\Delta}\, \mathbb{I}_{\{t \ge \tau\}}$ satisfies
$$R_T \le \frac{16 e K \log T}{\Delta^2} + \frac{9K}{\Delta^2}.$$
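As a sketch, the schedule of Theorem 2 can be written as follows; note that it needs $\Delta$ and $T$ up front, which is exactly the limitation discussed above:

```python
# A sketch of the learning-rate schedule of Theorem 2.
import numpy as np

def theorem2_rate(t, Delta, T, K):
    tau = 16 * np.e * K * np.log(T) / Delta ** 2
    if t < tau:
        return 1.0                            # constant-rate exploration phase
    return np.log(t * Delta ** 2) / Delta     # then grow like log(t * Delta^2) / Delta
```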
4 Boltzmann exploration done right
We now turn to give a variant of Boltzmann exploration that achieves near-optimal guarantees without prior knowledge of either $\Delta$ or $T$. Our approach is based on the observation that the distribution $p_{t,i} \propto \exp(\eta_t \hat{\mu}_{t,i})$ can be equivalently specified by the rule $I_t = \arg\max_j \{\eta_t \hat{\mu}_{t,j} + Z_{t,j}\}$, where $Z_{t,j}$ is a standard Gumbel random variable¹ drawn independently for each arm $j$ (see, e.g., Abernethy et al. [1] and the references therein). As we saw in the previous section, this scheme fails to guarantee consistency in general, as it does not capture the uncertainty of the reward estimates. We now propose a variant that takes this uncertainty into account by choosing different scaling factors for each perturbation. In particular, we will use the simple choice $\beta_{t,i} = \sqrt{C^2/N_{t,i}}$ with some constant $C > 0$ that will be specified later. Our algorithm operates by independently drawing perturbations $Z_{t,i}$ from a standard Gumbel distribution for each arm $i$, then choosing action
$$I_{t+1} = \arg\max_i \left\{ \hat{\mu}_{t,i} + \beta_{t,i} Z_{t,i} \right\}. \tag{4}$$
We refer to this algorithm as Boltzmann–Gumbel exploration, or, in short, BGE. Unfortunately, the probabilities $p_{t,i}$ no longer have a simple closed form; nevertheless, the algorithm is very straightforward to implement. Our main positive result is the following performance guarantee for the algorithm.²
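To make the procedure concrete, here is a minimal sketch of BGE on a Bernoulli bandit in Python with NumPy. Everything about the environment (arm means, horizon, the value of C, the round-robin initialization) is an illustrative choice of ours rather than something prescribed by the analysis; only the perturbed index with $\beta_{t,i} = \sqrt{C^2/N_{t,i}}$ is taken from Eq. (4).

```python
import numpy as np

def bge(means, T, C=0.5, seed=0):
    """Minimal sketch of Boltzmann-Gumbel exploration (BGE): pull the arm
    maximizing mu_hat_i + beta_i * Z_i, with Z_i a fresh Gumbel draw and
    beta_i = sqrt(C^2 / N_i)."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)          # N_{t,i}: number of pulls of each arm
    sums = np.zeros(K)            # cumulative reward of each arm
    regret = 0.0
    for t in range(T):
        if t < K:
            arm = t               # pull each arm once before perturbing
        else:
            mu_hat = sums / counts
            beta = np.sqrt(C ** 2 / counts)
            # NumPy's standard Gumbel has location 0; the paper centers its
            # Gumbel slightly differently, but the shift is only a constant.
            z = rng.gumbel(size=K)
            arm = int(np.argmax(mu_hat + beta * z))
        reward = float(rng.random() < means[arm])    # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]
    return regret

# 10 arms with gap 0.01; C^2 = 1/4 since Bernoullis are 1/4-subgaussian.
print(bge([0.51] + [0.5] * 9, T=100_000, C=0.5))
```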
Theorem 3. Assume that the rewards of each arm are $\sigma^2$-subgaussian and let $c > 0$ be arbitrary. Then, the regret of Boltzmann–Gumbel exploration satisfies
$$R_T \le \sum_{i=2}^{K} \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right)}{\Delta_i} + \sum_{i=2}^{K} \frac{c^2 e^{\gamma} + 18C^2 e^{\sigma^2/2C^2}\left(1 + e^{-\gamma}\right)}{\Delta_i} + \sum_{i=2}^{K} \Delta_i.$$
In particular, choosing $C = \sigma$ and $c = \sigma$ guarantees a regret bound of
$$R_T = O\!\left(\sum_{i=2}^{K} \frac{\sigma^2 \log^2\!\left(T\Delta_i^2/\sigma^2\right)}{\Delta_i}\right).$$
Notice that, unlike any other algorithm that we are aware of, Boltzmann–Gumbel exploration still continues to guarantee meaningful regret bounds even if the subgaussianity constant $\sigma$ is underestimated (although such misspecification is penalized exponentially in the true $\sigma^2$). A downside of our bound is that it shows a suboptimal dependence on the number of rounds $T$: it grows asymptotically as $\sum_{i>1} \log^2(T\Delta_i^2)/\Delta_i$, in contrast to the standard regret bounds for the UCB algorithm of Auer et al. [7] that grow as $\sum_{i>1} (\log T)/\Delta_i$. However, our guarantee improves on the distribution-independent regret bounds of UCB, which are of order $\sqrt{KT \log T}$. This is shown in the following corollary.
Corollary 1. Assume that the rewards of each arm are $\sigma^2$-subgaussian. Then, the regret of Boltzmann–Gumbel exploration with $C = \sigma$ satisfies $R_T \le 200\,\sigma\sqrt{KT \log K}$.
Notably, this bound shows optimal dependence on the number of rounds $T$, but is suboptimal in terms of the number of arms. To complement this upper bound, we also show that these bounds are tight in the sense that the $\sqrt{\log K}$ factor cannot be removed.
Theorem 4. For any $K$ and $T$ such that $\sqrt{(K \log K)/T} \le 1$, there exists a bandit problem with rewards bounded in $[0, 1]$ where the regret of Boltzmann–Gumbel exploration with $C = 1$ is at least $R_T \ge \frac{1}{2}\sqrt{KT \log K}$.
The proofs can be found in Appendices A.5 and A.6. Note that more sophisticated policies are known to have better distribution-free bounds. The algorithm MOSS [4] achieves minimax-optimal $\sqrt{KT}$ distribution-free bounds, but distribution-dependent bounds of the form $(K/\Delta)\log(T\Delta^2)$, where $\Delta$ is the suboptimality gap. A variant of UCB using action elimination, due to Auer and Ortner [5], has regret $\sum_{i>1} \log(T\Delta_i^2)/\Delta_i$, corresponding to a $\sqrt{KT \log K}$ distribution-free bound. The same bounds are achieved by the Gaussian Thompson sampling algorithm of Agrawal and Goyal [2], given that the rewards are subgaussian.

¹The cumulative distribution function of a standard Gumbel random variable is $F(x) = \exp(-e^{-x+\gamma})$, where $\gamma$ is the Euler–Mascheroni constant.
²We use the notation $\log_+(\cdot) = \max\{0, \cdot\}$.
We finally provide a simple variant of our algorithm that allows us to handle heavy-tailed rewards, intended here as reward distributions that are not subgaussian. We propose to use a technique due to Catoni [11] based on the influence function
$$\psi(x) = \begin{cases} \log\left(1 + x + x^2/2\right), & \text{for } x \ge 0, \\ -\log\left(1 - x + x^2/2\right), & \text{for } x \le 0. \end{cases}$$
Using this function, we define our estimates as
$$\hat{\mu}_{t,i} = \beta_{t,i} \sum_{s=1}^{t} \mathbb{I}_{\{I_s = i\}}\, \psi\!\left(\frac{X_{s,i}}{\beta_{t,i} N_{t,i}}\right).$$
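To illustrate, the sketch below implements this robust estimate in Python; the sample values are invented, and the scaling $\beta_{t,i} = \sqrt{C^2/N_{t,i}}$ is carried over from Section 4. In the bandit algorithm, this function would simply replace the empirical mean in the earlier BGE sketch.

```python
import numpy as np

def psi(x):
    """Catoni's influence function: grows only logarithmically, so a single
    extreme reward cannot dominate the estimate."""
    return np.where(x >= 0.0,
                    np.log1p(x + 0.5 * x ** 2),
                    -np.log1p(-x + 0.5 * x ** 2))

def catoni_style_mean(rewards, C=1.0):
    """mu_hat = beta * sum_s psi(X_s / (beta * n)) with beta = sqrt(C^2/n)."""
    x = np.asarray(rewards, dtype=float)
    n = len(x)
    beta = np.sqrt(C ** 2 / n)
    return beta * np.sum(psi(x / (beta * n)))

# A heavy-tailed sample: the outlier's influence is damped logarithmically.
sample = np.array([0.4, 0.6, 0.5, 0.5, 50.0])
print(np.mean(sample), catoni_style_mean(sample))
```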
We prove the following result regarding Boltzmann–Gumbel exploration run with the above estimates.
Theorem 5. Assume that the second moment of the rewards of each arm is bounded uniformly as $\mathbb{E}\left[X_i^2\right] \le V$ and let $c > 0$ be arbitrary. Then, the regret of Boltzmann–Gumbel exploration satisfies
$$R_T \le \sum_{i=2}^{K} \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right)}{\Delta_i} + \sum_{i=2}^{K} \frac{c^2 e^{\gamma} + 18C^2 e^{V/2C^2}\left(1 + e^{-\gamma}\right)}{\Delta_i} + \sum_{i=2}^{K} \Delta_i.$$
Notably, this bound coincides with that of Theorem 3, except that $\sigma^2$ is replaced by $V$. Thus, by following the proof of Corollary 1, we can show a distribution-independent regret bound of order $\sqrt{KT \log K}$.
5 Analysis
Let us now present the proofs of our main results concerning Boltzmann–Gumbel exploration, Theorems 3 and 5. Our analysis builds on several ideas from Agrawal and Goyal [2]. We first provide generic tools that are independent of the reward estimator and then move on to providing specifics for both estimators.
We start by introducing some notation. We define $\tilde{\mu}_{t,i} = \hat{\mu}_{t,i} + \beta_{t,i} Z_{t,i}$, so that the algorithm can be simply written as $I_t = \arg\max_i \tilde{\mu}_{t,i}$. Let $\mathcal{F}_{t-1}$ be the sigma-algebra generated by the actions taken by the learner and the realized rewards up to the beginning of round $t$. Let us fix thresholds $x_i, y_i$ satisfying $\mu_i \le x_i \le y_i \le \mu_1$ and define $q_{t,i} = P[\tilde{\mu}_{t,1} > y_i \mid \mathcal{F}_{t-1}]$. Furthermore, we define the events $E^{\hat{\mu}}_{t,i} = \{\hat{\mu}_{t,i} \le x_i\}$ and $E^{\tilde{\mu}}_{t,i} = \{\tilde{\mu}_{t,i} \le y_i\}$. With this notation at hand, we can decompose the number of draws of any suboptimal arm $i$ as follows:
$$\mathbb{E}[N_{T,i}] = \sum_{t=1}^{T} P\bigl[I_t = i,\, E^{\tilde{\mu}}_{t,i},\, E^{\hat{\mu}}_{t,i}\bigr] + \sum_{t=1}^{T} P\bigl[I_t = i,\, \overline{E^{\tilde{\mu}}_{t,i}},\, E^{\hat{\mu}}_{t,i}\bigr] + \sum_{t=1}^{T} P\bigl[I_t = i,\, \overline{E^{\hat{\mu}}_{t,i}}\bigr]. \tag{5}$$
It remains to choose the thresholds $x_i$ and $y_i$ in a meaningful way: we pick $x_i = \mu_i + \frac{\Delta_i}{3}$ and $y_i = \mu_1 - \frac{\Delta_i}{3}$. The rest of the proof is devoted to bounding each term in Eq. (5). Intuitively, the individual terms capture the following events:
• The first term counts the number of times that, even though the estimated mean reward of arm $i$ is well-concentrated and the additional perturbation $Z_{t,i}$ is not too large, arm $i$ was drawn instead of the optimal arm 1. This happens when the optimal arm is poorly estimated or when the perturbation $Z_{t,1}$ is not large enough. Intuitively, this term measures the interaction between the perturbations $Z_{t,1}$ and the random fluctuations of the reward estimate $\hat{\mu}_{t,1}$ around its true mean, and will be small if the perturbations tend to be large enough and the tail of the reward estimates is light enough.
• The second term counts the number of times that the mean reward of arm $i$ is well-estimated, but it ends up being drawn due to a large perturbation. This term can be bounded independently of the properties of the mean estimator and is small when the tail of the perturbation distribution is not too heavy.
• The last term counts the number of times that the reward estimate of arm $i$ is poorly concentrated. This term is independent of the perturbations and only depends on the properties of the reward estimator.
As we will see, the first and the last terms can be bounded in terms of the moment-generating function of the reward estimates, which makes subgaussian reward estimators particularly easy to treat. We begin with the most standard part of our analysis: bounding the third term on the right-hand side of (5) in terms of the moment-generating function.
Lemma 1. Let us fix any $i$ and define $\tau_k$ as the $k$'th time that arm $i$ was drawn. We have
$$\sum_{t=1}^{T} P\bigl[I_t = i,\, \overline{E^{\hat{\mu}}_{t,i}}\bigr] \le 1 + \sum_{k=1}^{T-1} \mathbb{E}\left[\exp\left(\frac{\hat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] e^{-\frac{\Delta_i \sqrt{k}}{3C}}.$$
Interestingly, our next key result shows that the first term can be bounded by a nearly identical expression:
Lemma 2. Let us define $\tau_k$ as the $k$'th time that arm 1 was drawn. For any $i$, we have
$$\sum_{t=1}^{T} P\bigl[I_t = i,\, E^{\tilde{\mu}}_{t,i},\, E^{\hat{\mu}}_{t,i}\bigr] \le \sum_{k=0}^{T-1} \mathbb{E}\left[\exp\left(\frac{\mu_1 - \hat{\mu}_{\tau_k,1}}{\beta_{\tau_k,1}}\right)\right] e^{-\gamma - \frac{\Delta_i \sqrt{k}}{3C}}.$$
It remains to bound the second term in Equation (5), which we do in the following lemma:
Lemma 3. For any $i \ne 1$ and any constant $c > 0$, we have
$$\sum_{t=1}^{T} P\bigl[I_t = i,\, \overline{E^{\tilde{\mu}}_{t,i}},\, E^{\hat{\mu}}_{t,i}\bigr] \le \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right) + c^2 e^{\gamma}}{\Delta_i^2}.$$
The proofs of these three lemmas are included in the supplementary material.
5.1 The proof of Theorem 3
For this section, we assume that the rewards are $\sigma^2$-subgaussian and that $\hat{\mu}_{t,i}$ is the empirical-mean estimator. Building on the results of the previous section, observe that we are left with bounding the terms appearing in Lemmas 1 and 2. To this end, let us fix a $k$ and an $i$ and notice that, by the subgaussianity assumption on the rewards, the empirical mean $\hat{\mu}_{\tau_k,i}$ is $(\sigma^2/k)$-subgaussian (as $N_{\tau_k,i} = k$). In other words,
$$\mathbb{E}\left[e^{\lambda(\hat{\mu}_{\tau_k,i} - \mu_i)}\right] \le e^{\lambda^2 \sigma^2 / 2k}$$
holds for any $\lambda$. In particular, using the above formula for $\lambda = 1/\beta_{\tau_k,i} = \sqrt{k/C^2}$, we obtain
$$\mathbb{E}\left[\exp\left(\frac{\hat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] \le e^{\sigma^2/2C^2}.$$
Thus, the sum appearing in Lemma 1 can be bounded as
$$\sum_{k=1}^{T-1} \mathbb{E}\left[\exp\left(\frac{\hat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] e^{-\frac{\Delta_i \sqrt{k}}{3C}} \le e^{\sigma^2/2C^2} \sum_{k=1}^{T-1} e^{-\frac{\Delta_i \sqrt{k}}{3C}} \le \frac{18 C^2 e^{\sigma^2/2C^2}}{\Delta_i^2},$$
where the last step follows from the fact³ that $\sum_{k=0}^{\infty} e^{-c\sqrt{k}} \le \frac{2}{c^2}$ holds for all $c > 0$. The statement of Theorem 3 now follows from applying the same argument to the bound of Lemma 2, using Lemma 3, and the standard expression for the regret in Equation (2).

³This can easily be seen by bounding the sum with an integral.
[Figure 1: two panels, (a) and (b), plotting regret (0 to 10,000) against C² on a log scale (10⁻² to 10²) for BE(const), BE(log), BE(sqrt), BGE, and UCB.]
Figure 1: Empirical performance of Boltzmann exploration variants, Boltzmann–Gumbel exploration and UCB for (a) i.i.d. initialization and (b) malicious initialization, as a function of C². The dotted vertical line corresponds to the choice C² = 1/4 suggested by Theorem 3.
5.2 The proof of Theorem 5
We now drop the subgaussian assumption on the rewards and consider reward distributions that are possibly heavy-tailed, but have bounded variance. The proof of Theorem 5 trivially follows from the arguments in the previous subsection and from Proposition 2.1 of Catoni [11] (with its parameter set to 0), which guarantees the bound
$$\mathbb{E}\left[\exp\left(\frac{\mu_i - \hat{\mu}_{t,i}}{\beta_{t,i}}\right) \,\middle|\, N_{t,i} = n\right] \le \exp\left(\frac{\mathbb{E}\left[X_i^2\right]}{2C^2}\right). \tag{6}$$
6 Experiments
This section concludes by illustrating our theoretical results through some experiments, highlighting the limitations of Boltzmann exploration and contrasting it with the performance of Boltzmann–Gumbel exploration. We consider a stochastic multi-armed bandit problem with $K = 10$ arms, each yielding Bernoulli rewards with mean $\mu_i = 1/2$ for all suboptimal arms $i > 1$ and $\mu_1 = 1/2 + \Delta$ for the optimal arm. We set the horizon to $T = 10^6$ and the gap parameter to $\Delta = 0.01$. We compare three variants of Boltzmann exploration with inverse learning rate parameters
• $\beta_t = C^2$ (BE-const),
• $\beta_t = C^2 / \log t$ (BE-log), and
• $\beta_t = C^2 / \sqrt{t}$ (BE-sqrt)
for all $t$, and compare them with Boltzmann–Gumbel exploration (BGE) and UCB with exploration bonus $\sqrt{C^2 \log(t)/N_{t,i}}$.
We study two different scenarios: (a) all rewards drawn i.i.d. from the Bernoulli distributions with
the means given above and (b) the first T0 = 5,000 rewards set to 0 for arm 1. The latter scenario
simulates the situation described in the proof of Theorem 1, and in particular exposes the weakness
of Boltzmann exploration with increasing learning rate parameters. The results shown in Figure 1(a) and (b) show that, while some variants of Boltzmann exploration may perform reasonably well when initial rewards take typical values and the parameters are chosen luckily, all standard versions fail to identify the optimal arm when the initial draws are not representative of the true mean (which happens with a small constant probability). On the other hand, UCB and Boltzmann–Gumbel exploration continue to perform well even under this unlikely event, as predicted by their respective theoretical guarantees. Notably, Boltzmann–Gumbel exploration performs comparably to UCB in this example (even slightly outperforming its competitor here), and performs notably well for the recommended parameter setting of $C^2 = \sigma^2 = 1/4$ (noting that Bernoulli random variables are 1/4-subgaussian).
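The following Python sketch reproduces the flavor of scenario (b); the horizon is shortened and all helper names are ours, and BGE from the earlier sketch can be plugged in as a further policy.

```python
import numpy as np

def run(policy, means, T, T0=5000, seed=0):
    """Scenario (b): arm 0 (the optimal arm) returns reward 0 on its first
    T0 pulls, simulating maximally unlucky initial draws."""
    rng = np.random.default_rng(seed)
    K = len(means)
    counts, sums, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, T + 1):
        arm = policy(t, counts, sums, rng)
        malicious = (arm == 0 and counts[0] < T0)
        reward = 0.0 if malicious else float(rng.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]
    return regret

def boltzmann(inv_lr):
    """Boltzmann exploration with inverse learning rate beta_t = inv_lr(t),
    i.e., p_{t,i} proportional to exp(mu_hat_{t,i} / beta_t)."""
    def policy(t, counts, sums, rng):
        if counts.min() == 0:
            return int(np.argmin(counts))   # round-robin initialization
        logits = (sums / counts) / inv_lr(t)
        logits -= logits.max()              # stabilize the softmax
        p = np.exp(logits)
        return int(rng.choice(len(counts), p=p / p.sum()))
    return policy

def ucb(C2):
    def policy(t, counts, sums, rng):
        if counts.min() == 0:
            return int(np.argmin(counts))
        bonus = np.sqrt(C2 * np.log(t) / counts)
        return int(np.argmax(sums / counts + bonus))
    return policy

means, C2 = [0.51] + [0.5] * 9, 0.25        # K = 10, Delta = 0.01
for name, pol in [("BE-const", boltzmann(lambda t: C2)),
                  ("BE-log", boltzmann(lambda t: C2 / np.log(t))),
                  ("BE-sqrt", boltzmann(lambda t: C2 / np.sqrt(t))),
                  ("UCB", ucb(C2))]:
    print(name, run(pol, means, T=100_000))
```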
Acknowledgements Gábor Lugosi was supported by the Spanish Ministry of Economy and Competitiveness, Grant MTM2015-67304-P and FEDER, EU. Gergely Neu was supported by the UPFellows Fellowship (Marie Curie COFUND program n° 600387).
References
[1] J. Abernethy, C. Lee, A. Sinha, and A. Tewari. Online linear optimization via smoothing. In M.-F. Balcan and Cs. Szepesvári, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of JMLR Proceedings, pages 807–823. JMLR.org, 2014.
[2] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. In AISTATS, pages 99–107, 2013.
[3] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8:121–164, 2012.
[4] J.-Y. Audibert and S. Bubeck. Minimax policies for bandits games. In S. Dasgupta and A. Klivans, editors, Proceedings of the 22nd Annual Conference on Learning Theory. Omnipress, June 18–21 2009.
[5] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61:55–65, 2010. ISSN 0031-5303.
[6] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 322–331. IEEE, 1995.
[7] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235–256, May 2002. ISSN 0885-6125. doi: 10.1023/A:1013689704352. URL http://dx.doi.org/10.1023/A:1013689704352.
[8] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002. ISSN 0097-5397.
[9] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Now Publishers Inc, 2012.
[10] O. Cappé, A. Garivier, O.-A. Maillard, R. Munos, G. Stoltz, et al. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
[11] O. Catoni. Challenging the empirical mean and empirical variance: A deviation study. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 48(4):1148–1185, 11 2012.
[12] N. Cesa-Bianchi and P. Fischer. Finite-time regret bounds for the multiarmed bandit problem. In ICML, pages 100–108, 1998.
[13] A. Garivier, E. Kaufmann, and T. Lattimore. On explore-then-commit strategies. In NIPS, 2016.
[14] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[15] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In ALT'12, pages 199–213, 2012.
[16] V. Kuleshov and D. Precup. Algorithms for multi-armed bandit problems. arXiv preprint arXiv:1402.6028, 2014.
[17] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[18] I. Osband, B. Van Roy, and Z. Wen. Generalization and exploration via randomized value functions. 2016.
[19] T. Perkins and D. Precup. A convergent form of approximate policy iteration. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1595–1602, Cambridge, MA, USA, 2003. MIT Press.
[20] Y. Seldin and A. Slivkins. One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML 2014), pages 1287–1295, 2014.
[21] S. P. Singh, T. Jaakkola, M. L. Littman, and Cs. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308, 2000. URL citeseer.ist.psu.edu/article/singh98convergence.html.
[22] R. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216–224. San Mateo, CA, 1990.
[23] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[24] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. Solla, T. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 1057–1063, Cambridge, MA, USA, 1999. MIT Press.
[25] Cs. Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[26] J. Vermorel and M. Mohri. Multi-armed bandit algorithms and empirical evaluation. In European Conference on Machine Learning, pages 437–448. Springer, 2005.
6,864 | 7,209 | Learned in Translation: Contextualized Word Vectors
Bryan McCann
[email protected]
James Bradbury
[email protected]
Caiming Xiong
[email protected]
Richard Socher
[email protected]
Abstract
Computer vision has benefited from initializing multiple deep layers with weights
pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models
with pretrained word vectors. In this paper, we use a deep LSTM encoder from
an attentional sequence-to-sequence model trained for machine translation (MT)
to contextualize word vectors. We show that adding these context vectors (CoVe)
improves performance over using only unsupervised word and character vectors on
a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question
classification (TREC), entailment (SNLI), and question answering (SQuAD). For
fine-grained sentiment analysis and entailment, CoVe improves performance of our
baseline models to the state of the art.
1
Introduction
Significant gains have been made through transfer and multi-task learning between synergistic tasks.
In many cases, these synergies can be exploited by architectures that rely on similar components. In
computer vision, convolutional neural networks (CNNs) pretrained on ImageNet [Krizhevsky et al.,
2012, Deng et al., 2009] have become the de facto initialization for more complex and deeper models.
This initialization improves accuracy on other related tasks such as visual question answering [Xiong
et al., 2016] or image captioning [Lu et al., 2016, Socher et al., 2014].
In NLP, distributed representations pretrained with models like Word2Vec [Mikolov et al., 2013]
and GloVe [Pennington et al., 2014] have become common initializations for the word vectors of
deep learning models. Transferring information from large amounts of unlabeled training data in
the form of word vectors has shown to improve performance over random word vector initialization
on a variety of downstream tasks, e.g. part-of-speech tagging [Collobert et al., 2011], named entity
recognition [Pennington et al., 2014], and question answering [Xiong et al., 2017]; however, words
rarely appear in isolation. The ability to share a common representation of words in the context of
sentences that include them could further improve transfer learning in NLP.
Inspired by the successful transfer of CNNs trained on ImageNet to other tasks in computer vision,
we focus on training an encoder for a large NLP task and transferring that encoder to other tasks
in NLP. Machine translation (MT) requires a model to encode words in context so as to decode
them into another language, and attentional sequence-to-sequence models for MT often contain an
LSTM-based encoder, which is a common component in other NLP models. We hypothesize that MT
data in general holds potential comparable to that of ImageNet as a cornerstone for reusable models.
This makes an MT-LSTM pairing in NLP a natural candidate for mirroring the ImageNet-CNN
pairing of computer vision.
As depicted in Figure 1, we begin by training LSTM encoders on several machine translation datasets,
and we show that these encoders can be used to improve performance of models trained for other tasks in NLP.

[Figure 1: two panels. (a) An attentional encoder–decoder machine translation model built on word vectors. (b) The pretrained encoder reused, alongside word vectors, as input to a task-specific model.]
Figure 1: We (a) train a two-layer, bidirectional LSTM as the encoder of an attentional sequence-to-sequence model for machine translation and (b) use it to provide context for other NLP models.

In order to test the transferability of these encoders, we develop a common architecture
for a variety of classification tasks, and we modify the Dynamic Coattention Network for question
answering [Xiong et al., 2017]. We append the outputs of the MT-LSTMs, which we call context
vectors (CoVe), to the word vectors typically used as inputs to these models. This approach improved
the performance of models for downstream tasks over that of baseline models using pretrained
word vectors alone. For the Stanford Sentiment Treebank (SST) and the Stanford Natural Language
Inference Corpus (SNLI), CoVe pushes performance of our baseline model to the state of the art.
Experiments reveal that the quantity of training data used to train the MT-LSTM is positively
correlated with performance on downstream tasks. This is yet another advantage of relying on MT,
as data for MT is more abundant than for most other supervised NLP tasks, and it suggests that
higher quality MT-LSTMs carry over more useful information. This reinforces the idea that machine
translation is a good candidate task for further research into models that possess a stronger sense of
natural language understanding.
2 Related Work
Transfer Learning. Transfer learning, or domain adaptation, has been applied in a variety
of areas where researchers identified synergistic relationships between independently collected
datasets. Saenko et al. [2010] adapt object recognition models developed for one visual domain to
new imaging conditions by learning a transformation that minimizes domain-induced changes in
the feature distribution. Zhu et al. [2011] use matrix factorization to incorporate textual information
into tagged images to enhance image classification. In natural language processing (NLP), Collobert
et al. [2011] leverage representations learned from unsupervised learning to improve performance on
supervised tasks like named entity recognition, part-of-speech tagging, and chunking. Recent work in
NLP has continued in this direction by using pretrained word representations to improve models for
entailment [Bowman et al., 2014], sentiment analysis [Socher et al., 2013], summarization [Nallapati
et al., 2016], and question answering [Seo et al., 2017, Xiong et al., 2017]. Ramachandran et al. [2016]
propose initializing sequence-to-sequence models with pretrained language models and fine-tuning
for a specific task. Kiros et al. [2015] propose an unsupervised method for training an encoder that
outputs sentence vectors that are predictive of surrounding sentences. We also propose a method of
transferring higher-level representations than word vectors, but we use a supervised method to train
our sentence encoder and show that it improves models for text classification and question answering
without fine-tuning.
Neural Machine Translation. Our source domain of transfer learning is machine translation,
a task that has seen marked improvements in recent years with the advance of neural machine
translation (NMT) models. Sutskever et al. [2014] investigate sequence-to-sequence models that
consist of a neural network encoder and decoder for machine translation. Bahdanau et al. [2015]
propose augmenting sequence-to-sequence models with an attention mechanism that gives
the decoder access to the encoder representations of the input sequence at each step of sequence
generation. Luong et al. [2015] further study the effectiveness of various attention mechanisms
with respect to machine translation. Attention mechanisms have also been successfully applied to
NLP tasks like entailment [Conneau et al., 2017], summarization [Nallapati et al., 2016], question
answering [Seo et al., 2017, Xiong et al., 2017, Min et al., 2017], and semantic parsing [Dong and
Lapata, 2016]. We show that attentional encoders trained for NMT transfer well to other NLP tasks.
Transfer Learning and Machine Translation. Machine translation is a suitable source domain for
transfer learning because the task, by nature, requires the model to faithfully reproduce a sentence in
the target language without losing information in the source language sentence. Moreover, there is an
abundance of machine translation data that can be used for transfer learning. Hill et al. [2016] study
the effect of transferring from a variety of source domains to the semantic similarity tasks in Agirre
et al. [2014]. Hill et al. [2017] further demonstrate that fixed-length representations obtained from
NMT encoders outperform those obtained from monolingual (e.g. language modeling) encoders on
semantic similarity tasks. Unlike previous work, we do not transfer from fixed length representations
produced by NMT encoders. Instead, we transfer representations for each token in the input sequence.
Our approach makes the transfer of the trained encoder more directly compatible with subsequent
LSTMs, attention mechanisms, and, in general, layers that expect input sequences. This additionally
facilitates the transfer of sequential dependencies between encoder states.
Transfer Learning in Computer Vision. Since the success of CNNs on the ImageNet challenge, a
number of approaches to computer vision tasks have relied on pretrained CNNs as off-the-shelf feature
extractors. Girshick et al. [2014] show that using a pretrained CNN to extract features from region
proposals improves object detection and semantic segmentation models. Qi et al. [2016] propose
a CNN-based object tracking framework, which uses hierarchical features from a pretrained CNN
(VGG-19 by Simonyan and Zisserman [2014]). For image captioning, Lu et al. [2016] train a visual
sentinel with a pretrained CNN and fine-tune the model with a smaller learning rate. For VQA, Fukui
et al. [2016] propose to combine text representations with visual representations extracted by a
pretrained residual network [He et al., 2016]. Although model transfer has seen widespread success
in computer vision, transfer learning beyond pretrained word vectors is far less pervasive in NLP.
3 Machine Translation Model
We begin by training an attentional sequence-to-sequence model for English-to-German translation
based on Klein et al. [2017] with the goal of transferring the encoder to other tasks.
For training, we are given a sequence of words in the source language $w^x = [w^x_1, \ldots, w^x_n]$ and a sequence of words in the target language $w^z = [w^z_1, \ldots, w^z_m]$. Let $\text{GloVe}(w^x)$ be a sequence of GloVe vectors corresponding to the words in $w^x$, and let $z$ be a sequence of randomly initialized word vectors corresponding to the words in $w^z$.

We feed $\text{GloVe}(w^x)$ to a standard, two-layer, bidirectional, long short-term memory network¹ [Graves and Schmidhuber, 2005] that we refer to as an MT-LSTM to indicate that it is this same two-layer BiLSTM that we later transfer as a pretrained encoder. The MT-LSTM is used to compute a sequence of hidden states
$$h = \text{MT-LSTM}(\text{GloVe}(w^x)). \tag{1}$$
For machine translation, the MT-LSTM supplies the context for an attentional decoder that produces a distribution over output words $p(\hat{w}^z_t \mid H, w^z_1, \ldots, w^z_{t-1})$ at each time-step.
) at each time-step.
At time-step t, the decoder first uses a two-layer, unidirectional LSTM to produce a hidden state hdec
t
? t?1 :
based on the previous target embedding zt?1 and a context-adjusted hidden state h
? t?1 ], hdec .
hdec
=
LSTM
[z
;
h
(2)
t?1
t
t?1
The decoder then computes a vector of attention weights ? representing the relevance of each
encoding time-step to the current decoder state.
?t = softmax H(W1 hdec
+ b1 )
(3)
t
where H refers to the elements of h stacked along the time dimension.
¹Since there are several biLSTM variants, we define ours as follows. Let $h = [h_1, \ldots, h_n] = \text{biLSTM}(x)$ represent the output sequence of our biLSTM operating on an input sequence $x$. Then a forward LSTM computes $h^{\rightarrow}_t = \text{LSTM}^{\rightarrow}(x_t, h^{\rightarrow}_{t-1})$ for each time step, and a backward LSTM computes $h^{\leftarrow}_t = \text{LSTM}^{\leftarrow}(x_t, h^{\leftarrow}_{t+1})$. The final outputs of the biLSTM for each time step are $h_t = [h^{\rightarrow}_t; h^{\leftarrow}_t]$.
The decoder then uses these weights as coefficients in an attentional sum that is concatenated with the decoder state and passed through a tanh layer to form the context-adjusted hidden state $\tilde{h}$:
$$\tilde{h}_t = \tanh\left(W_2\left[H^{\top}\alpha_t; h^{\text{dec}}_t\right] + b_2\right). \tag{4}$$
The distribution over output words is generated by a final transformation of the context-adjusted hidden state: $p(\hat{w}^z_t \mid X, w^z_1, \ldots, w^z_{t-1}) = \text{softmax}\left(W_{\text{out}}\tilde{h}_t + b_{\text{out}}\right)$.
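As an illustration of Eqs. (2)–(4), here is a hedged PyTorch sketch of a single decoder step. The class name, argument names, and vocabulary size are ours; a real implementation would additionally handle batching, masking, and beam search.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One attentional decoder time-step (a sketch of Eqs. 2-4)."""
    def __init__(self, d_emb=300, d_hid=300, d_enc=600, vocab=50_000):
        super().__init__()
        self.lstm = nn.LSTM(d_emb + d_hid, d_hid, num_layers=2)  # Eq. 2
        self.W1 = nn.Linear(d_hid, d_enc)                        # Eq. 3
        self.W2 = nn.Linear(d_enc + d_hid, d_hid)                # Eq. 4
        self.out = nn.Linear(d_hid, vocab)

    def forward(self, z_prev, h_tilde_prev, state, H):
        # H: (n, d_enc) encoder states stacked along the time dimension.
        x = torch.cat([z_prev, h_tilde_prev], -1).view(1, 1, -1)
        h_dec, state = self.lstm(x, state)       # Eq. 2 (state may be None)
        h_dec = h_dec[0, 0]
        alpha = torch.softmax(H @ self.W1(h_dec), dim=0)              # Eq. 3
        h_tilde = torch.tanh(self.W2(torch.cat([H.t() @ alpha, h_dec])))  # Eq. 4
        return torch.log_softmax(self.out(h_tilde), -1), h_tilde, state

step = DecoderStep()
H = torch.randn(12, 600)               # a 12-step encoded source sentence
logp, h_tilde, state = step(torch.randn(300), torch.zeros(300), None, H)
print(logp.shape)                      # torch.Size([50000])
```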
4 Context Vectors (CoVe)
We transfer what is learned by the MT-LSTM to downstream tasks by treating the outputs of the
MT-LSTM as context vectors. If w is a sequence of words and GloVe(w) the corresponding sequence
of word vectors produced by the GloVe model, then
$$\text{CoVe}(w) = \text{MT-LSTM}(\text{GloVe}(w)) \tag{5}$$
is the sequence of context vectors produced by the MT-LSTM. For classification and question
answering, for an input sequence w, we concatenate each vector in GloVe(w) with its corresponding
vector in CoVe(w)
$$\tilde{w} = [\text{GloVe}(w); \text{CoVe}(w)] \tag{6}$$
as depicted in Figure 1b.
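In code, Eqs. (5)–(6) amount to a single frozen forward pass and a concatenation. Below is a minimal PyTorch sketch; the randomly initialized embedding table and BiLSTM stand in for the pretrained GloVe matrix and MT-LSTM, whose real weights would be loaded rather than drawn at random.

```python
import torch
import torch.nn as nn

# Stand-ins for the pretrained components (real weights would be loaded).
glove = nn.Embedding(100_000, 300)
mt_lstm = nn.LSTM(300, 300, num_layers=2, bidirectional=True, batch_first=True)
for p in list(glove.parameters()) + list(mt_lstm.parameters()):
    p.requires_grad = False            # both stay fixed during task training

def cove_inputs(word_ids):
    """word_ids: (batch, seq) -> (batch, seq, 900) = [GloVe(w); CoVe(w)]."""
    g = glove(word_ids)                # (batch, seq, 300)
    c, _ = mt_lstm(g)                  # (batch, seq, 600): context vectors
    return torch.cat([g, c], dim=-1)   # Eq. (6)

w = torch.randint(0, 100_000, (2, 7))  # a dummy batch of token ids
print(cove_inputs(w).shape)            # torch.Size([2, 7, 900])
```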
5 Classification with CoVe
We now describe a general biattentive classification network (BCN) we use to test how well CoVe
transfer to other tasks. This model, shown in Figure 2, is designed to handle both single-sentence
and two-sentence classification tasks. In the case of single-sentence tasks, the input sequence is
duplicated to form two sequences, so we will assume two input sequences for the rest of this section.
Input sequences $w^x$ and $w^y$ are converted to sequences of vectors, $\tilde{w}^x$ and $\tilde{w}^y$, as described in Eq. 6 before being fed to the task-specific portion of the model (Figure 1b).

A function $f$ applies a feedforward network with ReLU activation [Nair and Hinton, 2010] to each element of $\tilde{w}^x$ and $\tilde{w}^y$, and a bidirectional LSTM processes the resulting sequences to obtain task-specific representations:
$$x = \text{biLSTM}\left(f(\tilde{w}^x)\right) \tag{7}$$
$$y = \text{biLSTM}\left(f(\tilde{w}^y)\right) \tag{8}$$
These sequences are each stacked along the time axis to get matrices $X$ and $Y$.

In order to compute representations that are interdependent, we use a biattention mechanism [Seo et al., 2017, Xiong et al., 2017]. The biattention first computes an affinity matrix $A = XY^{\top}$. It then extracts attention weights with column-wise normalization:
$$A_x = \text{softmax}(A) \qquad A_y = \text{softmax}\left(A^{\top}\right) \tag{9}$$
which amounts to a novel form of self-attention when $x = y$. Next, it uses context summaries
$$C_x = A_x^{\top} X \qquad C_y = A_y^{\top} Y \tag{10}$$
to condition each sequence on the other.

[Figure 2: diagram of the BCN; each input passes through a ReLU network and a biLSTM encoder, the two streams interact through biattention, integrating biLSTMs with pooling follow, and a maxout network produces the output.]
Figure 2: Our BCN uses a feedforward network with ReLU activation and biLSTM encoder to create task-specific representations of each input sequence. Biattention conditions each representation on the other, a biLSTM integrates the conditional information, and a maxout network uses pooled features to compute a distribution over possible classes.
We integrate the conditioning information into our representations for each sequence with two separate
one-layer, bidirectional LSTMs that operate on the concatenation of the original representations
(to ensure no information is lost in conditioning), their differences from the context summaries (to
explicitly capture the difference from the original signals), and the element-wise products between
originals and context summaries (to amplify or dampen the original signals):
$$X_{|y} = \text{biLSTM}\left(\left[X;\, X - C_y;\, X \odot C_y\right]\right) \tag{11}$$
$$Y_{|x} = \text{biLSTM}\left(\left[Y;\, Y - C_x;\, Y \odot C_x\right]\right) \tag{12}$$
The outputs of the bidirectional LSTMs are aggregated by pooling along the time dimension. Max and mean pooling have been used in other models to extract features, but we have found that adding both min pooling and self-attentive pooling can aid in some tasks. Each captures a different perspective on the conditioned sequences.
The self-attentive pooling computes weights for each time step of the sequence:
$$\beta_x = \text{softmax}\left(X_{|y} v_1 + d_1\right) \qquad \beta_y = \text{softmax}\left(Y_{|x} v_2 + d_2\right) \tag{13}$$
and uses these weights to get weighted summations of each sequence:
$$x_{\text{self}} = X_{|y}^{\top} \beta_x \qquad y_{\text{self}} = Y_{|x}^{\top} \beta_y \tag{14}$$
The pooled representations are combined to get one joined representation for all inputs:
$$x_{\text{pool}} = \left[\max(X_{|y});\, \text{mean}(X_{|y});\, \min(X_{|y});\, x_{\text{self}}\right] \tag{15}$$
$$y_{\text{pool}} = \left[\max(Y_{|x});\, \text{mean}(Y_{|x});\, \min(Y_{|x});\, y_{\text{self}}\right] \tag{16}$$
We feed this joined representation through a three-layer, batch-normalized [Ioffe and Szegedy, 2015]
maxout network [Goodfellow et al., 2013] to produce a probability distribution over possible classes.
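A compact PyTorch sketch of the x-side of Eqs. (9)–(15) is given below. The integrating biLSTM of Eq. (11) is deliberately elided (the concatenated features are pooled directly), and all shapes and parameter names are illustrative.

```python
import torch

def biattention_pool(X, Y, v1, d1):
    """Sketch of Eqs. (9)-(15) for the x-side of the BCN.

    X: (n, d) and Y: (m, d) task-specific representations; v1: (3*d,) and
    d1: scalar self-attention parameters. The conditioning biLSTM of
    Eq. (11) is elided: `fuse` keeps its input layout without recurrence.
    """
    A = X @ Y.t()                        # affinity matrix
    Ax = torch.softmax(A, dim=0)         # column-wise normalization (Eq. 9)
    Ay = torch.softmax(A.t(), dim=0)
    Cx = Ax.t() @ X                      # context summary of X (Eq. 10)
    Cy = Ay.t() @ Y                      # (n, d): conditions X on Y
    fuse = torch.cat([X, X - Cy, X * Cy], dim=-1)   # input of Eq. 11, (n, 3d)
    beta = torch.softmax(fuse @ v1 + d1, dim=0)     # self-attention (Eq. 13)
    x_self = fuse.t() @ beta                        # weighted summary (Eq. 14)
    return torch.cat([fuse.max(dim=0).values,       # Eq. 15: max, mean,
                      fuse.mean(dim=0),             # min, and self-attentive
                      fuse.min(dim=0).values,       # pooling, concatenated
                      x_self])

X, Y = torch.randn(5, 8), torch.randn(6, 8)
v1, d1 = torch.randn(24), torch.tensor(0.0)
print(biattention_pool(X, Y, v1, d1).shape)   # torch.Size([96])
```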
6 Question Answering with CoVe
For question answering, we obtain sequences x and y just as we do in Eq. 7 and Eq. 8 for classification,
except that the function f is replaced with a function g that uses a tanh activation instead of a ReLU
activation. In this case, one of the sequences is the document and the other the question in the
question-document pair. These sequences are then fed through the coattention and dynamic decoder
implemented as in the original Dynamic Coattention Network (DCN) [Xiong et al., 2016].
7 Datasets
Machine Translation. We use three different English-German machine translation datasets to train
three separate MT-LSTMs. Each is tokenized using the Moses Toolkit [Koehn et al., 2007].
Our smallest MT dataset comes from the WMT 2016 multi-modal translation shared task [Specia
et al., 2016]. The training set consists of 30,000 sentence pairs that briefly describe Flickr captions
and is often referred to as Multi30k. Due to the nature of image captions, this dataset contains
sentences that are, on average, shorter and simpler than those from larger counterparts.
Our medium-sized MT dataset is the 2016 version of the machine translation task prepared for the
International Workshop on Spoken Language Translation [Cettolo et al., 2015]. The training set
consists of 209,772 sentence pairs from transcribed TED presentations that cover a wide variety of
topics with more conversational language than in the other two machine translation datasets.
Our largest MT dataset comes from the news translation shared task from WMT 2017. The training set
consists of roughly 7 million sentence pairs that comes from web crawl data, a news and commentary
corpus, European Parliament proceedings, and European Union press releases.
We refer to the three MT datasets as MT-Small, MT-Medium, and MT-Large, respectively, and we
refer to context vectors from encoders trained on each in turn as CoVe-S, CoVe-M, and CoVe-L.
Dataset    Task                         Details                          Examples
SST-2      Sentiment Classification     2 classes, single sentences      56.4k
SST-5      Sentiment Classification     5 classes, single sentences      94.2k
IMDb       Sentiment Classification     2 classes, multiple sentences    22.5k
TREC-6     Question Classification      6 classes                        5k
TREC-50    Question Classification      50 classes                       5k
SNLI       Entailment Classification    2 classes                        550k
SQuAD      Question Answering           open-ended (answer-spans)        87.6k

Table 1: Datasets, tasks, details, and number of training examples.
Sentiment Analysis. We train our model separately on two sentiment analysis datasets: the Stanford Sentiment Treebank (SST) [Socher et al., 2013] and the IMDb dataset [Maas et al., 2011]. Both of these datasets comprise movie reviews and their sentiment. We use the binary version of each dataset as well as the five-class version of SST. For training on SST, we use all sub-trees with length greater than 3. SST-2 contains roughly 56,400 reviews after removing "neutral" examples. SST-5 contains roughly 94,200 reviews and does include "neutral" examples. IMDb contains 25,000 multi-sentence reviews, which we truncate to the first 200 words. 2,500 reviews are held out for validation.

Question Classification. For question classification, we use the small TREC dataset [Voorhees and Tice, 1999] of open-domain, fact-based questions divided into broad semantic categories. We experiment with both the six-class and fifty-class versions of TREC, which we refer to as TREC-6 and TREC-50, respectively. We hold out 452 examples for validation and leave 5,000 for training.
Entailment. For entailment, we use the Stanford Natural Language Inference Corpus (SNLI) [Bowman et al., 2015], which has 550,152 training, 10,000 validation, and 10,000 testing examples. Each
example consists of a premise, a hypothesis, and a label specifying whether the premise entails,
contradicts, or is neutral with respect to the hypothesis.
Question Answering. The Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al., 2016]
is a large-scale question answering dataset with 87,599 training examples, 10,570 development
examples, and a test set that is not released to the public. Examples consist of question-answer pairs
associated with a paragraph from the English Wikipedia. SQuAD examples assume that the question
is answerable and that the answer is contained verbatim somewhere in the paragraph.
8 Experiments

8.1 Machine Translation
The MT-LSTM trained on MT-Small obtains an uncased, tokenized BLEU score of 38.5 on the
Multi30k test set from 2016. The model trained on MT-Medium obtains an uncased, tokenized BLEU
score of 25.54 on the IWSLT test set from 2014. The MT-LSTM trained on MT-Large obtains an
uncased, tokenized BLEU score of 28.96 on the WMT 2016 test set. These results represent strong
baseline machine translation models for their respective datasets. Note that, while the smallest dataset
has the highest BLEU score, it is also a much simpler dataset with a restricted domain.
Training Details. When training an MT-LSTM, we used fixed 300-dimensional word vectors. We
used the CommonCrawl-840B GloVe model for English word vectors, which were completely fixed
during training, so that the MT-LSTM had to learn how to use the pretrained vectors for translation.
The hidden size of the LSTMs in all MT-LSTMs is 300. Because all MT-LSTMs are bidirectional,
they output 600-dimensional vectors. The model was trained with stochastic gradient descent with a
learning rate that began at 1 and decayed by half each epoch after the validation perplexity increased
for the first time. Dropout with ratio 0.2 was applied to the inputs and outputs of all layers of the
encoder and decoder.
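The decay rule described above can be sketched as follows; the training and validation routines are stubs, so only the schedule logic is meant literally.

```python
import random

def train_one_epoch(lr):      # stub standing in for a real training epoch
    pass

def validate():               # stub returning a made-up validation perplexity
    return random.uniform(20.0, 30.0)

lr, best_ppl, decaying = 1.0, float("inf"), False
for epoch in range(10):
    train_one_epoch(lr)
    ppl = validate()
    if ppl > best_ppl:        # perplexity rose at least once: start decaying
        decaying = True
    best_ppl = min(best_ppl, ppl)
    if decaying:
        lr *= 0.5             # halve the rate every epoch from then on
    print(f"epoch {epoch}: lr={lr:.4f}, val ppl={ppl:.2f}")
```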
8.2 Classification and Question Answering
For classification and question answering, we explore how varying the input representations affects
final performance. Table 2 contains validation performances for experiments comparing the use of
GloVe, character n-grams, CoVe, and combinations of the three.
[Figure 3: two bar charts of percent improvement over randomly initialized word vectors on SST-2, SST-5, IMDb, TREC-6, TREC-50, SNLI, and SQuAD; panel (a) compares GloVe against GloVe+CoVe, and panel (b) compares GloVe+Char, GloVe+CoVe, and GloVe+CoVe+Char.]
Figure 3: The Benefits of CoVe
Dataset    Random   GloVe   GloVe+Char   GloVe+CoVe-S   GloVe+CoVe-M   GloVe+CoVe-L   GloVe+Char+CoVe-L
SST-2       84.2     88.4      90.1          89.0           90.9           91.1            91.2
SST-5       48.6     53.5      52.2          54.0           54.7           54.5            55.2
IMDb        88.4     91.1      91.3          90.6           91.6           91.7            92.1
TREC-6      88.9     94.9      94.7          94.7           95.1           95.8            95.8
TREC-50     81.9     89.2      89.8          89.6           89.6           90.5            91.2
SNLI        82.3     87.7      87.7          87.3           87.5           87.9            88.1
SQuAD       65.4     76.0      78.1          76.5           77.1           79.5            79.9

Table 2: CoVe improves validation performance. CoVe has an advantage over character n-gram embeddings, but using both improves performance further. Models benefit most by using an MT-LSTM trained with MT-Large (CoVe-L). Accuracy is reported for classification tasks, and F1 is reported for SQuAD.
Training Details. Unsupervised vectors and MT-LSTMs remain fixed in this set of experiments. LSTMs have hidden size 300. Models were trained using Adam with α = 0.001. Dropout was applied before all feedforward layers with dropout ratio 0.1, 0.2, or 0.3. Maxout networks pool over 4 channels, reduce dimensionality by 2, 4, or 8, reduce again by 2, and project to the output dimension.
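The output head can be sketched as follows in PyTorch; the input width, the reduction factors, and the class count are one illustrative configuration among those described, not the exact setting used for any particular task. A softmax over the resulting logits gives the class distribution.

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout layer: map to k * d_out features, then max over the k channels."""
    def __init__(self, d_in, d_out, k=4):
        super().__init__()
        self.k = k
        self.lin = nn.Linear(d_in, d_out * k)

    def forward(self, x):
        return self.lin(x).view(*x.shape[:-1], -1, self.k).max(-1).values

d, n_classes = 2400, 5        # e.g., pooled features for a five-class task
head = nn.Sequential(
    nn.BatchNorm1d(d), Maxout(d, d // 4),               # reduce by 4
    nn.BatchNorm1d(d // 4), Maxout(d // 4, d // 8),     # reduce again by 2
    nn.BatchNorm1d(d // 8), Maxout(d // 8, n_classes),  # project to classes
)
print(head(torch.randn(32, d)).shape)   # torch.Size([32, 5])
```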
The Benefits of CoVe. Figure 3a shows that models that use CoVe alongside GloVe achieve higher validation performance than models that use only GloVe. Figure 3b shows that using CoVe in Eq. 6 brings larger improvements than using character n-gram embeddings [Hashimoto et al., 2016]. It also shows that altering Eq. 6 by additionally appending character n-gram embeddings can boost performance even further for some tasks. This suggests that the information provided by CoVe is complementary to both the word-level information provided by GloVe and the character-level information provided by character n-gram embeddings.

The Effects of MT Training Data. We experimented with different training datasets for the MT-LSTMs to see how varying the MT training data affects the benefits of using CoVe in downstream tasks. Figure 4 shows an important trend we can extract from Table 2. There appears to be a positive correlation between the larger MT datasets, which contain more complex, varied language, and the improvement that using CoVe brings to downstream tasks. This is evidence for our hypothesis that MT data has potential as a large resource for transfer learning in NLP.

[Figure 4: bar chart of percent improvement over randomly initialized word vectors on each task (SST-2, SST-5, IMDb, TREC-6, TREC-50, SNLI, SQuAD) for GloVe, GloVe+CoVe-S, GloVe+CoVe-M, and GloVe+CoVe-L.]
Figure 4: The Effects of MT Training Data
SST-2                                    Test
  P-LSTM [Wieting et al., 2016]          89.2
  CT-LSTM [Looks et al., 2017]           89.4
  TE-LSTM [Huang et al., 2017]           89.6
  NSE [Munkhdalai and Yu, 2016a]         89.7
  BCN+Char+CoVe [Ours]                   90.3
  bmLSTM [Radford et al., 2017]          91.8

SST-5                                    Test
  MVN [Guo et al., 2017]                 51.5
  DMN [Kumar et al., 2016]               52.1
  LSTM-CNN [Zhou et al., 2016]           52.4
  TE-LSTM [Huang et al., 2017]           52.6
  NTI [Munkhdalai and Yu, 2016b]         53.1
  BCN+Char+CoVe [Ours]                   53.7

IMDb                                     Test
  BCN+Char+CoVe [Ours]                   91.8
  SA-LSTM [Dai and Le, 2015]             92.8
  bmLSTM [Radford et al., 2017]          92.9
  TRNN [Dieng et al., 2016]              93.8
  oh-LSTM [Johnson and Zhang, 2016]      94.1
  Virtual [Miyato et al., 2017]          94.1

TREC-6                                   Test
  SVM [da Silva et al., 2011]            95.0
  SVM [Van-Tu and Anh-Cuong, 2016]       95.2
  DSCNN-P [Zhang et al., 2016]           95.6
  BCN+Char+CoVe [Ours]                   95.8
  TBCNN [Mou et al., 2015]               96.0
  LSTM-CNN [Zhou et al., 2016]           96.1

TREC-50                                  Test
  SVM [Loni et al., 2011]                89.0
  SNoW [Li and Roth, 2006]               89.3
  BCN+Char+CoVe [Ours]                   90.2
  RulesUHC [da Silva et al., 2011]       90.8
  SVM [Van-Tu and Anh-Cuong, 2016]       91.6
  Rules [Madabushi and Lee, 2016]        97.2

SNLI                                     Test
  DecAtt+Intra [Parikh et al., 2016]     86.8
  NTI [Munkhdalai and Yu, 2016b]         87.3
  re-read LSTM [Sha et al., 2016]        87.5
  btree-LSTM [Paria et al., 2016]        87.6
  600D ESIM [Chen et al., 2016]          88.0
  BCN+Char+CoVe [Ours]                   88.1

Table 4: Single model test accuracies for classification tasks.
Test Performance. Table 4 shows the final test accuracies of our best classification models, each of
which achieved the highest validation accuracy on its task using GloVe, CoVe, and character n-gram
embeddings. Final test performances on SST-5 and SNLI reached a new state of the art.
Table 3 shows how the validation exact match and F1 scores of our best SQuAD model compare to the scores of the most recent top models in the literature. We did not submit the SQuAD model for testing, but the addition of CoVe was enough to push the validation performance of the original DCN, which already used character n-gram embeddings, above the validation performance of the published version of the R-NET. Test performances are tracked by the SQuAD leaderboard².

Model                                   EM     F1
LR [Rajpurkar et al., 2016]            40.0   51.0
DCR [Yu et al., 2017]                  62.5   72.1
hM-LSTM+AP [Wang and Jiang, 2017]      64.1   73.9
DCN+Char [Xiong et al., 2017]          65.4   75.6
BiDAF [Seo et al., 2017]               68.0   77.3
R-NET [Wang et al., 2017]              71.1   79.5
DCN+Char+CoVe [Ours]                   71.3   79.9

Table 3: Exact match and F1 validation scores for single-model question answering.
Comparison to Skip-Thought Vectors. Kiros et al. [2015] show how to encode a sentence into a single skip-thought vector that transfers well to a variety of tasks. Both skip-thought and CoVe pretrain encoders to capture information at a higher level than words. However, skip-thought encoders are trained with an unsupervised method that relies on the final output of the encoder. MT-LSTMs are trained with a supervised method that instead relies on intermediate outputs associated with each input word. Additionally, the 4800-dimensional skip-thought vectors make training more unstable than using the 600-dimensional CoVe. Table 5 shows that these differences make CoVe more suitable for transfer learning in our classification experiments.

²https://rajpurkar.github.io/SQuAD-explorer/

Dataset    GloVe+Char+Skip-Thought   GloVe+Char+CoVe-L
SST-2              88.7                    91.2
SST-5              52.1                    55.2
TREC-6             94.2                    95.8
TREC-50            89.6                    91.2
SNLI               86.0                    88.1

Table 5: Classification validation accuracies with skip-thought and CoVe.
9 Conclusion
We introduce an approach for transferring knowledge from an encoder pretrained on machine
translation to a variety of downstream NLP tasks. In all cases, models that used CoVe from our best,
pretrained MT-LSTM performed better than baselines that used random word vector initialization,
baselines that used pretrained word vectors from a GloVe model, and baselines that used word vectors
from a GloVe model together with character n-gram embeddings. We hope this is a step towards the
goal of building unified NLP models that rely on increasingly more general reusable weights.
The PyTorch code at https://github.com/salesforce/cove includes an example of how to
generate CoVe from the MT-LSTM we used in all of our best models. We hope that making our best
MT-LSTM available will encourage further research into shared representations for NLP models.
References
E. Agirre, C. Banea, C. Cardie, D. M. Cer, M. T. Diab, A. Gonzalez-Agirre, W. Guo, R. Mihalcea, G. Rigau, and
J. Wiebe. SemEval-2014 Task 10: Multilingual semantic textual similarity. In SemEval@COLING, 2014.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In
ICLR, 2015.
S. R. Bowman, C. Potts, and C. D. Manning. Recursive neural networks for learning logical semantics. CoRR,
abs/1406.1827, 2014.
S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning natural language
inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
(EMNLP). Association for Computational Linguistics, 2015.
M. Cettolo, J. Niehues, S. Stüker, L. Bentivogli, R. Cattoni, and M. Federico. The IWSLT 2015 evaluation campaign. In IWSLT, 2015.
Q. Chen, X.-D. Zhu, Z.-H. Ling, S. Wei, and H. Jiang. Enhancing and combining sequential and tree LSTM for
natural language inference. CoRR, abs/1609.06038, 2016.
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes. Supervised learning of universal sentence
representations from natural language inference data. arXiv preprint arXiv:1705.02364, 2017.
J. P. C. G. da Silva, L. Coheur, A. C. Mendes, and A. Wichert. From symbolic to sub-symbolic information in question classification. Artif. Intell. Rev., 35:137–154, 2011.
A. M. Dai and Q. V. Le. Semi-supervised sequence learning. In NIPS, 2015.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
A. B. Dieng, C. Wang, J. Gao, and J. W. Paisley. TopicRNN: A recurrent neural network with long-range
semantic dependency. CoRR, abs/1611.01702, 2016.
L. Dong and M. Lapata. Language to logical form with neural attention. CoRR, abs/1601.01280, 2016.
A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal compact bilinear pooling
for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
H. Guo, C. Cherry, and J. Su. End-to-end multi-view networks for text classification. CoRR, abs/1704.05907,
2017.
K. Hashimoto, C. Xiong, Y. Tsuruoka, and R. Socher. A joint many-task model: Growing a neural network for
multiple NLP tasks. CoRR, abs/1611.01587, 2016.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
F. Hill, K. Cho, and A. Korhonen. Learning distributed representations of sentences from unlabelled data. In
HLT-NAACL, 2016.
F. Hill, K. Cho, S. Jean, and Y. Bengio. The representational geometry of word meanings acquired by neural machine translation models. Machine Translation, pages 1–16, 2017. ISSN 1573-0573. doi: 10.1007/s10590-017-9194-2. URL http://dx.doi.org/10.1007/s10590-017-9194-2.
M. Huang, Q. Qian, and X. Zhu. Encoding syntactic knowledge in neural networks for sentiment classification. ACM Trans. Inf. Syst., 35:26:1–26:27, 2017.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate
shift. In ICML, 2015.
R. Johnson and T. Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings.
In ICML, 2016.
R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, R. Urtasun, A. Torralba, and S. Fidler. Skip-thought vectors. In
NIPS, 2015.
G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-source toolkit for neural machine
translation. ArXiv e-prints, 2017.
P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran,
R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. Moses: Open source toolkit for statistical machine
translation. In ACL, 2007.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks.
In Advances in neural information processing systems, pages 1097?1105, 2012.
A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher. Ask
me anything: Dynamic memory networks for natural language processing. In ICML, 2016.
X. Li and D. Roth. Learning question classifiers: The role of semantic information. Natural Language
Engineering, 12:229?249, 2006.
B. Loni, G. van Tulder, P. Wiggers, D. M. J. Tax, and M. Loog. Question classification by weighted combination
of lexical, syntactic and semantic features. In TSD, 2011.
M. Looks, M. Herreshoff, D. Hutchins, and P. Norvig. Deep learning with dynamic computation graphs. CoRR,
abs/1702.02181, 2017.
J. Lu, C. Xiong, D. Parikh, and R. Socher. Knowing when to look: Adaptive attention via a visual sentinel for
image captioning. arXiv preprint arXiv:1612.01887, 2016.
T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In
EMNLP, 2015.
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment
analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human
Language Technologies, pages 142?150, Portland, Oregon, USA, June 2011. Association for Computational
Linguistics. URL http://www.aclweb.org/anthology/P11-1015.
H. T. Madabushi and M. Lee. High accuracy rule-based question classification using question syntax and
semantics. In COLING, 2016.
T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In
ICLR (workshop), 2013.
S. Min, M. Seo, and H. Hajishirzi. Question answering through transfer learning from large fine-grained
supervision data. 2017.
T. Miyato, A. M. Dai, and I. Goodfellow. Adversarial training methods for semi-supervised text classification.
2017.
10
L. Mou, H. Peng, G. Li, Y. Xu, L. Zhang, and Z. Jin. Discriminative neural sentence modeling by tree-based
convolution. In EMNLP, 2015.
T. Munkhdalai and H. Yu. Neural semantic encoders. CoRR, abs/1607.04315, 2016a.
T. Munkhdalai and H. Yu. Neural tree indexers for text understanding. CoRR, abs/1607.04492, 2016b.
V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
R. Nallapati, B. Zhou, C. N. dos Santos, ?aglar G?l?ehre, and B. Xiang. Abstractive text summarization using
sequence-to-sequence RNNs and beyond. In CoNLL, 2016.
B. Paria, K. M. Annervaz, A. Dukkipati, A. Chatterjee, and S. Podder. A neural architecture mimicking humans
end-to-end for natural language inference. CoRR, abs/1611.04741, 2016.
A. P. Parikh, O. Tackstrom, D. Das, and J. Uszkoreit. A decomposable attention model for natural language
inference. In EMNLP, 2016.
J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, and M.-H. Yang. Hedged deep tracking. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, pages 4303?4311, 2016.
A. Radford, R. J?zefowicz, and I. Sutskever. Learning to generate reviews and discovering sentiment. CoRR,
abs/1704.01444, 2017.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of
text. arXiv preprint arXiv:1606.05250, 2016.
P. Ramachandran, P. J. Liu, and Q. V. Le. Unsupervised pretraining for sequence to sequence learning. CoRR,
abs/1611.02683, 2016.
K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention flow for machine comprehension.
ICLR, 2017.
L. Sha, B. Chang, Z. Sui, and S. Li. Reading and thinking: Re-read LSTM unit for textual entailment recognition.
In COLING, 2016.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv
preprint arXiv:1409.1556, 2014.
R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. Recursive deep models for semantic
compositionality over a sentiment treebank. In EMNLP, 2013.
R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng. Grounded compositional semantics for finding
and describing images with sentences. In ACL, 2014.
L. Specia, S. Frank, K. Sima?an, and D. Elliott. A shared task on multimodal machine translation and crosslingual
image description. In WMT, 2016.
I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
N. Van-Tu and L. Anh-Cuong. Improving question classification by feature extraction and selection. Indian
Journal of Science and Technology, 9(17), 2016.
E. M. Voorhees and D. M. Tice. The TREC-8 question answering track evaluation. In TREC, volume 1999,
page 82, 1999.
S. Wang and J. Jiang. Machine comprehension using Match-LSTM and answer pointer. 2017.
W. Wang, N. Yang, F. Wei, B. Chang, and M. Zhou. Gated self-matching networks for reading comprehension
and question answering. 2017.
J. Wieting, M. Bansal, K. Gimpel, and K. Livescu. Towards universal paraphrastic sentence embeddings. In
ICLR, 2016.
C. Xiong, S. Merity, and R. Socher. Dynamic memory networks for visual and textual question answering. In
Proceedings of The 33rd International Conference on Machine Learning, pages 2397?2406, 2016.
11
C. Xiong, V. Zhong, and R. Socher. Dynamic coattention networks for question answering. ICRL, 2017.
Y. Yu, W. Zhang, K. Hasan, M. Yu, B. Xiang, and B. Zhou. End-to-end reading comprehension with dynamic
answer chunk ranking. ICLR, 2017.
R. Zhang, H. Lee, and D. R. Radev. Dependency sensitive convolutional neural networks for modeling sentences
and documents. In HLT-NAACL, 2016.
P. Zhou, Z. Qi, S. Zheng, J. Xu, H. Bao, and B. Xu. Text classification improved by integrating bidirectional
LSTM with two-dimensional max pooling. In COLING, 2016.
Y. Zhu, Y. Chen, Z. Lu, S. J. Pan, G.-R. Xue, Y. Yu, and Q. Yang. Heterogeneous transfer learning for image
classification. In AAAI, 2011.
12
6,865 | 721 | Unsupervised Parallel Feature Extraction
from First Principles
Mats Osterberg
Image Processing Laboratory
Dept. EE., Linkoping University
S-58183 Linkoping, Sweden

Reiner Lenz
Image Processing Laboratory
Dept. EE., Linkoping University
S-58183 Linkoping, Sweden
Abstract
We describe a number of learning rules that can be used to train unsupervised parallel feature extraction systems. The learning rules
are derived using gradient ascent of a quality function. We consider a number of quality functions that are rational functions of
higher order moments of the extracted feature values. We show
that one system learns the principal components of the correlation matrix. Principal component analysis systems are usually not
optimal feature extractors for classification. Therefore we design
quality functions which produce feature vectors that support unsupervised classification. The properties of the different systems are
compared with the help of different artificially designed datasets
and a database consisting of all Munsell color spectra.
1 Introduction
There are a number of unsupervised Hebbian learning algorithms (see Oja, 1992
and references therein) that perform some version of the Karhunen-Loeve expansion. Our approach to unsupervised feature extraction is to identify some desirable
properties of the extracted feature vectors and to construct a quality function that measures these properties. The filter functions are then learned from the input patterns by optimizing this selected quality function. In comparison to conventional
unsupervised Hebbian learning this approach reduces the amount of communication
between the units needed to learn the weights in parallel since the complexity now
lies in the learning rule used.
The optimal (orthogonal) solutions to two of the proposed quality functions turn out to be related to the Karhunen-Loeve expansion: the first learns an arbitrary rotation of the eigenvectors whereas the latter learns the pure eigenvectors. A common
problem with the Karhunen-Loeve expansion is the fact that the first eigenvector is
normally the mean vector of the input patterns. In this case one filter function will
have a more or less uniform response for a wide range of input patterns which makes
it rather useless for classification. We will show that one quality function leads to
a system that tends to learn filter functions which have a large magnitude response
for just one class of samples (different for each filter function) and low magnitude
response for samples from all other classes. Thus, it is possible to classify an incoming pattern by simply observing which filter function has the largest magnitude
response. Similar to Intrator's Projection Pursuit related network (see Intrator &
Cooper, 1992 and references therein) some quality functions use higher order (> 2)
statistics of the input process but in contrast to Intrator's network there is no need
to specify the amount of lateral inhibition needed to learn different filter functions.
All systems considered in this paper are linear but at the end we will briefly discuss
possible non-linear extensions.
2 Quality functions
In the following we consider linear filter systems. These can be described by the
equation:
O(t) = W(t)P(t)     (1)

where P(t) ∈ R^{M×1} is the input pattern at iteration t, W(t) ∈ R^{N×M} is the filter coefficient matrix and O(t) = (o_1(t), ..., o_N(t))' ∈ R^{N×1} is the extracted feature vector. Usually M > N, i.e. the feature extraction process defines a reduction of the dimensionality. Furthermore, we assume that both the input patterns and the filter functions are normed; ||P(t)|| = 1 and ||W_n(t)|| = 1, ∀t, ∀n. This implies that |o_n(t)| ≤ 1, ∀t, ∀n.
Our first decision is to measure the scatter of the extracted feature vectors around
the origin by the determinant of the output correlation matrix:
Q_MS(t) = det E_t{O(t)O'(t)}     (2)
Q_MS(t) is the quality function used in the Maximum Scatter Filter System (MS-system). The use of the determinant is motivated by the following two observations:
1. The determinant is equal to the product of the eigenvalues and hence the product
of the variances in the principal directions and thus a measure of the scattering
volume in the feature space. 2. The determinant vanishes if some filter functions are
linearly dependent.
In (Lenz & Osterberg, 1992) we have shown that the optimal filter functions for Q_MS(t) are given by an arbitrary rotation of the N eigenvectors corresponding to the N largest eigenvalues of the input correlation matrix:

W_opt = R U_eig     (3)

where U_eig contains the largest eigenvectors (or principal components) of the input correlation matrix E_t{P(t)P'(t)}, and R is an arbitrary rotation matrix with det(R) = 1. To differentiate between these solutions we need a second criterion.
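As an illustration of equation (3), the following hypothetical NumPy sketch (the function name and interface are our own, not from the paper) computes one maximum-scatter solution; any rotation R of the returned rows is equally optimal, and here R = I is used.

```python
# Sketch (not from the paper): one solution of equation (3) with NumPy.
import numpy as np

def optimal_scatter_filters(P, N):
    """P: (M, T) array of T normalised input patterns; returns (N, M) filters."""
    C = P @ P.T / P.shape[1]              # input correlation matrix E_t{P P'}
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    U = eigvecs[:, ::-1][:, :N]           # N largest eigenvectors as columns
    return U.T                            # rows are the filter functions of W_opt
```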
One attempt to define the best rotation is to require that the mean energy E_t{o_n^2(t)} should be concentrated in as few components o_n(t) of the extracted feature vector as possible. Thus, the mean energy E_t{o_n^2(t)} of each filter function should be either very high (i.e. near 1) or very low (i.e. near 0). This leads to the following second order concentration measure:

Q_2(t) = Σ_{n=1}^{N} E_t{o_n^2(t)} (1 - E_t{o_n^2(t)})     (4)
which has a low non-negative value if the energies are concentrated.
Another idea is to find a system that produces feature vectors that have unsupervised discrimination power. In this case each learned filter function should respond selectively, i.e. have a large response for some input samples and a low response for others. One formulation of this goal is that each extracted feature vector should be (up to the sign) binary; o_i(t) = ±1 and o_n(t) = 0, n ≠ i, ∀t. This can be measured by the following fourth order expression:

Q_4(t) = E_t{ Σ_{n=1}^{N} o_n^2(t) (1 - o_n^2(t)) } = Σ_{n=1}^{N} E_t{o_n^2(t)} - E_t{o_n^4(t)}     (5)
which has a low non-negative value if the features are binary. Note that it is not
sufficient to use o_n(t) instead of o_n^2(t) since Q_4(t) will have a low value also for
feature vectors with components equal in magnitude but with opposite sign. A
third criterion can be found as follows: if the filter functions have a selective filter response then the responses to different input patterns differ in magnitude and thus the variance of the energy o_n^2(t) is large. The total variance is measured by:

Q_Var(t) = Σ_{n=1}^{N} Var{o_n^2(t)} = Σ_{n=1}^{N} E_t{(o_n^2(t) - E_t{o_n^2(t)})^2} = Σ_{n=1}^{N} E_t{o_n^4(t)} - (E_t{o_n^2(t)})^2     (6)
Following (Darlington, 1970) it can be shown that the distribution of o_n^2 should be bimodal (modes below and above E_t{o_n^2}) to maximize Q_Var(t). The main difference between Q_Var(t) and the quality function used by Intrator is the use of a fourth order term E_t{o_n^4(t)} instead of a third order term E_t{o_n^3(t)}. With E_t{o_n^3(t)} the quality function is a measure of the skewness of the distribution of o(t) and it is maximized when one mode is at zero and one (or several) is above E_t{o_n^2(t)}.
In this paper we will examine the following non-parametric combinations of the quality functions above:

Q_MS(t) / Q_2(t)     (7)
Q_MS(t) / Q_4(t)     (8)
Q_Var(t) Q_MS(t)     (9)
We refer to the corresponding filter systems as: the Karhunen-Loeve Filter System (KL-system), the Fourth Order Filter System (FO-system) and the Maximum
Variance Filter System (MV-system).
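To make the criteria concrete, here is a small sketch (assumed NumPy code, not from the paper) that estimates the quality functions of equations (2) and (4)-(9) from a batch of extracted feature vectors. Since low values of Q_2 and Q_4 are desirable, they appear in the denominators of the KL- and FO-criteria.

```python
# Sketch (assumed implementation): quality functions estimated from a batch.
import numpy as np

def quality_functions(O):
    """O: (N, T) array of feature values o_n(t) for N filters and T patterns."""
    m2 = np.mean(O**2, axis=1)                   # E_t{o_n^2(t)} per filter
    m4 = np.mean(O**4, axis=1)                   # E_t{o_n^4(t)} per filter
    Q_MS = np.linalg.det(O @ O.T / O.shape[1])   # det E_t{O O'}, eq. (2)
    Q_2 = np.sum(m2 * (1.0 - m2))                # concentration measure, eq. (4)
    Q_4 = np.sum(m2 - m4)                        # binary-feature measure, eq. (5)
    Q_Var = np.sum(m4 - m2**2)                   # total variance of o_n^2, eq. (6)
    return {"KL": Q_MS / Q_2,                    # eq. (7)
            "FO": Q_MS / Q_4,                    # eq. (8)
            "MV": Q_Var * Q_MS}                  # eq. (9)
```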
Since each quality function is a combination of two different functions it is hard to
find the globally optimal solution. Instead we use the following strategy to determine a locally optimal solution.
Definition 1 The optimal orthogonal solution to each quality function is of the form:

W_opt = R_opt U_eig     (10)

where R_opt is the rotation of the largest eigenvectors which minimizes Q_2(t) or Q_4(t), or maximizes Q_Var(t).
In (Lenz & Osterberg, 1992 and Osterberg, 1993) we have shown that the optimal orthogonal solution for the KL-system is the N pure eigenvectors if the N largest eigenvalues are all distinct (i.e. R_opt = I). If some eigenvalues are equal then the solution is only determined up to an arbitrary rotation of the eigenvectors with equal eigenvalues. The fourth order term E_t{o_n^4(t)} in Q_4(t) and Q_Var(t) makes it difficult to derive a closed form solution. The best we can achieve is a numerical method (in the case of Q_4(t) see Osterberg, 1993) for the computation of the optimal orthogonal filter functions.
3 Maximization of the quality function
The partial derivatives of Q_MS(t), Q_2(t), Q_4(t) and Q_Var(t) with respect to w_n^m(t) (the m-th weight in the n-th filter function at iteration t) are only functions of the input pattern P(t), the output values O(t) = (o_1(t), ..., o_N(t)) and the previous values of the weight coefficients (w_n^1(t-1), ..., w_n^M(t-1)) within the filter function (see Osterberg, 1993). In particular, they are not functions of the internal weights (w_i^1(t-1), ..., w_i^M(t-1)), i ≠ n, of the other filter functions in the system. This implies that the filter coefficients can be learned in parallel using a system of the structure shown in Figure 1.
In (Osterberg, 1993) we used on-line optimization techniques based on gradient ascent. We tried two different methods to select the step length parameter: one rather heuristic one depending on the output o_n(t) of the filter function, and one inversely proportional to the second partial derivative of the quality function with respect to w_n^m(t). In each iteration the length of each filter function was explicitly normalized to one. Currently, we investigate standard unconstrained optimization methods (Dennis & Schnabel, 1983) based on batch learning. Now the step length parameter λ is selected by line search in the search direction S(t):

max_λ Q(W(t) + λS(t))     (11)

Typical choices of S(t) include S(t) = I and S(t) = H^{-1}. With the identity matrix we get Steepest Ascent and with the inverse Hessian the quasi-Newton algorithm. Using sufficient synchronism the line search can be incorporated in the parallel structure (Figure 1).
Figure 1: The architecture of the filter system
To incorporate the quasi-Newton algorithm we have to assume that the Hessian matrix is block diagonal, i.e. the second partial derivatives with respect to w_k^m(t) and w_l^m(t), k ≠ l, ∀m, are assumed to be zero. In general this is not the case and it is not clear whether a block diagonal approximation is valid or not. The second partial derivatives can be approximated by secant methods (normally the BFGS method). Furthermore, the condition of normalized filter functions can be achieved by optimizing in hyperspherical polar coordinates. Preliminary experiments (mostly with Steepest Ascent) show that more advanced optimization techniques lead to a more robust convergence of the filter functions.
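The following sketch (hypothetical code, not the authors' implementation) shows one batch Steepest Ascent step of this kind: Q is any scalar quality function of the filter matrix (for instance one of equations (7)-(9) estimated on a batch of patterns), the gradient is approximated by central finite differences, the step length is chosen by a crude grid line search in the spirit of equation (11), and the rows of W are renormalised after every step.

```python
# Sketch (hypothetical): Steepest Ascent on a quality function Q(W) with
# explicit renormalisation of the filter functions and a grid line search.
import numpy as np

def normalise_rows(W):
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def train_filters(Q, W0, steps=200, eps=1e-4, step_grid=(1e-3, 1e-2, 1e-1)):
    W = normalise_rows(W0)
    for _ in range(steps):
        G = np.zeros_like(W)                  # finite-difference gradient of Q
        for idx in np.ndindex(W.shape):
            E = np.zeros_like(W)
            E[idx] = eps
            G[idx] = (Q(W + E) - Q(W - E)) / (2 * eps)
        # crude line search over a grid of step lengths, cf. eq. (11);
        # keeping W itself as a candidate prevents a degrading step
        candidates = [W] + [normalise_rows(W + lam * G) for lam in step_grid]
        W = max(candidates, key=Q)
    return W
```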
4 Experiments
In (Osterberg, 1993) we describe a series of experiments in which we investigate
systematically the following properties of the MS-system, the KL-system and the
FO-system: convergence speed, dependence on the initial solution W(0), distance between learned solution and optimal (orthogonal) solution, supervised classification
of the extracted feature vectors using linear regression and the degree of selective
response of the learned filter functions. We use training sets with controlled scalar
products between the cluster centers of three classes of input patterns embedded
in a 32-D space. The results of the experiments can be summarized as follows. In contrast to the MS-system, we noticed that the KL- and FO-systems had problems converging to the optimal orthogonal solutions for some initial solutions. All systems learned orthogonal solutions regardless of W(0). The supervised classification
power was independent of the filter system used. Only the FO-system produced
Table 1: Typical filter response to patterns from (a)-(c) Tset1 and (d) Tset2 using the filter functions learned with (a) the KL-system, (b) the FO-system and (c)-(d) the MV-system. (e)-(f) Output covariance matrix using the filter functions learned with (e) the KL-system and (f) the MV-system.
(a) [ (-0.12, 0.92, -0.38), (-0.46, 0.83, 0.32), (0.73, 0.66, 0.14) ]

(b) [ (-0.71, 0.59, 0.28), (-0.99, -0.08, 0.01), (-0.22, -0.04, 0.97) ]

(c) [ (-0.91, 0.44, 0.28), (-0.39, 0.95, 0.10), (-0.23, 0.11, 0.98) ]

(d) [ (-0.80, 0.50, -0.50), (-0.50, 0.81, -0.49), (-0.49, 0.50, -0.81) ]

(e)
0.0340 0.0001 0.0005
0.0001 0.9300 0.0000
0.0005 0.0000 0.0353

(f)
 0.3788  0.3463 -0.3473
 0.3463  0.3760 -0.3467
-0.3473 -0.3467  0.3814
filter functions which mainly react for patterns from just one class and only if the
similarity (measured by the scalar product) between the classes in the training set
was smaller than approximately 0.5. Thus, the FO-system extracts feature vectors
which have unsupervised discrimination power. Furthermore, we showed that the
FO-system can distinguish between data sets having identical correlation matrices
(second order statistics) but different fourth order statistics. Recent experiments
with more advanced optimization techniques (Steepest Ascent) show better convergence properties for the KL- and FO-systems. In particular, the distance between the learned filter functions and the optimal orthogonal ones becomes smaller.
We will describe some experiments which show that the MV-system is more suitable
for tasks requiring unsupervised classification. We use two training sets Tset1 and
Tset2. In the first set the mean scalar product between class one and two is 0.7,
between class one and three 0.5 and between class two and three 0.3. In the second
set the mean scalar products between all classes are 0.9, i.e. the angle between all
cluster centers is arccos(0.9) = 26°. In Table 1(a)-(c) we show the filter response of the learned filter functions with the KL-, FO- and MV-system to typical examples of the input patterns in the training set Tset1. For the KL-system we see that the
second filter function gives the largest magnitude response for patterns from both
class one and two. For the FO-system the feature vectors are more binary. Still the
first filter function has the largest magnitude response for patterns from class one
and two. For the MV-system we see that each filter function has largest magnitude
response for only one class of input patterns and thus the extracted feature vectors
support unsupervised discrimination. In Table 1(d) (computed from Tset2) we see that this is the case even when the scalar products between the cluster centers are
as high as 0.9. The filter functions learned by the MV-system are approximately orthogonal. The system thus learns the rotation of the largest eigenvectors which maximizes Q_Var(t). Therefore it will not extract uncorrelated features (see Table 1(f)), but the variances (i.e. the diagonal elements of the covariance matrix) of the features are more or less equal. In Table 1(e) we see that the KL-system extracts uncorrelated features with largely different variances. This demonstrates that the KL-system tries to learn the pure eigenvectors.
Figure 2: (a) Examples of normalized reflectance spectra of typical reddish (solid
curve), greenish (dotted curve) and bluish (dashed curve) Munsell color chips. (b)
The three largest eigenvectors belonging to the correlation matrix of the 1253 different reflectance spectra. (c) The learned filter functions with the MV-system. (d)
The learned non-negative filter functions with the MV-system. In all figures the x-axes show the wavelength (nm).
Recently, we have applied the MV-system to real world data. The training set
consists of normalized reflectance spectra of the 1253 different color chips in the
Munsell color atlas. Figure 2(a) shows one typical example of a red, a green and
a blue color chip and Figure 2(b) the three largest eigenvectors belonging to the
correlation matrix of the training set. We see that the first eigenvector (the solid
curve) has a more or less uniform response for all different colors. On the other hand,
the MV-system (Figure 2(c)) learns one bluish, one greenish and one reddish filter
function. Thus, the filter functions divide the color space according to the primary
colors red, green and blue. We notice that the learned filter functions are orthogonal
and tend to span the same space as the eigenvectors since ||W_sol - R_opt U_eig||_F = 0.0199 (the Frobenius norm), where R_opt maximizes Q_Var(t). Figure 2(d) shows one
preliminary attempt to include the condition of non-negative filter functions in the
optimization process (Steepest Ascent). We see that the learned filter functions
are non-negative and divide the color space according to the primary colors. One
possible real world application is optical color analysis, where non-negative filter
functions are much easier to realize using optical components. Smoother filter
functions can be obtained by incorporating additional constraints into the quality
function.
5 Non-linear extensions
The proposed strategy to extract feature vectors applies to non-linear filter systems as well. In this case the input-output relation O(t) = W(t)P(t) is replaced by O(t) = f(W(t)P(t)), where f describes the desired non-linearity. The corresponding learning rule can be derived using gradient based techniques as long as the non-linearity f(·) is differentiable. The exact form of f(·) will usually be application oriented. Node nonlinearities of sigmoid type are one type of nonlinearity which has received a lot of attention (see for example Oja & Karhunen, 1993). Typical applications include: robust Principal Component Analysis (PCA) (outlier protection, noise suppression and symmetry breaking), sinusoidal signal detection in colored noise and robust curve fitting.
Acknowledgements
This work was done under TFR-contract TFR-93-00192. The visit of M. Osterberg
at the Dept. of Info. Tech., Lappeenranta University of Technology was supported
by a grant from the Nordic Research Network in Computer Vision. The Munsell
color experiments were performed during this visit.
References
R. B. Darlington. (1970) Is kurtosis really 'peakedness?'. The American Statistician 24(2):19-20.
J. E. Dennis & Robert B. Schnabel. (1983) Numerical Methods for Unconstrained
Optimization and Nonlinear Equations. Prentice-Hall.
N. Intrator & L.N. Cooper. (1992) Objective Function Formulation of the BCM
Theory of Visual Cortical Plasticity: Statistical Connections, Stability Conditions.
Neural Networks 5:3-17.
R. Lenz & M. Osterberg. (1992) Computing the Karhunen-Loeve expansion with a
parallel, unsupervised filter system. Neural Computation 4(3):382-392.
E. Oja. (1992) Principal Components, Minor Components, and Linear Neural Networks. Neural Networks 5:927-935.
E. Oja & J. Karhunen. (1993) Nonlinear PCA: Algorithms and Applications. Technical Report A18, Helsinki University of Technology, Laboratory of Computer and Information Sciences, SF-02150 Espoo, Finland.
M. Osterberg. (1993) Unsupervised Feature Extraction using Parallel Linear Filters.
Linkoping Studies in Science and Technology. Thesis No. 372.
6,866 | 7,210 | Neural Discrete Representation Learning
Aaron van den Oord
DeepMind
[email protected]
Oriol Vinyals
DeepMind
[email protected]
Koray Kavukcuoglu
DeepMind
[email protected]
Abstract
Learning useful representations without supervision remains a key challenge in
machine learning. In this paper, we propose a simple yet powerful generative
model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the
encoder network outputs discrete, rather than continuous, codes; and the prior
is learnt rather than static. In order to learn a discrete latent representation, we
incorporate ideas from vector quantisation (VQ). Using the VQ method allows the
model to circumvent issues of "posterior collapse" (where the latents are ignored when they are paired with a powerful autoregressive decoder) typically observed
in the VAE framework. Pairing these representations with an autoregressive prior,
the model can generate high quality images, videos, and speech as well as doing
high quality speaker conversion and unsupervised learning of phonemes, providing
further evidence of the utility of the learnt representations.
1 Introduction
Recent advances in generative modelling of images [38, 12, 13, 22, 10], audio [37, 26] and videos
[20, 11] have yielded impressive samples and applications [24, 18]. At the same time, challenging
tasks such as few-shot learning [34], domain adaptation [17], or reinforcement learning [35] heavily
rely on learnt representations from raw data, but the usefulness of generic representations trained in
an unsupervised fashion is still far from being the dominant approach.
Maximum likelihood and reconstruction error are two common objectives used to train unsupervised
models in the pixel domain; however, their usefulness depends on the particular application the
features are used in. Our goal is to achieve a model that conserves the important features of the
data in its latent space while optimising for maximum likelihood. As the work in [7] suggests, the
best generative models (as measured by log-likelihood) will be those without latents but a powerful
decoder (such as PixelCNN). However, in this paper, we argue for learning discrete and useful latent
variables, which we demonstrate on a variety of domains.
Learning representations with continuous features has been the focus of much previous work
[16, 39, 6, 9]; however, we concentrate on discrete representations [27, 33, 8, 28], which are potentially
a more natural fit for many of the modalities we are interested in. Language is inherently discrete,
similarly speech is typically represented as a sequence of symbols. Images can often be described
concisely by language [40]. Furthermore, discrete representations are a natural fit for complex
reasoning, planning and predictive learning (e.g., if it rains, I will use an umbrella). While using
discrete latent variables in deep learning has proven challenging, powerful autoregressive models
have been developed for modelling distributions over discrete variables [37].
In our work, we introduce a new family of generative models successfully combining the variational
autoencoder (VAE) framework with discrete latent representations through a novel parameterisation
of the posterior distribution of (discrete) latents given an observation. Our model, which relies on
vector quantization (VQ), is simple to train, does not suffer from large variance, and avoids the
"posterior collapse" issue which has been problematic with many VAE models that have a powerful decoder, often caused by the latents being ignored. Additionally, it is the first discrete latent VAE model that gets similar performance to its continuous counterparts, while offering the flexibility of discrete
distributions. We term our model the VQ-VAE.
Since VQ-VAE can make effective use of the latent space, it can successfully model important
features that usually span many dimensions in data space (for example objects span many pixels in
images, phonemes in speech, the message in a text fragment, etc.) as opposed to focusing or spending
capacity on noise and imperceptible details which are often local.
Lastly, once a good discrete latent structure of a modality is discovered by the VQ-VAE, we train
a powerful prior over these discrete random variables, yielding interesting samples and useful
applications. For instance, when trained on speech we discover the latent structure of language
without any supervision or prior knowledge about phonemes or words. Furthermore, we can equip
our decoder with the speaker identity, which allows for speaker conversion, i.e., transferring the
voice from one speaker to another without changing the contents. We also show promising results on
learning long term structure of environments for RL.
Our contributions can thus be summarised as:
- Introducing the VQ-VAE model, which is simple, uses discrete latents, does not suffer from "posterior collapse" and has no variance issues.
- We show that a discrete latent model (VQ-VAE) performs as well as its continuous model counterparts in log-likelihood.
- When paired with a powerful prior, our samples are coherent and high quality on a wide variety of applications such as speech and video generation.
- We show evidence of learning language through raw speech, without any supervision, and show applications of unsupervised speaker conversion.
2 Related Work
In this work we present a new way of training variational autoencoders [23, 32] with discrete latent
variables [27]. Using discrete variables in deep learning has proven challenging, as suggested by
the dominance of continuous latent variables in most of current work ? even when the underlying
modality is inherently discrete.
There exist many alternatives for training discrete VAEs. The NVIL [27] estimator uses a single-sample
objective to optimise the variational lower bound, and uses various variance-reduction techniques to
speed up training. VIMCO [28] optimises a multi-sample objective [5], which speeds up convergence
further by using multiple samples from the inference network.
Recently a few authors have suggested the use of a new continuous reparameterisation based on the
so-called Concrete [25] or Gumbel-softmax [19] distribution, which is a continuous distribution and
has a temperature constant that can be annealed during training to converge to a discrete distribution
in the limit. In the beginning of training the variance of the gradients is low but biased, and towards
the end of training the variance becomes high but unbiased.
None of the above methods, however, close the performance gap of VAEs with continuous latent
variables where one can use the Gaussian reparameterisation trick which benefits from much lower
variance in the gradients. Furthermore, most of these techniques are typically evaluated on relatively
small datasets such as MNIST, and the dimensionality of the latent distributions is small (e.g., below
8). In our work, we use three complex image datasets (CIFAR10, ImageNet, and DeepMind Lab) and
a raw speech dataset (VCTK).
Our work also extends the line of research where autoregressive distributions are used in the decoder
of VAEs and/or in the prior [14]. This has been done for language modelling with LSTM decoders [4],
and more recently with dilated convolutional decoders [42]. PixelCNNs [29, 38] are convolutional
autoregressive models which have also been used as distribution in the decoder of VAEs [15, 7].
Finally, our approach also relates to work in image compression with neural networks. Theis et al.
[36] use scalar quantisation to compress activations for lossy image compression before arithmetic
encoding. Other authors [1] propose a method for similar compression model with vector quantisation.
The authors propose a continuous relaxation of vector quantisation which is annealed over time
to obtain a hard clustering. In their experiments they first train an autoencoder, afterwards vector
quantisation is applied to the activations of the encoder, and finally the whole network is fine tuned
using the soft-to-hard relaxation with a small learning rate. In our experiments we were unable to
train using the soft-to-hard relaxation approach from scratch as the decoder was always able to invert
the continuous relaxation during training, so that no actual quantisation took place.
3 VQ-VAE
Perhaps the work most closely related to our approach is the VAE. VAEs consist of the following parts:
an encoder network which parameterises a posterior distribution q(z|x) of discrete latent random
variables z given the input data x, a prior distribution p(z), and a decoder with a distribution p(x|z)
over input data.
Typically, the posteriors and priors in VAEs are assumed normally distributed with diagonal covariance, which allows for the Gaussian reparametrisation trick to be used [32, 23]. Extensions include
autoregressive prior and posterior models [14], normalising flows [31, 10], and inverse autoregressive
posteriors [22].
In this work we introduce the VQ-VAE where we use discrete latent variables with a new way of
training, inspired by vector quantisation (VQ). The posterior and prior distributions are categorical,
and the samples drawn from these distributions index an embedding table. These embeddings are
then used as input into the decoder network.
3.1 Discrete Latent variables
We define a latent embedding space e ∈ R^{K×D} where K is the size of the discrete latent space (i.e., a K-way categorical), and D is the dimensionality of each latent embedding vector e_i. Note that there are K embedding vectors e_i ∈ R^D, i ∈ 1, 2, ..., K. As shown in Figure 1, the model takes an input x, that is passed through an encoder producing output z_e(x). The discrete latent variables z are then calculated by a nearest neighbour look-up using the shared embedding space e as shown in equation 1. The input to the decoder is the corresponding embedding vector e_k as given in equation 2.
One can see this forward computation pipeline as a regular autoencoder with a particular non-linearity
that maps the latents to 1-of-K embedding vectors. The complete set of parameters for the model is the union of the parameters of the encoder, decoder, and the embedding space e. For the sake of simplicity we use a single random variable z to represent the discrete latent variables in this Section; however, for speech, images and videos we actually extract 1D, 2D and 3D latent feature spaces respectively.
The posterior categorical distribution q(z|x) probabilities are defined as one-hot as follows:

q(z = k|x) = 1 for k = argmin_j ||z_e(x) - e_j||_2, and 0 otherwise,     (1)
where z_e(x) is the output of the encoder network. We view this model as a VAE in which we
can bound log p(x) with the ELBO. Our proposal distribution q(z = k|x) is deterministic, and by
defining a simple uniform prior over z we obtain a KL divergence constant and equal to log K.
The representation z_e(x) is passed through the discretisation bottleneck followed by mapping onto
the nearest element of embedding e as given in equations 1 and 2.
z_q(x) = e_k,  where k = argmin_j ||z_e(x) - e_j||_2     (2)
3.2 Learning
Note that there is no real gradient defined for equation 2, however we approximate the gradient
similar to the straight-through estimator [3] and just copy gradients from decoder input z_q(x) to encoder output z_e(x). One could also use the subgradient through the quantisation operation, but this
simple estimator worked well for the initial experiments in this paper.
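A minimal sketch of this bottleneck, written as assumed PyTorch code (not the authors' implementation, and with the encoder output flattened to one latent vector per row), is given below; the last line implements the straight-through gradient copy from z_q(x) to z_e(x).

```python
# Sketch (assumed PyTorch code) of equations (1)-(2) with the
# straight-through gradient estimator.
import torch

def quantise(z_e, embedding):
    """z_e: (B, D) encoder outputs; embedding: (K, D) codebook."""
    d = torch.cdist(z_e, embedding)    # pairwise distances ||z_e(x) - e_j||
    k = d.argmin(dim=1)                # eq. (1): index of the nearest code
    z_q = embedding[k]                 # eq. (2): selected embedding vectors
    # forward pass uses z_q; backward pass copies gradients to z_e unaltered
    z_q = z_e + (z_q - z_e).detach()
    return z_q, k
```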
Figure 1: Left: A figure describing the VQ-VAE. Right: Visualisation of the embedding space. The output of the encoder z(x) is mapped to the nearest point e_2. The gradient ∇_z L (in red) will push the encoder to change its output, which could alter the configuration in the next forward pass.
During forward computation the nearest embedding z_q(x) (equation 2) is passed to the decoder, and during the backwards pass the gradient ∇_z L is passed unaltered to the encoder. Since the output
representation of the encoder and the input to the decoder share the same D dimensional space,
the gradients contain useful information for how the encoder has to change its output to lower the
reconstruction loss.
As seen in Figure 1 (right), the gradient can push the encoder's output to be discretised differently in
the next forward pass, because the assignment in equation 1 will be different.
Equation 3 specifies the overall loss function. It has three components that are used to train different parts of VQ-VAE. The first term is the reconstruction loss (or the data term) which optimizes the decoder and the encoder (through the estimator explained above). Due to the straight-through gradient estimation of mapping from z_e(x) to z_q(x), the embeddings e_i receive no gradients from the reconstruction loss log p(x|z_q(x)). Therefore, in order to learn the embedding space, we use one of the simplest dictionary learning algorithms, Vector Quantisation (VQ). The VQ objective uses the l_2 error to move the embedding vectors e_i towards the encoder outputs z_e(x) as shown in the second term of equation 3. Because this loss term is only used for updating the dictionary, one can alternatively also update the dictionary items as a function of moving averages of z_e(x) (not used for the experiments in this work).

Finally, since the volume of the embedding space is dimensionless, it can grow arbitrarily if the embeddings e_i do not train as fast as the encoder parameters. To make sure the encoder commits to an embedding and its output does not grow, we add a commitment loss, the third term in equation 3.
Thus, the total training objective becomes:

L = log p(x|z_q(x)) + ||sg[z_e(x)] - e||_2^2 + β ||z_e(x) - sg[e]||_2^2,     (3)

where sg stands for the stopgradient operator that is defined as identity at forward computation time and has zero partial derivatives, thus effectively constraining its operand to be a non-updated constant. The decoder optimises the first loss term only, the encoder optimises the first and the last loss terms, and the embeddings are optimised by the middle loss term. We found the resulting algorithm to be quite robust to β, as the results did not vary for values of β ranging from 0.1 to 2.0. We use β = 0.25 in all our experiments, although in general this would depend on the scale of the reconstruction loss.
Since we assume a uniform prior for z, the KL term that usually appears in the ELBO is constant
w.r.t. the encoder parameters and can thus be ignored for training.
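The following sketch (assumed PyTorch code, with an MSE reconstruction term standing in for -log p(x|z_q(x))) spells out the three terms of equation 3; sg[·] is implemented with .detach(), and z_q_codes denotes the selected embeddings e_k before the straight-through trick is applied.

```python
# Sketch (assumed PyTorch code) of the loss in equation (3).
import torch.nn.functional as F

def vq_vae_loss(x, x_recon, z_e, z_q_codes, beta=0.25):
    recon = F.mse_loss(x_recon, x)                  # data term, stands in for -log p(x|z_q(x))
    codebook = F.mse_loss(z_q_codes, z_e.detach())  # ||sg[z_e(x)] - e||^2
    commit = F.mse_loss(z_e, z_q_codes.detach())    # commitment: ||z_e(x) - sg[e]||^2
    return recon + codebook + beta * commit
```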
In our experiments we define N discrete latents (e.g., we use a field of 32 x 32 latents for ImageNet,
or 8 x 8 x 10 for CIFAR10). The resulting loss L is identical, except that we get an average over N terms for the k-means and commitment losses, one for each latent.
The log-likelihood of the complete model log p(x) can be evaluated as follows:

log p(x) = log Σ_k p(x|z_k) p(z_k).

Because the decoder p(x|z) is trained with z = z_q(x) from MAP-inference, the decoder should not allocate any probability mass to p(x|z) for z ≠ z_q(x) once it has fully converged. Thus, we can write
log p(x) ≈ log p(x|z_q(x)) p(z_q(x)). We empirically evaluate this approximation in section 4. From Jensen's inequality, we can also write log p(x) ≥ log p(x|z_q(x)) p(z_q(x)).
3.3 Prior
The prior distribution over the discrete latents p(z) is a categorical distribution, and can be made
autoregressive by depending on other z in the feature map. Whilst training the VQ-VAE, the prior is
kept constant and uniform. After training, we fit an autoregressive distribution over z, p(z), so that
we can generate x via ancestral sampling. We use a PixelCNN over the discrete latents for images,
and a WaveNet for raw audio. Training the prior and the VQ-VAE jointly, which could strengthen our
results, is left as future research.
4 Experiments

4.1 Comparison with continuous variables
As a first experiment we compare VQ-VAE with normal VAEs (with continuous variables), as well as
VIMCO [28] with independent Gaussian or categorical priors. We train these models using the same
standard VAE architecture on CIFAR10, while varying the latent capacity (number of continuous or
discrete latent variables, as well as the dimensionality of the discrete space K). The encoder consists
of 2 strided convolutional layers with stride 2 and window size 4 × 4, followed by two residual 3 × 3 blocks (implemented as ReLU, 3x3 conv, ReLU, 1x1 conv), all having 256 hidden units. The decoder similarly has two residual 3 × 3 blocks, followed by two transposed convolutions with stride 2 and window size 4 × 4. We use the ADAM optimiser [21] with learning rate 2e-4 and evaluate
the performance after 250,000 steps with batch-size 128. For VIMCO we use 50 samples in the
multi-sample training objective.
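A sketch of this architecture in assumed PyTorch code (hyperparameters taken from the text; the layer paddings are our assumption) could look as follows.

```python
# Sketch (assumed PyTorch code) of the CIFAR10 encoder/decoder described above.
import torch.nn as nn

class Residual(nn.Module):
    """ReLU, 3x3 conv, ReLU, 1x1 conv, with a skip connection."""
    def __init__(self, c=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReLU(), nn.Conv2d(c, c, 3, padding=1),
            nn.ReLU(), nn.Conv2d(c, c, 1))

    def forward(self, x):
        return x + self.block(x)

encoder = nn.Sequential(
    nn.Conv2d(3, 256, 4, stride=2, padding=1),    # 32x32 -> 16x16
    nn.Conv2d(256, 256, 4, stride=2, padding=1),  # 16x16 -> 8x8
    Residual(), Residual())

decoder = nn.Sequential(
    Residual(), Residual(),
    nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1),
    nn.ConvTranspose2d(256, 3, 4, stride=2, padding=1))
```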
The VAE, VQ-VAE and VIMCO models obtain 4.51 bits/dim, 4.67 bits/dim and 5.14 bits/dim, respectively.
All reported likelihoods are lower bounds. Our numbers for the continuous VAE are comparable to
those reported for a Deep convolutional VAE: 4.54 bits/dim [13] on this dataset.
Our model is the first among those using discrete latent variables which challenges the performance
of continuous VAEs. Thus, we get the very good reconstructions that regular VAEs provide, together with the compactness of a symbolic representation. A few interesting characteristics, implications and applications of the VQ-VAEs that we train are shown in the next subsections.
4.2 Images
Images contain a lot of redundant information, as most of the pixels are correlated and noisy; therefore, learning models at the pixel level could be wasteful.
In this experiment we show that we can model x = 128 × 128 × 3 images by compressing them to a z = 32 × 32 × 1 discrete space (with K = 512) via a purely deconvolutional p(x|z), a reduction of (128 × 128 × 3 × 8)/(32 × 32 × 9) ≈ 42.6 in bits. We model images by learning a powerful prior (PixelCNN) over z. This allows us not only to greatly speed up training and sampling, but also to use the PixelCNN's capacity to capture the global structure rather than the low-level statistics of images.
Figure 2: Left: ImageNet 128x128x3 images, right: reconstructions from a VQ-VAE with a 32x32x1
latent space, with K=512.
Reconstructions from the 32x32x1 space with discrete latents are shown in Figure 2. Even considering
that we greatly reduce the dimensionality with discrete encoding, the reconstructions look only slightly
blurrier than the originals. It would be possible to use a more perceptual loss function than MSE over
pixels here (e.g., a GAN [12]), but we leave that as future work.
Next, we train a PixelCNN prior on the discretised 32x32x1 latent space. As we only have 1 channel
(not 3 as with colours), we only have to use spatial masking in the PixelCNN. The capacity of the
PixelCNN we used was similar to that used by the authors of the PixelCNN paper [38].
Figure 3: Samples (128x128) from a VQ-VAE with a PixelCNN prior trained on ImageNet images.
From left to right: kit fox, gray whale, brown bear, admiral (butterfly), coral reef, alp, microwave,
pickup.
Samples drawn from the PixelCNN were mapped to pixel-space with the decoder of the VQ-VAE
and can be seen in Figure 3.
Figure 4: Samples (128x128) from a VQ-VAE with a PixelCNN prior trained on frames captured
from DeepMind Lab.
We also repeat the same experiment for 84x84x3 frames drawn from the DeepMind Lab environment
[2]. The reconstructions looked nearly identical to their originals. Samples drawn from the PixelCNN
prior trained on the 21x21x1 latent space and decoded to the pixel space using a deconvolutional
model decoder can be seen in Figure 4.
Finally, we train a second VQ-VAE with a PixelCNN decoder on top of the 21x21x1 latent space
from the first VQ-VAE on DM-LAB frames. This setup typically breaks VAEs as they suffer from
"posterior collapse", i.e., the latents are ignored as the decoder is powerful enough to model x
perfectly. Our model however does not suffer from this, and the latents are meaningfully used. We use
only three latent variables (each with K=512 and their own embedding space e) at the second stage
for modelling the whole image and as such the model cannot reconstruct the image perfectly, which is a consequence of compressing the image onto 3 × 9 = 27 bits, i.e., less than a float32. Reconstructions
sampled from the discretised global code can be seen in Figure 5.
Figure 5: Top: original images. Bottom: reconstructions from a 2-stage VQ-VAE, with 3 latents to model the whole image (27 bits); as such, the model cannot reconstruct the images perfectly. The reconstructions are generated by sampling from the second PixelCNN prior in the 21x21 latent domain of the first VQ-VAE, and then decoding with the standard VQ-VAE decoder to 84x84. A lot of the original
scene, including textures, room layout and nearby walls remain, but the model does not try to store
the pixel values themselves, which means the textures are generated procedurally by the PixelCNN.
Figure 6: Left: original waveform, middle: reconstructed with same speaker-id, right: reconstructed
with different speaker-id. The contents of the three waveforms are the same.
4.3 Audio
In this set of experiments we evaluate the behaviour of discrete latent variables on models of raw
audio. In all our audio experiments, we train a VQ-VAE that has a dilated convolutional architecture
similar to the WaveNet decoder. All samples for this section can be played from the following URL:
https://avdnoord.github.io/homepage/vqvae/.
We first consider the VCTK dataset, which has speech recordings of 109 different speakers [41].
We train a VQ-VAE where the encoder has 6 strided convolutions with stride 2 and window-size 4.
This yields a latent space 64x smaller than the original waveform. The latents consist of one feature
map and the discrete space is 512-dimensional. The decoder is conditioned on both the latents and a
one-hot embedding for the speaker.
First, we ran an experiment to show that VQ-VAE can extract a latent space that only conserves
long-term relevant information. After training the model, given an audio example, we can encode
it to the discrete latent representation, and reconstruct by sampling from the decoder. Because the
dimensionality of the discrete representation is 64 times smaller, the original sample cannot be
perfectly reconstructed sample by sample. As can be heard from the provided samples, and as
shown in Figure 7, the reconstruction has the same content (same text contents), but the waveform
is quite different and prosody in the voice is altered. This means that the VQ-VAE has, without
any form of linguistic supervision, learned a high-level abstract space that is invariant to low-level
features and only encodes the content of the speech. This experiment confirms our observations from
before that important features are often those that span many dimensions in the input data space (in
this case phonemes and other high-level content in the waveform).
We have then analysed the unconditional samples from the model to understand its capabilities. Given
the compact and abstract latent representation extracted from the audio, we trained the prior on top of
this representation to model the long-term dependencies in the data. For this task we have used a
larger dataset of 460 speakers [30] and trained a VQ-VAE model where the resolution of discrete
space is 128 times smaller. Next we trained the prior as usual on top of this representation on chunks
of 40960 timesteps (2.56 seconds), which yields 320 latent timesteps. While samples drawn from even
the best speech models like the original WaveNet [37] sound like babbling, samples from VQ-VAE
contain clear words and part-sentences (see samples linked above). We conclude that VQ-VAE was
able to model a rudimentary phoneme-level language model in a completely unsupervised fashion
from raw audio waveforms.
7
Next, we attempted speaker conversion, where the latents are extracted from one speaker and then
reconstructed through the decoder using a separate speaker id. As can be heard from the samples,
the synthesised speech has the same content as the original sample, but with the voice from the
second speaker. This experiment again demonstrates that the encoded representation has factored out
speaker-specific information: the embeddings not only have the same meaning regardless of details
in the waveform, but also across different voice-characteristics.
Finally, in an attempt to better understand the content of the discrete codes we have compared the
latents one-to-one with the ground-truth phoneme sequence (which was not used in any way to train the VQ-VAE). With a 128-dimensional discrete space that runs at 25 Hz (encoder downsampling factor of 640), we mapped each of the 128 possible latent values to one of the 41 possible phoneme values¹
(by taking the conditionally most likely phoneme). The accuracy of this 41-way classification was
49.3%, while a random latent space would result in an accuracy of 7.2% (prior most likely phoneme).
It is clear that these discrete latent codes obtained in a fully unsupervised way are high-level speech
descriptors that are closely related to phonemes.
4.4 Video
For our final experiment we have used the DeepMind Lab [2] environment to train a generative model
conditioned on a given action sequence. In Figure 7 we show the initial 6 frames that are input to the
model followed by 10 frames that are sampled from VQ-VAE with all actions set to forward (top row)
and right (bottom row). Generation of the video sequence with the VQ-VAE model is done purely in the latent space, z_t, without the need to generate the actual images themselves. Each image in the sequence x_t is then created by mapping the latents with a deterministic decoder to the pixel space after all the latents are generated using only the prior model p(z_1, …, z_T). Therefore, VQ-VAE can
be used to imagine long sequences purely in latent space without resorting to pixel space. It can be
seen that the model has learnt to successfully generate a sequence of frames conditioned on given
action without any degradation in the visual quality whilst keeping the local geometry correct. For
completeness, we trained a model without actions and obtained similar results, not shown due to
space constraints.
Figure 7: First 6 frames are provided to the model, following frames are generated conditioned on an
action. Top: repeated action "move forward", bottom: repeated action "move right".
5 Conclusion
In this work we have introduced VQ-VAE, a new family of models that combine VAEs with vector
quantisation to obtain a discrete latent representation. We have shown that VQ-VAEs are capable of
modelling very long term dependencies through their compressed discrete latent space which we have
demonstrated by generating 128 × 128 colour images, sampling action-conditional video sequences, and finally using audio, where even an unconditional model can generate surprisingly meaningful chunks of speech and perform speaker conversion. All these experiments demonstrated that the discrete latent space learnt by VQ-VAEs captures important features of the data in a completely unsupervised manner. Moreover, VQ-VAEs achieve likelihoods that are almost as good as their continuous latent
variable counterparts on CIFAR10 data. We believe that this is the first discrete latent variable model
that can successfully model long range sequences and fully unsupervisedly learn high-level speech
descriptors that are closely related to phonemes.
¹ Note that the encoder/decoder pairs could make the meaning of every discrete latent depend on previous latents in the sequence, e.g., bi-/tri-grams (and thus achieve a higher compression), which means a more advanced mapping to phonemes would result in higher accuracy.
References
[1] Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and
Luc Van Gool. Soft-to-hard vector quantization for end-to-end learned compression of images and neural
networks. arXiv preprint arXiv:1704.00648, 2017.
[2] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801,
2016.
[3] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through
stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[4] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio.
Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[5] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint
arXiv:1509.00519, 2015.
[6] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan:
Interpretable representation learning by information maximizing generative adversarial nets. CoRR,
abs/1606.03657, 2016.
[7] Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever,
and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016.
[8] Aaron Courville, James Bergstra, and Yoshua Bengio. A spike and slab restricted boltzmann machine. In
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages
233–241, 2011.
[9] Emily Denton, Sam Gross, and Rob Fergus. Semi-supervised learning with context-conditional generative
adversarial networks. arXiv preprint arXiv:1611.06430, 2016.
[10] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint
arXiv:1605.08803, 2016.
[11] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through
video prediction. In Advances in Neural Information Processing Systems, pages 64–72, 2016.
[12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing
systems, pages 2672–2680, 2014.
[13] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards
conceptual compression. In Advances in Neural Information Processing Systems, pages 3549–3557, 2016.
[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive
networks. arXiv preprint arXiv:1310.8499, 2013.
[15] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vázquez, and
Aaron C. Courville. Pixelvae: A latent variable model for natural images. CoRR, abs/1611.05013, 2016.
[16] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504–507, 2006.
[17] Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of
domain-invariant image representations. arXiv preprint arXiv:1301.3224, 2013.
[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional
adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
[19] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv
preprint arXiv:1611.01144, 2016.
[20] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and
Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.
[21] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[22] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved
variational inference with inverse autoregressive flow. 2016.
[23] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114,
2013.
[24] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta,
Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image superresolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[25] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of
discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
[26] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron
Courville, and Yoshua Bengio. Samplernn: An unconditional end-to-end neural audio generation model.
arXiv preprint arXiv:1612.07837, 2016.
[27] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv
preprint arXiv:1402.0030, 2014.
[28] Andriy Mnih and Danilo Jimenez Rezende. Variational inference for monte carlo objectives. CoRR,
abs/1602.06725, 2016.
[29] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv
preprint arXiv:1601.06759, 2016.
[30] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus
based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE
International Conference on, pages 5206–5210. IEEE, 2015.
[31] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv
preprint arXiv:1505.05770, 2015.
[32] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[33] Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In Artificial Intelligence and
Statistics, pages 448–455, 2009.
[34] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot
learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
[35] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press
Cambridge, 1998.
[36] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with
compressive autoencoders. arXiv preprint arXiv:1703.00395, 2017.
[37] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal
Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio.
CoRR, abs/1609.03499, 2016.
[38] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional
image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pages
4790–4798, 2016.
[39] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked
denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
[40] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image
caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 3156–3164, 2015.
[41] Junichi Yamagishi. English multi-speaker corpus for CSTR voice cloning toolkit. URL http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html, 2012.
[42] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved variational
autoencoders for text modeling using dilated convolutions. CoRR, abs/1702.08139, 2017.
Generalizing GANs: A Turing Perspective
Roderich Groß and Yue Gu
Department of Automatic Control and Systems Engineering
The University of Sheffield
{r.gross,ygu16}@sheffield.ac.uk
Wei Li
Department of Electronics
The University of York
[email protected]
Melvin Gauci
Wyss Institute for Biologically Inspired Engineering
Harvard University
[email protected]
Abstract
Recently, a new class of machine learning algorithms has emerged, where models
and discriminators are generated in a competitive setting. The most prominent
example is Generative Adversarial Networks (GANs). In this paper we examine
how these algorithms relate to the famous Turing test, and derive what, from a Turing perspective, can be considered their defining features. Based on these
features, we outline directions for generalizing GANs?resulting in the family
of algorithms referred to as Turing Learning. One such direction is to allow the
discriminators to interact with the processes from which the data samples are
obtained, making them ?interrogators?, as in the Turing test. We validate this idea
using two case studies. In the first case study, a computer infers the behavior of an
agent while controlling its environment. In the second case study, a robot infers
its own sensor configuration while controlling its movements. The results confirm
that by allowing discriminators to interrogate, the accuracy of models is improved.
1 Introduction
Generative Adversarial Networks [1] (GANs) are a framework for inferring generative models from
training data. They place two neural networks, a model and a discriminator, in a competitive setting. The discriminator's objective is to correctly label samples from either the model or the training data. The model's objective is to deceive the discriminator, in other words, to produce
samples that are categorized as training data by the discriminator. The networks are trained using a
gradient-based optimization algorithm. Since their inception in 2014, GANs have been applied in a
range of contexts [2, 3], but most prominently for the generation of photo-realistic images [1, 4].
In this paper we analyze the striking similarities between GANs and the famous Turing test [5]. The
Turing test probes a machine's ability to display behavior that, to an interrogator, is indistinguishable
from that of a human. Developing machines that pass the Turing test could be considered as a
canonical problem in computer science [6]. More generally, the problem is that of imitating (and
hence inferring) the structure and/or behavior of any system, such as an organism, a device, a
computer program, or a process.
The idea to infer models in a competitive setting (model versus discriminator) was first proposed
in [7]. The paper considered the problem of inferring the behavior of an agent in a simple environment.
The behavior was deterministic, simplifying the identification task. In a subsequent work [8], the
method, named Turing Learning, was used to infer the behavioral rules of a swarm of memoryless
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Illustration of the Turing test setup introduced in [5]. Player C (the interrogator) poses
questions to and receives labelled answers from players A and B. Player C does not know which
label (blue square or red disk) corresponds to which player. Player C has to determine this after
questioning.
robots. The robot's movements were tracked using an external camera system, providing the training
data. Additional robots executed the rules defined by the models.
The contributions of this paper are
• to examine the defining features of GANs (and variants), assuming a Turing perspective;
• to outline directions for generalizing GANs, in particular, to encourage alternative implementations and novel applications, for example, ones involving physical systems;
• to show, using two case studies, that more accurate models can be obtained if the discriminators are allowed to interact with the processes from which data samples are obtained (as the interrogators in the Turing test).¹

¹ Different to [7], we consider substantially more complex case studies, where the discriminators are required to genuinely interact with the systems, as a pre-determined sequence of interventions would be unlikely to reveal all the observable behavioral features.
2 A Turing Perspective
In 1950, Turing proposed an imitation game [5] consisting of three players A, B and C. Figure 1
shows a schematic of this game. Player C, also referred to as the interrogator, is unable to see the other
players. However, the interrogator can pose questions to and receive answers from them. Answers
from the same player are consistently labelled (but not revealing its identity, A or B). At the end of
the game, the interrogator has to guess which label belongs to which player. There are two variants
of the game, and we focus on the one where player A is a machine, while player B is human (the
interrogator is always human). This variant, depicted in Figure 1, is commonly referred to as the
Turing test [9, 10]. To pass the test, the machine would have to produce answers that the interrogator
believes to originate from a human. If a machine passed this test, it would be considered intelligent.
For GANs (and variants), player C, the interrogator, is no longer human, but rather a computer
program that learns to discriminate between information originating from players A and B. Player A
is a computer program that learns to trick the interrogator. Player B could be any system one wishes
to imitate, including humans.
2.1 Defining Features of GANs
Assuming a Turing perspective, we consider the following as the defining features of GANs (and
variants):
• a training agent, T, providing genuine data samples (the training data);
• a model agent, M, providing counterfeit data samples;
• a discriminator agent, D, labelling data samples as either genuine or counterfeit;
• a process by which D interacts with M and T;
• D is rewarded for labelling data samples of T as genuine;
• D is rewarded for labelling data samples of M as counterfeit;
• M is rewarded for misleading D (to label its data samples as genuine).
It should be noted that in the Turing test there is a bi-directional exchange of information between
player C and either player A or B. In GANs, however, during any particular "game", data flows
in one direction: The discriminator agent receives data samples, but is unable to influence the agent
at the origin during the sampling process. In the case studies presented in this paper, this limitation is
overcome, and it is shown that this can lead to improved model accuracy. This, of course, does not
imply that active discriminators are beneficial for every problem domain.
2.2 Implementation Options of (Generalized) GANs
GANs and their generalizations, that is, algorithms that possess the aforementioned defining features,
are instances of Turing Learning [8]. The Turing Learning formulation removes (from a Turing
perspective unnecessary) restrictions of the original GAN formulation, for example, the need for
models and discriminators to be represented as neural networks, or the need for optimizing these
networks using gradient descent. As a result of this, the Turing Learning formulation is very general,
and applicable to a wide range of problems (e.g., using models with discrete, continuous or mixed
representations).
In the following, we present the aspects of implementations that are not considered as defining
features, but rather as implementation options. They allow Turing Learning to be tailored, for
example, by using the most suitable model representation and optimization algorithm for the given
problem domain. Moreover, users can choose implementation options they are familiar with, making
the overall framework more accessible.
• Training data. The training data could take any form. It could be artificial (e.g., audio,
visual, textual data in a computer), or physical (e.g., a geological sample, engine, painting
or human being).
• Model representation. The model could take any form. In GANs [1], it takes the form of a
neural network that generates data when provided with a random input. Other representations
include vectors, graphs, and computer programs. In any case, the representation should
be expressive enough, allowing a model to produce data with the same distribution as the
training data. The associated process could involve physical objects (e.g., robots [8]). If the
training data originates from physical objects, but the model data originates from simulation,
special attention is needed to avoid the so-called reality gap [11]. Any difference caused not
by the model but rather the process to collect the data (e.g., tracking equipment) may be
detected by the discriminators, which could render model inference impossible.
• Discriminator representation. The discriminator could take any form. Its representation
should be expressive enough to distinguish between genuine and counterfeit data samples.
These samples could be artificial or physical. For example, a discriminator could be
networked to an experimental platform, observing and manipulating some physical objects
or organisms.
• Optimization algorithms. The optimization algorithms could take any form as long as they
are compatible with the solution representations. They could use a single candidate solution
or a population of candidate solutions [8, 12]. In the context of GANs, gradient-based
optimization algorithms are widely applied [13]. These algorithms however require the
objective function to be differentiable and (ideally) unimodal. A wide range of metaheuristic
algorithms [14] could be explored for domains with more complex objective functions. For
example, if the model was represented using a computer program, genetic programming
algorithms could be used.
• Coupling mechanism between the model and discriminator optimizers. The optimization
processes for the model and discriminator solutions are dependent on each other. Hence
they may require careful synchronization [1]. Moreover, if using multiple models and/or
Figure 2: In Case Study 1, we consider a non-embodied agent that is subjected to a stimulus, S,
which can be either low (L) or high (H). The task is to infer how the agent responds to the stimulus.
The discriminator controls the stimulus while observing the behavior of the agent (expressed as v),
which is governed by the above probabilistic finite-state machine. The label S&p denotes that if the stimulus is S ∈ {L, H}, the corresponding transition occurs with probability p. We assume that the structure of the state machine is known, and that the parameters (p_1, p_2, v_2, v_3, …, v_n) are to be inferred.
multiple discriminators, choices have to be made for which pairs of solutions to evaluate.
Elaborate evaluation schemes may take into account the performance of the opponents in
other evaluations (e.g., using niching techniques). Synchronization challenges include those
reported for coevolutionary systems.² In particular, due to the so-called Red Queen Effect,
the absolute quality of solutions in a population may increase while the quality of solutions
relative to the other population may decrease, or vice versa [18]. Cycling [20] refers to
the phenomenon that solutions that have been lost may get rediscovered in later generations. A method for overcoming the problem is to retain promising solutions in an archive, the "hall of fame" [21]. Disengagement can occur when one population (e.g., the discriminators) outperforms the other population, making it hard to reveal differences among the solutions. Methods for addressing disengagement include "resource sharing" [22] and "reducing virulence" [20].
• Termination criterion. Identifying a suitable criterion for terminating the optimization
process can be challenging, as the performance is defined in relative rather than absolute
terms. For example, a model that is found to produce genuine data by each of a population
of discriminators may still not be useful (the discriminators may have performed poorly). In
principle, however, any criterion can be applied (e.g., convergence data, fixed time limit,
etc).
3 Case Study 1: Inferring Stochastic Behavioral Processes Through Interaction
3.1 Problem Formulation
This case study is inspired by ethology, the study of animal behavior. Animals are sophisticated agents, whose actions depend on both their internal state and the stimuli present in their environment.
Additionally, their behavior can have a stochastic component. In the following, we show how Turing
Learning can infer the behavior of a simple agent that captures the aforementioned properties.
The agent's behavior is governed by the probabilistic finite-state machine (PFSM)³ shown in Figure 2. It has n states, and it is assumed that each state leads to some observable behavioral feature, v ∈ ℝ, hereafter referred to as the agent's velocity. The agent responds to a stimulus that can take two levels,
low (L) or high (H). The agent starts in state 1. If the stimulus is L, it remains in state 1 with certainty.
If the stimulus is H, it transitions to state 2 with probability p1 , and remains in state 1 otherwise. In
other words, on average, it transitions to state 2 after 1/p1 steps. In state k = 2, 3, …, n − 1, the
behavior is as follows. If the stimulus is identical to that which brings the agent into state k from
² Coevolutionary algorithms have been studied in a range of contexts [15, 16, 17], including system identification [18, 19], though these works differ from GANs and Turing Learning in that no discriminators evolve, but rather pre-defined metrics gauge how similar the model and training data are. For some system identification problems, the use of such pre-defined metrics can result in poor model accuracy, as shown in [8].
³ PFSMs generalize the concept of Markov chains [23, 24].
state k − 1, the state reverts to k − 1 with probability p2 and remains at k otherwise. If the stimulus is different to that which brings the agent into state k from state k − 1, the state progresses to k + 1
with probability p1 and remains at k otherwise. In state n, the only difference is that if the stimulus is
different to that which brought about state n, the agent remains in state n with certainty (as there is
no next state to progress to).
By choosing p1 close to 0 and p2 = 1, we force the need for interaction if the higher states are to be
observed for a meaningful amount of time. This is because once a transition to a higher state happens,
the interrogator must immediately toggle the stimulus to prevent the agent from regressing back to
the lower state.
3.2 Turing Learning Implementation
We implement Turing Learning for this problem as follows:
• Training data. To obtain the training data, the discriminator interacts with the PFSM, shown
in Figure 2. The number of states are set to four (n = 4). The parameters used to generate
the (genuine) data samples are given by:
q = (p_1^*, p_2^*, v_2^*, v_3^*, v_4^*) = (0.1, 1.0, 0.2, 0.4, 0.6). \qquad (1)
• Model representation. It is assumed that the structure of the PFSM is known, while the
parameters, q, are to be inferred. All parameters can vary in ℝ. To interpret p1 and p2 as
probabilities, they are mapped to the closest point in [0, 1], if outside this interval. The
model data is derived analogously to that of the training data.
• Discriminator representation. The discriminator is implemented as an Elman neural network [25] with 1 input neuron, 5 hidden neurons, and 2 output neurons. At each time step t, the observable feature (the agent's velocity v) is fed into the input neuron.⁴ After updating
the neural network, the output from one of the output neurons is used to determine the
stimulus at time step t + 1, L or H. At the end of a trial (100 time steps), the output from
the other output neuron is used to determine whether the discriminator believes the agent
under investigation to be the training agent (T ) or model agent (M).
• Optimization algorithms. We use a standard (μ + λ) evolution strategy with self-adapting mutation strengths [26] for both the model and the discriminator populations. We use μ = λ = 50 in both cases. The populations are initialized at random. The parameter values
of the optimization algorithm are set as described in [26].
• Coupling mechanism between the model and discriminator optimizers. The coupling comes
from the evaluation process, which in turn affects the population selection. Each of the
100 candidate discriminators is evaluated once with each of the 100 models, as well as an
additional 100 times with the training agent. It receives a point every time it correctly labels
the data as either genuine or counterfeit. At the same time, each model receives a point for
each time a discriminator mistakenly judges its data as genuine. (A sketch of this evaluation loop is given after this list.)
• Termination criterion. The optimization process is stopped after 1000 generations.
3.3 Results
To validate the advantages of the interactive approach, we use three setups for the Turing Learning
algorithm. In the default setup, hereafter the "Interactive" setup, the discriminator controls the environmental stimulus while observing the agent. In the other two setups, the discriminator observes the
agent in a passive manner; that is, its output is not used to update the stimulus. Instead, the stimulus
is uniformly randomly chosen at the beginning of the trial, and it is toggled with probability 0.1 at
any time step (the stimulus is hence expected to change on average every 10 time steps). In setup
"Passive 1", the discriminator has the same input as in the "Interactive" setup (the observable feature, v). In setup "Passive 2", the discriminator has one additional input, the current stimulus (S). All
other aspects of the passive setups are identical to the ?Interactive? setup.
⁴ To emulate a noisy tracking process, the actual speed value is multiplied by a number chosen uniformly at random in the range (0.95, 1.05).
Figure 3: Results from Case Study 1. (a) Model parameters Turing Learning inferred about a
simulated agent. In the "Interactive" setup, the discriminator observes the agent while controlling
a stimulus that the agent responds to. In the two passive setups, the discriminator observes the
agent and/or stimulus, while the latter is randomly generated (for details, see text). The models are
those with the highest evaluation value in the final generation (20 runs per setup). The dashed lines
indicate the optimal parameter value (which is to be identified). (b) Example showing how one of the
discriminators interacted with the agent during a trial. For the stimulus (blue), L and H are shown as
0 and 1, respectively.
For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 3(a) shows the
distribution of the inferred models that achieved the highest evaluation value in the 1000th generation.
The "Interactive" setup is the only one that inferred all parameters with good accuracy.
Figure 3(b) shows a typical example of how a discriminator interacts with the agent. The discriminator
initially sets the environmental stimulus to alternating values (i.e., toggling between H and L). Once
the agent advances from state 1 to state 2, the discriminator instantly changes the stimulus to L and
holds it constant. Once the agent advances to higher states, the stimulus is switched again, and so
forth. This strategy allows the discriminator to observe the agent's velocity in each state.
4 Case Study 2: A Robot Inferring Its Own Sensor Configuration
4.1 Problem Formulation
The reality gap is a well-known problem in robotics: Often, behaviors that work well in simulation
do not translate effectively into real-world implementations [11]. This is because simulations are
generally unable to capture the full range of features of the real world, and therefore make simplifying
assumptions. Yet, simulations can be important, even on-board a physical robot, as they facilitate
planning and optimization.
This case study investigates how a robot can use Turing Learning to improve the accuracy of a
simulation model of itself, through a process of self-discovery, similar to [27]. In a practical scenario,
the inference could take place on-board a physical platform. For convenience, we use an existing
simulation platform [28], which has been extensively verified and shown to be able to cross the reality
gap [29]. The robot, an e-puck [30], is represented as a cylinder of diameter 7.4 cm, height 4.7 cm
and mass 152 g. It has two symmetrically aligned wheels. Their ground contact velocity (vleft and
vright ) can be set within [-12.8, 12.8] (cm/s). During the motion, random noise is applied to each
wheel velocity, by multiplying it with a number chosen with a uniform distribution in the range (0.95,
1.05).
Figure 4: In Case Study 2, we consider a miniature mobile robot, the e-puck, that perceives its environment via eight infrared (IR) proximity sensors. The robot is unaware of the spatial configuration
of these sensors, and has to infer it. The discriminator controls the movements of the robot, while
observing the reading values of the sensors. (a) The sensor configuration to be inferred is the one
of the physical e-puck robot. It comprises 16 parameters, representing the orientations (θ) and
displacements (d) of the 8 proximity sensors. (b) The robot is placed at random into an environment
with nine moveable obstacles.
The robot has eight infrared proximity sensors distributed around its cylindrical body, see Figure 4(a).
The sensors provide noisy reading values (s1 , s2 , . . . , s8 ). We assume that the robot does not know
where the sensors are located (neither their orientations, nor their displacements from the center).
Situations like this are common in robotics, where uncertainties are introduced when sensors get
mounted manually or when the sensor configuration may change during operation (e.g., at the time of
collision with an object, or when the robot itself reconfigures the sensors). The sensor configuration
can be described as follows:
q = (\theta_1, \theta_2, \ldots, \theta_8, d_1, d_2, \ldots, d_8), \qquad (2)
where d_i ∈ (0, R] defines the distance of sensor i from the robot's center (R is the robot's radius), and θ_i ∈ [−π, π] defines the bearing of sensor i relative to the robot's front.
The robot operates in a bounded square environment with sides 50 cm, shown in Figure 4(b). The
environment also contains nine movable, cylindrical obstacles, arranged in a grid. The distance
between the obstacles is just wide enough for an e-puck to pass through.
4.2 Turing Learning Implementation
We implement Turing Learning for this problem as follows:
• Training data. The training data comes from the eight proximity sensors of a "real" e-puck robot, that is, using sensor configuration q as defined by the robot (see Figure 4(a)).
The discriminator controls the movements of the robot within the environment shown in
Figure 4(b), while observing the readings of its sensors.
• Model representation. It is assumed that the sensor configuration, q, is to be inferred. In
other words, a total of 16 parameters have to be estimated.
• Discriminator representation. As in Case Study 1, the discriminator is implemented as an Elman neural network with 5 hidden neurons. The network has 8 inputs that receive values from the robot's proximity sensors (s1, s2, …, s8). In addition to the classification output, the discriminator has two control outputs, which are used to set the robot's wheel velocities (v_left and v_right). In each trial, the robot starts from a random position and random orientation
within the environment. The evaluation lasts for 10 seconds. As the robot?s sensors and
actuators are updated 10 times per second, this results in 100 time steps.
• The remaining aspects are implemented exactly as in Case Study 1.
Figure 5: Results from Case Study 2. Model parameters Turing Learning inferred about the sensor
configuration of the e-puck robot: (a) sensor orientations, (b) sensor displacements. In the "Interactive" setup, the discriminator observes the sensor reading values while controlling the movements
of the robot. In the two passive setups, the discriminator observes the sensor reading values and/or
movements while the latter are randomly generated (for details, see text). The models are those with
the highest evaluation value in the final generation (20 runs per setup). The dashed lines indicate the
optimal parameter value (which is to be identified).
4.3 Results
To validate the advantages of the interactive approach, we again use three setups. In the "Interactive"
setup the discriminator controls the movements of the robot while observing its sensor readings. In
the other two setups, the discriminator observes the robot?s sensor readings in a passive manner; that
is, its output is not used to update the movements of the robot. Rather, the pair of wheel velocities
is uniformly randomly chosen at the beginning of the trial, and, with probability 0.1 at any time
step (the movement pattern hence is expected to change on average every 10 time steps). In setup
"Passive 1", the discriminator has the same inputs as in the "Interactive" setup (the reading values of the robot's sensors, s1, s2, …, s8). In setup "Passive 2", the discriminator has two additional inputs, indicating the velocities of the left and right wheels (v_left and v_right). All other aspects of the passive setups are identical to the "Interactive" setup.
For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 5 shows the
distribution of the inferred models that achieved the highest evaluation value in the 1000th generation.
The "Interactive" setup is the only one that inferred the orientations of the proximity sensors with
good accuracy. The displacement parameters were inferred with all setups, though none of them was
able to provide accurate estimates.
Figure 6 shows a typical example of how a discriminator controls the robot. At the beginning, the
robot rotates clockwise, registering an obstacle with sensors s7, s6, …, s2 (in that order). The robot then moves forward, and registers the obstacle with sensors s1 and/or s8, while pushing it. This confirms that s1 and s8 are indeed forward-facing. Once the robot no longer has any obstacle in its
front, it repeats the process. To validate whether the sensor-to-motor coupling was of any significance for
the discrimination task, we recorded the movements of a robot controlled by the best discriminator
of each of the 20 runs. The robot used either the genuine sensor configuration (50 trials) or the best
model configuration of the corresponding run (50 trials). In these 2000 "closed-loop" experiments,
the discriminator made correct judgements in 69.45% of the cases. We then repeated the 2000 trials,
now ignoring the discriminator?s control outputs, but rather using the movements recorded earlier. In
these 2000 "open-loop" experiments, the discriminator made correct judgements in 58.60% of the cases, a significant drop, though still better than guessing (50%).
Figure 6: Example showing how one of the discriminators in Case Study 2 controlled the robot's
movements during the trial. The discriminator takes as input the robot's eight sensor reading values
(shown at the top), and controls the velocities of the wheels (shown at the bottom). The discriminator
has to decide whether the sensor configuration of the robot corresponds to the one of the physical
e-puck robot. For details, see text.
5 Conclusion
In this paper we analyzed how Generative Adversarial Networks (GANs) relate to the Turing test.
We identified the defining features of GANs when assuming a Turing perspective. Other features,
including choice of model representation, discriminator representation, and optimization algorithm,
were viewed as implementation options of a generalized version of GANs, also referred to as Turing
Learning.
It was noted that the discriminator in GANs does not directly influence the sampling process, but
rather is provided with a (static) data sample from either the generative model or training data set.
This is in stark contrast to the Turing test, where the discriminator (the interrogator) plays an active
role; it poses questions to the players, to reveal the information most relevant to the discrimination
task. Such interactions are by no means always useful. For the purpose of generating photo-realistic images, for example, they may not be needed.5 For the two case studies presented here, however,
interactions were shown to cause an improvement in the accuracy of models.
The first case study showed how one can infer the behavior of an agent while controlling a stimulus
present in its environment. It could serve as a template for studies of animal/human behavior,
especially where some behavioral traits are revealed only through meaningful interactions. The
inference task was not simple, as the agent's actions depended on a hidden stochastic process. The latter was influenced by the stimulus, which was set to either low or high by the discriminator (100 times). It was not known in advance which of the 2^100 sequences would be useful. The discriminator thus needed to dynamically construct a suitable sequence, taking the observation data into account.
The second case study focused on a different class of problems: active self-discovery. It showed that
a robot can infer its own sensor configuration through controlled movements. This case study could
serve as a template for modelling physical devices. The inference task was not simple, as the robot
started from a random position in the environment, and its motors and sensors were affected by noise.
The discriminator thus needed to dynamically construct a control sequence that let the robot approach
an obstacle and perform movements for testing its sensor configuration.
Future work could attempt to build models of more complex behaviors, including those of humans.
Acknowledgments
The authors thank Nathan Lepora for stimulating discussions.
5 Though if the discriminator could request additional images by the same model or training agent, problems like mode collapse might be prevented.
Scalable Log Determinants for Gaussian Process Kernel Learning

Kun Dong^1, David Eriksson^1, Hannes Nickisch^2, David Bindel^1, Andrew Gordon Wilson^1
^1 Cornell University, ^2 Philips Research Hamburg
Abstract
For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes
(GPs), one must compute a log determinant of an n × n positive definite matrix, and its derivatives, leading to prohibitive O(n³) computations. We propose novel
O(n) approaches to estimating these quantities from only fast matrix vector multiplications (MVMs). These stochastic approximations are based on Chebyshev,
Lanczos, and surrogate models, and converge quickly even for kernel matrices that
have challenging spectra. We leverage these approximations to develop a scalable
Gaussian process approach to kernel learning. We find that Lanczos is generally
superior to Chebyshev for kernel learning, and that a surrogate approach can be
highly efficient and accurate with popular kernels.
1 Introduction
There is a pressing need for scalable machine learning approaches to extract rich statistical structure from large datasets. A common bottleneck, arising in determinantal point processes [1], Bayesian neural networks [2], model comparison [3], graphical models [4], and Gaussian process kernel learning [5], is computing a log determinant over a large positive definite matrix. While
we can approximate log determinants by existing stochastic expansions relying on matrix vector
multiplications (MVMs), these approaches make assumptions, such as near-uniform eigenspectra
[6], which are unsuitable in machine learning contexts. For example, the popular RBF kernel gives
rise to rapidly decaying eigenvalues. Moreover, while standard approaches, such as stochastic power
series, have reasonable asymptotic complexity in the rank of the matrix, they require too many terms
(MVMs) for the precision necessary in machine learning applications.
Gaussian processes (GPs) provide a principled probabilistic kernel learning framework, for which a
log determinant is of foundational importance. Specifically, the marginal likelihood of a Gaussian
process is the probability of data given only kernel hyper-parameters. This utility function for kernel
learning compartmentalizes into automatically calibrated model fit and complexity terms ? called
automatic Occam?s razor ? such that the simplest models which explain the data are automatically
favoured [7, 5], without the need for approaches such as cross-validation, or regularization, which
can be costly, heuristic, and involve substantial hand-tuning and human intervention. The automatic
complexity penalty, called the Occam?s factor [3], is a log determinant of a kernel (covariance) matrix,
related to the volume of solutions that can be expressed by the Gaussian process.
Many current approaches to scalable Gaussian processes [e.g., 8?10] focus on inference assuming
a fixed kernel, or use approximations that do not allow for very flexible kernel learning [11], due
to poor scaling with number of basis functions or inducing points. Alternatively, approaches which
exploit algebraic structure in kernel matrices can provide highly expressive kernel learning [12], but
are essentially limited to grid structured data.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Recently, Wilson and Nickisch [13] proposed the structured kernel interpolation (SKI) framework,
which generalizes structuring exploiting methods to arbitrarily located data. SKI works by providing
accurate and fast matrix vector multiplies (MVMs) with kernel matrices, which can then be used in
iterative solvers such as linear conjugate gradients for scalable GP inference. However, evaluating the
marginal likelihood and its derivatives, for kernel learning, has followed a scaled eigenvalue approach
[12, 13] instead of iterative MVM approaches. This approach can be inaccurate, and relies on a fast
eigendecomposition of a structured matrix, which is not available in many consequential situations
where fast MVMs are available, including: (i) additive covariance functions, (ii) multi-task learning,
(iii) change-points [14], and (iv) diagonal corrections to kernel approximations [15]. Fiedler [16] and
Weyl [17] bounds have been used to extend the scaled eigenvalue approach [18, 14], but are similarly
limited. These extensions are often very approximate, and do not apply beyond sums of two and
three matrices, where each matrix in the sum must have a fast eigendecomposition.
In machine learning there has recently been renewed interest in MVM-based approaches to approximating log determinants, such as the Chebyshev [19] and Lanczos [20] based methods, although these
approaches go back at least two decades in quantum chemistry computations [21]. Independently,
several authors have proposed various methods to compute derivatives of log determinants [22, 23].
But both the log determinant and the derivatives are needed for efficient GP marginal likelihood
learning: the derivatives are required for gradient-based optimization, while the log determinant itself
is needed for model comparison, comparisons between the likelihoods at local maximizers, and fast
and effective choices of starting points and step sizes in a gradient-based optimization algorithm.
In this paper, we develop novel scalable and general purpose Chebyshev, Lanczos, and surrogate
approaches for efficiently and accurately computing both the log determinant and its derivatives
simultaneously. Our methods use only fast MVMs, and re-use the same MVMs for both computations.
In particular:
- We derive fast methods for simultaneously computing the log determinant and its derivatives by stochastic Chebyshev, stochastic Lanczos, and surrogate models, from MVMs alone. We also perform an error analysis and extend these approaches to higher order derivatives.
- These methods enable fast GP kernel learning whenever fast MVMs are possible, including applications where alternatives such as scaled eigenvalue methods (which rely on fast eigendecompositions) are not, such as for (i) diagonal corrections for better kernel approximations, (ii) additive covariances, (iii) multi-task approaches, and (iv) non-Gaussian likelihoods.
- We illustrate the performance of our approach on several large, multi-dimensional datasets, including a consequential crime prediction problem, and a precipitation problem with n = 528,474 training points. We consider a variety of kernels, including deep kernels [24], diagonal corrections, and both Gaussian and non-Gaussian likelihoods.
- We have released code and tutorials as an extension to the GPML library [25] at https://github.com/kd383/GPML_SLD. A Python implementation of our approach is also available through the GPyTorch library: https://github.com/jrg365/gpytorch.
When using our approach in conjunction with SKI [13] for fast MVMs, GP kernel learning is
O(n + g(m)), for m inducing points and n training points, where g(m) ≤ m log m. With algebraic
approaches such as SKI we also do not need to worry about quadratic storage in inducing points,
since symmetric Toeplitz and Kronecker matrices can be stored with at most linear cost, without
needing to explicitly construct a matrix.
Although we here use SKI for fast MVMs, we emphasize that the proposed iterative approaches are
generally applicable, and can easily be used in conjunction with any method that admits fast MVMs,
including classical inducing point methods [8], finite basis expansions [9], and the popular stochastic
variational approaches [10]. Moreover, stochastic variational approaches can naturally be combined
with SKI to further accelerate MVMs [26].
We start in §2 with an introduction to GPs and kernel approximations. In §3 we introduce stochastic trace estimation and Chebyshev (§3.1) and Lanczos (§3.2) approximations. In §4, we describe the different sources of error in our approximations. In §5 we consider experiments on several large real-world data sets. We conclude in §6. The supplementary materials also contain several additional experiments and details.
2 Background
A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution [e.g., 5]. A GP can be used to define a distribution over functions f(x) ~ GP(μ(x), k(x, x′)), where each function value is a random variable indexed by x ∈ R^d, and μ : R^d → R and k : R^d × R^d → R are the mean and covariance functions of the process.

The covariance function is often chosen to be an RBF or Matérn kernel (see the supplementary material for more details). We denote any kernel hyperparameters by the vector θ. To be concise we will generally not explicitly denote the dependence of k and associated matrices on θ.

For any locations X = {x_1, ..., x_n} ⊆ R^d, f_X ~ N(μ_X, K_XX), where f_X and μ_X represent the vectors of function values for f and μ evaluated at each of the x_i ∈ X, and K_XX is the matrix whose (i, j) entry is k(x_i, x_j). Suppose we have a vector of corresponding function values y ∈ R^n, where each entry is contaminated by independent Gaussian noise with variance σ². Under a Gaussian process prior depending on the covariance hyperparameters θ, the log marginal likelihood is given by

    L(θ | y) = −(1/2) [ (y − μ_X)^T α + log|K̃_XX| + n log 2π ]    (1)

where α = K̃_XX^{−1} (y − μ_X) and K̃_XX = K_XX + σ² I. Optimization of (1) is expensive, since the cheapest way of evaluating log|K̃_XX| and its derivatives without taking advantage of the structure of K̃_XX involves computing the O(n³) Cholesky factorization of K̃_XX. O(n³) computations are too expensive for inference and learning beyond even just a few thousand points.
A popular approach to GP scalability is to replace the exact kernel k(x, z) by an approximate kernel that admits fast computations [8]. Several methods approximate k(x, z) via inducing points U = {u_j}_{j=1}^{m} ⊆ R^d. An example is the subset of regressors (SoR) kernel:

    k_SoR(x, z) = K_xU K_UU^{−1} K_Uz,

which is a low-rank approximation [27]. The SoR matrix K_XX^{SoR} ∈ R^{n×n} has rank at most m, allowing us to solve linear systems involving K̃_XX^{SoR} = K_XX^{SoR} + σ² I and to compute log|K̃_XX^{SoR}| in O(m²n + m³) time. Another popular kernel approximation is the fully independent training
conditional (FITC), which is a diagonal correction of SoR so that the diagonal is the same as for the
original kernel [15]. Thus kernel matrices from FITC have low-rank plus diagonal structure. This
modification has had exceptional practical significance, leading to improved point predictions and
much more realistic predictive uncertainty [8, 28], making FITC arguably the most popular approach
for scalable Gaussian processes.
Wilson and Nickisch [13] provide a mechanism for fast MVMs by proposing the structured kernel interpolation (SKI) approximation,

    K_XX ≈ W K_UU W^T    (2)

where W is an n-by-m matrix of interpolation weights; the authors of [13] use local cubic interpolation so that W is sparse. The sparsity in W makes it possible to naturally exploit algebraic structure (such as Kronecker or Toeplitz structure) in K_UU when the inducing points U are on a grid, for extremely fast matrix vector multiplications with the approximate K_XX even if the data inputs X are arbitrarily located. For instance, if K_UU is Toeplitz, then each MVM with the approximate K_XX costs only O(n + m log m). By contrast, placing the inducing points U on a grid for classical inducing point methods, such as SoR or FITC, does not result in substantial performance gains, due to the costly cross-covariance matrices K_xU and K_Uz.
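To make Eq. (2) concrete, one MVM with the approximate K_XX reduces to two sparse products and one structured product; in this sketch the K_UU multiply is a black box (e.g., a Toeplitz or Kronecker routine) and the names are illustrative only.

```python
import scipy.sparse as sp

def ski_matvec(W, kuu_matvec, x):
    """Approximate K_XX @ x as W (K_UU (W^T x)), following Eq. (2).

    W:          sparse (n x m) local cubic interpolation weights.
    kuu_matvec: fast multiply with the structured inducing-point matrix K_UU.
    """
    W = sp.csr_matrix(W)
    return W @ kuu_matvec(W.T @ x)   # O(n) sparse work plus one structured MVM
```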
3 Methods

Our goal is to estimate, for a symmetric positive definite matrix K̃,

    log|K̃| = tr(log(K̃))    and    (∂/∂θ_i) log|K̃| = tr( K̃^{−1} (∂K̃/∂θ_i) ),
where log is the matrix logarithm [29]. We compute the traces involved in both the log determinant
and its derivative via stochastic trace estimators [30], which approximate the trace of a matrix using
only matrix vector products.
The key idea is that for a given matrix A and a random probe vector z with independent entries with mean zero and variance one, tr(A) = E[z^T A z]; a common choice is to let the entries of the probe vectors be Rademacher random variables. In practice, we estimate the trace by the sample mean over n_z independent probe vectors. Often surprisingly few probe vectors suffice.
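As a concrete illustration, a minimal Hutchinson estimator with Rademacher probes might look as follows; the function name and NumPy usage are our own, not part of the paper's released code.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=10, rng=None):
    """Estimate tr(A) from matrix-vector products alone: tr(A) = E[z^T A z]."""
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        estimates.append(z @ matvec(z))       # one sample of z^T A z
    return float(np.mean(estimates))
```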
To estimate tr(log(K̃)), we need to multiply log(K̃) by probe vectors. We consider two ways to estimate log(K̃) z: by a polynomial approximation of log or by using the connection between the
Gaussian quadrature rule and the Lanczos method [19, 20]. In both cases, we show how to re-use the
same probe vectors for an inexpensive coupled estimator of the derivatives. In addition, we may use
standard radial basis function interpolation of the log determinant evaluated at a few systematically
chosen points in the hyperparameter space as an inexpensive surrogate for the log determinant.
3.1 Chebyshev

Chebyshev polynomials are defined by the recursion

    T_0(x) = 1,    T_1(x) = x,    T_{j+1}(x) = 2x T_j(x) − T_{j−1}(x) for j ≥ 1.

For f : [−1, 1] → R the Chebyshev interpolant of degree m is

    f(x) ≈ p_m(x) := Σ_{j=0}^{m} c_j T_j(x),    where    c_j = ((2 − δ_{j0})/(m + 1)) Σ_{k=0}^{m} f(x_k) T_j(x_k),

where δ_{j0} is the Kronecker delta and x_k = cos(π(k + 1/2)/(m + 1)) for k = 0, 1, 2, ..., m; see [31]. Using the Chebyshev interpolant of log(1 + αx), we approximate log|K̃| by

    log|K̃| − n log β = log|I + αB| ≈ Σ_{j=0}^{m} c_j tr(T_j(B))

when B = (K̃/β − I)/α has eigenvalues λ_i ∈ (−1, 1).
For stochastic estimation of tr(T_j(B)), we only need to compute z^T T_j(B) z for each given probe vector z. We compute vectors w_j = T_j(B) z and ∂w_j/∂θ_i via the coupled recurrences

    w_0 = z,    w_1 = B z,    w_{j+1} = 2 B w_j − w_{j−1} for j ≥ 1,
    ∂w_0/∂θ_i = 0,    ∂w_1/∂θ_i = (∂B/∂θ_i) z,
    ∂w_{j+1}/∂θ_i = 2 ( (∂B/∂θ_i) w_j + B (∂w_j/∂θ_i) ) − ∂w_{j−1}/∂θ_i for j ≥ 1.

This gives the estimators

    log|K̃| ≈ E[ Σ_{j=0}^{m} c_j z^T w_j ]    and    (∂/∂θ_i) log|K̃| ≈ E[ Σ_{j=0}^{m} c_j z^T (∂w_j/∂θ_i) ].

Thus, each derivative of the approximation costs two extra MVMs per term.
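Putting the recurrence and the trace estimator together, a sketch of the stochastic Chebyshev log-determinant estimate follows; computing the coefficients c_j of log(1 + αx) is assumed done separately, and all names here are illustrative rather than the paper's implementation.

```python
import numpy as np

def chebyshev_logdet(matvec_B, coeffs, n, beta, num_probes=10, rng=None):
    """Estimate log|K~| = n log(beta) + log|I + alpha B| via stochastic Chebyshev.

    matvec_B: multiply by B = (K~/beta - I)/alpha, whose spectrum lies in (-1, 1).
    coeffs:   Chebyshev coefficients c_0..c_m of log(1 + alpha x) on [-1, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        w_prev, w_curr = z, matvec_B(z)                  # w_0 = z, w_1 = B z
        acc = coeffs[0] * (z @ w_prev) + coeffs[1] * (z @ w_curr)
        for c in coeffs[2:]:
            w_prev, w_curr = w_curr, 2.0 * matvec_B(w_curr) - w_prev
            acc += c * (z @ w_curr)                      # c_j * z^T T_j(B) z
        total += acc
    return n * np.log(beta) + total / num_probes
```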
3.2 Lanczos

We can also approximate z^T log(K̃) z via a Lanczos decomposition; see [32] for discussion of a Lanczos-based computation of z^T f(K̃) z and [20, 21] for stochastic Lanczos estimation of log determinants. We run m steps of the Lanczos algorithm, which computes the decomposition

    K̃ Q_m = Q_m T + β_m q_{m+1} e_m^T

where Q_m = [q_1 q_2 ... q_m] ∈ R^{n×m} is a matrix with orthonormal columns such that q_1 = z/‖z‖, T ∈ R^{m×m} is tridiagonal, β_m is the residual, and e_m is the m-th Cartesian unit vector. We estimate

    z^T f(K̃) z ≈ e_1^T f(‖z‖² T) e_1    (3)

where e_1 is the first column of the identity. The Lanczos algorithm is numerically unstable. Several practical implementations resolve this issue [33, 34]. The approximation (3) corresponds to a Gauss quadrature rule for the Riemann-Stieltjes integral of the measure associated with the eigenvalue
distribution of K̃. It is exact when f is a polynomial of degree up to 2m − 1. This approximation is also exact when K̃ has at most m distinct eigenvalues, which is particularly relevant to Gaussian process regression, since frequently the kernel matrices only have a small number of eigenvalues that are not close to zero.
The Lanczos decomposition also allows us to estimate derivatives of the log determinant at minimal cost. Via the Lanczos decomposition, we have

    ĝ = Q_m (T^{−1} e_1 ‖z‖) ≈ K̃^{−1} z.

This approximation requires no additional matrix vector multiplications beyond those used to compute the Lanczos decomposition, which we already used to estimate log(K̃) z; in exact arithmetic, this is equivalent to m steps of CG. Computing ĝ in this way takes O(mn) additional time; subsequently, we only need one matrix-vector multiply by ∂K̃/∂θ_i for each probe vector to estimate

    tr( K̃^{−1} (∂K̃/∂θ_i) ) = E[ (K̃^{−1} z)^T (∂K̃/∂θ_i) z ].
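For comparison, below is a sketch of the stochastic Lanczos quadrature estimate of log|K̃|, written in the standard form z^T f(K̃) z ≈ ‖z‖² Σ_k τ_k² f(θ_k) with (θ_k, τ_k) the eigenvalues and first eigenvector components of the tridiagonal T; it omits the reorthogonalization that practical implementations [33, 34] add, and all names are our own.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def slq_logdet(matvec, n, m=25, num_probes=10, rng=None):
    """Stochastic Lanczos quadrature estimate of log|K~| from MVMs alone."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        z_norm = np.linalg.norm(z)
        q, q_prev, beta = z / z_norm, np.zeros(n), 0.0
        alphas, betas = [], []
        for _ in range(m):                     # plain Lanczos iteration
            v = matvec(q) - beta * q_prev
            alpha = q @ v
            v -= alpha * q
            beta = np.linalg.norm(v)
            alphas.append(alpha)
            if beta < 1e-10:                   # invariant subspace found
                break
            betas.append(beta)
            q_prev, q = q, v / beta
        theta, S = eigh_tridiagonal(np.array(alphas),
                                    np.array(betas[: len(alphas) - 1]))
        # Gauss quadrature for z^T log(K~) z with weights tau_k^2 = S[0, k]^2.
        total += z_norm**2 * np.sum(S[0, :] ** 2 * np.log(theta))
    return total / num_probes
```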
3.3 Diagonal correction to SKI

The SKI approximation may provide a poor estimate of the diagonal entries of the original kernel matrix for kernels with limited smoothness, such as the Matérn kernel. In general, diagonal corrections to scalable kernel approximations can lead to great performance gains. Indeed, the popular FITC method [15] is exactly a diagonal correction of subset of regressors (SoR).

We thus modify the SKI approximation to add a diagonal matrix D,

    K_XX ≈ W K_UU W^T + D,    (4)

such that the diagonal of the approximated K_XX is exact. In other words, D subtracts the diagonal of W K_UU W^T and adds the true diagonal of K_XX. This modification is not possible for the scaled eigenvalue method for approximating log determinants in [13], since adding a diagonal matrix makes it impossible to approximate the eigenvalues of K_XX from the eigenvalues of K_UU.

However, Eq. (4) still admits fast MVMs and thus works with our approach for estimating the log determinant and its derivatives. Computing D with SKI costs only O(n) flops since W is sparse for local cubic interpolation. We can therefore compute each (W^T e_i)^T K_UU (W^T e_i) in O(1) flops.
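As an illustration of the O(n) cost, the correction D can be assembled row by row from the sparse interpolation weights; this sketch assumes a dense K_UU and a SciPy sparse W, simplifications of ours rather than the paper's implementation.

```python
import numpy as np
import scipy.sparse as sp

def ski_diagonal_correction(W, K_UU, k_diag):
    """Build D so that diag(W K_UU W^T + D) equals the true kernel diagonal k_diag."""
    W = sp.csr_matrix(W)
    n = W.shape[0]
    approx_diag = np.empty(n)
    for i in range(n):                         # each row of W has O(1) nonzeros
        lo, hi = W.indptr[i], W.indptr[i + 1]
        idx, w = W.indices[lo:hi], W.data[lo:hi]
        # (W^T e_i)^T K_UU (W^T e_i), restricted to the touched inducing points
        approx_diag[i] = w @ K_UU[np.ix_(idx, idx)] @ w
    return sp.diags(k_diag - approx_diag)
```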
3.4 Estimating higher derivatives
We have already described how to use stochastic estimators to compute the log marginal likelihood
and its first derivatives. The same approach applies to computing higher-order derivatives for a
Newton-like iteration, to understand the sensitivity of the maximum likelihood parameters, or for
similar tasks. The first derivatives of the full log marginal likelihood are

    ∂L/∂θ_i = (1/2) [ α^T (∂K̃/∂θ_i) α − tr( K̃^{−1} (∂K̃/∂θ_i) ) ],

and the second derivatives of the two terms are

    (∂²/∂θ_i ∂θ_j) log|K̃| = tr( K̃^{−1} (∂²K̃/∂θ_i ∂θ_j) − K̃^{−1} (∂K̃/∂θ_i) K̃^{−1} (∂K̃/∂θ_j) ),

    (∂²/∂θ_i ∂θ_j) (y − μ_X)^T α = 2 α^T (∂K̃/∂θ_i) K̃^{−1} (∂K̃/∂θ_j) α − α^T (∂²K̃/∂θ_i ∂θ_j) α.
Superficially, evaluating the second derivatives would appear to require several additional solves
above and beyond those used to estimate the first derivatives of the log determinant. In fact, we can
get an unbiased estimator for the second derivatives with no additional solves, but only fast products
with the derivatives of the kernel matrices. Let z and w be independent probe vectors, and define
g = K̃^{−1} z and h = K̃^{−1} w. Then

    (∂²/∂θ_i ∂θ_j) log|K̃| = E[ g^T (∂²K̃/∂θ_i ∂θ_j) z − (g^T (∂K̃/∂θ_i) w)(h^T (∂K̃/∂θ_j) z) ],

    (∂²/∂θ_i ∂θ_j) (y − μ_X)^T α = 2 E[ (z^T (∂K̃/∂θ_i) α)(g^T (∂K̃/∂θ_j) α) ] − α^T (∂²K̃/∂θ_i ∂θ_j) α.
Hence, if we use the stochastic Lanczos method to compute the log determinant and its derivatives, the additional work required to obtain a second derivative estimate is one MVM by each second partial of the kernel for each probe vector and for α, one MVM of each first partial of the kernel with α, and a few dot products.
3.5 Radial basis functions
Another way to deal with the log determinant and its derivatives is to evaluate the log determinant
term at a few systematically chosen points in the space of hyperparameters and fit an interpolation
approximation to these values. This is particularly useful when the kernel depends on a modest
number of hyperparameters (e.g., half a dozen), and thus the number of points we need to precompute
is relatively small. We refer to this method as a surrogate, since it provides an inexpensive substitute
for the log determinant and its derivatives. For our surrogate approach, we use radial basis function
(RBF) interpolation with a cubic kernel and a linear tail. See e.g. [35-38] and the supplementary
material for more details on RBF interpolation.
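For illustration, such a surrogate can be built with an off-the-shelf interpolator; SciPy's RBFInterpolator with a cubic kernel and degree-1 polynomial tail matches the description above, though the paper's own implementation lives in the GPML extension, and the names below are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_logdet_surrogate(thetas, logdets):
    """Fit a cubic RBF surrogate (with linear tail) to precomputed log-determinants.

    thetas:  (p, d) array of hyperparameter settings.
    logdets: length-p vector of log|K~(theta)| values, e.g., from stochastic Lanczos.
    """
    return RBFInterpolator(thetas, logdets, kernel="cubic", degree=1)

# Hypothetical usage inside hyperparameter optimization:
# surrogate = fit_logdet_surrogate(theta_grid, logdet_values)
# cheap_logdet = surrogate(theta_new.reshape(1, -1))
```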
4 Error properties
In addition to the usual errors from sources such as solver termination criteria and floating point arithmetic, our approach to kernel learning involves several additional sources of error: we approximate the true kernel with one that enables fast MVMs, we approximate traces using stochastic estimation, and we approximate the actions of log(K̃) and K̃^{−1} on probe vectors.
We can compute first-order estimates of the sensitivity of the log likelihood to perturbations in the kernel using the same stochastic estimators we use for the derivatives with respect to hyperparameters. For example, if L_ref is the likelihood for a reference kernel K̃_ref = K̃ + E, then

    L_ref(θ | y) = L(θ | y) − (1/2) E[ g^T E z − α^T E α ] + O(‖E‖²),

and we can bound the change in likelihood at first order by ‖E‖ (‖g‖ ‖z‖ + ‖α‖²). Given bounds on the norms of ∂E/∂θ_i, we can similarly estimate changes in the gradient of the likelihood, allowing us to bound how the marginal likelihood hyperparameter estimates depend on kernel approximations.
If K̃ = U Λ U^T + σ² I, the Hutchinson trace estimator has known variance [39]

    Var[ z^T log(K̃) z ] = 2 Σ_{i≠j} [log(K̃)]²_{ij} ≤ 2 Σ_{i=1}^{n} log(1 + λ_i/σ²)².

If the eigenvalues of the kernel matrix without noise decay rapidly enough compared to σ², the variance will be small compared to the magnitude of tr(log K̃) = 2n log σ + Σ_{i=1}^{n} log(1 + λ_i/σ²). Hence, we need fewer probe vectors to obtain reasonable accuracy than one would expect from bounds that are blind to the matrix structure. In our experiments, we typically only use 5-10 probes, and we use the sample variance across these probes to estimate a posteriori the stochastic component of the error in the log likelihood computation. If we are willing to estimate the Hessian of the log likelihood,
we can increase rates of convergence for finding kernel hyperparameters.
The Chebyshev approximation scheme requires O(√κ log(κ/ε)) steps to obtain an O(ε) approximation error in computing z^T log(K̃) z, where κ = λ_max/λ_min is the condition number of K̃ [19]. This behavior is independent of the distribution of eigenvalues within the interval [λ_min, λ_max], and is close to optimal when eigenvalues are spread quasi-uniformly across the interval. Nonetheless, when the condition number is large, convergence may be quite slow. The Lanczos approach converges at least twice as fast as Chebyshev in general [20, Remark 1], and converges much more rapidly when the eigenvalues are not uniform within the interval, as is the case with log determinants of many kernel matrices. Hence, we recommend the Lanczos approach over the Chebyshev approach in general. In all of our experiments, the error associated with approximating z^T log(K̃) z by Lanczos was dominated by other sources of error.
5 Experiments
We test our stochastic trace estimator with both Chebyshev and Lanczos approximation schemes on:
(1) a sound time series with missing data, using a GP with an RBF kernel; (2) a three-dimensional
space-time precipitation data set with over half a million training points, using a GP with an RBF
kernel; (3) a two-dimensional tree growth data set using a log-Gaussian Cox process model with an
RBF kernel; (4) a three-dimensional space-time crime dataset with a log-Gaussian Cox model with Matérn 3/2 and spectral mixture kernels; and (5) a high-dimensional feature space using the deep
kernel learning framework [24]. In the supplementary material we also include several additional
experiments to illustrate particular aspects of our approach, including kernel hyperparameter recovery,
diagonal corrections (Section 3.3), and surrogate methods (Section 3.5). Throughout we use the SKI
method [13] of Eq. (2) for fast MVMs. We find that the Lanczos and surrogate methods are able to
do kernel recovery and inference significantly faster and more accurately than competing methods.
5.1 Natural sound modeling
Here we consider the natural sound benchmark in [13], shown in Figure 1(a). Our goal is to recover
contiguous missing regions in a waveform with n = 59,306 training points. We exploit Toeplitz structure in the K_UU matrix of our SKI approximate kernel for accelerated MVMs.
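For background, a symmetric Toeplitz K_UU admits O(m log m) MVMs by embedding it in a circulant matrix and using the FFT; the sketch below is the standard construction, not code from the paper.

```python
import numpy as np

def sym_toeplitz_matvec(col, v):
    """Multiply the symmetric Toeplitz matrix defined by its first column by v."""
    m = len(col)
    # First column of the (2m - 2) x (2m - 2) circulant embedding.
    c = np.concatenate([col, col[-2:0:-1]])
    prod = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v, n=len(c)))
    return np.real(prod[:m])    # top block of the circulant product is K_UU v
```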
The experiment in [13] only considered scalable inference and prediction, but not hyperparameter
learning, since the scaled eigenvalue approach requires all the eigenvalues of an m × m Toeplitz matrix, which can be computationally prohibitive with cost O(m²). However, evaluating the marginal
likelihood on this training set is not an obstacle for Lanczos and Chebyshev since we can use fast
MVMs with the SKI approximation at a cost of O(n + m log m).
In Figure 1(b), we show how Lanczos, Chebyshev and surrogate approaches scale with the number
of inducing points m compared to the scaled eigenvalue method and FITC. We use 5 probe vectors
and 25 iterations for Lanczos, both when building the surrogate and for hyperparameter learning
with Lanczos. We also use 5 probe vectors for Chebyshev and 100 moments. Figure 1(b) shows the
runtime of the hyperparameter learning phase for different numbers of inducing points m, where
Lanczos and the surrogate are clearly more efficient than scaled eigenvalues and Chebyshev. For
hyperparameter learning, FITC took several hours to run, compared to minutes for the alternatives;
we therefore exclude FITC from Figure 1(b). Figure 1(c) shows the time to do inference on the 691
test points, while 1(d) shows the standardized mean absolute error (SMAE) on the same test points.
As expected, Lanczos and surrogate make accurate predictions much faster than Chebyshev, scaled
eigenvalues, and FITC. In short, Lanczos and the surrogate approach are much faster than alternatives
for hyperparameter learning with a large number of inducing points and training points.
[Figure 1 panels: (a) sound data intensity over time (s); (b) hyperparameter recovery runtime (s) versus number of inducing points m; (c) inference runtime (s) versus m; (d) SMAE versus runtime (s).]
Figure 1: Sound modeling using 59,306 training points and 691 test points. The intensity of the time series can be seen in (a). Train time for RBF kernel hyperparameters is in (b) and the time for inference is in (c). The standardized mean absolute error (SMAE) as a function of time for an evaluation of the marginal likelihood and all derivatives is shown in (d). Surrogate is (--), Lanczos is (- - -), Chebyshev is (· ·), scaled eigenvalues is (-+-), and FITC is (-o-).
5.2 Daily precipitation prediction
This experiment involves precipitation data from the year of 2010 collected from around 5500 weather
stations in the US.¹ The hourly precipitation data is preprocessed into daily data if full information of the day is available. The dataset has 628,474 entries in terms of precipitation per day given the date, longitude and latitude. We randomly select 100,000 data points as test points and use the remaining points for training. We then perform hyperparameter learning and prediction with the RBF kernel, using Lanczos, scaled eigenvalues, and exact methods.

¹ https://catalog.data.gov/dataset/u-s-hourly-precipitation-data
For Lanczos and scaled eigenvalues, we optimize the hyperparameters on the subset of data for
January 2010, with an induced grid of 100 points per spatial dimension and 300 in the temporal
dimension. Due to memory constraints we only use a subset of 12,000 entries for training with
the exact method. While scaled eigenvalues can perform well when fast eigendecompositions are
possible, as in this experiment, Lanczos nonetheless still runs faster and with slightly lower MSE.
Method               n     m    MSE    Time [min]
Lanczos              528k  3M   0.613  14.3
Scaled eigenvalues   528k  3M   0.621  15.9
Exact                12k   --   0.903  11.8
Table 1: Prediction comparison for the daily precipitation data showing the number of training points
n, number of induced grid points m, the mean squared error, and the inference time.
Incidentally, we are able to use 3 million inducing points in Lanczos and scaled eigenvalues, which is enabled by the SKI representation [13] of covariance matrices, for a very accurate approximation. This number of inducing points m is unprecedented for typical alternatives which scale as O(m³).
5.3 Hickory data
In this experiment, we apply Lanczos to the log-Gaussian Cox process model with a Laplace
approximation for the posterior distribution. We use the RBF kernel and the Poisson likelihood in
our model. The scaled eigenvalue method does not apply directly to non-Gaussian likelihoods; we
thus applied the scaled eigenvalue method in [13] in conjunction with the Fiedler bound in [18] for
the scaled eigenvalue comparison. Indeed, a key advantage of the Lanczos approach is that it can be
applied whenever fast MVMs are available, which means no additional approximations such as the
Fiedler bound are required for non-Gaussian likelihoods.
This dataset, which comes from the R package spatstat, is a point pattern of 703 hickory trees in a
forest in Michigan. We discretize the area into a 60 × 60 grid and fit our model with exact, scaled
eigenvalues, and Lanczos. We see in Table 2 that Lanczos recovers hyperparameters that are much
closer to the exact values than the scaled eigenvalue approach. Figure 2 shows that the predictions by
Lanczos are also indistinguishable from the exact computation.
Method               s_f    ℓ1     ℓ2     −log p(y|θ)  Time [s]
Exact                0.696  0.063  0.085  1827.56      465.9
Lanczos              0.693  0.066  0.096  1828.07      21.4
Scaled eigenvalues   0.543  0.237  0.112  1851.69      2.5
Table 2: Hyperparameters recovered on the Hickory dataset.
[Figure 2 panels: (a) point pattern data; (b) prediction by exact; (c) scaled eigenvalues; (d) Lanczos.]
Figure 2: Predictions by exact, scaled eigenvalues, and Lanczos on the Hickory dataset.
5.4 Crime prediction
In this experiment, we apply Lanczos with the spectral mixture kernel to the crime forecasting
problem considered in [18]. This dataset consists of 233,088 incidents of assault in Chicago from
January 1, 2004 to December 31, 2013. We use the first 8 years for training and attempt to predict the
crime rate for the last 2 years. For the spatial dimensions, we use the log-Gaussian Cox process model,
with the Matérn-5/2 kernel, the negative binomial likelihood, and the Laplace approximation for the posterior. We use a spectral mixture kernel with 20 components and an extra constant component for the temporal dimension. We discretize the data into a 17 × 26 spatial grid corresponding to 1-by-1 mile grid cells. In the temporal dimension we sum our data by weeks for a total of 522 weeks. After removing the cells that are outside Chicago, we have a total of 157,644 observations.
The results for Lanczos and scaled eigenvalues (in conjunction with the Fiedler bound due to the
non-Gaussian likelihood) can be seen in Table 3. The Lanczos method used 5 Hutchinson probe
vectors and 30 Lanczos steps. For both methods we allow 100 iterations of LBFGS to recover
hyperparameters and we often observe early convergence. While the RMSE for Lanczos and
scaled eigenvalues happen to be close on this example, the recovered hyperparameters using scaled
eigenvalues are very different than for Lanczos. For example, the scaled eigenvalue method learns
a much larger σ² than Lanczos, indicating model misspecification. In general, as the data become
increasingly non-Gaussian the Fiedler bound (used for fast scaled eigenvalues on non-Gaussian
likelihoods) will become increasingly misspecified, while Lanczos will be unaffected.
Method               ℓ1    ℓ2    σ²      T_recovery [s]  T_prediction [s]  RMSE_train  RMSE_test
Lanczos              0.65  0.67  69.72   264             10.30             1.17        1.33
Scaled eigenvalues   0.32  0.10  191.17  67              3.75              1.19        1.36
Table 3: Hyperparameters recovered, recovery time and RMSE for Lanczos and scaled eigenvalues
on the Chicago assault data. Here ℓ1 and ℓ2 are the length scales in spatial dimensions and σ² is the noise level. T_recovery is the time for recovering hyperparameters. T_prediction is the time for prediction at all 157,644 observations (including training and testing).
5.5 Deep kernel learning
To handle high-dimensional datasets, we bring our methods into the deep kernel learning framework
[24] by replacing the final layer of a pre-trained deep neural network (DNN) with a GP. This
experiment uses the gas sensor dataset from the UCI machine learning repository. It has 2565
instances with 128 dimensions. We pre-train a DNN, then attach a Gaussian process with RBF
kernels to the two-dimensional output of the second-to-last layer. We then further train all parameters
of the resulting kernel, including the weights of the DNN, through the GP marginal likelihood. In
this example, Lanczos and the scaled eigenvalue approach perform similarly well. Nonetheless, we
see that Lanczos can effectively be used with SKI on a high dimensional problem to train hundreds
of thousands of kernel parameters.
Method               RMSE             Time [s]
DNN                  0.1366 ± 0.0387  0.4438
Lanczos              0.1053 ± 0.0248  2.0680
Scaled eigenvalues   0.1045 ± 0.0228  1.6320
Table 4: Prediction RMSE and per training iteration runtime.
6 Discussion
There are many cases in which fast MVMs can be achieved, but it is difficult or impossible to
efficiently compute a log determinant. We have developed a framework for scalable and accurate
estimates of a log determinant and its derivatives relying only on MVMs. We particularly consider
scalable kernel learning, showing the promise of stochastic Lanczos estimation combined with
a pre-computed surrogate model. We have shown the scalability and flexibility of our approach
through experiments with kernel learning for several real-world data sets using both Gaussian and
non-Gaussian likelihoods, and highly parametrized deep kernels.
Iterative MVM approaches have great promise for future exploration. We have only begun to explore
their significant generality. In addition to log determinants, the methods presented here could be
adapted to fast posterior sampling, diagonal estimation, matrix square roots, and many other standard
operations. The proposed methods only depend on fast MVMs, and the structure necessary for
fast MVMs often exists, or can be readily created. We have here made use of SKI [13] to create
such structure. But other approaches, such as stochastic variational methods [10], could be used
or combined with SKI for fast MVMs, as in [26]. Moreover, iterative MVM methods naturally
harmonize with GPU acceleration, and are therefore likely to increase in their future applicability and
popularity. Finally, one could explore the ideas presented here for scalable higher order derivatives,
making use of Hessian methods for greater convergence rates.
References
[1] Alex Kulesza, Ben Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3):123-286, 2012.
[2] David JC MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of
Technology, 1992.
[3] David JC MacKay. Information theory, inference and learning algorithms. Cambridge university
press, 2003.
[4] Havard Rue and Leonhard Held. Gaussian Markov random fields: theory and applications.
CRC Press, 2005.
[5] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for Machine Learning. The MIT
Press, 2006.
[6] Christos Boutsidis, Petros Drineas, Prabhanjan Kambadur, Eugenia-Maria Kontopoulou, and
Anastasios Zouzias. A randomized algorithm for approximating the log determinant of a
symmetric positive definite matrix. arXiv preprint arXiv:1503.00374, 2015.
[7] Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In Neural Information Processing Systems (NIPS), 2001.
[8] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939-1959, 2005.
[9] Q. Le, T. Sarlos, and A. Smola. Fastfood - computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, pages 244-252, 2013.
[10] J Hensman, N Fusi, and N.D. Lawrence. Gaussian processes for big data. In Uncertainty in
Artificial Intelligence (UAI). AUAI Press, 2013.
[11] Andrew Gordon Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[12] Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, and John P Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pages 3626-3634, 2014.
[13] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured
Gaussian processes (KISS-GP). International Conference on Machine Learning (ICML), 2015.
[14] William Herlands, Andrew Wilson, Hannes Nickisch, Seth Flaxman, Daniel Neill, Wilbert
Van Panhuis, and Eric Xing. Scalable Gaussian processes for characterizing multidimensional
change surfaces. Artificial Intelligence and Statistics, 2016.
[15] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In
Advances in neural information processing systems (NIPS), volume 18, page 1257. MIT Press,
2006.
[16] M. Fiedler. Hankel and Loewner matrices. Linear Algebra and Its Applications, 58:75-95, 1984.
[17] Hermann Weyl. Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71(4):441-479, 1912.
[18] Seth Flaxman, Andrew Wilson, Daniel Neill, Hannes Nickisch, and Alex Smola. Fast Kronecker inference in Gaussian processes with non-Gaussian likelihoods. In International Conference on Machine Learning, pages 607-616, 2015.
[19] Insu Han, Dmitry Malioutov, and Jinwoo Shin. Large-scale log-determinant computation through stochastic Chebyshev expansions. In ICML, pages 908-917, 2015.
[20] Shashanka Ubaru, Jie Chen, and Yousef Saad. Fast estimation of tr(f(A)) via stochastic Lanczos quadrature.
[21] Zhaojun Bai, Mark Fahey, Gene H Golub, M Menon, and E Richter. Computing partial eigenvalue sums in electronic structure calculations. Technical report, Tech. Report SCCM-9803, Stanford University, 1998.
[22] D MacKay and MN Gibbs. Efficient implementation of gaussian processes. Neural Computation,
1997.
[23] Michael L Stein, Jie Chen, Mihai Anitescu, et al. Stochastic approximation of score functions for Gaussian processes. The Annals of Applied Statistics, 7(2):1162-1191, 2013.
[24] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 370-378, 2016.
[25] Carl Edward Rasmussen and Hannes Nickisch. Gaussian processes for machine learning (GPML) toolbox. Journal of Machine Learning Research (JMLR), 11:3011-3015, Nov 2010.
[26] Andrew G Wilson, Zhiting Hu, Ruslan R Salakhutdinov, and Eric P Xing. Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems, pages 2586-2594, 2016.
[27] Bernhard W Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B (Methodological), pages 1-52, 1985.
[28] Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Christopher KI Williams. Approximation methods for Gaussian process regression. Large-scale kernel machines, pages 203-223, 2007.
[29] Nicholas J Higham. Functions of matrices: theory and computation. SIAM, 2008.
[30] Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 19(2):433-450, 1990.
[31] Amparo Gil, Javier Segura, and Nico Temme. Numerical Methods for Special Functions. SIAM,
2007.
[32] Gene Golub and Gérard Meurant. Matrices, Moments and Quadrature with Applications. Princeton University Press, 2010.
[33] Jane K Cullum and Ralph A Willoughby. Lanczos algorithms for large symmetric eigenvalue
computations: Vol. I: Theory. SIAM, 2002.
[34] Youcef Saad. Numerical methods for large eigenvalue problems. Manchester University Press,
1992.
[35] Martin Dietrich Buhmann. Radial basis functions. Acta Numerica 2000, 9:1-38, 2000.
[36] Gregory E Fasshauer. Meshfree approximation methods with MATLAB, volume 6. World
Scientific, 2007.
[37] Robert Schaback and Holger Wendland. Kernel techniques: from machine learning to meshless methods. Acta Numerica, 15:543-639, 2006.
[38] Holger Wendland. Scattered data approximation, volume 17. Cambridge university press, 2004.
[39] Haim Avron and Sivan Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. J. ACM, 58(2):8:1-8:34, 2011. doi: 10.1145/1944345.1944349. URL http://dx.doi.org/10.1145/1944345.1944349.
| 7212 |@word determinant:30 cox:4 repository:1 polynomial:3 norm:1 consequential:2 termination:1 willing:1 hu:2 simulation:1 covariance:9 decomposition:5 q1:2 concise:1 tr:11 moment:2 bai:1 series:4 score:1 daniel:2 renewed:1 existing:1 elliptical:1 current:1 com:3 recovered:3 dx:1 must:2 readily:1 determinantal:3 gpu:1 john:1 additive:2 realistic:1 chicago:3 happen:1 weyl:2 enables:1 numerical:2 fasshauer:1 alone:1 half:2 prohibitive:2 kxu:2 fewer:1 intelligence:3 xk:3 short:1 provides:2 location:1 org:1 harmonize:1 become:2 consists:1 fitting:1 introduce:1 x0:1 expected:1 indeed:2 behavior:1 frequently:1 multi:3 salakhutdinov:2 relying:2 riemann:1 automatically:2 resolve:1 gov:1 solver:2 precipitation:8 mvms:26 estimating:4 moreover:3 xx:9 suffice:1 q2:1 developed:1 proposing:1 finding:1 temporal:3 pseudo:1 avron:1 multidimensional:2 auai:1 growth:1 runtime:4 exactly:1 scaled:31 qm:5 rm:1 k2:1 unit:1 intervention:1 appear:1 arguably:1 positive:5 t1:1 hourly:2 local:3 modify:1 interpolation:10 plus:1 twice:1 nz:1 acta:2 challenging:1 co:1 limited:3 factorization:1 practical:2 testing:1 practice:1 definite:5 silverman:1 richter:1 shin:1 j0:2 foundational:1 area:1 significantly:1 weather:1 word:1 radial:4 pre:3 zoubin:2 get:1 eriksson:1 close:3 storage:1 context:1 impossible:2 influence:1 optimize:1 equivalent:1 sarlos:1 missing:2 go:1 williams:2 starting:1 independently:1 recovery:4 m2:2 estimator:9 rule:2 orthonormal:1 sequently:1 enabled:1 handle:1 fx:2 laplace:2 annals:1 suppose:1 exact:14 gps:3 us:1 carl:4 trend:1 expensive:2 particularly:3 located:2 approximated:1 taskar:1 preprint:1 thousand:2 wj:9 region:1 principled:1 substantial:2 complexity:3 interpolant:2 nehorai:1 trained:1 depend:2 algebra:1 predictive:1 eric:3 basis:6 drineas:1 easily:1 accelerate:1 joint:1 seth:2 various:1 train:4 fiedler:6 distinct:1 fast:35 effective:1 describe:1 doi:2 artificial:3 hyper:1 outside:1 whose:1 heuristic:1 supplementary:4 solve:1 quite:1 larger:1 elad:1 stanford:1 toeplitz:5 statistic:4 gp:13 itself:1 final:1 advantage:2 pressing:1 eigenvalue:46 unprecedented:1 loewner:1 took:1 propose:1 dietrich:1 product:3 relevant:1 uci:1 rapidly:3 date:1 flexibility:1 inducing:13 scalability:2 az:1 exploiting:1 convergence:4 manchester:1 rademacher:1 incidentally:1 converges:2 ben:1 tions:1 derive:1 andrew:8 develop:2 illustrate:2 depending:1 ij:1 approxima:1 eq:2 edward:5 longitude:1 recovering:1 solves:2 involves:3 come:1 waveform:1 hermann:1 stochastic:24 exploration:1 human:1 enable:1 material:4 crc:1 require:2 sor:10 extension:2 correction:8 around:1 considered:2 great:2 lawrence:1 week:2 predict:1 early:1 released:1 purpose:1 estimation:7 ruslan:2 applicable:1 exceptional:1 create:1 mit:3 clearly:1 sensor:1 gaussian:44 pn:1 cornell:1 wilson:10 gpml:2 conjunction:4 structuring:1 focus:1 maria:1 methodological:1 rank:4 likelihood:28 tech:1 contrast:1 cg:1 posteriori:1 inference:11 inaccurate:1 typically:1 cunningham:1 mth:1 dnn:4 quasi:1 einer:1 leonhard:1 ralph:1 issue:1 flexible:1 multiplies:1 spatial:4 smoothing:2 mackay:3 special:1 marginal:10 field:1 construct:1 beach:1 sampling:1 placing:1 holger:2 icml:2 future:2 report:2 contaminated:1 recommend:1 spline:2 few:5 gordon:5 randomly:1 simultaneously:2 xtj:1 floating:1 phase:1 william:1 attempt:1 interest:1 highly:3 multiply:2 evaluation:1 golub:2 mixture:3 tj:8 held:1 differentialgleichungen:1 accurate:5 integral:1 closer:1 partial:3 necessary:2 daily:3 modest:1 indexed:1 iv:2 kxx:11 logarithm:1 tree:2 re:2 minimal:1 instance:2 column:2 
modeling:2 obstacle:1 contiguous:1 lanczos:59 cost:7 applicability:1 entry:7 subset:4 uniform:2 kq:1 hundred:1 eigendecompositions:2 tridiagonal:1 too:2 stored:1 hutchinson:3 gregory:1 nickisch:7 calibrated:1 combined:3 st:1 international:4 sensitivity:2 randomized:2 siam:3 probabilistic:1 dong:1 regressor:1 michael:2 quickly:1 w1:2 squared:1 thesis:2 nico:1 derivative:31 leading:2 exclude:1 chemistry:1 jc:2 explicitly:2 kzk2:1 depends:1 blind:1 tion:1 root:1 stieltjes:1 view:1 candela:2 extrapolation:2 start:1 decaying:1 recover:2 xing:3 rmse:4 square:1 accuracy:1 variance:5 kek:1 efficiently:2 bayesian:3 accurately:2 onero:1 unaffected:1 malioutov:1 explain:1 bindel:1 whenever:2 inexpensive:3 nonetheless:3 boutsidis:1 involved:1 naturally:3 associated:3 recovers:1 petros:1 gain:2 arye:1 dataset:7 popular:7 begun:1 cj:5 hilbert:1 javier:1 back:1 worry:1 higher:4 day:2 improved:1 hannes:5 rard:1 evaluated:2 shashanka:1 generality:1 just:1 smola:2 implicit:1 hand:1 joaquin:2 expressive:1 ei:2 replacing:1 christopher:1 menon:1 scientific:1 usa:1 building:1 contain:1 true:2 unbiased:1 regularization:1 hence:3 symmetric:5 mile:1 deal:1 insu:1 indistinguishable:1 recurrence:1 razor:2 die:1 criterion:1 bring:1 variational:4 snelson:1 novel:2 recently:2 misspecified:1 superior:1 common:2 volume:4 million:2 extend:2 tail:1 numerically:1 refer:1 significant:1 mihai:1 cambridge:3 gibbs:1 phillips:1 smoothness:1 automatic:3 tuning:1 grid:8 rd:5 similarly:3 pm:1 i6:1 had:1 dot:1 pute:1 han:1 surface:1 add:2 posterior:3 hamburg:1 arbitrarily:2 der:2 seen:2 additional:9 greater:1 zouzias:1 converge:1 semi:1 ii:2 arithmetic:2 full:2 needing:1 sound:5 anastasios:1 technical:1 faster:4 calculation:1 cross:2 long:1 e1:3 laplacian:1 prediction:13 scalable:14 involving:1 regression:4 essentially:1 poisson:1 bz:1 arxiv:2 iteration:4 kernel:82 represent:1 achieved:1 cell:2 dec:1 background:1 addition:3 interval:3 source:4 extra:2 saad:2 eigenspectra:1 induced:2 december:1 near:1 leverage:1 iii:2 enough:1 variety:1 xj:1 fit:3 competing:1 idea:2 chebyshev:21 bottleneck:1 t0:1 utility:1 url:1 forecasting:1 penalty:1 algebraic:3 hessian:2 action:1 remark:1 deep:8 jie:2 generally:3 useful:1 matlab:1 involve:1 stein:1 annalen:1 simplest:1 http:4 tutorial:1 gil:1 delta:1 arising:1 per:4 popularity:1 mathematische:1 hyperparameter:9 mat:4 promise:2 vol:1 numerica:2 key:2 sivan:1 preprocessed:1 assault:2 sum:4 year:3 run:3 package:1 uncertainty:2 hankel:1 throughout:1 reasonable:2 electronic:1 fusi:1 scaling:1 qui:1 bound:9 layer:2 ki:1 followed:1 haim:1 neill:2 quadratic:1 adapted:1 kronecker:4 constraint:1 alex:2 n3:3 dominated:1 aspect:2 extremely:1 min:3 relatively:1 martin:1 structured:5 precompute:1 poor:2 conjugate:1 across:2 slightly:1 em:1 increasingly:2 modification:2 making:2 computationally:1 mechanism:1 needed:2 generalizes:1 available:5 operation:1 apply:4 probe:17 observe:1 spectral:3 nicholas:1 fahey:1 alternative:4 jane:1 original:2 substitute:1 standardized:2 remaining:1 include:1 binomial:1 graphical:2 newton:1 unifying:1 unsuitable:1 exploit:3 ghahramani:2 uj:1 approximating:4 classical:2 society:1 already:2 quantity:1 parametric:1 costly:2 dependence:1 usual:1 diagonal:17 surrogate:16 loglinear:1 gradient:4 bwj:1 parametrized:1 quinonero:1 w0:2 collected:1 unstable:1 assuming:1 code:1 length:1 prabhanjan:1 kambadur:1 providing:1 ubaru:1 kun:1 difficult:1 robert:1 theorie:1 trace:10 negative:1 rise:1 implementation:3 yousef:1 ski:18 perform:4 allowing:2 discretize:2 observation:2 datasets:4 markov:1 
benchmark:1 finite:2 gas:1 january:2 situation:1 flop:2 communication:1 misspecification:1 rn:7 varied:1 perturbation:1 station:1 intensity:2 david:4 required:3 toolbox:1 connection:1 crime:5 catalog:1 california:1 auf:1 hour:1 toledo:1 nip:3 beyond:4 able:2 pattern:4 latitude:1 kulesza:1 sparsity:1 including:8 max:2 memory:1 zhiting:2 royal:1 power:1 natural:2 rely:1 attach:1 kek2:1 buhmann:1 recursion:1 residual:1 mn:2 fitc:10 scheme:2 github:2 technology:1 library:2 created:1 extract:1 coupled:2 flaxman:2 prior:1 discovery:1 python:1 multiplication:4 asymptotic:1 fully:1 expect:1 var:1 foundar:1 validation:1 eigendecomposition:2 degree:2 incident:1 systematically:2 occam:3 surprisingly:1 last:2 rasmussen:5 gilboa:1 allow:2 understand:1 institute:1 taking:1 characterizing:1 absolute:2 sparse:4 van:1 curve:1 hensman:1 kzk:2 dimension:7 xn:1 evaluating:4 world:3 rich:1 quantum:1 computes:1 author:2 collection:1 superficially:1 regressors:1 made:1 adaptive:1 approximate:15 emphasize:1 nov:1 dmitry:1 bernhard:1 gene:2 uai:1 conclude:1 xi:2 cullum:1 alternatively:1 spectrum:1 iterative:5 decade:1 table:6 ku:11 ca:1 forest:1 expansion:4 mse:2 rue:1 da:1 cheapest:1 significance:1 spread:1 fastfood:1 big:1 noise:3 hyperparameters:14 ref:1 quadrature:4 x1:1 scattered:1 cubic:3 slow:1 precision:1 favoured:1 mvm:7 sub:1 christos:1 sf:1 jmlr:1 learns:1 dozen:1 minute:1 removing:1 showing:2 decay:1 admits:3 maximizers:1 exists:1 adding:1 effectively:1 importance:1 higham:1 phd:2 magnitude:1 cartesian:1 chen:2 jinwoo:1 michigan:1 explore:2 lbfgs:1 likely:1 ez:1 expressed:1 kiss:1 wendland:2 applies:1 corresponds:1 relies:1 acm:1 willoughby:1 conditional:1 goal:2 identity:1 acceleration:1 rbf:11 replace:1 change:4 specifically:1 typical:1 uniformly:1 called:2 total:2 gauss:1 m3:2 indicating:1 select:1 cholesky:1 mark:1 accelerated:1 evaluate:1 princeton:1 |
Poincaré Embeddings for Learning Hierarchical Representations
Maximilian Nickel
Facebook AI Research
[email protected]
Douwe Kiela
Facebook AI Research
[email protected]
Abstract
Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, state-of-the-art embedding methods
typically do not account for latent hierarchical structures which are characteristic
for many complex symbolic datasets. In this work, we introduce a new approach
for learning hierarchical representations of symbolic data by embedding them into
hyperbolic space, or more precisely into an n-dimensional Poincaré ball. Due to
the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity.
We present an efficient algorithm to learn the embeddings based on Riemannian
optimization and show experimentally that Poincaré embeddings can outperform
Euclidean embeddings significantly on data with latent hierarchies, both in terms
of representation capacity and in terms of generalization ability.
1 Introduction
Learning representations of symbolic data such as text, graphs and multi-relational data has become
a central paradigm in machine learning and artificial intelligence. For instance, word embeddings
such as word2vec [20], GloVe [27] and fastText [5, 16] are widely used for tasks ranging
from machine translation to sentiment analysis. Similarly, embeddings of graphs such as latent space
embeddings [15], node2vec [13], and DeepWalk [28] have found important applications for
community detection and link prediction in social networks. Furthermore, embeddings of multi-relational data such as RESCAL [22], TransE [7], and Universal Schema [31] are being used for
knowledge graph completion and information extraction.
Typically, the objective of an embedding method is to organize symbolic objects (e.g., words, entities,
concepts) in a way such that their similarity or distance in the embedding space reflects their semantic
similarity. For instance, Mikolov et al. [20] embed words in $\mathbb{R}^d$ such that their inner product is
maximized when words co-occur within similar contexts in text corpora. This is motivated by the
distributional hypothesis [14, 11], i.e., that the meaning of words can be derived from the contexts in
which they appear. Similarly, Hoff et al. [15] embed social networks such that the distance between
social actors is minimized if they are connected in the network. This reflects the homophily property
that is characteristic for many networks, i.e. that similar actors tend to associate with each other.
Although embedding methods have proven successful in numerous applications, they suffer from
a fundamental limitation: their ability to model complex patterns is inherently bounded by the
dimensionality of the embedding space. For instance, Nickel et al. [23] showed that linear embeddings
of graphs can require a prohibitively large dimensionality to model certain types of relations. Although
non-linear embeddings can mitigate this problem [8], complex graph patterns can still require a
computationally infeasible embedding dimension. As a consequence, no method yet exists that is
able to compute embeddings of large graph-structured data (such as social networks, knowledge
graphs or taxonomies) without loss of information. Since the ability to express information is a
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
precondition for learning and generalization, it is therefore important to increase the representation
capacity of embedding methods such that they can realistically be used to model complex patterns on
a large scale. In this work, we focus on mitigating this problem for a certain class of symbolic data,
i.e., large datasets whose objects can be organized according to a latent hierarchy, a property that is
inherent in many complex datasets. For instance, the existence of power-law distributions in datasets
can often be traced back to hierarchical structures [29]. Prominent examples of power-law distributed
data include natural language (Zipf's law [40]) and scale-free networks such as social and semantic
networks [32]. Similarly, the empirical analysis of Adcock et al. [1] indicated that many real-world
networks exhibit an underlying tree-like structure.
To exploit this structural property for learning more efficient representations, we propose to compute
embeddings not in Euclidean but in hyperbolic space, i.e., space with constant negative curvature.
Informally, hyperbolic space can be thought of as a continuous version of trees and as such it is
naturally equipped to model hierarchical structures. For instance, it has been shown that any finite
tree can be embedded into a finite hyperbolic space such that distances are preserved approximately
[12]. We base our approach on a particular model of hyperbolic space, i.e., the Poincaré ball model,
as it is well-suited for gradient-based optimization. This allows us to develop an efficient algorithm
for computing the embeddings based on Riemannian optimization, which is easily parallelizable
and scales to large datasets. Experimentally, we show that our approach can provide high quality
embeddings of large taxonomies, both with and without missing data. Moreover, we show that
embeddings trained on WordNet provide state-of-the-art performance for lexical entailment. On
collaboration networks, we also show that Poincaré embeddings are successful in predicting links in
graphs where they outperform Euclidean embeddings, especially in low dimensions.
The remainder of this paper is organized as follows: In Section 2 we briefly review hyperbolic
geometry and discuss related work. In Section 3 we introduce Poincaré embeddings and present
a scalable algorithm to compute them. In Section 4 we evaluate our approach on tasks such as
taxonomy embedding, link prediction in networks and predicting lexical entailment.
2 Embeddings and Hyperbolic Geometry
Hyperbolic geometry is a non-Euclidean geometry which studies spaces of constant negative curvature.
It is, for instance, related to Minkowski spacetime in special relativity. In network science, hyperbolic
spaces have started to receive attention as they are well-suited to model hierarchical data. For
instance, consider the task of embedding a tree into a metric space such that its structure is reflected
in the embedding. A regular tree with branching factor $b$ has $(b+1)b^{\ell-1}$ nodes at level $\ell$ and
$((b+1)b^{\ell} - 2)/(b-1)$ nodes on a level less or equal than $\ell$. Hence, the number of children grows
exponentially with their distance to the root of the tree. In hyperbolic geometry this kind of tree
structure can be modeled easily in two dimensions: nodes that are exactly $\ell$ levels below the root
are placed on a sphere in hyperbolic space with radius $r \propto \ell$ and nodes that are less than $\ell$ levels
below the root are located within this sphere. This type of construction is possible as hyperbolic
disc area and circle length grow exponentially with their radius.1 See Figure 1b for an example.
Intuitively, hyperbolic spaces can be thought of as continuous versions of trees or vice versa, trees
can be thought of as "discrete hyperbolic spaces" [19]. In $\mathbb{R}^2$, a similar construction is not possible,
as circle length ($2\pi r$) and disc area ($\pi r^2$) grow only linearly and quadratically with regard to $r$
in Euclidean geometry. Instead, it is necessary to increase the dimensionality of the embedding to
model increasingly complex hierarchies. As the number of parameters increases, this can lead to
computational problems in terms of runtime and memory complexity as well as to overfitting.
Due to these properties, hyperbolic space has recently been considered to model complex networks.
For instance, Kleinberg [18] introduced hyperbolic geometry for greedy routing in geographic
communication networks. Similarly, Boguñá et al. [4] proposed hyperbolic embeddings of the AS
Internet topology to perform greedy shortest path routing in the embedding space. Krioukov et al.
[19] developed a geometric framework to model complex networks using hyperbolic space and
showed how typical properties such as heterogeneous degree distributions and strong clustering can
emerge by assuming an underlying hyperbolic geometry to networks. Furthermore, Adcock et al.
¹For instance, in a two-dimensional hyperbolic space with constant curvature $K = -1$, the length of a circle
is given as $2\pi \sinh r$ while the area of a disc is given as $2\pi(\cosh r - 1)$. Since $\sinh r = \frac{1}{2}(e^r - e^{-r})$ and
$\cosh r = \frac{1}{2}(e^r + e^{-r})$, both disc area and circle length grow exponentially with $r$.
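As a quick numerical check of this exponential growth, the following NumPy snippet (our own illustration, not code from the paper) compares hyperbolic and Euclidean circle lengths:

```python
import numpy as np

r = np.linspace(0.5, 5.0, 10)
hyp_circle = 2 * np.pi * np.sinh(r)   # hyperbolic circle length, 2*pi*sinh(r)
euc_circle = 2 * np.pi * r            # Euclidean circle length, 2*pi*r
ratio = hyp_circle / euc_circle       # grows roughly like e^r / (2r): exponential vs. linear growth
```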
Figure 1: (a) Geodesics of the Poincaré disk (points $p_1, \dots, p_5$). Due to the negative curvature of $\mathcal{B}$, the distance of points increases exponentially (relative to their Euclidean distance) the closer they are to the boundary. (b) Embedding of a regular tree in $\mathcal{B}^2$ such that all connected nodes are spaced equally far apart (i.e., all black line segments have identical hyperbolic length). (c) Growth of the Poincaré distance $d(u, v)$ relative to the Euclidean distance and the norm of $v$ (for fixed $\|u\| = 0.9$).
[1] proposed a measure based on Gromov's δ-hyperbolicity [12] to characterize the tree-likeness of
graphs. Ontrup and Ritter [25] proposed hyperbolic self-organizing maps for data exploration. Asta
and Shalizi [3] used hyperbolic embeddings to compare the global structure of networks. Sun et al.
[33] proposed Space-Time embeddings to learn representations of non-metric data.
Euclidean embeddings, on the other hand, have become a popular approach to represent symbolic
data in machine learning and artificial intelligence. For instance, in addition to the methods discussed
in Section 1, Paccanaro and Hinton [26] proposed one of the first embedding methods to learn from
relational data. More recently, Holographic [24] and Complex Embeddings [34] have shown state-of-the-art performance in Knowledge Graph completion. In relation to hierarchical representations,
Vilnis and McCallum [36] proposed to learn density-based word representations, i.e., Gaussian
embeddings, to capture uncertainty and asymmetry. Given ordered input pairs, Vendrov et al. [35]
proposed Order Embeddings to model visual-semantic hierarchies over words, sentences, and images.
Demeester et al. [10] showed that including prior information about hypernymy relations in the form of
logical rules can improve the quality of word embeddings.
3 Poincaré Embeddings
In the following, we are interested in finding embeddings of symbolic data such that their distance in
the embedding space reflects their semantic similarity. We assume that there exists a latent hierarchy
in which the symbols can be organized. In addition to the similarity of objects, we intend to also
reflect this hierarchy in the embedding space to improve over existing methods in two ways:
1. By inducing an appropriate structural bias on the embedding space we aim at improving
generalization performance as well as runtime and memory complexity.
2. By capturing the hierarchy explicitly in the embedding space, we aim at gaining additional
insights about the relationships between symbols and the importance of individual symbols.
Although we assume that there exists a latent hierarchy, we do not assume that we have direct access
to information about this hierarchy, e.g., via ordered input pairs. Instead, we consider the task of
inferring the hierarchical relationships fully unsupervised, as is, for instance, necessary for text and
network data. For these reasons, and motivated by the discussion in Section 2, we embed symbolic
data into hyperbolic space $\mathbb{H}$. In contrast to Euclidean space $\mathbb{R}^d$, there exist multiple, equivalent
models of $\mathbb{H}$ such as the Beltrami-Klein model, the hyperboloid model, and the Poincaré half-plane
model. In the following, we will base our approach on the Poincaré ball model, as it is well-suited for
gradient-based optimization. In particular, let $\mathcal{B}^d = \{x \in \mathbb{R}^d \mid \|x\| < 1\}$ be the open $d$-dimensional
unit ball, where $\|\cdot\|$ denotes the Euclidean norm. The Poincaré ball model of hyperbolic space
corresponds then to the Riemannian manifold $(\mathcal{B}^d, g_x)$, i.e., the open unit ball equipped with the
Riemannian metric tensor
$$g_x = \left(\frac{2}{1 - \|x\|^2}\right)^{2} g^E,$$
where $x \in \mathcal{B}^d$ and $g^E$ denotes the Euclidean metric tensor. Furthermore, the distance between points
$u, v \in \mathcal{B}^d$ is given as
$$d(u, v) = \operatorname{arcosh}\left(1 + 2\,\frac{\|u - v\|^2}{(1 - \|u\|^2)(1 - \|v\|^2)}\right). \tag{1}$$
The boundary of the ball is denoted by $\partial\mathcal{B}$. It corresponds to the sphere $\mathcal{S}^{d-1}$ and is not part of the
manifold, but represents infinitely distant points. Geodesics in $\mathcal{B}^d$ are then circles that are orthogonal
to $\partial\mathcal{B}$ (as well as all diameters). See Figure 1a for an illustration.
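To make Equation (1) concrete, here is a minimal NumPy sketch of the Poincaré distance (our own illustration, not code released with the paper; the `eps` floor is an added numerical safeguard):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Poincare distance of Eq. (1) for points u, v inside the open unit ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))
```

For example, two points of norm 0.99 that are close in Euclidean terms can already be far apart hyperbolically, matching the behavior shown in Figure 1c.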
It can be seen from Equation (1) that the distance within the Poincaré ball changes smoothly with
respect to the location of $u$ and $v$. This locality property of the Poincaré distance is key for finding
continuous embeddings of hierarchies. For instance, by placing the root node of a tree at the origin of
$\mathcal{B}^d$, it would have a relatively small distance to all other nodes, as its Euclidean norm is zero. On the
other hand, leaf nodes can be placed close to the boundary of the Poincaré ball, as the distance grows
very fast between points with a norm close to one. Furthermore, please note that Equation (1) is
symmetric and that the hierarchical organization of the space is solely determined by the distance of
points to the origin. Due to this self-organizing property, Equation (1) is applicable in an unsupervised
setting where the hierarchical order of objects is not specified in advance, such as for text and networks.
Remarkably, Equation (1) therefore allows us to learn embeddings that simultaneously capture the
hierarchy of objects (through their norm) as well as their similarity (through their distance).
Since a single hierarchical structure can be well represented in two dimensions, the Poincaré disk
($\mathcal{B}^2$) is a common way to model hyperbolic geometry. In our method, we instead use the Poincaré ball
($\mathcal{B}^d$), for two main reasons: First, in many datasets such as text corpora, multiple latent hierarchies
can co-exist, which cannot always be modeled in two dimensions. Second, a larger embedding
dimension can decrease the difficulty for an optimization method to find a good embedding (also for
single hierarchies), as it allows for more degrees of freedom during the optimization process.
To compute Poincaré embeddings for a set of symbols $\mathcal{S} = \{x_i\}_{i=1}^{n}$, we are then interested in finding
embeddings $\Theta = \{\theta_i\}_{i=1}^{n}$, where $\theta_i \in \mathcal{B}^d$. We assume we are given a problem-specific loss function
$\mathcal{L}(\Theta)$ which encourages semantically similar objects to be close in the embedding space according to
their Poincaré distance. To estimate $\Theta$, we then solve the optimization problem
$$\Theta' \leftarrow \arg\min_{\Theta} \mathcal{L}(\Theta) \quad \text{s.t. } \forall \theta_i \in \Theta : \|\theta_i\| < 1. \tag{2}$$
We will discuss specific loss functions in Section 4.
3.1 Optimization
Since the Poincaré ball has a Riemannian manifold structure, we can optimize Equation (2) via
stochastic Riemannian optimization methods such as RSGD [6] or RSVRG [39]. In particular, let
$\mathcal{T}_\theta\mathcal{B}$ denote the tangent space of a point $\theta \in \mathcal{B}^d$. Furthermore, let $\nabla_R \in \mathcal{T}_\theta\mathcal{B}$ denote the Riemannian
gradient of $\mathcal{L}(\theta)$ and let $\nabla_E$ denote the Euclidean gradient of $\mathcal{L}(\theta)$. Using RSGD, parameter updates
to minimize Equation (2) are then of the form
$$\theta_{t+1} = \mathfrak{R}_{\theta_t}\!\left(-\eta_t \nabla_R \mathcal{L}(\theta_t)\right)$$
where $\mathfrak{R}_{\theta_t}$ denotes the retraction onto $\mathcal{B}$ at $\theta$ and $\eta_t$ denotes the learning rate at time $t$. Hence, for
the minimization of Equation (2), we require the Riemannian gradient and a suitable retraction. Since
the Poincaré ball is a conformal model of hyperbolic space, the angles between adjacent vectors are
identical to their angles in Euclidean space; the lengths of vectors, however, might differ. To derive
the Riemannian gradient from the Euclidean gradient, it is sufficient to rescale $\nabla_E$ with the inverse of
the Poincaré ball metric tensor, i.e., $g_\theta^{-1}$. Since $g_\theta$ is a scalar matrix, the inverse is trivial to compute.
Furthermore, since Equation (1) is fully differentiable, the Euclidean gradient
$\nabla_E = \frac{\partial \mathcal{L}(\theta)}{\partial d(\theta,x)} \frac{\partial d(\theta,x)}{\partial \theta}$
can easily be derived using standard calculus. It depends on the
gradient of $\mathcal{L}$, which we assume is known, and the partial derivatives of the Poincaré distance, which
can be computed as follows: Let $\alpha = 1 - \|\theta\|^2$, $\beta = 1 - \|x\|^2$ and let $\gamma = 1 + \frac{2}{\alpha\beta}\|\theta - x\|^2$. The
partial derivative of the Poincaré distance with respect to $\theta$ is then given as
$$\frac{\partial d(\theta, x)}{\partial \theta} = \frac{4}{\beta\sqrt{\gamma^2 - 1}}\left(\frac{\|x\|^2 - 2\langle\theta, x\rangle + 1}{\alpha^2}\,\theta - \frac{x}{\alpha}\right). \tag{3}$$
Since $d(\cdot, \cdot)$ is symmetric, the partial derivative $\frac{\partial d(x,\theta)}{\partial \theta}$ can be derived analogously. As retraction
operation we use $\mathfrak{R}_\theta(v) = \theta + v$. In combination with the Riemannian gradient, this corresponds
then to the well-known natural gradient method [2]. Furthermore, we constrain the embeddings to
remain within the Poincaré ball via the projection
$$\operatorname{proj}(\theta) = \begin{cases} \theta / \|\theta\| - \varepsilon & \text{if } \|\theta\| \geq 1 \\ \theta & \text{otherwise,} \end{cases}$$
where $\varepsilon$ is a small constant to ensure numerical stability. In all experiments we used $\varepsilon = 10^{-5}$. In
summary, the full update for a single embedding is then of the form
$$\theta_{t+1} \leftarrow \operatorname{proj}\!\left(\theta_t - \eta_t\,\frac{(1 - \|\theta_t\|^2)^2}{4}\,\nabla_E\right). \tag{4}$$
It can be seen from Equations (3) and (4) that this algorithm scales well to large datasets, as the
computational and memory complexity of an update depends linearly on the embedding dimension.
Moreover, the algorithm is straightforward to parallelize via methods such as Hogwild [30], as the
updates are sparse (only a small number of embeddings are modified in an update) and collisions are
very unlikely on large-scale data.
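As a sketch of the resulting update, the following (our own illustration, reading the projection as rescaling to norm $1 - \varepsilon$) performs one step of Equation (4):

```python
import numpy as np

def rsgd_update(theta, euclidean_grad, lr, eps=1e-5):
    """One RSGD step of Eq. (4) for a single embedding theta (1-D array)."""
    scale = (1.0 - np.dot(theta, theta)) ** 2 / 4.0   # inverse-metric rescaling of the gradient
    theta = theta - lr * scale * euclidean_grad
    norm = np.linalg.norm(theta)
    if norm >= 1.0:                                   # proj: pull back just inside the ball
        theta = theta * (1.0 - eps) / norm
    return theta
```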
3.2 Training Details
In addition to this optimization procedure, we found that the following training details were helpful
for obtaining good representations: First, we initialize all embeddings randomly from the uniform
distribution $\mathcal{U}(-0.001, 0.001)$. This causes embeddings to be initialized close to the origin of $\mathcal{B}^d$.
Second, we found that a good initial angular layout can be helpful to find good embeddings. For this
reason, we train during an initial "burn-in" phase with a reduced learning rate $\eta/c$. In combination
with initializing close to the origin, this can improve the angular layout without moving too far
towards the boundary. In our experiments, we set $c = 10$ and the duration of the burn-in to 10 epochs.
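A sketch of this initialization and burn-in schedule (the helper names are our own, illustrative choices):

```python
import numpy as np

def init_embeddings(n_symbols, dim, seed=0):
    """Initialize all embeddings near the origin of B^d, as in Section 3.2."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-0.001, 0.001, size=(n_symbols, dim))

def learning_rate(epoch, base_lr, c=10, burn_in_epochs=10):
    """Reduced rate eta/c during the burn-in phase, base rate afterwards."""
    return base_lr / c if epoch < burn_in_epochs else base_lr
```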
4 Evaluation
In this section, we evaluate the quality of Poincaré embeddings for a variety of tasks, i.e., for the
embedding of taxonomies, for link prediction in networks, and for modeling lexical entailment. In all
tasks, we train on data where the hierarchy of objects is not explicitly encoded. This allows us to
evaluate the ability of the embeddings to infer hierarchical relationships without supervision. Moreover,
since we are mostly interested in the properties of the metric space, we focus on embeddings based
purely on the Poincaré distance and on models with comparable expressivity. In particular, we
compare the Poincaré distance as defined in Equation (1) to the following two distance functions:
Euclidean In all cases, we include the Euclidean distance $d(u, v) = \|u - v\|^2$. As the Euclidean
distance is flat and symmetric, we expect that it requires a large dimensionality to model the
hierarchical structure of the data.
Translational For asymmetric data, we also include the score function $d(u, v) = \|u - v + r\|^2$,
as proposed by Bordes et al. [7] for modeling large-scale graph-structured data. For this score
function, we also learn the global translation vector $r$ during training. (A sketch of both baselines
is given after the following remark.)
Note that the translational score function has, due to its asymmetry, more information about the
nature of an embedding problem than a symmetric distance when the order of $(u, v)$ indicates the
hierarchy of elements. This is, for instance, the case for is-a$(u, v)$ relations in taxonomies. For the
Poincaré distance and the Euclidean distance we could randomly permute the order of $(u, v)$ and
obtain the identical embedding, while this is not the case for the translational score function. As such,
it is not fully unsupervised and only applicable where this hierarchical information is available.
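For concreteness, the two baseline score functions can be sketched as follows (our own illustration; `r` denotes the learned translation vector):

```python
import numpy as np

def euclidean_score(u, v):
    """Symmetric Euclidean baseline d(u, v) = ||u - v||^2."""
    return np.sum((u - v) ** 2)

def translational_score(u, v, r):
    """Asymmetric Translational baseline d(u, v) = ||u - v + r||^2, as in [7]."""
    return np.sum((u - v + r) ** 2)
```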
4.1 Embedding Taxonomies
In the first set of experiments, we are interested in evaluating the ability of Poincaré embeddings to
embed data that exhibits a clear latent hierarchical structure. For this purpose, we conduct experiments
on the transitive closure of the WordNet noun hierarchy [21] in two settings:
Table 1: Experimental results on the transitive closure of the WordNet noun hierarchy. Highlighted
cells indicate the best Euclidean embeddings as well as the Poincaré embeddings which achieve equal
or better results. Bold numbers indicate absolute best results.

                                 Dimensionality      5       10      20      50     100     200
WordNet           Euclidean      Rank             3542.3  2286.9  1685.9  1281.7  1187.3  1157.3
Reconstruction                   MAP               0.024   0.059   0.087   0.140   0.162   0.168
                  Translational  Rank              205.9   179.4    95.3    92.8    92.7    91.0
                                 MAP               0.517   0.503   0.563   0.566   0.562   0.565
                  Poincaré       Rank                4.9    4.02    3.84    3.98    3.9     3.83
                                 MAP               0.823   0.851   0.855   0.86    0.857   0.87
WordNet           Euclidean      Rank             3311.1  2199.5   952.3   351.4   190.7    81.5
Link Pred.                       MAP               0.024   0.059   0.176   0.286   0.428   0.490
                  Translational  Rank               65.7    56.6    52.1    47.2    43.2    40.4
                                 MAP               0.545   0.554   0.554   0.56    0.562   0.559
                  Poincaré       Rank                5.7     4.3     4.9     4.6     4.6     4.6
                                 MAP               0.825   0.852   0.861   0.863   0.856   0.855
Reconstruction To evaluate representation capacity, we embed fully observed data and reconstruct
it from the embedding. The reconstruction error in relation to the embedding dimension is then a
measure for the capacity of the model.
Link Prediction To test generalization performance, we split the data into a train, validation and
test set by randomly holding out observed links. The validation and test set do not include links
involving root or leaf nodes as these links would either be trivial or impossible to predict reliably.
Since we are embedding the transitive closure, the hierarchical structure is not directly visible from
the raw data but has to be inferred. For Poincaré and Euclidean embeddings we additionally remove
the directionality of the edges and embed undirected graphs. The transitive closure of the WordNet
noun hierarchy consists of 82,115 nouns and 743,241 hypernymy relations.
On this data, we learn embeddings in both settings as follows: Let $\mathcal{D} = \{(u, v)\}$ be the set of
observed hypernymy relations between noun pairs. We then learn embeddings of all symbols in
$\mathcal{D}$ such that related objects are close in the embedding space. In particular, we minimize the loss
function
$$\mathcal{L}(\Theta) = \sum_{(u,v)\in\mathcal{D}} \log \frac{e^{-d(u,v)}}{\sum_{v'\in\mathcal{N}(u)} e^{-d(u,v')}}, \tag{5}$$
where $\mathcal{N}(u) = \{v' \mid (u, v') \notin \mathcal{D}\} \cup \{v\}$ is the set of negative examples for $u$ (including $v$). For
training, we randomly sample 10 negative examples per positive example. Equation (5) is similar
to the loss used in Linear Relational Embeddings [26] (with additional negative sampling) and
encourages related objects to be closer to each other than objects for which we didn't observe a
relationship. This choice of loss function is motivated by the observation that we don't want to push
symbols that belong to distinct subtrees arbitrarily far apart, as their subtrees might still be close.
Instead, we only want them to be farther apart than symbols with an observed relation.
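A sketch of the summand of Equation (5) for one observed pair, reusing the `poincare_distance` helper from the earlier sketch (the stable log-sum-exp is an implementation detail we add):

```python
import numpy as np

def log_prob_pair(theta_u, theta_v, theta_negatives):
    """Summand of Eq. (5): log-probability of v among the negatives N(u).

    `theta_negatives` stacks the embeddings of N(u), which by the definition
    above includes v itself.
    """
    d_neg = np.array([poincare_distance(theta_u, n) for n in theta_negatives])
    d_pos = poincare_distance(theta_u, theta_v)
    m = (-d_neg).max()                               # stable log-sum-exp
    log_z = m + np.log(np.exp(-d_neg - m).sum())
    return -d_pos - log_z
```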
We evaluate the quality of the embeddings as commonly done for graph embeddings [7, 24]: For each
observed relationship $(u, v)$, we rank its distance $d(u, v)$ among the ground-truth negative examples
for $u$, i.e., among the set $\{d(u, v') \mid (u, v') \notin \mathcal{D}\}$. In the Reconstruction setting, we evaluate the
ranking on all nouns in the dataset. We then record the mean rank of $v$ as well as the mean average
precision (MAP) of the ranking.
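This ranking protocol can be sketched as follows (illustrative; breaking ties pessimistically is our own assumption):

```python
import numpy as np

def rank_of_positive(d_pos, d_negatives):
    """Rank of an observed pair among its ground-truth negatives (1 = best).

    Averaging these ranks over all observed pairs gives the mean rank; the
    corresponding precision values yield MAP.
    """
    return 1 + int(np.sum(np.asarray(d_negatives) <= d_pos))
```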
The results of these experiments are shown in Table 1. It can be seen that Poincaré embeddings are
very successful in the embedding of large taxonomies, both with regard to their representation capacity
and their generalization performance. Even compared to Translational embeddings, which have more
information about the structure of the task, Poincaré embeddings show a greatly improved performance
while using an embedding that is smaller by an order of magnitude. Furthermore, the results of
Poincaré embeddings in the link prediction task are robust with regard to the embedding dimension.
We attribute this result to the structural bias of the embedding space, which could lead to reduced
overfitting on data with a clear latent hierarchy. Additionally, Figure 2 shows a visualization of a
two-dimensional Poincaré embedding. For the purpose of clarity, this embedding has been trained
only on the mammals subtree of WordNet.
Figure 2: Two-dimensional Poincaré embeddings of the transitive closure of the WordNet mammals
subtree: (a) intermediate embedding after 20 epochs; (b) embedding after convergence. Ground-truth
is-a relations of the original WordNet tree are indicated via blue edges. A Poincaré embedding with
$d = 5$ achieves mean rank 1.26 and MAP 0.927 on this subtree.
4.2 Network Embeddings
Next, we evaluated the performance of Poincaré embeddings for modeling complex networks. Since
edges in such networks can often be explained via latent hierarchies over their nodes [9], we are
interested in the benefits of Poincaré embeddings in terms of representation size and generalization
performance. We performed our experiments on four commonly used social networks, i.e., AstroPh,
CondMat, GrQc, and HepPh. These networks represent scientific collaborations such that there
exists an undirected edge between two persons if they co-authored a paper. For these networks, we
model the probability of an edge as proposed by Krioukov et al. [19] via the Fermi-Dirac distribution
$$P((u, v) = 1 \mid \Theta) = \frac{1}{e^{(d(u,v) - r)/t} + 1} \tag{6}$$
where r, t > 0 are hyperparameters. Here, r corresponds to the radius around each point u such that
points within this radius are likely to have an edge with u. The parameter t specifies the steepness of
the logistic function and influences both average clustering as well as the degree distribution [19].
We use the cross-entropy loss to learn the embeddings and sample negatives as in Section 4.1.
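A sketch of this edge probability, again reusing the `poincare_distance` helper from above (the default hyperparameter values are placeholders, not the tuned values):

```python
import numpy as np

def edge_probability(theta_u, theta_v, r=1.0, t=1.0):
    """Fermi-Dirac link probability of Eq. (6); r, t > 0 are hyperparameters."""
    d = poincare_distance(theta_u, theta_v)
    return 1.0 / (np.exp((d - r) / t) + 1.0)
```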
For evaluation, we split each dataset randomly into train, validation, and test set. The hyperparameters
r and t were tuned for each method on the validation set. Table 2 lists the MAP score of Poincaré and
Euclidean embeddings on the test set for the hyperparameters with the best validation score. Additionally, we also list the reconstruction performance without missing data. Translational embeddings
are not applicable to these datasets as they consist of undirected edges. It can be seen that Poincaré
embeddings perform again very well on these datasets and, especially in the low-dimensional regime,
outperform Euclidean embeddings.
4.3 Lexical Entailment
An interesting aspect of Poincaré embeddings is that they allow us to make graded assertions about
hierarchical relationships, as hierarchies are represented in a continuous space. We test this property
on HyperLex [37], which is a gold standard resource for evaluating how well semantic models
capture graded lexical entailment by quantifying to what degree X is a type of Y via ratings on a
scale of [0, 10]. Using the noun part of HyperLex, which consists of 2163 rated noun pairs, we
then evaluated how well Poincaré embeddings reflect these graded assertions. For this purpose, we
Table 2: Mean average precision for Reconstruction and Link Prediction on network data.

                                            Reconstruction                Link Prediction
Dimensionality                         10      20      50     100      10      20      50     100
AstroPh               Euclidean     0.376   0.788   0.969   0.989   0.508   0.815   0.946   0.960
N=18,772; E=198,110   Poincaré      0.703   0.897   0.982   0.990   0.671   0.860   0.977   0.988
CondMat               Euclidean     0.356   0.860   0.991   0.998   0.308   0.617   0.725   0.736
N=23,133; E=93,497    Poincaré      0.799   0.963   0.996   0.998   0.539   0.718   0.756   0.758
GrQc                  Euclidean     0.522   0.931   0.994   0.998   0.438   0.584   0.673   0.683
N=5,242; E=14,496     Poincaré      0.990   0.999   0.999   0.999   0.660   0.691   0.695   0.697
HepPh                 Euclidean     0.434   0.742   0.937   0.966   0.642   0.749   0.779   0.783
N=12,008; E=118,521   Poincaré      0.811   0.960   0.994   0.997   0.683   0.743   0.770   0.774
Table 3: Spearman's ρ for Lexical Entailment on HyperLex.

        FR     SLQS-Sim  WN-Basic  WN-WuP  WN-LCh  Vis-ID  Euclidean  Poincaré
ρ      0.283    0.229     0.240     0.214   0.214   0.253    0.389      0.512
used the Poincaré embeddings that were obtained in Section 4.1 by embedding WordNet with a
dimensionality $d = 5$. Note that these embeddings were not specifically trained for this task. To
determine to what extent is-a$(u, v)$ is true, we used the score function
$$\text{score}(\text{is-a}(u, v)) = -\big(1 + \alpha(\|v\| - \|u\|)\big)\, d(u, v). \tag{7}$$
Here, the term $\alpha(\|v\| - \|u\|)$ acts as a penalty when $v$ is lower in the embedding hierarchy, i.e.,
when $v$ has a higher norm than $u$. The hyperparameter $\alpha$ determines the severity of the penalty. In
our experiments we set $\alpha = 10^3$.
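A sketch of this score function (reusing `poincare_distance` from above; higher scores indicate a stronger graded is-a relationship):

```python
import numpy as np

def is_a_score(theta_u, theta_v, alpha=1e3):
    """Graded is-a(u, v) score of Eq. (7)."""
    norm_penalty = 1.0 + alpha * (np.linalg.norm(theta_v) - np.linalg.norm(theta_u))
    return -norm_penalty * poincare_distance(theta_u, theta_v)
```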
Using Equation (7), we scored all noun pairs in HyperLex and recorded Spearman's rank correlation
with the ground-truth ranking. The results of this experiment are shown in Table 3. It can be seen that
the ranking based on Poincaré embeddings clearly outperforms all state-of-the-art methods evaluated
in [37]. Methods in Table 3 that are prefixed with WN also use WordNet as a basis and therefore
are most comparable. The same embeddings also achieved a state-of-the-art accuracy of 0.86 on
WBLess [38, 17], which evaluates non-graded lexical entailment.
5 Discussion and Future Work
In this paper, we introduced Poincaré embeddings for learning representations of symbolic data and
showed how they can simultaneously learn the similarity and the hierarchy of objects. Furthermore,
we proposed an efficient algorithm to compute the embeddings and showed experimentally that
Poincaré embeddings provide important advantages over Euclidean embeddings on hierarchical
data: First, Poincaré embeddings enable parsimonious representations that allow us to learn
high-quality embeddings of large-scale taxonomies. Second, excellent link prediction results indicate
that hyperbolic geometry can introduce an important structural bias for the embedding of complex
symbolic data. Third, state-of-the-art results for predicting lexical entailment suggest that the
hierarchy in the embedding space corresponds well to the underlying semantics of the data.
The focus of this work was to evaluate general properties of hyperbolic geometry for the embedding
of symbolic data. In future work, we intend to expand the applications of Poincaré embeddings (for
instance to multi-relational data) and to derive models that are tailored to specific tasks such as word
embeddings. Furthermore, we have shown that natural gradient based optimization already produces
very good embeddings and scales to large datasets. We expect that a full Riemannian optimization
approach can further increase the quality of the embeddings and lead to faster convergence.
An important aspect of future work regards also the applicability of hyperbolic embeddings in
downstream tasks: models that operate on embeddings often make an implicit Euclidean assumption
and likely require some adaptation to be compatible with hyperbolic spaces.
References
[1] Aaron B Adcock, Blair D Sullivan, and Michael W Mahoney. Tree-like structure in large social and information networks. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1–10. IEEE, 2013.
[2] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] Dena Marie Asta and Cosma Rohilla Shalizi. Geometric network comparisons. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI, pages 102–110. AUAI Press, 2015.
[4] M Boguñá, F Papadopoulos, and D Krioukov. Sustaining the internet with hyperbolic mapping. Nature communications, 1:62, 2010.
[5] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016.
[6] Silvère Bonnabel. Stochastic gradient descent on Riemannian manifolds. IEEE Trans. Automat. Contr., 58(9):2217–2229, 2013.
[7] Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26, pages 2787–2795, 2013.
[8] Guillaume Bouchard, Sameer Singh, and Theo Trouillon. On approximate reasoning capabilities of low-rank vector spaces. AAAI Spring Symposium on Knowledge Representation and Reasoning (KRR): Integrating Symbolic and Neural Approaches, 2015.
[9] Aaron Clauset, Cristopher Moore, and Mark EJ Newman. Hierarchical structure and the prediction of missing links in networks. Nature, 453(7191):98–101, 2008.
[10] Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. Lifted rule injection for relation embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1389–1399. The Association for Computational Linguistics, 2016.
[11] John Rupert Firth. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis, 1957.
[12] Mikhael Gromov. Hyperbolic groups. In Essays in group theory, pages 75–263. Springer, 1987.
[13] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864. ACM, 2016.
[14] Zellig S Harris. Distributional structure. Word, 10(2-3):146–162, 1954.
[15] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[16] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.
[17] Douwe Kiela, Laura Rimell, Ivan Vulić, and Stephen Clark. Exploiting image generality for lexical entailment detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015), pages 119–124. ACL, 2015.
[18] Robert Kleinberg. Geographic routing using hyperbolic space. In INFOCOM 2007. 26th IEEE International Conference on Computer Communications, pages 1902–1909. IEEE, 2007.
[19] Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguñá. Hyperbolic geometry of complex networks. Physical Review E, 82(3):036106, 2010.
[20] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
[21] George Miller and Christiane Fellbaum. WordNet: An electronic lexical database, 1998.
[22] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning, ICML, pages 809–816. Omnipress, 2011.
[23] Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In Advances in Neural Information Processing Systems 27, pages 1179–1187, 2014.
[24] Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 1955–1961. AAAI Press, 2016.
[25] Jörg Ontrup and Helge Ritter. Large-scale data exploration with the hierarchically growing hyperbolic SOM. Neural networks, 19(6):751–761, 2006.
[26] Alberto Paccanaro and Geoffrey E. Hinton. Learning distributed representations of concepts using linear relational embedding. IEEE Trans. Knowl. Data Eng., 13(2):232–244, 2001.
[27] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532–1543, 2014.
[28] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
[29] Erzsébet Ravasz and Albert-László Barabási. Hierarchical organization in complex networks. Physical Review E, 67(2):026112, 2003.
[30] Benjamin Recht, Christopher Ré, Stephen J. Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, pages 693–701, 2011.
[31] Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. Relation extraction with matrix factorization and universal schemas. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, pages 74–84. The Association for Computational Linguistics, 2013.
[32] Mark Steyvers and Joshua B Tenenbaum. The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive science, 29(1):41–78, 2005.
[33] Ke Sun, Jun Wang, Alexandros Kalousis, and Stéphane Marchand-Maillet. Space-time local embeddings. In Advances in Neural Information Processing Systems 28, pages 100–108, 2015.
[34] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, ICML, volume 48 of JMLR Workshop and Conference Proceedings, pages 2071–2080. JMLR.org, 2016.
[35] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361, 2015.
[36] Luke Vilnis and Andrew McCallum. Word representations via Gaussian embedding. In International Conference on Learning Representations (ICLR), 2015.
[37] Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. HyperLex: A large-scale evaluation of graded lexical entailment. arXiv preprint arXiv:1608.02117, 2016.
[38] Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2249–2259. Dublin City University and Association for Computational Linguistics, 2014.
[39] Hongyi Zhang, Sashank J. Reddi, and Suvrit Sra. Riemannian SVRG: Fast stochastic optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems 29, pages 4592–4600, 2016.
[40] George Kingsley Zipf. Human Behaviour and the Principle of Least Effort: an Introduction to Human Ecology. Addison-Wesley, 1949.
Learning Combinatorial Optimization Algorithms over Graphs
Hanjun Dai*, Elias B. Khalil*, Yuyu Zhang, Bistra Dilkina, Le Song
College of Computing, Georgia Institute of Technology
{hanjun.dai, elias.khalil, yuyu.zhang, bdilkina, lsong}@cc.gatech.edu
Abstract
The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge
and trial-and-error. Can we automate this challenging, tedious process, and learn
the algorithms instead? In many real-world applications, it is typically the case
that the same optimization problem is solved again and again on a regular basis,
maintaining the same problem structure but differing in the data. This provides
an opportunity for learning heuristic algorithms that exploit the structure of such
recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy
policy behaves like a meta-algorithm that incrementally constructs a solution, and
the action is determined by the output of a graph embedding network capturing
the current state of the solution. We show that our framework can be applied to a
diverse range of optimization problems over graphs, and learns effective algorithms
for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
1 Introduction
Combinatorial optimization problems over graphs arising from numerous application domains, such
as social networks, transportation, communications and scheduling, are NP-hard, and have thus
attracted considerable interest from the theory and algorithm design communities over the years.
In fact, of Karp's 21 problems in the seminal paper on reducibility [19], 10 are decision versions
of graph optimization problems, while most of the other 11 problems, such as set covering, can be
naturally formulated on graphs. Traditional approaches to tackling an NP-hard graph optimization
problem have three main flavors: exact algorithms, approximation algorithms and heuristics. Exact
algorithms are based on enumeration or branch-and-bound with an integer programming formulation,
but are generally prohibitive for large instances. On the other hand, polynomial-time approximation
algorithms are desirable, but may suffer from weak optimality guarantees or empirical performance,
or may not even exist for inapproximable problems. Heuristics are often fast, effective algorithms
that lack theoretical guarantees, and may also require substantial problem-specific research and
trial-and-error on the part of algorithm designers.
All three paradigms seldom exploit a common trait of real-world optimization problems: instances
of the same type of problem are solved again and again on a regular basis, maintaining the same
combinatorial structure, but differing mainly in their data. That is, in many applications, values of
the coefficients in the objective function or constraints can be thought of as being sampled from the
same underlying distribution. For instance, an advertiser on a social network targets a limited set of
users with ads, in the hope that they spread them to their neighbors; such covering instances need
to be solved repeatedly, since the influence pattern between neighbors may be different each time.
Alternatively, a package delivery company routes trucks on a daily basis in a given city; thousands of
similar optimizations need to be solved, since the underlying demand locations can differ.
Despite the inherent similarity between problem instances arising in the same domain, classical
algorithms do not systematically exploit this fact. However, in industrial settings, a company may
*Both authors contributed equally to the paper.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: Illustration of the proposed framework as applied to an instance of the Minimum Vertex Cover problem. Panel labels: state; embed graph (ReLu); greedy: add best node (1st iteration, 2nd iteration); embedding the graph + partial solution; greedy node selection. The middle part illustrates two iterations of the graph embedding, which results in node scores (green bars).]
be willing to invest in upfront, offline computation and learning if such a process can speed up its
real-time decision-making and improve its quality. This motivates the main problem we address:
Problem Statement: Given a graph optimization problem G and a distribution D of problem
instances, can we learn better heuristics that generalize to unseen instances from D?
Recently, there has been some seminal work on using deep architectures to learn heuristics for
combinatorial problems, including the Traveling Salesman Problem [37, 6, 14]. However, the
architectures used in these works are generic, not yet effectively reflecting the combinatorial structure
of graph problems. As we show later, these architectures often require a huge number of instances
in order to learn to generalize to new ones. Furthermore, existing works typically use the policy
gradient for training [6], a method that is not particularly sample-efficient. While methods [37, 6] can
be used on graphs with different sizes (a desirable trait), they require manual, ad-hoc input/output
engineering to do so (e.g. padding with zeros).
In this paper, we address the challenge of learning algorithms for graph problems using a unique
combination of reinforcement learning and graph embedding. The learned policy behaves like a
meta-algorithm that incrementally constructs a solution, with the action being determined by a graph
embedding network over the current state of the solution. More specifically, our proposed solution
framework is different from previous work in the following aspects:
1. Algorithm design pattern. We will adopt a greedy meta-algorithm design, whereby a feasible
solution is constructed by successive addition of nodes based on the graph structure, and is maintained
so as to satisfy the problem's graph constraints. Greedy algorithms are a popular pattern for designing
approximation and heuristic algorithms for graph problems. As such, the same high-level design can
be seamlessly used for different graph optimization problems.
2. Algorithm representation. We will use a graph embedding network, called structure2vec
(S2V) [9], to represent the policy in the greedy algorithm. This novel deep learning architecture
over the instance graph "featurizes" the nodes in the graph, capturing the properties of a node in the
context of its graph neighborhood. This allows the policy to discriminate among nodes based on
their usefulness, and generalizes to problem instances of different sizes. This contrasts with recent
approaches [37, 6] that adopt a graph-agnostic sequence-to-sequence mapping that does not fully
exploit graph structure.
3. Algorithm training. We will use fitted Q-learning to learn a greedy policy that is parametrized
by the graph embedding network. The framework is set up in such a way that the policy will aim
to optimize the objective function of the original problem instance directly. The main advantage of
this approach is that it can deal with delayed rewards, which here represent the remaining increase in
objective function value obtained by the greedy algorithm, in a data-efficient way; in each step of the
greedy algorithm, the graph embeddings are updated according to the partial solution to reflect new
knowledge of the benefit of each node to the final objective value. In contrast, the policy gradient
approach of [6] updates the model parameters only once w.r.t. the whole solution (e.g. the tour in
TSP).
The application of a greedy heuristic learned with our framework is illustrated in Figure 1. To
demonstrate the effectiveness of the proposed framework, we apply it to three extensively studied
graph optimization problems. Experimental results show that our framework, a single meta-learning
algorithm, efficiently learns effective heuristics for each of the problems. Furthermore, we show that
our learned heuristics preserve their effectiveness even when used on graphs much larger than the
ones they were trained on. Since many combinatorial optimization problems, such as the set covering
problem, can be explicitly or implicitly formulated on graphs, we believe that our work opens up a
new avenue for graph algorithm design and discovery with deep learning.
2 Common Formulation for Greedy Algorithms on Graphs
We will illustrate our framework using three optimization problems over weighted graphs. Let
$G(V, E, w)$ denote a weighted graph, where V is the set of nodes, E the set of edges and $w : E \to \mathbb{R}^+$ the edge weight function, i.e. $w(u, v)$ is the weight of edge $(u, v) \in E$. These problems are:
- Minimum Vertex Cover (MVC): Given a graph G, find a subset of nodes $S \subseteq V$ such that every edge is covered, i.e. $(u, v) \in E \Rightarrow u \in S$ or $v \in S$, and $|S|$ is minimized.
- Maximum Cut (MAXCUT): Given a graph G, find a subset of nodes $S \subseteq V$ such that the weight of the cut-set, $\sum_{(u,v) \in C} w(u, v)$, is maximized, where the cut-set $C \subseteq E$ is the set of edges with one end in $S$ and the other end in $V \setminus S$.
- Traveling Salesman Problem (TSP): Given a set of points in 2-dimensional space, find a tour
of minimum total weight, where the corresponding graph G has the points as nodes and is fully
connected with edge weights corresponding to distances between points; a tour is a cycle that visits
each node of the graph exactly once.
We will focus on a popular pattern for designing approximation and heuristic algorithms, namely
a greedy algorithm. A greedy algorithm will construct a solution by sequentially adding nodes to
a partial solution S, based on maximizing some evaluation function Q that measures the quality
of a node in the context of the current partial solution. We will show that, despite the diversity of
the combinatorial problems above, greedy algorithms for them can be expressed using a common
formulation. Specifically:
1. A problem instance G of a given optimization problem is sampled from a distribution D, i.e. the
V , E and w of the instance graph G are generated according to a model or real-world data.
2. A partial solution is represented as an ordered list $S = (v_1, v_2, \ldots, v_{|S|})$, $v_i \in V$, and $\bar{S} = V \setminus S$ is the set of candidate nodes for addition, conditional on S. Furthermore, we use a vector of binary decision variables x, with each dimension $x_v$ corresponding to a node $v \in V$: $x_v = 1$ if $v \in S$ and 0 otherwise. One can also view $x_v$ as a tag or extra feature on v.
3. A maintenance (or helper) procedure h(S) will be needed, which maps an ordered list S to a
combinatorial structure satisfying the specific constraints of a problem.
4. The quality of a partial solution S is given by an objective function c(h(S), G) based on the
combinatorial structure h of S.
5. A generic greedy algorithm selects a node v to add next such that v maximizes an evaluation
function, $Q(h(S), v) \in \mathbb{R}$, which depends on the combinatorial structure h(S) of the current partial solution. Then, the partial solution S will be extended as
$$S := (S, v^*), \quad \text{where } v^* := \arg\max_{v \in \bar{S}} Q(h(S), v), \qquad (1)$$
and $(S, v^*)$ denotes the appending of $v^*$ to the end of a list S. This step is repeated until a
termination criterion t(h(S)) becomes true.
In our formulation, we assume that the distribution D, the helper function h, the termination criterion
t and the cost function c are all given. Given the above abstract model, various optimization problems
can be expressed by using different helper functions, cost functions and termination criteria:
- MVC: The helper function does not need to do any work, and $c(h(S), G) = -|S|$. The termination criterion checks whether all edges have been covered.
- MAXCUT: The helper function divides V into two sets, S and its complement $\bar{S} = V \setminus S$, and maintains a cut-set $C = \{(u, v) \mid (u, v) \in E, u \in S, v \in \bar{S}\}$. Then, the cost is $c(h(S), G) = \sum_{(u,v) \in C} w(u, v)$, and the termination criterion does nothing.
- TSP: The helper function will maintain a tour according to the order of the nodes in S. The simplest way is to append nodes to the end of the partial tour in the same order as S. Then the cost $c(h(S), G) = -\sum_{i=1}^{|S|-1} w(S(i), S(i+1)) - w(S(|S|), S(1))$, and the termination criterion is
activated when S = V. Empirically, inserting a node u in the partial tour at the position which increases the tour length the least is a better choice. We adopt this insertion procedure as a helper function for TSP; a code sketch instantiating this recipe (for MVC) is given below.
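To make the abstract recipe concrete, here is a minimal Python sketch wiring the generic greedy loop of step 5 to the MVC helper, cost, and termination criterion described above. All names are our own illustration (this is not the authors' released code), and a simple degree heuristic stands in for the learned evaluation function Q.

```python
import networkx as nx

def greedy_solve(graph, q_fn, cost_fn, terminated):
    """Generic greedy meta-algorithm: repeatedly add the candidate
    node that maximizes the evaluation function Q."""
    solution = []  # ordered partial solution S
    while not terminated(graph, solution):
        candidates = [v for v in graph.nodes if v not in solution]
        best = max(candidates, key=lambda v: q_fn(graph, solution, v))
        solution.append(best)
    return solution, cost_fn(graph, solution)

# MVC instantiation: the helper h is trivial and c(h(S), G) = -|S|.
def mvc_cost(graph, solution):
    return -len(solution)

def mvc_terminated(graph, solution):
    covered = set(solution)
    return all(u in covered or v in covered for u, v in graph.edges)

# Stand-in Q: prefer high-degree nodes; the learned Q-hat replaces this.
def degree_q(graph, solution, v):
    return graph.degree[v]

g = nx.erdos_renyi_graph(30, 0.2, seed=0)
cover, cost = greedy_solve(g, degree_q, mvc_cost, mvc_terminated)
print(len(cover), cost)
```

Swapping in a different helper, cost, and termination triple yields the MAXCUT and TSP variants without touching the greedy loop itself.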
The quality of solution resulting from adding a node to partial solution S will be determined by
the evaluation function Q, which will be learned using a collection of problem instances. This is
in contrast with traditional greedy algorithm design, where the evaluation function Q is typically
hand-crafted, and requires substantial problem-specific research and trial-and-error. In the following,
we will design a powerful deep learning parameterization for the evaluation function, $\widehat{Q}(h(S), v; \Theta)$, with parameters $\Theta$.
3 Representation: Graph Embedding
Since we are optimizing over a graph G, we expect that the evaluation function $\widehat{Q}$ should take into account the current partial solution S as it maps to the graph. That is, $x_v = 1$ for all nodes $v \in S$, and the nodes are connected according to the graph structure. Intuitively, $\widehat{Q}$ should summarize the state of such a "tagged" graph G, and figure out the value of a new node if it is to be added in the context of such a graph. Here, both the state of the graph and the context of a node v can be very complex, hard to describe in closed form, and may depend on complicated statistics such as global/local degree distribution, triangle counts, distance to tagged nodes, etc. In order to represent such complex phenomena over combinatorial structures, we will leverage a deep learning architecture over graphs, in particular the structure2vec of [9], to parameterize $\widehat{Q}(h(S), v; \Theta)$.
3.1 Structure2Vec
We first provide an introduction to structure2vec. This graph embedding network will compute a p-dimensional feature embedding $\mu_v$ for each node $v \in V$, given the current partial solution S. More specifically, structure2vec defines the network architecture recursively according to an input graph structure G, and the computation graph of structure2vec is inspired by graphical model inference algorithms, where node-specific tags or features $x_v$ are aggregated recursively according to G's graph topology. After a few steps of recursion, the network will produce a new embedding for each node, taking into account both graph characteristics and long-range interactions between these node features. One variant of the structure2vec architecture will initialize the embedding $\mu_v^{(0)}$ at each node as 0, and for all $v \in V$ update the embeddings synchronously at each iteration as
$$\mu_v^{(t+1)} \leftarrow F\left(x_v, \{\mu_u^{(t)}\}_{u \in N(v)}, \{w(v, u)\}_{u \in N(v)}; \Theta\right), \qquad (2)$$
u?N (v)
where N (v) is the set of neighbors of node v in graph G, and F is a generic nonlinear mapping such
as a neural network or kernel function.
Based on the update formula, one can see that the embedding update process is carried out based on
the graph topology. A new round of embedding sweeping across the nodes will start only after the
embedding update for all nodes from the previous round has finished. It is easy to see that the update
also defines a process where the node features xv are propagated to other nodes via the nonlinear
propagation function F . Furthermore, the more update iterations one carries out, the farther away
the node features will propagate and get aggregated nonlinearly at distant nodes. In the end, if one terminates after T iterations, each node embedding $\mu_v^{(T)}$ will contain information about its T-hop
neighborhood as determined by graph topology, the involved node features and the propagation
function F . An illustration of two iterations of graph embedding can be found in Figure 1.
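As a toy illustration of the synchronous update in Eq. (2), the NumPy sketch below runs T rounds of message passing with a random-weight stand-in for the propagation function F; the learned parameterization of F is given in Section 3.2, and the shapes and names here are our own assumptions.

```python
import numpy as np

def embed_graph(adj, x, p=8, T=4, seed=0):
    """Toy synchronous embedding (Eq. 2). adj: (n, n) 0/1 adjacency
    matrix, x: (n,) binary node tags. Returns (n, p) embeddings."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(1 + p, p))  # placeholder params of F
    mu = np.zeros((len(x), p))                  # mu^(0) = 0 for all nodes
    for _ in range(T):
        agg = adj @ mu                          # sum of mu_u over u in N(v)
        mu = np.tanh(np.concatenate([x[:, None], agg], axis=1) @ W)
    return mu

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(embed_graph(adj, np.array([1.0, 0.0, 0.0])).shape)  # (3, 8)
```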
3.2 Parameterizing $\widehat{Q}(h(S), v; \Theta)$
We now discuss the parameterization of $\widehat{Q}(h(S), v; \Theta)$ using the embeddings from structure2vec. In particular, we design F to update a p-dimensional embedding $\mu_v$ as:
$$\mu_v^{(t+1)} \leftarrow \mathrm{relu}\Big(\theta_1 x_v + \theta_2 \sum_{u \in N(v)} \mu_u^{(t)} + \theta_3 \sum_{u \in N(v)} \mathrm{relu}(\theta_4\, w(v, u))\Big), \qquad (3)$$
where $\theta_1, \theta_4 \in \mathbb{R}^p$ and $\theta_2, \theta_3 \in \mathbb{R}^{p \times p}$ are the model parameters, and relu is the rectified
linear unit (relu(z) = max(0, z)) applied elementwise to its input. The summation over neighbors is
one way of aggregating neighborhood information that is invariant to permutations over neighbors.
For simplicity of exposition, xv here is a binary scalar as described earlier; it is straightforward to
extend xv to a vector representation by incorporating any useful node information. To make the
nonlinear transformations more powerful, we can add some more layers of relu before we pool over
the neighboring embeddings $\mu_u$.
Table 1: Definition of reinforcement learning components for each of the three problems considered.

Problem | State                           | Action                | Helper function     | Reward               | Termination
MVC     | subset of nodes selected so far | add node to subset    | None                | -1                   | all edges are covered
MAXCUT  | subset of nodes selected so far | add node to subset    | None                | change in cut weight | cut weight cannot be improved
TSP     | partial tour                    | grow tour by one node | insertion operation | change in tour cost  | tour includes all nodes
Once the embedding for each node is computed after T iterations, we will use these embeddings to define the $\widehat{Q}(h(S), v; \Theta)$ function. More specifically, we will use the embedding $\mu_v^{(T)}$ for node v and the pooled embedding over the entire graph, $\sum_{u \in V} \mu_u^{(T)}$, as the surrogates for v and h(S), respectively, i.e.
$$\widehat{Q}(h(S), v; \Theta) = \theta_5^\top \mathrm{relu}\big(\big[\theta_6 \textstyle\sum_{u \in V} \mu_u^{(T)}, \; \theta_7\, \mu_v^{(T)}\big]\big), \qquad (4)$$
where $\theta_5 \in \mathbb{R}^{2p}$, $\theta_6, \theta_7 \in \mathbb{R}^{p \times p}$ and $[\cdot, \cdot]$ is the concatenation operator. Since the embedding $\mu_u^{(T)}$ is computed based on the parameters from the graph embedding network, $\widehat{Q}(h(S), v)$ will depend on a collection of 7 parameters $\Theta = \{\theta_i\}_{i=1}^{7}$. The number of iterations T for the graph embedding
computation is usually small, such as T = 4.
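For readers who want to see Eqs. (3)-(4) spelled out, here is a NumPy sketch with randomly initialized (untrained) parameters; the array shapes follow the dimensions stated above, while the variable names and the dense-matrix graph representation are our own choices.

```python
import numpy as np

def s2v_q_values(adj, w, x, theta, T=4):
    """Compute Q-hat(h(S), v) for all nodes v via Eqs. (3)-(4).
    adj: (n, n) 0/1 adjacency; w: (n, n) edge weights; x: (n,) tags."""
    t1, t2, t3, t4, t5, t6, t7 = theta
    relu = lambda z: np.maximum(0.0, z)
    n, p = len(x), t1.shape[0]
    # sum over u in N(v) of relu(theta4 * w(v, u)): fixed across iterations
    edge_msg = (relu(w[:, :, None] * t4) * adj[:, :, None]).sum(axis=1)
    mu = np.zeros((n, p))
    for _ in range(T):                        # Eq. (3), synchronous update
        mu = relu(np.outer(x, t1) + (adj @ mu) @ t2.T + edge_msg @ t3.T)
    pooled = t6 @ mu.sum(axis=0)              # theta6 * sum_u mu_u^(T)
    per_node = mu @ t7.T                      # theta7 * mu_v^(T), all v
    feats = np.concatenate([np.tile(pooled, (n, 1)), per_node], axis=1)
    return relu(feats) @ t5                   # Eq. (4): (n,) Q-hat values

rng = np.random.default_rng(0)
p, n = 8, 5
theta = (rng.normal(size=p), rng.normal(size=(p, p)), rng.normal(size=(p, p)),
         rng.normal(size=p), rng.normal(size=2 * p),
         rng.normal(size=(p, p)), rng.normal(size=(p, p)))
adj = np.ones((n, n)) - np.eye(n)             # toy complete graph
print(s2v_q_values(adj, adj, np.zeros(n), theta).shape)  # (5,)
```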
The parameters $\Theta$ will be learned. Previously, [9] required a ground truth label for every input graph G in order to train the structure2vec architecture. There, the output of the embedding is linked with a softmax layer, so that the parameters can be trained end-to-end by minimizing the
cross-entropy loss. This approach is not applicable to our case due to the lack of training labels.
Instead, we train these parameters together end-to-end using reinforcement learning.
4 Training: Q-learning
We show how reinforcement learning is a natural framework for learning the evaluation function $\widehat{Q}$. The definition of the evaluation function $\widehat{Q}$ naturally lends itself to a reinforcement learning (RL) formulation [36], and we will use $\widehat{Q}$ as a model for the state-value function in RL. We note that we would like to learn a function $\widehat{Q}$ across a set of m graphs from distribution $\mathbb{D}$, $D = \{G_i\}_{i=1}^{m}$, with
potentially different sizes. The advantage of the graph embedding parameterization in our previous
section is that we can deal with different graph instances and sizes seamlessly.
4.1 Reinforcement learning formulation
We define the states, actions and rewards in the reinforcement learning framework as follows:
1. States: a state S is a sequence of actions (nodes) on a graph G. Since we have already represented nodes in the tagged graph with their embeddings, the state is a vector in p-dimensional space, $\sum_{v \in V} \mu_v$. It is easy to see that this embedding representation of the state can be used across different graphs. The terminal state $\widehat{S}$ will depend on the problem at hand;
2. Transition: transition is deterministic here, and corresponds to tagging the node $v \in G$ that was selected as the last action with feature $x_v = 1$;
3. Actions: an action v is a node of G that is not part of the current state S. Similarly, we will
represent actions as their corresponding p-dimensional node embedding $\mu_v$, and such a definition
is applicable across graphs of various sizes;
4. Rewards: the reward function r(S, v) at state S is defined as the change in the cost function after taking action v and transitioning to a new state $S' := (S, v)$. That is,
$$r(S, v) = c(h(S'), G) - c(h(S), G), \qquad (5)$$
and $c(h(\emptyset), G) = 0$. As such, the cumulative reward R of a terminal state $\widehat{S}$ coincides exactly with the objective function value of $\widehat{S}$, i.e. $R(\widehat{S}) = \sum_{i=1}^{|\widehat{S}|} r(S_i, v_i)$ is equal to $c(h(\widehat{S}), G)$;
5. Policy: based on $\widehat{Q}$, a deterministic greedy policy $\pi(v \mid S) := \arg\max_{v' \in \bar{S}} \widehat{Q}(h(S), v')$ will be used. Selecting action v corresponds to adding a node of G to the current partial solution, which results in collecting a reward r(S, v).
Table 1 shows the instantiations of the reinforcement learning framework for the three optimization
problems considered herein. We let $Q^*$ denote the optimal Q-function for each RL problem. Our graph embedding parameterization $\widehat{Q}(h(S), v; \Theta)$ from the last section will then be a function approximation model for it, which will be learned via n-step Q-learning.
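Under this formulation, an environment only needs to expose the candidate actions and compute the reward of Eq. (5) as a difference of objective values. The toy MAXCUT environment below makes this concrete; it is our own illustrative class (with unit edge weights by default), not part of the paper's implementation.

```python
import networkx as nx

class MaxcutEnv:
    """Toy episodic environment: states are partial node subsets, an
    action adds a node, and the reward is the change in cut weight (Eq. 5)."""
    def __init__(self, graph):
        self.g, self.solution = graph, []

    def cut_weight(self, nodes):
        s = set(nodes)
        return sum(d.get("weight", 1.0)
                   for u, v, d in self.g.edges(data=True)
                   if (u in s) != (v in s))

    def candidates(self):
        return [v for v in self.g.nodes if v not in self.solution]

    def step(self, v):
        before = self.cut_weight(self.solution)
        self.solution.append(v)
        return self.cut_weight(self.solution) - before  # reward r(S, v)

env = MaxcutEnv(nx.barabasi_albert_graph(20, 3, seed=1))
print(env.step(0), env.step(5))
```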
4.2 Learning algorithm
In order to perform end-to-end learning of the parameters in $\widehat{Q}(h(S), v; \Theta)$, we use a combination
of n-step Q-learning [36] and fitted Q-iteration [33], as illustrated in Algorithm 1. We use the term
episode to refer to a complete sequence of node additions starting from an empty solution, and until
termination; a step within an episode is a single action (node addition).
Standard (1-step) Q-learning updates the function approximator's parameters at each step of an episode by performing a gradient step to minimize the squared loss:
$$\big(y - \widehat{Q}(h(S_t), v_t; \Theta)\big)^2, \qquad (6)$$
where $y = \gamma \max_{v'} \widehat{Q}(h(S_{t+1}), v'; \Theta) + r(S_t, v_t)$ for a non-terminal state $S_t$. The n-step Q-learning
helps deal with the issue of delayed rewards, where the final reward of interest to the agent is only
received far in the future during an episode. In our setting, the final objective value of a solution is
only revealed after many node additions. As such, the 1-step update may be too myopic. A natural
extension of 1-step Q-learning is to wait n steps before updating the approximator's parameters, so as to collect a more accurate estimate of the future rewards. Formally, the update is over the same squared loss (6), but with a different target, $y = \sum_{i=0}^{n-1} r(S_{t+i}, v_{t+i}) + \gamma \max_{v'} \widehat{Q}(h(S_{t+n}), v'; \Theta)$.
The fitted Q-iteration approach has been shown to result in faster learning convergence when using
a neural network as a function approximator [33, 28], a property that also applies in our setting, as
we use the embedding defined in Section 3.2. Instead of updating the Q-function sample-by-sample
as in Equation (6), the fitted Q-iteration approach uses experience replay to update the function
approximator with a batch of samples from a dataset E, rather than the single sample being currently
experienced. The dataset E is populated during previous episodes, such that at step t + n, the tuple
$(S_t, a_t, R_{t,t+n}, S_{t+n})$ is added to E, with $R_{t,t+n} = \sum_{i=0}^{n-1} r(S_{t+i}, a_{t+i})$. Instead of performing
a gradient step in the loss of the current sample as in (6), stochastic gradient descent updates are
performed on a random sample of tuples drawn from E.
It is known that off-policy reinforcement learning algorithms such as Q-learning can be more sample
efficient than their policy gradient counterparts [15]. This is largely due to the fact that policy gradient
methods require on-policy samples for the new policy obtained after each parameter update of the
function approximator.
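A minimal replay memory for this scheme, storing the n-step tuples $(S_t, a_t, R_{t,t+n}, S_{t+n})$ and sampling uniform batches, might look like the sketch below; this is an illustrative structure of our own, not the authors' implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of n-step transitions for fitted Q-iteration."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest tuples are evicted first

    def add(self, state, action, n_step_return, next_state):
        self.buf.append((state, action, n_step_return, next_state))

    def sample(self, batch_size):
        # uniform sampling with replacement over stored transitions
        return [random.choice(self.buf) for _ in range(batch_size)]
```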
5 Experimental Evaluation
Algorithm 1 Q-learning for the Greedy Algorithm
1: Initialize experience replay memory M to capacity N
2: for episode e = 1 to L do
3:    Draw graph G from distribution $\mathbb{D}$
4:    Initialize the state to empty $S_1 = ()$
5:    for step t = 1 to T do
6:        $v_t = $ a random node $v \in \bar{S}_t$ w.p. $\epsilon$, otherwise $\arg\max_{v \in \bar{S}_t} \widehat{Q}(h(S_t), v; \Theta)$
7:        Add $v_t$ to partial solution: $S_{t+1} := (S_t, v_t)$
8:        if $t \geq n$ then
9:            Add tuple $(S_{t-n}, v_{t-n}, R_{t-n,t}, S_t)$ to M
10:           Sample random batch $B \overset{\text{iid.}}{\sim} M$
11:           Update $\Theta$ by SGD over (6) for B
12:       end if
13:    end for
14: end for
15: return $\Theta$
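Read as code, Algorithm 1 is an epsilon-greedy episode loop with n-step reward bookkeeping. The sketch below mirrors it under simplifying assumptions: `make_env` draws a fresh instance exposing `candidates()` and `step()` as in the toy environment above, `q_fn` scores a candidate node, and `sgd_update` performs an (unspecified) gradient step on loss (6). All of these names are ours, not those of the released implementation.

```python
import random

def train(make_env, q_fn, sgd_update, buffer,
          episodes=100, n=2, eps=0.1, batch_size=32):
    for _ in range(episodes):
        env = make_env()                      # draw G ~ D, empty solution
        states = [list(env.solution)]
        actions, rewards = [], []
        while env.candidates():
            cand = env.candidates()
            if random.random() < eps:         # epsilon-greedy exploration
                v = random.choice(cand)
            else:
                v = max(cand, key=lambda u: q_fn(env, u))
            rewards.append(env.step(v))
            actions.append(v)
            states.append(list(env.solution))
            t = len(actions)
            if t >= n:                        # n-step return R_{t-n,t}
                ret = sum(rewards[t - n:t])
                buffer.add(states[t - n], actions[t - n], ret, states[t])
                sgd_update(buffer.sample(batch_size))
```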
Instance generation. To evaluate the proposed method against other approximation/heuristic algorithms and deep learning approaches, we generate graph instances for each of the three problems.
For the MVC and MAXCUT problems, we generate Erdős-Rényi (ER) [11] and Barabási-Albert
(BA) [1] graphs which have been used to model many real-world networks. For a given range on the
number of nodes, e.g. 50-100, we first sample the number of nodes uniformly at random from that
range, then generate a graph according to either ER or BA. For the two-dimensional TSP problem,
we use an instance generator from the DIMACS TSP Challenge [18] to generate uniformly random
or clustered points in the 2-D grid. We refer the reader to the Appendix C.1 for complete details on
instance generation.
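A sampler following this recipe might look like the sketch below, built on networkx's ER and BA generators; the edge probability and attachment parameter are arbitrary placeholders here, since the actual values appear in Appendix C.1.

```python
import random
import networkx as nx

def sample_instance(n_min, n_max, kind="er", seed=None):
    """Sample |V| uniformly from [n_min, n_max], then draw an ER or BA graph."""
    rng = random.Random(seed)
    n = rng.randint(n_min, n_max)
    if kind == "er":
        return nx.erdos_renyi_graph(n, p=0.15, seed=rng.randint(0, 2**31))
    return nx.barabasi_albert_graph(n, m=4, seed=rng.randint(0, 2**31))

g = sample_instance(50, 100, kind="ba", seed=0)
print(g.number_of_nodes(), g.number_of_edges())
```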
Structure2Vec Deep Q-learning. For our method, S2V-DQN, we use the graph representations and
hyperparameters described in Appendix C.4. The hyperparameters are selected via preliminary results
on small graphs, and then fixed for large ones. Note that for TSP, where the graph is fully-connected,
we build the K-nearest neighbor graph (K = 10) to scale up to large graphs. For MVC, where
we train the model on graphs with up to 500 nodes, we use the model trained on small graphs as
initialization for training on larger ones. We refer to this trick as "pre-training", which is illustrated in
Figure C.2.
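One way to build such a K-nearest-neighbor graph over the TSP points is sketched below using scipy's KD-tree; K = 10 follows the text, while the symmetrization and the Euclidean-distance edge weights are our reading of the setup.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points, k=10):
    """Return {edge: weight} for the symmetrized K-NN graph over 2-D points."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)  # nearest neighbor is the point itself
    edges = {}
    for i in range(len(points)):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            edges[tuple(sorted((i, int(j))))] = float(d)
    return edges

pts = np.random.default_rng(0).random((200, 2))
print(len(knn_graph(pts)))
```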
Pointer Networks with Actor-Critic. We compare our method to a method based on Recurrent
Neural Networks (RNNs) which does not make full use of graph structure [6]. We implement and train
their algorithm (PN-AC) for all three problems. The original model only works on the Euclidean TSP
problem, where each node is represented by its (x, y) coordinates, and is not designed for problems
with graph structure. To handle other graph problems, we describe each node by its adjacency vector
instead of coordinates. To handle different graph sizes, we use a singular value decomposition (SVD)
to obtain a rank-8 approximation for the adjacency matrix, and use the low-rank embeddings as inputs
to the pointer network.
Baseline Algorithms. Besides the PN-AC, we also include powerful approximation or heuristic
algorithms from the literature. These algorithms are specifically designed for each type of problem:
- MVC: MVCApprox iteratively selects an uncovered edge and adds both of its endpoints [30]. We designed a stronger variant, called MVCApprox-Greedy, that greedily picks the uncovered edge with maximum sum of degrees of its endpoints. Both algorithms are 2-approximations (a code sketch of the greedy variant follows this list).
- MAXCUT: We include MaxcutApprox, which maintains the cut set (S, V \ S) and moves a node from one side to the other side of the cut if that operation results in cut weight improvement [25]. To make MaxcutApprox stronger, we greedily move the node that results in the largest improvement in cut weight. A randomized, non-greedy algorithm, referred to as SDP, is also implemented based on [12]; 100 solutions are generated for each graph, and the best one is taken.
- TSP: We include the following approximation algorithms: Minimum Spanning Tree (MST), Cheapest insertion (Cheapest), Closest insertion (Closest), Christofides and 2-opt. We also add the Nearest Neighbor heuristic (Nearest); see [4] for algorithmic details.
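As an example of what these baselines look like in code, here is a short sketch of the MVCApprox-Greedy variant: pick the uncovered edge with the largest endpoint degree sum, add both endpoints, and repeat. Since the chosen edges form a matching, the 2-approximation guarantee is preserved; tie-breaking and other details are our own choices.

```python
import networkx as nx

def mvc_approx_greedy(graph):
    """2-approximation: greedily cover the uncovered edge with the
    largest sum of endpoint degrees, adding both endpoints."""
    cover = set()
    uncovered = set(graph.edges)
    while uncovered:
        u, v = max(uncovered,
                   key=lambda e: graph.degree[e[0]] + graph.degree[e[1]])
        cover.update((u, v))
        uncovered = {e for e in uncovered
                     if e[0] not in cover and e[1] not in cover}
    return cover

g = nx.erdos_renyi_graph(40, 0.1, seed=2)
print(len(mvc_approx_greedy(g)))
```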
Details on Validation and Testing. For S2V-DQN and PN-AC, we use a CUDA K80-enabled cluster
for training and testing. Training convergence for S2V-DQN is discussed in Appendix C.6. S2V-DQN
and PN-AC use 100 held-out graphs for validation, and we report the test results on another 1000
graphs. We use CPLEX[17] to get optimal solutions for MVC and MAXCUT, and Concorde [3] for
TSP (details in Appendix C.1). All approximation ratios reported in the paper are with respect to the
best (possibly optimal) solution found by the solvers within 1 hour. For MVC, we vary the training
and test graph sizes in the ranges {15-20, 40-50, 50-100, 100-200, 400-500}. For MAXCUT and TSP, which involve edge weights, we train up to 200-300 nodes due to the limited computation resources. For all problems, we test on graphs of size up to 1000-1200.
During testing, instead of using Active Search as in [6], we simply use the greedy policy. This gives
us much faster inference, while still being powerful enough. We modify existing open-source code to
implement both S2V-DQN 2 and PN-AC 3 . Our code is publicly available 4 .
5.1 Comparison of solution quality
To evaluate the solution quality on test instances, we use the approximation ratio of each method
relative to the optimal solution, averaged over the set of test instances. The approximation ratio of a
solution S to a problem instance G is defined as $R(S, G) = \max\!\left(\frac{OPT(G)}{c(h(S))}, \frac{c(h(S))}{OPT(G)}\right)$, where $c(h(S))$ is the objective value of solution S, and $OPT(G)$ is the best-known solution value of instance G.
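In code, this symmetric ratio (at least 1 for both minimization and maximization problems) is a one-liner; the sketch assumes both costs carry the same sign convention.

```python
def approx_ratio(solution_cost, opt_cost):
    """Symmetric approximation ratio R(S, G) = max(OPT/c, c/OPT) >= 1."""
    return max(opt_cost / solution_cost, solution_cost / opt_cost)

print(approx_ratio(104.0, 100.0))  # 1.04
```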
Figure 2 shows the average approximation ratio across the three problems; other graph types are
in Appendix Figure C.1. In all of these figures, a lower approximation ratio is better. Overall,
our proposed method, S2V-DQN, performs significantly better than other methods.

Footnotes 2-4 (code): https://github.com/Hanjun-Dai/graphnn, https://github.com/devsisters/pointer-network-tensorflow, https://github.com/Hanjun-Dai/graph_comb_opt

[Figure 2: Approximation ratio on 1000 test graphs (lower is better). Panels: (a) MVC BA, (b) MAXCUT BA, (c) TSP clustered; x-axis: number of nodes in train/test graphs; y-axis: approximation ratio to optimal. Note that on MVC, our performance is pretty close to optimal. In this figure, training and testing graphs are generated according to the same distribution.]

In MVC, the
performance of S2V-DQN is particularly good, as the approximation ratio is roughly 1 and the bar is
barely visible.
The PN-AC algorithm performs well on TSP, as expected. Since the TSP graph is essentially fully connected, graph structure is not as important. On problems such as MVC and MAXCUT, where graph information is more crucial, our algorithm performs significantly better than PN-AC. For TSP, the 2-opt algorithm performs as well as S2V-DQN, and slightly better in some cases, an intuitive
result given the sophistication of this algorithm, which exchanges pairs of edges that can give a
smaller tour.
5.2 Generalization to larger instances
The graph embedding framework enables us to train and test on graphs of different sizes, since the
same set of model parameters are used. How does the performance of the learned algorithm using
small graphs generalize to test graphs of larger sizes? To investigate this, we train S2V-DQN on
graphs with 50-100 nodes, and test its generalization ability on graphs with up to 1200 nodes. Table 2
summarizes the results, and full results are in Appendix C.3.
Table 2: S2V-DQN's generalization ability. Values are average approximation ratios over 1000 test instances. These test results are produced by S2V-DQN algorithms trained on graphs with 50-100 nodes.

Test Size       | 50-100 | 100-200 | 200-300 | 300-400 | 400-500 | 500-600 | 1000-1200
MVC (BA)        | 1.0033 | 1.0041  | 1.0045  | 1.0040  | 1.0045  | 1.0048  | 1.0062
MAXCUT (BA)     | 1.0150 | 1.0181  | 1.0202  | 1.0188  | 1.0123  | 1.0177  | 1.0038
TSP (clustered) | 1.0730 | 1.0895  | 1.0869  | 1.0918  | 1.0944  | 1.0975  | 1.1065
We can see that S2V-DQN achieves a very good approximation ratio. Note that the "optimal" value used in the computation of approximation ratios may not be truly optimal (due to the solver time cutoff at 1 hour), and so CPLEX's solutions do typically get worse as problem size grows. This is why we can sometimes even get a better approximation ratio on larger graphs.
5.3 Scalability & Trade-off between running time and approximation ratio
To construct a solution on a test graph, our algorithm has polynomial complexity of O(k|E|) where k
is number of greedy steps (at most the number of nodes |V |) and |E| is number of edges. For instance,
on graphs with 1200 nodes, we can find the solution of MVC within 11 seconds using a single GPU,
while getting an approximation ratio of 1.0062. For dense graphs, we can also sample the edges for
the graph embedding computation to save time, a measure we will investigate in the future.
Figure 3 illustrates the approximation ratios of various approaches as a function of running time.
All algorithms report a single solution at termination, whereas CPLEX reports multiple improving
solutions, for which we recorded the corresponding running time and approximation ratio. Figure C.3
(Appendix C.7) includes other graph sizes and types, where the results are consistent with Figure 3.
Figure 3 shows that, for MVC, we are slightly slower than the approximation algorithms but enjoy a
much better approximation ratio. Also note that although CPLEX found the first feasible solution
quickly, it also has much worse ratio; the second improved solution found by CPLEX takes similar or
longer time than our S2V-DQN, but is still of worse quality. For MAXCUT, the observations are still
consistent. One should be aware that sometimes our algorithm can obtain better results than 1-hour
CPLEX, which gives ratios below 1.0. Furthermore, sometimes S2V-DQN is even faster than the
MaxcutApprox, although this comparison is not exactly fair, since we use GPUs; however, we can
still see that our algorithm is efficient.
[Figure 3: Time-approximation trade-off for MVC and MAXCUT. Panels: (a) MVC BA 200-300, (b) MAXCUT BA 200-300; x-axis: time (s, log scale); y-axis: approximation ratio; legends: S2V-DQN, MVCApprox-Greedy, MVCApprox, MaxcutApprox, SDP, and CPLEX-1st through CPLEX-5th. In this figure, each dot represents a solution found for a single problem instance. For CPLEX, we also record the time and quality of each solution it finds, e.g. CPLEX-1st means the first feasible solution found by CPLEX.]
5.4 Experiments on real-world datasets
In addition to the experiments for synthetic data, we identified sets of publicly available benchmark
or real-world instances for each problem, and performed experiments on them. A summary of results
is in Table 3, and details are given in Appendix B. S2V-DQN significantly outperforms all competing
methods for MVC, MAXCUT and TSP.
Table 3: Realistic data experiments, results summary. Values are average approximation ratios.

Problem | Dataset     | S2V-DQN | Best Competitor           | 2nd Best Competitor
MVC     | MemeTracker | 1.0021  | 1.2220 (MVCApprox-Greedy) | 1.4080 (MVCApprox)
MAXCUT  | Physics     | 1.0223  | 1.2825 (MaxcutApprox)     | 1.8996 (SDP)
TSP     | TSPLIB      | 1.0475  | 1.0947 (2-opt)            | 1.1771 (Cheapest)
5.5 Discovery of interesting new algorithms
We further examined the algorithms learned by S2V-DQN, and tried to interpret what greedy heuristics
have been learned. We found that S2V-DQN is able to discover new and interesting algorithms which
intuitively make sense but have not been analyzed before. For instance, S2V-DQN discovers an
algorithm for MVC where nodes are selected to balance between their degrees and the connectivity
of the remaining graph (Appendix Figure C.4 and C.7). For MAXCUT, S2V-DQN discovers an
algorithm where nodes are picked to avoid cancelling out existing edges in the cut set (Appendix
Figure C.5). These results suggest that S2V-DQN may also be a good assistive tool for discovering
new algorithms, especially in cases when the graph optimization problems are new and less well-studied.
6 Conclusions
We presented an end-to-end machine learning framework for automatically designing greedy heuristics for hard combinatorial optimization problems on graphs. Central to our approach is the combination of a deep graph embedding with reinforcement learning. Through extensive experimental
evaluation, we demonstrate the effectiveness of the proposed framework in learning greedy heuristics
as compared to manually-designed greedy algorithms. The excellent performance of the learned
heuristics is consistent across multiple different problems, graph types, and graph sizes, suggesting
that the framework is a promising new tool for designing algorithms for graph problems.
Acknowledgments
This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF
CAREER IIS-1350983, NSF IIS-1639792 EAGER, NSF CNS-1704701, ONR N00014-15-1-2340,
Intel ISTC, NVIDIA and Amazon AWS. Dilkina is supported by NSF grant CCF-1522054 and
ExxonMobil.
References
[1] Albert, Réka and Barabási, Albert-László. Statistical mechanics of complex networks. Reviews
of modern physics, 74(1):47, 2002.
[2] Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David,
Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent.
In Advances in Neural Information Processing Systems, pp. 3981-3989, 2016.
[3] Applegate, David, Bixby, Robert, Chvatal, Vasek, and Cook, William. Concorde TSP solver,
2006.
[4] Applegate, David L, Bixby, Robert E, Chvatal, Vasek, and Cook, William J. The traveling
salesman problem: a computational study. Princeton university press, 2011.
[5] Balas, Egon and Ho, Andrew. Set covering algorithms using cutting planes, heuristics, and
subgradient optimization: a computational study. Combinatorial Optimization, pp. 37-60, 1980.
[6] Bello, Irwan, Pham, Hieu, Le, Quoc V, Norouzi, Mohammad, and Bengio, Samy. Neural
combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940,
2016.
[7] Boyan, Justin and Moore, Andrew W. Learning evaluation functions to improve optimization
by local search. Journal of Machine Learning Research, 1(Nov):77-112, 2000.
[8] Chen, Yutian, Hoffman, Matthew W, Colmenarejo, Sergio Gomez, Denil, Misha, Lillicrap,
Timothy P, and de Freitas, Nando. Learning to learn for global optimization of black box
functions. arXiv preprint arXiv:1611.03824, 2016.
[9] Dai, Hanjun, Dai, Bo, and Song, Le. Discriminative embeddings of latent variable models for
structured data. In ICML, 2016.
[10] Du, Nan, Song, Le, Gomez-Rodriguez, Manuel, and Zha, Hongyuan. Scalable influence
estimation in continuous-time diffusion networks. In NIPS, 2013.
[11] Erdős, Paul and Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hungar. Acad. Sci, 5:17-61, 1960.
[12] Goemans, M.X. and Williamson, D. P. Improved approximation algorithms for maximum
cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):
1115-1145, 1995.
[13] Gomez-Rodriguez, Manuel, Leskovec, Jure, and Krause, Andreas. Inferring networks of
diffusion and influence. In Proceedings of the 16th ACM SIGKDD international conference on
Knowledge discovery and data mining, pp. 1019-1028. ACM, 2010.
[14] Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-Barwińska, Agnieszka, Colmenarejo, Sergio Gómez, Grefenstette, Edward, Ramalho, Tiago, Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.
[15] Gu, Shixiang, Lillicrap, Timothy, Ghahramani, Zoubin, Turner, Richard E, and Levine,
Sergey. Q-prop: Sample-efficient policy gradient with an off-policy critic. arXiv preprint
arXiv:1611.02247, 2016.
[16] He, He, Daume III, Hal, and Eisner, Jason M. Learning to search in branch and bound algorithms.
In Advances in Neural Information Processing Systems, pp. 3293-3301, 2014.
[17] IBM. CPLEX User?s Manual, Version 12.6.1, 2014.
[18] Johnson, David S and McGeoch, Lyle A. Experimental analysis of heuristics for the STSP. In The traveling salesman problem and its variations, pp. 369-443. Springer, 2007.
[19] Karp, Richard M. Reducibility among combinatorial problems. In Complexity of computer
computations, pp. 85-103. Springer, 1972.
[20] Kempe, David, Kleinberg, Jon, and Tardos, Éva. Maximizing the spread of influence through a social network. In KDD, pp. 137-146. ACM, 2003.
[21] Khalil, Elias B., Dilkina, B., and Song, L. Scalable diffusion-aware optimization of network
topology. In Knowledge Discovery and Data Mining (KDD), 2014.
[22] Khalil, Elias B., Le Bodic, Pierre, Song, Le, Nemhauser, George L, and Dilkina, Bistra N.
Learning to branch in mixed integer programming. In AAAI, pp. 724-731, 2016.
[23] Khalil, Elias B., Dilkina, Bistra, Nemhauser, George, Ahmed, Shabbir, and Shao, Yufen.
Learning to run heuristics in tree search. In 26th International Joint Conference on Artificial
Intelligence (IJCAI), 2017.
[24] Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[25] Kleinberg, Jon and Tardos, Eva. Algorithm design. Pearson Education India, 2006.
[26] Lagoudakis, Michail G and Littman, Michael L. Learning to select branching rules in the dpll
procedure for satisfiability. Electronic Notes in Discrete Mathematics, 9:344-359, 2001.
[27] Li, Ke and Malik, Jitendra. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
[28] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis,
Wierstra, Daan, and Riedmiller, Martin A. Playing atari with deep reinforcement learning.
CoRR, abs/1312.5602, 2013. URL http://arxiv.org/abs/1312.5602.
[29] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare,
Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al.
Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[30] Papadimitriou, C. H. and Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity.
Prentice-Hall, New Jersey, 1982.
[31] Peleg, David, Schechtman, Gideon, and Wool, Avishai. Approximating bounded 0-1 integer
linear programs. In Theory and Computing Systems, 1993, Proceedings of the 2nd Israel Symposium on the, pp. 69-77. IEEE, 1993.
[32] Reinelt, Gerhard. TSPLIB: a traveling salesman problem library. ORSA Journal on Computing, 3(4):376-384, 1991.
[33] Riedmiller, Martin. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317-328.
Springer, 2005.
[34] Sabharwal, Ashish, Samulowitz, Horst, and Reddy, Chandra. Guiding combinatorial optimization with UCT. In CPAIOR, pp. 356-361. Springer, 2012.
[35] Samulowitz, Horst and Memisevic, Roland. Learning to solve QBF. In AAAI, 2007.
[36] Sutton, R.S. and Barto, A.G. Reinforcement Learning: An Introduction. MIT Press, 1998.
[37] Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. In Advances in Neural
Information Processing Systems, pp. 2692-2700, 2015.
[38] Zhang, Wei and Dietterich, Thomas G. Solving combinatorial optimization tasks by reinforcement learning: A general methodology applied to resource-constrained scheduling. Journal of
Artificial Intelligence Research, 1:1-38, 2000.
6,871 | 7,215 | Robust Conditional Probabilities
Yoav Wald
School of Computer Science and Engineering
Hebrew University
[email protected]
Amir Globerson
The Balvatnik School of Computer Science
Tel-Aviv University
[email protected]
Abstract
Conditional probabilities are a core concept in machine learning. For example,
optimal prediction of a label Y given an input X corresponds to maximizing the
conditional probability of Y given X. A common approach to inference tasks is
learning a model of conditional probabilities. However, these models are often
based on strong assumptions (e.g., log-linear models), and hence their estimate of
conditional probabilities is not robust and is highly dependent on the validity of
their assumptions.
Here we propose a framework for reasoning about conditional probabilities without
assuming anything about the underlying distributions, except knowledge of their
second order marginals, which can be estimated from data. We show how this
setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured-prediction. Finally, we
apply them to semi-supervised deep learning, obtaining results competitive with
variational autoencoders.
1
Introduction
In classification tasks the goal is to predict a label Y for an object X. Assuming that the joint
distribution of these two variables is p? (x, y) then optimal prediction1 corresponds to returning
the label y that maximizes the conditional probability p? (y|x). Thus, being able to reason about
conditional probabilities is fundamental to machine learning and probabilistic inference.
In the fully supervised setting, one can sidestep the task of estimating conditional probabilities by
directly learning a classifier in a discriminative fashion. However, in unsupervised or semi-supervised
settings, a reliable estimate of the conditional distributions becomes important. For example, consider
a self-training [17, 31] or active learning setting. In both scenarios, the learner has a set of unlabeled
samples and it needs to choose which ones to tag. Given an unlabeled sample x, if we could reliably
conclude that p? (y|x) is close to 1 for some label y, we could easily decide whether to tag x or not.
Intuitively, an active learner would prefer not to tag x while a self training algorithm would tag it.
There are of course many approaches to ?modelling? conditional distributions, from logistic regression
to conditional random fields. However, these do not come with any guarantees of approximations
to the true underlying conditional distributions of p? and thus cannot be used to reliably reason
about these. This is due to the fact that such models make assumptions about the conditionals (e.g.,
conditional independence or parametric), which are unlikely to be satisfied in practice.
As an illustrative example for our motivation and setup, consider a set of n binary variables
X1 , ..., Xn whose distribution we are interested in. Suppose we have enough data to obtain
the joint marginals, P [Xi = xi , Xj = xj ], of pairs i, j in a set E. If (1, 2) ? E and we concluded that P [X1 = 1|X2 = 1] = 1, this lets us reason about many other probabilities. For example, we know that P [X1 = 1|X2 = 1, . . . , Xn = xn ] = 1 for any setting of the x3 , . . . , xn
1
In the sense of minimizing prediction error.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
variables. This is a simple but powerful observation, as it translates knowledge about probabilities over small subsets to robust estimates of conditional probability over large subsets.
Now, what happens when P [X1 = 1|X2 = 1] = 0.99? In other words, what can we say about
P [X1 = 1|X2 = 1, . . . , Xn = xn ] given information about probabilities P [Xi = xi , Xj = xj ]. As
we show here, it is still possible to reason about such conditional probabilities even under this partial
knowledge.
Motivated by the above, we propose a novel model-free approach for reasoning about conditional
probabilities. Specifically, we shall show how conditional probabilities can be lower bounded when
the only assumption made is that certain low-order marginals of the distribution are known. One of the
surprising outcomes of our analysis is that these lower bounds can be calculated efficiently, and often
have an elegant closed form. Finally, we show how these bounds can be used in a semi-supervised
setting, obtaining results that are competitive with variational autoencoders [11].
2
Problem Setup
We begin by defining notations to be used in what follows. Let X denote a vector of random variables
X1 , . . . , Xn which are the features and Y denote labels. If we have a single label we will denote it by
Y , otherwise, a multivariate label will be denoted by Y1 , . . . , Yr . X, Y are generated by an unknown
underlying distribution p? (X, Y ). All variables are discrete (i.e., can take on a finite set of values).
Here we will assume that although we do not know p? we have access to some of its low order
marginals, such as those of a single feature and a label:
X
?i (xi , y) =
p? (?
x1 , . . . , x
?n , y).
x
?1 ,...,?
xn :?
xi =xi
Similarly we may have access to the set of pairwise marginals ?ij (xi , xj , y) for all i, j ? E, where
the set E corresponds to edges of a graph G (see also [7]). Denote the set of all such marginals by ?.
For simplicity we assume the marginals are exact. Generally they are of course only approximate, but
concentration bounds can be used to quantify this accuracy as a function of data size. Furthermore,
most of the methods described here can be extended to inexact marginals (e.g., see [6] for an approach
that can be applied here).
Since ? does not uniquely specify a distribution p? , we will be interested in the set of all distributions
that attain these marginals. Denote this set by P(?), namely:
(
)
X
P(?) = p ? ? :
p(?
x1 , . . . , x
?n , y) = ?i (xi , y) ?i
(1)
x
?1 ,...,?
xn :?
xi =xi
where ? is the probability simplex of the appropriate dimension.
More generally, one may consider some vector function f : X × Y → R^d and its expected value
according to p*, denoted by a = E_{p*}[f(X, Y)]. Then the corresponding set of distributions is:
$$P(a) \;=\; \{\, p \in \Delta : \mathbb{E}_p[f(X, Y)] = a \,\}.$$
Since marginals are expectations of random variables [30], this generalizes the notation given above.
2.1 The Robust Conditionals Problem
Our approach is to reason about conditional distributions using only the fact that p* ∈ P(μ). Our
key goal is to lower bound these conditionals, since this will allow us to conclude that certain labels
are highly likely in cases where the lower bound is large. We shall also be interested in upper and
lower bounding joint probabilities, since these will play a key role in bounding the conditionals.
Our goal is thus to solve the following optimization problems:
$$\min_{p \in P(\mu)} p(x, y), \qquad \max_{p \in P(\mu)} p(x, y), \qquad \min_{p \in P(\mu)} p(y \mid x). \tag{2}$$
In all three problems, the constraint set is linear in p. However, note that p is specified by an
exponential number of variables (one per assignment x1 , . . . , xn , y) and thus it is not feasible to plug
these constraints into an LP solver. In terms of objective, the min and max problems are linear, and
the conditional is fractional linear. In what follows we show how all three problems can be solved
efficiently for tree shaped graphs.
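For intuition, and as a sanity check for the efficient algorithms developed below, the min/max problems in Eq. (2) can be solved by brute force when n is tiny, by writing p explicitly as a vector over all assignments. The following Python sketch does this with scipy; the randomly generated ground-truth distribution, the restriction to singleton marginals, and all variable names are scaffolding for this illustration, not part of the paper.

import itertools
import numpy as np
from scipy.optimize import linprog

n = 3                                  # number of binary features (plus one binary label y)
assignments = list(itertools.product([0, 1], repeat=n + 1))  # tuples (x_1, ..., x_n, y)
idx = {a: k for k, a in enumerate(assignments)}

# Equality constraints: singleton marginals mu_i(x_i, y) for each feature i,
# computed here from a made-up ground-truth distribution for demonstration.
rng = np.random.default_rng(0)
p_true = rng.dirichlet(np.ones(len(assignments)))
A_eq, b_eq = [], []
for i in range(n):
    for xi in (0, 1):
        for y in (0, 1):
            row = [1.0 if (a[i] == xi and a[-1] == y) else 0.0 for a in assignments]
            A_eq.append(row)
            b_eq.append(sum(p_true[idx[a]] for a in assignments if a[i] == xi and a[-1] == y))
A_eq.append([1.0] * len(assignments))  # probabilities sum to one
b_eq.append(1.0)

# Objective: min / max the probability of one fixed assignment (x, y).
target = (1, 1, 1, 1)
c = np.zeros(len(assignments))
c[idx[target]] = 1.0
lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))   # min p(x, y)
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))  # max p(x, y) via negation
print("min p(x,y) =", lo.fun, " max p(x,y) =", -hi.fun)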
3 Related Work
The problem of reasoning about a distribution based on its expected values has a long history, with
many beautiful mathematical results. An early example is the classical Chebyshev inequality, which
bounds the tail of a distribution given its first and second moments. This was significantly extended
in the Chebyshev-Markov-Stieltjes inequality [2]. More recently, various generalized Chebyshev
inequalities have been developed [4, 22, 27] and some further results tying moments with bounds on
probabilities have been shown (e.g. [18]). A typical statement of these is that several moments are
given, and one seeks the minimum measure of some set S under any distribution that agrees with
the moments. As [4] notes, most of these problems are NP hard, with isolated cases of tractability.
Such inequalities have been used to obtain minimax optimal linear classifiers in [14]. The moment
problems we consider here are very different from those considered previously, in terms of the finite
support we require, our focus on bounding probabilities and conditional probabilities of assignments.
The above approaches consider worst case bounds on probabilities of events for distributions in P(a).
A different approach is to pick a particular distribution in P(a) as an approximation (or model) of p? .
The most common choice here is the maximum entropy distribution in P(a). Such log-linear models
have found widespread use in statistics and machine learning. In particular, most graphical models
can be viewed as distributions of this type (e.g., see [12, 13]). However, probabilities given by these
models cannot be related to the true probabilities in any sense (e.g., upper or lower bound). This is
where our approach markedly differs from entropy based assumptions. Another approach to reduce
modeling assumptions is robust optimization, where data and certain model parameters are assumed
not to be known precisely, and optimality is sought in a worst case adversarial setting. This approach
has been applied to machine learning in various settings (e.g., see [32, 16]), establishing close links to
regularization. None of these approaches considers bounding probabilities as is our focus here.
Finally, another elegant moment approach is that based on kernel mean embedding [23, 24]. In this
approach, one maps a distribution into a set of expected values of a set of functions (possibly infinite).
The key observation is that this mean embedding lies in an RKHS, and hence many operations can be
done implicitly. Most of the applications of this idea assume that the set of functions is rich enough to
fully specify the distribution (i.e., characteristic kernels [25]). The focus is thus different from ours,
where moments are not assumed to be fully informative, and the set P(a) contains many possible
distributions. It would however be interesting to study possible uses of RKHS in our setting.
4 Calculating Robust Conditional Probabilities
The optimization problems in Eq. (2) are linear programs (LP) and fractional LPs, where the number
of variables scales exponentially with n. Yet, as we show in this section and Section 5, it turns
out that in many non-trivial cases, they can be efficiently solved. Our focus below is on the case
where the set of edges E corresponding to the pairwise marginals forms a tree structured graph.
The tree structure assumption is common in the literature on Graphical Models, only here we do not
make an inductive assumption on the generating distribution (i.e., we make none of the conditional
independence assumptions that are implied by tree-structured graphical models). In the following
sections we study solutions of robust conditional probabilities under the tree assumption. We will
also discuss some extensions to the cyclic case. Finally, note that although the derivations here are
for pairwise marginals, these can be extended to the non-pairwise case by considering clique-trees
[e.g., see 30]. Pairs are used here to allow a clearer presentation.
In what follows, we show that the conditional lower bound has a simple structure as stated in Theorem
4.1. This result does not immediately suggest an efficient algorithm since its denominator includes an
exponentially sized LP. Next, in Section 4.2 we show how this LP can be reduced to polynomial sized,
resulting in an efficient algorithm for the lower bound. Finally, in Section 5 we show that in certain
cases there is no need to use a general purpose LP solver and the problem can be solved either in
closed form or via combinatorial algorithms. Detailed proofs are provided in the supplementary file.
4.1 From Conditional Probabilities to Maximum Probabilities with Exclusion
The main result of this section will reduce calculation of the robust conditional probability for
p(y | x), to one of maximizing the probability of all labels other than y. This reduction by itself will
not allow for efficient calculation of the desired conditional probabilities, as the new problem is also
a large LP that needs to be solved. Still the result will take us one step further towards a solution, as
it reveals the probability mass a minimizing distribution p will assign to x, y.
This part of the solution is related to a result from [8], where the authors derive the solution of
min_{p∈P(μ)} p(x, y). They prove that under the tree assumption this problem has a simple closed form
solution, given by the functional I(x, y; μ):
$$I(x, y;\, \mu) \;=\; \Big[\, \sum_i (1 - d_i)\, \mu_i(x_i, y) \;+\; \sum_{ij \in E} \mu_{ij}(x_i, x_j, y) \,\Big]_+ . \tag{3}$$
Here [·]_+ denotes the ReLU function [z]_+ = max{z, 0} and d_i is the degree of node i in G.
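In code, Eq. (3) is a direct transcription; the dictionary layout chosen for the marginals below is an assumption of this sketch, which we reuse in Section 5:

# A direct transcription of the functional I(x, y; mu) from Eq. (3).
def I_func(x, y, mu_single, mu_pair, edges, degree):
    """mu_single[i][(x_i, y)] and mu_pair[(i, j)][(x_i, x_j, y)] hold the marginals;
    degree[i] is the degree of node i in the tree G; x is an assignment tuple."""
    total = sum((1 - degree[i]) * mu_single[i][(x[i], y)] for i in mu_single)
    total += sum(mu_pair[(i, j)][(x[i], x[j], y)] for (i, j) in edges)
    return max(total, 0.0)  # the [.]_+ (ReLU) truncation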
It turns out that robust conditional probabilities will assign the event x, y its minimal possible
probability as given in Eq. (3). Moreover, it will assign all other labels their maximum possible
probability. This is indeed a behaviour that may be expected from a robust bound; we formalize it in
the main result for this part:
Theorem 4.1 Let μ be a vector of tree-structured pairwise marginals, then
$$\min_{p \in P(\mu)} p(y \mid x) \;=\; \frac{I(x, y;\, \mu)}{I(x, y;\, \mu) \;+\; \max_{p \in P(\mu)} \sum_{\tilde{y} \neq y} p(x, \tilde{y})} . \tag{4}$$
The proof of this theorem is rather technical and we leave it for the supplementary material.
We note that the above result also applies to the "structured-prediction" setting where y is multivariate
and we also assume knowledge of marginals μ(y_i, y_j). In this case, the expression for I(x, y; μ)
will also include edges between the y_i variables, and incorporate their degrees in the graph.
The important implication of Theorem 4.1 is that it reduces the minimum conditional problem to that
of probability maximization with an assignment exclusion. Namely:
$$\max_{p \in P(\mu)} \sum_{\tilde{y} \neq y} p(x, \tilde{y}). \tag{5}$$
Although this is still a problem with an exponential number of variables, we show in the next section
that it can be solved efficiently.
4.2 Minimizing and Maximizing Probabilities
To provide an efficient solution for Eq. (5), we turn to a class of joint probability bounding problems.
Assume we constrain each variable X_i and Y_j to a subset X̃_i, Ỹ_j of its domain and would like to
reason about the probability of this constrained set of joint assignments:
$$U \;=\; \big\{\, x, y \;\big|\; x_i \in \tilde{X}_i,\; y_j \in \tilde{Y}_j \;\; \forall i \in [n],\, j \in [r] \,\big\}. \tag{6}$$
Under this setting, an efficient algorithm for solving
$$\max_{p \in P(\mu)} \sum_{u \in U \setminus (x, y)} p(u)$$
will also solve Eq. (5). By the results of the last section, we will then also have an algorithm that
calculates robust conditional probabilities. To see this is indeed the case, assume we are given an
assignment (x, y). Then setting X̃_i = {x_i} for all features and Ỹ_j = {1, . . . , |Y_j|} for labels (i.e.,
U does not restrict labels) gives exactly Eq. (5).
To derive the algorithm, we will find a compact representation of the LP, with a polynomial number of
variables and constraints. The result is obtained by using tools from the literature on Graphical Models.
It shows how to formulate probability maximisation problems over U as problems constrained by the
local marginal polytope [30]. Its definition in our setting slightly deviates from its standard definition,
as it does not require that probabilities sum up to 1:
Definition 1 The set of locally consistent pseudo marginals over U is defined as:
$$M_L(U) \;=\; \Big\{\, \tau \;\Big|\; \sum_{x_i \in \tilde{X}_i} \tau_{ij}(x_i, x_j) = \tau_j(x_j) \;\; \forall (i, j) \in E,\; x_j \in \tilde{X}_j \,\Big\}.$$
The partition function of τ, Z(τ), is given by $\sum_{x_i \in \tilde{X}_i} \tau_i(x_i)$.
The following theorem states that solving Eq. (5) is equivalent to solving an LP over M_L(U) with
additional constraints.
Theorem 4.2 Let U be a universe of assignments as defined in Eq. (6), x ∈ U and μ a vector of
tree-structured pairwise marginals, then the values of the following problems:
$$\max_{p \in P(\mu)} \sum_{u \in U} p(u), \qquad \max_{p \in P(\mu)} \sum_{u \in U \setminus (x, y)} p(u),$$
are equal (respectively) to:
$$\max_{\tau \in M_L(U),\; \tau \le \mu} Z(\tau), \qquad \max_{\substack{\tau \in M_L(U),\; \tau \le \mu \\ I(x, y;\, \tau) \le 0}} Z(\tau). \tag{7}$$
These LPs involve a polynomial number of constraints and variables, thus can be solved efficiently.
Proofs of this result can be obtained by exploiting properties of functions that decompose over trees.
In the supplementary material, we provide a proof similar to that given in [30] to show equality of the
marginal and local-marginal polytopes in tree models.
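To make the reduction concrete, the sketch below assembles the LP of Theorem 4.2 for a chain graph using scipy. Two points are our own reading rather than verbatim from the theorem: local consistency is imposed in both directions along each edge, and the cap τ ≤ μ is placed elementwise on the pairwise variables (with made-up values of μ).

import numpy as np
from scipy.optimize import linprog

n = 4
domains = [(0, 1)] * n                 # the restricted sets \tilde X_i (here: full domains)
edges = [(i, i + 1) for i in range(n - 1)]

# Index bookkeeping: singleton variables tau_i(v), then pairwise tau_ij(v, w).
sidx = {(i, v): k for k, (i, v) in enumerate((i, v) for i in range(n) for v in domains[i])}
off = len(sidx)
pidx = {(e, v, w): off + k for k, (e, v, w) in
        enumerate((e, v, w) for e in edges for v in (0, 1) for w in (0, 1))}
nvar = off + len(pidx)

A_eq, b_eq = [], []
for e in edges:
    i, j = e
    for w in domains[j]:               # sum_v tau_ij(v, w) = tau_j(w)
        row = np.zeros(nvar)
        for v in domains[i]:
            row[pidx[(e, v, w)]] = 1.0
        row[sidx[(j, w)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for v in domains[i]:               # sum_w tau_ij(v, w) = tau_i(v)
        row = np.zeros(nvar)
        for w in domains[j]:
            row[pidx[(e, v, w)]] = 1.0
        row[sidx[(i, v)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)

# Elementwise cap tau_ij <= mu_ij via variable upper bounds (mu made up here).
rng = np.random.default_rng(1)
mu = {key: rng.uniform(0.1, 0.4) for key in pidx}
bounds = [(0, 1)] * off + [(0, mu[key]) for key in pidx]

c = np.zeros(nvar)                     # maximize Z(tau) = sum_v tau_1(v)
for v in domains[0]:
    c[sidx[(0, v)]] = -1.0             # linprog minimizes, so negate
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print("upper bound on max_p sum_{u in U} p(u):", -res.fun)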
To conclude this section, we restate the main result: the robust conditional probability problem Eq. (2)
can be solved in polynomial time by combining Theorems 4.1 and 4.2. As a by-product of this
derivation we also presented efficient tools for bounding answers to a large class of probabilistic
queries. While this is not the focus of the current paper, these tools may be useful in probabilistic
modelling, where we often combine estimates of low order marginals with assumptions on the data
generating process. Bounds like the ones presented in this section give a quantitative estimate of the
uncertainty that is induced by data and circumvented by our assumptions.
5 Closed Form Solutions and Combinatorial Algorithms
The results of the previous section imply that the minimum conditional can be found by solving a
poly-sized LP. Although this results in polynomial runtime, it is interesting to improve as much as
possible on the complexity of this calculation. One reason is that application of the bounds might
require solving them repeatedly within some larger learning problem. For instance, in classification
tasks it may be necessary to solve Eq. (4) for each sample in the dataset. An even more demanding
procedure will come up in our experimental evaluation, where we learn features that result in high
confidence under our bounds. There, we need to solve Eq. (4) over mini-batches of training data
only to calculate a gradient at each training iteration. Since using an LP solver in these scenarios is
impractical, we next derive more efficient solutions for some special cases of Eq. (4).
5.1 Closed Form for Multiclass Problems
The multiclass setting is a special case of Eq. (4) when y is a single label variable (e.g., a digit label
in MNIST with values y ∈ {0, . . . , 9}). The solution of course depends on the type of marginals
provided in P(μ). Here we will assume that we have access to joint marginals of the label y and pairs
of features x_i, x_j corresponding to edges ij ∈ E of a graph G. We note that we can obtain similar
results for the cases where some additional "unlabeled" statistics μ_ij(x_i, x_j) are known.
It turns out that in both cases Eq. (5) has a simple solution. Here we write it for the case without
unlabeled statistics. The following lemma is based on a result that states max_{p∈P(μ)} p(x) =
min_{ij} μ_ij(x_i, x_j), which we prove in the supplementary material.
Lemma 5.1 Let x ∈ X and μ a vector of tree-structured pairwise marginals, then
$$\min_{p \in P(\mu)} p(y \mid x) \;=\; \frac{I(x, y;\, \mu)}{I(x, y;\, \mu) \;+\; \sum_{\tilde{y} \neq y} \min_{ij} \mu_{ij}(x_i, x_j, \tilde{y})} . \tag{8}$$
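Combined with the I_func sketch above, Lemma 5.1 yields a few-line implementation of the robust conditional for multiclass problems (again with our assumed marginal layout):

# The closed form of Eq. (8), reusing I_func from the sketch in Section 4.
def robust_conditional(x, y, labels, mu_single, mu_pair, edges, degree):
    num = I_func(x, y, mu_single, mu_pair, edges, degree)
    alt = sum(min(mu_pair[(i, j)][(x[i], x[j], yy)] for (i, j) in edges)
              for yy in labels if yy != y)
    return num / (num + alt) if num + alt > 0 else 0.0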
5.2 Combinatorial Algorithms and Connection to Maximum Flow Problems
In some cases, fast algorithms for the optimization problem in Eq. (5) can be derived by exploiting a
tight connection of our problems to the Max-Flow problem. The problems are also closely related
to the weighted Set-Cover problem. To observe the connection to the latter, consider an instance of
Set-Cover defined as follows. The universe is all assignments x. Sets are defined for each i, j, x_i, x_j
and are denoted by S_{ij,x_i,x_j}. The set S_{ij,x_i,x_j} contains all assignments x̃ whose values at i, j are
x_i, x_j. Moreover, the set S_{ij,x_i,x_j} has weight w(S_{ij,x_i,x_j}) = μ_ij(x_i, x_j). Note that the number of
items in each set is exponential, but the number of sets is polynomial. Now consider using these sets
to cover some set of assignments U with the minimum possible weight. It turns out that under the
tree structure assumption, this problem is closely related to the problem of maximizing probabilities.
Lemma 5.2 Let U be a set of assignments and μ a vector of tree-structured marginals. Then:
$$\max_{p \in P(\mu)} \sum_{u \in U} p(u) \tag{9}$$
has the same value as the standard LP relaxation [28] of the Set-Cover problem above.
The connection to Set-Cover may not give a path to efficient algorithms, but it does illuminate some
of the results presented earlier. It is simple to verify that min_{ij} μ_ij(x_i, x_j, ỹ) is the weight of a cover
of (x, ỹ), while Eq. (3) equals one minus the weight of a set that covers all assignments but (x, y). A
connection that we may exploit to obtain more efficient algorithms is to Max-Flow. When the graph
defined by E is a chain, we show in the supplementary material that the value of Eq. (9) can be found
by solving a flow problem on a simple network. We note that using the same construction, Eq. (5)
turns out to be Max-Flow under a budget constraint [1]. This may prove very beneficial for our goals,
as it allows for efficient calculation of the robust conditionals we are interested in. Our conjecture
is that this connection goes beyond chain graphs, but we leave this for exploration in future work. The
proofs for results in this section may also be found in the supplementary material.
6 Experiments
To evaluate the utility of our bounds, we consider their use in settings of semi-supervised deep
learning and structured prediction. For the bounds to be useful, the marginal distributions need to
be sufficiently informative. In some datasets, the raw features already provide such information, as
we show in Section 6.3. In other cases, such as images, a single raw feature (i.e., a pixel) does not
provide sufficient information about the label. These cases are addressed in Section 6.1 where we
show how to learn new features which do result in meaningful bounds. Using deep networks to learn
these features turns out to be an effective method for semi-supervised settings, reaching results close
to those demonstrated by Variational Autoencoders [11]. It would be interesting to use such feature
learning methods for structured prediction too; however this requires incorporation of the max-flow
algorithm into the optimization loop, and we defer this to future work.
6.1 Deep Semi-Supervised Learning
A well known approach to semi-supervised learning is to optimize an empirical loss, while adding
another term that measures prediction confidence on unlabeled data [9, 10]. Let us describe one such
method and how to adapt it to use our bounds.
Entropy Regularizer: Consider training a deep neural network where the last layer has n neurons
z_1, . . . , z_n connected to a softmax layer of size |Y| (i.e., the number of labels), and the loss we use is
a cross entropy loss. Denote the weights of the softmax layer by W ∈ R^{n×|Y|}. Given an input x,
define the softmax distribution at the output of the network as:
$$\hat{p}_y = \operatorname{softmax}_y(\langle W_y, z \rangle), \tag{10}$$
where W_y is the y-th row of W. The min-entropy regularizer [9] adds an entropy term λH(p̂) to
the loss, for each unlabeled x in the training set.
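In numpy, the resulting objective can be sketched as follows (lam is the regularization weight, a hyperparameter we introduce for the illustration):

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_regularized_loss(W, z_lab, y_lab, z_unlab, lam=0.1):
    p_lab = softmax(z_lab @ W)                     # (batch, |Y|) on labeled data
    ce = -np.log(p_lab[np.arange(len(y_lab)), y_lab]).mean()  # y_lab: integer labels
    p_un = softmax(z_unlab @ W)                    # predictions on unlabeled data
    ent = -(p_un * np.log(p_un + 1e-12)).sum(axis=1).mean()
    return ce + lam * ent                          # minimizing this also minimizes entropy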
Plugging in Robust Conditional Probabilities: We suggest a simple adaptation of this method that
uses our bounds. Let us remove the softmax layer and set the activations of the neurons z1 , . . . , zn to
a sigmoid activation. Let Z1 , . . . , Zn denote random variables that take on the values of the output
neurons; these variables will be used as features in our bounds (in previous sections we refer to
features as Xi; here we switch to Zi since Xi are understood as the raw features of the problem, e.g.,
the pixel values in the image). Since our bounds apply to discrete variables, while z1, . . . , zn are real
values, we use a smoothed version of our bounds.
Loss Function and Smoothed Bounds: A smoothed version of the marginals μ is calculated by
considering Z_i as an indicator variable (e.g., the probability p(Z_i = 1) would just be the average of
the Z_i values). Then the smoothed marginal μ̃(z_i = 1, y) is the average of the z_i values over all training
data labeled with y. In our experiments we used all the labeled data to estimate μ̃ at each iteration.
The smoothed version of I(z, y; μ), which we shall call Ĩ(z, y; μ̃), is then calculated with Eq. (3)
when switching μ with μ̃ and the ReLU operator with a softplus.
To define a loss function we take a distribution over all labels:
$$\hat{p}_y \;=\; \operatorname{softmax}_y\!\left( \frac{\tilde{I}(z, y;\, \tilde{\mu})}{\tilde{I}(z, y;\, \tilde{\mu}) \;+\; \sum_{\tilde{y} \neq y} \min_{ij} \tilde{\mu}_{ij}(z_i, z_j, \tilde{y})} \right). \tag{11}$$
This is very similar to the standard distribution taken in a neural net, but it uses our bounds to make a
more robust estimate of the conditionals. Then we use the exact same loss as the entropy regularizer,
a cross entropy loss for labeled data with an added entropy term for unlabeled instances.
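Structurally, Eq. (11) can be sketched as below; we assume placeholder callables I_tilde(z, y) for the smoothed bound of Eq. (3) (ReLU replaced by softplus) and pair_min(z, y) for min_ij μ̃_ij(z_i, z_j, y), since their exact parameterization depends on the learned features:

import numpy as np

def robust_probs(z, labels, I_tilde, pair_min):
    scores = np.array([
        I_tilde(z, y) / (I_tilde(z, y) + sum(pair_min(z, yy) for yy in labels if yy != y))
        for y in labels
    ])
    e = np.exp(scores - scores.max())          # softmax over the bounded scores
    return e / e.sum()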
6.1.1 Algorithm Settings and Baselines
We implemented the min-entropy regularizer and our proposed method using a multilayer perceptron
(MLP) with fully connected layers and a ReLU activation at each layer (except a sigmoid at the last
layer for our method). In our experiments we used hidden layers of sizes 1000, 500, 50 (so we learn
50 features Z1, . . . , Z50). We also add ℓ2 regularization on the weights of the soft-max layer for the
entropy regularizer, since otherwise entropy can always be driven to zero in the separable case. We
also experimented with adding a hinge loss as a regularizer (as in Transductive SVM [10]), but omit
it from the comparison because it did not yield significant improvement over the entropy regularizer.
We also compare our results with those obtained by Variational Autoencoders and Ladder Networks.
Although we do not expect to reach accuracies as high as these methods, obtaining comparable numbers
with a simple regularizer like the one we suggest (compared to the elaborate techniques used in these
works) shows that the use of our bounds results in a very powerful method.
6.2 MNIST Dataset
We trained the above models on the MNIST dataset, using 100 and 1000 labeled samples (see [11]
for a similar setup). We set the two regularization parameters required for the entropy regularizer and
the one required for our minimum probability regularizer with five-fold cross validation. We used
10% of the training data as a validation set and compared error rates on the 10^4 samples of the test set.
Results are shown in Figure 1. They show that in the 1000-sample case we are slightly outperformed
by VAE, and for 100 samples we lose by 1%. Ladder networks outperform the other baselines.
N    | Ladder [21]  | VAE [11]     | Robust Probs | Entropy       | MLP+Noise
100  | 1.06 (±0.37) | 3.33 (±0.14) | 4.44 (±0.22) | 18.93 (±0.54) | 21.74 (±1.77)
1000 | 0.84 (±0.08) | 2.40 (±0.02) | 2.48 (±0.03) | 3.15 (±0.03)  | 5.70 (±0.20)

Figure 1: Error rates of several semi-supervised learning methods on the MNIST dataset with few
training samples.
Accuracy vs. Coverage Curves: In self-training and co-training methods, a classifier adds its
most confident predictions to the training set and then repeats training. A crucial factor in the success
of such methods is the error in the predictions we add to the training pool. Classifiers that use
confidence over unlabelled data as a regularizer are natural choices for base classifiers in such a
setting. Therefore an interesting comparison to make is the accuracy we would get over the unlabeled
data, had the classifier needed to choose its k most confident predictions.
We plot this curve as a function of k for the entropy regularizer and our min-probabilities regularizer.
Samples in the unlabelled training data are sorted in descending order according to confidence.
Confidence for a sample in the entropy-regularized MLP is calculated based on the value of the logit that
the predicted label received in the output layer. For the robust probabilities classifier, the confidence
of a sample is the minimum conditional probability the predicted label received. As can be observed
in Figure 2, our classifier ranks its predictions better than the entropy based method. We attribute
this to our classifier being trained to give robust bounds under minimal assumptions.
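Computing such a curve is a one-liner once confidences are available; the helper below (our own) returns accuracy over the top-k predictions for every k:

import numpy as np

def accuracy_at_k(confidence, correct):
    """confidence: (m,) scores; correct: (m,) booleans. Returns accuracy over the
    k most confident samples, for every k = 1..m."""
    order = np.argsort(-confidence)                # most confident first
    hits = np.asarray(correct, dtype=float)[order]
    return np.cumsum(hits) / np.arange(1, len(hits) + 1)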
Figure 2: Accuracy for the k most confident samples in unlabelled data. The blue curve shows results
for the Robust Probabilities Classifier, green for the Entropy Regularizer. Confidence is measured by
conditional probabilities and logits accordingly. [Plot omitted: accuracy (0.965 to 0.995) against k
(0 to 60000).]
6.3 Multilabel Structured Prediction
As mentioned earlier, in the structured prediction setting it is more difficult to learn features that
yield high certainty. We therefore provide a demonstration of our method on a dataset where the raw
features are relatively informative. The Genbase dataset, taken from [26], is a protein classification
multilabel dataset. It has 662 instances, divided into a training set of 463 samples and a test set of
199, each sample has 1185 binary features and 27 binary labels. We ran a structured-SVM algorithm,
taken from [19] to obtain a classifier that outputs a labelling ŷ for each x in the dataset (the error
of the resulting classifier was 2%). We then used our probabilistic bounds to rank the classifier's
predictions by their robust conditional probabilities. The bounds were calculated based on the set of
marginals μ_ij(x_i, y_j), estimated from the data for each pair of a feature and a label X_i, Y_j. The graph
corresponding to these marginals is not a tree and we handled it as discussed in Section 7. The value
of our bounds was above 0.99 for 85% of the samples, indicating high certainty that the classifier is
correct. Indeed only 0.59% of these 85% were actually errors. The remaining errors made by the
classifier were assigned a robust probability of 0 by our bounds, indicating low level of certainty.
7 Discussion
We presented a method for bounding conditional probabilities of a distribution based only on
knowledge of its low order marginals. Our results can be viewed as a new type of moment problem,
bounding a key component of machine learning systems, namely the conditional distribution. As we
show, calculating these bounds raises many challenging optimization questions, which surprisingly
result in closed form expressions in some cases.
While the results were limited to the tree structured case, some of the methods have natural extensions
to the cyclic case that still result in robust estimations. For instance, the local marginal polytope in
Eq. (7) can be taken over a cyclic structure and still give a lower bound on maximum probabilities.
Also in the presence of the cycles, it is possible to find the spanning tree that induces the best bound
on Eq. (3) using a maximum spanning tree algorithm. Plugging these solutions into Eq. (4) results in
a tighter approximation which we used in our experiments.
Our method can be extended in many interesting directions. Here we addressed the case of discrete
random variables, although we also showed in our experiments how these can be dealt with in the
context of continuous features. It will be interesting to calculate bounds on conditional probabilities
given expected values of continuous random variables. In this case, sums-of-squares characterizations
play a key role [15, 20, 3], and their extension to the conditional case is an exciting challenge. It will
also be interesting to study how these bounds can be used in the context of unsupervised learning. One
natural approach here would be to learn constraint functions such that the lower bound is maximized.
Finally, we plan to study the implications of our approach to diverse learning settings, from self-training to active learning and safe reinforcement learning.
Acknowledgments: This work was supported by the ISF Centers of Excellence grant 2180/15, and
by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
References
[1] R. K. Ahuja and J. B. Orlin. A capacity scaling algorithm for the constrained maximum flow problem. Networks, 25(2):89–98, 1995.
[2] N. I. Akhiezer. The classical moment problem: and some related questions in analysis, volume 5. Oliver & Boyd, 1965.
[3] A. Benavoli, A. Facchini, D. Piga, and M. Zaffalon. SOS for bounded rationality. In Proceedings of the Tenth International Symposium on Imprecise Probability: Theories and Applications, 2017.
[4] D. Bertsimas and I. Popescu. Optimal inequalities in probability theory: A convex optimization approach. SIAM Journal on Optimization, 15(3):780–804, 2005.
[5] R. G. Cowell, P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic networks and expert systems: Exact computational methods for Bayesian networks. Springer Science & Business Media, 2006.
[6] M. Dudík, S. J. Phillips, and R. E. Schapire. Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. Journal of Machine Learning Research, 8(Jun):1217–1260, 2007.
[7] E. Eban, E. Mezuman, and A. Globerson. Discrete Chebyshev classifiers. In Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR Workshop and Conference Proceedings Volume 32, pages 1233–1241, 2014.
[8] M. Fromer and A. Globerson. An LP view of the M-best MAP problem. In NIPS, volume 22, pages 567–575, 2009.
[9] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pages 529–536, 2005.
[10] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, June 27–30, 1999, pages 200–209, 1999.
[11] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8–13 2014, Montreal, Quebec, Canada, pages 3581–3589, 2014.
[12] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[14] G. R. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3(Dec):555–582, 2002.
[15] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
[16] R. Livni, K. Crammer, and A. Globerson. A simple geometric interpretation of SVM using stochastic adversaries. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 722–730. JMLR: W&CP, 2012.
[17] D. McClosky, E. Charniak, and M. Johnson. Effective self-training for parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Association for Computational Linguistics, 2006.
[18] E. Miranda, G. De Cooman, and E. Quaeghebeur. The Hausdorff moment problem under finite additivity. Journal of Theoretical Probability, 20(3):663–693, 2007.
[19] A. C. Müller and S. Behnke. PyStruct - learning structured prediction in Python. Journal of Machine Learning Research, 15:2055–2060, 2014.
[20] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming, 96(2):293–320, 2003.
[21] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7–12, 2015, Montreal, Quebec, Canada, pages 3546–3554, 2015.
[22] J. E. Smith. Generalized Chebychev inequalities: theory and applications in decision analysis. Operations Research, 43(5):807–825, 1995.
[23] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.
[24] L. Song, K. Fukumizu, and A. Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Processing Magazine, 30(4):98–111, 2013.
[25] B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389–2410, July 2011.
[26] G. Tsoumakas, E. Spyromitros-Xioufis, J. Vilcek, and I. Vlahavas. Mulan: A Java library for multi-label learning. Journal of Machine Learning Research, 12:2411–2414, 2011.
[27] L. Vandenberghe, S. Boyd, and K. Comanor. Generalized Chebyshev bounds via semidefinite programming. SIAM Review, 49(1):52–64, 2007.
[28] V. V. Vazirani. Approximation algorithms. Springer Science & Business Media, 2013.
[29] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the performance of the max-product algorithm and its generalizations. Statistics and Computing, 14(2):143–166, 2004.
[30] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[31] D. Weiss, C. Alberti, M. Collins, and S. Petrov. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323–333, Beijing, China, July 2015. Association for Computational Linguistics.
[32] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485–1510, December 2009.
Learning with Bandit Feedback in Potential Games
Johanne Cohen
LRI-CNRS, Université Paris-Sud, Université Paris-Saclay, France
[email protected]
Amélie Héliou
LIX, Ecole Polytechnique, CNRS, AMIBio, Inria, Universit? Paris-Saclay
[email protected]
Panayotis Mertikopoulos
Univ. Grenoble Alpes, CNRS, Inria, LIG, F-38000, Grenoble, France
[email protected]
Abstract
This paper examines the equilibrium convergence properties of no-regret learning
with exponential weights in potential games. To establish convergence with minimal information requirements on the players? side, we focus on two frameworks:
the semi-bandit case (where players have access to a noisy estimate of their payoff
vectors, including strategies they did not play), and the bandit case (where players
are only able to observe their in-game, realized payoffs). In the semi-bandit case,
we show that the induced sequence of play converges almost surely to a Nash
equilibrium at a quasi-exponential rate. In the bandit case, the same result holds for
?-approximations of Nash equilibria if we introduce an exploration factor ? > 0
that guarantees that action choice probabilities never fall below ?. In particular, if
the algorithm is run with a suitably decreasing exploration factor, the sequence of
play converges to a bona fide Nash equilibrium with probability 1.
1 Introduction
Given the manifest complexity of computing Nash equilibria, a central question that arises is whether
such outcomes could result from a dynamic process in which players act on empirical information
on their strategies' performance over time. This question becomes particularly important when the
players' view of the game is obstructed by situational uncertainty and the "fog of war": for instance,
when deciding which route to take to work each morning, a commuter is typically unaware of how
many other commuters there are at any given moment, what their possible strategies are, how to
best respond to their choices, etc. In fact, in situations of this kind, players may not even know that
they are involved in a game; as such, it does not seem reasonable to assume full rationality, common
knowledge of rationality, flawless execution, etc. to justify the Nash equilibrium prediction.
A compelling alternative to this "rationalistic" viewpoint is provided by the framework of online
learning, where players are treated as oblivious entities facing a repeated decision process with a
priori unknown rules and outcomes. In this context, when the players have no Bayesian prior on their
environment, the most widely used performance criterion is that of regret minimization, a worst-case
guarantee that was first introduced by Hannan [1], and which has given rise to a vigorous literature at
the interface of optimization, statistics and theoretical computer science; for a survey, see [2, 3]. By
this token, our starting point in this paper is the following question:
If all players of a repeated game follow a no-regret algorithm,
does the induced sequence of play converge to Nash equilibrium?
For concreteness, we focus on the exponential weights (EW) scheme [4–7], one of the most popular
and widely studied algorithms for no-regret learning. In a nutshell, the main idea of the method
is that the optimizing agent tallies the cumulative payoffs of each action and then employs a pure
strategy with probability proportional to the exponential of these cumulative ?scores?. Under this
scheme, players are guaranteed a universal, min-max O(T^{1/2}) regret bound (with T denoting the
horizon of play), and their empirical frequency of play is known to converge to the game's set of
coarse correlated equilibria (CCE) [8].
In this way, no-regret learning would seem to provide a positive partial answer to our original
question: coarse correlated equilibria are indeed learnable if all players follow an exponential weights
learning scheme. On the flip side however, the set of coarse correlated equilibria may contain highly
non-rationalizable strategies, so the end prediction of empirical convergence to such equilibria is
fairly lax. For instance, in a recent paper, Viossat and Zapechelnyuk constructed a 4 × 4 variant
of Rock-Paper-Scissors with a coarse correlated equilibrium that assigns positive weight only on
strictly dominated strategies [9]. Even more recently, [10] showed that the mean dynamics of the
exponential weights method (and, more generally, any method "following the regularized leader")
may cycle in perpetuity in zero-sum games, precluding any possibility of convergence to equilibrium
in this case. Thus, in view of these negative results, a more calibrated answer to the above question is
"not always": especially when the issue at hand is convergence to a Nash equilibrium (as opposed to
coarser notions), "no regret" is a rather loose guarantee.
Paper outline and summary of results.
To address the above limitations, we focus on two issues:
a) Convergence to Nash equilibrium (as opposed to correlated equilibria, coarse or otherwise).
b) The convergence of the actual sequence of play (as opposed to empirical frequencies).
The reason for focusing on the actual sequence of play is that time-averages provide a fairly weak
convergence mode: a priori, a player could oscillate between non-equilibrium strategies with suboptimal payoffs, but time-averages might still converge to equilibrium. On the other hand, convergence
of the actual sequence of play both implies empirical convergence and also guarantees that players
will be playing a Nash equilibrium in the long run, so it is a much stronger notion.
To establish convergence, we focus throughout on the class of potential games [11] that has found
widespread applications in theoretical computer science [12], transportation networks [13], wireless
communications [14], biology [15], and many other fields. We then focus on two different feedback
models: in the semi-bandit framework (Section 3), players are assumed to have some (possibly
imperfect) estimate of their payoff vectors at each stage, including strategies that they did not play; in
the full bandit framework (Section 4), this assumption is relaxed and players are only assumed to
observe their realized, in-game payoff at each stage.
Starting with the semi-bandit case, our main result is that under fairly mild conditions for the errors
affecting the players' observations (zero-mean martingale noise with tame second-moment tails),
learning with exponential weights converges to a Nash equilibrium of the game with probability 1
(or to an ε-equilibrium if the algorithm is implemented with a uniform exploration factor ε > 0).¹
We also show that this convergence occurs at a quasi-exponential rate, i.e. much faster than the
algorithm's O(√T) regret minimization rate would suggest.
These conclusions also apply to the bandit framework when the algorithm is run with a positive
exploration factor ε > 0. Thus, by choosing a sufficiently small exploration factor, the end state of
the EW algorithm in potential games with bandit feedback is arbitrarily close to a Nash equilibrium.
On the other hand, extending the stochastic approximation and martingale limit arguments that
underlie the bandit analysis to the ε = 0 case is not straightforward. However, by letting the
exploration factor go to zero at a suitable rate (similar to the temperature parameter in simulated
annealing schemes), we are able to recover convergence to the game?s exact Nash set (and not
an approximation thereof). We find this property particularly appealing for practical applications
because it shows that equilibrium can be achieved in a wide class of games with minimal information
requirements.
¹ Having an exploration factor ε > 0 simply means here that action selection probabilities never fall below ε.
Related work. No-regret learning has given rise to a vast corpus of literature in theoretical computer
science and machine learning, and several well-known families of algorithms have been proposed
for that purpose. The most popular of these methods is based on exponential/multiplicative weight
update rules, and several variants of this general scheme have been studied under different names in
the literature (Hedge, EXP3, etc.) [4–7].
When applied to games, the time-average of the resulting trajectory of play converges to equilibrium
in two-player zero-sum games [6, 16, 17] and the players' social welfare approaches an approximate
optimum [18]. In a similar vein, focusing on the so-called ?Hedge? variant of the multiplicative
weights (MW) algorithm, Kleinberg et al. [19] proved that the dynamics' long-term limit in load
balancing games is exponentially better than the worst correlated equilibrium. The convergence rate
to approximate efficiency and to coarse correlated equilibria was further improved by Syrgkanis et al.
[20] for a wide class of N -player normal form games using a natural class of regularized learning
algorithms. This result was then extended to a class of games known as smooth games [21] with good
properties in terms of the game's price of anarchy [22].
In the context of potential games, learning algorithms and dynamics have received significant attention
and considerable efforts have been devoted to studying the long-term properties of the players' actual
sequence of play. To that end, Kleinberg et al. [23] showed that, after a polynomially small transient
stage, players end up playing a pure equilibrium for a fraction of time that is arbitrarily close to 1
with probability also arbitrarily close to 1. Mehta et al. [24] obtained a stronger result for (generic)
2-player coordination games, showing that the multiplicative weights algorithm (a linearized variant
of the EW algorithm) converges to a pure Nash equilibrium for all but a measure 0 of initial conditions.
More recently, Palaiopanos et al. [25] showed that the MW update rule converges to equilibrium
in potential games; however, if the EW algorithm is run with a constant step-size that is not small
enough, the induced sequence of play may exhibit chaotic behavior, even in simple 2 ? 2 games. On
the other hand, if the same algorithm is run with a decreasing step-size, Krichene et al. [26] showed
that play converges to Nash equilibrium in all nonatomic potential games with a convex potential
(and hence, in all nonatomic congestion games).
In the above works, players are assumed to have full (though possibly imperfect) knowledge of their
payoff vectors, including actions that were not chosen. Going beyond this semi-bandit framework,
Coucheney et al. [27] showed that a "penalty-regulated" variant of the EW algorithm converges to ε-logit equilibria (and hence ε-approximate Nash equilibria) in congestion games with bandit feedback.
As in [26], the results of Coucheney et al. [27] employ the powerful ordinary differential equation
(ODE) method of Benaïm [28] which leverages the convergence of an underlying, continuous-time
dynamical system to obtain convergence of the algorithm at hand. We also employ this method to
compare the actual sequence of play to the replicator dynamics of evolutionary game theory [29];
however, fine-tuning the bias-variance trade-off that arises when estimating the payoff of actions that
were not employed is a crucial difficulty in this case. Overcoming this hurdle is necessary when
seeking convergence to actual Nash equilibria (as opposed to ε-approximations thereof), so a key
contribution of our paper is an extension of Benaïm's theory to account for estimators with (possibly)
unbounded variance.
2 The setup
2.1 Game-theoretic preliminaries
An N-player game in normal form consists of a (finite) set of players N = {1, . . . , N}, each with a
finite set of actions (or pure strategies) A_i. The preferences of the i-th player for one action over
another are determined by an associated payoff function u_i : A ≡ ∏_i A_i → R that maps the profile
(α_i; α_{−i}) of all players' actions to the player's reward u_i(α_i; α_{−i}).² Putting all this together, a game
will be denoted by the tuple Γ ≡ Γ(N, A, u).
Players can also mix their strategies by playing probability distributions x_i = (x_{iα_i})_{α_i∈A_i} ∈ Δ(A_i)
over their action sets A_i. The resulting probability vector x_i is called a mixed strategy and we write
X_i = Δ(A_i) for the mixed strategy space of player i. Aggregating over players, we also write
X = ∏_i X_i for the game's strategy space, i.e. the space of all mixed strategy profiles x = (x_i)_{i∈N}.
² In the above, (α_i; α_{−i}) is shorthand for (α_1, . . . , α_i, . . . , α_N), used here to highlight the action of player i
against that of all other players.
In this context (and in a slight abuse of notation), the expected payoff of the i-th player in the profile
x = (x_1, . . . , x_N) is
$$u_i(x) \;=\; \sum_{\alpha_1 \in A_1} \cdots \sum_{\alpha_N \in A_N} u_i(\alpha_1, \ldots, \alpha_N)\, x_{1\alpha_1} \cdots x_{N\alpha_N}. \tag{2.1}$$
To keep track of the payoff of each pure strategy, we also write v_{iα_i}(x) = u_i(α_i; x_{−i}) for the payoff
of strategy α_i ∈ A_i under the profile x ∈ X and
$$v_i(x) = (v_{i\alpha_i}(x))_{\alpha_i \in A_i} \tag{2.2}$$
for the resulting payoff vector of player i. We thus have
$$u_i(x) \;=\; \langle v_i(x), x_i \rangle \;=\; \sum_{\alpha_i \in A_i} x_{i\alpha_i}\, v_{i\alpha_i}(x), \tag{2.3}$$
where ⟨v, x⟩ ≡ v^⊤ x denotes the ordinary pairing between v and x.
The most widely used solution concept in game theory is that of a Nash equilibrium (NE), i.e. a state
x* ∈ X such that
$$u_i(x_i^*; x_{-i}^*) \;\ge\; u_i(x_i; x_{-i}^*) \quad \text{for every deviation } x_i \in X_i \text{ of player } i \text{ and all } i \in N. \tag{NE}$$
Equivalently, writing supp(x_i) = {α_i ∈ A_i : x_{iα_i} > 0} for the support of x_i ∈ X_i, we have the
characterization
$$v_{i\alpha_i}(x^*) \;\ge\; v_{i\beta_i}(x^*) \quad \text{for all } \alpha_i \in \operatorname{supp}(x_i^*) \text{ and all } \beta_i \in A_i,\ i \in N. \tag{2.4}$$
A Nash equilibrium x* ∈ X is further said to be pure if supp(x_i^*) = {α̂_i} for some α̂_i ∈ A_i and all
i ∈ N. In generic games (that is, games where small changes to any payoff do not introduce new
Nash equilibria or destroy existing ones), every pure Nash equilibrium is also strict in the sense that
(2.4) holds as a strict inequality for all β_i ≠ α̂_i.
In our analysis, it will be important to consider the following relaxations of the notion of a Nash
equilibrium: First, weakening the inequality (NE) leads to the notion of an ε-equilibrium, defined here
as any mixed strategy profile x* ∈ X such that
$$u_i(x_i^*; x_{-i}^*) + \varepsilon \;\ge\; u_i(x_i; x_{-i}^*) \quad \text{for every deviation } x_i \in X_i \text{ and all } i \in N. \tag{NE}_\varepsilon$$
Finally, we say that x* is a restricted equilibrium (RE) of Γ if
$$v_{i\alpha_i}(x^*) \;\ge\; v_{i\beta_i}(x^*) \quad \text{for all } \alpha_i \in \operatorname{supp}(x_i^*) \text{ and all } \beta_i \in A_i',\ i \in N, \tag{RE}$$
where A_i' is some restricted subset of A_i containing supp(x_i^*). In words, restricted equilibria are
Nash equilibria of Γ restricted to subgames where only a subset of the players' pure strategies are
available at any given moment. Clearly, Nash equilibria are restricted equilibria but the converse does
not hold: for instance, every pure strategy profile is a restricted equilibrium, but not necessarily a
Nash equilibrium.
Throughout this paper, we will focus almost exclusively on the class of potential games, which have
been studied extensively in the context of congestion, traffic networks, oligopolies, etc. Following
Monderer and Shapley [11], Γ is a potential game if it admits a potential function f : ∏_i A_i → R
such that
$$u_i(x_i; x_{-i}) - u_i(x_i'; x_{-i}) \;=\; f(x_i; x_{-i}) - f(x_i'; x_{-i}), \tag{2.5}$$
for all x_i, x_i' ∈ X_i, x_{−i} ∈ X_{−i} ≡ ∏_{j≠i} X_j, and all i ∈ N. A simple differentiation of (2.1) then
yields
$$v_i(x) \;=\; \nabla_{x_i} u_i(x) \;=\; \nabla_{x_i} f(x) \quad \text{for all } i \in N. \tag{2.6}$$
Obviously, every local maximizer of f is a Nash equilibrium so potential games always admit Nash
equilibria in pure strategies (which are also strict if the game is generic).
2.2 The exponential weights algorithm
Our basic learning framework is as follows: At each stage n = 1, 2, . . . , all players i ∈ N select an
action α_i(n) ∈ A_i based on their mixed strategies; subsequently, they receive some feedback on their
chosen actions, they update their mixed strategies, and the process repeats.
4
A popular (and very widely studied) class of algorithms for no-regret learning in this setting is the
exponential weights (EW) scheme introduced by Vovk [4] and studied further by Auer et al. [5],
Freund and Schapire [6], Arora et al. [7], and many others. Somewhat informally, the main idea
is that each player tallies the cumulative payoffs of each of their actions, and then employs a pure
strategy α_i ∈ A_i with probability roughly proportional to these cumulative payoff "scores".
Focusing on the so-called "ε-Hedge" variant of the EW algorithm [6], this process can be described
in pseudocode form as follows:
Algorithm 1 ε-Hedge with generic feedback
Require: step-size sequence γ_n > 0, exploration factor ε ∈ [0, 1], initial scores Y_i ∈ ℝ^{A_i}.
1: for n = 1, 2, . . . do
2:   for every player i ∈ N do
3:     set mixed strategy: X_i ← ε unif_i + (1 − ε) Λ_i(Y_i);
4:     choose action α_i ∼ X_i;
5:     acquire estimate v̂_i of realized payoff vector v_i(α_i; α_{−i});
6:     update scores: Y_i ← Y_i + γ_n v̂_i;
7:   end for
8: end for
Mathematically, Algorithm 1 represents the recursion

  X_i(n) = ε unif_i + (1 − ε) Λ_i(Y_i(n)),
  Y_i(n + 1) = Y_i(n) + γ_{n+1} v̂_i(n + 1),   (ε-Hedge)

where

  unif_i = (1/|A_i|) (1, . . . , 1)   (2.7)

stands for the uniform distribution over A_i and Λ_i: ℝ^{A_i} → X_i denotes the logit choice map

  Λ_i(y_i) = (exp(y_{iα_i}))_{α_i∈A_i} / ∑_{β_i∈A_i} exp(y_{iβ_i}),   (2.8)
which assigns exponentially higher probability to pure strategies with higher scores. Thus, action selection probabilities under (ε-Hedge) are a convex combination of uniform exploration (with total weight ε) and exponential weights (with total weight 1 − ε).³ As a result, for ε ≈ 1, action selection is essentially uniform; at the other extreme, when ε = 0, we obtain the original Hedge algorithm of Freund and Schapire [6] with feedback sequence v̂(n) and no explicit exploration.

³ Of course, the exploration factor ε could also be player-dependent. For simplicity, we state all our results here with the same ε for all players.
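To make the recursion concrete, here is a minimal NumPy sketch of (ε-Hedge) on a two-player common-interest game (both players share the payoff matrix, so the game is a potential game with f = u); the payoff matrix, horizon, and step-size schedule γ_n = n^(-β) are illustrative choices of ours, not values prescribed by the paper. With exact payoff vectors the semi-bandit feedback of Section 3 is noiseless, so the statistical hypotheses introduced below hold trivially.

import numpy as np

def logit_choice(y):
    # Logit choice map (2.8): softmax of the score vector.
    z = np.exp(y - y.max())          # subtract the max for numerical stability
    return z / z.sum()

def eps_hedge(payoffs, T=5000, eps=0.05, beta=0.8, rng=None):
    # Run eps-Hedge with exact (noiseless, semi-bandit) payoff vectors.
    # payoffs[i] is player i's payoff matrix, indexed by (a_0, a_1).
    rng = rng or np.random.default_rng(0)
    Y = [np.zeros(p.shape[i]) for i, p in enumerate(payoffs)]  # initial scores
    for n in range(1, T + 1):
        X = [eps / len(y) + (1 - eps) * logit_choice(y) for y in Y]
        a = [rng.choice(len(x), p=x) for x in X]
        # semi-bandit feedback: each player observes the payoff of every own action
        v = [payoffs[0][:, a[1]], payoffs[1][a[0], :]]
        for i in range(2):
            Y[i] += n ** (-beta) * v[i]                        # score update
    return X

A = np.array([[1.0, 0.0], [0.0, 2.0]])   # 2x2 coordination game with two strict equilibria
X = eps_hedge([A, A])
print([x.round(3) for x in X])           # mass concentrates near one pure equilibrium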
The no-regret properties of (ε-Hedge) have been extensively studied in the literature as a function of the algorithm's step-size sequence γ_n, exploration factor ε, and the statistical properties of the payoff estimates v̂(n); for a survey, we refer the reader to [2, 3]. In our convergence analysis, we examine the role of each of these factors in detail, focusing in particular on the distinction between "semi-bandit feedback" (when it is possible to estimate the payoff of pure strategies that were not played) and "bandit feedback" (when players only observe the payoff of their chosen action).
3 Learning with semi-bandit feedback

3.1 The model
We begin with the semi-bandit framework, i.e. the case where each player has access to a possibly imperfect estimate of their entire payoff vector at stage n. More precisely, we assume here that the feedback sequence v̂_i(n) to Algorithm 1 is of the general form

  v̂_i(n) = v_i(α_i(n); α_{−i}(n)) + ξ_i(n),   (3.1)

where (ξ_i(n))_{i∈N} is a martingale noise process representing the players' estimation error and satisfying the following statistical hypotheses:
1. Zero-mean:

  E[ξ_i(n) | F_{n−1}] = 0 for all n = 1, 2, . . . (a.s.).   (H1)

2. Tame tails:

  P(‖ξ_i(n)‖_∞ ≥ z | F_{n−1}) ≤ A/z^q for some q > 2, A > 0, and all n = 1, 2, . . . (a.s.).   (H2)
In the above, the expectation E[·] is taken with respect to some underlying filtered probability space (Ω, F, (F_n)_{n∈ℕ}, P) which serves as a stochastic basis for the process (α(n), v̂(n), Y(n), X(n))_{n≥1}.⁴ In words, Hypothesis (H1) simply means that the players' feedback sequence v̂(n) is conditionally unbiased with respect to the history of play, i.e.

  E[v̂_i(n) | F_{n−1}] = v_i(X(n − 1)) for all n = 1, 2, . . . (a.s.).   (3.2a)

Hypothesis (H2) further implies that the variance of the estimator v̂ is conditionally bounded, i.e.

  Var[v̂(n) | F_{n−1}] ≤ σ² for all n = 1, 2, . . . (a.s.).   (3.2b)

By Chebyshev's inequality, an estimator with finite variance enjoys the tail bound P(‖ξ_i(n)‖_∞ ≥ z | F_{n−1}) = O(1/z²). At the expense of working with slightly more conservative step-size policies (see below), much of our analysis goes through with this weaker requirement for the tails of ξ. However, the extra control provided by the O(1/z^q) tail bound simplifies the presentation considerably, so we do not consider this relaxation here. In any event, Hypothesis (H2) is satisfied by a broad range of error noise distributions (including all compactly supported, sub-Gaussian and sub-exponential distributions), so the loss in generality is small compared to the gain in clarity and concision.

⁴ Notation-wise, this means that the players' actions at stage n are drawn based on their mixed strategies at stage n − 1. This slight discrepancy with the pseudocode representation of Algorithm 1 is only done to simplify notation later on.
3.2 Convergence analysis

With all this at hand, our main result for the convergence of (ε-Hedge) with semi-bandit feedback of the form (3.1) is as follows:
Theorem 1. Let Γ be a generic potential game and suppose that Algorithm 1 is run with (i) semi-bandit feedback satisfying (H1) and (H2); (ii) a nonnegative exploration factor ε ≥ 0; and (iii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (1/q, 1]. Then:

1. X(n) converges (a.s.) to a δ-equilibrium of Γ with δ ≡ δ(ε) → 0 as ε → 0.

2. If lim_{n→∞} X(n) is an ε-pure state of the form x*_i = ε unif_i + (1 − ε) e_{α̂_i} for some α̂ ∈ A, then α̂ is a.s. a strict equilibrium of Γ and convergence occurs at a quasi-exponential rate:

  X_{iα̂_i}(n) ≥ 1 − ε − b e^{−c ∑_{k=1}^n γ_k} for some positive b, c > 0.   (3.3)

Corollary 2. If Algorithm 1 is run with assumptions as above and no exploration (ε = 0), X(n) converges to a Nash equilibrium with probability 1. Moreover, if the limit of X(n) is pure and β < 1, we have

  X_{iα̂_i}(n) ≥ 1 − b e^{−c n^{1−β}} for some positive b, c > 0.   (3.4)
Sketch of the proof. The proof of Theorem 1 is fairly convoluted, so we relegate the details to the paper's technical appendix and only present here a short sketch thereof.
Our main tool is the so-called ordinary differential equation (ODE) method, a powerful stochastic approximation scheme due to Benaïm and Hirsch [28, 30]. The key observation is that the mixed strategy sequence X(n) generated by Algorithm 1 can be viewed as a "Robbins–Monro approximation" (an asymptotic pseudotrajectory to be precise) of the ε-perturbed exponential learning dynamics

  ẏ_i = v_i(x),
  x_i = ε unif_i + (1 − ε) Λ_i(y_i).   (XL_ε)

By differentiating, it follows that x_i(t) evolves according to the ε-perturbed replicator dynamics

  ẋ_{iα} = (x_{iα} − |A_i|^{−1} ε) [ v_{iα}(x) − (1 − ε)^{−1} ∑_{β∈A_i} (x_{iβ} − |A_i|^{−1} ε) v_{iβ}(x) ],   (RD_ε)

which, for ε = 0, boil down to the ordinary replicator dynamics of Taylor and Jonker [29]:

  ẋ_{iα} = x_{iα} [ v_{iα}(x) − ⟨v_i(x), x_i⟩ ].   (RD)
A key property of the replicator dynamics that readily extends to the ε-perturbed variant (RD_ε) is that the game's potential f is a strict Lyapunov function, i.e. f(x(t)) is increasing under (RD_ε) unless x(t) is stationary. By a standard result of Benaïm [28], this implies that the discrete-time process X(n) converges (a.s.) to a connected set of rest points of (RD_ε), which are themselves approximate restricted equilibria of Γ.

Of course, since every ε-pure point of the form (ε unif_i + (1 − ε) e_{α_i})_{i∈N} is also stationary under (RD_ε), the above does not imply that the limit of X(n) is an approximate equilibrium of Γ. To rule out non-equilibrium outcomes, we first note that the set of rest points of (RD_ε) is finite (by genericity), so X(n) must converge to a point. Then, the final step of our convergence proof is provided by a martingale recurrence argument which shows that when X(n) converges to a point, this limit must be an approximate equilibrium of Γ. Finally, the rate of convergence (3.3) is obtained by comparing the payoff of a player's equilibrium strategy to that of the player's other strategies, and then "inverting" the logit choice map to translate this into an exponential decay rate for ‖X_{iα̂_i}(n) − x*‖.
We close this section with two remarks on Theorem 1. First, we note that there is an inverse relationship between the tail exponent q in (H2) and the decay rate β of the algorithm's step-size sequence γ_n ∝ n^{−β}. Specifically, higher values of q imply that the noise in the players' observations is smaller (on average and with high probability), so players can be more aggressive in their choice of step-size. This is reflected in the lower bound 1/q for β and the fact that the players' rate of convergence to Nash equilibrium increases for smaller β; in particular, (3.3) shows that Algorithm 1 enjoys a convergence bound which is just shy of O(exp(−n^{1−1/q})). Thus, if the noise process ξ is sub-Gaussian/sub-exponential (so q can be taken arbitrarily large), a near-constant step-size sequence (small β) yields an almost linear convergence rate.

Second, if the noise process ξ is "isotropic" in the sense of Benaïm [28, Thm. 9.1], the instability of non-pure Nash equilibria under the replicator dynamics can be used to show that the limit of X(n) is pure with probability 1.⁵ When this is the case, the quasi-exponential convergence rate (3.3) becomes universal in that it holds with probability 1 (as opposed to conditioning on lim_{n→∞} X(n) being pure). We find this property particularly appealing for practical applications because it shows that equilibrium is reached exponentially faster than the O(1/√n) worst-case regret bound of (ε-Hedge) would suggest.

⁵ Specifically, we refer here to the so-called "folk theorem" of evolutionary game theory, which states that x* is asymptotically stable under (RD) if and only if it is a strict Nash equilibrium of Γ [15]. The extension of this result to the ε-replicator system (RD_ε) is immediate.
4 Payoff-based learning: the bandit case

We now turn to the bandit framework, a minimal-information setting where, at each stage of the process, players only observe their realized payoffs

  û_i(n) = u_i(α_i(n); α_{−i}(n)).   (4.1)
In this case, players have no clue about the payoffs of strategies that were not chosen, so they must construct an estimator for their payoff vector, including its missing components. A standard way to do this is via the bandit estimator

  v̂_{iα_i}(n) = [1(α_i(n) = α_i) / P(α_i(n) = α_i | F_{n−1})] û_i(n)
             = û_i(n)/X_{iα_i}(n − 1) if α_i = α_i(n), and 0 otherwise.   (4.2)
Indeed, a straightforward calculation shows that

  E[v̂_{iα_i}(n) | F_{n−1}] = ∑_{β_i∈A_i} ∑_{α_{−i}∈A_{−i}} X_{iβ_i}(n − 1) X_{−i,α_{−i}}(n − 1) [1(β_i = α_i)/X_{iα_i}(n − 1)] u_i(β_i; α_{−i})
                           = u_i(α_i; X_{−i}(n − 1))
                           = v_{iα_i}(X(n − 1)),   (4.3)
so the estimator (4.2) is unbiased in the sense of (H1)/(3.2a). On the other hand, a similar calculation shows that the variance of v̂_{iα_i}(n) grows as O(1/X_{iα_i}(n − 1)), implying that (H2)/(3.2b) may fail to hold if the players' action selection probabilities become arbitrarily small.
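As a quick numerical illustration of (4.2) and (4.3), the following sketch (our own, not from the paper) freezes the opponents' play, so that the payoff vector is just a fixed vector u, and checks that the importance-weighted estimator is unbiased while its per-component variance scales like 1/X_{iα}:

import numpy as np

rng = np.random.default_rng(1)
u = np.array([1.0, 0.3, 0.7])        # hypothetical payoffs of three own actions
X = np.array([0.90, 0.08, 0.02])     # mixed strategy with a rarely played action

def bandit_estimate():
    a = rng.choice(3, p=X)
    v_hat = np.zeros(3)
    v_hat[a] = u[a] / X[a]           # eq. (4.2): importance-weighted payoff
    return v_hat

samples = np.stack([bandit_estimate() for _ in range(200_000)])
print(samples.mean(0))   # close to u: the estimator is unbiased, cf. (4.3)
print(samples.var(0))    # component a has variance u_a^2 (1/X_a - 1): it blows up as X_a -> 0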
Importantly, this can never happen if (ε-Hedge) is run with a strictly positive exploration factor ε > 0. In that case, we can show that the bandit estimator (4.2) satisfies both (H1) and (H2), leading to the following result:
Theorem 3. Let Γ be a generic potential game and suppose that Algorithm 1 is run with (i) the bandit estimator (4.2); (ii) a strictly positive exploration factor ε > 0; and (iii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (0, 1]. Then:

1. X(n) converges (a.s.) to a δ-equilibrium of Γ with δ ≡ δ(ε) → 0 as ε → 0.

2. If lim_{n→∞} X(n) is an ε-pure state of the form x*_i = ε unif_i + (1 − ε) e_{α̂_i} for some α̂ ∈ A, then α̂ is a.s. a strict equilibrium of Γ and convergence occurs at a quasi-exponential rate:

  X_{iα̂_i}(n) ≥ 1 − ε − b e^{−c ∑_{k=1}^n γ_k} for some positive b, c > 0.   (4.4)
Proof. Under Algorithm 1, the estimator (4.2) gives

  ‖v̂_i(n)‖ = |û_i(n)| / X_{iα_i(n)}(n − 1) = |u_i(α_i(n); α_{−i}(n))| / X_{iα_i(n)}(n − 1) ≤ u_max / (ε/|A_i|),   (4.5)

where u_max = max_{i∈N} max_{α_1∈A_1} · · · max_{α_N∈A_N} u_i(α_1, . . . , α_N) denotes the absolute maximum payoff in Γ. This implies that (H2) holds true for all q > 2, so our claim follows from Theorem 1.
Theorem 3 shows that the limit of Algorithm 1 is closer to the Nash set of the game if the exploration factor ε is taken as small as possible. On the other hand, the crucial limitation of this result is that it does not apply to the case ε = 0 which corresponds to the game's bona fide Nash equilibria. As we discussed above, the reason for this is that the variance of v̂(n) may grow without bound if action choice probabilities become arbitrarily small, in which case the main components of our proof break down.
With this "bias-variance" trade-off in mind, we introduce below a modified version of Algorithm 1 with an "annealing" schedule for the method's exploration factor:
Algorithm 2 Exponential weights with annealing
Require: step-size sequence γ_n > 0, vanishing exploration factor ε_n > 0, initial scores Y_i ∈ ℝ^{A_i}
1: for n = 1, 2, . . . do
2:   for every player i ∈ N do
3:     set mixed strategy: X_i ← ε_n unif_i + (1 − ε_n) Λ_i(Y_i);
4:     choose action α_i ∼ X_i and receive payoff û_i ← u_i(α_i; α_{−i});
5:     set v̂_{iα_i} ← û_i/X_{iα_i} and v̂_{iβ_i} ← 0 for β_i ≠ α_i;
6:     update scores: Y_i ← Y_i + γ_n v̂_i;
7:   end for
8: end for
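A minimal NumPy sketch of Algorithm 2 on the same kind of 2x2 common-interest game follows; the schedules γ_n = n^(-0.75) and ε_n = n^(-0.2) are illustrative choices of ours that satisfy the conditions of Theorem 4 below, not values prescribed by the paper.

import numpy as np

def logit_choice(y):
    z = np.exp(y - y.max())
    return z / z.sum()

def annealed_ew(payoffs, T=20000, beta=0.75, r=0.2, rng=None):
    # Algorithm 2: exponential weights with bandit feedback and vanishing exploration,
    # run with the illustrative schedules gamma_n = n**-beta and eps_n = n**-r.
    rng = rng or np.random.default_rng(0)
    Y = [np.zeros(2), np.zeros(2)]
    for n in range(1, T + 1):
        gamma, eps = n ** -beta, n ** -r
        X = [eps / 2 + (1 - eps) * logit_choice(y) for y in Y]
        a = [rng.choice(2, p=x) for x in X]
        u = [payoffs[0][a[0], a[1]], payoffs[1][a[0], a[1]]]   # realized payoffs only
        for i in range(2):
            v_hat = np.zeros(2)
            v_hat[a[i]] = u[i] / X[i][a[i]]                    # bandit estimator (4.2)
            Y[i] += gamma * v_hat
    return X

A = np.array([[1.0, 0.0], [0.0, 2.0]])
print([x.round(3) for x in annealed_ew([A, A])])   # concentrates near a pure equilibrium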
Of course, the convergence of Algorithm 2 depends heavily on the rate at which ε_n decays to 0 relative to the algorithm's step-size sequence γ_n. This can be seen clearly in our next result:

Theorem 4. Let Γ be a generic potential game and suppose that Algorithm 2 is run with (i) the bandit estimator (4.2); (ii) a step-size sequence of the form γ_n ∝ 1/n^β for some β ∈ (1/2, 1]; and (iii) a decreasing exploration factor ε_n → 0 such that

  lim_{n→∞} γ_n/ε_n² = 0,   ∑_{n=1}^∞ γ_n²/ε_n < ∞,   and   lim_{n→∞} (ε_n − ε_{n+1})/ε_n² = 0.   (4.6)

Then, X(n) converges (a.s.) to a Nash equilibrium of Γ.
The main challenge in proving Theorem 4 is that, unless the "innovation term" U_i(n) = v̂_i(n) − v_i(X(n − 1)) has bounded variance, Benaïm's general theory does not imply that X(n) forms an asymptotic pseudotrajectory of the underlying mean dynamics (here, the unperturbed replicator system (RD)). Nevertheless, under the summability condition (4.6), it is possible to show that this is the case by using a martingale limit argument based on Burkholder's inequality. Furthermore, under the stated conditions, it is also possible to show that, if X(n) converges, its limit is necessarily a Nash equilibrium of Γ. Our proof then follows in roughly the same way as in the case of Theorem 1; for the details, we refer the reader to the appendix.

We close this section by noting that the summability condition (4.6) imposes a lower bound on the step-size exponent β that is different from the lower bound in Theorem 3. In particular, if β = 1/2, (4.6) cannot hold for any vanishing sequence of exploration factors ε_n → 0. Given that the innovation term U_i is bounded, we conjecture that this sufficient condition is not tight and can be relaxed further. We intend to address this issue in future work.
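Under the reading of (4.6) given above, power-law schedules γ_n ∝ n^(-β) and ε_n ∝ n^(-r) reduce the three conditions to exponent inequalities; the helper below (our own illustration, contingent on that reading of the conditions) makes the feasible region, and the infeasibility at β = 1/2, explicit.

def satisfies_46(beta, r):
    # Check (4.6) for gamma_n = n**-beta, eps_n = n**-r with beta in (1/2, 1], r > 0:
    #   (i)   gamma_n / eps_n**2 -> 0          requires beta > 2*r
    #   (ii)  sum gamma_n**2 / eps_n < infty   requires 2*beta - r > 1
    #   (iii) (eps_n - eps_{n+1}) / eps_n**2 ~ r * n**(r-1) -> 0, i.e. r < 1
    return beta > 2 * r and 2 * beta - r > 1 and 0 < r < 1

print(satisfies_46(0.75, 0.20))  # True: a feasible pair
print(satisfies_46(0.50, 0.10))  # False: beta = 1/2 admits no r, cf. the remark above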
5 Conclusion and perspectives

The results of the previous sections show that no-regret learning via exponential weights enjoys appealing convergence properties in generic potential games. Specifically, in the semi-bandit case, the sequence of play converges to a Nash equilibrium with probability 1, and convergence to pure equilibria occurs at a quasi-exponential rate. In the bandit case, the same holds true for O(ε)-equilibria if the algorithm is run with a positive mixing factor ε > 0; and if the algorithm is run with a decreasing mixing schedule, the sequence of play converges to an actual Nash equilibrium (again, with probability 1). In future work, we intend to examine the algorithm's convergence properties in other classes of games (such as smooth games), extend our analysis to the general "follow the regularized leader" (FTRL) class of policies (of which EW is a special case), and to examine the impact of asynchronicities and delays in the players' feedback/update cycles.
Acknowledgments

Johanne Cohen was partially supported by the grant CNRS PEPS MASTODONS project ADOC 2017. Amélie Héliou and Panayotis Mertikopoulos gratefully acknowledge financial support from the Huawei Innovation Research Program ULTRON and the ANR JCJC project ORACLESS (grant no. ANR-16-CE33-0004-01).
References
[1] James Hannan. Approximation to Bayes risk in repeated play. In Melvin Dresher, Albert William Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, Volume III, volume 39 of Annals of Mathematics Studies, pages 97–139. Princeton University Press, Princeton, NJ, 1957.
[2] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[3] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[4] Volodimir G. Vovk. Aggregating strategies. In COLT '90: Proceedings of the 3rd Workshop on Computational Learning Theory, pages 371–383, 1990.
[5] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, 1995.
[6] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.
[7] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[8] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, September 2000.
[9] Yannick Viossat and Andriy Zapechelnyuk. No-regret dynamics and fictitious play. Journal of Economic Theory, 148(2):825–842, March 2013.
[10] Panayotis Mertikopoulos, Christos H. Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In SODA '18: Proceedings of the 29th annual ACM-SIAM symposium on discrete algorithms, to appear.
[11] Dov Monderer and Lloyd S. Shapley. Potential games. Games and Economic Behavior, 14(1):124–143, 1996.
[12] Noam Nisan, Tim Roughgarden, Eva Tardos, and V. V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007.
[13] William H. Sandholm. Population Games and Evolutionary Dynamics. Economic learning and social evolution. MIT Press, Cambridge, MA, 2010.
[14] Samson Lasaulce and Hamidou Tembine. Game Theory and Learning for Wireless Networks: Fundamentals and Applications. Academic Press, Elsevier, 2010.
[15] Josef Hofbauer and Karl Sigmund. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(4):479–519, July 2003.
[16] Dean Foster and Rakesh V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1):40–55, October 1997.
[17] Avrim Blum and Yishay Mansour. Learning, regret minimization, and equilibria. In Noam Nisan, Tim Roughgarden, Eva Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory, chapter 4. Cambridge University Press, 2007.
[18] Avrim Blum, Mohammad Taghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In STOC '08: Proceedings of the 40th annual ACM symposium on the Theory of Computing, pages 373–382. ACM, 2008.
[19] Robert Kleinberg, Georgios Piliouras, and Éva Tardos. Load balancing without regret in the bulletin board model. Distributed Computing, 24(1):21–29, 2011.
[20] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Advances in Neural Information Processing Systems, pages 2989–2997, 2015.
[21] Tim Roughgarden. Intrinsic robustness of the price of anarchy. Journal of the ACM (JACM), 62(5):32, 2015.
[22] Dylan J. Foster, Thodoris Lykouris, Karthik Sridharan, and Eva Tardos. Learning in games: Robustness of fast convergence. In Advances in Neural Information Processing Systems, pages 4727–4735, 2016.
[23] Robert Kleinberg, Georgios Piliouras, and Eva Tardos. Multiplicative updates outperform generic no-regret learning in congestion games. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 533–542. ACM, 2009.
[24] Ruta Mehta, Ioannis Panageas, and Georgios Piliouras. Natural selection as an inhibitor of genetic diversity: Multiplicative weights updates algorithm and a conjecture of haploid genetics. In ITCS '15: Proceedings of the 6th Conference on Innovations in Theoretical Computer Science, 2015.
[25] Gerasimos Palaiopanos, Ioannis Panageas, and Georgios Piliouras. Multiplicative weights update with constant step-size in congestion games: Convergence, limit cycles and chaos. In NIPS '17: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[26] Walid Krichene, Benjamin Drighès, and Alexandre M. Bayen. Online learning of Nash equilibria in congestion games. SIAM Journal on Control and Optimization, 53(2):1056–1081, 2015.
[27] Pierre Coucheney, Bruno Gaujal, and Panayotis Mertikopoulos. Penalty-regulated dynamics and robust learning procedures in games. Mathematics of Operations Research, 40(3):611–633, August 2015.
[28] Michel Benaïm. Dynamics of stochastic approximation algorithms. Séminaire de probabilités de Strasbourg, 33, 1999.
[29] Peter D. Taylor and Leo B. Jonker. Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1-2):145–156, 1978.
[30] Michel Benaïm and Morris W. Hirsch. Asymptotic pseudotrajectories and chain recurrent flows, with applications. Journal of Dynamics and Differential Equations, 8(1):141–176, 1996.
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
Ryan Lowe*
McGill University
OpenAI
Jean Harb
McGill University
OpenAI
Yi Wu*
UC Berkeley
Aviv Tamar
UC Berkeley
Pieter Abbeel
UC Berkeley
OpenAI
Igor Mordatch
OpenAI
Abstract
We explore deep reinforcement learning methods for multi-agent domains. We
begin by analyzing the difficulty of traditional algorithms in the multi-agent case:
Q-learning is challenged by an inherent non-stationarity of the environment, while
policy gradient suffers from a variance that increases as the number of agents grows.
We then present an adaptation of actor-critic methods that considers action policies
of other agents and is able to successfully learn policies that require complex multiagent coordination. Additionally, we introduce a training regimen utilizing an
ensemble of policies for each agent that leads to more robust multi-agent policies.
We show the strength of our approach compared to existing methods in cooperative
as well as competitive scenarios, where agent populations are able to discover
various physical and informational coordination strategies.
1 Introduction
Reinforcement learning (RL) has recently been applied to solve challenging problems, from game
playing [23, 28] to robotics [18]. In industrial applications, RL is seeing use in large scale systems
such as data center cooling [1]. Most of the successes of RL have been in single agent domains,
where modelling or predicting the behaviour of other actors in the environment is largely unnecessary.
However, there are a number of important applications that involve interaction between multiple
agents, where emergent behavior and complexity arise from agents co-evolving together. For example,
multi-robot control [20], the discovery of communication and language [29, 8, 24], multiplayer games
[27], and the analysis of social dilemmas [17] all operate in a multi-agent domain. Related problems,
such as variants of hierarchical reinforcement learning [6] can also be seen as a multi-agent system,
with multiple levels of hierarchy being equivalent to multiple agents. Additionally, multi-agent
self-play has recently been shown to be a useful training paradigm [28, 30]. Successfully scaling RL
to environments with multiple agents is crucial to building artificially intelligent systems that can
productively interact with humans and each other.
* Equal contribution. Corresponding authors: [email protected], [email protected], [email protected].

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Unfortunately, traditional reinforcement learning approaches such as Q-Learning or policy gradient are poorly suited to multi-agent environments. One issue is that each agent's policy is changing as training progresses, and the environment becomes non-stationary from the perspective of any individual agent (in a way that is not explainable by changes in the agent's own policy). This presents learning stability challenges and prevents the straightforward use of past experience replay, which is
crucial for stabilizing deep Q-learning. Policy gradient methods, on the other hand, usually exhibit
very high variance when coordination of multiple agents is required. Alternatively, one can use model-based policy optimization which can learn optimal policies via back-propagation, but this requires
a (differentiable) model of the world dynamics and assumptions about the interactions between
agents. Applying these methods to competitive environments is also challenging from an optimization
perspective, as evidenced by the notorious instability of adversarial training methods [11].
In this work, we propose a general-purpose multi-agent learning algorithm that: (1) leads to learned
policies that only use local information (i.e. their own observations) at execution time, (2) does
not assume a differentiable model of the environment dynamics or any particular structure on the
communication method between agents, and (3) is applicable not only to cooperative interaction
but to competitive or mixed interaction involving both physical and communicative behavior. The
ability to act in mixed cooperative-competitive environments may be critical for intelligent agents;
while competitive training provides a natural curriculum for learning [30], agents must also exhibit
cooperative behavior (e.g. with humans) at execution time.
We adopt the framework of centralized training with decentralized execution, allowing the policies
to use extra information to ease training, so long as this information is not used at test time. It is
unnatural to do this with Q-learning without making additional assumptions about the structure of the
environment, as the Q function generally cannot contain different information at training and test
time. Thus, we propose a simple extension of actor-critic policy gradient methods where the critic is
augmented with extra information about the policies of other agents, while the actor only has access
to local information. After training is completed, only the local actors are used at execution phase,
acting in a decentralized manner and equally applicable in cooperative and competitive settings. This
is a natural setting for multi-agent language learning, as full centralization would not require the
development of discrete communication protocols.
Since the centralized critic function explicitly uses the decision-making policies of other agents, we
additionally show that agents can learn approximate models of other agents online and effectively use
them in their own policy learning procedure. We also introduce a method to improve the stability of
multi-agent policies by training agents with an ensemble of policies, thus requiring robust interaction
with a variety of collaborator and competitor policies. We empirically show the success of our
approach compared to existing methods in cooperative as well as competitive scenarios, where agent
populations are able to discover complex physical and communicative coordination strategies.
2 Related Work
The simplest approach to learning in multi-agent settings is to use independently learning agents.
This was attempted with Q-learning in [34], but does not perform well in practice [22]. As we will
show, independently-learning policy gradient methods also perform poorly. One issue is that each
agent's policy changes during training, resulting in a non-stationary environment and preventing the naïve application of experience replay. Previous work has attempted to address this by inputting other agents' policy parameters to the Q function [35], explicitly adding the iteration index to the replay buffer, or using importance sampling [9]. Deep Q-learning approaches have previously been
investigated in [33] to train competing Pong agents.
The nature of interaction between agents can either be cooperative, competitive, or both and many
algorithms are designed only for a particular nature of interaction. Most studied are cooperative
settings, with strategies such as optimistic and hysteretic Q function updates [15, 21, 25], which
assume that the actions of other agents are made to improve collective reward. Another approach is to
indirectly arrive at cooperation via sharing of policy parameters [12], but this requires homogeneous
agent capabilities. These algorithms are generally not applicable in competitive or mixed settings.
See [26, 4] for surveys of multi-agent learning approaches and applications.
Concurrently to our work, [7] proposed a similar idea of using policy gradient methods with a
centralized critic, and test their approach on a StarCraft micromanagement task. Their approach
differs from ours in the following ways: (1) they learn a single centralized critic for all agents, whereas
we learn a centralized critic for each agent, allowing for agents with differing reward functions
including competitive scenarios, (2) we consider environments with explicit communication between
agents, (3) they combine recurrent policies with feed-forward critics, whereas our experiments
2
use feed-forward policies (although our methods are applicable to recurrent policies), (4) we learn
continuous policies whereas they learn discrete policies.
Recent work has focused on learning grounded cooperative communication protocols between agents
to solve various tasks [29, 8, 24]. However, these methods are usually only applicable when the
communication between agents is carried out over a dedicated, differentiable communication channel.
Our method requires explicitly modeling decision-making process of other agents. The importance
of such modeling has been recognized by both reinforcement learning [3, 5] and cognitive science
communities [10]. [13] stressed the importance of being robust to the decision making process of
other agents, as do others by building Bayesian models of decision making. We incorporate such
robustness considerations by requiring that agents interact successfully with an ensemble of any
possible policies of other agents, improving training stability and robustness of agents after training.
3 Background
Markov Games. In this work, we consider a multi-agent extension of Markov decision processes (MDPs) called partially observable Markov games [19]. A Markov game for N agents is defined by a set of states S describing the possible configurations of all agents, a set of actions A_1, ..., A_N and a set of observations O_1, ..., O_N for each agent. To choose actions, each agent i uses a stochastic policy π_{θ_i}: O_i × A_i → [0, 1], which produces the next state according to the state transition function T: S × A_1 × ... × A_N → S.² Each agent i obtains rewards as a function of the state and agent's action r_i: S × A_i → ℝ, and receives a private observation correlated with the state o_i: S → O_i. The initial states are determined by a distribution ρ: S → [0, 1]. Each agent i aims to maximize its own total expected return R_i = ∑_{t=0}^T γ^t r_i^t, where γ is a discount factor and T is the time horizon.

² To minimize notation we will often omit θ from the subscript of π.
Q-Learning and Deep Q-Networks (DQN). Q-Learning and DQN [23] are popular methods in reinforcement learning and have been previously applied to multi-agent settings [8, 35]. Q-Learning makes use of an action-value function for policy π as Q^π(s, a) = E[R | s^t = s, a^t = a]. This Q function can be recursively rewritten as Q^π(s, a) = E_{s'}[r(s, a) + γ E_{a'∼π}[Q^π(s', a')]]. DQN learns the action-value function Q* corresponding to the optimal policy by minimizing the loss:

  L(θ) = E_{s,a,r,s'}[(Q*(s, a | θ) − y)²], where y = r + γ max_{a'} Q̄*(s', a'),   (1)

where Q̄ is a target Q function whose parameters are periodically updated with the most recent θ, which helps stabilize learning. Another crucial component of stabilizing DQN is the use of an experience replay buffer D containing tuples (s, a, r, s').
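For concreteness, here is a minimal PyTorch sketch of the DQN loss (1) with a target network; the network sizes, optimizer, and the random stand-in minibatch are illustrative assumptions on our part, not details taken from [23].

import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 3, 0.95
Q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
Q_bar = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
Q_bar.load_state_dict(Q.state_dict())        # target network: a periodically refreshed copy
opt = torch.optim.Adam(Q.parameters(), lr=1e-3)

# a toy minibatch of replayed transitions (s, a, r, s')
s, a = torch.randn(32, obs_dim), torch.randint(n_actions, (32,))
r, s2 = torch.randn(32), torch.randn(32, obs_dim)

with torch.no_grad():                         # y = r + gamma * max_a' Q_bar(s', a')
    y = r + gamma * Q_bar(s2).max(dim=1).values
q_sa = Q(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = ((q_sa - y) ** 2).mean()               # eq. (1)
opt.zero_grad(); loss.backward(); opt.step()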
Q-Learning can be directly applied to multi-agent settings by having each agent i learn an independently optimal function Q_i [34]. However, because agents are independently updating their policies as learning progresses, the environment appears non-stationary from the view of any one agent, violating Markov assumptions required for convergence of Q-learning. Another difficulty observed in [9] is that the experience replay buffer cannot be used in such a setting since in general, P(s' | s, a, π_1, ..., π_N) ≠ P(s' | s, a, π'_1, ..., π'_N) when any π_i ≠ π'_i.
Policy Gradient (PG) Algorithms. Policy gradient methods are another popular choice for a variety of RL tasks. The main idea is to directly adjust the parameters θ of the policy in order to maximize the objective J(θ) = E_{s∼p^π, a∼π_θ}[R] by taking steps in the direction of ∇_θ J(θ). Using the Q function defined previously, the gradient of the policy can be written as [32]:

  ∇_θ J(θ) = E_{s∼p^π, a∼π_θ}[∇_θ log π_θ(a|s) Q^π(s, a)],   (2)

where p^π is the state distribution. The policy gradient theorem has given rise to several practical algorithms, which often differ in how they estimate Q^π. For example, one can simply use a sample return R_t = ∑_{i=t}^T γ^{i−t} r_i, which leads to the REINFORCE algorithm [37]. Alternatively, one could learn an approximation of the true action-value function Q^π(s, a) by e.g. temporal-difference learning [31]; this Q^π(s, a) is called the critic and leads to a variety of actor-critic algorithms [31].
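The sample-based form of Eq. 2 fits in a few lines. Below is a NumPy sketch of REINFORCE for a single-state problem with a softmax policy; the reward values and step size are made-up numbers for illustration.

import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                     # logits of a softmax policy over 3 actions
reward = np.array([0.1, 1.0, 0.4])      # hypothetical per-action rewards

for step in range(2000):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()
    a = rng.choice(3, p=pi)
    grad_log = -pi.copy(); grad_log[a] += 1.0   # grad log pi(a) = one_hot(a) - pi
    theta += 0.1 * grad_log * reward[a]         # ascend grad log pi(a|s) * return
print(pi.round(3))                      # probability mass concentrates on the best action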
Policy gradient methods are known to exhibit high variance gradient estimates. This is exacerbated in multi-agent settings; since an agent's reward usually depends on the actions of many agents, the reward conditioned only on the agent's own actions (when the actions of other agents are not considered in the agent's optimization process) exhibits much more variability, thereby increasing the variance of its gradients. Below, we show a simple setting where the probability of taking a gradient step in the correct direction decreases exponentially with the number of agents.
Proposition 1. Consider N agents with binary actions: P(a_i = 1) = θ_i, where R(a_1, . . . , a_N) = 1_{a_1=···=a_N}. We assume an uninformed scenario, in which agents are initialized to θ_i = 0.5 ∀i. Then, if we are estimating the gradient of the cost J with policy gradient, we have:

  P(⟨∇̂J, ∇J⟩ > 0) ∝ (0.5)^N,

where ∇̂J is the policy gradient estimator from a single sample, and ∇J is the true gradient.

Proof. See Appendix.
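Proposition 1 is easy to check by simulation: at θ_i = 0.5, a single-sample estimate aligns with the true (all-positive) gradient only when every agent happens to act in concert. The sketch below (our own, not the appendix proof) estimates that probability.

import numpy as np

def positive_alignment_rate(N, trials=100_000):
    # P(<grad_hat, grad_J> > 0) for a single-sample estimator at theta_i = 0.5.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=(trials, N))     # each agent acts i.i.d. Bernoulli(0.5)
    R = (a.min(axis=1) == 1).astype(float)       # reward 1 iff all agents play 1
    score = 2.0 * (2 * a - 1)                    # d/dtheta_i log p(a_i) at theta_i = 0.5
    g_hat = R[:, None] * score                   # single-sample REINFORCE estimate
    # the true gradient is a positive constant times the all-ones vector,
    # so alignment is positive exactly when the estimate's components sum to > 0
    return float((g_hat.sum(axis=1) > 0).mean())

for N in (2, 4, 6):
    print(N, positive_alignment_rate(N), 0.5 ** N)   # empirical rate vs. (0.5)^N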
The use of baselines, such as value function baselines typically used to ameliorate high variance, is
problematic in multi-agent settings due to the non-stationarity issues mentioned previously.
Deterministic Policy Gradient (DPG) Algorithms. It is also possible to extend the policy gradient framework to deterministic policies μ_θ: S → A. In particular, under certain conditions we can write the gradient of the objective J(θ) = E_{s∼p^μ}[R(s, a)] as:

  ∇_θ J(θ) = E_{s∼D}[∇_θ μ_θ(s) ∇_a Q^μ(s, a)|_{a=μ_θ(s)}]   (3)

Since this theorem relies on ∇_a Q^μ(s, a), it requires that the action space A (and thus the policy μ) be continuous.
Deep deterministic policy gradient (DDPG) is a variant of DPG where the policy μ and critic Q^μ are approximated with deep neural networks. DDPG is an off-policy algorithm, and samples trajectories from a replay buffer of experiences that are stored throughout training. DDPG also makes use of a target network, as in DQN [23].
4 Methods

4.1 Multi-Agent Actor Critic

We have argued in the previous section that naïve policy gradient methods perform poorly in simple multi-agent settings, and this is supported in our experiments in Section 5. Our goal in this section is to derive an algorithm that works well in such settings. However, we would like to operate under the following constraints: (1) the learned policies can only use local information (i.e. their own observations) at execution time, (2) we do not assume a differentiable model of the environment dynamics, unlike in [24], and (3) we do not assume any particular structure on the communication method between agents (that is, we don't assume a differentiable communication channel). Fulfilling the above desiderata would provide a general-purpose multi-agent learning algorithm that could be applied not just to cooperative games with explicit communication channels, but competitive games and games involving only physical interactions between agents.

Similarly to [8], we accomplish our goal by adopting the framework of centralized training with decentralized execution. Thus, we allow the policies to use extra information to ease training, so long as this information is not used at test time. It is unnatural to do this with Q-learning, as the Q function generally cannot contain different information at training and test time. Thus, we propose a simple extension of actor-critic policy gradient methods where the critic is augmented with extra information about the policies of other agents.

[Figure 1: Overview of our multi-agent decentralized actor, centralized critic approach.]

More concretely, consider a game with N agents with policies parameterized by θ = {θ_1, ..., θ_N}, and let π = {π_1, ..., π_N} be the set of all agent policies. Then we can write the gradient of the expected return for agent i, J(θ_i) = E[R_i], as:

  ∇_{θ_i} J(θ_i) = E_{s∼p^μ, a_i∼π_i}[∇_{θ_i} log π_i(a_i|o_i) Q_i^π(x, a_1, ..., a_N)].   (4)
Here Q_i^π(x, a_1, ..., a_N) is a centralized action-value function that takes as input the actions of all agents, a_1, . . . , a_N, in addition to some state information x, and outputs the Q-value for agent i. In the simplest case, x could consist of the observations of all agents, x = (o_1, ..., o_N), however we could also include additional state information if available. Since each Q_i^π is learned separately, agents can have arbitrary reward structures, including conflicting rewards in a competitive setting.

We can extend the above idea to work with deterministic policies. If we now consider N continuous policies μ_{θ_i} w.r.t. parameters θ_i (abbreviated as μ_i), the gradient can be written as:

  ∇_{θ_i} J(μ_i) = E_{x,a∼D}[∇_{θ_i} μ_i(a_i|o_i) ∇_{a_i} Q_i^μ(x, a_1, ..., a_N)|_{a_i=μ_i(o_i)}],   (5)
Here the experience replay buffer D contains the tuples (x, x', a_1, . . . , a_N, r_1, . . . , r_N), recording experiences of all agents. The centralized action-value function Q_i^μ is updated as:

  L(θ_i) = E_{x,a,r,x'}[(Q_i^μ(x, a_1, . . . , a_N) − y)²],   y = r_i + γ Q_i^{μ'}(x', a'_1, . . . , a'_N)|_{a'_j=μ'_j(o_j)},   (6)

where μ' = {μ_{θ'_1}, ..., μ_{θ'_N}} is the set of target policies with delayed parameters θ'_i. As shown in Section 5, we find the centralized critic with deterministic policies works very well in practice, and refer to it as multi-agent deep deterministic policy gradient (MADDPG). We provide the description of the full algorithm in the Appendix.
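Since the full algorithm is deferred to the appendix, the following heavily simplified PyTorch sketch shows one update per agent in the spirit of Eqs. 5 and 6. It is our own illustrative reading, not the authors' released implementation: a random batch stands in for the replay buffer, the current actors are used in place of target networks, and all dimensions are arbitrary.

import torch
import torch.nn as nn

N, obs_dim, act_dim, gamma, B = 2, 8, 2, 0.95, 32
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, act_dim), nn.Tanh()) for _ in range(N)]
critics = [nn.Sequential(nn.Linear(N * (obs_dim + act_dim), 64), nn.ReLU(),
                         nn.Linear(64, 1)) for _ in range(N)]
a_opt = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in actors]
c_opt = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in critics]

# toy replayed batch: per-agent observations, actions, rewards, next observations
o = [torch.randn(B, obs_dim) for _ in range(N)]
a = [torch.randn(B, act_dim) for _ in range(N)]
r = [torch.randn(B) for _ in range(N)]
o2 = [torch.randn(B, obs_dim) for _ in range(N)]

for i in range(N):
    # centralized critic update, eq. (6)
    with torch.no_grad():
        a2 = [actors[j](o2[j]) for j in range(N)]
        y = r[i] + gamma * critics[i](torch.cat(o2 + a2, dim=1)).squeeze(1)
    q = critics[i](torch.cat(o + a, dim=1)).squeeze(1)
    c_loss = ((q - y) ** 2).mean()
    c_opt[i].zero_grad(); c_loss.backward(); c_opt[i].step()

    # decentralized actor update, eq. (5): only agent i's action carries gradient
    a_pi = [actors[j](o[j]).detach() for j in range(N)]
    a_pi[i] = actors[i](o[i])
    a_loss = -critics[i](torch.cat(o + a_pi, dim=1)).mean()
    a_opt[i].zero_grad(); a_loss.backward(); a_opt[i].step()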
A primary motivation behind MADDPG is that, if we know the actions taken by all agents, the environment is stationary even as the policies change, since P(s' | s, a_1, ..., a_N, π_1, ..., π_N) = P(s' | s, a_1, ..., a_N) = P(s' | s, a_1, ..., a_N, π'_1, ..., π'_N) for any π_i ≠ π'_i. This is not the case if we do not explicitly condition on the actions of other agents, as done for most traditional RL methods.

Note that we require the policies of other agents to apply an update in Eq. 6. Knowing the observations and policies of other agents is not a particularly restrictive assumption; if our goal is to train agents to exhibit complex communicative behaviour in simulation, this information is often available to all agents. However, we can relax this assumption if necessary by learning the policies of other agents from observations; we describe a method of doing this in Section 4.2.
4.2 Inferring Policies of Other Agents
To remove the assumption of knowing other agents' policies, as required in Eq. 6, each agent i can additionally maintain an approximation μ̂_{φ_i^j} (where φ are the parameters of the approximation; henceforth μ̂_i^j) to the true policy of agent j, μ_j. This approximate policy is learned by maximizing the log probability of agent j's actions, with an entropy regularizer:

  L(φ_i^j) = −E_{o_j,a_j}[log μ̂_i^j(a_j|o_j) + λ H(μ̂_i^j)],   (7)

where H is the entropy of the policy distribution. With the approximate policies, y in Eq. 6 can be replaced by an approximate value ŷ calculated as follows:

  ŷ = r_i + γ Q_i^{μ'}(x', μ̂'^1_i(o_1), . . . , μ'_i(o_i), . . . , μ̂'^N_i(o_N)),   (8)

where μ̂'^j_i denotes the target network for the approximate policy μ̂_i^j. Note that Eq. 7 can be optimized in a completely online fashion: before updating Q_i^μ, the centralized Q function, we take the latest samples of each agent j from the replay buffer to perform a single gradient step to update φ_i^j. We also input the action log probabilities of each agent directly into Q, rather than sampling.
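Eq. 7 is an ordinary maximum-likelihood fit with an entropy bonus. A minimal sketch follows, assuming agent j's actions are discrete and the approximate policy is a small softmax network; λ and the architecture are our choices, not the paper's.

import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions, lam = 8, 5, 1e-3
approx_pi = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(approx_pi.parameters(), lr=1e-3)

def infer_step(o_j, a_j):
    # One online step of eq. (7) on agent j's latest replayed (o_j, a_j) pairs.
    logp = F.log_softmax(approx_pi(o_j), dim=1)
    nll = F.nll_loss(logp, a_j)                        # -E[log pi_hat(a_j | o_j)]
    entropy = -(logp.exp() * logp).sum(dim=1).mean()   # H(pi_hat)
    loss = nll - lam * entropy                         # minimizing eq. (7)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

infer_step(torch.randn(32, obs_dim), torch.randint(n_actions, (32,)))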
4.3 Agents with Policy Ensembles
A recurring problem in multi-agent reinforcement learning is the environment non-stationarity due to the agents' changing policies. This is particularly true in competitive settings, where agents can derive a strong policy by overfitting to the behavior of their competitors. Such policies are undesirable as they are brittle and may fail when the competitors alter their strategies.
To obtain multi-agent policies that are more robust to changes in the policy of competing agents, we propose to train a collection of K different sub-policies. At each episode, we randomly select one particular sub-policy for each agent to execute. Suppose that policy μ_i is an ensemble of K different sub-policies, with sub-policy k denoted by μ_{θ_i^(k)} (abbreviated as μ_i^(k)). For agent i, we are then maximizing the ensemble objective:

  J_e(μ_i) = E_{k∼unif(1,K), s∼p^μ, a∼μ_i^(k)}[R_i(s, a)].

Since different sub-policies will be executed in different episodes, we maintain a replay buffer D_i^(k) for each sub-policy μ_i^(k) of agent i. Accordingly, we can derive the gradient of the ensemble objective with respect to θ_i^(k) as follows:

  ∇_{θ_i^(k)} J_e(μ_i) = (1/K) E_{x,a∼D_i^(k)}[∇_{θ_i^(k)} log μ_i^(k)(a_i|o_i) ∇_{a_i} Q^{μ_i}(x, a_1, . . . , a_N)|_{a_i=μ_i^(k)(o_i)}].   (9)
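Operationally, the ensemble scheme is mostly bookkeeping: draw one sub-policy index per agent at the start of each episode, and route that episode's transitions to the corresponding buffer so that the gradient (9) is taken over D_i^(k). A schematic sketch (class and method names are ours):

import random
from collections import deque

class EnsembleAgent:
    def __init__(self, make_policy, K, buffer_size=100_000):
        self.subpolicies = [make_policy() for _ in range(K)]
        self.buffers = [deque(maxlen=buffer_size) for _ in range(K)]  # one D_i^(k) each
        self.k = 0

    def new_episode(self):
        self.k = random.randrange(len(self.subpolicies))   # k ~ unif(1, K)

    def act(self, obs):
        return self.subpolicies[self.k](obs)

    def store(self, transition):
        self.buffers[self.k].append(transition)            # feeds eq. (9) for sub-policy k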
5 Experiments¹

5.1 Environments
To perform our experiments, we adopt the grounded communication environment proposed in [24], which consists of N agents and L landmarks inhabiting a two-dimensional world with continuous space and discrete time.² Agents may take physical actions in the environment and communication actions that get broadcasted to other agents. Unlike [24], we do not assume that all agents have identical action and observation spaces, or act according to the same policy π. We also consider games that are both cooperative (all agents must maximize a shared return) and competitive (agents have conflicting goals). Some environments require explicit communication between agents in order to achieve the best reward, while in other environments agents can only perform physical actions. We provide details for each environment below.
[Figure 2: Illustrations of the experimental environment and some tasks we consider, including a) Cooperative Communication b) Predator-Prey c) Cooperative Navigation d) Physical Deception. See webpage for videos of all experimental results.]
Cooperative communication. This task consists of two cooperative agents, a speaker and a listener,
who are placed in an environment with three landmarks of differing colors. At each episode, the
listener must navigate to a landmark of a particular color, and obtains reward based on its distance
to the correct landmark. However, while the listener can observe the relative position and color
of the landmarks, it does not know which landmark it must navigate to. Conversely, the speaker's
observation consists of the correct landmark color, and it can produce a communication output at
each time step which is observed by the listener. Thus, the speaker must learn to output the landmark
colour based on the motions of the listener. Although this problem is relatively simple, as we show in
Section 5.2 it poses a significant challenge to traditional RL algorithms.
Cooperative navigation. In this environment, agents must cooperate through physical actions to
reach a set of L landmarks. Agents observe the relative positions of other agents and landmarks, and
are collectively rewarded based on the proximity of any agent to each landmark. In other words, the
agents have to "cover" all of the landmarks. Further, the agents occupy significant physical space and
are penalized when colliding with each other. Our agents learn to infer the landmark they must cover,
and move there while avoiding other agents.
¹ Videos of our experimental results can be viewed at https://sites.google.com/site/multiagentac/
² The environments are publicly available: https://github.com/openai/multiagent-particle-envs
Figure 3: Comparison between MADDPG and DDPG (left), and between single policy MADDPG
and ensemble MADDPG (right) on the competitive environments. Each bar cluster shows the 0-1
normalized score for a set of competing policies (agent v adversary), where a higher score is better for
the agent. In all cases, MADDPG outperforms DDPG when directly pitted against it, and similarly
for the ensemble against the single MADDPG policies. Full results are given in the Appendix.
Keep-away. This scenario consists of L landmarks including a target landmark, N cooperating
agents who know the target landmark and are rewarded based on their distance to the target, and M
adversarial agents who must prevent the cooperating agents from reaching the target. Adversaries
accomplish this by physically pushing the agents away from the landmark, temporarily occupying it.
While the adversaries are also rewarded based on their distance to the target landmark, they do not
know the correct target; this must be inferred from the movements of the agents.
Physical deception. Here, N agents cooperate to reach a single target landmark from a total of N
landmarks. They are rewarded based on the minimum distance of any agent to the target (so only one
agent needs to reach the target landmark). However, a lone adversary also desires to reach the target
landmark; the catch is that the adversary does not know which of the landmarks is the correct one.
Thus the cooperating agents, who are penalized based on the adversary distance to the target, learn to
spread out and cover all landmarks so as to deceive the adversary.
Predator-prey. In this variant of the classic predator-prey game, N slower cooperating agents
must chase the faster adversary around a randomly generated environment with L large landmarks
impeding the way. Each time the cooperative agents collide with an adversary, the agents are rewarded
while the adversary is penalized. Agents observe the relative positions and velocities of the agents,
and the positions of the landmarks.
Covert communication. This is an adversarial communication environment, where a speaker agent ("Alice") must communicate a message to a listener agent ("Bob"), who must reconstruct the message at the other end. However, an adversarial agent ("Eve") is also observing the channel, and wants to reconstruct the message; Alice and Bob are penalized based on Eve's reconstruction, and thus Alice must encode her message using a randomly generated key, known only to Alice and Bob. This is similar to the cryptography environment considered in [2].
5.2 Comparison to Decentralized Reinforcement Learning Methods
We implement MADDPG and evaluate it on the environments presented in Section 5.1. Unless otherwise specified, our policies are parameterized by a two-layer ReLU MLP with 64 units per layer. To support discrete communication messages, we use the Gumbel-Softmax estimator [14]. To evaluate the quality of policies learned in competitive settings, we pit MADDPG agents against DDPG agents, and compare the resulting success of the agents and adversaries in the environment. We train our models until convergence, and then evaluate them by averaging various metrics for 1000 further iterations. We provide the tables and details of our results on all environments in the Appendix, and summarize them here.
Figure 4: The reward of MADDPG against traditional RL approaches on cooperative communication after 25000 episodes.
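As a reference for the discrete-message machinery mentioned above, here is a minimal sketch of Gumbel-Softmax sampling in the spirit of [14]; the default temperature and array shapes are illustrative assumptions.

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Differentiable relaxation of categorical sampling: perturb the logits
    with Gumbel(0, 1) noise and apply a temperature softmax. As the
    temperature approaches 0, the output approaches a one-hot sample."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(size=np.shape(logits))
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)
    y = (np.asarray(logits) + gumbel) / temperature
    y = y - y.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)   # relaxed one-hot message
```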
Figure 5: Comparison between MADDPG (left) and DDPG (right) on the cooperative communication
(CC) and physical deception (PD) environments at t = 0, 5, and 25. Small dark circles indicate
landmarks. In CC, the grey agent is the speaker, and the color of the listener indicates the target
landmark. In PD, the blue agents are trying to deceive the red adversary, while covering the target
landmark (in green). MADDPG learns the correct behavior in both cases: in CC the speaker learns
to output the target landmark color to direct the listener, while in PD the agents learn to cover both
landmarks to confuse the adversary. DDPG (and other RL algorithms) struggles in these settings:
in CC the speaker always repeats the same utterance and the listener moves to the middle of the
landmarks, and in PD one agent greedily pursues the green landmark (and is followed by the adversary)
while the other agent scatters. See video for full trajectories.
We first examine the cooperative communication scenario. Despite the simplicity of the task (the
speaker only needs to learn to output its observation), traditional RL methods such as DQN, Actor-Critic, a first-order implementation of TRPO, and DDPG all fail to learn the correct behaviour
(measured by whether the listener is within a short distance from the target landmark). In practice we
observed that the listener learns to ignore the speaker and simply moves to the middle of all observed
landmarks. We plot the learning curves over 25000 episodes for various approaches in Figure 4.
We hypothesize that a primary reason for the failure of traditional RL methods in this (and other)
multi-agent settings is the lack of a consistent gradient signal. For example, if the speaker utters
the correct symbol while the listener moves in the wrong direction, the speaker is penalized. This
problem is exacerbated as the number of time steps grows: we observed that traditional policy
gradient methods can learn when the objective of the listener is simply to reconstruct the observation
of the speaker in a single time step, or if the initial positions of agents and landmarks are fixed and
evenly distributed. This indicates that many of the multi-agent methods previously proposed for
scenarios with short time horizons (e.g. [16]) may not generalize to more complex tasks.
Conversely, MADDPG agents can learn coordinated behaviour more easily via the centralized critic.
In the cooperative communication environment, MADDPG is able to reliably learn the correct listener
and speaker policies, and the listener is often (84.0% of the time) able to navigate to the target.
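To make the centralized-critic structure concrete, the sketch below shows the input convention: during training, the critic conditions on every agent's observation and action, while execution still uses only the decentralized policies. The PyTorch-style module and the 64-unit layers mirror our experimental setup, but the class itself is an illustrative assumption rather than our exact implementation.

```python
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Q_i(o_1, ..., o_N, a_1, ..., a_N): a centralized action-value function
    that sees all agents' observations and actions during training only."""
    def __init__(self, obs_dims, act_dims, hidden=64):
        super().__init__()
        in_dim = sum(obs_dims) + sum(act_dims)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs, all_acts):
        # all_obs / all_acts: lists of [batch, dim] tensors, one per agent
        return self.net(torch.cat(all_obs + all_acts, dim=-1))
```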
A similar situation arises for the physical deception task: when the cooperating agents are trained
with MADDPG, they are able to successfully deceive the adversary by covering all of the landmarks
around 94% of the time when L = 2 (Figure 5). Furthermore, the adversary success is quite low,
especially when the adversary is trained with DDPG (16.4% when L = 2). This contrasts sharply
with the behaviour learned by the cooperating DDPG agents, who are unable to deceive MADDPG
adversaries in any scenario, and do not even deceive other DDPG agents when L = 4.
While the cooperative navigation and predator-prey tasks have a less stark divide between success and
failure, in both cases the MADDPG agents outperform the DDPG agents. In cooperative navigation,
MADDPG agents have a slightly smaller average distance to each landmark, but have almost half the
average number of collisions per episode (when N = 2) compared to DDPG agents due to the ease
of coordination. Similarly, MADDPG predators are far more successful at chasing DDPG prey (16.1
collisions/episode) than the converse (10.3 collisions/episode).
In the covert communication environment, we found that Bob trained with both MADDPG and
DDPG out-performs Eve in terms of reconstructing Alice's message. However, Bob trained with
MADDPG achieves a larger relative success rate compared with DDPG (52.4% to 25.1%). Further,
only Alice trained with MADDPG can encode her message such that Eve achieves near-random reconstruction accuracy.
[Figure 6 plots. Left: average reward vs. iteration for the listener and speaker. Right: KL divergence between the approximate and true policies vs. iteration, comparing runs that use approximate policies with runs that use the true policies of other agents.]
Figure 6: Effectiveness of learning by approximating policies of other agents in the cooperative
communication scenario. Left: plot of the reward over number of iterations; MADDPG agents quickly
learn to solve the task when approximating the policies of others. Right: KL divergence between the
approximate policies and the true policies.
The learning curve (a sample plot is shown in the Appendix) shows that the
oscillation due to the competitive nature of the environment often cannot be overcome with common
decentralized RL methods. We emphasize that we do not use any of the tricks required for the
cryptography environment from [2], including modifying Eve?s loss function, alternating agent and
adversary training, and using a hybrid ?mix & transform? feed-forward and convolutional architecture.
5.3  Effect of Learning Policies of Other Agents
We evaluate the effectiveness of learning the policies of other agents in the cooperative communication
environment, following the same hyperparameters as the previous experiments and setting λ = 0.001
in Eq. 7. The results are shown in Figure 6. We observe that despite not fitting the policies of other
agents perfectly (in particular, the approximate listener policy learned by the speaker has a fairly
large KL divergence to the true policy), learning with approximated policies is able to achieve the
same success rate as using the true policy, without a significant slowdown in convergence.
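A minimal sketch of the approximation objective behind Eq. 7, assuming a discrete action space; the tensor names and shapes are illustrative, and λ = 0.001 matches the setting above.

```python
import torch

def approx_policy_loss(logits, observed_actions, lam=0.001):
    """Loss for approximating another agent's policy: negative log-likelihood
    of that agent's observed actions, minus an entropy bonus weighted by lam
    (the entropy regularizer of Eq. 7). observed_actions: LongTensor [batch]."""
    log_probs = torch.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, observed_actions.unsqueeze(-1)).mean()
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return nll - lam * entropy
```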
5.4  Effect of Training with Policy Ensembles
We focus on the effectiveness of policy ensembles in competitive environments, including keep-away,
cooperative navigation, and predator-prey. We choose K = 3 sub-policies for the keep-away and
cooperative navigation environments, and K = 2 for predator-prey. To improve convergence speed,
we enforce that the cooperative agents should have the same policies at each episode, and similarly
for the adversaries. To evaluate the approach, we measure the performance of ensemble policies
and single policies in the roles of both agent and adversary. The results are shown on the right side
of Figure 3. We observe that agents with policy ensembles are stronger than those with a single
policy. In particular, when pitting ensemble agents against single policy adversaries (second to left
bar cluster), the ensemble agents outperform the adversaries by a large margin compared to when the
roles are reversed (third to left bar cluster).
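The loop below sketches how sub-policies could be drawn and trained; the environment API and per-sub-policy replay buffers are assumptions made for illustration.

```python
import random

def run_ensemble_episode(env, ensembles, K):
    """One episode of policy-ensemble training: each agent plays a uniformly
    random one of its K sub-policies, and only the replay buffer of the
    selected sub-policy receives the episode's transitions."""
    active = [random.randrange(K) for _ in ensembles]
    policies = [agent[k] for agent, k in zip(ensembles, active)]
    obs, done = env.reset(), False
    while not done:
        acts = [pi.act(o) for pi, o in zip(policies, obs)]
        next_obs, rewards, done = env.step(acts)
        for i, (agent, k) in enumerate(zip(ensembles, active)):
            agent[k].buffer.add((obs[i], acts[i], rewards[i], next_obs[i]))
        obs = next_obs
```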
6  Conclusions and Future Work
We have proposed a multi-agent policy gradient algorithm where agents learn a centralized critic
based on the observations and actions of all agents. Empirically, our method outperforms traditional
RL algorithms on a variety of cooperative and competitive multi-agent environments. We can further
improve the performance of our method by training agents with an ensemble of policies, an approach
we believe to be generally applicable to any multi-agent algorithm.
One downside to our approach is that the input space of Q grows linearly (depending on what
information is contained in x) with the number of agents N . This could be remedied in practice by,
for example, having a modular Q function that only considers agents in a certain neighborhood of a
given agent. We leave this investigation to future work.
Acknowledgements
The authors would like to thank Jacob Andreas, Smitha Milli, Jack Clark, Jakob Foerster, and others at
OpenAI and UC Berkeley for interesting discussions related to this paper, as well as Jakub Pachocki,
Yura Burda, and Joelle Pineau for comments on the paper draft. We thank Tambet Matiisen for
providing the code base that was used for some early experiments associated with this paper. Ryan
Lowe is supported in part by a Vanier CGS Scholarship and the Samsung Advanced Institute of
Technology. Finally, we'd like to thank OpenAI for fostering an engaging and productive research
environment.
References
[1] DeepMind AI reduces Google data centre cooling bill by 40%. https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/. Accessed: 2017-05-19.
[2] M. Abadi and D. G. Andersen. Learning to protect communications with adversarial neural cryptography. arXiv preprint arXiv:1610.06918, 2016.
[3] C. Boutilier. Learning conventions in multiagent stochastic domains using likelihood estimates. In Proceedings of the Twelfth International Conference on Uncertainty in Artificial Intelligence, pages 106–114. Morgan Kaufmann Publishers Inc., 1996.
[4] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(2):156, 2008.
[5] G. Chalkiadakis and C. Boutilier. Coordination in multiagent reinforcement learning: A Bayesian approach. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 709–716. ACM, 2003.
[6] P. Dayan and G. E. Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems, pages 271–271. Morgan Kaufmann Publishers, 1993.
[7] J. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson. Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926, 2017.
[8] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate with deep multi-agent reinforcement learning. CoRR, abs/1605.06676, 2016.
[9] J. N. Foerster, N. Nardelli, G. Farquhar, P. H. S. Torr, P. Kohli, and S. Whiteson. Stabilising experience replay for deep multi-agent reinforcement learning. CoRR, abs/1702.08887, 2017.
[10] M. C. Frank and N. D. Goodman. Predicting pragmatic reasoning in language games. Science, 336(6084):998–998, 2012.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
[12] J. K. Gupta, M. Egorov, and M. Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. 2017.
[13] J. Hu and M. P. Wellman. Online learning about other agents in a dynamic multiagent system. In Proceedings of the Second International Conference on Autonomous Agents, AGENTS '98, pages 239–246, New York, NY, USA, 1998. ACM.
[14] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
[15] M. Lauer and M. Riedmiller. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 535–542. Morgan Kaufmann, 2000.
[16] A. Lazaridou, A. Peysakhovich, and M. Baroni. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182, 2016.
[17] J. Z. Leibo, V. F. Zambaldi, M. Lanctot, J. Marecki, and T. Graepel. Multi-agent reinforcement learning in sequential social dilemmas. CoRR, abs/1702.03037, 2017.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[19] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pages 157–163, 1994.
[20] L. Matignon, L. Jeanpierre, A.-I. Mouaddib, et al. Coordinated multi-robot exploration under communication constraints using decentralized Markov decision processes. In AAAI, 2012.
[21] L. Matignon, G. J. Laurent, and N. Le Fort-Piat. Hysteretic Q-learning: An algorithm for decentralized reinforcement learning in cooperative multi-agent teams. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), pages 64–69. IEEE, 2007.
[22] L. Matignon, G. J. Laurent, and N. Le Fort-Piat. Independent reinforcement learners in cooperative Markov games: A survey regarding coordination problems. The Knowledge Engineering Review, 27(01), 2012.
[23] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[24] I. Mordatch and P. Abbeel. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017.
[25] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, and J. Vian. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. CoRR, abs/1703.06182, 2017.
[26] L. Panait and S. Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, Nov. 2005.
[27] P. Peng, Q. Yuan, Y. Wen, Y. Yang, Z. Tang, H. Long, and J. Wang. Multiagent bidirectionally-coordinated nets for learning to play StarCraft combat games. CoRR, abs/1703.10069, 2017.
[28] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[29] S. Sukhbaatar, R. Fergus, et al. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems, pages 2244–2252, 2016.
[30] S. Sukhbaatar, I. Kostrikov, A. Szlam, and R. Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017.
[31] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.
[32] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, 2000.
[33] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, and R. Vicente. Multiagent cooperation and competition with deep reinforcement learning. PLoS ONE, 12(4):e0172395, 2017.
[34] M. Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, pages 330–337, 1993.
[35] G. Tesauro. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, pages 871–878, 2004.
[36] P. S. Thomas and A. G. Barto. Conjugate Markov decision processes. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 137–144, 2011.
[37] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Communication-Efficient Distributed Learning
of Discrete Probability Distributions
Ilias Diakonikolas
CS, USC
[email protected]
Abhiram Natarajan
CS, Purdue
[email protected]
Elena Grigorescu
CS, Purdue
[email protected]
Jerry Li
EECS & CSAIL, MIT
[email protected]
Krzysztof Onak
IBM Research, NY
[email protected]
Ludwig Schmidt
EECS & CSAIL, MIT
[email protected]
Abstract
We initiate a systematic investigation of distribution learning (density estimation)
when the data is distributed across multiple servers. The servers must communicate
with a referee and the goal is to estimate the underlying distribution with as few
bits of communication as possible. We focus on non-parametric density estimation
of discrete distributions with respect to the `1 and `2 norms. We provide the first
non-trivial upper and lower bounds on the communication complexity of this basic
estimation task in various settings of interest. Specifically, our results include the
following:
1. When the unknown discrete distribution is unstructured and each server has
only one sample, we show that any blackboard protocol (i.e., any protocol
in which servers interact arbitrarily using public messages) that learns the
distribution must essentially communicate the entire sample.
2. For the case of structured distributions, such as k-histograms and monotone
distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain
tight upper and lower bounds in several regimes. Our distributed learning
algorithms run in near-linear time and are robust to model misspecification.
Our results provide insights on the interplay between structure and communication
efficiency for a range of fundamental distribution estimation tasks.
1  Introduction
1.1  Background and Motivation
We study the problem of distribution learning (or density estimation) in a distributed model, where
the data comes from an unknown distribution and is partitioned across multiple servers. The main
goal of this work is to explore the inherent tradeoff between sample size and communication for nonparametric density estimation of discrete distributions. We seek answers to the following questions:
What is the minimum amount of communication required to learn the underlying distribution of the
data? Is there a communication-efficient learning algorithm that runs in polynomial time? We obtain
the first non-trivial algorithms and lower bounds for distributed density estimation. Before we state
our results, we provide the relevant background.
Density Estimation. Distribution learning or density estimation is the following prototypical
inference task: Given samples drawn from an unknown target distribution that belongs to (or is
well-approximated by) a given family of distributions P, the goal is to approximately estimate (learn)
the target distribution. Estimating a distribution from samples is a fundamental unsupervised learning
problem that has been studied in statistics since the late nineteenth century [36]. The classical statistics
literature focuses primarily on the sample complexity of distribution learning, i.e., on the informationtheoretic aspects of the problem. More recently, there has been a large body of work in computer
science on this topic with an explicit focus on computational efficiency [12, 11, 7, 8, 1, 13, 2]. We
emphasize that the aforementioned literature studies density estimation in the centralized setting,
where all the data samples are available on a single machine.
Distributed Computation. In recent years, we have seen an explosion in the amount of data that
has been generated and collected across various scientific and technological domains [10]. Due to the
size and heterogeneity of modern datasets, there is a real need for the design of efficient algorithms
that succeed in the distributed model, when the data is partitioned across multiple servers. A major
bottleneck in distributed computation is the communication cost between individual machines. In
practice, communication may be limited by bandwidth constraints and power consumption, leading
to either slow or expensive systems (see, e.g., [23] for a survey). Hence, the general problem of
designing communication-efficient distributed protocols is of fundamental importance in this setting.
In recent years, a number of statistical estimation problems have been algorithmically studied in the
distributed setting [3, 16, 15, 40, 21, 30, 24, 33, 5, 29]. To the best of our knowledge, the problem of
nonparametric density estimation has not been previously studied in this context.
This Work: Distributed Density Estimation. We initiate a systematic investigation of density
estimation in the distributed model. We believe that this is a fundamental problem that merits
investigation in its own right. Also, the problem of distributed density estimation arises in various real-data applications when it is required to reconstruct the data distribution from scattered measurements.
Examples include sensor networks and P2P systems (see, e.g., [35, 32, 27, 41, 37] and references
therein).
We explore the tradeoff between communication and statistical efficiency for a number of fundamental nonparametric density estimation problems. Specifically, we insist that our algorithms are
sample-efficient and our goal is to design distributed protocols using a minimum amount of communication. As our main contribution, we provide the first non-trivial upper and lower bounds on
the communication complexity of density estimation for a range of natural distribution families
that have been extensively studied in the centralized regime. The main conceptual message of our
findings is the following: When the underlying discrete distribution is unstructured, no non-trivial
communication protocol is possible. In sharp contrast, for various families of structured distributions,
there are non-trivial algorithms whose communication complexity significantly improves over naive
protocols. It should be noted that all our algorithms are in addition computationally efficient.
Communication Model for Density Estimation. We now informally describe the communication
model used in this paper. We refer to the preliminaries in Section 2 for formal definitions.
The model is parameterized by the number of samples per server (player), which we denote by
s. There are a specific number of servers, each holding s independent samples from an unknown
distribution P . We call these servers sample-holding players. Additionally, there is a server that
holds no samples from P . We call this server a referee or fusion center. In communication protocols
considered in this work, servers exchange messages, and at the end of the protocol, the referee outputs
an accurate hypothesis distribution P̂. More precisely, we want the hypothesis P̂ to satisfy
d(P̂, P) ≤ ε with high probability (over the samples and internal randomness), where the metric d is
either the ℓ1-norm (statistical distance) or the ℓ2-norm.
We study two variants of this model. In the simultaneous communication model, each sample-holding
player sends a message (of one or more bits) to the referee once, based only on the samples she holds
and public randomness. In the blackboard model, the sample-holding players' messages are public,
and the communication protocol does not restrict the number of times a player may speak. The goal
is to minimize the amount of communication between the players and the referee, while transmitting
enough information about the samples so that the underlying distribution P can be approximately
recovered from the transcript of the communication.
Table 1: Communication complexity bounds for density estimation of unstructured distributions (for
success probability 9/10)
Regime        | CC_{s,1/10}(ADE(D_n, 1, ε, α)) | CC→_{s,1/10}(ADE(D_n, 1, ε, α))
s = 1         | Ω((n/ε²) log n)                | O((n/ε²) log n)
s = Θ(n)      | Ω(n log(1/ε))                  | O(n/ε²)
s = Θ(n/ε²)   | Ω(n log(1/ε))                  | O(n log(1/ε))

1.2  Our Contributions
In this section, we provide informal statements of our main results. For the formal statements of
all our results the reader is referred to the full version of the paper. We will require the following
notation. We use n to denote an upper bound on the domain size of our distributions and τ to denote
the total sample size. Without loss of generality, we will assume that the domain of the distributions
is the set [n] := {1, 2, . . . , n}. The `1 (resp. `2 ) distance between two discrete distributions is the
`1 (resp. `2 ) norm of the difference between their probability vectors. We note that the sample
sizes in this section correspond to high constant probability of success. This can be boosted to high
probability by standard techniques.
We start by pointing out the baseline result that we compare against. The naive protocol to perform
distribution density estimation is the following: all the servers (players) communicate their entire
sample to the referee, who applies a centralized estimator to output an accurate hypothesis. The
communication complexity of this approach is Θ(τ log n) bits. The obvious question is whether
there exists a protocol with significantly smaller communication complexity.
Unstructured Discrete Distributions. Our starting point is the basic setting in which the underlying distribution over n elements is potentially arbitrary and each server (player) holds exactly
one sample from an unknown distribution over a domain of size n. (This basic setting is motivated
by practical applications, e.g., aggregation of cell-phone data, etc.) In the centralized setting, it
is a folklore fact (see, e.g., [19]) that Θ(n/ε²) samples are necessary and sufficient to learn an unstructured distribution supported on n elements within ℓ1-error ε. This fact in turn implies that the naive distributed protocol uses O((n/ε²) log n) bits. We show that this protocol is best possible, up to constant factors:
Theorem 1. Suppose Θ(n/ε²) samples from an unknown distribution P over [n] are distributed such that each player has exactly one sample. Then learning P within ℓ1-distance ε requires Ω((n/ε²) log n) bits of communication in the blackboard model.
We remark that a blackboard model captures a very general interaction between sample-holding
players and the referee. The players are allowed to send messages in arbitrary order and share partial
information about their samples from [n], perhaps using much fewer than log n bits. For instance, if
one of the players has revealed her sample, other players may just notify everyone that they hold the
same (or a correlated) sample, using O(1) extra bits. Thus, our lower bound excludes the possibility
of non-trivial protocols that do better than essentially having each machine transmit its entire sample.
This statement might seem intuitively obvious, but its proof is not straightforward.
By a standard packing argument, we also show a communication lower bound of Ω(n log(1/ε)) for all protocols that estimate an unstructured discrete distribution over [n] in ℓ1-distance. In the regime where there are Θ(n/ε²) samples per machine, we show that there is a simple estimator that achieves this lower bound. (See Table 1 for instantiations of the theorems, and Section 2 for the formal definitions.)
Structured Discrete Distributions. In contrast to the unstructured case, we design non-trivial
protocols that significantly improve upon the naive protocols in several regimes of interest.
Our main algorithmic results are the first communication-efficient algorithms for robust learning
of histogram distributions. A k-histogram distribution over [n] is a probability distribution that is
piecewise constant over some set of k intervals over [n]. Histograms have been extensively studied in
statistics and computer science. In the database community, histograms constitute the most common
tool for the succinct approximation of data [9, 38, 25, 26, 1]. In statistics, many methods have been
proposed to estimate histogram distributions in a variety of settings [22, 34, 17, 31].
The algorithmic difficulty in learning histograms lies in the fact that the location and ?size? of
these intervals is a priori unknown. In the centralized setting, sample and computationally efficient
algorithms for learning histograms have been recently obtained [7, 8, 2]. Our distributed learning
algorithm for the `1 -metric builds on the recent centralized algorithm of [2]. In particular, we have
the following:
Theorem 2. For the problem of learning k-histograms with ℓ1 error ε, the following hold:
1. In the regime of one sample per player, there exists a protocol that uses O((k/ε) log n + (k/ε³) log(k/ε)) bits of communication. Furthermore, any successful protocol must use Ω(k log(n/k) + (k/ε²) log k) bits of communication.
2. In the regime of Θ(k/ε²) samples per player, there exists a successful protocol with O((k/ε) log n) bits of communication. Furthermore, any protocol must use Ω(k log(n/k) + k log(1/ε)) bits of communication.
We now turn our attention to learning under the ℓ2-metric. Previous centralized algorithms for this problem [1] work in a "bottom-up" fashion. Unfortunately, this approach does not seem amenable to distributed computation for the following reason: it seems impossible to keep track of a large number of intervals with limited communication. Instead, we devise a new "top-down" algorithm that starts with a small number of large intervals and iteratively splits them based on the incurred ℓ2-error.
A careful application of this idea, in conjunction with some tools from the streaming literature (specifically, an application of the Johnson-Lindenstrauss transform to estimate the ℓ2² error using few bits of communication), yields the following result:
Theorem 3. For the problem of learning k-histograms with ℓ2 error ε, the following hold:
1. In the regime of s = Õ(k log n) samples per player, there exists a protocol that uses O((1/ε²) log n) bits of communication. Furthermore, any successful protocol must use Ω(k log(n/k) + (1/ε) log(εk)) bits of communication.
2. In the regime of s = Ω(k log n) samples per player, there exists a protocol with Õ(k² log n / (sε)) bits of communication. Furthermore, any protocol must use Ω(k log(n/k) + (1/ε) log(εk)) bits.
We remark that the above algorithms are robust to model misspecification, i.e., they provide near-optimal error guarantees even if the input distribution is only close to a histogram. As an immediate corollary, we also obtain communication-efficient learners for all families of structured discrete
distributions that can be well-approximated by histograms. Specifically, by using the structural
approximation results of [6, 7, 20], we obtain sample-optimal distributed estimators for various
well-studied classes of structured densities including monotone, unimodal, log-concave, monotone
hazard rate (MHR) distributions, and others. The interested reader is referred to the aforementioned
works.
For specific families of structured distributions, we may be able to do better by exploiting additional
structure. An example of interest is the family of monotone distributions. By a result of Birgé [4] (see also [14] for an adaptation to the discrete case), every monotone distribution over [n] is ε-close in ℓ1-distance to a k-histogram distribution, for k = O(ε⁻¹ log n). Hence, an application of the
above theorem yields a distributed estimation algorithm for monotone distributions. The main insight
here is that each monotone distribution is well-approximated by an oblivious histogram, i.e., one
whose intervals are the same for each monotone distribution. This allows us to essentially reduce the
learning problem to that of learning a discrete distribution over the corresponding domain size. A
reduction in the opposite direction yields the matching lower bound. Please refer to the full version
for more details.
1.3  Comparison to Related Work
Recent works [40, 21, 24, 5] study the communication cost of mean estimation problems of structured,
parametrized distributions. These works develop powerful information theoretic tools to obtain lower
bounds for parameter estimation problems. In what follows, we briefly comment on why we need to
develop new techniques by pointing out fundamental differences between the two problems.
First, our most general results on distributed density estimation do not assume any structure on the
distribution (and thus, our learning algorithms are agnostic). This is in contrast to the problems
considered before, where the concept classes are restricted (Gaussians, linear separators) and enjoy a
lot of structure, which is often leveraged during the design of estimators.
Secondly, while we also consider more structured distributions (monotone, k-histograms), the
techniques developed in the study of distributed parameter estimation do not apply to our problems.
Specifically, those results reduce to the problem of learning a high-dimensional vector (say, where
each coordinate parametrizes a spherical Gaussian distribution), where the value at each coordinate
is independent of the others. The results in distributed parameter estimation crucially use the
coordinate independence feature. The lower bounds essentially state that the communication cost of a
d-dimensional parameter vector with independent components grows proportionally to the dimension
d, and hence one needs to estimate each coordinate separately.
2  Preliminaries
Notation. For any positive integer n, we write [n] to denote {1, . . . , n}, the set of integers between
1 and n. We think of a probability distribution P on [n] as a vector of probabilities (p1 , . . . , pn ) that
sum up to 1. We write X ∼ P to denote that a random variable X is drawn from P. Sometimes we use the notation P(i) to denote P[X = i], where X ∼ P. We consider three families of discrete
distributions:
• D_n: the family of unstructured discrete distributions on [n],
• H_{n,k}: the family of k-histogram distributions on [n],
• M_n: the family of monotone distributions on [n].
We use ℓp metrics on spaces of probability distributions. For two distributions P and P′ on [n], their ℓp-distance, where p ∈ [1, ∞), is defined as
$$\|P - P'\|_p := \left( \sum_{i=1}^{n} |P(i) - P'(i)|^p \right)^{1/p} .$$
In this work we focus on the cases of p = 1 and p = 2, in which $\|P - P'\|_1 = \sum_{i=1}^{n} |P(i) - P'(i)|$ and $\|P - P'\|_2 = \sqrt{\sum_{i=1}^{n} (P(i) - P'(i))^2}$.
For a given distribution Q ∈ D_n and family 𝒫 ⊆ D_n of distributions, we denote the ℓp-distance of Q to 𝒫 as $\mathrm{dist}_p(Q, \mathcal{P}) := \inf_{P \in \mathcal{P}} \|Q - P\|_p$.
Packings and the Packing Number. Let $(X, \|\cdot\|_p)$ be a normed space, $E \subseteq X$, and $r > 0$ a radius. $E' = \{e_1, \ldots, e_n\} \subseteq E$ is an $(r, p)$-packing of $E$ if $\min_{i \ne j} \|e_i - e_j\|_p > r$. The $(r, p)$-packing number $N_r^{\mathrm{pack}}(E, p)$ is the cardinality of the largest $(r, p)$-packing of $E$, i.e., $N_r^{\mathrm{pack}}(E, p) := \sup\{|E'| : E' \subseteq E \text{ is an } (r, p)\text{-packing of } E\}$.
Density Estimation. We now formally introduce density estimation problems considered in this
paper. First, for a given n ∈ ℤ⁺, let 𝒫 ⊆ D_n be a family of distributions on [n], ε ∈ [0, ∞), and p ∈ [1, ∞). The goal of the density estimation problem DE(𝒫, p, ε) is to output, for any unknown distribution P ∈ 𝒫, a distribution Q ∈ D_n such that ‖P − Q‖_p ≤ ε. Note that in this problem, we are guaranteed that the unknown distribution belongs to 𝒫.
Now we define a version of the problem that allows inputs from outside of the class of interest. For a given n ∈ ℤ⁺, let 𝒫 ⊆ D_n be a family of distributions on [n]. Also let ε ∈ [0, ∞), p ∈ [1, ∞), and α ∈ [1, ∞). The goal of the agnostic density estimation problem ADE(𝒫, p, ε, α) is to output, for any unknown distribution P ∈ D_n, a distribution Q ∈ D_n such that ‖P − Q‖_p ≤ α · dist_p(P, 𝒫) + ε, with high probability. The reason for this version of the problem is that in practice one often has to deal with noisy or non-ideal data. Hence, if the unknown distribution is close to a class 𝒫, we wish to output a nearby distribution as well.
Estimators and Sample Complexity. For any distribution estimation problem A involving an unknown distribution P, such as DE(𝒫, p, ε) and ADE(𝒫, p, ε, α) defined above, we now introduce the notion of an estimator. For any m ∈ ℕ, an estimator τ : [n]^m × {0,1}^* → D_n is a function that takes a sequence X⃗ = (X_1, …, X_m) of m independent samples from P and a sequence R of uniformly and independently distributed random bits, and outputs a hypothesis distribution P̂ := τ(X⃗, R). We say that the estimator solves A with probability 1 − δ if for any unknown distribution P allowed by the formulation of problem A, the probability that P̂ is a correct solution to A is at least 1 − δ. For instance, if A is the ADE(𝒫, p, ε, α) problem, the hypothesis distribution P̂ produced by the estimator should satisfy the following inequality for any distribution P ∈ D_n:
$$\Pr\left[ \|\hat{P} - P\|_p \le \alpha \cdot \mathrm{dist}_p(P, \mathcal{P}) + \varepsilon \right] \ge 1 - \delta .$$
The sample complexity of A with error δ, which we denote SC_δ(A), is the minimum number of samples m for which there exists an estimator τ : [n]^m × {0,1}^* → D_n that solves A with probability 1 − δ.
As a simple application of this notation, note that SC_δ(DE(𝒫, p, ε)) ≤ SC_δ(ADE(𝒫, p, ε, α)) for any α ∈ [1, ∞). This follows from the fact that in DE(𝒫, p, ε), one has to solve exactly the same problem, but only for a subset of the input distributions of ADE(𝒫, p, ε, α). Since the input P for DE(𝒫, p, ε) comes from 𝒫, we have dist_p(P, 𝒫) = 0.
Communication Complexity of Density Estimation. In all of our communication models, when
a player wants to send a message, the set of possible messages is prefix-free, i.e., after fixing both the
randomness and the set of previous messages known to the player, there are no two possible messages
such that one is a proper prefix of the other. Furthermore, for a protocol π in any of them, we write Cost_𝒫(π) to denote the (worst-case) communication cost of π on 𝒫, defined as the maximum length of messages that can be generated in the protocol if the unknown distribution belongs to 𝒫. Similarly, we write $\overline{\mathrm{Cost}}_{\mathcal{P}}(\pi)$ to denote the expected communication cost of π on 𝒫, defined as the maximum expected total length of messages exchanged, where the maximum is taken over all unknown distributions in 𝒫 and the expectation is taken over all assignments of samples to machines and settings of public randomness. The following inequality always holds: $\overline{\mathrm{Cost}}_{\mathcal{P}}(\pi) \le \mathrm{Cost}_{\mathcal{P}}(\pi)$.
Simultaneous communication. In the simultaneous communication model, each sample-holding
player sends a message to the referee once, based only on the samples she holds and public
randomness.
For a density estimation problem A, let 𝒫 be the family of possible unknown distributions P. We write CC→_{s,δ}(A) to denote the (s, δ)-simultaneous communication complexity of A, defined as the minimum Cost_𝒫(π) over all simultaneous communication protocols π that solve A with probability at least 1 − δ for any P ∈ 𝒫, with s samples per sample-holding player and an arbitrary number of sample-holding players.
Blackboard communication. In this model, each message sent by each player is visible to all
players. The next player speaking is uniquely determined by the previously exchanged
messages and public randomness. We use this model to prove lower bounds. Any lower
bound in this model applies to the previous communication models. More specifically, we
show lower bounds for the average communication complexity, which we define next.
For a density estimation problem A, let 𝒫 be the family of possible unknown distributions P. We write CC_{s,δ}(A) to denote the (s, δ)-average communication complexity of A, defined as the infimum $\overline{\mathrm{Cost}}_{\mathcal{P}}(\pi)$ over all blackboard protocols π that solve A with probability at least 1 − δ for any P ∈ 𝒫, with s samples per sample-holding player and an arbitrary number of sample-holding players.
The communication complexity notions that we just introduced are related as follows.
Claim 1. For any density estimation problem A, CC_{s,δ}(A) ≤ CC→_{s,δ}(A).
The claim follows from the fact that simultaneous communication is a specific case of blackboard
communication. Additionally, expected communication cost lower bounds worst-case communication
cost. All lower bounds that we prove are on the average communication complexity in blackboard
communication.
A Trivial Upper Bound. There is always a trivial protocol that leverages the sample complexity
of the density estimation problem. Since SC_δ(A) samples are enough to solve the problem, it suffices that the sample-holding players communicate this number of samples to the referee. Since each sample can be communicated with at most ⌈log n⌉ bits, we obtain the following upper bound on the simultaneous communication complexity.
Claim 2. For any density estimation problem A and any s ≥ 1, CC→_{s,δ}(A) ≤ SC_δ(A) · ⌈log n⌉.
In this paper, we investigate whether there exist protocols that significantly improve on this direct
upper bound.
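For intuition, the trivial protocol behind Claim 2 can be sketched as follows; the encoding details are an illustrative assumption.

```python
import math

def naive_protocol(samples_per_server, n):
    """Every player encodes each of its samples from [n] = {1, ..., n} with
    ceil(log2 n) bits (as x - 1) and ships them to the referee, who can then
    run any centralized estimator; the total cost is
    (number of samples) * ceil(log2 n) bits."""
    b = max(1, math.ceil(math.log2(n)))
    messages = ["".join(format(x - 1, f"0{b}b") for x in s)
                for s in samples_per_server]
    total_bits = sum(len(m) for m in messages)
    return messages, total_bits
```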
Randomness. All our protocols are deterministic (more precisely, they depend only on the randomness coming from the samples drawn from the hidden distribution). On the other hand, our
lower bounds apply to all protocols, also those using an arbitrary amount of public randomness (i.e.,
pre-shared randomness).
3  Our Techniques
In this section, we provide a high-level description of the main ideas in our upper and lower bounds.
We defer the details of upper and lower bounds for monotone distributions to the full version of the
paper.
3.1  Overview of Algorithmic Ideas
We start by describing the main ideas in our distributed learning algorithms.
Robustly Learning Histograms in ℓ1-Distance. We will require the following definition:
Definition 1 (Distribution flattening). Let $P$ be a distribution over $[n]$ and let $\mathcal{I} = \{I_i\}_{i=1}^{\ell}$ be a partition of $[n]$ into disjoint intervals. We denote by $\bar{P}_{\mathcal{I}}$ the distribution over $[n]$ where
$$\bar{P}_{\mathcal{I}}(i) = \frac{\sum_{k \in I_j} P(k)}{|I_j|}, \qquad \forall j \in [\ell],\ i \in I_j .$$
This means that $\bar{P}_{\mathcal{I}}$ is obtained by spreading the total mass of each interval uniformly within the interval.
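A direct implementation of Definition 1 (zero-indexed, with intervals as half-open pairs) for illustration:

```python
import numpy as np

def flatten(P, intervals):
    """Flattening of Definition 1: spread the total mass of each interval
    uniformly within the interval. `intervals` is a partition of
    range(len(P)) into half-open pieces (l, r)."""
    P = np.asarray(P, dtype=float)
    P_bar = np.empty_like(P)
    for l, r in intervals:
        P_bar[l:r] = P[l:r].sum() / (r - l)
    return P_bar
```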
Our upper bounds in this setting crucially depend on the following norm from Vapnik-Chervonenkis
(VC) theory [39], known as the Ak norm (see, e.g., [18]).
Definition 2 (A_k norm). For any function $f : [n] \to \mathbb{R}$, we define the $A_k$ norm of $f$ as
$$\|f\|_{A_k} = \sup_{I_1, \ldots, I_k} \sum_{i=1}^{k} |f(I_i)| ,$$
where for any set $S \subseteq [n]$ we let $f(S) = \sum_{i \in S} f(i)$, and the supremum is taken over disjoint intervals.
In other words, the Ak norm of f is the maximum norm of any flattening of f into k interval pieces.
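The A_k norm can be evaluated exactly by a dynamic program over prefix sums; the O(nk) sketch below is ours for illustration and is not part of the protocols discussed here.

```python
import numpy as np

def ak_norm(f, k):
    """||f||_{A_k}: maximum of sum_i |f(I_i)| over at most k disjoint
    intervals. dp[j][i] = best value on the length-i prefix using j
    intervals; running maxima of prev[l-1] -/+ S[l-1] make each pass
    linear, since |S[i] - S[l-1]| = max(S[i] - S[l-1], S[l-1] - S[i])."""
    n = len(f)
    S = np.concatenate(([0.0], np.cumsum(f)))  # prefix sums, S[i] = f(1..i)
    prev = np.zeros(n + 1)
    for _ in range(k):
        cur = np.zeros(n + 1)
        best_minus = best_plus = -np.inf
        for i in range(1, n + 1):
            best_minus = max(best_minus, prev[i - 1] - S[i - 1])
            best_plus = max(best_plus, prev[i - 1] + S[i - 1])
            cur[i] = max(cur[i - 1], S[i] + best_minus, -S[i] + best_plus)
        prev = cur
    return prev[n]
```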
Our distributed algorithms crucially rely on the following building blocks:
Theorem 4 ([2]). Let $P : [n] \to \mathbb{R}$ be a distribution, and let $\hat{P} : [n] \to \mathbb{R}$ be a distribution such that $\|P - \hat{P}\|_{A_k} \le \varepsilon$. There is an efficient algorithm LearnHist($\hat{P}$, k, ε) that, given $\hat{P}$, outputs a k-histogram h such that $\|P - h\|_1 \le 3\,\mathrm{OPT}_k + O(\varepsilon)$, where $\mathrm{OPT}_k = \min_{h \in H_{n,k}} \|P - h\|_1$.
This theorem says that if we know a proxy to P that is close in A_k-norm to P, then this gives us enough information to construct the best k-histogram fit to P. Moreover, this is the only information we need to reconstruct a good k-histogram fit to P. The following well-known version of the VC inequality states that the empirical distribution after O(k/ε²) samples is close to the true distribution in A_k-norm:
Theorem 5 (VC inequality, e.g., [18]). Fix ε, δ > 0. Let $P : [n] \to \mathbb{R}$ be a distribution, and let Q be the empirical distribution after $O((k + \log(1/\delta))/\varepsilon^2)$ samples from P. Then with probability at least 1 − δ, we have that $\|P - Q\|_{A_k} \le \varepsilon$.
These two theorems together imply (via the triangle inequality) that in order to learn P, it suffices to construct some distribution P̂ such that the empirical distribution Q is close to P̂ in A_k-norm. After we construct this P̂, we can run LearnHist at a centralized server, and simply output the resulting hypothesis distribution. Thus, the crux of our distributed algorithm is a communication-efficient way of constructing such a P̂.
We achieve this as follows. First, we learn a partition I of [n] such that on each interval I ∈ I, either |I| = 1 and Q(I) ≥ Ω(ε/k), or we have Q(I) ≤ O(ε/k). We then show that if we let P̂ be the flattening of Q over this partition, then P̂ is ε-close to P in A_k-norm. To find this partition, we repeatedly perform binary search over the domain to find intervals of maximal length, starting at some fixed left endpoint ℓ, such that the mass of Q over that interval is at most O(ε/k). We show that the intervals in I can be found iteratively, using O(m log(ms) log n) bits of communication each, and that there are at most O(k/ε) intervals in I. This in turn implies a total upper bound of Õ(mk log n/ε) bits of communication, as claimed.
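A sketch of this partitioning step from the referee's point of view; `mass(l, r)` stands in for the communication-bounded estimate of Q's mass on the half-open interval [l, r) and is an assumption of this sketch.

```python
def find_partition(n, mass, thresh):
    """Greedily cover [0, n) with maximal intervals of mass <= thresh,
    emitting heavy singletons as-is; each interval's right endpoint is
    found by binary search."""
    parts, left = [], 0
    while left < n:
        if mass(left, left + 1) > thresh:
            parts.append((left, left + 1))
            left += 1
            continue
        lo, hi = left + 1, n  # largest r with mass(left, r) <= thresh
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mass(left, mid) <= thresh:
                lo = mid
            else:
                hi = mid - 1
        parts.append((left, lo))
        left = lo
    return parts
```

With thresh set to O(ε/k), the flattening of Q over the returned partition is the proxy P̂ fed to LearnHist.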
We also show a black-box reduction for robustly learning k-histograms. It improves on the communication cost when the domain size is very large. Specifically, we show:
Lemma 1. Fix n ∈ ℕ and ε, δ > 0. Suppose for all 1 ≤ n′ ≤ n, there is a robust learning algorithm for H_{n′,k} with s samples per server and m servers, using B(k, n′, m, s, δ) bits of communication, where ms ≥ Ω((k + log(1/δ))/ε²). Then there is an algorithm which solves H_{n,k} using O(B(k, O(k/ε), s, δ) + (k/ε) log n) bits of communication.
In other words, by increasing the communication by an additive factor of (k/ε) log n, we can replace the domain size n with O(k/ε). This is crucial for getting tighter bounds in certain regimes.
Learning Histograms in ℓ2-Distance. We now describe our algorithm for learning k-histograms in ℓ2. We first require the following folklore statistical bound:
Lemma 2 (see e.g. [1]). Fix ε, δ > 0 and a distribution $P : [n] \to \mathbb{R}$. Let Q be the empirical distribution with O(log(1/δ)/ε) i.i.d. samples from P. Then with probability 1 − δ, we have $\|P - Q\|_2^2 \le \varepsilon$.
This lemma states that it suffices to approximate the empirical distribution Q in ℓ2-norm. We now
describe how to do so.
Our first key primitive is that, using the celebrated Johnson-Lindenstrauss lemma [28], it is possible to get an accurate estimate of $\|x\|_2^2$ when server $i$ has access to $x_i$ and $x = \sum_i x_i$, where each server communicates at most logarithmically many bits, regardless of the dimension of $x$. Moreover, we can do this for poly(n) many different x's, even without shared randomness, by communicating only O(log n log log n) bits once at the beginning of the algorithm and constantly many bits per call afterwards. In particular, we use this to approximate
$$e_I = \sum_{i \in I} \left( Q(i) - \bar{Q}(I) \right)^2 ,$$
for all intervals $I \subseteq [n]$, where $\bar{Q}(I)$ denotes the average value of Q on I.
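An unquantized sketch of this primitive; in the actual protocol the shared matrix is derived from limited public randomness and the projections are rounded to logarithmically many bits.

```python
import numpy as np

def jl_norm_sq(local_vecs, d=64, seed=0):
    """Estimate ||sum_i x_i||_2^2 from low-dimensional projections: every
    server applies the same random matrix (derived from a shared seed) to
    its local vector and transmits only the d-dimensional result; the
    referee sums the messages and squares the norm."""
    n = len(local_vecs[0])
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, n)) / np.sqrt(d)  # E||Ax||^2 = ||x||^2
    sketch = sum(A @ np.asarray(x, dtype=float) for x in local_vecs)
    return float(sketch @ sketch)
```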
Perhaps surprisingly, we are now able to give an algorithm that outputs the best O(k log n)-histogram approximation to Q in ℓ2, which only accesses the distribution via the e_I. Moreover, we show that this algorithm needs to query only O(k log n) such e_I. Since each query to e_I can be answered with logarithmically many bits per server, this yields the claimed communication bound of Õ(mk log n).
Roughly speaking, our algorithm proceeds as follows. At each step, it maintains a partition of [n].
Initially, this is the trivial partition containing just one element: [n]. Then, in every iteration, it finds
the 2k intervals in its current partition with the largest e_I and splits them in half (or splits them all in
half if there are fewer than 2k intervals). It repeats this process for log n iterations and returns the
flattening of Q over the final set of intervals; a sketch is given below. By being somewhat careful with
how we track error, we are able to show that this in fact only ever requires O(k log n) queries to e_I.
While the algorithm is quite simple, proving correctness requires some work, and we defer it to the full version.
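The following Python sketch (ours) illustrates the splitting scheme in a centralized setting. For simplicity it recomputes every e_I exactly in each round; the distributed algorithm instead estimates only O(k log n) of these quantities via the sketching primitive above.

```python
import numpy as np

def split_histogram_partition(q, k):
    """Iterative splitting: for ~log2(n) rounds, halve the 2k intervals with
    the largest flattening error e_I (all of them, if fewer than 2k)."""
    n = len(q)

    def e(a, b):  # squared l2 error of flattening q on [a, b)
        seg = q[a:b]
        return float(((seg - seg.mean()) ** 2).sum())

    parts = [(0, n)]
    for _ in range(max(1, int(np.ceil(np.log2(n))))):
        parts.sort(key=lambda ab: e(*ab), reverse=True)
        nxt = []
        for i, (a, b) in enumerate(parts):
            if i < 2 * k and b - a > 1:          # split the (up to) 2k worst intervals
                m = (a + b) // 2
                nxt += [(a, m), (m, b)]
            else:
                nxt.append((a, b))
        parts = nxt
    return sorted(parts)
```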
3.2 Proof Ideas for the Lower Bounds
We now give an overview of proofs of our lower bounds.
Interactive Learning of Unstructured Distributions. We start with the most sophisticated of our
lower bounds: a lower bound for unstructured distributions with one sample per player and arbitrary
communication in the blackboard model. We show that Ω((n/ε²) log n) bits of communication are
needed. This is optimal and implies that in this case, there is no non-trivial protocol that saves more
than a constant factor over the trivial one (in which O(n/ε²) samples are fully transmitted). In order
to prove the lower bound, we apply the information-complexity toolkit. Our lower bound holds for a
family of nearly uniform distributions on [n], in which each pair of consecutive elements, (2i − 1, 2i),
has slightly perturbed probabilities. In the uniform distribution each element has probability 1/n.
Here, for each pair of elements 2i − 1 and 2i, we set the probabilities to be (1/n)(1 + 100 ζ_i ε) and
(1/n)(1 − 100 ζ_i ε), where each ζ_i is independently selected from the uniform distribution on {−1, 1}.
Each such pair can be interpreted as a single slightly biased coin. We show that the output of any good
learning protocol can be used to learn the bias ζ_i of most of the pairs. This implies that the messages
exchanged in any protocol that is likely to learn the distribution have to reveal most of the biases with
high constant probability.
Intuitively, the goal in our analysis is to show that if a player sends far fewer than log n bits overall,
this is unlikely to provide much information about that player's sample, and hence is of little help in
predicting the ζ_i's. This is done by bounding the mutual information between the transcript and the
ζ_i's. It should be noted that our lower bound holds in the interactive setting. That is, players are
unlikely to gain much by adaptively selecting when to continue providing more information about
their samples. The details of the proof are deferred to the full version.
Packing Lower Bounds. Some of our lower bounds are obtained via the construction of a suitable
packing set. We use the well-known result that the logarithm of the size of a packing set is a lower
bound on the communication complexity; this follows from the standard reduction from estimation
to testing, in conjunction with Fano's inequality.
4 Conclusion and Open Problems
This work provides the first rigorous study of the communication complexity of nonparametric
distribution estimation. We have obtained both negative results (tight lower bounds in certain regimes)
and the first non-trivial upper bounds for a range of structured distributions.
A number of interesting directions remain. We outline a few of them here:
1. The positive results of this work focused on discrete univariate structured distributions (e.g.,
histograms and monotone distributions). For what other families of structured distributions
can one obtain communication-efficient algorithms? Studying multivariate structured
distributions in this setting is an interesting direction for future work.
2. The results of this paper do not immediately extend to the continuous setting. Can we obtain
positive results for structured continuous distributions?
3. It would be interesting to study related inference tasks in the distributed setting, including
hypothesis testing and distribution property estimation.
Acknowledgments
The authors would like to thank the reviewers for their insightful and constructive comments. ID
was supported by NSF Award CCF-1652862 (CAREER) and a Sloan Research Fellowship. EG was
supported by NSF Award CCF-1649515. JL was supported by NSF CAREER Award CCF-1453261,
CCF-1565235, a Google Faculty Research Award, and an NSF Graduate Research Fellowship. AN
was supported in part by a grant from the Purdue Research Foundation and NSF Awards CCF-1618981
and CCF-1649515. LS was funded by a Google PhD Fellowship.
References
[1] J. Acharya, I. Diakonikolas, C. Hegde, J. Li, and L. Schmidt. Fast and near-optimal algorithms for approximating distributions by histograms. In Proceedings of the 34th ACM Symposium on Principles of Database Systems (PODS), pages 249–263. ACM, 2015.
[2] J. Acharya, I. Diakonikolas, J. Li, and L. Schmidt. Sample-optimal density estimation in nearly-linear time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1278–1289. SIAM, 2017.
[3] M. F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In Conference on Learning Theory, pages 26.1–26.22, 2012.
[4] L. Birgé. Estimating a density under order restrictions: Nonasymptotic minimax risk. The Annals of Statistics, pages 995–1012, 1987.
[5] M. Braverman, A. Garg, T. Ma, H. L. Nguyen, and D. P. Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, STOC 2016, pages 1011–1020, 2016.
[6] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Learning mixtures of structured distributions over discrete domains. In SODA, pages 1380–1394, 2013.
[7] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Efficient density estimation via piecewise polynomial approximation. In STOC, pages 604–613, 2014.
[8] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Near-optimal density estimation in near-linear time using variable-width histograms. In NIPS, pages 1844–1852, 2014.
[9] S. Chaudhuri, R. Motwani, and V. R. Narasayya. Random sampling for histogram construction: How much is enough? In SIGMOD Conference, pages 436–447, 1998.
[10] N. R. Council. Frontiers in Massive Data Analysis. The National Academies Press, Washington, DC, 2013.
[11] C. Daskalakis, I. Diakonikolas, R. O'Donnell, R. Servedio, and L. Y. Tan. Learning sums of independent integer random variables. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 217–226. IEEE, 2013.
[12] C. Daskalakis, I. Diakonikolas, and R. Servedio. Learning k-modal distributions via testing. In SODA, pages 1371–1385, 2012.
[13] C. Daskalakis, I. Diakonikolas, and R. Servedio. Learning Poisson binomial distributions. Algorithmica, 72(1):316–357, 2015.
[14] C. Daskalakis, I. Diakonikolas, R. A. Servedio, G. Valiant, and P. Valiant. Testing k-modal distributions: Optimal algorithms via reductions. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, pages 1833–1852, 2013.
[15] H. Daumé III, J. Phillips, A. Saha, and S. Venkatasubramanian. Efficient protocols for distributed classification and optimization. In Algorithmic Learning Theory, pages 154–168. Springer, 2012.
[16] H. Daumé III, J. Phillips, A. Saha, and S. Venkatasubramanian. Protocols for learning classifiers on distributed data. In Artificial Intelligence and Statistics, pages 282–290, 2012.
[17] L. Devroye and G. Lugosi. Bin width selection in multivariate histograms by the combinatorial method. Test, 13(1):129–145, 2004.
[18] L. Devroye and G. Lugosi. Combinatorial methods in density estimation. Springer Science & Business Media, 2012.
[19] I. Diakonikolas. Learning structured distributions. In P. Bühlmann, P. Drineas, M. Kane, and M. van der Laan, editors, Handbook of Big Data, Chapman & Hall/CRC Handbooks of Modern Statistical Methods, chapter 15, pages 267–284. Taylor & Francis, 2016.
[20] I. Diakonikolas, D. M. Kane, and A. Stewart. Efficient robust proper learning of log-concave distributions. CoRR, abs/1606.03077, 2016.
[21] J. C. Duchi, M. I. Jordan, M. J. Wainwright, and Y. Zhang. Optimality guarantees for distributed statistical estimation. ArXiv e-prints, 2014.
[22] D. Freedman and P. Diaconis. On the histogram as a density estimator: L2 theory. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 57(4):453–476, 1981.
[23] S. H. Fuller and L. I. Millett. The Future of Computing Performance: Game Over Or Next Level? National Academy Press, Washington, DC, 2011.
[24] A. Garg, T. Ma, and H. Nguyen. On communication cost of distributed statistical estimation and dimensionality. In Advances in Neural Information Processing Systems (NIPS), pages 2726–2734, 2014.
[25] A. C. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, and M. Strauss. Fast, small-space algorithms for approximate histogram maintenance. In STOC, pages 389–398, 2002.
[26] S. Guha, N. Koudas, and K. Shim. Approximation and streaming algorithms for histogram construction problems. ACM Trans. Database Syst., 31(1):396–438, 2006.
[27] Y. Hu, H. Chen, J. G. Lou, and J. Li. Distributed density estimation using non-parametric statistics. In 27th International Conference on Distributed Computing Systems (ICDCS '07), pages 28–28, 2007.
[28] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[29] M. I. Jordan, J. D. Lee, and Y. Yang. Communication-efficient distributed statistical learning. CoRR, abs/1605.07689, 2016.
[30] R. Kannan, S. Vempala, and D. Woodruff. Principal component analysis and higher correlations for distributed data. In Conference on Learning Theory, pages 1040–1057, 2014.
[31] J. Klemelä. Multivariate histograms with data-dependent partitions. Statistica Sinica, 19(1):159–176, 2009.
[32] W. Kowalczyk and N. A. Vlassis. Newscast EM. In Advances in Neural Information Processing Systems 17 (NIPS 2004), pages 713–720, 2004.
[33] Y. Liang, M. F. Balcan, V. Kanchanapally, and D. Woodruff. Improved distributed principal component analysis. In Advances in Neural Information Processing Systems (NIPS), pages 3113–3121, 2014.
[34] G. Lugosi and A. Nobel. Consistency of data-driven histogram methods for density estimation and classification. Ann. Statist., 24(2):687–706, 1996.
[35] R. D. Nowak. Distributed EM algorithms for density estimation in sensor networks. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '03, Hong Kong, April 6-10, 2003, pages 836–839, 2003.
[36] K. Pearson. Contributions to the mathematical theory of evolution. II. Skew variation in homogeneous material. Philosophical Trans. of the Royal Society of London, 186:343–414, 1895.
[37] V. Slavov and P. R. Rao. A gossip-based approach for Internet-scale cardinality estimation of XPath queries over distributed semistructured data. VLDB J., 23(1):51–76, 2014.
[38] N. Thaper, S. Guha, P. Indyk, and N. Koudas. Dynamic multidimensional histograms. In SIGMOD Conference, pages 428–439, 2002.
[39] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl., 16:264–280, 1971.
[40] Y. Zhang, J. Duchi, M. Jordan, and M. J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems (NIPS), pages 2328–2336, 2013.
[41] M. Zhou, H. T. Shen, X. Zhou, W. Qian, and A. Zhou. Effective data density estimation in ring-based P2P networks. In IEEE 28th International Conference on Data Engineering (ICDE 2012), pages 594–605, 2012.
Simple and Scalable Predictive Uncertainty
Estimation using Deep Ensembles
Balaji Lakshminarayanan
Alexander Pritzel Charles Blundell
DeepMind
{balajiln,apritzel,cblundell}@google.com
Abstract
Deep neural networks (NNs) are powerful black box predictors that have recently
achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian
NNs, which learn a distribution over weights, are currently the state-of-the-art
for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to
standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that
is simple to implement, readily parallelizable, requires very little hyperparameter
tuning, and yields high quality predictive uncertainty estimates. Through a series
of experiments on classification and regression benchmarks, we demonstrate that
our method produces well-calibrated uncertainty estimates which are as good or
better than approximate Bayesian NNs. To assess robustness to dataset shift, we
evaluate the predictive uncertainty on test examples from known and unknown
distributions, and show that our method is able to express higher uncertainty on
out-of-distribution examples. We demonstrate the scalability of our method by
evaluating predictive uncertainty estimates on ImageNet.
1 Introduction
Deep neural networks (NNs) have achieved state-of-the-art performance on a wide variety of machine
learning tasks [35] and are becoming increasingly popular in domains such as computer vision
[32], speech recognition [25], natural language processing [42], and bioinformatics [2, 61]. Despite
impressive accuracies in supervised learning benchmarks, NNs are poor at quantifying predictive
uncertainty, and tend to produce overconfident predictions. Overconfident incorrect predictions can be
harmful or offensive [3], hence proper uncertainty quantification is crucial for practical applications.
Evaluating the quality of predictive uncertainties is challenging as the "ground truth" uncertainty
estimates are usually not available. In this work, we shall focus upon two evaluation measures that
are motivated by practical applications of NNs. Firstly, we shall examine calibration [12, 13], a
frequentist notion of uncertainty which measures the discrepancy between subjective forecasts and
(empirical) long-run frequencies. The quality of calibration can be measured by proper scoring rules
[17] such as log predictive probabilities and the Brier score [9]. Note that calibration is an orthogonal
concern to accuracy: a network's predictions may be accurate and yet miscalibrated, and vice versa.
The second notion of quality of predictive uncertainty we consider concerns generalization of the
predictive uncertainty to domain shift (also referred to as out-of-distribution examples [23]), that is,
measuring if the network knows what it knows. For example, if a network trained on one dataset is
evaluated on a completely different dataset, then the network should output high predictive uncertainty
as inputs from a different dataset would be far away from the training data. Well-calibrated predictions
that are robust to model misspecification and dataset shift have a number of important practical uses
(e.g., weather forecasting, medical diagnosis).
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
There has been a lot of recent interest in adapting NNs to encompass uncertainty and probabilistic
methods. The majority of this work revolves around a Bayesian formalism [4], whereby a prior
distribution is specified upon the parameters of a NN and then, given the training data, the posterior
distribution over the parameters is computed, which is used to quantify predictive uncertainty.
Since exact Bayesian inference is computationally intractable for NNs, a variety of approximations
have been developed including Laplace approximation [40], Markov chain Monte Carlo (MCMC)
methods [46], as well as recent work on variational Bayesian methods [6, 19, 39], assumed density
filtering [24], expectation propagation [21, 38] and stochastic gradient MCMC variants such as
Langevin diffusion methods [30, 59] and Hamiltonian methods [53]. The quality of predictive
uncertainty obtained using Bayesian NNs crucially depends on (i) the degree of approximation due
to computational constraints and (ii) if the prior distribution is "correct", as priors of convenience
can lead to unreasonable predictive uncertainties [50]. In practice, Bayesian NNs are often harder
to implement and computationally slower to train compared to non-Bayesian NNs, which raises
the need for a "general purpose solution" that can deliver high-quality uncertainty estimates and yet
requires only minor modifications to the standard training pipeline.
Recently, Gal and Ghahramani [15] proposed using Monte Carlo dropout (MC-dropout) to estimate
predictive uncertainty by using Dropout [54] at test time. There has been work on approximate
Bayesian interpretation [15, 29, 41] of dropout. MC-dropout is relatively simple to implement
leading to its popularity in practice. Interestingly, dropout may also be interpreted as ensemble model
combination [54] where the predictions are averaged over an ensemble of NNs (with parameter
sharing). The ensemble interpretation seems more plausible particularly in the scenario where the
dropout rates are not tuned based on the training data, since any sensible approximation to the true
Bayesian posterior distribution has to depend on the training data. This interpretation motivates the
investigation of ensembles as an alternative solution for estimating predictive uncertainty.
It has long been observed that ensembles of models improve predictive performance (see [14] for a
review). However it is not obvious when and why an ensemble of NNs can be expected to produce
good uncertainty estimates. Bayesian model averaging (BMA) assumes that the true model lies within
the hypothesis class of the prior, and performs soft model selection to find the single best model within
the hypothesis class [43]. In contrast, ensembles perform model combination, i.e. they combine the
models to obtain a more powerful model; ensembles can be expected to be better when the true model
does not lie within the hypothesis class. We refer to [11, 43] and [34, §2.5] for related discussions.
It is important to note that even exact BMA is not guaranteed to be robust to mis-specification with
respect to domain shift.
Summary of contributions: Our contribution in this paper is twofold. First, we describe a simple and
scalable method for obtaining predictive uncertainty estimates from NNs. We argue for training
probabilistic NNs (that model predictive distributions) using a proper scoring rule as the training
criteria. We additionally investigate the effect of two modifications to the training pipeline, namely
(i) ensembles and (ii) adversarial training [18] and describe how they can produce smooth predictive
estimates. Secondly, we propose a series of tasks for evaluating the quality of the predictive uncertainty
estimates, in terms of calibration and generalization to unknown classes in supervised learning
problems. We show that our method significantly outperforms (or matches) MC-dropout. These tasks,
along with our simple yet strong baseline, serve as a useful benchmark for comparing predictive
uncertainty estimates obtained using different Bayesian/non-Bayesian/hybrid methods.
Novelty and Significance: Ensembles of NNs, or deep ensembles for short, have been successfully
used to boost predictive performance (e.g. classification accuracy in ImageNet or Kaggle contests)
and adversarial training has been used to improve robustness to adversarial examples. However, to
the best of our knowledge, ours is the first work to investigate their usefulness for predictive uncertainty estimation and compare their performance to current state-of-the-art approximate Bayesian
methods on a series of classification and regression benchmark datasets. Compared to Bayesian
NNs (e.g. variational inference or MCMC methods), our method is much simpler to implement,
requires surprisingly few modifications to standard NNs, and well suited for distributed computation,
thereby making it attractive for large-scale deep learning applications. To demonstrate scalability of
our method, we evaluate predictive uncertainty on ImageNet (and are the first to do so, to the best of
our knowledge). Most work on uncertainty in deep learning focuses on Bayesian deep learning; we
hope that the simplicity and strong empirical performance of our approach will spark more interest in
non-Bayesian approaches for predictive uncertainty estimation.
2 Deep Ensembles: A Simple Recipe For Predictive Uncertainty Estimation
2.1 Problem setup and high-level summary
We assume that the training dataset D consists of N i.i.d. data points D = {(x_n, y_n)}_{n=1}^N, where
x ∈ ℝ^D represents the D-dimensional features. For classification problems, the label is assumed
to be one of K classes, that is y ∈ {1, . . . , K}. For regression problems, the label is assumed to
be real-valued, that is y ∈ ℝ. Given the input features x, we use a neural network to model the
probabilistic predictive distribution p_θ(y|x) over the labels, where θ are the parameters of the NN.
We suggest a simple recipe: (1) use a proper scoring rule as the training criterion, (2) use adversarial
training [18] to smooth the predictive distributions, and (3) train an ensemble. Let M denote the
number of NNs in the ensemble and {θ_m}_{m=1}^M denote the parameters of the ensemble. We first
describe how to train a single neural net and then explain how to train an ensemble of NNs.
2.2 Proper scoring rules
Scoring rules measure the quality of predictive uncertainty (see [17] for a review). A scoring rule
assigns a numerical score to a predictive distribution p_θ(y|x), rewarding better calibrated predictions
over worse. We shall consider scoring rules where a higher numerical score is better. Let a scoring
rule be a function S(p_θ, (y, x)) that evaluates the quality of the predictive distribution p_θ(y|x) relative
to an event y|x ∼ q(y|x), where q(y, x) denotes the true distribution on (y, x)-tuples. The expected
scoring rule is then S(p_θ, q) = ∫ q(y, x) S(p_θ, (y, x)) dy dx. A proper scoring rule is one where
S(p_θ, q) ≤ S(q, q), with equality if and only if p_θ(y|x) = q(y|x), for all p_θ and q. NNs can then be
trained according to a measure that encourages calibration of predictive uncertainty by minimizing the
loss L(θ) = −S(p_θ, q).
It turns out many common NN loss functions are proper scoring rules. For example, when maximizing
likelihood, the score function is S(p_θ, (y, x)) = log p_θ(y|x), and this is a proper scoring rule due
to Gibbs' inequality: S(p_θ, q) = E_{q(x)} E_{q(y|x)} log p_θ(y|x) ≤ E_{q(x)} E_{q(y|x)} log q(y|x). In the case of
multi-class K-way classification, the popular softmax cross entropy loss is equivalent to the log
likelihood and is a proper scoring rule. Interestingly, L(θ) = −S(p_θ, (y, x)) = K⁻¹ Σ_{k=1}^K (1[k = y]
− p_θ(y = k|x))², i.e., minimizing the squared error between the predictive probability of a label and the
one-hot encoding of the correct label, is also a proper scoring rule, known as the Brier score [9].
This provides justification for this common trick for training NNs by minimizing the squared error
between a binary label and its associated probability and shows it is, in fact, a well defined loss with
desirable properties.1
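For concreteness, both proper scoring rules can be computed in a few lines from an array of predicted class probabilities (a numpy sketch, ours):

```python
import numpy as np

def nll(probs, labels):
    """Average negative log likelihood -log p_theta(y|x); a proper scoring rule."""
    return float(-np.log(probs[np.arange(len(labels)), labels]).mean())

def brier(probs, labels):
    """Brier score: K^-1 * sum_k (1[k = y] - p_theta(y = k|x))^2, averaged."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(((onehot - probs) ** 2).mean(axis=1).mean())
```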
2.2.1 Training criterion for regression
For regression problems, NNs usually output a single value, say μ(x), and the parameters are optimized
to minimize the mean squared error (MSE) on the training set, given by Σ_{n=1}^N (y_n − μ(x_n))².
However, the MSE does not capture predictive uncertainty. Following [47], we use a network
that outputs two values in the final layer, corresponding to the predicted mean μ(x) and variance2
σ²(x) > 0. By treating the observed value as a sample from a (heteroscedastic) Gaussian distribution
with the predicted mean and variance, we minimize the negative log-likelihood criterion:

    −log p_θ(y_n|x_n) = log σ²_θ(x_n)/2 + (y_n − μ_θ(x_n))²/(2σ²_θ(x_n)) + constant.    (1)
We found the above to perform satisfactorily in our experiments. However, two simple extensions are
worth further investigation: (i) maximum likelihood estimation over μ_θ(x) and σ²_θ(x) might overfit;
one could impose a prior and perform maximum-a-posteriori (MAP) estimation. (ii) In cases where
the Gaussian is too restrictive, one could use a more complex distribution, e.g. a mixture density
network [5] or a heavy-tailed distribution.
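A minimal PyTorch sketch of this criterion (ours; the class and helper names are hypothetical), with a mean/variance output head and the softplus plus minimum-variance trick from footnote 2:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMLP(nn.Module):
    """One-hidden-layer net emitting a predictive mean and variance."""
    def __init__(self, d_in, d_hidden=50):
        super().__init__()
        self.hidden = nn.Linear(d_in, d_hidden)
        self.out = nn.Linear(d_hidden, 2)      # -> (mean, raw variance)

    def forward(self, x):
        h = F.relu(self.hidden(x))
        mean, raw = self.out(h).chunk(2, dim=-1)
        var = F.softplus(raw) + 1e-6           # positivity + minimum variance
        return mean, var

def gaussian_nll(mean, var, y):
    """Eq. (1), up to the additive constant."""
    return (0.5 * torch.log(var) + 0.5 * (y - mean) ** 2 / var).mean()
```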
1 Indeed, as noted in Gneiting and Raftery [17], it can be shown that asymptotically, maximizing any proper scoring rule recovers the true parameter values.
2 We enforce the positivity constraint on the variance by passing the second output through the softplus function log(1 + exp(·)), and add a minimum variance (e.g. 10⁻⁶) for numerical stability.
2.3 Adversarial training to smooth predictive distributions
Adversarial examples, proposed by Szegedy et al. [55] and extended by Goodfellow et al. [18], are
those which are "close" to the original training examples (e.g. an image that is visually
indistinguishable from the original image to humans), but are misclassified by the NN. Goodfellow et al.
[18] proposed the fast gradient sign method as a fast solution to generate adversarial examples.
Given an input x with target y, and loss ℓ(θ, x, y) (e.g. −log p_θ(y|x)), the fast gradient sign method
generates an adversarial example as x′ = x + ε sign(∇_x ℓ(θ, x, y)), where ε is a small value such
that the max-norm of the perturbation is bounded. Intuitively, the adversarial perturbation creates
a new training example by adding a perturbation along a direction in which the network is likely to
increase the loss. Assuming ε is small enough, these adversarial examples can be used to augment
the original training set by treating (x′, y) as additional training examples. This procedure, referred
to as adversarial training,3 was found to improve the classifier's robustness [18].
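A minimal PyTorch sketch of the fast gradient sign method (ours; the helper name is hypothetical):

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """x' = x + eps * sign(grad_x loss(theta, x, y))  (fast gradient sign)."""
    x_adv = x.detach().clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```

The returned x_adv is then paired with the original target y as an additional training example.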
Interestingly, adversarial training can be interpreted as a computationally efficient solution to smooth
the predictive distributions by increasing the likelihood of the target around an ε-neighborhood of
the observed training examples. Ideally one would want to smooth the predictive distributions along
all 2^D directions in {−1, 1}^D; however this is computationally expensive. A random direction
might not necessarily increase the loss; however, adversarial training by definition computes the
direction where the loss is high and hence is better than a random direction for smoothing predictive
distributions. Miyato et al. [44] proposed a related idea called virtual adversarial training (VAT),
where they picked Δx = argmax_Δx KL(p(y|x) ‖ p(y|x + Δx)); the advantage of VAT is that
it does not require knowledge of the true target y and hence can be applied to semi-supervised
learning. Miyato et al. [44] showed that distributional smoothing using VAT is beneficial for efficient
semi-supervised learning; in contrast, we investigate the use of adversarial training for predictive
uncertainty estimation. Hence, our contributions are complementary; one could use VAT or other
forms of adversarial training, cf. [33], for improving predictive uncertainty in the semi-supervised
setting as well.
2.4 Ensembles: training and prediction
The most popular ensembles use decision trees as the base learners, and a wide variety of methods
have been explored in the literature on ensembles. Broadly, there are two classes of ensembles:
randomization-based approaches such as random forests [8], where the ensemble members can
be trained in parallel without any interaction, and boosting-based approaches where the ensemble
members are fit sequentially. We focus only on the randomization based approach as it is better suited
for distributed, parallel computation. Breiman [8] showed that the generalization error of random
forests can be upper bounded by a function of the strength and correlation between individual trees;
hence it is desirable to use a randomization scheme that de-correlates the predictions of the individual
models as well as ensures that the individual models are strong (e.g. high accuracy). One of the
popular strategies is bagging (a.k.a. bootstrapping), where ensemble members are trained on different
bootstrap samples of the original training set. If the base learner lacks intrinsic randomization (e.g. it
can be trained efficiently by solving a convex optimization problem), bagging is a good mechanism
for inducing diversity. However, if the underlying base learner has multiple local optima, as is the
case typically with NNs, the bootstrap can sometimes hurt performance since a base learner trained
on a bootstrap sample sees only about 63% of the unique data points.4
Breiman [8] proposed to use a combination of bagging [7] and random subset selection of features at
each node. Geurts et al. [16] later showed that bagging is unnecessary if additional randomness can
be injected into the random subset selection procedure. Intuitively, using more data for training the
base learners helps reduce their bias and ensembling helps reduce the variance.
We used the entire training dataset to train each network since deep NNs typically perform better
with more data, although it is straightforward to use a random subsample if need be. We found that
random initialization of the NN parameters, along with random shuffling of the data points, was
sufficient to obtain good performance in practice. We observed that bagging deteriorated performance
in our experiments. Lee et al. [36] independently observed that training on the entire dataset with
random initialization was better than bagging for deep ensembles, however their goal was to improve
3 Not to be confused with Generative Adversarial Networks (GANs).
4 The bootstrap draws N times uniformly with replacement from a dataset with N items. The probability an item is picked at least once is 1 − (1 − 1/N)^N, which for large N approaches 1 − e⁻¹ ≈ 0.632. Hence, the number of unique data points in a bootstrap sample is 0.632 · N on average.
predictive accuracy and not predictive uncertainty. The overall training procedure is summarized in
Algorithm 1.
Algorithm 1 Pseudocode of the training procedure for our method
1: ▷ Let each neural network parametrize a distribution over the outputs, i.e. p_θ(y|x). Use a proper scoring rule as the training criterion ℓ(θ, x, y). Recommended default values are M = 5 and ε = 1% of the input range of the corresponding dimension (e.g. 2.55 if the input range is [0, 255]).
2: Initialize θ_1, θ_2, . . . , θ_M randomly
3: for m = 1 : M do   ▷ train networks independently in parallel
4:    Sample data point n_m randomly for each net   ▷ single n_m for clarity, minibatch in practice
5:    Generate adversarial example using x′_{n_m} = x_{n_m} + ε sign(∇_{x_{n_m}} ℓ(θ_m, x_{n_m}, y_{n_m}))
6:    Minimize ℓ(θ_m, x_{n_m}, y_{n_m}) + ℓ(θ_m, x′_{n_m}, y_{n_m}) w.r.t. θ_m   ▷ adversarial training (optional)
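The following PyTorch sketch (ours) instantiates Algorithm 1 for classification with the softmax cross entropy criterion. Note that the paper sets ε per input dimension to 1% of that dimension's range, whereas this sketch assumes a single scalar ε for brevity:

```python
import torch
import torch.nn.functional as F

def train_deep_ensemble(make_model, loader, M=5, eps=0.01, epochs=40):
    """Sketch of Algorithm 1 for classification with the NLL criterion.

    make_model: zero-argument factory returning a freshly initialized net;
    random initialization plus data shuffling supply the diversity.
    """
    ensemble = [make_model() for _ in range(M)]                 # step 2
    for model in ensemble:                                      # step 3
        opt = torch.optim.Adam(model.parameters())
        for _ in range(epochs):
            for x, y in loader:                                 # step 4
                x_adv = x.detach().clone().requires_grad_(True)
                F.cross_entropy(model(x_adv), y).backward()
                x_adv = (x_adv + eps * x_adv.grad.sign()).detach()   # step 5
                opt.zero_grad()                 # clear grads from the FGSM pass
                (F.cross_entropy(model(x), y) +
                 F.cross_entropy(model(x_adv), y)).backward()        # step 6
                opt.step()
    return ensemble
```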
We treat the ensemble as a uniformly-weighted mixture model and combine the predictions as
p(y|x) = M⁻¹ Σ_{m=1}^M p_{θ_m}(y|x). For classification, this corresponds to averaging the predicted
probabilities. For regression, the prediction is a mixture of Gaussian distributions. For ease of
computing quantiles and predictive probabilities, we further approximate the ensemble prediction as a
Gaussian whose mean and variance are respectively the mean and variance of the mixture. The mean
and variance of the mixture M⁻¹ Σ_m N(μ_{θ_m}(x), σ²_{θ_m}(x)) are given by μ_*(x) = M⁻¹ Σ_m μ_{θ_m}(x)
and σ²_*(x) = M⁻¹ Σ_m (σ²_{θ_m}(x) + μ²_{θ_m}(x)) − μ²_*(x), respectively.
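In code, combining the members' outputs amounts to a few lines (a numpy sketch, ours):

```python
import numpy as np

def combine_regression(means, variances):
    """Moment-match the uniform Gaussian mixture; means, variances are (M, N)."""
    mu = means.mean(axis=0)
    var = (variances + means ** 2).mean(axis=0) - mu ** 2
    return mu, var

def combine_classification(member_probs):
    """Average the members' class probabilities; member_probs is (M, N, K)."""
    return np.mean(member_probs, axis=0)
```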
3 Experimental results
3.1 Evaluation metrics and experimental setup
For both classification and regression, we evaluate the negative log likelihood (NLL), which depends
on the predictive uncertainty. NLL is a proper scoring rule and a popular metric for evaluating
predictive uncertainty [49]. For classification we additionally measure classification accuracy and
the Brier score, defined as BS = K⁻¹ Σ_{k=1}^K (t_k − p(y = k|x*))², where t_k = 1 if k = y* and 0
otherwise. For regression problems, we additionally measured the root mean squared error (RMSE).
Unless otherwise specified, we used a batch size of 100 and the Adam optimizer with a fixed learning rate of
0.1 in our experiments. We use the same technique for generating adversarial training examples for
regression problems. Goodfellow et al. [18] used a fixed ε for all dimensions; this is unsatisfying
if the input dimensions have different ranges. Hence, in all of our experiments, we set ε to 0.01
times the range of the training data along that particular dimension. We used the default weight
initialization in Torch.
3.2 Regression on toy datasets
First, we qualitatively evaluate the performance of the proposed method on a one-dimensional toy
regression dataset. This dataset was used by Hernández-Lobato and Adams [24], and consists of 20
training examples drawn as y = x³ + ε where ε ∼ N(0, 3²). We used the same architecture as [24].
A commonly used heuristic in practice is to use an ensemble of NNs (trained to minimize MSE),
obtain multiple point predictions, and use the empirical variance of the networks' predictions as an
approximate measure of uncertainty. We demonstrate that this is inferior to learning the variance by
training using NLL.5 The results are shown in Figure 1.
The results clearly demonstrate that (i) learning variance and training using a scoring rule (NLL) leads
to improved predictive uncertainty and (ii) ensemble combination improves performance, especially
as we move farther from the observed training data.
3.3 Regression on real world datasets
In our next experiment, we compare our method to state-of-the-art methods for predictive uncertainty
estimation using NNs on regression tasks. We use the experimental setup proposed by Hernández-Lobato and Adams [24] for evaluating probabilistic backpropagation (PBP), which was also used
5 See also Appendix A.2 for calibration results on a real world dataset.
Figure 1: Results on a toy regression task: the x-axis denotes x. On the y-axis, the blue line is the ground
truth curve, the red dots are observed noisy training data points, and the gray lines correspond to
the predicted mean along with three standard deviations. The leftmost plot shows the empirical
variance of 5 networks trained using MSE, the second plot shows the effect of training using NLL with
a single net, the third plot shows the additional effect of adversarial training, and the final plot shows the
effect of using an ensemble of 5 networks, respectively.
by Gal and Ghahramani [15] to evaluate MC-dropout.6 Each dataset is split into 20 train-test folds,
except for the protein dataset which uses 5 folds and the Year Prediction MSD dataset which uses
a single train-test split. We use the identical network architecture: 1-hidden layer NN with ReLU
nonlinearity [45], containing 50 hidden units for smaller datasets and 100 hidden units for the larger
protein and Year Prediction MSD datasets. We trained for 40 epochs; we refer to [24] for further
details about the datasets and the experimental protocol. We used 5 networks in our ensemble. Our
results are shown in Table 1, along with the PBP and MC-dropout results reported in their respective
papers.
Datasets               | PBP RMSE    | MC-dropout RMSE | Deep Ensembles RMSE | PBP NLL     | MC-dropout NLL | Deep Ensembles NLL
Boston housing         | 3.01 ± 0.18 | 2.97 ± 0.85     | 3.28 ± 1.00         | 2.57 ± 0.09 | 2.46 ± 0.25    | 2.41 ± 0.25
Concrete               | 5.67 ± 0.09 | 5.23 ± 0.53     | 6.03 ± 0.58         | 3.16 ± 0.02 | 3.04 ± 0.09    | 3.06 ± 0.18
Energy                 | 1.80 ± 0.05 | 1.66 ± 0.19     | 2.09 ± 0.29         | 2.04 ± 0.02 | 1.99 ± 0.09    | 1.38 ± 0.22
Kin8nm                 | 0.10 ± 0.00 | 0.10 ± 0.00     | 0.09 ± 0.00         | -0.90 ± 0.01| -0.95 ± 0.03   | -1.20 ± 0.02
Naval propulsion plant | 0.01 ± 0.00 | 0.01 ± 0.00     | 0.00 ± 0.00         | -3.73 ± 0.01| -3.80 ± 0.05   | -5.63 ± 0.05
Power plant            | 4.12 ± 0.03 | 4.02 ± 0.18     | 4.11 ± 0.17         | 2.84 ± 0.01 | 2.80 ± 0.05    | 2.79 ± 0.04
Protein                | 4.73 ± 0.01 | 4.36 ± 0.04     | 4.71 ± 0.06         | 2.97 ± 0.00 | 2.89 ± 0.01    | 2.83 ± 0.02
Wine                   | 0.64 ± 0.01 | 0.62 ± 0.04     | 0.64 ± 0.04         | 0.97 ± 0.01 | 0.93 ± 0.06    | 0.94 ± 0.12
Yacht                  | 1.02 ± 0.05 | 1.11 ± 0.38     | 1.58 ± 0.48         | 1.63 ± 0.02 | 1.55 ± 0.12    | 1.18 ± 0.21
Year Prediction MSD    | 8.88 ± NA   | 8.85 ± NA       | 8.89 ± NA           | 3.60 ± NA   | 3.59 ± NA      | 3.35 ± NA

Table 1: Results on regression benchmark datasets comparing RMSE and NLL. See Table 2 for
results on variants of our method.
We observe that our method outperforms (or is competitive with) existing methods in terms of NLL.
On some datasets, we observe that our method is slightly worse in terms of RMSE. We believe that
this is due to the fact that our method optimizes for NLL (which captures predictive uncertainty)
instead of MSE. Table 2 in Appendix A.1 reports additional results on variants of our method,
demonstrating the advantage of using an ensemble as well as learning variance.
3.4 Classification on MNIST, SVHN and ImageNet
Next we evaluate the performance on classification tasks using MNIST and SVHN datasets. Our goal
is not to achieve the state-of-the-art performance on these problems, but rather to evaluate the effect
of adversarial training as well as the number of networks in the ensemble. To verify if adversarial
training helps, we also include a baseline which picks a random signed vector. For MNIST, we used
an MLP with 3 hidden layers of 200 hidden units each, ReLU non-linearities, and batch
normalization. For MC-dropout, we added dropout after each non-linearity with 0.1 as the dropout
rate.7 Results are shown in Figure 2(a). We observe that adversarial training and increasing the
number of networks in the ensemble significantly improve performance in terms of both classification
accuracy as well as NLL and Brier score, illustrating that our method produces well-calibrated
uncertainty estimates. Adversarial training leads to better performance than augmenting with random
direction. Our method also performs much better than MC-dropout in terms of all the performance
measures. Note that augmenting the training dataset with invariances (such as random crop and
horizontal flips) is complementary to adversarial training and can potentially improve performance.
6 We do not compare to VI [19] as PBP and MC-dropout outperform VI on these benchmarks.
7 We also tried a dropout rate of 0.5, but that performed worse.
[Figure 2: (a) MNIST dataset using 3-layer MLP; (b) SVHN using VGG-style convnet. Each panel plots classification error, NLL, and Brier score against the number of nets, for Ensemble, Ensemble + R, Ensemble + AT, and MC-dropout.]
Figure 2: Evaluating predictive uncertainty as a function of ensemble size M (number of networks
in the ensemble or the number of MC-dropout samples): Ensemble variants significantly outperform
MC-dropout performance with the corresponding M in terms of all 3 metrics. Adversarial training
improves results for MNIST for all M and SVHN when M = 1, but the effect drops as M increases.
To measure the sensitivity of the results to the choice of network architecture, we experimented
with a two-layer MLP as well as a convolutional NN; we observed qualitatively similar results; see
Appendix B.1 in the supplementary material for details.
We also report results on the SVHN dataset using a VGG-style convolutional NN.8 The results are
in Figure 2(b). Ensembles outperform MC dropout. Adversarial training helps slightly for M = 1,
however the effect drops as the number of networks in the ensemble increases. If the classes are
well-separated, adversarial training might not change the classification boundary significantly. It is
not clear if this is the case here; further investigation is required.
Finally, we evaluate on the ImageNet (ILSVRC-2012) dataset [51] using the inception network [56].
Due to computational constraints, we only evaluate the effect of ensembles on this dataset. The
results on ImageNet (single-crop evaluation) are shown in Figure 4. We observe that as M increases,
both the accuracy and the quality of predictive uncertainty improve significantly.
Another advantage of using an ensemble is that it enables us to easily identify training examples
where the individual networks disagree or agree the most. This disagreement9 provides another
useful qualitative way to evaluate predictive uncertainty. Figures 10 and 11 in Appendix B.2 report
qualitative evaluation of predictive uncertainty on the MNIST dataset.
3.5 Uncertainty evaluation: test examples from known vs unknown classes
In the final experiment, we evaluate uncertainty on out-of-distribution examples from unseen classes.
Overconfident predictions on unseen classes pose a challenge for reliable deployment of deep learning
models in real world applications. We would like the predictions to exhibit higher uncertainty when
the test data is very different from the training data. To test if the proposed method possesses this
desirable property, we train a MLP on the standard MNIST train/test split using the same architecture
as before. However, in addition to the regular test set with known classes, we also evaluate it on a
test set containing unknown classes. We used the test split of the NotMNIST10 dataset. The images
in this dataset have the same size as MNIST, however the labels are alphabets instead of digits. We
do not have access to the true conditional probabilities, but we expect the predictions to be closer
to uniform on unseen classes compared to the known classes where the predictive probabilities
should concentrate on the true targets. We evaluate the entropy of the predictive distribution and
use this to evaluate the quality of the uncertainty estimates. The results are shown in Figure 3(a).
For known classes (top row), both our method and MC-dropout have low entropy as expected. For
unknown classes (bottom row), as M increases, the entropy of deep ensembles increases much faster
than MC-dropout indicating that our method is better suited for handling unseen test examples. In
particular, MC-dropout seems to give high confidence predictions for some of the test examples, as
evidenced by the mode around 0 even for unseen classes. Such overconfident wrong predictions can
be problematic in practice when tested on a mixture of known and unknown classes, as we will see in
Section 3.6. Comparing different variants of our method, the mode for adversarial training increases
slightly faster than the mode for vanilla ensembles indicating that adversarial training is beneficial
8 The architecture is similar to the one described in http://torch.ch/blog/2015/07/30/cifar.html.
9 More precisely, we define disagreement as Σ_{m=1}^M KL(p_{θ_m}(y|x) ‖ p_E(y|x)), where KL denotes the Kullback–Leibler divergence and p_E(y|x) = M⁻¹ Σ_m p_{θ_m}(y|x) is the prediction of the ensemble.
10 Available at http://yaroslavvb.blogspot.co.uk/2011/09/notmnist-dataset.html
for quantifying uncertainty on unseen classes. We qualitatively evaluate results in Figures 12(a)
and 12(b) in Appendix B.2. Figure 12(a) shows that the ensemble agreement is highest for letter "I"
which resembles 1 in the MNIST training dataset, and that the ensemble disagreement is higher for
examples visually different from the MNIST training dataset.
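The predictive entropy plotted in these histograms is computed directly from the ensemble's averaged probabilities; a numpy sketch (ours):

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the averaged predictive distribution; member_probs is (M, N, K)."""
    p = np.mean(member_probs, axis=0)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)   # one value per input
```

Comparing histograms of these values on in-distribution and out-of-distribution test inputs reproduces the qualitative behavior described above.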
[Figure 3: (a) MNIST-NotMNIST; (b) SVHN-CIFAR10. Histograms of predictive entropy values for Ensemble, Ensemble + R, Ensemble + AT, and MC-dropout, at ensemble sizes M = 1, 5, 10.]
Figure 3: Histogram of the predictive entropy on test examples from known classes (top row) and
unknown classes (bottom row), as we vary ensemble size M .
We ran a similar experiment, training on SVHN and testing on the CIFAR-10 [31] test set; both datasets
contain 32 ? 32 ? 3 images, however SVHN contains images of digits whereas CIFAR-10 contains
images of object categories. The results are shown in Figure 3(b). As in the MNIST-NotMNIST
experiment, we observe that MC-dropout produces over-confident predictions on unseen examples,
whereas our method produces higher uncertainty on unseen classes.
Finally, we test on ImageNet by splitting the training set by categories. We split the dataset into
images of dogs (known classes) and non-dogs (unknown classes), following Vinyals et al. [58] who
proposed this setup for a different task. Figure 5 shows the histogram of the predictive entropy as
well as the maximum predicted probability (i.e. confidence in the predicted class). We observe that
the predictive uncertainty improves on unseen classes, as the ensemble size increases.
3.6 Accuracy as a function of confidence
In practical applications, it is highly desirable for a system to avoid overconfident, incorrect predictions
and fail gracefully. To evaluate the usefulness of predictive uncertainty for decision making, we
consider a task where the model is evaluated only on cases where the model's confidence is above a
user-specified threshold. If the confidence estimates are well-calibrated, one can trust the model's
predictions when the reported confidence is high and resort to a different solution (e.g. use human in
a loop, or use prediction from a simpler model) when the model is not confident.
We re-use the results from the experiment in the previous section where we trained a network on
MNIST and test it on a mix of test examples from MNIST (known classes) and NotMNIST (unknown
M  | Top-1 error % | Top-5 error % | NLL   | Brier Score ×10⁻³
1  | 22.166        | 6.129         | 0.959 | 0.317
2  | 20.462        | 5.274         | 0.867 | 0.294
3  | 19.709        | 4.955         | 0.836 | 0.286
4  | 19.334        | 4.723         | 0.818 | 0.282
5  | 19.104        | 4.637         | 0.809 | 0.280
6  | 18.986        | 4.532         | 0.803 | 0.278
7  | 18.860        | 4.485         | 0.797 | 0.277
8  | 18.771        | 4.430         | 0.794 | 0.276
9  | 18.728        | 4.373         | 0.791 | 0.276
10 | 18.675        | 4.364         | 0.789 | 0.275

Figure 4: Results on ImageNet: Deep Ensembles lead to lower classification error as well as better predictive uncertainty, as evidenced by lower NLL and Brier score.
Figure 5: ImageNet trained only on dogs: Histogram of the
predictive entropy (left) and maximum predicted probability (right) on test examples from known classes (dogs) and
unknown classes (non-dogs), as we vary the ensemble size.
[Figure 6 plot: accuracy on examples with confidence p(y|x) ≥ τ (y-axis, %) versus confidence threshold τ (x-axis), for Ensemble, Ensemble + R, Ensemble + AT, and MC-dropout.]
Figure 6: Accuracy vs confidence curves: Networks trained on MNIST and tested on both the MNIST
test set containing known classes and the NotMNIST dataset containing unseen classes. MC-dropout can
produce overconfident wrong predictions, whereas deep ensembles are significantly more robust.
The network will produce incorrect predictions on out-of-distribution examples; however, we
would like these predictions to have low confidence. Given the prediction p(y = k|x), we define the
predicted label as ŷ = arg max_k p(y = k|x), and the confidence as p(y = ŷ|x) = max_k p(y = k|x).
We filter the test examples by a confidence threshold 0 ≤ τ ≤ 1 and plot the accuracy on the examples
that remain. The confidence vs. accuracy results are shown in Figure 6. If we look only at
cases where the confidence is ≥ 90%, we expect higher accuracy than in cases where the confidence is
≥ 80%; hence, the curve should be monotonically increasing. If the application demands an accuracy of
x%, we can trust the model only in cases where the confidence is greater than the corresponding
threshold; hence, we can compare the accuracy of the models at the confidence threshold a given
application requires. MC-dropout can produce overconfident wrong predictions, as evidenced by low accuracy
even for high values of τ, whereas deep ensembles are significantly more robust.
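The accuracy-versus-confidence evaluation just described is easy to reproduce; here is a minimal sketch (our own illustration), assuming `probs` holds ensemble-averaged predictive probabilities and `y_true` integer labels, with out-of-distribution examples counted as errors whenever they are predicted.

```python
import numpy as np

def accuracy_vs_confidence(probs, y_true, thresholds):
    """Accuracy restricted to examples with confidence max_k p(y=k|x) >= tau."""
    confidence = probs.max(axis=1)
    y_pred = probs.argmax(axis=1)
    curve = []
    for tau in thresholds:
        keep = confidence >= tau
        # Accuracy is undefined if no example clears the threshold.
        curve.append((y_pred[keep] == y_true[keep]).mean() if keep.any() else np.nan)
    return np.array(curve)

# taus = np.linspace(0.0, 0.9, 10)
# curve = accuracy_vs_confidence(probs, y_true, taus)  # ideally monotone in tau
```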
4 Discussion
We have proposed a simple and scalable non-Bayesian solution that provides a very strong baseline
on evaluation metrics for predictive uncertainty quantification. Intuitively, our method captures two
sources of uncertainty. Training a probabilistic NN p_θ(y|x) using proper scoring rules as training
objectives captures ambiguity in targets y for a given x. In addition, our method uses a combination
of ensembles (which capture "model uncertainty" by averaging predictions over multiple models
consistent with the training data) and adversarial training (which encourages local smoothness),
for robustness to model misspecification and out-of-distribution examples. Ensembles, even for
M = 5, significantly improve uncertainty quality in all the cases. Adversarial training helps on
some datasets for some metrics and is not strictly necessary in all cases. Our method requires very
little hyperparameter tuning, is well suited for large-scale distributed computation, and can be
readily implemented for a wide variety of architectures such as MLPs and CNNs, including those
which do not use dropout, e.g. residual networks [22]. It is perhaps surprising to the Bayesian deep
learning community that a non-Bayesian (yet probabilistic) approach can perform as well as Bayesian
NNs. We hope that our work will encourage the community to consider non-Bayesian approaches
(such as ensembles) and other interesting evaluation metrics for predictive uncertainty. Concurrent
with our work, Hendrycks and Gimpel [23] and Guo et al. [20] have also independently shown that
non-Bayesian solutions can produce good predictive uncertainty estimates on some tasks. Abbasi
and Gagné [1] and Tramèr et al. [57] have also explored ensemble-based solutions to tackle adversarial
examples, a particularly hard case of out-of-distribution examples.
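For readers who want the recipe in code, the following condensed PyTorch sketch is our own paraphrase of the method, not the authors' implementation; hyperparameters such as the fast-gradient-sign epsilon are illustrative. It trains M independently initialized networks with the NLL proper scoring rule, optionally augments each batch with adversarial examples, and averages the M predictive distributions at test time.

```python
import torch
import torch.nn.functional as F

def train_ensemble(make_model, loader, M=5, epochs=10, eps=0.01, adversarial=True):
    ensemble = []
    for _ in range(M):  # independent random init (+ shuffling) gives diversity
        model = make_model()
        opt = torch.optim.Adam(model.parameters())
        for _ in range(epochs):
            for x, y in loader:
                if adversarial:
                    x_req = x.clone().requires_grad_(True)
                    nll = F.cross_entropy(model(x_req), y)  # proper scoring rule
                    grad_x, = torch.autograd.grad(nll, x_req)
                    x_adv = (x + eps * grad_x.sign()).detach()  # fast gradient sign
                    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
                else:
                    loss = F.cross_entropy(model(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        ensemble.append(model)
    return ensemble

def predict(ensemble, x):
    """Ensemble prediction: average the M predictive distributions."""
    with torch.no_grad():
        return torch.stack([F.softmax(m(x), dim=-1) for m in ensemble]).mean(0)
```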
There are several avenues for future work. We focused on training independent networks, as training
can be trivially parallelized. Explicitly de-correlating networks' predictions, e.g. as in [37], might
promote ensemble diversity and improve performance even further. Optimizing the ensemble weights,
as in stacking [60] or adaptive mixtures of experts [28], could further improve performance. The
ensemble has M times more parameters than a single network; for memory-constrained applications,
the ensemble can be distilled into a simpler model [10, 26]. It would also be interesting to investigate
so-called implicit ensembles, where ensemble members share parameters, e.g. using multiple
heads [36, 48], snapshot ensembles [27], or swapout [52].
Acknowledgments
We would like to thank Samuel Ritter and Oriol Vinyals for help with ImageNet experiments, and
Daan Wierstra, David Silver, David Barrett, Ian Osband, Martin Szummer, Peter Dayan, Shakir
Mohamed, Theophane Weber, Ulrich Paquet and the anonymous reviewers for helpful feedback.
References
[1] M. Abbasi and C. Gagné. Robustness to adversarial examples through an ensemble of specialists.
arXiv preprint arXiv:1702.06856, 2017.
[2] B. Alipanahi, A. Delong, M. T. Weirauch, and B. J. Frey. Predicting the sequence specificities
of DNA- and RNA-binding proteins by deep learning. Nature Biotechnology, 33(8):831-838,
2015.
[3] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané. Concrete problems
in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[4] J. M. Bernardo and A. F. Smith. Bayesian Theory, volume 405. John Wiley & Sons, 2009.
[5] C. M. Bishop. Mixture density networks. 1994.
[6] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural
networks. In ICML, 2015.
[7] L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
[8] L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[9] G. W. Brier. Verification of forecasts expressed in terms of probability. Monthly weather review,
1950.
[10] C. Bucila, R. Caruana, and A. Niculescu-Mizil. Model compression. In KDD. ACM, 2006.
[11] B. Clarke. Comparing Bayes model averaging and stacking when model approximation error
cannot be ignored. J. Mach. Learn. Res. (JMLR), 4:683-712, 2003.
[12] A. P. Dawid. The well-calibrated Bayesian. Journal of the American Statistical Association,
1982.
[13] M. H. DeGroot and S. E. Fienberg. The comparison and evaluation of forecasters. The
statistician, 1983.
[14] T. G. Dietterich. Ensemble methods in machine learning. In Multiple classifier systems. 2000.
[15] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model
uncertainty in deep learning. In ICML, 2016.
[16] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine Learning,
63(1):3-42, 2006.
[17] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal
of the American Statistical Association, 102(477):359-378, 2007.
[18] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In
ICLR, 2015.
[19] A. Graves. Practical variational inference for neural networks. In NIPS, 2011.
[20] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks.
arXiv preprint arXiv:1706.04599, 2017.
[21] L. Hasenclever, S. Webb, T. Lienart, S. Vollmer, B. Lakshminarayanan, C. Blundell, and Y. W.
Teh. Distributed Bayesian learning with stochastic natural-gradient expectation propagation and
the posterior server. arXiv preprint arXiv:1512.09327, 2015.
[22] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
770-778, 2016.
[23] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution
examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
[24] J. M. Hernández-Lobato and R. P. Adams. Probabilistic backpropagation for scalable learning
of Bayesian neural networks. In ICML, 2015.
[25] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke,
P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition:
The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.
[26] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint
arXiv:1503.02531, 2015.
[27] G. Huang, Y. Li, G. Pleiss, Z. Liu, J. E. Hopcroft, and K. Q. Weinberger. Snapshot ensembles:
Train 1, get M for free. ICLR submission, 2017.
[28] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts.
Neural Computation, 3(1):79-87, 1991.
[29] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization
trick. In NIPS, 2015.
[30] A. Korattikara, V. Rathod, K. Murphy, and M. Welling. Bayesian dark knowledge. In NIPS,
2015.
[31] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[32] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[33] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint
arXiv:1611.01236, 2016.
[34] B. Lakshminarayanan. Decision trees and forests: a probabilistic perspective. PhD thesis, UCL
(University College London), 2016.
[35] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[36] S. Lee, S. Purushwalkam, M. Cogswell, D. Crandall, and D. Batra. Why M heads are better than
one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015.
[37] S. Lee, S. P. S. Prakash, M. Cogswell, V. Ranjan, D. Crandall, and D. Batra. Stochastic multiple
choice learning for training diverse deep ensembles. In NIPS, 2016.
[38] Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Stochastic expectation propagation. In NIPS,
2015.
[39] C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix
Gaussian posteriors. arXiv preprint arXiv:1603.04733, 2016.
[40] D. J. MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of
Technology, 1992.
[41] S.-i. Maeda. A Bayesian encourages dropout. arXiv preprint arXiv:1412.7003, 2014.
[42] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in
vector space. arXiv preprint arXiv:1301.3781, 2013.
[43] T. P. Minka. Bayesian model averaging is not model combination. 2000.
[44] T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii. Distributional smoothing by virtual
adversarial examples. In ICLR, 2016.
[45] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In
ICML, 2010.
[46] R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996.
[47] D. A. Nix and A. S. Weigend. Estimating the mean and variance of the target probability
distribution. In IEEE International Conference on Neural Networks, 1994.
[48] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. In
NIPS, 2016.
[49] J. Quinonero-Candela, C. E. Rasmussen, F. Sinz, O. Bousquet, and B. Schölkopf. Evaluating
predictive uncertainty challenge. In Machine Learning Challenges. Springer, 2006.
[50] C. E. Rasmussen and J. Quinonero-Candela. Healing the relevance vector machine through
augmentation. In ICML, 2005.
[51] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy,
A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition
Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
[52] S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. In
NIPS, 2016.
[53] J. T. Springenberg, A. Klein, S. Falkner, and F. Hutter. Bayesian optimization with robust
Bayesian neural networks. In Advances in Neural Information Processing Systems, pages
4134-4142, 2016.
[54] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple
way to prevent neural networks from overfitting. JMLR, 2014.
[55] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus.
Intriguing properties of neural networks. In ICLR, 2014.
[56] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception
architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pages 2818-2826, 2016.
[57] F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble adversarial training:
Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
[58] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning.
In NIPS, 2016.
[59] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In
ICML, 2011.
[60] D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
[61] J. Zhou and O. G. Troyanskaya. Predicting effects of noncoding variants with deep learning-based
sequence model. Nature Methods, 12(10):931-934, 2015.
6,876 | 722 | Structural and Behavioral Evolution
of Recurrent Networks
Gregory M. Saunders, Peter J. Angeline, and Jordan B. Pollack
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210
[email protected]
Abstract
This paper introduces GNARL, an evolutionary program which induces
recurrent neural networks that are structurally unconstrained. In contrast
to constructive and destructive algorithms, GNARL employs a population of networks and uses a fitness function's unsupervised feedback to
guide search through network space. Annealing is used in generating
both gaussian weight changes and structural modifications. Applying
GNARL to a complex search and collection task demonstrates that the
system is capable of inducing networks with complex internal dynamics.
1 INTRODUCTION
A variety of methods to induce network architecture exist. Some start with a very simple
network and incrementally add nodes and links (Hanson 1990; Fahlman & Lebiere, 1990;
Fahlman 1991; Chen, et al., 1993); others start with a large network and then prune off
superfluous pieces (Mozer & Smolensky, 1989; Cun, Denker, and Solla, 1990; Hassibi &
Stork, 1993; Omlin & Giles, 1993). But these constructive and destructive algorithms are
monotonic extremes that ignore a more moderate solution: "dynamically add or remove
pieces of architecture as needed." Moreover, by exclusively exploring either feedforward
networks (e.g., Ash, 1989), fully-connected recurrent networks (e.g., Chen, et al. 1993), or
some restricted middle ground (e.g., Fahlman, 1991), these algorithms allow only limited
structural change. Finally, constructive and destructive algorithms are supervised methods
[Figure 1 diagram: a sample network with num-in input units, Random(max-hidden) hidden units, Random(max-links) links, and num-out output units.]
Figure 1: Sample initial network. The number of input nodes and number of output nodes
is fixed for the particular task, but the number of hidden units and the connectivity
(although bounded) is random.
which rely on complex predicates to determine when to add or delete pieces of network
architecture (e.g., "when rate of improvement falls below threshold").
Genetic algorithms (Holland 1975), on the other hand, are unsupervised methods which can
induce networks by making stochastic modifications to a population of bitstrings, each of
which is interpreted as a network. Most studies, however, still assume a fixed structure for
the network (e.g., Belew et al., 1990; Jefferson, et al., 1991; see also Schaffer, et al. 1992),
and those that do not, allow only limited structural change (e.g., Potter, 1992, and Karunanithi et al., 1992).
Evolutionary programming (Fogel, 1992) is an alternate optimization technique which,
when applied to network induction, obviates the need for a bitstring-to-network mapping
by mutating networks directly. Furthermore, because EP does not employ crossover (an
operator of questionable efficacy on distributed representations), it is a better candidate for
inducing network structures (Angeline, Saunders, and Pollack, 1993; Fogel et al., 1990).
2 THE GNARL ALGORITHM
GNARL (GeNeralized Acquisition of Recurrent Links) is an evolutionary program that
non-monotonically constructs recurrent networks to solve a given task. It begins with an
initial population of n random individuals; a sample network N is shown in Figure 1. The
number of input nodes (num-in) and number of output nodes (num-out) are fixed for a given
task; the number of hidden nodes as well as the connections among them are free to vary.
Self-links as well as general loops are allowed. Thus GNARL's search space is
N = {N : network N has num-in input nodes and num-out output nodes}.
In each epoch of search, the networks are ranked by a user-supplied fitness function f : N → R,
where R represents the reals. Reproduction of the best n/2 individuals entails modifying
both the weights and structure of each parent network N. First, the temperature T(N) is calculated:
T(N) = 1 − f(N) / f_max    (1)
where f_max (provided by the user) is the maximum possible fitness for a given task. This
measure of N's performance is used to anneal the structural and parametric (Barto, 1990)
similarity between parent and offspring, so that networks with a high temperature are
mutated severely, and those with a low temperature are mutated only slightly. This allows
a coarse-grained search initially, and a finer-grained search as a network approaches a solution (cf. Kirkpatrick et al., 1983).
More concretely, parametric mutations are accomplished by perturbing each weight with
gaussian noise, whose variance is T(N)^2:
w ← w + Normal(0; T(N)),  ∀w ∈ N    (2)
Structural mutations are accomplished by:
• adding k1 hidden nodes with probability Padd-node
• deleting k2 hidden nodes with probability Pdelete-node
• adding k3 links with probability Padd-link
• deleting k4 links with probability Pdelete-link
where each k_j is selected uniformly from a user-defined range, again annealed by T(N).
When a node is added, it is initialized without connections; when a node is deleted, all its
incident links are removed. All new links are initialized to 0. (See also Angeline, Saunders,
and Pollack, 1993.)
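To make the reproduction step concrete, here is a minimal sketch of one GNARL mutation as we read Equations (1)-(2); it is our own illustration, not the authors' code, and `parent` is a hypothetical network object exposing a weight dictionary and node/link editing operations.

```python
import copy
import random

def temperature(fitness, f_max):
    return 1.0 - fitness / f_max  # Eq. (1): hot networks mutate severely

def mutate(parent, fitness, f_max, k_max=3, p_add_node=0.1,
           p_delete_node=0.1, p_add_link=0.3, p_delete_link=0.3):
    child = copy.deepcopy(parent)
    T = temperature(fitness, f_max)
    for key in child.weights:  # parametric mutation, Eq. (2)
        child.weights[key] += random.gauss(0.0, T)
    k_bound = max(1, round(T * k_max))  # structural severity, annealed by T
    if random.random() < p_add_node:
        for _ in range(random.randint(1, k_bound)):
            child.add_node()  # new nodes start without connections
    if random.random() < p_delete_node:
        for _ in range(random.randint(1, k_bound)):
            child.delete_node()  # incident links are removed as well
    if random.random() < p_add_link:
        for _ in range(random.randint(1, k_bound)):
            child.add_link(weight=0.0)  # new links are initialized to 0
    if random.random() < p_delete_link:
        for _ in range(random.randint(1, k_bound)):
            child.delete_link()
    return child
```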
3 RESULTS
GNARL was tested on a simple control task - the Tracker task of Jefferson, et al. (1991)
and Koza (1992). In this problem, a simulated ant is placed on a two-dimensional toroidal
grid and must maximize the number of pieces of food it collects in a given time period (Figure 2a). Each ant is controlled by a network with two input nodes and four output nodes
(Figure 2b). At each step, the action whose corresponding output node has maximum activation is performed. Fitness is the number of grid positions cleared within 200 time steps.
The experiments used a population of 100 networks. In the first run (2090 generations),
GNARL found a network (Figure 3b) that cleared 81 grid positions within the 200 time
steps. Figure 4 shows the state of the output units of the network over three different sets
of inputs. Each point is a triple of the form (move, right, left). (No-op is not shown because
it was never used in the final network.) Figure 4a shows the result of supplying to the network
200 "food" inputs - a fixed point that executes "Move." Figure 4b shows the sequence
of states reached when 200 "no food" signals are supplied to the network - a collection of
points describing a limit cycle of length 5 that repeatedly executes the sequence "Right,
Right, Right, Right, Move." These two attractors determine the response of the network to
the task (Figure 4c,d); the additional points in Figure 4c are transients encountered as the
network alternates between these attractors.
However, not all evolved network behaviors are so simple as to approximate an FSA (Pollack, 1991). In a second run (1595 generations) GNARL induced a network that cleared 82
grid points within the 200 time steps. Figure 5 demonstrates the behavior of this network.
Once again, the "food" attractor, shown in Figure 5a, is a single point in the space that
always executes "Move." The "no food" behavior, however, is not an FSA; instead, it is a
[Figure 2 diagrams: (a) the food trail on the toroidal grid; (b) the ant network's inputs (Food, No food) and outputs (Move, Turn left, Turn right, No-op).]
Figure 2: The ant problem. (a) The trail is connected initially, but becomes progressively
more difficult to follow. The underlying 2-d grid is toroidal, so that position "P" is the first
break in the trail. The ellipse indicates the 7 pieces of food that the network of the second
run failed to reach. (b) The semantics of the I/O units for the ant network. The first input
node denotes the presence of food in the square directly in front of the ant; the second
denotes the absence of food in this same square. No-op, from Jefferson, allows the
network to stay in one position while activation flows through recurrent links. This
particular network "eats" 42 pieces of food before spinning endlessly in place at position
P, illustrating a very deep local minimum in the search space.
quasiperiodic trajectory of points shaped like a "D" in output space (Figure 5b). The placement of the "D" is in the "Move / Right" corner of the space and encodes a complex alternation between these two operations (Figure 5d).
4 CONCLUSIONS
Artificial architectural constraints (such as "feedforwardness") close the door on entire
classes of behavior; forced liberties (such as assumed full recurrence) may unnecessarily
increase structural complexity or learning time. By relying on a simple stochastic process,
GNARL strikes a middle ground between these two, allowing the network's complexity
and behavior to emerge in response to the demands of the task.
Acknowledgments
The research reported in this paper has been partially supported by Office of Naval
Research grants N00014-93-1-0059 and N00014-92-J-1195. We are indebted to all those
who read and reviewed this work, especially John Kolen, Ed Large, and Barbara Becker.
[Figure 3 diagrams: (a) and (b) show evolved network topologies; (c) shows the fixed structure with output units Move, Left, Right, No-op and inputs Food / No food.]
Figure 3: The Tracker Task, first run. (a) The best network in the initial population. Nodes
0-1 are input, nodes 5-8 are output, and nodes 2-4 are hidden nodes. (b) Network
induced by GNARL after 2090 generations. Forward links are dashed; bidirectional links
& loops are solid. The light gray connection between nodes 8 and 13 is the sole backlink.
This network clears the trail in 319 epochs. (c) Jefferson et al.'s fixed network structure for
the Tracker task.
References
Angeline, P., Saunders, G., Pollack, J. (1993). An evolutionary algorithm that constructs
recurrent neural networks. LAIR Technical Report 93-PA-GNARL, The Ohio State University, Columbus Ohio. To be published in IEEE Transactions on Neural Networks.
[Figure 4 plots in (Move, Right, Left) output space; panel (d) shows the ant's x position over time.]
Figure 4: Limit behavior of the network that clears the trail in 319 steps. Graphs show the
state of the output units Move, Right, Left. (a) Fixed point attractor that results for a
sequence of 200 "food" signals; (b) limit cycle attractor that results when a sequence of
200 "no food" signals is given to the network; (c) all states visited while traversing the trail;
(d) the x position of the ant over time when run on an empty grid.
Ash, T. (1989). "Dynamic node creation in backpropagation networks," Connection Science, 1:365-375.
Barto, A. G. (1990). Connectionist learning for control. In Miller, W. T. III, Sutton, R. S.,
and Werbos, P. J., editors, Neural Networks for Control. Chapter 1, pages 5-58. MIT Press,
Cambridge.
Belew, R. K., McInerney, J., and Schraudolph, N. N. (1990). Evolving networks: Using the
genetic algorithm with connectionist learning. Technical Report CS90-174, University of
California, San Diego.
[Figure 5 plots in (Move, Right, Left) output space; panel (d) shows the ant's x position over time.]
Figure 5: Limit behavior of the network of the second run. Graphs show the state of the
output units Move, Right, Left. (a) Fixed point attractor that results for a sequence of 500
"food" signals; (b) limit cycle attractor that results when a sequence of 500 "no food"
signals is given to the network; (c) all states visited while traversing the trail; (d) the x
position of the ant over time when run on an empty grid.
Chen, D., Giles, C., Sun, G., Chen, H., Lee, Y., and Goudreau, M. (1993). Constructive
learning of recurrent neural networks. IEEE International Conference on Neural Networks,
3:1196-1201.
Cun, Y. L., Denker, J., and Solla, S. (1990). Optimal brain damage. In Touretzky, D., editor,
Advances in Neural Information Processing Systems 2. Morgan Kaufmann.
Fahlman, S. and Lebiere, C. (1990). The cascade-correlation architecture. In Touretzky, D.
S., editor, Advances in Neural Information Processing Structures 2, pages 524-532. Morgan Kaufmann.
Fahlman, S. (1991). The recurrent cascade-correlation architecture. In Lippmann, R.,
Moody, J., and Touretzky, D., editors, Advances in Neural Information Processing Systems
3, pages 190-196. Morgan Kaufmann, San Mateo.
Fogel, D. (1992). Evolving Artificial Intelligence. Ph.D. thesis, University of California,
San Diego.
Fogel, D., Fogel, L., and Porto, V. W. (1990). Evolving neural networks. Biological Cybernetics, 63:487-493.
Hanson, S. J. (1990). Meiosis networks. In Touretzky, D., editor, Advances in Neural Information Processing Systems 2, pages 533-541. Morgan Kaufmann, San Mateo.
Hassibi, B. and Stork, D. G. (1993). Second order derivatives for network pruning: Optimal
brain surgeon. In Hanson, S. J., Cowan, J. D., and Giles, C. L., editors, Advances in Neural
Information Processing Systems 5, pages 164-171. Morgan Kaufmann.
Holland, J. (1975). Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI.
Jefferson, D., Collins, R., Cooper, C., Dyer, M., Flowers, M., Korf, R., Taylor, C., and
Wang, A. (1991). Evolution as a theme in artificial life: The genesys/tracker system. In
Langton, C. G., Taylor, C., Farmer, J. D., and Rasmussen, S., editors, Artificial Life II: Proceedings of the Workshop on Artificial Life, pages 549-577. Addison-Wesley.
Karunanithi, N., Das, R, and Whitley, D. (1992). Genetic cascade learning for neural networks. In Proceedings of COGANN-92 International Workshop on Combinations of
Genetic Algorithms and Neural Networks.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220:671-680.
Koza, J. (1992). Genetic evolution and co-evolution of computer programs. In Langton, C. G., Taylor, C., Farmer, J. D., and Rasmussen, S., editors, Artificial Life II. Addison-Wesley Publishing Company, Reading, Mass.
Mozer, M. and Smolensky, P. (1989). Skeletonization: A technique for trimming the fat
from a network via relevance assessment. In Touretzky, D., editor, Advances in Neural
Information Processing Systems 1, pages 107-115. Morgan Kaufmann, San Mateo.
Omlin, C. W. and Giles, C. L. (April 1993). Pruning recurrent neural networks for improved
generalization performance. Technical Report No. 93-6, Computer Science
Department, Rensselaer Polytechnic Institute.
Pollack, J. B. (1991). The induction of dynamical recognizers. Machine Learning, 7:227-252.
Potter, M. A. (1992). A genetic cascade-correlation learning algorithm. In Proceedings of
COGANN-92 International Workshop on Combinations of Genetic Algorithms and Neural
Networks.
Schaffer, J. D., Whitley, D., and Eshelman, L. J. (1992). Combinations of genetic algorithms and neural networks: A survey of the state of the art. In Proceedings of COGANN-92 International Workshop on Combinations of Genetic Algorithms and Neural Networks.
6,877 | 7,220 | When Worlds Collide: Integrating Different
Counterfactual Assumptions in Fairness
Chris Russell*
The Alan Turing Institute and
University of Surrey
[email protected]
Matt J. Kusner*
The Alan Turing Institute and
University of Warwick
[email protected]
Joshua R. Loftus†
New York University
[email protected]
Ricardo Silva
The Alan Turing Institute and
University College London
[email protected]
Abstract
Machine learning is now being used to make crucial decisions about people's lives.
For nearly all of these decisions there is a risk that individuals of a certain race,
gender, sexual orientation, or any other subpopulation are unfairly discriminated
against. Our recent method has demonstrated how to use techniques from counterfactual inference to make predictions fair across different subpopulations. This
method requires that one provides the causal model that generated the data at hand.
In general, validating all causal implications of the model is not possible without
further assumptions. Hence, it is desirable to integrate competing causal models to
provide counterfactually fair decisions, regardless of which causal "world" is the
correct one. In this paper, we show how it is possible to make predictions that are
approximately fair with respect to multiple possible causal models at once, thus
mitigating the problem of exact causal specification. We frame the goal of learning
a fair classifier as an optimization problem with fairness constraints entailed by
competing causal explanations. We show how this optimization problem can be
efficiently solved using gradient-based methods. We demonstrate the flexibility of
our model on two real-world fair classification problems. We show that our model
can seamlessly balance fairness in multiple worlds with prediction accuracy.
1 Introduction
Machine learning algorithms can do extraordinary things with data, from generating realistic images
from noise [7] to predicting what you will look like when you become older [18]. Today, governments
and other organizations make use of it in criminal sentencing [4], predicting where to allocate police
officers [3, 16], and to estimate an individual?s risk of failing to pay back a loan [8]. However, in
many of these settings, the data used to train machine learning algorithms contains biases against
certain races, sexes, or other subgroups in the population [3, 6]. Unwittingly, this discrimination is
then reflected in the predictions of such algorithms. Simply being born male or female can change an
individual's opportunities that follow from automated decision making trained to reflect historical
biases. The implication is that, without taking this into account, classifiers that maximize accuracy
risk perpetuating biases present in society.
* Equal contribution.
† This work was done while JL was a Research Fellow at the Alan Turing Institute.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
For instance, consider the rise of "predictive policing", described as "taking data from disparate
sources, analyzing them, and then using the results to anticipate, prevent and respond more effectively
to future crime" [16]. Today, 38% of U.S. police departments surveyed by the Police Executive
Research Forum are using predictive policing and 70% plan to in the next 2 to 5 years. However,
there have been significant doubts raised by researchers, journalists, and activists that if the data used
by these algorithms is collected by departments that have been biased against minority groups, the
predictions of these algorithms could reflect that bias [9, 12].
At the same time, fundamental mathematical results make it difficult to design fair classifiers. In
criminal sentencing the COMPAS score [4] predicts if a prisoner will commit a crime upon release,
and is widely used by judges to set bail and parole. While it has been shown that black and white
defendants with the same COMPAS score commit a crime at similar rates after being released [1], it
was also shown that black individuals were more often incorrectly predicted to commit crimes after
release by COMPAS than white individuals were [2]. In fact, except for very specific cases, it is
impossible to balance these measures of fairness [3, 10, 20].
The question becomes how to address the fact that the data itself may bias the learning algorithm and
even addressing this is theoretically difficult. One promising avenue is a recent approach, introduced
by us in [11], called counterfactual fairness. In this work, we model how unfairness enters a dataset
using techniques from causal modeling. Given such a model, we state whether an algorithm is fair if
it would give the same predictions had an individual?s race, sex, or other sensitive attributes been
different. We show how to formalize this notion using counterfactuals, following a rich tradition of
causal modeling in the artificial intelligence literature [15], and how it can be placed into a machine
learning pipeline. The big challenge in applying this work is that evaluating a counterfactual e.g.,
"What if I had been born a different sex?", requires a causal model which describes how your sex
changes your predictions, other things being equal.
Using "world" to describe any causal model evaluated at a particular counterfactual configuration,
we have dependent "worlds" within the same causal model that can never be jointly observed, and
possibly incompatible "worlds" across different models. Questions requiring the joint distribution of
counterfactuals are hard to answer, as they demand partially untestable "cross-world" assumptions
[5, 17], and even many of the empirically testable assumptions cannot be falsified from observational
data alone [14], requiring possibly infeasible randomized trials. Because of this, different experts as
well as different algorithms may disagree about the right causal model. Further disputes may arise
due to the conflict between accurately modeling unfair data and producing a fair result, or because
some degrees of unfairness may be considered allowable while others are not.
To address these problems, we propose a method for ensuring fairness within multiple causal models.
We do so by introducing continuous relaxations of counterfactual fairness. With these relaxations
in hand, we frame learning a fair classifier as an optimization problem with fairness constraints.
We give efficient algorithms for solving these optimization problems for different classes of causal
models. We demonstrate on three real-world fair classification datasets how our model is able to
simultaneously achieve fairness in multiple models while flexibly trading off classification accuracy.
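As a preview of that optimization framing, the sketch below is our own illustration, not the paper's released code; the penalty callables in `unfairness_fns` and the trade-off weight `lam` are placeholders for the world-specific relaxations developed later. It shows the penalized objective that a gradient-based method can minimize.

```python
import torch

def multi_world_objective(model, x, a, y, unfairness_fns, lam=1.0):
    """model(x, a) is assumed to return P(Y = 1 | x, a) in [0, 1];
    unfairness_fns holds one relaxed-unfairness penalty per causal world."""
    p = model(x, a).squeeze(-1)
    pred_loss = torch.nn.functional.binary_cross_entropy(p, y.float())
    penalty = sum(fn(model, x, a) for fn in unfairness_fns)
    return pred_loss + lam * penalty
```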
2 Background
We begin by describing aspects causal modeling and counterfactual inference relevant for modeling
fairness in data. We then briefly review counterfactual fairness [11], but we recommend that the
interested reader should read the original paper in full. We describe how uncertainty may arise over
the correct causal model and some difficulties with the original counterfactual fairness definition.
We will use A to denote the set of protected attributes, a scalar in all of our examples but which
without loss of generality can take the form of a set. Likewise, we denote as Y the outcome of
interest that needs to be predicted using a predictor Ŷ. Finally, we will use X to denote the set of
observed variables other than A and Y , and U to denote a set of hidden variables, which without loss
of generality can be assumed to have no observable causes in a corresponding causal model.
2.1 Causal Modeling and Counterfactual Inference
We will use the causal framework of Pearl [15], which we describe using a simple example. Imagine
we have a dataset of university students and we would like to model the causal relationships that
[Figure 1 diagrams: three candidate causal DAGs over A (race), J (job), S (study), U (motivation), and Y (graduated).]
Figure 1: Dark nodes correspond to observed variables and light nodes are unobserved. (Left) This
model predicts that both study S and motivation U directly cause graduation rate Y. However, this
model does not take into account how an individual's race may affect observed variables. (Center)
In this model, we encode how an individual's race may affect whether they need to have a job J
while attending university. (Right) We may wonder if there are further biases in society to expect
different rates of study for different races. We may also suspect that having a job may influence one's
graduation likelihood, independent of study.
lead up to whether a student graduates on time. In our dataset, we have information about whether
a student holds a job J, the number of hours they study per week S, and whether they graduate
Y. Because we are interested in modeling any unfairness in our data, we also have information
about a student's race A. Pearl's framework allows us to model causal relationships between these
variables and any postulated unobserved latent variables, such as some U quantifying how motivated
a student is to graduate. This uses a directed acyclic graph (DAG) with causal semantics, called
a causal diagram. We show a possible causal diagram for this example in Figure 1, (Left). Each
node corresponds to a variable and each set of edges into a node corresponds to a generative model
specifying how the ?parents? of that node causally generated it. In its most specific description, this
generative model is a functional relationship deterministically generating its output given a set of
observed and latent variables. For instance, one possible set of functions described by this model
could be as follows:
S = g(J, U) + ε,    Y = I[σ(h(S, U)) ≥ 0.5]    (1)
where g, h are arbitrary functions and I is the indicator function that evaluates to 1 if the condition
holds and 0 otherwise. Additionally, σ is the logistic function σ(a) = 1/(1 + exp(−a)), and ε is
drawn independently of all variables from the standard normal distribution N(0, 1). It is also possible
to specify non-deterministic relationships:
U ∼ N(0, 1),    S ∼ N(g(J, U), σ_S),    Y ∼ Bernoulli(σ(h(S, U)))    (2)
where σ_S is a model parameter. The power of this causal modeling framework is that, given a
fully-specified set of equations, we can compute what (the distribution of) any of the variables would
have been had certain other variables been different, other things being equal. For instance, given
the causal model we can ask "Would individual i have graduated (Y = 1) if they hadn't had a job?",
even if they did not actually graduate in the dataset. Questions of this type are called counterfactuals.
For any observed variables V, W we denote the value of the counterfactual "What would V have
been if W had been equal to w?" as V_{W←w}. Pearl et al. [15] describe how to compute these
counterfactuals (or, for non-deterministic models, how to compute their distribution) using three
steps: 1. Abduction: Given the set of observed variables X = {X1, . . . , Xd}, compute the values of
the set of unobserved variables U = {U1, . . . , Up} given the model (for non-deterministic models,
we compute the posterior distribution P(U|X)); 2. Action: Replace all occurrences of the variable
W with the value w in the model equations; 3. Prediction: Using the new model equations and U (or
P(U|X)), compute the value of V (or P(V|X)). This final step provides the value or distribution of
V_{W←w} given the observed, factual, variables.
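A small worked example may help. The sketch below instantiates the deterministic model of eq. (1) with illustrative choices of g and h (ours, not the paper's) and runs the three steps; for clarity, the latent U is passed in explicitly, whereas in practice it would first be recovered or inferred from the data.

```python
import math

def g(j, u):  return 10.0 * u - 5.0 * j        # illustrative: hours studied
def h(s, u):  return 0.3 * s + 2.0 * u - 4.0   # illustrative: graduation score
def sigma(a): return 1.0 / (1.0 + math.exp(-a))

def counterfactual_Y(s_obs, j_obs, u, j_new):
    eps = s_obs - g(j_obs, u)    # 1. Abduction: recover the exogenous noise
    s_cf = g(j_new, u) + eps     # 2. Action + 3. Prediction with J <- j_new
    return int(sigma(h(s_cf, u)) >= 0.5)

# A student with a job (J=1) who studied 12 hours, with motivation u = 0.9:
# counterfactual_Y(s_obs=12.0, j_obs=1, u=0.9, j_new=0) answers "would they
# have graduated had they not had a job?" (here: 1, i.e. yes).
```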
2.2 Counterfactual Fairness
In the above example, the university may wish to predict Y , whether a student will graduate, in
order to determine if they should admit them into an honors program. While the university prefers
to admit students who will graduate on time, it is willing to give a chance to some students without
a confident graduation prediction in order to remedy unfairness associated with race in the honors
program. The university believes that whether a student needs a job J may be influenced by their race.
As evidence they cite the National Center for Education Statistics, which reported³ that fewer (25%)
Asian-American students were employed while attending university as full-time students relative to
students of other races (at least 35%). We show the corresponding causal diagram for this in Figure 1
(Center). As having a job J affects study, which affects graduation likelihood Y, this may mean
different races take longer to graduate and thus unfairly have a harder time getting into the honors
program.
Counterfactual fairness aims to correct predictions of a label variable Y that are unfairly altered by
an individual's sensitive attribute A (race in this case). Fairness is defined in terms of counterfactuals:
Definition 1 (Counterfactual Fairness [11]). A predictor Ŷ of Y is counterfactually fair given the
sensitive attribute A = a and any observed variables X if
P(Ŷ_{A←a} = y | X = x, A = a) = P(Ŷ_{A←a′} = y | X = x, A = a)    (3)
for all y and a′ ≠ a.
In what follows, we will also refer to Ŷ as a function f(x, a) of hidden variables U, of (usually a
subset of) an instantiation x of X, and of protected attribute A. We leave U implicit in this notation
since, as we will see, this set might differ across different competing models. The notation implies
Ŷ_{A←a} = f(x_{A←a}, a).    (4)
Notice that if counterfactual fairness holds exactly for Ŷ, then this predictor can only be a non-trivial
function of X for those elements X ∈ X such that X_{A←a} = X_{A←a′}. Moreover, by construction
U_{A←a} = U_{A←a′}, as each element of U is defined to have no causes in A ∪ X.
The probabilities in eq. (3) are given by the posterior distribution over the unobserved variables
P(U | X = x, A = a). Hence, a counterfactual Ŷ_{A←a} may be deterministic if this distribution is
degenerate, that is, if U is a deterministic function of X and A. One nice property of this definition
is that it is easy to interpret: a decision is fair if it would have been the same had a person had a
different A (e.g., a different race⁴), other things being equal. In [11], we give an efficient algorithm
for designing a predictor that is counterfactually fair. In the university graduation example, a predictor
constructed from the unobserved motivation variable U is counterfactually fair.
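Definition 1 also suggests a direct empirical check, sketched below under the assumption of a deterministic world; `counterfactual_features` is a hypothetical helper implementing the abduction-action-prediction recipe of Section 2.1 for the postulated causal model.

```python
def is_counterfactually_fair(predictor, individuals, attribute_values,
                             counterfactual_features, tol=0.0):
    """Returns True if the predictor's output is (numerically) invariant to
    counterfactual changes of the protected attribute for every individual."""
    for x, a in individuals:
        outputs = [predictor(counterfactual_features(x, a, a_cf), a_cf)
                   for a_cf in attribute_values]
        if max(outputs) - min(outputs) > tol:
            return False
    return True
```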
One difficulty of the definition of counterfactual fairness is that it requires one to postulate causal
relationships between variables, including latent variables that may be impractical to measure directly.
In general, different causal models will create different fair predictors Ŷ. But there are several reasons
why it may be unrealistic to assume that any single, fixed causal model will be appropriate. There
may not be a consensus among experts or previous literature about the existence, functional form,
direction, or magnitude of a particular causal effect, and it may be impossible to determine these from
the available data without untestable assumptions. And given the sensitive, even political nature of
problems involving fairness, it is also possible that disputes may arise over the presence of a feature
of the causal model, based on competing notions of dependencies and latent variables. Consider the
following example, formulated as a dispute over the presence of edges. For the university graduation
model, one may ask if differences in study are due only to differences in employment, or whether
instead there is some other direct effect of A on study levels. Also, having a job may directly affect
graduation likelihood. We show these changes to the model in Figure 1 (Right). There is also potential
for disagreement over whether some causal paths from A to graduation should be excluded from the
definition of fairness. For example, an adherent to strict meritocracy may argue that the number of hours
a student has studied should not be given a counterfactual value. This could be incorporated in a
separate model by omitting chosen edges when propagating counterfactual information through the
graph in the Prediction step of counterfactual inference⁵. To summarize, there may be disagreements
about the right causal model due to: 1. Changing the structure of the DAG, e.g. adding an edge; 2.
Changing the latent variables, e.g. changing the function generating a vertex to have a different signal
vs. noise decomposition; 3. Preventing certain paths from propagating counterfactual values.
³ https://nces.ed.gov/programs/coe/indicator_ssa.asp
⁴ At the same time, the notion of a "counterfactual race," sex, etc. often raises debate. See [11] for our take on this.
⁵ In the Supplementary Material of [11], we explain how counterfactual fairness can be restricted to particular paths from A to Y, as opposed to all paths.
3 Fairness under Causal Uncertainty

In this section, we describe a technique for learning a fair predictor without knowing the true causal
model. We first describe why, in general, counterfactual fairness will often not hold across multiple
different models. We then describe a relaxation of the definition of counterfactual fairness for
both deterministic and non-deterministic models. Finally we show an efficient method for learning
classifiers that are simultaneously accurate and fair in multiple worlds. In all that follows we denote
sets in calligraphic script X , random variables in uppercase X, scalars in lowercase x, matrices in
bold uppercase X, and vectors in bold lowercase x.
3.1 Exact Counterfactual Fairness Across Worlds
We can imagine extending the definition of counterfactual fairness so that it holds for every plausible
causal world. To see why this is inherently difficult consider the setting of deterministic causal
models. If each causal model of the world generates different counterfactuals then each additional
model induces a new set of constraints that the classifier must satisfy, and in the limit the only
classifiers that are fair across all possible worlds are constant classifiers. For non-deterministic
counterfactuals, these issues are magnified. To guarantee counterfactual fairness, Kusner et al.
[11] assumed access to latent variables that hold the same value in an original datapoint and in its
corresponding counterfactuals. While the latent variables of one world can remain constant under the
generation of counterfactuals from its corresponding model, there is no guarantee that they remain
constant under the counterfactuals generated from different models. Even in a two-model case, if the
p.d.f. of one model's counterfactual has non-zero density everywhere (as is the case under Gaussian
noise assumptions), it may be the case that the only classifiers that satisfy counterfactual fairness
for both worlds are the constant classifiers. If we are to achieve some measure of fairness from
informative classifiers, and over a family of different worlds, we need a more robust alternative to
counterfactual fairness.
3.2 Approximate Counterfactual Fairness
We define two approximations to counterfactual fairness to solve the problem of learning a fair
classifier across multiple causal worlds.
Definition 2 ((ε, δ)-Approximate Counterfactual Fairness). A predictor f(𝒳, A) satisfies (ε, 0)-approximate counterfactual fairness ((ε, 0)-ACF) if, given the sensitive attribute A = a and any
instantiation x of the other observed variables X, we have that:

|f(x_{A←a}, a) − f(x_{A←a'}, a')| ≤ ε    (5)

for all a' ≠ a, if the system deterministically implies the counterfactual values of X. For a non-deterministic causal system, f satisfies (ε, δ)-approximate counterfactual fairness ((ε, δ)-ACF) if:

P( |f(X_{A←a}, a) − f(X_{A←a'}, a')| ≤ ε | X = x, A = a ) ≥ 1 − δ    (6)

for all a' ≠ a.
Both definitions must hold uniformly over the sample space of X × A. The probability measures used
are with respect to the conditional distribution of the background latent variables U given the observations.
We leave a discussion of the statistical asymptotic properties of such plug-in estimators for future work.
These definitions relax counterfactual fairness to ensure that, for deterministic systems, predictions f
change by at most ε when an input is replaced by its counterfactual. For non-deterministic systems,
the condition in (6) means that this change must occur with high probability, where the probability is
again given by the posterior distribution P(U | X) computed in the Abduction step of counterfactual
inference. If ε = 0, the deterministic definition in eq. (5) is equivalent to the original counterfactual
fairness definition. If additionally δ = 0, the non-deterministic definition in eq. (6) is actually a stronger condition
than the counterfactual fairness definition in eq. (3), as it guarantees equality in probability instead of
equality in distribution⁶.
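As a minimal sketch of how the condition in eq. (6) can be checked empirically, the function below takes S paired posterior samples of factual and counterfactual features for one individual and tests the (ε, δ)-ACF condition; the classifier f is assumed to be vectorized, and the array names are hypothetical. For a deterministic system one would pass a single counterfactual (S = 1) with δ = 0:

```python
import numpy as np

def satisfies_acf(f, x_samples, a, x_cf_samples, a_cf, eps, delta):
    """Empirical check of eq. (6) for one individual.

    x_samples    : (S, d) posterior samples of X under A <- a
    x_cf_samples : (S, d) matching posterior samples of X under A <- a'
    f            : vectorized classifier mapping (features, attribute) -> score
    """
    gaps = np.abs(f(x_samples, a) - f(x_cf_samples, a_cf))
    # The fraction of posterior samples whose score gap exceeds eps must be <= delta
    return np.mean(gaps > eps) <= delta
```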
⁶ In the Supplementary Material of [11], we describe in more detail the implications of the stronger condition.
Algorithm 1 Multi-World Fairness
1: Input: features X = [x_1, ..., x_n], labels y = [y_1, ..., y_n], sensitive attributes a = [a_1, ..., a_n], fairness parameters (ε, δ), trade-off parameters Λ = [λ_1, ..., λ_l].
2: Fit causal models: M_1, ..., M_m using X, a (and possibly y).
3: Sample counterfactuals: X_{A_1←a'}, ..., X_{A_m←a'} for all unobserved values a'.
4: for λ ∈ Λ do
5:    Initialize classifier f_λ.
6:    while not converged do
7:       Select a random batch X_b of inputs and the matching batches of counterfactuals X_{A_1←a'}, ..., X_{A_m←a'}.
8:       Compute the gradient of equation (7).
9:       Update f_λ using any stochastic gradient optimization method.
10:   end while
11: end for
12: Select model f_λ: For deterministic models, select the smallest λ such that equation (5) holds using f_λ. For non-deterministic models, select the λ that corresponds to δ given f_λ.
3.3 Learning a Fair Classifier
Assume we are given a dataset of n observations a = [a_1, ..., a_n] of the sensitive attribute A
and of other features X = [x_1, ..., x_n] drawn from 𝒳. We wish to accurately predict a label Y
given observations y = [y_1, ..., y_n] while also satisfying (ε, δ)-approximate counterfactual fairness.
We learn a classifier f(x, a) by minimizing a loss function ℓ(f(x, a), y). At the same time, we
incorporate an unfairness term φ_j(f, x, a, a') for each causal model j to reduce the unfairness in f.
We formulate this as a penalized optimization problem:

min_f  (1/n) ∑_{i=1}^n ℓ(f(x_i, a_i), y_i) + (λ/n) ∑_{j=1}^m ∑_{i=1}^n ∑_{a'≠a_i} φ_j(f, x_i, a_i, a')    (7)

where λ trades off classification accuracy for multi-world fair predictions. We show how to naturally
define the unfairness function φ_j for deterministic and non-deterministic counterfactuals.
Deterministic counterfactuals. To enforce (ε, 0)-approximate counterfactual fairness, a natural
penalty for unfairness is an indicator function which is one whenever (ε, 0)-ACF does not hold, and
zero otherwise:

φ_j(f, x_i, a_i, a') := I[ |f(x_{i,A_j←a_i}, a_i) − f(x_{i,A_j←a'}, a')| > ε ]    (8)

Unfortunately, the indicator function I is non-convex, discontinuous, and difficult to optimize. Instead,
we propose to use the tightest convex relaxation of the indicator function:

φ_j(f, x_i, a_i, a') := max{0, |f(x_{i,A_j←a_i}, a_i) − f(x_{i,A_j←a'}, a')| − ε}    (9)

Note that when (ε, 0)-approximate counterfactual fairness is not satisfied, φ_j is non-zero and thus the
optimization problem will penalize f for this unfairness. Where (ε, 0)-approximate counterfactual
fairness is satisfied, φ_j evaluates to 0 and does not affect the objective. For sufficiently large λ,
the value of φ_j will dominate the training loss (1/n) ∑_{i=1}^n ℓ(f(x_i, a_i), y_i), and any solution will satisfy
(ε, 0)-approximate counterfactual fairness. However, an overly large choice of λ causes numeric
instability, and will decrease the accuracy of the classifier found. Thus, to find the most accurate
classifier that satisfies the fairness condition, one can simply perform a grid or binary search for the
smallest λ such that the condition holds.
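A minimal sketch of the hinge relaxation in eq. (9); all names are hypothetical and f is assumed to be a vectorized scoring function:

```python
import numpy as np

def unfairness_penalty(f, x_cf_a, a, x_cf_a2, a2, eps):
    """Convex relaxation (9): hinge on the eps-gap between counterfactual scores.

    x_cf_a  : features of the counterfactual x_{A <- a_i} for each individual
    x_cf_a2 : features of the counterfactual x_{A <- a'} for each individual
    """
    gap = np.abs(f(x_cf_a, a) - f(x_cf_a2, a2))
    return np.maximum(0.0, gap - eps)  # zero exactly when (eps, 0)-ACF holds
```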
Non-deterministic counterfactuals. For non-deterministic counterfactuals we begin by writing a
Monte Carlo approximation to (ε, δ)-ACF, eq. (6), as follows:

(1/S) ∑_{s=1}^S I( |f(x^s_{A_j←a_i}, a_i) − f(x^s_{A_j←a'}, a')| > ε ) ≤ δ    (10)

where x^s is sampled from the posterior distribution P(U | X). We can again form the tightest convex
relaxation of the left-hand side of the expression to yield our unfairness function:

φ_j(f, x_i, a_i, a') := (1/S) ∑_{s=1}^S max{0, |f(x^s_{i,A_j←a_i}, a_i) − f(x^s_{i,A_j←a'}, a')| − ε}    (11)
Note that different choices of λ in eq. (7) correspond to different values of δ. Indeed, by choosing
λ = 0 we have the (ε, δ)-fair classifier corresponding to an unfair classifier⁷, while a sufficiently
large, but finite, λ will correspond to an (ε, 0)-approximately counterfactually fair classifier. By varying
λ between these two extremes, we induce classifiers that satisfy (ε, δ)-ACF for different values of δ.
With these unfairness functions we have a differentiable optimization problem, eq. (7), which can be
solved with gradient-based methods. Thus, our method allows practitioners to smoothly trade off
accuracy against multi-world fairness. We call our method Multi-World Fairness (MWF). We give a
complete method for learning a MWF classifier in Algorithm 1.
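A minimal PyTorch sketch of one way to optimize the penalized objective (7) for a linear classifier; the tensor names, the squared loss, and the use of the factual score in place of f(x_{A←a_i}) are simplifying assumptions for illustration, not the paper's exact procedure:

```python
import torch

def train_mwf(x, a, y, x_cfs, a_cfs, lam, eps=0.1, epochs=200):
    # x_cfs / a_cfs: one (n, d) / (n, 1) tensor of counterfactuals per causal world
    model = torch.nn.Linear(x.shape[1] + 1, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        f = model(torch.cat([x, a], dim=1)).squeeze(-1)
        loss = ((f - y) ** 2).mean()                 # prediction loss
        for x_cf, a_cf in zip(x_cfs, a_cfs):         # unfairness term per world j
            f_cf = model(torch.cat([x_cf, a_cf], dim=1)).squeeze(-1)
            loss = loss + lam * torch.clamp((f - f_cf).abs() - eps, min=0).mean()
        loss.backward()
        opt.step()
    return model
```

Following Algorithm 1, one would train such a model for each λ on a grid and keep the smallest λ for which the (ε, δ) constraint holds.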
For both deterministic and non-deterministic models, this convex approximation essentially describes
an expected unfairness that is allowed by the classifier:

Definition 3 (Expected ε-Unfairness). For any counterfactual a' ≠ a, the Expected ε-Unfairness of
a classifier f, or E_ε[f], is

E[ max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} | X = x, A = a ]    (12)

where the expectation is over any unobserved U (and is degenerate for deterministic counterfactuals).

We note that the term max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} is non-negative, and therefore
the expected ε-unfairness is zero if and only if f satisfies (ε, 0)-approximate counterfactual fairness
almost everywhere.
Linear Classifiers and Convexity. Although we have presented these results in their most general
form, it is worth noting that for linear classifiers, convexity guarantees are preserved. The family of
linear classifiers we consider is relatively broad, and consists of those classifiers linear in their learned weights w;
as such it includes both SVMs and a variety of regression methods used in conjunction with kernels
or finite polynomial bases.

Consider any classifier whose output is linear in the learned parameters, i.e., the family of classifiers
f of the form f(𝒳, A) = ∑_l w_l g_l(𝒳, a), for a set of fixed kernels g_l. Then the expected
ε-unfairness is a convex function of w, taking the form:

E[ max{0, |f(X_{A←a}, a) − f(X_{A←a'}, a')| − ε} ]
  = E[ max{0, |∑_l w_l (g_l(X_{A←a}, a) − g_l(X_{A←a'}, a'))| − ε} ]    (13)

This expression is convex in w and therefore, if the classification loss is also convex (as is the case for
most regression tasks), a global optimum can be readily found via convex programming. In particular,
globally optimal linear classifiers satisfying (ε, 0)-ACF or (ε, δ)-ACF can be found efficiently.
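For instance, a hypothetical CVXPY sketch of the resulting convex program for a linear predictor; the feature-matrix names are assumptions, with Phi_cf and Phi_cf2 holding the kernel features g_l of the paired counterfactuals:

```python
import cvxpy as cp

def fit_linear_acf(Phi, y, Phi_cf, Phi_cf2, lam, eps):
    w = cp.Variable(Phi.shape[1])
    fit = cp.sum_squares(Phi @ w - y)
    # |f(x_cf) - f(x_cf')| is convex in w since f is linear in w
    gap = cp.abs((Phi_cf - Phi_cf2) @ w)
    unfair = cp.sum(cp.pos(gap - eps))   # hinge on the eps-gap, still convex
    cp.Problem(cp.Minimize(fit + lam * unfair)).solve()
    return w.value
```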
Bayesian alternatives and their shortcomings. One may argue that a more direct alternative is to
assign probabilities to each world and to marginalize the set of optimal counterfactually
fair classifiers over all possible worlds. We argue this is undesirable for two reasons: first, the averaged
prediction for any particular individual may violate (3) by an undesirable margin for one, several, or
even all of the considered worlds; second, a practitioner may be required by regulations to show that, to
the best of their knowledge, the worst-case violation is bounded across all viable worlds with high
probability. Nevertheless, if the number of possible models is extremely large (for example, if the causal
structure of the world is known but the associated parameters are not) and we have a probability
associated with each world, then one natural extension is to adapt the Expected ε-Unfairness, eq. (12), to
marginalize over the space of possible worlds. We leave this extension to future work.
4 Experiments
We demonstrate the flexibility of our method on two real-world fair classification problems: 1. fair
predictions of student performance in law schools; and 2. predicting whether criminals will re-offend
upon being released. For each dataset we begin by giving details of the fair prediction problem. We
then introduce multiple causal models that each possibly describe how unfairness plays a role in the
data. Finally, we give results of Multi-World Fairness (MWF) and show how it changes for different
settings of the fairness parameters (ε, δ).

⁷ In the worst case, δ may equal 1.
[Figure 2: Causal models for the law school and COMPAS datasets; shaded nodes are observed and unshaded nodes are unobserved. Law school (Left/Center): race A, GPA G, entrance exam score L (LSAT), and grade Y, with either additive unobserved variables ε_G, ε_L, ε_Y (the first, deterministic causal model) or a latent knowledge variable U (the second, non-deterministic causal model). COMPAS (Right): race A, age E, juvenile felonies J_F, juvenile misdemeanors J_M, type of crime T, number of priors P, and COMPAS score C, with latent variables U_J (juvenile criminality) and U_D (adult criminality); the first causal model includes the dotted edges and the second omits them, and both models are non-deterministic. The large white arrows signify that the variables A and E are connected to every variable contained in the box they point to. The law school model equations are given in eq. (14) and the COMPAS model equations are shown in eq. (15).]
4.1 Fairly predicting law grades

We begin by investigating a dataset of survey results across 163 U.S. law schools conducted by the
Law School Admission Council [19]. It contains information on over 20,000 students, including their
race A (here we look at just black and white students, as this difference had the largest effect on
counterfactuals in [11]), their grade-point average G obtained prior to law school, law school entrance
exam scores L, and their first-year average grade Y.

[Figure 3: Test RMSE for different ε on the law school dataset, comparing the constant predictor, the unfair predictor (λ = 0), and multi-world fairness for ε ∈ {0.1, 0.3, 0.5}.]

Consider that law schools may be interested in predicting Y for all applicants to law school using G and L in order to decide
whether to accept or deny them entrance. However, due to societal inequalities, an individual's race
may have affected their access to educational opportunities, and thus affected G and L. Accordingly,
we model this possibility using the causal graphs in Figure 2 (Left). In this graph we also model the
fact that G and L may have been affected by other unobserved quantities. However, we may be uncertain
about the right way to model these unobserved quantities. Thus we propose to model this
dataset with the two worlds described in Figure 2 (Left). Note that these are the same models as used
in Kusner et al. [11] (except that here we consider race as the sensitive variable). The corresponding
equations for these two worlds are as follows:

First world (deterministic, additive errors):
    G = b_G + w_G^A A + ε_G
    L = b_L + w_L^A A + ε_L
    Y = b_Y + w_Y^A A + ε_Y
    ε_G, ε_L, ε_Y ∼ N(0, 1)

Second world (non-deterministic, latent U):
    G ∼ N(b_G + w_G^A A + w_G^U U, σ_G)
    L ∼ Poisson(exp(b_L + w_L^A A + w_L^U U))
    Y ∼ N(w_Y^A A + w_Y^U U, 1)
    U ∼ N(0, 1)                                    (14)

where the variables b, w are parameters of the causal model.
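A minimal NumPy sketch of forward-sampling these two worlds, with all parameter values chosen arbitrarily for illustration:

```python
import numpy as np

b_G, b_L, b_Y = 0.0, 1.0, 0.0           # illustrative parameter values
wA_G, wA_L, wA_Y = -0.3, -0.2, -0.1
wU_G, wU_L, wU_Y, sigma_G = 0.5, 0.4, 0.6, 1.0

def sample_world1(a, rng):              # deterministic world, additive errors
    g = b_G + wA_G * a + rng.standard_normal(a.shape)
    l = b_L + wA_L * a + rng.standard_normal(a.shape)
    y = b_Y + wA_Y * a + rng.standard_normal(a.shape)
    return g, l, y

def sample_world2(a, rng):              # non-deterministic world with latent U
    u = rng.standard_normal(a.shape)
    g = rng.normal(b_G + wA_G * a + wU_G * u, sigma_G)
    l = rng.poisson(np.exp(b_L + wA_L * a + wU_L * u))
    y = rng.normal(wA_Y * a + wU_Y * u, 1.0)
    return g, l, y, u
```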
Results. Figure 3 shows the result of learning a linear MWF classifier on the law school models.
We split the law school data into a random 80/20 train/test split; we fit causal
models and classifiers on the training set and evaluate performance on the test set. We plot the test
RMSE of the constant predictor satisfying counterfactual fairness in red, the unfair predictor with
λ = 0, and MWF, averaged across 5 runs. As we have one deterministic and one non-deterministic
model, we evaluate MWF for different ε and δ (with the knowledge that the only change in the
MWF classifier for different δ is due to the non-deterministic model). For each ε, δ, we selected the
smallest λ across a grid (λ ∈ {10⁻⁵, 10⁻⁴, ..., 10¹⁰}) such that the constraint in eq. (6) held across
95% of the individuals in both models. We see that MWF is able to reliably sacrifice accuracy for
fairness as ε is reduced. Note that as we change δ we can further alter the accuracy/fairness trade-off.
[Figure 4: Test accuracy for different ε and δ on the COMPAS dataset. Each panel corresponds to one value of ε ∈ {0.1, 0.3, 0.4, 0.5}, with δ varying, comparing MWF (linear), MWF (deep), unfair (linear), unfair (deep), and the constant classifier.]
4.2 Fair recidivism prediction (COMPAS)
We next turn our attention to predicting whether a criminal will re-offend, or "recidivate," after being
released from prison. ProPublica [13] released data on prisoners in Broward County, Florida, who
were awaiting a sentencing hearing. For each of the prisoners we have information on their race A
(as above, we only consider black and white individuals), their age E, their number of juvenile
felonies J_F, juvenile misdemeanors J_M, the type of crime they committed T, the number of prior
offenses they have P, and whether they recidivated Y. There is also a proprietary COMPAS score
[13] C designed to indicate the likelihood that a prisoner recidivates.
We model this dataset with two different non-deterministic causal models, shown in Figure 2 (Right).
The first model includes the dotted edges, the second omits them. In both models we believe that
two unobserved latent factors, juvenile criminality U_J and adult criminality U_D, also contribute to
J_F, J_M, C, T, P. We show the equations for both of our causal models below, where the first causal
model includes the terms corresponding to the dotted edges (shown in blue in the original) and the second does not:

    T ∼ Bernoulli(σ(b_T + w_T^{U_D} U_D + w_T^E E + w_T^A A))
    C ∼ N(b_C + w_C^{U_D} U_D + w_C^E E + w_C^A A + w_C^T T + w_C^P P + w_C^{J_F} J_F + w_C^{J_M} J_M, σ_C)
    P ∼ Poisson(exp(b_P + w_P^{U_D} U_D + w_P^E E + w_P^A A))
    J_F ∼ Poisson(exp(b_{J_F} + w_{J_F}^{U_J} U_J + w_{J_F}^E E + w_{J_F}^A A))
    J_M ∼ Poisson(exp(b_{J_M} + w_{J_M}^{U_J} U_J + w_{J_M}^E E + w_{J_M}^A A))
    [U_J, U_D] ∼ N(0, Σ)                              (15)
Results. Figure 4 shows how classification accuracy using both logistic regression (linear) and
a 3-layer neural network (deep) changes as both ε and δ change. We split the COMPAS dataset
randomly into an 80/20 train/test split, and report all results on the test set. As in the law school
experiment, we grid-search over λ to find the smallest value such that, for any ε and δ, the (ε, δ)-ACF
constraint in eq. (6) is satisfied for at least 95% of the individuals in the dataset, across both worlds.
We average all results except the constant classifier over 5 runs and plot the mean and standard
deviations. We see that for small ε (high fairness) both linear and deep MWF classifiers significantly
outperform the constant classifier and begin to approach the accuracy of the unfair classifier as ε
increases. As we increase δ (lower fairness) the deep classifier is better able to learn a decision
boundary that trades off accuracy for fairness. But if ε, δ are increased enough (e.g., ε ≥ 0.13, δ = 0.5),
the linear MWF classifier matches the performance of the deep classifier.
5 Conclusion
This paper has presented a natural extension to counterfactual fairness that allows us to guarantee fair
properties of algorithms, even when we are unsure of the causal model that describes the world.
As the use of machine learning becomes widespread across many domains, it becomes more important
to take algorithmic fairness out of the hands of experts and make it available to everybody. The
conceptual simplicity of our method, our robust use of counterfactuals, and the ease of implementing
our method mean that it can be directly applied to many interesting problems. A further benefit of
our approach over previous work on counterfactual fairness is that our approach only requires the
estimation of counterfactuals at training time, and no knowledge of latent variables during testing. As
such, our classifiers offer a fair drop-in replacement for other existing classifiers.
6 Acknowledgments

This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. CR acknowledges additional support under the EPSRC Platform Grant EP/P022529/1.
References
[1] COMPAS risk scales: Demonstrating accuracy equity and predictive parity performance of the COMPAS risk scales in Broward County, 2016.
[2] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, 2016. Accessed: Fri 19 May 2017.
[3] Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: The state of the art. arXiv preprint arXiv:1703.09207, 2017.
[4] Tim Brennan, William Dieterich, and Beate Ehret. Evaluating the predictive validity of the COMPAS risk and needs assessment system. Criminal Justice and Behavior, 36(1):21–40, 2009.
[5] A. P. Dawid. Causal inference without counterfactuals. Journal of the American Statistical Association, pages 407–448, 2000.
[6] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226. ACM, 2012.
[7] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[8] Amir E Khandani, Adlar J Kim, and Andrew W Lo. Consumer credit-risk models via machine-learning algorithms. Journal of Banking & Finance, 34(11):2767–2787, 2010.
[9] Keith Kirkpatrick. It's not the algorithm, it's the data. Communications of the ACM, 60(2):21–23, 2017.
[10] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016.
[11] Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. Advances in Neural Information Processing Systems, 31, 2017.
[12] Moish Kutnowski. The ethical dangers and merits of predictive policing. Journal of Community Safety and Well-Being, 2(1):13–17, 2017.
[13] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the COMPAS recidivism algorithm. ProPublica (5 2016), 2016.
[14] David Lopez-Paz. From dependence to causation. arXiv preprint arXiv:1607.03300, 2016.
[15] J. Pearl, M. Glymour, and N. Jewell. Causal Inference in Statistics: a Primer. Wiley, 2016.
[16] Beth Pearsall. Predictive policing: The future of law enforcement. National Institute of Justice Journal, 266(1):16–19, 2010.
[17] T.S. Richardson and J. Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Working Paper Number 128, Center for Statistics and the Social Sciences, University of Washington, 2013.
[18] Paul Upchurch, Jacob Gardner, Kavita Bala, Robert Pless, Noah Snavely, and Kilian Weinberger. Deep feature interpolation for image content changes. arXiv preprint arXiv:1611.05507, 2016.
[19] Linda F Wightman. LSAC national longitudinal bar passage study. LSAC research report series, 1998.
[20] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. arXiv preprint arXiv:1610.08452, 2016.
Matrix Norm Estimation from a Few Entries
Ashish Khetan
Department of ISE
University of Illinois Urbana-Champaign
[email protected]
Sewoong Oh
Department of ISE
University of Illinois Urbana-Champaign
[email protected]
Abstract
Singular values of data in matrix form provide insights on the structure of
the data, the effective dimensionality, and the choice of hyper-parameters on
higher-level data analysis tools. However, in many practical applications such
as collaborative filtering and network analysis, we only get a partial observation.
Under such scenarios, we consider the fundamental problem of recovering various
spectral properties of the underlying matrix from a sampling of its entries. We
propose a framework of first estimating the Schatten k-norms of a matrix for
several values of k, and using these as surrogates for estimating spectral properties
of interest, such as the spectrum itself or the rank. This paper focuses on the
technical challenges in accurately estimating the Schatten norms from a sampling
of a matrix. We introduce a novel unbiased estimator based on counting small
structures in a graph and provide guarantees that match its empirical performances.
Our theoretical analysis shows that Schatten norms can be recovered accurately
from strictly smaller number of samples compared to what is needed to recover the
underlying low-rank matrix. Numerical experiments suggest that we significantly
improve upon a competing approach of using matrix completion methods.
1 Introduction
Computing and analyzing the set of singular values of data in matrix form, which is called the
spectrum, provide insights into the geometry and topology of the data. Such a spectral analysis
is routinely a first step in general data analysis with the goal of checking if there exists a lower
dimensional subspace explaining the important aspects of the data, which itself might be high
dimensional. Concretely, it is a first step in dimensionality reduction methods such as principal
component analysis or canonical correlation analysis.
However, spectral analysis becomes challenging in practical scenarios where the data is only partially
observed. We commonly observe pairwise relations of randomly chosen pairs: each user only rates a
few movies in recommendation systems, and each player/team only plays against a few opponents in
sports. In other applications, we have more structured samples. For example, in a network analysis
we might be interested in the spectrum of the adjacency matrix of a large network, but only get
to see the connections within a small subset of nodes. Whatever the sampling pattern is, typical
number of paired relations we observe is significantly smaller than the dimension of the data matrix.
We study all such variations in sampling patterns for partially observed data matrices, and ask the
following fundamental question: can we estimate spectral properties of a data matrix from partial
observations? We build on the fact that several spectral properties of interest, such as the spectrum
itself or the rank, can be estimated accurately via first estimating the Schatten k-norms of a matrix
and then aggregating those norms to estimate the spectral properties. In this paper, we focus on the
challenging task of estimating the Schatten k-norms, defined as ‖M‖_k = (∑_{i=1}^d σ_i(M)^k)^{1/k}, where
σ_1(M) ≥ ··· ≥ σ_d(M) are the singular values of the data matrix M ∈ ℝ^{d×d}. Once we obtain accurate
estimates of Schatten k-norms, these estimates, as well as corresponding performance guarantees,
can readily be translated into accurate estimates of the spectral properties of interest.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1 Setup
We want to estimate the Schatten k-norm of a positive semidefinite matrix M ∈ ℝ^{d×d} from a
subset of its entries. The restriction to positive semidefinite matrices is for notational convenience,
and our analyses, the estimator, and the efficient algorithms naturally generalize to any non-square
matrices. Namely, we can extend our framework to bipartite graphs and estimate the Schatten k-norm
of any matrix for any even k. Let Ω denote the set of indices of the samples we are given and let
P_Ω(M) = {(i, j, M_ij)}_{(i,j)∈Ω} denote the samples. With a slight abuse of notation, we use P_Ω(M)
to also denote the d × d sampled matrix:

P_Ω(M)_ij = M_ij if (i, j) ∈ Ω, and 0 otherwise,
and it should be clear from the context which one we refer to. Although we propose a framework that
generally applies to any probabilistic sampling, it is necessary to consider specific sampling scenarios
to provide tight analyses of the performance. Hence, we focus on Erdős–Rényi sampling.

There is an extensive line of research on low-rank matrix completion problems [3, 11], which addresses
the fundamental question of how many samples are required to complete a matrix (i.e. estimate all
the missing entries) from a small subset of sampled entries. It is typically assumed that each entry
of the matrix is sampled independently with a probability p ∈ (0, 1]. We refer to this scenario as
Erdős–Rényi sampling, as the resulting pattern of the samples encoded as a graph is distributed as an
Erdős–Rényi random graph. The spectral properties of such a sampled matrix have been well studied
in the literature [7, 1, 6, 11, 14]. In particular, it is known that the original matrix is close in spectral
norm to the sampled one where the missing entries are filled in with zeros and properly rescaled, under
certain incoherence assumptions. This suggests using the singular values of (d²/|Ω|)P_Ω(M) directly
for estimating the Schatten norms. However, in the sub-linear regime, in which the number of samples
|Ω| = d²p is comparable to or significantly smaller than the degrees of freedom in representing
a symmetric rank-r matrix, which is dr − r², the spectrum of the sampled matrix is significantly
different from the spectrum of the original matrix, as shown in Figure 1. We need to design novel
estimators that are more sample efficient in the sub-linear regime where d²p ≪ dr.
[Figure 1: Histogram of (positive) singular values σ_i(M) of M with rank r = 100 (in yellow), and singular values of the (rescaled) sampled matrix σ_i(P_Ω(M)) (in black).]
1.2 Summary of the approach and preview of results
We propose using an alternative expression of the Schatten k-norm for positive semidefinite matrices
as the trace of the k-th power of M, i.e. ‖M‖_k^k = Tr(M^k). This sum of the entries along the
diagonal of M^k is the sum of total weights of all the closed walks of length k. Consider the entries
of M as weights on a complete graph K_d over d nodes (with self-loops). A closed walk of length
k is defined as a sequence of nodes w = (w_1, w_2, ..., w_{k+1}) with w_1 = w_{k+1}, where we allow
repeated nodes and repeated edges. The weight of a closed walk w = (w_1, ..., w_k, w_1) is defined as
ω_M(w) ≜ ∏_{i=1}^k M_{w_i w_{i+1}}, which is the product of the weights along the walk. It follows that

‖M‖_k^k = ∑_{w: all length-k closed walks} ω_M(w).    (1)
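A quick numerical check of this identity, as a sketch (any PSD matrix works):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 5))
M = B @ B.T                                    # a rank-5 PSD matrix
k = 4
lhs = np.trace(np.linalg.matrix_power(M, k))   # Tr(M^k)
rhs = (np.linalg.svd(M, compute_uv=False) ** k).sum()  # sum_i sigma_i(M)^k
assert np.isclose(lhs, rhs)
```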
Following the notation for enumerating small simple cycles in a graph from [2], we partition
this summation into walks with the same pattern H, which we call a k-cyclic pseudograph. Let
C_k = (V_k, E_k) denote the undirected simple cycle graph with k nodes; e.g., A_3 in Figure 2 is C_3.
We expand the standard notion of simple k-cyclic graphs to include multi-edges and loops, hence the
name pseudograph.

Definition 1. We define an unlabelled and undirected pseudograph H = (V_H, E_H) to be a k-cyclic
pseudograph for k ≥ 3 if there exists an onto node-mapping from C_k = (V_k, E_k), i.e. f : V_k → V_H,
and a one-to-one edge-mapping g : E_k → E_H such that g(e) = (f(u_e), f(v_e)) for all e = (u_e, v_e) ∈ E_k.
We use 𝓗_k to denote the set of all k-cyclic pseudographs. We use c(H) to denote the number of distinct
node-mappings f from C_k to a k-cyclic pseudograph H.
[Figure 2: The 3-cyclic pseudographs 𝓗_3 = {A_1, A_2, A_3}, with c(A_1) = 1, c(A_2) = 3, and c(A_3) = 6.]
In the above example, each member of 𝓗_3 is a distinct pattern that can be mapped from C_3. For
A_1, it is clear that there is only one mapping from C_3 to A_1 (i.e. c(A_1) = 1). For A_2, one can map
any of the three nodes to the left node of A_2, hence c(A_2) = 3. For A_3, any of the three nodes can
be mapped to the bottom-left node of A_3, and one can then map the rest of the nodes clockwise or
counter-clockwise, resulting in c(A_3) = 6. For k ≤ 7, all the k-cyclic pseudographs are given in
Appendix E (see Figures 8–13).
Each closed walk w of length k is associated with one of the graphs in 𝓗_k, as there is a unique H
that the walk is an Eulerian cycle of (under a one-to-one mapping of the nodes). We denote this
graph by H(w) ∈ 𝓗_k. Considering the weight of a walk ω_M(w), there are multiple distinct walks
with the same weight. For example, a length-3 walk w = (v_1, v_2, v_2, v_1) has H(w) = A_2, and there
are 3 walks with the same weight ω(w) = (M_{v_1 v_2})² M_{v_2 v_2}, namely (v_1, v_2, v_2, v_1), (v_2, v_2, v_1, v_2), and
(v_2, v_1, v_2, v_2). This multiplicity of the weight depends only on the structure H(w) of the walk, and it
is exactly c(H(w)), the number of mappings from C_k to H(w) in Definition 1. The total sum of the
weights of closed walks of length k can be partitioned by their respective patterns, which makes
computation of such terms more efficient (see Section 2) and also makes de-biasing straightforward (see
Equation (3)):

‖M‖_k^k = ∑_{H∈𝓗_k} ω_M(H) c(H),    (2)

where, with a slight abuse of notation, we let ω_M(H) for H ∈ 𝓗_k be the sum of all distinct
weights of walks w with H(w) = H, and c(H) is the multiplicity of each distinct weight. This is an
alternative tool for computing the Schatten norm without explicitly computing the σ_i(M)'s.
Given only access to a subset of sampled entries, one might be tempted to apply the above formula
to the sampled matrix with an appropriate scaling, i.e. ‖(d²/|Ω|) P_Ω(M)‖_k^k =
(d²/|Ω|)^k ∑_{H∈𝓗_k} ω_{P_Ω(M)}(H) c(H), to estimate ‖M‖_k^k. However, this is significantly biased. To
eliminate the bias, we propose rescaling each term in (1) by the inverse of the probability of sampling
that particular walk w (i.e. the probability that all edges in w are sampled). A crucial observation is
that, for any sampling model that is invariant under a relabelling of the nodes, this probability only
depends on the pattern H(w). In particular, this is true for Erdős–Rényi sampling. Based on this
observation, we introduce a novel estimator that de-biases each group separately:

Θ̂_k(P_Ω(M)) = ∑_{H∈𝓗_k} (1/p(H)) ω_{P_Ω(M)}(H) c(H),    (3)

where p(H) is the probability that the pattern H is sampled. It immediately follows that this estimator is
unbiased, i.e. E_Ω[Θ̂_k(P_Ω(M))] = ‖M‖_k^k, where the randomness is in Ω. However, computing this
estimate can be challenging. Naive enumeration over all closed walks of length k takes time scaling
as O(d Δ^{k−1}), where Δ is the maximum degree of the graph. Except for extremely sparse graphs,
this is impractical. Inspired by the work of [2] on counting short cycles in a graph, we introduce a
novel and efficient method for computing the proposed estimate for small values of k.
Proposition 2. For a positive semidefinite matrix M and any sampling pattern Ω, the proposed
estimate Θ̂_k(P_Ω(M)) in (3) can be computed in time O(d^γ) for k ∈ {3, 4, 5, 6, 7}, where γ < 2.373
is the exponent of matrix multiplication. For k = 1 or 2, Θ̂_k(P_Ω(M)) can be computed in time O(d)
and O(d²), respectively.
This bound holds regardless of the degree, and the complexity can be even smaller for sparse graphs,
as matrix multiplications are more efficient. We give a constructive proof by introducing a novel
algorithm achieving this complexity in Section 2. For k ≥ 8, our approach can potentially be extended,
but the complexity of the problem fundamentally changes, as it is at least as hard as counting K_4's in a
graph, for which the best known algorithm runs in time O(d^{γ+1}) for general graphs [12].
We make the following contributions in this paper:
• We introduce in (3) a novel unbiased estimator of the Schatten k-norm of a positive semidefinite matrix M from a random sampling of its entries. In general, the complexity of
computing the estimate scales as O(d Δ^{k−1}), where Δ is the maximum degree (number of
sampled entries in a column) of the sampled matrix. We introduce a novel efficient algorithm
for computing the estimate in (3) exactly for small k ≤ 7, which involves only matrix
operations. This algorithm is significantly more efficient and has run-time scaling as O(d^γ),
independent of the degree, for all k ≤ 7 (see Proposition 2).
• Under the canonical Erdős–Rényi sampling, we show that the Schatten k-norm of an incoherent rank-r matrix can be approximated within any constant multiplicative error with a number
of samples scaling as O(dr^{1−2/k}) (see Theorem 1). In particular, this is strictly smaller
than the number of samples necessary to complete the matrix, which scales as O(dr log d).
Below this matrix completion threshold, numerical experiments confirm that the proposed
estimator significantly outperforms simple heuristics of using singular values of the sampled
matrices directly or applying state-of-the-art matrix completion methods (see Figure 4).
• Given estimates of the first K Schatten norms, it is straightforward to estimate spectral
properties. We apply our Schatten norm estimates to the application of estimating the
generalized rank studied in [20] and estimating the spectrum studied in [13]. We provide
performance guarantees for both applications and provide experimental results suggesting that
we improve upon other competing methods. Due to space limitations, these results are
included in Appendix B.
In the remainder, we provide an efficient implementation of the estimator (3) for small k in Section 2.
In Section 3, we provide a theoretical analysis of our estimator.
1.3 Related work
Several Schatten norm estimation problems under different resource-constrained scenarios have been
studied. However, those approaches assume specific noisy observations which allow them to use
the relation E[‖f(M)g‖₂²] = ∑_i f(σ_i(M))², which holds for a standard i.i.d. Gaussian g ∼ N(0, I)
and any polynomial function f(·). This makes the estimation significantly easier than our setting, and
none of those algorithms can be applied under our random sampling model. In particular, counting
small structures for de-biasing is not required. [20, 8] and [9] propose multiplying Gaussian random
vectors with the data matrix in order to reduce communication and/or computation. [13] proposes an
interesting estimator for the spectrum of a covariance matrix from samples of a random vector. [15]
proposes similar estimators for Schatten norms from random linear projections of a data matrix, and
[16] studies the problem for sparse data matrices in a streaming model.
One of our contributions is an efficient algorithm for computing the weighted counts of
small structures in Section 2, which can significantly improve upon less sample-efficient counterparts
in, for example, [13]. Under the setting of [13] (and also [15]), the main idea of the estimator is that
the weight of each length-k cycle in the observed empirical covariance matrix (1/n) ∑_{i=1}^n X_i X_iᵀ
provides an unbiased estimator of ‖E[XXᵀ]‖_k^k. One prefers to sum over the weights of as many
cycles as computationally allowed in order to reduce the variance. As counting all cycles is in general
computationally hard, they propose counting only increasing cycles (which accounts for only
a 1/k! fraction of all the cycles), which can be computed in time O(d^γ). If one has an efficient method
to count all the (weighted) cycles, then the variance of the estimator could potentially decrease by
a factor of order k!. For k ≤ 7, our proposed algorithm in Section 2 provides exactly such an estimator.
We replace [13, Algorithm 1] with ours and run the same experiment to showcase the improvement
in Figure 3, for dimension d = 2048 and various values of the number of samples n, comparing the
multiplicative error in estimating ‖E[XXᵀ]‖_k^k for k = 7. With the same run-time, a significant
gain is achieved by simply substituting our proposed algorithm for counting small structures in
the sub-routine. In general, the efficient algorithm we propose might be of independent interest to
various applications, and can directly substitute for (and significantly improve upon) other popular but
less efficient counterparts.
[Figure 3: Relative error |estimate − ‖E[XXᵀ]‖_k^k| / ‖E[XXᵀ]‖_k^k versus the number of samples n ∈ {256, 512, 1024, 2048}, for k = 7: by replacing [13, Algorithm 1], which counts only increasing simple cycles, with our proposed algorithm that counts all simple cycles, a significant gain is achieved in estimating ‖E[XXᵀ]‖_k^k.]
The main challenge under our sampling scenario is that existing counting methods like that of
[13] cannot be applied, regardless of how much computational power we have. Under the matrix
completion scenario, we need to (a) sum over all small structures H ∈ 𝓗_k and not just C_k as in [13];
and (b) for each structure we need to sum over all subgraphs with the same structure, and not just
those walks whose labels form a monotonically increasing sequence as in [13].
2 Efficient Algorithm
In this section we give a constructive proof of Proposition 2. In computing the estimate in (3), c(H)
can be computed in time O(k!), and we suppose p(H) has been computed (we will explain how to
compute p(H) for Erdős–Rényi sampling in Section 3). The bottleneck then is computing the weights
ω_{P_Ω(M)}(H) for each H ∈ 𝓗_k. Let Γ_M(H) ≜ ω_M(H) c(H). We give matrix-multiplication-based
equations to compute Γ_M(H) for every H ∈ 𝓗_k for k ∈ {3, 4, 5, 6, 7}. This establishes that Γ_M(H),
and hence ω_M(H), can be computed in time O(d^γ), proving Proposition 2.
For any matrix A ∈ ℝ^{d×d}, let diag(A) be the diagonal matrix such that (diag(A))_ii = A_ii for
all i ∈ [d] and (diag(A))_ij = 0 for all i ≠ j ∈ [d]. For a given matrix M ∈ ℝ^{d×d}, define
O_M ≜ M − diag(M), the matrix of off-diagonal entries of M, and D_M ≜ diag(M). Let tr(A) denote
the trace of A, that is tr(A) = ∑_{i∈[d]} A_ii, and let A·B denote the standard matrix multiplication of
two matrices A and B, to make it more explicit. Consider computing Γ_M(H) for H ∈ 𝓗_3 as labeled
in Figure 2:

Γ_M(A_1) = tr(D_M · D_M · D_M)    (4)
Γ_M(A_2) = 3 tr(D_M · O_M · O_M)    (5)
Γ_M(A_3) = tr(O_M · O_M · O_M)    (6)

The first weighted sum Γ_M(A_1) is the sum of the weights of all walks of length 3 that consist of three
self-loops. One can show that Γ_M(A_1) = ∑_{i∈[d]} M_ii³, which in our matrix-operation notation is
(4). Similarly, Γ_M(A_3) is the sum of weights of length-3 walks with no self-loop, which leads to
(6). Γ_M(A_2) is the sum of weights of length-3 walks with a single self-loop, which leads to (5). The
factor 3 accounts for the fact that the self-loop could have been placed at various positions along the walk.
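As a sketch, the k = 3 case of the full estimator (3) under Erdős–Rényi sampling combines eqs. (4)–(6) with p(H) = p^{m(H)} from Section 3; here m(A_1) = 3 and m(A_3) = 3 distinct edges, while m(A_2) = 2 since the non-loop edge of A_2 is traversed twice. The usage example assumes a symmetric M sampled on unordered pairs (and self-loops):

```python
import numpy as np

def schatten3_estimate(S, p):
    """Unbiased estimate of ||M||_3^3 from the zero-filled sampled matrix S."""
    D = np.diag(np.diag(S))            # D_M, the diagonal part
    O = S - D                          # O_M, the off-diagonal part
    g1 = np.trace(D @ D @ D)           # Gamma(A1): three self-loops, m = 3
    g2 = 3 * np.trace(D @ O @ O)       # Gamma(A2): loop + doubled edge, m = 2
    g3 = np.trace(O @ O @ O)           # Gamma(A3): triangle, m = 3
    return g1 / p**3 + g2 / p**2 + g3 / p**3

# Usage sketch: one random draw, so the estimate is only close on average
rng = np.random.default_rng(1)
B = rng.standard_normal((300, 10)); M = B @ B.T
p = 0.3
upper = np.triu(rng.random(M.shape) < p)   # sample unordered pairs / self-loops
mask = upper | upper.T
print(schatten3_estimate(M * mask, p),
      (np.linalg.svd(M, compute_uv=False) ** 3).sum())
```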
Similarly, for each k-cyclic pseudograph in 𝓗_k for k ≤ 7, computing Γ_M(H) involves a few matrix
operations with run-time O(d^γ). We provide the complete set of explicit expressions in Appendix F.
A MATLAB implementation of the estimator (3), which includes as its sub-routines the computation of
the weights of all k-cyclic pseudographs, is available for download at
https://github.com/khetan2/Schatten_norm_estimation. The explicit formulae in Appendix F, together with the implementation at the above URL, might be of interest to other problems
involving counting small structures in graphs.
For k = 1, the estimator simplifies to Θ̂_k(P_Ω(M)) = (1/p) ∑_i P_Ω(M)_ii, which can be computed
in time O(d). For k = 2, the estimator simplifies to Θ̂_k(P_Ω(M)) = (1/p) ∑_{i,j} P_Ω(M)_ij², which
can be computed in time O(|Ω|). However, for k ≥ 8, there exist walks over K_4, a clique over
4 nodes, that cannot be decomposed into simple computations involving matrix operations. The
best known algorithm for the simpler task of counting K_4's has run-time scaling as O(d^{γ+1}), which is
fundamentally different.
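Minimal sketches of these two easy cases, with S the zero-filled sampled matrix as above:

```python
import numpy as np

def schatten1_estimate(S, p):
    return np.trace(S) / p     # unbiased for ||M||_1 = Tr(M) when M is PSD

def schatten2_estimate(S, p):
    return (S ** 2).sum() / p  # unbiased for ||M||_2^2 = sum_ij M_ij^2
```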
Algorithm 1 Schatten k-norm estimator
Require: P_Ω(M), k, 𝓗_k, p(H) for all H ∈ 𝓗_k
Ensure: Θ̂_k(P_Ω(M))
1: if k ≤ 7 then
2:    For each H ∈ 𝓗_k, compute Γ_{P_Ω(M)}(H) using the formulae from Eq. (4)–(6) for k = 3 and Eq. (43)–(186) for k ∈ {4, 5, 6, 7}
3:    Θ̂_k(P_Ω(M)) ← ∑_{H∈𝓗_k} (1/p(H)) Γ_{P_Ω(M)}(H)
4: else
5:    Θ̂_k(P_Ω(M)) ← Algorithm 2[P_Ω(M), k, 𝓗_k, p(H) for all H ∈ 𝓗_k]    [Appendix A]
6: end if
3 Performance guarantees
Under the stylized but canonical Erdős–Rényi sampling, notice that the probability p(H) that we
observe all edges in a walk with pattern H is

p(H) = p^{m(H)},    (7)

where p is the probability that an edge is sampled and m(H) is the number of distinct edges in the k-cyclic
pseudograph H. Plugging this value of p(H), which can be computed in time linear in k, into
the estimator (3), we get an estimate customized for Erdős–Rényi sampling. Given a rank-r matrix
M, the difficulty of estimating properties of M from sampled entries is captured by the incoherence
of the original matrix M, which we denote by μ(M) ∈ ℝ [3]. Formally, let M = UΣUᵀ be the
singular value decomposition of a positive definite matrix, where U is a d × r orthonormal matrix and
Σ = diag(σ_1, ..., σ_r) with singular values σ_1 ≥ σ_2 ≥ ··· ≥ σ_r > 0. Let U_{ia} denote the entry of U in
the i-th row and a-th column. The incoherence μ(M) is defined as the smallest positive value μ
such that the following holds:

A1. For all i ∈ [d], we have ∑_{a=1}^r U_{ia}² (σ_a/σ_1) ≤ μr/d.
A2. For all i ≠ j ∈ [d], we have |∑_{a=1}^r U_{ia} U_{ja} (σ_a/σ_1)| ≤ μ√r/d.

The incoherence measures how well spread out the matrix is and is a common measure of difficulty
in completing a matrix from random samples [3, 11].
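A sketch of computing μ(M) numerically as the smallest μ satisfying A1 and A2, for a rank-r matrix M (the function name is illustrative):

```python
import numpy as np

def incoherence(M, r):
    U, s, _ = np.linalg.svd(M)
    U, s = U[:, :r], s[:r]
    d = M.shape[0]
    w = s / s[0]                        # sigma_a / sigma_1
    row = (U ** 2 * w).sum(axis=1)      # A1 quantities, one per row i
    cross = np.abs((U * w) @ U.T)       # A2 quantities |sum_a U_ia U_ja w_a|
    np.fill_diagonal(cross, 0.0)
    return max(row.max() * d / r, cross.max() * d / np.sqrt(r))
```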
3.1 Performance guarantee

For any d × d positive semidefinite matrix M of rank r with incoherence μ(M) = μ and effective
condition number κ = σ_max(M)/σ_min(M), we define

ρ² ≜ (κμ)^{2k} g(k) max{ 1, (dp)^{k−1}/d, r^k p^{k−1}/d^{k−1} },    (8)

such that the variance of our estimator is bounded by Var(Θ̂_k(P_Ω(M))/‖M‖_k^k) ≤ ρ² (r^{1−2/k}/(dp))^k,
as we show in the proof of Theorem 1 in Section D.1. Here, g(k) = O(k!).
Theorem 1 (Upper bound under Erdős–Rényi sampling). For any integer k ∈ [3, ∞), any
ε > 0, any rank-r positive semidefinite matrix M ∈ ℝ^{d×d}, and given i.i.d. samples of the entries
of M with probability p, the proposed estimate of (3) achieves normalized error ε with probability
bounded by

P( |Θ̂_k(P_Ω(M)) − ‖M‖_k^k| / ‖M‖_k^k ≥ ε ) ≤ (ρ²/ε²) (r^{1−2/k}/(dp))^k.    (9)
Consider a typical scenario where $\mu$, $\kappa$, and $k$ are finite with respect to $d$ and $r$. Then the Chebyshev bound in (9) implies that a sample size of $d^2 p = O(d r^{1-2/k})$ is sufficient to recover $\|M\|_k^k$ up to an arbitrarily small multiplicative error and an arbitrarily small (but strictly positive) error probability. This is strictly less than the known minimax sample complexity for recovering the entire low-rank matrix, which scales as $\Theta(rd \log d)$. As we seek to estimate only a property of the matrix (i.e. the Schatten k-norm) and not the whole matrix itself, we can be more efficient in sample complexity by a factor of $r^{2/k}$ in the rank and a factor of $\log d$ in the dimension. We emphasize that such a gain can only be established using the proposed estimator based on the structure of the k-cyclic pseudographs. We will show empirically that standard matrix completion approaches fail in the critical regime of samples below the recovery threshold of $O(rd \log d)$.
[Figure 4: two panels, d = 500 with r = 100 (left) and r = 500 (right); vertical axis: relative error; horizontal axis: sampling probability p (0.01 to 1); curves: proposed estimator, scaled sampled matrix, matrix completion.]

Figure 4: The proposed estimator outperforms both baseline approaches below the matrix completion threshold. For $k = 5$, comparison of the absolute relative error in the estimated Schatten norm, $|\|M\|_k^k - \widehat{\|M\|_k^k}| / \|M\|_k^k$, for the three algorithms: (1) the proposed estimator, $\widehat{\|M\|_k^k} = \widehat{\Theta}_k(P_\Omega(M))$; (2) the Schatten norm of the scaled sampled matrix, $\widehat{\|M\|_k^k} = \|(1/p)\, \mathrm{P}_r(P_\Omega(M))\|_k^k$; and (3) the Schatten norm of the completed matrix $\widetilde{M} = \mathrm{AltMin}(P_\Omega(M))$ from [10], $\widehat{\|M\|_k^k} = \|\widetilde{M}\|_k^k$, where $\mathrm{P}_r(\cdot)$ is the standard best rank-$r$ projection of a matrix. $\Omega$ is generated by Erdős–Rényi sampling of the matrix $M$ with probability $p$.
Figure 4 is a scatter plot of the absolute relative error in the estimated Schatten k-norm, $|\|M\|_k^k - \widehat{\|M\|_k^k}| / \|M\|_k^k$, for $k = 5$, for three approaches: the proposed estimator, the Schatten norm of the scaled sampled matrix (after rank-$r$ projection), and the Schatten norm of the completed matrix, using the state-of-the-art alternating minimization algorithm [10]. All three estimators are evaluated 20 times for each value of $p$. $M$ is a symmetric positive semi-definite matrix of size $d = 500$ and rank $r = 100$ (left panel) or $r = 500$ (right panel). The singular vectors $U$ of $M = U\Sigma U^\top$ are generated by QR decomposition of $\mathcal{N}(0, I_{d \times d})$, and $\Sigma_{i,i}$ is uniformly distributed over $[1, 2]$. For the low-rank matrix on the left, there is a clear critical value of $p \simeq 0.45$, above which matrix completion is exact with high probability. However, this algorithm knows the underlying rank and crucially exploits the fact that the underlying matrix is exactly low-rank. In comparison, our approach is agnostic to the low-rank assumption but finds an accurate estimate that adapts to the actual rank in a data-driven manner. Using the first $r$ singular values of the (rescaled) sampled matrix fails miserably in all regimes (we truncate the error at one for illustration purposes). In this paper, we are interested in the regime where exact matrix completion is impossible, as we do not have enough samples to exactly recover the underlying matrix: $p \le 0.45$ in the left panel and all regimes in the right panel.
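For reference, the second baseline in Figure 4 needs only a rescaling and a truncated eigendecomposition; a sketch (names are ours):

```python
import numpy as np

def schatten_scaled_sampled(P_omega_M, p, r, k):
    # ||(1/p) P_r(P_Omega(M))||_k^k: rescale by 1/p, keep the top-r spectrum
    # (best rank-r projection of a symmetric matrix), and sum |sigma_i|^k.
    eigvals = np.linalg.eigvalsh(P_omega_M / p)
    top_r = np.sort(np.abs(eigvals))[::-1][:r]
    return np.sum(top_r ** k)
```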
The sufficient condition $d^2 p = O(d r^{1-2/k})$ in Theorem 1 holds for a broad range of parameters where the rank is sufficiently small, $r = O(d^{k/((k-1)(k-2))})$ (to ensure that the first term in $\rho^2$ dominates). However, the numerical experiments in Figure 5 suggest that our analysis holds more generally for all regimes of the rank $r$, even those close to $d$. $M$ is generated using settings similar to those of Figure 4. Empirical probabilities are computed by averaging over 100 instances.
One might hope to tighten the Chebyshev bound by exploiting the fact that the correlation among the summands in our estimator (3) is weak. This can be made precise using a recent result from [18], where a Bernstein-type bound was proved for sums of polynomials of independent random variables that are weakly correlated. The first term in the bound (10) is the natural Bernstein-type counterpart of the Chebyshev bound in (9). However, in the regime where $k$ is large or $p$ is large, the correlation among the summands becomes stronger, and the second and third terms in the bound (10) start to dominate. In the typical regime of interest where $\mu$, $\kappa$, and $k$ are finite, $d^2 p = O(d r^{1-2/k})$, and the rank is sufficiently small, $r = O(d^{k/((k-1)(k-2))})$, the error probability is dominated by the first term on the right-hand side of (10). Neither of the two bounds in (9) and (10) dominates the other, and depending on the values of the problem parameters, one may apply whichever is tighter. We provide a proof in Section D.2.
Theorem 2. Under the hypotheses of Theorem 1, the error probability is upper bounded by
\[ \mathbb{P}\left( \frac{\big|\widehat{\Theta}_k(P_\Omega(M)) - \|M\|_k^k\big|}{\|M\|_k^k} \ge \delta \right) \;\le\; e^2 \max\left\{ e^{-\left(\delta^2/\rho^2\right)^{1/k} \frac{dp}{2\, r^{1-2/k}}},\; e^{-\delta^{1/k} \frac{dp}{\sqrt{d}\,\sqrt{r^{k-1}}}},\; e^{-\sqrt{dp}} \right\}. \tag{10} \]
[Figure 5: two panels of colormaps, one block per k in {2, ..., 7}; vertical axis: sampling probability p (0.002 to 1, log scale); horizontal axis: rank r (5 to 500).]

Figure 5: Each colormap block, for $k \in \{2, 3, 4, 5, 6, 7\}$, shows the empirical probability of the event $|\|M\|_k^k - \widehat{\Theta}_k(P_\Omega(M))| / \|M\|_k^k \le \delta$, for $\delta = 0.5$ (left panel) and $\delta = 0.2$ (right panel). $\Omega$ is generated by Erdős–Rényi sampling of the matrix $M$ with probability $p$ (vertical axis). $M$ is a symmetric positive semi-definite matrix of size $d = 1000$. The solid lines correspond to our theoretical prediction $p = (1/d)\, r^{1-2/k}$.
These two results show that a sample size of $d^2 p = O(d r^{1-2/k})$ is sufficient to estimate a Schatten k-norm accurately. In general, we do not expect a universal upper bound that is significantly tighter for all $r$, because for the special case of $r = d$ the following corollary of [15, Theorem 3.2] provides a lower bound: it is necessary to have sample size $d^2 p = \Omega(d^{2-4/k})$ when $r = d$. Hence, the gap is at most a factor of $r^{2/k}$ in the sample complexity.

Corollary 1. Consider any linear observation $X \in \mathbb{R}^n$ of a matrix $M \in \mathbb{R}^{d \times d}$ and any estimate $\Theta(X)$ satisfying $(1 - \epsilon_k)\|M\|_k^k \le \Theta(X) \le (1 + \epsilon_k)\|M\|_k^k$ for any $M$ with probability at least $3/4$, where $\epsilon_k = (1.2^k - 1)/(1.2^k + 1)$. Then $n = \Omega(d^{2-4/k})$.
For $k \in \{1, 2\}$, precise bounds can be obtained with simpler analyses. In particular, we have the following remarks, whose proofs follow immediately by applying Chebyshev's inequality and Bernstein's inequality along with the incoherence assumptions.
Remark 3. For $k = 1$, the probability of error in (9) is upper bounded by $\min\{\delta_1, \delta_2\}$, where
\[ \delta_1 \;\le\; \frac{1}{\delta^2} \frac{(\mu\kappa)^2}{dp}, \qquad \text{and} \qquad \delta_2 \;\le\; 2 \exp\left( -\delta^2 \left( \frac{2(\mu\kappa)^2}{dp} + \frac{2(\mu\kappa)\delta}{3\,dp} \right)^{-1} \right). \]
Remark 4. For $k = 2$, the probability of error in (9) is upper bounded by $\min\{\delta_1, \delta_2\}$, where
\[ \delta_1 \;\le\; \frac{1}{\delta^2} \frac{(\mu\kappa)^4}{d^2 p} \left( 2 + \frac{r^2}{d} \right), \qquad \text{and} \qquad \delta_2 \;\le\; 2 \exp\left( -\delta^2 \left( \frac{2(\mu\kappa)^4}{d^2 p}\left(2 + \frac{r^2}{d}\right) + \frac{2(\mu\kappa)^2 r \delta}{3\,d^2 p} \right)^{-1} \right). \]
When $k = 2$, for small rank $r \le C\sqrt{d}$, we only need $d^2 p = \Omega(1)$ samples for recovery up to any arbitrarily small multiplicative error. When the rank $r$ is large, our estimator requires $d^2 p = \Omega(d)$ for both $k \in \{1, 2\}$.
Acknowledgments
This work was partially supported by NSF grants CNS-1527754, CCF-1553452, and CCF-1705007, and by a Google Faculty Research Award.
References
[1] Dimitris Achlioptas and Frank McSherry. Fast computation of low rank matrix approximations. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 611–618. ACM, 2001.
[2] N. Alon, R. Yuster, and U. Zwick. Finding and counting given length cycles. Algorithmica, 17(3):209–223, 1997.
[3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[4] E. Di Napoli, E. Polizzi, and Y. Saad. Efficient estimation of eigenvalue counts in an interval. Numerical Linear Algebra with Applications, 2016.
[5] Khaled M. Elbassioni. A polynomial delay algorithm for generating connected induced subgraphs of a given cardinality. J. Graph Algorithms Appl., 19(1):273–280, 2015.
[6] U. Feige and E. Ofek. Spectral techniques applied to sparse random graphs. Random Structures & Algorithms, 27(2):251–275, 2005.
[7] J. Friedman, J. Kahn, and E. Szemerédi. On the second eigenvalue in random regular graphs. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, pages 587–598, Seattle, Washington, USA, May 1989. ACM.
[8] I. Han, D. Malioutov, H. Avron, and J. Shin. Approximating the spectral sums of large-scale matrices using Chebyshev approximations. arXiv preprint arXiv:1606.00942, 2016.
[9] I. Han, D. Malioutov, and J. Shin. Large-scale log-determinant computation through stochastic Chebyshev expansions. In ICML, pages 908–917, 2015.
[10] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In STOC, pages 665–674, 2013.
[11] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
[12] T. Kloks, D. Kratsch, and H. Müller. Finding and counting small induced subgraphs efficiently. Information Processing Letters, 74(3):115–121, 2000.
[13] W. Kong and G. Valiant. Spectrum estimation from samples. arXiv preprint arXiv:1602.00061, 2016.
[14] C. M. Le, E. Levina, and R. Vershynin. Sparse random graphs: regularization and concentration of the Laplacian. arXiv preprint arXiv:1502.03049, 2015.
[15] Y. Li, H. L. Nguyễn, and D. P. Woodruff. On sketching matrix norms and the top singular vector. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1562–1581. Society for Industrial and Applied Mathematics, 2014.
[16] Y. Li and D. P. Woodruff. On approximating functions of the singular values in a stream. arXiv preprint arXiv:1604.08679, 2016.
[17] J. C. Mason and D. C. Handscomb. Chebyshev Polynomials. CRC Press, 2002.
[18] W. Schudy and M. Sviridenko. Bernstein-like concentration and moment inequalities for polynomials of independent random variables: multilinear case. arXiv preprint arXiv:1109.5193, 2011.
[19] Ryuhei Uehara et al. The number of connected components in graphs and its applications. Manuscript, http://citeseerx.ist.psu.edu/viewdoc/summary, 1999.
[20] Y. Zhang, M. J. Wainwright, and M. I. Jordan. Distributed estimation of generalized matrix rank: Efficient algorithms and lower bounds. arXiv preprint arXiv:1502.01403, 2015.
Neural Networks for Efficient Bayesian Decoding of
Natural Images from Retinal Neurons
Nikhil Parthasarathy∗
Stanford University
[email protected]
Thomas Rutten
Columbia University
[email protected]
Eleanor Batty∗
Columbia University
[email protected]
Mohit Rajpal
Columbia University
[email protected]
William Falcon
Columbia University
[email protected]
E.J. Chichilnisky†
Stanford University
[email protected]
Liam Paninski†
Columbia University
[email protected]
Abstract
Decoding sensory stimuli from neural signals can be used to reveal how we sense
our physical environment, and is valuable for the design of brain-machine interfaces.
However, existing linear techniques for neural decoding may not fully reveal or exploit the fidelity of the neural signal. Here we develop a new approximate Bayesian
method for decoding natural images from the spiking activity of populations of
retinal ganglion cells (RGCs). We sidestep known computational challenges with
Bayesian inference by exploiting artificial neural networks developed for computer
vision, enabling fast nonlinear decoding that incorporates natural scene statistics
implicitly. We use a decoder architecture that first linearly reconstructs an image
from RGC spikes, then applies a convolutional autoencoder to enhance the image.
The resulting decoder, trained on natural images and simulated neural responses,
significantly outperforms linear decoding, as well as simple point-wise nonlinear
decoding. These results provide a tool for the assessment and optimization of retinal prosthesis technologies, and reveal that the retina may provide a more accurate
representation of the visual scene than previously appreciated.
1 Introduction
Neural coding in sensory systems is often studied by developing and testing encoding models that
capture how sensory inputs are represented in neural signals. For example, models of retinal function
are designed to capture how retinal ganglion cells (RGCs) respond to diverse patterns of visual
stimulation. An alternative approach, decoding visual stimuli from RGC responses, provides a
complementary method to assess the information contained in RGC spikes about the visual world
[31, 37]. Understanding decoding can also be useful for the design of retinal prostheses, by providing
a measure of the visual restoration that is possible with a prosthesis [26].
The most common and well-understood decoding approach, linear regression, has been used in various
sensory systems [29, 40]. This method was shown to be successful at reconstructing white noise
temporal signals from RGC activity [37] and revealed that coarse structure of natural image patches
could be recovered from ensemble responses in the early visual system [33]. Other linear methods
∗,† Equal contributions
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 schematic. A) Image → simulated RGC responses → linear decoder → linear decoded image → deep neural network → NN-enhanced image. B) Encoder: Conv(7, 64)–Downsample(2,2), Conv(5, 128)–Downsample(2,2), Conv(3, 256)–Downsample(2,2), Conv(3, 256)–Downsample(2,2); decoder: Upsample(2,2)–Conv(3, 256), Upsample(2,2)–Conv(3, 256), Upsample(2,2)–Conv(5, 128), Upsample(2,2)–Conv(7, 64). Notation: Conv(filter_size, filter_num); boxes denote output feature maps and convolutional filters.]

Figure 1: Outline of approach. A) The original image is fed through the simulated neural encoding models to produce RGC responses, on which we fit a linear decoder. A deep neural network is then used to further enhance the image. B) We use a convolutional autoencoder with a 4-layer encoder and a 4-layer decoder to enhance the linearly decoded image.
such as PCA and linear perceptrons have been used to decode low-level features such as color and
edge orientation from cortical visual areas [14, 4]. For more complex natural stimuli, computationally
expensive approximations to Bayesian inference have been used to construct decoders that incorporate
important prior information about signal structure [25, 27, 30]. However, despite decades of effort,
deriving an accurate prior on natural images poses both computational and theoretical challenges, as
does computing the posterior distribution on images given an observed neural response, limiting the
applicability of traditional Bayesian inference.
Here we develop and assess a new method for decoding natural images from the spiking activity of
large populations of RGCs, to sidestep some of these difficulties. Our approach exploits inference
tools that approximate optimal Bayesian inference, and emerge from the recent literature on deep
neural network (DNN) architectures for computer vision tasks such as super-resolution, denoising,
and inpainting [17, 39]. We propose a novel staged decoding methodology, linear decoding followed by a (nonlinear) DNN trained specifically to enhance the images output by the linear decoder, and
use it to reconstruct natural images from realistic simulated retinal ganglion cell responses. This
approach leverages recent progress in deep learning to more fully incorporate natural image priors in
the decoder. We show that the approach substantially outperforms linear decoding. These findings
provide a potential tool to assess the fidelity of retinal prostheses for treating blindness, and provide a
substantially higher bound on how accurately real visual signals may be represented in the brain.
2 Approach
To decode images from spikes, we use a linear decoder to produce a baseline reconstructed image,
then enhance this image using a more complex nonlinear model, namely a static nonlinearity or a
DNN (Figure 1). There are a few reasons for this staged approach. First, it allows us to cast the
decoding problem as a classic image enhancement problem that can directly utilize the computer
vision literature on super-resolution, in-painting, and denoising. This is especially important for the
construction of DNNs, which remain nontrivial to tune for problems in non-standard domains (e.g.,
image reconstruction from neural spikes). Second, by solving the problem partially with a simple
linear model, we greatly reduce the space of transformations that a neural network needs to learn,
constraining the problem significantly.
In order to leverage image enhancement tools from deep learning, we need large training data sets.
We use an encoder-decoder approach: first, develop a realistic encoding model that can simulate
neural responses to arbitrary input images, constrained by real data. We build this encoder to predict
the average outputs of many RGCs, but this approach could also be applied to encoders fit on a
cell-by-cell basis [3]. Once this encoder is in hand, we train arbitrarily complex decoders by sampling
many natural scenes, passing them through the encoder model, and training the decoder so that the
output of the full encoder-decoder pipeline matches the observed image as accurately as possible.
2.1 Encoder model: simulation of retinal ganglion cell responses
For our encoding model, we create a static simulation of the four most numerous retinal ganglion cell
types (ON and OFF parasol cells and ON and OFF midget cells) based on experimental data. We fit
linear-nonlinear-Poisson models to RGC responses to natural scene movies, recorded in an isolated
macaque retina preparation [7, 10, 12]. These fits produce imperfect but reasonable predictions of
RGC responses (Figure 2 A). We averaged the parameters (spatial filter, temporal filter, and sigmoid
parameters) of these fits across neurons, to create a single model for each of four cell types. We chose
this model as it is simple and a relatively good baseline encoder with which to test our decoding
method. (Recently, encoding models that leverage deep neural networks [3, 24] have been shown to
fit RGC responses better than the simple model we are using; substituting a more complex encoding
model should improve the quality of our final decoder, and we intend to pursue this approach in
future work.) To deal with static images, we then reduced these models to static models, consisting
of one spatial filter followed by a nonlinearity and Poisson spike generation. The outputs of the static
model are equal to summing the spikes produced by the full model over the image frames of a pulse
movie: gray frames followed by one image displayed for multiple frames. Spatial filters and the
nonlinearity of the final encoding model are shown in Figure 2 B and C.
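A sketch of one simulated cell in this reduced static model, assuming a flattened grayscale image and a flattened spatial filter; the gain, threshold, and maximum rate below are illustrative stand-ins for the averaged fitted sigmoid parameters, which we do not reproduce here:

```python
import numpy as np

def simulate_rgc_response(image, spatial_filter, gain, threshold, max_rate, rng):
    # Static linear-nonlinear-Poisson cell: spatial filter, sigmoid, Poisson spikes.
    drive = spatial_filter.ravel() @ image.ravel()                  # linear stage
    rate = max_rate / (1.0 + np.exp(-gain * (drive - threshold)))   # sigmoid nonlinearity
    return rng.poisson(rate)                                        # spike count

# Usage (illustrative parameter values only):
# rng = np.random.default_rng(0)
# spikes = simulate_rgc_response(img, w, gain=1.0, threshold=0.0, max_rate=50.0, rng=rng)
```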
We then tiled the image space (128 x 128 pixels) with these simulated neurons. For each cell type,
we fit a 2D Gaussian to the spatial filter of that cell type and then chose receptive field centers with a
width equal to 2 times the standard deviation of the Gaussian fit rounded up to the nearest integer.
The centers are shifted on alternate rows to form a lattice (Figure 2 D). The resulting response of
each neuron to an example image is displayed in Figure 2 E as a function of its location on the image.
The entire simulation consisted of 5398 RGCs.
2.2 Model architecture
Our decoding model starts with a classic linear regression decoder (LD) to generate linearly decoded images $I^{LD}$ [37]. The LD learns a reconstruction mapping $\hat\beta$ between neural responses $X$ and stimulus images $I^{ST}$ by modeling each pixel as a weighted sum of the neural responses: $\hat\beta = (X^\top X)^{-1} X^\top I^{ST}$. $X$ is augmented with a bias term in the first column. The model inputs are $m$ images, $p$ pixels, and $n$ neurons, such that $I^{ST} \in \mathbb{R}^{m \times p}$, $X \in \mathbb{R}^{m \times (n+1)}$, and $\hat\beta \in \mathbb{R}^{(n+1) \times p}$. To decode the set of neural responses $X$ we compute the dot product between $X$ and $\hat\beta$: $I^{LD} = X \hat\beta$.
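A NumPy sketch of the LD, using a least-squares solver in place of the explicit normal-equations inverse for numerical stability (names are ours):

```python
import numpy as np

def fit_linear_decoder(X, I_st):
    # X: (m, n) spike counts; I_st: (m, p) flattened training images.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])     # bias term in the first column
    beta, *_ = np.linalg.lstsq(Xb, I_st, rcond=None)  # equivalent to (Xb^T Xb)^{-1} Xb^T I_st
    return beta                                       # shape (n + 1, p)

def decode_linear(X, beta):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return Xb @ beta                                  # I_LD, shape (m, p)
```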
The next step of our decoding pipeline enhances I LD through the use of a deep convolutional
autoencoder (CAE). Our model consists of a 4-layer encoder and a 4-layer decoder. This model
architecture was inspired by similar models used in image denoising [11] and inpainting [35, 22].
In the encoder network E, each layer applies a convolution and downsampling operating to the
output tensor of the previous layer. The output of the encoder is a tensor of activation maps
representing a low-dimensional embedding of I LD . The decoder network D inverts the encoding
process by applying a sequence of upsampling and convolutional layers to the output tensor of the
previous layer. This model outputs the reconstructed image I CAE . We optimize the CAE end-to-end
through backpropagation by minimizing the pixelwise MSE between the output image of the CAE:
I CAE = D(E(I LD )) and the original stimuli image I ST .
The filter sizes, number of layers, and number of filters were all tuned through an exhaustive grid search. We searched over the following parameter space in our grid search: number of encoding
/ decoding layers: [3, 4, 5], number of filters in each layer: [32, 64,128,256], filter sizes: [7x7,
5x5, 3x3], learning rates: [0.00005, 0.0001, 0.0002, 0.0004, 0.0008, 0.001, 0.002, 0.004]. Specific
architecture details are provided in Figure 1.
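A PyTorch sketch of the selected architecture, with filter sizes and counts following Figure 1; the padding, pooling and upsampling choices, and the final one-channel projection are our assumptions, since those details are not fully specified:

```python
import torch.nn as nn

def conv_block(c_in, c_out, k):
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2), nn.ReLU())

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 64, 7), nn.MaxPool2d(2),     # 128 -> 64
            conv_block(64, 128, 5), nn.MaxPool2d(2),   # 64 -> 32
            conv_block(128, 256, 3), nn.MaxPool2d(2),  # 32 -> 16
            conv_block(256, 256, 3), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), conv_block(256, 256, 3),  # 8 -> 16
            nn.Upsample(scale_factor=2), conv_block(256, 256, 3),  # 16 -> 32
            nn.Upsample(scale_factor=2), conv_block(256, 128, 5),  # 32 -> 64
            nn.Upsample(scale_factor=2), conv_block(128, 64, 7),   # 64 -> 128
            nn.Conv2d(64, 1, 1),  # back to one grayscale channel (our assumption)
        )

    def forward(self, x):  # x: (batch, 1, 128, 128) linearly decoded images
        return self.decoder(self.encoder(x))
```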
[Figure 2 panels: A) firing-rate traces (Hz) over time (s) for ON parasol, OFF parasol, ON midget, and OFF midget cells; B–E) spatial maps per cell type on a 10-pixel scale.]

Figure 2: Encoding model. A) Full spatiotemporal encoding model performance on experimental data. Recorded responses (black) vs. LNP predictions (red; using the averaged parameters over all cells of each type) for one example cell of each type. The spiking responses to 57 trials of a natural scenes test movie were averaged over trials and then smoothed with a 10 ms SD Gaussian. B) Spatial filters of the simulated neural encoding model are shown for each cell type. C) The nonlinearity following the spatial filter-stimulus multiplication is shown for each cell type. We draw from a Poisson distribution on the output of the nonlinearity to obtain the neural responses. D) Demonstration of the mosaic structure for each cell type on a patch of the image space. The receptive fields of each neuron are represented by the 1 SD contour of the Gaussian fit to the spatial filter of each cell type. E) The response of each cell is plotted in the square around its receptive field center. The visual stimulus is shown on the left. The color maps of ON and OFF cells are reversed to associate high responses with their preferred stimulus polarity.
2.3 Training and Evaluation
To train the linear decoder, we iterate through the training data once to collect the sufficient statistics $X^\top X$ and $X^\top I^{ST}$. We train the convolutional autoencoder to minimize the pixelwise MSE, $P_{MSE}$, with the Adam optimizer [15]. To avoid overfitting, we monitor changes in $P_{MSE}$ on a validation set three times per epoch and keep track of the current best loss, $P_{MSE,best}$. We stop training if we have gone through 2 epochs' worth of training data and the validation loss has not decreased by more than 0.1% of $P_{MSE,best}$.
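A sketch of this stopping rule; the bookkeeping of validation checks is schematic and all names are ours:

```python
def should_stop(val_losses, best_loss, checks_per_epoch=3, patience_epochs=2, tol=1e-3):
    # Stop once two epochs' worth of validation checks (3 per epoch) have
    # failed to improve on the best loss by more than 0.1% (tol) of its value.
    window = checks_per_epoch * patience_epochs
    if len(val_losses) < window:
        return False
    return min(val_losses[-window:]) > best_loss * (1.0 - tol)
```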
In our experiments we use two image datasets, ImageNet [8] and the CelebA face dataset [21]. We apply preprocessing steps described previously in [17] to each image: 1) convert to grayscale, 2) rescale to 256x256, 3) crop the middle 128x128 region. From ImageNet we use 930k random images for training, 50k for validation, and a 10k held-out set for testing. We use ImageNet in all but one of our experiments (context decoding). For the latter, we use the CelebA face dataset [21] with 160k images for training, 30k for validation, and a 10k held-out set for testing.
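A sketch of the three preprocessing steps with Pillow and NumPy (the API and naming choices are ours):

```python
import numpy as np
from PIL import Image

def preprocess(path):
    img = Image.open(path).convert("L")            # 1) convert to grayscale
    img = img.resize((256, 256), Image.BILINEAR)   # 2) rescale to 256 x 256
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr[64:192, 64:192]                     # 3) crop the middle 128 x 128 region
```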
We evaluate all models in our results using two separate metrics: pixelwise MSE and multi-scale structural similarity (SSIM) [36]. Although each metric alone has known shortcomings, in combination they provide an objective evaluation of image reconstruction that is interpretable and well understood.
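Both metrics are standard; the sketch below uses scikit-image, substituting single-scale SSIM for the multi-scale variant of [36] for brevity:

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(original, reconstructed):
    mse = np.mean((original - reconstructed) ** 2)
    ssim = structural_similarity(original, reconstructed,
                                 data_range=original.max() - original.min())
    return mse, 1.0 - ssim  # 1-SSIM, so lower is better for both metrics
```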
3 Results

3.1 ImageNet decoding
As expected [33], the linear decoder reconstructed blurry, noisy versions of the original natural images
from the neural responses, a result that is attributable to the noisy responses from the RGCs downsampling the input images. The CAE trained on the linear decoded images resulted in substantially
improved reconstructions, perceptually and quantitatively (Figure 3). CAE decoding outperformed
linear decoding both on average and for the vast majority of images, by both the M SE and 1?SSIM
measures. Qualitatively, the improvements made by the CAE generally show increased sharpening
of edges, adjustment of contrast, and smoothing within object boundaries that reduced overall
noise. Similar improvement in decoding could not be replicated by utilizing static nonlinearities to
transform the linear decoded output to the original images. We used a 6th degree polynomial fitted
to approximate the relation between linearly decoded and original image pixel intensities, and then
evaluated this nonlinear decoding on held out data. This approach produced a small improvement
in reconstruction: 3.25% reduction in MSE compared to 34.50% for the CAE. This reveals that the
improvement in performance with the CAE involves nonlinear image enhancement beyond simple
remapping of pixel intensities. Decoding noisier neural responses especially highlights the benefits
of using the autoencoder: there are features identifiable in the CAE enhanced images that are not in
the linear decoder images (Supplementary Figure 6).
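The pointwise-nonlinearity control described above amounts to a single global polynomial regression between pixel intensities, pooled over all pixels; a sketch (degree 6, as in the text; names are ours):

```python
import numpy as np

def fit_pointwise_nonlinearity(I_ld, I_st, degree=6):
    # Map linearly decoded intensities to original intensities with one
    # global polynomial, then apply it pixelwise at test time.
    coeffs = np.polyfit(I_ld.ravel(), I_st.ravel(), degree)
    return np.poly1d(coeffs)

# poly = fit_pointwise_nonlinearity(I_ld_train, I_st_train)
# I_poly_test = poly(I_ld_test)
```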
The results shown here utilize a large training dataset size for the decoder so it is natural to ask
for a given fixed encoder model, how many training responses do we need to simulate to obtain a
good decoder. We tested this by fixing our encoder and then training the CAE decoder with varying
amounts of training data. (Supplementary Figure 8). We observed that even with a small training
data set of 20k examples, we can improve significantly on the linear decoder and after around 500k
examples, our performances begins to saturate. An analogous question can be asked about the amount
of training data required to fit a good encoder and we intend to explore this aspect in future work.
3.2 Phase Scrambled Training
A possible explanation for the improved performance of the CAE compared to linear decoding is
that it more fully exploits phase structure that is characteristic of natural images [2], perhaps by
incorporating priors on phase structure that are not captured by linear decoding. To test this possibility,
we trained both linear and CAE decoders on phase-scrambled natural images. The CAE input was
produced by the linear decoder trained on the same image type as that CAE. Observed responses
of RGCs to these stimuli followed approximately the same marginal distribution as responses to
the original natural images. We then compared the performance of these linear and CAE decoders
to the performance of the original decoders, on the original natural images (Figure 4). The linear
5
[Figure 3 panels A and B: scatter plots of CAE vs. linear decoder performance on the test set (log-log for MSE); panel C: example images.]

Figure 3: Comparison of linear and CAE decoding. A) MSE on a log-log plot for the ImageNet 10k-example test set, comparing the linear decoder with the (Linear + CAE) model trained on ImageNet (only 1k subsampled examples are plotted here for visualization purposes). B) 1-SSIM version of the same figure. C) Example images from the test set showing the original, linear decoded, and CAE-enhanced versions. The average (MSE, 1-SSIM) for the LD over the full test set was (0.0077, 0.35), and the corresponding averages for the CAE were (0.0051, 0.25).
decoders exhibited similar decoding performance when trained on the original and phase-scrambled
images, while the CAE exhibited substantially higher performance when trained on real images.
These findings are consistent with the idea that the CAE is able to capture prior information on
image phase structure not captured by linear decoding. However, direct comparisons of the CAE
and LD trained and tested on phase scrambled images show that the CAE does still lead to some
improvements which are most likely just due to the increased complexity of the decoding model
(Supplementary Figure 7).
3.3 Context Dependent Training
The above results suggest that the CAE is capturing important natural image priors. However, it
remains unclear whether these priors are sufficient to decode specific classes of natural images as
accurately as decoding models that are tuned to incorporate class-specific priors. We explored this in
the context of human faces by fully re-training a class-specific CAE using the CelebA face dataset.
[Figure 4 panels A–D: scatter plots comparing models trained on ImageNet vs. phase-scrambled images (MSE and 1-SSIM); panel E: example reconstructions.]

Figure 4: Comparison of phase scrambled and ImageNet trained models. A) MSE on a log-log plot comparing the performance of the linear decoder fit on natural images to the linear decoder fit on phase-scrambled images. The subscript of each model indicates the dataset on which it was trained. The reported MSE values are based on performance on the natural image test set (1k subsampled examples shown). B) Similar plot to A but comparing the CAE fit on natural images to the CAE fit on phase-scrambled images. C) 1-SSIM version of A. D) 1-SSIM version of B. E) One example test natural image (represented by the blue dot in A-D) showing the reconstructions from all 4 models and the phase-scrambled version.
Both linear and CAE models were trained from scratch (random initialization) using only this dataset.
As with the phase scrambled comparisons, the CAE input is produced by the linear decoder trained
on the same image type. We then compare these different linear decoder and CAE models on a test
set of CelebA faces. For the linear decoders, we see a 17% improvement in average test MSE and a
14% improvement in 1-SSIM when training on CelebA as compared to training on ImageNet (Figure
5 A and C). We find that the differences in MSE and 1-SSIM between the differently trained CAEs
are smaller (5% improvement in MSE and a 4% improvement in 1-SSIM) (Figure 5 B and D). The
much smaller difference in MSE and 1-SSIM suggests that the CAE decoder does a better job at
generalizing to unseen context-specific classes than the linear decoder. However, the images show
that there are still important face-specific features (such as nose and eye definition) that are much
better decoded by the CAE trained on faces (Figure 5E). This suggests that while the natural image
statistics captured by the CAE do help improve its generalization to more structured classes, there are
still significant benefits in training class-specific models.
4 Discussion
The work presented here develops a novel approximate Bayesian decoding technique that uses
non-linear DNNs to decode images from simulated responses of retinal neurons. The approach
substantially outperforms linear reconstruction techniques that have usually been used to decode
neural responses to high-dimensional stimuli.
Perhaps the most successful previous applications of Bayesian neural decoding are in cases where the
variable to be decoded is low-dimensional. The work of [5] stimulated much progress in hippocampus
and motor cortex using Bayesian state-space approaches applied to low-dimensional (typically
two- or three-dimensional) position variables; see also [16] and [28] for further details.

[Figure 5 panels A–D: scatter plots comparing models trained on CelebA vs. ImageNet (MSE and 1-SSIM on the CelebA test set); panel E: example reconstructions.]

Figure 5: Comparison of CelebA and ImageNet trained models. A) MSE on a log-log plot comparing the performance of the linear decoder fit on CelebA to the linear decoder fit on ImageNet. The subscript of each model indicates the dataset on which it was trained. The reported MSE values are based on performance on the natural image test set (1k subsampled examples shown). B) Similar plot to A but comparing the CAE fit on CelebA to the CAE fit on ImageNet. C) 1-SSIM version of A. D) 1-SSIM version of B. E) One example test natural image (represented by the blue dot in A-D) showing the reconstructions from all 4 models.

The low
dimensionality of the state variable and simple Markovian priors lead to fast Bayesian computation
in these models. At the same time, non-Bayesian approaches based on support vector regression [32]
or recurrent neural networks [34] have also proven powerful in these applications.
Decoding information from the retina or early visual pathway requires efficient computations over
objects of much larger dimensionality: images and movies. Several threads are worth noting here.
First, some previous work has focused on decoding of flicker stimuli [37] or motion statistics [18, 23],
both of which reduce to low-dimensional decoding problems. Other work has applied straightforward
linear decoding methods [33, 9]. Finally, some work has tackled the challenging problem of decoding
still images undergoing random perturbations due to eye movements [6, 1]. These studies developed
approximate Bayesian decoders under simplified natural image priors, and it would be interesting in
future work to examine potential extensions of our approach to those applications.
While our focus here has been on the decoding of spike counts from populations of neurons recorded
with single-cell precision, the ideas developed here could also be applied in the context of decoding
fMRI data. Our approach shares some conceptual similarity to previous work [25, 27] which used
elegant encoding models combined with brute-force computation over a large discrete sample space
to compute posteriors, and to other work [38] which used neural network methods similar to those
developed in [41] to decode image features. Our approach, for example, could be extended to
replace a brute-force discrete-sample decoder [25, 27] with a decoder that operates over the full
high-dimensional continuous space of all images.
Many state-of-the-art models for in-painting and super-resolution image enhancement rely on generative adversarial networks (GANs). However, these models currently require specific architecture
tuning based on the exact problem structure. Because our problem involves some complex and unknown combination of denoising, super-resolution, and inpainting, we required a more robust model
that could be tested with little hand-tuning. Furthermore, we have no parametric form for the noise
in the linear decoded images, so standard pre-trained networks could not be applied directly. Based
on previous work in [39], it seems that autoencoder architectures can robustly achieve reasonable
results for these types of tasks; therefore, we chose the CAE architecture as a useful starting point.
We have begun to explore GAN architectures, but these early results do not show any significant
improvements over our CAE model. We plan to explore these networks further in future work.
In Section 3.3 we saw that even though there were small differences in MSE and 1-SSIM between
the outputs of the CAE decoders trained on ImageNet vs. CelebA datasets, visually there were still
significant differences. The most likely explanation for this discrepancy is that these loss functions
are imperfect and do not adequately capture perceptually relevant differences between two images.
In recent years, more complex perceptual similarity metrics have gained traction in the deep learning
community [42, 20, 13]. While we did not extensively explore this aspect, we have done some
preliminary experiments that suggest that using just a standard VGG-based perceptual metric [13]
decreases some blurring seen using MSE, but does not significantly improve decoding in a robust
way. We plan to further explore these ideas by implementing perceptual loss functions that utilize
more of our understanding of operations in the early human visual system [19]. Progress in this space
is vital as any retinal prosthetics application of this work would require decoding of visual scenes
that is accurate by perceptual metrics rather than MSE.
We have shown improved reconstruction based on simulated data; clearly, an important next step
is to apply this approach to decode real experimental data. In addition, we have shown better CAE
reconstruction only based on one perfect mosaic of the simulated neurons. In reality, these mosaics
differ from retina to retina and there are gaps in the mosaic when we record from retinal neurons.
Therefore, it will be important to investigate whether the CAE can learn to generalize over different
mosaic patterns. We also plan to explore reconstruction of movies and color images.
The present results have two implications for visual neuroscience. First, the results provide a
framework for understanding how an altered neural code, such as the patterns of activity elicited in
a retinal prosthesis, could influence perception of the visual image. With our approach, this can be
assessed in the image domain directly (instead of the domain of spikes) by examining the quality
of "optimal" reconstruction from electrical activity induced by the prosthesis. Second, the results
provide a way to understand which aspects of natural scenes are effectively encoded in the natural
output of the retina, again, as assessed in the image domain. Previous efforts toward these two goals
have relied on linear reconstruction. The substantially higher performance of the CAE provides a
more stringent assessment of prosthesis function, and suggests that the retina may convey visual
images to the brain with higher fidelity than was previously appreciated.
5 Acknowledgments
NSF GRFP DGE-16-44869 (EB); NSF/NIH Collaborative Research in Computational Neuroscience Grant IIS-1430348/1430239 (EJC & LP); DARPA Contract FA8650-16-1-7657 (EJC); Simons Foundation SF-SCGB-365002 (LP); IARPA MICRONS D16PC00003 (LP); DARPA N66001-17-C-4002 (LP).
References
[1] Alexander G. Anderson, Bruno A. Olshausen, Kavitha Ratnam, and Austin Roorda. A neural model of high-acuity vision in the presence of fixational eye movements. In Signals, Systems and Computers, 2016 50th Asilomar Conference on, pages 588–592. IEEE, 2016.
[2] Elizabeth Arsenault, Ahmad Yoonessi, and Curtis Baker. Higher order texture statistics impair contrast boundary segmentation. Journal of Vision, 11(10):14, 2011.
[3] Eleanor Batty, Josh Merel, Nora Brackbill, Alexander Heitman, Alexander Sher, Alan Litke, E.J. Chichilnisky, and Liam Paninski. Multilayer recurrent network models of primate retinal ganglion cell responses. International Conference on Learning Representations, 2017.
[4] Gijs Joost Brouwer and David J. Heeger. Decoding and reconstructing color from responses in human visual cortex. The Journal of Neuroscience, 29(44):13992–14003, 2009.
[5] Emery N. Brown, Loren M. Frank, Dengda Tang, Michael C. Quirk, and Matthew A. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18(18):7411–7425, 1998.
[6] Yoram Burak, Uri Rokni, Markus Meister, and Haim Sompolinsky. Bayesian model of dynamic image stabilization in the visual system. Proceedings of the National Academy of Sciences, 107(45):19525–19530, 2010.
[7] E.J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199–213, 2001.
[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
[9] Ariadna R. Diaz-Tahoces, Antonio Martinez-Alvarez, Alejandro Garcia-Moll, and Eduardo Fernandez. Towards the reconstruction of moving images by populations of retinal ganglion cells. In 6th International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC, volume 9107, 2015.
[10] E.S. Frechette, A. Sher, M.I. Grivich, D. Petrusca, A.M. Litke, and E.J. Chichilnisky. Fidelity of the ensemble code for visual motion in primate retina. Journal of Neurophysiology, 94(1):119–135, 2005.
[11] Lovedeep Gondara. Medical image denoising using convolutional denoising autoencoders. arXiv preprint arXiv:1608.04667, 2016.
[12] Alexander Heitman, Nora Brackbill, Martin Greschner, Alexander Sher, Alan M. Litke, and E.J. Chichilnisky. Testing pseudo-linear models of responses to natural scenes in primate retina. bioRxiv, page 045336, 2016.
[13] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
[14] Yukiyasu Kamitani and Frank Tong. Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5):679–685, 2005.
[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Shinsuke Koyama, Lucia Castellanos Pérez-Bolde, Cosma Rohilla Shalizi, and Robert E. Kass. Approximate methods for state-space models. Journal of the American Statistical Association, 105(489):170–180, 2010.
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[18] Edmund C. Lalor, Yashar Ahmadian, and Liam Paninski. The relationship between optimal and biologically plausible decoding of stimulus velocity in the retina. JOSA A, 26(11):B25–B42, 2009.
[19] Valero Laparra, Alex Berardino, Johannes Ballé, and Eero P. Simoncelli. Perceptually optimized image rendering. arXiv preprint arXiv:1701.06641, 2017.
[20] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[21] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
[22] Xiao-Jiao Mao, Chunhua Shen, and Yu-Bin Yang. Image restoration using convolutional autoencoders with symmetric skip connections. In Advances in Neural Information Processing Systems, 2016.
[23] Olivier Marre, Vicente Botella-Soler, Kristina D. Simmons, Thierry Mora, Gašper Tkačik, and Michael J. Berry II. High accuracy decoding of dynamical motion from a large retinal population. PLoS Computational Biology, 11(7):e1004304, 2015.
[24] Lane McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen A. Baccus. Deep learning models of the retinal response to natural scenes. In Advances in Neural Information Processing Systems, 2016.
[25] Thomas Naselaris, Ryan J. Prenger, Kendrick N. Kay, Michael Oliver, and Jack L. Gallant. Bayesian reconstruction of natural images from human brain activity. Neuron, 63(9):902–915, 2009.
[26] Sheila Nirenberg and Chetan Pandarinath. Retinal prosthetic strategy with the capacity to restore normal vision. PNAS, 109(37), 2012.
[27] Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant. Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19):1641–1646, 2011.
[28] Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, and Wei Wu. A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1-2):107–126, 2010.
[29] Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, and Edward F. Chang. Reconstructing speech from human auditory cortex. PLOS Biology, 10(1), 2012.
[30] Alexandro D. Ramirez, Yashar Ahmadian, Joseph Schumacher, David Schneider, Sarah M. N. Woolley, and Liam Paninski. Incorporating naturalistic correlation structure improves spectrogram reconstruction from neuronal activity in the songbird auditory midbrain. Journal of Neuroscience, 31(10):3828–3842, 2011.
[31] Fred Rieke, David Warland, Rob de Ruyter van Steveninck, and William Bialek. Spikes: Exploring the Neural Code. MIT Press, Cambridge, MA, USA, 1999.
[32] Lavi Shpigelman, Hagai Lalazar, and Eilon Vaadia. Kernel-ARMA for hand tracking and brain-machine interfacing during 3D motor control. In Advances in Neural Information Processing Systems, pages 1489–1496, 2009.
[33] Garrett B. Stanley, Fei F. Li, and Yang Dan. Reconstruction of natural scenes from ensemble responses in the lateral geniculate nucleus. Journal of Neuroscience, 19(18):8036–8042, 1999.
[34] David Sussillo, Sergey D. Stavisky, Jonathan C. Kao, Stephen I. Ryu, and Krishna V. Shenoy. Making brain-machine interfaces robust to future neural variability. Nature Communications, 7, 2016.
[35] Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Wen Han, Jianchao Yang, and Thomas S. Huang. Self-tuned deep super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015.
[36] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[37] David K. Warland, Pamela Reinagel, and Markus Meister. Decoding visual information from a population of retinal ganglion cells. Journal of Neurophysiology, 78(5):2336–2350, 1997.
[38] Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, and Zhongming Liu. Neural encoding and decoding with deep learning for dynamic natural vision. arXiv preprint arXiv:1608.03425, 2016.
[39] Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems, pages 341–349, 2012.
[40] Kai Xu, Yueming Wang, Shaomin Zhang, Ting Zhao, Yiwen Wang, Weidong Chen, and Xiaoxiang Zhang. Comparisons between linear and nonlinear methods for decoding motor cortical activities of monkey. In Engineering in Medicine and Biology Society, EMBC, Annual International Conference of the IEEE, 2011.
[41] Daniel L. K. Yamins, Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
[42] Hang Zhao, Orazio Gallo, Iuri Frosio, and Jan Kautz. Loss functions for neural networks for image processing. arXiv preprint arXiv:1511.08861, 2015.
6,880 | 7,223 | Causal Effect Inference with
Deep Latent-Variable Models
Christos Louizos
University of Amsterdam
TNO Intelligent Imaging
[email protected]
Uri Shalit
New York University
CIMS
[email protected]
David Sontag
Massachusetts Institute of Technology
CSAIL & IMES
[email protected]
Joris Mooij
University of Amsterdam
[email protected]
Richard Zemel
University of Toronto
CIFAR*
[email protected]
Max Welling
University of Amsterdam
CIFAR*
[email protected]
Abstract
Learning individual-level causal effects from observational data, such as inferring
the most effective medication for a specific patient, is a problem of growing
importance for policy makers. The most important aspect of inferring causal effects
from observational data is the handling of confounders, factors that affect both an
intervention and its outcome. A carefully designed observational study attempts
to measure all important confounders. However, even if one does not have direct
access to all confounders, there may exist noisy and uncertain measurement of
proxies for confounders. We build on recent advances in latent variable modeling
to simultaneously estimate the unknown latent space summarizing the confounders
and the causal effect. Our method is based on Variational Autoencoders (VAE)
which follow the causal structure of inference with proxies. We show our method
is significantly more robust than existing methods, and matches the state-of-the-art
on previous benchmarks focused on individual treatment effects.
1 Introduction
Understanding the causal effect of an intervention t on an individual with features X is a fundamental
problem across many domains. Examples include understanding the effect of medications on a
patient's health, or of teaching methods on a student's chance of graduation. With the availability
of large datasets in domains such as healthcare and education, there is much interest in developing
methods for learning individual-level causal effects from observational data [42, 53, 25, 43].
The most crucial aspect of inferring causal relationships from observational data is confounding. A
variable which affects both the intervention and the outcome is known as a confounder of the effect
of the intervention on the outcome. On the one hand, if such a confounder can be measured, the
standard way to account for its effect is by "controlling" for it, often through covariate adjustment or
propensity score re-weighting [39]. On the other hand, if a confounder is hidden or unmeasured,
it is impossible in the general case (i.e. without further assumptions) to estimate the effect of the
intervention on the outcome [40]. For example, socio-economic status can affect both the medication
a patient has access to, and the patient's general health. Therefore socio-economic status acts as
confounder between the medication and health outcomes, and without measuring it we cannot in
* Canadian Institute For Advanced Research
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: causal graph over the treatment t, outcome y, hidden confounder Z and proxies X; see caption.]
Figure 1: Example of a proxy variable. t is a treatment, e.g. medication; y is an outcome, e.g. mortality. Z is an unobserved confounder, e.g. socio-economic status; and X is noisy views on the hidden confounder Z, say income in the last year and place of residence.
general isolate the causal effect of medications on health measures. Henceforth we will denote
observed potential confounders² by X, and unobserved confounders by Z.
In most real-world observational studies we cannot hope to measure all possible confounders. For
example, in many studies we cannot measure variables such as personal preferences or most genetic
and environmental factors. An extremely common practice in these cases is to rely on so-called
"proxy variables" [38, 6, 36, Ch. 11]. For example, we cannot measure the socio-economic status of
patients directly, but we might be able to get a proxy for it by knowing their zip code and job type.
One of the promises of using big-data for causal inference is the existence of myriad proxy variables
for unmeasured confounders.
How should one use these proxy variables? The answer depends on the relationship between the
hidden confounders, their proxies, the intervention and outcome [31, 37]. Consider for example the
causal graphs in Figure 1: it's well known [20, 15, 18, 31, 41] that it is often incorrect to treat the
proxies X as if they are ordinary confounders, as this would induce bias. See the Appendix for a
simple example of this phenomena. The aforementioned papers give methods which are guaranteed
to recover the true causal effect when proxies are observed. However, the strong guarantees these
methods enjoy rely on strong assumptions. In particular, it is assumed that the hidden confounder is
either categorical with known number of categories, or that the model is linear-Gaussian.
In practice, we cannot know the exact nature of the hidden confounder Z: whether it is categorical
or continuous, or if categorical how many categories it includes. Consider socio-economic status
(SES) and health. Should we conceive of SES as a continuous or ordinal variable? Perhaps SES
as confounder is comprised of two dimensions, the economic one (related to wealth and income)
and the social one (related to education and cultural capital). Z might even be a mix of continuous
and categorical, or be high-dimensional itself. This uncertainty makes causal inference a very hard
problem even with proxies available. We propose an alternative approach to causal effect inference
tailored to the surrogate-rich setting when many proxies are available: estimation of a latent-variable
model where we simultaneously discover the hidden confounders and infer how they affect treatment
and outcome. Specifically, we focus on (approximate) maximum-likelihood based methods.
Although in many cases learning latent-variable models are computationally intractable [50, 7],
the machine learning community has made significant progress in the past few years developing
computationally efficient algorithms for latent-variable modeling. These include methods with
provable guarantees, typically based on the method-of-moments (e.g. Anandkumar et al. [4]); as
well as robust, fast, heuristics such as variational autoencoders (VAEs) [27, 46], based on stochastic
optimization of a variational lower bound on the likelihood, using so-called recognition networks for
approximate inference.
Our paper builds upon VAEs. This has the disadvantage that little theory is currently available to
justify when learning with VAEs can identify the true model. However, they have the significant
advantage that they make substantially weaker assumptions about the data generating process and
the structure of the hidden confounders. Since their recent introduction, VAEs have been shown to
be remarkably successful in capturing latent structure across a wide-range of previously difficult
problems, such as modeling images [19], volumes [24], time-series [10] and fairness [34].
² Including observed covariates which do not affect the intervention or outcome, and therefore are not truly confounders.
We show that in the presence of noisy proxies, our method is more robust against hidden confounding,
in experiments where we successively add noise to known-confounders. Towards that end we
introduce a new causal inference benchmark using data about twin births and mortalities in the USA.
We further show that our method is competitive on two existing causal inference benchmarks. Finally,
we note that our method does not currently deal with the related problem of selection bias, and we
leave this to future work.
Related work. Proxy variables and the challenges of using them correctly have long been considered
in the causal inference literature [54, 14]. Understanding what is the best way to derive and measure
possible proxy variables is an important part of many observational studies [13, 29, 55]. Recent work
by Cai and Kuroki [9], Greenland and Lash [18], building on the work of Greenland and Kleinbaum
[17], Selén [47], has studied conditions for causal identifiability using proxy variables. The general
idea is that in many cases one should first attempt to infer the joint distribution p(X, Z) between the
proxy and the hidden confounders, and then use that knowledge to adjust for the hidden confounders
[55, 41, 32, 37, 12]. For the example in Figure 1, Cai and Kuroki [9], Greenland and Lash [18], Pearl
[41] show that if Z and X are categorical, with X having at least as many categories as Z, and with
the matrix p(X, Z) being full-rank, one could identify the causal effect of t on y using a simple
matrix inversion formula, an approach called "effect restoration". Conditions under which one could
identify more general and complicated proxy models were recently given by [37].
2 Identification of causal effect
Throughout this paper we assume the causal model in Figure 1. For simplicity and compatibility with
prior benchmarks we assume that the treatment t is binary, but our proposed method does not rely
on that. We further assume that the joint distribution p (Z, X, t, y) of the latent confounders Z and
the observed confounders X can be approximately recovered solely from the observations (X, t, y).
While this is impossible if the hidden confounder has no relation to the observed variables, there
are many cases where this is possible, as mentioned in the introduction. For example, if X includes
three independent views of Z [4, 22, 16, 2]; if Z is categorical and X is a Gaussian mixture model
with components determined by X [5]; or if Z is comprised of binary variables and X are so-called
?noisy-or? functions of Z [23, 8]. Recent results show that certain VAEs can recover a very large
class of latent-variable models [51] as a minimizer of an optimization problem; the caveat is that the
optimization process is not guaranteed to achieve the true minimum even if it is within the capacity
of the model, similar to the case of classic universal approximation results for neural networks.
2.1 Identifying individual treatment effect
Our goal in this paper is to recover the individual treatment effect (ITE), also known as the conditional average treatment effect (CATE), of a treatment t, as well as the average treatment effect (ATE):

$$\mathrm{ITE}(x) := \mathbb{E}[y \mid X{=}x, \mathrm{do}(t{=}1)] - \mathbb{E}[y \mid X{=}x, \mathrm{do}(t{=}0)], \qquad \mathrm{ATE} := \mathbb{E}[\mathrm{ITE}(x)]$$
Identification in our case is an immediate result of Pearl's back-door adjustment formula [40]:
Theorem 1. If we recover p (Z, X, t, y) then we recover the ITE under the causal model in Figure 1.
Proof. We will prove that p (y|X, do(t = 1)) is identifiable under the premise of the theorem. The
case for t = 0 is identical, and the expectations in the definition of ITE above are readily recovered from
the probability function. ATE is identified if ITE is identified. We have that:
$$p(y \mid X, \mathrm{do}(t{=}1)) = \int_Z p(y \mid X, \mathrm{do}(t{=}1), Z)\, p(Z \mid X, \mathrm{do}(t{=}1))\, dZ \overset{(i)}{=} \int_Z p(y \mid X, t{=}1, Z)\, p(Z \mid X)\, dZ, \qquad (1)$$
where equality (i) is by the rules of do-calculus applied to the causal graph in Figure 1 [40]. This
completes the proof since the quantities in the final expression of Eq. (1) can be identified from the
distribution p(Z, X, t, y) which we know by the Theorem's premise.
Note that the proof and the resulting estimator in Eq. (1) would be identical whether there is or there is
not an edge from X to t. This is because we intervene on t. Also note that for the model in Figure 1,
y is independent of X given Z, and we obtain: $p(y \mid X, \mathrm{do}(t{=}1)) = \int_Z p(y \mid t{=}1, Z)\, p(Z \mid X)\, dZ$.
In the next section we will show how we estimate p (Z, X, t, y) from observations of (X, t, y).
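For concreteness, combining Eq. (1) with the definition of the ITE yields the plug-in form below; this is only a restatement of the identities above, written out in LaTeX for reference:

```latex
\begin{align*}
\mathrm{ITE}(x) &= \mathbb{E}[y \mid X{=}x, \mathrm{do}(t{=}1)] - \mathbb{E}[y \mid X{=}x, \mathrm{do}(t{=}0)] \\
                &= \int_Z \big( \mathbb{E}[y \mid t{=}1, Z] - \mathbb{E}[y \mid t{=}0, Z] \big)\, p(Z \mid X{=}x)\, dZ .
\end{align*}
```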
3 Causal effect variational autoencoder
[Figure 2: (a) Inference network, q(z, t, y|x). (b) Model network, p(x, z, t, y).]
Figure 2: Overall architecture of the model and inference networks for the Causal Effect Variational Autoencoder (CEVAE). White nodes correspond to parametrized deterministic neural network transitions, gray nodes correspond to drawing samples from the respective distribution and white circles correspond to switching paths according to the treatment t.
The approach we take in this paper to the problem of learning the latent variable causal model is
by using variational autoencoders [27, 46] to infer the complex non-linear relationships between X
and (Z, t, y) and approximately recover p (Z, X, t, y). Recent work has dramatically increased the
range and type of distributions which can be captured by VAEs [51, 45, 28]. The drawback of these
methods is that because of the difficulty of guaranteeing global optima of neural net optimization,
one cannot ensure that any given instance will find the true model even if it is within the model class.
We believe this drawback is offset by the strong empirical performance across many domains of deep
neural networks in general, and VAEs in particular. Specifically, we propose to parametrize the causal
graph of Figure 1 as a latent variable model with neural net functions connecting the variables of
interest. The flexible non-linear nature of neural nets will hopefully allow us to approximate well the
true interactions between the treatment and its effect.
Our design choices are mostly typical for VAEs: we assume the observations factorize conditioned
on the latent variables, and use an inference network [27, 46] which follows a factorization of
the true posterior. For the generative model we use an architecture inspired by TARnet [48], but
instead of conditioning on observations we condition on the latent variables z; see details below. For
the following, $x_i$ corresponds to an input datapoint (e.g. the feature vector of a given subject), $t_i$ corresponds to the treatment assignment, $y_i$ to the outcome of the particular treatment and $z_i$ corresponds to the latent hidden confounder. Each of the corresponding factors is described as:
$$p(z_i) = \prod_{j=1}^{D_z} \mathcal{N}(z_{ij} \mid 0, 1); \qquad p(x_i \mid z_i) = \prod_{j=1}^{D_x} p(x_{ij} \mid z_i); \qquad p(t_i \mid z_i) = \mathrm{Bern}(\sigma(f_1(z_i))), \qquad (2)$$
with $p(x_{ij} \mid z_i)$ being an appropriate probability distribution for the covariate j and $\sigma(\cdot)$ being the
logistic function, $D_x$ the dimension of x and $D_z$ the dimension of z. For a continuous outcome
we parametrize the probability distribution as a Gaussian with its mean given by a TARnet [48]
architecture, i.e. a treatment specific function, and its variance fixed to $\hat{v}$, whereas for a discrete
outcome we use a Bernoulli distribution similarly parametrized by a TARnet:
$$p(y_i \mid t_i, z_i) = \mathcal{N}(\mu = \hat{\mu}_i, \sigma^2 = \hat{v}), \qquad \hat{\mu}_i = t_i f_2(z_i) + (1 - t_i) f_3(z_i) \qquad (3)$$
$$p(y_i \mid t_i, z_i) = \mathrm{Bern}(\pi = \hat{\pi}_i), \qquad \hat{\pi}_i = \sigma\big(t_i f_2(z_i) + (1 - t_i) f_3(z_i)\big). \qquad (4)$$
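To make Eqs. (2)-(4) concrete, here is a minimal PyTorch sketch of the TARnet-style model network for a binary outcome. The class name, layer sizes and attribute names are our own illustrative choices and are not taken from the authors' released implementation:

```python
import torch
import torch.nn as nn

class ModelNetwork(nn.Module):
    """Sketch of p(t|z) and p(y|t,z) from Eqs. (2)-(4), binary outcome."""
    def __init__(self, d_z, hidden=200):
        super().__init__()
        self.f1 = nn.Linear(d_z, 1)  # logits of p(t=1|z)
        # Treatment-specific outcome heads, as in a TARnet-style decoder:
        self.f2 = nn.Sequential(nn.Linear(d_z, hidden), nn.ELU(), nn.Linear(hidden, 1))  # t = 1
        self.f3 = nn.Sequential(nn.Linear(d_z, hidden), nn.ELU(), nn.Linear(hidden, 1))  # t = 0

    def forward(self, z, t):
        p_t = torch.distributions.Bernoulli(logits=self.f1(z).squeeze(-1))
        # pi_i = sigmoid(t * f2(z) + (1 - t) * f3(z)), matching Eq. (4)
        logit_y = t * self.f2(z).squeeze(-1) + (1 - t) * self.f3(z).squeeze(-1)
        p_y = torch.distributions.Bernoulli(logits=logit_y)
        return p_t, p_y
```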
Note that each of the $f_k(\cdot)$ is a neural network parametrized by its own parameters $\theta_k$ for k = 1, 2, 3.
As we do not a-priori know the hidden confounder z we have to marginalize over it in order to
learn the parameters of the model ?k . Since the non-linear neural network functions make inference
intractable we will employ variational inference along with inference networks; these are neural
networks that output the parameters of a fixed form posterior approximation over the latent variables
z, e.g. a Gaussian, given the observed variables. By the definition of the model in Figure 1 we can see
that the true posterior over Z depends on X, t and y. Therefore we employ the following posterior
approximation:
$$q(z_i \mid x_i, t_i, y_i) = \prod_{j=1}^{D_z} \mathcal{N}(\mu_j = \bar{\mu}_{ij},\; \sigma_j^2 = \bar{\sigma}_{ij}^2) \qquad (5)$$
$$\bar{\mu}_i = t_i\, \mu_{t=1,i} + (1 - t_i)\, \mu_{t=0,i}, \qquad \bar{\sigma}_i^2 = t_i\, \sigma_{t=1,i}^2 + (1 - t_i)\, \sigma_{t=0,i}^2$$
$$\mu_{t=0,i},\, \sigma_{t=0,i}^2 = g_2 \circ g_1(x_i, y_i), \qquad \mu_{t=1,i},\, \sigma_{t=1,i}^2 = g_3 \circ g_1(x_i, y_i),$$
where we similarly use a TARnet [48] architecture for the inference network, i.e. split them for each
treatment group in t after a shared representation $g_1(x_i, y_i)$, and each $g_k(\cdot)$ is a neural network with
variational parameters $\phi_k$. We can now form a single objective for the inference and model networks,
the variational lower bound of this graphical model [27, 46]:
$$\mathcal{L} = \sum_{i=1}^{N} \mathbb{E}_{q(z_i \mid x_i, t_i, y_i)}\big[\log p(x_i, t_i \mid z_i) + \log p(y_i \mid t_i, z_i) + \log p(z_i) - \log q(z_i \mid x_i, t_i, y_i)\big]. \qquad (6)$$
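A single-sample Monte Carlo estimate of the bound in Eq. (6) can be sketched as follows; `encoder` (returning the Gaussian parameters of Eq. (5)) and `x_dist` (returning the distribution p(x|z)) are placeholder callables we introduce only for illustration:

```python
import torch
from torch.distributions import Normal

def elbo(x, t, y, encoder, model_net, x_dist):
    """One-sample Monte Carlo estimate of L in Eq. (6)."""
    mu, sigma = encoder(x, t, y)           # parameters of q(z|x,t,y), Eq. (5)
    q_z = Normal(mu, sigma)
    z = q_z.rsample()                      # reparametrized sample of z
    p_z = Normal(torch.zeros_like(mu), torch.ones_like(sigma))
    p_t, p_y = model_net(z, t)             # p(t|z) and p(y|t,z)
    return (x_dist(z).log_prob(x).sum(-1)  # log p(x|z)
            + p_t.log_prob(t)              # log p(t|z)
            + p_y.log_prob(y)              # log p(y|t,z)
            + p_z.log_prob(z).sum(-1)      # log p(z)
            - q_z.log_prob(z).sum(-1)      # - log q(z|x,t,y)
            ).sum()
```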
Notice that for out-of-sample predictions, i.e. new subjects, we would need to know the treatment
assignment t along with its outcome y before inferring the distribution over z. For this reason we will
introduce two auxiliary distributions that will help us predict ti , yi for new samples. More specifically,
we will employ the following distributions for the treatment assignment t and outcomes y:
$$q(t_i \mid x_i) = \mathrm{Bern}(\pi = \sigma(g_4(x_i))) \qquad (7)$$
$$q(y_i \mid x_i, t_i) = \mathcal{N}(\mu = \bar{\mu}_i, \sigma^2 = \bar{v}), \qquad \bar{\mu}_i = t_i\,(g_6 \circ g_5(x_i)) + (1 - t_i)\,(g_7 \circ g_5(x_i)) \qquad (8)$$
$$q(y_i \mid x_i, t_i) = \mathrm{Bern}(\pi = \bar{\pi}_i), \qquad \bar{\pi}_i = \sigma\big(t_i\,(g_6 \circ g_5(x_i)) + (1 - t_i)\,(g_7 \circ g_5(x_i))\big), \qquad (9)$$
where we choose eq. 8 for continuous and eq. 9 for discrete outcomes. To estimate the parameters of
these auxiliary distributions we will add two extra terms in the variational lower bound:
$$\mathcal{F}_{\mathrm{CEVAE}} = \mathcal{L} + \sum_{i=1}^{N} \big( \log q(t_i = t_i^* \mid x_i^*) + \log q(y_i = y_i^* \mid x_i^*, t_i^*) \big), \qquad (10)$$
with $x_i^*, t_i^*, y_i^*$ being the observed values for the input, treatment and outcome random variables in
the training set. We coin the name Causal Effect Variational Autoencoder (CEVAE) for our method.
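At test time only x is observed, so t and y are first imputed from the auxiliary networks of Eqs. (7)-(9) before the encoder is applied, matching the sampling scheme described in Section 4. A hedged sketch, where `q_t`, `q_y` and `encoder` stand for the fitted networks with interfaces we assume for illustration:

```python
import torch
from torch.distributions import Bernoulli, Normal

def sample_z_given_x(x, q_t, q_y, encoder, n_samples=100):
    """Draw z ~ q(z|x) by imputing t ~ q(t|x) and y ~ q(y|x,t), Eqs. (7)-(9)."""
    zs = []
    for _ in range(n_samples):
        t = Bernoulli(probs=q_t(x)).sample()   # q(t|x), Eq. (7)
        y = q_y(x, t).sample()                 # q(y|x,t), Eq. (8) or (9)
        mu, sigma = encoder(x, t, y)           # q(z|x,t,y), Eq. (5)
        zs.append(Normal(mu, sigma).sample())
    return torch.stack(zs)                     # shape (n_samples, batch, d_z)
```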
4 Experiments
Evaluating causal inference methods is always challenging because we usually lack ground-truth
for the causal effects. Common evaluation approaches include creating synthetic or semi-synthetic
datasets, where real data is modified in a way that allows us to know the true causal effect, or real-world
benchmark datasets where there is no need to model proxies, IHDP [21] and Jobs [33], often used
for evaluating individual level causal inference. In order to specifically explore the role of proxy
variables, we create a synthetic toy dataset, and introduce a new benchmark based on data of twin
births and deaths in the USA.
For the implementation of our model we used Tensorflow [1] and Edward [52]. For the neural network
architecture choices we closely followed [48]; unless otherwise specified we used 3 hidden layers
with ELU [11] nonlinearities for the approximate posterior over the latent variables q(Z|X, t, y),
the generative model p(X|Z) and the outcome models p(y|t, Z), q(y|t, X). For the treatment
models p(t|Z), q(t|X) we used a single hidden layer neural network with ELU nonlinearities.
Unless mentioned otherwise, we used a 20-dimensional latent variable z and used a small weight
decay term for all of the parameters with λ = .0001. Optimization was done with Adamax [26]
and a learning rate of 0.01, which was annealed with an exponential decay schedule. We further
performed early stopping according to the lower bound on a validation set. To compute the outcomes
p(y|X, do(t = 1)) and p(y|X, do(t = 0)) we averaged over 100 samples from the approximate
posterior $q(Z \mid X) = \sum_t \int q(Z \mid t, y, X)\, q(y \mid t, X)\, q(t \mid X)\, dy$.
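Given such posterior samples of z (for instance from the sampler sketched above), the interventional outcomes and the ITE can be estimated by averaging the decoder's outcome head with t clamped to 1 and to 0; a sketch under the same assumed interfaces:

```python
import torch

def estimate_ite(z_samples, model_net):
    """Monte Carlo estimate of E[y|x,do(t=1)] - E[y|x,do(t=0)] from z ~ q(z|x)."""
    t1 = torch.ones(z_samples.shape[:-1])
    t0 = torch.zeros_like(t1)
    _, p_y1 = model_net(z_samples, t1)     # p(y|t=1,z)
    _, p_y0 = model_net(z_samples, t0)     # p(y|t=0,z)
    # .mean of a torch distribution gives its expectation; average over z samples.
    return (p_y1.mean - p_y0.mean).mean(dim=0)
```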
Throughout this section we compare with several baseline methods. LR1 is logistic regression, LR2
is two separate logistic regressions fit to treated (t = 1) and control (t = 0). TARnet is a feed forward
neural network architecture for causal inference [48].
4.1 Benchmark datasets
For the first benchmark task we consider estimating the individual and population causal effects
on a benchmark dataset introduced by [21]; it is constructed from data obtained from the Infant
Health and Development Program (IHDP). Briefly, the confounders x correspond to collected
measurements of the children and their mothers used during a randomized experiment that studied
the effect of home visits by specialists on future cognitive test scores. The treatment assignment
is then "de-randomized" by removing from the treated set children with non-white mothers; for
each unit a treated and a control outcome are then simulated, thus allowing us to know the "true"
individual causal effects of the treatment. We follow [25, 48] and use 1000 replications of the
simulated outcome, along with the same train/validation/testing splits. To measure the accuracy of
the individual treatment effect estimation we use the Precision in Estimation of Heterogeneous Effect (PEHE) [21], $\mathrm{PEHE} = \frac{1}{N}\sum_{i=1}^{N} \big((y_{i1} - y_{i0}) - (\hat{y}_{i1} - \hat{y}_{i0})\big)^2$, where $y_1, y_0$ correspond to the true outcomes under t = 1 and t = 0, respectively, and $\hat{y}_1, \hat{y}_0$ correspond to the outcomes estimated by
our model. For the population causal effect we report the absolute error on the Average Treatment
Effect (ATE). The results can be seen in Table 1. As we can see, CEVAE has decent performance,
comparable to the Balancing Neural Network (BNN) of [25].
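For reference, the two evaluation metrics used here are straightforward to compute from true and predicted potential outcomes; a small NumPy sketch (the tables report the square root of the PEHE quantity):

```python
import numpy as np

def pehe(y1, y0, y1_hat, y0_hat):
    """Precision in Estimation of Heterogeneous Effect (PEHE) [21]."""
    return np.mean(((y1 - y0) - (y1_hat - y0_hat)) ** 2)

def ate_error(y1, y0, y1_hat, y0_hat):
    """Absolute error on the Average Treatment Effect."""
    return abs(np.mean(y1 - y0) - np.mean(y1_hat - y0_hat))
```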
Table 1: Within-sample and out-of-sample mean and standard errors for the metrics for the various models at the IHDP dataset.

| Method | √PEHE within-s. | ATE within-s. | √PEHE out-of-s. | ATE out-of-s. |
|---|---|---|---|---|
| OLS-1 | 5.8±.3 | .73±.04 | 5.8±.3 | .94±.06 |
| OLS-2 | 2.4±.1 | .14±.01 | 2.5±.1 | .31±.02 |
| BLR | 5.8±.3 | .72±.04 | 5.8±.3 | .93±.05 |
| k-NN | 2.1±.1 | .14±.01 | 4.1±.2 | .79±.05 |
| TMLE | 5.0±.2 | .30±.01 | - | - |
| BART | 2.1±.1 | .23±.01 | 2.3±.1 | .34±.02 |
| RF | 4.2±.2 | .73±.05 | 6.6±.3 | .96±.06 |
| CF | 3.8±.2 | .18±.01 | 3.8±.2 | .40±.03 |
| BNN | 2.2±.1 | .37±.03 | 2.1±.1 | .42±.03 |
| CFRW | .71±.0 | .25±.01 | .76±.0 | .27±.01 |
| CEVAE | 2.7±.1 | .34±.01 | 2.6±.1 | .46±.02 |
Table 2: Within-sample and out-of-sample policy risk and error on the average treatment effect on the treated (ATT) for the various models on the Jobs dataset.

| Method | Rpol within-s. | ATT within-s. | Rpol out-of-s. | ATT out-of-s. |
|---|---|---|---|---|
| LR-1 | .22±.0 | .01±.00 | .23±.0 | .08±.04 |
| LR-2 | .21±.0 | .01±.01 | .24±.0 | .08±.03 |
| BLR | .22±.0 | .01±.01 | .25±.0 | .08±.03 |
| k-NN | .02±.0 | .21±.01 | .26±.0 | .13±.05 |
| TMLE | .22±.0 | .02±.01 | - | - |
| BART | .23±.0 | .02±.00 | .25±.0 | .08±.03 |
| RF | .23±.0 | .03±.01 | .28±.0 | .09±.04 |
| CF | .19±.0 | .03±.01 | .20±.0 | .07±.03 |
| BNN | .20±.0 | .04±.01 | .24±.0 | .09±.04 |
| CFRW | .17±.0 | .04±.01 | .21±.0 | .09±.03 |
| CEVAE | .15±.0 | .02±.01 | .26±.0 | .03±.01 |
For the second benchmark we consider the task described in [48] and follow closely their procedure.
It uses a dataset obtained by the study of [33, 49], which concerns the effect of job training (treatment)
on employment after training (outcome). Due to the fact that a part of the dataset comes from a
randomized control trial we can estimate the "true" causal effect. Following [48] we report the
absolute error on the Average Treatment effect on the Treated (ATT), which is $\mathbb{E}[\mathrm{ITE}(X) \mid t = 1]$.
For the individual causal effect we use the policy risk, which acts as a proxy to the individual treatment
effect. The results after averaging over 10 train/validation/test splits can be seen in Table 2. As we
can observe, CEVAE is competitive with the state of the art, while overall achieving the best estimate
on the out-of-sample ATT.
4.2 Synthetic experiment on toy data
To illustrate that our model better handles hidden confounders we experiment on a toy simulated
dataset where the marginal distribution of X is a mixture of Gaussians, with the hidden variable Z
determining the mixture component. We generate synthetic data by the following process:
zi ? Bern (0.5) ;
xi |zi ? N zi , ?z21 zi + ?z20 (1 ? zi )
ti |zi ? Bern (0.75zi + 0.25(1 ? zi )) ;
yi |ti , zi ? Bern (Sigmoid (3(zi + 2(2ti ? 1)))) ,
(11)
where ?z0 = 3, ?z1 = 5 and Sigmoid is the logistic sigmoid function. This generation process
introduces hidden confounding between t and y as t and y depend on the mixture assignment z
for x. Since there is significant overlap between the two Gaussian mixture components we expect
that methods which do not model the hidden confounder z will not produce accurate estimates for
the treatment effects. We experiment with both a binary z for CEVAE, which is close to the true
model, as well as a 5-dimensional continuous z in order to investigate the robustness of CEVAE w.r.t.
model misspecification. We evaluate across sample sizes $N \in \{1000, 3000, 5000, 10000, 30000\}$
and provide the results in Figure 3. We see that no matter how many samples are given, LR1, LR2
and TARnet are not able to improve their error in estimating ATE directly from the proxies. On the
other hand, CEVAE achieves significantly less error. When the latent model is correctly specified
(CEVAE bin) we do better even with a small sample size; when it is not (CEVAE cont) we require
more samples for the latent space to imitate more closely the true binary latent variable.
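The generative process (11) is easy to reproduce; a minimal NumPy sketch (the function name and seeding are our own choices):

```python
import numpy as np

def sample_toy(n, sigma_z0=3.0, sigma_z1=5.0, seed=0):
    """Sample (x, t, y, z) from the generative process of Eq. (11)."""
    rng = np.random.default_rng(seed)
    z = rng.binomial(1, 0.5, size=n)
    # Since z is binary, the std of x|z reduces to sigma_z1*z + sigma_z0*(1-z).
    x = rng.normal(loc=z, scale=sigma_z1 * z + sigma_z0 * (1 - z))
    t = rng.binomial(1, 0.75 * z + 0.25 * (1 - z))
    p_y = 1.0 / (1.0 + np.exp(-3.0 * (z + 2.0 * (2.0 * t - 1.0))))
    y = rng.binomial(1, p_y)
    return x, t, y, z
```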
[Figure 3: plot of absolute ATE error (y-axis, 0.00-0.16) against log(N_samples) (x-axis, 3.0-4.5) for LR1, LR2, TARnet, CEVAE cont and CEVAE bin.]
Figure 3: Absolute error of estimating ATE on samples from the generative process (11). CEVAE bin and CEVAE cont are CEVAE with respectively binary or continuous 5-dim latent z. See text above for description of the other methods.
4.3 Binary treatment outcome on Twins
We introduce a new benchmark task that utilizes data from twin births in the USA between 1989-1991
[3]³. The treatment t = 1 is being born the heavier twin, whereas the outcome corresponds to the
mortality of each of the twins in their first year of life. Since we have records for both twins, their
outcomes could be considered as the two potential outcomes with respect to the treatment of being
born heavier. We only chose twins which are the same sex. Since the outcome is thankfully quite
rare (3.5% first-year mortality), we further focused on twins such that both were born weighing less
than 2kg. We thus have a dataset of 11984 pairs of twins. The mortality rate for the lighter twin is
18.9%, and for the heavier 16.4%, for an average treatment effect of −2.5%. For each twin-pair we
obtained 46 covariates relating to the parents, the pregnancy and birth: mother and father education,
marital status, race and residence; number of previous births; pregnancy risk factors such as diabetes,
renal disease, smoking and alcohol use; quality of care during pregnancy; whether the birth was at a
hospital, clinic or home; and number of gestation weeks prior to birth.
In this setting, for each twin pair we observed both the case t = 0 (lighter twin) and t = 1 (heavier
twin). In order to simulate an observational study, we selectively hide one of the two twins; if
we were to choose at random this would be akin to a randomized trial. In order to simulate the
case of hidden confounding with proxies, we based the treatment assignment on a single variable
which is highly correlated with the outcome: GESTAT10, the number of gestation weeks prior to
birth. It is ordinal with values from 0 to 9 indicating birth before 20 weeks gestation, birth after
20-27 weeks of gestation and so on⁴. We then set $t_i \mid x_i, z_i \sim \mathrm{Bern}\big(\sigma(w_o^\top x + w_h(z/10 - 0.1))\big)$,
$w_o \sim \mathcal{N}(0, 0.1 \cdot I)$, $w_h \sim \mathcal{N}(5, 0.1)$, where z is GESTAT10 and x are the 45 other features.
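A sketch of this simulated assignment, interpreting the 0.1 in N(0, 0.1 · I) and N(5, 0.1) as variances (our reading of the notation):

```python
import numpy as np

def assign_treatment(x_other, gestat10, seed=0):
    """t|x,z ~ Bern(sigmoid(w_o' x + w_h (z/10 - 0.1))) with z = GESTAT10.
    x_other: (n, 45) array of the remaining covariates; gestat10: (n,) in 0..9."""
    rng = np.random.default_rng(seed)
    w_o = rng.normal(0.0, np.sqrt(0.1), size=x_other.shape[1])  # w_o ~ N(0, 0.1 I)
    w_h = rng.normal(5.0, np.sqrt(0.1))                         # w_h ~ N(5, 0.1)
    logits = x_other @ w_o + w_h * (gestat10 / 10.0 - 0.1)
    return rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
```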
We created proxies for the hidden confounder as follows: We coded the 10 GESTAT10 categories with
one-hot encoding, replicated 3 times. We then randomly and independently flipped each of these 30
bits. We varied the probabilities of flipping from 0.05 to 0.5, the latter indicating there is no direct
information about the confounder. We chose three replications following the well-known result that
three independent views of a latent feature are what is needed to guarantee that it can be recovered
³ Data taken from the denominator file at http://www.nber.org/data/linked-birth-infant-death-data-vitalstatistics-data.html
⁴ The partition is given in the original dataset from NBER.
[30, 2, 5]. We note that there might still be proxies for the confounder in the other variables, such
as the incompetent cervix covariate which is a known risk factor for early birth. Having created
the dataset, we focus our attention on two tasks: Inferring the mortality of the unobserved twin
(counterfactual), and inferring the average treatment effect. We compare with TARnet, LR1 and LR2.
We vary the number of hidden layers for TARnet and CEVAE (nh in the figures). We note that while
TARnet with 0 hidden layers is equivalent to LR2, CEVAE with 0 hidden layers still infers a latent
space and is thus different. The results are given respectively in Figures 4(a) (higher is better) and
4(b) (lower is better).
For the counterfactual task, we see that for small proxy noise all methods perform similarly. This is
probably due to the gestation length feature being very informative; for LR1, the noisy codings of
this feature form 6 of the top 10 most predictive features for mortality, the others being sex (males
are more at risk), and 3 risk factors: incompetent cervix, mother lung disease, and abnormal amniotic
fluid. For higher noise, TARnet, LR1 and LR2 see roughly similar degradation in performance;
CEVAE, on the other hand, is much more robust to increasing proxy noise because of its ability to
infer a cleaner latent state from the noisy proxies. Of particular interest is CEVAE nh = 0, which
does much better for counterfactual inference than the equivalent LR2, probably because LR2 is
forced to rely directly on the noisy proxies instead of the inferred latent state. For inference of
average treatment effect, we see that at the low noise levels CEVAE does slightly worse than the
other methods, with CEVAE nh = 0 doing noticeably worse. However, similar to the counterfactual
case, CEVAE is significantly more robust to proxy noise, achieving quite a low error even when the
direct proxies are completely useless at noise level 0.5.
0.85
0.75
0.65
absolute ATE error
counterfactual AUC
0.08
CEVAE nh=0
CEVAE nh=1
CEVAE nh=2
LR2
LR1
TARnet nh=1
TARnet nh=2
0.1
0.06
CEVAE nh=0
CEVAE nh=1
CEVAE nh=2
LR2
LR1
TARnet nh=1
TARnet nh=2
0.04
0.02
0.2
0.3
proxy noise level
0.4
0.00
0.5
(a) Area under the curve (AUC) for predicting the
mortality of the unobserved twin in a hidden confounding experiment; higher is better.
0.1
0.2
0.3
proxy noise level
0.4
0.5
(b) Absolute error ATE estimate; lower is better.
Dashed black line indicates the error of using the
naive ATE estimator: the difference between the
average treated and average control outcomes.
Figure 4: Results on the Twins dataset. LR1 is logistic regression, LR2 is two separate logistic
regressions fit on the treated and control. ?nh? is number of hidden layers used. TARnet with nh = 0
is identical to LR2 and not shown, whereas CEVAE with nh = 0 has a latent space component.
5 Conclusion
In this paper we draw a connection between causal inference with proxy variables and the groundbreaking work in the machine learning community on latent variable models. Since almost all
observational studies rely on proxy variables, this connection is highly relevant.
We introduce a model which is the first attempt at tying these two ideas together: The Causal
Effect Variational Autoencoder (CEVAE), a neural network latent variable model used for estimating
individual and population causal effects. In extensive experiments we showed that it is competitive
with the state of the art on benchmark datasets, and more robust to hidden confounding both on a
toy artificial dataset as well as modifications of real datasets, such as the newly introduced Twins
dataset. For future work, we plan to employ the expanding set of tools available for latent variables
models (e.g. Kingma et al. [28], Tran et al. [51], Maaløe et al. [35], Ranganath et al. [44]), as well as
to further explore connections between method of moments approaches such as Anandkumar et al.
[5] with the methods for effect restoration given by Kuroki and Pearl [32], Miao et al. [37].
Acknowledgements
We would like to thank Fredrik D. Johansson for valuable discussions, feedback and for providing
the data for IHDP and Jobs. We would also like to thank Maggie Makar for helping with the Twins
dataset. Christos Louizos and Max Welling were supported by TNO, NWO and Google. Joris Mooij
was supported by the European Research Council (ERC) under the European Union's Horizon 2020
research and innovation programme (grant agreement 639466).
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean,
M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv
preprint arXiv:1603.04467, 2016.
[2] E. S. Allman, C. Matias, and J. A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, pages 3099–3132, 2009.
[3] D. Almond, K. Y. Chay, and D. S. Lee. The costs of low birth weight. The Quarterly Journal of Economics, 120(3):1031–1083, 2005.
[4] A. Anandkumar, D. J. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, volume 1, page 4, 2012.
[5] A. Anandkumar, R. Ge, D. J. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014.
[6] J. D. Angrist and J.-S. Pischke. Mostly harmless econometrics: An empiricist's companion. Princeton University Press, 2008.
[7] S. Arora and R. Kannan. Learning mixtures of separated nonspherical gaussians. Annals of Applied Probability, pages 69–92, 2005.
[8] S. Arora, R. Ge, T. Ma, and A. Risteski. Provable learning of noisy-or networks. CoRR, abs/1612.08795,
2016. URL http://arxiv.org/abs/1612.08795.
[9] Z. Cai and M. Kuroki. On identifying total effects in the presence of latent variables and selection bias. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, pages 62–69. AUAI Press, 2008.
[10] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.
[11] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential
linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
[12] J. K. Edwards, S. R. Cole, and D. Westreich. All your data are always missing: incorporating bias due to
measurement error into the potential outcomes framework. International Journal of Epidemiology, 44(4):
1452, 2015.
[13] D. Filmer and L. H. Pritchett. Estimating wealth effects without expenditure data - or tears: an application to educational enrollments in states of India. Demography, 38(1):115–132, 2001.
[14] P. A. Frost. Proxy variables and specification bias. The Review of Economics and Statistics, pages 323–325, 1979.
[15] W. Fuller. Measurement error models. Wiley Series in Probability and Mathematical Statistics, 1987.
[16] L. A. Goodman. Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61(2):215–231, 1974.
[17] S. Greenland and D. G. Kleinbaum. Correcting for misclassification in two-way tables and matched-pair studies. International Journal of Epidemiology, 12(1):93–97, 1983.
[18] S. Greenland and T. Lash. Bias analysis. In Modern Epidemiology, 3rd ed., pages 345–380. Lippincott Williams and Wilkins, 2008.
[19] K. Gregor, I. Danihelka, A. Graves, D. Jimenez Rezende, and D. Wierstra. DRAW: A Recurrent Neural
Network For Image Generation. ArXiv e-prints, Feb. 2015.
[20] Z. Griliches and J. A. Hausman. Errors in variables in panel data. Journal of Econometrics, 31(1):93–118, 1986.
[21] J. L. Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240, 2011.
[22] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460–1480, 2012.
[23] Y. Jernite, Y. Halpern, and D. Sontag. Discovering hidden variables in noisy-or networks using quartet tests. In Advances in Neural Information Processing Systems, pages 2355–2363, 2013.
[24] D. Jimenez Rezende, S. M. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised
Learning of 3D Structure from Images. ArXiv e-prints, July 2016.
[25] F. D. Johansson, U. Shalit, and D. Sontag. Learning representations for counterfactual inference. International Conference on Machine Learning (ICML), 2016.
[26] D. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning
Representations (ICLR), San Diego, 2015.
[27] D. P. Kingma and M. Welling. Auto-encoding variational bayes. International Conference on Learning
Representations (ICLR), 2014.
[28] D. P. Kingma, T. Salimans, and M. Welling. Improving variational inference with inverse autoregressive
flow. arXiv preprint arXiv:1606.04934, 2016.
[29] S. Kolenikov and G. Angeles. Socioeconomic status measurement with discrete proxy variables: Is principal component analysis a reliable answer? Review of Income and Wealth, 55(1):128–165, 2009.
[30] J. B. Kruskal. More factors than subjects, tests and treatments: an indeterminacy theorem for canonical decomposition and individual differences scaling. Psychometrika, 41(3):281–293, 1976.
[31] M. Kuroki and J. Pearl. Measurement bias and effect restoration in causal inference. Technical report,
DTIC Document, 2011.
[32] M. Kuroki and J. Pearl. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):
423, 2014.
[33] R. J. LaLonde. Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, pages 604–620, 1986.
[34] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel. The variational fair autoencoder. International
Conference on Learning Representations (ICLR), 2016.
[35] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[36] G. S. Maddala and K. Lahiri. Introduction to econometrics, volume 2. Macmillan New York, 1992.
[37] W. Miao, Z. Geng, and E. Tchetgen Tchetgen. Identifying causal effects with proxy variables of an
unmeasured confounder. arXiv preprint arXiv:1609.08816, 2016.
[38] M. R. Montgomery, M. Gragnolati, K. A. Burke, and E. Paredes. Measuring living standards with proxy variables. Demography, 37(2):155–174, 2000.
[39] S. L. Morgan and C. Winship. Counterfactuals and causal inference. Cambridge University Press, 2014.
[40] J. Pearl. Causality. Cambridge university press, 2009.
[41] J. Pearl. On measurement bias in causal inference. arXiv preprint arXiv:1203.3504, 2012.
[42] J. Pearl. Detecting latent heterogeneity. Sociological Methods & Research, page 0049124115600597,
2015.
[43] A. Peysakhovich and A. Lada. Combining observational and experimental data to find heterogeneous
treatment effects. arXiv preprint arXiv:1611.02385, 2016.
[44] R. Ranganath, D. Tran, J. Altosaar, and D. Blei. Operator variational inference. In Advances in Neural Information Processing Systems, pages 496–504, 2016.
[45] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[46] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1278–1286, 2014.
[47] J. Selén. Adjusting for errors in classification and measurement in the analysis of partly and purely categorical data. Journal of the American Statistical Association, 81(393):75–81, 1986.
[48] U. Shalit, F. Johansson, and D. Sontag. Estimating individual treatment effect: generalization bounds and
algorithms. ArXiv e-prints, June 2016.
[49] J. A. Smith and P. E. Todd. Does matching overcome LaLonde's critique of nonexperimental estimators? Journal of Econometrics, 125(1):305–353, 2005.
[50] B. Thiesson, C. Meek, D. M. Chickering, and D. Heckerman. Learning mixtures of dag models. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pages 504–513. Morgan Kaufmann Publishers Inc., 1998.
[51] D. Tran, R. Ranganath, and D. M. Blei. The variational Gaussian process. International Conference on
Learning Representations (ICLR), 2015.
[52] D. Tran, A. Kucukelbir, A. B. Dieng, M. Rudolph, D. Liang, and D. M. Blei. Edward: A library for
probabilistic modeling, inference, and criticism. arXiv preprint arXiv:1610.09787, 2016.
[53] S. Wager and S. Athey. Estimation and inference of heterogeneous treatment effects using random forests.
arXiv preprint arXiv:1510.04342, 2015.
[54] M. R. Wickens. A note on the use of proxy variables. Econometrica: Journal of the Econometric Society, pages 759–761, 1972.
[55] J. M. Wooldridge. On estimating firm-level production functions using proxy variables to control for unobservables. Economics Letters, 104(3):112–114, 2009.
6,881 | 7,224 | Learning Identifiable Gaussian Bayesian Networks in
Polynomial Time and Sample Complexity
Asish Ghoshal and Jean Honorio
Department of Computer Science, Purdue University, West Lafayette, IN - 47906
{aghoshal, jhonorio}@purdue.edu
Abstract
Learning the directed acyclic graph (DAG) structure of a Bayesian network from observational data is a notoriously difficult problem for which many non-identifiability
and hardness results are known. In this paper we propose a provably polynomialtime algorithm for learning sparse Gaussian Bayesian networks with equal noise
variance ? a class of Bayesian networks for which the DAG structure can be
uniquely identified from observational data ? under high-dimensional settings.
We show that O(k 4 log p) number of samples suffices for our method to recover
the true DAG structure with high probability, where p is the number of variables
and k is the maximum Markov blanket size. We obtain our theoretical guarantees
under a condition called restricted strong adjacency faithfulness (RSAF), which is
strictly weaker than strong faithfulness ? a condition that other methods based on
conditional independence testing need for their success. The sample complexity of
our method matches the information-theoretic limits in terms of the dependence on
p. We validate our theoretical findings through synthetic experiments.
1 Introduction and Related Work
Motivation. The problem of learning the directed acyclic graph (DAG) structure of Bayesian
networks (BNs) in general, and Gaussian Bayesian networks (GBNs), or equivalently linear Gaussian structural equation models (SEMs), in particular, from observational data has a long
history in the statistics and machine learning community. This is, in part, motivated by the desire to
uncover causal relationships between entities in domains as diverse as finance, genetics, medicine,
neuroscience and artificial intelligence, to name a few. Although in general, the DAG structure
of a GBN or linear Gaussian SEM cannot be uniquely identified from purely observational data
(i.e., multiple structures can encode the same conditional independence relationships present in the
observed data set), under certain restrictions on the generative model, the DAG structure can be
uniquely determined. Furthermore, the problem of learning the structure of BNs exactly is known
to be NP-complete even when the number of parents of a node is at most q, for q > 1, [1]. It is
also known that approximating the log-likelihood to a constant factor, even when the model class is
restricted to polytrees with at-most two parents per node, is NP-hard [2].
Peters and B?hlmann [3] recently showed that if the noise variances are the same, then the structure
of a GBN can be uniquely identified from observational data. As observed by them, this "assumption of equal error variances seems natural for applications with variables from a similar domain and is commonly used in time series models". Unfortunately, even for the equal noise-variance case, no
polynomial time algorithm is known.
Contribution. In this paper we develop a polynomial time algorithm for learning a subclass of
BNs exactly: sparse GBNs with equal noise variance. This problem has been considered by [3]
who proposed an exponential-time algorithm based on ℓ_0-penalized maximum likelihood estimation (MLE), and a heuristic greedy search method without any guarantees. Our algorithm involves estimating a p-dimensional inverse covariance matrix and solving 2(p − 1) at-most-k-dimensional
ordinary least squares problems, where p is the number of nodes and k is the maximum Markov blanket size of a variable. We show that O((k^4/ε^2) log(p/δ)) samples suffice for our algorithm to recover the true DAG structure and to approximate the parameters to at most ε additive error, with probability at least 1 − δ, for some δ > 0. The sample complexity of O(k^4 log p) is close to the information-theoretic limit of Ω(k log p) for learning sparse GBNs as obtained by [4]. The main assumption under which we obtain our theoretical guarantees is a condition that we refer to as restricted strong adjacency faithfulness (RSAF). We show that RSAF is a strictly weaker condition
than strong faithfulness, which methods based on independence testing require for their success. In
this identifiable regime, given enough samples, our method can recover the exact DAG structure of
any Gaussian distribution. However, existing exact algorithms like the PC algorithm [5] can fail to
recover the correct skeleton for distributions that are not faithful, and fail to orient a number of edges
that are not covered by the Meek orientation rules [6, 7]. Of independent interest is our analysis of
OLS regression under the random design setting, for which we obtain ℓ_1 error bounds.
Related Work. In this section, we first discuss some identifiability results for GBNs known in
the literature and then survey relevant algorithms for learning GBNs and Gaussian SEMs.
[3] proved identifiability of distributions drawn from a restricted SEM with additive noise, where in
the restricted SEM the functions are assumed to be non-linear and thrice continuously differentiable.
It is also known that SEMs with linear functions and strictly non-Gaussian noise are identifiable [8].
Indentifiability of the DAG structure for the linear function and Gaussian noise case was proved by
[9] when noise variables are assumed to have equal variance.
Algorithms for learning BNs typically fall into two distinct categories, namely: independence test
based methods and score based methods. This dichotomy also extends to the Gaussian case. Score
based methods assign a score to a candidate DAG structure based on how well it explains the observed
data, and then attempt to find the highest scoring structure. Popular examples for the Gaussian
distribution are the log-likelihood based BIC and AIC scores and the ℓ_0-penalized log-likelihood
score by [10]. However, given that the number of DAGs and sparse DAGs is exponential in the
number of variables [4, 11], exhaustively searching for the highest scoring DAG in the combinatorial
space of all DAGs, which is a feature of existing exact search based algorithms, is prohibitive for all
but a few number of variables. [12] propose a score-based method, based on concave penalization of
a reparameterized negative log-likelihood function, which can learn a GBN over 1000 variables in an
hour. However, the resulting optimization problem is neither convex (and therefore not guaranteed to find a globally optimal solution) nor solvable in polynomial time. In light of these shortcomings,
approximation algorithms have been proposed for learning BNs which can be used to learn GBNs in
conjunction with a suitable score function; notable methods are Greedy Equivalence Search (GES)
proposed by [13] and an LP-relaxation based method proposed by [14].
Among independence test based methods for learning GBNs, [15] extended the PC algorithm,
originally proposed by [5], to learn the Markov equivalence class of GBNs from observational data.
The computational complexity of the PC algorithm is bounded by O(p^k) with high probability, where
k is the maximum neighborhood size of a node, and is only efficient for learning very sparse DAGs.
For the non-linear Gaussian SEM case, [3] developed a two-stage algorithm called RESIT, which
works by first learning the causal ordering of the variables and then performing regressions to learn
the DAG structure. As we formally show in Appendix C.1, RESIT does not work for the linear
Gaussian case. Moreover, Peters et al. proved the correctness of RESIT only in the population
setting. Lastly, [16] developed an algorithm, which is similar in spirit to our algorithm, for efficiently
learning Poisson Bayesian networks. They exploit a property specific to the Poisson distribution
called overdispersion to learn the causal ordering of variables.
Finally, the max-min hill climbing (MMHC) algorithm by [17] is a state-of-the-art hybrid algorithm
for BNs that combines ideas from constraint-based and score-based learning. While MMHC works
well in practice, it is inherently a heuristic algorithm and is not guaranteed to recover the true DAG
structure even when it is uniquely identifiable.
2 Preliminaries
In this section, we formalize the problem of learning Gaussian Bayesian networks from observational
data. First, we introduce some notations and definitions.
We denote the set {1, . . . , p} by [p]. Vectors and matrices are denoted by lowercase and uppercase
bold faced letters respectively. Random variables (including random vectors) are denoted by italicized
uppercase letters. Let sr , sc ? [p] be any two non-empty index sets. Then for any matrix A 2 Rp?p ,
we denote the R|sr |?|sc | sub-matrix, formed by selecting the sr rows and sc columns of A by:
Asr ,sc . With a slight abuse of notation, we will allow the index sets sr and sc to be a single
index, e.g., i, and we will denote the index set of all row (or columns) by ?. Thus, A?,i and Ai,?
denote the i-th column and row of A respectively. For any vector v 2 Rp , we will denote its
support set by: S(v) = {i 2 [p]||vi | > 0}. Vector `p -norms are denoted by k?kp . For matrices,
k?kp denotes the induced (or operator) `p -norm and |?|p denotes the element-wise `p -norm, i.e.,
def P
|A|p = ( i,j |Ai,j |p )1/p . Finally, we denote the set [p] \ {i} by i.
Let G = (V, E) be a directed acyclic graph (DAG) where the vertex set V = [p] and E is the set of directed edges, where (i, j) ∈ E implies the edge i ← j. We denote by π_G(i) and ∂_G(i) the parent set and the set of children of the i-th node, respectively, in the graph G, and drop the subscript G when the intended graph is clear from context. A vertex i ∈ [p] is a terminal vertex in G if ∂_G(i) = ∅. For each i ∈ [p] we have a random variable X_i ∈ R, X = (X_1, . . . , X_p) is the p-dimensional vector of random variables, and x = (x_1, . . . , x_p) is a joint assignment to X. Without loss of generality, we assume that E[X_i] = 0, ∀i ∈ [p]. Every DAG G = (V, E) defines a set of topological orderings T_G over [p] that are compatible with the DAG G, i.e., T_G = {τ ∈ S_p | τ(j) < τ(i) if (i, j) ∈ E}, where S_p is the set of all possible permutations of [p].
A Gaussian Bayesian network (GBN) is a tuple (G, P(W, S)), where G = (V, E) is a DAG structure, W = {w_{i,j} ∈ R | (i, j) ∈ E ∧ |w_{i,j}| > 0} is the set of edge weights, S = {σ_i^2 ∈ R_+}_{i=1}^p is the set of noise variances, and P is a multivariate Gaussian distribution over X = (X_1, . . . , X_p) that is Markov with respect to the DAG G and is parameterized by W and S. In other words, P = N(x; 0, Σ) factorizes as follows:

    P(x; W, S) = ∏_{i=1}^p P_i(x_i; w_i, x_{π(i)}, σ_i^2),        (1)
    P_i(x_i; w_i, x_{π(i)}, σ_i^2) = N(x_i; w_i^T x_{π(i)}, σ_i^2),        (2)

where w_i ∈ R^{|π(i)|}, defined as w_i = (w_{i,j})_{j∈π(i)}, is the weight vector for the i-th node, 0 is a vector of zeros of appropriate dimension (in this case p), x_{π(i)} = {x_j | j ∈ π(i)}, Σ is the covariance matrix for X, and P_i is the conditional distribution of X_i given its parents, which is also Gaussian.
We will also extensively use an alternative, but equivalent, view of a GBN: the linear structural
equation model (SEM). Let B = (w_{i,j} 1[(i, j) ∈ E])_{(i,j)∈[p]×[p]} be the matrix of weights created from the set of edge weights W. A GBN (G, P(W, S)) corresponds to a SEM where each variable X_i can be written as follows:

    X_i = Σ_{j∈π(i)} B_{i,j} X_j + N_i,  ∀i ∈ [p],        (3)

with N_i ∼ N(0, σ_i^2) (for all i ∈ [p]) being independent noise variables and |B_{i,j}| > 0 for all j ∈ π(i). The joint distribution of X as given by the SEM corresponds to the distribution P in (1), and the graph associated with the SEM, where we have a directed edge (i, j) if j ∈ π(i), corresponds to the DAG G. Denoting N = (N_1, . . . , N_p) as the noise vector, (3) can be rewritten in vector form as X = BX + N.
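To make the generative view concrete, the following is a minimal sketch of sampling from the SEM above, assuming NumPy; the helper name and the toy chain are illustrative, not part of the paper.

```python
import numpy as np

def sample_gbn(B, sigma2, n, rng=None):
    """Draw n i.i.d. samples from the linear Gaussian SEM X = BX + N.

    B is the p x p weight matrix with B[i, j] != 0 iff j is a parent of i,
    and every noise variable shares the variance sigma2 (the equal-variance
    setting). Rearranging (3) gives (I - B) X = N, so X = (I - B)^{-1} N.
    """
    p = B.shape[0]
    rng = np.random.default_rng(rng)
    N = rng.normal(scale=np.sqrt(sigma2), size=(n, p))
    # Solve (I - B) X^T = N^T for all samples at once.
    return np.linalg.solve(np.eye(p) - B, N.T).T

# Toy example: the chain 1 -> 2 -> 3 with weights 0.5 and sigma^2 = 0.8.
B = np.zeros((3, 3))
B[1, 0] = 0.5
B[2, 1] = 0.5
X = sample_gbn(B, sigma2=0.8, n=1000, rng=0)
```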
Given a GBN (G, P(W, S)), with B being the weight matrix corresponding to W, we denote the effective influence between two nodes i, j ∈ [p] by

    w̃_{i,j} = B_{·,i}^T B_{·,j} − B_{i,j} − B_{j,i}.        (4)

The effective influence w̃_{i,j} between two nodes i and j is zero if: (a) i and j do not have an edge between them and do not have common children, or (b) i and j have an edge between them but the dot product between the weights to the children (B_{·,i}^T B_{·,j}) exactly equals the edge weight between i and j (B_{i,j} + B_{j,i}). The effective influence determines the Markov blanket of each node, i.e., ∀i ∈ [p], the Markov blanket is given as S_i = {j | j ∈ −i ∧ w̃_{i,j} ≠ 0}. (Our definition of Markov blanket differs from the commonly used graph-theoretic definition in that the latter includes the parents, children and all the co-parents of the children of node i in the Markov blanket S_i; both definitions are equivalent under faithfulness, but since we allow non-faithful distributions, our definition is more appropriate.) Furthermore, a node is conditionally independent of all other nodes not in its Markov blanket, i.e., Pr{X_i | X_{−i}} = Pr{X_i | X_{S_i}}.
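As a small illustration of (4), the sketch below computes effective influences and the resulting Markov blankets directly from the weight matrix, again assuming NumPy; the function name is ours. Note that, up to a 1/σ^2 factor, w̃ coincides with the off-diagonal entries of the precision matrix stated later in (5), which is what the comment exploits.

```python
import numpy as np

def markov_blankets(B, tol=1e-12):
    """Markov blankets from the effective influence of equation (4).

    w_tilde[i, j] = B[:, i] @ B[:, j] - B[i, j] - B[j, i]; node j belongs
    to the blanket of i whenever this is nonzero. Up to a 1/sigma^2
    factor, w_tilde equals the off-diagonal part of
    Omega = (I - B)^T (I - B) / sigma^2.
    """
    p = B.shape[0]
    w_tilde = B.T @ B - B - B.T
    return [{j for j in range(p) if j != i and abs(w_tilde[i, j]) > tol}
            for i in range(p)]
```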
Next, we present a few definitions that will be useful later.
Definition 1 (Causal Minimality [18]). A distribution P is causal minimal with respect to a DAG
structure G if it is not Markov with respect to a proper subgraph of G.
Definition 2 (Faithfulness [5]). Given a GBN (G, P), P is faithful to the DAG G = (V, E) if for any i, j ∈ V and any V0 ⊆ V \ {i, j}:

    i d-separated from j | V0 ⟺ corr(X_i, X_j | X_{V0}) = 0,

where corr(X_i, X_j | X_{V0}) is the partial correlation between X_i and X_j given X_{V0}.
Definition 3 (Strong Faithfulness [19]). Given a GBN (G, P), the multivariate Gaussian distribution P is λ-strongly faithful to the DAG G, for some λ ∈ (0, 1), if

    min{|corr(X_i, X_j | X_{V0})| : i is not d-separated from j | V0, ∀i, j ∈ [p] ∧ ∀V0 ⊆ V \ {i, j}} ≥ λ.
Strong faithfulness is a stronger version of the faithfulness assumption that requires that for all triples
(Xi , Xj , XV0 ) such that i is not d-separated from j given V0 , the partial correlation corr(Xi , Xj |XV0 )
is bounded away from 0. It is known that while the set of distributions P that are Markov to a DAG
G but not faithful to it have Lebesgue measure zero, the set of distributions P that are not strongly
faithful to G have nonzero Lebesgue measure, and in fact can be quite large [20].
The problem of learning a GBN from observational data corresponds to recovering the DAG structure
G and parameters W from a matrix X ∈ R^{n×p} of n i.i.d. samples drawn from P(W, S). In this paper
we consider the problem of learning GBNs over p variables where the size of the Markov blanket of a
node is at most k. This is in general not possible without making additional assumptions on the GBN
(G, P(W, S)) and the distribution P as we describe next.
Assumptions. Here, we enumerate our technical assumptions.
Assumption 1 (Causal Minimality). Let (G, P(W, S)) be a GBN; then ∀w_{i,j} ∈ W, |w_{i,j}| > 0.
The above assumption ensures that all edge weights are strictly nonzero, which results in each variable X_i being a non-constant function of its parents X_{π(i)}. Given Assumption 1, the distribution P is causal minimal with respect to G [3] and therefore identifiable under equal noise variances [9], i.e., σ_1 = . . . = σ_p = σ. Throughout the rest of the paper, we will denote such Bayesian networks by (G, P(W, σ^2)).
Assumption 2 (Restricted Strong Adjacency Faithfulness). Let (G, P(W, σ^2)) be a GBN with G = (V, E). For every τ ∈ T_G, consider the sequence of graphs G[m, τ] = (V[m, τ], E[m, τ]) indexed by (m, τ), where G[m, τ] is the induced subgraph of G over the first m vertices in the topological ordering τ, i.e., V[m, τ] = {i ∈ [p] | τ(i) ≤ m} and E[m, τ] = {(i, j) ∈ E | i ∈ V[m, τ] ∧ j ∈ V[m, τ]}. The multivariate Gaussian distribution P is restricted ε-strongly adjacency faithful to G, provided that:

    (i) min{|w_{i,j}| | (i, j) ∈ E} > 3ε,
    (ii) |w̃_{i,j}| > 3ε/γ(σ), ∀i ∈ V[m, τ] ∧ j ∈ S_i[m, τ] ∧ m ∈ [p] ∧ τ ∈ T_G,

where ε > 0 is a constant, w̃_{i,j} is the effective influence between i and j in the induced subgraph G[m, τ] as defined in (4), and S_i[m, τ] denotes the Markov blanket of node i in G[m, τ]. The constant γ(σ) = 1 − 2/(1 + 9|∂_{G[m,τ]}(i)| σ^2) if i is a non-terminal vertex in G[m, τ], where |∂_{G[m,τ]}(i)| is the number of children of i in G[m, τ], and γ(σ) = 1 if i is a terminal vertex.
Simply stated, the RSAF assumption requires that the absolute value of the edge weights is at least 3ε and that the absolute value of the effective influence between two nodes, whenever it is non-zero, is at least 3ε for terminal nodes and 3ε/γ(σ) for non-terminal nodes. Moreover, the above should hold not only for the original DAG, but also for each DAG obtained by sequentially removing terminal vertices. The constant ε is related to the statistical error and decays as O(k^2 √(log p / n)). Note that in the regime ε ∈ (0, 1/(3√|∂_{G[m,τ]}(i)|)), which happens for sufficiently large n, the condition on w̃_{i,j} is satisfied trivially.
[Figure 1: a five-node GBN with noise variance set to 1 (nonzero edge weights are ±1 and 0.25) that is RSAF, but is neither faithful, nor strongly faithful, nor adjacency faithful to the DAG structure. This GBN is not faithful because corr(X_4, X_5 | X_2, X_3) = 0 even though (2, 3) do not d-separate 4 and 5. Other violations of faithfulness include corr(X_1, X_4 | ∅) = 0 and corr(X_1, X_5 | ∅) = 0. Therefore, a CI-test based method will fail to recover the true structure. In Appendix B.1, we show that the PC algorithm fails to recover the structure of this GBN while our method recovers the structure exactly.]

As we will show later, Assumption 2 is equivalent to the following, for some constant ε_0:

    min{|corr(X_i, X_j | X_{V[m,τ]\{i,j}})| : i ∈ V[m, τ] ∧ j ∈ S_i[m, τ] ∧ m ∈ [p] ∧ τ ∈ T_G} ≥ ε_0.
At this point, it is worthwhile to compare our assumptions with those made by other methods for
learning GBNs. Methods based on conditional independence (CI) tests, e.g., the PC algorithm for
learning the equivalence class of GBNs developed by [15], require strong faithfulness. While strong
faithfulness requires that for a node pair (i, j) that are adjacent in the DAG, the partial correlation
corr(X_i, X_j | X_S) is bounded away from zero for all sets S ∈ {S ⊆ [p] \ {i, j}}, RSAF only requires non-zero partial correlations with respect to a subset of sets in {S ⊆ [p] \ {i, j}}. Thus, RSAF is
strictly weaker than strong faithfulness. The number of non-zero partial correlations needed by RSAF
is also strictly a subset of those needed by the faithfulness condition. Figure 1 shows a GBN which is
RSAF but neither faithful, nor strongly faithful, nor adjacency faithful (see [20] for a definition).
We conclude this section with one last remark. At first glance, it might appear that the assumption
of equal variance together with our assumptions implies a simple causal ordering of variables in
which the marginal variance of the variables increases strictly monotonically with the causal ordering.
However, this is not the case. For instance, in the GBN shown in Figure 1 the marginal variance of
the causally ordered nodes (1, 2, 3, 4, 5) is (1, 2, 2, 2, 2.125). We also perform extensive simulation
experiments to further investigate this case in Appendix B.6.
3 Results
We start by characterizing the covariance and precision matrix of a GBN (G, P(W, σ^2)). Let B be the weight matrix corresponding to the edge weights W; then from (3) it follows that the covariance and precision matrix are, respectively:

    Σ = σ^2 (I − B)^{-1} (I − B)^{-T},    Ω = σ^{-2} (I − B)^T (I − B),        (5)

where I is the p × p identity matrix.
Remark 1. The elements of the inverse covariance matrix are related to the partial correlations as follows: corr(X_i, X_j | X_{V\{i,j}}) = −Ω_{i,j}/√(Ω_{i,i} Ω_{j,j}). We have that |w̃_{i,j}| ≥ cε, for some constant c (Assumption 2), implies that |corr(X_i, X_j | X_{V\{i,j}})| ≥ cε/√(Ω_{i,i} Ω_{j,j}) > 0.
Next, we describe a key property of homoscedastic noise GBNs in the lemma below, which will be
the driving force behind our algorithm.
Lemma 1. Let (G, P(W, σ^2)) be a GBN, with Ω being the inverse covariance matrix over X, and let β_i, defined via E[X_i | (X_{−i} = x_{−i})] = β_i^T x_{−i}, be the i-th regression coefficient. Under Assumption 1, we have that

    i is a terminal vertex in G ⟺ β_{i,j} = −σ^2 Ω_{i,j}, ∀j ∈ −i.
Detailed proofs can be found in Appendix A in the supplementary material. Lemma 1 states that, in
the population setting, one can identify the terminal vertex, and therefore the causal ordering, just
by assuming causal minimality (Assumption 1). However, to identify terminal vertices from a finite
number of samples, one needs additional assumptions. We use Lemma 1 to develop our algorithm
for learning GBNs which, at a high level, works as follows. Given data X drawn from a GBN, we first estimate the inverse covariance matrix Ω̂. Then we perform a series of ordinary least squares (OLS) regressions to compute the estimators β̂_i, ∀i ∈ [p]. We then identify terminal vertices using
the property described in Lemma 1 and remove the corresponding variables (columns) from X. We
repeat the process of identifying and removing terminal vertices and obtain the causal ordering of
vertices. Then, we perform a final set of OLS regressions to learn the structure and parameters of the
DAG.
The two main operations performed by our algorithm are: (a) estimating the inverse covariance matrix, and (b) estimating the regression coefficients β_i. In what follows, we discuss these two steps in more detail and obtain theoretical guarantees for our algorithm.
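The ratio test implied by Lemma 1 is easy to state in code. The sketch below scores each active node by the largest ratio |β̂_{i,j}/Ω̂_{i,j}| over its estimated blanket and returns the minimizer, mirroring lines 8-13 of Algorithm 1; it assumes NumPy, and the tolerance and tie-breaking are our choices.

```python
import numpy as np

def terminal_vertex(Omega, Sigma_n, active, tol=1e-8):
    """Pick a terminal vertex among the active nodes (Lemma 1).

    For a terminal vertex i, beta_{i,j} = -sigma^2 * Omega_{i,j} on its
    whole blanket, so all ratios |beta_{i,j} / Omega_{i,j}| equal sigma^2;
    non-terminal nodes have a strictly larger maximum ratio.
    """
    scores = {}
    for i in active:
        blanket = [j for j in active if j != i and abs(Omega[i, j]) > tol]
        if not blanket:
            return i  # an isolated node is trivially terminal
        ix = np.ix_(blanket, blanket)
        beta = np.linalg.solve(Sigma_n[ix], Sigma_n[blanket, i])
        scores[i] = max(abs(b / Omega[i, j]) for b, j in zip(beta, blanket))
    return min(scores, key=scores.get)
```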
Inverse covariance matrix estimation. The first part of our algorithm requires an estimate Ω̂ of the true inverse covariance matrix Ω*. Due in part to its role in undirected graphical model selection,
the problem of inverse covariance matrix estimation has received significant attention over the years.
A popular approach for inverse covariance estimation, under high-dimensional settings, is the ℓ_1-penalized Gaussian MLE studied by [21-28], among others. While, technically, these algorithms can
be used in the first phase of our algorithm to estimate the inverse covariance matrix, in this paper we use the method called CLIME, developed by Cai et al. [29], since its theoretical guarantees do not require a quite restrictive edge-based mutual incoherence condition as in [24]. Further, CLIME is computationally attractive because it computes Ω̂ columnwise by solving p independent linear programs. Even though the CLIME estimator Ω̂ is not guaranteed to be positive-definite (it is positive-definite with high probability), it is suitable for our purpose since we use Ω̂ only for identifying terminal vertices. Next, we briefly describe the CLIME method for inverse covariance estimation and instantiate the theoretical results of [29] for our purpose.
The CLIME estimator Ω̂ is obtained as follows. First, we compute a potentially non-symmetric estimate Ω̃ = (ω̃_{i,j}) by solving the following:

    Ω̃ = argmin_{Ω∈R^{p×p}} |Ω|_1  s.t.  |Σ^n Ω − I|_∞ ≤ λ_n,        (6)

where λ_n > 0 is the regularization parameter and Σ^n = (1/n) X^T X is the empirical covariance matrix. Finally, the symmetric estimator Ω̂ = (ω̂_{i,j}) is obtained by selecting the smaller entry among ω̃_{i,j} and ω̃_{j,i}, i.e., ω̂_{i,j} = ω̃_{i,j} 1[|ω̃_{i,j}| < |ω̃_{j,i}|] + ω̃_{j,i} 1[|ω̃_{j,i}| ≤ |ω̃_{i,j}|]. It is easy to see that (6) can be decomposed into p linear programs as follows. Let Ω̃ = (ω̃_1, . . . , ω̃_p); then

    ω̃_i = argmin_{ω∈R^p} ‖ω‖_1  s.t.  |Σ^n ω − e_i|_∞ ≤ λ_n,        (7)

where e_i = (e_{i,j}) such that e_{i,j} = 1 for j = i and e_{i,j} = 0 otherwise.
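Each program (7) is a small linear program. Below is a sketch using SciPy's linprog with the standard |ω_j| ≤ t_j encoding, followed by the symmetrization rule above; the function names are ours and λ is left to the caller.

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(Sigma_n, i, lam):
    """Solve the i-th CLIME linear program (7):
        min ||w||_1  s.t.  max_j |(Sigma_n w - e_i)_j| <= lam,
    encoded as an LP over x = [w, t] with -t <= w <= t."""
    p = Sigma_n.shape[0]
    e = np.zeros(p); e[i] = 1.0
    c = np.concatenate([np.zeros(p), np.ones(p)])   # minimize sum(t)
    I = np.eye(p); Z = np.zeros((p, p))
    A_ub = np.block([[ I, -I],        #  w - t <= 0
                     [-I, -I],        # -w - t <= 0
                     [ Sigma_n, Z],   #  Sigma_n w <= e + lam
                     [-Sigma_n, Z]])  # -Sigma_n w <= lam - e
    b_ub = np.concatenate([np.zeros(2 * p), e + lam, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * p + [(0, None)] * p)
    return res.x[:p]

def clime(Sigma_n, lam):
    """Assemble the symmetrized CLIME estimate column by column."""
    p = Sigma_n.shape[0]
    W = np.column_stack([clime_column(Sigma_n, i, lam) for i in range(p)])
    # Keep the smaller-magnitude entry among (i, j) and (j, i).
    return np.where(np.abs(W) <= np.abs(W.T), W, W.T)
```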
The following lemma, which follows from the results of [29] and [24], bounds the maximum elementwise difference between Ω̂ and the true precision matrix Ω*.
Lemma 2. Let (G*, P(W*, σ^2)) be a GBN satisfying Assumption 1, with Σ* and Ω* being the "true" covariance and inverse covariance matrix over X, respectively. Given a data matrix X ∈ R^{n×p} of n i.i.d. samples drawn from P(W*, σ^2), compute Ω̂ by solving (6). Then, if the regularization parameter and number of samples satisfy:

    λ_n ≥ ‖Ω*‖_1 √((C_1/n) log(4p^2/δ)),    n ≥ ((16 ‖Ω*‖_1^4 C_1)/ε^2) log((4p^2)/δ),

with probability at least 1 − δ we have that |Ω* − Ω̂|_∞ ≤ ε/σ^2, where C_1 = 3200 max_i (Σ*_{i,i})^2 and δ ∈ (0, 1). Further, thresholding Ω̂ at the level 4‖Ω*‖_1 λ_n, we have S(Ω*) = S(Ω̂).
Remark 2. Note that in each column of the true precision matrix Ω*, at most k entries are non-zero, where k is the maximum Markov blanket size of a node in G. Therefore, the ℓ_1 induced (or operator) norm ‖Ω*‖_1 = O(k), and the sufficient number of samples required for the estimator Ω̂ to be within ε distance from Ω*, elementwise, with probability at least 1 − δ, is O((1/ε^2) k^4 log(p/δ)).
Estimating regression coefficients. Given a GBN (G, P(W, σ^2)) with the covariance and precision matrix over X being Σ and Ω respectively, the conditional distribution of X_i given the variables in its Markov blanket is X_i | (X_{S_i} = x) ∼ N((β_i)_{S_i}^T x, 1/Ω_{i,i}). Let β^i_{S_i} = (β_i)_{S_i}. This leads to the following generative model for X_{·,i}:

    X_{·,i} = (X_{·,S_i}) β^i_{S_i} + ε'_i,        (8)

where ε'_i ∼ N(0, 1/Ω_{i,i}) and X_{l,S_i} ∼ N(0, Σ_{S_i,S_i}) for all l ∈ [n]. Therefore, for all i ∈ [p], we obtain the estimator β̂^i_{S_i} of β^i_{S_i} by solving the following ordinary least squares (OLS) problem:

    β̂^i_{S_i} = argmin_{β∈R^{|S_i|}} (1/(2n)) ‖X_{·,i} − (X_{·,S_i}) β‖_2^2 = (Σ^n_{S_i,S_i})^{-1} Σ^n_{S_i,i}.        (9)
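In code, (9) is a single linear solve against the empirical covariance; a minimal sketch (assuming NumPy; the helper name is ours):

```python
import numpy as np

def blanket_ols(X, i, S):
    """OLS estimate of equation (9) for node i and blanket indices S:
    beta_hat = (Sigma^n_{S,S})^{-1} Sigma^n_{S,i}."""
    n = X.shape[0]
    Sigma_n = (X.T @ X) / n
    S = list(S)
    return np.linalg.solve(Sigma_n[np.ix_(S, S)], Sigma_n[S, i])
```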
The following lemma bounds the approximation error between the true regression coefficients and those obtained by solving the OLS problem. OLS regression has been previously analyzed by [30] under the random design setting. However, they obtain bounds on the prediction error, i.e., (β^i_{S_i} − β̂^i_{S_i})^T Σ* (β^i_{S_i} − β̂^i_{S_i}), while the following lemma bounds ‖β^i_{S_i} − β̂^i_{S_i}‖_1.
Lemma 3. Let (G*, P(W*, σ^2)) be a GBN with Σ* and Ω* being the true covariance and inverse covariance matrix over X. Let X ∈ R^{n×p} be the data matrix of n i.i.d. samples drawn from P(W*, σ^2). Let E[X_i | (X_{S_i} = x)] = x^T β^i_{S_i}, and let β̂^i_{S_i} be the OLS solution obtained by solving (9) for some i ∈ [p]. Then, assuming Σ* is non-singular, and if the number of samples satisfies

    n ≥ (c |S_i|^{3/2} (‖β^i_{S_i}‖_1 + 1/|S_i|) / (λ_min(Σ*_{S_i,S_i}) ε)) log(4|S_i|/δ),

we have that ‖β^i_{S_i} − β̂^i_{S_i}‖_1 ≤ ε with probability at least 1 − δ, for some δ ∈ (0, 1), with c being an absolute constant.
Our algorithm. Algorithm 1 presents our algorithm for learning GBNs. Throughout the algorithm we use as indices the true label of a node. We first estimate the inverse covariance matrix Ω̂ in line 5. In line 7 we estimate the Markov blanket of each node. Then, we estimate β̂_{i,j} for all i and j ∈ Ŝ_i, and compute the maximum per-node ratios r_i = max_j |β̂_{i,j}/Ω̂_{i,j}| in lines 8-11. We then identify as terminal vertex the node for which r_i is minimum and remove it from the collection of variables (lines 13 and 14). Each time a variable is removed, we perform a rank-1 update of the precision matrix (line 15) and also update the regression coefficients of the nodes in its Markov blanket (lines 16-20). We repeat this process of identifying and removing terminal vertices until the causal order has been completely determined. Finally, we compute the DAG structure and parameters by regressing each variable against variables that are in its Markov blanket which also precede it in the causal order (lines 23-29).
Algorithm 1 Gaussian Bayesian network structure learning algorithm.
Input: Data matrix X ∈ R^{n×p}.
Output: (Ĝ, Ŵ).
 1: B̂ ← 0 ∈ R^{p×p}.
 2: z ← ∅, r ← ∅.    ▷ z stores the causal order.
 3: V ← [p].    ▷ Remaining vertices.
 4: Σ^n ← (1/n) X^T X.
 5: Compute Ω̂ using the CLIME estimator.
 6: Ω̂′ ← Ω̂.
 7: Compute Ŝ_i = {j ∈ −i | |Ω̂_{i,j}| > 0}, ∀i ∈ [p].
 8: for i ∈ 1, . . . , p do
 9:     Compute β̂^i_{Ŝ_i} = (Σ^n_{Ŝ_i,Ŝ_i})^{−1} Σ^n_{Ŝ_i,i}.
10:     r_i ← max{|β̂_{i,j}/Ω̂_{i,j}| | j ∈ Ŝ_i}.
11: end for
12: for t ∈ 1, . . . , p − 1 do
13:     i ← argmin(r).    ▷ i is a terminal vertex.
14:     Append i to z; V ← V \ {i}; r_i ← +∞.
15:     Ω̂ ← Ω̂_{−i,−i} − (1/Ω̂_{i,i}) (Ω̂_{−i,i}) (Ω̂_{i,−i}).
16:     for j ∈ Ŝ_i do
17:         Ŝ_j ← {l ≠ j | |Ω̂_{j,l}| > 0}.
18:         Compute β̂^j_{Ŝ_j} = (Σ^n_{Ŝ_j,Ŝ_j})^{−1} Σ^n_{Ŝ_j,j}.
19:         r_j ← max{|β̂_{j,l}/Ω̂_{j,l}| | l ∈ Ŝ_j}.
20:     end for
21: end for
22: Append the remaining vertex in V to z.
23: for i ∈ 2, . . . , p do
24:     Ŝ ← {z_j | j ∈ [i − 1]} ∩ {j ∈ [p] | j ≠ z_i ∧ |Ω̂′_{z_i,j}| > 0}.
25:     Compute β̂ = (Σ^n_{Ŝ,Ŝ})^{−1} Σ^n_{Ŝ,z_i}.
26:     π̂(z_i) ← S(β̂).
27:     B̂_{z_i,π̂(z_i)} ← β̂_{π̂(z_i)}.
28: end for
29: Ê ← {(i, j) | B̂_{i,j} ≠ 0}, Ŵ ← {B̂_{i,j} | (i, j) ∈ Ê}, and Ĝ ← ([p], Ê).
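The following is an end-to-end NumPy sketch of Algorithm 1, assuming a precision estimate Omega_hat is supplied (e.g., by a CLIME routine such as the one sketched earlier). For simplicity it recomputes all ratios in each round rather than updating only the removed node's blanket (lines 16-20), which changes the constants but not the logic; all names are ours.

```python
import numpy as np

def learn_gbn(X, Omega_hat, tol=1e-8):
    """Sketch of Algorithm 1: returns the causal order z and weights B_hat."""
    n, p = X.shape
    Sigma_n = (X.T @ X) / n
    Omega0 = Omega_hat.copy()      # kept for the final regressions (line 6)
    idx = list(range(p))           # active nodes, by true label
    Om = Omega_hat.copy()          # precision over the active nodes
    z = []

    def ratio(i_loc):
        S = [j for j in range(len(idx))
             if j != i_loc and abs(Om[i_loc, j]) > tol]
        if not S:
            return 0.0
        g = [idx[j] for j in S]
        beta = np.linalg.solve(Sigma_n[np.ix_(g, g)], Sigma_n[g, idx[i_loc]])
        return max(abs(b / Om[i_loc, j]) for b, j in zip(beta, S))

    for _ in range(p - 1):
        i_loc = min(range(len(idx)), key=ratio)   # terminal vertex (line 13)
        z.append(idx[i_loc])
        keep = [j for j in range(len(idx)) if j != i_loc]
        # Rank-1 update of the precision matrix (line 15).
        Om = (Om - np.outer(Om[:, i_loc], Om[i_loc, :])
              / Om[i_loc, i_loc])[np.ix_(keep, keep)]
        idx = [idx[j] for j in keep]
    z.append(idx[0])

    # Final regressions against preceding blanket members (lines 23-29).
    B_hat = np.zeros((p, p))
    for t in range(1, p):
        i = z[t]
        S = [j for j in z[:t] if abs(Omega0[i, j]) > tol]
        if S:
            B_hat[i, S] = np.linalg.solve(Sigma_n[np.ix_(S, S)],
                                          Sigma_n[S, i])
    return z, B_hat
```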
In order to obtain our main result for learning GBNs we first derive the following technical lemma, which states that if the data comes from a GBN that satisfies Assumptions 1-2, then removing a terminal vertex results in a GBN that still satisfies Assumptions 1-2.

Lemma 4. Let (G, P(W, σ^2)) be a GBN satisfying Assumptions 1-2, and let Σ, Ω be the (non-singular) covariance and precision matrix respectively. Let X ∈ R^{n×p} be a data matrix of n i.i.d. samples drawn from P(W, σ^2), and let i be a terminal vertex in G. Denote by G′ = (V′, E′) and W′ = {w_{i,j} ∈ W | (i, j) ∈ E′} the graph and set of edge weights, respectively, obtained by removing the node i from G. Then, X_{j,−i} ∼ P(W′, σ^2) ∀j ∈ [n], and the GBN (G′, P(W′, σ^2)) satisfies Assumptions 1-2. Further, the inverse covariance matrix Ω′ and the covariance matrix Σ′ for the GBN (G′, P(W′, σ^2)) satisfy (respectively): Ω′ = Ω_{−i,−i} − (1/Ω_{i,i}) Ω_{−i,i} Ω_{i,−i} and Σ′ = Σ_{−i,−i}.
Theorem 1. Let Ĝ = ([p], Ê) and Ŵ be the DAG and edge weights, respectively, returned by Algorithm 1. Under the assumption that the data matrix X was drawn from a GBN (G*, P(W*, σ^2)) with G* = ([p], E*), Σ* and Ω* being the "true" covariance and inverse covariance matrix respectively, and satisfying Assumptions 1-2; if the regularization parameter is set according to Lemma 2, and if the number of samples satisfies the condition:

    n ≥ c ((σ^4 ‖Ω*‖_1^4 C_max)/ε^2 + (k^{3/2} (w̃_max + 1/k))/(C_min ε)) log((24 p^2 (p − 1))/δ),

where c is an absolute constant, w̃_max = max{|w̃_{i,j}| | i ∈ V[m, τ] ∧ j ∈ S_i[m, τ] ∧ m ∈ [p] ∧ τ ∈ T_G} with w̃_{i,j} being the effective influence between i and j (4), C_max = max_{i∈[p]} (Σ*_{i,i})^2, and C_min = min_{i∈[p]} λ_min(Σ*_{S_i,S_i}), then Ê ⊆ E* and ∀(i, j) ∈ Ê, |ŵ_{i,j} − w*_{i,j}| ≤ ε with probability at least 1 − δ, for some δ ∈ (0, 1) and ε > 0. Further, thresholding Ŵ at the level ε we get Ê = E*.
The CLIME estimator of the precision matrix can be computed in polynomial time and the OLS steps
take O(pk^3) time. Therefore our algorithm is polynomial time (please see Appendix C.2).
4 Experiments
In this section, we validate our theoretical findings through synthetic experiments. We use a class of Erdős–Rényi GBNs, with edge weights set to ±1/2 with probability 1/2, and noise variance σ^2 = 0.8. For each value of p ∈ {50, 100, 150, 200}, we sampled 30 random GBNs and estimated the probability Pr{G* = Ĝ} by computing the fraction of times the learned DAG structure Ĝ matched the true DAG structure G* exactly. The number of samples was set to C k^2 log p, where C was the control parameter, and k was the maximum Markov blanket size (please see Appendix B.2 for more details). Figure 2 shows the results of the structure and parameter recovery experiments. We can see that the log p scaling as prescribed by Theorem 1 holds in practice.
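A driver for this experiment, reusing sample_gbn, markov_blankets, clime and learn_gbn from the sketches above, could look as follows. Here edge_prob and lam are assumed placeholders (the exact ensemble and tuning are described in Appendix B.2), so the numbers it produces are only indicative.

```python
import numpy as np

def random_er_gbn(p, edge_prob, rng):
    """Erdos-Renyi DAG over a random order; nonzero weights are +/- 1/2."""
    order = rng.permutation(p)
    B = np.zeros((p, p))
    for a in range(p):
        for b in range(a):
            if rng.random() < edge_prob:
                B[order[a], order[b]] = rng.choice([-0.5, 0.5])
    return B

def recovery_rate(p, C, trials=30, edge_prob=0.1, sigma2=0.8, seed=0):
    """Fraction of trials in which the learned edge set matches exactly."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        B = random_er_gbn(p, edge_prob, rng)
        k = max(len(s) for s in markov_blankets(B))
        n = int(C * k**2 * np.log(p))
        X = sample_gbn(B, sigma2, n, rng=rng)
        Omega_hat = clime((X.T @ X) / n, lam=0.1)   # lam is a placeholder
        _, B_hat = learn_gbn(X, Omega_hat)
        hits += np.array_equal(B_hat != 0, B != 0)
    return hits / trials
```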
Our method outperforms various state-of-the-art methods like PC, GES and MMHC on this class
of Erdős–Rényi GBNs (Appendix B.3), works when the noise variables have unequal, but similar,
variance (Appendix B.4), and also works for high-dimensional gene expression data (Appendix B.5).
Concluding Remarks. There are several ways of extending our current work. While the algorithm
developed in the paper is specific to equal noise-variance case, we believe our theoretical analysis can
be extended to the non-identifiable case to show that our algorithm, under some suitable conditions,
can recover one of the Markov-equivalent DAGs. It would be also interesting to explore if some of
the ideas developed herein can be extended to binary or discrete Bayesian networks.
Figure 2: (Left) Probability of correct structure recovery vs. number of samples, where the latter is set to C k^2 log p with C being the control parameter and k being the maximum Markov blanket size. (Right) The maximum absolute difference between the true parameters and the learned parameters vs. number of samples.
References
[1] David Maxwell Chickering. Learning Bayesian networks is NP-complete. In Learning from data, pages 121-130. Springer, 1996.
[2] Sanjoy Dasgupta. Learning polytrees. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 134-141. Morgan Kaufmann Publishers Inc., 1999.
[3] Jonas Peters, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. Causal discovery with continuous additive noise models. Journal of Machine Learning Research, 15(June):2009-2053, 2014.
[4] Asish Ghoshal and Jean Honorio. Information-theoretic limits of Bayesian network structure learning. In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 767-775, Fort Lauderdale, FL, USA, 20-22 Apr 2017. PMLR.
[5] Peter Spirtes, Clark N. Glymour, and Richard Scheines. Causation, Prediction, and Search. MIT Press, 2000.
[6] Christopher Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 403-410. Morgan Kaufmann Publishers Inc., 1995.
[7] Christopher Meek. Strong completeness and faithfulness in Bayesian networks. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 411-418. Morgan Kaufmann Publishers Inc., 1995.
[8] Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, 2006.
[9] J. Peters and P. Bühlmann. Identifiability of Gaussian structural equation models with equal error variances. Biometrika, 101(1):219-228, 2014.
[10] Sara van de Geer and Peter Bühlmann. ℓ0-penalized maximum likelihood for sparse directed acyclic graphs. Annals of Statistics, 41(2):536-567, 2013.
[11] R. W. Robinson. Counting unlabeled acyclic digraphs. Combinatorial Mathematics V, 622:28-43, 1977.
[12] Bryon Aragam and Qing Zhou. Concave penalized estimation of sparse Gaussian Bayesian networks. Journal of Machine Learning Research, 16:2273-2328, 2015.
[13] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507-554, March 2003.
[14] Tommi S. Jaakkola, David Sontag, Amir Globerson, Marina Meila, and others. Learning Bayesian network structure using LP relaxations. In AISTATS, pages 358-365, 2010.
[15] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8:613-636, 2007.
[16] Gunwoong Park and Garvesh Raskutti. Learning large-scale Poisson DAG models based on overdispersion scoring. In Advances in Neural Information Processing Systems, pages 631-639, 2015.
[17] Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006.
[18] Jiji Zhang and Peter Spirtes. Detection of unfaithfulness and robust causal inference. Minds and Machines, 18(2):239-271, 2008.
[19] Jiji Zhang and Peter Spirtes. Strong faithfulness and uniform consistency in causal inference. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, pages 632-639. Morgan Kaufmann Publishers Inc., 2002.
[20] Caroline Uhler, Garvesh Raskutti, Peter Bühlmann, and Bin Yu. Geometry of the faithfulness assumption in causal inference. Annals of Statistics, 41(2):436-463, 2013.
[21] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.
[22] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9(Mar):485-516, 2008.
[23] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
[24] Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5(0):935-980, 2011.
[25] Cho-Jui Hsieh, Mátyás A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar, and Russell Poldrack. BIG & QUIC: Sparse inverse covariance estimation for a million variables. In Advances in Neural Information Processing Systems, volume 26, pages 3165-3173, 2013.
[26] Cho-Jui Hsieh, Arindam Banerjee, Inderjit S. Dhillon, and Pradeep K. Ravikumar. A divide-and-conquer method for sparse inverse covariance estimation. In Advances in Neural Information Processing Systems, pages 2330-2338, 2012.
[27] Benjamin Rolfs, Bala Rajaratnam, Dominique Guillot, Ian Wong, and Arian Maleki. Iterative thresholding algorithm for sparse inverse covariance estimation. In Advances in Neural Information Processing Systems, pages 1574-1582, 2012.
[28] Christopher C. Johnson, Ali Jalali, and Pradeep Ravikumar. High-dimensional sparse inverse covariance estimation using greedy methods. In AISTATS, volume 22, pages 574-582, 2012.
[29] Tony Cai, Weidong Liu, and Xi Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594-607, 2011.
[30] Daniel Hsu, Sham M. Kakade, and Tong Zhang. An analysis of random design linear regression. In Proc. COLT. Citeseer, 2011.
[31] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[32] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027 [cs, math], November 2010.
[33] Rahul Mazumder and Trevor Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. Journal of Machine Learning Research, 13(Mar):781-794, 2012.
[34] Y. Lu, Y. Yi, P. Liu, W. Wen, M. James, D. Wang, and M. You. Common human cancer genes discovered by integrated gene-expression analysis. Public Library of Science ONE, 2(11):e1149, 2007.
[35] E. Shubbar, A. Kovacs, S. Hajizadeh, T. Parris, S. Nemes, K. Gunnarsdottir, Z. Einbeigi, P. Karlsson, and K. Helou. Elevated cyclin B2 expression in invasive breast carcinoma is associated with unfavorable clinical outcome. BioMedCentral Cancer, 13(1), 2013.
6,882 | 7,225 | Gradient Episodic Memory for Continual Learning
David Lopez-Paz and Marc?Aurelio Ranzato
Facebook Artificial Intelligence Research
{dlp,ranzato}@fb.com
Abstract
One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better
understand this issue, we study the problem of continual learning, where the model
observes, once and one by one, examples concerning a sequence of tasks. First,
we propose a set of metrics to evaluate models learning over a continuum of data.
These metrics characterize models not only by their test accuracy, but also in terms
of their ability to transfer knowledge across tasks. Second, we propose a model
for continual learning, called Gradient Episodic Memory (GEM) that alleviates
forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our
experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the
strong performance of GEM when compared to the state-of-the-art.
1 Introduction
The starting point in supervised learning is to collect a training set D_tr = {(x_i, y_i)}_{i=1}^n, where each example (x_i, y_i) is composed by a feature vector x_i ∈ X and a target vector y_i ∈ Y. Most supervised learning methods assume that each example (x_i, y_i) is an identically and independently distributed (iid) sample from a fixed probability distribution P, which describes a single learning task. The goal of supervised learning is to construct a model f : X → Y, used to predict the target vectors y associated to unseen feature vectors x, where (x, y) ∼ P. To accomplish this, supervised learning methods often employ the Empirical Risk Minimization (ERM) principle [Vapnik, 1998], where f is found by minimizing

    (1/|D_tr|) Σ_{(x_i,y_i)∈D_tr} ℓ(f(x_i), y_i),

where ℓ : Y × Y → [0, ∞) is a loss function penalizing prediction errors. In practice, ERM often requires multiple passes over the training set.
ERM is a major simplification from what we deem as human learning. In stark contrast to learning
machines, learning humans observe data as an ordered sequence, seldom observe the same example
twice, they can only memorize a few pieces of data, and the sequence of examples concerns different
learning tasks. Therefore, the iid assumption, along with any hope of employing the ERM principle,
fall apart. In fact, straightforward applications of ERM lead to "catastrophic forgetting" [McCloskey
and Cohen, 1989]. That is, the learner forgets how to solve past tasks after it is exposed to new tasks.
This paper narrows the gap between ERM and the more human-like learning description above. In
particular, our learning machine will observe, example by example, the continuum of data
    (x_1, t_1, y_1), . . . , (x_i, t_i, y_i), . . . , (x_n, t_n, y_n),        (1)

where besides input and target vectors, the learner observes t_i ∈ T, a task descriptor identifying the task associated to the pair (x_i, y_i) ∼ P_{t_i}. Importantly, examples are not drawn iid from a fixed
probability distribution over triplets (x, t, y), since a whole sequence of examples from the current
task may be observed before switching to the next task. The goal of continual learning is to construct
a model f : X × T → Y able to predict the target y associated to a test pair (x, t), where (x, y) ∼ P_t. In
this setting, we face challenges unknown to ERM:
1. Non-iid input data: the continuum of data is not iid with respect to any fixed probability
distribution P (X, T, Y ) since, once tasks switch, a whole sequence of examples from the
new task may be observed.
2. Catastrophic forgetting: learning new tasks may hurt the performance of the learner at
previously solved tasks.
3. Transfer learning: when the tasks in the continuum are related, there exists an opportunity
for transfer learning. This would translate into faster learning of new tasks, as well as
performance improvements in old tasks.
The rest of this paper is organized as follows. In Section 2, we formalize the problem of continual
learning, and introduce a set of metrics to evaluate learners in this scenario. In Section 3, we
propose GEM, a model to learn over continuums of data that alleviates forgetting, while transferring
beneficial knowledge to past tasks. In Section 4, we compare the performance of GEM to the
state-of-the-art. Finally, we conclude by reviewing the related literature in Section 5, and offer some
directions for future research in Section 6. Our source code is available at https://github.com/facebookresearch/GradientEpisodicMemory.
2 A Framework for Continual Learning
We focus on the continuum of data of (1), where each triplet (x_i, t_i, y_i) is formed by a feature vector x_i ∈ X_{t_i}, a task descriptor t_i ∈ T, and a target vector y_i ∈ Y_{t_i}. For simplicity, we assume that the continuum is locally iid, that is, every triplet (x_i, t_i, y_i) satisfies (x_i, y_i) ∼_iid P_{t_i}(X, Y).
While observing the data (1) example by example, our goal is to learn a predictor f : X × T → Y,
which can be queried at any time to predict the target vector y associated to a test pair (x, t), where
(x, y) ? Pt . Such test pair can belong to a task that we have observed in the past, the current task, or
a task that we will experience (or not) in the future.
Task descriptors An important component in our framework is the collection of task descriptors
t_1, . . . , t_n ∈ T. In the simplest case, the task descriptors are integers t_i = i ∈ Z enumerating the
different tasks appearing in the continuum of data. More generally, task descriptors ti could be
structured objects, such as a paragraph of natural language explaining how to solve the i-th task. Rich
task descriptors offer an opportunity for zero-shot learning, since the relation between tasks could be
inferred using new task descriptors alone. Furthermore, task descriptors disambiguate similar learning
tasks. In particular, the same input xi could appear in two different tasks, but require different targets.
Task descriptors can reference the existence of multiple learning environments, or provide additional
(possibly hierarchical) contextual information about each of the examples. However, in this paper
we focus on alleviating catastrophic forgetting when learning from a continuum of data, and leave
zero-shot learning for future research.
Next, we discuss the training protocol and evaluation metrics for continual learning.
Training Protocol and Evaluation Metrics
Most of the literature about learning over a sequence of tasks [Rusu et al., 2016, Fernando et al.,
2017, Kirkpatrick et al., 2017, Rebuffi et al., 2017] describes a setting where i) the number of tasks is
small, ii) the number of examples per task is large, iii) the learner performs several passes over the
examples concerning each task, and iv) the only metric reported is the average performance across all
tasks. In contrast, we are interested in the ?more human-like? setting where i) the number of tasks is
large, ii) the number of training examples per task is small, iii) the learner observes the examples
concerning each task only once, and iv) we report metrics that measure both transfer and forgetting.
Therefore, at training time we provide the learner with only one example at the time (or a small
mini-batch), in the form of a triplet (xi , ti , yi ). The learner never experiences the same example
twice, and tasks are streamed in sequence. We do not need to impose any order on the tasks, since a
future task may coincide with a past task.
Besides monitoring its performance across tasks, it is also important to assess the ability of the learner
to transfer knowledge. More specifically, we would like to measure:
1. Backward transfer (BWT), which is the influence that learning a task t has on the performance on a previous task k ≺ t. On the one hand, there exists positive backward transfer
when learning about some task t increases the performance on some preceding task k. On
the other hand, there exists negative backward transfer when learning about some task t
decreases the performance on some preceding task k. Large negative backward transfer is
also known as (catastrophic) forgetting.
2. Forward transfer (FWT), which is the influence that learning a task t has on the performance
on a future task k ≻ t. In particular, positive forward transfer is possible when the model is able to perform "zero-shot" learning, perhaps by exploiting the structure available in the
task descriptors.
For a principled evaluation, we consider access to a test set for each of the T tasks. After the model
finishes learning about the task ti , we evaluate its test performance on all T tasks. By doing so, we
construct the matrix R ? RT ?T , where Ri,j is the test classification accuracy of the model on task tj
after observing the last sample from task ti . Letting ?b be the vector of test accuracies for each task at
random initialization, we define three metrics:
    Average Accuracy:  ACC = (1/T) Σ_{i=1}^T R_{T,i}        (2)

    Backward Transfer:  BWT = (1/(T − 1)) Σ_{i=1}^{T−1} (R_{T,i} − R_{i,i})        (3)

    Forward Transfer:  FWT = (1/(T − 1)) Σ_{i=2}^T (R_{i−1,i} − b̄_i)        (4)
The larger these metrics, the better the model. If two models have similar ACC, the most preferable
one is the one with larger BWT and FWT. Note that it is meaningless to discuss backward transfer
for the first task, or forward transfer for the last task.
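Computing the three metrics from R is a few lines; a sketch (assuming NumPy and 0-based indexing, so R[-1] is the row recorded after the last task):

```python
import numpy as np

def continual_metrics(R, b_bar):
    """Compute ACC, BWT and FWT from equations (2)-(4).

    R is the T x T matrix with R[i, j] the test accuracy on task j after
    training on task i; b_bar holds each task's test accuracy at random
    initialization.
    """
    T = R.shape[0]
    acc = R[-1].mean()
    bwt = (R[-1, :-1] - np.diag(R)[:-1]).mean()
    fwt = (R[np.arange(T - 1), np.arange(1, T)] - b_bar[1:]).mean()
    return acc, bwt, fwt
```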
For a fine-grained evaluation that accounts for learning speed, one can build a matrix R with more
rows than tasks, by evaluating more often. In the extreme case, the number of rows could equal the
number of continuum samples n. Then, the number Ri,j is the test accuracy on task tj after observing
the i-th example in the continuum. Plotting each column of R results into a learning curve.
3 Gradient of Episodic Memory (GEM)
In this section, we propose Gradient Episodic Memory (GEM), a model for continual learning, as introduced in Section 2. The main feature of GEM is an episodic memory M_t, which stores a subset of the observed examples from task t. For simplicity, we assume integer task descriptors, and use them to index the episodic memory. When using integer task descriptors, one cannot expect significant positive forward transfer (zero-shot learning). Instead, we focus on minimizing negative backward transfer (catastrophic forgetting) by the efficient use of episodic memory.

In practice, the learner has a total budget of M memory locations. If the number of total tasks T is known, we can allocate m = M/T memories for each task. Conversely, if the number of total tasks T is unknown, we can gradually reduce the value of m as we observe new tasks [Rebuffi et al., 2017]. For simplicity, we assume that the memory is populated with the last m examples from each task, although better memory update strategies could be employed (such as building a coreset per task). In the following, we consider predictors f_θ parameterized by θ ∈ ℝ^p, and define the loss at the memories from the k-th task as
memories from the k-th task as
X
1
`(f? , Mk ) =
`(f? (xi , k), yi ).
(5)
|Mk |
(xi ,k,yi )?Mk
Obviously, minimizing the loss at the current example together with (5) results in overfitting to the examples stored in M_k. As an alternative, we could keep the predictions at past tasks invariant by means of distillation [Rebuffi et al., 2017]. However, this would render positive backward transfer impossible. Instead, we will use the losses (5) as inequality constraints, avoiding their increase but allowing their decrease. In contrast to the state-of-the-art [Kirkpatrick et al., 2017, Rebuffi et al., 2017], our model therefore allows positive backward transfer.
More specifically, when observing the triplet (x, t, y), we solve the following problem:

    minimize_θ   ℓ(f_θ(x, t), y)
    subject to   ℓ(f_θ, M_k) ≤ ℓ(f_θ^{t−1}, M_k) for all k < t,    (6)

where f_θ^{t−1} is the predictor state at the end of learning of task t − 1.
In the following, we make two key observations to solve (6) efficiently. First, it is unnecessary to store old predictors f_θ^{t−1}, as long as we guarantee that the loss at previous tasks does not increase after each parameter update g. Second, assuming that the function is locally linear (as it happens around small optimization steps) and that the memory is representative of the examples from past tasks, we can diagnose increases in the loss of previous tasks by computing the angle between their loss gradient vector and the proposed update. Mathematically, we rephrase the constraints (6) as:
loss gradient vector and the proposed update. Mathematically, we rephrase the constraints (6) as:
?`(f? (x, t), y) ?`(f? , Mk )
,
? 0, for all k < t.
(7)
hg, gk i :=
??
??
If all the inequality constraints (7) are satisfied, then the proposed parameter update g is unlikely to increase the loss at previous tasks. On the other hand, if one or more of the inequality constraints (7) are violated, then there is at least one previous task that would experience an increase in loss after the parameter update. If violations occur, we propose to project the proposed gradient g to the closest gradient g̃ (in squared ℓ₂ norm) satisfying all the constraints (7). Therefore, we are interested in:
    minimize_{g̃}   (1/2) ‖g − g̃‖₂²
    subject to   ⟨g̃, g_k⟩ ≥ 0 for all k < t.    (8)
To solve (8) efficiently, recall the primal of a Quadratic Program (QP) with inequality constraints:

    minimize_z   (1/2) z⊤Cz + p⊤z
    subject to   Az ≥ b,    (9)
where C ∈ ℝ^{p×p}, p ∈ ℝ^p, A ∈ ℝ^{(t−1)×p}, and b ∈ ℝ^{t−1}. The dual problem of (9) is:

    minimize_{u,v}   (1/2) u⊤Cu − b⊤v
    subject to   A⊤v − Cu = p,
                 v ≥ 0.    (10)
If (u⋆, v⋆) is a solution to (10), then there is a solution z⋆ to (9) satisfying Cz⋆ = Cu⋆ [Dorn, 1960]. Quadratic programs are at the heart of support vector machines [Scholkopf and Smola, 2001].
With these notations in hand, we write the primal GEM QP (8) as:

    minimize_z   (1/2) z⊤z − g⊤z + (1/2) g⊤g
    subject to   Gz ≥ 0,

where G = −(g₁, …, g_{t−1}), and we discard the constant term g⊤g. This is a QP on p variables (the number of parameters of the neural network), which could be measured in the millions. However, we can pose the dual of the GEM QP as:
can pose the dual of the GEM QP as:
1 >
minimizev
v GG> v + g > G> v
2
subject to v ? 0,
(11)
since u = G⊤v + g and the term g⊤g is constant. This is a QP on t − 1 ≪ p variables, the number of observed tasks so far. Once we solve the dual problem (11) for v⋆, we can recover the projected gradient update as g̃ = G⊤v⋆ + g. In practice, we found that adding a small constant γ ≥ 0 to v⋆ biased the gradient projection towards updates that favoured beneficial backward transfer.
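The following is a sketch of this projection step (our illustration, not the authors' reference code): the past-task gradients are stacked as rows of a matrix, the dual (11) is solved with the quadprog package as one possible QP solver, and the bias towards backward transfer is realised as a small lower bound γ on the dual variables.

```python
import numpy as np
import quadprog

def project_gradient(g, past_grads, gamma=0.5, eps=1e-3):
    """Project g onto the cone (8) by solving the dual QP (11).
    past_grads is a (t-1) x p array whose rows are the gradients g_k."""
    G = np.asarray(past_grads, dtype=np.float64)
    t = G.shape[0]
    P = G @ G.T                             # dual quadratic term, (t-1) x (t-1)
    P = 0.5 * (P + P.T) + eps * np.eye(t)   # symmetrise and regularise
    q = -(G @ g)                            # dual linear term
    # quadprog solves: min_v 1/2 v'Pv - q'v  subject to  C'v >= b
    C = np.eye(t)
    b = gamma * np.ones(t)                  # v >= gamma biases the projection
    v = quadprog.solve_qp(P, q, C, b)[0]
    return G.T @ v + np.asarray(g, dtype=np.float64)   # recover the projected update
```

The constraint rows here are the raw gradients g_k, so that feasibility of the returned vector corresponds directly to the inequalities (7).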
Algorithm 1 summarizes the training and evaluation protocol of GEM over a continuum of data. The
pseudo-code includes the computation of the matrix R, containing the sufficient statistics to compute
the metrics ACC, FWT, and BWT described in Section 2.
A causal compression view. We can interpret GEM as a model that learns the subset of correlations common to a set of distributions (tasks). Furthermore, GEM can (and will in our MNIST experiments) be used to predict target vectors associated to previous or new tasks without making use of task descriptors. This is a desired feature in causal inference problems, since causal predictions are invariant across different environments [Peters et al., 2016], and therefore provide the most compressed representation of a set of distributions [Schölkopf et al., 2016].
Algorithm 1 Training a GEM over an ordered continuum of data

procedure EVALUATE(f_θ, Continuum)
    r ← 0 ∈ ℝ^T
    for k = 1, …, T do
        r_k ← 0
        for (x, y) in Continuum(k) do
            r_k ← r_k + accuracy(f_θ(x, k), y)
        end for
        r_k ← r_k / len(Continuum(k))
    end for
    return r
end procedure

procedure TRAIN(f_θ, Continuum_train, Continuum_test)
    M_t ← {} for all t = 1, …, T
    R ← 0 ∈ ℝ^{T×T}
    for t = 1, …, T do
        for (x, y) in Continuum_train(t) do
            M_t ← M_t ∪ {(x, y)}
            g ← ∇_θ ℓ(f_θ(x, t), y)
            g_k ← ∇_θ ℓ(f_θ, M_k) for all k < t
            g̃ ← PROJECT(g, g₁, …, g_{t−1}), see (11)
            θ ← θ − α g̃
        end for
        R_{t,:} ← EVALUATE(f_θ, Continuum_test)
    end for
    return f_θ, R
end procedure
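As an illustration, one update of the inner loop of Algorithm 1 could be implemented along the following lines (a sketch assuming CPU tensors; it reuses the hypothetical EpisodicMemory and project_gradient helpers from the sketches above):

```python
import torch

def gem_step(model, loss_fn, optimizer, memory, x, t, y):
    """One GEM update on the triplet (x, t, y), cf. Algorithm 1."""
    def flat_grad():
        # concatenate all parameter gradients into one vector g in R^p
        return torch.cat([p.grad.detach().reshape(-1)
                          for p in model.parameters()])

    optimizer.zero_grad()
    loss_fn(model(x, t), y).backward()
    g = flat_grad()                                 # current-task gradient
    past = []
    for k in range(t):                              # gradients g_k for k < t
        optimizer.zero_grad()
        memory.loss(model, loss_fn, k).backward()
        past.append(flat_grad())
    if past:
        G = torch.stack(past)
        if (G @ g < 0).any():                       # some constraint (7) violated
            g = torch.from_numpy(project_gradient(
                g.numpy(), G.numpy())).to(g.dtype)
    offset = 0                                      # write g~ back and take a step
    for p in model.parameters():
        n = p.numel()
        p.grad.copy_(g[offset:offset + n].view_as(p))
        offset += n
    optimizer.step()
    memory.add(x, t, y)
```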
4 Experiments
We perform a variety of experiments to assess the performance of GEM in continual learning.
4.1 Datasets
We consider the following datasets:

• MNIST Permutations [Kirkpatrick et al., 2017], a variant of the MNIST dataset of handwritten digits [LeCun et al., 1998], where each task is transformed by a fixed permutation of pixels. In this dataset, the input distribution for each task is unrelated.
• MNIST Rotations, a variant of MNIST where each task contains digits rotated by a fixed angle between 0 and 180 degrees.
• Incremental CIFAR100 [Rebuffi et al., 2017], a variant of the CIFAR object recognition dataset with 100 classes [Krizhevsky, 2009], where each task introduces a new set of classes. For a total number of T tasks, each new task concerns examples from a disjoint subset of 100/T classes. Here, the input distribution is similar for all tasks, but different tasks require different output distributions.
For all the datasets, we considered T = 20 tasks. On the MNIST datasets, each task has 1000
examples from 10 different classes. On the CIFAR100 dataset each task has 2500 examples from 5
different classes. The model observes the tasks in sequence, and each example once. The evaluation
for each task is performed on the test partition of each dataset.
4.2 Architectures
On the MNIST tasks, we use fully-connected neural networks with two hidden layers of 100 ReLU units. On the CIFAR100 tasks, we use a smaller version of ResNet18 [He et al., 2015], with three times fewer feature maps across all layers. Also on CIFAR100, the network has a final linear classifier per task. This is one simple way to leverage the task descriptor, in order to adapt the output distribution to the subset of classes for each task. We train all the networks and baselines using plain SGD on mini-batches of 10 samples. All hyper-parameters are optimized using a grid-search (see Appendix A), and the best results for each model are reported.
[Figure 1: Left: ACC, BWT, and FWT for all datasets and methods. Right: evolution of the test accuracy at the first task, as more tasks are learned. Panels: MNIST permutations, MNIST rotations, CIFAR-100.]
Table 1: CPU training time (s) of MNIST experiments for all methods.

task         | single | independent | multimodal | EWC | GEM
permutations | 11     | 11          | 14         | 179 | 77
rotations    | 11     | 16          | 13         | 169 | 135

4.3 Methods
We compare GEM to five alternatives:

1. a single predictor trained across all tasks.
2. one independent predictor per task. Each independent predictor has the same architecture as "single", but with T times fewer hidden units than "single". Each new independent predictor can be initialized at random, or be a clone of the last trained predictor (decided by grid-search).
3. a multimodal predictor, which has the same architecture as "single", but with a dedicated input layer per task (only for the MNIST datasets).
4. EWC [Kirkpatrick et al., 2017], where the loss is regularized to avoid catastrophic forgetting.
5. iCARL [Rebuffi et al., 2017], a class-incremental learner that classifies using a nearest-exemplar algorithm, and prevents catastrophic forgetting by using an episodic memory. iCARL requires the same input representation across tasks, so this method only applies to our experiment on CIFAR100.

GEM, iCARL and EWC have the same architecture as "single", plus an episodic memory.
Table 2: ACC as a function of the episodic memory size for GEM and iCARL, on CIFAR100.

memory size | 200   | 1,280 | 2,560 | 5,120
GEM         | 0.487 | 0.579 | 0.633 | 0.654
iCARL       | 0.436 | 0.494 | 0.500 | 0.508
Table 3: ACC/BWT on the MNIST Rotations dataset, when varying the number of epochs per task.

method                | 1 epoch    | 2 epochs   | 5 epochs
single, shuffled data | 0.83/-0.00 | 0.87/-0.00 | 0.89/-0.00
single                | 0.53/-0.08 | 0.49/-0.25 | 0.43/-0.40
independent           | 0.56/-0.00 | 0.64/-0.00 | 0.67/-0.00
multimodal            | 0.76/-0.02 | 0.72/-0.11 | 0.59/-0.28
EWC                   | 0.55/-0.19 | 0.59/-0.17 | 0.61/-0.11
GEM                   | 0.86/+0.05 | 0.88/+0.02 | 0.89/-0.02

4.4 Results
Figure 1 (left) summarizes the average accuracy (ACC, Equation 2), backward transfer (BWT, Equation 3) and forward transfer (FWT, Equation 4) for all datasets and methods. We provide the full evaluation matrices R in Appendix B. Overall, GEM performs similarly to or better than the multimodal model (which is very well suited to the MNIST tasks). GEM minimizes backward transfer, while exhibiting negligible or positive forward transfer.

Figure 1 (right) shows the evolution of the test accuracy of the first task throughout the continuum of data. GEM exhibits minimal forgetting, and positive backward transfer on CIFAR100.

Overall, GEM performs significantly better than other continual learning methods like EWC, while spending less computation (Table 1). GEM's efficiency comes from optimizing over a number of variables equal to the number of tasks (T = 20 in our experiments), instead of optimizing over a number of variables equal to the number of parameters (p = 1,109,240 for CIFAR100, for instance). GEM's bottleneck is the necessity of computing previous-task gradients at each learning iteration.
4.4.1 Importance of memory, number of passes, and order of tasks
Table 2 shows the final ACC in the CIFAR-100 experiment for both GEM and iCARL as a function of their episodic memory size. As seen in Table 2, the final ACC of GEM is an increasing function of the size of the episodic memory, eliminating the need to carefully tune this hyper-parameter. GEM outperforms iCARL for a wide range of memory sizes.
Table 3 illustrates the importance of memory as we do more than one pass through the data on the MNIST rotations experiment. Multiple training passes exacerbate the catastrophic forgetting problem. For instance, in the last column of Table 3 (except for the result in the first row), each model is shown examples of a task five times (in random order) before switching to the next task. Table 3 shows that memory-less methods (like "single" and "multimodal") exhibit higher negative BWT, leading to lower ACC. On the other hand, memory-based methods such as EWC and GEM lead to higher ACC as the number of passes through the data increases. However, GEM suffers less negative BWT than EWC, leading to a higher ACC.
Finally, to relate the performance of GEM to the best possible performance on the proposed datasets, the first row of Table 3 reports the ACC of "single" when trained with iid data from all tasks. This mimics usual multi-task learning, where each mini-batch contains examples taken from a random selection of tasks. By comparing the first and last rows of Table 3, we see that GEM matches the "oracle performance upper-bound" ACC provided by iid learning, and minimizes negative BWT.
5 Related work
Continual learning [Ring, 1994], also called lifelong learning [Thrun, 1994, Thrun and Pratt, 2012,
Thrun, 1998, 1996], considers learning through a sequence of tasks, where the learner has to retain
knowledge about past tasks and leverage that knowledge to quickly acquire new skills. This learning
setting led to implementations [Carlson et al., 2010, Ruvolo and Eaton, 2013, Ring, 1997], and
theoretical investigations [Baxter, 2000, Balcan et al., 2015, Pentina and Urner, 2016], although the
latter ones have been restricted to linear models. In this work, we revisited continual learning but
proposed to focus on the more realistic setting where examples are seen only once, memory is finite,
and the learner is also provided with (potentially structured) task descriptors. Within this framework,
we introduced a new set of metrics, a training and testing protocol, and a new algorithm, GEM, that
outperforms the current state-of-the-art in terms of limiting forgetting.
The use of task descriptors is similar in spirit to recent work in Reinforcement Learning [Sutton et al.,
2011, Schaul et al., 2015], where task or goal descriptors are also fed as input to the system. The
CommAI project [Mikolov et al., 2015, Baroni et al., 2017] shares our same motivations, but focuses
on highly structured task descriptors, such as strings of text. In contrast, we focus on the problem of
catastrophic forgetting [McCloskey and Cohen, 1989, French, 1999, Ratcliff, 1990, McClelland et al.,
1995, Goodfellow et al., 2013].
Several approaches have been proposed to avoid catastrophic forgetting. The simplest approach
in neural networks is to freeze early layers, while cloning and fine-tuning later layers on the new
task [Oquab et al., 2014] (which we considered in our ?independent? baseline). This relates to
methods that leverage a modular structure of the network with primitives that can be shared across
tasks [Rusu et al., 2016, Fernando et al., 2017, Aljundi et al., 2016, Denoyer and Gallinari, 2015,
Eigen et al., 2014]. Unfortunately, it has been very hard to scale up these methods to large numbers of modules and tasks, given the combinatorial number of compositions of modules.
Our approach is most similar to the regularization approaches that consider a single model, but modify its learning objective to prevent catastrophic forgetting. Within this class of methods, there are approaches that leverage "synaptic" memory [Kirkpatrick et al., 2017, Zenke et al., 2017], where learning rates are adjusted to minimize changes in parameters important for previous tasks. Other approaches are instead based on "episodic" memory [Jung et al., 2016, Li and Hoiem, 2016, Rannen Triki et al., 2017, Rebuffi et al., 2017], where examples from previous tasks are stored and replayed to maintain predictions invariant by means of distillation [Hinton et al., 2015]. GEM is related to these latter approaches but, unlike them, allows for positive backward transfer.
More generally, there are a variety of setups in the machine learning literature related to continual learning. Multitask learning [Caruana, 1998] considers the problem of maximizing the performance of a learning machine across a variety of tasks, but the setup assumes simultaneous access to all the tasks at once. Similarly, transfer learning [Pan and Yang, 2010] and domain adaptation [Ben-David et al., 2010] assume the simultaneous availability of multiple learning tasks, but focus on improving the performance on one of them in particular. Zero-shot learning [Lampert et al., 2009, Palatucci et al., 2009] and one-shot learning [Fei-Fei et al., 2003, Vinyals et al., 2016, Santoro et al., 2016, Bertinetto et al., 2016] aim at performing well on unseen tasks, but ignore the catastrophic forgetting of previously learned tasks. Curriculum learning considers learning a sequence of data [Bengio et al., 2009], or a sequence of tasks [Pentina et al., 2015], sorted by increasing difficulty.
6 Conclusion
We formalized the scenario of continual learning. First, we defined training and evaluation protocols
to assess the quality of models in terms of their accuracy, as well as their ability to transfer knowledge
forward and backward between tasks. Second, we introduced GEM, a simple model that leverages
an episodic memory to avoid forgetting and favor positive backward transfer. Our experiments
demonstrate the competitive performance of GEM against the state-of-the-art.
GEM has three points for improvement. First, GEM does not leverage structured task descriptors,
which may be exploited to obtain positive forward transfer (zero-shot learning). Second, we did not
investigate advanced memory management (such as building coresets of tasks [Lucic et al., 2017]).
Third, each GEM iteration requires one backward pass per task, increasing computation time. These
are exciting research directions to extend learning machines beyond ERM, and to continuums of data.
Acknowledgements
We are grateful to M. Baroni, L. Bottou, M. Nickel, Y. Ollivier and A. Szlam for their insight. We are
grateful to Martin Arjovsky for the QP interpretation of GEM.
References
R. Aljundi, P. Chakravarty, and T. Tuytelaars. Expert gate: Lifelong learning with a network of experts. CVPR,
2016.
M.-F. Balcan, A. Blum, and S. Vempala. Efficient representations for lifelong learning and autoencoding. COLT, 2015.
M. Baroni, A. Joulin, A. Jabri, G. Kruszewski, A. Lazaridou, K. Simonic, and T. Mikolov. CommAI: Evaluating
the first steps towards a useful general AI. arXiv, 2017.
J. Baxter. A model of inductive bias learning. JAIR, 2000.
S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman Vaughan. A theory of learning
from different domains. Machine Learning Journal, 2010.
Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. ICML, 2009.
L. Bertinetto, J. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. NIPS,
2016.
A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka, and T. M. Mitchell. Toward an architecture for
never-ending language learning. AAAI, 2010.
R. Caruana. Multitask learning. In Learning to learn. Springer, 1998.
L. Denoyer and P. Gallinari. Deep sequential neural networks. EWRL, 2015.
W. S. Dorn. Duality in quadratic programming. Quarterly of Applied Mathematics, 1960.
D. Eigen, I. Sutskever, and M. Ranzato. Learning factored representations in a deep mixture of experts. ICLR,
2014.
L. Fei-Fei, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories.
ICCV, 2003.
C. Fernando, D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra. PathNet:
Evolution channels gradient descent in super neural networks. arXiv, 2017.
R. M. French. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 1999.
I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An Empirical Investigation of Catastrophic
Forgetting in Gradient-Based Neural Networks. arXiv, 2013.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv, 2015.
G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv, 2015.
H. Jung, J. Ju, M. Jung, and J. Kim. Less-forgetting Learning in Deep Neural Networks. arXiv, 2016.
J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho,
A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. PNAS, 2017.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute
transfer. CVPR, 2009.
Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database of handwritten digits, 1998. URL http://yann.lecun.com/exdb/mnist/.
Z. Li and D. Hoiem. Learning without forgetting. ECCV, 2016.
M. Lucic, M. Faulkner, A. Krause, and D. Feldman. Training Mixture Models at Scale via Coresets. arXiv, 2017.
J. L. McClelland, B. L. McNaughton, and R. C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 1995.
M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning
problem. Psychology of learning and motivation, 1989.
T. Mikolov, A. Joulin, and M. Baroni. A roadmap towards machine intelligence. arXiv, 2015.
M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using
convolutional neural networks. CVPR, 2014.
M. Palatucci, D. A. Pomerleau, G. E. Hinton, and T. Mitchell. Zero-shot learning with semantic output codes.
NIPS, 2009.
S. J. Pan and Q. Yang. A survey on transfer learning. TKDE, 2010.
A. Pentina and R. Urner. Lifelong learning with weighted majority votes. NIPS, 2016.
A. Pentina, V. Sharmanska, and C. H. Lampert. Curriculum learning of multiple tasks. CVPR, 2015.
J. Peters, P. Bühlmann, and N. Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society, 2016.
A. Rannen Triki, R. Aljundi, M. B. Blaschko, and T. Tuytelaars. Encoder Based Lifelong Learning. arXiv, 2017.
R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and forgetting
functions. Psychological review, 1990.
S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation
learning. CVPR, 2017.
M. B. Ring. Continual Learning in Reinforcement Environments. PhD thesis, University of Texas at Austin,
Austin, Texas 78712, 1994.
M. B. Ring. CHILD: A first step towards continual learning. Machine Learning, 1997.
A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and
R. Hadsell. Progressive neural networks. NIPS, 2016.
P. Ruvolo and E. Eaton. ELLA: An Efficient Lifelong Learning Algorithm. ICML, 2013.
A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. One-shot learning with memory-augmented
neural networks. arXiv, 2016.
T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. ICML, 2015.
B. Scholkopf and A. J. Smola. Learning with kernels: support vector machines, regularization, optimization,
and beyond. MIT press, 2001.
B. Schölkopf, D. Janzing, and D. Lopez-Paz. Causal and statistical learning. In Learning Theory and Approximation. Oberwolfach Research Institute for Mathematics, 2016.
R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup. Horde: A scalable real-time
architecture for learning knowledge from unsupervised sensorimotor interaction. The 10th International
Conference on Autonomous Agents and Multiagent Systems, 2011.
S. Thrun. A lifelong learning perspective for mobile robot control. Proceedings of the IEEE/RSJ/GI Conference
on Intelligent Robots and Systems, 1994.
S. Thrun. Is learning the n-th thing any easier than learning the first? NIPS, 1996.
S. Thrun. Lifelong learning algorithms. In Learning to learn. Springer, 1998.
S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 2012.
V. Vapnik. Statistical learning theory. Wiley New York, 1998.
O. Vinyals, C. Blundell, T. Lillicrap, and D. Wierstra. Matching networks for one shot learning. NIPS, 2016.
F. Zenke, B. Poole, and S. Ganguli. Improved multitask learning through synaptic intelligence. arXiv, 2017.
Effective Parallelisation for Machine Learning
Michael Kamp
University of Bonn and Fraunhofer IAIS
[email protected]

Olana Missura
Google Inc.
[email protected]

Mario Boley
Max Planck Institute for Informatics and Saarland University
[email protected]

Thomas Gärtner
University of Nottingham
[email protected]
Abstract
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question [21] on efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation is in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.
1 Introduction
This paper contributes a novel and provably effective parallelisation scheme for a broad class of
learning algorithms. The significance of this result is to allow the confident application of machine
learning algorithms with growing amounts of data. In critical application scenarios, i.e., when errors
have almost prohibitively high cost, this confidence is essential [27, 36]. To this end, we consider the
parallelisation of an algorithm to be effective if it achieves the same confidence and error bounds as
the sequential execution of that algorithm in much shorter time. Indeed, our parallelisation scheme
can reduce the runtime of learning algorithms from polynomial to polylogarithmic. For that, it
consumes more data and is executed on a quasi-polynomial number of processing units.
To formally describe and analyse our parallelisation scheme, we consider the regularised risk minimisation setting. For a fixed but unknown joint probability distribution D over an input space X and an output space Y, a dataset D ⊆ X × Y of size N ∈ ℕ drawn iid from D, a convex hypothesis space F of functions f : X → Y, a loss function ℓ : F × X × Y → ℝ that is convex in F, and a convex regularisation term Ω : F → ℝ, regularised risk minimisation algorithms solve

    L(D) = argmin_{f ∈ F} Σ_{i=1}^{N} ℓ(f, X_i, Y_i) + Ω(f).    (1)
The aim of this approach is to obtain a hypothesis f ∈ F with small regret

    Q(f) = E[ℓ(f, X, Y)] − inf_{f′ ∈ F} E[ℓ(f′, X, Y)].    (2)
Regularised risk minimisation algorithms are typically designed to be consistent and efficient. They are consistent if there is a function N₀ : ℝ₊ × ℝ₊ → ℝ₊ such that for all ε > 0, δ ∈ (0, 1], and N ∈ ℕ with N ≥ N₀(ε, δ), the probability of generating an ε-bad hypothesis from training data D ∼ D^N is smaller than δ, i.e.,

    P(Q(L(D)) > ε) ≤ δ.    (3)

They are efficient if the sample complexity N₀(ε, δ) is polynomial in 1/ε and log 1/δ, and the runtime complexity T_L is polynomial in the sample complexity. This paper considers the parallelisation of such consistent and efficient learning algorithms, e.g., support vector machines, regularised least squares regression, and logistic regression. We additionally assume that data is abundant and that F can be parametrised in a fixed, finite-dimensional Euclidean space ℝ^d such that the convexity of the regularised risk minimisation problem (1) is preserved. In other cases, (non-linear) low-dimensional embeddings [2, 28] can preprocess the data to facilitate parallel learning with our scheme. With slight abuse of notation, we identify the hypothesis space with its parametrisation.
The main theoretical contribution of this paper is to show that algorithms satisfying the above conditions can be parallelised effectively. We consider a parallelisation to be effective if the (ε, δ)-guarantees (Equation 3) are achieved in time polylogarithmic in N₀(ε, δ). The cost for achieving this reduction in runtime comes in the form of an increased data size and through the number of processing units used. For the parallelisation scheme presented in this paper, we are able to bound this cost by a quasi-polynomial in 1/ε and log 1/δ. The main practical contribution of this paper is an effective parallelisation scheme that treats the underlying learning algorithm as a black-box, i.e., it can be parallelised without further mathematical derivations and without writing dedicated code.
Similar to averaging-based parallelisations [32, 44, 45], we apply the underlying learning algorithm
in parallel to random subsets of the data. Each resulting hypothesis is assigned to a leaf of an
aggregation tree which is then traversed bottom-up. Each inner node computes a new hypothesis
that is a Radon point [30] of its children's hypotheses. In contrast to aggregation by averaging, the
Radon point increases the confidence in the aggregate doubly-exponentially with the height of the
aggregation tree. We describe our parallelisation scheme, called the Radon machine, in detail in
Section 2. Comparing the Radon machine to a sequential application of the underlying learning
algorithm which achieves the same confidence, we are able to show a strong reduction in runtime
from polynomial to polylogarithmic in Section 3.
The empirical evaluation of the Radon machine in Section 4 confirms its potential in practical settings. Given the same data as the sequential application of the base learning algorithm, the Radon
machine achieves a substantial reduction of computation time in realistic application scenarios. In
particular, using 150 processors, the Radon machine is between 80 and around 700-times faster than
the base learner. Notice that superlinear speed-ups are possible for base learning algorithms with
superlinear runtime. Compared with parallel learning algorithms from the Spark machine learning
library, it achieves hypotheses of similar quality, while requiring only 15% to 85% of their runtime.
Parallel computing [18] and its limitations [14] have been studied for a long time in theoretical computer science [7]. Parallelising polynomial time algorithms ranges from being "embarrassingly" [26] easy to being believed to be impossible: for the class of decision problems that are the hardest in P, i.e., for P-complete problems, it is believed that there is no efficient parallel algorithm in the sense of Nick's Class (NC [9]); efficient parallel algorithms in this sense are those that can be executed in polylogarithmic time on a polynomial number of processing units. Our paper thus contributes to understanding the extent to which efficient parallelisation of polynomial time learning algorithms is possible. This connection and other approaches to parallel learning are discussed in Section 5.
Algorithm 1 Radon Machine
Input: learning algorithm L, dataset D ⊆ X × Y, Radon number r ∈ ℕ, and parameter h ∈ ℕ
Output: hypothesis f ∈ F
1: divide D into r^h iid subsets D_i of roughly equal size
2: run L in parallel to obtain f_i = L(D_i)
3: S ← {f_1, …, f_{r^h}}
4: for i = h − 1, …, 1 do
5:     partition S into iid subsets S_1, …, S_{r^i} of size r each
6:     calculate r(S_1), …, r(S_{r^i}) in parallel
7:     S ← {r(S_1), …, r(S_{r^i})}
8: end for
9: return r(S)
2 From Radon Points to Radon Machines
The Radon machine, as described in Algorithm 1, first executes the base learning algorithm on
random subsets of the data to quickly achieve weak hypotheses and then iteratively aggregates them
to stronger ones. Both the generation of weak hypotheses and the aggregation can be executed in
parallel. To aggregate hypotheses, we follow along the lines of the iterated Radon point algorithm
which was originally devised to approximate the centre point of a finite set of points [8]. The Radon
point [30] of a set of points is defined as follows:
Definition 1. A Radon partition of a set S ⊆ F is a pair A, B ⊆ S such that A ∩ B = ∅ but ⟨A⟩ ∩ ⟨B⟩ ≠ ∅, where ⟨·⟩ denotes the convex hull. The Radon number of a space F is the smallest r ∈ ℕ such that for all S ⊆ F with |S| ≥ r there is a Radon partition, or ∞ if no Radon partition exists. A Radon point of a set S with Radon partition A, B ⊆ S is any r ∈ ⟨A⟩ ∩ ⟨B⟩.
We now present the Radon machine in Algorithm 1, which is able to effectively parallelise consistent and efficient learning algorithms. Input to this parallelisation scheme is a learning algorithm L on a hypothesis space F, a dataset D ⊆ X × Y, the Radon number r ∈ ℕ of the hypothesis space F, and a parameter h ∈ ℕ. It divides the dataset into r^h subsets D_1, …, D_{r^h} (line 1) and runs the algorithm L on each subset in parallel (line 2). Then, the set of hypotheses (line 3) is iteratively aggregated to form better sets of hypotheses (lines 4-8). For that, the set is partitioned into subsets of size r (line 5) and the Radon point of each subset is calculated in parallel (line 6). The final step of each iteration is to replace the set of hypotheses by the set of Radon points (line 7).
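The following is a minimal sketch of Algorithm 1 in Python (our illustration, not the paper's Spark implementation): learn stands for any base learning algorithm returning a hypothesis as a parameter vector, radon_point is sketched below after Definition 1, and joblib is one possible way to run the base learners in parallel.

```python
import numpy as np
from joblib import Parallel, delayed

def radon_machine(learn, X, y, r, h, n_jobs=-1):
    """Train r**h base hypotheses on disjoint random subsets of (X, y)
    and aggregate them h times via Radon points (Algorithm 1)."""
    parts = np.array_split(np.random.permutation(len(X)), r ** h)  # line 1
    hypotheses = Parallel(n_jobs=n_jobs)(                          # line 2
        delayed(learn)(X[idx], y[idx]) for idx in parts)
    S = np.asarray(hypotheses)                                     # line 3
    for _ in range(h):                                             # lines 4-8
        S = np.asarray([radon_point(S[i:i + r])
                        for i in range(0, len(S), r)])
    return S[0]                                                    # line 9
```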
The scheme requires a hypothesis space with a valid notion of convexity and finite Radon number. While other notions of convexity are possible [16, 33], in this paper we restrict our consideration to Euclidean spaces with the usual notion of convexity. Radon's theorem [30] states that the Euclidean space ℝ^d has Radon number r = d + 2. Radon points can then be obtained by solving a system of linear equations of size r × r (to be fully self-contained, we state the system of linear equations explicitly in Appendix C.1); a minimal sketch follows.
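Concretely, for points p_1, …, p_r ∈ ℝ^d one can find a non-trivial λ with Σ_i λ_i p_i = 0 and Σ_i λ_i = 0 (a null-space vector of a (d + 1) × r linear system), split the points by the sign of λ_i, and normalise over the positive part; the resulting convex combination lies in both convex hulls of the Radon partition. A sketch (our helper, not the exact procedure of Appendix C.1):

```python
import numpy as np

def radon_point(points):
    """Radon point of r >= d + 2 points in R^d, given as rows of `points`."""
    P = np.asarray(points, dtype=float)
    r = P.shape[0]
    # solve [P^T; 1^T] lam = 0 for a non-trivial lam (null-space vector)
    A = np.vstack([P.T, np.ones(r)])
    lam = np.linalg.svd(A)[2][-1]       # right singular vector of the smallest singular value
    pos = lam > 0
    # convex combination of the positively weighted points lies in both hulls
    return lam[pos] @ P[pos] / lam[pos].sum()
```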
The next proposition gives a guarantee on the quality of Radon points:

Proposition 2. Given a probability measure P over a hypothesis space F with finite Radon number r, let F denote a random variable with distribution P. Furthermore, let r be the random variable obtained by computing the Radon point of r random points drawn according to P^r. Then it holds for the expected regret Q and all ε ∈ ℝ that

    P(Q(r) > ε) ≤ (r · P(Q(F) > ε))².
A direct consequence of this proposition is a bound on the probability that the output of the Radon machine with parameter h is bad:

Theorem 3. Given a probability measure P over a hypothesis space F with finite Radon number r, let F denote a random variable with distribution P. Furthermore, let r_h be the random variable representing the Radon point obtained after h iterations with base hypotheses drawn according to P. Then for any convex function Q : F → ℝ and all ε ∈ ℝ it holds that

    P(Q(r_h) > ε) ≤ (r · P(Q(F) > ε))^{2^h}.
The proofs of Proposition 2 and Theorem 3 are provided in Section 7. Note that this proof also shows the robustness of the Radon point compared to the average: if only one of r points is ε-bad, the Radon point is still ε-good, while the average may or may not be; indeed, in a linear space with any set of ε-good hypotheses and any ε′ ≥ ε, we can always find a single ε′-bad hypothesis such that the average of all these hypotheses is ε′-bad. For the Radon machine with parameter h, Theorem 3 shows that the probability of obtaining an ε-bad hypothesis is doubly exponentially reduced: with a bound δ on this probability for the base learning algorithm, the bound Δ for the Radon machine is

    Δ = (rδ)^{2^h}.    (4)
In the next section we will use this relation between δ and Δ to compare the Radon machine to a sequential application of the base learning algorithm, where both achieve the same (ε, Δ)-guarantee.
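As a purely illustrative example (our numbers): in ℝ^18 the Radon number is r = 20; if each base learner produces an ε-bad hypothesis with probability at most δ = 0.02, then rδ = 0.4, and h = 3 aggregation rounds already yield Δ = (rδ)^{2^3} = 0.4^8 ≈ 6.6 · 10^{−4}.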
3 Sample and Runtime Complexity
In this section we first derive the sample and runtime complexity of the Radon machine R from the sample and runtime complexity of the base learning algorithm L. We then relate the runtime complexity of the Radon machine to a sequential application of the base learning algorithm when both achieve the same (ε, Δ)-guarantee. For that, we assume that the base learning algorithm is consistent and efficient with a sample complexity of the form N₀^L(ε, δ) = (α_ε + β_ε ld 1/δ)^k for some α_ε, β_ε ∈ ℝ and k ∈ ℕ.¹ We assume for the base learning algorithm that δ ≤ 1/(2r).
The Radon machine creates r^h base hypotheses and, with Δ as in Equation 4, has sample complexity

    N₀^R(ε, Δ) = r^h · N₀^L(ε, δ) = r^h · (α_ε + β_ε ld 1/δ)^k.    (5)
Theorem 3 then shows that the Radon machine with base learning algorithm L is consistent: with N ≥ N₀^R(ε, Δ) samples it achieves an (ε, Δ)-guarantee. To achieve the same guarantee, the application of L itself, sequentially, would require M ≥ N₀^L(ε, Δ) samples, where

    N₀^L(ε, Δ) = N₀^L(ε, (rδ)^{2^h}) = (α_ε + 2^h · β_ε ld 1/(rδ))^k.    (6)
For base learning algorithms L with runtime T_L(n) polynomial in the data size n ∈ ℕ, i.e., T_L(n) ∈ O(n^γ) with γ ∈ ℕ, we now determine the runtime T_{R,h}(N) of the Radon machine with h iterations and c = r^h processing units on N ∈ ℕ samples. In this case all base learning algorithms can be executed in parallel. In practical applications, fewer physical processors can be used to simulate r^h processing units; we discuss this case in Section 5.
The runtime of the Radon machine can be decomposed into the runtime of the base learning algorithm and the runtime for the aggregation. The base learning algorithm requires n ≤ N₀^R(ε, Δ)/r^h samples and can be executed on r^h processors in parallel in time T_L(n). The Radon point in each of the h iterations can then be calculated in parallel in time r³ (see Appendix C.1). Thus, the runtime of the Radon machine with N = r^h · n samples is

    T_{R,h}(N) = T_L(n) + h·r³.    (7)
In contrast, the runtime of the base learning algorithm for achieving the same guarantee is T_L(M) with M ≥ N₀^L(ε, Δ). Ignoring logarithmic and constant terms, N₀^L(ε, Δ) behaves as 2^h · N₀^L(ε, δ). To obtain polylogarithmic runtime of R compared to T_L(M), we choose the parameter h ≈ ld M − ld ld M such that n ≈ M/2^h = ld M. Thus, the runtime of the Radon machine is in O(ld^γ M + r³ ld M). This result is formally summarised in Theorem 4.
Theorem 4. The Radon machine with a consistent and efficient regularised risk minimisation algorithm on a hypothesis space with finite Radon number has polylogarithmic runtime on quasi-polynomially many processing units if the Radon number is upper bounded by a function polylogarithmic in the sample complexity of the efficient regularised risk minimisation algorithm.
The theorem is proven in Appendix A.1 and relates to Nick's Class [1]: a decision problem can be solved efficiently in parallel in the sense of Nick's Class if it can be decided by an algorithm in polylogarithmic time on polynomially many processors (assuming, e.g., the PRAM model). For the class of decision problems that are the hardest in P, i.e., for P-complete problems, it is believed that there is no efficient parallel algorithm for solving them in this sense. Theorem 4 provides a step towards finding efficient parallelisations of regularised risk minimisers and towards answering the open question: is consistent regularised risk minimisation possible in polylogarithmic time on polynomially many processors? A similar question, for the case of learning half-spaces, has been coined a fundamental open problem by Long and Servedio [21], who gave an algorithm which runs on polynomially many processors in time that depends polylogarithmically on the sample size but is inversely proportional to a parameter of the learning problem. While Nick's Class as a notion of efficiency has been criticised, e.g., by Kruskal et al. [17], it is the only notion of efficiency that forms a proper complexity class, in the sense of Blum [4]. Additionally, Kruskal et al. [17] suggested to also consider the inefficiency of simulating the parallel algorithm on a single processing unit. We consider this in Appendix A.2, where we also discuss the speed-up [17] using c processing units.
¹We derive α_ε, β_ε for hypothesis spaces with finite VC [40] and Rademacher [3] complexity in Appendix C.2.
4 Empirical Evaluation
This empirical study compares the Radon machine to state-of-the-art parallel machine learning algorithms from the Spark machine learning library [25], as well as to the natural baseline of averaging hypotheses instead of calculating their Radon point (denoted averaging-at-the-end). In this study, we use base learning algorithms from WEKA [43] and scikit-learn [29]. We compare the Radon machine to the base learning algorithms on moderately sized datasets, due to scalability limitations of the base learners, and reserve larger datasets for the comparison with parallel learners. The experiments are executed on a Spark cluster (5 worker nodes, 25 processors per node).² In this study, we apply the Radon machine with parameter h = 1 and the maximal parameter h (denoted h = max) such that each instance of the base learning algorithm is executed on a subset of size at least 100. Averaging-at-the-end uses the same parameter h and executes the base learning algorithm on r^h subsets, i.e., the same number as the Radon machine with that parameter.
What is the speed-up of our scheme in practice? In Figure 1(a), we compare the Radon machine to its base learners on moderately sized datasets (details on the datasets are provided in Appendix B). There, the Radon machine is between 80 and around 700-times faster than the base learner, using 150 processors. The speed-up is detailed in Figure 2. On the SUSY dataset (with 5 000 000 instances and 18 features), the Radon machine on 150 processors with h = 3 is 721 times faster than its base learning algorithms. At the same time, their practical performances, measured by the area under the ROC curve (AUC) on an independent test dataset, are comparable.

²The source code implementation in Spark can be found in the bitbucket repository https://bitbucket.org/Michael_Kamp/radonmachine.
Figure 1: (a) Runtime (log-scale) and AUC of base learners and their parallelisation using the Radon machine (PRM) for 6 datasets with N ∈ [488 565, 5 000 000] and d ∈ [3, 18]. Each point represents the average runtime (upper part) and AUC (lower part) over 10 folds of a learner (or its parallelisation) on one dataset. (b) Runtime and AUC of the Radon machine compared to the averaging-at-the-end baseline (Avg) on 5 datasets with N ∈ [5 000 000, 32 000 000] and d ∈ [18, 2 331]. (c) Runtime and AUC of several Spark machine learning library algorithms and the Radon machine using base learners that are comparable to the Spark algorithms, on the same datasets as in Figure 1(b).
Figure 2: Speed-up (log-scale) of the Radon machine over its base learners per dataset, from the same experiment as in Figure 1(a).

Figure 3: Dependence of the runtime on the dataset size for the Radon machine compared to its base learners.
How does the scheme compare to averaging-at-the-end? In Figure 1(b) we compare the runtime and AUC of the parallelisation scheme against the Avg baseline. Since averaging is less computationally expensive than calculating the Radon point, the runtimes of the Avg baselines are slightly lower than those of the Radon machine. However, compared to the computational cost of executing the base learner, this advantage becomes negligible. In terms of AUC, the Radon machine outperforms the averaging baseline on all datasets by at least 10%.

Figure 4: Representation of the results in Figures 1(b) and 1(c) in terms of the trade-off between runtime and AUC for the Radon machine (PRM) and averaging-at-the-end (Avg), both with parameter h = max, and parallel machine learning algorithms in Spark.
How does our scheme compare to state-of-the-art Spark machine learning algorithms? We compare the Radon machine to various Spark machine learning algorithms on 5 large datasets. The results in Figure 1(c) indicate that the proposed parallelisation scheme with h = max has a significantly smaller runtime than the Spark algorithms on all datasets. On the SUSY and HIGGS datasets, the Radon machine is one order of magnitude faster than the Spark implementations; here the comparatively small number of features allows for a high level of parallelism. On the CASP9 dataset, the Radon machine is 15% faster than the fastest Spark algorithm. The performance in terms of AUC of the Radon machine is similar to the Spark algorithms. In particular, when using WekaLogReg with h = max, the Radon machine outperforms the Spark algorithms in terms of AUC and runtime on the datasets SUSY, wikidata, and CASP9. Details are given in Appendix B. A summarizing comparison of the parallel approaches in terms of their trade-off between runtime and predictive performance is depicted in Figure 4. Here, results are shown for the Radon machine and averaging-at-the-end with parameter h = max and for the two Spark algorithms most similar to the base learning algorithms. Note that it is unclear what caused the consistently weak performance of all algorithms on wikidata. Nonetheless, the results show that on all datasets the Radon machine has comparable predictive performance to the Spark algorithms and substantially higher predictive performance than averaging-at-the-end. At the same time, the Radon machine has a runtime comparable to averaging-at-the-end on all datasets; both are substantially faster than the Spark algorithms.
How does the runtime depend on the dataset size in a real-world system? In Figure 3 we compare the runtimes of all base learning algorithms per dataset size to those of the Radon machine. Results indicate that, while the runtimes of the base learning algorithms depend on the dataset size with an average exponent of 1.57, the runtime of the Radon machine depends on the dataset size with an exponent of only 1.17. This is plausible because with enough processors the generation of weak hypotheses can be done completely in parallel. Moreover, the time for aggregating the hypotheses does not depend on the number of instances in the dataset, but only on the number of iterations and the dimension of the hypothesis space.
How generally applicable is the scheme? As an indication of its general applicability in practice, we apply the scheme to a scikit-learn implementation of regularised least squares regression [29]. On the dataset YearPredictionMSD, regularised least squares regression achieves an RMSE of 12.57, whereas the Radon machine achieved an RMSE of 13.64. At the same time, the Radon machine is 197-times faster. We also compare the Radon machine on a multi-class prediction problem using conditional maximum entropy models. We use the implementation described in Mcdonald et al. [23], who also propose to use averaging-at-the-end for distributed training. We compare the Radon machine to averaging-at-the-end with conditional maximum entropy models on two large multi-class datasets (drift and spoken-arabic-digit). On average, our scheme performs 4% better with only 0.2% longer runtime. The minimal difference in runtime can be explained, similar to the results in Figure 1(b), by the smaller complexity of calculating the average instead of the Radon point.
5 Discussion and Related Work
In this paper we provided a step towards answering an open problem: Is parallel machine learning possible in polylogarithmic time using a polynomial number of processors only? This question has been posed for half-spaces by Long and Servedio [21] and called "a fundamental open problem about the abilities and limitations of efficient parallel learning algorithms". It relates machine learning to Nick's Class of parallelisable decision problems and its variants [14]. Early theoretical treatments of parallel learning with respect to NC considered probably approximately correct (PAC) [5, 38] concept learning. Vitter and Lin [41] introduced the notion of NC-learnable for concept classes for which there is an algorithm that outputs a probably approximately correct hypothesis in polylogarithmic time using a polynomial number of processors. In this setting, they proved positive and negative learnability results for a number of concept classes that were previously known to be PAC-learnable in polynomial time. More recently, the special case of learning half-spaces in parallel was considered by Long and Servedio [21], who gave an algorithm for this case that runs on polynomially many processors in time that depends polylogarithmically on the size of the instances but is inversely proportional to a parameter of the learning problem. Our paper complements these theoretical treatments of parallel machine learning and provides a provably effective parallelisation scheme for a broad class of regularised risk minimisation algorithms.
Some parallelisation schemes also train learning algorithms on small chunks of data and average the found hypotheses. While this approach has advantages [13, 32], current error bounds do not allow a derivation of polylogarithmic runtime [20, 35, 44], and it has been doubted to have any benefit over learning on a single chunk [34]. Another popular class of parallel learning algorithms is based on stochastic gradient descent, targeting expected risk minimisation directly [34, and references therein]. The best algorithm known so far in this class [34] is the distributed mini-batch algorithm [10]. This algorithm still runs for a number of rounds inversely proportional to the desired optimisation error, hence not in polylogarithmic time. A more traditional approach is to minimise the empirical risk, i.e., an empirical sample-based approximation of the expected risk, using any deterministic or randomised optimisation algorithm. This approach relies on generalisation guarantees relating the expected and empirical risk minimisation, as well as a guarantee on the optimisation error introduced by the optimisation algorithm. The approach is readily parallelisable by employing available parallel optimisation algorithms [e.g., 6]. It is worth noting that these algorithms solve a harder-than-necessary optimisation problem and often come with prohibitively high communication cost in distributed settings [34]. Recent results improve over these [22] but cannot achieve polylogarithmic time, as the number of iterations depends linearly on the number of processors.
In the following, we want to discuss properties and limitations of the proposed parallelisation
scheme. To that end, we address potential questions about the Radon machine.
In the experiments we considered datasets where the number of dimensions is much smaller than the number of instances. What about high-dimensional models? The basic version of the parallelisation scheme presented in this paper cannot be applied directly to cases in which the size of the dataset is not at least a multiple of the Radon number of the hypothesis space. For various types of data, such as text, this might cause concerns. However, random projections [15] or low-rank approximations [2, 28] can alleviate this problem and are already frequently employed in machine learning. An alternative might be to combine our parallelisation scheme with block coordinate descent [37]. In this case, the scheme can be applied iteratively to subsets of the features.
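A minimal sketch of the random-projection route is given below; the data and the choice of a logistic-regression base learner are illustrative stand-ins, not the paper's setup.

```python
# A hedged sketch of the random-projection route [15]: project high-dimensional
# data down so that the Radon number r = d' + 2 of the reduced linear hypothesis
# space is small relative to the number of instances per chunk.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(4000, 2000)                             # hypothetical high-dimensional data
y = (X[:, 0] + 0.1 * rng.randn(4000) > 0).astype(int)

projector = GaussianRandomProjection(n_components=48, random_state=0)
X_low = projector.fit_transform(X)                    # Radon number drops from 2002 to 50

# each processor would train the base learner on one chunk of (X_low, y);
# a single chunk is shown here
base = LogisticRegression().fit(X_low[:500], y[:500])
```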
In the experiments we considered only linear models. What about non-linear models? Learning non-linear models causes similar problems to learning high-dimensional ones. In non-parametric methods like kernel methods, for instance, the dimensionality of the optimisation problem is equal to the number of instances, thus prohibiting the application of our parallelisation scheme. However, similar low-rank approximation techniques as described above have been applied with non-linear kernels [11]. Alternatively, novel methods for speeding up the learning process for non-linear models rely on explicitly constructing an embedding in which a linear model can be learned [31]. Using explicitly constructed feature spaces, Radon machines can be applied directly to non-linear models.
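The sketch below illustrates the explicit-embedding route with random Fourier features [31]; the dataset and hyperparameters are illustrative assumptions, and only the linear base learner (on which the Radon machine would operate) is shown.

```python
# A minimal sketch of the explicit-embedding route [31]: approximate an RBF
# kernel by random Fourier features, then learn a linear model in the
# constructed feature space.
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

embedding = RBFSampler(gamma=1.0, n_components=200, random_state=0)
X_feat = embedding.fit_transform(X)                 # explicit 200-dimensional feature space

clf = LogisticRegression(max_iter=1000).fit(X_feat, y)  # linear model on the embedding
print("training accuracy:", clf.score(X_feat, y))
```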
We have theoretically analysed our parallelisation scheme for the case that there are enough processors available to find each weak hypothesis on a separate processor. What if there are fewer than $r^h$ processors? The parallelisation scheme can quite naturally be de-parallelised and partially executed in sequence. For the runtime this implies an additional factor of $\max\{1, r^h/c\}$, where $c$ is the number of available processors. Thus, the Radon machine can be applied with any number of processors.
The scheme improves the confidence $1-\delta$ doubly exponentially in its parameter $h$, but for that it requires the weak hypotheses to already achieve a base confidence of $1 - \delta > 1 - 1/(2r)$. Is the scheme only applicable in high-confidence domains? Many application scenarios require high-confidence error bounds, e.g., in the medical domain [27] or in intrusion detection [36]. Apart from these theoretical considerations, in practice our scheme performs comparably to its base learner.
Besides runtime, communication plays an essential role in parallel learning. What is the communication complexity of the scheme? As for all aggregation-at-the-end strategies, the overall amount of communication is low compared to periodically communicating schemes. For the parallel aggregation of hypotheses, the scheme requires $O(r^{h+1})$ messages, each containing a single hypothesis of size $O(d)$. Furthermore, only a fraction of the data has to be transferred to each processor. Our scheme is ideally suited for inherently distributed data.
6 Conclusion and Future Work
We have proposed a parallelisation scheme that is effective, i.e., it speeds up computation through parallelisation while achieving the same hypothesis quality as the base learner. It is a black-box parallelisation in the sense that it is applicable to a wide range of machine learning algorithms and is oblivious to the implementation of these algorithms. Our empirical evaluation shows that in practice substantial speed-ups are achieved by the Radon machine.

Since in many applications data is no longer available as a batch but in the form of data streams, as future work it would be interesting to investigate how the scheme can be applied to distributed data streams. A promising approach is to aggregate hypotheses periodically using the Radon machine, similar to the federated learning approach proposed by McMahan et al. [24]. Another direction for future work is to apply the scheme to general randomized convex optimization algorithms with unobservable target functions.
7 Proof of Proposition 2 and Theorem 3
In order to prove Proposition 2 and, consecutively, Theorem 3, we require the following properties of Radon points and convex functions. We prove these properties for the more general case of quasi-convex functions. Since every convex function is also quasi-convex, the results hold for convex functions as well. A quasi-convex function is defined as follows.
Definition 5. A function $Q : \mathcal{F} \to \mathbb{R}$ is called quasi-convex if all its sublevel sets are convex, i.e., $\forall \lambda \in \mathbb{R}: \{f \in \mathcal{F} \mid Q(f) < \lambda\}$ is convex.

First we give a different characterisation of quasi-convex functions.

Proposition 6. A function $Q : \mathcal{F} \to \mathbb{R}$ is quasi-convex $\Leftrightarrow$ $\forall S \subseteq \mathcal{F}, \forall s_0 \in \langle S \rangle, \exists s \in S : Q(s) \geq Q(s_0)$.

Proof.

($\Rightarrow$) Suppose this direction does not hold. Then there is a quasi-convex function $Q$, a set $S \subseteq \mathcal{F}$, and an $s_0 \in \langle S \rangle$ such that for all $s \in S$ it holds that $Q(s) < Q(s_0)$ (therefore $s_0 \notin S$). Let $C = \{c \in \mathcal{F} \mid Q(c) < Q(s_0)\}$. As $S \subseteq C$ we also have that $\langle S \rangle \subseteq \langle C \rangle = C$ (the sublevel set $C$ is convex by quasi-convexity), which contradicts $\langle S \rangle \ni s_0 \notin C$.

($\Leftarrow$) Suppose this direction does not hold. Then there exists a $\lambda$ such that $S = \{s \in \mathcal{F} \mid Q(s) < \lambda\}$ is not convex, and there is an $s_0 \in \langle S \rangle \setminus S$. By assumption $\exists s \in S : Q(s) \geq Q(s_0)$. Hence $Q(s_0) < \lambda$ and we have a contradiction, since this would imply $s_0 \in S$.
The next proposition concerns the value of any quasi-convex function at a Radon point.

Proposition 7. For every set $S$ with Radon point $r$ and every quasi-convex function $Q$ it holds that $|\{s \in S \mid Q(s) \geq Q(r)\}| \geq 2$.
Proof. We show a slightly stronger result: take any family of pairwise disjoint sets $A_i$ with $\bigcap_i \langle A_i \rangle \neq \emptyset$ and $r \in \bigcap_i \langle A_i \rangle$. From Proposition 6 directly follows the existence of an $a_i \in A_i$ such that $Q(a_i) \geq Q(r)$. The desired result then follows from $a_i \neq a_j$ for $i \neq j$.
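The following is a numerical sanity check of Proposition 7 (a sketch of our own; `radon_point` is a helper we introduce, not part of the paper): compute a Radon point of $r = d + 2$ points in $\mathbb{R}^d$ and verify that at least two of them have $Q$-value at least $Q$ at the Radon point.

```python
import numpy as np

def radon_point(points):
    """points: shape (d + 2, d). Returns a point common to the convex hulls
    of the two parts of a Radon partition of the rows."""
    n, d = points.shape
    assert n == d + 2
    # nontrivial lam with lam @ points = 0 and lam.sum() = 0: a null vector
    # of the (d + 1) x (d + 2) matrix [points^T; ones]
    A = np.vstack([points.T, np.ones(n)])
    lam = np.linalg.svd(A)[2][-1]
    pos = lam > 0
    return lam[pos] @ points[pos] / lam[pos].sum()

rng = np.random.RandomState(1)
pts = rng.randn(4, 2)                                    # d = 2, so r = 4 points
r = radon_point(pts)

Q = lambda x: np.linalg.norm(x - np.array([5.0, 5.0]))   # convex, hence quasi-convex
print(sum(Q(s) >= Q(r) for s in pts) >= 2)               # True, as Proposition 7 asserts
```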
Using this property, we can prove Proposition 2 and Theorem 3.
Proof of Proposition 2 and Theorem 3. By Proposition 7, for any Radon point $r$ of a set $S$ there must be two points $a, b \in S$ with $Q(a), Q(b) \geq Q(r)$. Hence, the probability of $Q(r) > \epsilon$ is smaller than or equal to the probability of the pair $a, b$ having $Q(a), Q(b) > \epsilon$. Proposition 2 follows by an application of the union bound on all pairs from $S$. Repeated application of the proposition proves Theorem 3.
Acknowledgements

Part of this work was conducted while Mario Boley, Olana Missura, and Thomas Gärtner were at the University of Bonn and partially funded by the German Science Foundation (DFG, under ref. GA 1615/1-1 and GA 1615/2-1). The authors would like to thank Dino Oglic, Graham Hutton, Roderick MacKenzie, and Stefan Wrobel for valuable discussions and comments.
References

[1] Sanjeev Arora and Boaz Barak. Computational complexity: A modern approach. Cambridge University Press, 2009.
[2] Maria Florina Balcan, Yingyu Liang, Le Song, David Woodruff, and Bo Xie. Communication efficient distributed kernel principal component analysis. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 725-734, 2016.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2003.
[4] Manuel Blum. A machine-independent theory of the complexity of recursive functions. Journal of the ACM (JACM), 14(2):322-336, 1967.
[5] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM), 36(4):929-965, 1989.
[6] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[7] Ashok K. Chandra and Larry J. Stockmeyer. Alternation. In 17th Annual Symposium on Foundations of Computer Science, pages 98-108, 1976.
[8] Kenneth L. Clarkson, David Eppstein, Gary L. Miller, Carl Sturtivant, and Shang-Hua Teng. Approximating center points with iterative Radon points. International Journal of Computational Geometry & Applications, 6(3):357-377, 1996.
[9] Stephen A. Cook. Deterministic CFL's are accepted simultaneously in polynomial time and log squared space. In Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing, pages 338-345, 1979.
[10] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1):165-202, 2012.
[11] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243-264, 2002.
[12] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, 1995.
[13] Yoav Freund, Yishay Mansour, and Robert E. Schapire. Why averaging classifiers can protect against overfitting. In Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, 2001.
[14] Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo. Limits to parallel computation: P-completeness theory. Oxford University Press, Inc., 1995.
[15] William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(189-206):1, 1984.
[16] David Kay and Eugene W. Womble. Axiomatic convexity theory and relationships between the Carathéodory, Helly, and Radon numbers. Pacific Journal of Mathematics, 38(2):471-485, 1971.
[17] Clyde P. Kruskal, Larry Rudolph, and Marc Snir. A complexity theory of efficient parallel algorithms. Theoretical Computer Science, 71(1):95-132, 1990.
[18] Vipin Kumar, Ananth Grama, Anshul Gupta, and George Karypis. Introduction to parallel computing: Design and analysis of algorithms. Benjamin-Cummings Publishing Co., Inc., 1994.
[19] Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[20] Shao-Bo Lin, Xin Guo, and Ding-Xuan Zhou. Distributed learning with regularized least squares. Journal of Machine Learning Research, 18(92):1-31, 2017. URL http://jmlr.org/papers/v18/15-586.html.
[21] Philip M. Long and Rocco A. Servedio. Algorithms and hardness results for parallel large margin learning. Journal of Machine Learning Research, 14:3105-3128, 2013.
[22] Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, and Martin Takáč. Distributed optimization with arbitrary local solvers. Optimization Methods and Software, 32(4):813-848, 2017.
[23] Ryan McDonald, Mehryar Mohri, Nathan Silberman, Dan Walker, and Gideon S. Mann. Efficient large-scale distributed training of conditional maximum entropy models. In Advances in Neural Information Processing Systems, pages 1231-1239, 2009.
[24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282, 2017.
[25] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, and Ameet Talwalkar. MLlib: Machine learning in Apache Spark. Journal of Machine Learning Research, 17(34):1-7, 2016.
[26] Cleve Moler. Matrix computation on distributed memory multiprocessors. Hypercube Multiprocessors, 86(181-195):31, 1986.
[27] Ilia Nouretdinov, Sergi G. Costafreda, Alexander Gammerman, Alexey Chervonenkis, Vladimir Vovk, Vladimir Vapnik, and Cynthia H.Y. Fu. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression. Neuroimage, 56(2):809-813, 2011.
[28] Dino Oglic and Thomas Gärtner. Nyström method with kernel k-means++ samples as landmarks. In Proceedings of the 34th International Conference on Machine Learning, pages 2652-2660, 2017.
[29] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
[30] Johann Radon. Mengen konvexer Körper, die einen gemeinsamen Punkt enthalten. Mathematische Annalen, 83(1):113-115, 1921.
[31] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177-1184, 2007.
[32] Jonathan D. Rosenblatt and Boaz Nadler. On the optimality of averaging in distributed statistical learning. Information and Inference, 5(4):379-404, 2016.
[33] Alexander M. Rubinov. Abstract convexity and global optimization, volume 44. Springer Science & Business Media, 2013.
[34] Ohad Shamir and Nathan Srebro. Distributed stochastic optimization and learning. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing, pages 850-857, 2014.
[35] Ohad Shamir, Nati Srebro, and Tong Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning, pages 1000-1008, 2014.
[36] Robin Sommer and Vern Paxson. Outside the closed world: On using machine learning for network intrusion detection. In Symposium on Security and Privacy, pages 305-316, 2010.
[37] Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Optimization for machine learning. MIT Press, 2012.
[38] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[39] Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: Networked science in machine learning. SIGKDD Explorations, 15(2):49-60, 2013.
[40] Vladimir N. Vapnik and Alexey Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264-280, 1971.
[41] Jeffrey S. Vitter and Jyh-Han Lin. Learning in parallel. Information and Computation, 96(2):179-202, 1992.
[42] Ulrike Von Luxburg and Bernhard Schölkopf. Statistical learning theory: Models, concepts, and results. In Inductive Logic, volume 10 of Handbook of the History of Logic, pages 651-706. Elsevier, 2011.
[43] Ian H. Witten, Eibe Frank, Mark A. Hall, and Christopher J. Pal. Data Mining: Practical machine learning tools and techniques. Elsevier, 2017.
[44] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14(1):3321-3363, 2013.
[45] Martin Zinkevich, Markus Weimer, Alexander J. Smola, and Lihong Li. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595-2603, 2010.
6,884 | 7,227 | Semisupervised Clustering, AND-Queries and Locally
Encodable Source Coding
Arya Mazumdar
College of Information & Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
[email protected]
Soumyabrata Pal
College of Information & Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
[email protected]
Abstract
Source coding is the canonical problem of data compression in information theory.
In locally encodable source coding, each compressed bit depends on only a few bits of the input. In this paper, we show that a recently popular model of semisupervised clustering is equivalent to locally encodable source coding. In this model, the task is to perform multiclass labeling of unlabeled elements. At the beginning, we can ask in parallel a set of simple queries to an oracle who provides (possibly erroneous) binary answers to the queries. The queries cannot involve more than two (or a fixed constant number $\Delta$ of) elements. Now the labeling of all the elements (or clustering) must be performed based on the (noisy) query answers. The goal is to recover all the correct labelings while minimizing the number of such queries. The equivalence to locally encodable source codes leads us to find lower bounds on the number of queries required in a variety of scenarios. We are also able to show fundamental limitations of pairwise "same cluster" queries, and propose pairwise AND queries, which provably perform better in many situations.
1 Introduction
Suppose we have $n$ elements, and the $i$th element has a label $X_i \in \{0, 1, \dots, k-1\}$, $\forall i \in \{1, \dots, n\}$. We consider the task of learning the labels of the elements (or learning the label vector). This can also be easily thought of as a clustering problem of $n$ elements into $k$ clusters, where there is a ground-truth clustering¹. There exist various approaches to this problem in general. In many cases some similarity values between pairs of elements are known (a high similarity value indicates that they are in the same cluster). Given these similarity values (or a weighted complete graph), the task is equivalent to graph clustering; when perfect similarity values are known, this is equivalent to finding the connected components of a graph.
A recent approach to clustering has been via crowdsourcing. Suppose there is an oracle (expert labelers, crowd workers) with whom we can make pairwise queries of the form "do elements u and v belong to the same cluster?". We will call this the "same cluster" query (as per [4]). Based on the answers from the oracle, we then try to reconstruct the labeling or clustering. This idea has seen a recent surge of interest, especially in entity resolution research (see, e.g., [33, 30, 8, 20]). Since each query to crowd workers costs time and money, a natural objective is to minimize the number of queries to the oracle and still recover the clusters exactly. Carefully designed adaptive and interactive querying algorithms for clustering have also recently been developed [33, 30, 8, 22, 21]. In particular, the query complexity for clustering with a k-means objective has recently been studied in [4], and there is significant work on designing optimal crowdsourcing schemes in general (see [12, 13, 28, 34, 15]). Note that a crowd worker may potentially handle more than two elements at a time; however, it is of interest to keep the number of elements involved in a query as small as possible. As an example, recent work in [31] considers triangle queries (involving three elements in a query). Also, crowd workers can compute some simple functions on this small set of inputs, instead of answering a "same cluster" query. But again it is desirable that the answers the workers provide be simple, such as binary answers.

The queries to the oracle can be asked adaptively or nonadaptively. For the clustering problem, both the adaptive version and the nonadaptive version have been studied. While both versions have obvious advantages and disadvantages, for crowdsourcing applications it is helpful to have a parallelizable querying scheme in most scenarios, for faster response rates and real-time analysis. In this paper, we concentrate on the nonadaptive version of the problem, i.e., we perform the labeling algorithm after all the query answers are obtained.

¹The difference between clustering and learning labels is that in the case of clustering it is not necessary to know the value of the label for a cluster. Therefore any unsupervised labeling algorithm will be a clustering algorithm; however, the reverse is not true. In this paper we are mostly concerned with the labeling problem, hence our algorithms (upper bounds) are valid for clustering as well.
Budgeted crowdsourcing problems can be quite straightforwardly viewed as a canonical source coding or source-channel coding problem of information theory (e.g., see the recent paper [14]). A main contribution of our paper is to view this as a locally encodable source coding problem: a data compression problem where each compressed bit depends only on a constant number of input bits. The notion of locally encodable source coding is not well studied even within the information theory community, and the only place where it is mentioned, to the best of our knowledge, is in [23], although the focus of that paper is a related notion of smooth encoding. A related notion of local decoding seems to be much more well studied [19, 18, 16, 26, 6, 25, 5, 32].

By posing the querying problem as such, we can get a number of information theoretic lower bounds on the number of queries required to recover the correct labeling. We also provide nonadaptive schemes that are near optimal. Another of our main contributions is to show that even among queries with binary answers, "same cluster" queries (or XOR queries) may not be the best possible choice. A smaller number of queries can be achieved for approximate recovery by using what we call an AND query. Among our settings, we also consider the case when the oracle gives incorrect answers with some probability. A simple scheme to reduce errors in this case could be to take a majority vote after asking the same question to multiple different crowd workers. However, often that is not sufficient. Experiments on several real datasets (see [21]) with answers collected from Amazon Mechanical Turk [9, 29] show that majority voting could even increase the errors. Interestingly, such an observation has been made by a recent paper as well [27, Figure 1]. The probability of error of a query answer may also be thought of as the aggregated answer after repeating the query several times. Once the answer has been aggregated, it cannot change, and thus it reduces to the model where repeating the same question multiple times is not allowed. On the other hand, it is usually assumed that the answers to different queries are independently erroneous (see [10]). Therefore we consider the case where repetition of the same query multiple times is not allowed², while different queries can result in erroneous answers independently.

In this case, the best known algorithms need $O(n \log n)$ queries to perform the clustering with two clusters [21]. We show that by employing our AND querying method, a $(1-\delta)$ proportion of all labels in the label vector will be recovered with only $O(n \log \frac{1}{\delta})$ queries.

Along the way, we also provide new information theoretic results on fundamental limits of locally encodable source coding. While the related notion of locally decodable source codes [19, 16, 26, 6], as well as smooth compression [23, 26], have been studied, there was no nontrivial result known related to locally encodable codes in general. Although the focus of this paper is primarily theoretical, we also perform a real crowdsourcing experiment to validate our algorithm.
2 Problem Setup and Information Theoretic View
For $n$ elements, consider a label vector $X \in \{0, \dots, k-1\}^n$, where $X_i$, the $i$th entry of $X$, is the label of the $i$th element and can take one of $k$ possible values. Suppose $P(X_i = j) = p_j$ for all $j$, and the $X_i$'s are independent. In other words, the prior distribution of the labels is given by the vector $p \equiv (p_0, \dots, p_{k-1})$. For the special case of $k = 2$, we denote $p_0 \equiv 1-p$ and $p_1 \equiv p$. While we emphasize the case of $k = 2$, our results extend to the case of larger $k$, as will be mentioned.

²Independent repetition of queries is also theoretically not interesting, as by repeating any query just $O(\log n)$ times one can reduce the probability of error to near zero.
A query $Q : \{0, \dots, k-1\}^{\Delta} \to \{0, 1\}$ is a deterministic function that takes as argument $\Delta$ labels, $\Delta \le n$, and outputs a binary answer. While the query answer need not be binary, we restrict ourselves mostly to this case as the most practical choice.

Suppose a total of $m$ queries are made and the query answers are given by $Y \in \{0, 1\}^m$. The objective is to reconstruct the label vector $X$ from $Y$ such that the number of queries $m$ is minimized.
We assume our recovery algorithms have knowledge of $p$. This prior distribution, or the relative sizes of the clusters, is usually easy to estimate by subsampling a small ($O(\log n)$) subset of elements and performing a complete clustering within that set (by, say, all pairwise queries); a sketch is given below. In many prior works, especially in the recovery algorithms for popular statistical models such as the stochastic block model, it is assumed that the relative sizes of the clusters are known (see [1]).
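The following hedged sketch illustrates the subsampling idea; `estimate_prior` and the simulated oracle are our own illustration, not from the paper.

```python
# Cluster a small random sample exhaustively with pairwise "same cluster"
# queries (union-find over all pairs) and report relative cluster sizes.
import random
from itertools import combinations
from collections import Counter

def estimate_prior(n, oracle_same, sample_size):
    sample = random.sample(range(n), sample_size)
    parent = {i: i for i in sample}          # union-find over the sample
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for u, v in combinations(sample, 2):     # all-pairs queries within the sample
        if oracle_same(u, v):
            parent[find(u)] = find(v)
    counts = Counter(find(i) for i in sample)
    return sorted(c / sample_size for c in counts.values())

random.seed(0)
labels = [0] * 700 + [1] * 300               # hidden ground truth, p = (0.7, 0.3)
print(estimate_prior(1000, lambda u, v: labels[u] == labels[v], sample_size=60))
```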
We also consider the setting where query answers may be erroneous with some probability. For crowdsourcing applications this is a natural assumption, since even expert labelers can often make errors. To model this we assume each entry of $Y$ is flipped independently with some probability $q$. Such an independence assumption has been used many times previously to model errors in crowdsourcing systems (see, e.g., [10]). While this may not be the perfect model, we do not allow a single query to be repeated multiple times in our algorithms (see the Introduction for a justification). For the analysis of our algorithm we just need to assume that the answers to different queries are independent. While we analyze our algorithms under these assumptions for theoretical guarantees, it turns out that even in real crowdsourcing situations our algorithmic results mostly follow the theoretical results, giving further validation to the model.
For the $k = 2$ case, and when $q = 0$ (perfect oracle), it is easy to see that $n$ queries are sufficient for the task: one simply compares every element with the first element. This does not extend to the case when $k > 2$: for perfect recovery, and without any prior, one must make $O(n^2)$ queries in this case. When $q > 0$ (erroneous oracle), it has been shown that a total of $O(\theta nk \log n)$ queries is sufficient [21], where $\theta$ is the ratio of the sizes of the largest and smallest clusters.
Information theoretic view. The problem of learning a label vector $X$ from queries is very similar to the canonical source coding (data compression) problem of information theory. In the source coding problem, a (possibly random) vector $X$ is "encoded" into a usually smaller-length binary vector called the compressed vector³ $Y \in \{0,1\}^m$. The decoding task is to again obtain $X$ from the compressed vector $Y$. It is known that if $X$ is distributed according to $p$, then $m \ge nH(p)$ is both necessary and sufficient to recover $X$ with high probability, where $H(p) = -\sum_i p_i \log p_i$ is the entropy of $p$.

We can cast our problem in this setting naturally, where entries of $Y$ are just answers to queries made on $X$. The main difference is that in source coding each $Y_i$ may potentially depend on all the entries of $X$, while in the case of label learning each $Y_i$ may depend on only $\Delta$ of the $X_i$'s.
We call this locally encodable source coding. This terminology is analogous to the recently developed literature on locally decodable source coding [19, 16]. It is called locally encodable because each compressed bit depends only on $\Delta$ of the source (input) bits. For locally decodable source coding, each bit of the reconstructed sequence $\hat{X}$ depends on at most a prescribed constant number of bits from the compressed sequence. Another closely related notion is that of "smooth compression", where each source bit contributes to at most $\Delta$ compressed bits [23]. Indeed, in [23], the notion of locally encodable source coding is also present, where it was called robust compression. We provide new information theoretic lower bounds on the number of queries required to reconstruct $X$ exactly and approximately for our problem.
For the case when there are only two labels, the "same cluster" query is equivalent to a Boolean XOR operation between the labels. There are some inherent limitations to these functions that prohibit "same cluster" queries from achieving the best possible number of queries for the "approximate" recovery version of the labeling problem. We use an old result by Massey [17] to establish this limitation. We show that, instead, using an operation like Boolean AND, a much smaller number of queries is able to recover most of the labels correctly.

³The compressed vector is not necessarily binary, nor is it necessarily of smaller length.
We also consider the case when the oracle gives faulty answers, i.e., $Y$ is corrupted by some noise (the binary symmetric channel). This setting is analogous to the problem of joint source-channel coding. However, just as before, each encoded bit must depend on at most $\Delta$ bits. We show that for the approximate recovery problem, AND queries again perform substantially well. In a real crowdsourcing experiment, we have seen that if crowd workers are provided with the same set of pairs and asked both "same cluster" queries and AND queries, the error rate of the AND queries is lower. The reason is that for a correct "no" answer to an AND query, a worker needs to know only one of the labels in the pair. For a "same cluster" query, both labels must be known to the worker for any correct answer.

There are multiple reasons why one would ask a worker for a "combination" or function of multiple labels instead of just asking for a label itself (a "label-query"). Note that asking for labels will never let us recover the clusters in fewer than $n$ queries, whereas, as we will see, queries that combine labels will. Also, in the case of erroneous answers, with AND queries or "same cluster" queries we have the option of not repeating a query and still reducing errors. No such option is available with direct label-queries.
Contributions. In summary our contributions can be listed as follows.

1. Noiseless queries and exact recovery (Sec. 3.1): For two clusters, we provide a querying scheme that asks $\alpha n$, $\alpha < 1$, nonadaptive pairwise "same cluster" queries and recovers all the labels with high probability, for a range of prior probabilities. We also provide a new lower bound that is strictly better than $nH(p)$ for some $p$.

2. Noiseless queries and approximate recovery (Sec. 3.2): We provide a new lower bound on the number of queries required to recover a $(1-\delta)$ fraction of the labels, $\delta > 0$. We also show that "same cluster" queries are insufficient, and propose a new querying strategy based on the AND operation that performs substantially better.

3. Noisy queries and approximate recovery (Sec. 3.3): For this part we assume the query answer to be $k$-ary ($k \ge 2$), where $k$ is the number of clusters. This section contains the main algorithmic result, which uses AND queries as the main primitive. We show that, even in the presence of noise in the query answers, it is possible to recover a $(1-\delta)$ proportion of all labels correctly with only $O(n \log \frac{k}{\delta})$ nonadaptive queries. We validate this theoretical result in a real crowdsourcing experiment in Sec. 4.
3 Main results and Techniques

3.1 Noiseless queries and exact recovery
In this scenario we assume the query answers from the oracle to be perfect, and we wish to get back all of the original labels exactly, without any error. Each query is allowed to take only $\Delta$ labels as input. When $\Delta = 2$, we are allowed to ask only pairwise queries. Let us consider the case when there are only two labels, i.e., $k = 2$. That means the labels $X_i \in \{0,1\}$, $1 \le i \le n$, are i.i.d. Bernoulli($p$) random variables. Therefore the number of queries $m$ that is necessary and sufficient to recover all the labels with high probability is approximately $nh(p)$, up to $o(n)$ terms, where $h(x) \equiv -x \log_2 x - (1-x)\log_2(1-x)$ is the binary entropy function. However, the sufficiency part here does not take into account that each query can involve only $\Delta$ labels.

Querying scheme: We use the following type of queries. For each query, the labels of $\Delta$ elements are given to the oracle, and the oracle returns a simple XOR operation of the labels. Note that, for $\Delta = 2$, our queries are just "same cluster" queries.
Theorem 1. There exists a querying scheme with $m = \frac{n(h(p)+o(1))}{\log_2 \frac{1}{\gamma}}$ queries of the above type, where $\gamma = \frac{1}{2}\big(1 + (1 - 4p(1-p))^{\Delta}\big)$, such that it will be possible to recover all the labels with high probability by a maximum likelihood decoder.
Proof. Let the number of queries asked be $m$. Let us define $Q$ to be the random binary query matrix of dimension $m \times n$ where each row has exactly $\Delta$ ones, the other entries being zero. Now for a label vector $X$ we can represent the set of query outputs by $Y = QX \bmod 2$. If we use maximum likelihood decoding, then we will not make an error as long as the query output vector is different for every $X$ that belongs to the typical set⁴ of $X$. Let us define a "bad event" for two different label vectors $X^1$ and $X^2$ to be the event $QX^1 = QX^2$, or $Q(X^1 + X^2) = 0 \bmod 2$, because in that case we will not be able to differentiate between those two sequences. Now consider a random ensemble of matrices where in each row $\Delta$ positions are chosen uniformly at random from the $n$ positions to be 1. In this random ensemble, the probability of a "bad event" for any two fixed typical label vectors $X^1$ and $X^2$ is going to be
$$\Bigg(\sum_{\substack{i=0,\dots,\Delta \\ i \text{ even}}} \frac{\binom{nr(p)}{i}\binom{n - nr(p)}{\Delta - i}}{\binom{n}{\Delta}}\Bigg)^{m} \le \Big(\frac{1}{2}\big(1 + (1 - 2r(p))^{\Delta}\big)\Big)^{m},$$
where $r(p) = 2p(1-p)$. This is because $X^1 + X^2 \bmod 2$ has $nr(p) = 2np(1-p)$ ones with high probability, since they are typical vectors.
Now we have to use the "coding theoretic" idea of expurgation to complete the analysis. From linearity of expectation, the expected number of "bad events" is going to be
$$\binom{T}{2}\Big(\frac{1}{2}\big(1 + (1 - 2r(p))^{\Delta}\big)\Big)^{m},$$
where $T$ is the size of the typical set and $T \le 2^{n(h(p)+o(1))}$. If this expected number of "bad events" is smaller than $\epsilon T$, then for every "bad event" we can throw out one label vector and there will be no more bad events. This will imply perfect recovery as long as
$$\binom{T}{2}\, \gamma^{m} < \epsilon T,$$
where $\gamma = \frac{1}{2}\big(1 + (1 - 2r(p))^{\Delta}\big)$ as in the theorem statement.
Substituting the upper bound for $T$, we have that perfect recovery is possible as long as
$$\frac{m}{n} > \Big(h(p) + o(1) - \frac{\log_2 \epsilon}{n}\Big)\Big/\log_2 \frac{1}{\gamma}.$$
Now if we take $\epsilon$ to be of the form $n^{-\beta}$ for $\beta > 0$, then asymptotically we will have a vanishing fraction of typical label vectors that are expurgated, and $\frac{\log_2 \epsilon}{n} \to 0$. Therefore $m = \frac{n(h(p)+o(1))}{\log_2 \frac{1}{\gamma}}$ queries are going to recover all the labels with high probability. Hence there must exist a querying scheme with $m = \frac{n(h(p)+o(1))}{\log_2 \frac{1}{\gamma}}$ queries that will work.
The number of sufficient queries guaranteed by the above theorem is strictly less than $n$ for all $0.0694 \le p < 0.5$, even for $\Delta = 2$. Note that, with $\Delta = 2$, by querying the first element with all others nonadaptively ($n-1$ queries in total), it is possible to deduce the two clusters. In contrast, if one makes just random "same cluster" queries, then $O(n \log n)$ queries are required to recover the clusters with high probability (see, e.g., [2]).

Now we provide a lower bound on the number of queries required.
Theorem 2. The minimum number of queries necessary to recover all labels with high probability is at least
$$nh(p) \cdot \max\Bigg\{1,\; \max_{\epsilon} \frac{(1-\epsilon)}{h\Big(\frac{(1-\epsilon)\, r(p)\, \Delta}{\epsilon}\Big)}\Bigg\},$$
where $r(p) \equiv 2p(1-p)$.
Proof. Every query involves at most $\Delta$ elements. Therefore the average number of queries an element is part of is $\frac{\Delta m}{n}$. Therefore a $1-\epsilon$ fraction of all the elements (say the set $S \subseteq \{1, \dots, n\}$) are each part of fewer than $\frac{\Delta m}{\epsilon n}$ queries. Now consider the set $\{1, \dots, n\} \setminus S$. Consider all typical label vectors $C \subseteq \{0,1\}^n$ whose projection on $\{1, \dots, n\} \setminus S$ is a fixed typical sequence. We know that there are $2^{n(1-\epsilon)h(p)}$ such sequences. Let $X^0$ be one of these sequences. Now, almost all sequences of $C$ must have a distance of $n(1-\epsilon)r(p) + o(n)$ from $X^0$. Let $Y^0$ be the corresponding query output when $X^0$ is the input. Now any query output for an input belonging to $C$ must reside in a Hamming ball of radius $\frac{(1-\epsilon)\, r(p)\, \Delta m}{\epsilon}$ around $Y^0$. Therefore we must have $m\, h\big(\frac{(1-\epsilon)\, r(p)\, \Delta}{\epsilon}\big) > n(1-\epsilon)h(p)$.

This lower bound is better than the naive $nh(p)$ for $p < 0.03475$.
⁴Here a typical set of labels is the set of all label vectors where the number of ones is between $np - n^{2/3}$ and $np + n^{2/3}$.
For $\Delta = 2$, the plots of the corresponding upper and lower bounds are shown in Figure 1. The main takeaway from this part is that, by exploiting the prior probabilities (or relative cluster sizes), it is possible to learn the labels with strictly fewer than $n$ queries (and close to the lower bound for $p$ around $0.3$), even with pairwise "same cluster" queries.
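The per-element quantities behind Figure 1 can be computed directly from Theorems 1 and 2. The sketch below is our own illustration; restricting the inner maximisation to arguments of $h$ below $1/2$ is an implementation choice.

```python
import numpy as np

def h(x):
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def theorem1_upper(p, Delta=2):          # sufficient queries per element (Theorem 1)
    gamma = 0.5 * (1 + (1 - 4 * p * (1 - p)) ** Delta)
    return h(p) / np.log2(1 / gamma)

def theorem2_lower(p, Delta=2):          # necessary queries per element (Theorem 2)
    r = 2 * p * (1 - p)
    eps = np.linspace(0.01, 0.99, 99)
    arg = (1 - eps) * r * Delta / eps
    ok = arg < 0.5
    best = np.max((1 - eps[ok]) / h(arg[ok])) if ok.any() else 0.0
    return h(p) * max(1.0, best)

for p in [0.02, 0.05, 0.1, 0.3]:
    print(p, theorem2_lower(p), theorem1_upper(p))
```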
3.2 Noiseless queries and approximate recovery
Again let us consider the case when $k = 2$, i.e., only two possible labels. Let $X \in \{0,1\}^n$ be the label vector. Suppose we have a querying algorithm that, by using $m$ queries, recovers a label vector $\hat{X}$.

Definition. We call a querying algorithm $(1-\delta)$-good if for any label vector at least $(1-\delta)n$ labels are correctly recovered. This means that for any label-recovered label pair $X, \hat{X}$, the Hamming distance is at most $\delta n$. For an almost equivalent definition, we can define a distortion function $d(X, \hat{X}) = X + \hat{X} \bmod 2$ for any two labels $X, \hat{X} \in \{0,1\}$. We can see that $\mathbb{E}\, d(X, \hat{X}) = \Pr(X \neq \hat{X})$, which we want to be bounded by $\delta$.

Figure 1: Required number of pairwise queries for noiseless queries and exact recovery.
Using standard rate-distortion theory [7], it can be seen that if the queries could involve an arbitrary number of elements, then with $m$ queries it is possible to have a $(1 - \delta(m/n))$-good scheme, where $\delta(\alpha) \equiv h^{-1}(h(p) - \alpha)$. When each query is allowed to take only at most $\Delta$ inputs, we have the following lower bound for $(1-\delta)$-good querying algorithms.
following lower bound for (1 ? ?)-good querying algorithms.
Theorem 3. In any (1 ? ?)-good querying scheme with m queries where each query can take as
input ? elements, the following must be satisfied (below h0 (x) = dh(x)
dx ):
m
?
h(p) ? h(?( n ))
m
+
? ? ??
? m ))
?h0 (?(
0
?
n
n
h (?( m
)
n ))(1 + e
The proof of this theorem is quite involved, and we have included it in the appendix in the supplementary material.

One of the main observations that we make is that "same cluster" queries are highly inefficient for approximate recovery. This follows from a classical result of Ancheta and Massey [17] on the limitation of linear codes as rate-distortion codes. Recall that "same cluster" queries are equivalent to the XOR operation in the binary field, which is a linear operation on $GF(2)$. We rephrase a conjecture by Massey in our terminology.

Conjecture 1 ("same cluster" query lower bound). For any $(1-\delta)$-good scheme with $m$ "same cluster" queries ($\Delta = 2$), the following must be satisfied: $\delta \ge p\big(1 - \frac{m}{nh(p)}\big)$.

This conjecture is known to be true at the point $p = 0.5$ (equal-sized clusters). We have plotted these two lower bounds in Figure 2. Now let us provide a querying scheme with $\Delta = 2$ that will provably be better than "same cluster" schemes.

Figure 2: Performance of $(1-\delta)$-good schemes with noiseless queries; $p = 0.5$.
Querying scheme (AND queries): We define the AND query $Q : \{0,1\}^2 \to \{0,1\}$ as $Q(X, X') = X \wedge X'$, where $X, X' \in \{0,1\}$, so that $Q(X, X') = 1$ only when both elements have labels equal to 1. For each pairwise query the oracle will return this AND of the labels.
Theorem 4. There exists a $(1-\delta)$-good querying scheme with $m$ pairwise AND queries such that
$$\delta = p\, e^{-\frac{2m}{n}} + \sum_{d=1}^{\infty} \frac{e^{-\frac{2m}{n}}\big(\frac{2m}{n}\big)^{d}}{d!} \sum_{k=1}^{d} \binom{n}{k} \frac{f(k,d)}{n^{d}}\, (1-p)^{k}\, p,$$
where $f(k,d) = \sum_{i=0}^{k} (-1)^{i} \binom{k}{i} (k-i)^{d}$.
Proof. Assume $p < 0.5$ without loss of generality. Consider a random bipartite graph where each "left" node represents an element labeled according to the label vector $X \in \{0,1\}^n$ and each "right" node represents a query. All the query answers are collected in $Y \in \{0,1\}^m$. The graph has right-degree exactly equal to 2. For each query the two inputs are selected uniformly at random without replacement.

Recovery algorithm: For each element we look at the queries that involve it, estimate its label as 1 if any of the query answers is 1, and predict 0 otherwise. If there are no queries that involve the element, we simply output 0 as the label.

Since the average left-degree is $\frac{2m}{n}$, and since all the edges from the right nodes are thrown randomly and independently, we can model the degree of each left vertex by a Poisson distribution with mean $\lambda = \frac{2m}{n}$. We define element $j$ to be a two-hop neighbor of $i$ if there is at least one query which involved both elements $i$ and $j$. Under our decoding scheme we only have an error when the label of $i$ is $X_i = 1$ and the labels of all its two-hop neighbors are 0. Hence the probability of error for estimating $X_i$ can be written as $\Pr(X_i \neq \hat{X}_i) = \sum_d \Pr(\mathrm{degree}(i) = d)\, \Pr(X_i \neq \hat{X}_i \mid \mathrm{degree}(i) = d)$. Now let us estimate $\Pr(X_i \neq \hat{X}_i \mid \mathrm{degree}(i) = d)$. We further condition the error on the event that there are $k$ distinct two-hop neighbors (let us call the number of distinct neighbors of $i$ $\mathrm{Dist}(i)$), and hence we have that
$$\Pr(X_i \neq \hat{X}_i \mid \mathrm{degree}(i) = d) = \sum_{k=1}^{d} \Pr(\mathrm{Dist}(i) = k)\, \Pr(X_i \neq \hat{X}_i \mid \mathrm{degree}(i) = d, \mathrm{Dist}(i) = k) = \sum_{k=1}^{d} \binom{n}{k} \frac{f(k,d)}{n^{d}}\, p\,(1-p)^{k}.$$
Now using the Poisson assumption we get the statement of the theorem.
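The scheme and its decoder are simple enough to simulate directly. The sketch below is our own illustration for $k = 2$ and noiseless answers; query pairs are sampled with replacement, a small deviation from the random-graph model in the proof.

```python
import numpy as np

def and_query_distortion(n, m, p, rng):
    x = (rng.random(n) < p).astype(int)            # true labels
    pairs = rng.integers(0, n, size=(m, 2))        # m random pairwise AND queries
    answers = x[pairs[:, 0]] & x[pairs[:, 1]]      # noiseless AND answers
    x_hat = np.zeros(n, dtype=int)
    x_hat[pairs[answers == 1].ravel()] = 1         # a positive answer certifies both labels
    return np.mean(x != x_hat)                     # empirical distortion

rng = np.random.default_rng(0)
n = 20000
for m in (n // 2, n, 2 * n):
    print("m/n = %.1f  distortion = %.3f" % (m / n, and_query_distortion(n, m, 0.5, rng)))
```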
The performance of this querying scheme is plotted against the number of queries for prior probability $p = 0.5$ in Figure 2.

Comparison with "same cluster" queries: We see in Figure 2 that the AND query scheme beats the "same cluster" query lower bound for a range of the query-performance trade-off in approximate recovery for $p = \frac{1}{2}$. For smaller $p$, this range of values of $\delta$ increases further. If we randomly choose "same cluster" queries and then resort to maximum likelihood decoding (note that for AND queries we present a simple decoder), then $O(n \log n)$ queries are still required even if we allow for a $\delta$ proportion of incorrect labels (follows from [11]). The best performance for "same cluster" queries in approximate recovery that we know of for small $\delta$ is given by $m = n(1-\delta)$: neglect $\delta n$ elements and just query the $n(1-\delta)$ remaining elements with the first element. However, such a scheme can be achieved by AND queries as well in a similar manner. Therefore, there is no point in the query-vs-$\delta$ plot that we know of where "same cluster" query achievability outperforms AND query achievability.
3.3 Noisy queries and approximate recovery
This section contains our main algorithmic contribution. In contrast to the previous sections, here we consider the general case of $k \ge 2$ clusters. Recall that the label vector is $X \in \{0, 1, \dots, k-1\}^n$, and the prior probability of each label is given by the probabilities $p = (p_0, \dots, p_{k-1})$. Instead of binary-output queries, in this part we consider an oracle that can provide one of $k$ different answers. We consider a model of noise in the query answer where the oracle provides the correct answer with probability $1-q$, and any one of the remaining incorrect answers with probability $\frac{q}{k-1}$. Note that we do not allow the same query to be asked to the oracle multiple times (see Sec. 2 for justification). We also define a $(1-\delta)$-good approximation scheme exactly as before.
Querying scheme: We only perform pairwise queries. For a pair of labels X and X′ we define a query Y = Q(X, X′) ∈ {0, 1, . . . , k − 1}. For our algorithm we define Q as

    Q(X, X′) = i if X = X′ = i, and 0 otherwise.

We can observe that for k = 2 this query is exactly the same as the binary AND query that we defined in the previous section. In our querying scheme, we make a total of nd/2 queries, for an integer d > 1. We design a d-regular graph G(V, E), where V = {1, . . . , n} is the set of elements that we need to label, and we query all the pairs of elements (u, v) ∈ E.
Under this querying scheme, we propose to use Algorithm 1 for the reconstruction of labels.
Theorem 5. The querying scheme with m = O(n log(k/δ)) queries and Algorithm 1 is (1 − δ)-good for approximate recovery of labels from noisy queries.
We can derive a more exact relation between the number of queries m = nd/2, δ, p, q, and k; this is deferred to the appendix in the supplementary material.

Algorithm 1: Noisy query approximate recovery with nd/2 queries
Require: prior p ≡ (p_0, . . . , p_{k−1})
Require: query answers Y_{u,v} : (u, v) ∈ E
for i ∈ {1, 2, . . . , k − 1} do
    C_i = dq/(k − 1) + (d p_i / 2)(1 − qk/(k − 1))
end for
for u ∈ V do
    Assigned ← False
    for i ∈ {1, 2, . . . , k − 1} do
        N_{u,i} = Σ_{v=1}^{d} 1{Y_{u,v} = i}
        if N_{u,i} ≥ ⌈C_i⌉ then
            X_u ← i; Assigned ← True; break
        end if
    end for
    if ¬Assigned then X_u ← 0 end if
end for

Proof of Theorem 5. The total number of queries is m = nd/2. Now for a particular element u ∈ V, we look at the values of the d noisy oracle answers {Y_{u,v}}_{v=1}^{d}. We have E(N_{u,i}) = dq/(k − 1) + d p_i (1 − qk/(k − 1)) when the true label of u is i ≠ 0. When the true label is something else, E(N_{u,i}) = dq/(k − 1). There is an obvious gap between these expectations. Clearly, when the true label is i, the probability of error in the assignment of the label of u is given by

\[ P_i \le \sum_{j:\, j \neq i,\, j \neq 0} \Pr(N_{u,j} > C_j) + \Pr(N_{u,i} < C_i) \le c\,k\,e^{-2d\epsilon^2} \]

for some constants c and ε depending on the gap, from the Chernoff bound. And when the true label is 0, the probability of error is

\[ P_0 \le \sum_{j:\, j \neq 0} \Pr(N_{u,j} > C_j) \le c'\,k\,e^{-2d\epsilon'^2} \]

for some constants c′, ε′. Let δ = Σ_i p_i P_i; we can easily observe that d scales as O(log(k/δ)). Hence the total number of queries is nd/2 = O(n log(k/δ)).
The only thing that remains to be proved is that the number of incorrect labels is at most δn with high probability. Let Z_u be the event that element u has been incorrectly labeled; then E[Z_u] = δ. The total number of incorrectly labeled elements is Z = Σ_u Z_u, so E[Z] = nδ. Now write Z_u ∼ Z_v if Z_u and Z_v are dependent. Then Δ* ≡ Σ_{Z_u ∼ Z_v} Pr(Z_u | Z_v) ≤ d² + d, because the nodes dependent with Z_u are at most its 1-hop and 2-hop neighbors. Now using Corollary 4.3.5 in [3], it is evident that Z ∼ E[Z] = nδ almost always.
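To make the decoding rule concrete, here is a minimal Python sketch of Algorithm 1. The circulant construction of the d-regular graph (d even) and the name `query_oracle` are our own illustrative assumptions — any d-regular graph and noisy oracle interface would do — so this is a sketch, not the authors' code.

```python
import math
import random

def recover_labels(n, k, d, q, p, query_oracle):
    """Sketch of Algorithm 1: label n elements from nd/2 noisy pairwise queries.

    query_oracle(u, v) is assumed to return Q(X_u, X_v), flipped to a uniformly
    random wrong answer with probability q. `p` is the prior (p_0, ..., p_{k-1}).
    """
    # Build a d-regular graph (d even): place the nodes on a circle in a random
    # order and connect each node to its d/2 nearest neighbors on either side.
    perm = list(range(n))
    random.shuffle(perm)
    answers_of = [[] for _ in range(n)]
    for pos in range(n):
        for step in range(1, d // 2 + 1):
            u, v = perm[pos], perm[(pos + step) % n]
            y = query_oracle(u, v)           # one of the nd/2 noisy answers
            answers_of[u].append(y)
            answers_of[v].append(y)

    # Thresholds C_i: midpoint between E[N_{u,i}] under "label i" and "not i".
    C = [None] + [d * q / (k - 1) + (d * p[i] / 2) * (1 - q * k / (k - 1))
                  for i in range(1, k)]

    labels = []
    for u in range(n):
        label = 0                            # default when no count crosses C_i
        for i in range(1, k):
            if sum(1 for y in answers_of[u] if y == i) >= math.ceil(C[i]):
                label = i
                break
        labels.append(label)
    return labels
```

A synthetic `query_oracle` that draws X from the prior and corrupts Q(X_u, X_v) with probability q reproduces the setting of Figures 3 and 4.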
The theoretical performance guarantee of Algorithm 1 (a detailed version of Theorem 5 is in the supplementary material) for k = 2 is shown in Figures 3 and 4. We can observe from Figure 3 that, for a particular q, the incorrect labeling decreases as p becomes higher. We can also observe from Figure 4 that if q = 0.5 the incorrect labeling is 50%, because the complete information from the oracle is lost. For other values of q, we can see that the incorrect labeling decreases with increasing d.
We point out that "same cluster" queries are not a good choice here: because of the symmetric nature of XOR, there is no gap between the expected counts (in contrast to the proof of Theorem 5), and it is exactly this gap that our algorithm exploits.
Lastly, we show that Algorithm 1 can work without knowing the prior distribution, using only knowledge of the relative sizes of the clusters. The ground truth clusters can be adversarial as long as they maintain the relative sizes.
Theorem 6. Suppose we have n_i, the number of elements with label i, i = 0, 1, . . . , k − 1, as input instead of the priors. By taking a random permutation over the nodes while constructing the d-regular graph, Algorithm 1 is a (1 − δ)-good approximation with m = O(n log(k/δ)) queries as n → ∞ when we set p_i = n_i/n.
The proof of this theorem is deferred to the appendix in the supplementary material.
Figure 3: Recovery error for a fixed p, d = 100, and varying q. Figure 4: Recovery error for a fixed p, q, and varying d. Figure 5: Algorithm 1 on a real crowdsourced dataset.

4 Experiments
Though our main contribution is theoretical, we have validated our work by running our algorithm on a real dataset created by local crowdsourcing. We first picked a list of 100 "action" movies and 100 "romantic" movies from IMDB (http://www.imdb.com/list/ls076503982/ and http://www.imdb.com/list/ls058479560/). We then created the queries as given in the querying scheme of Sec. 3.3 by creating a d-regular graph (where d is even). To create the graph we put all the movies on a circle, took a random permutation of them, and for each node connected d/2 edges on either side to its closest neighbors in the permuted circular list. This random permutation allows us to use the relative sizes of the clusters as priors, as explained in Sec. 3.3. Using d = 10, we have nd/2 = 1000 queries, with each query being the following question: Are both the movies "action" movies? We divided these 1000 queries into 10 surveys (on the SurveyMonkey platform), each carrying 100 queries for the user to answer. We used 10 volunteers to fill out the surveys. We instructed them not to check any resources and to answer the questions spontaneously, and gave them a time limit of a maximum of 10 minutes; the average finish time of the surveys was 6 minutes. The answers realized the noisy query model, since some of the answers were wrong: in total, we found 105 erroneous answers in those 1000 queries. For each movie we evaluate the d query answers it is part of, and use different thresholds T for prediction. That is, if there are more than T "yes" answers among those d answers, we classify the movie as an "action" movie and as a "romantic" movie otherwise. The theoretical threshold for predicting an "action" movie is T = 2 for oracle error probability q = 0.105, p = 0.5 and d = 10, but we compared other thresholds as well.
We then used Algorithm 1 to predict the true label vector from a subset of queries by taking d̃ edges for each node, where d̃ < d and d̃ is even, i.e., d̃ ∈ {2, 4, 6, 8, 10}. Obviously, for d̃ = 2 the thresholds T = 3, 4 are meaningless, as we then always estimate the movie as "romantic" and hence the distortion starts from 0.5 in that case. We plotted the error for each case against the number of queries (nd̃/2), along with the theoretical distortion obtained from our results for k = 2 labels and p = 0.5, q = 0.105. All these results are compiled in Figure 5, and we can observe that the distortion decreases with the number of queries and that the gap between the theoretical and experimental results is small for T = 2. These results validate our theoretical results and our algorithm to a large extent.

Figure 6: Comparison of "same cluster" queries with AND queries when both achieve 80% accuracy.
We also asked "same cluster" queries with the same set of 1000 pairs to the participants, and found the number of erroneous responses to be 234 (whereas with AND queries it was 105). This substantiates the claim that AND queries are easier for workers to answer. Since this number of errors is too high, we compared the performance of "same cluster" queries against AND queries and our algorithm on a synthetically generated dataset with two hundred elements (Figure 6). For recovery with "same cluster" queries, we used the popular spectral clustering algorithm with normalized cuts [24]. The detailed results can be found in Figure 7 in the supplementary material.
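For the "same cluster" baseline, a sketch along the following lines matches the description above. The function name and the exact affinity construction are our own assumptions, not the authors' code; it simply feeds the noisy binary answers to off-the-shelf normalized-cut spectral clustering [24].

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spectral_from_same_cluster(n, answers, k=2):
    """Noisy 'same cluster' answers -> similarity matrix -> normalized-cut
    spectral clustering [24]. `answers` maps a queried pair (u, v) to 0/1."""
    S = np.zeros((n, n))
    for (u, v), same in answers.items():
        S[u, v] = S[v, u] = same
    np.fill_diagonal(S, 1.0)
    return SpectralClustering(n_clusters=k, affinity="precomputed").fit_predict(S)
```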
Acknowledgements: This research is supported in part by NSF Awards CCF-BSF 1618512, CCF 1642550 and an NSF CAREER Award CCF 1642658. The authors thank Barna Saha for many discussions on the topics of this paper. The authors also thank the volunteers who participated in the crowdsourcing experiments for this paper.
References
[1] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE Trans. Information Theory, 62(1):471–487, 2016.
[2] K. Ahn, K. Lee, and C. Suh. Community recovery in hypergraphs. In Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on, pages 657–663. IEEE, 2016.
[3] N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley & Sons, 2004.
[4] H. Ashtiani, S. Kushagra, and S. Ben-David. Clustering with same-cluster queries. In Advances in Neural Information Processing Systems, pages 3216–3224, 2016.
[5] H. Buhrman, P. B. Miltersen, J. Radhakrishnan, and S. Venkatesh. Are bitvectors optimal? SIAM Journal on Computing, 31(6):1723–1744, 2002.
[6] V. B. Chandar. Sparse graph codes for compression, sensing, and secrecy. PhD thesis, Massachusetts Institute of Technology, 2010.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory, 2nd ed. John Wiley & Sons, 2012.
[8] D. Firmani, B. Saha, and D. Srivastava. Online entity resolution using an oracle. PVLDB, 9(5):384–395, 2016.
[9] A. Gruenheid, B. Nushi, T. Kraska, W. Gatterbauer, and D. Kossmann. Fault-tolerant entity resolution with the crowd. CoRR, abs/1512.00537, 2015.
[10] A. Gruenheid, B. Nushi, T. Kraska, W. Gatterbauer, and D. Kossmann. Fault-tolerant entity resolution with the crowd. arXiv preprint arXiv:1512.00537, 2015.
[11] B. Hajek, Y. Wu, and J. Xu. Achieving exact cluster recovery threshold via semidefinite programming: Extensions. IEEE Transactions on Information Theory, 62(10):5918–5937, 2016.
[12] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In Advances in Neural Information Processing Systems, pages 1953–1961, 2011.
[13] D. R. Karger, S. Oh, and D. Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62(1):1–24, 2014.
[14] F. Lahouti and B. Hassibi. Fundamental limits of budget-fidelity trade-off in label crowdsourcing. In Advances in Neural Information Processing Systems, pages 5059–5067, 2016.
[15] Q. Liu, J. Peng, and A. T. Ihler. Variational inference for crowdsourcing. In Advances in Neural Information Processing Systems, pages 692–700, 2012.
[16] A. Makhdoumi, S.-L. Huang, M. Médard, and Y. Polyanskiy. On locally decodable source coding. In Communications (ICC), 2015 IEEE International Conference on, pages 4394–4399. IEEE, 2015.
[17] J. L. Massey. Joint source and channel coding. Technical report, DTIC Document, 1977.
[18] A. Mazumdar, V. Chandar, and G. W. Wornell. Update-efficiency and local repairability limits for capacity approaching codes. IEEE Journal on Selected Areas in Communications, 32(5):976–988, 2014.
[19] A. Mazumdar, V. Chandar, and G. W. Wornell. Local recovery in data compression for general sources. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 2984–2988. IEEE, 2015.
[20] A. Mazumdar and B. Saha. A theoretical analysis of first heuristics of crowdsourced entity resolution. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017.
[21] A. Mazumdar and B. Saha. Clustering with noisy queries. In Advances in Neural Information Processing Systems (NIPS) 31, 2017.
[22] A. Mazumdar and B. Saha. Query complexity of clustering with side information. In Advances in Neural Information Processing Systems (NIPS) 31, 2017.
[23] A. Montanari and E. Mossel. Smooth compression, Gallager bound and nonlinear sparse-graph codes. In Information Theory, 2008. ISIT 2008. IEEE International Symposium on, pages 2474–2478. IEEE, 2008.
[24] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856, 2002.
[25] A. Pananjady and T. A. Courtade. Compressing sparse sequences under local decodability constraints. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 2979–2983. IEEE, 2015.
[26] M. Patrascu. Succincter. In Foundations of Computer Science, 2008. FOCS'08. IEEE 49th Annual IEEE Symposium on, pages 305–313. IEEE, 2008.
[27] D. Prelec, H. S. Seung, and J. McCoy. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532–535, 2017.
[28] A. Vempaty, L. R. Varshney, and P. K. Varshney. Reliable crowdsourcing for multi-class labeling using coding theory. IEEE Journal of Selected Topics in Signal Processing, 8(4):667–679, 2014.
[29] V. Verroios and H. Garcia-Molina. Entity resolution with crowd errors. In 31st IEEE International Conference on Data Engineering, ICDE 2015, Seoul, South Korea, April 13–17, 2015, pages 219–230, 2015.
[30] N. Vesdapunt, K. Bellare, and N. Dalvi. Crowdsourcing algorithms for entity resolution. PVLDB, 7(12):1071–1082, 2014.
[31] R. K. Vinayak and B. Hassibi. Crowdsourced clustering: Querying edges vs triangles. In Advances in Neural Information Processing Systems, pages 1316–1324, 2016.
[32] E. Viola. Bit-probe lower bounds for succinct data structures. SIAM Journal on Computing, 41(6):1593–1604, 2012.
[33] J. Wang, T. Kraska, M. J. Franklin, and J. Feng. CrowdER: Crowdsourcing entity resolution. PVLDB, 5(11):1483–1494, 2012.
[34] D. Zhou, S. Basu, Y. Mao, and J. C. Platt. Learning from the wisdom of crowds by minimax entropy. In Advances in Neural Information Processing Systems, pages 2195–2203, 2012.
Clustering Stable Instances of Euclidean k-means
Abhratanu Dutta∗
Northwestern University
[email protected]

Aravindan Vijayaraghavan∗
Northwestern University
[email protected]

Alex Wang†
Carnegie Mellon University
[email protected]
Abstract
The Euclidean k-means problem is arguably the most widely-studied clustering
problem in machine learning. While the k-means objective is NP-hard in the
worst-case, practitioners have enjoyed remarkable success in applying heuristics
like Lloyd?s algorithm for this problem. To address this disconnect, we study
the following question: what properties of real-world instances will enable us to
design efficient algorithms and prove guarantees for finding the optimal clustering?
We consider a natural notion called additive perturbation stability that we believe
captures many practical instances of Euclidean k-means clustering. Stable instances
have unique optimal k-means solutions that do not change even when each point
is perturbed a little (in Euclidean distance). This captures the property that the optimal k-means solution should be tolerant to measurement errors and uncertainty
in the points. We design efficient algorithms that provably recover the optimal
clustering for instances that are additive perturbation stable. When the instance
has some additional separation, we can design a simple, efficient algorithm with
provable guarantees that is also robust to outliers. We also complement these
results by studying the amount of stability in real datasets, and demonstrating that
our algorithm performs well on these benchmark datasets.
1 Introduction
One of the major challenges in the theory of clustering is to bridge the large disconnect between our
theoretical and practical understanding of the complexity of clustering. While theory tells us that
most common clustering objectives like k-means or k-median clustering problems are intractable in
the worst case, many heuristics like Lloyd's algorithm or k-means++ seem to be effective in practice. In fact, this has led to the "CDNM" thesis [11, 9]: "Clustering is difficult only when it does not matter".
We try to address the following natural questions in this paper: Why are real-world instances of
clustering easy? Can we identify properties of real-world instances that make them tractable?
We focus on the Euclidean k-means clustering problem where we are given n points X =
{x_1, . . . , x_n} ⊆ R^d, and we need to find k centers μ_1, μ_2, . . . , μ_k ∈ R^d minimizing the objective Σ_{x∈X} min_{i∈[k]} ‖x − μ_i‖². The k-means clustering problem is the most well-studied objective
for clustering points in Euclidean space [18, 3]. The problem is NP-hard in the worst-case [14] even
for k = 2, and a constant factor hardness of approximation is known for larger k [5].
∗ Supported by the National Science Foundation (NSF) under Grant No. CCF-1637585.
† Part of the work was done while the author was at Northwestern University.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
One way to model real-world instances of clustering problems is through instance stability, which
is an implicit structural assumption about the instance. Practically interesting instances of the k-means clustering problem often have a clear optimal clustering solution (usually the ground-truth clustering)
that is stable: i.e., it remains optimal even under small perturbations of the instance. As argued in [7],
clustering objectives like k-means are often just a proxy for recovering a ground-truth clustering that
is close to the optimal solution. Instances in practice always have measurement errors, and optimizing
the k-means objective is meaningful only when the optimal solution is stable to these perturbations.
This notion of stability was formalized independently in a pair of influential works [11, 7]. The
predominant strand of work on instance stability assumes that the optimal solution is resilient to
multiplicative perturbations of the distances [11]. For any γ ≥ 1, a metric clustering instance (X, d) on point set X ⊆ R^d and metric d : X × X → R_+ is said to be γ-factor stable iff the (unique) optimal clustering C_1, . . . , C_k of X remains the optimal solution for any instance (X, d′) where any (subset) of the distances are increased by up to a γ factor, i.e., d(x, y) ≤ d′(x, y) ≤ γ·d(x, y) for any x, y ∈ X. In a series of recent works [4, 8] culminating in [2], it was shown that 2-factor perturbation stable (i.e., γ ≥ 2) instances of k-means can be solved in polynomial time.
Multiplicative perturbation stability represents an elegant, well-motivated formalism that captures robustness to measurement errors for clustering problems in general metric spaces (γ = 1.1 captures relative errors of 10% in the distances). However, multiplicative perturbation stability has the following drawbacks in the case of Euclidean clustering problems:

- Measurement errors in Euclidean instances are better captured using additive perturbations. Uncertainty of δ in the position of x, y leads to an additive error of δ in ‖x − y‖₂, irrespective of how large or small ‖x − y‖₂ is.
- The amount of stability γ needed to enable efficient algorithms (i.e., γ ≥ 2) often implies strong structural conditions that are unlikely to be satisfied by many real-world datasets. For instance, γ-factor perturbation stability implies that every point is a multiplicative factor of γ closer to its own center (say μ_i) than to any other cluster center μ_j.
- Algorithms that are known to have provable guarantees under multiplicative perturbation stability are based on single-linkage or MST algorithms that are very non-robust by nature. In the presence of a few outliers or noise, any incorrect decision in the lower layers gets propagated up to the higher levels.
In this work, we consider a natural additive notion of stability for Euclidean instances, where the optimal k-means clustering solution does not change even when each point is moved by a small Euclidean distance of up to δ. Moving each point by at most δ corresponds to a small additive perturbation of the pairwise distances between the points³. Unlike multiplicative notions of perturbation stability [11, 4], this notion of additive perturbation is not scale invariant; hence the normalization or scale of the perturbation is important.
Ackerman and Ben-David [1] initiated the study of additive perturbation stability when the distance between any pair of points can be changed by at most δ = ε·diam(X), with diam(X) being the diameter of the whole dataset. The algorithms take time n^{O(k/ε²)} = n^{O(k·diam²(X)/δ²)} and correspond to polynomial time algorithms when k, 1/ε are constants. However, this dependence of k·diam²(X)/δ² in the exponent is not desirable, since the diameter is a very non-robust quantity: the presence of one outlier (even one far away from the decision boundary) can increase the diameter arbitrarily. Hence, these guarantees are useful mainly when the whole instance lies within a small ball and for a small number of clusters [1, 10]. Our notion of additive perturbation stability will use a different scale parameter that is closely related to the distance between the centers, instead of the diameter diam(X). As we will see soon, our results for additive perturbation stability have no explicit dependence on the diameter, and allow instances to have potentially unbounded clusters (as in the case of far-away outliers). Further, with some additional assumptions, we also obtain polynomial time algorithmic guarantees for large k.
³ Note that not all additive perturbations of the distances can be captured by an appropriate movement of the points in the cluster. Hence the notion we consider in our paper is a weaker assumption on the instance.
Figure 1: a) Left: an instance with k = 2 satisfying ε-APS, with D being the separation between the means. The half-angle of the cone is arctan(1/ε), and the distance between μ_1 and the apex of the cone (ρ) is at most D/2. b) Right: a (ρ, Δ, ε)-separated instance with scale parameter ρ. All the points lie inside cones of half-angle arctan(1/ε), whose apexes are separated by a margin of at least Δ.
1.1 Additive Perturbation Stability and Our Contributions
We consider a notion of additive stability where the points in the instance can be moved by at most δ = εD, where ε ∈ (0, 1) is a parameter and D = max_{i≠j} D_{ij} = max_{i≠j} ‖μ_i − μ_j‖₂ is the maximum distance between pairs of means. Suppose X is a k-means clustering instance with optimal clustering C_1, C_2, . . . , C_k. We say that X is ε-APS (additive perturbation stable) iff every δ = εD-additive perturbation of X has C_1, C_2, . . . , C_k as an optimal clustering solution. (See Definition 2.3 for a formal definition.) Note that there is no restriction on the diameter of the instance, or even the diameters of the individual clusters. Hence, our notion of additive perturbation stability allows the instance to be unbounded.
Geometric property of ε-APS instances. Clusters in the optimal solution of an ε-APS instance satisfy a natural geometric condition that implies an "angular separation" between every pair of clusters.
Proposition 1.1 (Geometric Implication of ε-APS). Consider an ε-APS instance X, and let C_i, C_j be two clusters of the optimal solution. Any point x ∈ C_i lies in a cone whose axis is along the direction (μ_i − μ_j) with half-angle arctan(1/ε). Hence if u is the unit vector along μ_i − μ_j, then

\[ \forall x \in C_i, \qquad \frac{|\langle x - \tfrac{\mu_i + \mu_j}{2},\, u \rangle|}{\|x - \tfrac{\mu_i + \mu_j}{2}\|_2} > \frac{\epsilon}{\sqrt{1 + \epsilon^2}}. \tag{1} \]

For any j ∈ [k], all the points in cluster C_i lie inside the cone with its axis along (μ_i − μ_j), as in Figure 1. The distance between μ_i and the apex of the cone is ρ = (1/2 − ε)D. We will call ρ the scale parameter of the clustering.
We believe that many clustering instances in practice satisfy the ε-APS condition for reasonable constants ε. In fact, our experiments in Section 4 suggest that the above geometric condition is satisfied for reasonable values, e.g., ε ∈ (0.001, 0.2).
While the points can be arbitrarily far away from their own means, the above angular separation (1) is crucial in proving the polynomial time guarantees for our algorithms. For instance, it implies that at least 1/2 of the points in a cluster C_i are within a Euclidean distance of at most O(ρ/ε) from μ_i. This geometric condition (1) of the dataset enables the design of a tractable algorithm for k = 2 with provable guarantees. The algorithm is based on a modification of the perceptron algorithm in supervised learning, and is inspired by [13].
Informal Theorem 1.2. For any fixed ε > 0, there exists a d·n^{poly(1/ε)} time algorithm that correctly clusters all ε-APS 2-means instances.
For k-means clustering, similar techniques can be used to learn the separating halfspace for each pair of clusters. But this incurs an exponential dependence on k², which renders this approach inefficient for large k.⁴ We now consider a natural strengthening of this assumption that allows us to get poly(n, d, k) guarantees.
Angular Separation with additional Margin Separation. We consider a natural strengthening of additive perturbation stability where there is an additional margin between any pair of clusters. This is reminiscent of margin assumptions in supervised learning of halfspaces and the spectral clustering guarantees of Kumar and Kannan [15] (see Section 1.2). Consider a k-means clustering instance X having an optimal solution C_1, C_2, . . . , C_k. This instance is (ρ, Δ, ε)-separated iff for each i ≠ j ∈ [k], the subinstance induced by C_i, C_j has scale parameter ρ, and all points in the clusters C_i, C_j lie inside cones of half-angle arctan(1/ε) which are separated by a margin of at least Δ. This is implied by the stronger condition that the subinstance induced by C_i, C_j is ε-additive perturbation stable with scale parameter ρ even when C_i and C_j are moved towards each other by Δ. Please see Figure 1 for an illustration. We formally define (ρ, Δ, ε)-separated stable instances geometrically in Section 2.
Informal Theorem 1.3 (Polytime algorithm for (ρ, Δ, ε)-separated instances). There is an algorithm running in time⁵ Õ(n²kd) that, given any instance X that is (ρ, Δ, ε)-separated with Δ ≥ Ω(ρ/ε²), recovers the optimal clustering C_1, . . . , C_k.
A formal statement of the theorem (with unequal sized clusters) and its proof are given in Section 3. We prove these polynomial time guarantees for a new, simple algorithm (Algorithm 3.1). The algorithm constructs a graph with one vertex for each point, and edges between points that are within a distance of at most r (for an appropriate threshold r). The algorithm then finds the k largest connected components, and uses the empirical means of these k components to cluster all the points.
In addition to having provable guarantees, the algorithm is also efficient in practice and performs well on standard clustering datasets. Experiments that we conducted on some standard clustering datasets from UCI suggest that our algorithm manages to almost recover the ground truth and achieves a k-means objective cost that is very comparable to Lloyd's algorithm and k-means++ (see Section 4). In fact, our algorithm can also be used to initialize Lloyd's algorithm: our guarantees show that when the instance is (ρ, Δ, ε)-separated, one iteration of Lloyd's algorithm already finds the optimal clustering. Experiments suggest that our algorithm finds initializers of smaller k-means cost compared to the initializers of k-means++ [3], and also recovers the ground truth to good accuracy (see Section 4 and the Supplementary material for details).
Robustness to Outliers. Perturbation stability requires the optimal solution to remain completely unchanged under any valid perturbation. In practice, the stability of an instance may be dramatically reduced by a few outliers. We show provable guarantees for a slight modification of the algorithm in the setting where an η-fraction of the points can be arbitrary outliers that do not lie in the stable regions. More formally, we assume that we are given an instance X ∪ Z where there is an (unknown) set of points Z with |Z| ≤ η|X| such that X is a (ρ, Δ, ε)-separated stable instance. Here ηn is assumed to be smaller than the size of the smallest cluster by a constant factor. This is similar to the robust perturbation resilience considered in [8, 16]. Our experiments in Section 4 indicate that the stability or separation can increase a lot after ignoring a few points close to the margin.
In what follows, w_max = max_i |C_i|/n and w_min = min_i |C_i|/n are the maximum and minimum weights of clusters, and η < w_min.
Theorem 1.4. Given X ∪ Z, where X is (ρ, Δ, ε)-separated with η < w_min and

\[ \Delta = \Omega\!\left( \frac{\rho}{\epsilon^2} \cdot \frac{w_{\max} + \eta}{w_{\min} - \eta} \right), \]

there is a polynomial time algorithm running in time Õ(n²dk) that returns a clustering consistent with C_1, . . . , C_k on X.
This robust algorithm is effectively the same as Algorithm 3.1 with one additional step that removes all low-degree vertices in the graph. This step removes bad outliers in Z without removing too many points from X.
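The low-degree pruning step can be sketched as below. The choice of the degree cutoff is an assumption of ours (the analysis ties it to the expected within-core degree), not a prescription from the paper.

```python
def prune_low_degree(adjacency, min_degree):
    """Drop vertices of the threshold graph whose degree is below min_degree,
    then restrict the remaining adjacency lists to surviving vertices."""
    keep = {u for u, nbrs in adjacency.items() if len(nbrs) >= min_degree}
    return {u: [v for v in adjacency[u] if v in keep]
            for u in adjacency if u in keep}
```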
⁴ We remark that the results of [1] also incur an exponential dependence on k.
⁵ The Õ(·) hides an inverse Ackermann function of n.
1.2 Comparisons to Other Related Work
Awasthi et al. [4] showed that γ-multiplicative perturbation stable instances also satisfy the notion of γ-center based stability (every point is a γ-factor closer to its center than to any other center) [4]. They showed that an algorithm based on the classic single linkage algorithm works under this weaker notion when γ ≥ 3. This was subsequently improved by [8], and the best result along these lines [2] gives a polynomial time algorithm that works for γ ≥ 2. A robust version of (α, ε)-perturbation resilience was explored for center-based clustering objectives [8]. As such, the notions of additive perturbation stability and (ρ, Δ, ε)-separated instances are incomparable to the various notions of multiplicative perturbation stability. Further, as argued in [9], we believe that additive perturbation stability is more realistic for Euclidean clustering problems.
Ackerman and Ben-David [1] initiated the study of various deterministic assumptions for clustering instances. The measure of stability most related to this work is Center Perturbation (CP) clusterability (an instance is δ-CP-clusterable if perturbing the centers by a distance of δ does not increase the cost much). A subtle difference is their focus on obtaining solutions with small objective cost [1], while our goal is to recover the optimal clustering. However, the main qualitative difference is how the length scale is defined — this is crucial for additive perturbations. The run time of the algorithm in [1] is n^{poly(k, diam(X)/δ)}, where the length scale of the perturbations is diam(X), the diameter of the whole instance. Our notion of additive perturbations uses a much smaller length scale of D (essentially the inter-mean distance; see Prop. 1.1 for a geometric interpretation), and Theorem 1.2 gives a run-time guarantee of n^{poly(D/δ)} for k = 2 (Theorem 1.2 is stated in terms of ε = δ/D). By using the largest inter-mean distance instead of the diameter as the length scale, our algorithmic guarantees can also handle unbounded clusters with arbitrarily large diameters and outliers.
The exciting results of Kumar and Kannan [15] and Awasthi and Sheffet [6] also gave deterministic margin-separation conditions under which spectral clustering (PCA followed by k-means) finds the optimal clusters. Suppose σ = ‖X − C‖²_op/n is the "spectral radius" of the dataset, where C is the matrix given by the centers. In the case of equal-sized clusters, the improved results of [6] prove approximate recovery of the optimal clustering if the margin Δ between the clusters along the line joining the centers satisfies Δ = Ω(√k·σ). Our notion of margin Δ in (ρ, Δ, ε)-separated instances is analogous to the margin separation notion used by the above results on spectral clustering [15, 6]. In particular, we require a margin of Δ = Ω(ρ/ε²), where ρ is our scale parameter, with no extra √k factor. However, we emphasize that the two margin conditions are incomparable, since the spectral radius σ is incomparable to the scale parameter ρ.
We now illustrate the difference between these deterministic conditions by presenting a couple of examples. Consider an instance with n points drawn from a mixture of k Gaussians in d dimensions with identical diagonal covariance matrices, with variance 1 in the first O(1) coordinates and roughly 1/d in the others, and all the means lying in the subspace spanned by these first O(1) coordinates. In this setting, the results of [15, 6] require a margin separation of at least √(k log n) between clusters. On the other hand, these instances satisfy our geometric conditions with ε = Ω(1), ρ ≈ √(log n), and therefore our algorithm only needs a margin separation of ≈ √(log n) (hence, saving a factor of √k).⁶ However, if the n points were drawn from a mixture of spherical Gaussians in high dimensions (with d ≫ k), then the margin condition required for [15, 6] is weaker.
2 Stability definitions and geometric properties
X ⊆ R^d will denote a k-means clustering instance and C_1, . . . , C_k will often refer to its optimal clustering. It is well-known that, given a cluster C, the value of μ minimizing Σ_{x∈C} ‖x − μ‖² is given by μ = (1/|C|) Σ_{x∈C} x, the mean of the points in the set. We give more preliminaries about the k-means problem in the Supplementary Material.
2.1 Balance Parameter

We define an instance parameter, β, capturing how balanced a given instance's clusters are.

⁶ Further, while algorithms for learning GMM models may work here, adding some outliers far from the decision boundary will cause many of these algorithms to fail, while our algorithm is robust to such outliers.
Figure 2: An example of the family of perturbations considered by Lemma 2.4. Here v is in the upward direction. If a is to the right of the diagonal solid line, then a′ will be to the right of the slanted dashed line and will lie on the wrong side of the separating hyperplane.

Definition 2.1 (Balance parameter). Given an instance X with optimal clustering C_1, . . . , C_k, we say X satisfies balance parameter β ≥ 1 if for all i ≠ j, β|C_i| > |C_j|.
We will write β in place of β(X) when the instance is clear from context.
2.2 Additive perturbation stability

Definition 2.2 (ε-additive perturbation). Let X = {x_1, . . . , x_n} be a k-means clustering instance with optimal clustering C_1, C_2, . . . , C_k whose means are given by μ_1, μ_2, . . . , μ_k. Let D = max_{i,j} ‖μ_i − μ_j‖. We say that the instance X′ = {x′_1, . . . , x′_n} is an ε-additive perturbation of X if for all i, ‖x′_i − x_i‖ ≤ εD.
Definition 2.3 (ε-additive perturbation stability). Let X be a k-means clustering instance with optimal clustering C_1, C_2, . . . , C_k. We say that X is ε-additive perturbation stable (APS) if every ε-additive perturbation of X has unique optimal clustering given by C_1, C_2, . . . , C_k.
Intuitively, the difficulty of the clustering task increases as the stability parameter ε decreases. For example, when ε = 0 the set of ε-APS instances contains any instance with a unique solution. In the following we will only consider ε > 0.
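Since ε-APS quantifies over all ε-perturbations, it cannot be certified by sampling, but it can be refuted. The following brute-force sketch — usable only for very small n and k, with all names our own — samples random ε-perturbations and checks whether the optimal clustering changes.

```python
import itertools
import numpy as np

def optimal_clustering(X, k):
    """Brute-force optimal k-means partition; feasible only for tiny instances."""
    n = len(X)
    best_cost, best = np.inf, None
    for labels in itertools.product(range(k), repeat=n):
        parts = [tuple(i for i in range(n) if labels[i] == j) for j in range(k)]
        cost = sum(np.sum((X[list(c)] - X[list(c)].mean(axis=0)) ** 2)
                   for c in parts if c)
        if cost < best_cost:
            best_cost, best = cost, frozenset(c for c in parts if c)
    return best

def falsify_aps(X, k, eps, trials=200, seed=0):
    """Return False if some sampled eps-perturbation changes the optimum.
    Returning True is NOT a proof of eps-APS, only a failure to refute it."""
    rng = np.random.default_rng(seed)
    base = optimal_clustering(X, k)
    mus = [X[list(c)].mean(axis=0) for c in base]
    D = max(np.linalg.norm(a - b) for a, b in itertools.combinations(mus, 2))
    for _ in range(trials):
        dirs = rng.normal(size=X.shape)
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radii = eps * D * rng.random((len(X), 1))   # move each point <= eps * D
        if optimal_clustering(X + radii * dirs, k) != base:
            return False
    return True
```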
2.3 Geometric implication of ε-APS

Let X be an ε-APS k-means clustering instance such that each cluster has at least 4 points. Fix i ≠ j, consider a pair of clusters C_i, C_j with means μ_i, μ_j, and define the following notation.

- Let D_{i,j} = ‖μ_i − μ_j‖ be the distance between μ_i and μ_j, and let D = max_{i′,j′} ‖μ_{i′} − μ_{j′}‖ be the maximum distance between any pair of means.
- Let u = (μ_i − μ_j)/‖μ_i − μ_j‖ be the unit vector in the intermean direction. Let V = u^⊥ be the space orthogonal to u. For x ∈ R^d, let x^{(u)} and x^{(V)} be the projections of x onto u and V.
- Let p = (μ_i + μ_j)/2 be the midpoint between μ_i and μ_j.
A simple perturbation that we can use moves all points in C_i and C_j along the direction μ_i − μ_j by a δ amount, while another perturbation moves these points along μ_j − μ_i; these allow us to conclude that a margin of size 2δ exists. To establish Proposition 1.1, we will choose a cleverer ε-perturbation that allows us to show that clusters must live in cone regions (see Figure 1 left). This perturbation chooses two clusters, moves their means in opposite directions orthogonal to u, and moves a single point towards the other cluster (see Figure 2). The following lemma establishes Proposition 1.1.
Lemma 2.4. For any x ∈ C_i ∪ C_j, ‖(x − p)^{(V)}‖ < (1/ε)(‖(x − p)^{(u)}‖ − εD_{i,j}).

Proof. Let v ∈ V be a unit vector perpendicular to u. Without loss of generality, let a, b, c, d ∈ C_i be distinct. Note that D_{i,j} ≤ D and consider the ε-additive perturbation given by

\[ X' = \{a - \delta u,\; b + \delta u,\; c - \delta v,\; d - \delta v\} \cup \{x - \tfrac{\delta}{2} v \mid x \in C_i \setminus \{a, b, c, d\}\} \cup \{x + \tfrac{\delta}{2} v \mid x \in C_j\} \cup (X \setminus (C_i \cup C_j)), \]

where δ = εD_{i,j} (see Figure 2). By assumption, {C_i, C_j} remains the optimal clustering of C_i ∪ C_j. We have constructed X′ such that the new means are at μ′_i = μ_i − (εD_{i,j}/2)v and μ′_j = μ_j + (εD_{i,j}/2)v, and the midpoint between the means is p′ = p. The halfspace containing μ′_i given by the linear separator between μ′_i and μ′_j is ⟨x − p′, μ′_i − μ′_j⟩ > 0. Hence, as a′ is classified correctly by the ε-APS assumption,

\[ \langle a' - p',\, \mu'_i - \mu'_j \rangle = \langle a - p - \epsilon D_{i,j} u,\; D_{i,j} u - \epsilon D_{i,j} v \rangle = D_{i,j}\big( \langle a - p, u \rangle - \epsilon \langle a - p, v \rangle - \epsilon D_{i,j} \big) > 0. \]

Then, noting that ⟨u, a − p⟩ is positive, we have ⟨a − p, v⟩ < (1/ε)(‖(a − p)^{(u)}‖ − εD_{i,j}).
Note that this property follows from perturbations which only affect two clusters at a time. Our
results follow from this weaker notion.
2.4 (ρ, Δ, ε)-separation

Motivated by Lemma 2.4, we define a geometric condition where the angular separation and margin separation are parametrized separately. This notion of separation is implied by a stronger stability assumption where any pair of clusters is ε-APS with scale parameter ρ even after being moved towards each other a distance of Δ.
We say that a pair of clusters is (ρ, Δ, ε)-separated if their points lie in cones with axes along the intermean direction, half-angle arctan(1/ε), and apexes at distance ρ from their means and at least Δ from each other (see Figure 1 right). Formally, we require the following.
Definition 2.5 (Pairwise (ρ, Δ, ε)-separation). Given a pair of clusters C_i, C_j with means μ_i, μ_j, let u = (μ_i − μ_j)/‖μ_i − μ_j‖ be the unit vector in the intermean direction and let p = (μ_i + μ_j)/2. We say that C_i and C_j are (ρ, Δ, ε)-separated if D_{i,j} ≥ Δ + 2ρ and, for all x ∈ C_i ∪ C_j,

\[ \|(x - p)^{(V)}\| \le \frac{1}{\epsilon}\Big( \|(x - p)^{(u)}\| - \big(D_{i,j}/2 - \rho\big) \Big). \]

Definition 2.6 ((ρ, Δ, ε)-separation). We say that an instance X is (ρ, Δ, ε)-separated if every pair of clusters in the optimal clustering is (ρ, Δ, ε)-separated.
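As a sanity check, Definition 2.5 translates directly into a predicate. The sketch below is our own illustration (clusters passed as arrays of points), not code from the paper.

```python
import numpy as np

def pairwise_separated(Ci, Cj, rho, delta, eps):
    """Check pairwise (rho, Delta, eps)-separation (Definition 2.5)."""
    mu_i, mu_j = Ci.mean(axis=0), Cj.mean(axis=0)
    D_ij = np.linalg.norm(mu_i - mu_j)
    if D_ij < delta + 2 * rho:
        return False
    u = (mu_i - mu_j) / D_ij
    p = (mu_i + mu_j) / 2
    for x in np.vstack([Ci, Cj]):
        axial = abs(np.dot(x - p, u))                     # ||(x - p)^(u)||
        perp = np.linalg.norm((x - p) - np.dot(x - p, u) * u)
        if perp > (axial - (D_ij / 2 - rho)) / eps:       # outside the cone
            return False
    return True
```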
3 k-means clustering for general k

We assume that our instance has balance parameter β. Our algorithm takes as input the set of points X and k, and outputs a clustering of all the points.
Algorithm 3.1.
Input: X = {x_1, . . . , x_n}, k.
1: for all pairs a, b of distinct points in {x_i} do
2:   Let r = ‖a − b‖ be our guess for ρ
3:   procedure INITIALIZE
4:     Create graph G on vertex set {x_1, . . . , x_n} where x_i and x_j have an edge iff ‖x_i − x_j‖ < r
5:     Let a_1, . . . , a_k ∈ R^d where a_i is the mean of the i-th largest connected component of G
6:   procedure ASSIGN
7:     Let C_1, . . . , C_k be the clusters obtained by assigning each point in X to the closest a_i
8:     Calculate the k-means objective of C_1, . . . , C_k
9: Return the clustering with smallest k-means objective found above
Theorem 3.2. Algorithm 3.1 recovers C_1, . . . , C_k for any (ρ, Δ, ε)-separated instance with Δ = Ω(ρ/ε² + βρ/ε), and the running time is Õ(n²kd).
We maintain the connected components and their centers via a union-find data structure and keep them updated as we increase r and add edges to the dynamic graph. Since we go over n² possible choices of r and each pass takes Õ(kd) time, the algorithm runs in Õ(n²kd) time.
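A direct (if unoptimized) Python rendering of Algorithm 3.1 is below. For clarity, this sketch recomputes the components and the k-means cost from scratch after every merge, so it does not achieve the Õ(n²kd) bound; the incremental union-find bookkeeping described above is what makes the paper's running time possible.

```python
import numpy as np

def cluster(X, k):
    """Unoptimized sketch of Algorithm 3.1. X: (n, d) array of points."""
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    iu, ju = np.triu_indices(n, k=1)
    order = np.argsort(dists[iu, ju])        # each prefix = one guess of r

    parent = np.arange(n)
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a

    best_cost, best_assign = np.inf, None
    for e in order:
        u, v = iu[e], ju[e]
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                         # component structure unchanged
        parent[ru] = rv
        roots = np.array([find(i) for i in range(n)])
        _, comp, sizes = np.unique(roots, return_inverse=True, return_counts=True)
        if len(sizes) < k:
            break
        top = np.argsort(sizes)[-k:]         # k largest connected components
        centers = np.stack([X[comp == c].mean(axis=0) for c in top])
        assign = np.argmin(
            np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
        cost = sum(np.sum((X[assign == i] - X[assign == i].mean(axis=0)) ** 2)
                   for i in range(k) if np.any(assign == i))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_assign, best_cost
```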
The rest of the section is devoted to proving Theorem 3.2. Define the following regions of R^d for every pair i, j. Given i, j, let C_i, C_j be the corresponding clusters with means μ_i, μ_j. Let u = (μ_i − μ_j)/‖μ_i − μ_j‖ be the unit vector in the inter-mean direction and p = (μ_i + μ_j)/2 the point between the two means. We first define formally S_{i,j}^{(cone)}, which was described in the introduction (the feasible region), and two other regions of the clusters that will be useful in the analysis (see Figure 1b). We observe that C_i belongs to the intersection of all the cones S_{i,j}^{(cone)} over j ≠ i.

Definition 3.3.
- S_{i,j}^{(cone)} = { x ∈ R^d : ‖(x − (μ_i − ρu))^{(V)}‖ ≤ (1/ε) ⟨x − (μ_i − ρu), u⟩ },
- S_{i,j}^{(nice)} = { x ∈ S_{i,j}^{(cone)} : ⟨x − μ_i, u⟩ ≥ 0 },
- S_i^{(good)} = ⋂_{j≠i} S_{i,j}^{(nice)}.

The nice area of i with respect to j, i.e., S_{i,j}^{(nice)}, is defined as all points in the cap of S_{i,j}^{(cone)} "above" μ_i. The good area S_i^{(good)} of a cluster is the intersection of its nice areas with respect to all other clusters.
It suffices to prove the following two main lemmas. Lemma 3.4 states that the ASSIGN subroutine correctly clusters all points given an initialization satisfying certain properties. Lemma 3.5 states that the initialization returned by the INITIALIZE subroutine satisfies these properties when we guess r = ρ correctly. As ρ is only used as a threshold on edge lengths, testing the distances between all pairs of data points, i.e., {‖a − b‖ : a, b ∈ X}, suffices.
Lemma 3.4. For a (ρ, Δ, ε)-separated instance with Δ = Ω(ρ/ε²), the ASSIGN subroutine recovers C_1, C_2, · · · , C_k correctly when initialized with k points {a_1, a_2, . . . , a_k} where a_i ∈ S_i^{(good)}.
Lemma 3.5. For a (ρ, Δ, ε)-separated instance with balance parameter β and Δ = Ω(βρ/ε), the INITIALIZE subroutine outputs one point each from {S_i^{(good)} : i ∈ [k]} when r = ρ.
To prove Lemma 3.5 we define a region S_i^{(core)} of each cluster, the core, such that most (at least a β/(1 + β) fraction) of the points in C_i belong to the connected component containing S_i^{(core)}. Hence, any large connected component (in particular, the k largest ones) must contain the core of one of the clusters. Meanwhile, the margin ensures points across clusters are not connected. Further, since S_i^{(core)} accounts for most points in C_i, the angular separation ensures that the empirical mean of the connected component is in S_i^{(good)}.
4 Experimental results

We evaluate Algorithm 3.1 on multiple real-world datasets, compare its performance to that of k-means++, and check how well these datasets satisfy our geometric conditions. See the Supplementary material for details about ground truth recovery.
Datasets. Experiments were run on unnormalized and normalized versions of four labeled datasets
from the UCI Machine Learning Repository: Wine (n = 178, k = 3, d = 13), Iris (n = 150, k = 3,
d = 4), Banknote Authentication (n = 1372, k = 2, d = 5), and Letter Recognition (n = 20, 000,
k = 26, d = 16). Normalization was used to scale each feature to unit range.
Performance. We ran Algorithm 3.1 on the datasets. The cost of the returned solution for each of the normalized and unnormalized versions of the datasets is recorded in column 2 of Table 1. Our guarantees show that under (ρ, Δ, ε)-separation for appropriate values of Δ (see Section 3), the algorithm will find the optimal clustering after a single iteration of Lloyd's algorithm. Even when Δ does not satisfy our requirement, we can use our algorithm as an initialization heuristic for Lloyd's algorithm. We compare our initialization with the k-means++ initialization heuristic (D² weighting). In Table 1, this is compared to the smallest initialization cost of 1000 trials of k-means++ on each of the datasets, the solution found by Lloyd's algorithm using our initialization, and the smallest k-means cost of 100 trials of Lloyd's algorithm using a k-means++ initialization.
Separation in real data sets. As the ground truth clusterings in our datasets are not in general linearly separable, we consider the clusters given by Lloyd's algorithm initialized with the ground truth solutions.
Table 1: Comparison of k-means cost for Alg 3.1 and k-means++

Dataset              | Alg 3.1   | k-means++ | Alg 3.1 with Lloyd's | k-means++ with Lloyd's
Wine                 | 2.376e+06 | 2.426e+06 | 2.371e+06            | 2.371e+06
Wine (normalized)    | 48.99     | 65.50     | 48.99                | 48.95
Iris                 | 81.04     | 86.45     | 78.95                | 78.94
Iris (normalized)    | 7.035     | 7.676     | 6.998                | 6.998
Banknote Auth.       | 44808.9   | 49959.9   | 44049.4              | 44049.4
Banknote (norm.)     | 138.4     | 155.7     | 138.1                | 138.1
Letter Recognition   | 744707    | 921643    | 629407               | 611268
Letter Rec. (norm.)  | 3367.8    | 4092.1    | 2767.5               | 2742.3
Table 2: Values of (ρ, Δ, ε) satisfied by a (1 − ν)-fraction of points

Dataset            | ν   | ε    | minimum Δ/ρ | average Δ/ρ | maximum Δ/ρ
Wine               | 0.1 | 0.1  | 0.566       | 1.5         | 3.05
Wine               | 0.1 | 0.01 | 0.609       | 1.53        | 3.07
Iris               | 0.1 | 0.1  | 0.398       | 4.35        | 7.7
Iris               | 0.1 | 0.01 | 0.496       | 5.04        | 9.06
Banknote Auth.     | 0.1 | 0.1  | 0.264       | 0.264       | 0.264
Banknote Auth.     | 0.1 | 0.01 | 0.398       | 0.398       | 0.398
Letter Recognition | 0.1 | 0.1  | 0.018       | 2.19        | 7.11
Letter Recognition | 0.1 | 0.01 | 0.378       | 3.07        | 11.4
Values of ε for Lemma 2.4. We calculate the maximum value of ε such that a given pair of clusters satisfies the geometric condition in Proposition 1.1. The results are collected in Table 3 in the Supplementary material. We see that the average value of ε lies approximately in the range (0.01, 0.1).
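The ε values reported here can be computed per cluster pair as the largest ε satisfying condition (1); equivalently, the minimum over points of the ratio of the axial to the perpendicular component. A sketch with our own naming:

```python
import numpy as np

def max_epsilon(Ci, Cj):
    """Largest eps for which all points of Ci, Cj satisfy Proposition 1.1:
    |<x - p, u>| / ||x - p|| > eps / sqrt(1 + eps^2) reduces to
    eps < axial / perpendicular for every point x."""
    mu_i, mu_j = Ci.mean(axis=0), Cj.mean(axis=0)
    u = (mu_i - mu_j) / np.linalg.norm(mu_i - mu_j)
    p = (mu_i + mu_j) / 2
    eps = np.inf
    for x in np.vstack([Ci, Cj]):
        axial = abs(np.dot(x - p, u))
        perp = np.linalg.norm((x - p) - np.dot(x - p, u) * u)
        if perp > 1e-12:
            eps = min(eps, axial / perp)
    return eps
```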
Values of (?, ?, ?)-separation. We attempt to measure the values of ?, ?, and ? in the datasets.
For ? = 0.05, 0.1, ? = 0.1, 0.01, and a pair of clusters Ci , Cj , we calculate ? as the maximum
margin separation a pair of axis-aligned cones with half-angle arctan(1/?) can have while capturing
a (1 ? ?)-fraction of all points. For some datasets and values for ? and ?, there may not be any such
value of ?, in this case we leave the row blank. The results for the unnormalized datasets with ? = 0.1
are collected in Table 2. (See Supplementary material for the full table).
5 Conclusion and Future Directions

We studied a natural notion of additive perturbation stability that we believe captures many real-world instances of Euclidean k-means clustering. We first gave a polynomial time algorithm for k = 2. For large k, under an additional margin assumption, we gave a fast algorithm of independent interest that provably recovers the optimal clustering under these assumptions (in fact, the algorithm is also robust to noise and outliers). An appealing aspect of this algorithm is that it is not tailored to the model; our experiments indicate that it works well in practice even when the assumptions do not hold. Our results with the margin assumption hence give an algorithm that (A) has provable guarantees (under reasonable assumptions), (B) is efficient and practical, and (C) is robust to errors.
While the margin assumption seems qualitatively realistic, we believe that the exact condition we assume is not optimal. An interesting open question is understanding whether such a margin is necessary for designing tractable algorithms for large k. We conjecture that for higher k, clustering remains hard even with ε additive perturbation resilience (without any additional margin assumption). Improving the margin condition or proving lower bounds on the amount of additive stability required are interesting future directions.
References
[1] Margareta Ackerman and Shai Ben-David. Clusterability: A theoretical study. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5, pages 1–8. PMLR, 2009.
[2] Haris Angelidakis, Konstantin Makarychev, and Yury Makarychev. Algorithms for stable and perturbation-resilient problems. In Symposium on Theory of Computing (STOC), 2017.
[3] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pages 1027–1035, 2007.
[4] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability. Information Processing Letters, 112(1–2):49–54, 2012.
[5] Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy, and Ali Kemal Sinop. The hardness of approximation of Euclidean k-means. In Symposium on Computational Geometry, pages 754–767, 2015.
[6] Pranjal Awasthi and Or Sheffet. Improved spectral-norm bounds for clustering. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 37–49. 2012.
[7] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Approximate clustering without the approximation. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '09, pages 1068–1077, 2009.
[8] Maria-Florina Balcan and Yingyu Liang. Clustering under perturbation resilience. SIAM Journal on Computing, 45(1):102–155, 2016.
[9] Shai Ben-David. Computational feasibility of clustering under clusterability assumptions. CoRR, abs/1501.00437, 2015.
[10] Shalev Ben-David and Lev Reyzin. Data stability in clustering: A closer look. Theoretical Computer Science, 558:51–61, 2014. Algorithmic Learning Theory.
[11] Yonatan Bilu and Nathan Linial. Are stable instances easy? In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 5-7, 2010. Proceedings, pages 332–341, 2010.
[12] Hans-Dieter Block. The perceptron: A model for brain functioning. Reviews of Modern Physics, 34(1):123–135, 1962.
[13] Avrim Blum and John Dunagan. Smoothed analysis of the perceptron algorithm for linear programming. In Proceedings of the Symposium on Discrete Algorithms (SODA), 2002.
[14] Sanjoy Dasgupta. The hardness of k-means clustering. Department of Computer Science and Engineering, University of California, San Diego, 2008.
[15] Amit Kumar and Ravindran Kannan. Clustering with spectral norm and the k-means algorithm. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 299–308. IEEE, 2010.
[16] Konstantin Makarychev, Yury Makarychev, and Aravindan Vijayaraghavan. Bilu–Linial stable instances of max cut. Proc. 22nd Symposium on Discrete Algorithms (SODA), 2014.
[17] A. B. J. Novikoff. On convergence proofs on perceptrons. Proceedings of the Symposium on the Mathematical Theory of Automata, XII(1):615–622, 1962.
[18] David P. Williamson and David B. Shmoys. The Design of Approximation Algorithms. Cambridge University Press, New York, NY, USA, 1st edition, 2011.
10
| 7228 |
6,886 | 7,229 | Good Semi-supervised Learning
That Requires a Bad GAN
Zihang Dai∗, Zhilin Yang∗, Fan Yang, William W. Cohen, Ruslan Salakhutdinov
School of Computer Science
Carnegie Mellon University
dzihang,zhiliny,fanyang1,wcohen,[email protected]
Abstract
Semi-supervised learning methods based on generative adversarial networks
(GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised
classification performance and a good generator cannot be obtained at the same
time. Theoretically we show that given the discriminator objective, good semi-supervised learning indeed requires a bad generator, and propose the definition
of a preferred generator. Empirically, we derive a novel formulation based on
our analysis that substantially improves over feature matching GANs, obtaining
state-of-the-art results on multiple benchmark datasets.²
1
Introduction
Deep neural networks are usually trained on a large amount of labeled data, and it has been a challenge
to apply deep models to datasets with limited labels. Semi-supervised learning (SSL) aims to leverage
the large amount of unlabeled data to boost the model performance, particularly focusing on the
setting where the amount of available labeled data is limited. Traditional graph-based methods [2, 26]
were extended to deep neural networks [22, 23, 8], which involves applying convolutional neural
networks [10] and feature learning techniques to graphs so that the underlying manifold structure
can be exploited. [15] employs a Ladder network to minimize the layerwise reconstruction loss
in addition to the standard classification loss. Variational auto-encoders have also been used for
semi-supervised learning [7, 12] by maximizing the variational lower bound of the unlabeled data
log-likelihood.
Recently, generative adversarial networks (GANs) [6] were demonstrated to be able to generate
visually realistic images. GANs set up an adversarial game between a discriminator and a generator.
The goal of the discriminator is to tell whether a sample is drawn from true data or generated by the
generator, while the generator is optimized to generate samples that are not distinguishable by the
discriminator. Feature matching (FM) GANs [16] apply GANs to semi-supervised learning on K-class classification. The objective of the generator is to match the first-order feature statistics between
the generator distribution and the true distribution. Instead of binary classification, the discriminator
employs a (K + 1)-class objective, where true samples are classified into the first K classes and
generated samples are classified into the (K + 1)-th class. This (K + 1)-class discriminator objective
leads to strong empirical results, and was later widely used to evaluate the effectiveness of generative
models [5, 21].
Though empirically feature matching improves semi-supervised classification performance, the
following questions still remain open. First, it is not clear why the formulation of the discriminator
∗ Equal contribution. Ordering determined by dice rolling.
² Code is available at https://github.com/kimiyoung/ssl_bad_gan.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
can improve the performance when combined with a generator. Second, it seems that good semisupervised learning and a good generator cannot be obtained at the same time. For example, [16]
observed that mini-batch discrimination generates better images than feature matching, but feature
matching obtains a much better semi-supervised learning performance. The same phenomenon was
also observed in [21], where the model generated better images but failed to improve the performance
on semi-supervised learning.
In this work, we take a step towards addressing these questions. First, we show that given the
current (K + 1)-class discriminator formulation of GAN-based SSL, good semi-supervised learning
requires a ?bad? generator. Here by bad we mean the generator distribution should not match the
true data distribution. Then, we give the definition of a preferred generator, which is to generate
complement samples in the feature space. Theoretically, under mild assumptions, we show that a
properly optimized discriminator obtains correct decision boundaries in high-density areas in the
feature space if the generator is a complement generator.
Based on our theoretical insights, we analyze why feature matching works on 2-dimensional toy
datasets. It turns out that our practical observations align well with our theory. However, we also find
that the feature matching objective has several drawbacks. Therefore, we develop a novel formulation
of the discriminator and generator objectives to address these drawbacks. In our approach, the
generator minimizes the KL divergence between the generator distribution and a target distribution
that assigns high densities for data points with low densities in the true distribution, which corresponds
to the idea of a complement generator. Furthermore, to enforce our assumptions in the theoretical
analysis, we add the conditional entropy term to the discriminator objective.
Empirically, our approach substantially improves over vanilla feature matching GANs, and obtains
new state-of-the-art results on MNIST, SVHN, and CIFAR-10 when all methods are compared under
the same discriminator architecture. Our results on MNIST and SVHN also represent state-of-the-art
amongst all single-model results.
2
Related Work
Besides the adversarial feature matching approach [16], several previous works have incorporated the
idea of adversarial training in semi-supervised learning. Notably, [19] proposes categorical generative
adversarial networks (CatGAN), which substitutes the binary discriminator in standard GAN with a
multi-class classifier, and trains both the generator and the discriminator using information theoretical
criteria on unlabeled data. From the perspective of regularization, [14, 13] propose virtual adversarial
training (VAT), which effectively smooths the output distribution of the classifier by seeking virtually
adversarial samples. It is worth noting that VAT bears a similar merit to our approach, which is to
learn from auxiliary non-realistic samples rather than realistic data samples. Despite the similarity,
the principles of VAT and our approach are orthogonal, where VAT aims to enforce a smooth function
while we aim to leverage a generator to better detect the low-density boundaries. Different from
aforementioned approaches, [24] proposes to train conditional generators with adversarial training
to obtain complete sample pairs, which can be directly used as additional training cases. Recently,
Triple GAN [11] also employs the idea of conditional generator, but uses adversarial cost to match
the two model-defined factorizations of the joint distribution with the one defined by paired data.
Apart from adversarial training, there has been other efforts in semi-supervised learning using deep
generative models recently. As an early work, [7] adapts the original Variational Auto-Encoder
(VAE) to a semi-supervised learning setting by treating the classification label as an additional
latent variable in the directed generative model. [12] adds auxiliary variables to the deep VAE
structure to make variational distribution more expressive. With the boosted model expressiveness,
auxiliary deep generative models (ADGM) improve the semi-supervised learning performance upon
the semi-supervised VAE. Different from the explicit usage of deep generative models, the Ladder
networks [15] take advantage of the local (layerwise) denoising auto-encoding criterion, and create a
more informative unsupervised signal through lateral connection.
3
Theoretical Analysis
Given a labeled set L = {(x, y)}, let {1, 2, . . . , K} be the label space for classification. Let D and
G denote the discriminator and generator, and PD and pG denote the corresponding distributions.
Consider the discriminator objective function of GAN-based semi-supervised learning [16]:

max_D E_{x,y∼L} log P_D(y | x, y ≤ K) + E_{x∼p} log P_D(y ≤ K | x) + E_{x∼p_G} log P_D(K + 1 | x),   (1)
where p is the true data distribution. The probability distribution PD is over K + 1 classes where
the first K classes are true classes and the (K + 1)-th class is the fake class. The objective function
consists of three terms. The first term is to maximize the log conditional probability for labeled data,
which is the standard cost as in supervised learning setting. The second term is to maximize the log
probability of the first K classes for unlabeled data. The third term is to maximize the log probability
of the (K + 1)-th class for generated data. Note that the above objective function bears a similar
merit to the original GAN formulation if we treat P (K + 1|x) to be the probability of fake samples,
while the only difference is that we split the probability of true samples into K sub-classes.
Let f (x) be a nonlinear vector-valued function, and wk be the weight vector for class k. As a standard
setting in previous work [16, 5], the discriminator D is defined as

P_D(k|x) = exp(w_k^T f(x)) / Σ_{k'=1}^{K+1} exp(w_{k'}^T f(x)).
Since this is a form of over-parameterization, w_{K+1} is fixed as a zero vector [16]. We next discuss
the choices of different possible G's.
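To make the (K + 1)-class objective concrete, the following is a minimal PyTorch-style sketch of Eq. (1) with the fake logit fixed to zero. It is our own illustration, not the authors' released code; all function and tensor names are assumptions.

    import torch
    import torch.nn.functional as F

    def discriminator_loss(logits_lab, y_lab, logits_unl, logits_gen):
        """logits_*: (batch, K) tensors of w_k^T f(x); the (K+1)-th logit is 0."""
        # -log P_D(y | x, y <= K): a softmax over the K true classes only.
        loss_lab = F.cross_entropy(logits_lab, y_lab)

        # log P_D(y <= K | x) = lse - log(exp(lse) + 1), with lse = logsumexp(logits),
        # because the fake class contributes exp(0) = 1 to the partition function.
        def log_p_real(logits):
            lse = torch.logsumexp(logits, dim=1)
            return lse - F.softplus(lse)

        loss_unl = -log_p_real(logits_unl).mean()
        # -log P_D(K+1 | x) = log(exp(lse) + 1) = softplus(lse) on generated data.
        loss_gen = F.softplus(torch.logsumexp(logits_gen, dim=1)).mean()
        return loss_lab + loss_unl + loss_gen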
3.1
Perfect Generator
Here, by perfect generator we mean that the generator distribution pG exactly matches the true data
distribution p, i.e., pG = p. We now show that when the generator is perfect, it does not improve the
generalization over the supervised learning setting.
Proposition 1. If pG = p, and D has infinite capacity, then for any optimal solution D = (w, f ) of
the following supervised objective,
max_D E_{x,y∼L} log P_D(y | x, y ≤ K),   (2)

there exists D∗ = (w∗, f∗) such that D∗ maximizes Eq. (1) and that for all x, P_D(y|x, y ≤ K) =
P_{D∗}(y|x, y ≤ K).
The proof is provided in the supplementary material. Proposition 1 states that for any optimal solution
D of the supervised objective, there exists an optimal solution D∗ of the (K + 1)-class objective such
that D and D∗ share the same generalization error. In other words, using the (K + 1)-class objective
does not prevent the model from experiencing any arbitrarily high generalization error that it could
suffer from under the supervised objective. Moreover, since all the optimal solutions are equivalent
w.r.t. the (K + 1)-class objective, it is the optimization algorithm that really decides which specific
solution the model will reach, and thus what generalization performance it will achieve. This implies
that when the generator is perfect, the (K + 1)-class objective by itself is not able to improve the
generalization performance. In fact, in many applications, an almost infinite amount of unlabeled
data is available, so learning a perfect generator for purely sampling purposes should not be useful.
In this case, our theory suggests that not only the generator does not help, but also unlabeled data is
not effectively utilized when the generator is perfect.
3.2
Complement Generator
The function f maps data points in the input space to the feature space. Let p_k(f) be the density of the
data points of class k in the feature space. Given a threshold ε_k, let F_k be a subset of the data support
where p_k(f) > ε_k, i.e., F_k = {f : p_k(f) > ε_k}. We assume that given {ε_k}_{k=1}^K, the F_k's are disjoint
with a margin. More formally, for any f_j ∈ F_j, f_k ∈ F_k, and j ≠ k, we assume that there exists a
real number 0 < λ < 1 such that λ f_j + (1 − λ) f_k ∉ F_j ∪ F_k. As long as the probability densities
of different classes do not share any mode, i.e., ∀ i ≠ j, argmax_f p_i(f) ∩ argmax_f p_j(f) = ∅, this
assumption can always be satisfied by tuning the thresholds ε_k. With the assumption held, we will
show that the model performance would be better if the thresholds could be set to smaller values
(ideally zero). We also assume that each F_k contains at least one labeled data point.
Suppose ∪_{k=1}^K F_k is bounded by a convex set B. If the support F_G of a generator G in the feature
space is a relative complement set in B, i.e., F_G = B − ∪_{k=1}^K F_k, we call G a complement generator.
The reason why we utilize a bounded B to define the complement is presented in the supplementary
material. Note that the definition of complement generator implies that G is a function of f. By
treating G as a function of f, theoretically D can optimize the original objective function in Eq. (1).
Now we present the assumption on the convergence conditions of the discriminator. Let U and G be
the sets of unlabeled data and generated data.
Assumption 1. Convergence conditions. When D converges on a finite training set {L, U, G}, D
learns a (strongly) correct decision boundary for all training data points. More specifically, (1) for
any (x, y) ∈ L, we have w_y^T f(x) > w_k^T f(x) for any other class k ≠ y; (2) for any x ∈ G, we have
0 > max_{k=1}^K w_k^T f(x); (3) for any x ∈ U, we have max_{k=1}^K w_k^T f(x) > 0.
In Assumption 1, conditions (1) and (2) assume classification correctness on labeled data and
true-fake correctness on generated data respectively, which is directly induced by the objective
function.
Likewise, it is also reasonable to assume true-fake correctness on unlabeled data, i.e.,
log Σ_k exp(w_k^T f(x)) > 0 for x ∈ U. However, condition (3) goes beyond this and assumes
max_k w_k^T f(x) > 0. We discuss this issue in detail in the supplementary material and argue that these
assumptions are reasonable. Moreover, in Section 5, our approach addresses this issue explicitly by
adding a conditional entropy term to the discriminator objective to enforce condition (3).
Lemma 1. Suppose for all k, the L2-norms of the weights w_k are bounded by ‖w_k‖_2 ≤ C. Suppose that
there exists ε > 0 such that for any f_G ∈ F_G, there exists f_G' ∈ G such that ‖f_G − f_G'‖_2 ≤ ε. With
the conditions in Assumption 1, for all k ≤ K, we have w_k^T f_G < Cε.
Corollary 1. When unlimited generated data samples are available, with the conditions in Lemma 1,
we have lim_{|G|→∞} w_k^T f_G ≤ 0.
See the supplementary material for the proof.
Proposition 2. Given the conditions in Corollary 1, for all classes k ≤ K and all feature space points
f_k ∈ F_k, we have w_k^T f_k > w_j^T f_k for any j ≠ k.
Proof. Without loss of generality, suppose j = argmax_{j'≠k} w_{j'}^T f_k. Now we prove the claim by contradiction.
Suppose w_k^T f_k ≤ w_j^T f_k. Since the F_k's are disjoint with a margin, B is a convex set, and F_G =
B − ∪_k F_k, there exists 0 < λ < 1 such that f_G = λ f_k + (1 − λ) f_j with f_G ∈ F_G and f_j
being the feature of a labeled data point in F_j. By Corollary 1, it follows that w_j^T f_G ≤ 0. Thus,
w_j^T f_G = λ w_j^T f_k + (1 − λ) w_j^T f_j ≤ 0. By Assumption 1, w_j^T f_k > 0 and w_j^T f_j > 0, leading to
a contradiction. It follows that w_k^T f_k > w_j^T f_k for any j ≠ k.
Proposition 2 guarantees that when G is a complement generator, under mild assumptions, a near-optimal D learns correct decision boundaries in each high-density subset F_k (defined by ε_k) of the
data support in the feature space. Intuitively, the generator generates complement samples so the
logits of the true classes are forced to be low in the complement. As a result, the discriminator
obtains class boundaries in low-density areas. This builds a connection between our approach and
manifold-based methods [2, 26] which also leverage the low-density boundary assumption.
With our theoretical analysis, we can now answer the questions raised in Section 1. First, the (K + 1)-class formulation is effective because the generated complement samples encourage the discriminator
to place the class boundaries in low-density areas (Proposition 2). Second, good semi-supervised
learning indeed requires a bad generator because a perfect generator is not able to improve the
generalization performance (Proposition 1).
4
Case Study on Synthetic Data
In the previous section, we have established the fact that a complement generator, instead of a perfect
generator, is what makes a good semi-supervised learning algorithm. Now, to get a more intuitive
understanding, we conduct a case study based on two 2D synthetic datasets, where we can easily
verify our theoretical analysis by visualizing the model behaviors. In addition, by analyzing how
feature matching (FM) [16] works in 2D space, we identify some potential problems of it, which
motivates our approach to be introduced in the next section. Specifically, the two synthetic datasets
are four spins and two circles, as shown in Fig. 1.
Figure 1: Labeled and unlabeled data are denoted by cross and point respectively, and different colors indicate classes.
Figure 2: Left: Classification decision boundary, where the white line indicates the true-fake boundary. Right: True-fake decision boundary.
Figure 3: Feature space at convergence.
Figure 4: Left: Blue points are generated data, and the black shadow indicates unlabeled data. Middle and right can be interpreted as above.
Soundness of complement generator Firstly, to verify that the complement generator is a preferred
choice, we construct the complement generator by uniformly sampling from the a bounded 2D box
that contains all unlabeled data, and removing those on the manifold. Based on the complement
generator, the result on four spins is visualized in Fig. 2. As expected, both the classification
and true-fake decision boundaries are almost perfect. More importantly, the classification decision
boundary always lies in the fake data area (left panel), which well matches our theoretical analysis.
Visualization of feature space Next, to verify our analysis about the feature space, we choose the
feature dimension to be 2, apply the FM to the simpler dataset of two circles, and visualize the feature
space in Fig. 3. As we can see, most of the generated features (blue points) resides in between the
features of two classes (green and orange crosses), although there exists some overlap. As a result,
the discriminator can almost perfectly distinguish between true and generated samples as indicated
by the black decision boundary, satisfying our required Assumption 1. Meanwhile, the model
obtains a perfect classification boundary (blue line) as our analysis suggests.
Pros and cons of feature matching Finally, to further understand the strength and weakness of
FM, we analyze the solution FM reaches on four spins shown in Fig. 4. From the left panel, we can
see many of the generated samples actually fall into the data manifold, while the rest scatters around
in the nearby surroundings of data manifold. It suggests that by matching the first-order moment by
SGD, FM is performing some kind of distribution matching, though in a rather weak manner. Loosely
speaking, FM has the effect of generating samples close to the manifold. But due to its weak power
in distribution matching, FM will inevitably generate samples outside of the manifold, especially
when the data complexity increases. Consequently, the generator density pG is usually lower than
the true data density p within the manifold and higher outside. Hence, an optimal discriminator
P_{D∗}(K + 1 | x) = p(x)/(p(x) + p_G(x)) could still distinguish between true and generated samples
in many cases. However, there are two types of mistakes the discriminator can still make:
1. Higher density mistake inside manifold: Since the FM generator still assigns a significant amount
of probability mass inside the support, wherever pG > p > 0, an optimal discriminator will
incorrectly predict samples in that region as ?fake?. Actually, this problem has already shown up
when we examine the feature space (Fig. 3).
2. Collapsing with missing coverage outside manifold: As the feature matching objective for the
generator only requires matching the first-order statistics, there exists many trivial solutions the
generator can end up with. For example, it can simply collapse to mean of unlabeled features,
or a few surrounding modes as long as the feature mean matches. Actually, we do see such a
collapsing phenomenon in high-dimensional experiments when FM is used (see Fig. 5a and
Fig. 5c). As a result, a collapsed generator will fail to cover some gap areas between manifolds.
Since the discriminator is only well-defined on the union of the data supports of p and pG , the
prediction result in such missing area is under-determined and fully relies on the smoothness of
the parametric model. In this case, significant mistakes can also occur.
5
Approach
As discussed in previous sections, feature matching GANs suffer from the following drawbacks: 1)
the first-order moment matching objective does not prevent the generator from collapsing (missing
coverage); 2) feature matching can generate high-density samples inside the manifold; 3) the discriminator
objective does not encourage realization of condition (3) in Assumption 1 as discussed in Section 3.2.
Our approach aims to explicitly address the above drawbacks.
Following prior work [16, 6], we employ a GAN-like implicit generator. We first sample a latent
variable z from a uniform distribution U(0, 1) for each dimension, and then apply a deep convolutional
network to transform z to a sample x.
5.1
Generator Entropy
Fundamentally, the first drawback concerns the entropy of the distribution of generated features,
H(pG (f )). This connection is rather intuitive, as the collapsing issue is a clear sign of low entropy.
Therefore, to avoid collapsing and increase coverage, we consider explicitly increasing the entropy.
Although the idea sounds simple and straightforward, there are two practical challenges. Firstly, as
implicit generative models, GANs only provide samples rather than an analytic density form. As a
result, we cannot evaluate the entropy exactly, which rules out the possibility of naive optimization.
More problematically, the entropy is defined in a high-dimensional feature space, which is changing
dynamically throughout the training process. Consequently, it is difficult to estimate and optimize the
generator entropy in the feature space in a stable and reliable way. Faced with these difficulties, we
consider two practical solutions.
The first method is inspired by the fact that input space is essentially static, where estimating and
optimizing the counterpart quantities would be much more feasible. Hence, we instead increase the
generator entropy in the input space, i.e., H(pG (x)), using a technique derived from an information
theoretical perspective that relies on variational inference (VI). Specifically, let Z be the latent variable
space, and X be the input space. We introduce an additional encoder, q : X → Z, to define
a variational upper bound of the negative entropy [3], −H(p_G(x)) ≤ −E_{x,z∼p_G} log q(z|x) = L_VI.
Hence, minimizing the upper bound L_VI effectively increases the generator entropy. In our
implementation, we formulate q as a diagonal Gaussian with bounded variance, i.e., q(z|x) =
N(μ(x), σ²(x)), with 0 < σ(x) < θ, where μ(·) and σ(·) are neural networks, and θ is the threshold
to prevent arbitrarily large variance.
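As a hedged illustration of L_VI, the sketch below assumes an encoder that returns a mean and a pre-activation for the standard deviation; this interface, like all names here, is ours rather than the paper's.

    import math
    import torch

    def entropy_bound_loss(encoder, x_gen, z, theta=1.0):
        """x_gen: generator outputs produced from latents z; theta bounds sigma."""
        mu, sigma_raw = encoder(x_gen)
        sigma = theta * torch.sigmoid(sigma_raw)          # keep 0 < sigma < theta
        # log N(z; mu, sigma^2), summed over latent dimensions.
        log_q = (-0.5 * ((z - mu) / sigma) ** 2
                 - torch.log(sigma)
                 - 0.5 * math.log(2 * math.pi)).sum(dim=1)
        return -log_q.mean()   # the upper bound L_VI on -H(p_G(x))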
Alternatively, the second method aims at increasing the generator entropy in the feature space by
optimizing an auxiliary objective. Concretely, we adapt the pull-away term (PT) [25] as the auxiliary
cost, L_PT = (1 / (N(N − 1))) Σ_{i=1}^N Σ_{j≠i} ( f(x_i)^T f(x_j) / (‖f(x_i)‖ ‖f(x_j)‖) )², where N is the size of a mini-batch and x are
samples. Intuitively, the pull-away term tries to orthogonalize the features in each mini-batch by
minimizing the squared cosine similarity. Hence, it has the effect of increasing the diversity of
generated features and thus the generator entropy.
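A minimal sketch of the pull-away term follows: the mean squared cosine similarity over distinct pairs of generated features in a mini-batch. This is our adaptation of the term from [25], not code from that work.

    import torch
    import torch.nn.functional as F

    def pull_away_term(feats):
        """feats: (N, d) matrix of features f(x) for N generated samples."""
        normed = F.normalize(feats, dim=1)                # f / ||f||_2
        cos2 = (normed @ normed.t()) ** 2                 # squared cosine similarities
        n = feats.size(0)
        cos2 = cos2 - torch.eye(n, device=feats.device)   # drop the diagonal (cos = 1)
        return cos2.sum() / (n * (n - 1))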
5.2
Generating Low-Density Samples
The second drawback of feature matching GANs is that high-density samples can be generated in the
feature space, which is not desirable according to our analysis. Similar to the argument in Section
5.1, it is infeasible to directly minimize the density of generated features. Instead, we enforce the
generation of samples with low density in the input space. Specifically, given a threshold ε, we
minimize the following term as part of our objective:

E_{x∼p_G} log p(x) I[p(x) > ε]   (3)

where I[·] is an indicator function. Using the threshold ε, we ensure that only high-density samples are
penalized while low-density samples are unaffected. Intuitively, this objective pushes the generated
samples to "move" towards low-density regions defined by p(x). To model the probability distribution
over images, we simply adapt the state-of-the-art density estimation model for natural images, namely
the PixelCNN++ [17] model. The PixelCNN++ model is used to estimate the density p(x) in Eq. (3).
The model is pretrained on the training set, and fixed during semi-supervised training.
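A sketch of the low-density penalty in Eq. (3) is given below. Here density_model.log_prob is a stand-in interface for the pretrained, frozen density estimator (the exact PixelCNN++ API is not assumed), and log_eps is log ε for the chosen threshold.

    def low_density_loss(density_model, x_gen, log_eps):
        log_p = density_model.log_prob(x_gen)      # log p(x); model weights are frozen
        mask = (log_p > log_eps).float()           # indicator I[p(x) > eps], no gradient
        return (mask * log_p).mean()               # penalizes only high-density samples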
5.3
Generator Objective and Interpretation
Combining our solutions to the first two drawbacks of feature matching GANs, we have the following
objective function of the generator:
min_G −H(p_G) + E_{x∼p_G} log p(x) I[p(x) > ε] + ‖E_{x∼p_G} f(x) − E_{x∼U} f(x)‖².   (4)
This objective is closely related to the idea of complement generator discussed in Section 3. To see
that, let's first define a target complement distribution in the input space as follows:

p̃(x) = (1/Z) · 1/p(x)   if p(x) > ε and x ∈ B_x,
p̃(x) = C                if p(x) ≤ ε and x ∈ B_x,

where Z is a normalizer, C is a constant, and B_x is the set defined by mapping B from the feature
space to the input space. With this definition, the KL divergence (KLD) between p_G(x) and p̃(x) is

KL(p_G ‖ p̃) = −H(p_G) + E_{x∼p_G} log p(x) I[p(x) > ε] + E_{x∼p_G} ( I[p(x) > ε] log Z − I[p(x) ≤ ε] log C ).
The form of the KLD immediately reveals the aforementioned connection. Firstly, the KLD shares
two terms exactly with the generator objective (4). Secondly, while p̃(x) is only defined in
B_x, there is no such hard constraint on p_G(x). However, the feature matching term in Eq. (4) can
be seen as softly enforcing this constraint by bringing generated samples "close" to the true data (Cf.
Section 4). Moreover, because the indicator function I[·] has zero gradient almost everywhere, the last
term in KLD would not contribute any informative gradient to the generator. In summary, optimizing
our proposed objective (4) can be understood as minimizing the KL divergence between the generator
distribution and a desired complement distribution, which connects our practical solution to our
theoretical analysis.
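Putting Eq. (4) together, one possible generator loss combines the entropy bound, the low-density penalty, and first-order feature matching, reusing the earlier sketches; equal weighting of the three terms is assumed, as in the displayed objective, and disc_features is our name for the map x ↦ f(x).

    def generator_loss(disc_features, density_model, encoder,
                       x_gen, z, x_unl, log_eps):
        f_gen = disc_features(x_gen)
        f_unl = disc_features(x_unl)
        # ||E_{p_G} f(x) - E_U f(x)||^2, estimated on the mini-batch.
        fm = ((f_gen.mean(dim=0) - f_unl.mean(dim=0)) ** 2).sum()
        return (entropy_bound_loss(encoder, x_gen, z)          # bound on -H(p_G)
                + low_density_loss(density_model, x_gen, log_eps)
                + fm)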
5.4
Conditional Entropy
In order for the complement generator to work, according to condition (3) in Assumption 1, the
discriminator needs to have a strong true-fake belief on unlabeled data, i.e., max_{k=1}^K w_k^T f(x) > 0.
However, the objective function of the discriminator in [16] does not enforce a dominant class.
Instead, it only needs Σ_{k=1}^K P_D(k|x) > P_D(K + 1|x) to obtain a correct decision boundary, while
the probabilities PD (k|x) for k ? K can possibly be uniformly distributed. To guarantee the strong
true-fake belief in the optimal conditions, we add a conditional entropy term to the discriminator
objective and it becomes,
max_D E_{x,y∼L} log p_D(y | x, y ≤ K) + E_{x∼U} log p_D(y ≤ K | x) + E_{x∼p_G} log p_D(K + 1 | x) + E_{x∼U} Σ_{k=1}^K p_D(k|x) log p_D(k|x).   (5)
By optimizing Eq. (5), the discriminator is encouraged to satisfy condition (3) in Assumption 1. Note
that the same conditional entropy term has been used in other semi-supervised learning methods
[19, 13] as well, but here we motivate the minimization of conditional entropy based on our theoretical
analysis of GAN-based semi-supervised learning.
To train the networks, we alternately update the generator and the discriminator to optimize Eq. (4)
and Eq. (5) based on mini-batches. If an encoder is used to maximize H(pG ), the encoder and the
generator are updated at the same time.
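A sketch of the discriminator objective in Eq. (5) is below: the three terms of Eq. (1) plus the conditional entropy over the first K classes on unlabeled data. It reuses discriminator_loss from the earlier sketch; all names remain our assumptions.

    import torch
    import torch.nn.functional as F

    def discriminator_loss_with_ent(logits_lab, y_lab, logits_unl, logits_gen):
        base = discriminator_loss(logits_lab, y_lab, logits_unl, logits_gen)
        # Full (K+1)-class log probabilities, with the fake logit fixed to 0.
        pad = torch.zeros(logits_unl.size(0), 1, device=logits_unl.device)
        log_p = F.log_softmax(torch.cat([logits_unl, pad], dim=1), dim=1)[:, :-1]
        # Eq. (5) maximizes sum_k p(k|x) log p(k|x); in a minimized loss this
        # becomes the conditional entropy restricted to the K true classes.
        cond_ent = -(log_p.exp() * log_p).sum(dim=1).mean()
        return base + cond_ent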
6
Experiments
We mainly consider three widely used benchmark datasets, namely MNIST, SVHN, and CIFAR-10.
As in previous work, we randomly sample 100, 1,000, and 4,000 labeled samples for MNIST, SVHN,
Methods                    MNIST (# errors)   SVHN (% errors)   CIFAR-10 (% errors)
CatGAN [19]                191 ± 10           –                 19.58 ± 0.46
SDGM [12]                  132 ± 7            16.61 ± 0.24      –
Ladder network [15]        106 ± 37           –                 20.40 ± 0.47
ADGM [12]                  96 ± 2             22.86             –
FM [16] ∗                  93 ± 6.5           8.11 ± 1.3        18.63 ± 2.32
ALI [4]                    –                  7.42 ± 0.65       17.99 ± 1.62
VAT small [13] ∗           136                6.83              14.87
Our best model ∗           79.5 ± 9.8         4.25 ± 0.03       14.41 ± 0.30
Triple GAN [11] †‡         91 ± 58            5.77 ± 0.17       16.99 ± 0.36
Π model [9] †‡             –                  5.43 ± 0.25       16.55 ± 0.29
VAT+EntMin+Large [13] †    –                  4.28              13.15

Table 1: Comparison with state-of-the-art methods on three benchmark datasets. Only methods without data
augmentation are included. ∗ indicates using the same (small) discriminator architecture, † indicates using a
larger discriminator architecture, and ‡ means self-ensembling.
(a) FM on SVHN
(b) Ours on SVHN
(c) FM on CIFAR
(d) Ours on CIFAR
Figure 5: Comparing images generated by FM and our model. FM generates collapsed samples, while our
model generates diverse "bad" samples.
and CIFAR-10 respectively during training, and use the standard data split for testing. We use the
10-quantile log probability to define the threshold ε in Eq. (4). We add instance noise to the input of
the discriminator [1, 18], and use spatial dropout [20] to obtain faster convergence. Except for these
two modifications, we use the same neural network architecture as in [16]. For fair comparison, we
also report the performance of our FM implementation with the aforementioned differences.
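For concreteness, the 10-quantile threshold could be computed as follows; the loader interface and names are ours, assuming (x, y) batches of training images.

    import torch

    def ten_quantile_log_threshold(density_model, train_loader):
        with torch.no_grad():
            log_ps = torch.cat([density_model.log_prob(x) for x, _ in train_loader])
        return torch.quantile(log_ps, 0.10)  # log eps: 10% of real data lies below it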
6.1
Main Results
We compare the results of our best model with state-of-the-art methods on the benchmarks in
Table 1. Our proposed methods consistently improve the performance upon feature matching. We
achieve new state-of-the-art results on all the datasets when only small discriminator architecture is
considered. Our results are also state-of-the-art on MNIST and SVHN among all single-model results,
even when compared with methods using self-ensembling and large discriminator architectures.
Finally, note that because our method is actually orthogonal to VAT [13], combining VAT with our
presented approach should yield further performance improvement in practice.
6.2
Ablation Study
We report the results of ablation study in Table 2. In the following, we analyze the effects of several
components in our model, subject to the intrinsic features of different datasets.
First, the generator entropy terms (VI and PT) (Section 5.1) improve the performance on SVHN and
CIFAR by up to 2.2 points in terms of error rate. Moreover, as shown in Fig 5, our model significantly
reduces the collapsing effects present in the samples generated by FM, which also indicates that
maximizing the generator entropy is beneficial. On MNIST, probably due to its simplicity, no
collapsing phenomenon was observed with vanilla FM training [16] or in our setting. Under such
circumstances, maximizing the generator entropy seems to be unnecessary, and the estimation bias
introduced by approximation techniques can even hurt the performance.
Setting              Error          Setting            Error
MNIST FM             85.0 ± 11.7    CIFAR FM           16.14
MNIST FM+VI          86.5 ± 10.6    CIFAR FM+VI        14.41
MNIST FM+LD          79.5 ± 9.8     CIFAR FM+VI+Ent    15.82
MNIST FM+LD+Ent      89.2 ± 10.5

Setting              Error          Setting              Max log-p
SVHN FM              6.83           MNIST FM             -297
SVHN FM+VI           5.29           MNIST FM+LD          -659
SVHN FM+PT           4.63           SVHN FM+PT+Ent       -5809
SVHN FM+PT+Ent       4.25           SVHN FM+PT+LD+Ent    -5919
SVHN FM+PT+LD+Ent    4.19           SVHN 10-quant        -5622

ε as q-th centile    q = 2         q = 10        q = 20        q = 100
Error on MNIST       77.7 ± 6.1    79.5 ± 9.8    80.1 ± 9.6    85.0 ± 11.7

Table 2: Ablation study. FM is feature matching. LD is the low-density enforcement term in Eq. (3). VI and
PT are the two entropy maximization methods described in Section 5.1. Ent means the conditional entropy term in
Eq. (5). Max log-p is the maximum log probability of generated samples, evaluated by a PixelCNN++ model.
10-quant shows the 10-quantile of true image log probability. Error means the number of misclassified examples
on MNIST, and error rate (%) on others.
Second, the low-density (LD) term is useful when FM indeed generates samples in high-density areas.
MNIST is a typical example in this case. When trained with FM, most of the generated hand written
digits are highly realistic and have high log probabilities according to the density model (Cf. max
log-p in Table 2). Hence, when applied to MNIST, LD improves the performance by a clear margin.
By contrast, few of the generated SVHN images are realistic (Cf. Fig. 5a). Quantitatively, SVHN
samples are assigned very low log probabilities (Cf. Table 2). As expected, LD has a negligible effect
on the performance for SVHN. Moreover, the ?max log-p? column in Table 2 shows that while LD
can reduce the maximum log probability of the generated MNIST samples by a large margin, it does
not yield noticeable difference on SVHN. This further justifies our analysis. Based on the above
conclusion, we conjecture LD would not help on CIFAR where sample quality is even lower. Thus,
we did not train a density model on CIFAR due to the limit of computational resources.
Third, adding the conditional entropy term has mixed effects on different datasets. While the
conditional entropy (Ent) is an important factor of achieving the best performance on SVHN, it hurts
the performance on MNIST and CIFAR. One possible explanation relates to the classic exploitationexploration tradeoff, where minimizing Ent favors exploitation and minimizing the classification loss
favors exploration. During the initial phase of training, the discriminator is relatively uncertain and
thus the gradient of the Ent term might dominate. As a result, the discriminator learns to be more
confident even on incorrect predictions, and thus gets trapped in local minima.
Lastly, we vary the value of the hyper-parameter ε in Eq. (4). As shown at the bottom of Table 2,
reducing ε clearly leads to better performance, which further justifies our analysis in Sections 4 and 3
that off-manifold samples are favorable.
6.3
Generated Samples
We compare the generated samples of FM and our approach in Fig. 5. The FM images in Fig. 5c are
extracted from previous work [16]. While collapsing is widely observed in FM samples, our model
generates diverse "bad" images, which is consistent with our analysis.
7
Conclusions
In this work, we present a semi-supervised learning framework that uses generated data to boost
task performance. Under this framework, we characterize the properties of various generators and
theoretically prove that a complementary (i.e. bad) generator improves generalization. Empirically our
proposed method improves the performance of image classification on several benchmark datasets.
Acknowledgement
This work was supported by the DARPA award D17AP00001, the Google focused award, and the
Nvidia NVAIL award. The authors would also like to thank Han Zhao for his insightful feedback.
References
[1] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training. In review for ICLR, volume
2016, 2017.
[2] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric
framework for learning from labeled and unlabeled examples. Journal of machine learning
research, 7(Nov):2399–2434, 2006.
[3] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating
energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
[4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv
preprint arXiv:1605.09782, 2016.
[5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier
Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint
arXiv:1606.00704, 2016.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural
information processing systems, pages 2672–2680, 2014.
[7] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing
Systems, pages 3581–3589, 2014.
[8] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional
networks. arXiv preprint arXiv:1609.02907, 2016.
[9] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint
arXiv:1610.02242, 2016.
[10] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning
applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[11] Chongxuan Li, Kun Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. arXiv
preprint arXiv:1703.02291, 2017.
[12] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep
generative models. arXiv preprint arXiv:1602.05473, 2016.
[13] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint
arXiv:1704.03976, 2017.
[14] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional
smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
[15] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semisupervised learning with ladder networks. In Advances in Neural Information Processing
Systems, pages 3546–3554, 2015.
[16] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. In NIPS, 2016.
[17] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the
pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint
arXiv:1701.05517, 2017.
[18] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised
map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
[19] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[20] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient
object localization using convolutional networks. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pages 648–656, 2015.
[21] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Adversarial generator-encoder networks. arXiv preprint arXiv:1704.02304, 2017.
[22] Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pages 639–655. Springer,
2012.
[23] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning
with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
[24] Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. Semi-supervised qa with
generative domain-adaptive nets. arXiv preprint arXiv:1702.02206, 2017.
[25] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.
arXiv preprint arXiv:1609.03126, 2016.
[26] Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian
fields and harmonic functions. In Proceedings of the 20th International conference on Machine
learning (ICML-03), pages 912–919, 2003.
| 7229 |
6,887 | 723 | The Power of Amnesia
Dana Ron
Yoram Singer
Naftali Tishby
Institute of Computer Science and
Center for Neural Computation
Hebrew University, Jerusalem 91904, Israel
Abstract
We propose a learning algorithm for a variable memory length
Markov process. Human communication, whether given as text,
handwriting, or speech, has multiple characteristic time scales. On short scales it is characterized mostly by the dynamics that generate the process, whereas on large scales more syntactic and semantic information is carried. For that reason the conventionally
used fixed memory Markov models cannot capture effectively the
complexity of such structures. On the other hand using long memory models uniformly is not practical even for as short memory as
four. The algorithm we propose is based on minimizing the statistical prediction error by extending the memory, or state length,
adaptively, until the total prediction error is sufficiently small. We
demonstrate the algorithm by learning the structure of natural English text and applying the learned model to the correction of corrupted text. Using less than 3000 states the model's performance
is far superior to that of fixed memory models with similar number of states. We also show how the algorithm can be applied to
intergenic E. coli DNA base prediction with results comparable to
HMM based methods.
1
Introduction
Methods for automatically acquiring the structure of the human language are attracting increasing attention. One of the main difficulties in modeling the natural
language is its multiple temporal scales. As has been known for many years the
language is far more complex than any finite memory Markov source. Yet Markov
176
The Power of Amnesia
models are powerful tools that capture the short scale statistical behavior of language, whereas long memory models are generally impossible to estimate. The obvious desired solution is a Markov source with a 'deep' memory just where it is
really needed. Variable memory length Markov models have been in use for language
modeling in speech recognition for some time [3, 4], yet no systematic derivation,
nor rigorous analysis of such a learning mechanism has been proposed.
Markov models are a natural candidate for language modeling and temporal pattern
recognition, mostly due to their mathematical simplicity. It is nevertheless obvious
that finite memory Markov models can not in any way capture the recursive nature
of the language, nor can they be trained effectively with long enough memory. The
notion of a variable length memory seems to appear naturally also in the context of
universal coding [6]. This information theoretic notion is now known to be closely
related to efficient modeling [7]. The natural measure that appears in information
theory is the description length, as measured by the statistical predictability via
the Kullback-Leibler (KL) divergence.
The algorithm we propose here is based on optimizing the statistical prediction
of a Markov model , measured by the instantaneous KL divergence of the following
symbols, or by the current statistical surprise of the model. The memory is extended
precisely when such a surprise is significant, until the overall statistical prediction
of the stochastic model is sufficiently good. We apply this algorithm successfully for
statistical language modeling. Here we demonstrate its ability for spelling correction
of corrupted English text. We also show how the algorithm can be applied to
intergenic E. coli DNA base prediction with results comparable to HMM based
methods.
2
Prediction Suffix Trees and Finite State Automata
Definitions and Notations
Let $\Sigma$ be a finite alphabet. Denote by $\Sigma^*$ the set of all strings over $\Sigma$. A string $s$ over $\Sigma^*$ of length $n$ is denoted by $s = s_1 s_2 \cdots s_n$. We denote by $e$ the empty string. The length of a string $s$ is denoted by $|s|$ and the size of an alphabet $\Sigma$ is denoted by $|\Sigma|$. Let $\mathrm{Prefix}(s) = s_1 s_2 \cdots s_{n-1}$ denote the longest prefix of a string $s$, and let $\mathrm{Prefix}^*(s)$ denote the set of all prefixes of $s$, including the empty string. Similarly, $\mathrm{Suffix}(s) = s_2 s_3 \cdots s_n$ and $\mathrm{Suffix}^*(s)$ is the set of all suffixes of $s$. A set of strings $S$ is called a prefix free set if, for all $s_1, s_2 \in S$: $\{s_1\} \cap \mathrm{Prefix}^*(s_2) = \emptyset$. We call a probability measure $P$ over the strings in $\Sigma^*$ proper if $P(e) = 1$, and for every string $s$, $\sum_{\sigma \in \Sigma} P(s\sigma) = P(s)$. Hence, for every prefix free set $S$, $\sum_{s \in S} P(s) \le 1$, and specifically for every integer $n \ge 0$, $\sum_{s \in \Sigma^n} P(s) = 1$.
Prediction Suffix Trees
A prediction suffix tree $T$ over $\Sigma$ is a tree of degree $|\Sigma|$. The edges of the tree are labeled by symbols from $\Sigma$, such that from every internal node there is at most one outgoing edge labeled by each symbol. The nodes of the tree are labeled by pairs $(s, \gamma_s)$ where $s$ is the string associated with the walk starting from that node and ending in the root of the tree, and $\gamma_s : \Sigma \to [0,1]$ is the output probability function related with $s$, satisfying $\sum_{\sigma \in \Sigma} \gamma_s(\sigma) = 1$. A prediction suffix tree induces
probabilities on arbitrarily long strings in the following manner. The probability that $T$ generates a string $w = w_1 w_2 \cdots w_n \in \Sigma^n$, denoted by $P_T(w)$, is $\prod_{i=1}^{n} \gamma_{s^{i-1}}(w_i)$, where $s^0 = e$, and for $1 \le i \le n-1$, $s^i$ is the string labeling the deepest node reached by taking the walk corresponding to $w_1 \cdots w_i$ starting at the root of $T$. By definition, a prediction suffix tree induces a proper measure over $\Sigma^*$, and hence for every prefix free set of strings $\{w^1, \ldots, w^m\}$, $\sum_{i=1}^{m} P_T(w^i) \le 1$, and specifically for $n \ge 1$, $\sum_{s \in \Sigma^n} P_T(s) = 1$. An example of a prediction suffix tree is depicted in Fig. 1 on the left, where the nodes of the tree are labeled by the corresponding suffix they present.
[Figure 1 appears here; only scattered probability labels such as $\gamma_0 = 0.6$, $\gamma_1 = 0.4$ survive extraction.]
Figure 1: Left: A prediction suffix tree over $\Sigma = \{0, 1\}$. The strings written in the nodes are the suffixes the nodes present. For each node there is a probability vector over the next possible symbols. For example, the probability of observing a '1' after observing the string '010' is 0.3. Right: The equivalent probabilistic finite automaton. Bold edges denote transitions with the symbol '1' and dashed edges denote transitions with '0'. The states of the automaton are the leaves of the tree except for the leaf denoted by the string 1, which was replaced by the prefixes of the strings 010 and 110: 01 and 11.
Finite State Automata and Markov Processes
A Probabilistic Finite Automaton (PFA) $A$ is a 5-tuple $(Q, \Sigma, \tau, \gamma, \pi)$, where $Q$ is a finite set of $n$ states, $\Sigma$ is an alphabet of size $k$, $\tau : Q \times \Sigma \to Q$ is the transition function, $\gamma : Q \times \Sigma \to [0,1]$ is the output probability function, and $\pi : Q \to [0,1]$ is the probability distribution over the starting states. The functions $\gamma$ and $\pi$ must satisfy the following requirements: for every $q \in Q$, $\sum_{\sigma \in \Sigma} \gamma(q, \sigma) = 1$, and $\sum_{q \in Q} \pi(q) = 1$. The probability that $A$ generates a string $s = s_1 s_2 \cdots s_n \in \Sigma^n$ is $P_A(s) = \sum_{q^0 \in Q} \pi(q^0) \prod_{i=1}^{n} \gamma(q^{i-1}, s_i)$, where $q^i = \tau(q^{i-1}, s_i)$.
We are interested in learning a sub-class of finite state machines which have the following property. Each state in a machine $M$ belonging to this sub-class is labeled by a string of length at most $L$ over $\Sigma$, for some $L \ge 0$. The set of strings labeling the states is suffix free. We require that for every two states $q^1, q^2 \in Q$ and for every symbol $\sigma \in \Sigma$, if $\tau(q^1, \sigma) = q^2$ and $q^1$ is labeled by a string $s^1$, then $q^2$ is labeled by a string $s^2$ which is a suffix of $s^1 \sigma$. Since the set of strings labeling the states is suffix free, if there exists a string having this property then it is unique. Thus, in order that $\tau$ be well defined on a given set of strings $S$, not only must the set be suffix free, but it must also have the property that for every string $s$ in the set and every symbol $\sigma$, there exists a string in the set which is a suffix of $s\sigma$. For our convenience, from this point on, if $q$ is a state in $Q$ then $q$ will also denote the string labeling that state.
A special case of these automata is the case in which $Q$ includes all $|\Sigma|^L$ strings of length $L$. These automata are known as Markov processes of order $L$. We are interested in learning automata for which the number of states, $n$, is actually much smaller than $|\Sigma|^L$, which means that few states have "long memory" and most states have a short one. We refer to these automata as Markov processes with bounded memory $L$. In the case of Markov processes of order $L$, the "identity" of the states (i.e. the strings labeling the states) is known and learning such a process reduces to approximating the output probability function. When learning Markov processes with bounded memory, the task of a learning algorithm is much more involved since it must reveal the identity of the states as well.
It can be shown that under a slightly more complicated definition of prediction suffix trees, and assuming that the initial distribution on the states is the stationary distribution, these two models are equivalent up to a growth in size which is at most linear in $L$. The proof of this equivalence is beyond the scope of this paper, yet the transformation from a prediction suffix tree to a finite state automaton is rather simple. Roughly speaking, in order to implement a prediction suffix tree by a finite state automaton we define the leaves of the tree to be the states of the automaton. If the transition function of the automaton, $\tau(\cdot, \cdot)$, cannot be well defined on this set of strings, we might need to slightly expand the tree and use the leaves of the expanded tree. The output probability function of the automaton, $\gamma(\cdot, \cdot)$, is defined based on the prediction values of the leaves of the tree, i.e., for every state (leaf) $s$ and every symbol $\sigma$, $\gamma(s, \sigma) = \gamma_s(\sigma)$. The outgoing edges from the states are defined as follows: $\tau(q^1, \sigma) = q^2$ where $q^2 \in \mathrm{Suffix}^*(q^1 \sigma)$. An example of a finite state automaton which corresponds to the prediction tree depicted in Fig. 1 on the left is depicted on the right part of the figure.
3
Learning Prediction Suffix Trees
Given a sample consisting of one sequence of length $l$, or $m$ sequences of lengths $l_1, l_2, \ldots, l_m$, we would like to find a prediction suffix tree that has the same statistical properties as the sample and thus can be used to predict the next outcome for sequences generated by the same source. At each stage we can transform the tree into a Markov process with bounded memory. Hence, if the sequence was created by a Markov process, the algorithm will find the structure and estimate the probabilities of the process. The key idea is to iteratively build a prediction tree whose probability measure equals the empirical probability measure calculated from the sample.
We start with a tree consisting of a single node (labeled by the empty string $e$) and add nodes which we have reason to believe should be in the tree. A node $\sigma s$ must be added to the tree if it statistically differs from its parent node $s$. A natural measure
to check the statistical difference is the relative entropy (also known as the Kullback-Leibler (KL) divergence) [5] between the conditional probabilities $P(\cdot|s)$ and $P(\cdot|\sigma s)$. Let $X$ be an observation space and $P_1, P_2$ be probability measures over $X$; then the KL divergence between $P_1$ and $P_2$ is $D_{KL}(P_1 \| P_2) = \sum_{x \in X} P_1(x) \log \frac{P_1(x)}{P_2(x)}$. Note that this distance is not symmetric and $P_1$ should be absolutely continuous with respect to $P_2$. In our problem, the KL divergence measures how much additional information is gained by using the suffix $\sigma s$ for prediction instead of predicting using the shorter suffix $s$. There are cases where the statistical difference is large yet the probability of observing the suffix $\sigma s$ itself is so small that we can neglect those cases. Hence we weigh the statistical error by the prior probability of observing $\sigma s$. The statistical error measure in our case is,
$$\mathrm{Err}(\sigma s, s) = P(\sigma s)\, D_{KL}\big(P(\cdot|\sigma s)\,\|\,P(\cdot|s)\big) = P(\sigma s) \sum_{\sigma' \in \Sigma} P(\sigma'|\sigma s) \log \frac{P(\sigma'|\sigma s)}{P(\sigma'|s)} = \sum_{\sigma' \in \Sigma} P(\sigma s \sigma') \log \frac{P(\sigma s \sigma')}{P(\sigma'|s)\, P(\sigma s)}$$
Therefore, a node $\sigma s$ is added to the tree if the statistical difference (defined by $\mathrm{Err}(\sigma s, s)$) between the node and its parent $s$ is larger than a predetermined accuracy $\epsilon$. The tree is grown level by level, adding a son of a given leaf in the tree whenever the statistical surprise is large. The problem is that the requirement that a node statistically differs from its parent node is a necessary condition for belonging to the tree, but is not sufficient. The leaves of a prediction suffix tree must differ from their parents (or they are redundant) but internal nodes might not have this property. Therefore, we must continue testing further potential descendants of the leaves in the tree up to depth $L$. In order to avoid exponential growth in the number of strings tested, we do not test strings which belong to branches that are reached with small probability. The set of strings tested at each step is denoted by $S$, and can be viewed as a kind of potential 'frontier' of the growing tree $T$. At each stage, or when the construction is completed, we can produce the equivalent Markov process with bounded memory. The learning algorithm of the prediction suffix tree is depicted in Fig. 2. The algorithm gets two parameters: an accuracy parameter $\epsilon$ and the maximal order of the process (which is also the maximal depth of the tree) $L$.
The true source probabilities are not known, hence they should be estimated from the empirical counts of their appearances in the observation sequences. Denote by $\#s$ the number of times the string $s$ appeared in the observation sequences and by $\#\sigma|s$ the number of times the symbol $\sigma$ appeared after the string $s$. Then, using Laplace's rule of succession, the empirical estimation of the probabilities is,
$$P(s) \approx \hat{P}(s) = \frac{\#s + 1}{\sum_{s' \in \Sigma^{|s|}} \#s' + |\Sigma|^{|s|}}, \qquad P(\sigma|s) \approx \hat{P}(\sigma|s) = \frac{\#\sigma|s + 1}{\sum_{\sigma' \in \Sigma} \#\sigma'|s + |\Sigma|}$$
4
A Toy Learning Example
The algorithm was applied to a 1000 symbols long sequence produced by the automaton depicted top left in Fig. 3. The alphabet was binary. Bold lines in the figure represent transitions with the symbol '0' and dashed lines represent the symbol '1'. The prediction suffix tree is plotted at each stage of the algorithm.
• Initialize the tree $T$ and the candidate strings $S$: $T$ consists of a single root node, and $S \leftarrow \{\sigma \mid \sigma \in \Sigma \wedge \hat{P}(\sigma) \ge \epsilon\}$.
• While $S \ne \emptyset$, do the following:
1. Pick any $s \in S$ and remove it from $S$.
2. If $\mathrm{Err}(s, \mathrm{Suffix}(s)) \ge \epsilon$ then add to $T$ the node corresponding to $s$ and all the nodes on the path from the deepest node in $T$ (the deepest ancestor of $s$) until $s$.
3. If $|s| < L$ then for every $\sigma \in \Sigma$, if $\hat{P}(\sigma s) \ge \epsilon$, add $\sigma s$ to $S$.
Figure 2: The algorithm for learning a prediction suffix tree.
At the end of the run the corresponding automaton is plotted as well (bottom right). Note that the original automaton and the learned automaton are the same except for small differences in the transition probabilities.
[Figure 3 appears here; only transition-probability labels (0.32/0.68, 0.7, 0.14/0.86, 0.3, 0.69/0.31) survive extraction.]
Figure 3: The original automaton (top left), the instantaneous automata built along
the run of the algorithm (left to right and top to bottom), and the final automaton
(bottom left).
5
Applications
We applied the algorithm to the Bible with $L = 30$ and $\epsilon = 0.001$, which resulted in an automaton having less than 3000 states. The alphabet was the English letters and the blank character. The final automaton consists of states that are of length 2, like 'qu' and 'xe', and on the other hand 8 and 9 symbols long states, like 'shall be' and 'there was'. This indicates that the algorithm really captures
the notion of variable context length prediction, which resulted in a compact yet accurate model. Building a full Markov model in this case is impossible since it would require $27^9$ states. Here we demonstrate our algorithm for cleaning corrupted text. A test text (which was taken out of the training sequence) was modified in two different ways: first by a stationary noise that altered each letter with probability 0.2, and then the text was further modified by changing each blank to a random letter. The most probable state sequence was found via dynamic programming. The 'cleaned' observation sequence is the most probable outcome given the knowledge of the error rate. An example of such decoding for these two types of noise is shown in Fig. 4.
Original Text:
and god called the dry land earth and the gathering together of the waters called
he seas and god saw that it was good and god said let the earth bring forth grass
the herb yielding seed and the fruit tree yielding fruit after his kind
Noisy text (1):
and god cavsed the drxjland earth ibd shg gathervng together oj the waters dlled
re seas aed god saw thctpit was good ann god said let tae earth bring forth gjasb
tse hemb yielpinl peed and thesfruit tree sielxing fzuitnafter his kind
Decoded text (1):
and god caused the dry land earth and she gathering together of the waters called
he sees and god saw that it was good and god said let the earth bring forth grass
the memb yielding peed and the fruit tree fielding fruit after his kind
Noisy text (2):
andhgodpcilledjthesdryjlandbeasthcandmthelgatceringhlogetherjfytrezaatersoczlled
xherseasaknddgodbsawwthathitqwasoqoohanwzgodcsaidhletdtheuejrthriringmforth
bgrasstthexherbyieidingzseedmazdctcybfruitttreeayieidinglfruztbafberihiskind
Decoded text (2):
and god called the dry land earth and the gathering together of the altars called he
seasaked god saw that it was took and god said let the earthriring forth grass the
herb yielding seed and thy fruit treescielding fruit after his kind
Figure 4: Cleaning corrupted text using a Markov process with bounded memory.
We also applied the algorithm to intergenic regions of E. coli DNA, with $L = 20$ and $\epsilon = 0.0001$. The alphabet is: A, C, T, G. The result of the algorithm is an automaton having 80 states. The names of the
states of the final automaton are depicted in Fig. 5. The performance of the model
can be compared to other models, such as the HMM based model [8], by calculating
the normalized log-likelihood (NLL) over unseen data. The NLL is an empirical
measure of the entropy of the source as induced by the model. The NLL of the bounded memory Markov model is about the same as the one obtained by the HMM-based model. Yet, the Markov model does not contain the length distribution of the intergenic segments, hence the overall performance of the HMM-based model is slightly better. On the other hand, the HMM-based model is more complicated and requires manual tuning of its architecture.
ACT G AA AC AT CA CC CT CG TA TC TT TG GA GC GT GG AAC AAT AAG
ACA ATT CAA CAC CAT CAG CCA CCT CCG CTA CTC CTT CGA CGC CGT TAT
TAG TCA TCT TTA TTG TGC GAA GAC GAT GAG GCA GTA GTC GTT GTG
GGA GGC GGT AACT CAGC CCAG CCTG CTCA TCAG TCTC TTAA TTGC
TTGG TGCC GACC GATA GAGC GGAC GGCA GGCG GGTA GGTT GGTG
CAGCC TTGCA GGCGC GGTTA
Figure 5: The states that constitute the automaton for predicting the next base of
intergenic regions in E. coli DNA.
6
Conclusions and Future Research
In this paper we present a new efficient algorithm for estimating the structure and the transition probabilities of Markov processes with bounded yet variable memory. The algorithm, when applied to natural language modeling, results in a compact and accurate model which captures the short term correlations. The theoretical properties of the algorithm will be described elsewhere. In fact, we can prove that a slightly different algorithm constructs a bounded memory Markov process which, with arbitrarily high probability, induces distributions (over $\Sigma^n$ for $n > 0$) that are very close to those induced by the 'true' Markovian source, in the sense of the KL divergence. This algorithm uses a polynomial size sample and runs in polynomial time in the relevant parameters of the problem. We are also investigating hierarchical models based on these automata which are able to capture multi-scale correlations, and thus can be used to model more of the large scale structure of the natural language.
Acknowledgment
We would like to thank Lee Giles for providing us with the software for plotting finite state
machines, and Anders Krogh and David Haussler for letting us use their E. coli DNA data
and for many helpful discussions. Y.S. would like to thank the Clore foundation for its
support.
References
[1] J. G. Kemeny and J. L. Snell, Finite Markov Chains, Springer-Verlag, 1982.
[2] Y. Freund, M. Kearns, D. Ron, R. Rubinfeld, R. E. Schapire, and L. Sellie, Efficient Learning of Typical Finite Automata from Random Walks, STOC-93.
[3] F. Jelinek, Self-Organized Language Modeling for Speech Recognition, 1985.
[4] A. Nadas, Estimation of Probabilities in the Language Model of the IBM Speech Recognition System, IEEE Trans. on ASSP, Vol. 32, No. 4, pp. 859-861, 1984.
[5] S. Kullback, Information Theory and Statistics, New York: Wiley, 1959.
[6] J. Rissanen and G. G. Langdon, Universal modeling and coding, IEEE Trans. on Info. Theory, IT-27 (3), pp. 12-23, 1981.
[7] J. Rissanen, Stochastic complexity and modeling, The Ann. of Stat., 14(3), 1986.
[8] A. Krogh, S. I. Mian, and D. Haussler, A Hidden Markov Model that finds genes in E. coli DNA, UCSC Tech. Rep. UCSC-CRL-93-16.
6,888 | 7,230 | On Blackbox Backpropagation and Jacobian Sensing
Vikas Sindhwani
Google Brain
New York, NY 10011
[email protected]
Krzysztof Choromanski
Google Brain
New York, NY 10011
[email protected]
Abstract
From a small number of calls to a given "blackbox" on random input perturbations,
we show how to efficiently recover its unknown Jacobian, or estimate the left action
of its Jacobian on a given vector. Our methods are based on a novel combination of
compressed sensing and graph coloring techniques, and provably exploit structural
prior knowledge about the Jacobian such as sparsity and symmetry while being
noise robust. We demonstrate efficient backpropagation through noisy blackbox
layers in a deep neural net, improved data-efficiency in the task of linearizing the
dynamics of a rigid body system, and the generic ability to handle a rich class of
input-output dependency structures in Jacobian estimation problems.
1
Introduction
Automatic Differentiation (AD) [1, 17] techniques are at the heart of several "end-to-end" machine
learning frameworks such as TensorFlow [5] and Torch [2]. Such frameworks are organized around
a library of primitive operators which are differentiable vector-valued functions of data inputs and
model parameters. A composition of these primitives defines a computation graph - a directed acyclic
graph whose nodes are operators and whose edges represent dataflows, typically culminating in the
evaluation of a scalar-valued loss function. For reverse mode automatic differentiation (backpropagation) to work, each operator needs to be paired with a gradient routine which maps gradients of the
loss function with respect to the outputs of the operator, to gradients with respect to its inputs. In
this paper, we are concerned with extending the automatic differentiation paradigm to computation
graphs where some nodes are "blackboxes" [12], that is, opaque pieces of code implemented outside
the AD framework providing access to an operator only via expensive and potentially noisy function
evaluation, with no associated gradient routine available. A useful mental model of this setting is
shown below where f3 is a blackbox.
[Diagram: a computation graph over variables $x_0, \ldots, x_4$ and operators $f_1, \ldots, f_4$, in which node $f_3$ is the blackbox.]
Blackboxes, of course, are pervasive - as legacy or proprietary codes or executables, numerical
optimization routines, physics engines (e.g., Bullet [3] and MuJoCo [4]), or even wrappers interfacing
with a mechanical system as is typically the case in reinforcement learning, robotics and process
control applications.
The unknown Jacobian of a blackbox is the central object of study in this paper. Recall that the Jacobian $\nabla f(x_0)$ of a differentiable vector-valued map $f : \mathbb{R}^n \to \mathbb{R}^m$ at an input $x_0 \in \mathbb{R}^n$ is the $m \times n$ matrix of partial derivatives, defined by
$$[\nabla f(x_0)]_{ij} = \frac{\partial f_i}{\partial x_j}(x_0).$$
The rows of the Jacobian are gradient vectors of the $m$ component functions $f = (f_1, \ldots, f_m)$ and the columns are indexed by the $n$-dimensional inputs $x = (x_1, \ldots, x_n)$. Through Taylor approximation, the Jacobian characterizes the rate of change in $f$ at a step $\epsilon$ ($0 < \epsilon \ll 1$) along any direction $d \in \mathbb{R}^n$ in the neighborhood of $x_0$ as follows,
$$\nabla f(x_0)\, d \approx \frac{1}{\epsilon}\left[f(x_0 + \epsilon d) - f(x_0)\right]. \qquad (1)$$
Viewed as a linear operator over perturbation directions $d \in \mathbb{R}^n$, differences of the form $\frac{1}{\epsilon}\left[f(x + \epsilon d) - f(x)\right]$ may be interpreted as noisy measurements ("sensing" [10, 11, 13]) of the Jacobian based on function evaluation. The measurement error grows with the step size $\epsilon$ and the degree of nonlinearity in $f$ in the vicinity of $x_0$. Additional measurement noise may well be introduced by unknown error-inducing elements inside the blackbox.
From as few perturbations and measurements as possible, we are concerned with approximately recovering either the full Jacobian, or approximating the action of the transpose of the Jacobian on a given vector in the context of enabling backpropagation through blackbox nodes. To elaborate on the latter setting, let $y = f(x)$ represent forward evaluation of an operator, and let $p = \frac{\partial l}{\partial y}$ be the gradient of a loss function $l(\cdot)$ flowing in from the "top" during the reverse sweep. We are interested in approximating $\frac{\partial l}{\partial x} = [\nabla f(x)]^T p$, i.e. the action of the transpose of the Jacobian on $p$. Note that due to linearity of the derivative, this is the same as estimating the gradient of the scalar-valued function $g(x) = p^T f(x)$ based on scalar measurements of the form $\frac{1}{\epsilon}(g(x + \epsilon d) - g(x))$, which is a special case of the tools developed in this paper.
The more general problem of full Jacobian estimation arises in many derivative-free optimization
settings [12, 8]. Problems in optimal control and reinforcement learning [18, 21, 20] are prominent
examples, where the dynamics of a nonlinear system (e.g., a robot agent) needs to be linearized along
a trajectory of states and control inputs reducing the problem to a sequence of time-varying Linear
Quadratic Regulator (LQR) subproblems [21]. The blackbox in this case is either a physics simulator
or actual hardware. The choice of perturbation directions and the collection of measurements then
becomes intimately tied to the agent's strategy for exploration and experience gathering.
Finite differencing, where the perturbation directions d are the n standard basis vectors, is a default
approach for Jacobian estimation. However, it requires n function evaluations which may be
prohibitively expensive for large n. Another natural approach, when the number of measurements,
say k, is smaller than n, is to estimate the Jacobian via linear regression,
$$\underset{J \in \mathbb{R}^{m \times n}}{\mathrm{argmin}} \; \sum_{i=1}^{k} \left\| J d_i - \frac{1}{\epsilon}\left[f(x_0 + \epsilon d_i) - f(x_0)\right] \right\|_2^2 + \lambda \|J\|_F^2,$$
where an $l_2$ regularizer is added to handle the underdetermined setting and $\|\cdot\|_F$ stands for the Frobenius norm. This approach assumes that the error distribution is Gaussian and in its basic form does not exploit additional Jacobian structure, e.g., symmetry and sparsity, to improve data efficiency. For example, if backpropagation needs to be enabled for a noiseless blackbox with identical input-output dimensions whose unknown Jacobian happens to be symmetric, then just one function evaluation suffices since $\nabla f(x_0)^T p = \nabla f(x_0)\, p \approx \frac{1}{\epsilon}(f(x_0 + \epsilon p) - f(x_0))$. Figure 1 shows the histogram of the Jacobian of the dynamics of a Humanoid walker with respect to its
18-dimensional state variables and 6 dimensional control inputs. It can be seen that the Jacobian
is well approximated by a sparse matrix. In a complex dynamical system comprising many
subsystems, most state or control variables only have local influence on the instantaneous evolution
of the overall state. Figure 1 also shows the example of a manipulator; the Jacobian of a 5 planar
link system has sparse and symmetric blocks (highlighted by blue and red bounding boxes) as a
consequence of the form of the equations of motion of a kinematic tree of rigid bodies. Clearly, one
can hope that incorporating this kind of prior knowledge in the Jacobian estimation process will
improve data efficiency in "model-free" trajectory optimization applications.
Technical Preview, Contributions and Outline: We highlight the following contributions:
2
Figure 1: Structured Jacobians in Continuous Control Problems
• In §2: We start by asking how many blackbox calls are required to estimate a sparse Jacobian with
known sparsity pattern. We recall results from automatic differentiation [14, 17, 23] literature that
relates this problem to graph coloring [19, 26] where the chromatic number of a certain graph that
encodes input-output dependencies dictates the sample complexity. We believe that this connection
is not particularly well known in the deep learning community, though coloring approaches only
apply to noiseless structure-aware cases.
• In §3: We present a Jacobian recovery algorithm, rainbow, that uses a novel probabilistic graph coloring subroutine to reduce the effective number of variables, leading to a compressed convex optimization problem whose solution yields an approximate Jacobian. The approximation $\hat{J}$ of the true Jacobian $J$ is such that $\|\hat{J} - J\|_F \le E(n)$, where the measurement error vector $\eta \in \mathbb{R}^m$ satisfies: $\|\eta\|_\infty = o(E(n))$. Our algorithm requires only $O(\min(A, B))$ calls to the blackbox, where $A = d_{\mathrm{int}} \log^2\big(\frac{\sqrt{mn}}{E(n)}\big)$, $B = m\,\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})\, \log^2\big(\frac{\sqrt{m\,\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})}}{E(n)}\big)$, $d_{\mathrm{int}}$ is a measure of intrinsic dimensionality of a convex set $\mathcal{C} \ni J$ encoding prior knowledge about the Jacobian (elaborated below), and $\xi(J, G^{\mathrm{weak}}_{\mathrm{int}}) \le n$ is a parameter encoding combinatorial properties, possibly known in advance, of the sparsity pattern in the Jacobian (encoded by the weak-intersection graph $G^{\mathrm{weak}}_{\mathrm{int}}$ introduced later; see §3.4.1 for an explicit definition); we will refer to $\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})$ as the chromatic character of $J$.
• We demonstrate our tools with the following experiments: (1) Training a convolutional neural network in the presence of a blackbox node, (2) Estimating structured Jacobians from few calls to a blackbox with different kinds of local and global dependency structures between inputs and outputs, and (3) Estimating structured Jacobians of the dynamics of a 50-link manipulator, with a small number of measurements, while exploiting sparsity and partial symmetry via priors in $l_p$ regression.
The convex set C mentioned above can be defined in many different ways depending on prior
knowledge about the Jacobian (e.g., lower and upper bounds on certain entries, sparsity with unknown
pattern, symmetric block structure, etc).
As we show in the experimental section, our approach can also be applied to non-smooth problems where the Jacobian is not well-defined. Note that in this setting one can think of a nonsmooth function as a noisy version of its smooth approximation, and the Jacobian of a function smoothing (such as Gaussian smoothing) is then the object of interest.
Notation: $D = [d_1, \ldots, d_k] \in \mathbb{R}^{n \times k}$ will denote the matrix of perturbation directions, with the corresponding measurement matrix $R = [r_1, \ldots, r_k] \in \mathbb{R}^{m \times k}$ where $r_i = \frac{1}{\epsilon}[f(x + \epsilon d_i) - f(x)]$.
2
The Link between Jacobian Estimation and Graph Coloring
Suppose the Jacobian is known to be a diagonal matrix. Then finite differencing where perturbation
directions are the n standard basis elements is utterly wasteful; it is easy to see that a single
perturbation direction d = [1, 1 . . . 1]T suffices in identifying all diagonal elements. The goal of this
section is to explain the connection between Jacobian recovery and graph coloring problems that
substantially generalizes this observation.
First we introduce graph theory terminology. An undirected graph is denoted as $G(V, E)$, where $V$ and $E$ stand for the sets of vertices and edges respectively. For $v, w \in V$ we say that $v$ is adjacent to $w$ if there is an edge between $v$ and $w$. The degree $\deg(v)$ of $v \in V$ is the number of vertices adjacent to it. The maximum degree in $G(V, E)$ will be denoted as $\Delta(G)$. A stable set in $G$ is a subset $S \subseteq V$ in which no two vertices are adjacent. The chromatic number $\chi(G)$ of $G$ is the minimum number of sets in a partition of $V$ into stable sets. Equivalently, it is the smallest number of colors used in a valid vertex-coloring of the graph, where a valid coloring is one in which adjacent vertices are assigned different colors.
[Figure 2 appears here; only its vertex labels a-h survive extraction.]
Figure 2: On the left: Sparse Jacobian for a function $f(a, b, c, d, e, f, g, h)$ with $n = m = 8$, where blue entries indicate nonzero values. In the middle: coloring of columns. A fixed color corresponds to a stable set in $G_{\mathrm{int}}$. On the right: corresponding intersection graph $G_{\mathrm{int}}$.
Denote by $J_x = [J_1, \ldots, J_n] \in \mathbb{R}^{m \times n}$ a Jacobian matrix evaluated at a point $x \in \mathbb{R}^n$, where $J_i \in \mathbb{R}^m$ denotes the $i$-th column. Assume that the $J_i$ are not known, but the sparsity structure, i.e. the location of zero entries in $J$, is given. Let $A_i = \{k : J_{ki} \ne 0\} \subseteq \{0, \ldots, m-1\}$ be the indices of the non-zero elements of $J_i$. The intersection graph, denoted by $G_{\mathrm{int}}$, is a graph whose vertex set is $V = \{x_1, \ldots, x_n\}$ and $x_i$ is adjacent to $x_j$ if the sets $A_i$ and $A_j$ intersect. In other words, there exists an output of the blackbox that depends both on $x_i$ and $x_j$ (see Figure 2 for an illustration). Now suppose $k$ colors are used in a valid coloring of $G_{\mathrm{int}}$. The key fact that relates the Jacobian recovery problem to graph coloring is the following observation. If one constructs vectors $d^i \in \mathbb{R}^n$ for $i = 1, \ldots, k$ in such a way that $d^i_j = 1$ if $x_j$ is colored by the $i$th color and is $0$ otherwise, then $k$ computations of the finite difference $\frac{f(x + \epsilon d^i) - f(x)}{\epsilon}$ for $0 < \epsilon \ll 1$ and $i = 1, \ldots, k$ suffice to accurately approximate the Jacobian matrix (assuming no blackbox noise). The immediate corollary is the following lemma.
Lemma 2.1 ([14]). The number of calls $k$ to a blackbox vector-valued function $f$ needed to compute an approximate Jacobian via the finite difference technique in the noiseless setting satisfies $k \le \chi(G_{\mathrm{int}})$, where $G_{\mathrm{int}}$ is the corresponding intersection graph.
Thus, blackboxes whose unknown Jacobian happens to be associated with an intersection graph of low chromatic number admit accurate Jacobian estimation with few function calls. Rich classes of graphs have low chromatic number. If the maximum degree $\Delta(G_{\mathrm{int}})$ of $G_{\mathrm{int}}$ is small then $\chi(G_{\mathrm{int}})$ is also small, because of the well known fact that $\chi(G_{\mathrm{int}}) \le \Delta(G_{\mathrm{int}}) + 1$. For instance, if every input $x_i$ influences at most $k$ outputs $f_j$ and every output $f_j$ depends on at most $l$ variables $x_i$, then one can notice that $\Delta(G_{\mathrm{int}}) \le kl$ and thus $\chi(G_{\mathrm{int}}) \le kl + 1$. When the maximum degree is small, an efficient coloring can easily be found by the greedy procedure that colors vertices one by one and assigns to the newly seen vertex the smallest color that has not been used to color any of its already seen neighbors ([14]). This procedure cannot be applied if there exist vertices of high degree. That is the case, for instance, if a few global variables influence a large number of outputs $f_i$. In the subsequent sections we will present an algorithm that does not need to rely on $\Delta(G_{\mathrm{int}})$ being small.
Graph coloring for Jacobian estimation has two disadvantages even if we assume that good quality
coloring of the intersection graph can be found efficiently (optimal graph coloring is in general NP
hard). It assumes that the sparsity structure of the Jacobian, i.e. the set of entries that are zero is given,
and that all the measurements are accurate, i.e. there is no noise. We relax these limitations next.
3
Sensing and Recovery of Structured Jacobians
Our algorithm receives as input two potential sources of prior knowledge about the blackbox:
• sparsity pattern of the Jacobian in the form of a supergraph of the true intersection graph, which we call the weak intersection graph, denoted $G^{\mathrm{weak}}_{\mathrm{int}}$. The knowledge of the sparsity pattern may be imprecise in the sense that we can overestimate the set of outputs an input can influence. Note that any stable set of $G^{\mathrm{weak}}_{\mathrm{int}}$ is a stable set in $G_{\mathrm{int}}$ and thus we have: $\chi(G_{\mathrm{int}}) \le \chi(G^{\mathrm{weak}}_{\mathrm{int}})$. A complete weak intersection graph corresponds to the setting where no prior knowledge about the sparsity pattern is available, while $G^{\mathrm{weak}}_{\mathrm{int}} = G_{\mathrm{int}}$ reflects the setting with exact knowledge.
• a convex set $\mathcal{C}$ encoding additional information about the local and global behavior of the blackbox.
For example, if output components fi are Lipschitz continuous with the Lipschitz constant Li : the
magnitude of the Jacobian entries can be bounded row-wise with Li , i = 1 . . . m. The Jacobian
may additionally have sparse blocks, which may be expressed as a bound on the elementwise l1
norm over the entries of the block; it may also have symmetric and/or low-rank blocks [6] (the
latter may be expressed as a bound on the nuclear norm of the block). A measure of the effective
degrees of freedom due to such constraints directly shows up in our theoretical results on Jacobian
recovery (?3.4).
Direct domain knowledge, or a few expensive finite-difference calls may be used in the first few
iterations to collect input-independent structural information about the Jacobian, e.g., to observe the
typical degree of sparsity, whether a symmetry or sparsity pattern holds across iterations etc.
Our algorithm, called rainbow, consists of three steps:
• Color: Efficient coloring of $G^{\mathrm{weak}}_{\mathrm{int}}$ for reducing the dimensionality of the problem, where each variable in the compressed problem corresponds to a subset of variables in the original problem. This phase explores strictly combinatorial structural properties of $J$ (§3.1).
• Optimize: Solving a compressed convex optimization problem to minimize (or find a feasible) $l_p$ reconstruction. This phase can utilize additional structural knowledge via the convex set $\mathcal{C}$ (§3.3) defined earlier.
• Reconstruct: Mapping the auxiliary variables from the solution to the above convex problem back to the original variables to reconstruct $J$.
Next we discuss all these steps.
3.1
Combinatorial Variable Compression via Graph Coloring: GreedyColoring
Consider the following coloring algorithm for reducing the effective number of input variables. Order the vertices $x_1, \ldots, x_n$ of $G^{\mathrm{weak}}_{\mathrm{int}}$ randomly. Initialize the list of stable sets $\mathcal{I}$ covering $\{x_1, \ldots, x_n\}$ as $\mathcal{I} = \emptyset$. Process vertices one after another and add a vertex $x_i$ to the first set from $\mathcal{I}$ that does not contain vertices adjacent to $x_i$. If no such set exists, add the singleton set $\{x_i\}$ to $\mathcal{I}$. After processing all the vertices, each stable set from $\mathcal{I}$ gets assigned a different color. We denote by $\mathrm{color}(i)$ the color assigned to vertex $x_i$ and by $l$ the total number of colors. To boost the probability of finding a good coloring, one can repeat the procedure above for a few random permutations and choose the one that corresponds to the smallest $l$.
3.2
Choice of Perturbation Directions
Each $d^i \in \mathbb{R}^n$ is obtained from a randomly chosen vector $d^i_{\mathrm{core}} \in \mathbb{R}^l$, which we call the core vector. Entries of all core vectors are taken independently from the same distribution, which is: Gaussian, Poissonian, or bounded and of nonzero variance (for the sake of readability, technical conditions and extensions to this family of distributions are relegated to the Appendix). Directions may even be chosen from columns of structured matrices, i.e., Circulant and Toeplitz [7, 24, 22, 16]. Each $d^i$ is defined as follows: $d^i(j) = d^i_{\mathrm{core}}(\mathrm{color}(j))$.
3.3
Recovery via Compressed Convex Optimization
Linear Programming: Assume that the $l_p$-norm of the noise vector $\eta \in \mathbb{R}^m$ is bounded by $\epsilon = E(n)$, where $E(\cdot)$ encodes non-decreasing dependence on $n$. With the matrix of perturbation vectors $D \in \mathbb{R}^{n \times k}$ and a matrix of the corresponding core vectors $D_{\mathrm{core}} \in \mathbb{R}^{l \times k}$ in hand, we are looking for a solution $X \in \mathbb{R}^{m \times l}$ to the following problem:
$$\|(X D_{\mathrm{core}} - R)_i\|_p \le \epsilon, \quad i = 1, \ldots, k, \qquad (2)$$
where the subscript $i$ runs over columns and $R \in \mathbb{R}^{m \times k}$ is the measurement matrix for the matrix of perturbations $D$. For $p \in \{1, \infty\}$, this task can be cast as a Linear Programming (LP) problem. Note that the smaller the number of colors $l$, the smaller the size of the LP. If $\mathcal{C}$ is a polytope, it can be included as additional linear constraints in the LP. After solving for $X$, we construct the Jacobian approximation $\hat{J}$ as follows: $\hat{J}_{u,j} = X_{u,\mathrm{color}(j)}$, where $\mathrm{color}(j)$ is defined above.
We want to emphasize that a Linear Programming approach is just one instantiation of a more general
method we present here. Below we show another one based on ADMM for structured l2 regression.
ADMM Solvers for multiple structures: When the Jacobian is known to have multiple structures,
e.g., it is sparse and has symmetric blocks, it is natural to solve structured l2 regression problems of
the form,
$$\underset{X \in \mathbb{R}^{m \times l} \cap S}{\mathrm{argmin}} \; \sum_{i=1}^{k} \|(X D_{\mathrm{core}} - R)_i\|_2^2 + \lambda \|X\|_1,$$
where the convex constraint set S is the set of all matrices conforming to a symmetry pattern on
selected square blocks; an example is the Jacobian of the dynamics of a 5-link manipulator as shown
in Figure 1. A consensus ADMM [9] solver can easily be implemented for such problems involving
multiple structural priors and constraints admitting cheap proximal and projection operators. For the
specific case of the above problem, it runs the following iterations:
• Solve for $X_1$: $X_1^T = [D_{\mathrm{core}} D_{\mathrm{core}}^T + \rho I]^{-1}\big(D_{\mathrm{core}} R^T + \rho (X^T - U_1^T)\big)$
• $X_2 = \mathrm{symmetrize}[X - U_2, S]$
• $X = \mathrm{soft\text{-}threshold}[\tfrac{1}{2}(X_1 + X_2 + U_1 + U_2),\, \lambda \rho^{-1}]$
• $U_i = U_i + X_i - X$, for $i = 1, 2$
where $X_1, X_2$ are primal variables with associated dual variables $U_1, U_2$, $\rho$ is the ADMM step size parameter, and $X$ is the global consensus variable. The $\mathrm{symmetrize}(X, S)$ routine implements exact projection onto the symmetry constraints - it takes a square block $\tilde{X}$ of $X$ specified by the constraint set $S$ and symmetrizes it simply as $\tfrac{1}{2}[\tilde{X} + \tilde{X}^T]$, keeping other elements of $X$ intact. The soft-thresholding operator is defined by $\mathrm{soft\text{-}threshold}(X, \lambda) = \max(X - \lambda, 0) - \max(-X - \lambda, 0)$.
Note that for the first step, $[D_{\mathrm{core}} D_{\mathrm{core}}^T + \rho I]$ can be factorized upfront, even across multiple Jacobian estimation problems, since it is input-independent. Also, notice that if the perturbation directions are structured, e.g., drawn from a Circulant or Toeplitz matrix, then the cost of this linear solve can be further reduced using specialized solvers [15]. As before, after solving for $X$, we construct the Jacobian approximation $\hat{J}$ as follows: $\hat{J}_{u,j} = X_{u,\mathrm{color}(j)}$.
3.4
Theoretical Guarantees
3.4.1
Chromatic property of a graph
The probabilistic graph coloring algorithm GreedyColoring generates a coloring where the number of colors is close to the chromatic property $\xi(G^{\mathrm{weak}}_{\mathrm{int}})$ of the graph $G^{\mathrm{weak}}_{\mathrm{int}}$ (see: proof of Lemma 3.1 in the Appendix). The chromatic property $\xi(G)$ of a graph $G$ is defined recursively as follows.
• $\xi(G_{\emptyset}) = 0$, where $G_{\emptyset}$ is an empty graph ($V = \emptyset$),
• for $G \ne G_{\emptyset}$, we have: $\xi(G) = 1 + \max_{S \subseteq V} \xi(G \setminus S)$, where the max is taken over all subsets satisfying $|S| = |V| - \big\lceil \sum_{v \in V} \frac{1}{1 + \deg(v)} \big\rceil$ and $G \setminus S$ stands for the graph obtained from $G$ by deleting the vertices in $S$.
Note that we are not aware of any closed-form expression for $\xi(G)$. We observe that there exists a subtle connection between the chromatic property of the graph $\xi(G)$ and its chromatic number.
Lemma 3.1. The following is true for every graph $G$: $\chi(G) \le \xi(G)$.
The importance of the chromatic property lies in the fact that in practice, for many graphs $G$ (especially sparse ones, though not necessarily of small maximum degree $\Delta(G)$), the chromatic property is close to the chromatic number. Thus, in practice, GreedyColoring finds a good quality coloring for a large class of weak-intersection graphs $G^{\mathrm{weak}}_{\mathrm{int}}$, efficiently utilizing partial knowledge about the sparsity structure. The chromatic character of the Jacobian is defined as the chromatic property of its weak-intersection graph, $\xi(G^{\mathrm{weak}}_{\mathrm{int}})$, and thus depends not only on the Jacobian $J$ but also on its "sparsity exposition" given by $G^{\mathrm{weak}}_{\mathrm{int}}$; it will be referred to as $\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})$.
3.4.2
Accuracy of Jacobian Recovery with rainbow
We need the following notion of intrinsic dimensionality in $\mathbb{R}^{m \times n}$ as a metric space equipped with the $\|\cdot\|_F$ norm.
Definition 3.2 (intrinsic dimensionality). For any point $X \in \mathbb{R}^{m \times n}$ and any $r > 0$, let $B(X, r) = \{Y : \|X - Y\|_F \le r\}$ denote the closed ball of radius $r$ centered at $X$. The intrinsic dimensionality of $S \subseteq \mathbb{R}^{m \times n}$ is the smallest integer $d$ such that for any ball $B(X, r) \subseteq \mathbb{R}^{m \times n}$, the set $B(X, r) \cap S$ can be covered by $2^d$ balls of radius $\frac{r}{2}$.
We are ready to state our main theoretical result.
Theorem 3.3. Consider the Jacobian matrix $J \in \mathbb{R}^{m \times n}$. Assume that $\max_{i,j} |J_{i,j}| \le C$ for some fixed $C > 0$ and $J \in \mathcal{C}$, where $\mathcal{C} \subseteq \mathbb{R}^{m \times n}$ is a convex set defining certain structural properties of $J$ (for instance $\mathcal{C}$ may be the set of matrices with block sparsity and symmetry patterns). Assume that the measurement error vector $\eta \in \mathbb{R}^m$ satisfies: $\|\eta\|_\infty = o(E(n))$ for some function $E(n)$. Then the approximation $\hat{J}$ of $J$ satisfying $\|\hat{J} - J\|_F \le E(n)$ can be found with probability $p = 1 - \frac{1}{\mathrm{spoly}(n)}$ by applying the rainbow algorithm with $k = O(\min(A, B))$ calls to the function $f$, where
$$A = d_{\mathrm{int}} \log^2\Big(\frac{C\sqrt{mn}}{E(n)}\Big), \qquad B = m\,\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})\, \log^2\Big(\frac{C\sqrt{m\,\xi(J, G^{\mathrm{weak}}_{\mathrm{int}})}}{E(n)}\Big),$$
$d_{\mathrm{int}}$ stands for the intrinsic dimensionality of $\mathcal{C}$, and $\mathrm{spoly}(n)$ is a superpolynomial function of $n$.
The proof is given in the Appendix. The result above is a characterization of the number of blackbox
calls needed to recover the Jacobian, in terms of its intrinsic degrees of freedom, the dependency
structure in the inputs and outputs and the noise introduced by higher order nonlinear terms and other
sources of forward evaluation errors.
4
Experiments
4.1. Sparse Jacobian Recovery: We start with a controlled setting where we consider a vector-valued function $f : \mathbb{R}^n \to \mathbb{R}^m$ of the following form:
$$f(x_1, \ldots, x_n) = \Big(\sum_{i \in S_1} \sin(x_i),\; \ldots,\; \sum_{i \in S_m} \sin(x_i)\Big), \qquad (3)$$
where the sets $S_i$ for $i = 1, \ldots, m$ are chosen according to one of the following models. In the $p$-model, each entry $i \in \{1, \ldots, n\}$ is added to each $S_j$ independently and with the same probability $p$. In the $\alpha$-model, entry $i$ is added to each $S_j$ independently at random with probability $i^{-\alpha}$. We consider a Jacobian at a point $x \in \mathbb{R}^n$ drawn from the standard multivariate Gaussian distribution with entries taken from $N(0, 1)$. Both models enable us to precisely control the sparsity of the corresponding Jacobian, which has an explicit analytic form. Furthermore, the latter generates Jacobians where the degrees of the corresponding intersection graphs have a power-law type distribution with a few "hubs" very well connected to other nodes and many nodes of small degree. That corresponds to the setting where there exist a few global variables that impact many outputs $f_i$, and many local variables that only influence a few outputs. We run the LP variant of rainbow for the above models and summarize the results in the table below.
model      m   n   sparsity  l/Δ     σ     k   rel.error
p = 0.1    30  60  0.91277   0.33    0.07  15  0.0632
p = 0.1    40  70  0.90142   0.35    0.07  20  0.0802
p = 0.1    50  80  0.90425   0.32    0.07  30  0.0751
p = 0.3    30  60  0.6866    0.6833  0.07  45  0.0993
p = 0.3    40  70  0.7096    0.6857  0.07  60  0.0589
p = 0.3    50  80  0.702     0.8625  0.07  70  0.1287
α = 0.5    30  60  0.7927    0.3833  0.1   45  0.0351
α = 0.5    40  70  0.78785   0.4285  0.1   60  0.0491
α = 0.5    50  80  0.79225   0.475   0.1   70  0.0443
α = 0.7    30  60  0.85166   0.2777  0.1   40  0.0393
α = 0.7    40  70  0.87357   0.2537  0.1   55  0.0398
α = 0.7    50  80  0.86975   0.275   0.1   65  0.0326
Above, we measure recovery error in terms of the relative Frobenius distance between the estimated Jacobian and the true Jacobian, $\mathrm{rel.error} = \frac{\|\hat{J} - J\|_F}{\|J\|_F}$. The standard deviation of each entry of the measurement noise vector is given by $\sigma$. We report in particular the fraction of zero entries in $J$ (sparsity) and the ratio of the number of colors found by our GreedyColoring algorithm to the maximum degree of the graph ($l/\Delta$). We see that the coloring algorithm finds a good quality coloring even in the "power-law" type setting where the maximum degree $\Delta(G)$ is large. The quality of the
coloring in turn leads to the reduction in the number of measurement vectors needed (k) to obtain an
accurate Jacobian approximation (i.e., relative error < 0.1).
4.2. Training Convolutional Neural Networks with Blackbox Nodes: We introduce a blackbox
layer between the convolutional layers and the fully connected layers of a standard MNIST convnet.
The blackbox node is a standard ReLU layer that takes as input 32-dimensional vectors, 32 ? 32-sized
weight matrix and a bias vector of length 32, and outputs a 32 dimensional representation. The
minibatch size is 16. We inject truncated Gaussian noise in the output of the layer and override its
default gradient operator in TensorFlow with our LP-based rainbow procedure. We use Gaussian
perturbation directions and sample measurements by forward evaluation calls to the TensorFlow
Op inside our custom blackbox gradient operator. In Fig. 3 we study the evolution of training and
validation error across SGD iterations. We see in Fig. 3 that even though for low noise regime the
standard linear regression and finite differencing methods work quite well, when noise magnitude
increases our blackbox backpropagation procedure rainbow-LP shows superior robustness - retaining a capacity to learn while the other methods degrade in terms of validation error. The rightmost
subfigure reports validation error for our method with different numbers of Jacobian measurements at
a high noise level (in this case, the other methods fail to learn and are not plotted).
(a) Standard deviation: 9e-5
(b) Standard deviation: 0.008
(c) Different numbers of measurement vectors (std : 0.1)
Figure 3: TensorFlow CNN training with a "blackbox" layer with rainbow-LP method. On the
left: Comparison of rainbow-LP with finite differencing and linear regression methods for low
noise regime. In the middle: As before, but for more substantial noise magnitude. On the right:
rainbow-LP for even larger noise magnitude (std : 0.1) and different number of measurement vectors
used. In that setting other methods did not learn at all.
4.3. Jacobian of manipulator dynamics: We compute the
true Jacobian of a planar rigid-body model with 50 links near
an equilibrium point using MIT's Drake planning and control
toolbox [25]. The first link is unactuated; the remaining
are all torque-actuated. The state vector comprises 50 joint angles and associated joint velocities, and there are
49 control inputs to the actuators. The Jacobian has sparse
and symmetric blocks similar to Figure 1. We compare
linear regression with l2 regularization against the rainbow
ADMM solver designed to exploit sparsity and symmetry,
in the setting where the number of measurements is much
smaller than the total number of input variables to the forward
dynamics function (149). Results are shown in the adjacent
Figure. The recovery is much more accurate in the presence
of sparsity and symmetry priors. The results are similar if the
matrix of perturbation directions are chosen from a Circulant
matrix.
References
[1] http://www.autodiff.org.
[2] http://torch.ch.
[3] http://www.bulletphysics.org.
[4] http://www.mujoco.org.
[5] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software
available from tensorflow.org.
[6] H. S. Abdel-Khalik, P. Hovland, A. Lyons, T. E. Stover, and J. Utke. A low rank approach to automatic differentiation. Advances in Automatic Differentiation, 2008.
[7] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak. Toeplitz-structured compressed
sensing matrices. IEEE/SP Workshop on Statistical Signal Processing, 2007.
[8] A. S. Bandeira, K. Scheinberg, and L. N. Vicente. Computation of sparse low degree interpolating
polynomials and their application to derivative-free optimization. Mathematical Programming, 134, 2012.
[9] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning
via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011.
[10] E. Candes and M. B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine,
25, 2008.
[11] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59, 2006.
[12] A. R. Conn, K. Scheinberg, and L. N. Vicente. Derivative Free Optimization. MOS-SIAM Series on
Optimization, 2009.
[13] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52, 2006.
[14] A. H. Gebremedhin, F. Manne, and A. Pothen. What color is your Jacobian? Graph coloring for computing derivatives. SIAM Review, 47(4):629-705, 2005.
[15] G. Golub and C. F. V. Loan. Matrix Computations. Johns Hopkins University Press (4th edition), 2012.
[16] R. M. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and
Information Theory, 2(3), 2006.
[17] A. Griewank and A. Walther. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM, 2008.
[18] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, 1970.
[19] T. Jensen and B. Toft. Graph Coloring Problems. Wiley-Interscience, 1995.
[20] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR,
17(39), 2016.
[21] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement
systems. International Conference on Informatics in Control, Automation and Robotics, 2004.
[22] W. Lin, S. Morgan, J. Yang, and Y. Zhang. Practical compressive sensing with toeplitz and circulant
matrices. Proceedings of SPIE, the International Society for Optical Engineering, 2010.
[23] G. N. Newsam and J. D. Ramsdell. Estimation of sparse Jacobian matrices. SIAM Journal on Algebraic and Discrete Methods, 1983.
[24] H. Rauhut. Circulant and Toeplitz matrices in compressed sensing. SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations, 2010.
[25] R. Tedrake and the Drake Development Team. Drake: A planning, control, and analysis toolbox for
nonlinear dynamical systems, 2016.
[26] B. Toft. Coloring, stable sets and perfect graphs. Handbook of Combinatorics, 1996.
Protein Interface Prediction using Graph Convolutional Networks

Alex Fout*, Jonathon Byrd*, Basir Shariat*, Asa Ben-Hur
Department of Computer Science, Colorado State University, Fort Collins, CO 80525
[email protected], [email protected], [email protected], [email protected]
* denotes equal contribution
Abstract
We consider the prediction of interfaces between proteins, a challenging problem with important applications in drug discovery and design, and examine the
performance of existing and newly proposed spatial graph convolution operators
for this task. By performing convolution over a local neighborhood of a node of
interest, we are able to stack multiple layers of convolution and learn effective
latent representations that integrate information across the graph that represents the three-dimensional structure of a protein of interest. An architecture that combines
the learned features across pairs of proteins is then used to classify pairs of amino
acid residues as part of an interface or not. In our experiments, several graph
convolution operators yielded accuracy that is better than the state-of-the-art SVM
method in this task.
1 Introduction
In many machine learning tasks we are faced with structured objects that can naturally be modeled as
graphs. Examples include the analysis of social networks, molecular structures, knowledge graphs,
and computer graphics to name a few. The remarkable success of deep neural networks in a wide range
of challenging machine learning tasks from computer vision [14, 15] and speech recognition [12] to
machine translation [24] and computational biology [4], has resulted in a resurgence of interest in
this area. This success has also led to the more recent interest in generalizing the standard notion
of convolution over a regular grid representing a sequence or an image, to convolution over graph
structures, making these techniques applicable to the wide range of prediction problems that can be
modeled in this way [8].
In this work we propose a graph convolution approach that allows us to tackle the challenging
problem of predicting protein interfaces. Proteins are chains of amino acid residues that fold into
a three dimensional structure that gives them their biochemical function. Proteins perform their
function through a complex network of interactions with other proteins. The prediction of those
interactions, and the interfaces through which they occur, are important and challenging problems
that have attracted much attention [10]. This paper focuses on predicting protein interfaces. Despite
the plethora of available methods for interface prediction, it has been recently noted that "The field in
its current state appears to be saturated. This calls for new methodologies or sources of information to
be exploited" [10]. Most machine learning methods for interface prediction use hand-crafted features
that come from the domain expert's insight on quantities that are likely to be useful and use standard
machine learning approaches. Commonly used features for this task include surface accessibility,
sequence conservation, residue properties such as hydrophobicity and charge, and various shape
descriptors (see Aumentado-Armstrong et al. [6] for a review of the most commonly used features for this task).
The task of object recognition in images has similarities to interface prediction: Images are represented
as feature values on a 2D grid, whereas the solved crystal structure of a protein can be thought of
as a collection of features on an irregular 3D grid corresponding to the coordinates of its atoms. In
both cases, we are trying to recognize an object within a larger context. This suggests that approaches
that have proven successful in image classification can be adapted to work for protein structures,
and has motivated us to explore the generalization of the convolution operator to graph data. In fact,
several techniques from computer vision have found their way into the analysis of protein structures,
especially methods for locally describing the shape of an object, and various spectral representations
of shape (see e.g. [18, 17]).
In this work we evaluate multiple existing and proposed graph convolution operators and propose an
architecture for the task of predicting interfaces between pairs of proteins using a graph representation
of the underlying protein structure. Our results demonstrate that this approach provides state-of-the-art accuracy, outperforming a recent SVM-based approach [2]. The proposed convolution operators
are not specific to interface prediction. They are applicable to graphs with arbitrary size and structure,
do not require imposing an ordering on the nodes, allow for representing both node and edge features,
and maintain the original graph structure, allowing multiple convolution operations without the need
to downsample the graph. Therefore we expect it to be applicable to a variety of other learning
problems on graphs.
2 Methods for Graph Convolution
In this work we consider learning problems over a collection of graphs where prediction occurs at the
node level. Nodes and edges have features that are associated with them; we denote by $x_i$ the feature vector associated with node $i$ and by $A_{ij}$ the feature vector associated with the edge between nodes $i$ and $j$, where for simplicity we have omitted indexing over graphs.
We describe a framework that allows us to learn a representation of a local neighborhood around each
node in a graph. In the domains of image, audio, or text data, convolutional networks learn local
features by assigning an ordering to pixels, amplitudes, or words based on the structure inherent to
the domain, and associating a weight vector/matrix with each position within a receptive field. The
standard notion of convolution over a sequence (1D convolution) or an image (2D convolution) relies
on having a regular grid with a well-defined neighborhood at each position in the grid, where each
neighbor has a well-defined relationship to its neighbors, e.g. "above", "below", "to the left", "to the
right" in the case of a 2D grid. On a graph structure there is usually no natural choice for an ordering
of the neighbors of a node. Our objective is to design convolution operators that can be applied to
graphs without a regular structure, and without imposing a particular order on the neighbors of a
given node. To summarize, we would like to learn a mapping at each node in the graph which has
the form: $z_i = \sigma_W(x_i, \{x_{n_1}, \dots, x_{n_k}\})$, where $\{n_1, \dots, n_k\}$ are the neighbors of node $i$ that define the receptive field of the convolution, $\sigma$ is a non-linear activation function, and $W$ are its learned
parameters; the dependence on the neighboring nodes as a set represents our intention to learn a
function that is order-independent. We present the following two realizations of this operator, which provide the output of a set of filters in a neighborhood of a node of interest that we refer to as the "center node":
$$z_i = \sigma\Big(W^C x_i + \frac{1}{|N_i|} \sum_{j \in N_i} W^N x_j + b\Big), \qquad (1)$$
where $N_i$ is the set of neighbors of node $i$, $W^C$ is the weight matrix associated with the center node, $W^N$ is the weight matrix associated with neighboring nodes, and $b$ is a vector of biases, one for each filter. The dimensionality of the weight matrices is determined by the dimensionality of the inputs and the number of filters. The computational complexity of this operator on a graph with $n$ nodes, a neighborhood of size $k$, $F_{in}$ input features, and $F_{out}$ output features is $O(k F_{in} F_{out} n)$. Construction of the neighborhood is straightforward using a preprocessing step that takes $O(n^2 \log n)$.

Figure 1: Graph convolution on protein structures. Left: each residue in a protein is a node in a graph, where the neighborhood of a node is the set of neighboring nodes in the protein structure; each node has features computed from its amino acid sequence and structure, and edges have features describing the relative distance and angle between residues. Right: schematic description of the convolution operator, whose receptive field is a set of neighboring residues and which produces an activation associated with the center residue.
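To make Equation (1) concrete, the following NumPy sketch (our own illustration; the dense loop and names are chosen for clarity rather than speed) computes one layer of the order-independent convolution:

import numpy as np

def node_average_conv(X, neighbors, Wc, Wn, b):
    """Equation (1): order-independent graph convolution.

    X         : (n, F_in) node feature matrix.
    neighbors : list where neighbors[i] is an index array giving the
                receptive field N_i of node i.
    Wc, Wn    : (F_in, F_out) weights for the center node and for the
                averaged neighboring nodes.
    b         : (F_out,) bias vector, one entry per filter.
    """
    Z = np.empty((X.shape[0], Wc.shape[1]))
    for i, nbrs in enumerate(neighbors):
        Z[i] = X[i] @ Wc + X[nbrs].mean(axis=0) @ Wn + b
    return np.maximum(Z, 0.0)  # ReLU, the activation used in this paper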
In order to provide for some differentiation between neighbors, we incorporate features on the edges
between each neighbor and the center node as follows:
$$z_i = \sigma\Big(W^C x_i + \frac{1}{|N_i|} \sum_{j \in N_i} W^N x_j + \frac{1}{|N_i|} \sum_{j \in N_i} W^E A_{ij} + b\Big), \qquad (2)$$

where $W^E$ is the weight matrix associated with edge features.
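Extending the previous sketch with the averaged edge term of Equation (2) is straightforward (again our own illustration; the dense edge tensor is an assumption made for simplicity):

import numpy as np

def node_edge_average_conv(X, A, neighbors, Wc, Wn, We, b):
    """Equation (2): Equation (1) plus an averaged edge-feature term.

    A : (n, n, F_edge) edge feature tensor; only entries A[i, j] with
        j in neighbors[i] are used.
    """
    Z = np.empty((X.shape[0], Wc.shape[1]))
    for i, nbrs in enumerate(neighbors):
        Z[i] = (X[i] @ Wc
                + X[nbrs].mean(axis=0) @ Wn
                + A[i, nbrs].mean(axis=0) @ We
                + b)
    return np.maximum(Z, 0.0)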
For comparison with order-independent methods we propose an order-dependent method, where
order is determined by distance from the center node. In this method each neighbor has unique weight
matrices for nodes and edges:
$$z_i = \sigma\Big(W^C x_i + \frac{1}{|N_i|} \sum_{j \in N_i} W_j^N x_j + \frac{1}{|N_i|} \sum_{j \in N_i} W_j^E A_{ij} + b\Big). \qquad (3)$$
Here $W_j^N$ and $W_j^E$ are the weight matrices associated with the $j$-th node and with the edges connecting to the $j$-th node, respectively. This operator is inspired by the PATCHY-SAN method of Niepert et al. [16]. It is
more flexible than the order-independent convolutional operators, allowing the learning of distinctions
between neighbors at the cost of significantly more parameters.
Multiple layers of these graph convolution operators can be used, and this will have the effect
of learning features that characterize the graph at increasing levels of abstraction, and will also
allow information to propagate through the graph, thereby integrating information across regions of
increasing size. Furthermore, these operators are rotation-invariant if the features have this property.
In convolutional networks, inputs are often downsampled based on the size and stride of the receptive
field. It is also common to use pooling to further reduce the size of the input. Our graph operators
on the other hand maintain the structure of the graph, which is necessary for the protein interface
prediction problem, where we classify pairs of nodes from different graphs, rather than entire
graphs. Using convolutional architectures that use only convolutional layers without downsampling is
common practice in the area of graph convolutional networks, especially if classification is performed
at the node or edge level. This practice has support from the success of networks without pooling
layers in the realm of object recognition [23]. The downside of not downsampling is higher memory
and computational costs.
Related work. Several authors have recently proposed graph convolutional operators that generalize
the notion of convolution over a regular grid. Spectral graph theory forms the basis for several of
these methods [8], in which convolutional filters are viewed as linear operators on the eigenvectors
of the graph Laplacian (or an approximation thereof [13]). Our protein dataset consists of multiple
graphs with no natural correspondence to each other, making it difficult to apply methods based
on the graph Laplacian. In what follows we describe several existing spatial graph convolutional
methods, remarking on the aspects which resemble or helped inspire our implementation.
In their Molecular Fingerprint Networks (MFNs), Duvenaud et al. [9] proposed a spatial graph
convolution approach similar to Equation (1), except that they use a single weight matrix for all
nodes in a receptive field and sum the results, whereas we distinguish between the center node and
the neighboring nodes, and we average over neighbors rather than sum over them. Furthermore,
their graphs do not contain edge features, so their convolution operator does not make use of them.
MFNs were designed to generate a feature representation of an entire molecule. In contrast, our node
level prediction task motivates distinguishing between the center node, whose representation is being
computed, and neighboring nodes, which provide information about the local environment of the
node. Averaging is important in our problem to allow for any size of neighborhood.
Schlichtkrull et al. [19] describe Relational Graph Convolutional Networks (RGCNs), which consider
graphs with a large number of binary edge types, where a unique neighborhood is defined by
each edge type. To reduce the total number of model parameters, they employ basis matrices or
block diagonal constraints to introduce shared parameters between the representations of different
edge/neighborhood types. That aspect of the method is not relevant to our problem, and without it,
Equation (1) closely resembles their convolution operator.
Schütt et al. [21] define Deep Tensor Neural Networks (DTNNs) for predicting molecular energies.
This version of graph convolution uses the node and edge information from neighbors to produce an
additive update to the center node:
$$z_i = x_i + \frac{1}{|N_i|} \sum_{j \in N_i} \sigma\big(W \big[(W^N x_j + b^N) \odot (W^E A_{ij} + b^E)\big]\big), \qquad (4)$$
where $\odot$ denotes the elementwise product, $W$, $W^N$, and $W^E$ are weight matrices, and $b^N$ and $b^E$ are bias vectors. Edge information is incorporated similarly to Equation (2), with the difference in how the edge and node signals are combined: their choice being an elementwise product rather than a
sum. Another difference is that DTNN convolution forces the output of a layer to have the same
dimensionality as its input; our approach does not require that, allowing the networks to have varying
numbers of filters across convolutional layers.
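In the same NumPy style, the DTNN update of Equation (4) can be sketched as follows (our own rendering; we use tanh as the nonlinearity, following the original DTNN work, and the weight shapes are assumptions consistent with the equation):

import numpy as np

def dtnn_update(X, A, neighbors, W, Wn, We, bn, be):
    """Equation (4): additive DTNN-style update; the output keeps the
    dimensionality of the input node features.

    Wn : (F_in, F) node projection;  We : (F_edge, F) edge projection;
    W  : (F, F_in) maps the elementwise product back to F_in.
    """
    Z = X.copy()
    for i, nbrs in enumerate(neighbors):
        msg = np.tanh(((X[nbrs] @ Wn + bn) * (A[i, nbrs] @ We + be)) @ W)
        Z[i] = Z[i] + msg.mean(axis=0)
    return Z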
Rather than operate on fixed neighborhoods, Atwood and Towsley [5] take a different spatial convolution approach in their Diffusion-Convolutional Neural Networks (DCNNs), and apply multiple steps
(or "hops") of a diffusion operator that propagates the value of an individual feature across the graph.
A node after k hops will contain information from all nodes that have walks of length k ending at that
node. If X is a data matrix where each row corresponds to a node, and each column to a different
feature, then the representation of X after a k hop convolution is:
$$Z_k = \sigma(w_k P^k X), \qquad (5)$$

where $w_k$ is the $k$-hop vector of weights and $P^k$ is the transition matrix raised to the power $k$. Rather
than stack multiple convolution layers, the authors apply the diffusion operator using multiple hop
numbers. In our work we use this method with an adjacency matrix whose entries are an exponentially
decreasing function of the distance between nodes.
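A sketch of this diffusion operator (our own illustration; for brevity we use one scalar weight per hop, whereas Equation (5) allows a vector $w_k$):

import numpy as np

def diffusion_conv(X, P, w, hops):
    """Equation (5): k-hop diffusion convolution, DCNN-style.

    P : (n, n) transition matrix;  w : (hops,) per-hop weights.
    Returns the hop-wise activations concatenated per node.
    """
    Zs, Pk = [], np.eye(P.shape[0])
    for k in range(hops):
        Pk = Pk @ P  # P raised to the power k + 1
        Zs.append(np.tanh(w[k] * (Pk @ X)))
    return np.concatenate(Zs, axis=1)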
Proteins as graphs. In this work we represent a protein as a graph where each amino acid residue
is a node whose features represent the properties of the residue; the spatial relationships between
residues (distances, angles) are represented as features of the edges that connect them (see Figure 1).
The neighborhood of a node used in the convolution operator is the set of k closest residues as
determined by the mean distance between their atoms. Before going into the details of the node and
edge features we describe the neural network architecture.
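First, though, the neighborhood construction itself is simple enough to sketch (our own NumPy illustration; for simplicity each residue is reduced to one representative coordinate, whereas the paper uses the mean distance between all atom pairs):

import numpy as np

def residue_neighborhoods(coords, k=21):
    """Receptive fields: the k residues closest to each residue by
    Euclidean distance between representative coordinates.
    (k = 21 performed best in the paper's validation experiments.)
    """
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    order = np.argsort(D, axis=1)
    return [row[1:k + 1] for row in order]  # position 0 is the residue itself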
Pairwise classification architecture. In the protein interface prediction problem, examples are
composed of pairs of residues, one from a ligand protein and one from a receptor protein, i.e., our
task is to classify pairs of nodes from two separate graphs representing those proteins. More formally,
our data are a set of $N$ labeled pairs $\{((l_i, r_i), y_i)\}_{i=1}^{N}$, where $l_i$ is a residue (node) in the ligand and $r_i$
is a residue (node) in the receptor protein, and $y_i \in \{-1, 1\}$ is the associated label that indicates if
the two residues are interacting or not. The role of ligand/receptor is arbitrary, so we would like to
learn a scoring function that is independent of the order in which the two residues are presented to
the network. In the context of SVM-based methods this can be addressed using pairwise kernels,
building the invariance into the representation (see e.g. [2]). To create an order-invariant model in a
setting which requires an explicit feature representation, we considered two approaches. One is to
construct explicit features that are order invariant by taking the sum and element-wise products of the
two feature vectors. Note that pairwise kernels implicitly use all products of features, which we avoid
by taking the element-wise product. Another approach is to present each example to the model in both possible orders, $(l_i, r_i)$ and $(r_i, l_i)$, and average the two predictions; the feature representation
of an example is the concatenation of the features of the two residues [3]. In preliminary experiments
both approaches yielded similar results, and our reported results use the latter.
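The averaging scheme can be sketched as follows (our own illustration; score_fn stands for the trained network applied to a concatenated pair representation):

import numpy as np

def order_invariant_score(score_fn, l_repr, r_repr):
    """Average the prediction over both presentation orders so that
    the arbitrary ligand/receptor roles do not affect the score."""
    return 0.5 * (score_fn(np.concatenate([l_repr, r_repr]))
                  + score_fn(np.concatenate([r_repr, l_repr])))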
Our network architecture is composed of two identical "legs" which learn feature representations of
the ligand and receptor proteins of a complex by applying multiple layers of graph convolution to
each. The weights between the two legs are shared. We then merge the legs by concatenating residue
representations together to create the representation of residue pairs. The resulting features are then
passed through one or more fully-connected layers before classification (see Figure 2).

Figure 2: An overview of the pairwise classification architecture. Each neighborhood of a residue in the two proteins is processed using one or more graph convolution layers, with weight sharing between the legs of the network. The activations generated by the convolutional layers are merged by concatenating them, followed by one or more regular dense layers.
3 Experiments
Data. In our experiments we used the data from Version 5 of the Docking Benchmark Dataset,
which is the standard benchmark dataset for assessing docking and interface prediction methods [25].
These complexes are a carefully selected subset of structures from the Protein Data Bank (PDB). The
structures are generated from x-ray crystallography or nuclear magnetic resonance experiments and
contain the atomic coordinates of each amino acid residue in the protein. These proteins range in
length from 29 to 1979 residues with a median of 203.5. For each complex, DBD includes both bound
and unbound forms of each protein in the complex. Our features are computed from the unbound
form since proteins can alter their shape upon binding, and the labels are derived from the structure of
the proteins in complex. As in previous work [2], two residues from different proteins are considered
part of the interface if any non-Hydrogen atom in one is within 6? of any non-Hydrogen atom in the
other when in complex.
5
For our test set we used the 55 complexes that were added since version 4.0 of DBD, and separated
the complexes in DBD 4.0 into training and validation sets. In dividing the complexes into training
and validation we stratified them by difficulty and type using the information provided in DBD.
Because in any given complex there are vastly more residue pairs that don't interact than those that
do, we downsampled the negative examples in the training set to obtain a 10:1 ratio of negative and
positive examples. Final models used for testing were trained using the training and validation data,
with the 10:1 ratio of positive to negative examples. Dataset sizes are shown in Table 1.

Table 1: Number of complexes and examples in the Docking Benchmark Dataset. Positive examples are residue pairs that participate in the interface; negative examples are pairs that do not. For training we downsample the negative examples for an overall ratio of 10:1 of negative to positive examples; in validation and testing all the negative examples are used.

Data Partition | Complexes | Positive examples | Negative examples
Train | 140 | 12,866 (9.1%) | 128,660 (90.9%)
Validation | 35 | 3,138 (0.2%) | 1,874,322 (99.8%)
Test | 55 | 4,871 (0.1%) | 4,953,446 (99.9%)
Node and edge features. Each node and edge in the graph representing a protein has features
associated with it that are computed from the protein?s sequence and structure. For the node features
we used the same features used in earlier work [2], as summarized next. Protein sequence alone
can be a good indicator of the propensity of a residue to form an interface, because each amino acid
exhibits unique electrochemical and geometric properties. Furthermore, the level of conservation
of a residue in alignments against similar proteins also provides valuable information, since surface
residues that participate in an interface tend to be more conserved than surface residues that do not.
The identity and conservation of a residue are quantified by 20 features that capture the relative
frequency of each of the 20 amino acids in alignments to similar proteins. Earlier methods used
these features by considering a window of size 11 in sequence centered around the residue of interest
and concatenating their features [2]. Since we are explicitly representing the structure of a protein,
each node contains only the sequence features of the corresponding residue. In addition to these
sequence-based features, each node contains several features computed from the structure. These
include a residue?s surface accessibility, a measure of its protrusion, its distance from the surface, and
the counts of amino acids within 8Å in two directions: towards the residue's side chain, and in the
opposite direction.
The primary edge feature is based on the distance between two residues, calculated as the average
distance between their atoms. The feature is a Radial Basis Function (RBF) of this distance with
a standard deviation of 18Å (chosen on the validation set). To incorporate information regarding
the relative orientation of two residues, we calculate the angle between the normal vectors of the
amide plane of each residue. Note that DCNNs use residue distances to inform the diffusion process.
For this we used an RBF kernel over the distance, with a standard deviation optimized as part of
the model selection procedure. All node and edge features were normalized to be between 0 and 1,
except the residue conservation features, which were standardized.
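The distance feature can be written compactly (our own sketch; distances and sigma are in Angstroms):

import numpy as np

def rbf_edge_feature(dist, sigma=18.0):
    """Gaussian RBF of the mean inter-atomic distance between two
    residues; sigma = 18 Angstroms was chosen on the validation set."""
    return np.exp(-dist ** 2 / (2.0 * sigma ** 2))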
Training, validation, and testing. The validation set was used to perform an extensive search
over the space of possible feature representations and model hyperparameters, to select the edge
distance feature RBF kernel standard deviation (2 to 32), negative to positive example ratio (1:1 to
20:1), number of convolutional layers (1 to 6), number of filters (8 to 2000), neighborhood size (2 to
26), pairwise residue representation (elementwise sum/product vs concatenation), number of dense
layers after merging (0 to 4), optimization algorithm (stochastic gradient descent, RMSProp, ADAM,
Momentum), learning rate (0.01 to 1), dropout probability (0.3 to 0.8), minibatch size (64 or 128
examples), and number of epochs (50 to 1000). This search was conducted manually and not all
combinations were tested. Automatic model selection as in Bergstra et al. [7] failed to outperform the
best manual search results.
For testing, all classifiers were trained for 80 epochs in minibatches of 128. Weight matrices were
initialized as in He et al. [11] and biases initialized to zero. Rectified Linear Units were employed on
all but the classification layer. During training we performed dropout with probability 0.5 to both
dense and convolutional layers. Negative examples were randomly sampled to achieve a 10:1 ratio
with positive examples, and the weighted cross entropy loss function was used to account for the
class imbalance.
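A sketch of such a weighted loss (our own illustration; the exact weighting used in the paper is not spelled out, so the positive-class weight matching the 10:1 negative:positive ratio is an assumption):

import numpy as np

def weighted_cross_entropy(p, y, pos_weight=10.0, eps=1e-7):
    """Binary cross-entropy with an up-weighted positive class.

    p : predicted probabilities in (0, 1);  y : labels in {0, 1}.
    """
    return -np.mean(pos_weight * y * np.log(p + eps)
                    + (1.0 - y) * np.log(1.0 - p + eps))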
Training was performed using stochastic gradient descent with a learning rate of 0.1. Test results were
computed by training the model on the training and validation sets using the model hyperparameters
that yielded best validation performance. The convolution neighborhood (i.e. receptive field) is
defined as a fixed-size set of residues that are closest in space to a residue of interest, and 21 yielded
the best performance in our validation experiments. We implemented our networks in TensorFlow [1]
v1.0.1 to make use of rapid training on GPUs. Training times vary from roughly 17-102 minutes
depending on convolution method and network depth, using a single NVIDIA GTX 980 or GTX
TITAN X GPU.
Method | 1 layer | 2 layers | 3 layers | 4 layers
No Convolution | 0.812 (0.007) | 0.810 (0.006) | 0.808 (0.006) | 0.796 (0.006)
Diffusion (DCNN) (2 hops) [5] | 0.790 (0.014) | - | - | -
Diffusion (DCNN) (5 hops) [5] | 0.828 (0.018) | - | - | -
Single Weight Matrix (MFN [9]) | 0.865 (0.007) | 0.871 (0.013) | 0.873 (0.017) | 0.869 (0.017)
Node Average (Equation (1)) | 0.864 (0.007) | 0.882 (0.007) | 0.891 (0.005) | 0.889 (0.005)
Node and Edge Average (Equation (2)) | 0.876 (0.005) | 0.898 (0.005) | 0.895 (0.006) | 0.889 (0.007)
DTNN [21] | 0.867 (0.007) | 0.880 (0.007) | 0.882 (0.008) | 0.873 (0.012)
Order Dependent (Equation (3)) | 0.854 (0.004) | 0.873 (0.005) | 0.891 (0.004) | 0.889 (0.008)

Table 2: Median area under the receiver operating characteristic curve (AUC) across all complexes in the test set for various graph convolutional methods. Results shown are the average and standard deviation over ten runs with different random seeds. Networks have the following numbers of filters for 1, 2, 3, and 4 layers before merging, respectively: (256), (256, 512), (256, 256, 512), (256, 256, 512, 512). The exception is the DTNN method, which by necessity produces an output that has the same dimensionality as its input. Unlike the other methods, diffusion convolution performed best with an RBF with a standard deviation of 2Å. After merging, all networks have a dense layer with 512 hidden units followed by a binary classification layer. Bold-faced values indicate best performance for each method.
To determine the best form of graph convolution for protein interface prediction, we implemented the
spatial graph convolution operators described in the Related Work section. The MFN method required
modification to work well in our problem, namely averaging over neighbors rather than summing. For
each graph convolution method, we searched over the hyperparameters listed above using the same
manual search method; for the DCNN this also included the number of hops. Diffusion convolution is
a single layer method as presented in the original publication; and indeed, stacking multiple diffusion
convolutional layers yielded poor results, so testing was conducted using only one layer for that
method.
To demonstrate the effectiveness of graph convolution we examine the effect of incorporating neighbor
information by implementing a method that performs no convolution (referred to as No-Convolution),
equivalent to Equation (1) with no summation over neighbors. The PAIRpred SVM method [2] was
trained by performing five-fold cross-validation on the training and validation data to select the best
kernel and soft margin parameters before evaluating on the test set.
3.1 Results
Results comparing the accuracy of the various graph convolution methods are shown in Table 2. Our
first observation is that the proposed graph convolution methods, with AUCs around 0.89, outperform
the No Convolution method, which had an AUC of 0.81, showing that the incorporation of information
from a residue?s neighbors improves the accuracy of interface prediction. This matches the biological
intuition that the region around a residue should impact its binding affinity. We also observe that the
proposed order-independent methods, with and without edge features (Equations (1) and (2) ) and the
order-dependent method (Equation (3) performed at a similar level, although the order-independent
methods do so with fewer layers and far fewer model parameters than the order-dependent method.
These methods exhibit improvement over the state-of-the-art PAIRPred method which yielded an
AUC of 0.863.
The MFN method, which is a simpler version of the order-independent method given in Equation (1)
performed slightly worse. This method uses the same weight matrix for the center node and its
neighbors, and thereby does not differentiate between them. Its lower performance suggests this
is an important distinction in our problem, where prediction is performed at the node level. This
convolution operator was proposed in the context of a classification problem at the graph level. The
DTNN approach is only slightly below the top performing methods. We have observed that the other
convolutional methods perform better when the number of filters is increased gradually in subsequent
network layers, a feature not afforded by this method.
Among the convolutional methods, the diffusion convolution method (DCNN) performed the worst,
and was similar in performance to the No Convolution method. The other convolution methods
performed best when employing multiple convolutional layers, suggesting that the networks are
Figure 3: PyMOL [20] visualizations of the best performing test complex (PDB ID 3HI6). Upper left:
Ligand (red) and receptor (blue), along with the true interface (yellow). Upper right: Visualization
of predicted scores, where brighter colors (cyan and orange) represent higher scores. Since scores
are for pairs of residues, we take the max score over all partners in the partner protein. Bottom row:
Activations of two filters in the second convolutional layer, where brighter colors indicate greater
activation and black indicates activation of zero. Lower left: A filter which provides high activations
for buried residues, a useful screening criterion for interface detection. Lower right: Filter which
gives high activations for residues near the interface of this complex.
indeed learning a hierarchical representation of the data. However, networks with more than four
layers performed worse, which could be attributed to the relatively limited amount of labeled protein
interface data. Finally, we note that the extreme class imbalance in the test set produces a very poor
area under the precision-recall curve, with no method achieving a value above 0.017.
To better understand the behavior of the best performing convolutional method we visualize the best
performing test complex, PDB ID 3HI6 (see Figure 3). The figure shows that the highest predictions
are in agreement with the true interface. We also visualize two convolutional filters to demonstrate
their ability to learn aspects of the complex that are useful for interface prediction.
4 Conclusions and Future Work
We have examined the performance of several spatial graph convolutional methods in the problem
of predicting interfaces between proteins on the basis of their 3D structure. Neighborhood-based
convolution methods achieved state-of-the-art performance, outperforming diffusion-based convolution and the previous state-of-the-art SVM-based method. Among the neighborhood-based methods,
order-independent methods performed similarly to an order-dependent method, and we identified
elements that are important for the performance of the order-independent methods.
Our experiments did not demonstrate a big difference with the inclusion of edge features. There
were very few of those, and unlike the node features, they were static: our networks learned latent
representations only for the node features. These methods can be extended to learn both node and
edge representations, and the underlying convolution operator admits a simple deconvolution operator
which lends itself to be used with auto-encoders.
CNNs typically require large datasets to learn effective representations. This may have limited the
level of accuracy that we could attain using our purely supervised approach and the relatively small
number of labeled training examples. Unsupervised pre-training would allow us to use the entire
Protein Data Bank which contains close to 130,000 structures (see http://www.rcsb.org/).
The features learned by deep convolutional architectures for image classification have demonstrated a
great degree of usefulness in classification tasks different than the ones they were originally trained on
(see e.g. [22]). Similarly, we expect the convolution operators we propose and the resulting features
to be useful in many other applications, since structure information is useful for predicting a variety
of properties of proteins, including their function, catalytic and other functional residues, prediction
of protein-protein interactions, and protein interactions with DNA and RNA.
In designing our methodology we considered the question of the appropriate level at which to describe
protein structure. In classifying image data, CNNs are usually applied to the raw pixel data [15]. The
analogous level of description for protein structure would be the raw 3D atomic coordinates, which
we thought would prove too difficult. Using much larger training sets and unsupervised learning can
potentially allow the network to begin with features that are closer to the raw atomic coordinates and
learn a more detailed representation of the geometry of proteins.
Acknowledgements
This work was supported by the National Science Foundation under grant no DBI-1564840.
References
[1] Mart?n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow,
Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser,
Manjunath Kudlur, Josh Levenberg, Dan Man?, Rajat Monga, Sherry Moore, Derek Murray,
Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul
Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi?gas, Oriol Vinyals, Pete Warden,
Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale
machine learning on heterogeneous systems, 2015.
[2] Fayyaz ul Amir Afsar Minhas, Brian J. Geiss, and Asa Ben-Hur. PAIRpred: Partner-specific
prediction of interacting residues from sequence and structure. Proteins: Structure, Function,
and Bioinformatics, 82(7):1142-1155, 2014.
[3] Shandar Ahmad and Kenji Mizuguchi. Partner-aware prediction of interacting residues in
protein-protein complexes from sequence data. PLoS One, 6(12):e29104, 2011.
[4] Christof Angermueller, Tanel Pärnamaa, Leopold Parts, and Oliver Stegle. Deep learning for
computational biology. Molecular systems biology, 12(7):878, 2016.
[5] James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in
Neural Information Processing Systems, pages 1993-2001, 2016.
[6] Tristan T Aumentado-Armstrong, Bogdan Istrate, and Robert A Murgita. Algorithmic approaches to protein-protein interaction site prediction. Algorithms for Molecular Biology, 10(1):1-21,
2015.
[7] James S. Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyperparameter optimization. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2546-2554.
Curran Associates, Inc., 2011.
[8] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst.
Geometric deep learning: going beyond Euclidean data. IEEE Sig. Proc. Magazine, 2017.
[9] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel,
Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pages 2224-2232, 2015.
[10] R. Esmaielbeiki, K. Krawczyk, B. Knapp, J.-C. Nebel, and C. M. Deane. Progress and challenges
in predicting protein interfaces. Briefings in Bioinformatics, (January):1-15, 2015.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers:
Surpassing human-level performance on imageNet classification. CoRR, abs/1502.01852, 2015.
[12] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep
Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural
networks for acoustic modeling in speech recognition: The shared views of four research groups.
IEEE Signal Processing Magazine, 29(6):82-97, 2012.
[13] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional
networks. In ICLR, 2017.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep
convolutional neural networks. In Advances in neural information processing systems, pages
1097-1105, 2012.
[15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444,
2015.
[16] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural
networks for graphs. In Proceedings of the 33rd annual international conference on machine
learning. ACM, 2016.
[17] Lee Sael and Daisuke Kihara. Protein surface representation and comparison: New approaches in structural proteomics. Biological Data Mining, pages 89-109, 2009.
[18] Lee Sael, Bin Li, David La, Yi Fang, Karthik Ramani, Raif Rustamov, and Daisuke Kihara.
Fast protein tertiary structure retrieval based on global surface shape similarity. Proteins,
72(4):1259-1273, 2008.
[19] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and
Max Welling. Modeling relational data with graph convolutional networks. arXiv preprint
arXiv:1703.06103, 2017.
[20] Schrödinger, LLC. The PyMOL molecular graphics system, version 1.8. November 2015.
[21] Kristof T Schütt, Farhad Arbabzadah, Stefan Chmiela, Klaus R Müller, and Alexandre Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8:13890, 2017.
[22] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features
off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition Workshops, pages 806-813, 2014.
[23] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving
for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
[24] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural
networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.
[25] Thom Vreven, Iain H Moal, Anna Vangone, Brian G Pierce, Panagiotis L Kastritis, Mieczyslaw
Torchala, Raphael Chaleil, Brian Jiménez-García, Paul A Bates, Juan Fernandez-Recio, et al.
Updates to the integrated protein?protein interaction benchmarks: docking benchmark version
5 and affinity benchmark version 2. Journal of Molecular Biology, 427(19):3031-3041, 2015.
6,890 | 7,232 | Solid Harmonic Wavelet Scattering: Predicting
Quantum Molecular Energy from Invariant
Descriptors of 3D Electronic Densities
Michael Eickenberg
Department of computer science
École normale supérieure
PSL Research University, 75005 Paris, France
[email protected]
Georgios Exarchakis
Department of computer science
École normale supérieure
PSL Research University, 75005 Paris, France
[email protected]
Matthew Hirn
Department of Computational Mathematics,
Science and Engineering;
Department of Mathematics
Michigan State University
East Lansing, MI 48824, USA
[email protected]
Stéphane Mallat
Collège de France
École Normale Supérieure
PSL Research University
75005 Paris, France
Abstract
We introduce a solid harmonic wavelet scattering representation, invariant to
rigid motion and stable to deformations, for regression and classification of 2D
and 3D signals. Solid harmonic wavelets are computed by multiplying solid
harmonic functions with Gaussian windows dilated at different scales. Invariant
scattering coefficients are obtained by cascading such wavelet transforms with
the complex modulus nonlinearity. We study an application of solid harmonic
scattering invariants to the estimation of quantum molecular energies, which are
also invariant to rigid motion and stable with respect to deformations. A multilinear
regression over scattering invariants provides close to state of the art results over
small and large databases of organic molecules.
1 Introduction
Deep convolutional neural networks provide state of the art results over most classification and
regression problems when there is enough training data. The convolutional architecture builds a
representation which translates when the input is translated. It can compute invariants to translations
with a global spatial pooling operator such as averaging or max pooling. A major issue is to understand
if one can reduce the amount of training data, by refining the architecture or specifying network
weights, from prior information on the classification or regression problem. Beyond translation
invariance, such prior information can be provided by invariance over other known groups of
transformations.
This paper studies the construction of generic translation and rotation invariant representations for
any 2D and 3D signals, and their application. Rotation invariant representations have been developed
for 2D images, for instance in [20], where a descriptor based on oriented wavelets was used to create
a jointly translation and rotation-invariant representation of texture images which retained all identity
information necessary for classification. These representations have not been extended to 3D because
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
an oriented wavelet representation in 3D requires covering the unit sphere instead of the unit circle
leading to much heavier computational requirements.
Section 2 introduces a 2D or 3D rotation invariant representation calculated with a cascade of
convolutions with spherical harmonic wavelets, and modulus non-linearities. Invariance to rotations
results from specific properties of spherical harmonics, which leads to efficient computations. A
wavelet scattering can be implemented as a deep convolutional network where all filters are predefined
by the wavelet choice [13]. In that case, prior information on invariants fully specifies the network
weights. Besides translation and rotation invariance, such scattering representations linearize small
deformations. Invariants to small deformations are thus obtained with linear operators applied to
scattering coefficients, and scattering coefficients can provide accurate regressions of functions which
are stable to deformations.
Translation and rotation invariance is often encountered in physical functionals. For example energies
of isolated physical systems are usually translation and rotation invariant, and are stable to small
deformations. This paper concentrates on applications to computations of quantum energies of organic
molecules. Computing the energy of a molecule given the charges and the relative positions of the
nuclei is a fundamental topic in computational chemistry. It has considerable industrial applications,
for example to test and design materials or pharmaceuticals [4]. Density functional theory is currently
the most efficient numerical technique to compute approximate values of quantum energies, but it
requires considerable amounts of calculations which limit the size of molecules and the number
of tests. Machine learning methods have gained traction to estimate quantum molecular energies
from existing quantum chemistry databases, because they require much less computation time after
training.
State of the art learning approaches have been adapted to the specificities of the underlying physics.
Best results on large databases are obtained with deep neural networks whose architectures are
tailored to this quantum chemistry problem. Numerical experiments in Section 4 show that applying
a standard multilinear regression on our generic 3D invariant solid harmonic scattering representation
yields nearly state of the art results compared to all methods, including deep neural networks, and on
both small and large databases.
2 Solid harmonic wavelet scattering
Wavelet scattering transforms have been introduced to define representations which are invariant
to translations and Lipschitz continuous to deformations [12]. In two dimensions they have been
extended to define rotationally invariant representations [20] but in 3D this approach requires covering
the unit sphere with multiple oriented wavelets (as opposed to the unit circle in 2D), which requires
too much computation. This section introduces a solid harmonic wavelet scattering transform whose
rotation invariance results from symmetries of solid harmonics. In contrast to oriented wavelets,
every solid harmonic wavelet can yield its own rotation invariant descriptor because it operates in a
rotational frequency space.
2.1 Solid harmonics in 2D and 3D
Solid harmonics are solutions of the Laplace equation $\Delta f = 0$, usually expressed in spherical coordinates, where the Laplacian is the sum of unmixed second derivatives. In 2D, interpreting $\mathbb{R}^2$ as the complex plane, we find that $z \mapsto z^\ell$ is a solution for all $\ell \in \mathbb{N}$ due to its holomorphicity¹. Expressing this solution in polar coordinates gives $(r, \varphi) \mapsto r^\ell e^{i\ell\varphi}$, revealing an $\ell$th-order polynomial in radius and a so-called circular harmonic with $\ell$ angular oscillations per circle.
Solving the Laplace equation in 3D spherical coordinates $(r, \theta, \varphi)$ gives rise to spherical harmonics, the eigenvectors of the Laplacian on the sphere. Imposing separability of azimuthal and elevation contributions yields the functions $Y_\ell^m(\theta, \varphi) = C(\ell, m) P_\ell^m(\cos\theta)\, e^{im\varphi}$, where $P_\ell^m$ is an associated Legendre polynomial and $C(\ell, m) = \sqrt{\frac{(2\ell+1)(\ell-m)!}{4\pi(\ell+m)!}}$, for $\ell \ge 0$ and $-\ell \le m \le \ell$. They form an orthogonal basis of $L^2$ functions on the sphere. Analogously to the 2D case, 3D solid harmonics are then defined as
$$(r, \theta, \varphi) \mapsto \sqrt{\frac{4\pi}{2\ell + 1}}\, r^\ell Y_\ell^m(\theta, \varphi).$$
¹Real and imaginary parts of holomorphic functions are harmonic - their Laplacian is 0.
2.2 Solid harmonic wavelets
We now define solid harmonic wavelets in 2D and 3D. A wavelet $\psi(u)$ is a spatial filter with zero sum, which is localized around the origin in the sense that it has a fast decay along $\|u\|$. Let $\psi_j(u) = 2^{-dj}\psi(2^{-j}u)$ be a normalized dilation of $\psi$ by $2^j$ in dimension $d$. A multiscale wavelet transform of a signal $\rho(u)$ computes convolutions with these dilated wavelets at all scales $2^j$ to obtain the set of wavelet coefficients $\{\rho \star \psi_j(u)\}_{j \in \mathbb{Z}}$. They are translation covariant. Let us denote by $\hat\psi(\omega)$ the Fourier transform of $\psi(u)$. The Fourier transforms of these convolutions are $\hat\rho(\omega)\hat\psi(2^j\omega)$, which yields fast computational algorithms using FFTs.
A wavelet is defined from a solid harmonic by multiplying it by a Gaussian, which localizes its support. In the 2D case we obtain the following family of wavelets:
$$\psi_\ell(r, \varphi) = \frac{1}{\sqrt{(2\pi)^2}}\, e^{-\frac{1}{2}r^2}\, r^\ell e^{i\ell\varphi}.$$
For $\ell > 0$, these functions have zero integrals and are localized around the origin. In 2D frequency polar coordinates $\omega = \omega(\cos\alpha, \sin\alpha)^T$, one can verify that the Fourier transform of this solid harmonic wavelet is very similar to itself in signal space: $\hat\psi_\ell(\omega, \alpha) = (-i)^\ell e^{-\frac{1}{2}\omega^2}\omega^\ell e^{i\ell\alpha}$. The solid harmonic wavelet transform inherits the rotation properties of the solid harmonics.
In 2D, the rotation of a solid harmonic incurs a complex phase shift. Let $R_\theta \in SO(2)$ be a rotation of angle $\theta$. We first observe that
$$R_\theta \psi_{j,\ell}(r, \varphi) = \psi_{j,\ell}(r, \varphi - \theta) = e^{-i\ell\theta}\psi_{j,\ell}(r, \varphi).$$
One can derive that rotating a signal $\rho$ produces the same rotation on its wavelet convolution, multiplied by a phase factor encoding the rotational angle: $R_\theta\rho \star \psi_{j,\ell}(u) = e^{i\ell\theta} R_\theta(\rho \star \psi_{j,\ell})(u)$. If we eliminate the phase with a modulus $U[j, \ell]\rho(u) = |\rho \star \psi_{j,\ell}(u)|$, then it becomes covariant to rotations:
$$U[j, \ell]\, R_\theta\rho(u) = R_\theta\, U[j, \ell]\rho(u).$$
The left of Figure 1 shows the real part of 2D solid harmonic wavelets at different scales and angular frequencies.
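As an editorial illustration, the 2D wavelet modulus defined above can be computed in a few lines of NumPy. This is a minimal sketch rather than the authors' reference implementation: the grid size, filter centering, and sampling conventions are assumptions made for readability.

# Sketch of psi_{j,l} on an n x n grid and of U[j, l] rho = |rho * psi_{j,l}|.
# Grid handling and normalization conventions are illustrative assumptions.
import numpy as np

def solid_harmonic_wavelet_2d(n, j, l):
    """Samples psi_{j,l}(u) = 2^{-2j} psi_l(2^{-j} u) on a centered n x n grid."""
    coords = np.arange(n) - n // 2
    x, y = np.meshgrid(coords, coords, indexing="ij")
    r = np.sqrt(x**2 + y**2) * 2.0**(-j)        # dilated radius 2^{-j}|u|
    phi = np.arctan2(y, x)                       # polar angle
    psi = np.exp(-0.5 * r**2) * r**l * np.exp(1j * l * phi) / (2 * np.pi)
    return psi * 2.0**(-2 * j)                   # dilation factor 2^{-dj}, d = 2

def wavelet_modulus_2d(rho, j, l):
    """U[j, l] rho computed as an FFT-based circular convolution."""
    psi = solid_harmonic_wavelet_2d(rho.shape[0], j, l)
    conv = np.fft.ifft2(np.fft.fft2(rho) * np.fft.fft2(np.fft.ifftshift(psi)))
    return np.abs(conv)

Rotating the input rho rotates wavelet_modulus_2d(rho, j, l) accordingly, which is the covariance property stated above.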
In 3D, solid harmonic wavelets are defined by
$$\psi_{\ell,m}(r, \theta, \varphi) = \frac{1}{\sqrt{(2\pi)^3}}\, e^{-\frac{1}{2}r^2}\, r^\ell Y_\ell^m(\theta, \varphi).$$
We write $\psi_{\ell,m,j}$ for its dilation by $2^j$. Let us write $\omega$ in 3D polar coordinates: $\omega = \omega(\cos\alpha\cos\beta, \cos\alpha\sin\beta, \sin\alpha)^T$. The Fourier transform of the wavelet has the same analytical expression up to a complex factor: $\hat\psi_{\ell,m}(\omega, \alpha, \beta) = (-i)^\ell e^{-\frac{1}{2}\omega^2}\omega^\ell Y_\ell^m(\alpha, \beta)$. The 3D covariance to rotations is more involved. The asymmetry of the azimuthal and elevation components of the spherical harmonics requires them to be treated differently. In order to obtain a rotation covariance property, it is necessary to sum the energy over all indices $m$ for a fixed $\ell$. We thus define the wavelet modulus operator of a 3D signal $\rho(u)$ by
$$U[j, \ell]\rho(u) = \left( \sum_{m=-\ell}^{\ell} |\rho \star \psi_{\ell,m,j}(u)|^2 \right)^{1/2}.$$
Using the properties of spherical harmonics, one can prove that this summation over $m$ defines a wavelet transform modulus which is covariant to 3D rotations. For a general rotation $R \in SO(3)$,
$$U[j, \ell]\, R\rho = R\, U[j, \ell]\rho.$$
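A corresponding 3D sketch, assuming a cubic grid and using scipy.special.sph_harm for $Y_\ell^m$, is given below. Note that scipy's sph_harm(m, l, theta, phi) takes the azimuthal angle first and the polar angle second; this convention, and the grid handling, are assumptions of this illustration rather than the authors' code.

# Sketch of U[j, l] rho = (sum_m |rho * psi_{l,m,j}|^2)^{1/2} on a cubic grid.
import numpy as np
from scipy.special import sph_harm

def solid_harmonic_wavelet_3d(n, j, l, m):
    coords = (np.arange(n) - n // 2) * 2.0**(-j)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    polar = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))
    azimuth = np.arctan2(y, x)
    psi = np.exp(-0.5 * r**2) * r**l * sph_harm(m, l, azimuth, polar)
    return psi * 2.0**(-3 * j) / np.sqrt((2 * np.pi)**3)

def wavelet_modulus_3d(rho, j, l):
    energy = np.zeros(rho.shape)
    rho_hat = np.fft.fftn(rho)
    for m in range(-l, l + 1):                  # sum energy over m for fixed l
        psi = solid_harmonic_wavelet_3d(rho.shape[0], j, l, m)
        conv = np.fft.ifftn(rho_hat * np.fft.fftn(np.fft.ifftshift(psi)))
        energy += np.abs(conv)**2
    return np.sqrt(energy)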
2.3 Solid harmonic scattering invariants
We showed that the wavelet modulus $U[j, \ell]\rho$ is covariant to translations and rotations in 2D and 3D. Summing these coefficients over the spatial variable $u$ thus defines a translation and rotation invariant representation. This property remains valid under pointwise transformations, e.g. if we raise the modulus coefficients to any power $q$. Since $U[j, \ell]\rho(u)$ is obtained by a wavelet scaled by $2^j$, it is a smooth function and its integral can be computed by subsampling $u$ at intervals $2^{j-\alpha}$, where $\alpha$ is an oversampling factor typically equal to 1, to avoid aliasing. First order solid harmonic scattering coefficients in 2D and 3D are defined for any $(j_1, \ell)$ and any exponent $q$ by:
$$S[j_1, \ell, q]\rho = \sum_u \left( U[j_1, \ell]\rho(2^{j_1-\alpha}u) \right)^q.$$
Translating or rotating $\rho$ does not modify $S[j_1, \ell, q]\rho$. Let $J > 0$ denote the number of scales $j_1$, and $L > 0$ the number of angular oscillations $\ell$. We choose $q \in Q = \{1/2, 1, 2, 3, 4\}$, which yields $|Q|JL$ invariant coefficients.
The summation eliminates the variability of $U[j_1, \ell]\rho(u)$ along $u$. To avoid losing too much information, a scattering transform retransforms this function along $u$ in order to capture the lost variabilities. This is done by calculating a convolution with a second family of wavelets at different scales $2^{j_2}$ and again computing a modulus in order to obtain coefficients which remain covariant to translations and rotations. This means that $U[j_1, \ell]\rho(u)$ is retransformed by the wavelet transform modulus operator $U[j_2, \ell]$. Clearly $U[j_2, \ell]\, U[j_1, \ell]\rho(u)$ is still covariant to translations and rotations of $\rho$, since $U[j_1, \ell]$ and $U[j_2, \ell]$ are covariant to translations and rotations.
The variable $u$ is again subsampled at intervals $2^{j_2-\alpha}$ with an oversampling factor $\alpha$ adjusted to eliminate the aliasing. Second order scattering invariants are computed by summing over the subsampled spatial variable $u$:
$$S[j_1, j_2, \ell, q]\rho = \sum_u \left( U[j_2, \ell]\, U[j_1, \ell]\rho(2^{j_2-\alpha}u) \right)^q.$$
These coefficients are computed only for $j_2 > j_1$ because one can verify [12] that the amplitude of these invariant coefficients is negligible for $j_2 \le j_1$. The total number of computed second order invariants is thus $|Q|LJ(J-1)/2$.
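The invariants then reduce to spatial sums of powers of these modulus maps. The sketch below assumes a wavelet_modulus(x, j, l) helper such as the ones above, and omits the $2^{j-\alpha}$ subsampling, which only reduces the cost of the spatial sums.

import numpy as np

Q = [0.5, 1, 2, 3, 4]

def scattering_invariants(rho, J, L, wavelet_modulus):
    coeffs = []
    for l in range(L):
        first = [wavelet_modulus(rho, j1, l) for j1 in range(J)]
        for j1 in range(J):
            coeffs += [np.sum(first[j1]**q) for q in Q]        # S[j1, l, q]
            for j2 in range(j1 + 1, J):                        # only j2 > j1
                second = wavelet_modulus(first[j1], j2, l)
                coeffs += [np.sum(second**q) for q in Q]       # S[j1, j2, l, q]
    return np.array(coeffs)

The returned vector has |Q|JL first order and |Q|LJ(J-1)/2 second order entries, matching the counts given above.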
In the following, we shall write $S\rho = \{S[p]\rho\}_p$ for the scattering representation of $\rho$, defined by the indices $p = (j_1, \ell, q)$ and $p = (j_1, j_2, \ell, q)$. These coefficients are computed with iterated convolutions with wavelets, modulus non-linearities, and averaging. It is proved in [13] that such wavelet convolutions and non-linearities can be implemented with a deep convolutional network, whose filters depend upon the wavelets and whose depth $J$ is the maximum scale index of all wavelets $j_1 < j_2 \le J$.
Besides translation and rotation invariance, one can prove that a scattering transform is Lipschitz continuous to deformations [12]. This means that if $\rho(u)$ is deformed by a small (in maximum gradient norm) diffeomorphism applied to $u$, then the scattering vector stays within an error radius proportional to the size of the diffeomorphism. This property is particularly important to linearly regress functions which are also stable to deformations.
3 Solid harmonic scattering for quantum energy regression
We study the application of solid harmonic scattering invariants to the regression of quantum molecular
energies. The next section introduces the translation and rotation invariance properties of these
energies.
3.1 Molecular regression invariances
A molecule containing K atoms is entirely defined by its nuclear charges $z_k$ and its nuclear position vectors $r_k$ indexed by $k$. Denoting by $x$ the state vector of a molecule, we have
$$x = \{(r_k, z_k) \in \mathbb{R}^3 \times \mathbb{R} : k = 1, \ldots, K\}.$$
The ground-state energy of a molecule has the following invariance properties outlined in [1]:
Invariance to permutations Energies do not depend on the indexing order k of the nuclei;
Isometry invariance Energies are invariant to rigid translations, rotations, and reflections of the molecule and hence of the $r_k$;
Deformation stability The energy is Lipschitz continuous with respect to scaling of distances between atoms.
Multiscale interactions The energy has a multiscale structure, with highly energetic bonds between neighboring atoms, and weaker interactions at larger distances, such as Van-der-Waals interactions.
To regress quantum energies, a machine learning representation must satisfy the same invariance and stability properties while providing a set of descriptors which is rich enough to accurately estimate the atomization energy of a diverse collection of molecules.
A rotation invariant scattering transform has been proposed to regress quantum energies of planar molecules [9]. However, this approach involves too much computation in 3D because it requires a large number of oriented wavelets to cover the 3D sphere. The following sections explain how to regress the energies of 3D molecules from a spherical harmonic scattering.
3.2 Scattering transform of an electronic density
Density Functional Theory computes molecular energies by introducing an electronic density $\rho(u)$ which specifies the probability density of presence of an electron at a point $u$. Similarly, we associate to the state vector $x$ of the molecule a naive electronic density $\rho$ which is a sum of Gaussian densities centered on each nucleus. This density incorporates no information on chemical bonds that may arise in the molecule. For K atoms placed at $\{r_k\}_{k=1}^K$ having charges $\{z_k\}_{k=1}^K$, the resulting density is
$$\rho_x(r) = \sum_{k=1}^{K} c(z_k)\, g(r - r_k),$$
where $g$ is a Gaussian, roughly representing an electron density localized around the nucleus, and $c(z_k)$ is a vector-valued "electronic channel". It encodes different aspects of the atomic structure. We shall use three channels: the total nuclear charge $z_k$ of the atom, the valence electronic charge $v_k$ which specifies the number of electrons which can be involved in chemical bonds, and the core electronic charge $z_k - v_k$. It results that $c(z_k) = (z_k, v_k, z_k - v_k)^T$. The molecule embedding verifies
$$\int \rho_x(u)\, du = \sum_k (z_k, v_k, z_k - v_k)^T.$$
This integral gives the total number of nuclear charges and valence and core electrons. This naive density is invariant to permutations of atom indices $k$.
The density $\rho_x$ is invariant to permutations of atom indices but it is not invariant to isometries and it cannot separate multiscale interactions. These missing invariances and the separation of scales into different channels are obtained by computing its scattering representation $S\rho_x$ with solid harmonic wavelets.
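For concreteness, the naive density embedding can be sketched as follows; the grid extent, resolution, Gaussian width, and the externally supplied valence values are illustrative assumptions.

import numpy as np

def density_embedding(positions, charges, valences, n=64, extent=10.0, sigma=0.5):
    """positions: (K, 3) nuclei coordinates; charges, valences: length-K arrays."""
    axis = np.linspace(-extent / 2, extent / 2, n)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    rho = np.zeros((n, n, n, 3))                       # channels (z, v, z - v)
    for rk, zk, vk in zip(positions, charges, valences):
        g = np.exp(-np.sum((grid - rk)**2, axis=-1) / (2 * sigma**2))
        g /= (2 * np.pi * sigma**2)**1.5               # unit-mass Gaussian
        rho += g[..., None] * np.array([zk, vk, zk - vk])
    return rho

Summing rho over the grid (times the voxel volume) recovers the per-channel totals stated above.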
In Figure 1, there is an example of a 2D solid harmonic wavelet modulus $U[j, \ell]\rho_x$ for one molecule at different scales and angular frequencies.
3.3 Multilinear regression
Molecular energies are regressed with multilinear combinations of scattering coefficients $S\rho_x[p]$. A multilinear regression of order $r$ is defined by:
$$\tilde{E}_r(\rho_x) = b + \sum_i \alpha_i \prod_{j=1}^{r} \left( \langle S\rho_x, w_i^{(j)} \rangle + c_i^{(j)} \right).$$
For $r = 1$ this is a standard linear regression. For $r = 2$ this form introduces a non-linearity similar to those found in factored gated autoencoders [14]. Trilinear regressions for $r = 3$ are also used.
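A minimal NumPy sketch of the order-r prediction reads as follows; the parameter shapes are assumptions, and the parameters would be fit by stochastic gradient descent on the squared loss given below.

import numpy as np

def multilinear_predict(s, b, alpha, W, c):
    """s: scattering vector (d,); alpha: (E,); W: (E, r, d); c: (E, r)."""
    inner = np.einsum("erd,d->er", W, s) + c    # <S rho_x, w_i^(j)> + c_i^(j)
    return b + np.sum(alpha * np.prod(inner, axis=1))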
Figure 1: Left: Real parts of 2D solid harmonic wavelets $\psi_{\ell,j}(u)$. The parameter $\ell$ increases from 0 to 4 vertically, whereas the scale $2^j$ increases from left to right. Cartesian slices of 3D spherical harmonic wavelets yield similar patterns. Right: Solid harmonic wavelet moduli $S[j, \ell, 1](\rho_x)(u) = |\rho_x \star \psi_{j,\ell}|(u)$ of a molecule $\rho_x$. The interference patterns at the different scales are reminiscent of molecular orbitals obtained in e.g. density functional theory.
Figure 2: Mean absolute error (MAE) on the validation set as a function of the number of training points used. We observe a fast drop to low estimation errors with as few as 2000 training examples. While it is still always better to sample more of chemical space, this shows that the representation carries useful information easily amenable to further analysis, while keeping sufficient complexity to benefit from more datapoints when they are available.
Here we extend the interactions to an arbitrary number of multiplicative factors. We optimize the parameters of the multilinear model by minimizing a quadratic loss function
$$L(y, \rho_x) = (y - \tilde{E}_r(\rho_x))^2$$
using the Adam algorithm for stochastic gradient descent [11]. The model described above is non-linear in the parameter space and therefore it is reasonable to assume that stochastic gradient descent will converge to a local optimum. We find that we can mitigate the effects of local optimum convergence by averaging the predictions of multiple models trained with different initializations².
4 Numerical Experiments on Chemical Databases
Quantum energy regressions are computed on two standard datasets: QM7 (GDB7-12) [18] has 7165 molecules of up to 23 atoms among H, C, O, N and S, and QM9 (GDB9-14) [17] has 133885 molecules of up to 29 atoms among H, C, O, N and F. We first review results of existing machine learning algorithms before giving results obtained with the solid harmonic scattering transform.
²For implementation details see http://www.di.ens.fr/data/software/
4.1 State of the art algorithms
Tables 1 and 2 give the mean absolute error for each algorithm described below. The first machine learning approaches for quantum energy regression were based on kernel ridge regression algorithms, optimized with different types of kernels. Kernels were first computed with Coulomb matrices, which encode pairwise nucleus-nucleus repulsion forces for each molecule [18, 15, 8, 16]. Coulomb matrices are not invariant to permutations of indices of atoms in the molecules, which leads to regression instabilities. Improvements have been obtained with bag-of-bonds descriptors [7], which group matrix entries according to bond type, or with fixed-length smooth bond-distance histograms [2]. The BAML method (Bonds, Angles, etc., and machine learning) [10] refines the kernel by collecting atomic information, bond information, bond angle information and bond torsion information. The HDAD (Histograms of Distances, Angles, and Dihedral angles) kernels [5] improve results by computing histograms of these quantities. Smooth overlap of atomic positions (SOAP) kernels [3] can also obtain precise regression results with local descriptors computed with spherical harmonics. They are invariant to translations and rotations. However, these kernels only involve local interactions, and regression results thus degrade in the presence of large-scale interactions.
Deep neural networks have also been optimized to estimate quantum molecular energies. They hold the state of the art on large databases, as shown in Tables 1 and 2. Deep tensor networks [19] combine pairwise distance matrix representations in a deep learning architecture. MPNN (Message Passing Neural Networks) learns a neural network representation of the molecules represented as bond graphs. It obtains the best results on the larger QM9 database.
4.2 Solid harmonic scattering results
We performed rigid affine coordinate transforms to align each molecule with its principal axes, making it possible to fit every molecule in a box with one long side length and two shorter ones. The Gaussian width of the electronic embedding is adjusted so that Gaussians located around the two atoms with minimal distance do not overlap too much. In all computations, the sampling grid is adjusted to keep aliasing errors negligible. Scattering vectors are standardized to have zero mean and unit variance before computing the multilinear regression.
QM7 Scattering vectors are computed with L = 5. We estimated quantum energies with a linear ridge regression from scattering coefficients. The dataset comes with a split into 5 folds, where the energy properties are approximately stratified. The average of the mean absolute error (MAE) over the 5 folds is 2.4 kcal/mol. This shows that scattering coefficients are sufficiently discriminative to obtain competitive results with a linear regression.
Bilinear regressions involve more parameters and provide near state of the art performance. We average 5 differently initialized models over the 5 folds to obtain a mean absolute error of 1.2 kcal/mol.
Figure 2 evaluates the performance of the bilinear regression on invariant scattering descriptors. From as few as 2000 training samples onward, the test set error drops below 3 kcal/mol, indicating that the invariant representation gives immediate access to relevant molecular properties. The fact that we observe improvement with larger data samples means that the representation also exhibits sufficient flexibility to accommodate relevant information from larger sweeps over chemical space.
QM9 Scattering vectors are computed with L = 2. Quantum energies were estimated from scattering vectors with linear, bilinear and trilinear regressions. For cross-validation, the dataset is split into 5 folds, where the energy properties are approximately stratified. The average mean absolute error (MAE) with a trilinear regression across the 5 folds is 0.55 kcal/mol.
4.3 Discussion
The solid harmonic scattering transform followed by a multilinear regression is a domain-agnostic regression scheme which only relies on prior knowledge of translation and rotation invariance as well as deformation stability. However, it leads to close to state of the art results on each database.
Table 1: Mean Absolute Error in kcal/mol of quantum energy regression in QM7 for different algorithms. (RSCM: Random Sorted Coulomb Matrix [8], BoB: Bag of Bonds [7], SOAP: smooth overlap of atomic positions [3], DTN: deep tensor networks [19], CBoB: Continuous bag of bonds [2], L-Scat: Linear regression on scattering invariants, B-Scat: Bilinear regression on scattering invariants)

QM7 | RSCM | BoB | SOAP | DTN | CBoB | L-Scat | B-Scat
MAE | 3.1  | 1.5 | 0.9  | 1   | 1.2  | 3.8    | 1.2
Table 2: QM9 regression results. (HDAD: Histograms of Distances, Angles and Dihedral Angles [5], BAML: Bonds, Angles and Machine Learning [10], RSCM: Random Sorted Coulomb Matrices, BoB: Bags of Bonds, DTN: Deep Tensor Networks, MPNN: Message Passing Neural Networks [6], T-Scat: Trilinear regression on scattering invariants)

QM9 | HDAD | BAML | CM   | BoB  | DTN  | MPNN | T-Scat
MAE | 0.59 | 1.20 | 2.97 | 1.42 | 0.84 | 0.44 | 0.55
The size of a scattering descriptor set grows logarithmically with the maximum number of atoms in
the molecule (with increasing molecule size one continues to add scales to the wavelet transform,
which adds logarithmically many coefficients) as opposed to most other methods such as [3] whose
descriptor size grows linearly in the number of atoms in the molecule. Indeed, these techniques are
based on measurements of local individual interactions within neighborhoods of atoms.
The representation splits the information across scales and provides scale interaction coefficients
which can be related to physical phenomena as opposed to millions of deep neural net weights
which are difficult to interpret. Introducing multilinear regression between solid harmonic wavelet
invariants further improves the performance on the energy regression task, achieving near state of the
art performance. This may also be related to multilinear expansions of physical potentials.
It is important to issue a word of caution on the chemical interpretation of these algorithmic regressions. Indeed, all databases are computed with DFT numerical codes, which only approximate
the energy. For the QM9 database, validation errors are on average 5 kcal/mol [17] on calculated
energies compared to true chemical energies of ground state molecules. Refined results of fractions
of kcal/mol thus no longer add true chemical information but rather reflect the ability to estimate the
values produced by DFT numerical codes.
5 Conclusion
We introduced a 2D and 3D solid harmonic wavelet scattering transform which is invariant to
translations and rotations and stable to deformations. It is computed with two successive convolutions
with solid harmonic wavelets and complex modulus. Together with multilinear regressions, this
representation provides near state of the art results for estimation of quantum molecular energies.
The same representation is used for small and large databases. The mathematical simplicity of these descriptors opens the possibility to relate these regressions to multiscale properties of quantum chemical interactions.
Acknowledgements
M.E., G.E. and S.M. are supported by ERC grant InvariantClass 320959; M.H. is supported by the
Alfred P. Sloan Fellowship, the DARPA YFA, and NSF grant 1620216.
References
[1] Albert P. Bartók, Risi Kondor, and Gábor Csányi. On representing chemical environments. Physical Review B, 87(18), May 2013.
[2] Christopher R. Collins, Geoffrey J. Gordon, O. Anatole von Lilienfeld, and David J. Yaron.
Constant size molecular descriptors for use with machine learning. arXiv, 2017.
[3] Sandip De, Albert P. Bartók, Gábor Csányi, and Michele Ceriotti. Comparing molecules and solids across structural and alchemical space. Phys. Chem. Chem. Phys., 18(20):13754-13769, 2016.
[4] Peter Deglmann, Ansgar Schäfer, and Christian Lennartz. Application of quantum calculations in the chemical industry - an overview. International Journal of Quantum Chemistry, 115(3):107-136, 2014.
[5] Felix A. Faber, Luke Hutchison, Bing Huang, Justin Gilmer, Samuel S. Schoenholz, George E.
Dahl, Oriol Vinyals, Steven Kearnes, Patrick F. Riley, and O. Anatole von Lilienfeld. Prediction
errors of molecular machine learning models lower than hybrid DFT error. Journal of Chemical Theory and Computation. PMID: 28926232.
[6] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl.
Neural message passing for quantum chemistry. CoRR, abs/1704.01212, 2017.
[7] Katja Hansen, Franziska Biegler, Raghunathan Ramakrishnan, Wiktor Pronobis, O. Anatole
von Lilienfeld, Klaus-Robert Müller, and Alexandre Tkatchenko. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space. The Journal of Physical Chemistry Letters, 6(12):2326-2331, 2015. PMID: 26113956.
[8] Katja Hansen, Grégoire Montavon, Franziska Biegler, Siamac Fazli, Matthias Rupp, Matthias Scheffler, O. Anatole von Lilienfeld, Alexandre Tkatchenko, and Klaus-Robert Müller. Assessment and validation of machine learning methods for predicting molecular atomization energies. Journal of Chemical Theory and Computation, 9(8):3404-3419, 2013.
[9] Matthew Hirn, Stéphane Mallat, and Nicolas Poilvert. Wavelet scattering regression of quantum chemical energies. Multiscale Modeling and Simulation, 15(2):827-863, 2017. arXiv:1605.04654.
[10] Bing Huang and O. Anatole von Lilienfeld. Communication: Understanding molecular representations in machine learning: The role of uniqueness and target similarity. The Journal of
Chemical Physics, 145(16):161102, 2016.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[12] Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331-1398, October 2012.
[13] Stéphane Mallat. Understanding deep convolutional networks. Phil. Trans. R. Soc. A, 374(2065):20150203, 2016.
[14] Roland Memisevic. Gradient-based learning of higher-order image features. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1591-1598. IEEE, 2011.
[15] Grégoire Montavon, Katja Hansen, Siamac Fazli, Matthias Rupp, Franziska Biegler, Andreas Ziehe, Alexandre Tkatchenko, O. Anatole von Lilienfeld, and Klaus-Robert Müller. Learning invariant representations of molecules for atomization energy prediction. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 449-457. 2012.
[16] Grégoire Montavon, Matthias Rupp, Vivekanand Gobre, Alvaro Vazquez-Mayagoitia, Katja Hansen, Alexandre Tkatchenko, Klaus-Robert Müller, and O. Anatole von Lilienfeld. Machine learning of molecular electronic properties in chemical compound space. New Journal of Physics, 15(9):095003, 2013.
[17] Raghunathan Ramakrishnan, Pavlo O. Dral, Matthias Rupp, and O. Anatole von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1:140022, 2014.
[18] M. Rupp, A. Tkatchenko, K.-R. Müller, and O. A. von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Physical Review Letters, 108:058301, 2012.
[19] Kristof T. Schütt, Farhad Arbabzadah, Stefan Chmiela, Klaus R. Müller, and Alexandre Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8:13890, January 2017.
[20] Laurent Sifre and Stéphane Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1233-1240, 2013.
6,891 | 7,233 | Towards Generalization and Simplicity
in Continuous Control
Aravind Rajeswaran*
Kendall Lowrey*
Emanuel Todorov
Sham Kakade
University of Washington Seattle
{ aravraj, klowrey, todorov, sham } @ cs.washington.edu
Abstract
This work shows that policies with simple linear and RBF parameterizations can
be trained to solve a variety of widely studied continuous control tasks, including
the gym-v1 benchmarks. The performance of these trained policies are competitive
with state of the art results, obtained with more elaborate parameterizations such as
fully connected neural networks. Furthermore, the standard training and testing
scenarios for these tasks are shown to be very limited and prone to over-fitting, thus
giving rise to only trajectory-centric policies. Training with a diverse initial state
distribution induces more global policies with better generalization. This allows
for interactive control scenarios where the system recovers from large on-line
perturbations; as shown in the supplementary video.1
1 Introduction
Deep reinforcement learning (deepRL) has recently achieved impressive results on a number of
hard problems, including sequential decision making in game domains [1, 2]. This success has
motivated efforts to adapt deepRL methods for control of physical systems, and has resulted in rich
motor behaviors [3, 4]. The complexity of systems solvable with deepRL methods is not yet at the
level of what can be achieved with trajectory optimization (planning) in simulation [5, 6, 7], or with
hand-crafted controllers on physical robots (e.g. Boston Dynamics). However, RL approaches are
exciting because they are generic, model-free, and highly automated.
Recent success of RL [2, 8, 9, 10, 11] has been enabled largely due to engineering efforts such
as large scale data collection [1, 2, 11] or careful systems design [8, 9] with well behaved robots.
When advances in a field are largely empirical in nature, it is important to understand the relative
contributions of representations, optimization methods, and task design or modeling: both as a
sanity check and to scale up to harder tasks. Furthermore, in line with Occam's razor, the simplest
reasonable approaches should be tried and understood first. A thorough understanding of these factors
is unfortunately lacking in the community.
In this backdrop, we ask the pertinent question: "What are the simplest set of ingredients needed
to succeed in some of the popular benchmarks?" To attempt this question, we use the Gym-v1 [12]
continuous control benchmarks, which have accelerated research and enabled objective comparisons.
Since the tasks involve under-actuation, contact dynamics, and are high dimensional (continuous
space), they have been accepted as benchmarks in the deepRL community. Recent works test their
algorithms either exclusively or primarily on these tasks [13, 4, 14], and success on these tasks have
been regarded as demonstrating a "proof of concept".
Our contributions: Our results and their implications are highlighted below with more elaborate
discussions in Section 5:
¹Project page: https://sites.google.com/view/simple-pol
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1. The success of recent RL efforts to produce rich motor behaviors has largely been attributed
to the use of multi-layer neural network architectures. This work is among the first to carefully
analyze the role of representation, and our results indicate that very simple policies including
linear and RBF parameterizations are able to achieve state of the art results on widely studied
tasks. Furthermore, such policies, particularly the linear ones, can be trained significantly faster
due to orders of magnitude fewer parameters. This indicates that even for tasks with complex
dynamics, there could exist relatively simple policies. This opens the door for studying a wide
range of representations in addition to deep neural networks, and understand trade-offs including
computational time, theoretical justification, robustness, sample complexity etc.
2. We study these issues not only with regard to the performance metric at hand, but also take the further step of examining them in the context of robustness. Our results indicate that, with
conventional training methods, the agent is able to successfully learn a limit cycle for walking,
but cannot recover from any perturbations that are delivered to it. For transferring the success of
RL to robotics, such brittleness is highly undesirable.
3. Finally, we directly attempt to learn more robust policies through using more diverse training conditions, which favor such policies. This is similar in spirit to the model ensemble
approaches [15, 16] and domain randomization approaches [17, 18], which have successfully
demonstrated improved robustness and simulation to real world transfer. Under these new and
more diverse training scenarios, we again find that there is no compelling evidence to favor the
use of multi-layer architectures, at least for the benchmark tasks. On a side note, we also provide
interactive testing of learned policies, which we believe is both novel and which sheds light on
the robustness of trained policies.
2 Problem Formulation and Methods
We consider Markov Decision Processes (MDPs) in the average reward setting, which is defined using the tuple: $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \rho_0\}$. $\mathcal{S} \subseteq \mathbb{R}^n$, $\mathcal{A} \subseteq \mathbb{R}^m$, and $\mathcal{R} : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ are a (continuous) set of states, set of actions, and reward function respectively, and have the usual meaning. $\mathcal{T} : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ is the stochastic transition function and $\rho_0$ is the probability distribution over initial states. We wish to solve for a stochastic policy of the form $\pi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$, which optimizes the objective function:
$$\eta(\pi) = \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}_{\pi, \mathcal{M}}\left[ \sum_{t=1}^{T} r_t \right]. \quad (1)$$
Since we use simulations with finite length rollouts to estimate the objective and gradient, we approximate $\eta(\pi)$ using a finite $T$. In this finite horizon rollout setting, we define the value, Q, and advantage functions as follows:
$$V^\pi(s, t) = \mathbb{E}_{\pi, \mathcal{M}}\left[ \sum_{t'=t}^{T} r_{t'} \right], \qquad Q^\pi(s, a, t) = \mathbb{E}_{\mathcal{M}}\left[ \mathcal{R}(s, a) + \mathbb{E}_{s' \sim \mathcal{T}(s, a)}\left[ V^\pi(s', t+1) \right] \right],$$
$$A^\pi(s, a, t) = Q^\pi(s, a, t) - V^\pi(s, t).$$
Note that even though the value functions are time-varying, we still optimize for a stationary policy. We consider parametrized policies $\pi_\theta$, and hence wish to optimize for the parameters $\theta$. Thus, we overload notation and use $\eta(\pi)$ and $\eta(\theta)$ interchangeably.
2.1 Algorithm
Ideally, a controlled scientific study would seek to isolate the challenges related to architecture, task design, and training methods for separate study. In practice, this is not entirely feasible as the results are partly coupled with the training methods. Here, we utilize a straightforward natural policy gradient method for training. The work in [19] suggests that this method is competitive with most state of the art methods. We now discuss the training procedure.
Using the likelihood ratio approach and Markov property of the problem, the sample based estimate of the policy gradient is derived to be [20]:
$$\nabla_\theta \eta(\theta) = \hat{g} = \frac{1}{T} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t | s_t)\, \hat{A}^\pi(s_t, a_t, t). \quad (2)$$
Algorithm 1 Policy Search with Natural Gradient
1: Initialize policy parameters to $\theta_0$
2: for k = 1 to K do
3:   Collect trajectories $\{\tau^{(1)}, \ldots, \tau^{(N)}\}$ by rolling out the stochastic policy $\pi(\cdot; \theta_k)$.
4:   Compute $\nabla_\theta \log \pi(a_t | s_t; \theta_k)$ for each (s, a) pair along the trajectories sampled in iteration k.
5:   Compute advantages $\hat{A}^{\pi_k}$ based on the trajectories in iteration k and the approximate value function $\hat{V}^\pi_{k-1}$.
6:   Compute the policy gradient according to (2).
7:   Compute the Fisher matrix (4) and perform the gradient ascent step (5).
8:   Update the parameters of the value function in order to approximate $\hat{V}^\pi_k(s_t^{(n)}) \approx R(s_t^{(n)})$, where $R(s_t^{(n)})$ is the empirical return computed as $R(s_t^{(n)}) = \sum_{t'=t}^{T} \gamma^{(t'-t)} r_{t'}^{(n)}$. Here n indexes over the trajectories.
9: end for
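Steps 4-6 of the algorithm amount to the estimator sketched below. grad_log_pi is a hypothetical helper returning the score function for the Gaussian policies of Section 2.2, and the advantages come from the GAE procedure described later.

import numpy as np

def policy_gradient(theta, trajectories, advantages, grad_log_pi):
    """trajectories: list of [(s, a), ...]; advantages: matching list of arrays."""
    grads = [grad_log_pi(theta, s, a) * adv
             for traj, advs in zip(trajectories, advantages)
             for (s, a), adv in zip(traj, advs)]
    return np.mean(grads, axis=0)   # averages over time steps and trajectories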
Gradient ascent using this "vanilla" gradient is sub-optimal since it is not the steepest ascent direction in the metric of the parameter space [21, 22]. The steepest ascent direction is obtained by solving the following local optimization problem around iterate $\theta_k$:
$$\underset{\theta}{\text{maximize}}\ \ g^T(\theta - \theta_k) \quad \text{subject to} \quad (\theta - \theta_k)^T F_{\theta_k} (\theta - \theta_k) \le \delta, \quad (3)$$
where $F_{\theta_k}$ is the Fisher Information Metric at the current iterate $\theta_k$. We estimate $F_{\theta_k}$ as
$$\hat{F}_{\theta_k} = \frac{1}{T} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t | s_t)\, \nabla_\theta \log \pi_\theta(a_t | s_t)^T, \quad (4)$$
as originally suggested by Kakade [22]. This yields the steepest ascent direction $\hat{F}_{\theta_k}^{-1} g$ and the corresponding update rule: $\theta_{k+1} = \theta_k + \alpha \hat{F}_{\theta_k}^{-1} g$. Here $\alpha$ is the step-size or learning rate parameter.
Empirically, we observed that choosing a fixed value for $\alpha$ or an appropriate schedule is difficult [23]. Thus, we use the normalized gradient ascent procedure, where the normalization is under the Fisher metric. This procedure can be viewed as picking a normalized step size $\delta$ as opposed to $\alpha$, and solving the optimization problem in (3). This results in the following update rule:
$$\theta_{k+1} = \theta_k + \sqrt{\frac{\delta}{g^T \hat{F}_{\theta_k}^{-1} g}}\ \hat{F}_{\theta_k}^{-1} g. \quad (5)$$
A dimensional analysis of these quantities reveals that $\alpha$ has the unit of return$^{-1}$, whereas $\delta$ is dimensionless. Though units of $\alpha$ are consistent with a general optimization setting where the step-size has units of objective$^{-1}$, in these problems, picking a good $\alpha$ that is consistent with the scales of the reward was difficult. On the other hand, a constant normalized step size was numerically more stable and easier to tune: for all the results reported in this paper, the same $\delta = 0.05$ was used. When more than one trajectory rollout is used per update, the above estimators can be used with an additional averaging over the trajectories.
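A sketch of this normalized step is given below, with the Fisher matrix of (4) formed explicitly; the small damping term is a numerical safeguard added for this illustration and is not part of the procedure described above. At larger scales one would instead use conjugate gradient with implicit Fisher-vector products.

import numpy as np

def natural_gradient_step(theta, g, score_vectors, delta=0.05, damping=1e-4):
    """score_vectors: (T, d) array of grad log pi(a_t | s_t) along the batch."""
    F = score_vectors.T @ score_vectors / score_vectors.shape[0]   # eq. (4)
    F += damping * np.eye(F.shape[0])              # keeps the solve well posed
    nat_g = np.linalg.solve(F, g)                  # F^{-1} g
    step = np.sqrt(delta / max(g @ nat_g, 1e-12))  # normalized step, eq. (5)
    return theta + step * nat_g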
For estimating the advantage function, we use the GAE procedure [13]. This requires learning a function that approximates $V^\pi_k$, which is used to compute $\hat{A}^{\pi_k}$ along trajectories for the update in (5). GAE helps with variance reduction at the cost of introducing bias, and requires tuning hyperparameters like a discount factor and an exponential averaging term. Good heuristics for these parameters have been suggested in prior work. The same batch of trajectories cannot be used for both fitting the value function baseline and estimating $g$ using (2), since it will lead to overfitting and a biased estimate. Thus, we use the trajectories from iteration $k-1$ to fit the value function, essentially approximating $V^\pi_{k-1}$, and use trajectories from iteration $k$ for computing $\hat{A}^{\pi_k}$ and $g$. Similar procedures have been adopted in prior work [19].
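For reference, a standard GAE sketch is shown below; the discount gamma and averaging weight lam are assumed values for the hyperparameters mentioned above.

import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.97):
    """values holds one extra entry for the state after the final reward."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running  # (gamma*lam)-weighted deltas
        adv[t] = running
    return adv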
2.2 Policy Architecture
Linear policy: We first consider a linear policy that directly maps from the observations to the motor torques. We use the same observations as used in prior work, which include joint positions, joint velocities, and, for some tasks, information related to contacts. Thus, the policy mapping is $a_t \sim \mathcal{N}(W s_t + b, \sigma)$, and the goal is to learn $W$, $b$, and $\sigma$. For most of these tasks, the observations correspond to the state of the problem (in relative coordinates). Thus, we use the terms states and observations interchangeably. In general, the policy is defined with observations as the input, and hence is trying to solve a POMDP.
RBF policy: Secondly, we consider a parameterization that enriches the representational capacity using random Fourier features of the observations. Since these features approximate the RKHS features under an RBF kernel [24], we call this policy parametrization the RBF policy. The features are constructed as:
$$y_t^{(i)} = \sin\left( \frac{\sum_j P_{ij}\, s_t^{(j)}}{\nu} + \phi^{(i)} \right), \quad (6)$$
where each element $P_{ij}$ is drawn from $\mathcal{N}(0, 1)$, $\nu$ is a bandwidth parameter chosen approximately as the average pairwise distance between different observation vectors, and $\phi$ is a random phase shift drawn from $U[-\pi, \pi)$. Thus the policy is $a_t \sim \mathcal{N}(W y_t + b, \sigma)$, where $W$, $b$, and $\sigma$ are trainable parameters. This architecture can also be interpreted as a two layer neural network: the bottom layer is clamped with random weights, a sinusoidal activation function is used, and the top layer is finetuned. The principal purpose of this representation is to slightly enhance the capacity of a linear policy, and the choice of activation function is not very significant.
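Both parameterizations can be sketched in a few lines; the feature count, bandwidth value, and seeding below are assumptions, and the linear policy is the special case where the feature map is the identity.

import numpy as np

def make_rbf_features(obs_dim, n_features, nu, seed=0):
    rng = np.random.RandomState(seed)
    P = rng.randn(n_features, obs_dim)              # P_ij ~ N(0, 1)
    phase = rng.uniform(-np.pi, np.pi, n_features)  # phi ~ U[-pi, pi)
    return lambda s: np.sin(P @ s / nu + phase)

def gaussian_policy_action(s, W, b, log_std, features, rng):
    """Samples a_t ~ N(W y_t + b, sigma) with sigma = exp(log_std)."""
    mean = W @ features(s) + b
    return mean + np.exp(log_std) * rng.randn(mean.shape[0])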
3 Results on OpenAI gym-v1 benchmarks
As indicated before, we train linear and RBF policies with the natural policy gradient on the popular
OpenAI gym-v1 benchmark tasks simulated in MuJoCo [25]. The tasks primarily consist of learning
locomotion gaits for simulated robots ranging from a swimmer to a 3D humanoid (23 dof).
Figure 1 presents the learning curves along with the performance levels reported in prior work using
TRPO and fully connected neural network policies. Table 1 also summarizes the final scores, where
"stoc" refers to the stochastic policy with actions sampled as $a_t \sim \pi_\theta(s_t)$, while "mean" refers to using the mean of the Gaussian policy, with actions computed as $a_t = \mathbb{E}[\pi_\theta(s_t)]$. We see that the linear
policy is competitive on most tasks, while the RBF policy can outperform previous results on five
of the six considered tasks. Though we were able to train neural network policies that match the
results reported in literature, we have used publicly available prior results for an objective comparison.
Visualizations of the trained linear and RBF policies are presented in the supplementary video. Given
the simplicity of these policies, it is surprising that they can produce such elaborate behaviors.
Table 2 presents the number of samples needed for the policy performance to reach a threshold value
for reward. The threshold value is computed as 90% of the final score achieved by the stochastic
linear policy. We visually verified that policies with these scores are proficient at the task, and hence
the chosen values correspond to meaningful performance thresholds. We see that linear and RBF
policies are able to learn faster on four of the six tasks.
All the simulated robots we considered are under-actuated, have contact discontinuities, and have continuous action spaces, making them challenging benchmarks. When adapted from model-based control [26, 5, 27] to RL, however, the notion of "success" established was not appropriate. To shape the behavior, a very narrow initial state distribution and termination conditions are used in the benchmarks. As a consequence, the learned policies become highly trajectory-centric, i.e. they are good only where they tend to visit during training, which is a very narrow region. For example, the walker can walk very well when initialized upright and close to the walking limit cycle. Even small perturbations, as shown in the supplementary video, alter the visitation distribution and dramatically degrade the policy performance. This makes the agent fall down, at which point it is unable to
get up. Similarly, the swimmer is unable to turn when its heading direction is altered. For control
applications, this is undesirable. In the real world, there will always be perturbations: stochasticity
in the environment, modeling errors, or wear and tear. Thus, the specific task design and notion of
success used for the simulated characters are not adequate. However, the simulated robots themselves
are rather complex and harder tasks could be designed with them, as partly illustrated in Section 4.
Figure 1: Learning curves for the Linear and RBF policy architectures. The green line corresponds
to the reward achieved by neural network policies on the OpenAI Gym website, as of 02/24/2017
(trained with TRPO). It is observed that for all the tasks, linear and RBF parameterizations are
competitive with state-of-the-art results. The learning curves depicted are for the stochastic policies,
where the actions are sampled as a_t ∼ π_θ(s_t). The learning curves have been averaged across three
runs with different random seeds.
Table 1: Final performances of the policies

Task      | Linear stoc | Linear mean | RBF stoc | RBF mean | NN (TRPO)
Swimmer   |     362     |     366     |    361   |    365   |    131
Hopper    |    3466     |    3651     |   3590   |   3810   |   3668
Cheetah   |    3810     |    4149     |   6477   |   6620   |   4800
Walker    |    4881     |    5234     |   5631   |   5867   |   5594
Ant       |    3980     |    4607     |   4297   |   4816   |   5007
Humanoid  |    5873     |    6440     |   6237   |   6849   |   6482

Table 2: Number of episodes to achieve threshold

Task      | Th.  | Linear | RBF   | TRPO+NN
Swimmer   |  325 |   1450 |  1550 |   N-A
Hopper    | 3120 |  13920 |  8640 | 10000
Cheetah   | 3430 |  11250 |  6000 |  4250
Walker    | 4390 |  36840 | 25680 | 14250
Ant       | 3580 |  39240 | 30000 | 73500
Humanoid  | 5280 |  79800 | 96720 | 87000

4
Modified Tasks and Results
Using the same set of simulated robot characters outlined in Section 3, we designed new tasks with
two goals in mind: (a) to push the representational capabilities and test the limits of simple policies;
(b) to enable training of "global" policies that are robust to perturbations and work from a diverse set
of states. To this end, we make the following broad changes, also summarized in Table 3:
1. Wider initial state distribution to force generalization. For example, in the walker task, some
fraction of trajectories have the walker initialized prone on the ground. This forces the agent to
simultaneously learn a get-up skill and a walk skill, and not forget them as the learning progresses.
Similarly, the heading angle for the swimmer and ant are randomized, which encourages learning
of a turn skill.
2. Reward shaping appropriate with the above changes to the initial state distribution. For example,
when the modified swimmer starts with a randomized heading angle, we include a small reward for
adjusting its heading towards the correct direction. In conjunction, we also remove all termination
conditions used in the Gym-v1 benchmarks.
3. Changes to the environment's physics parameters, such as mass and joint torque. If the agent has
sufficient power, most tasks are easily solved. By reducing an agent's action ability and/or
increasing its mass, the agent becomes more under-actuated. These changes also produce more
realistic-looking motion.
5
Figure 2: Hopper completes a get-up sequence before moving to its normal forward hopping behavior.
The get-up sequence is learned alongside the forward hopping in the modified task setting.
Table 3: Modified Task Description
v_x is the forward velocity; θ is the heading angle; p_z is the height of the torso; and a is the action.

Task         | Description                                                                         | Reward (des = desired value)
Swimmer (3D) | Agent swims in the desired direction. Should recover (turn) if rotated around.      | v_x − 0.1 |θ − θ_des| − 0.0001 ‖a‖²
Hopper (2D)  | Agent hops forward as fast as possible. Should recover (get up) if pushed down.    | v_x − 3 ‖p_z − p_z^des‖² − 0.1 ‖a‖²
Walker (2D)  | Agent walks forward as fast as possible. Should recover (get up) if pushed down.   | v_x − 3 ‖p_z − p_z^des‖² − 0.1 ‖a‖²
Ant (3D)     | Agent moves in the desired direction. Should recover (turn) if rotated around.     | v_x − 3 ‖p_z − p_z^des‖² − 0.01 ‖a‖²
Combined, these modifications require that the learned policies not only make progress towards
maximizing the reward, but also recover from adverse conditions and resist perturbations. An example
of this is illustrated in Figure 2, where the hopper executes a get-up sequence before hopping to
make forward progress. Furthermore, at test time, a user can interactively apply pushing and rotating
perturbations to better understand the failure modes. We note that these interactive perturbations may
not be the ultimate test of robustness, but they are a step in this direction.
Representational capacity The supplementary video demonstrates the trained policies. We concentrate on the results of the walker task in the main paper. Figure 3 studies the performance as
we vary the representational capacity. Increasing the number of Fourier features allows for more
expressive policies and consequently for achieving a higher score. The policy with 500 Fourier features
performs the best, followed by the fully connected neural network. The linear policy also makes
forward progress and can get up from the ground, but is unable to learn as efficient a walking gait.
Figure 3: (a) Learning curve on the modified walker (diverse initialization) for different policy architectures. The curves are averaged over three runs with different random seeds. (b) Learning curves
when using different numbers of conjugate gradient iterations to compute F̂_k^{-1} g in (5). A policy with
300 Fourier features has been used to generate these results.
Figure 4: We test policy robustness by measuring the distance traveled in the swimmer, walker, and
hopper tasks for three training configurations: (a) with termination conditions; (b) no termination,
and peaked initial state distribution; and (c) with diverse initialization. Swimmer does not have a
termination option, so we consider only two configurations. For the swimmer, the perturbation
changes the heading angle between −π/2 and π/2; for the walker and hopper, an
external force is applied for 0.5 seconds along the axis of movement. All agents are initialized with the same
positions and velocities.
Perturbation resistance Next, we test the robustness of our policies by perturbing the system with
an external force. This external force represents an unforeseen change which the agent has to resist
or overcome, thus enabling us to understand push and fall recoveries. Fall recoveries of the trained
policies are demonstrated in the supplementary video. In these tasks, perturbations are not applied to
the system during the training phase. Thus, the ability to generalize and resist perturbations comes
entirely from the states visited by the agent during training. Figure 4 indicates that the RBF policy
is more robust, and also that diverse initializations are important to obtain the best results. This
indicates that careful design of initial state distributions is crucial for generalization, and for enabling
the agent to learn a wide range of skills.
5
Summary and Discussion
The experiments in this paper were aimed at trying to understand the effects of (a) representation; (b)
task modeling; and (c) optimization. We summarize the results with regard to each aforementioned
factor and discuss their implications.
Representation The finding that linear and RBF policies can be trained to solve a variety of
continuous control tasks is very surprising. Recently, a number of algorithms have been shown to successfully solve these tasks [3, 28, 4, 14], but all of these works use multi-layer neural networks. This
reflects a widespread belief that expressive function approximators are needed to capture the intricate
details necessary for movements like running. The results in this work conclusively demonstrate that
this is not the case, at least for the limited set of popular testbeds. This raises an interesting question:
what are the capability limits of shallow policy architectures? The linear policies were not exemplary
in the "global" versions of the tasks, but it must be noted that they were not terrible either. The RBF
policy using random Fourier features was able to successfully solve the modified tasks, producing
global policies, suggesting that we do not yet have a sense of its limits.
Modeling When using RL methods to solve practical problems, the world provides us with neither
the initial state distribution nor the reward. Both of these must be designed by the researcher and
must be treated as assumptions about the world or prescriptions about the required behavior. The
quality of assumptions will invariably affect the quality of solutions, and thus care must be taken in
this process. Here, we show that starting the system from a narrow initial state distribution produces
elaborate behaviors, but the trained policies are very brittle to perturbations. Using a more diverse
state distribution, in these cases, is sufficient to train robust policies.
Optimization In line with the theme of simplicity, we first tried to use REINFORCE [20], which
we found to be very sensitive to hyperparameter choices, especially the step size. There is a class of
policy gradient methods which use pre-conditioning to help navigate the warped parameter space of
probability distributions and to select the step size. Most variants of pre-conditioned policy gradient
methods have been reported to achieve state-of-the-art performance, all performing about the same [19].
We feel that the natural policy gradient method used here is the most straightforward pre-conditioned
method. To demonstrate that the pre-conditioning helps, Figure 3 depicts the learning curves for
different numbers of CG iterations used to compute the update in (5). The curve corresponding to
CG = 0 is the REINFORCE method. As can be seen, pre-conditioning helps with the learning
process. However, there is a trade-off with computation, and hence using an intermediate number of
CG steps like 20 could lead to the best results in a wall-clock sense for large-scale problems.
We chose to compare with neural network policies trained with TRPO, since it has demonstrated
impressive results and is closest to the algorithm used in this work. Whether function approximators
that are linear in their free parameters are sufficient for other methods is an interesting open question
(in this sense, RBFs are linear but NNs are not). For a large class of methods based on dynamic
programming (including Q-learning, SARSA, and approximate policy and value iteration), linear function
approximation has guaranteed convergence and error bounds, while non-linear function approximation
is known to diverge in many cases [29, 30, 31, 32]. It may of course be possible to avoid divergence
in specific applications, or at least slow it down long enough, for example via target networks or
replay buffers. Nevertheless, guaranteed convergence has clear advantages. Similar to recent work
using policy gradient methods, recent work using dynamic programming methods has adopted
multi-layer networks without careful side-by-side comparisons to simpler architectures. Could a
global quadratic approximation to the optimal value function (which is linear in the set of quadratic
features) be sufficient to solve most of the continuous control tasks currently studied in RL? Given
that quadratic value functions correspond to linear policies, and good linear policies exist as shown
here, this might make for interesting future work.
6
Conclusion
In this work, we demonstrated that very simple policy parameterizations can be used to solve many
benchmark continuous control tasks. Furthermore, there is no significant loss in performance due to
the use of such simple parameterizations. We also proposed global variants of many widely studied
tasks, which require the learned policies to be competent for a much larger set of states, and found
that simple representations are sufficient in these cases as well. These empirical results, along with
Occam's razor, suggest that complex policy architectures should not be a default choice unless side-by-side comparisons with simpler alternatives suggest otherwise. Such comparisons are unfortunately
not widely pursued. The results presented in this work directly highlight the need for simplicity
and generalization in RL. We hope that this work would encourage future work analyzing various
architectures and associated trade-offs like computation time, robustness, and sample complexity.
Acknowledgements
This work was supported in part by the NSF. The authors would like to thank Vikash Kumar, Igor
Mordatch, John Schulman, and Sergey Levine for valuable comments.
References
[1] V. Mnih et al. Human-level control through deep reinforcement learning. Nature, 518, 2015.
[2] D. Silver et al. Mastering the game of go with deep neural networks and tree search. Nature,
529, 2016.
[3] J. Schulman, S. Levine, P. Moritz, M. Jordan, and P. Abbeel. Trust region policy optimization.
In ICML, 2015.
[4] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
Continuous control with deep reinforcement learning. ArXiv e-prints, September 2015.
[5] Y. Tassa, T. Erez, and E. Todorov. Synthesis and stabilization of complex behaviors through
online trajectory optimization. International Conference on Intelligent Robots and Systems,
2012.
[6] I. Mordatch, E. Todorov, and Z. Popovic. Discovery of complex behaviors through contactinvariant optimization. ACM SIGGRAPH, 2012.
[7] M. Al Borno, M. de Lasa, and A. Hertzmann. Trajectory Optimization for Full-Body Movements
with Complex Contacts. IEEE Transactions on Visualization and Computer Graphics, 2013.
[8] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep
visuomotor policies. JMLR, 17(39):1?40, 2016.
[9] V. Kumar, E. Todorov, and S. Levine. Optimal control with learned local models: Application
to dexterous manipulation. In ICRA, 2016.
[10] V. Kumar, A. Gupta, E. Todorov, and S. Levine. Learning dexterous manipulation policies from
experience and imitation. ArXiv e-prints, 2016.
[11] Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries
and 700 robot hours. In ICRA, 2016.
[12] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.
OpenAI Gym, 2016.
[13] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous
control using generalized advantage estimation. In ICLR, 2016.
[14] Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, and Sergey Levine.
Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic. In ICLR, 2017.
[15] I. Mordatch, K. Lowrey, and E.Todorov. Ensemble-CIO: Full-body dynamic motion planning
that transfers to physical humanoids. In IROS, 2015.
[16] A. Rajeswaran, S. Ghotra, B. Ravindran, and S. Levine. EPOpt: Learning Robust Neural
Network Policies Using Model Ensembles. In ICLR, 2017.
[17] Fereshteh Sadeghi and Sergey Levine. (CAD)2RL: Real Single-Image Flight without a Single
Real Image. ArXiv e-prints, 2016.
[18] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel.
Domain randomization for transferring deep neural networks from simulation to the real world.
ArXiv e-prints, 2017.
[19] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement
learning for continuous control. In ICML, 2016.
[20] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
[21] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251–276, 1998.
[22] S. Kakade. A natural policy gradient. In NIPS, 2001.
[23] Jan Peters. Machine learning of motor skills for robotics. PhD Dissertation, University of
Southern California, 2007.
[24] A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. In NIPS, 2007.
[25] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In
International Conference on Intelligent Robots and Systems, 2012.
[26] Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for
periodic tasks with contacts. In RSS, 2011.
[27] T. Erez, K. Lowrey, Y. Tassa, V. Kumar, S. Kolev, and E. Todorov. An integrated system for
real-time model predictive control of humanoid robots. In Humanoids, pages 292–299, 2013.
[28] Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa.
Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
[29] Alborz Geramifard, Thomas J Walsh, Stefanie Tellex, Girish Chowdhary, Nicholas Roy, and
Jonathan P How. A tutorial on linear function approximators for dynamic programming and
reinforcement learning. Foundations and Trends in Machine Learning, 6(4):375–451, 2013.
[30] Jennie Si. Handbook of learning and approximate dynamic programming, volume 2. John
Wiley & Sons, 2004.
[31] Dimitri P Bertsekas. Approximate dynamic programming. 2008.
[32] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.
A
Choice of Step Size
An important design choice in the version of NPG presented in this work
is a normalized vs. un-normalized step size. The normalized step size corresponds to solving the
optimization problem in equation (3), and leads to the following update rule:
$$\theta_{k+1} = \theta_k + \sqrt{\frac{\delta}{g^\top \hat{F}_k^{-1} g}} \; \hat{F}_k^{-1} g.$$
On the other hand, an un-normalized step size corresponds to the update rule:
$$\theta_{k+1} = \theta_k + \eta \, \hat{F}_k^{-1} g.$$
The principal difference between the update rules corresponds to the units of the learning rate
parameters δ and η. In accordance with general first order optimization methods, η scales inversely
with the reward (note that F does not have the units of reward). This makes the choice of η highly
problem specific, and we find that it is hard to tune. Furthermore, we observed that the same values
of η cannot be used throughout the learning phase, and re-scaling is required. Though this is common
practice in supervised learning, where the learning rate is reduced after some number of epochs, it
is hard to employ a similar approach in RL. Often, large steps can destroy a reasonable policy, and
recovering from such mistakes is extremely hard in RL since the variance of the gradient estimate for a
poorly performing policy is higher. Employing the normalized step size was found to be more robust.
These results are illustrated in Figure 5.
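A minimal sketch of the two update rules, assuming the gradient g and the pre-conditioned direction F̂_k^{-1} g have already been computed (e.g., with conjugate gradient):

import numpy as np

def npg_step(theta, g, F_inv_g, delta=0.05, normalized=True, eta=0.05):
    # delta: normalized step size (problem-independent units)
    # eta:   un-normalized learning rate (scales inversely with the reward)
    if normalized:
        return theta + np.sqrt(delta / (g @ F_inv_g)) * F_inv_g
    return theta + eta * F_inv_g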
[Figure 5 plots: Return vs. Training Iterations for the panels "Swimmer: η vs δ", "Hopper: η vs δ", and "Walker: η vs δ", with δ ∈ {0.01, 0.05, 0.1} and η ∈ {0.01, 0.05, 0.1, 0.25, 1.0, 2.0}.]
Figure 5: Learning curves using normalized and un-normalized step size rules for the diverse versions
of the swimmer, hopper, and walker tasks. We observe that the same normalized step size (δ) works
across multiple problems. However, the un-normalized step size values (η) that are optimal for one task
do not work for other tasks. In fact, they often lead to divergence in the learning process. We replace
the learning curves with flat lines in cases where we observed divergence, such as η = 0.25 in the case
of the walker. This suggests that the normalized step size rule is more robust, with the same learning rate
parameter working across multiple tasks.
B
Effect of GAE
For the purpose of advantage estimation, we use the GAE [13] procedure in this work. GAE uses
an exponential average of temporal difference errors to reduce the variance of policy gradients at
the expense of bias. Since the paper explores the theme of simplicity, a pertinent question is how
well GAE performs when compared to more straightforward alternatives like using a pure temporal
difference error, or pure Monte Carlo estimates. The λ parameter in GAE allows for an interpolation
between these two extremes. In our experiments, summarized in Figure 6, we observe that reducing
variance even at the cost of a small bias (λ = 0.97) provides for fast learning in the initial stages.
This is consistent with the findings in Schulman et al. [13] and also makes intuitive sense. Initially,
when the policy is very far from the correct answer, even if the movement direction is not along the
gradient (biased), it is beneficial to make consistent progress and not bounce around due to high
variance. Thus, high-bias estimates of the policy gradient, corresponding to smaller λ values, make
fast initial progress. However, after this initial phase, it is important to follow an unbiased gradient,
and consequently the low-bias variants corresponding to larger λ values show better asymptotic
performance. Even without the use of GAE (i.e., λ = 1), we observe good asymptotic performance.
But with GAE, it is possible to get faster initial learning due to the reasons discussed above.
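For reference, a minimal NumPy sketch of the GAE recursion; the discount value below is illustrative, not one taken from the paper.

import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.97):
    # lam = 0: one-step TD error (high bias, low variance);
    # lam = 1: Monte Carlo estimate (low bias, high variance).
    # `values` has length len(rewards) + 1 (bootstrap value appended).
    T = len(rewards)
    adv, gae = np.zeros(T), 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        adv[t] = gae
    return adv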
[Figure 6 plot: Return vs. Training Iterations on the modified walker ("Walker: GAE") for GAE λ ∈ {0.00, 0.50, 0.90, 0.95, 0.97, 1.00}.]
Figure 6: Learning curves corresponding to different choices of λ in GAE. λ = 0 corresponds
to a high-bias but low-variance version of the policy gradient, given by the TD error estimate
$$\hat{A}(s_t, a_t) = r_t + \gamma V(s_{t+1}) - V(s_t);$$
while λ = 1 corresponds to a low-bias but high-variance Monte Carlo estimate:
$$\hat{A}(s_t, a_t) = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - V(s_t).$$
We observe that low bias is asymptotically
very important to achieve the best performance, but a low-variance gradient can help during the initial
stages.
Random Projection Filter Bank for Time Series Data
Amir-massoud Farahmand
Mitsubishi Electric Research Laboratories (MERL)
Cambridge, MA, USA
[email protected]
Sepideh Pourazarm
Mitsubishi Electric Research Laboratories (MERL)
Cambridge, MA, USA
[email protected]
Daniel Nikovski
Mitsubishi Electric Research Laboratories (MERL)
Cambridge, MA, USA
[email protected]
Abstract
We propose Random Projection Filter Bank (RPFB) as a generic and simple
approach to extract features from time series data. RPFB is a set of randomly
generated stable autoregressive filters that are convolved with the input time series
to generate the features. These features can be used by any conventional machine
learning algorithm for solving tasks such as time series prediction, classification
with time series data, etc. Different filters in RPFB extract different aspects of
the time series, and together they provide a reasonably good summary of the time
series. RPFB is easy to implement, fast to compute, and parallelizable. We provide
an error upper bound indicating that RPFB provides a reasonable approximation
to a class of dynamical systems. The empirical results in a series of synthetic and
real-world problems show that RPFB is an effective method to extract features
from time series.
1
Introduction
This paper introduces Random Projection Filter Bank (RPFB) for feature extraction from time series
data. RPFB generates a feature vector that summarizes the input time series by projecting the time
series onto the span of a set of randomly generated dynamical filters. The output of RPFB can
then be used as the input to any conventional estimator (e.g., ridge regression, SVM, and Random
Forest [Hastie et al., 2001; Bishop, 2006; Wasserman, 2007]) to solve problems such as time series
prediction, and classification and fault prediction with time series input data. RPFB is easy to
implement, is fast to compute, and can be parallelized easily.
RPFB consists of a set of randomly generated filters (i.e., dynamical systems that receive inputs),
which are convolved with the input time series. The filters are stable autoregressive (AR) filters, so
they can capture information from the distant past of the time series. This is in contrast with more
conventional approach of considering only a fixed window of the past time steps, which may not
capture all relevant information. RPFB is inspired from the random projection methods [Vempala,
2004; Baraniuk and Wakin, 2009], which reduce the input dimension while preserving important
properties of the data, e.g., being an approximate isometric map. It is also closely related to
Random Kitchen Sink [Rahimi and Recht, 2009] for approximating potentially infinite-dimensional
reproducing kernel Hilbert space (RKHS) with a finite set of randomly selected features. RPFB can
be thought of as the dynamical system (or filter) extension of these methods. RPFB is also related to
the methods in the Reservoir Computing literature [Lukoševičius and Jaeger, 2009] such as Echo
State Network and Liquid State Machine, in which a recurrent neural network (RNN) with random
weights provides a feature vector to a trainable output layer. The difference of RPFB with them is that
we are not considering an RNN as the underlying excitable dynamical system, but a set of AR filters.
The algorithmic contribution of this work is the introduction of RPFB as a generic and simple to use
feature extraction method for time series data (Section 3). RPFB is a particularly suitable choice for
industrial applications where the available computational power is limited, e.g., a fault prognosis
system for an elevator that has only a micro-controller available. For these industrial applications, the
use of powerful methods such as various adaptable RNN architectures [Hochreiter and Schmidhuber,
1997; Cho et al., 2014; Oliva et al., 2017; Goodfellow et al., 2016], which learn the feature extractor
itself, might be computationally infeasible.
The theoretical contribution of this work is the finite sample analysis of RPFB for the task of time
series prediction (Section 4). The theory has two main components. The first is a filter approximation
error result, which provides an error guarantee on how well one might approximate a certain class of
dynamical systems with a set of randomly generated filters. The second component is a statistical
result providing a finite-sample guarantee for time series prediction with a generic class of linear
systems. Combining these two, we obtain a finite-sample guarantee for the use of RPFB for time
series prediction of a certain class of dynamical systems.
Finally, we empirically study RPFB along several standard estimators on a range of synthetic and
real-world datasets (Section 5). Our synthetic data is based on Autoregressive Moving Average
(ARMA) processes. This lets us closely study various aspects of the method. Moving to real-world
problems, we apply RPFB to the fault diagnosis problem from ball bearing vibration measurements.
We compare the performance of RPFB with that of the fixed-window history-based approach, as
well as LSTM, and we obtain promising empirical results. Due to space limitation, most of the
development of the theory and experimental results are reported in the supplementary material, which
is an extended version of this paper. For more empirical studies, especially in the context of fault
detection and prognosis, refer to Pourazarm et al. [2017].
2
Learning from Time Series Data
Consider a sequence (X_1, Y_1), …, (X_T, Y_T) of dependent random variables with X ∈ X and
Y ∈ Y. Depending on how X_t and Y_t are defined, we can describe different learning/decision
making problems. For example, suppose that Y_t = f*(X_t) + ε_t, in which f* is an unknown function
of the current value of X_t, and ε_t is independent of the history X_{1:t} = (X_1, …, X_t) and has
zero expectation, i.e., E[ε_t] = 0. Finding an estimate f̂ of f* using data is the standard regression
(or classification) problem, depending on whether Y ⊂ ℝ (regression) or Y = {0, 1, …, c − 1}
(classification). For example, suppose that we are given a dataset of m time series D_m =
{(X_{i,1}, Y_{i,1}), …, (X_{i,T_i}, Y_{i,T_i})}_{i=1}^m, each of which might have a varying length T_i. There are many
methods to define an estimator for f*, e.g., K-nearest neighbours, decision trees, SVMs, and various
neural networks [Hastie et al., 2001; Bishop, 2006; Wasserman, 2007; Goodfellow et al., 2016]. An
important class of estimators is based on (regularized) empirical risk minimization (ERM):
$$\hat{f} \leftarrow \operatorname*{argmin}_{f \in \mathcal{F}} \; \frac{1}{m}\sum_{i=1}^{m} \frac{1}{T_i}\sum_{t=1}^{T_i} l\big(f(X_{i,t}), Y_{i,t}\big) + \lambda J(f). \qquad (1)$$
Here F: X → Y′ is a function space (e.g., an RKHS with the domain X; with Y′ = ℝ). The loss
function is l: Y′ × Y → [0, ∞), and it determines the decision problem that is being solved, e.g.,
l(y_1, y_2) = |y_1 − y_2|² for the squared loss commonly used in regression. The optional regularizer
(or penalizer) J(f) controls the complexity of the function space, e.g., it can be ‖f‖²_F when F is an
RKHS. The difference of this scenario with more conventional scenarios in supervised learning
and statistics is that here the input data does not satisfy the usual independence assumption.
Learning with dependent input data has been analyzed before [Steinwart et al., 2009; Steinwart and
Christmann, 2009; Mohri and Rostamizadeh, 2010; Farahmand and Szepesvári, 2012].
More generally, however, Y_t is not a function of only X_t, but is a function of the history X_{1:t}, possibly
contaminated by a (conditionally) independent noise: Y_t = f*(X_{1:t}) + ε_t. In the learning problem,
f* is an unknown function. The special case of f*(X_{1:t}) = f*(X_t) is the same as the previous
setting.
Learning an estimator that directly uses the history X_{1:t} is challenging, as the history is a time-varying vector
with an ever-increasing dimension. A standard approach to deal with this issue is to use a fixed-window history-based estimator, which we explain next (cf. Kakade et al. [2017] for some
recent theoretical results). RPFB is an alternative approach that we describe in Section 3.
In the fixed-window history-based approach (or window-based, for short), we only look at a fixed
window of the immediate past values of X_{1:t}. That is, we use samples in the form of Z_t ≜ X_{t−H+1:t}
with a finite integer H that determines the length of the window. For example, the regularized
least-squares regression estimator would then be
$$\hat{f} \leftarrow \operatorname*{argmin}_{f \in \mathcal{F}} \; \frac{1}{m}\sum_{i=1}^{m} \frac{1}{T_i - H}\sum_{t=H}^{T_i} \big|f(X_{i,t-H+1:t}) - Y_{i,t}\big|^2 + \lambda J(f), \qquad (2)$$
which should be compared to (1).
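A minimal sketch of how the windowed samples Z_t used in (2) can be assembled for a univariate series; the helper name is a hypothetical convenience.

import numpy as np

def window_samples(x, H):
    # Build (X_{t-H+1:t}, X_{t+1}) input/target pairs from a 1-D series, as in Eq. (2).
    T = len(x)
    Z = np.stack([x[t - H:t] for t in range(H, T)])  # inputs: the H most recent values
    y = x[H:]                                        # targets: the next value
    return Z, y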
A problem with this approach is that for some stochastic processes, a fixed-size window of length H
is not enough to capture all information about the process. As a simple illustrative example, consider
a simple moving average MA(1) univariate random process (i.e., X = ℝ):
$$X_t = U(t) + bU(t-1) = (1 + bz^{-1})U_t, \qquad b \in (-1, 1),$$
in which z^{-1} is the time-delay operator (cf. Z-transform, Oppenheim et al. 1999), i.e., z^{-1}X_t = X_{t−1}.
Suppose that U_t = U(t) (t = 1, 2, …) is an unobservable random process that drives X_t. For
example, it might be an independent and identically distributed (i.i.d.) Gaussian noise, which we do
not observe (so it is our latent variable). To predict Y_t = X_{t+1} given the previous observations X_{1:t},
we write U_t = X_t / (1 + bz^{-1}), so
$$X_{t+1} = U_{t+1} + bU_t = U_{t+1} + \frac{b}{1 + bz^{-1}} X_t = U_{t+1} + b \sum_{k \geq 0} (-b)^k X_{t-k}. \qquad (3)$$
This means that X_t is an autoregressive process AR(∞). The prediction of X_{t+1} requires the value of
U_{t+1}, which is unavailable at time t, and all the past values X_{1:t}. Since U_{t+1} is unavailable, we cannot
use it in our estimate; this is the intrinsic difficulty of prediction. On the other hand, the values of
X_{1:t} are available to us and we can use them to predict X_{t+1}. But if we use a fixed-horizon window
of the past values (i.e., only use X_{t−H+1:t} for a finite H ≥ 1), we would miss some information
that could potentially be used. This loss of information is more prominent when the magnitude of
b is close to 1. This example shows that even for a simple MA(1) process with unobserved latent
variables, a fixed-horizon window is not a complete summary of the stochastic process.
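This effect can be checked numerically. The sketch below, with illustrative values, generates an MA(1) series and evaluates the truncated AR predictor of (3) for several window lengths H; the prediction error decreases toward Var(U) = 1 only as H grows.

import numpy as np

rng = np.random.RandomState(0)
b, T = 0.9, 10000
u = rng.randn(T + 1)
x = u[1:] + b * u[:-1]                  # MA(1): X_t = U(t) + b U(t - 1)

for H in (1, 2, 5, 20):
    coeffs = b * (-b) ** np.arange(H)   # weights on X_t, X_{t-1}, ..., X_{t-H+1}
    preds = np.array([coeffs @ x[t - H + 1:t + 1][::-1] for t in range(H - 1, T - 1)])
    print(H, np.mean((x[H:] - preds) ** 2))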
More generally, suppose that we have a univariate linear ARMA process
$$A(z^{-1})X_t = B(z^{-1})U_t, \qquad (4)$$
with A and B both being polynomials in z^{-1}.¹ The random process U_t is not available to us,
and we want to design a predictor (filter) for X_{t+1} based on the observed values X_{1:t}. Suppose that A and B are of degree more than 1, so we can write A(z^{-1}) = 1 + z^{-1}A′(z^{-1}) and
B(z^{-1}) = 1 + z^{-1}B′(z^{-1}).² Assuming that A and B are both invertible, we use (4) to get U_t =
B^{-1}(z^{-1})A(z^{-1})X_t. Also we can write (4) as (1 + z^{-1}A′(z^{-1}))X_{t+1} = (1 + z^{-1}B′(z^{-1}))U_{t+1} =
U_{t+1} + B′(z^{-1})U_t. Therefore, we have
$$X_{t+1} = U_{t+1} + \left[ \frac{B'(z^{-1})A(z^{-1})}{B(z^{-1})} - A'(z^{-1}) \right] X_t = U_{t+1} + \frac{B'(z^{-1}) - A'(z^{-1})}{B(z^{-1})} X_t. \qquad (5)$$
So if the unknown noise process U_t has a zero mean (i.e., E[U_t | U_{1:t−1}] = 0), the estimator
$$\hat{X}_{t+1}(X_{1:t}) = \frac{B'(z^{-1}) - A'(z^{-1})}{B(z^{-1})} X_t \qquad (6)$$
is unbiased, i.e., X̂_{t+1}(X_{1:t}) = E[X_{t+1} | X_{1:t}].

¹ We assume that A and B both have roots within the unit circle, i.e., they are stable.
² The fact that both of these polynomials have a leading term of 1 does not matter in this argument.
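Concretely, if A and B were known, the filter (6) could be applied directly; below is a sketch under that assumption (the helper name and the SciPy-based implementation are illustrative).

import numpy as np
from scipy.signal import lfilter

def arma_one_step_predictor(a, b, x):
    # Predictor (6) for a known ARMA model A(z^-1) X = B(z^-1) U,
    # with a[0] = b[0] = 1 the coefficients of A and B in powers of z^-1.
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = max(len(a), len(b)) - 1
    a_prime = np.pad(a[1:], (0, d - (len(a) - 1)))  # A'(z^-1) coefficients
    b_prime = np.pad(b[1:], (0, d - (len(b) - 1)))  # B'(z^-1) coefficients
    return lfilter(b_prime - a_prime, b, x)         # (B' - A') / B applied to X_t

For the MA(1) example, arma_one_step_predictor([1.0], [1.0, 0.9], x) reduces to the predictor in (3).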
If we knew the model of the dynamical system (A and B), we could design the filter (6) to provide an
unbiased prediction of the future values of X_{t+1}. If the learning problem is such that it requires us
to know an estimate of the future observations of the dynamical system, this scheme would allow us
to design such an estimator. The challenge here is that we often do not know A and B (or their analogues for
other types of dynamical systems). Estimating A and B for a general dynamical system is a difficult
task. The use of maximum likelihood-based approaches is prone to local minima since U is not
known, and one has to use EM-like algorithms; cf. White et al. [2015] and references therein. Here
we suggest a simple alternative based on the idea of projecting the signal onto the span of randomly
generated dynamical systems. This is RPFB, which we describe next.
3
Random Projection Filter Bank
The idea behind RPFB is to randomly generate many simple dynamical systems that can approximate
dynamical systems such as the optimal filter in (6) with high accuracy. Denote the linear filter in (6)
by
$$\frac{B'(z^{-1}) - A'(z^{-1})}{B(z^{-1})} = \frac{p(z^{-1})}{q(z^{-1})},$$
for two polynomials p and q, both in z^{-1}. Suppose that deg(q) = deg(B) = d_q and deg(A) = d_A;
then deg(p) = d_p = max{d_A − 1, d_q − 1}. Assume that q has roots z_1, …, z_{d_q} ∈ ℂ without any
multiplicity. This means that
$$q(z^{-1}) = \prod_{i=1}^{d_q} (z^{-1} - z_i).$$
In complex analysis in general, and in control engineering and signal processing in particular, the roots
of q are known as the poles of the dynamical system, and the roots of p are its zeros. Any discrete-time
linear time-invariant (LTI) dynamical system has such a frequency domain representation.³
We have two cases: either d_p < d_q or d_p ≥ d_q. We focus on the first case and describe the RPFB
and the intuition behind it. Afterwards we discuss the second case.
Case 1: Suppose that d_p < d_q, which implies that d_A − 1 < d_q. We may write
$$\frac{p(z^{-1})}{q(z^{-1})} = \sum_{i=1}^{d_q} \frac{b_i}{1 - z_i z^{-1}}, \qquad (7)$$
for some choice of b_i's. This means that we can write (5) as
$$X_{t+1} = U_{t+1} + \frac{B'(z^{-1}) - A'(z^{-1})}{B(z^{-1})} X_t = U_{t+1} + \sum_{i=1}^{d_q} \frac{b_i}{1 - z_i z^{-1}} X_t.$$
That is, if we knew the set of complex poles Z_p = {z_1, …, z_{d_q}} and their corresponding coefficients
B_p = {b_1, …, b_{d_q}}, we could provide an unbiased estimate of X_{t+1} based on X_{1:t}. From now on,
we assume that the underlying unknown system is a stable one, that is, |z_i| < 1.
Random projection filter bank is based on randomly generating many simple stable dynamical
systems, which is equivalent to generating many random poles within the unit circle. Since any stable
LTI filter has a representation (7) (or a similar one in Case 2), we can approximate the true dynamical
system as a linear combination of randomly generated poles (i.e., filters). If the number of filters is
large enough, the approximation will be accurate.
To be more precise, we cover the set {z ∈ ℂ : |z| ≤ 1} with N(ε) random points N_ε =
{Z′_1, …, Z′_{N(ε)}} such that for any z_i ∈ Z_p, there exists a Z′(z_i) ∈ N_ε with |z_i − Z′(z_i)| < ε. Roughly
speaking, we require N(ε) = O(ε^{−2}) random points to cover the unit circle with a resolution of ε.

³ For continuous-time systems, we may use the Laplace transform instead of the Z-transform, and have similar
representations.
We then define the RPFB as the following set of AR filters, denoted by Φ(z^{-1}):⁴
$$\Phi(z^{-1}): z \mapsto \left( \frac{1}{1 - Z'_1 z^{-1}}, \; \ldots, \; \frac{1}{1 - Z'_{N(\varepsilon)} z^{-1}} \right). \qquad (8)$$
With a slight abuse of notation, we use Φ(X) to refer to the (multivariate) time series generated after
passing a signal X = (X_1, …, X_t) through the set of filters Φ(z^{-1}). More concretely, this means
that we convolve the signal X with the impulse response of each of the filters 1/(1 − Z′_i z^{-1}) (i = 1, …, N(ε)).
Recall that the impulse response of 1/(1 − az^{-1}) is the sequence (a^t)_{t≥0}, and that the convolution X * Y
between two sequences (X_t)_{t≥0} and (Y_t)_{t≥0} is a new sequence
$$(X * Y)_t = \sum_{\tau} X_\tau\, Y_{t-\tau}. \qquad (9)$$
We use [Φ(X)]_i ∈ ℂ^{N(ε)} to refer to the i-th time step of the multivariate signal Φ(X_{1:i}).
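A minimal sketch of this filtering step; each pole is paired with its conjugate (as in Remark 1 below) so that the features stay real-valued, and the helper names are hypothetical.

import numpy as np
from scipy.signal import lfilter

def make_rpfb(n_filters, eps0=0.05, seed=0):
    # Draw stable complex poles with |Z'| <= 1 - eps0, uniform over the disk.
    rng = np.random.RandomState(seed)
    radius = (1.0 - eps0) * np.sqrt(rng.rand(n_filters))  # sqrt => uniform on the disk
    angle = rng.uniform(0.0, 2.0 * np.pi, n_filters)
    return radius * np.exp(1j * angle)

def rpfb_features(x, poles):
    # Filter x through 1 / ((1 - Z' z^-1)(1 - conj(Z') z^-1)) for each pole;
    # the real second-order denominator is 1 - 2 Re(Z') z^-1 + |Z'|^2 z^-2.
    cols = [lfilter([1.0], [1.0, -2.0 * z.real, abs(z) ** 2], x) for z in poles]
    return np.stack(cols, axis=1)  # shape (len(x), n_filters)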
The intuition for why this is a good construction is that whenever |z_1 − z_2| is small, the behaviour
of the filter 1/(1 − z_1 z^{-1}) is similar to that of 1/(1 − z_2 z^{-1}). So whenever N_ε provides a good coverage of the unit circle,
there exists a sequence (b′_j) such that the dynamical system
$$\frac{p'(z^{-1})}{q'(z^{-1})} = \sum_{j=1}^{N(\varepsilon)} \frac{b'_j}{1 - Z'_j z^{-1}} = \Phi(z^{-1})\, b'$$
behaves similarly to the unknown p/q (7). As this is a linear model, the parameters b′ can be estimated using
ordinary least-squares regression, ridge regression, Lasso, etc. For example, the ridge regression
estimator for b′ is
$$\hat{b} \leftarrow \operatorname*{argmin}_{b} \; \frac{1}{m}\sum_{i=1}^{m} \frac{1}{T_i}\sum_{t=1}^{T_i} \big| [\Phi(X_i)]_t\, b - X_{i,t+1} \big|^2 + \lambda \|b\|_2^2.$$
After obtaining b̂, we define
$$\hat{X}(X_{1:t}; \hat{b}) = \sum_{j=1}^{N(\varepsilon)} \frac{\hat{b}_j}{1 - Z'_j z^{-1}} X_{1:t},$$
which is an estimator of X̂(X_{1:t}) (6), i.e., X̂(X_{1:t}) ≈ X̂(X_{1:t}; b̂).
Case 2: Suppose that d_p ≥ d_q, which implies that d_A − 1 ≥ d_q. Then we may write
$$\frac{p(z^{-1})}{q(z^{-1})} = R(z^{-1}) + \frac{\varphi(z^{-1})}{q(z^{-1})},$$
where φ and R are obtained by the Euclidean division of p by q, i.e., p(z^{-1}) = R(z^{-1})q(z^{-1}) + φ(z^{-1}),
with deg(R) ≤ d_A − 1 − d_q and deg(φ) < d_q. We can write:
$$\frac{p(z^{-1})}{q(z^{-1})} = \sum_{j=0}^{d_A - 1 - d_q} \beta_j z^{-j} + \sum_{i=1}^{d_q} \frac{b_i}{1 - z_i z^{-1}}. \qquad (10)$$
This is similar to (7) of Case 1, with the addition of lag terms. If we knew the set of complex poles
and their corresponding coefficients, as well as the coefficients β_j of the residual lag terms, we could
provide an unbiased estimate of X_{t+1} based on X_{1:t}. Since we do not know the location of the poles, we
randomly generate them as before. For this case, the feature set (8) should be expanded to
$$\Phi(z^{-1}): z \mapsto \left( 1, \; z^{-1}, \; \ldots, \; z^{-(d_A - 1 - d_q)}, \; \frac{1}{1 - Z'_1 z^{-1}}, \; \ldots, \; \frac{1}{1 - Z'_{N(\varepsilon)} z^{-1}} \right), \qquad (11)$$
which consists of a history window of length d_A − 1 − d_q and the random projection filters. The
regressor should then estimate both the b_i's and the β_i's in (10).

⁴ One could generate different types of filters, for example those with nonlinearities, but in this work we focus
on linear AR filters to simplify the analysis.
Algorithm 1 Random Projection Filter Bank
// D_m = {(X_{i,1}, Y_{i,1}), …, (X_{i,T_i}, Y_{i,T_i})}_{i=1}^m: input data
// l: Y′ × Y → ℝ: loss function
// F: function space
// n: number of filters in the random projection filter bank
Draw Z′_1, …, Z′_n uniformly at random within the unit circle
Define filters Φ(z^{-1}) = ( 1/(1 − Z′_1 z^{-1}), …, 1/(1 − Z′_n z^{-1}) )
for i = 1 to m do
  Pass the i-th time series through all the random filters Φ(z^{-1}), i.e., X′_{i,1:T_i} = Φ(z^{-1}) * X_{i,1:T_i}
end for
Find the estimator using the extracted features (X′_{i,1:T_i}), e.g., by solving the regularized empirical risk minimization:
$$\hat{f} \leftarrow \operatorname*{argmin}_{f \in \mathcal{F}} \; \sum_{i=1}^{m}\sum_{t=1}^{T_i} l\big(f(X'_{i,t}), Y_{i,t}\big) + \lambda J(f). \qquad (12)$$
return f̂ and Φ
RPFB is not limited to time series prediction with a linear combination of filtered signals. One may
use the generated features as the input to any other estimator too. RPFB can also be used for other
problems, such as classification with time series. Algorithm 1 shows how RPFB is used alongside
a regularized empirical risk minimization algorithm. The inputs to the algorithm are the time series
data D_m, with appropriate target values created depending on the problem, the pointwise loss function
l, the function space F of the hypotheses (e.g., linear, RKHS, etc.), and the number of filters n in
the RPFB. The first step is to create the RPFB by randomly selecting n stable AR filters. We then
pass each time series in the dataset through the filter bank in order to create filtered features, i.e., the
features are created by convolving the input time series with the filters' impulse responses. Finally,
taking into account the problem type (regression or classification) and the function space, we apply
conventional machine learning algorithms to estimate f̂. Here we present a regularized empirical risk
minimizer (12) as an example, but other choices are possible too, e.g., decision trees or K-NN. We
note that the use of Φ(z^{-1}) * X_{i,1:T_i} in the description of the algorithm should be interpreted as the
convolution of the impulse response of Φ(z^{-1}), which is in the time domain, with the input signal.
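A sketch of this pipeline for classification with time series, reusing the hypothetical make_rpfb and rpfb_features helpers above; `series` (a list of (time series, label) pairs) and the choice of scikit-learn estimator are assumptions for illustration.

import numpy as np
from sklearn.linear_model import RidgeClassifier  # any conventional estimator works

poles = make_rpfb(n_filters=64)                   # step 1: random stable AR filters
X_feat, y = [], []
for x, label in series:                           # step 2: convolve with the filter bank
    X_feat.append(rpfb_features(x, poles))        # one n-dimensional feature row per time step
    y.append(np.full(len(x), label))
X_feat, y = np.vstack(X_feat), np.concatenate(y)

clf = RidgeClassifier(alpha=1.0).fit(X_feat, y)   # step 3: regularized ERM, cf. Eq. (12)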
Remark 1. In practice, whenever we pick a complex pole Z′ = a + jb with j = √−1, we also
pick its complex conjugate Z̄′ = a − jb in order to form a single filter 1/((1 − Z′z^{-1})(1 − Z̄′z^{-1})). This
guarantees that the output of this second-order filter is real-valued.
Remark 2. RPFB is described for a univariate time series X_t ∈ ℝ. To deal with multivariate time
series (i.e., X_t ∈ ℝ^d with d > 1), we may consider each dimension separately and pass each one
through RPFB. The filters in RPFB can be the same or different for each dimension. The state of
the filters, of course, depends on their input, so it would be different for each dimension. If we
have n filters and a d-dimensional time series, the resulting vector X′_{i,t} in Algorithm 1 would be
nd-dimensional. Randomly choosing multivariate filters is another possibility, which is a topic of future
research.
Remark 3. The Statistical Recurrent Unit (SRU), recently introduced by Oliva et al. [2017], has
some similarities to RPFB. SRU uses a set of exponential moving averages at various time scales
to summarize a time series, which are basically AR(1) filters with real-valued poles. SRU is more
complex, and potentially more expressive, than RPFB as it has several adjustable weights. On the
other hand, it does not have the simplicity of RPFB. Moreover, it does not yet come with the same
level of theoretical justifications as RPFB has.
4
Theoretical Guarantees
This section provides a finite-sample statistical guarantee for a time series predictor that uses RPFB
to extract features. We specifically focus on an empirical risk minimization-based (ERM) estimator.
Note that Algorithm 1 is not restricted to the time series prediction problem, or to the use of an ERM-based
estimator, but the result of this section is. We only briefly present the results, and refer the reader
to the same section in the supplementary material for more detail, including the proofs and more
discussions.
Consider the time series (X_1, X_2, …) with X_t ∈ X ⊂ [−B, B] for some finite B > 0. We denote
X* = ∪_{t≥1} X^t. The main object of interest in time series prediction is the conditional expectation of
X_{t+1} given X_{1:t}, which we denote by h*, i.e.,⁵
$$h^*(X_{1:t}) = \mathbb{E}\left[ X_{t+1} \mid X_{1:t} \right]. \qquad (13)$$
We assume that h* belongs to the space of linear dynamical systems that have M ∈ ℕ stable poles, all
with magnitude less than 1 − ε_0 for some ε_0 > 0, and an α-bounded ℓ_p-norm on the weights:
$$\mathcal{H}_{\varepsilon_0, M, p, \alpha} \triangleq \left\{ \sum_{i=1}^{M} \frac{w_i}{1 - z_i z^{-1}} \; : \; |z_i| \leq 1 - \varepsilon_0, \; \|w\|_p \leq \alpha \right\}. \qquad (14)$$
If the values of ε_0, M, p, or α are clear from the context, we may refer to H_{ε_0,M,p,α} by H. Given a
function (or filter) h ∈ H, here h(x_{1:t}) refers to the output at time t of convolving a signal x_{1:t}
through h.
To define RPFB, we randomly draw n ≥ M independent complex numbers Z′_1, …, Z′_n uniformly
within a complex circle with radius 1 − ε_0, i.e., |Z′_i| ≤ 1 − ε_0 (cf. Algorithm 1). The RPFB is
$$\Phi(z^{-1}) = \left( \frac{1}{1 - Z'_1 z^{-1}}, \; \ldots, \; \frac{1}{1 - Z'_n z^{-1}} \right).$$
Given these random poles, we define the following approximation (filter) spaces:
$$\hat{\mathcal{H}}_\varepsilon = \hat{\mathcal{H}}_{n,p,\alpha} = \left\{ \sum_{i=1}^{n} \frac{\beta_i}{1 - Z'_i z^{-1}} \; : \; \|\beta\|_p \leq \alpha \right\}. \qquad (15)$$
Consider that we have a sequence (X_1, X_2, …, X_T, X_{T+1}, X_{T+2}). By denoting Y_t = X_{t+1}, we
define ((X_1, Y_1), …, (X_T, Y_T), (X_{T+1}, Y_{T+1})). We assume that |X_t| is B-bounded almost surely.
Define the estimator ĥ by solving the following ERM:
$$\hat{h}_0 \leftarrow \operatorname*{argmin}_{h \in \hat{\mathcal{H}}_\varepsilon} \; \frac{1}{T}\sum_{t=1}^{T} |h(X_{1:t}) - Y_t|^2, \qquad \hat{h} \leftarrow \mathrm{Tr}_B\big[\hat{h}_0\big]. \qquad (16)$$
Here Tr_B[ĥ_0] truncates the values of ĥ_0 at the level of ±B, so ĥ belongs to the following space:
$$\tilde{\mathcal{H}}_{\alpha,B} = \Big\{ \mathrm{Tr}_B[h] \; : \; h \in \hat{\mathcal{H}}_\alpha \Big\}. \qquad (17)$$
A central object in our result is the notion of discrepancy, introduced by Kuznetsov and Mohri [2015].
The discrepancy captures the non-stationarity of the process with respect to the function space.⁶
Definition 1 (Discrepancy; Kuznetsov and Mohri 2015). For a stochastic process X_1, X_2, …, a
function space H: X* → ℝ, and T ∈ ℕ, define
$$\Delta_T(\mathcal{H}) \triangleq \sup_{h \in \mathcal{H}} \left\{ \mathbb{E}\Big[ |h(X_{1:T+1}) - Y_{T+1}|^2 \,\Big|\, X_{1:T+1} \Big] - \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\Big[ |h(X_{1:t}) - Y_t|^2 \,\Big|\, X_{1:t} \Big] \right\}.$$
If the value of T is clear from the context, we may use Δ(H) instead. The following is the main
theoretical result of this work.

⁵ We use h instead of f to emphasize that the discussion is only for the time series prediction
problem, and not a general estimation problem with a time series.
⁶ Our definition is a simplified version of the original definition (obtained by selecting q_t = 1/T in their notation).
Theorem 1. Consider the time series (X_1, …, X_{T+2}), and assume that |X_t| ≤ B (a.s.). Without
loss of generality suppose that B ≥ 1. Let 0 < ε_0 < 1, M ∈ ℕ, and α > 0, and assume
that the conditional expectation h*(X_{1:t}) = E[X_{t+1} | X_{1:t}] belongs to the class of linear filters
H_{ε_0,M,2,α} (14). Set an integer n ≥ M for the number of random projection filters, let Ĥ_ε =
Ĥ_{n,2,α} (15), and let the truncated space be H̃_{α,B} (17). Consider the estimator ĥ defined in (16).
Without loss of generality assume that α ≥ B(2 − ε_0)/n and T ≥ 2. Fix δ > 0. It then holds that there
exist constants c_1, c_2 > 0 such that with probability at least 1 − δ, we have
$$\big| \hat{h}(X_{1:T+1}) - h^*(X_{1:T+1}) \big|^2 \leq \frac{c_1 B^2 \alpha}{\varepsilon_0} \log^3(T) \sqrt{\frac{n \log(1/\delta)}{T}} \;+\; \frac{c_2 B^2 \alpha^2 \log(20n)}{\varepsilon_0^4\, n} \;+\; 2\Delta(\tilde{\mathcal{H}}_{\alpha,B}).$$
The upper bound has three terms: estimation error, filter approximation error, and the discrepancy.
The O(√(n/T)) term corresponds to the estimation error. It decreases as the length T of the time
series increases. As we increase the number of filters n, the upper bound shows an increase in
the estimation error. This is a manifestation of the effect of the input dimension on the error of
the estimator. The O(n^{-1}) term provides an upper bound on the filter approximation error. This
error decreases as we add more filters. This indicates that RPFB provides a good approximation to
the space of dynamical systems H_{ε_0,M,2,α} (14). Both terms show a proportional dependency on
the magnitude B of the random variables in the time series, and an inversely proportional dependency
on the minimum distance ε_0 of the poles to the unit circle. Intuitively, this is partly because the
output of a pole becomes more sensitive to its input as the pole gets closer to the unit circle. Finally, the
discrepancy term Δ(H̃_{α,B}) captures the non-stationarity of the process, and has been discussed in
detail by Kuznetsov and Mohri [2015]. Understanding the conditions under which the discrepancy gets close
to zero is an interesting topic for future research.
By setting the number of RP filters to n = T^{1/3} Θ^{2/3} / γ₀², and under the condition that Θ ≤ T, we can
simplify the upper bound to

$$\left| \hat{h}(X_{1:T+1}) - h^*(X_{1:T+1}) \right| \le \frac{c\, B^2 \Theta^{4/3} \log^3(T) \sqrt{\log(1/\delta)}}{\gamma_0^2\, T^{1/3}} \;+\; 2\Delta(\tilde{\mathcal{H}}_{\Theta,B}),$$

which holds with probability at least 1 − δ, for some constant c > 0. As T → ∞, the error converges
to the level of the discrepancy term.
We would like to comment that in the statistical part of the proof, instead of using the independent
block technique of Yu [1994] to analyze mixing processes [Doukhan, 1994], which is a common
technique in much prior work, e.g., Meir [2000]; Mohri and Rostamizadeh [2009, 2010];
Farahmand and Szepesvári [2012], we rely on the more recent notions of sequential complexities [Rakhlin
et al., 2010, 2014] and the discrepancy [Kuznetsov and Mohri, 2015] of the function-space and
stochastic-process couple.
This theorem is for Case 1 in Section 3, but a similar result also holds for Case 2. We also mention
that, as the values of M, γ₀, and Θ of the true dynamical system space H_{γ₀,M,2,Θ} are often unknown,
the number of filters n in RPFB, the size of the space M, etc., cannot be selected based on
them. Instead, one should use a model selection procedure to pick appropriate values for these
parameters.
5 Experiments

We use a ball bearing fault detection problem to empirically study RPFB and compare it with a
fixed-window history-based approach. The supplementary material provides several other experiments,
including the application of an LSTM to the very same problem, a close comparison of RPFB with
the fixed-window history-based approach on an ARMA time series prediction problem, and a heart rate
classification problem. For further empirical studies, especially in the context of fault detection and
prognosis, refer to Pourazarm et al. [2017].
Reliable operation of rotating equipment (e.g., turbines) depends on the condition of its bearings,
which makes detecting whether a bearing is faulty and requires maintenance crucially important.
We consider a bearing vibration dataset provided by the Machinery Failure Prevention
Technology (MFPT) Society in our experiments.7 Fault detection of bearings is an example of
an industrial application where computational resources are limited and fast methods are required:
e.g., only a micro-controller or a cheap CPU, and not a GPU, might be available.

[Figure 1: (Bearing Dataset) Classification error on the test dataset using RPFB and fixed-window
history-based feature sets. The RPFB results are averaged over 20 independent randomly selected
RPFB. The error bars show one standard error. (Legend: LR, RF, and SVM, each with RPFB or
Window features; x-axis: number of features, 5 to 400; y-axis: classification error.)]
The dataset consists of three univariate time series corresponding to a baseline (good condition / class
0), an outer race fault (class 1), and an inner race fault (class 2). The goal is to find a classifier that
predicts the class label at the current time t given the vibration time series X_{1:t}. In a real-world
scenario, we would train the classifier on a set of previously recorded time series, and later let it operate
on a new time series observed from a device; the goal would be to predict the class label at each time
step as new data arrives. Here, however, we split each of the three time series into training and testing
subsets. More concretely, we first pass each time series through the RPFB (or define a fixed window
of its past H values). We then split the processed time series, which has the dimension of
the number of RPFB filters or the size of the window, into training and testing sets. We select the first
3333 time steps to define the training set, and the next 3333 data points as the testing set. As we
have three classes, this makes the sizes of the training and testing sets both equal to roughly 10K. We
process each dimension of the features to have zero mean and unit variance for both feature types. We
perform 20 independent runs of RPFB, each with a new set of randomly selected filters.
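A minimal sketch of this evaluation protocol, assuming scikit-learn and the rpfb_features helper from the earlier sketch; the hyperparameters and the choice to standardize with training statistics are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def evaluate(series_per_class, poles, n_train=3333, n_test=3333):
    X_tr, y_tr, X_te, y_te = [], [], [], []
    for label, series in enumerate(series_per_class):
        feats = rpfb_features(series, poles)
        X_tr.append(feats[:n_train]); y_tr += [label] * n_train
        X_te.append(feats[n_train:n_train + n_test]); y_te += [label] * n_test
    X_tr, X_te = np.vstack(X_tr), np.vstack(X_te)
    # Standardize each feature dimension using training statistics.
    mu, sd = X_tr.mean(0), X_tr.std(0) + 1e-8
    X_tr, X_te = (X_tr - mu) / sd, (X_te - mu) / sd
    errors = {}
    for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                      ("RF", RandomForestClassifier()),
                      ("SVM", SVC(kernel="rbf"))]:
        clf.fit(X_tr, y_tr)
        errors[name] = 1.0 - clf.score(X_te, y_te)  # classification error
    return errors
```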
Figure 1 shows the classification error of three different classifiers (Logistic Regression (LR) with
ℓ₂ regularization, Random Forest (RF), and Support Vector Machine (SVM) with a Gaussian kernel)
on both feature types, with varying feature sizes. We observe that as the number of features increases,
the error of all classifiers decreases. It is also noticeable that the error depends heavily on the type
of classifier, with SVM being the best over the whole range of feature counts. The use of RPFB
instead of fixed-window history-based features generally improves the performance of LR and SVM,
but not of RF. Refer to the supplementary material for more detail on the experiment.
6 Conclusion

This paper introduced the Random Projection Filter Bank (RPFB) as a simple and effective method
for feature extraction from time series data. RPFB comes with a finite-sample error upper bound
guarantee for a class of linear dynamical systems. We believe that RPFB should be a part of the
toolbox for time series processing.
A future research direction is to better understand other dynamical system spaces, beyond the linear
one considered here, and to design other variants of RPFB beyond those defined by stable
linear autoregressive filters. Another direction is to investigate the behaviour of the discrepancy
factor.

7 Available from http://www.mfpt.org/faultdata/faultdata.htm.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful feedback.
References
Richard G. Baraniuk and Michael B. Wakin. Random projections of smooth manifolds. Foundations of Computational Mathematics, 9(1):51–77, 2009.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Paul Doukhan. Mixing: Properties and Examples, volume 85 of Lecture Notes in Statistics. Springer-Verlag, Berlin, 1994.
Amir-massoud Farahmand and Csaba Szepesvári. Regularized least-squares regression: Learning from a β-mixing sequence. Journal of Statistical Planning and Inference, 142(2):493–505, 2012.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Sham Kakade, Percy Liang, Vatsal Sharan, and Gregory Valiant. Prediction with a short memory. arXiv:1612.02526v2, 2017.
Vitaly Kuznetsov and Mehryar Mohri. Learning theory and algorithms for forecasting non-stationary time series. In Advances in Neural Information Processing Systems (NIPS - 28), pages 541–549. Curran Associates, Inc., 2015.
Mantas Lukoševičius and Herbert Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
Ron Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39(1):5–34, 2000.
Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In Advances in Neural Information Processing Systems 21, pages 1097–1104. Curran Associates, Inc., 2009.
Mehryar Mohri and Afshin Rostamizadeh. Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research (JMLR), 11:789–814, 2010. ISSN 1532-4435.
Junier B. Oliva, Barnabás Póczos, and Jeff Schneider. The statistical recurrent unit. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pages 2671–2680. PMLR, August 2017.
Alan V. Oppenheim, Ronald W. Schafer, and John R. Buck. Discrete-Time Signal Processing. Prentice Hall, second edition, 1999.
Sepideh Pourazarm, Amir-massoud Farahmand, and Daniel N. Nikovski. Fault detection and prognosis of time series data with random projection filter bank. In Annual Conference of the Prognostics and Health Management Society (PHM), pages 242–252, 2017.
Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems (NIPS - 21), pages 1313–1320, 2009.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In Advances in Neural Information Processing Systems (NIPS - 23), 2010.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 2014.
Ingo Steinwart and Andreas Christmann. Fast learning from non-i.i.d. observations. In Advances in Neural Information Processing Systems (NIPS - 22), pages 1768–1776. Curran Associates, Inc., 2009. URL http://papers.nips.cc/paper/3736-fast-learning-from-non-iid-observations.pdf.
Ingo Steinwart, Don Hush, and Clint Scovel. Learning from dependent observations. Journal of Multivariate Analysis, 100(1):175–194, 2009.
Santosh S. Vempala. The Random Projection Method. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, 2004. ISBN 9780821837931.
Larry Wasserman. All of Nonparametric Statistics (Springer Texts in Statistics). Springer, 2007.
Martha White, Junfeng Wen, Michael Bowling, and Dale Schuurmans. Optimal estimation of multivariate ARMA models. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), 2015.
Bin Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, January 1994.
6,893 | 7,235 | Filtering Variational Objectives
Chris J. Maddison^{1,3,*}, Dieterich Lawson^{2,*}, George Tucker^{2,*},
Nicolas Heess^1, Mohammad Norouzi^2, Andriy Mnih^1, Arnaud Doucet^3, Yee Whye Teh^1
^1 DeepMind, ^2 Google Brain, ^3 University of Oxford
{cmaddis, dieterichl, gjt}@google.com
Abstract
When used as a surrogate objective for maximum likelihood estimation in latent
variable models, the evidence lower bound (ELBO) produces state-of-the-art results.
Inspired by this, we consider the extension of the ELBO to a family of lower bounds
defined by a particle filter's estimator of the marginal likelihood, the filtering
variational objectives (FIVOs). FIVOs take the same arguments as the ELBO,
but can exploit a model's sequential structure to form tighter bounds. We present
results that relate the tightness of FIVO's bound to the variance of the particle filter's
estimator by considering the generic case of bounds defined as log-transformed
likelihood estimators. Experimentally, we show that training with FIVO results
in substantial improvements over training the same model architecture with the
ELBO on sequential data.
1 Introduction

Learning in statistical models via gradient descent is straightforward when the objective function
and its gradients are tractable. In the presence of latent variables, however, many objectives become
intractable. For neural generative models with latent variables, there are currently a few dominant
approaches: optimizing lower bounds on the marginal log-likelihood [1, 2], restricting to a class of
invertible models [3], or using likelihood-free methods [4, 5, 6, 7]. In this work, we focus on the
first approach and introduce filtering variational objectives (FIVOs), a tractable family of objectives
for maximum likelihood estimation (MLE) in latent variable models with sequential structure.
Specifically, let x denote an observation of an X-valued random variable. We assume that the
process generating x involves an unobserved Z-valued random variable z with joint density p(x, z)
in some family P. The goal of MLE is to recover p ∈ P that maximizes the marginal log-likelihood,
log p(x) = log ∫ p(x, z) dz.1 The difficulty in carrying out this optimization is that the log-likelihood
function is defined via a generally intractable integral. To circumvent marginalization, a common
approach [1, 2] is to optimize a variational lower bound on the marginal log-likelihood [8, 9]. The
evidence lower bound L(x, p, q) (ELBO) is the most common such bound and is defined by a
variational posterior distribution q(z|x) whose support includes p's,

$$\mathcal{L}(x, p, q) = \mathbb{E}_{q(z|x)}\!\left[\log \frac{p(x, z)}{q(z|x)}\right] = \log p(x) - \mathrm{KL}(q(z|x) \,\|\, p(z|x)) \le \log p(x). \tag{1}$$

L(x, p, q) lower-bounds the marginal log-likelihood for any choice of q, and the bound is tight when
q is the true posterior p(z|x). Thus, the joint optimum of L(x, p, q) in p and q is the MLE. In practice,
it is common to restrict q to a tractable family of distributions (e.g., a factored distribution) and to
jointly optimize the ELBO over p and q with stochastic gradient ascent [1, 2, 10, 11]. Because of
the KL penalty from q to p, optimizing (1) under these assumptions tends to force p's posterior to
satisfy the factorizing assumptions of the variational family, which reduces the capacity of the model
p. One strategy for addressing this is to decouple the tightness of the bound from the quality of q.
For example, [12] observed that Eq. (1) can be interpreted as the log of an unnormalized importance
weight with the proposal given by q, and that using N samples from the same proposal produces a
tighter bound, known as the importance weighted auto-encoder bound, or IWAE.

* Equal contribution.
1 We reuse p to denote the conditionals and marginals of the joint density.
of the marginal likelihood results in a lower bound that can be optimized for MLE. The filtering
variational objectives (FIVOs) build on this idea by treating the log of a particle filter?s likelihood
estimator as an objective function. Following [13], we call objectives defined as log-transformed
likelihood estimators Monte Carlo objectives (MCOs). In this work, we show that the tightness
of an MCO scales like the relative variance of the estimator from which it is constructed. It is
well-known that the variance of a particle filter?s likelihood estimator scales more favourably than
simple importance sampling for models with sequential structure [14, 15]. Thus, FIVO can potentially
form a much tighter bound on the marginal log-likelihood than IWAE.
The main contributions of this work are introducing filtering variational objectives and a more
careful study of Monte Carlo objectives. In Section 2, we review maximum likelihood estimation via
maximizing the ELBO. In Section 3, we study Monte Carlo objectives and provide some of their basic
properties. We define filtering variational objectives in Section 4, discuss details of their optimization,
and present a sharpness result. Finally, we cover related work and present experiments showing that
sequential models trained with FIVO outperform models trained with ELBO or IWAE in practice.
2 Background

We briefly review techniques for optimizing the ELBO as a surrogate MLE objective. We restrict our
focus to latent variable models in which the model p_θ(x, z) factors into tractable conditionals p_θ(z)
and p_θ(x|z) that are parameterized differentiably by parameters θ. MLE in these models is then the
problem of optimizing log p_θ(x) in θ. The expectation-maximization (EM) algorithm is an approach
to this problem which can be seen as coordinate ascent, fully maximizing L(x, p_θ, q) alternately in q
and θ at each iteration [16, 17, 18]. Yet, EM rarely applies in general, because maximizing over q for
a fixed θ corresponds to a generally intractable inference problem.
Instead, an approach with mild assumptions on the model is to perform gradient ascent following a
Monte Carlo estimator of the ELBO's gradient [19, 10]. We assume that q is taken from a family of
distributions parameterized differentiably by parameters φ. We can follow an unbiased estimator of the
ELBO's gradient by sampling z ∼ q_φ(z|x) and updating the parameters by θ' = θ + η∇_θ log p_θ(x, z)
and φ' = φ + η(log p_θ(x, z) − log q_φ(z|x))∇_φ log q_φ(z|x), where the gradients are computed
conditional on the sample z and η is a learning rate. Such estimators follow the ELBO's gradient in
expectation, but variance reduction techniques are usually necessary [10, 20, 13].
A lower variance gradient estimator can be derived if q_φ is a reparameterizable distribution [1, 2, 21].
Reparameterizable distributions are those that can be simulated by sampling from a distribution
ε ∼ d(ε), which does not depend on φ, and then applying a deterministic transformation z =
f_φ(x, ε). When p_θ, q_φ, and f_φ are differentiable, an unbiased estimator of the ELBO gradient consists
of sampling ε and updating the parameters by (θ', φ') = (θ, φ) + η∇_{(θ,φ)}(log p_θ(x, f_φ(x, ε)) −
log q_φ(f_φ(x, ε)|x)). Given ε, the gradients of the sampling process can flow through z = f_φ(x, ε).
Unfortunately, when the variational family of q_φ is restricted, following gradients of
−KL(q_φ(z|x) ‖ p_θ(z|x)) tends to reduce the capacity of the model p_θ to match the assumptions
of the variational family. This KL penalty can be "removed" by considering generalizations of the
ELBO whose tightness can be controlled by means other than the closeness of p and q, e.g., [12].
We consider this in the next section.
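As a concrete toy example (ours, not from the paper), the following numpy sketch computes a single-sample reparameterized ELBO estimate for p(z) = N(0, 1), p(x|z) = N(z, 1), and q_φ(z|x) = N(μ, σ²); in practice, the gradients of this quantity with respect to (θ, φ) are obtained with automatic differentiation.

```python
import numpy as np

def log_normal(y, mean, std):
    # log density of N(mean, std^2) evaluated at y
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((y - mean) / std) ** 2

def elbo_estimate(x, mu, sigma, rng):
    eps = rng.standard_normal()          # eps ~ d(eps), independent of phi
    z = mu + sigma * eps                 # z = f_phi(x, eps), reparameterized
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)  # log p(x, z)
    log_q = log_normal(z, mu, sigma)                              # log q(z | x)
    return log_joint - log_q             # unbiased estimate of L(x, p, q)

rng = np.random.default_rng(0)
print(elbo_estimate(x=0.5, mu=0.3, sigma=0.8, rng=rng))
```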
3 Monte Carlo Objectives (MCOs)

Monte Carlo objectives (MCOs) [13] generalize the ELBO to objectives defined by taking the log
of a positive, unbiased estimator of the marginal likelihood. The key property of MCOs is that
they are lower bounds on the marginal log-likelihood, and thus can be used for MLE. Motivated
by the previous section, we present results on the convergence of generic MCOs to the marginal
log-likelihood and show that the tightness of an MCO is closely related to the variance of the estimator
that defines it.
One can verify that the ELBO is a lower bound by using the concavity of log and Jensen's inequality,

$$\mathbb{E}_{q(z|x)}\!\left[\log \frac{p(x, z)}{q(z|x)}\right] \le \log \int \frac{p(x, z)}{q(z|x)}\, q(z|x)\, dz = \log p(x). \tag{2}$$

This argument relies only on the unbiasedness of p(x, z)/q(z|x) when z ∼ q(z|x). Thus, we
can generalize this by considering any unbiased marginal likelihood estimator p̂_N(x) and treating
E[log p̂_N(x)] as an objective function over models p. Here N ∈ N indexes the amount of computation
needed to simulate p̂_N(x), e.g., the number of samples or particles.

Definition 1. Monte Carlo Objectives. Let p̂_N(x) be an unbiased positive estimator of p(x),
E[p̂_N(x)] = p(x). Then the Monte Carlo objective L_N(x, p) over p ∈ P defined by p̂_N(x) is

$$\mathcal{L}_N(x, p) = \mathbb{E}[\log \hat{p}_N(x)]. \tag{3}$$
For example, the ELBO is constructed from a single unnormalized importance weight p̂(x) =
p(x, z)/q(z|x). The IWAE bound [12] takes p̂_N(x) to be the average of N i.i.d. importance weights,

$$\mathcal{L}_N^{\mathrm{IWAE}}(x, p, q) = \mathbb{E}_{q(z^i|x)}\!\left[\log\!\left(\frac{1}{N} \sum_{i=1}^{N} \frac{p(x, z^i)}{q(z^i|x)}\right)\right]. \tag{4}$$
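For the same toy model as above, Eq. (4) is a log-mean-exp of i.i.d. log importance weights, which can be computed stably as follows (reusing log_normal from the earlier sketch; an illustration, not the paper's code):

```python
import numpy as np
from scipy.special import logsumexp

def iwae_estimate(x, mu, sigma, N, rng):
    eps = rng.standard_normal(N)
    z = mu + sigma * eps
    log_w = (log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
             - log_normal(z, mu, sigma))       # N log importance weights
    return logsumexp(log_w) - np.log(N)        # log( (1/N) * sum_i w_i )
```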
We consider additional examples in the Appendix. To avoid notational clutter, we omit the arguments
of an MCO, e.g., the observations x or model p, when the default arguments are clear from context.
Whether we can compute stochastic gradients of L_N efficiently depends on the specific form of the
estimator and the underlying random variables that define it.
Many likelihood estimators p̂_N(x) converge to p(x) almost surely as N → ∞ (known as strong
consistency). The advantage of a consistent estimator is that its MCO can be driven towards log p(x)
by increasing N. We present sufficient conditions for this convergence and a description of the rate:

Proposition 1. Properties of Monte Carlo Objectives. Let L_N(x, p) be a Monte Carlo objective
defined by an unbiased positive estimator p̂_N(x) of p(x). Then,
(a) (Bound) L_N(x, p) ≤ log p(x).
(b) (Consistency) If log p̂_N(x) is uniformly integrable (see Appendix for definition) and p̂_N(x)
is strongly consistent, then L_N(x, p) → log p(x) as N → ∞.
(c) (Asymptotic Bias) Let g(N) = E[(p̂_N(x) − p(x))⁶] be the 6th central moment. If the 1st
inverse moment is bounded, lim sup_{N→∞} E[p̂_N(x)⁻¹] < ∞, then

$$\log p(x) - \mathcal{L}_N(x, p) = \frac{1}{2} \operatorname{var}\!\left(\frac{\hat{p}_N(x)}{p(x)}\right) + O\!\left(\sqrt{g(N)}\right). \tag{5}$$

Proof. See the Appendix for the proof and a sufficient condition for controlling the first inverse
moment when p̂_N(x) is the average of i.i.d. random variables.
In some cases, convergence of the bound to log p(x) is monotonic, e.g., IWAE [12], but this is not
true in general. The relative variance of estimators, var(p̂_N(x)/p(x)), tends to be well studied, so
property (c) gives us a tool for comparing the convergence rate of distinct MCOs. For example,
[14, 15] study marginal likelihood estimators defined by particle filters and find that the relative
variance of these estimators scales favorably in comparison to naive importance sampling. This
suggests that a particle filter's MCO, introduced in the next section, will generally be a tighter bound
than IWAE.
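Property (c) can be checked numerically in the toy model above, where the exact marginal p(x) = N(x; 0, 2) is available in closed form. The following sketch (our own illustration, reusing log_normal and iwae_estimate from the sketches above) compares the bound's gap against half the estimated relative variance; the two agree only up to the higher-moment terms of Eq. (5).

```python
import numpy as np

x0, mu, sigma, N, reps = 0.5, 0.3, 0.8, 8, 100000
rng = np.random.default_rng(1)
log_px = log_normal(x0, 0.0, np.sqrt(2.0))    # exact marginal: p(x) = N(x; 0, 2)

log_phat = np.array([iwae_estimate(x0, mu, sigma, N, rng) for _ in range(reps)])
gap = log_px - log_phat.mean()                # log p(x) - L_N(x, p)
rel_var = np.exp(log_phat - log_px).var()     # var( p_hat_N(x) / p(x) )
print(gap, 0.5 * rel_var)                     # approximately equal for large N
```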
Algorithm 1 Simulating L^FIVO_N(x_{1:T}, p, q)

 1: FIVO(x_{1:T}, p, q, N):
 2:   {w_0^i}_{i=1}^N = {1/N}_{i=1}^N
 3:   for t ∈ {1, ..., T} do
 4:     for i ∈ {1, ..., N} do
 5:       z_t^i ∼ q_t(z_t | x_{1:t}, z_{1:t−1}^i)
 6:       z_{1:t}^i = CONCAT(z_{1:t−1}^i, z_t^i)
 7:     p̂_t = Σ_{i=1}^N w_{t−1}^i α_t(z_{1:t}^i)
 8:     p̂_N(x_{1:t}) = p̂_N(x_{1:t−1}) · p̂_t
 9:     {w_t^i}_{i=1}^N = {w_{t−1}^i α_t(z_{1:t}^i) / p̂_t}_{i=1}^N
10:     if resampling criteria satisfied by {w_t^i}_{i=1}^N then
11:       {w_t^i, z_{1:t}^i}_{i=1}^N = RSAMP({w_t^i, z_{1:t}^i}_{i=1}^N)
12:   return log p̂_N(x_{1:T})

13: RSAMP({w^i, z^i}_{i=1}^N):
14:   for i ∈ {1, ..., N} do
15:     a ∼ Categorical({w^i}_{i=1}^N)
16:     y^i = z^a
17:   return {1/N, y^i}_{i=1}^N
4 Filtering Variational Objectives (FIVOs)

The filtering variational objectives (FIVOs) are a family of MCOs defined by the marginal likelihood
estimator of a particle filter. For models with sequential structure, e.g., latent variable models of audio
and text, the relative variance of a naive importance sampling estimator tends to scale exponentially
in the number of steps. In contrast, the relative variance of particle filter estimators can scale more
favorably with the number of steps (linearly in some cases) [14, 15]. Thus, the results of Section 3
suggest that FIVOs can serve as tighter objectives than IWAE for MLE in sequential models.
Let our observations be sequences of T X-valued random variables denoted x_{1:T}, where x_{i:j} ≡
(x_i, ..., x_j). We also assume that the data generation process relies on a sequence of T unobserved
Z-valued latent variables denoted z_{1:T}. We focus on sequential latent variable models that factor as a
series of tractable conditionals, p(x_{1:T}, z_{1:T}) = p_1(x_1, z_1) ∏_{t=2}^T p_t(x_t, z_t | x_{1:t−1}, z_{1:t−1}).
A particle filter is a sequential Monte Carlo algorithm, which propagates a population of N weighted
particles for T steps using a combination of importance sampling and resampling steps; see Alg. 1.
In detail, the particle filter takes as arguments an observation x_{1:T}, the number of particles N, the
model distribution p, and a variational posterior q(z_{1:T} | x_{1:T}) factored over t,

$$q(z_{1:T} \mid x_{1:T}) = \prod_{t=1}^{T} q_t(z_t \mid x_{1:t}, z_{1:t-1}). \tag{6}$$
The particle filter maintains a population {w_{t−1}^i, z_{1:t−1}^i}_{i=1}^N of particles z_{1:t−1}^i with weights w_{t−1}^i.
At step t, the filter independently proposes an extension z_t^i ∼ q_t(z_t | x_{1:t}, z_{1:t−1}^i) to each particle's
trajectory z_{1:t−1}^i. The weights w_{t−1}^i are multiplied by the incremental importance weights,

$$\alpha_t(z_{1:t}^i) = \frac{p_t(x_t, z_t^i \mid x_{1:t-1}, z_{1:t-1}^i)}{q_t(z_t^i \mid x_{1:t}, z_{1:t-1}^i)}, \tag{7}$$

and renormalized. If the current weights w_t^i satisfy a resampling criterion, then a resampling step is
performed and N particles z_{1:t}^i are sampled in proportion to their weights from the current population
with replacement. Common resampling schemes include resampling at every step and resampling
if the effective sample size (ESS) of the population, (Σ_{i=1}^N (w_t^i)²)⁻¹, drops below N/2 [22]. After
resampling, the weights are reset to 1/N. Otherwise, the particles z_{1:t}^i are copied to the next step along
with the accumulated weights. See Fig. 1 for a visualization.
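To make Alg. 1 concrete, here is a minimal numpy sketch of the estimator for a toy Gaussian state-space model with a bootstrap proposal q_t = p(z_t | z_{t−1}) and ESS-based resampling. This is our own illustration, not the authors' code; the trajectories z_{1:t} are not stored since only the likelihood estimate is needed.

```python
import numpy as np

def log_normal(y, mean, std):
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((y - mean) / std) ** 2

def fivo_estimate(x, N, rng):
    """log p_hat_N(x_{1:T}) for z_1 ~ N(0,1), z_t ~ N(z_{t-1},1), x_t ~ N(z_t,1).

    With the bootstrap proposal, the incremental weight is alpha_t = p(x_t | z_t).
    """
    z = np.zeros(N)
    log_w = np.full(N, -np.log(N))                 # w_0^i = 1/N
    log_p_hat = 0.0
    for t in range(len(x)):
        z = z + rng.standard_normal(N)             # propose z_t^i
        log_inc = log_w + log_normal(x[t], z, 1.0)
        m = log_inc.max()
        log_p_t = m + np.log(np.exp(log_inc - m).sum())  # log p_hat_t, stable
        log_p_hat += log_p_t
        log_w = log_inc - log_p_t                  # renormalized log-weights
        w = np.exp(log_w)
        if 1.0 / np.sum(w ** 2) < N / 2:           # ESS < N/2: resample
            z = z[rng.choice(N, size=N, p=w)]      # RSAMP with replacement
            log_w = np.full(N, -np.log(N))
    return log_p_hat

rng = np.random.default_rng(0)
print(fivo_estimate(rng.standard_normal(25), N=16, rng=rng))
```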
Instead of viewing Alg. 1 as an inference algorithm, we treat the quantity E[log p̂_N(x_{1:T})] as an
objective function over p. Because p̂_N(x_{1:T}) is an unbiased estimator of p(x_{1:T}), proven in the
Appendix and in [23, 24, 25, 26], it defines an MCO, which we call FIVO:

Definition 2. Filtering Variational Objectives. Let log p̂_N(x_{1:T}) be the output of Alg. 1 with inputs
(x_{1:T}, p, q, N). Then L^FIVO_N(x_{1:T}, p, q) = E[log p̂_N(x_{1:T})] is a filtering variational objective.

p̂_N(x_{1:T}) is a strongly consistent estimator [23, 24]. So if log p̂_N(x_{1:T}) is uniformly integrable, then
L^FIVO_N(x_{1:T}, p, q) → log p(x_{1:T}) as N → ∞. Resampling is the distinguishing feature of L^FIVO_N; if
resampling is removed, then FIVO reduces to IWAE. Resampling does add an amount of immediate
variance, but it allows the filter to discard low-weight particles with high probability.
[Figure 1: Visualizing FIVO. (Left) Resample from particle trajectories to determine inheritance in the
next step; (middle) propose with q_t and accumulate loss log p̂_t; (right) gradients (in the reparameterized
case) flow through the lattice, with objective gradients in solid red and resampling gradients in dotted blue.]
This has the effect of refocusing the distribution of particles to regions of higher mass under the posterior, and in
some sequential models it can reduce the variance from exponential to linear in the number of time
steps [14, 15]. Resampling is a greedy process, and it is possible that a particle discarded at step t
could have attained a high mass at step T. In practice, the best trade-off is to use adaptive resampling
schemes [22]. If, for a given x_{1:T}, p, q, a particle filter's likelihood estimator improves over simple
importance sampling in terms of variance, we expect L^FIVO_N to be a tighter bound than L or L^IWAE_N.
4.1 Optimization

The FIVO bound can be optimized with the same stochastic gradient ascent framework used for
the ELBO. We found in practice it was effective simply to follow a Monte Carlo estimator of the
biased gradient E[∇_{(θ,φ)} log p̂_N(x_{1:T})] with reparameterized z_t^i. This gradient estimator is biased,
as the full FIVO gradient has three kinds of terms: it has the term E[∇_{θ,φ} log p̂_N(x_{1:T})], where
∇_{θ,φ} log p̂_N(x_{1:T}) is defined conditional on the random variables of Alg. 1; it has gradient terms for
every distribution of Alg. 1 that depends on the parameters; and, if adaptive resampling is used, then
it has additional terms that account for the change in FIVO with respect to the decision to resample.
In this section, we derive the FIVO gradient when the z_t^i are reparameterized and a fixed resampling
schedule is followed. We derive the full gradient in the Appendix.
In more detail, we assume that p and q are parameterized in a differentiable way by θ and φ. Assume
that q is from a reparameterizable family and that the z_t^i of Alg. 1 are reparameterized. Assume that we
use a fixed resampling schedule, and let I(resampling at step t) be an indicator function indicating
whether a resampling occurred at step t. Now, L^FIVO_N depends on the parameters via log p̂_N(x_{1:T}) and
the resampling probabilities w_t^i in the density. Thus,

$$\nabla_{(\theta,\varphi)} \mathcal{L}_N^{\mathrm{FIVO}} = \mathbb{E}\!\left[ \nabla_{(\theta,\varphi)} \log \hat{p}_N(x_{1:T}) + \sum_{t=1}^{T} \sum_{i=1}^{N} \mathbb{I}(\text{resampling at step } t) \log \frac{\hat{p}_N(x_{1:T})}{\hat{p}_N(x_{1:t})} \nabla_{(\theta,\varphi)} \log w_t^i \right]. \tag{8}$$

Given a single forward pass of Alg. 1 with reparameterized z_t^i, the terms inside the expectation form
a Monte Carlo estimator of Eq. (8). However, the terms from resampling events contribute the
majority of the variance of the estimator. Thus, the gradient estimator that we found most effective
in practice consists only of the gradient ∇_{(θ,φ)} log p̂_N(x_{1:T}), the solid red arrows of Figure 1. We
explore this experimentally in Section 6.3.
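In an autodiff framework, the biased estimator has a simple mechanical reading: sample the ancestor indices from detached weights, so that no gradient flows through the resampling decision, and backpropagate only through log p̂_N. The following hypothetical PyTorch sketch does this for the toy state-space model above; the paper's implementation is in TensorFlow, and all names and model choices here are our own.

```python
import math
import torch

def neg_fivo(x, mu0, N=8):
    """Negative FIVO bound for a toy SSM, with the biased-gradient convention.

    Model: z_1 ~ N(mu0, 1), z_t ~ N(z_{t-1}, 1), x_t ~ N(z_t, 1), bootstrap
    proposal. Resampling indices come from w.detach(), so backward() follows
    only the E[grad log p_hat_N] term of Eq. (8).
    """
    normal = torch.distributions.Normal
    z = mu0 + torch.randn(N)                       # reparameterized z_1
    log_w = torch.full((N,), -math.log(N))
    log_p_hat = torch.zeros(())
    for t in range(len(x)):
        if t > 0:
            z = z + torch.randn(N)                 # reparameterized z_t
        log_inc = log_w + normal(z, 1.0).log_prob(x[t])
        log_p_t = torch.logsumexp(log_inc, dim=0)
        log_p_hat = log_p_hat + log_p_t
        log_w = log_inc - log_p_t
        w = torch.exp(log_w)
        if 1.0 / torch.sum(w ** 2) < N / 2:        # ESS resampling criterion
            idx = torch.multinomial(w.detach(), N, replacement=True)
            z, log_w = z[idx], torch.full((N,), -math.log(N))
    return -log_p_hat

mu0 = torch.zeros((), requires_grad=True)
loss = neg_fivo(torch.randn(20), mu0)
loss.backward()                                    # biased FIVO gradient w.r.t. mu0
```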
4.2 Sharpness

As with the ELBO, FIVO is a variational objective taking a variational posterior q as an argument.
An important question is whether FIVO achieves the marginal log-likelihood at its optimal q. We can
only guarantee this for models in which z_{1:t−1} and x_t are independent given x_{1:t−1}.

Proposition 2. Sharpness of Filtering Variational Objectives. Let L^FIVO_N(x_{1:T}, p, q) be a FIVO, and
let q*(x_{1:T}, p) = argmax_q L^FIVO_N(x_{1:T}, p, q). If p has independence structure such that p(z_{1:t−1} | x_{1:t}) =
p(z_{1:t−1} | x_{1:t−1}) for t ∈ {2, ..., T}, then

q*(x_{1:T}, p)(z_{1:T}) = p(z_{1:T} | x_{1:T})  and  L^FIVO_N(x_{1:T}, p, q*(x_{1:T}, p)) = log p(x_{1:T}).

Proof. See Appendix.
Most models do not satisfy this assumption, and deriving the optimal q in general is complicated by
the resampling dynamics. For the restricted model class of Proposition 2, the optimal q_t does
not condition on future observations x_{t+1:T}. We explored this experimentally with richer models
in Section 6.4, and found that allowing q_t to condition on x_{t+1:T} does not reliably improve FIVO.
This is consistent with the view of resampling as a greedy process that responds to each intermediate
distribution as if it were the final one. Still, we found that the impact of this effect was outweighed by the
advantage of optimizing a tighter bound.
5 Related Work

The marginal log-likelihood is a central quantity in statistics and probability, and there has long been
an interest in bounding it [27]. The literature relating to the bounds we call Monte Carlo objectives
has typically focused on the problem of estimating the marginal likelihood itself. [28, 29] use Jensen's
inequality in a forward and reverse estimator to detect the failure of inference methods. IWAE [12] is
a clear influence on this work, and FIVO can be seen as an extension of this bound. The ELBO enjoys
a long history [8], and there have been efforts to improve the ELBO itself. [30] generalize the ELBO
by considering arbitrary operators of the model and variational posterior. More closely related to
this work is a body of work improving the ELBO by increasing the expressiveness of the variational
posterior. For example, [31, 32] augment the variational posterior with deterministic transformations
with fixed Jacobians, and [33] extend the variational posterior to admit a Markov chain.
Other approaches to learning in neural latent variable models include [34], who use importance
sampling to approximate gradients under the posterior, and [35], who use sequential Monte Carlo
to approximate gradients under the posterior. These are distinct from our contribution in the sense
that for them inference for the sake of estimation is the ultimate goal. To our knowledge, the idea
of treating the output of inference as an objective in and of itself, while not completely novel, has
not been fully appreciated in the literature. Still, this idea shares inspiration with methods that
optimize the convergence of Markov chains [36].
We note that the idea to optimize the log estimator of a particle filter was independently and
concurrently considered in [37, 38]. In [37], the bound we call FIVO is cast as a tractable lower bound
on the ELBO defined by the particle filter's non-parametric approximation to the posterior. [38]
additionally derive an expression for FIVO's bias as the KL between the filter's distribution and a
certain target process. Our work is distinguished by our study of the convergence of MCOs in N,
which includes FIVO, our investigation of FIVO sharpness, and our experimental results on stochastic
RNNs.
6 Experiments

In our experiments, we sought to: (a) compare models trained with the ELBO, IWAE, and FIVO bounds
in terms of final test log-likelihoods, (b) explore the effect of the resampling gradient terms on FIVO,
(c) investigate how the lack of sharpness affects FIVO, and (d) consider how models trained with
FIVO use the stochastic state. To explore these questions, we trained variational recurrent neural
networks (VRNN) [39] with the ELBO, IWAE, and FIVO bounds using TensorFlow [40] on two
benchmark sequential modeling tasks: natural speech waveforms and polyphonic music. These
datasets are known to be difficult to model without stochastic latent states [41].
The VRNN is a sequential latent variable model that combines a deterministic recurrent neural
network (RNN) with stochastic latent states z_t at each step. The observation distribution over x_t is
conditioned directly on z_t and indirectly on z_{1:t−1} via the RNN's state h_t(z_{t−1}, x_{t−1}, h_{t−1}). For a
length-T sequence, the model's posterior factors into the conditionals
∏_{t=1}^T p_t(z_t | h_t(z_{t−1}, x_{t−1}, h_{t−1})) g_t(x_t | z_t, h_t(z_{t−1}, x_{t−1}, h_{t−1})), and the variational posterior
factors as ∏_{t=1}^T q_t(z_t | h_t(z_{t−1}, x_{t−1}, h_{t−1}), x_t). All distributions over latent variables are factorized
Gaussians, and the output distributions g_t depend on the dataset. The RNN is a single-layer LSTM,
and the conditionals are parameterized by fully connected neural networks with one hidden layer
of the same size as the LSTM hidden layer. We used the residual parameterization [41] for the
variational posterior.
Pianoroll (nats per timestep):

N    Bound   Nottingham   JSB     MuseData   Piano-midi.de
4    ELBO    -3.00        -8.60   -7.15      -7.81
     IWAE    -2.75        -7.86   -7.20      -7.86
     FIVO    -2.68        -6.90   -6.20      -7.76
8    ELBO    -3.01        -8.61   -7.19      -7.83
     IWAE    -2.90        -7.40   -7.15      -7.84
     FIVO    -2.77        -6.79   -6.12      -7.45
16   ELBO    -3.02        -8.63   -7.18      -7.85
     IWAE    -2.85        -7.41   -7.13      -7.79
     FIVO    -2.58        -6.72   -5.89      -7.43

TIMIT (nats per sequence, relative to ELBO with N = 4):

N    Bound   64 units   256 units
4    ELBO    0          10,438
     IWAE    -160       11,054
     FIVO    5,691      17,822
8    ELBO    2,771      9,819
     IWAE    3,977      11,623
     FIVO    6,023      21,449
16   ELBO    1,676      9,918
     IWAE    3,236      13,069
     FIVO    8,630      21,536

Table 1: Test set marginal log-likelihood bounds for models trained with ELBO, IWAE, and FIVO.
For ELBO and IWAE models, we report max{L, L^IWAE_128, L^FIVO_128}. For FIVO models, we report L^FIVO_128.
Pianoroll results are in nats per timestep; TIMIT results are in nats per sequence relative to ELBO
with N = 4. For details on our evaluation methodology and absolute numbers, see the Appendix.
For FIVO we resampled when the ESS of the particles dropped below N/2. For FIVO and IWAE we
used a batch size of 4, and for the ELBO we used batch sizes of 4N to match computational budgets
(resampling is O(N) with the alias method). For all models we report bounds using the variational
posterior trained jointly with the model. For models trained with FIVO we report L^FIVO_128. To provide
strong baselines, we report the maximum across bounds, max{L, L^IWAE_128, L^FIVO_128}, for models trained
with ELBO and IWAE. Additional details are in the Appendix.
6.1 Polyphonic Music

We evaluated VRNNs trained with the ELBO, IWAE, and FIVO bounds on 4 polyphonic music
datasets: the Nottingham folk tunes, the JSB chorales, the MuseData library of classical piano and
orchestral music, and the Piano-midi.de MIDI archive [42]. Each dataset is split into standard train,
valid, and test sets and is represented as a sequence of 88-dimensional binary vectors denoting the
notes active at the current timestep. We mean-centered the input data and modeled the output as a set
of 88 factorized Bernoulli variables. We used 64 units for the RNN hidden state and latent state size
for all polyphonic music models, except for the JSB chorales models, which used 32 units. We report
bounds on average log-likelihood per timestep in Table 1. Models trained with the FIVO bound
significantly outperformed models trained with either the ELBO or the IWAE bounds on all four
datasets. In some cases, the improvements exceeded 1 nat per timestep, and in all cases optimizing
FIVO with N = 4 outperformed optimizing IWAE or ELBO with N ∈ {4, 8, 16}.
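For these pianoroll datasets, the per-timestep emission term is a sum of 88 Bernoulli log-probabilities. A numerically stable way to compute it from logits (a standard pattern we include for illustration, not code from the paper):

```python
import numpy as np

def bernoulli_logprob(x, logits):
    """Sum over the 88 note dimensions of log Bernoulli(x; sigmoid(logits)).

    Uses log sigmoid(l) = -softplus(-l) and log(1 - sigmoid(l)) = -softplus(l)
    for stability. x is a 0/1 vector; logits is a real vector of the same shape.
    """
    softplus = lambda a: np.logaddexp(0.0, a)   # log(1 + exp(a)), stable
    return np.sum(x * -softplus(-logits) + (1 - x) * -softplus(logits))

x = np.zeros(88); x[[60, 64, 67]] = 1.0          # a C-major-ish chord
print(bernoulli_logprob(x, logits=np.zeros(88)))  # equals 88 * log(1/2)
```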
6.2 Speech

The TIMIT dataset is a standard benchmark for sequential models that contains 6300 utterances
with an average duration of 3.1 seconds, spoken by 630 different speakers. The 6300 utterances are
divided into a training set of size 4620 and a test set of size 1680. We further divided the training
set into a validation set of size 231 and a training set of size 4389, with the splits exactly as in
[41]. Each TIMIT utterance is represented as a sequence of real-valued amplitudes, which we split
into a sequence of 200-dimensional frames, as in [39, 41]. Data preprocessing was limited to mean
centering and variance normalization, as in [41]. For TIMIT, the output distribution was a factorized
Gaussian, and we report the average log-likelihood bound per sequence relative to models trained
with the ELBO. Again, models trained with FIVO significantly outperformed models trained with
IWAE or the ELBO; see Table 1.
6.3 Resampling Gradients

All models in this work (except those in this section) were trained with gradients that did not include
the term in Eq. (8) that comes from resampling steps. We omitted this term because it has an outsized
effect on gradient variance, often increasing it by 6 orders of magnitude. To explore the effects of this
term experimentally, we trained VRNNs with and without the resampling gradient term on the TIMIT
and polyphonic music datasets. When using the resampling term, we attempted to control its variance
using a moving-average baseline linear in the number of timesteps.
[Figure 2: (Left) Graph of L^FIVO_128 over training, comparing models trained with and without the
resampling gradient terms on TIMIT with N = 4 (x-axis: millions of gradient updates; y-axis: train
log-likelihood). (Right) KL divergence from q(z_{1:T} | x_{1:T}) to p(z_{1:T}) for FIVO, IWAE, and ELBO
models trained on the JSB chorales with N = 16 (log scale; x-axis: thousands of gradient updates).]
Bound     Nottingham   JSB     MuseData   Piano-midi.de   TIMIT
ELBO      -2.40        -5.48   -6.54      -6.68           0
ELBO+s    -2.59        -5.53   -6.48      -6.77           -925
IWAE      -2.52        -5.77   -6.54      -6.74           1,469
IWAE+s    -2.37        -4.63   -6.47      -6.74           2,630
FIVO      -2.29        -4.08   -5.80      -6.41           6,991
FIVO+s    -2.34        -3.83   -5.87      -6.34           9,773

Table 2: Train set marginal log-likelihood bounds for models comparing smoothing (+s) and
non-smoothing variational posteriors. We report max{L, L^IWAE_128, L^FIVO_128} for ELBO and IWAE
models and L^FIVO_128 for FIVO models. All models were trained with N = 4. Pianoroll results are in
nats per timestep; TIMIT results are in nats per sequence relative to the non-smoothing ELBO. For
details on our evaluation methodology and absolute numbers, see the Appendix.
For all datasets, models trained without the resampling gradient term outperformed models trained
with the term by a large margin on both the training set and held-out data. Many runs with resampling
gradients failed to improve beyond random initialization. A representative pair of train log-likelihood
curves is shown in Figure 2; gradients without the resampling term led to earlier convergence and a
better solution. We stress that this is an empirical result; in principle, biased gradients can lead to
divergent behaviour. We leave exploring strategies to reduce the variance of the unbiased estimator to
future work.
6.4 Sharpness

FIVO does not achieve the marginal log-likelihood at its optimal variational posterior q*, because the
optimal q* does not condition on future observations (see Section 4.2). In contrast, ELBO and IWAE
are sharp, and their optimal posteriors depend on future observations. To investigate the effects of
this, we defined a smoothing variant of the VRNN in which q takes as additional input the hidden
state of a deterministic RNN run backwards over the observations, allowing q to condition on future
observations. We trained smoothing VRNNs using ELBO, IWAE, and FIVO, and report evaluation
on the training set (to isolate the effect on optimization performance) in Table 2. Smoothing helped
models trained with IWAE, but not enough to outperform models trained with FIVO. As expected,
smoothing did not reliably improve models trained with FIVO. Test set performance was similar; see
the Appendix for details.
6.5 Use of Stochastic State

A known pathology when training stochastic latent variable models with the ELBO is that the
stochastic states can go unused. Empirically, this is associated with the collapse of the variational
posterior q(z|x) onto the model prior p(z) [43]. To investigate this, we plot the KL divergence from
q(z_{1:T} | x_{1:T}) to p(z_{1:T}) averaged over the dataset (Figure 2). Indeed, the KL of models trained with
ELBO collapsed during training, whereas the KL of models trained with FIVO remained high, even
while achieving a higher log-likelihood bound.
7 Conclusions

We introduced the family of filtering variational objectives, a class of lower bounds on the log
marginal likelihood that extend the evidence lower bound. FIVOs are suited for MLE in neural latent
variable models. We trained models with the ELBO, IWAE, and FIVO bounds and found that the
models trained with FIVO significantly outperformed other models across four polyphonic music
modeling tasks and a speech waveform modeling task. Future work will include exploring control
variates for the resampling gradients, FIVOs defined by more sophisticated filtering algorithms, and
new MCOs based on differentiable operators, like leapfrog operators with deterministically annealed
temperatures. In general, we hope that this paper inspires the machine learning community to take a
fresh look at the literature on marginal likelihood estimators, seeing them as objectives instead of
algorithms for inference.
Acknowledgments
We thank Matt Hoffman, Matt Johnson, Danilo J. Rezende, Jascha Sohl-Dickstein, and Theophane
Weber for helpful discussions and support in this project. A. Doucet was partially supported by the
EPSRC grant EP/K000276/1. Y. W. Teh?s research leading to these results has received funding
from the European Research Council under the European Union?s Seventh Framework Programme
(FP7/2007-2013) ERC grant agreement no. 617071.
References
[1] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.
[2] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
[3] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016.
[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[5] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
[6] Dustin Tran, Rajesh Ranganath, and David M. Blei. Deep and hierarchical implicit models. arXiv preprint arXiv:1702.08896, 2017.
[7] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
[8] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[9] Matthew J. Beal. Variational algorithms for approximate Bayesian inference. 2003.
[10] Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In AISTATS, 2014.
[11] Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic differentiation variational inference. arXiv preprint arXiv:1603.00788, 2016.
[12] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.
[13] Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
[14] Frédéric Cérou, Pierre Del Moral, and Arnaud Guyader. A nonasymptotic theorem for unnormalized Feynman–Kac particle models. Ann. Inst. H. Poincaré B, 47(3):629–649, 2011.
[15] Jean Bérard, Pierre Del Moral, and Arnaud Doucet. A lognormal central limit theorem for particle approximations of normalizing constants. Electron. J. Probab., 19(94):1–28, 2014.
[16] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Stat. Methodol., pages 1–38, 1977.
[17] C. F. Jeff Wu. On the convergence properties of the EM algorithm. Ann. Stat., pages 95–103, 1983.
[18] Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368. Springer, 1998.
[19] Matthew D. Hoffman, David M. Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[20] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[21] Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.
[22] Arnaud Doucet and Adam M. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan and B. Rozovsky, editors, The Oxford Handbook of Nonlinear Filtering, pages 656–704. Oxford University Press, 2011.
[23] Pierre Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer Verlag, 2004.
[24] Pierre Del Moral. Mean Field Simulation for Monte Carlo Integration. CRC Press, 2013.
[25] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B Stat. Methodol., 72(3):269–342, 2010.
[26] Michael K. Pitt, Ralph dos Santos Silva, Paolo Giordani, and Robert Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. J. Econometrics, 171(2):134–151, 2012.
[27] Martin J. Wainwright, Michael I. Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
[28] Roger B. Grosse, Zoubin Ghahramani, and Ryan P. Adams. Sandwiching the marginal likelihood using bidirectional Monte Carlo. arXiv preprint arXiv:1511.02543, 2015.
[29] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In AISTATS, 2015.
[30] Rajesh Ranganath, Dustin Tran, Jaan Altosaar, and David Blei. Operator variational inference. In NIPS, 2016.
[31] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. ICML, 2015.
[32] Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In NIPS, 2016.
[33] Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[34] Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. ICLR, 2015.
[35] Shixiang Gu, Zoubin Ghahramani, and Richard E. Turner. Neural adaptive sequential Monte Carlo. In NIPS, 2015.
[36] Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising autoencoders as generative models. In NIPS, 2013.
[37] Christian A. Naesseth, Scott W. Linderman, Rajesh Ranganath, and David M. Blei. Variational sequential Monte Carlo. arXiv preprint arXiv:1705.11140, 2017.
[38] Tuan Anh Le, Maximilian Igl, Tom Jin, Tom Rainforth, and Frank Wood. Auto-encoding sequential Monte Carlo. arXiv preprint arXiv:1705.10306, 2017.
[39] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In NIPS, 2015.
[40] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[41] Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In NIPS, 2016.
[42] Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. ICML, 2012.
[43] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
6,894 | 7,236 | On Frank-Wolfe and Equilibrium Computation
Jacob Abernethy
Georgia Institute of Technology
[email protected]
Jun-Kun Wang
Georgia Institute of Technology
[email protected]
Abstract
We consider the Frank-Wolfe (FW) method for constrained convex optimization,
and we show that this classical technique can be interpreted from a different
perspective: FW emerges as the computation of an equilibrium (saddle point) of
a special convex-concave zero sum game. This saddle-point trick relies on the
existence of no-regret online learning to both generate a sequence of iterates but
also to provide a proof of convergence through vanishing regret. We show that our
stated equivalence has several nice properties, as it exhibits a modularity that gives
rise to various old and new algorithms. We explore a few such resulting methods,
and provide experimental results to demonstrate correctness and efficiency.
1 Introduction
There has been a burst of interest in a technique known as the Frank-Wolfe method (FW) [10], also
known as conditional gradient, for solving constrained optimization problems. FW is entirely a
first-order method, does not require any projection operation, and instead relies on access to a linear optimization oracle. Given a compact and convex constraint set $X \subseteq \mathbb{R}^d$, we require the ability to (quickly) answer queries of the form $\mathcal{O}(v) := \arg\min_{x \in X} x^\top v$ for any vector $v \in \mathbb{R}^d$. Other techniques, such as gradient descent methods, require repeated projections onto the constraint set, which can be prohibitively expensive. Interior point algorithms, such as Newton path-following schemes [1], require computing a Hessian inverse at each iteration, which generally does not scale well with the dimension.
In the present paper we aim to give a new perspective on the Frank-Wolfe method by showing that,
in a broad sense, it can be viewed as a special case of equilibrium computation via online learning.
Indeed, when the optimization objective is cast as a particular convex-concave payoff function,
then we are able to extract the desired optimal point via the equilibrium of the associated zero-sum
game. Within Machine Learning there has been a lot of attention paid to the computation of optimal
strategies for zero-sum games using online learning techniques. An amazing result, attributed to [12]
yet now practically folklore in the literature, says that we can compute the optimal equilibrium in
a zero sum game by pitting two online learning strategies against each other and, as long as they
achieve the desired regret-minimization guarantee, the long-run empirical average of their actions
(strategy choices) must converge to the optimal equilibrium. This trick is both very beautiful but also
extremely useful: it was in some sense the core of early work in Boosting [11], has been shown to
generalize many linear programming techniques [3], it serves as the key tool for recent advances in
flow optimization problems [8], and has been instrumental in understanding differential privacy [9].
We begin in Section 2 by reviewing the method of proving a generalized minimax theorem using
regret minimization, and we show how this proof is actually constructive and gives rise to a generic
meta-algorithm. This meta-algorithm is especially modular, and allows for the substitution of various
algorithmic tools that achieve, up to convergence rates, essentially the same core result. We then show
that the original Frank-Wolfe algorithm is simply one instantiation of this meta-algorithm, yet where
the convergence rate follows as a trivial consequence of main theorem, albeit with an additional
O(log T ) factor.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We build upon this by showing that a number of variants of Frank-Wolfe are also simple instantiations
of our meta-algorithm, with a convergence rate that follows easily. For example, we propose the
cumulative gradient variant of Frank-Wolfe and prove that the same guarantee holds, yet relies on
a potentially more stable optimization oracle. We show that the techniques of [31] using stochastic
smoothing correspond to implementing a Follow-the-Perturbed-Leader variant of our meta-algorithm.
And finally, we use our framework to prove an entirely new result, showing that one obtains an
$O(\log T/T)$ convergence rate even when the objective $f(\cdot)$ is not smooth, but instead the constraint
set satisfies strong convexity.
The results laid out in this paper provide value not only in proving rates and establishing new and
existing algorithms but also in setting forth a perspective on Frank-Wolfe-style methods that can
leverage the wealth of results we have available from online learning and online convex optimization.
At present, the possibilities and limits of various online learning problems have been thoroughly
worked out [20, 7] with incredibly tight bounds. Using the connections we put forth, many of
these results can provide a stronger theoretical framework towards understanding projection-free
conditional gradient methods.
Related work on projection-free algorithms
[25] gives an analysis of FW for smooth objectives, and shows that FW converges at a O(1/T )
rate even when the linear oracle is solved approximately, under certain conditions. [30] develops
a block-wise update strategy for FW on the dual objective of structural SVM, where only a subset
of dual variables are updated at each iteration. In the algorithm, a smaller oracle is called due to
the block-wise update, which reduces the computational time per iteration and leads to the speedup
overall. [37] proposes updating multiple blocks at a time. [34] proposes using various measures to
select a block for update.
In another direction, some results have aimed at obtaining improved convergence rates. [14] shows
that for strongly convex and smooth objective functions, FW can achieve an $O(1/T^2)$ convergence rate
over a strongly convex set. [13, 15] first show that one can achieve linear convergence for strongly
convex and smooth objectives over polytopes using a projection-free algorithm. The algorithm
constructs a stronger oracle which can be efficiently implemented for certain polytopes like simplex.
[29] shows that some variants of FW such as away-step FW [38] or pairwise FW enjoy an exponential
convergence rate when the feasible set is a polytope. [5] provides a refined analysis for the away-step FW. [17] extends [29] to some saddle-point optimization problems, where the constraint set
is assumed to be a polytope and the objective is required to be strongly convex for one variable
and strongly concave for the other. A drawback of away-step FW [38] is that it requires storing the
previous outputs from the oracle. Very recently, [16] develop a new variant that avoids this issue
for specific polytopes, which also enjoys exponential convergence for strongly convex and smooth
objectives. Note that all of the exponential convergence results depend on some geometric properties
of the underlying polytope.
Other works include variants for the stochastic setting [23], the online learning setting [22], minimizing some structural norms [19, 39], or reducing the number of gradient evaluations [32]. There is also a
connection between subgradient descent and FW; Bach [4] shows that for certain types of objectives,
subgradient descent applied to the primal domain is equivalent to FW applied to the dual domain.
Preliminaries and Notation
Definition 1. A convex set $Y \subseteq \mathbb{R}^m$ is an $\alpha$-strongly convex set w.r.t. a norm $\|\cdot\|$ if for any $u, v \in Y$ and any $\theta \in [0,1]$, the $\|\cdot\|$ ball centered at $\theta u + (1-\theta)v$ with radius $\theta(1-\theta)\frac{\alpha}{2}\|u - v\|^2$ is contained in $Y$. Please see [14] for examples of strongly convex sets.
Definition 2. A function $f$ is $\beta$-strongly smooth w.r.t. a norm $\|\cdot\|$ if $f$ is everywhere differentiable and $f(u) \leq f(v) + \nabla f(v)^\top (u - v) + \frac{\beta}{2}\|u - v\|^2$. A function $f$ is $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ if $f(u) \geq f(v) + \nabla f(v)^\top (u - v) + \frac{\beta}{2}\|u - v\|^2$.
Definition 3. For a convex function $f(\cdot)$, its Fenchel conjugate is $f^*(x) := \sup_y \langle x, y \rangle - f(y)$. Note that if $f$ is convex then so is its conjugate $f^*$, since it is defined as a maximum over linear functions of $x$ [6]. Furthermore, the biconjugate $f^{**}$ equals $f$ if and only if $f$ is closed and convex. It is known that $f$ is $\beta$-strongly convex w.r.t. $\|\cdot\|$ if and only if $f^*$ is $1/\beta$-strongly smooth w.r.t. the dual norm $\|\cdot\|_*$ [26], assuming that $f$ is a closed and convex function.
2 Minimax Duality via No-Regret Learning
2.1 Brief review of online learning
In the task of online convex optimization, we assume a learner is provided with a compact and convex set $K \subseteq \mathbb{R}^n$ known as the decision set. Then, in an online fashion, the learner is presented with a sequence of $T$ loss functions $\ell_1(\cdot), \ell_2(\cdot), \ldots, \ell_T(\cdot) : K \to \mathbb{R}$. On each round $t$, the learner must select a point $x_t \in K$, and is then "charged" a loss of $\ell_t(x_t)$ for this choice. Typically it is assumed that, when the learner selects $x_t$ on round $t$, she has observed all loss functions $\ell_1(\cdot), \ldots, \ell_{t-1}(\cdot)$ up to, but not including, time $t$. However, we will also consider learners that are prescient, i.e. that can choose $x_t$ with knowledge of the loss functions up to and including time $t$.
The objective of interest in most of the online learning literature is the learner's regret, defined as $R_T := \sum_{t=1}^T \ell_t(x_t) - \min_{x \in K} \sum_{t=1}^T \ell_t(x)$. Oftentimes we will want to refer to the average regret, the regret normalized by the time horizon $T$, which we will call $\overline{R}_T := \frac{R_T}{T}$. What has become a cornerstone of online learning research is the existence of no-regret algorithms, i.e. learning strategies that guarantee $\overline{R}_T \to 0$ as $T \to \infty$.
Let us consider three very simple learning strategies, and we note the available guarantees for each.
(FollowTheLeader) Perhaps the most natural algorithm one might think of is to simply select $x_t$ as the best point in hindsight. That is, the learner can choose $x_t = \arg\min_{x \in K} \sum_{s=1}^{t-1} \ell_s(x)$.
Lemma 1 ([21]). If each $\ell_t(\cdot)$ is 1-Lipschitz and 1-strongly convex, then FollowTheLeader achieves $\overline{R}_T \leq c\,\frac{\log T}{T}$ for some constant $c$.
(BeTheLeader) When the learner is prescient, then we can do slightly better than FollowTheLeader by incorporating the current loss function: $x_t = \arg\min_{x \in K} \sum_{s=1}^{t} \ell_s(x)$. This algorithm was named BeTheLeader by [28], who also proved that it actually guarantees non-positive regret!
Lemma 2 ([28]). For any sequence of loss functions, BeTheLeader achieves $\overline{R}_T \leq 0$.
(BestResponse) But perhaps the most trivial strategy for a prescient learner is to ignore the history of the $\ell_s$'s and simply play the best choice of $x_t$ on the current round. We call this algorithm BestResponse, defined as $x_t = \arg\min_{x \in K} \ell_t(x)$. A quick inspection reveals that BestResponse satisfies $\overline{R}_T \leq 0$.
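To make the three strategies concrete, the following Python sketch (ours, not from the paper) instantiates each update rule on a one-dimensional decision set $K = [0,1]$; the grid-based arg min and the quadratic losses are illustrative assumptions.

```python
import numpy as np

# Decision set K = [0, 1], discretized for a simple grid-based arg-min.
K = np.linspace(0.0, 1.0, 1001)

def argmin_over_K(objective):
    """Return the grid point of K minimizing the given objective."""
    return K[np.argmin([objective(x) for x in K])]

def follow_the_leader(losses_so_far):
    # x_t = argmin_{x in K} sum_{s < t} l_s(x); arbitrary choice on round 1.
    if not losses_so_far:
        return K[0]
    return argmin_over_K(lambda x: sum(l(x) for l in losses_so_far))

def be_the_leader(losses_so_far, current_loss):
    # Prescient: also includes the loss of the current round.
    return argmin_over_K(lambda x: sum(l(x) for l in losses_so_far) + current_loss(x))

def best_response(current_loss):
    # Prescient and memoryless: best point for the current loss only.
    return argmin_over_K(current_loss)

# Example: strongly convex losses l_t(x) = (x - c_t)^2 with drifting minimizers.
rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, size=50)
losses = [lambda x, c=c: (x - c) ** 2 for c in centers]

history, regret = [], 0.0
for loss in losses:
    x_t = follow_the_leader(history)     # swap in be_the_leader / best_response
    regret += loss(x_t)
    history.append(loss)
regret -= min(sum(l(x) for l in losses) for x in K)
print(f"FollowTheLeader average regret: {regret / len(losses):.4f}")
```

Running the loop with the two prescient rules in place of FollowTheLeader makes Lemmas 1 and 2 visible: the prescient strategies report non-positive regret, while FollowTheLeader's average regret shrinks at roughly a $\log T / T$ rate on these strongly convex losses.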
2.2 Minimax Duality
The celebrated minimax theorem for zero-sum games, first discovered by John von Neumann in the 1920s [36, 33], is certainly a foundational result in the theory of games. It states that two players, playing a game with zero-sum payoffs, each have an optimal randomized strategy that can be played obliviously; that is, even announcing their strategy in advance to an optimal opponent would not damage their own respective payoff, in expectation.
In this paper we will focus on a more general minimax result, establishing duality for a class of convex/concave games, and we will show how this theorem can be proved without the need for Brouwer's Fixed Point Theorem [27]. The key inequality can be established through the use of no-regret learning strategies in online convex optimization, which we detail in the following section. The theorem below can be proved as well using Sion's Minimax Theorem [35].
Theorem 1. Let $X, Y$ be compact convex subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively. Let $g : X \times Y \to \mathbb{R}$ be convex in its first argument and concave in its second. Then we have that
$$\min_{x \in X} \max_{y \in Y} g(x, y) = \max_{y \in Y} \min_{x \in X} g(x, y) \quad (1)$$
We want to emphasize that a meta-algorithm (Algorithm 1) actually emerges from our proof of Theorem 1; please see the supplementary for details. It is important to point out that the meta-algorithm, as a routine for computing equilibria, is certainly not a novel technique: it has served implicitly as the underpinning of many works, including those already mentioned [11, 9, 8].
We close this section by summarizing the approximate equilibrium computation guarantee that follows from the above algorithm. This result is classical, and we explore it in great detail in the
3
Algorithm 1 Meta Algorithm for equilibrium computation
1:
2:
3:
4:
5:
6:
Input: convex-concave payoff g : X ? Y ? R, algorithms OAlgX and OAlgY
for t = 1, 2, . . . , T do
xt := OAlgX (g(?, y1 ), . . . , g(?, yt?1 ))
yt := OAlgY (g(x1 , ?), . . . , g(xt?1 , ?), g(xt , ?))
end for
PT
PT
Output: x
?T = T1 t=1 xt and y?T := T1 t=1 yt
Appendix. We let $\bar{x}_T := \frac{1}{T}\sum_{t=1}^T x_t$ and $\bar{y}_T := \frac{1}{T}\sum_{t=1}^T y_t$, and let $V^*$ be the value of the game, which is the quantity in (1).
Theorem 2. Algorithm 1 outputs $\bar{x}_T$ and $\bar{y}_T$ satisfying
$$\max_{y \in Y} g(\bar{x}_T, y) \leq V^* + \epsilon_T + \delta_T \quad \text{and} \quad \min_{x \in X} g(x, \bar{y}_T) \geq V^* - (\epsilon_T + \delta_T), \quad (2)$$
as long as $\text{OAlg}^X$ and $\text{OAlg}^Y$ guarantee average regret bounded by $\epsilon_T$ and $\delta_T$, respectively.
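As an illustration of how Algorithm 1 and Theorem 2 fit together, here is a minimal Python sketch (our own, not from the paper) that pits FollowTheLeader against BestResponse on a discretized convex-concave game; the payoff, the grids, and the tie-breaking are assumptions made for the example. With the payoff $g(x, y) = \frac{1}{2}x^2 - xy$ on $X = Y = [-1, 1]$, the averaged iterates approach the equilibrium $(0, 0)$.

```python
import numpy as np

def equilibrium_meta_algorithm(payoff, X, Y, T):
    """Run Algorithm 1: FollowTheLeader as OAlg^X, BestResponse as OAlg^Y.

    payoff(x, y) must be convex in x and concave in y.
    X, Y are 1-D numpy arrays discretizing the two decision sets.
    Returns the averaged iterates (x_bar, y_bar).
    """
    xs, ys = [], []
    cum_loss_x = np.zeros(len(X))          # running sum of g(x, y_s) over s < t
    for t in range(T):
        # x_t: best point against the accumulated losses (FollowTheLeader).
        x_t = X[np.argmin(cum_loss_x)] if t > 0 else X[0]
        # y_t: prescient best response to g(x_t, .) (maximizes the payoff).
        y_t = Y[np.argmax([payoff(x_t, y) for y in Y])]
        cum_loss_x += np.array([payoff(x, y_t) for x in X])
        xs.append(x_t)
        ys.append(y_t)
    return np.mean(xs), np.mean(ys)

# Example: g(x, y) = 0.5*x^2 - x*y, strongly convex in x so that
# FollowTheLeader enjoys the logarithmic regret of Lemma 1.
X = np.linspace(-1, 1, 201)
Y = np.linspace(-1, 1, 201)
x_bar, y_bar = equilibrium_meta_algorithm(lambda x, y: 0.5 * x * x - x * y, X, Y, T=500)
print(f"approximate equilibrium: ({x_bar:.3f}, {y_bar:.3f})")
```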
3 Relation to the Frank-Wolfe Method
We now return our attention to the problem of constrained optimization, and we review the standard
Frank-Wolfe algorithm. We then use the technologies presented in the previous section to recast
Frank-Wolfe as an equilibrium computation, and we show that indeed the vanilla algorithm is an
instantiation of our meta-algorithm (Alg. 1). We then proceed to show that the modularity of the
minimax duality perspective allows us to immediately reproduce existing variants of Frank-Wolfe, as
well as construct new algorithms, with convergence rates provided immediately by Theorem 2.
To begin, let us assume that we have a compact set $Y \subseteq \mathbb{R}^n$ and a convex function $f : Y \to \mathbb{R}$. Our primary goal is to solve the objective
$$\min_{y \in Y} f(y). \quad (3)$$
We say that $y_0$ is an $\epsilon$-approximate solution as long as $f(y_0) - \min_{y \in Y} f(y) \leq \epsilon$.
3.1 A Brief Overview of Frank-Wolfe
Algorithm 2 Standard Frank-Wolfe algorithm
1: Input: objective $f : Y \to \mathbb{R}$, oracle $\mathcal{O}(\cdot)$, learning rate $\{\gamma_t \in [0,1]\}_{t=1,2,\ldots}$, init. $w_0 \in Y$
2: for $t = 1, 2, 3, \ldots, T$ do
3:   $v_t \leftarrow \mathcal{O}(\nabla f(w_{t-1})) = \arg\min_{v \in Y} \langle v, \nabla f(w_{t-1}) \rangle$
4:   $w_t \leftarrow (1 - \gamma_t) w_{t-1} + \gamma_t v_t$
5: end for
6: Output: $w_T$
The standard Frank-Wolfe algorithm (Algorithm 2) consists of making repeated calls to a linear optimization oracle (line 3), followed by a convex averaging step of the current iterate and the oracle's output (line 4). It initializes a point $w_0$ in the constraint set $Y$. Due to the convex combination step, the iterate $w_t$ always stays within the constraint set, which is why the method is called projection-free. We restate a proposition from [10], who established the convergence rate of their algorithm.
Theorem 3 ([10]). Assume that $f(\cdot)$ is 1-strongly smooth. If Algorithm 2 is run for $T$ rounds, then there exists a sequence $\{\gamma_t\}$ such that the output $w_T$ is an $O\left(\frac{1}{T}\right)$-approximate solution to (3).
It is worth noting that the typical learning rate used throughout the literature is $\gamma_t = \frac{2}{2+t}$ [31, 25]. This emerges as the result of a recursive inequality.
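For reference, a minimal Python sketch of Algorithm 2 (our illustration): the least-squares objective, the simplex constraint set, and its vertex oracle are assumptions chosen because the simplex admits a particularly cheap linear oracle.

```python
import numpy as np

def frank_wolfe(grad_f, linear_oracle, w0, T):
    """Standard Frank-Wolfe (Algorithm 2) with step size gamma_t = 2/(2+t)."""
    w = w0
    for t in range(1, T + 1):
        v = linear_oracle(grad_f(w))      # v_t = argmin_{v in Y} <v, grad f(w)>
        gamma = 2.0 / (2.0 + t)
        w = (1 - gamma) * w + gamma * v   # convex averaging keeps w in Y
    return w

# Example: minimize f(w) = ||A w - b||^2 over the probability simplex.
# The linear oracle over the simplex returns the vertex e_i with the
# smallest gradient coordinate.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
grad_f = lambda w: 2 * A.T @ (A @ w - b)

def simplex_oracle(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

w0 = np.ones(10) / 10
w_T = frank_wolfe(grad_f, simplex_oracle, w0, T=500)
print("objective:", np.sum((A @ w_T - b) ** 2))
```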
3.2 Frank-Wolfe via the Meta-Algorithm
We now show that the meta-algorithm generalizes Frank-Wolfe, and provides a much more modular
framework for producing similar algorithms. We will develop some of these novel methods and
establish their convergence via Theorem 2.
In order to utilize minimax duality, we have to define decision sets for the two players, and we must produce a convex-concave payoff function. First we will assume, for convenience, that $f(y) := \infty$ for any $y \notin Y$. That is, $f$ takes the value $\infty$ outside of the convex/compact set $Y$, which ensures that $f$ is lower semi-continuous and convex. Now, let the x-player be given the set $X := \{\nabla f(y) : y \in Y\}$. One can check that the closure of the set $X$ is a convex set. Please see Appendix 2 for the proof.
Theorem 4. The closure of the (sub-)gradient space $\{\partial f(y) \mid y \in Y\}$ is a convex set.
The y-player's decision set will be $Y$, the constraint set of the primary objective (3). The payoff $g(\cdot, \cdot)$ will be defined as
$$g(x, y) := -x^\top y + f^*(x). \quad (4)$$
The function $f^*(\cdot)$ is the Fenchel conjugate of $f$. We observe that $g(x, y)$ is indeed linear, and hence concave, in $y$, and it is also convex in $x$.
Let's notice a few things about this particular game. Looking at the max-min expression,
$$\max_{y \in Y} \min_{x \in X} g(x, y) = \max_{y \in Y} \left( -\max_{x \in X} \left( x^\top y - f^*(x) \right) \right) = -\min_{y \in Y} f(y) = V^*, \quad (5)$$
which follows by the fact that $f^{**} = f$.¹ Note, crucially, that the last term above corresponds to the objective we want to solve, up to a minus sign. Any $\bar{y}$ which is an $\epsilon$-approximate equilibrium strategy for the y-player will also be an $\epsilon$-approximate solution to (3).
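Eq. (5) can be sanity-checked numerically by brute force. The following Python snippet (ours, with an assumed quadratic objective and grid discretization) tabulates the payoff of Eq. (4) and verifies that the max-min value matches $-\min f$ up to discretization error.

```python
import numpy as np

# Assumed toy objective: f(y) = (y - 0.3)^2 on Y = [0, 1].
Y = np.linspace(0.0, 1.0, 401)
f_vals = (Y - 0.3) ** 2
X = 2 * (Y - 0.3)                       # gradient space {f'(y) : y in Y}

# Fenchel conjugate f*(x) = max_y x*y - f(y), brute-forced on the grid.
f_star = np.array([np.max(x * Y - f_vals) for x in X])

# Payoff of Eq. (4), tabulated as G[i, j] = g(x_i, y_j) = -x_i*y_j + f*(x_i).
G = -np.outer(X, Y) + f_star[:, None]
max_min = G.min(axis=0).max()           # max over y of min over x
print(max_min, -f_vals.min())           # both approximately 0.0
```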
We now present the main result of this section, which is the connection between Frank-Wolfe (Alg. 2)
and Alg. 1.
Theorem 5. When both are run for exactly $T$ rounds, the output $\bar{y}_T$ of Algorithm 1 is identically the output $w_T$ of Algorithm 2 as long as: (I) the initial $x_1$ in Alg. 1 equals $\nabla f(w_0)$ in Alg. 2; (II) Alg. 2 uses learning rate $\gamma_t := \frac{1}{t}$; (III) Alg. 1 receives $g(\cdot, \cdot)$ defined in (4); (IV) Alg. 1 sets $\text{OAlg}^X$ := FollowTheLeader; (V) Alg. 1 sets $\text{OAlg}^Y$ := BestResponse.
Proof. We will prove that the following three equalities are maintained throughout both algorithms. We emphasize that the objects on the left correspond to Alg. 1 and those on the right to Alg. 2:
$$x_t = \nabla f(w_{t-1}) \quad (6)$$
$$y_t = v_t \quad (7)$$
$$\bar{y}_t = w_t. \quad (8)$$
We first note that the first condition of the theorem ensures that (6) holds for $t = 1$. Second, the choice of learning rate $\gamma_t = \frac{1}{t}$ already guarantees that (7) implies (8), since this choice of rate ensures that $w_t$ is always a uniform average of the updates $v_t$. It remains to establish (6) and (7) via induction. We begin with the former.
Recall that $x_t$ is selected via FollowTheLeader against the sequence of loss functions $\ell_t(\cdot) := g(\cdot, y_t)$. To write precisely what this means,
$$x_t := \arg\min_{x \in X} \frac{1}{t-1}\sum_{s=1}^{t-1} \ell_s(x) = \arg\min_{x \in X} \frac{1}{t-1}\sum_{s=1}^{t-1} \left( -y_s^\top x + f^*(x) \right) = \arg\max_{x \in X} \left( \bar{y}_{t-1}^\top x - f^*(x) \right) = \nabla f(\bar{y}_{t-1}).$$
The final line follows as a result of the Legendre transform [6]. Of course, by induction, we have that $\bar{y}_{t-1} = w_{t-1}$, and hence we have established (6).
¹ It was important how we defined $X$ here, as the Fenchel conjugate takes the value of $\infty$ at any point $x \notin \{\nabla f(y) : y \in Y\}$; hence the unconstrained supremum is the same as $\max_{x \in X}(\cdot)$.
Finally, let us consider how $y_t$ is chosen according to BestResponse. Recall that the sequence of loss functions presented to the y-player is $h_t(\cdot) := -g(x_t, \cdot)$. Utilizing BestResponse for this sequence implies that
$$y_t = \arg\min_{y \in Y} h_t(y) = \arg\min_{y \in Y} \left( x_t^\top y - f^*(x_t) \right) = \arg\min_{y \in Y} x_t^\top y = \arg\min_{y \in Y} \nabla f(\bar{y}_{t-1})^\top y \;\;\text{((6) by induction)} = \arg\min_{y \in Y} \nabla f(w_{t-1})^\top y \;\;\text{(which is } v_t\text{)},$$
where the last equality follows by induction via (8). This completes the proof.
Note that the algorithm does not need to compute the conjugate $f^*$. While the Frank-Wolfe algorithm can be viewed as implicitly operating on the conjugate, it is only through the use of $\arg\max_{x \in X} \left( \bar{y}_{t-1}^\top x - f^*(x) \right)$. Yet, this operation does not need to be computed in the naive way (i.e. by first computing $f^*$ and then doing the maximization). Instead, the expression actually boils down to $\nabla f(y)$, which is just a gradient computation!
The equivalence we just established has several nice features, but it does not provide a convergence rate for Algorithm 2. This should perhaps not be surprising, as we did not use the smoothness of $f$ anywhere in the equivalence. Instead, the rate actually follows via a key application of Theorem 2, utilizing the fact that $f^*$ is strongly convex on the interior of the set $X$,² granting FollowTheLeader a logarithmic regret rate.
Corollary 1. Assume that $f(\cdot)$ is 1-strongly smooth. Then Algorithm 2, with learning rate $\gamma_t := \frac{1}{t}$, outputs $w_T$ with approximation error $O\left(\frac{\log T}{T}\right)$.
Proof. As a result of Theorem 5, we have established that Alg. 2 is a special case of Alg. 1, with the parameters laid out in the previous theorem. As a result of Theorem 2, the approximation error of $w_T$ is precisely the error $\epsilon_T + \delta_T$ of the point $\bar{y}_T$ when generated via Alg. 1 with subroutines $\text{OAlg}^X$ := FollowTheLeader and $\text{OAlg}^Y$ := BestResponse, assuming that these two learning algorithms guarantee average regret no more than $\epsilon_T$ and $\delta_T$, respectively. We noted that BestResponse does not suffer regret, so $\delta_T = 0$.
To bound the regret of FollowTheLeader on the sequence of functions $g(\cdot, y_1), \ldots, g(\cdot, y_T)$, we observe that the smoothness of $f$ implies that $f^*$ is 1-strongly convex, which in turn implies that $g(x, y_t) = -x^\top y_t + f^*(x)$ is also 1-strongly convex (in $x$). Hence Lemma 1 guarantees that FollowTheLeader has average regret $\epsilon_T := O\left(\frac{\log T}{T}\right)$, which completes the proof.
We emphasize that the above result leans entirely on existing work on regret bounds for online
learning, and these tools are doing the heavy lifting. We explore this further in the following section.
4 Frank-Wolfe-style Algs, New and Old
We now have a factory for generating new algorithms using the approach laid out in Section 3.
Theorem 5 shows that the standard Frank-Wolfe algorithm (with a particular learning rate) is obtained
via the meta-algorithm using two particular online learning algorithms $\text{OAlg}^X$, $\text{OAlg}^Y$. But we
have full discretion to choose these two algorithms, as long as they provide the appropriate regret
guarantees to ensure convergence.
4.1 Cumulative Gradients
We begin with one simple variant, which we call Cumulative-Gradient Frank-Wolfe, laid out in
Algorithm 3. The one significant difference with vanilla Frank-Wolfe is that the linear optimization
oracle receives as input the average of the gradients obtained thus far, as opposed to the last one.
² We only need to assume $f$ is "smooth on the interior of $Y$" to get the result. (That $f$ is technically not smooth outside of $Y$ is not particularly relevant.) The result that $f^*$ is strongly convex on the interior of the set $X$ is essentially proven by [26] in their appendix. This argument has been made elsewhere in various forms in the literature (e.g. [18]).
Algorithm 3 Cumulative-Gradient Frank-Wolfe
1: Initialize: any $w_0 \in Y$.
2: for $t = 1, 2, 3, \ldots, T$ do
3:   $v_t \leftarrow \arg\min_{v \in Y} \left\langle v, \frac{1}{t-1}\sum_{s=1}^{t-1} \nabla f(w_s) \right\rangle$
4:   $w_t \leftarrow (1 - \gamma_t) w_{t-1} + \gamma_t v_t$
5: end for
6: Output: $w_T$
The proof of convergence requires little effort.
Corollary 2. Assume that $f(\cdot)$ is 1-strongly smooth. Then Algorithm 3, with learning rate $\gamma_t := \frac{1}{t}$, outputs $w_T$ with approximation error $O\left(\frac{\log T}{T}\right)$.
Proof. The result follows almost identically to Corollary 1. It requires a quick inspection to verify that the new linear optimization subroutine corresponds to implementing BeTheLeader as $\text{OAlg}^Y$ instead of BestResponse. However, both BestResponse and BeTheLeader have non-positive regret ($\delta_T \leq 0$; Lemma 2 in the supplementary), and thus they achieve the same convergence.
We note that a similar algorithm to the above can be found in [31], although in their results they consider more general weighted averages over the gradients.
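The change relative to the earlier Frank-Wolfe sketch is one line: feed the oracle the running average of gradients. Below is a minimal Python variant (ours; the indexing convention differs slightly from the listing, which averages over $s \leq t-1$) that can be dropped into the earlier simplex example in place of frank_wolfe.

```python
import numpy as np

def cumulative_gradient_frank_wolfe(grad_f, linear_oracle, w0, T):
    """Algorithm 3: the oracle sees the average of all gradients so far."""
    w, grad_sum = w0, np.zeros_like(w0)
    for t in range(1, T + 1):
        grad_sum += grad_f(w)
        v = linear_oracle(grad_sum / t)   # average gradient, not just the last
        gamma = 1.0 / t                   # the rate used in Corollary 2
        w = (1 - gamma) * w + gamma * v
    return w
```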
4.2 Perturbation Methods and Stochastic Smoothing
Looking carefully at the proof of Corollary 1, the fact that FollowTheLeader was suitable for the vanilla FW analysis relies heavily on the strong convexity of the functions $\ell_t(\cdot) := g(\cdot, y_t)$, which in turn results from the smoothness of $f(\cdot)$. But what about when $f(\cdot)$ is not smooth? Is there an alternative algorithm available?
We observe that one of the nice techniques to grow out of the online learning community is the use of perturbations as a type of regularization to obtain vanishing regret guarantees [28]; their method is known as Follow the Perturbed Leader (FTPL). The main idea is to solve an optimization problem that has a random linear function added to the input, and to select³ as $x_t$ the expectation of the arg min under this perturbation. More precisely,
$$x_t := \mathbb{E}_Z\left[ \arg\min_{x \in X} \left\{ Z^\top x + \sum_{s=1}^{t-1} \ell_s(x) \right\} \right].$$
Here $Z$ is some random vector drawn according to an appropriately-chosen distribution, and $\ell_s(x)$ is the loss function of the x-player on round $s$; with the definition of the payoff function $g$, $\ell_s(x)$ is $-x^\top y_s + f^*(x)$ (4).
One can show that, as long as $Z$ is chosen from the right distribution, this algorithm guarantees average regret on the order of $O\left(\frac{1}{\sqrt{T}}\right)$, although obtaining the correct dimension dependence relies on careful probabilistic analysis. Recent work of [2] shows that the analysis of perturbation-style algorithms reduces to curvature properties of a stochastically-smoothed Fenchel conjugate.
What is intriguing about this perturbation approach is that it ends up being equivalent to an existing method proposed by [31] (Section 3.3), who also uses a stochastically smoothed objective function. We note that
$$\mathbb{E}_Z\left[ \arg\min_{x \in X} \left\{ Z^\top x + \sum_{s=1}^{t-1} \ell_s(x) \right\} \right] = \mathbb{E}_Z\left[ \arg\max_{x \in X} \left( \left( \bar{y}_{t-1} + Z/(t-1) \right)^\top x - f^*(x) \right) \right] = \mathbb{E}_Z\left[ \nabla f(\bar{y}_{t-1} + Z/(t-1)) \right] = \nabla \hat{f}_{t-1}(\bar{y}_{t-1}), \quad (9)$$
where $\hat{f}_\eta(x) := \mathbb{E}[f(x + Z/\eta)]$. [31] suggests using precisely this modified $\hat{f}$, and they prove a rate on the order of $O\left(\frac{1}{\sqrt{T}}\right)$. As discussed, the same would follow from vanishing regret of FTPL.
³ Technically speaking, the results of [28] only considered linear loss functions, and hence their analysis did not require taking averages over the input perturbation. While we will not address computational issues here due to space, actually computing the average arg min is indeed non-trivial.
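To make the perturbation idea concrete, here is a small Python sketch (our illustration, not from [28] or [31]) of FTPL for linear losses, estimating the expected arg min by Monte Carlo; the Gaussian perturbation, its scale, and the sample count are all assumptions chosen for the example.

```python
import numpy as np

def ftpl_linear(cum_loss_vector, vertices, n_samples=200, scale=1.0, seed=0):
    """Follow the Perturbed Leader for linear losses l_s(x) = <c_s, x>,
    as in [28]: perturb the cumulative loss vector and average the arg mins.
    `vertices` is an (n, d) array of candidate points of the decision set."""
    rng = np.random.default_rng(seed)
    picks = []
    for _ in range(n_samples):
        Z = rng.normal(scale=scale, size=cum_loss_vector.shape)
        scores = vertices @ (cum_loss_vector + Z)
        picks.append(vertices[np.argmin(scores)])
    return np.mean(picks, axis=0)   # Monte Carlo estimate of E_Z[arg min]
```

The averaging over perturbation draws is exactly what footnote 3 flags as the non-trivial computational step: the expected arg min generally lies in the interior of the convex hull of the vertices, not at any single vertex.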
4.3 Boundary Frank-Wolfe
Algorithm 4 Modified meta-algorithm, swapped roles
1: Input: convex-concave payoff $g : X \times Y \to \mathbb{R}$, algorithms $\text{OAlg}^X$ and $\text{OAlg}^Y$
2: for $t = 1, 2, \ldots, T$ do
3:   $y_t := \text{OAlg}^Y(g(x_1, \cdot), \ldots, g(x_{t-1}, \cdot))$
4:   $x_t := \text{OAlg}^X(g(\cdot, y_1), \ldots, g(\cdot, y_{t-1}), g(\cdot, y_t))$
5: end for
6: Output: $\bar{x}_T = \frac{1}{T}\sum_{t=1}^T x_t$ and $\bar{y}_T := \frac{1}{T}\sum_{t=1}^T y_t$
We observe that the meta-algorithm previously discussed assumed that the x-player was first to act, followed by the y-player, who was allowed to be prescient. Here we reverse their roles and instead allow the x-player to be prescient. The new meta-algorithm is described in Algorithm 4. We are going to show that this framework leads to a new projection-free algorithm that works for non-smooth objective functions. Specifically, if the constraint set is strongly convex, then this yields a novel projection-free algorithm that grants an $O(\log T / T)$ convergence even for non-smooth objective functions. The result relies on very recent work showing that FollowTheLeader for strongly convex sets [24] grants an $O(\log T)$ regret rate. Prior work has considered strongly convex decision sets [14], yet with the additional assumption that the objective is smooth and strongly convex, leading to $O(1/T^2)$ convergence. Boundary Frank-Wolfe requires neither smoothness nor strong convexity of the objective. What we have shown, essentially, is that a strongly convex boundary of the constraint set can be used in place of smoothness of $f(\cdot)$ in order to achieve $O(1/T)$ convergence.
Algorithm 5 Boundary Frank-Wolfe
1: Input: objective $f : Y \to \mathbb{R}$, oracle $\mathcal{O}(\cdot)$ for $Y$, init. $y_1 \in Y$.
2: for $t = 2, 3, \ldots, T$ do
3:   $y_t \leftarrow \arg\min_{y \in Y} \left\langle y, \frac{1}{t-1}\sum_{s=1}^{t-1} \nabla f(y_s) \right\rangle$
4: end for
5: Output: $\bar{y}_T = \frac{1}{T}\sum_{t=1}^T y_t$
We may now prove a result about Algorithm 5 using the same techniques laid out in Theorem 5.
Theorem 6. Algorithm 5 is an instance of Algorithm 4 if (I) the initial $y_1$ in Alg. 5 equals $y_1$ in Alg. 4; (II) Alg. 4 sets $\text{OAlg}^Y$ := FollowTheLeader; and (III) Alg. 4 sets $\text{OAlg}^X$ := BestResponse. Furthermore, when $Y$ is an $\alpha$-strongly convex set and $\sum_{s=1}^{t} \nabla f(y_s)$ has non-zero norm, then
$$f(\bar{y}_T) - \min_{y \in Y} f(y) = O\left( \frac{M \log T}{\alpha L_T T} \right),$$
where $M := \sup_{y \in Y} \|\nabla f(y)\|$, $L_T := \min_{1 \leq t \leq T} \|\Theta_t\|$, and $\Theta_t = \frac{1}{t}\sum_{s=1}^{t} \nabla f(y_s)$.
Proof. Please see Appendix 3 for the proof.
Note that the rate depends crucially on $L_T$, which is the smallest averaged-gradient norm computed during the optimization. Depending on the underlying optimization problem, $L_T$ can be as small as $O(1/\sqrt{T})$ or can even be 0. Now let us discuss when Boundary FW works; namely, the condition that keeps the cumulative gradient nonzero. If a linear combination of gradients is 0, then clearly 0 is in the convex hull of the subgradients $\partial f(x)$ for boundary points $x$. Since the closure of $\{\partial f(x) \mid x \in Y\}$ is convex, according to Theorem 4, this implies that 0 is in $\{\partial f(x) \mid x \in Y\}$. If we know in advance that $0 \notin \text{cl}(\{\partial f(x) \mid x \in Y\})$, we are assured that the cumulative gradient will not be 0. Hence, the proposed algorithm may only be useful when it is known, a priori, that the solution $y^*$ will occur not in the interior but on the boundary of $Y$. It is indeed an odd condition, but it does hold in many typical scenarios. One may add a perturbed vector to the gradient and show that, with high probability, $L_T$ is a non-zero number. The downside of this approach is that it would generally grant a slower convergence rate; it cannot achieve $\log(T)/T$, as the inclusion of the perturbation requires managing an additional trade-off.
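Algorithm 5 is easy to instantiate when $Y$ is a Euclidean ball, which is a strongly convex set. The following Python sketch is our own illustration; the non-smooth objective $f(y) = \|y - c\|_1$, the radius, and the initialization are assumptions for the example, with $c$ placed outside the ball so that the minimizer lies on the boundary, as Boundary FW requires.

```python
import numpy as np

def boundary_frank_wolfe(subgrad_f, y1, radius, T):
    """Algorithm 5 over the l2 ball of given radius centered at the origin.
    The linear oracle over the ball is y = -radius * g / ||g||."""
    ys, grad_sum = [y1], subgrad_f(y1)
    for t in range(2, T + 1):
        avg_grad = grad_sum / (t - 1)
        assert np.linalg.norm(avg_grad) > 0, "cumulative gradient must be nonzero"
        y_t = -radius * avg_grad / np.linalg.norm(avg_grad)
        ys.append(y_t)
        grad_sum += subgrad_f(y_t)
    return np.mean(ys, axis=0)

# Non-smooth objective f(y) = ||y - c||_1 with c outside the ball, so the
# minimizer lies on the boundary (the condition Boundary FW needs).
c = np.array([2.0, -3.0])
subgrad_f = lambda y: np.sign(y - c)
y_bar = boundary_frank_wolfe(subgrad_f, y1=np.array([1.0, 0.0]), radius=1.0, T=2000)
print("f(y_bar) =", np.abs(y_bar - c).sum())
```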
References
[1] Jacob Abernethy and Elad Hazan. Faster convex optimization: Simulated annealing with an efficient universal barrier. In Proceedings of The 33rd International Conference on Machine Learning, pages 2520-2528, 2016.
[2] Jacob Abernethy, Chansoo Lee, Abhinav Sinha, and Ambuj Tewari. Online linear optimization via smoothing. In COLT, pages 807-823, 2014.
[3] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121-164, 2012.
[4] Francis Bach. Duality between subgradient and conditional gradient methods. SIAM Journal on Optimization, 2015.
[5] Amir Beck and Shimrit Shtern. Linearly convergent away-step conditional gradient for non-strongly convex functions. Mathematical Programming, 2016.
[6] Stephen Boyd. Convex Optimization. Cambridge University Press, 2004.
[7] Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[8] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel A. Spielman, and Shang-Hua Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pages 273-282. ACM, 2011.
[9] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
[10] Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[11] Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the ninth annual conference on Computational learning theory, pages 325-332. ACM, 1996.
[12] Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1-2):79-103, 1999.
[13] Dan Garber and Elad Hazan. Playing non-linear games with linear oracles. FOCS, 2013.
[14] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. ICML, 2015.
[15] Dan Garber and Elad Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. SIAM Journal on Optimization, 2016.
[16] Dan Garber and Ofer Meshi. Linear-memory and decomposition-invariant linearly convergent conditional gradient algorithm for structured polytopes. NIPS, 2016.
[17] G. Gidel, T. Jebara, and S. Lacoste-Julien. Frank-Wolfe algorithms for saddle point problems. AISTATS, 2016.
[18] Gianluca Gorni. Conjugation and second-order properties of convex functions. Journal of Mathematical Analysis and Applications, 1991.
[19] Zaid Harchaoui, Anatoli Juditsky, and Arkadi Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Math. Prog., Series A, 2013.
[20] Elad Hazan. Introduction to online convex optimization. 2014.
[21] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
[22] Elad Hazan and Satyen Kale. Projection-free online learning. ICML, 2012.
[23] Elad Hazan and Haipeng Luo. Variance-reduced and projection-free stochastic optimization. ICML, 2016.
[24] Ruitong Huang, Tor Lattimore, András György, and Csaba Szepesvari. Following the leader and fast rates in linear prediction: Curved constraint sets and other regularities. 2016.
[25] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. ICML, 2013.
[26] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization. 2009.
[27] Shizuo Kakutani. A generalization of Brouwer's fixed point theorem. 1941.
[28] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
[29] Simon Lacoste-Julien and Martin Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. NIPS, 2015.
[30] Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. ICML, 2013.
[31] Guanghui Lan. The complexity of large-scale convex programming under a linear optimization oracle. https://arxiv.org/abs/1309.5550, 2013.
[32] Guanghui Lan and Yi Zhou. Conditional gradient sliding for convex optimization. SIAM Journal on Optimization, 2014.
[33] J. von Neumann, Oskar Morgenstern, et al. Theory of games and economic behavior, 1944.
[34] Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K. Dokania, and Simon Lacoste-Julien. Minding the gaps for block Frank-Wolfe optimization for structural SVMs. ICML, 2016.
[35] Maurice Sion. On general minimax theorems. Pacific J. Math, 8(1):171-176, 1958.
[36] J. von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295-320, 1928.
[37] Yu-Xiang Wang, Veeranjaneyulu Sadhanala, Wei Dai, Willie Neiswanger, Suvrit Sra, and Eric Xing. Parallel and distributed block-coordinate Frank-Wolfe algorithms. ICML, 2016.
[38] P. Wolfe. Convergence theory in nonlinear programming. Integer and Nonlinear Programming, 1970.
[39] Y. Yu, X. Zhang, and D. Schuurmans. Generalized conditional gradient for structured estimation. arXiv:1410.4828, 2014.
| 7236 |@word norm:7 instrumental:1 stronger:2 closure:3 crucially:2 jacob:3 decomposition:1 paid:1 minus:1 biconjugate:1 minding:1 substitution:1 celebrated:1 series:1 daniel:1 existing:4 current:3 surprising:1 luo:1 yet:5 intriguing:1 must:3 john:1 zaid:1 update:5 juditsky:1 selected:1 amir:1 inspection:2 vanishing:3 core:2 granting:1 iterates:1 boosting:2 provides:2 math:2 org:1 kelner:1 zhang:1 mathematical:2 burst:1 differential:2 become:1 symposium:1 focs:1 prove:5 consists:1 dan:4 privacy:2 pairwise:1 indeed:5 behavior:2 nor:1 little:1 begin:4 provided:2 underlying:2 notation:1 bounded:1 what:5 interpreted:1 morgenstern:1 hindsight:1 csaba:1 guarantee:13 act:1 concave:9 exactly:1 prohibitively:1 rm:2 grant:3 enjoy:1 producing:1 pitting:1 positive:2 t1:7 veeranjaneyulu:1 limit:1 consequence:1 establishing:2 discretion:1 path:1 approximately:1 lugosi:1 might:1 equivalence:3 suggests:1 madry:1 nemirovski:1 averaged:1 recursive:1 regret:25 implement:1 block:7 foundational:1 empirical:1 universal:1 maxx:3 isabella:1 projection:11 boyd:1 get:1 convenience:1 interior:5 close:1 cannot:1 put:1 equivalent:2 charged:1 quick:2 yt:24 roth:1 kale:3 attention:2 incredibly:1 convex:60 immediately:2 utilizing:2 proving:2 coordinate:2 updated:1 pt:18 play:1 heavily:1 programming:6 us:2 trick:2 wolfe:35 nowhere:1 expensive:1 satisfying:1 updating:1 particularly:1 trend:1 lean:1 observed:1 role:2 min1:1 wang:2 solved:1 electrical:1 revisiting:1 ensures:3 trade:1 mentioned:1 convexity:4 complexity:1 miny:2 depend:1 solving:1 reviewing:1 tight:1 technically:2 upon:1 efficiency:1 learner:9 eric:1 easily:1 various:5 lukasewitz:1 fast:1 query:1 outside:2 refined:1 abernethy:3 shalev:1 jean:1 modular:2 supplementary:2 solve:3 elad:9 say:2 garber:4 ability:1 satyen:3 think:1 transform:1 final:1 online:23 sequence:8 differentiable:1 propose:1 relevant:1 achieve:7 forth:2 haipeng:1 rgy:1 convergence:24 regularity:1 neumann:3 produce:1 generating:1 adam:1 converges:1 object:1 depending:1 develop:2 amazing:1 odd:1 strong:5 implemented:1 implies:5 direction:1 radius:1 drawback:1 announcing:1 restate:1 correct:1 stochastic:5 hull:1 centered:1 implementing:1 meshi:1 require:5 hx:1 generalization:1 preliminary:1 proposition:1 obliviously:1 hold:3 practically:1 underpinning:1 considered:2 great:1 equilibrium:10 algorithmic:2 tor:1 achieves:2 early:1 smallest:1 estimation:1 correctness:1 tool:3 weighted:1 minimization:2 clearly:1 always:2 aim:1 modified:2 kalai:1 zhou:1 aleksander:1 sion:2 gatech:2 corollary:4 focus:1 naval:1 she:1 check:1 sense:2 vk2:3 summarizing:1 typically:1 w:1 relation:1 reproduce:1 subroutine:2 selects:1 going:1 arg:21 dual:4 overall:1 issue:2 colt:1 priori:1 proposes:2 smoothing:3 constrained:3 special:3 initialize:1 equal:3 construct:2 santosh:1 beach:1 broad:1 yu:2 icml:7 simplex:1 develops:1 few:2 algs:1 beck:1 ab:1 interest:2 possibility:1 dwork:1 evaluation:1 certainly:2 primal:1 ftpl:2 respective:1 iv:1 old:2 desired:2 theoretical:2 sinha:1 fenchel:4 instance:1 downside:1 yoav:2 maximization:1 subset:2 uniform:1 answer:1 perturbed:3 chansoo:1 thoroughly:1 st:1 guanghui:2 international:1 randomized:1 siam:3 probabilistic:1 off:1 lee:1 quickly:1 sanjeev:1 w1:1 von:2 cesa:1 opposed:1 choose:3 huang:1 stochastically:2 maurice:1 leading:1 style:3 return:1 gy:1 depends:1 multiplicative:2 lot:1 closed:2 doing:2 hazan:9 francis:1 xing:1 parallel:1 shai:1 simon:3 arkadi:1 oi:2 variance:1 who:4 efficiently:1 correspond:1 generalize:1 ruitong:1 bor:1 anton:1 served:1 worth:1 history:1 definition:4 
against:2 proof:13 associated:1 attributed:1 boil:1 proved:3 recall:2 knowledge:1 emerges:3 routine:1 carefully:1 actually:6 follow:3 improved:1 wei:1 strongly:28 furthermore:2 just:2 anywhere:1 receives:2 nonlinear:2 perhaps:3 usa:1 normalized:1 verify:1 former:1 hence:6 equality:2 regularization:2 willie:1 nonzero:1 round:6 game:16 during:1 please:4 maintained:1 noted:1 generalized:2 demonstrate:1 wise:2 lattimore:1 novel:3 recently:1 jimwang:1 overview:1 induc:1 discussed:2 refer:1 significant:1 cambridge:2 smoothness:6 rd:3 vanilla:3 unconstrained:1 inclusion:1 access:1 stable:1 operating:1 add:1 nicolo:1 jaggi:3 curvature:1 patrick:1 own:1 recent:3 sadhanala:1 perspective:4 reverse:1 scenario:1 certain:3 suvrit:1 meta:16 inequality:2 vt:7 yi:2 der:1 additional:3 dai:1 managing:1 converge:1 forty:1 christiano:1 semi:1 multiple:1 ii:2 sham:1 reduces:2 full:1 harchaoui:1 smooth:17 stephen:1 bestresponse:11 faster:3 bach:2 long:8 baptiste:1 y:4 laplacian:1 prediction:3 variant:9 essentially:3 expectation:2 arxiv:2 iteration:3 agarwal:1 zur:1 want:3 annealing:1 wealth:1 completes:2 grow:1 appropriately:1 swapped:1 undirected:1 thing:1 flow:3 obj:1 call:4 alayrac:1 structural:4 integer:1 leverage:1 noting:1 iii:2 identically:2 iterate:2 economic:2 idea:1 expression:2 effort:1 suffer:1 dokania:1 hessian:1 proceed:1 speaking:1 action:1 cause:1 cornerstone:1 generally:2 useful:2 tewari:2 aimed:1 annalen:1 svms:2 reduced:1 generate:1 schapire:2 http:1 andr:1 notice:1 sign:1 per:1 mathematische:1 write:1 key:3 lan:2 drawn:1 neither:1 ht:2 utilize:1 lacoste:4 graph:1 subgradient:3 sum:6 run:3 inverse:1 everywhere:1 named:1 extends:1 laid:5 throughout:2 almost:1 place:1 prog:1 decision:5 appendix:4 entirely:3 bound:3 followed:2 played:1 convergent:3 conjugation:1 quadratic:1 oracle:13 annual:2 occur:1 constraint:10 worked:1 precisely:4 hy:1 argument:2 extremely:1 min:17 subgradients:1 vempala:1 martin:3 speedup:1 structured:2 pacific:1 according:3 ball:1 combination:2 conjugate:7 logt:1 smaller:1 slightly:1 legendre:1 y0:2 puneet:1 kakade:1 making:1 oskar:1 invariant:1 remains:1 previously:1 turn:2 discus:1 know:1 neiswanger:1 serf:1 end:6 available:3 operation:2 generalizes:1 opponent:1 ofer:1 observe:4 quarterly:1 away:3 generic:1 appropriate:1 alternative:1 schmidt:1 slower:1 existence:2 original:1 include:1 brouwer:2 ensure:1 newton:1 anatoli:1 folklore:1 prof:1 especially:1 build:1 classical:2 establish:2 amit:1 objective:19 initializes:1 already:2 quantity:1 added:1 strategy:11 damage:1 rt:6 primary:2 dependence:1 exhibit:2 gradient:24 minx:8 simulated:1 philip:1 w0:3 polytope:3 trivial:3 reason:1 induction:3 assuming:2 minimizing:1 kun:1 robert:2 potentially:1 frank:35 theorie:1 stated:1 rise:2 bianchi:1 descent:3 prescient:5 curved:1 logistics:1 payoff:8 looking:2 y1:6 rn:3 rtt:1 discovered:1 perturbation:7 smoothed:2 ninth:1 jebara:1 community:1 cast:1 required:1 namely:1 connection:3 polytopes:4 established:5 nip:3 address:1 able:1 below:1 sliding:1 ambuj:2 recast:1 including:3 max:8 memory:1 suitable:1 natural:1 beautiful:1 pletscher:1 minimax:9 scheme:1 technology:3 brief:2 abhinav:1 julien:4 arora:1 jun:1 extract:1 naive:1 nice:3 literature:4 understanding:2 geometric:1 review:2 prior:1 xiang:1 freund:2 loss:10 dop:1 proven:1 foundation:2 supy:2 storing:1 playing:3 heavy:1 course:1 elsewhere:1 gianluca:1 last:3 free:9 enjoys:1 allow:1 institute:2 taking:1 barrier:1 sparse:1 distributed:1 boundary:7 dimension:2 cumulative:6 avoids:1 made:1 adaptive:1 oftentimes:1 far:1 osokin:1 
approximate:5 compact:5 obtains:1 ignore:1 emphasize:3 implicitly:2 supremum:1 global:1 instantiation:3 reveals:1 assumed:3 leader:3 shwartz:1 continuous:1 modularity:2 why:1 ku:3 szepesvari:1 ca:1 sra:1 obtaining:2 init:4 schuurmans:1 alg:18 cl:1 domain:2 assured:1 did:2 aistats:1 main:3 linearly:3 paul:1 repeated:2 allowed:1 x1:3 georgia:2 fashion:1 sub:1 exponential:3 factory:1 third:1 theorem:26 down:1 specific:1 xt:27 normregularized:1 showing:4 cynthia:1 svm:1 incorporating:1 exists:1 albeit:1 lifting:1 horizon:1 gap:1 logarithmic:2 lt:5 simply:3 saddle:4 explore:3 ez:4 contained:1 hua:1 corresponds:2 wolf:1 satisfies:2 relies:6 acm:3 conditional:9 viewed:2 goal:1 careful:1 towards:1 shtern:1 lipschitz:1 feasible:1 fw:17 typical:2 specifically:1 reducing:1 marguerite:1 wt:19 averaging:1 lemma:4 shang:1 called:2 teng:1 duality:7 experimental:1 player:10 aaron:1 select:3 mark:1 jonathan:1 spielman:1 constructive:1 |
6,895 | 7,237 | Modulating early visual processing by language
Harm de Vries*
Florian Strub*
Jérémie Mary†
University of Montreal
[email protected]
Univ. Lille, CNRS, Centrale Lille,
Inria, UMR 9189 CRIStAL
[email protected]
Univ. Lille, CNRS, Centrale Lille,
Inria, UMR 9189 CRIStAL
[email protected]
Hugo Larochelle
Olivier Pietquin
Aaron Courville
Google Brain
[email protected]
DeepMind
[email protected]
University of Montreal
[email protected]
Abstract
It is commonly assumed that language refers to high-level visual concepts while
leaving low-level visual processing unaffected. This view dominates the current
literature in computational models for language-vision tasks, where visual and
linguistic inputs are mostly processed independently before being fused into a
single representation. In this paper, we deviate from this classic pipeline and
propose to modulate the entire visual processing by a linguistic input. Specifically,
we introduce Conditional Batch Normalization (CBN) as an efficient mechanism
to modulate convolutional feature maps by a linguistic embedding. We apply
CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd
ResNet (MODERN) architecture, and show that this significantly improves strong
baselines on two visual question answering tasks. Our ablation study confirms that
modulating from the early stages of the visual processing is beneficial.
1 Introduction
Human beings combine the processing of language and vision with apparent ease. For example, we
can use natural language to describe perceived objects and we are able to imagine a visual scene
from a given textual description. Developing intelligent machines with such impressive capabilities
remains a long-standing research challenge with many practical applications.
Towards this grand goal, we have witnessed an increased interest in tasks at the intersection of
computer vision and natural language processing. In particular, image captioning [16], visual
question answering (VQA)[1, 23] and visually grounded dialogue systems[5, 6] constitute a popular
set of example tasks for which large-scale datasets are now available. Developing computational
models for language-vision tasks is challenging, especially because of the open question underlying
all these tasks: how to fuse/integrate visual and textual representations? To what extent should we
process visual and linguistic input separately, and at which stage should we fuse them? And equally
important, what fusion mechanism to use?
In this paper, we restrict our attention to the domain of visual question answering which is a natural
testbed for fusing language and vision. The VQA task concerns answering open-ended questions
about images and has received significant attention from the research community [1, 9, 17, 23].
Current state-of-the-art systems often use the following computational pipeline [2, 17, 20] illustrated
in Fig 1. They first extract high-level image features from an ImageNet pretrained convolutional
network (e.g. the activations from a ResNet network [12]), and obtain a language embedding using a
* The first two authors contributed equally
† Now at Criteo
Figure 1: An overview of the classic VQA pipeline (left) vs ours (right). While language and vision
modalities are independently processed in the classic pipeline, we propose to directly modulate
ResNet processing by language.
recurrent neural network (RNN) over word-embeddings. These two high-level representations are
then fused by concatenation [17], element-wise product [11, 13, 14, 17], Tucker decomposition [2]
or compact bilinear pooling [9], and further processed for the downstream task at hand. Attention
mechanisms [27] are often used to have questions attend to specific spatial locations of the extracted
higher-level feature maps.
There are two main reasons for why the recent literature has focused on processing each modality
independently. First, using a pretrained convnet as a feature extractor prevents overfitting: despite a
large training set of a few hundred thousand samples, backpropagating the error of the downstream
task into the weights of all layers often leads to overfitting. Second, the approach aligns with the
dominant view that language interacts with high-level visual concepts. Words, in this view, can be
thought of as "pointers" to high-level conceptual representations. To the best of our knowledge, this
work is the first to fuse modalities at the very early stages of the image processing.
In parallel, the neuroscience community has been exploring to what extent the processing of language
and vision is coupled [8]. More and more evidence accumulates that words set visual priors which
alter how visual information is processed from the very beginning [3, 15, 24]. More precisely, it is
observed that P1 signals, which are related to low-level visual features, are modulated while hearing
specific words [3]. The language cue that people hear ahead of an image activates visual predictions
and speeds up the image recognition process. These findings suggest that independently processing
visual and linguistic features might be suboptimal, and fusing them at the early stage may help the
image processing.
In this paper, we introduce a novel approach to have language modulate the entire visual processing of
a pre-trained convnet. We propose to condition the batch normalization [21] parameters on linguistic
input (e.g., a question in a VQA task). Our approach, called Conditional Batch Normalization (CBN),
is inspired by recent work in style transfer [7]. The key benefit of CBN is that it scales linearly with
the number of feature maps in a convnet, which impacts less than 1% of the parameters, greatly
reducing the risk of over-fitting. We apply CBN to a pretrained Residual Network, leading to a
novel architecture to which we refer as MODERN. We show significant improvements on two VQA
datasets, VQAv1 [1] and GuessWhat?! [6], but stress that our approach is a general fusing mechanism
that can be applied to other multi-modal tasks.
To summarize, our contributions are three fold:
? We propose conditional batch normalization to modulate the entire visual processing by language
from the early processing stages,
? We condition the batch normalization parameters of a pretrained ResNet on linguistic input, leading
to a new network architecture: MODERN,
? We demonstrate improvements on state-of-the-art models for two VQA tasks and show the
contribution of this modulation on the early stages.
2 Background
In this section we provide preliminaries on several components of our proposed VQA model.
2.1 Residual networks
We briefly outline residual networks (ResNets) [12], one of the current top-performing convolutional
networks that won the ILSVRC 2015 classification competition. In contrast to precursor convnets (e.g. VGG [22]) that construct a new representation at each layer, ResNet iteratively refines a
representation by adding residuals. This modification makes it possible to train very deep convolutional
networks without suffering as much from the vanishing gradient problem. More specifically, ResNets
are built from residual blocks:
$$F^{k+1} = \text{ReLU}\left( F^k + R(F^k) \right) \quad (1)$$
where $F^k$ denotes the outputted feature map. We will write $F_{i,c,w,h}$ to denote the $i$-th input sample of the $c$-th feature map at location $(w, h)$. The residual function $R(F^k)$ is composed of three convolutional layers (with kernel sizes of 1, 3 and 1, respectively). See Fig. 2 in the original ResNet paper [12] for a detailed overview of a residual block.
A group of blocks is stacked to form a stage of computation in which the representation dimensionality
stays identical. The general ResNet architecture starts with a single convolutional layer followed by
four stages of computation. The transition from one stage to another is achieved through a projection
layer that halves the spatial dimensions and doubles the number of feature maps. There are several
pretrained ResNets available, including ResNet-50, ResNet-101 and ResNet-152, which differ in the number of residual blocks per stage.
2.2 Batch Normalization
The convolutional layers in ResNets make use of Batch Normalization (BN), a technique that was
originally designed to accelerate the training of neural networks by reducing internal covariate
shift [21]. Given a mini-batch B = {F_{i,·,·,·}}_{i=1}^N of N examples, BN normalizes the feature maps at
training time as follows:

BN(F_{i,c,h,w} | γ_c, β_c) = γ_c · (F_{i,c,h,w} − E_B[F_{·,c,·,·}]) / √(Var_B[F_{·,c,·,·}] + ε) + β_c,    (2)

where ε is a constant damping factor for numerical stability, and γ_c and β_c are trainable scalars
introduced to keep the representational power of the original network. Note that for convolutional
layers the mean and variance are computed over both the batch and spatial dimensions (such that
each location in the feature map is normalized in the same way). After the BN module, the output
is fed to a non-linear activation function. At inference time, the batch mean E_B and variance Var_B
are replaced by the population mean μ and variance σ², often estimated by an exponential moving
average over the batch mean and variance during training.
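A minimal numpy sketch of the training-time normalization in Eq. (2), with the statistics taken over the batch and spatial dimensions, may help make the indexing concrete:

```python
import numpy as np

def batch_norm(feat, gamma, beta, eps=1e-5):
    """Sketch of Eq. (2). feat: (N, C, H, W); gamma, beta: (C,)."""
    mean = feat.mean(axis=(0, 2, 3), keepdims=True)  # E_B[F_{.,c,.,.}]
    var = feat.var(axis=(0, 2, 3), keepdims=True)    # Var_B[F_{.,c,.,.}]
    normalized = (feat - mean) / np.sqrt(var + eps)
    return (gamma.reshape(1, -1, 1, 1) * normalized
            + beta.reshape(1, -1, 1, 1))
```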
2.3 Language embeddings
We briefly recap the most common way to obtain a language embedding from a natural language
question. Formally, a question q = [w_k]_{k=1}^K is a sequence of length K with each token w_k taken
from a predefined vocabulary V. We transform each token into a dense word embedding e(w_k)
by a learned look-up table. For tasks with limited linguistic corpora (like VQA), it is common to
concatenate pretrained GloVe [19] vectors to the word embeddings. The sequence of embeddings
[e(w_k)]_{k=1}^K is then fed to a recurrent neural network (RNN), which produces a sequence of RNN
state vectors [s_k]_{k=1}^K by repeatedly applying the transition function f:

s_{k+1} = f(s_k, e(w_k)).    (3)

Popular transition functions, like the long short-term memory (LSTM) cell [10] and the Gated Recurrent
Unit (GRU) [4], incorporate gating mechanisms to better handle long-term dependencies. In this work,
we use an LSTM cell as our transition function. Finally, we take the last hidden state s_K as the
embedding of the question, which we denote as e_q throughout the rest of this paper.
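A minimal PyTorch sketch of this question encoder is given below; it assumes unpadded, fixed-length token sequences, and the embedding and hidden sizes are our placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Sketch of Section 2.3: embed tokens, run an LSTM, keep the last state."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # learned look-up table
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):      # tokens: (batch, K) word indices
        e = self.embed(tokens)      # (batch, K, embed_dim)
        states, _ = self.lstm(e)    # s_{k+1} = f(s_k, e(w_k))
        return states[:, -1]        # e_q: last hidden state
```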
Figure 2: An overview of the computation graph of batch normalization (left) and conditional batch
normalization (right). Best viewed in color.

3 Modulated Residual Networks
In this section we introduce conditional batch normalization, and show how we can use it to modulate
a pretrained ResNet. The key idea is to predict the γ and β parameters of the batch normalization from a
language embedding. We first focus on a single convolutional layer with batch normalization module
BN(F_{i,c,h,w} | γ_c, β_c) for which pretrained scalars γ_c and β_c are available. We would like to directly
predict these affine scaling parameters from our language embedding e_q. When starting the training
procedure, these parameters must be close to the pretrained values to recover the original ResNet
model, as a poor initialization could significantly deteriorate performance. Unfortunately, it is difficult
to initialize a network to output the pretrained γ and β. For these reasons, we propose to predict
a change Δβ_c and Δγ_c on the frozen original scalars, for which it is straightforward to initialize a
neural network to produce an output with zero mean and small variance.
We use a one-hidden-layer MLP to predict these deltas from the question embedding e_q for all feature
maps within the layer:

Δβ = MLP(e_q),   Δγ = MLP(e_q).    (4)

So, given a feature map with C channels, these MLPs output a vector of size C. We then add these
predictions to the β and γ parameters:

β̂_c = β_c + Δβ_c,   γ̂_c = γ_c + Δγ_c.    (5)

Finally, these updated β̂ and γ̂ are used as parameters for the batch normalization:
BN(F_{i,c,h,w} | γ̂_c, β̂_c). We stress that we freeze all ResNet parameters, including γ and β, during training. In Fig. 2, we visualize the difference between the computational flow of the original
batch normalization and our proposed modification. As explained in Section 2.1, a ResNet consists of
four stages of computation, each subdivided into several residual blocks. In each block, we apply CBN
to the three convolutional layers, as highlighted in Fig. 3.
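A minimal PyTorch sketch of a CBN layer is shown below. The pretrained scalars are stored as frozen buffers (initialized here with placeholder ones/zeros; in MODERN they would be loaded from the pretrained ResNet), and the final MLP layers are zero-initialized so that Δγ = Δβ = 0 at the start, recovering the original network exactly:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Sketch of CBN (Eqs. 4-5): predict deltas on frozen BN scalars from e_q."""
    def __init__(self, num_channels, embed_dim, hidden_dim=512):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)  # stats only
        # frozen pretrained scalars gamma_c, beta_c (placeholders here)
        self.register_buffer("gamma", torch.ones(num_channels))
        self.register_buffer("beta", torch.zeros(num_channels))
        self.mlp_gamma = nn.Sequential(nn.Linear(embed_dim, hidden_dim),
                                       nn.ReLU(),
                                       nn.Linear(hidden_dim, num_channels))
        self.mlp_beta = nn.Sequential(nn.Linear(embed_dim, hidden_dim),
                                      nn.ReLU(),
                                      nn.Linear(hidden_dim, num_channels))
        for mlp in (self.mlp_gamma, self.mlp_beta):  # start from the pretrained net
            nn.init.zeros_(mlp[-1].weight)
            nn.init.zeros_(mlp[-1].bias)

    def forward(self, feat, e_q):                      # feat: (N,C,H,W), e_q: (N,E)
        gamma_hat = self.gamma + self.mlp_gamma(e_q)   # Eq. (5), per sample
        beta_hat = self.beta + self.mlp_beta(e_q)
        out = self.bn(feat)                            # normalize with batch stats
        return (gamma_hat[:, :, None, None] * out
                + beta_hat[:, :, None, None])
```

Whether the two deltas share one MLP trunk or use two separate MLPs, as here, is a design choice the paper leaves open.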
CBN is a computationally efficient and powerful method to modulate neural activations; it enables
the linguistic embedding to manipulate entire feature maps by scaling them up or down, negating
them, or shutting them off, etc. As there are only two parameters per feature map, the total number of
BN parameters comprises less than 1% of the total number of parameters of a pre-trained ResNet.
This makes CBN a very scalable method compared to conditionally predicting the weight matrices
(or a low-rank approximation thereof).
4 Experimental setting
We evaluate the proposed conditional batch normalization on two VQA tasks. In the next section, we
outline these tasks and describe the neural architectures we use for our experiments. The source code
for our experiments is available at https://github.com/GuessWhatGame. The hyperparameters
are also provided in Appendix A.
Figure 3: An overview of the MODERN architecture conditioned on the language embedding.
MODERN modulates the batch norm parameters in all residual blocks.
4.1 VQA
The Visual Question Answering (VQA) task consists of open-ended questions about real images. Answering these questions requires an understanding of vision, language and commonsense knowledge.
In this paper, we focus on the VQAv1 dataset [1], which contains 614K questions on 204K images.
Our baseline architecture first obtains a question embedding e_q with an LSTM network, as further
detailed in Section 2.3. For the image, we extract the feature maps F of the last layer of ResNet-50
(before the pooling layer). For an input of size 224x224 these feature maps are of size 7x7, and we
incorporate a spatial attention mechanism, conditioned on the question embedding e_q, to pool over
the spatial dimensions. Formally, given feature maps F_{i,·,·,·} and a question embedding e_q, we obtain
a visual embedding e_v as follows:

ξ_{w,h} = MLP([F_{i,·,w,h}; e_q]);   α_{w,h} = exp(ξ_{w,h}) / Σ_{w,h} exp(ξ_{w,h});   e_v = Σ_{w,h} α_{w,h} F_{i,·,w,h},    (6)
where [F_{i,·,w,h}; e_q] denotes the concatenation of the two vectors. We use an MLP with one hidden layer
and ReLU activations whose parameters are shared along the spatial dimensions. The visual and
question embeddings are then fused by an element-wise product [1, 13, 14] as follows:

fuse(e_q, e_v) = P^T (tanh(U^T e_q) ⊙ tanh(V^T e_v)) + b_P,    (7)

where ⊙ denotes an element-wise product, P, U and V are trainable weight matrices, and b_P
is a trainable bias. The linguistic and perceptual representations are first projected to a space of
equal dimensionality, after which a tanh non-linearity is applied. A fused vector is then computed by
an element-wise product between the two representations. From this joined embedding we finally
predict an answer distribution with a linear layer followed by a softmax activation function.
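The attention pooling of Eq. (6) and the element-wise fusion of Eq. (7) can be sketched in PyTorch as follows (hyperparameter names and sizes are ours):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Sketch of Eqs. (6)-(7): MLP-scored spatial attention, then fusion."""
    def __init__(self, feat_dim, embed_dim, common_dim, num_answers, hidden=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim + embed_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))
        self.U = nn.Linear(embed_dim, common_dim, bias=False)
        self.V = nn.Linear(feat_dim, common_dim, bias=False)
        self.P = nn.Linear(common_dim, num_answers)   # includes the bias b_P

    def forward(self, feat, e_q):                     # feat: (N, W*H, feat_dim)
        q = e_q.unsqueeze(1).expand(-1, feat.size(1), -1)
        xi = self.score(torch.cat([feat, q], dim=-1))  # xi_{w,h}, shape (N, W*H, 1)
        alpha = torch.softmax(xi, dim=1)               # Eq. (6)
        e_v = (alpha * feat).sum(dim=1)                # attended visual embedding
        fused = torch.tanh(self.U(e_q)) * torch.tanh(self.V(e_v))  # Eq. (7)
        return self.P(fused)                           # answer logits, pre-softmax
```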
We will use the described architecture to study the impact of CBN when using it in several stages of
the ResNet. As our approach can be combined with any existing VQA architecture, we also apply
MODERN to MRN [13, 14], a state-of-the-art network for VQA. More specifically, this network
replaces the classic attention mechanism with a more advanced one that includes g glimpses over the
image features:

ξ^g_{w,h} = P_{ξ^g}^T (tanh(U'^T q) ⊙ tanh(V'^T F_{i,·,w,h})),    (8)

α^g_{w,h} = exp(ξ^g_{w,h}) / Σ_{w,h} exp(ξ^g_{w,h});   e_v^g = Σ_{w,h} α^g_{w,h} F_{i,·,w,h},    (9)

where P_{ξ^g} is a trainable weight matrix defined for each glimpse g, U' and V' are trainable weight
matrices shared among the glimpses, and ‖ concatenates vectors over their last dimension.
Table 1: VQA accuracies trained with the train set and evaluated on test-dev.

                                                  Yes/No   Number   Other    Overall
224x224   Baseline                                79.45%   36.63%   44.62%   58.05%
          Ft Stage 4                              78.37%   34.27%   43.72%   56.91%
          Ft BN                                   80.18%   35.98%   46.07%   58.98%
          MODERN                                  81.17%   37.79%   48.66%   60.82%
448x448   MRN [14] with ResNet-50                 80.20%   37.73%   49.53%   60.84%
          MRN [14] with ResNet-152                80.95%   38.39%   50.59%   61.73%
          MUTAN+MLB [2]                           82.29%   37.27%   48.23%   61.02%
          MCB + Attention [9] with ResNet-50      -        38.29%   48.68%   60.46%
          MCB + Attention [9] with ResNet-152     -        -        -        62.50%
          MODERN                                  81.38%   36.06%   51.64%   62.16%
          MODERN + MRN [14]                       82.17%   38.06%   52.29%   63.01%
Table 2: Ablation study to investigate the impact of leaving out the lower stages of ResNet.

(a) VQA, higher is better              (b) GuessWhat?!, lower is better

CBN applied to   Val. accuracy         CBN applied to   Test error
∅                56.12%                ∅                29.92%
Stage 4          57.68%                Stage 4          26.42%
Stages 3-4       58.29%                Stages 3-4       25.24%
Stages 2-4       58.32%                Stages 2-4       25.31%
All              58.56%                All              25.06%
Noticeably, MODERN modulates the entire visual processing pipeline and therefore backpropagates
through all convolutional layers. This requires much more GPU memory than using extracted features.
To feasibly run such experiments on today's hardware, we conduct all experiments in this paper with
a ResNet-50.
As for our training procedure, we select the 2k most-common answers from the training set, and
use a cross-entropy loss over the distribution of provided answers. We train on the training set, do
early-stopping on the validation set, and report the accuracies on the test-dev using the evaluation
script provided by [1].
4.2 GuessWhat?!
GuessWhat?! is a cooperative two-player game in which both players see the image of a rich visual
scene with several objects. One player, the Oracle, is randomly assigned an object in the scene.
This object is not known by the other player, the questioner, whose goal it is to locate the hidden
object by asking a series of yes-no questions which are answered by the Oracle [6].
The full dataset is composed of 822K binary question/answer pairs on 67K images. Interestingly, the
GuessWhat?! game rules naturally lead to a rich variety of visually grounded questions. As opposed
to the VQAv1 dataset, the dataset contains very few commonsense questions that can be answered
without the image.
In this paper, we focus on the Oracle task, which is a form of visual question answering in which
the answers are limited to yes, no and not applicable. Specifically, the Oracle may take as input
the incoming question q, the image I and the target object o*. This object can be described by its
category c, its spatial location and the object crop.
We outline here the neural network architecture that was reported in the original GuessWhat?!
paper [6]. First, we crop the initial image using the target object's bounding box and rescale
it to a 224 by 224 square. We then extract the activation of the last convolutional layer after the ReLU
(stage 4) of a pre-trained ResNet-50. We also embed the spatial information of the crop within the
image by extracting an 8-dimensional vector of the location of the bounding box:

[x_min, y_min, x_max, y_max, x_center, y_center, w_box, h_box],    (10)

where w_box and h_box denote the width and height of the bounding box, respectively.
Table 3: GuessWhat?! test errors for the Oracle model with different embeddings. Lower is better.

                Crop      Crop + Spatial + Category
Raw features    29.92%    22.55%
Ft Stage 4      27.48%    22.68%
Ft BN           27.94%    22.42%
CBN             25.06%    19.52%
Spatial + Category only (no visual embedding): 21.5%
We convert the object category c into a dense category embedding using a learned look-up table. Finally, we use an
LSTM to encode the current question q. We then concatenate all embeddings into a single vector
and feed it as input to a single-hidden-layer MLP that outputs the final answer distribution using a
softmax layer.
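A small sketch of the 8-dimensional spatial feature of Eq. (10) is given below; normalizing coordinates to [-1, 1] is our assumption, since the paper only lists the raw entries:

```python
def spatial_features(x_min, y_min, x_max, y_max, img_w, img_h):
    """Sketch of Eq. (10): 8-dim location vector of the object bounding box."""
    # scale coordinates so the image spans [-1, 1] in both axes (assumption)
    sx = lambda x: 2.0 * x / img_w - 1.0
    sy = lambda y: 2.0 * y / img_h - 1.0
    w_box = sx(x_max) - sx(x_min)
    h_box = sy(y_max) - sy(y_min)
    x_c = (sx(x_min) + sx(x_max)) / 2.0
    y_c = (sy(y_min) + sy(y_max)) / 2.0
    return [sx(x_min), sy(y_min), sx(x_max), sy(y_max), x_c, y_c, w_box, h_box]
```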
4.3 Baselines
For both datasets we empirically investigate several modifications of the described architectures. We
refer to MODERN when we apply conditional batch normalization to all layers of ResNet-50, as
described in Section 3. To verify that the gains of MODERN do not come from increased model
capacity, we include two baselines with more capacity. The first model finetunes the layers of stage 4
of the ResNet-50 of our baseline model. This is common practice when transferring a pretrained network
to a new task, and we refer to it as Ft Stage 4. We also introduce a novel baseline, Ft BN, which
consists of finetuning all γ and β parameters of ResNet-50 while freezing all its weights.
For VQA, we report the results of two state-of-the-art architectures, namely, Multimodal Compact
Bilinear pooling network (MCB) [9] (winner of the VQA challenge 2016) and MUTAN [2]. Both
approaches employ an (approximate) bilinear pooling mechanism to fuse the language and vision
embeddings, using respectively a random projection and a tensor decomposition. In addition, we
re-implement and run the MRN model described in Section 4.1. When benchmarking state-of-the-art
models, we train on the training set, perform early stopping on the validation set and report accuracy
on the test set (test-dev in the case of VQA).
4.4 Results
VQA We report the best validation accuracy of the outlined methods on the VQA task in Table 1.
Note that we use input images of size 224x224 when we compare MODERN against the baselines (as
well as for the ablation study presented in Table 2a). Our initial baseline achieves 58.05% accuracy,
and we find that finetuning the last layers (Ft Stage 4) does not improve this performance (56.91%).
Interestingly, just finetuning the batch norm parameters (Ft BN) significantly improves the accuracy
to 58.98%. We see another significant performance jump when we condition the batch normalization
on the question input (MODERN), which improves over our baseline by almost 2 accuracy points to
60.82%.
Because state-of-the-art models use images of size 448x448, we also include results on these larger
images. As seen in Table 1, MODERN then nearly matches the state-of-the-art results with 62.16%.
As MODERN does not rely on a specific attention mechanism, we further combine our
proposed method with the MRN [13, 14] architecture, and observe that it outperforms the state-of-the-art
MCB model [9] by half a point. Please note that we select MRN [13, 14] over MCB [9] as the former
requires fewer weight parameters and is more stable to train.
Note that the presented results use a ResNet-50, while other models rely on image embeddings extracted from a ResNet-152. For the sake of comparison, we run the baseline models with
image embeddings extracted from a ResNet-50. Also for the more advanced MRN architecture, we observe
performance gains of approximately 2 accuracy points.
GuessWhat?! We report the best test errors for the outlined methods on the Oracle task of GuessWhat?! in Table 3. We first compare the results when we only feed the crop of the selected object to
the model. We observe the same trend as in VQA. With an error of 25.06%, CBN performs better
than either fine-tuning the final block (27.48% error) or the batch-norm parameters (27.94%
Figure 4: t-SNE projection of feature maps (before the attention mechanism) of raw ResNet (a) and
MODERN (b). Points are colored according to the answer type of VQA. While there are no clusters
with raw features, MODERN successfully modulates the image features towards specific answer types.
error), which in turn improves over just using the raw features (29.92% error). Note that the relative
improvement (5 error points) for CBN is much bigger for GuessWhat?! than for VQA.
We therefore also investigate the performance of the methods when we include the spatial and
category information. We observe that finetuning the last layers or the BN parameters does not improve
the performance, while MODERN improves the best reported test error by 2 points, to 19.52% error.
4.5 Discussion
By analyzing the results of both the VQA and GuessWhat?! experiments, we can gain better
insight into MODERN's capabilities.
MODERN vs fine-tuning In both experiments, MODERN outperforms Ft BN. Both methods
update the same ResNet parameters so this demonstrates that it is important to condition on the
language representation. MODERN also outperforms Ft Stage 4 on both tasks which shows that the
performance gain of MODERN is not due to the increased model capacity.
Conditional embedding In the provided baselines of the Oracle task of GuessWhat?! [6], the
authors observed that the best test error (21.5%) is obtained by providing only the object category and
its spatial location. For this model, including the raw features of the object crop actually deteriorates
the performance to 22.55% error. This means that this baseline fails to extract from the images
relevant information that is not already in the handcrafted features; the Oracle therefore cannot
correctly answer questions that require more than the spatial information and the object category. In
the baseline model, the embedding of the crop from a generic ResNet does not help, even when we
finetune stage 4 or BN. In contrast, applying MODERN helps to better answer questions, as the test
error drops by 2 points.
Ablation study We investigate the impact of only modulating the top layers of a ResNet and report
these results in Table 2. Interestingly, we observe that the performance slowly decreases when we
apply CBN exclusively to later stages. We stress that for best performance it is important to modulate
all stages, but if computational resources are limited we recommend applying it to the two last stages.
Visualizing the representations In order to gain more insight into our proposed fusion mechanism,
we compare visualizations of the visual embeddings created by our baseline model and by MODERN.
We first randomly picked 1000 unique image/question pairs from the validation set of VQA. For
the trained MODERN model, we extract the image features just before the attention mechanism, and
compare them with extracted raw ResNet-50 features and finetuned ResNet-50 features (block 4 and
batch norm parameters). We first reduce the dimensionality by average pooling over the
spatial dimensions of the feature maps, and subsequently apply t-SNE [25] to this set of embeddings.
We color the points according to the answer type provided by the VQA dataset, and show these
visualizations for both models in Fig. 4 and in Fig. 7 of Appendix B. Interestingly, we observe
that all answer types are spread out for raw image features and finetuned features. In contrast, the
representations of MODERN are cleanly grouped into three answer types. This demonstrates that
MODERN successfully disentangles the image representations by answer type, which is likely to
ease the later fusion process. While finetuning the models does cluster features, there is no direct link
between those clusters and the answer types. These results indicate that MODERN successfully
learns representations that differ from classic finetuning strategies. In Appendix B, we visualize the
feature disentangling process stage by stage. It is possible to spot some sub-clusters in the t-SNE
representation; they in fact correspond to image/question pairs which are similar but not
explicitly tagged in the VQA dataset. For example, in Fig. 6 of Appendix B we highlight pairs where
the answer is a color.
5 Related work
MODERN is related to a lot of recent work in VQA [1]. The majority of proposed methods use a
similar computational pipeline introduced by [17, 20]: first, extract high-level image features from an
ImageNet-pretrained convnet, while independently processing the question with an RNN. Some work
has focused on the top-level fusing mechanism of the language and visual vectors. For instance, it was
shown that one can improve upon classic concatenation by an element-wise product [1, 13, 14], Tucker
decomposition [2], bilinear pooling [9] or more exotic approaches [18]. Another line of research has
investigated the role of attention mechanisms in VQA [26, 11, 28]. The authors of [11] propose a
co-attention model over visual and language embeddings, while [28] proposes to stack several spatial
attention mechanisms. Although an attention mechanism can be thought of as modulating the visual
features by language, we stress that such mechanisms act on the high-level features. In contrast, our
work modulates the visual processing from the very start.
MODERN is inspired by conditional instance normalization (CIN) [7], which was successfully applied
to image style transfer. While previous methods transferred one image style per network, [7] showed
that up to 32 styles could be compressed into a single network by sharing the convolutional filters
and learning style-specific normalization parameters. There are notable differences with our work.
First, [7] uses a non-differentiable table lookup for the normalization parameters while we propose
a differentiable mapping from the question embedding. Second, we predict a change on the normalization parameters of a pretrained convolutional network while keeping the convolutional filters
fixed; in CIN, all parameters, including the transposed convolutional filters, are trained. To the best
of our knowledge, this is the first paper to conditionally modulate the vision processing using the
normalization parameters.
6 Conclusion
In this paper, we introduce Conditional Batch Normalization (CBN) as a novel fusion mechanism to
modulate all layers of a visual processing network. Specifically, we applied CBN to a pre-trained
ResNet, leading to the proposed MODERN architecture. Our approach is motivated by recent
evidence from neuroscience suggesting that language influences the early stages of visual processing.
One of the strengths of MODERN is that it can be incorporated into existing architectures, and our
experiments demonstrate that this significantly improves the baseline models. We also found that it is
important to modulate the entire visual signal to obtain maximum performance gains.
While this paper focuses on text and images, MODERN can be extended to neural architectures
dealing with other modalities such as sound or video. More broadly, CBN could also be applied
to modulate the internal representation of any deep network with respect to any embedding, regardless
of the underlying task. For instance, signal modulation through batch norm parameters may also be
beneficial for reinforcement learning, natural language processing or adversarial training tasks.
Acknowledgements
The authors would like to acknowledge the stimulating research environment of the SequeL lab.
We thank Vincent Dumoulin for helpful discussions about conditional batch normalization. We
acknowledge the following agencies for research funding and computing support: CHISTERA IGLU
and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020,
NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. We thank
NVIDIA for providing access to a DGX-1 machine used in this work.
References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual
question answering. In Proc. of ICCV, 2015.
[2] H. Ben-Younes, R. Cadène, N. Thome, and M. Cord. MUTAN: Multimodal Tucker Fusion for
Visual Question Answering. arXiv preprint arXiv:1705.06676, 2017.
[3] B. Boutonnet and G. Lupyan. Words jump-start vision: A label advantage in object recognition.
Journal of Neuroscience, 35(25):9329-9335, 2015.
[4] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and
Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine
translation. In Proc. of EMNLP, 2014.
[5] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. Moura, D. Parikh, and D. Batra. Visual
Dialog. In Proc. of CVPR, 2017.
[6] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville. GuessWhat?!
Visual object discovery through multi-modal dialogue. In Proc. of CVPR, 2017.
[7] V. Dumoulin, J. Shlens, and M. Kudlur. A Learned Representation For Artistic Style. In Proc.
of ICLR, 2017.
[8] F. Ferreira and M. Tanenhaus. Introduction to the special issue on language?vision interactions.
Journal of Memory and Language, 57(4):455-459, 2007.
[9] A. Fukui, D. Huk Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach. Multimodal
Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. In Proc. of
EMNLP, 2016.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9:1735-1780,
1997.
[11] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical question-image co-attention for visual
question answering. In Proc. of NIPS, 2016.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In
Proc. of CVPR, 2016.
[13] J. Kim, S. Lee, D. Kwak, M. Heo, J. Kim, J. Ha, and B. Zhang. Multimodal residual learning
for visual qa. In Proc. of NIPS, 2016.
[14] J. Kim, K. On, J. Kim, J. Ha, and B. Zhang. Hadamard product for low-rank bilinear pooling.
In Proc. of ICLR, 2017.
[15] P. Kok, M. Failing, and F. de Lange. Prior expectations evoke stimulus templates in the primary
visual cortex. Journal of Cognitive Neuroscience, 26(7):1546-1554, 2014.
[16] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and L. Zitnick.
Microsoft coco: Common objects in context. In Proc of ECCV, 2014.
[17] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to
answering questions about images. In Proc. of ICCV, 2015.
[18] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A deep learning approach to
visual question answering. arXiv preprint arXiv:1605.02697, 2016.
[19] J. Pennington, R. Socher, and C. Manning. Glove: Global Vectors for Word Representation. In
Proc. of EMNLP, 2014.
[20] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In
Proc. of NIPS, 2015.
[21] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by
Reducing Internal Covariate Shift. In Proc. of ICML, 2015.
[22] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. In Proc. of ICLR, 2015.
[23] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating
the role of image understanding in Visual Question Answering. In Proc. of CVPR, 2017.
[24] G. Thierry, P. Athanasopoulos, A. Wiggett, B. Dering, and J.-R. Kuipers. Unconscious effects
of language-specific terminology on preattentive color perception. PNAS, 106(11):4567-4570,
2009.
[25] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 9(Nov):2579-2605,
2008.
[26] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for
visual question answering. In Proc. of ECCV, 2015.
[27] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show,
attend and tell: Neural image caption generation with visual attention. In Proc. of ICML, 2015.
[28] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question
answering. In Proc. of CVPR, 2016.
6,896 | 7,238 | Learning Mixture of Gaussians with Streaming Data
Aditi Raghunathan
Stanford University
[email protected]
Prateek Jain
Microsoft Research, India
[email protected]
Ravishankar Krishnaswamy
Microsoft Research, India
[email protected]
Abstract
In this paper, we study the problem of learning a mixture of Gaussians with streaming data: given a stream of N points in d dimensions generated by an unknown
mixture of k spherical Gaussians, the goal is to estimate the model parameters using
a single pass over the data stream. We analyze a streaming version of the popular
Lloyd's heuristic and show that the algorithm estimates all the unknown centers of
the component Gaussians accurately if they are sufficiently separated. Assuming
each pair of centers is Cσ distant with C = Ω((k log k)^{1/4}), where σ² is
the maximum variance of any Gaussian component, we show that asymptotically
the algorithm estimates the centers optimally (up to certain constants); our center
separation requirement matches the best known result for spherical Gaussians [18].
For finite samples, we show that a bias term based on the initial estimate decreases
at an O(1/poly(N)) rate while the variance decreases at a nearly optimal rate of σ²d/N.
Our analysis requires seeding the algorithm with a good initial estimate of the true
cluster centers, for which we provide an online PCA based clustering algorithm.
Indeed, the asymptotic per-step time complexity of our algorithm is the optimal
d·k, while the space complexity of our algorithm is O(dk log k).
In addition to the bias and variance terms which tend to 0, the hard-thresholding
based updates of the streaming Lloyd's algorithm are agnostic to the data distribution
and hence incur an approximation error that cannot be avoided. However, by
using a streaming version of the classical (soft-thresholding-based) EM method
that exploits the Gaussian distribution explicitly, we show that for a mixture of
two Gaussians the true means can be estimated consistently, with the estimation error
decreasing at a nearly optimal rate, and tending to 0 for N → ∞.
1 Introduction
Clustering data into homogeneous clusters is a critical first step in any data analysis/exploration task
and is used extensively to pre-process data, form features, remove outliers and visualize data. Due
to the explosion in amount of data collected and processed, designing clustering algorithms that
can handle large datasets that do not fit in RAM is paramount to any big-data system. A common
approach in such scenarios is to treat the entire dataset as a stream of data, and then design algorithms
which update the model after every few points from the data stream. In addition, there are several
practical applications where the data itself is not available beforehand and is streaming in, for example
in any typical online system like web-search.
For such a model, the algorithm of choice in practice is the so-called streaming k-means heuristic.
It is essentially a streaming version of the celebrated k-means algorithm or Lloyd's heuristic [8].
The basic k-means algorithm is designed for offline/batch data where each data point is assigned to
the nearest centroid and the centroids are then updated based on the assigned points; this process is
iterated till the solution is locally optimal. The streaming version of the k-means algorithm assigns
the new point from the stream to the closest centroid and updates this centroid immediately. That is,
unlike offline k-means which first assigns all the points to the respective centroids and then updates
the centroids, the streaming algorithm updates the centroids after each point, making it much more
space efficient. While streaming k-means and its several variants are used heavily in practice, their
properties, such as solution quality and time to convergence, have not been widely studied. In
this paper, we attempt to provide such a theoretical study of the streaming k-means heuristic. One
of the big challenges is that even the (offline) k-means algorithm attempts to solve a non-convex
NP-hard problem. Streaming data poses additional challenges because the large noise in each
point can perturb the solution significantly.
In the offline setting, clustering algorithms are typically studied under certain simplifying assumptions
that help bypass the worst-case NP-hardness of these problems. One of the most widely studied
settings is when the data is sampled from a mixture of well-separated Gaussians [5, 18, 1], which is
also the generative assumption that we impose on the data in this work. However, the online/streaming
version of the k-means algorithm has not been studied in such settings. In this work, we design and
study a variant of the popular online k-means algorithm where the data is streaming-in, we cannot
store more than logarithmically many data points, and each data point is sampled from a mixture of
well-separated spherical Gaussians. The goal of the algorithm is then to learn the means of each of
the Gaussians; note that estimating other parameters like variance, and weight of each Gaussian in
the mixture becomes simple once the true means are estimated accurately.
Our Results. Our main contribution is the first bias-variance bound for the problem of learning
Gaussian mixtures with streaming data. Assuming that the centers are separated by Cσ, where
C = Ω(√(log k)), and that we seed the algorithm with initial cluster centers that are at most Cσ/20 distance
away from the true centers, we show that the error in estimating the true centers can be
decomposed into three terms, and we bound each one of them: (a) the bias term, i.e., the term dependent
on the distance of the initial centers to the true means, decreases at a 1/poly(N) rate, where N is the number
of data points observed so far; (b) the variance term is bounded by σ²(d log N)/N, where σ is the
standard deviation of each of the Gaussians and d is the dimensionality of the data; and (c) an offline
approximation error: indeed, note that even the offline Lloyd's heuristic will have an approximation
error due to its hard-thresholding nature. For example, even when k = 2 and the centers are separated
by Cσ, around an exp(−C²/8) fraction of the points from the first Gaussian will be closer to the second
center, and so the k-means heuristic will converge to centers that are at a squared distance of roughly
O(C²)·exp(−C²/8)·σ² from the true means. We essentially inherit this offline error up to constants.
Note that the above result holds at a center separation of Ω(√(log k)·σ) distance, which is substantially
weaker than the currently best-known result of Ω(σk^{1/4}) for even the offline problem [18]. However,
as mentioned before, this only holds provided we have a good initialization. To this end, we show
that when C = Ω((k log k)^{1/4}), we can combine an online PCA algorithm [9, 11] with the batch
k-means algorithm on a small seed sample of around O(k log k) points to get such an initialization.
Note that this separation requirement nearly matches the best-known offline results [18].
Finally, we also study a soft version of the streaming k-means algorithm, which can be viewed
as a streaming version of the popular Expectation Maximization (EM) algorithm. We show that
for a mixture of two well-separated Gaussians, a variant of the streaming EM algorithm recovers the
above-mentioned bias-variance bound but without the approximation error. That is, after observing
infinitely many samples, streaming EM converges to the true means and matches the corresponding
offline results in [3, 6]; to the best of our knowledge this is also the first such consistency result for the
streaming mixture problem. However, the EM updates require that the data is sampled from a mixture
of Gaussians, while the updates of the streaming Lloyd's algorithm are agnostic to the data distribution,
and hence the same updates can be used to solve arbitrary mixtures of sub-Gaussians as well.
Technical Challenges. One key technical challenge in analyzing streaming k-means algorithm in
comparison to the standard streaming regression style problems is that the offline problem itself is
non-convex and moreover can only be solved approximately. Hence, a careful analysis is required to
separate out the error we get in each iteration in terms of the bias, variance, and inherent approximation
error terms. Moreover, due to the non-convexity, we are able to guarantee decrease in error only if
each of our iterates lies in a small ball around the true mean. While this is initially true due to the
initialization algorithm, our intermediate centers might escape these balls during our update. However,
we show using a delicate martingale based argument that with high probability, our estimates stay
within slightly larger balls around the true means, which turns out to be sufficient for us.
Related Work. A closely related work to ours is an independent work by [17], which studies a
stochastic version of k-means for data points that satisfy a spectral variance condition, which can be
seen as a deterministic version of the mixture-of-distributions assumption. However, their method
requires multiple passes over the data, and thus does not fit directly in the streaming k-means setting.
In particular, the above-mentioned paper analyzes the stochastic k-means method only for a highly
accurate initial set of iterates, which requires a large burn-in period of t = O(N²) and hence needs
O(N) passes over the data, where N is the number of data points. Tensor methods [1, 10] can also
be extended to cluster streaming data points sampled from a mixture distribution, but these methods
suffer from large sample/time complexity and might not provide reasonable results when the data
distribution deviates from the assumed generative model.
In addition to the gaussian mixture model, clustering problems are also studied under other models
such as data with small spectral variance [12], stability of data [4], etc. It would be interesting to
study the streaming versions in such models as well.
Paper Outline. We describe our models and problem setup in Section 2. We then present our
streaming k-means algorithm and its proof overview in Sections 3 and 4. We then discuss the
initialization procedure in Section 5. Finally we describe our streaming-EM algorithm in Section 6.
2 Setup and Notation
We assume that the data is drawn from a mixture of k spherical Gaussian distributions, i.e.,

x^t ~_{i.i.d.} Σ_i w_i N(μ*_i, σ² I),   μ*_i ∈ R^d, ∀ i = 1, 2, ..., k,    (1)

where μ*_i ∈ R^d is the mean of the i-th mixture component, and the mixture weights satisfy w_i ≥ 0
and Σ_i w_i = 1. All the problem parameters (i.e., the true means, the variance σ², and the mixture
weights) are unknown to the algorithm. Using the standard streaming setup, where the t-th sample
x^t ∈ R^d is drawn from the data distribution, our goal is to produce an estimate μ̂_i of μ*_i for
i = 1, 2, ..., k in a single pass over the data using bounded space.
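For concreteness, the generative model (1) with uniform weights can be simulated as follows; this is a sketch for experimentation, not part of the algorithm, and all names are ours:

```python
import numpy as np

def sample_mixture(n, means, sigma, weights=None, seed=0):
    """Sketch of model (1): each point picks component i with probability w_i,
    then adds spherical Gaussian noise N(0, sigma^2 I).
    means: (k, d) array of the unknown true centers mu*_i."""
    rng = np.random.default_rng(seed)
    k, d = means.shape
    if weights is None:
        weights = np.full(k, 1.0 / k)            # uniform mixture, w_i = 1/k
    comps = rng.choice(k, size=n, p=weights)     # latent component labels
    return means[comps] + sigma * rng.standard_normal((n, d)), comps
```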
Center Separation. A suitable notion of signal-to-noise ratio for our problem turns out to be the ratio
of the minimum separation between the true centers and the maximum variance along any direction. We
denote this ratio by C = min_{i,j} ||μ*_i − μ*_j|| / σ. For convenience, we also denote ||μ*_i − μ*_j|| / σ by C_ij. Here
and in the rest of the paper, ||y|| is the Euclidean norm of a vector y. We use η to denote the learning
rate of the streaming updates and μ_i^t to denote the estimate of μ*_i at time t.
Remarks. For a cleaner presentation, we assume that all the mixture weights are 1/k, but our results
hold with general weights as long as an appropriate center separation condition is satisfied. Secondly,
our proofs also go through when the Gaussians have different variances σ_i², as long as the separation
conditions are satisfied with σ = max_i σ_i. We furnish details in the full version of this paper [14].
3 Algorithm and Main Result
In this section, we describe our proposed streaming clustering algorithm and present our analysis of
the algorithm. At a high level, we follow the approach of various recent results for (offline) mixture
recovery algorithms [18, 12]. That is, we initialize the algorithm with an SVD-style operation which
significantly de-noises the data (Algorithm 1), and then apply our streaming version of Lloyd's
heuristic (Algorithm 2). Note that Lloyd's algorithm is agnostic to the underlying distribution
and does not include distribution-specific terms like the variance.
Intuitively, the initialization algorithm first computes an online batch PCA in the for-loop. After this
step, we perform an offline distance-based clustering on the projected subspace (akin to Vempala-Wang [18] for the offline setting). Note that since we only need estimates for centers within a suitable
proximity of the true centers, this step only uses a few (roughly k log k) samples. These centers are
fed as the initial centers to the streaming update algorithm. The streaming algorithm then, for each
new sample, updates the current center which is closest to the sample, and iterates.
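The PCA phase can be run with O(dk) working memory by never materializing the empirical second-moment matrix S. The numpy sketch below is our illustration of that phase (the full procedure, including the nearest-neighbor clustering of the projected seed sample, appears in Algorithm 1 below); accumulating S·U directly instead of S is a reformulation we adopt for memory efficiency:

```python
import numpy as np

def online_pca(stream, d, k, block_size, seed=0):
    """Sketch of the PCA phase of InitAlg: block power iteration with
    periodic QR re-orthonormalization of the d x k basis U."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    SU = np.zeros((d, k))
    for t, x in enumerate(stream, start=1):
        SU += np.outer(x, x @ U)        # rank-one update: x (x^T U) = (x x^T) U
        if t % block_size == 0:         # every B = Omega(d log d) samples
            U, _ = np.linalg.qr(SU)     # power-iteration step + orthonormalize
            SU = np.zeros((d, k))
    return U                            # approximate top-k subspace of the data
```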
Figure 1: Illustration of optimal K-means error

Algorithm 1 InitAlg(N0)
 1: U ← random orthonormal matrix in R^{d×k}
 2: B = Ω(d log d), S = 0
 3: for t = 1 to N0 − k log k do
 4:   if mod(t, B) = 0 then
 5:     U ← QR(S · U), S ← 0
 6:   end if
 7:   Receive x^t as generated by the input stream
 8:   S = S + x^t (x^t)^T
 9: end for
10: X_0 = [x^{N0 − k log k + 1}, ..., x^{N0}]
11: Form a nearest-neighbor graph using U^T X_0 and find its connected components
12: [ν_1^0, ..., ν_k^0] ← mean of the points in each component
13: Return: [μ_1^0, ..., μ_k^0] = [U ν_1^0, ..., U ν_k^0]

Algorithm 2 StreamKmeans(N, N0)
 1: Set η ← (3k log 3N) / N
 2: Set {μ_1^0, ..., μ_k^0} ← InitAlg(N0)
 3: for t = 1 to N do
 4:   Receive x^{t+N0} given by the input stream; let x = x^{t+N0}
 5:   Let i_t = arg min_i ||x − μ_i^{t−1}||
 6:   Set μ_{i_t}^t = (1 − η) μ_{i_t}^{t−1} + η x
 7:   Set μ_i^t = μ_i^{t−1} for i ≠ i_t
 8: end for
 9: Output: μ_1^N, ..., μ_k^N
We now present our main result for the streaming clustering problem.
Theorem 1. Let x^t, 1 ≤ t ≤ N + N0, be generated using a mixture of Gaussians (1) with w_i = 1/k
for all i. Let N0, N ≥ O(1)·k³d³ log d and C ≥ Ω((k log k)^{1/4}). Then, the mean estimates (μ_1^N, ..., μ_k^N)
output by Algorithm 2 satisfy the following error bound:

E[ Σ_i ||μ_i^N − μ*_i||² ] ≤ max_i ||μ*_i||² / N^{Ω(1)}   [bias]
    + O(k³) · σ² d log N / N   [variance]
    + O(k³) · exp(−C²/8)(C² + k)σ²   [offline k-means error].
Our error bound consists of three key terms: bias, variance, and offline k-means error, with bias and
variance being standard statistical error terms: (i) the bias depends on the initial estimation error
and goes down at a 1/N^α rate, where α > 1 is a large constant; (ii) the variance term is the error due to
noise in each observation x^t and goes down at the nearly optimal rate of ≈ σ²d/N, albeit with an extra
log N term as well as a worse dependence on k; and (iii) an offline k-means error, which is the error
that even the offline Lloyd's algorithm would incur for a given center separation C. Note that while
sampling from the mixture distribution, an ≈ exp(−C²/8) fraction of the data points can be closer to the
true means of other clusters rather than their own mean, because the tails of the distributions overlap.
Hence, in general it is not possible to assign these points back to the correct cluster without any
modeling assumptions. These misclassified points will shift the estimated centers along the line
joining the means; see Figure 1 for an illustration. This error can however be avoided by performing
soft updates, as discussed in Section 6.
Time, space, and sample complexity: Our algorithm has nearly optimal time complexity of O(d·k)
per iteration; the initialization algorithm requires about O(d⁴k³) time. The space complexity of our
algorithm is O(dk·log k), which is also nearly optimal. Finally, the sample complexity is O(d³k³),
which is a loose upper bound and can be significantly improved by a more careful analysis. To
compare, the best known sample complexity for the offline setting is Õ(kd) [2], which is better by a
factor of (dk)².
4
Analysis Overview. The proof of Theorem 1 essentially follows from the two theorems stated below:
a) update analysis given a good initialization; b) InitAlg analysis for showing such an initialization.
Theorem 2 (Streaming Update). Let x^t, N0 + 1 ≤ t ≤ N + N0, be generated using a mixture of
Gaussians (1) with w_i = 1/k for all i, and let N = Ω(k³d³ log kd). Also, let the center separation
satisfy C ≥ Ω(√(log k)), and suppose our initial centers μ_i^0 are such that ||μ_i^0 − μ*_i|| ≤ Cσ/20 for all
1 ≤ i ≤ k. Then the streaming updates of StreamKmeans(N, N0), i.e., Steps 3-8 of Algorithm 2, satisfy:

E[ Σ_i ||μ_i^N − μ*_i||² ] ≤ max_i ||μ*_i||² / N^{Ω(1)} + O(k³)·( exp(−C²/8)(C² + k)σ² + (log N / N)·dσ² ).
Note that our streaming update analysis requires only C = Ω(√(log k)) separation, but needs an
appropriate initialization, which is guaranteed by the result below.
Theorem 3 (Initialization). Let x^t, 1 ≤ t ≤ N0, be generated using a mixture of Gaussians (1)
with w_i = 1/k for all i. Let μ_1^0, μ_2^0, ..., μ_k^0 be the output of Algorithm 1. If C = Ω((k log k)^{1/4}) and
N0 = Ω(d³k³ log dk), then w.p. ≥ 1 − 1/poly(k), we have max_i ||μ_i^0 − μ*_i|| ≤ (C/20)·σ.
4 Streaming Update Analysis
At a high level our analysis shows that at each step of the streaming updates, the error decreases on
average. However, due to the non-convexity of the objective function we can show such a decrease
only if the current estimates of our centers lie in a small ball around the true centers of the Gaussians.
Indeed, while the initialization provides us with such centers, due to the added noise in each step,
our iterates may occasionally fall outside these balls, and we need to bound the probability that this
happens. To overcome this, we start with initial centers that are within slightly smaller balls around
the true means, and use a careful Martingale argument to show that even if the iterates go a little
farther from the true centers (due to noise), with high probability, the iterates are still within the
slightly larger ball that we require to show decrease in error.
We therefore divide our proof into two parts: a) first, we show in Section 4.1 that the error decreases in
expectation, assuming that the current estimates lie in a reasonable neighborhood around the true
centers; and b) in Section 4.2 we show, using a martingale analysis, that with high probability, each
iterate satisfies the required neighborhood condition if the initialization is good enough.
We formalize the required condition for our per-iteration error analysis below. For the remainder of
this section, we fix the initialization and only focus on Steps 3-8 of Algorithm 2.
Definition 1. For a fixed initialization, and given a sequence of points ζ^t = (x^{t'+N0+1} : 0 ≤ t' < t),
we say that condition I_t is satisfied at time t if max_i ||μ_i^{t'} − μ*_i|| ≤ Cσ/10 holds for all 0 ≤ t' ≤ t.
Note that given a sequence of points and a fixed initialization, Algorithm 2 is deterministic.
We now define the following quantities, which will be useful in the upcoming analysis. At any
time t ≥ 1, let ζ^t = (x^{t'+N0+1} : 0 ≤ t' < t) denote the sequence of points received by our
algorithm. For all t ≥ 0, let Ẽ_t^i = ||μ_i^t − μ*_i||² denote the random variable measuring the current
error for cluster i, and let Ṽ_t = max_i Ẽ_t^i be the maximum cluster error at time t. Now, let
Ê_{t+1}^i = E_{x^{t+N0+1}}[ ||μ_i^{t+1} − μ*_i||² | ζ^t ] be the expected error of the i-th cluster center after receiving
the (t+1)-th sample, conditioned on ζ^t. Finally, let E_t^i = E[ ||μ_i^t − μ*_i||² | I_t ] be the expected error
conditioned on I_t, and let E_t = Σ_i E_t^i.
4.1 Error Reduction in a Single Iteration
Our main tool toward showing Theorem 2 is the following theorem, which bounds the expected error
after updating the means on arrival of the next sample.

Theorem 4. If I_t holds and C ≥ Ω(√(log k)), then for all i, we have

Ê_{t+1}^i ≤ (1 − η/2k)·Ẽ_t^i + (η/5k)·Ṽ_t + O(1)·η²dσ² + O(k)·η(1−η)·exp(−C²/8)(C² + k)σ².
Proof sketch of Theorem 4. In all calculations in this proof, we first assume that the candidate centers
satisfy I_t, and all expectations and probabilities are only over the new sample x^{t+N0+1}, which we
denote by x after omitting the superscript. Now recall our update rule: μ_{i_t}^{t+1} = (1−η)μ_{i_t}^t + ηx if μ_{i_t}^t
is the closest center to the new sample x; the other centers are unchanged. To simplify notation, let:

g_i^t(x) = 1 iff i = arg min_j ||x − μ_j^t||, and g_i^t(x) = 0 otherwise.    (2)

By definition, we have for all i,

μ_i^{t+1} = (1−η)μ_i^t + η( g_i^t(x)·x + (1 − g_i^t(x))·μ_i^t ) = μ_i^t + η g_i^t(x)(x − μ_i^t).
Our proof relies on the following simple yet crucial lemmas. The first bounds the failure probability
of a sample being closest to an incorrect cluster center among our candidates. The second shows
that if the candidate centers are sufficiently close to the true centers, then the failure probability of
mis-classifying a point to a wrong center is (up to constant factors) the probability of mis-classification
even in the optimal solution (with true centers). Finally, the third lemma shows that the probability of
g_i^t(x) = 1 for each i is lower-bounded. Complete details and proofs appear in [14].

Lemma 1. Suppose condition I_t holds. For any i and j ≠ i, let x ~ Cl(j) denote a random point from
cluster j. Then Pr[ ||x − μ_i^t|| ≤ ||x − μ_j^t|| ] ≤ exp(−Ω(C_ij²)).

Lemma 2. Suppose max(||μ_i^t − μ*_i||, ||μ_j^t − μ*_j||) ≤ σ/C_ij. For any i and j ≠ i, let x ~ Cl(j) denote
a random point from cluster j. Then Pr[ ||x − μ_i^t|| ≤ ||x − μ_j^t|| ] ≤ O(1)·exp(−C_ij²/8).

Lemma 3. If I_t holds and C = Ω(√(log k)), then for all i, Pr[ g_i^t(x) = 1 ] ≥ 1/2k.
And so, equipped with the above notation and lemmas, we have
\[
\begin{aligned}
\hat{E}^i_{t+1} &= \mathbb{E}_x\,\|\mu^{t+1}_i - \mu^*_i\|^2 \\
&= (1-\eta)^2 \|\mu^t_i - \mu^*_i\|^2 + \eta^2\, \mathbb{E}\,\big\|g^t_i(x)(x - \mu^*_i) + (1 - g^t_i(x))(\mu^t_i - \mu^*_i)\big\|^2 \\
&\qquad + 2\eta(1-\eta)\, \mathbb{E}\,\big\langle \mu^t_i - \mu^*_i,\; g^t_i(x)(x - \mu^*_i) + (1 - g^t_i(x))(\mu^t_i - \mu^*_i) \big\rangle \\
&\le \Big(1 - \frac{\eta}{2k}\Big)\tilde{E}^i_t + \eta^2 \underbrace{\mathbb{E}\,\big\|g^t_i(x)(x - \mu^*_i)\big\|^2}_{T_1} + 2\eta(1-\eta) \underbrace{\mathbb{E}\,\big\langle \mu^t_i - \mu^*_i,\; g^t_i(x)(x - \mu^*_i) \big\rangle}_{T_2}.
\end{aligned}
\]
The last inequality holds because of the following line of reasoning: (i) firstly, the cross term in the second squared norm evaluates to 0 due to the product $g^t_i(x)(1 - g^t_i(x))$; (ii) $\eta^2\,\mathbb{E}\big[(1 - g^t_i(x))\|\mu^t_i - \mu^*_i\|^2\big] \le \eta^2 \tilde{E}^i_t$; (iii) $2\eta(1-\eta)\,\mathbb{E}\big[\langle \mu^t_i - \mu^*_i,\,(1 - g^t_i(x))(\mu^t_i - \mu^*_i)\rangle\big] = 2\eta(1-\eta)\,\tilde{E}^i_t\,\Pr[g^t_i(x) = 0] \le 2\eta(1-\eta)\,\tilde{E}^i_t\,(1 - 1/2k)$ by Lemma 3; and finally (iv) by collecting terms with coefficient $\tilde{E}^i_t$.
The proof then roughly proceeds as follows: suppose in an ideal case, $g^t_i(x)$ is 1 for all points $x$ generated from cluster $i$, and 0 otherwise. Then, if $x$ is a random sample from cluster $i$, $T_1$ would be $d\sigma^2$, and $T_2$ would be 0. Of course, the difficulty is that $g^t_i(x)$ is not always as well-behaved, and so the bulk of the analysis lies in carefully using Lemmas 1 and 2, and appropriately "charging" the various error terms we get to the current error $\tilde{E}^i_t$, the variance, and the offline approximation error.
4.2 Ensuring Proximity Condition Via Super-Martingales
In the previous section, we saw that condition $I_t = 1$ is sufficient to ensure that the expected one-step error reduces at time step $t+1$. Our next result shows that $I_N = 1$ is satisfied with high probability.

Theorem 5. Suppose $\max_i \|\mu^0_i - \mu^*_i\| \le \frac{C\sigma}{20}$. Then $I_N = 1$ w.p. $\ge 1 - \frac{1}{\mathrm{poly}(N)}$.
Our argument proceeds as follows. Suppose we track the behaviour of the actual error terms $\tilde{E}^i_t$ over time, and stop the process (call it a failure) when any of these error terms exceeds $C^2\sigma^2/100$ (recall that they are all initially smaller than $C^2\sigma^2/400$). Assuming that the process has not stopped, we show that each of these error terms has a super-martingale behaviour using Theorem 4, which says that on average, the expected one-step error drops. Moreover, we also show that the actual one-step difference, while not bounded, has a sub-gaussian tail. Our theorem now follows by using an Azuma-Hoeffding type inequality for super-martingale sequences.
4.3 Wrapping Up
Now, using Theorems 4 and 5, we can get the following theorem.

Theorem 6. Let $\Lambda = O(k)\,\eta^2 d\sigma^2 + O(k^2)\,\eta(1-\eta)\exp(-C^2/8)\,(C^2+k)\,\sigma^2$. Then if $C \ge \Omega(\sqrt{\log k})$, for all $t$ we have $E_{t+1} \le \big(1 - \frac{\eta}{4k}\big)E_t + \Lambda$. It follows that $E_N \le \big(1 - \frac{\eta}{4k}\big)^N E_0 + \frac{4k}{\eta}\,\Lambda$.
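For completeness, the second statement follows from the first by a standard geometric-series unrolling (a sanity check of ours, not an additional assumption):
\[
E_N \le \Big(1-\frac{\eta}{4k}\Big)^N E_0 + \Lambda \sum_{t=0}^{N-1}\Big(1-\frac{\eta}{4k}\Big)^t \le \Big(1-\frac{\eta}{4k}\Big)^N E_0 + \frac{4k}{\eta}\,\Lambda ,
\]
using $\sum_{t \ge 0}(1-q)^t = 1/q$ with $q = \eta/4k$.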
Proof. Let $\bar{E}^i_{t+1} = \mathbb{E}\big[\|\mu^{t+1}_i - \mu^*_i\|^2 \mid I_t\big]$ be the average over all sample paths of $\tilde{E}^i_{t+1}$ conditioned on $I_t$. Recall that $E^i_{t+1}$ is very similar, except the conditioning is on $I_{t+1}$. With this notation, let us take expectation over all sample paths where $I_t$ is satisfied, and use Theorem 4 to get
\[
\bar{E}^i_{t+1} \le \Big(1 - \frac{\eta}{2k}\Big)E^i_t + \frac{\eta}{5k}\,E_t + O(1)\,\eta^2 d\sigma^2 + O(k)\,\eta(1-\eta)\exp(-C^2/8)\,(C^2+k)\,\sigma^2 .
\]
And so, summing over all $i$ we get
\[
\bar{E}_{t+1} \le \Big(1 - \frac{\eta}{3k}\Big)E_t + O(k)\,\eta^2 d\sigma^2 + O(k^2)\,\eta(1-\eta)\exp(-C^2/8)\,(C^2+k)\,\sigma^2 .
\]
Finally, note that $E_{t+1}$ and $\bar{E}_{t+1}$ are related as $E_{t+1}\Pr[I_{t+1}] \le \bar{E}_{t+1}\Pr[I_t]$, and so $E_{t+1} \le \bar{E}_{t+1}\big(1 + \frac{1}{N^2}\big)$ since $\Pr[I_{t+1}] \ge 1 - 1/N^5$ by Theorem 5.
Proof of Theorem 2. From Theorem 5 we know that the probability of $I_N$ being satisfied is $1 - 1/N^5$, and in this case we can use Theorem 6 to get the desired error bound. In case $I_N$ fails, the maximum possible error is roughly $\max_{i,j}\|\mu^*_i - \mu^*_j\|^2 \cdot N$ (when all our samples are sent to the same cluster), which contributes a negligible amount to the bias term.
5 Initialization for streaming k-means

In Section 4 we saw that our proposed streaming algorithm can lead to a good solution for any separation $C\sigma \ge \Omega(\sqrt{\log k})\,\sigma$ if we can initialize all centers such that $\|\mu^0_i - \mu^*_i\| \le \frac{C\sigma}{20}$. We now show that InitAlg (Algorithm 1) is one such procedure. We first approximately compute the top-$k$ eigenvectors $U$ of the data covariance using a streaming PCA algorithm [9, 13] on $O(k^3 d^3 \log d)$ samples. We next store $k \log k$ points and project them onto the subspace spanned by $U$. We then perform a simple distance-based clustering [18] that correctly clusters the stored points (assuming reasonable center separation), and finally we output these cluster centers.
Proof of Theorem 3. Using an argument similar to [9] (Theorem 3), we get that the $U$ obtained by the online PCA algorithm (Steps 1:4 of Algorithm 1) satisfies (w.p. $\ge 1 - 1/\mathrm{poly}(d)$):
\[
\|U U^T \mu^*_i - \mu^*_i\|^2 \le .01\,\sigma^2, \qquad \forall\, 1 \le i \le k. \tag{3}
\]
Now, let $\hat{\mu}^*_i = U^T \mu^*_i$. For any $x$ sampled from the mixture distribution (1), $U^T x \sim \sum_i w_i\, N(\hat{\mu}^*_i, \sigma^2 I)$. Hence, if $U^T x^t$, $U^T x^{t'}$ both belong to cluster $i$, then (w.p. $\ge 1 - 1/k^{\gamma}$):
\[
\|U^T x^t - U^T x^{t'}\|^2 = \|U^T(z^t - z^{t'})\|_2^2 \le \big(k + 8\gamma\sqrt{k \log k}\big)\,\sigma^2, \tag{4}
\]
where $x^t = \mu^*_i + z^t$ and $x^{t'} = \mu^*_i + z^{t'}$. The last inequality above follows by using a standard $\chi^2$ random variable tail bound. Similarly, if $U^T x^t$, $U^T x^{t'}$ belong to clusters $i$ and $j$, i.e., $x^t = \mu^*_i + z^t$ and $x^{t'} = \mu^*_j + z^{t'}$, then (w.p. $\ge 1 - 1/k^{\gamma}$):
\[
\|U^T x^t - U^T x^{t'}\|^2 = \|\hat{\mu}^*_i - \hat{\mu}^*_j\|^2 + \|U^T(z^t - z^{t'})\|_2^2 + 2(\hat{\mu}^*_i - \hat{\mu}^*_j)^T U^T (z^t - z^{t'}) \ge \big(C^2 - .2C + 8\gamma\sqrt{k \log k} - 16\gamma C \sqrt{\log k}\big)\,\sigma^2, \tag{5}
\]
where the above equation follows by using (3), setting $\gamma = C/32$ and using $C = \Omega((k \log k)^{1/4})$.
Using (4), (5), w.h.p. all the points from the same cluster are closer to each other than points from other clusters. Hence, connected components of the nearest neighbor graph recover the clusters accurately.
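To illustrate this clustering step, here is a hedged C++ sketch that links stored (projected) points at distance at most tau and reads off connected components with union-find; the threshold and data layout are assumptions of this sketch, not the paper's exact procedure.

    #include <vector>
    #include <numeric>
    #include <cstddef>

    struct UnionFind {
        std::vector<std::size_t> parent;
        explicit UnionFind(std::size_t n) : parent(n) {
            std::iota(parent.begin(), parent.end(), 0);
        }
        std::size_t find(std::size_t a) {
            while (parent[a] != a) { parent[a] = parent[parent[a]]; a = parent[a]; }
            return a;
        }
        void unite(std::size_t a, std::size_t b) { parent[find(a)] = find(b); }
    };

    // pts[t] is a projected point U^T x^t; tau is the linking threshold.
    std::vector<std::size_t> cluster_by_distance(
            const std::vector<std::vector<double> >& pts, double tau) {
        UnionFind uf(pts.size());
        for (std::size_t a = 0; a < pts.size(); ++a)
            for (std::size_t b = a + 1; b < pts.size(); ++b) {
                double d = 0.0;
                for (std::size_t j = 0; j < pts[a].size(); ++j) {
                    double diff = pts[a][j] - pts[b][j];
                    d += diff * diff;
                }
                if (d <= tau * tau) uf.unite(a, b); // edge of the neighbor graph
            }
        std::vector<std::size_t> label(pts.size());
        for (std::size_t a = 0; a < pts.size(); ++a) label[a] = uf.find(a);
        return label;
    }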
Now, we estimate $\hat{\mu}_i = \frac{1}{|\mathrm{Cluster}(i)|} \sum_{t \in \mathrm{Cluster}(i)} U^T x^t$ for each $i$. Since our clustering is completely accurate, we have w.p. $\ge 1 - 2m^2/k^{C/32}$:
\[
\|\hat{\mu}_i - \hat{\mu}^*_i\|^2 \le \frac{\sigma^2 \sqrt{\log k}}{|\mathrm{Cluster}(i)|}. \tag{6}
\]
As $w_i = 1/k$ for all $i$, $|\mathrm{Cluster}(i)| \ge \frac{m}{k} - C\sqrt{\frac{m}{k}}$ w.p. $\ge 1 - 1/k^{C/32}$. The theorem now follows by setting $m = O(k \log k)$ and by using (3), (6) along with $C = \Omega((k \log k)^{1/4})$.
Remark 1. We would like to emphasize that our analysis for the convergence of streaming algorithms works even for smaller separations $C = O(\sqrt{\log k})$, as long as we can get a good enough initialization. Hence, a better initialization algorithm with a weaker dependence of $C$ on $k$ would lead to an improvement in the overall algorithm.
6 Soft thresholding EM based algorithm

In this section, we study a streaming version of the Expectation Maximization (EM) algorithm [7], which is also used extensively in practice. While the standard k-means or Lloyd's heuristic is known to be agnostic to the distribution, and the same procedure can solve the mixture problem for a variety of distributions [12], EM algorithms are designed specifically for the input mixture distribution. In this section, we consider a streaming version of the EM algorithm when applied to the problem of a mixture of two spherical Gaussians with known variances. In this case, the EM algorithm reduces to a softer version of Lloyd's algorithm where a point can be partially assigned to the two clusters. Recent results by [6, 3, 19] show convergence of the EM algorithm in the offline setting for this simple setup. In keeping with earlier notation, let $\mu^*_1 = \mu^*$ and $\mu^*_2 = -\mu^*$ and the center separation $C = \frac{2\|\mu^*\|}{\sigma}$. Hence, $x^t \overset{\text{i.i.d.}}{\sim} \frac{1}{2} N(\mu^*, \sigma^2 I) + \frac{1}{2} N(-\mu^*, \sigma^2 I)$.
Algorithm 3 StreamSoftUpdate($N$, $N_0$)
1: Set $\eta = \frac{3 \log N}{N}$.
2: Set $\mu^0 \leftarrow$ InitAlg($N_0$).
3: for $t = 1$ to $N$ do
4:   Receive $x^{t+N_0}$ as generated by the input stream; set $x = x^{t+N_0}$.
5:   Let $w_t = \dfrac{\exp\big(\tfrac{-\|x-\mu^t\|^2}{2\sigma^2}\big)}{\exp\big(\tfrac{-\|x-\mu^t\|^2}{2\sigma^2}\big) + \exp\big(\tfrac{-\|x+\mu^t\|^2}{2\sigma^2}\big)}$.
6:   Set $\mu^{t+1} = (1-\eta)\mu^t + \eta\,[2w_t - 1]\,x$.
7: end for
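A minimal C++ sketch of one iteration of the loop above, assuming the variance sigma^2 is known (as in this section's setting); computing w_t through a single exponential for numerical stability is our choice.

    #include <vector>
    #include <cmath>
    #include <cstddef>

    // One streaming soft-EM step for a balanced mixture at +mu and -mu.
    void stream_soft_step(std::vector<double>& mu, const std::vector<double>& x,
                          double eta, double sigma) {
        double dplus = 0.0, dminus = 0.0; // ||x - mu||^2 and ||x + mu||^2
        for (std::size_t j = 0; j < x.size(); ++j) {
            dplus  += (x[j] - mu[j]) * (x[j] - mu[j]);
            dminus += (x[j] + mu[j]) * (x[j] + mu[j]);
        }
        // E step: posterior weight of the +mu component.
        double w = 1.0 / (1.0 + std::exp((dplus - dminus) / (2.0 * sigma * sigma)));
        // M step: mu^{t+1} = (1 - eta) mu^t + eta (2 w_t - 1) x.
        for (std::size_t j = 0; j < x.size(); ++j)
            mu[j] = (1.0 - eta) * mu[j] + eta * (2.0 * w - 1.0) * x[j];
    }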
In our algorithm, $w_t(x)$ is an estimate of the probability that $x$ belongs to the cluster with center $\mu^t$, given that it is drawn from a balanced mixture of Gaussians at $\mu^t$ and $-\mu^t$. Calculating $w_t(x)$ corresponds to the E step and updating the estimate of the centers corresponds to the M step of the EM algorithm. Similar to the streaming Lloyd's algorithm presented in Section 3, our analysis of streaming soft updates can be separated into the streaming update analysis and the analysis of InitAlg (which is already presented in Section 5). We now provide our main theorem; the proof is presented in Appendix C.
Theorem 7 (Streaming Update). Let $x^t$, $1 \le t \le N + N_0$, be generated using a mixture of two balanced spherical Gaussians with variance $\sigma^2$. Also, let the center-separation $C \ge 4$, and also suppose our initial estimate $\mu^0$ is such that $\|\mu^0 - \mu^*\| \le \frac{C\sigma}{20}$. Then, the streaming update of StreamSoftUpdate($N$, $N_0$), i.e., Steps 3-8 of Algorithm 3, satisfies:
\[
\mathbb{E}\,\|\mu^N - \mu^*\|^2 \le \underbrace{O(1)\,\frac{\|\mu^*\|^2}{N}}_{\text{bias}} + \underbrace{O(1)\,\frac{\log N}{N}\, d\sigma^2}_{\text{variance}} .
\]
Remark 2. Our bias and variance terms are similar to the ones in Theorem 1, but the above bound does not have the additional approximation error term. Hence, in this case we can estimate $\mu^*$ consistently, but the algorithm applies only to a mixture of Gaussians, while our algorithm and result in Section 3 can potentially be applied to arbitrary sub-Gaussian distributions.

Remark 3. We note that for our streaming soft update algorithm, it is not critical to know the variance $\sigma^2$ beforehand. One could get a good estimate of $\sigma$ by taking the mean of a random projection of a small number of points. We provide the details in the full version of this paper [14].
7 Conclusions

In this paper, we studied the problem of clustering with streaming data where each data point is sampled from a mixture of spherical Gaussians. For this problem, we study two algorithms that use appropriate initialization: a) a streaming version of Lloyd's method, and b) a streaming EM method. For both methods we show that we can accurately initialize the cluster centers using an online PCA based method. We then show that, assuming $\Omega((k \log k)^{1/4}\,\sigma)$ separation between the cluster centers, the updates by both methods lead to a decrease in both the bias as well as the variance error terms. For Lloyd's method there is an additional estimation error term, which even the offline algorithm incurs, and which is avoided by the EM method. However, the streaming Lloyd's method is agnostic to the data distribution and can in fact be applied to any mixture of sub-Gaussians problem. For future work, it would be interesting to study the streaming data clustering problem under deterministic assumptions like [12, 16]. Also, it is an important question to understand the optimal separation assumptions needed for even the offline Gaussian mixture clustering problem.
References
[1] Anima Anandkumar, Rong Ge, Daniel J. Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models (A survey for ALT). In Proceedings of ALT, pages 19–38, 2015.
[2] Hassan Ashtiani, Shai Ben-David, and Abbas Mehrabian. Sample-efficient learning of mixtures. arXiv preprint arXiv:1706.01596, 2017.
[3] Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. Annals of Statistics, 45(1):77–120, 2014.
[4] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Clustering under approximation stability. J. ACM, 60(2):8:1–8:34, 2013.
[5] Anirban Dasgupta, John Hopcroft, Ravi Kannan, and Pradipta Mitra. Spectral clustering with limited independence. In Proceedings of SODA, pages 1036–1045, 2007.
[6] Constantinos Daskalakis, Christos Tzamos, and Manolis Zampetakis. Ten steps of EM suffice for mixtures of two Gaussians. arXiv preprint arXiv:1609.00368, 2016.
[7] Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, pages 1–38, 1977.
[8] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley and Sons, 2000.
[9] Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications. In Proceedings of NIPS, pages 2861–2869, 2014.
[10] Daniel J. Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Proceedings of ITCS '13, pages 11–20, 2013.
[11] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, and Aaron Sidford. Streaming PCA: matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. In Proceedings of COLT, pages 1147–1164, 2016.
[12] Amit Kumar and Ravindran Kannan. Clustering with spectral norm and the k-means algorithm. In Proceedings of FOCS, pages 299–308, 2010.
[13] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Proceedings of 27th NIPS, pages 2886–2894, 2013.
[14] Aditi Raghunathan, Ravishankar Krishnaswamy, and Prateek Jain. Learning mixture of Gaussians with streaming data. CoRR, abs/1707.02391, 2017.
[15] Ohad Shamir. A variant of Azuma's inequality for martingales with subgaussian tails. arXiv preprint arXiv:1110.2392, 2011.
[16] Cheng Tang and Claire Monteleoni. On Lloyd's algorithm: New theoretical insights for clustering in practice. In Proceedings of AISTATS, pages 1280–1289, 2016.
[17] Cheng Tang and Claire Monteleoni. Convergence rate of stochastic k-means. Proceedings of AISTATS, 2017.
[18] Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. J. Comput. Syst. Sci., 68(4):841–860, 2004.
[19] Ji Xu, Daniel J. Hsu, and Arian Maleki. Global analysis of expectation maximization for mixtures of two Gaussians. In Advances in Neural Information Processing Systems, pages 2676–2684, 2016.
| 7238 |@word version:17 norm:3 duda:1 nd:1 git:16 simplifying:1 covariance:1 decomposition:2 incurs:2 moment:1 reduction:1 celebrated:1 initial:10 daniel:3 ours:1 current:5 com:2 yet:1 john:2 distant:1 seeding:1 remove:1 designed:2 update:26 n0:25 drop:1 generative:2 kyk:1 ith:1 farther:1 iterates:6 provides:1 firstly:1 along:3 incorrect:1 consists:1 focs:1 combine:1 x0:2 ravindran:1 hardness:1 expected:5 indeed:3 roughly:4 chi:1 spherical:8 decreasing:1 decomposed:1 manolis:1 little:1 actual:2 equipped:1 becomes:1 provided:1 estimating:2 bounded:4 moreover:3 notation:5 agnostic:5 underlying:1 project:1 prateek:4 suffice:1 substantially:1 guarantee:3 every:1 collecting:1 ti:18 k2:21 wrong:1 grant:1 appear:1 before:1 t1:2 negligible:1 mitra:1 treat:1 ext:1 joining:1 analyzing:1 path:2 approximately:2 might:2 burn:1 initialization:19 studied:6 limited:2 practical:1 practice:4 atistical:1 procedure:3 significantly:3 projection:1 matching:1 pre:1 donald:1 get:10 cannot:2 convenience:1 close:1 onto:1 raghunathan:2 deterministic:3 center:50 go:4 convex:2 survey:1 recovery:1 assigns:2 immediately:1 stats:1 m2:1 rule:1 insight:1 orthonormal:1 spanned:1 hd:2 stability:2 handle:1 notion:1 n12:1 population:1 updated:1 annals:1 shamir:1 suppose:7 heavily:1 homogeneous:1 us:1 designing:1 aditi:2 logarithmically:1 updating:2 observed:1 preprint:3 solved:1 wang:1 worst:1 connected:2 decrease:9 mentioned:3 balanced:2 dempster:1 convexity:2 complexity:9 zampetakis:1 tit:1 algo:1 incur:1 eric:1 completely:1 hopcroft:1 k0:2 various:2 caramanis:1 separated:7 jain:4 describe:3 outside:1 neighborhood:2 heuristic:8 stanford:2 widely:2 solve:3 larger:2 say:2 otherwise:2 itself:2 laird:1 superscript:1 online:8 noisy:1 sequence:4 product:1 remainder:1 loop:1 till:1 iff:1 qr:1 convergence:4 cluster:30 requirement:2 produce:1 telgarsky:1 converges:1 ben:1 help:1 pose:1 nearest:3 received:1 netrapalli:1 direction:1 closely:1 correct:1 stochastic:3 kb:2 exploration:1 softer:1 kgit:2 hassan:1 bin:1 require:2 behaviour:2 assign:1 fix:1 secondly:1 rong:1 hold:8 proximity:2 sufficiently:2 around:7 exp:14 seed:2 visualize:1 matus:1 estimation:3 currently:1 sivaraman:1 saw:2 tool:1 eti:7 gaussian:9 always:1 super:3 rather:1 focus:1 improvement:1 consistently:2 maria:1 likelihood:1 centroid:7 dependent:2 streaming:61 entire:1 typically:1 bt:1 initially:2 misclassified:1 arg:2 among:1 classification:2 colt:1 overall:1 initialize:3 santosh:1 once:1 beach:1 sampling:1 yu:1 nearly:6 constantinos:1 future:1 np:2 t2:2 simplify:1 inherent:1 few:2 escape:1 oja:1 microsoft:4 delicate:1 attempt:2 ab:1 highly:1 mixture:41 tj:3 accurate:2 beforehand:2 closer:3 explosion:1 arthur:1 arian:1 respective:1 ohad:1 iv:1 euclidean:1 divide:1 incomplete:1 desired:1 e0:1 theoretical:2 stopped:1 soft:6 modeling:1 earlier:1 sidford:1 measuring:1 zn:1 maximization:3 deviation:1 optimally:1 stored:1 st:2 stay:1 receiving:1 squared:2 satisfied:6 hoeffding:1 worse:1 style:2 return:1 syst:1 de:1 lloyd:15 ioannis:1 coefficient:1 satisfy:2 explicitly:1 stream:8 analyze:1 observing:1 start:1 recover:1 shai:1 contribution:1 variance:25 ashtiani:1 itcs:1 iterated:1 accurately:4 anima:1 monteleoni:2 definition:2 failure:3 evaluates:1 proof:13 mi:2 recovers:1 rithms:1 sampled:6 stop:1 dataset:1 hardt:1 hsu:3 popular:3 recall:3 knowledge:1 dimensionality:1 formalize:1 carefully:1 back:1 follow:1 improved:1 sketch:1 web:1 ei:5 quality:1 behaved:1 usa:1 omitting:1 k22:2 true:21 maleki:1 hence:10 assigned:3 furnish:1 moritz:1 during:1 d4:1 outline:1 complete:1 
balcan:1 reasoning:1 common:1 tending:1 ji:1 overview:2 stork:1 conditioning:1 tail:4 discussed:1 belong:2 rd:3 consistency:1 pm:1 similarly:1 etc:2 krishnaswamy:2 closest:4 own:1 recent:2 constantine:1 belongs:1 scenario:1 store:2 certain:2 occasionally:1 inequality:4 meta:1 seen:1 analyzes:1 additional:3 minimum:1 impose:1 converge:1 period:1 signal:1 ii:2 multiple:1 full:2 sham:3 reduces:2 exceeds:1 technical:2 match:3 calculation:1 cross:1 long:4 hart:1 ensuring:1 variant:4 basic:1 regression:1 florina:1 essentially:3 expectation:6 arxiv:6 iteration:4 abbas:1 receive:3 addition:3 crucial:1 appropriately:1 extra:1 rest:1 unlike:1 pass:2 tend:1 sent:1 balakrishnan:1 mod:1 call:1 anandkumar:1 subgaussian:1 near:1 ideal:1 intermediate:1 iii:2 enough:2 bernstein:1 iterate:1 variety:1 fit:2 independence:1 praneeth:1 shift:1 t0:3 pca:8 akin:1 suffer:1 remark:4 useful:1 eigenvectors:1 cleaner:1 amount:2 extensively:2 locally:1 ten:1 processed:1 tth:1 estimated:3 per:3 track:1 bulk:1 correctly:1 dasgupta:1 key:2 blum:1 drawn:3 d3:4 ravi:1 ram:1 asymptotically:1 graph:2 fraction:2 soda:1 mehrabian:1 reasonable:3 separation:17 appendix:1 bound:12 guaranteed:1 tzamos:1 nan:1 cheng:2 paramount:1 x2:1 argument:4 min:1 c8:2 kumar:1 performing:1 vempala:1 martin:1 ball:7 kd:2 anirban:1 smaller:3 slightly:3 em:18 son:1 wi:8 kakade:3 making:1 happens:1 outlier:1 intuitively:1 pr:7 equation:1 turn:2 discus:1 loose:1 needed:1 know:2 ge:1 fed:1 prajain:1 end:5 available:1 gaussians:29 operation:1 apply:1 away:1 spectral:6 appropriate:3 upto:1 batch:3 anupam:1 top:1 clustering:17 include:1 ensure:1 calculating:1 exploit:1 amit:1 classical:1 society:1 unchanged:1 upcoming:1 tensor:2 objective:1 added:1 quantity:1 wrapping:1 already:1 question:1 dependence:2 subspace:2 distance:6 separate:1 sci:1 collected:1 toward:1 kannan:2 assuming:6 mini:2 ratio:3 illustration:2 setup:4 cij:4 potentially:1 stated:1 design:2 zt:10 unknown:3 perform:2 upper:1 observation:1 datasets:1 finite:2 jin:1 extended:1 arbitrary:2 david:1 pair:1 required:3 nip:3 able:1 proceeds:2 below:3 pattern:1 azuma:2 challenge:4 max:1 royal:1 memory:1 charging:1 wainwright:1 critical:2 suitable:2 overlap:1 difficulty:1 power:1 deviate:2 asymptotic:1 interesting:2 sufficient:2 rubin:1 thresholding:4 bypass:1 classifying:1 claire:2 course:1 last:2 keeping:1 offline:23 bias:14 weaker:2 understand:1 india:2 neighbor:2 fall:1 taking:1 overcome:1 dimension:1 doesn:1 computes:1 projected:1 avoided:3 far:1 emphasize:1 global:1 summing:1 assumed:1 daskalakis:1 search:1 vet:2 latent:1 learn:1 nature:1 ku:5 ca:1 contributes:1 poly:5 cl:2 inherit:1 aistats:2 main:5 big:2 noise:6 arrival:1 xu:1 en:1 martingale:7 wiley:1 christos:1 sub:4 fails:1 comput:1 lie:3 candidate:3 third:1 tang:2 theorem:26 down:2 xt:27 specific:1 showing:2 maxi:8 dk:4 alt:2 gupta:1 albeit:1 avrim:1 corr:1 mitliagkas:1 conditioned:2 kx:9 partially:1 applies:1 corresponds:2 satisfies:5 relies:1 acm:1 ravishankar:2 goal:3 viewed:1 presentation:1 tioned:1 careful:3 price:1 hard:3 typical:1 infinite:1 except:1 specifically:1 wt:4 lemma:8 called:1 pas:2 svd:1 aaron:1 ex:1 |
6,897 | 7,239 | Practical Hash Functions for Similarity Estimation
and Dimensionality Reduction
Søren Dahlgaard
University of Copenhagen / SupWiz
[email protected]
Mathias Bæk Tejs Knudsen
University of Copenhagen / SupWiz
[email protected]
Mikkel Thorup
University of Copenhagen
[email protected]
Abstract
Hashing is a basic tool for dimensionality reduction employed in several aspects of
machine learning. However, the performance analysis is often carried out under the
abstract assumption that a truly random unit cost hash function is used, without
concern for which concrete hash function is employed. The concrete hash function
may work fine on sufficiently random input. The question is if they can be trusted
in the real world where they may be faced with more structured input.
In this paper we focus on two prominent applications of hashing, namely similarity
estimation with the one permutation hashing (OPH) scheme of Li et al. [NIPS'12] and feature hashing (FH) of Weinberger et al. [ICML'09], both of which have found numerous applications, e.g. in approximate near-neighbour search with LSH and large-scale classification with SVM.
We consider the recent mixed tabulation hash function of Dahlgaard et al. [FOCS'15], which was proved theoretically to perform like a truly random hash function in many applications, including the above OPH. Here we first show improved concentration bounds for FH with truly random hashing and then argue that
mixed tabulation performs similarly when the input vectors are not too dense. Our
main contribution, however, is an experimental comparison of different hashing
schemes when used inside FH, OPH, and LSH.
We find that mixed tabulation hashing is almost as fast as the classic multiply-mod-prime scheme (ax + b) mod p. Multiply-mod-prime is guaranteed to work well on
sufficiently random data, but here we demonstrate that in the above applications, it
can lead to bias and poor concentration on both real-world and synthetic data. We
also compare with the very popular MurmurHash3, which has no proven guarantees.
Mixed tabulation and MurmurHash3 both perform similar to truly random hashing
in our experiments. However, mixed tabulation was 40% faster than MurmurHash3,
and it has the proven guarantee of good performance (like fully random) on all
possible input making it more reliable.
1 Introduction
Hashing is a standard technique for dimensionality reduction and is employed as an underlying tool in
several aspects of machine learning including search [22, 31, 32, 3], classification [24, 22], duplicate
detection [25], computer vision and information retrieval [30]. The need for dimensionality reduction
techniques such as hashing is becoming further important due to the huge growth in data sizes. As
an example, already in 2010, Tong [36] discussed data sets with 10^11 data points and 10^9 features.
Furthermore, when working with text, data points are often stored as w-shingles (i.e. w contiguous
words or bytes) with w ≥ 5. This further increases the dimension from, say, 10^5 common English words to 10^{5w}.
Two particularly prominent applications are set similarity estimation as initialized by the MinHash
algorithm of Broder, et al. [8, 9] and feature hashing (FH) of Weinberger, et al. [37]. Both applications
have in common that they are used as an underlying ingredient in many other applications. While
both MinHash and FH can be seen as hash functions mapping an entire set or vector, they are perhaps
better described as algorithms implemented using what we will call basic hash functions. A basic
hash function h maps a given key to a hash value, and any such basic hash function, h, can be used to
implement MinHash, which maps a set of keys, A, to the smallest hash value $\min_{a \in A} h(a)$. A similar
case can be made for other locality-sensitive hash functions such as SimHash [12], One Permutation
Hashing (OPH) [22, 31, 32], and cross-polytope hashing [2, 33, 20], which are all implemented using
basic hash functions.
1.1 Importance of understanding basic hash functions
In this paper we analyze the basic hash functions needed for the applications of similarity estimation
and FH. This is important for two reasons: 1) As mentioned in [22], dimensionality reduction is
often a time bottle-neck and using a fast basic hash function to implement it may improve running
times significantly, and 2) the theoretical guarantees of hashing schemes such as Minhash and FH
rely crucially on the basic hash functions used to implement it, and this is further propagated into
applications of these schemes such as approximate similarity search with the seminal LSH framework
of Indyk and Motwani [19].
To fully appreciate this, consider LSH for approximate similarity search implemented with MinHash.
We know from [19] that this structure obtains provably sub-linear query time and provably subquadratic space, where the exponent depends on the probability of hash collisions for "similar" and "not-similar" sets.
function leads to constant bias in the estimation [28], and this constant then appears in the exponent
of both the space and the query time of the search structure leading to worse theoretical guarantees.
Choosing the right basic hash function is an often overlooked aspect, and many authors simply state
that any (universal) hash function ?is usually sufficient in practice? (see e.g. [22, page 3]). While
this is indeed the case most of the time (and provably if the input has enough entropy [26]), many
applications rely on taking advantage of highly structured data to perform well (such as classification
or similarity search). In these cases a poorly chosen hash function may lead to very systematic
inconsistensies. Perhaps the most famous example of this is hashing with linear probing which was
deemed very fast but unrealiable in practice until it was fully understood which hash functions to
employ (see [35] for discussion and experiments). Other papers (see e.g. [31, 32] suggest using
very powerful machinery such as the seminal pseudorandom generator of Nisan [27]. However,
such a PRG does not represent a hash function and implementing it as such would incur a huge
computational overhead.
Meanwhile, some papers do indeed consider which concrete hash functions to use. In [15] it was
considered to use 2-independent hashing for bottom-k sketches, which was proved in [34] to work for
this application. However, bottom-k sketches do not work for SVMs and LSH. Closer to our work,
[23] considered the use of 2-independent (and 4-independent) hashing for large-scale classification
and online learning with b-bit minwise hashing. Their experiments indicate that 2-independent
hashing often works, and they state that "the simple and highly efficient 2-independent scheme may be sufficient in practice". However, no amount of experiments can show that this is the case for all
input. In fact, we demonstrate in this paper ? for the underlying FH and OPH ? that this is not the case,
and that we cannot trust 2-independent hashing to work in general. As noted, [23] used hashing for
similarity estimation in classification, but without considering the quality of the underlying similarity
estimation. Due to space restrictions, we do not consider classification in this paper, but instead focus
on the quality of the underlying similarity estimation and dimensionality reduction sketches as well
as considering these sketches in LSH as the sole application (see also the discussion below).
1.2 Our contribution
We analyze the very fast and powerful mixed tabulation scheme of [14] comparing it to some of the
most popular and widely employed hash functions. In [14] it was shown that implementing OPH
with mixed tabulation gives concentration bounds "essentially as good as truly random". For feature
hashing, we first present new concentration bounds for the truly random case improving on [37, 16].
We then argue that mixed tabulation gives essentially as good concentration bounds in the case where
the input vectors are not too dense, which is a very common case for applying feature hashing.
Experimentally, we demonstrate that mixed tabulation is almost as fast as the classic multiply-mod-prime hashing scheme. This classic scheme is guaranteed to work well for the considered applications
when the data is sufficiently random, but we demonstrate that bias and poor concentration can occur
on both synthetic and real-world data. We verify on the same experiments that mixed tabulation
has the desired strong concentration, confirming the theory. We also find that mixed tabulation is
roughly 40% faster than the very popular MurmurHash3 and CityHash. In our experiments these hash
functions perform similar to mixed tabulation in terms of concentration. They do, however, not have
the same theoretical guarantees making them harder to trust. We also consider different basic hash
functions for implementing LSH with OPH. We demonstrate that the bias and poor concentration of
the simpler hash functions for OPH translates into poor concentration for e.g. the recall and number
of retrieved data points of the corresponding LSH search structure. Again, we observe that this is not
the case for mixed tabulation, which systematically out-performs the faster hash functions. We note
that [23] suggests that 2-independent hashing only has problems with dense data sets, but both the
real-world and synthetic data considered in this work are sparse or, in the case of synthetic data, can
be generalized to arbitrarily sparse data. While we do not consider b-bit hashing as in [23], we note
that applying the b-bit trick to our experiments would only introduce a bias from false positives for
all basic hash functions and leave the conclusion the same.
It is important to note that our results do not imply that standard hashing techniques (i.e. multiply-mod
prime) never work. Rather, they show that there does exist practical scenarios where the theoretical
guarantees matter, making mixed tabulation more consistent. We believe that the very fast evaluation
time and consistency of mixed tabulation makes it the best choice for the applications considered in
this paper.
2 Preliminaries
As mentioned we focus on similarity estimation and feature hashing. Here we briefly describe the
methods used. We let $[m] = \{0, \ldots, m-1\}$, for some integer $m$, denote the output range of the
hash functions considered.
2.1 Similarity estimation
In similarity estimation we are given two sets, $A$ and $B$, belonging to some universe $U$ and are tasked with estimating the Jaccard similarity $J(A, B) = |A \cap B|/|A \cup B|$. As mentioned earlier, this can be solved using $k$ independent repetitions of the MinHash algorithm; however, this requires $O(k \cdot |A|)$ running time. In this paper we instead use the faster OPH of Li et al. [22] with the densification scheme of Shrivastava and Li [32]. This scheme works as follows: Let $k$ be a parameter with $k$ being a divisor of $m$, and pick a random hash function $h : U \to [m]$. For each element $x$, split $h(x)$ into two parts $b(x), v(x)$, where $b(x) : U \to [k]$ is given by $h(x) \bmod k$ and $v(x)$ is given by $\lfloor h(x)/k \rfloor$. To create the sketch $S_{OPH}(A)$ of size $k$ we simply let $S_{OPH}(A)[i] = \min_{a \in A,\, b(a) = i} v(a)$. To estimate the similarity of two sets $A$ and $B$ we simply take the fraction of indices, $i$, where $S_{OPH}(A)[i] = S_{OPH}(B)[i]$.
This is, however, not an unbiased estimator, as there may be empty bins. Thus, [31, 32] worked on handling empty bins. They showed that the following addition gives an unbiased estimator with good variance. For each index $i \in [k]$ let $b_i$ be a random bit. Now, for a given sketch $S_{OPH}(A)$, if the $i$th bin is empty we copy the value of the closest non-empty bin going left (circularly) if $b_i = 0$ and going right if $b_i = 1$. We also add $j \cdot C$ to this copied value, where $j$ is the distance to the copied bin and $C$ is some sufficiently large offset parameter. The entire construction is illustrated in Figure 1, and a code sketch follows below.
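The following C++ sketch mirrors this description; the hash function is abstracted as a callable, and the EMPTY sentinel is an implementation choice of this sketch rather than of [31, 32].

    #include <vector>
    #include <cstdint>
    #include <functional>

    static const uint32_t EMPTY = UINT32_MAX;

    // Build the k-bin OPH sketch of A; h maps keys to [m], with k dividing m.
    std::vector<uint32_t> oph_sketch(const std::vector<uint32_t>& A,
                                     const std::function<uint32_t(uint32_t)>& h,
                                     uint32_t k) {
        std::vector<uint32_t> S(k, EMPTY);
        for (std::size_t idx = 0; idx < A.size(); ++idx) {
            uint32_t hv = h(A[idx]), bin = hv % k, val = hv / k;
            if (S[bin] == EMPTY || val < S[bin]) S[bin] = val;
        }
        return S;
    }

    // Densify: fill each empty bin from the closest non-empty bin, walking left
    // (dir[i] == 0) or right (dir[i] == 1) circularly, adding j*C for a walk of
    // length j. dir holds one random bit per bin; C is a large offset parameter.
    void densify(std::vector<uint32_t>& S, const std::vector<int>& dir, uint32_t C) {
        uint32_t k = (uint32_t)S.size();
        std::vector<uint32_t> out = S;
        for (uint32_t i = 0; i < k; ++i) {
            if (S[i] != EMPTY) continue;
            uint32_t j = 0, b = i;
            while (S[b] == EMPTY) { // assumes at least one non-empty bin
                b = dir[i] ? (b + 1) % k : (b + k - 1) % k;
                ++j;
            }
            out[i] = S[b] + j * C;
        }
        S = out;
    }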
[Figure 1 omitted: the left panel tabulates, for each of the 20 possible hash values, its bin and value, together with the indicator vector h(A) and the resulting sketch S_OPH(A); the right panel shows the densified sketch with directions and offsets.]
Figure 1: Left: Example of one permutation sketch creation of a set A with |U| = 20 and k = 5. For each of the 20 possible hash values the corresponding bin and value is displayed. The hash values of A, h(A), are displayed as an indicator vector with the minimal value per bin marked in red. Note that the 3rd bin is empty. Right: Example of the densification from [32].
2.2 Feature hashing
Feature hashing (FH), introduced by Weinberger et al. [37], takes a vector $v$ of dimension $d$ and produces a vector $v'$ of dimension $d' \ll d$ preserving (roughly) the norm of $v$. More precisely, let $h : [d] \to [d']$ and $\mathrm{sgn} : [d] \to \{-1, +1\}$ be random hash functions; then $v'$ is defined as $v'_i = \sum_{j : h(j) = i} \mathrm{sgn}(j)\, v_j$. Weinberger et al. [37] (see also [16]) showed exponential tail bounds on $\|v'\|_2^2$ when $\|v\|_\infty$ is sufficiently small and $d'$ is sufficiently large.
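A minimal C++ sketch of this map, with h and sgn passed in as callables (an interface assumption of this sketch, not the paper's code):

    #include <vector>
    #include <functional>
    #include <cstddef>

    // Feature hashing: v (dimension d) -> v' (dimension d_out), where
    // v'_i = sum over j with h(j) = i of sgn(j) * v[j].
    std::vector<double> feature_hash(const std::vector<double>& v,
                                     std::size_t d_out,
                                     const std::function<std::size_t(std::size_t)>& h,
                                     const std::function<int(std::size_t)>& sgn) {
        std::vector<double> out(d_out, 0.0);
        for (std::size_t j = 0; j < v.size(); ++j)
            if (v[j] != 0.0) // exploit sparsity: only non-zeros contribute
                out[h(j) % d_out] += sgn(j) * v[j];
        return out;
    }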
2.3 Locality-sensitive hashing
The LSH framework of [19] is a solution to the approximate near neighbour search problem: Given a giant collection of sets $C = A_1, \ldots, A_n$, store a data structure such that, given a query set $A_q$, we can, loosely speaking, efficiently find an $A_i$ with large $J(A_i, A_q)$. Clearly, given the potentially massive size of $C$ it is infeasible to perform a linear scan.
With LSH parameterized by positive integers $K, L$ we create a size-$K$ sketch $S_{OPH}(A_i)$ (or using another method) for each $A_i \in C$. We then store the set $A_i$ in a large table indexed by this sketch, $T[S_{OPH}(A_i)]$. For a given query $A_q$ we then go over all sets stored in $T[S_{OPH}(A_q)]$, returning only those that are "sufficiently similar". By picking $K$ large enough we ensure that very distinct sets (almost) never end up in the same bucket, and by repeating the data structure $L$ independent times (creating $L$ such tables) we ensure that similar sets are likely to be retrieved in at least one of the tables.
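A hedged C++ sketch of the build and query loops; serializing a sketch into a bucket key is an illustrative choice, and sketches[l] denotes the length-K sketch under the l-th independent hash function.

    #include <vector>
    #include <string>
    #include <unordered_map>
    #include <cstdint>
    #include <cstddef>

    typedef std::vector<std::size_t> Bucket; // indices into the collection
    typedef std::unordered_map<std::string, Bucket> Table;

    // Serialize the K sketch values into a string key (illustrative).
    static std::string bucket_key(const std::vector<uint32_t>& sketch) {
        return std::string(reinterpret_cast<const char*>(sketch.data()),
                           sketch.size() * sizeof(uint32_t));
    }

    void lsh_insert(std::vector<Table>& tables, std::size_t id,
                    const std::vector<std::vector<uint32_t> >& sketches) {
        for (std::size_t l = 0; l < tables.size(); ++l)
            tables[l][bucket_key(sketches[l])].push_back(id);
    }

    Bucket lsh_query(const std::vector<Table>& tables,
                     const std::vector<std::vector<uint32_t> >& sketches) {
        Bucket out; // candidates; deduplication and similarity filtering omitted
        for (std::size_t l = 0; l < tables.size(); ++l) {
            Table::const_iterator it = tables[l].find(bucket_key(sketches[l]));
            if (it != tables[l].end())
                out.insert(out.end(), it->second.begin(), it->second.end());
        }
        return out;
    }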
Recently, much work has gone into providing theoretically optimal [5, 4, 13] LSH. However, as noted
in [2], these solutions require very sophisticated locality-sensitive hash functions and are mainly
impractical. We therefore choose to focus on more practical variants relying either on OPH [31, 32]
or FH [12, 2].
2.4 Mixed tabulation
Mixed tabulation was introduced by [14]. For simplicity assume that we are hashing from the universe $[2^w]$ and fix integers $c, d$ such that $c$ is a divisor of $w$. Tabulation-based hashing views each key $x$ as a list of $c$ characters $x_0, \ldots, x_{c-1}$, where $x_i$ consists of the $i$th $w/c$ bits of $x$. We say that the alphabet $\Sigma = [2^{w/c}]$. Mixed tabulation uses $x$ to derive $d$ additional characters from $\Sigma$. To do this we choose $c$ tables $T_{1,i} : \Sigma \to \Sigma^d$ uniformly at random and let $y = \bigoplus_{i \in [c]} T_{1,i}[x_i]$ (here $\oplus$ denotes the XOR operation). The $d$ derived characters are then $y_0, \ldots, y_{d-1}$. To create the final hash value we additionally choose $c + d$ random tables $T_{2,i} : \Sigma \to [m]$ and define
\[
h(x) = \bigoplus_{i \in [c]} T_{2,i}[x_i] \;\oplus\; \bigoplus_{i \in [d]} T_{2,i+c}[y_i] .
\]
Mixed Tabulation is extremely fast in practice due to the word-parallelism of the XOR operation and
the small table sizes which fit in fast cache. It was proved in [14] that implementing OPH with mixed
tabulation gives Chernoff-style concentration bounds when estimating Jaccard similarity.
Another advantage of mixed tabulation is when generating many hash values for the same key. In
this case, we can increase the output size of the tables T2,i , and then whp. over the choice of T1,i the
resulting output bits will be independent. As an example, assume that we want to map each key to
two 32-bit hash values. We then use a mixed tabulation hash function as described above mapping
keys to one 64-bit hash value, and then split this hash value into two 32-bit values, which would be
4
independent of each other with high probability. Doing this with e.g. multiply-mod-prime hashing
would not work, as the output bits are not independent. Thereby we significantly speed up the hashing
time when generating many hash values for the same keys.
A sample implementation with c = d = 4 and 32-bit keys and values can be found below.
uint64_t mt_T1[256][4]; // Filled with random bits
uint32_t mt_T2[256][4]; // Filled with random bits

uint32_t mixedtab(uint32_t x) {
    uint64_t h = 0; // This will be the final hash value
    // XOR table lookups for the c = 4 input characters (one byte each).
    for (int i = 0; i < 4; ++i, x >>= 8)
        h ^= mt_T1[(uint8_t)x][i];
    // The top 32 bits of h carry the d = 4 derived characters.
    uint32_t drv = h >> 32;
    for (int i = 0; i < 4; ++i, drv >>= 8)
        h ^= mt_T2[(uint8_t)drv][i];
    return (uint32_t)h;
}
The main drawback to mixed tabulation hashing is that it needs a relatively large random seed to fill out the tables $T_1$ and $T_2$. However, as noted in [14], for all the applications we consider here it suffices to fill in the tables using a $\Theta(\log |U|)$-independent hash function, as sketched below.
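As a usage illustration, the tables from the listing above might be seeded as follows; poly_hash stands in for any such Θ(log|U|)-independent generator and is an assumption of this sketch.

    #include <cstdint>

    // Assumed external source of Theta(log|U|)-independent random words.
    extern uint64_t poly_hash(uint64_t seed_index);

    void seed_mixed_tabulation(uint64_t T1[256][4], uint32_t T2[256][4]) {
        uint64_t ctr = 0;
        for (int j = 0; j < 256; ++j)
            for (int i = 0; i < 4; ++i) {
                T1[j][i] = poly_hash(ctr++);
                T2[j][i] = (uint32_t)poly_hash(ctr++);
            }
    }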
3 Feature Hashing with Mixed Tabulation
As noted, Weinberger et al. [37] showed exponential tail bounds for feature hashing. Here, we first
prove improved concentration bounds, and then, using techniques from [14] we argue that these
bounds still hold (up to a small additive factor polynomial in the universe size) when implementing
FH with mixed tabulation.
The concentration bounds we show are as follows (proved in the full version).
Theorem 1. Let $v \in \mathbb{R}^d$ with $\|v\|_2 = 1$ and let $v'$ be the $d'$-dimensional vector obtained by applying feature hashing implemented with truly random hash functions. Let $\varepsilon, \delta \in (0, 1)$. Assume that $d' \ge 16\varepsilon^{-2}\lg(1/\delta)$ and
\[
\|v\|_\infty \le \frac{\sqrt{\varepsilon}\,\log(1 + \frac{4}{\varepsilon})}{6\sqrt{\log(1/\delta)\,\log(d'/\delta)}} .
\]
Then it holds that
\[
\Pr\big[\,1 - \varepsilon < \|v'\|_2^2 < 1 + \varepsilon\,\big] \ge 1 - 4\delta . \tag{1}
\]
Theorem 1 is very similar to the bounds on feature hashing by Weinberger et al. [37] and Dasgupta et al. [16], but improves on the requirement on the size of $\|v\|_\infty$. Weinberger et al. [37] show that (1) holds if $\|v\|_\infty$ is bounded by $\frac{\sqrt{\varepsilon}}{18\sqrt{\log(1/\delta)\log(d'/\delta)}}$, and Dasgupta et al. [16] show that (1) holds if $\|v\|_\infty$ is bounded by $\sqrt{\frac{1}{16\log(1/\delta)\log^2(d'/\delta)}}$. We improve on these results by factors of $\Theta(\log(1/\varepsilon))$ and $\Theta\big(\sqrt{\log(1/\delta)\log(d'/\delta)}\big)$ respectively. We note that if we use feature hashing with a preconditioner (as in e.g. [16, Theorem 1]) these improvements translate into an improved running time.
Using [14, Theorem 1] we get the following corollary.
Corollary 1. Let $v, \varepsilon, \delta$ and $d'$ be as in Theorem 1, and let $v'$ be the $d'$-dimensional vector obtained using feature hashing on $v$ implemented with mixed tabulation hashing. Then, if $\mathrm{supp}(v) \le |\Sigma|/(1 + \Omega(1))$ it holds that
\[
\Pr\big[\,1 - \varepsilon < \|v'\|_2^2 < 1 + \varepsilon\,\big] \ge 1 - 4\delta - O\big(|\Sigma|^{1 - \lfloor d/2 \rfloor}\big) .
\]
In fact Corollary 1 holds even if both $h$ and $\mathrm{sgn}$ from Section 2.2 are implemented using the same hash function, i.e., if $h' : [d] \to \{-1, +1\} \times [d']$ is a mixed tabulation hash function as described in Section 2.4.
We note that feature hashing is often applied to very high dimensional, but sparse, data (e.g. in [2]), and thus the requirement $\mathrm{supp}(v) \le |\Sigma|/(1 + \Omega(1))$ is not very prohibitive. Furthermore, the target dimension $d'$ is usually logarithmic in the universe, and then Corollary 1 still works for vectors with polynomial support, giving an exponential decrease.
4 Experimental evaluation
We experimentally evaluate several different basic hash functions. We first perform an evaluation of
running time. We then evaluate the fastest hash functions on synthetic data confirming the theoretical
results of Section 3 and [14]. Finally, we demonstrate that even on real-world data, the provable
guarantees of mixed tabulation sometimes yield systematically better results.
Due to space restrictions, we only present some of our experiments here, and refer to the full version
for more details.
We consider some of the most popular and fast hash functions employed in practice: k-wise PolyHash [10], multiply-shift [17], MurmurHash3 [6], CityHash [29], and the cryptographic hash function Blake2 [7]. Of these hash functions only mixed tabulation (and very high degree PolyHash) provably works well for the applications we consider. However, Blake2 is a cryptographic function which provides similar guarantees conditioned on certain cryptographic assumptions being true. The remaining hash functions have provable weaknesses, but often work well (and are widely employed) in practice. See e.g. [1], who showed how to break both MurmurHash3 and CityHash64.
All experiments are implemented in C++11 using a random seed from http://www.random.org. The seed for mixed tabulation was filled out using a random 20-wise PolyHash function. All keys and hash outputs were 32-bit integers to ensure efficient implementation of multiply-shift and PolyHash using the Mersenne prime p = 2^61 − 1 and GCC's 128-bit integers.
We perform two time experiments, the results of which are presented in Table 1. Namely, we evaluate each hash function on the same 10^7 randomly chosen integers and use each hash function to implement FH on the News20 dataset (discussed later). We see that the only two functions faster than mixed tabulation are the very simple multiply-shift and 2-wise PolyHash. MurmurHash3 and CityHash were roughly 30-70% slower than mixed tabulation. This is despite the fact that we used the official
implementations of MurmurHash3, CityHash and Blake2 which are highly optimized to the x86 and
x64 architectures, whereas mixed tabulation is just standard, portable C++11 code. The cryptographic
hash function, Blake2, is orders of magnitude slower as we would expect.
Table 1: Time taken to evaluate different hash functions to 1) hash 10^7 random numbers, and 2) perform feature hashing with d′ = 128 on the entire News20 data set.

Hash function       time (1..10^7)    time (News20)
Multiply-shift           7.72 ms         55.78 ms
2-wise PolyHash         17.55 ms         82.47 ms
3-wise PolyHash         42.42 ms        120.19 ms
MurmurHash3             59.70 ms        159.44 ms
CityHash                59.06 ms        162.04 ms
Blake2                3476.31 ms       6408.40 ms
Mixed tabulation        42.98 ms         90.55 ms
Based on Table 1 we choose to compare mixed tabulation to multiply-shift, 2-wise PolyHash and
MurmurHash3. We also include results for 20-wise PolyHash as a (cheating) way to "simulate" truly
random hashing.
4.1 Synthetic data
For a parameter n, we generate two sets A, B as follows. The intersection A ∩ B is created by
sampling each integer from [2n] independently at random with probability 1/2. The symmetric
difference is generated by sampling n numbers greater than 2n (distributed evenly to A and B).
Intuitively, with a hash function like (ax + b) mod p, the dense subset of [2n] will be mapped very
systematically and is likely (i.e. depending on the choice of a) to be spread out evenly. When using OPH, this means that elements from the intersection are more likely to be the smallest element in each
bucket, leading to an over-estimation of J(A, B).
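A hedged C++ sketch of this generator (the RNG and the choice of consecutive keys above 2n for the symmetric difference are ours):

    #include <vector>
    #include <random>
    #include <cstdint>

    // Each integer in [0, 2n) joins the intersection w.p. 1/2; n further keys
    // above 2n form the symmetric difference, split evenly between A and B.
    void make_sets(uint32_t n, std::vector<uint32_t>& A, std::vector<uint32_t>& B,
                   std::mt19937& rng) {
        std::bernoulli_distribution coin(0.5);
        for (uint32_t x = 0; x < 2 * n; ++x)
            if (coin(rng)) { A.push_back(x); B.push_back(x); }
        for (uint32_t i = 0; i < n; ++i) {
            uint32_t x = 2 * n + i;
            if (i % 2 == 0) A.push_back(x); else B.push_back(x);
        }
    }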
We use OPH with densification as in [32] implemented with different basic hash functions to estimate
J(A, B). We generate one instance of A and B and perform 2000 independent repetitions for each
different hash function on these A and B. Figure 2 shows the histogram and mean squared error
(MSE) of estimates obtained with n = 2000 and k = 200. The figure confirms the theory: Both
multiply-shift and 2-wise PolyHash exhibit bias and bad concentration, whereas both mixed tabulation and MurmurHash3 behave essentially like truly random hashing. We also performed experiments
with k = 100 and k = 500 and considered the case of n = k/2, where we expect many empty bins
and the densification of [32] kicks in. All experiments obtained similar results as Figure 2.
[Figure 2 omitted: five histogram panels of similarity estimates — multiply-shift (MSE = 0.0058), 2-wise PolyHash (MSE = 0.0049), mixed tabulation (MSE = 0.0012), MurmurHash3 (MSE = 0.0012), and "random" (MSE = 0.0011).]
Figure 2: Histograms of set similarity estimates obtained using OPH with densification of [32] on synthetic data implemented with different basic hash families and k = 200. The mean squared error for each hash function is displayed in the top right corner.
For FH we obtained a vector v by taking the indicator vector of a set A generated as above and
normalizing the length. For each hash function we perform 2000 independent repetitions of the
following experiment: Generate v′ using FH and calculate $\|v'\|_2^2$. Using a good hash function we
should get good concentration of this value around 1. Figure 3 displays the histograms and MSE
we obtained for d′ = 200. Again we see that multiply-shift and 2-wise PolyHash give poorly concentrated results, and while the results are not biased, this is only because of a very heavy tail of large values. We also ran experiments with d′ = 100 and d′ = 500, which were similar.
[Figure 3 omitted: five histogram panels of $\|v'\|_2^2$ — multiply-shift (MSE = 0.6066), 2-wise PolyHash (MSE = 0.305), mixed tabulation (MSE = 0.0099), MurmurHash3 (MSE = 0.0097), and "random" (MSE = 0.01).]
Figure 3: Histograms of the 2-norm of the vectors output by FH on synthetic data implemented with different basic hash families and d′ = 200. The mean squared error for each hash function is displayed in the top right corner.
We briefly argue that this input is in fact quite natural: When encoding a document as shingles or
bag-of-words, it is quite common to let frequent words/shingles have the lowest identifier (using
fewest bits). In this case the intersection of two sets A and B will likely be a dense subset of small
identifiers. This is also the case when using Huffman Encoding [18], or if identifiers are generated
on-the-fly as words occur. Furthermore, for images it is often true that a pixel is more likely to have a
non-zero value if its neighbouring pixels have non-zero values giving many consecutive non-zeros.
Additional synthetic results We also considered the following synthetic dataset, which actually showed even more biased and poorly concentrated results. For similarity estimation we used elements from [4n], and let the symmetric difference be uniformly random sampled elements from {0, …, n−1} ∪ {3n, …, 4n−1} with probability 1/2, and the intersection be the same but for {n, …, 3n−1}. This gave an MSE that was roughly 6 times larger for multiply-shift and 4 times larger for 2-wise PolyHash compared to the other three. For feature hashing we sampled the numbers from 0 to 3n−1 independently at random with probability 1/2, giving an MSE that was 20 times higher for multiply-shift and 10 times higher for 2-wise PolyHash.
We also considered both datasets without the sampling, which showed an even wider gap between the hash functions.
4.2 Real-world data
We consider the following real-world data sets:
- MNIST [21] Standard collection of handwritten digits. The average number of non-zeros is roughly 150 and the total number of features is 728. We use the standard partition of 60000 database points and 10000 query points.
- News20 [11] Collection of newsgroup documents. The average number of non-zeros is roughly 500 and the total number of features is roughly 1.3 · 10^6. We randomly split the set into two sets of roughly 10000 database and query points.
These two data sets cover both the sparse and dense regime, as well as the cases where each data point is similar to many other points or few other points. For MNIST this number is roughly 3437 on average and for News20 it is roughly 0.2 on average for similarity threshold above 1/2.
Feature hashing We perform the same experiment as for synthetic data by calculating $\|v'\|_2^2$ for each v in the data set with 100 independent repetitions of each hash function (i.e. getting 6,000,000 estimates for MNIST). Our results are shown in Figure 4 for output dimension d′ = 128. Results with d′ = 64 and d′ = 256 were similar.
[Figure 4 omitted: ten histogram panels of $\|v'\|_2^2$. MNIST (top): multiply-shift (MSE = 0.144), 2-wise PolyHash (MSE = 0.1106), mixed tabulation (MSE = 0.0155), MurmurHash3 (MSE = 0.016), "random" (MSE = 0.0163). News20 (bottom): multiply-shift (MSE = 0.1655), 2-wise PolyHash (MSE = 0.0474), mixed tabulation (MSE = 0.0176), MurmurHash3 (MSE = 0.0176), "random" (MSE = 0.0177).]
Figure 4: Histograms of the norm of vectors output by FH on the MNIST (top) and News20 (bottom) data sets implemented with different basic hash families and d′ = 128. The mean squared error for each hash function is displayed in the top right corner.
The results confirm the theory and show that mixed tabulation performs essentially as well as a truly random hash function, clearly outperforming the weaker hash functions, which produce poorly concentrated results. This is particularly clear for the MNIST data set, but also for the News20 dataset, where e.g. 2-wise PolyHash resulted in $\|v'\|_2^2$ as large as 16.671 compared to 2.077 with mixed tabulation.
Similarity search with LSH We perform a rigorous evaluation based on the setup of [31]. We test
all combinations of K ∈ {8, 10, 12} and L ∈ {8, 10, 12}. For readability we only provide results
for multiply-shift and mixed tabulation and note that the results obtained for 2-wise PolyHash and
MurmurHash3 are essentially identical to those for multiply-shift and mixed tabulation respectively.
Following [31] we evaluate the results based on two metrics: 1) The fraction of total data points
retrieved per query, and 2) the recall at a given threshold T0, defined as the ratio of retrieved data points having similarity at least T0 with the query to the total number of data points having similarity
at least T0 with the query. Since the recall may be inflated by poor hash functions that just retrieve
many data points, we instead report #retrieved/recall-ratio, i.e. the number of data points that were
retrieved divided by the percentage of recalled data points. The goal is to minimize this ratio as we
want to simultaneously retrieve few points and obtain high recall. Due to space restrictions we only
report our results for K = L = 10. We note that the other results were similar.
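For clarity, the reported quantity for a single query can be computed as in this small sketch (the ground-truth representation is an assumption of ours):

    #include <vector>
    #include <limits>
    #include <cstddef>

    // retrieved: ids returned for one query; is_relevant[id] marks data points
    // with similarity >= T0 to the query; num_relevant counts all such points.
    double retrieved_over_recall(const std::vector<std::size_t>& retrieved,
                                 const std::vector<bool>& is_relevant,
                                 std::size_t num_relevant) {
        std::size_t hits = 0;
        for (std::size_t i = 0; i < retrieved.size(); ++i)
            if (is_relevant[retrieved[i]]) ++hits;
        if (num_relevant == 0 || hits == 0) // recall undefined or zero
            return std::numeric_limits<double>::infinity();
        double recall = (double)hits / (double)num_relevant;
        return retrieved.size() / recall;
    }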
Our results can be seen in Figure 5. The results somewhat echo what we found on synthetic data.
Namely, 1) Using multiply-shift overestimates the similarities of sets thus retrieving more points, and
2) Multiply-shift gives very poorly concentrated results. As a consequence of 1) Multiply-shift does,
however, achieve slightly higher recall (not visible in the figure), but despite recalling slightly more
points, the #retrieved / recall-ratio of multiply-shift is systematically worse.
[Figure 5 omitted: four histogram panels of the retrieved/recall ratio — MNIST at thresholds 0.5 and 0.8, and News20 at thresholds 0.5 and 0.8 — with frequency on the y-axis and the ratio on the x-axis.]
Figure 5: Experimental evaluation of LSH with OPH and different hash functions with K = L = 10. The hash functions used are multiply-shift (blue) and mixed tabulation (green). The value studied is the retrieved / recall-ratio (lower is better).
5 Conclusion
In this paper we consider mixed tabulation for computational primitives in computer vision, information retrieval, and machine learning, namely similarity estimation and feature hashing. It was
previously shown [14] that mixed tabulation provably works essentially as well as truly random for
similarity estimation with one permutation hashing. We complement this with a similar result for
FH when the input vectors are sparse, even improving on the concentration bounds for truly random
hashing found by [37, 16].
Our empirical results demonstrate this in practice. Mixed tabulation significantly outperforms the
simple hashing schemes and is not much slower. Meanwhile, mixed tabulation is 40% faster than
both MurmurHash3 and CityHash, which showed similar performance as mixed tabulation. However,
these two hash functions do not have the same theoretical guarantees as mixed tabulation. We believe
that our findings make mixed tabulation the best candidate for implementing these applications in
practice.
Acknowledgements
The authors gratefully acknowledge support from Mikkel Thorup's Advanced Grant DFF-060202499B from the Danish Council for Independent Research as well as the DABAI project. Mathias Bæk Tejs Knudsen gratefully acknowledges support from the FNU project AlgoDisc.
References
[1] Breaking murmur: Hash-flooding DoS reloaded, 2012. URL: https://emboss.github.io/blog/2012/12/14/breaking-murmur-hash-flooding-dos-reloaded/.
[2] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In Proc. 28th Advances in Neural Information Processing Systems, pages 1225–1233, 2015.
[3] Alexandr Andoni, Piotr Indyk, Huy L. Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In Proc. 25th ACM/SIAM Symposium on Discrete Algorithms (SODA), pages 1018–1028, 2014.
[4] Alexandr Andoni, Thijs Laarhoven, Ilya P. Razenshteyn, and Erik Waingarten. Optimal hashing-based time-space trade-offs for approximate near neighbors. In Proc. 28th ACM/SIAM Symposium on Discrete Algorithms (SODA), pages 47–66, 2017.
[5] Alexandr Andoni and Ilya P. Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In Proc. 47th ACM Symposium on Theory of Computing (STOC), pages 793–801, 2015.
[6] Austin Appleby. MurmurHash3, 2016. URL: https://github.com/aappleby/smhasher/wiki/MurmurHash3.
[7] Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein. BLAKE2: simpler, smaller, fast as MD5. In Proc. 11th International Conference on Applied Cryptography and Network Security, pages 119–135, 2013.
[8] Andrei Z. Broder. On the resemblance and containment of documents. In Proc. Compression and Complexity of Sequences (SEQUENCES), pages 21–29, 1997.
[9] Andrei Z. Broder, Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. Syntactic clustering of the web. Computer Networks, 29:1157–1166, 1997.
[10] Larry Carter and Mark N. Wegman. Universal classes of hash functions. Journal of Computer and System Sciences, 18(2):143–154, 1979. See also STOC'77.
[11] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM TIST, 2(3):27:1–27:27, 2011.
[12] Moses Charikar. Similarity estimation techniques from rounding algorithms. In Proc. 34th ACM Symposium on Theory of Computing (STOC), pages 380–388, 2002.
[13] Tobias Christiani. A framework for similarity search with space-time tradeoffs using locality-sensitive filtering. In Proc. 28th ACM/SIAM Symposium on Discrete Algorithms (SODA), pages 31–46, 2017.
[14] Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg, and Mikkel Thorup. Hashing for statistics over k-partitions. In Proc. 56th IEEE Symposium on Foundations of Computer Science (FOCS), pages 1292–1310, 2015.
[15] Søren Dahlgaard, Christian Igel, and Mikkel Thorup. Nearest neighbor classification using bottom-k sketches. In IEEE BigData Conference, pages 28–34, 2013.
[16] Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. A sparse Johnson–Lindenstrauss transform. In Proc. 42nd ACM Symposium on Theory of Computing (STOC), pages 341–350, 2010.
[17] Martin Dietzfelbinger, Torben Hagerup, Jyrki Katajainen, and Martti Penttonen. A reliable randomized algorithm for the closest-pair problem. Journal of Algorithms, 25(1):19–51, 1997.
[18] David A. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the Institute of Radio Engineers, 40(9):1098–1101, September 1952.
[19] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proc. 13th ACM Symposium on Theory of Computing (STOC), pages 604–613, 1998.
[20] Christopher Kennedy and Rachel Ward. Fast cross-polytope locality-sensitive hashing. CoRR, abs/1602.06922, 2016.
[21] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten digits, 1998. URL: http://yann.lecun.com/exdb/mnist/.
[22] Ping Li, Art B. Owen, and Cun-Hui Zhang. One permutation hashing. In Proc. 26th Advances in Neural Information Processing Systems, pages 3122–3130, 2012.
[23] Ping Li, Anshumali Shrivastava, and Arnd Christian König. b-bit minwise hashing in practice: Large-scale batch and online learning and using GPUs for fast preprocessing with simple hash functions. CoRR, abs/1205.2958, 2012. URL: http://arxiv.org/abs/1205.2958.
[24] Ping Li, Anshumali Shrivastava, Joshua L. Moore, and Arnd Christian König. Hashing algorithms for large-scale learning. In Proc. 25th Advances in Neural Information Processing Systems, pages 2672–2680, 2011.
[25] Gurmeet Singh Manku, Arvind Jain, and Anish Das Sarma. Detecting near-duplicates for web crawling. In Proc. 10th WWW, pages 141–150, 2007.
[26] Michael Mitzenmacher and Salil P. Vadhan. Why simple hash functions work: exploiting the entropy in a data stream. In Proc. 19th ACM/SIAM Symposium on Discrete Algorithms (SODA), pages 746–755, 2008.
[27] Noam Nisan. Pseudorandom generators for space-bounded computation. Combinatorica, 12(4):449–461, 1992. See also STOC'90.
[28] Mihai Patrascu and Mikkel Thorup. On the k-independence required by linear probing and minwise independence. ACM Transactions on Algorithms, 12(1):8:1–8:27, 2016. See also ICALP'10.
[29] Geoff Pike and Jyrki Alakuijala. Introducing CityHash, 2011. URL: https://opensource.googleblog.com/2011/04/introducing-cityhash.html.
[30] Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. Nearest-neighbor methods in learning and vision. IEEE Trans. Neural Networks, 19(2):377, 2008.
[31] Anshumali Shrivastava and Ping Li. Densifying one permutation hashing via rotation for fast near neighbor search. In Proc. 31st International Conference on Machine Learning (ICML), pages 557–565, 2014.
[32] Anshumali Shrivastava and Ping Li. Improved densification of one permutation hashing. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, UAI 2014, Quebec City, Quebec, Canada, July 23-27, 2014, pages 732–741, 2014.
[33] Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In Proc. 10th Workshop on Algorithms and Data Structures (WADS), pages 27–38, 2007.
[34] Mikkel Thorup. Bottom-k and priority sampling, set similarity and subset sums with minimal independence. In Proc. 45th ACM Symposium on Theory of Computing (STOC), 2013.
[35] Mikkel Thorup and Yin Zhang. Tabulation-based 5-independent hashing with applications to linear probing and second moment estimation. SIAM Journal on Computing, 41(2):293–331, 2012. Announced at SODA'04 and ALENEX'10.
[36] Simon Tong. Lessons learned developing a practical large scale machine learning system, April 2010. URL: https://research.googleblog.com/2010/04/lessons-learned-developing-practical.html.
[37] Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alexander J. Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In Proc. 26th International Conference on Machine Learning (ICML), pages 1113–1120, 2009.
Credit Assignment through Time:
Alternatives to Backpropagation
Yoshua Bengio *
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
Paolo Frasconi
Dip. di Sistemi e Informatica
Universita di Firenze
50139 Firenze (Italy)
Abstract
Learning to recognize or predict sequences using long-term context has many applications. However, practical and theoretical
problems are found in training recurrent neural networks to perform tasks in which input/output dependencies span long intervals.
Starting from a mathematical analysis of the problem, we consider
and compare alternative algorithms and architectures on tasks for
which the span of the input/output dependencies can be controlled.
Results on the new algorithms show performance qualitatively superior to that obtained with backpropagation.
1 Introduction
Recurrent neural networks have been considered to learn to map input sequences to
output sequences. Machines that could efficiently learn such tasks would be useful
for many applications involving sequence prediction, recognition or production.
However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. In fact, we can prove that dynamical
systems such as recurrent neural networks will be increasingly difficult to train with
gradient descent as the duration of the dependencies to be captured increases. A
mathematical analysis of the problem shows that either one of two conditions arises
in such systems. In the first case, the dynamics of the network allow it to reliably
store bits of information (with bounded input noise), but gradients (with respect
to an error at a given time step) vanish exponentially fast as one propagates them
*also, AT&T Bell Labs, Holmdel, NJ 07733
backward in time. In the second case, the gradients can flow backward but the system is locally unstable and cannot reliably store bits of information in the presence
of input noise.
In consideration of the above problem and the understanding brought by the theoretical analysis, we have explored and compared several alternative algorithms and
architectures. Comparative experiments were performed on artificial tasks on which
the span of the input/output dependencies can be controlled. In all cases, a duration parameter was varied, from T/2 to T, to avoid short sequences on which the
algorithm could much more easily learn. These tasks require learning to latch, i.e.
store bits of information for arbitrary durations (which may vary from example to
example). Such tasks cannot be performed by Time Delay Neural Networks or by
recurrent networks whose memories are gradually lost with time constants that are
fixed by the parameters of the network.
Of all the alternatives to gradient descent that we have explored, an approach based
on a probabilistic interpretation of a discrete state space, similar to hidden Markov
models (HMMs), yielded the most interesting results.
2 A Difficult Problem of Error Propagation
Consider a non-autonomous discrete-time system with additive inputs, such as a
recurrent neural network with a continuous activation function:
a_t = M(a_{t-1}) + u_t          (1)
and the corresponding autonomous dynamics
a_t = M(a_{t-1})          (2)
where M is a nonlinear map (which may have tunable parameters such as network
weights), and a_t ∈ R^n and u_t ∈ R^m are vectors representing respectively the system
state and the external input at time t.
In order to latch a bit of state information one wants to restrict the values of the
system activity a_t to a subset S of its domain. In this way, it will be possible to
later interpret a_t in at least two ways: inside S and outside S. To make sure that a_t
remains in such a region, the system dynamics can be chosen such that this region
is the basin of attraction of an attractor X (or of an attractor in a sub-manifold or
subspace of a_t's domain). To "erase" that bit of information, the inputs may push
the system activity a_t out of this basin of attraction and possibly into another one.
In (Bengio, Simard, & Frasconi, 1994) we show that only two conditions can arise
when using hyperbolic attractors to latch bits of information in such a system.
Either the system is very sensitive to noise, or the derivatives of the cost at time t
with respect to the system activations a_0 converge exponentially to 0 as t increases.
This situation is the essential reason for the difficulty in using gradient descent to
train a dynamical system to capture long-term dependencies in the input/output
sequences.
A first theorem can be used to show that when the state a_t is in a region where
|M'| > 1, then small perturbations grow exponentially, which can lead to a loss of
the information stored in the dynamics of the system:
Theorem 1 Assume x is a point of R^n such that there exists an open sphere U(x)
centered on x for which |M'(z)| > 1 for all z ∈ U(x). Then there exists y ∈ U(x)
such that ||M(x) − M(y)|| > ||x − y||.
A second theorem shows that when the state a_t is in a region where |M'| < 1, the
gradients propagated backwards in time vanish exponentially fast:
Theorem 2 If the input u_t is such that the system remains robustly latched
(|M'(a_t)| < 1) on attractor X after time 0, then ∂C_t/∂a_0 → 0 as t → ∞.
See proofs in (Bengio, Simard, & Frasconi, 1994). A consequence of these results
is that it is generally very difficult to train a parametric dynamical system (such
as a recurrent neural network) to learn long-term dependencies using gradient descent. Based on the understanding brought by this analysis, we have explored and
compared several alternative algorithms and architectures.
3 Global Search Methods
Global search methods such as simulated annealing can be applied to this problem, but they are generally very slow. We implemented the simulated annealing
algorithm presented in (Corana, Marchesi, Martini, & Ridella, 1987) for optimizing
functions of continuous variables. This is a "batch learning" algorithm (updating
parameters after all examples of the training set have been seen). It performs a cycle of random moves, each along one coordinate (parameter) direction. Each point
is accepted or rejected according to the Metropolis criterion (Kirkpatrick, Gelatt,
& Vecchi, 1983). The simulated annealing algorithm is very robust with respect
to local minima and long plateaus. Another global search method evaluated in
our experiments is a multi-grid random search. The algorithm tries random points
around the current solution (within a hyperrectangle of decreasing size) and accepts
only those that reduce the error. Thus it is resistant to problems of plateaus but
not as much resistant to problems of local minima. Indeed, we found the multi-grid
random search to be much faster than simulated annealing but to fail on the parity
problem, probably because of local minima.
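For concreteness, the following is a minimal sketch of such a multi-grid random search; the schedule (number of grids, tries per grid, shrink factor) and names are our own assumptions, not the exact settings used in these experiments.

    import numpy as np

    def multigrid_random_search(loss, w, n_grids=20, tries_per_grid=200,
                                width=1.0, shrink=0.5):
        # Sketch of the multi-grid random search described above:
        # sample candidates in a hyperrectangle around the current
        # solution, accept only improvements, and shrink the box when
        # no candidate improved.
        best = loss(w)
        for _ in range(n_grids):
            improved = False
            for _ in range(tries_per_grid):
                cand = w + np.random.uniform(-width, width, size=w.shape)
                c = loss(cand)
                if c < best:  # accept only points that reduce the error
                    w, best, improved = cand, c, True
            if not improved:
                width *= shrink  # decrease the hyperrectangle size
        return w, best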
4 Time Weighted Pseudo-Newton
The time-weighted pseudo-Newton algorithm uses second order derivatives of the
cost with respect to each of the instantiations of a weight at different time steps to
try correcting for the vanishing gradient problem. The weight update for a weight
w_i is computed as follows:
Δw_i(p) = −η Σ_t ( ∂C(p)/∂w_it ) / ( |∂²C(p)/∂w_it²| + μ )          (3)
where w_it is the instantiation for time t of parameter w_i, η is a global learning
rate and C(p) is the cost for pattern p. In this way, each (temporal) contribution
to Δw_i(p) is weighted by the inverse curvature with respect to w_it. Like for the
pseudo-Newton algorithm of Becker and Le Cun (1988) we prefer using a diagonal
approximation of the Hessian which is cheap to compute and guaranteed to be
positive definite.
The constant μ is introduced to prevent Δw from becoming very large (when |∂²C(p)/∂w_it²|
is very small). We found the performance of this algorithm to be better than the
regular pseudo-Newton algorithm, which is better than the simple stochastic backpropagation algorithm, but all of these algorithms perform worse and worse as the
length of the sequences is increased.
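A minimal sketch of the update in Eq. (3); the array layout and constants are illustrative assumptions, since the paper does not publish reference code.

    import numpy as np

    def time_weighted_pseudo_newton_step(grads_t, curvs_t, eta=0.01, mu=1e-3):
        # grads_t and curvs_t have shape (T, n_weights) and hold
        # dC(p)/dw_it and d^2C(p)/dw_it^2 for every time step t;
        # eta is the global learning rate, mu the safety constant.
        inv_curv = 1.0 / (np.abs(curvs_t) + mu)           # inverse curvature weights
        return -eta * np.sum(inv_curv * grads_t, axis=0)  # sum temporal contributions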
5 Discrete Error Propagation
The discrete error propagation algorithm replaces sigmoids in the network by discrete threshold units and attempts to propagate discrete error information backwards in time. The basic idea behind the algorithm is that for a simple discrete
element such as a threshold unit or a latch, one can write down an error propagation
rule that prescribes desired changes in the values of the inputs in order to obtain
certain changes in the values of the outputs. In the case of a threshold unit, such
a rule assumes that the desired change for the output of the unit is discrete (+2,
0 or -2). However, error information propagated backwards to such a unit might
have a continuous value. A stochastic process is used to convert this continuous
value into an appropriate discrete desired change. In the case of a self-loop, a clear
advantage of this algorithm over gradient back-propagation through sigmoid units
is that the error information does not vanish as it is repeatedly propagated backwards in time around the loop, even though the unit can robustly store a bit of
information. Details of the algorithm will appear in (Bengio, Simard, & Frasconi,
1994). This algorithm performed better than the time-weighted pseudo-Newton,
pseudo-Newton and back-propagation algorithms but the learning curve appeared
very irregular, suggesting that the algorithm is doing a local random search.
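A minimal sketch of the stochastic discretization step described above; making the firing probability proportional to |e| is our assumption about one plausible choice of that stochastic process.

    import numpy as np

    def discretize_error(e, scale=1.0):
        # Convert a continuous backward error e into a discrete desired
        # output change in {+2, 0, -2}, firing with probability
        # proportional to the magnitude of e.
        if np.random.rand() < min(1.0, abs(e) / scale):
            return 2.0 if e > 0 else -2.0
        return 0.0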
6 An EM Approach to Target Propagation
The most promising of the algorithms we studied was derived from the idea of
propagating targets instead of gradients. For this paper we restrict ourselves to
sequence classification. We assume a finite-state learning system with the state qt
at time t taking on one of n values. Different final states for each class are used
as targets. The system is given a probabilistic interpretation and we assume a
Markovian conditional independence model. As in HMMs, the system propagates
forward a discrete distribution over the n states. Transitions may be constrained
so that each state j has a defined set of successors Sj.
Figure 1: The proposed architecture
Learning is formulated as a maximum likelihood problem with missing data. Missing
variables, over which an expectation is taken, are the paths in state-space. The
EM (Expectation/Maximization) or GEM (Generalized EM) algorithms (Dempster,
Laird, & Rubin, 1977) can be used to help decouple the influence of different
hypothetical paths in state-space. The estimation step of EM requires propagating
backward a discrete distribution of targets. In contrast to HMMs, where parameters
are adjusted in an unsupervised learning framework, we use EM in a supervised
fashion. This new perspective has been successful in training static models (Jordan
& Jacobs, 1994).
Transition probabilities, conditional on the current input, can be computed by a
parametric function such as a layer of a neural network with softmax units. We propose a modular architecture with one subnetwork N j for each state (see Figure 1).
Each subnetwork is feedforward, takes as input a continuous vector of features u_t
and has one output for each successor state, interpreted as P(q_t = i | q_{t-1} = j, u_t; θ),
(j = 1, ..., n, i ∈ S_j). θ is a set of tunable parameters. Using a Markovian assumption, the distribution over states at time t is thus obtained as a linear combination
of the outputs of the subnetworks, gated by the previously computed distribution:
P(q_t = i | u_1^t; θ) = Σ_j P(q_{t-1} = j | u_1^{t-1}; θ) P(q_t = i | q_{t-1} = j, u_t; θ)          (4)
where u_1^t is the subsequence of inputs from time 1 to t inclusively. The training
algorithm looks for parameters θ of the system that maximize the likelihood L of
falling in the "correct" state at the end of each sequence:
L(θ) = Π_p P(q_{T_p} = q*_p | u_1^{T_p}; θ)          (5)
where p ranges over training sequences, T_p is the length of the pth training sequence,
and q*_p the desired state at time T_p.
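A minimal sketch of the forward propagation in Eq. (4) and the likelihood of Eq. (5), assuming the per-step transition probabilities have already been computed by the subnetworks; the array layout and the fixed initial state are illustrative assumptions.

    import numpy as np

    def sequence_likelihood(trans_probs, target_state, init_state=0):
        # trans_probs[t][j, i] plays the role of
        # P(q_t = i | q_{t-1} = j, u_t; theta), i.e. the subnetwork
        # outputs for the observed inputs of one sequence.
        n = trans_probs[0].shape[0]
        dist = np.zeros(n)
        dist[init_state] = 1.0
        for A in trans_probs:   # Eq. (4): gate each subnetwork by the current distribution
            dist = dist @ A
        return dist[target_state]  # Eq. (5): probability of the desired final state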
An auxiliary function Q(θ, θ_k) is constructed by introducing as hidden variables the
whole state sequence, hence the complete likelihood function is defined as follows:
L_c(θ) = Π_p P(q_1^{T_p} | u_1^{T_p}; θ)          (6)
and
Q(θ, θ_k) = E[ log L_c(θ) | u, θ_k ]          (7)
(7)
where at the k+lth EM (or GEM) iteration, Ok+l is chosen to maximize (or increase)
the auxiliary function Q with respect to O.
If the inputs are quantized and the subnetworks perform a simple look-up in a table
of probabilities, then the EM algorithm can be used, i.e., ∂Q(θ, θ_k)/∂θ = 0 can be solved
analytically. If the networks have non-linearities, (e.g., with hidden units and a
softmax at their output to constrain the outputs to sum to 1), then one can use
the GEM algorithm (which simply increases Q, for example with gradient ascent)
or directly perform (preferably stochastic) gradient ascent on the likelihood.
An extra term was introduced in the optimization criterion when we found that in
many cases the target information would not propagate backwards (or would be
diffused over all the states). These experiments confirmed previous results indicating a general difficulty of training fully connected HMMs, with the EM algorithm
converging very often to poor local maxima of the likelihood. In an attempt to
understand better the phenomenon, we looked at the quantities propagated forward and the quantities propagated backward (representing credit or blame) in the
79
80
Bengio and Frasconi
training algorithm. We found a diffusion of credit or blame occurring when the
forward maps (i.e. the matrix of transition probabilities) at each time step are such
that many inputs map to a few outputs, i.e., when the ratio of a small volume in
the image of the map with respect to the corresponding volume in the domain is
small. This ratio is the absolute value of the determinant of the Jacobian of the
map. Hence, using an optimization criterion that incorporates the maximization of
the average magnitude of the determinant of the transition matrices, this algorithm
performs much better than the other algorithms. Two other tricks were found to
be important to help convergence and reduce the problem of diffusion of credit.
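Before turning to those two tricks, a minimal sketch of the determinant criterion just described; the additive log form and the weighting constant are our assumptions about one way to incorporate it into the objective.

    import numpy as np

    def determinant_bonus(trans_mats, weight=0.1):
        # Reward a large average magnitude of the determinant of the
        # per-time-step transition matrices, so that the forward maps
        # contract less and credit diffuses less.
        return weight * np.mean(
            [np.log(abs(np.linalg.det(A)) + 1e-12) for A in trans_mats])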
The first idea is to use whenever possible a structured model with a sparse connectivity matrix, thus introducing some prior knowledge about the state-space. For
example, applications of HMMs to speech recognition always rely on such structured
topologies. We could reduce connectivity in the transition matrix for the 2-sequence
problem (see next section for its definition) by splitting some of the nodes into two
subsets, each specializing on one of the sequence classes. However, sometimes it is
not possible to introduce such constraints, such as in the parity problem. Another
trick that drastically improved performance was to use stochastic gradient ascent in
a way that helps the training algorithm get out of local optima. The learning rate
is decreased when the likelihood improves but it is increased when the likelihood
remains flat (the system is stuck in a plateau or local optimum).
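A minimal sketch of this learning-rate trick; the multiplicative factors and tolerance are illustrative assumptions rather than the values used in the experiments.

    def adapt_learning_rate(lr, log_lik, prev_log_lik, down=0.9, up=1.1, tol=1e-4):
        # Shrink the rate while the likelihood improves; grow it when the
        # likelihood stays flat (plateau or local optimum).
        if log_lik > prev_log_lik + tol:
            return lr * down
        return lr * up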
As the results in the next section show, the performances obtained with this algorithm are much better than those obtained with the other algorithms on the two
simple test problems that were considered.
7 Experimental Results
We present here results on two problems for which one can control the span of
input/output dependencies. The 2-sequence problem is the following: classify an
input sequence, at the end of the sequence, in one of two types, when only the first
N elements (N = 3 in our experiments) of this sequence carry information about
the sequence class. Uniform noise is added to the sequence. For the first 6 methods
(see Tables 1 to 4), we used a fully connected recurrent network with 5 units (with
25 free parameters). For the EM algorithm, we used a 7-state system with a sparse
connectivity matrix (an initial state, and two separate left-to-right submodels of
three states each to model the two types of sequences).
The parity problem consists in producing the parity of an input sequence of 1's and
-1's (i.e., a 1 should be produced at the final output if and only if the number of
1's in the input is odd). The target is only given at the end of the sequence. For
the first 6 methods we used a minimal size network (1 input, 1 hidden, 1 output,
7 free parameters). For the EM algorithm, we used a 2-state system with a full
connectivity matrix.
Initial parameters were chosen randomly for each trial. Noise added to the sequence
was also uniformly distributed and chosen independently for each training sequence.
We considered two criteria: (1) the average classification error at the end of training,
i.e., after a stopping criterion has been met (when either some allowed number of
function evaluations has been performed or the task has been learned), (2) the
average number of function evaluations needed to reach the stopping criterion.
In the tables, "p-n" stands for pseudo-Newton. Each column corresponds to a value
of the maximum sequence length T for a given set of trials. The sequence length for
a particular training sequence was picked randomly within T/2 and T. Numbers
reported are averages over 20 or more trials.
8 Conclusion
Recurrent networks and other parametric dynamical systems are very powerful in
their ability to represent and use context. However, theoretical and experimental
evidence shows the difficulty of assigning credit through many time steps, which
is required in order to learn to use and represent context. This paper studies this
fundamental problem and proposes alternatives to the backpropagation algorithm
to perform such learning tasks. Experiments show these alternative approaches
to perform significantly better than gradient descent. The behavior of these algorithms yields a better understanding of the central issue of learning to use context,
or assigning credit through many transformations. Although all of the alternative algorithms presented here showed some improvement with respect to standard
stochastic gradient descent, a clear winner in our comparison was an algorithm
based on the EM algorithm and a probabilistic interpretation of the system dynamics. However, experiments on more challenging tasks will have to be conducted to
confirm those results. Furthermore, several extensions of this model are possible,
for example allowing both inputs and outputs, with supervision on outputs rather
than on states. Finally, similarly to the work we performed for recurrent networks
trained with gradient descent, it would be very important to analyze theoretically
the problems of propagation of credit encountered in training such Markov models.
Acknowledgements
We wish to emphatically thank Patrice Simard, who collaborated with us on the
analysis of the theoretical difficulties in learning long-term dependencies, and on
the discrete error propagation algorithm.
References
S. Becker and Y. Le Cun. (1988) Improving the convergence of back-propagation
learning with second order methods, Proc. of the 1988 Connectionist Models Summer School, (eds. Touretzky, Hinton and Sejnowski), Morgan Kaufman, pp. 29-37.
Y. Bengio, P. Simard, and P. Frasconi. (1994) Learning long-term dependencies
with gradient descent is difficult, IEEE Trans. Neural Networks, (in press).
A. Corana, M. Marchesi, C. Martini, and S. Ridella. (1987) Minimizing multimodal
functions of continuous variables with the simulated annealing algorithm, ACM
Transactions on Mathematical Software, vol. 13, no. 13, pp. 262-280.
A.P. Dempster, N.M. Laird, and D.B. Rubin. (1977) Maximum-likelihood from
incomplete data via the EM algorithm, J. of Royal Stat. Soc., vol. B39, pp. 1-38.
M.I. Jordan and R.A. Jacobs. (1994) Hierarchical mixtures of experts and the EM
algorithm, Neural Computation, (in press).
S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. (1983) Optimization by simulated
annealing, Science 220, 4598, pp. 671-680.
[Table 1: Final classification error for the 2-sequence problem wrt sequence length. Rows:
back-prop, p-n, time-weighted p-n, multigrid, discrete err. prop., simulated anneal., EM;
columns: maximum sequence length T. The numeric entries did not survive extraction.]
[Table 2: # sequence presentations for the 2-sequence problem wrt sequence length. Same rows
and columns; the entries (of order 1e2 to 1e5) did not survive extraction.]
[Table 3: Final classification error for the parity problem wrt sequence length. Same rows
and columns; the numeric entries did not survive extraction.]
[Table 4: # sequence presentations for the parity problem wrt sequence length. Same rows
and columns; the entries (of order 1e2 to 1e6) did not survive extraction.]
GANs Trained by a Two Time-Scale Update Rule
Converge to a Local Nash Equilibrium
Martin Heusel
Hubert Ramsauer
Thomas Unterthiner
Bernhard Nessler
Sepp Hochreiter
LIT AI Lab & Institute of Bioinformatics,
Johannes Kepler University Linz
A-4040 Linz, Austria
{mhe,ramsauer,unterthiner,nessler,hochreit}@bioinf.jku.at
Abstract
Generative Adversarial Networks (GANs) excel at creating realistic images with
complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale
update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the
discriminator and the generator. Using the theory of stochastic approximation, we
prove that the TTUR converges under mild assumptions to a stationary local Nash
equilibrium. The convergence carries over to the popular Adam optimization, for
which we prove that it follows the dynamics of a heavy ball with friction and thus
prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance"
(FID) which captures the similarity of generated images to real ones better than
the Inception Score. In experiments, TTUR improves learning for DCGANs and
Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word
Benchmark.
1 Introduction
Generative adversarial networks (GANs) [16] have achieved outstanding results in generating realistic
images [41, 30, 25, 1, 4] and producing text [21]. GANs can learn complex generative models for
which maximum likelihood or variational approximations are infeasible. Instead of the likelihood,
a discriminator network serves as objective for the generative model, that is, the generator. GAN
learning is a game between the generator, which constructs synthetic data from random variables,
and the discriminator, which separates synthetic data from real world data. The generator's goal is
to construct data in such a way that the discriminator cannot tell them apart from real world data.
Thus, the discriminator tries to minimize the synthetic-real discrimination error while the generator
tries to maximize this error. Since training GANs is a game and its solution is a Nash equilibrium,
gradient descent may fail to converge [43, 16, 18]. Only local Nash equilibria are found, because
gradient descent is a local optimization method. If there exists a local neighborhood around a point
in parameter space where neither the generator nor the discriminator can unilaterally decrease their
respective losses, then we call this point a local Nash equilibrium.
To characterize the convergence properties of training general GANs is still an open challenge [17, 18].
For special GAN variants, convergence can be proved under certain assumptions [33, 20, 45], as can
local stability [38] (see also Supplement 2.2). Recent convergence proofs for GANs hold for
expectations over training samples or for the number of examples going to infinity [31, 37, 34, 2],
thus do not consider mini-batch learning which leads to a stochastic gradient [46, 23, 35, 32].
[Figure 1: Left: Original vs. TTUR GAN training on CelebA. Right: Figure from Zhang 2007 [49],
which shows the distance of the parameter from the optimum for a one time-scale update of a 4
node network flow problem. When the upper bounds on the errors (ε_s, ε_(i,j)) are small, the iterates
oscillate and repeatedly return to a neighborhood of the optimal solution (cf. Supplement Section 2.3).
However, when the upper bounds on the errors are large, the iterates typically diverge.]
Recently GANs have been analyzed using stochastic approximation algorithms [38], however, only
in the min/max formulation with a concave loss function. Stochastic approximation has been also
applied to actor-critic learning, where Prasad et al. [40] showed that a two time-scale update rule
ensures that training reaches a stationary local Nash equilibrium if the critic learns faster than the
actor. Convergence was proved via an ordinary differential equation (ODE), whose stable limit points
coincide with stationary local Nash equilibria. We follow the same approach. We prove that GANs
converge to a local Nash equilibrium when trained by a two time-scale update rule (TTUR), i.e.,
when discriminator and generator have separate learning rates. This also leads to better results in
experiments. The main premise is that the discriminator converges to a local minimum when the
generator is fixed. If the generator changes slowly enough, then the discriminator still converges,
since the generator perturbations are small. Besides convergence, performance may also improve
since the discriminator must first learn new patterns before they are transferred to the generator.
In contrast, a generator which is overly fast, drives the discriminator steadily into new
regions without capturing its gathered
information. In recent GAN implementations, the discriminator
often learned faster than the generator. A new objective slowed down the generator to prevent it from
overtraining on the current discriminator [43]. The Wasserstein GAN algorithm uses more update
steps for the discriminator than for the generator [1]. We compare TTUR and standard GAN training.
Fig. 1 shows in the left panel a stochastic gradient example on CelebA for original GAN training
(orig), which often leads to oscillations, and for the TTUR. In the right panel an example of a 4 node
network flow problem of Zhang et al. [49] is shown. The distance between the actual parameter and
its optimum for a one time-scale update rule is shown across iterates. When the upper bounds on the
errors are small, the iterates return to a neighborhood of the optimal solution, while for large errors
the iterates may diverge (see also Supplement Section 2.3). Our novel contributions in this paper are:
(i) the two time-scale update rule for GANs, (ii) the proof that GANs trained with TTUR converge to
a stationary local Nash equilibrium, (iii) the description of Adam as heavy ball with friction and the
resulting second order differential equation, (iv) the convergence of GANs trained with TTUR and
Adam to a stationary local Nash equilibrium, (v) the "Fréchet Inception Distance" (FID) to evaluate
GANs, which is more consistent than the Inception Score.
Two Time-Scale Update Rule for GANs
We consider a discriminator D(.; w) with parameter vector w and a generator G(.; θ) with parameter
vector θ. Learning is based on a stochastic gradient g̃(θ, w) of the discriminator's loss function L_D
and a stochastic gradient h̃(θ, w) of the generator's loss function L_G. The loss functions L_D and
L_G can be the original as introduced in Goodfellow et al. [16], its improved versions [18], or recently
proposed losses for GANs like the Wasserstein GAN [1]. The gradients g̃(θ, w) and h̃(θ, w) are
stochastic, since they use mini-batches of m real world samples x^(i), 1 ≤ i ≤ m, and m synthetic
samples z^(i), 1 ≤ i ≤ m, which are randomly chosen. If the true gradients are g(θ, w) = ∇_w L_D and
h(θ, w) = ∇_θ L_G, then we can define g̃(θ, w) = g(θ, w) + M^(w) and h̃(θ, w) = h(θ, w) + M^(θ)
with random variables M^(w) and M^(θ). Thus, the gradients g̃(θ, w) and h̃(θ, w) are stochastic
approximations to the true gradients. Consequently, we analyze convergence of GANs by two
time-scale stochastic approximation algorithms. For a two time-scale update rule (TTUR), we use
the learning rates b(n) and a(n) for the discriminator and the generator update, respectively:
w_{n+1} = w_n + b(n) ( g(θ_n, w_n) + M_n^(w) ) ,   θ_{n+1} = θ_n + a(n) ( h(θ_n, w_n) + M_n^(θ) ) .          (1)
For more details on the following convergence proof and its assumptions see Supplement Section 2.1.
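As a concrete illustration, a minimal sketch of Eq. (1) with Adam and separate learning rates on a toy 1-D task; the architectures, the ELU activations (cf. (A1) below), and the rates 4e-4 / 1e-4 are illustrative assumptions, not the paper's experimental settings.

    import torch
    import torch.nn as nn

    latent_dim = 8
    G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ELU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ELU(), nn.Linear(16, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_D = torch.optim.Adam(D.parameters(), lr=4e-4)  # fast time scale b(n): discriminator
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)  # slow time scale a(n): generator

    for step in range(1000):
        x_real = torch.randn(64, 1) * 0.5 + 2.0  # toy "real" data
        z = torch.randn(64, latent_dim)
        # discriminator update: mini-batch stochastic gradient of L_D
        loss_D = (bce(D(x_real), torch.ones(64, 1))
                  + bce(D(G(z).detach()), torch.zeros(64, 1)))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        # generator update: mini-batch stochastic gradient of L_G
        loss_G = bce(D(G(z)), torch.ones(64, 1))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()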
To prove convergence of GANs learned by TTUR, we make the following assumptions (the actual
assumption is ended by J, the following text is just comments and explanations):
(A1) The gradients h and g are Lipschitz. J Consequently, networks with Lipschitz smooth
activation functions like ELUs (α = 1) [11] fulfill the assumption but not ReLU networks.
(A2) Σ_n a(n) = ∞, Σ_n a²(n) < ∞, Σ_n b(n) = ∞, Σ_n b²(n) < ∞, a(n) = o(b(n)) J A concrete
schedule satisfying these conditions is sketched after this list.
(A3) The stochastic gradient errors {M_n^(θ)} and {M_n^(w)} are martingale difference sequences
w.r.t. the increasing σ-field F_n = σ(θ_l, w_l, M_l^(θ), M_l^(w), l ≤ n), n ≥ 0, with
E[ ||M_n^(θ)||² | F_n ] ≤ B_1 and E[ ||M_n^(w)||² | F_n ] ≤ B_2, where B_1 and B_2 are positive
deterministic constants. J The original Assumption (A3) from Borkar 1997 follows from
Lemma 2 in [5] (see also [42]). The assumption is fulfilled in the Robbins-Monro setting,
where mini-batches are randomly sampled and the gradients are bounded.
(A4) For each θ, the ODE dw(t)/dt = g(θ, w(t)) has a local asymptotically stable attractor
λ(θ) within a domain of attraction G_θ such that λ is Lipschitz. The ODE dθ(t)/dt =
h(θ(t), λ(θ(t))) has a local asymptotically stable attractor θ* within a domain of
attraction. J The discriminator must converge to a minimum for fixed generator parameters and the generator, in turn, must converge to a minimum for this fixed discriminator
minimum. Borkar 1997 required unique global asymptotically stable equilibria [7]. The
assumption of global attractors was relaxed to local attractors via Assumption (A6) and
Theorem 2.7 in Karmakar & Bhatnagar [26]. See for more details Assumption (A6) in
Supplement Section 2.1.3. Here, the GAN objectives may serve as Lyapunov functions.
These assumptions of locally stable ODEs can be ensured by an additional weight decay term
in the loss function which increases the eigenvalues of the Hessian. Therefore, problems
with a region-wise constant discriminator that has zero second order derivatives are avoided.
For further discussion see Supplement Section 2.1.1 (C3).
(A5) sup_n ||θ_n|| < ∞ and sup_n ||w_n|| < ∞. J Typically ensured by the objective or a weight
decay term.
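As an aside, a minimal sketch of step-size schedules satisfying (A2); this is one standard choice among many, not a schedule prescribed by the paper.

    def ttur_step_sizes(n):
        # Both series diverge, their squares are summable, and a(n) = o(b(n)).
        a = (n + 1) ** (-1.0)        # slow (generator) step size
        b = (n + 1) ** (-2.0 / 3.0)  # fast (discriminator) step size
        return a, b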
The next theorem has been proved in the seminal paper of Borkar 1997 [7].
Theorem 1 (Borkar). If the assumptions are satisfied, then the updates Eq. (1) converge to
(θ*, λ(θ*)) a.s.
The solution (θ*, λ(θ*)) is a stationary local Nash equilibrium [40], since θ* as well as λ(θ*) are
local asymptotically stable attractors with g(θ*, λ(θ*)) = 0 and h(θ*, λ(θ*)) = 0. An alternative
approach to the proof of convergence using the Poisson equation for ensuring a solution to the fast
update rule can be found in the Supplement Section 2.1.2. This approach assumes a linear update
function in the fast update rule which, however, can be a linear approximation to a nonlinear gradient
[28, 29]. For the rate of convergence see Supplement Section 2.2, where Section 2.2.1 focuses on
linear and Section 2.2.2 on non-linear updates. For equal time-scales it can only be proven that the
updates revisit an environment of the solution infinitely often, which, however, can be very large
[49, 12]. For more details on the analysis of equal time-scales see Supplement Section 2.3. The main
idea of the proof of Borkar [7] is to use (T, δ) perturbed ODEs according to Hirsch 1989 [22] (see
also Appendix Section C of Bhatnagar, Prasad, & Prashanth 2013 [6]). The proof relies on the fact
that there eventually is a time point when the perturbation of the slow update rule is small enough
(given by δ) to allow the fast update rule to converge. For experiments with TTUR, we aim at finding
learning rates such that the slow update is small enough to allow the fast to converge. Typically,
the slow update is the generator and the fast update the discriminator. We have to adjust the two
learning rates such that the generator does not affect discriminator learning in an undesired way and
perturb it too much. However, even a larger learning rate for the generator than for the discriminator
may ensure that the discriminator has low perturbations. Learning rates cannot be translated directly
into perturbation since the perturbation of the discriminator by the generator is different from the
perturbation of the generator by the discriminator.
2 Adam Follows an HBF ODE and Ensures TTUR Convergence
In our experiments, we aim at using Adam stochastic approximation to avoid mode collapsing. GANs
suffer from "mode collapsing" where large masses of probability are mapped onto a few modes
that cover only small regions. While these regions represent meaningful samples, the variety of the
real world data is lost and only few prototype samples are
generated. Different methods have been proposed to avoid
mode collapsing [9, 36]. We obviate mode collapsing by
using Adam stochastic approximation [27]. Adam can be
described as Heavy Ball with Friction (HBF) (see below),
since it averages over past gradients. This averaging corresponds to a velocity that makes the generator resistant
to getting pushed into small regions. Adam as an HBF
method typically overshoots small local minima that correspond to mode collapse and can find flat minima which
generalize well [24]. Fig. 2 depicts the dynamics of HBF,
where the ball settles at a flat minimum. Next, we analyze
whether GANs trained with TTUR converge when using
Adam. For more details see Supplement Section 3.
[Figure 2: Heavy Ball with Friction, where the ball with mass overshoots the local minimum
θ+ and settles at the flat minimum θ*.]
We recapitulate the Adam update rule at step n, with learning rate a, exponential averaging factors β_1
for the first and β_2 for the second moment of the gradient ∇f(θ_{n−1}):
g_n ← ∇f(θ_{n−1})
m_n ← (β_1/(1 − β_1^n)) m_{n−1} + ((1 − β_1)/(1 − β_1^n)) g_n
v_n ← (β_2/(1 − β_2^n)) v_{n−1} + ((1 − β_2)/(1 − β_2^n)) g_n ⊙ g_n
θ_n ← θ_{n−1} − a m_n / (√v_n + ε) ,          (2)
where the following operations are meant componentwise: the product ⊙, the square root √., and the
division / in the last line. Instead of learning rate a, we introduce the damping coefficient a(n) with
a(n) = a n^{−τ} for τ ∈ (0, 1]. Adam has parameters β_1 for averaging the gradient and β_2 parametrized
by a positive α for averaging the squared gradient. These parameters can be considered as defining a
memory for Adam. To characterize β_1 and β_2 in the following, we define the exponential memory
r(n) = r and the polynomial memory r(n) = r/Σ_{l=1}^n a(l) for some positive constant r. The next
theorem describes Adam by a differential equation, which in turn allows to apply the idea of (T, δ)
perturbed ODEs to TTUR. Consequently, learning GANs with TTUR and Adam converges.
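A minimal sketch of the Adam recursion in Eq. (2) with the memory parameterization used in the theorem below; the toy constants are illustrative assumptions, not a reference implementation.

    import numpy as np

    def adam_with_memory(grad, theta, a=0.1, r=0.9, alpha=1.0, tau=0.5,
                         eps=1e-8, n_steps=500):
        # Eq. (2) with damping a(n) = a * n**(-tau) and exponential memory
        # r(n) = r, so beta1 = 1 - a(n+1) r(n) and beta2 = 1 - alpha a(n+1) r(n).
        m = np.zeros_like(theta)
        v = np.zeros_like(theta)
        for n in range(1, n_steps + 1):
            a_n = a * float(n + 1) ** (-tau)
            beta1 = 1.0 - a_n * r
            beta2 = 1.0 - alpha * a_n * r
            g = grad(theta)
            m = (beta1 / (1.0 - beta1 ** n)) * m \
                + ((1.0 - beta1) / (1.0 - beta1 ** n)) * g
            v = (beta2 / (1.0 - beta2 ** n)) * v \
                + ((1.0 - beta2) / (1.0 - beta2 ** n)) * g * g
            theta = theta - a_n * m / (np.sqrt(v) + eps)
        return theta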
Theorem 2. If Adam is used with β_1 = 1 − a(n+1) r(n), β_2 = 1 − α a(n+1) r(n) and with ∇f
as the full gradient of the lower bounded, continuously differentiable objective f, then for stationary
second moments of the gradient, Adam follows the differential equation for Heavy Ball with Friction
(HBF):
d²θ_t/dt² + a(t) dθ_t/dt + ∇f(θ_t) = 0 .          (3)
Adam converges for gradients ∇f that are L-Lipschitz.
Proof. Gadat et al. derived a discrete and stochastic version of Polyak's Heavy Ball method [39], the
Heavy Ball with Friction (HBF) [15]:
θ_{n+1} = θ_n − a(n+1) m_n ,
m_{n+1} = (1 − a(n+1) r(n)) m_n + a(n+1) r(n) ( ∇f(θ_n) + M_{n+1} ) .          (4)
These update rules are the first moment update rules of Adam [27]. The HBF can be formulated as the
differential equation Eq. (3) [15]. Gadat et al. showed that the update rules Eq. (4) converge for loss
functions f with at most quadratic growth and stated that convergence can be proved for ∇f that are
L-Lipschitz [15]. Convergence has been proved for continuously differentiable f that is quasiconvex
(Theorem 3 in Goudou & Munier [19]). Convergence has been proved for ∇f that is L-Lipschitz
and bounded from below (Theorem 3.1 in Attouch et al. [3]). Adam normalizes the average m_n by
the second moments v_n of the gradient g_n: v_n = E[g_n ⊙ g_n]. m_n is componentwise divided by
the square root of the components of v_n. We assume that the second moments of g_n are stationary,
i.e., v = E[g_n ⊙ g_n]. In this case the normalization can be considered as additional noise since the
normalization factor randomly deviates from its mean. In the HBF interpretation the normalization
by √v corresponds to introducing gravitation. We obtain
vn =
n
n
1 ? ?2 X n?l
1 ? ?2 X n?l
?
g
g
,
?v
=
v
?
v
=
?2 (gl gl ? v) . (5)
l
l
n
n
2
1 ? ?2n
1 ? ?2n
l=1
l=1
For a stationary second moment v and β₂ = 1 − α a(n+1) r(n), we have Δv_n ∝ a(n+1) r(n). We
use a componentwise linear approximation to Adam's second moment normalization: 1/√(v + Δv_n) ≈
1/√v − (1/(2 v √v)) ⊙ Δv_n + O(Δ²v_n), where all operations are meant componentwise. If
we set M^(v)_{n+1} = −(m_n ⊙ Δv_n) / (2 v √v a(n+1) r(n)), then m_n / √v_n ≈ m_n / √v + a(n+1) r(n) M^(v)_{n+1} and E[M^(v)_{n+1}] = 0, since E[g_l ⊙ g_l − v] = 0. For a stationary second moment v,
the random variable {M^(v)_n} is a martingale difference sequence with a bounded second moment.
Therefore {M^(v)_{n+1}} can be subsumed into {M_{n+1}} in the update rules Eq. (4). The factor 1/√v can
be componentwise incorporated into the gradient g, which corresponds to rescaling the parameters
without changing the minimum.
According to Attouch et al. [3] the energy, that is, a Lyapunov function, is E(t) = (1/2) |θ̇(t)|² + f(θ(t))
and Ė(t) = −a |θ̇(t)|² < 0. Since Adam can be expressed as a differential equation and has a
Lyapunov function, the idea of (T, δ) perturbed ODEs [7, 22, 8] carries over to Adam. Therefore
the convergence of Adam with TTUR can be proved via a two time-scale stochastic approximation
analysis as in Borkar [7] for stationary second moments of the gradient.
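As a sanity check on the proof's starting point, here is a direct transcription of the stochastic HBF recursion Eq. (4); the damping schedule, noise scale, and the quadratic test function are illustrative assumptions, not settings from this paper.

```python
import numpy as np

def hbf(grad_f, theta0, steps=2000, a0=0.5, alpha=0.5, r=1.0, noise=0.01, seed=0):
    """Stochastic Heavy Ball with Friction, Eq. (4), with damping
    a(n) = a0 * n**(-alpha) and exponential memory r(n) = r."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)
    for n in range(1, steps + 1):
        a_n = a0 * n ** (-alpha)                      # damping coefficient a(n + 1)
        g = grad_f(theta)                             # gradient at theta_n
        M = noise * rng.standard_normal(theta.shape)  # martingale difference noise
        theta = theta - a_n * m                       # theta_{n+1}
        m = (1.0 - a_n * r) * m + a_n * r * (g + M)   # m_{n+1}
    return theta

# The averaged velocity lets the iterate settle near the minimizer of f(x) = x^2.
print(hbf(lambda t: 2.0 * t, theta0=[3.0]))
```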
In the supplement we further discuss the convergence of two time-scale stochastic approximation
algorithms with additive noise, linear update functions depending on Markov chains, nonlinear update
functions, and updates depending on controlled Markov processes. Furthermore, the supplement
presents work on the rate of convergence for both linear and nonlinear update rules using similar
techniques as the local stability analysis of Nagarajan and Kolter [38]. Finally, we elaborate more on
equal time-scale updates, which are investigated for saddle point problems and actor-critic learning.
3 Experiments
Performance Measure. Before presenting the experiments, we introduce a quality measure for
models learned by GANs. The objective of generative learning is that the model produces data which
matches the observed data. Therefore, each distance between the probability of observing real world
data pw (.) and the probability of generating model data p(.) can serve as performance measure for
generative models. However, defining appropriate performance measures for generative models
is difficult [44]. The best known measure is the likelihood, which can be estimated by annealed
importance sampling [48]. However, the likelihood heavily depends on the noise assumptions for
the real data and can be dominated by single samples [44]. Other approaches like density estimates
have drawbacks, too [44]. A well-performing approach to measure the performance of GANs is the
?Inception Score? which correlates with human judgment [43]. Generated samples are fed into an
inception model that was trained on ImageNet. Images with meaningful objects are supposed to
have low label (output) entropy, that is, they belong to few object classes. On the other hand, the
entropy across images should be high, that is, the variance over the images should be large. A drawback
of the Inception Score is that the statistics of real-world samples are not used and compared to the
statistics of synthetic samples. Next, we improve the Inception Score. The equality p(.) = p_w(.)
holds except for a non-measurable set if and only if ∫ p(x) f(x) dx = ∫ p_w(x) f(x) dx for a basis
f(.) spanning the function space in which p(.) and p_w(.) live. These equalities of expectations
are used to describe distributions by moments or cumulants, where f (x) are polynomials of the
data x. We generalize these polynomials by replacing x by the coding layer of an inception model
Figure 3: FID is evaluated for upper left: Gaussian noise, upper middle: Gaussian blur, upper
right: implanted black rectangles, lower left: swirled images, lower middle: salt and pepper noise,
and lower right: CelebA dataset contaminated by ImageNet images. The disturbance level rises
from zero and increases to the highest level. The FID captures the disturbance level very well by
monotonically increasing.
in order to obtain vision-relevant features. For practical reasons we only consider the first two
polynomials, that is, the first two moments: mean and covariance. The Gaussian is the maximum
entropy distribution for given mean and covariance, therefore we assume the coding units to follow a
multidimensional Gaussian. The difference of two Gaussians (synthetic and real-world images) is
measured by the Fréchet distance [14], also known as the Wasserstein-2 distance [47]. We call the Fréchet
distance d(., .) between the Gaussian with mean and covariance (m, C) obtained from p(.) and the
Gaussian with (m_w, C_w) obtained from p_w(.) the "Fréchet Inception Distance" (FID), which is given by
[13]:

    d²((m, C), (m_w, C_w)) = ||m − m_w||₂² + Tr(C + C_w − 2 (C C_w)^(1/2)) .

Next we show that
the FID is consistent with increasing disturbances and human judgment. Fig. 3 evaluates the FID for
Gaussian noise, Gaussian blur, implanted black rectangles, swirled images, salt and pepper noise, and
CelebA dataset contaminated by ImageNet images. The FID captures the disturbance level very well.
In the experiments we used the FID to evaluate the performance of GANs. For more details and a
comparison between FID and Inception Score see Supplement Section 1, where we show that FID is
more consistent with the noise level than the Inception Score.
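For readers who want to compute the FID themselves, a minimal NumPy/SciPy sketch of the formula above; it assumes `act_real` and `act_fake` are matrices of Inception coding-layer activations with one row per image, and the function name is ours.

```python
import numpy as np
from scipy import linalg

def fid(act_real, act_fake):
    """d^2((m, C), (m_w, C_w)) = ||m - m_w||_2^2 + Tr(C + C_w - 2 (C C_w)^(1/2))."""
    m_w, C_w = act_real.mean(axis=0), np.cov(act_real, rowvar=False)
    m, C = act_fake.mean(axis=0), np.cov(act_fake, rowvar=False)
    covmean = linalg.sqrtm(C @ C_w)       # matrix square root of C * C_w
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(np.sum((m - m_w) ** 2) + np.trace(C + C_w - 2.0 * covmean))
```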
Model Selection and Evaluation. We compare the two time-scale update rule (TTUR) for GANs
with the original GAN training to see whether TTUR improves the convergence speed and performance of GANs. We have selected Adam stochastic optimization to reduce the risk of mode
collapsing. The advantage of Adam has been confirmed by MNIST experiments, where Adam indeed
considerably reduced the cases for which we observed mode collapsing. Although TTUR ensures
that the discriminator converges during learning, practicable learning rates must be found for each
experiment. We face a trade-off since the learning rates should be small enough (e.g. for the generator)
to ensure convergence but at the same time should be large enough to allow fast learning. For each of
the experiments, the learning rates have been optimized to be large while still ensuring stable training
which is indicated by a decreasing FID or Jensen-Shannon-divergence (JSD). We further fixed the
time point for stopping training to the update step when the FID or Jensen-Shannon-divergence of
the best models was no longer decreasing. For some models, we observed that the FID diverges
or starts to increase at a certain time point. An example of this behaviour is shown in Fig. 5. The
performance of generative models is evaluated via the Fr?chet Inception Distance (FID) introduced
above. For the One Billion Word experiment, the normalized JSD served as performance measure.
For computing the FID, we propagated all images from the training dataset through the pretrained
Inception-v3 model, following the computation of the Inception Score [43]; however, we use the last
pooling layer as coding layer. For this coding layer, we calculated the mean mw and the covariance
matrix Cw . Thus, we approximate the first and second central moment of the function given by
the Inception coding layer under the real world distribution. To approximate these moments for the
model distribution, we generate 50,000 images, propagate them through the Inception-v3 model, and
then compute the mean m and the covariance matrix C. For computational efficiency, we evaluate
the FID every 1,000 DCGAN mini-batch updates, every 5,000 WGAN-GP outer iterations for the
image experiments, and every 100 outer iterations for the WGAN-GP language model. For the one
time-scale updates a WGAN-GP outer iteration for the image model consists of five discriminator
mini-batches and ten discriminator mini-batches for the language model, where we follow the original
implementation. For TTUR however, the discriminator is updated only once per iteration. We repeat
the training for each single time-scale (orig) and TTUR learning rate eight times for the image
datasets and ten times for the language benchmark. In addition to the mean FID training progress
we show the minimum and maximum FID over all runs at each evaluation time-step. For more details,
implementations, and further results see Supplement Sections 4 and 6.
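To illustrate how TTUR differs from standard training in code, a self-contained PyTorch sketch of the training loop follows: two Adam optimizers with different learning rates, and a single discriminator update per iteration. The toy networks, batch size, and betas are our assumptions, not the experimental settings above.

```python
import torch
from torch import nn

latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

# TTUR: a larger learning rate b for the discriminator, a smaller rate a for the generator.
opt_d = torch.optim.Adam(D.parameters(), lr=3e-4, betas=(0.5, 0.9))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.9))
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim)                 # stand-in for a real mini-batch

    # Discriminator step -- only once per iteration under TTUR.
    fake = G(torch.randn(128, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step on its own, slower time scale.
    g_loss = bce(D(G(torch.randn(128, latent_dim))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```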
Simple Toy Data. We first want to demonstrate the difference between a single time-scale update
rule and TTUR on a simple toy min/max problem where a saddle point should be found. The
objective f(x, y) = (1 + x²)(100 − y²) in Fig. 4 (left) has a saddle point at (x, y) = (0, 0) and
fulfills assumption A4. The norm k(x, y)k measures the distance of the parameter vector (x, y) to
the saddle point. We update (x, y) by gradient descent in x and gradient ascent in y using additive
Gaussian noise in order to simulate a stochastic update. The updates should converge to the saddle
point (x, y) = (0, 0) with objective value f (0, 0) = 100 and the norm 0. In Fig. 4 (right), the first
two rows show one time-scale update rules. The large learning rate in the first row diverges and has
large fluctuations. The smaller learning rate in the second row converges but slower than the TTUR in
the third row which has slow x-updates. TTUR with slow y-updates in the fourth row also converges
but slower.
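A minimal sketch reproducing this toy experiment follows; the starting point and noise scale are our assumptions.

```python
import numpy as np

def ttur_toy(lr_x=1e-4, lr_y=1e-2, steps=4000, noise=0.1, seed=0):
    """Noisy gradient descent in x and ascent in y on f(x, y) = (1 + x^2)(100 - y^2)."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, 1.0                                  # assumed starting point
    for _ in range(steps):
        gx = 2.0 * x * (100.0 - y**2) + noise * rng.standard_normal()   # df/dx
        gy = -2.0 * y * (1.0 + x**2) + noise * rng.standard_normal()    # df/dy
        x -= lr_x * gx                               # descent in x
        y += lr_y * gy                               # ascent in y
    return x, y, float(np.hypot(x, y))               # norm: distance to the saddle (0, 0)
```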
Figure 4: Left: Plot of the objective with a saddle point at (0, 0). Right: Training progress with
equal learning rates of 0.01 (first row) and 0.001 (second row) for x and y, TTUR with a learning
rate of 0.0001 for x vs. 0.01 for y (third row) and a larger learning rate of 0.01 for x vs. 0.0001 for y
(fourth row). The columns show the function values (left), norms (middle), and (x, y) (right). TTUR
(third row) clearly converges faster than with equal time-scale updates and directly moves to the
saddle point as shown by the norm and in the (x, y)-plot.
DCGAN on Image Data. We test TTUR for the deep convolutional GAN (DCGAN) [41] on the
CelebA, CIFAR-10, SVHN, and LSUN Bedrooms datasets. Fig. 5 shows the FID during learning
with the original learning method (orig) and with TTUR. The original training method is faster at
the beginning, but TTUR eventually achieves better performance. DCGAN trained with TTUR
consistently reaches a lower FID than the original method, and for CelebA and LSUN Bedrooms all one
time-scale runs diverge. For DCGAN the learning rate of the generator is larger than that of the
discriminator, which, however, does not contradict the TTUR theory (see the Supplement Section 5).
In Table 1 we report the best FID with TTUR and one time-scale training for an optimized number of
updates and optimized learning rates. TTUR consistently outperforms standard training and is more stable.
WGAN-GP on Image Data. We used the WGAN-GP image model [21] to test TTUR with the
CIFAR-10 and LSUN Bedrooms datasets. In contrast to the original code where the discriminator is
trained five times for each generator update, TTUR updates the discriminator only once, therefore
we align the training progress with wall-clock time. The learning rate for the original training was
optimized to be as large as possible while still yielding stable learning. TTUR can use a higher learning rate for the
discriminator since TTUR stabilizes learning. Fig. 6 shows the FID during learning with the original
learning method and with TTUR. Table 1 shows the best FID with TTUR and one time-scale training
Figure 5: Mean FID (solid line) surrounded by a shaded area bounded by the maximum and the
minimum over 8 runs for DCGAN on CelebA, CIFAR-10, SVHN, and LSUN Bedrooms. TTUR
learning rates are given for the discriminator b and generator a as: ?TTUR b a?. Top Left: CelebA.
Top Right: CIFAR-10, starting at mini-batch update 10k for better visualisation. Bottom Left:
SVHN. Bottom Right: LSUN Bedrooms. Training with TTUR (red) is more stable, has much lower
variance, and leads to a better FID.
Figure 6: Mean FID (solid line) surrounded by a shaded area bounded by the maximum and the
minimum over 8 runs for WGAN-GP on CelebA, CIFAR-10, SVHN, and LSUN Bedrooms. TTUR
learning rates are given for the discriminator b and generator a as: ?TTUR b a?. Left: CIFAR-10,
starting at minute 20. Right: LSUN Bedrooms. Training with TTUR (red) has much lower variance
and leads to a better FID.
for optimized number of iterations and learning rates. Again TTUR reaches lower FIDs than one
time-scale training.
WGAN-GP on Language Data. Finally the One Billion Word Benchmark [10] serves to evaluate
TTUR on WGAN-GP. The character-level generative language model is a 1D convolutional neural
network (CNN) which maps a latent vector to a sequence of one-hot character vectors of dimension
32 given by the maximum of a softmax output. The discriminator is also a 1D CNN applied to
sequences of one-hot vectors of 32 characters. Since the FID criterion only works for images, we
measured the performance by the Jensen-Shannon-divergence (JSD) between the model and the
real world distribution as has been done previously [21]. In contrast to the original code where the
critic is trained ten times for each generator update, TTUR updates the discriminator only once,
therefore we align the training progress with wall-clock time. The learning rate for the original
training was optimized to be as large as possible while still yielding stable learning. TTUR can use a higher learning rate
for the discriminator since TTUR stabilizes learning. We report for the 4 and 6-gram word evaluation
the normalized mean JSD for ten runs for original training and TTUR training in Fig. 7. In Table 1
we report the best JSD at an optimal time-step where TTUR outperforms the standard training for
both measures. The improvement of TTUR over original training on the 6-gram statistics shows that
TTUR enables the model to generate more subtle pseudo-words that better resemble real words.
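For reference, here is one standard way to compute a normalized JSD between character n-gram statistics (base-2 logarithms bound the divergence to [0, 1]); the paper's exact normalization may differ, so treat this as an assumption-laden sketch.

```python
import numpy as np
from collections import Counter

def normalized_jsd(samples_p, samples_q, n=4):
    """Jensen-Shannon divergence between n-gram distributions of two text samples."""
    def ngram_dist(texts):
        counts = Counter(t[i:i + n] for t in texts for i in range(len(t) - n + 1))
        total = sum(counts.values())
        return {g: c / total for g, c in counts.items()}
    p, q = ngram_dist(samples_p), ngram_dist(samples_q)
    js = 0.0
    for g in set(p) | set(q):
        pi, qi = p.get(g, 0.0), q.get(g, 0.0)
        mi = 0.5 * (pi + qi)
        if pi > 0.0:
            js += 0.5 * pi * np.log2(pi / mi)
        if qi > 0.0:
            js += 0.5 * qi * np.log2(qi / mi)
    return js
```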
Figure 7: Performance of WGAN-GP models trained with the original (orig) and our TTUR method
on the One Billion Word benchmark. The performance is measured by the normalized Jensen-Shannon divergence based on 4-gram (left) and 6-gram (right) statistics, averaged (solid line) and
surrounded by a shaded area bounded by the maximum and the minimum over 10 runs, aligned to
wall-clock time and starting at minute 150. TTUR learning (red) clearly outperforms the original one
time-scale learning.
Table 1: The performance of DCGAN and WGAN-GP trained with the original one time-scale
update rule and with TTUR on CelebA, CIFAR-10, SVHN, LSUN Bedrooms and the One Billion
Word Benchmark. During training we compare the performance with respect to the FID and JSD for
optimized numbers of updates. TTUR consistently exhibits a better FID and a better JSD.
DCGAN Image
dataset   | method | b, a       | updates | FID  | method | b = a | updates | FID
CelebA    | TTUR   | 1e-5, 5e-4 | 225k    | 12.5 | orig   | 5e-4  | 70k     | 21.4
CIFAR-10  | TTUR   | 1e-4, 5e-4 | 75k     | 36.9 | orig   | 1e-4  | 100k    | 37.7
SVHN      | TTUR   | 1e-5, 1e-4 | 165k    | 12.5 | orig   | 5e-5  | 185k    | 21.4
LSUN      | TTUR   | 1e-5, 1e-4 | 340k    | 57.5 | orig   | 5e-5  | 70k     | 70.4

WGAN-GP Image
dataset   | method | b, a       | time(m) | FID  | method | b = a | time(m) | FID
CIFAR-10  | TTUR   | 3e-4, 1e-4 | 700     | 24.8 | orig   | 1e-4  | 800     | 29.3
LSUN      | TTUR   | 3e-4, 1e-4 | 1900    | 9.5  | orig   | 1e-4  | 2010    | 20.5

WGAN-GP Language
n-gram    | method | b, a       | time(m) | JSD  | method | b = a | time(m) | JSD
4-gram    | TTUR   | 3e-4, 1e-4 | 1150    | 0.35 | orig   | 1e-4  | 1040    | 0.38
6-gram    | TTUR   | 3e-4, 1e-4 | 1120    | 0.74 | orig   | 1e-4  | 1070    | 0.77
4 Conclusion
For learning GANs, we have introduced the two time-scale update rule (TTUR), which we have
proved to converge to a stationary local Nash equilibrium. We then described Adam stochastic
optimization as Heavy Ball with Friction (HBF) dynamics, which shows that Adam converges and
that Adam tends to find flat minima while avoiding small local minima. A second order differential
equation describes the learning dynamics of Adam as an HBF system. Via this differential equation,
the convergence of GANs trained with TTUR to a stationary local Nash equilibrium can be extended
to Adam. Finally, to evaluate GANs, we introduced the "Fréchet Inception Distance" (FID), which
captures the similarity of generated images to real ones better than the Inception Score. In experiments
we have compared GANs trained with TTUR to conventional GAN training with a one time-scale
update rule on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
TTUR outperforms conventional GAN training consistently in all experiments.
Acknowledgment
This work was supported by NVIDIA Corporation, Bayer AG with Research Agreement 09/2017,
Zalando SE with Research Agreement 01/2016, Audi.JKU Deep Learning Center, Audi Electronic
Venture GmbH, IWT research grant IWT150865 (Exaptation), H2020 project grant 671555 (ExCAPE)
and FWF grant P 28660-N31.
References
[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv e-prints, arXiv:1701.07875,
2017.
[2] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative
adversarial nets (GANs). In D. Precup and Y. W. Teh, editors, Proceedings of the 34th
International Conference on Machine Learning, Proceedings of Machine Learning Research,
vol. 70, pages 224?232, 2017.
[3] H. Attouch, X. Goudou, and P. Redont. The heavy ball with friction method, I. the continuous dynamical system: Global exploration of the local minima of a real-valued function by
asymptotic analysis of a dissipative dynamical system. Communications in Contemporary
Mathematics, 2(1):1?34, 2000.
[4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial
networks. arXiv e-prints, arXiv:1703.10717, 2017.
[5] D. P. Bertsekas and J. N. Tsitsiklis. Gradient convergence in gradient methods with errors.
SIAM Journal on Optimization, 10(3):627?642, 2000.
[6] S. Bhatnagar, H. L. Prasad, and L. A. Prashanth. Stochastic Recursive Algorithms for Optimization. Lecture Notes in Control and Information Sciences. Springer-Verlag London, 2013.
[7] V. S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters,
29(5):291?294, 1997.
[8] V. S. Borkar and S. P. Meyn. The O.D.E. method for convergence of stochastic approximation
and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447?469, 2000.
[9] T. Che, Y. Li, A. P. Jacob, Y. Bengio, and W. Li. Mode regularized generative adversarial
networks. In Proceedings of the International Conference on Learning Representations (ICLR),
2017. arXiv:1612.02136.
[10] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion
word benchmark for measuring progress in statistical language modeling. arXiv e-prints,
arXiv:1312.3005, 2013.
[11] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by
exponential linear units (ELUs). In Proceedings of the International Conference on Learning
Representations (ICLR), 2016. arXiv:1511.07289.
[12] D. DiCastro and R. Meir. A convergent online single time scale actor critic algorithm. J. Mach.
Learn. Res., 11:367?410, 2010.
[13] D. C. Dowson and B. V. Landau. The Fréchet distance between multivariate normal distributions.
Journal of Multivariate Analysis, 12:450–455, 1982.
[14] M. Fréchet. Sur la distance de deux lois de probabilité. C. R. Acad. Sci. Paris, 244:689–692,
1957.
[15] S. Gadat, F. Panloup, and S. Saadane. Stochastic heavy ball. arXiv e-prints, arXiv:1609.04228,
2016.
[16] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D.
Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems
27, pages 2672?2680, 2014.
[17] I. J. Goodfellow. On distinguishability criteria for estimating generative models. In Workshop
at the International Conference on Learning Representations (ICLR), 2015. arXiv:1412.6515.
[18] I. J. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv e-prints,
arXiv:1701.00160, 2017.
[19] X. Goudou and J. Munier. The gradient and heavy ball with friction dynamical systems: the
quasiconvex case. Mathematical Programming, 116(1):173?191, 2009.
[20] P. Grnarova, K. Y. Levy, A. Lucchi, T. Hofmann, and A. Krause. An online learning approach
to generative adversarial networks. arXiv e-prints, arXiv:1706.03269, 2017.
[21] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of
Wasserstein GANs. arXiv e-prints, arXiv:1704.00028, 2017. Advances in Neural Information
Processing Systems 31 (NIPS 2017).
[22] M. W. Hirsch. Convergent activation dynamics in continuous time networks. Neural Networks,
2(5):331?349, 1989.
[23] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative
adversarial networks. arXiv e-prints, arXiv:1702.08431, 2017.
[24] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1?42, 1997.
[25] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional
adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2017. arXiv:1611.07004.
[26] P. Karmakar and S. Bhatnagar. Two time-scale stochastic approximation with controlled Markov
noise and off-policy temporal-difference learning. Mathematics of Operations Research, 2017.
[27] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In Proceedings of the
International Conference on Learning Representations (ICLR), 2015. arXiv:1412.6980.
[28] V. R. Konda. Actor-Critic Algorithms. PhD thesis, Department of Electrical Engineering and
Computer Science, Massachusetts Institute of Technology, 2002.
[29] V. R. Konda and J. N. Tsitsiklis. Linear stochastic approximation driven by slowly varying
Markov chains. Systems & Control Letters, 50(2):95?102, 2003.
[30] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi.
Photo-realistic single image super-resolution using a generative adversarial network. arXiv
e-prints, arXiv:1609.04802, 2016.
[31] C.-L. Li, W.-C. Chang, Y. Cheng, Y. Yang, and B. Póczos. MMD GAN: Towards deeper
understanding of moment matching network. In Advances in Neural Information Processing
Systems 31 (NIPS 2017), 2017. arXiv:1705.08584.
[32] J. Li, A. Madry, J. Peebles, and L. Schmidt. Towards understanding the dynamics of generative
adversarial networks. arXiv e-prints, arXiv:1706.09884, 2017.
[33] J. H. Lim and J. C. Ye. Geometric GAN. arXiv e-prints, arXiv:1705.02894, 2017.
[34] S. Liu, O. Bousquet, and K. Chaudhuri. Approximation and convergence properties of generative
adversarial learning. In Advances in Neural Information Processing Systems 31 (NIPS 2017),
2017. arXiv:1705.08991.
[35] L. M. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs. In Advances in Neural
Information Processing Systems 31 (NIPS 2017), 2017. arXiv:1705.10461.
[36] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks.
In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
arXiv:1611.02163.
[37] Y. Mroueh and T. Sercu. Fisher GAN. In Advances in Neural Information Processing Systems
31 (NIPS 2017), 2017. arXiv:1705.09675.
[38] V. Nagarajan and J. Z. Kolter. Gradient descent GAN optimization is locally stable. arXiv
e-prints, arXiv:1706.04156, 2017. Advances in Neural Information Processing Systems 31
(NIPS 2017).
[39] B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR
Computational Mathematics and Mathematical Physics, 4(5):1?17, 1964.
[40] H. L. Prasad, L. A. Prashanth, and S. Bhatnagar. Two-timescale algorithms for learning Nash
equilibria in general-sum stochastic games. In Proceedings of the 2015 International Conference
on Autonomous Agents and Multiagent Systems (AAMAS '15), pages 1371–1379, 2015.
[41] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on
Learning Representations (ICLR), 2016. arXiv:1511.06434.
[42] A. Ramaswamy and S. Bhatnagar. Stochastic recursive inclusion in two timescales with an
application to the lagrangian dual problem. Stochastics, 88(8):1173?1187, 2016.
[43] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved
techniques for training GANs. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and
R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2234?2242,
2016.
[44] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models.
In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
arXiv:1511.01844.
[45] I. Tolstikhin, S. Gelly, O. Bousquet, C.-J. Simon-Gabriel, and B. Schölkopf. AdaGAN: Boosting
generative models. arXiv e-prints, arXiv:1701.02386, 2017. Advances in Neural Information
Processing Systems 31 (NIPS 2017).
[46] R. Wang, A. Cully, H. J. Chang, and Y. Demiris. MAGAN: margin adaptation for generative
adversarial networks. arXiv e-prints, arXiv:1704.03817, 2017.
[47] L. N. Wasserstein. Markov processes over denumerable products of spaces describing large
systems of automata. Probl. Inform. Transmission, 5:47?52, 1969.
[48] Y. Wu, Y. Burda, R. Salakhutdinov, and R. B. Grosse. On the quantitative analysis of decoderbased generative models. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. arXiv:1611.04273.
[49] J. Zhang, D. Zheng, and M. Chiang. The impact of stochastic noisy feedback on distributed
network utility maximization. In IEEE INFOCOM 2007 - 26th IEEE International Conference
on Computer Communications, pages 222?230, 2007.